Publication Details
Overview

GECCO: Proceedings of the Genetic and Evolutionary Computation Conference Companion

Contribution To Book Anthology

Abstract 

We propose an extension of NeuroEvolution of Augmenting Topologies (NEAT), called Heterogeneous Activation Feature Deselection NEAT (HA-FD-NEAT), that evolves the weights and topography (architecture and activation functions) of Artificial Neural Networks (ANNs) while performing feature selection. The algorithm is evaluated against its ancestors: NEAT, which evolves the weights and topology; FD-NEAT, which evolves the weights and topology while performing feature selection; and HA-NEAT, which evolves the weights and topography. Performance is described by (i) median classification accuracy, (ii) computational efficiency (number of generations), (iii) network complexity (number of nodes and connections), and (iv) the ability to select the relevant features. On the artificial 2-out-100 XOR problem, used as a benchmark for feature selection, HA-FD-NEAT reaches 100% accuracy within a few generations. It is significantly better than NEAT and HA-NEAT, and it matches the performance of FD-NEAT. Even though HA-FD-NEAT must search a larger space that includes weights, activation functions, topology, and inputs, it achieves the same performance as FD-NEAT. In conclusion, the proposed method reduces the burden on human designers by determining the network's inputs, weights, topology, and activation functions while achieving very good performance.
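As a hedged illustration (not taken from the paper itself), the 2-out-100 XOR benchmark mentioned in the abstract can be sketched as a dataset of 100 binary inputs in which only the first two determine the label; the generator below, including its sample count and random seed, is an assumption for demonstration purposes only.

```python
import numpy as np

def make_2_out_100_xor(n_samples=200, n_features=100, seed=0):
    """Sketch of the 2-out-100 XOR benchmark: the label is the XOR
    of the first two binary inputs, and the remaining 98 inputs are
    irrelevant noise that a feature-selecting algorithm such as
    FD-NEAT or HA-FD-NEAT should learn to ignore."""
    rng = np.random.default_rng(seed)
    # 100 binary features per sample; only columns 0 and 1 are relevant.
    X = rng.integers(0, 2, size=(n_samples, n_features))
    y = X[:, 0] ^ X[:, 1]  # label depends only on the two relevant features
    return X, y

X, y = make_2_out_100_xor()
```

A classifier that reaches 100% accuracy on this task must effectively deselect the 98 noise inputs, which is why the problem serves as a feature-selection benchmark.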
