# Genetic algorithms are difficult to understand

## Genetic Algorithms and Deep Learning - The Evolution of Neural Networks with Evolutionary Algorithms

Genetic algorithms, also called evolutionary algorithms, are a class of optimization algorithms that have been known for a long time. They search for solutions to a problem using strategies of evolution borrowed from nature, and are used in particular when standard methods cannot find the optimal solution at all, or not quickly and reliably enough.

### Where is the top? Local and global maxima

A classic example is finding the global maximum of a function that also has many local (lower) maxima. On such a task, standard optimization methods either get stuck at a local maximum or oscillate endlessly back and forth between extreme values.

(Figure: example of a function with many local extremes)
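The trap described above can be made concrete with a small sketch. The one-dimensional objective below is hypothetical (not from the article): naive hill climbing stops at the first local peak it finds, while the true maximum lies elsewhere.

```python
import math

# Hypothetical rugged objective on [0, 4]: many local maxima, global maximum at the edge.
def fitness(x):
    return math.sin(5 * x) + 0.3 * x

def hill_climb(x, step=0.01, lo=0.0, hi=4.0):
    # Naive local search: move one small step uphill until no neighbour is better.
    while True:
        best = max((x, max(lo, x - step), min(hi, x + step)), key=fitness)
        if best == x:
            return x
        x = best

local_peak = hill_climb(0.0)          # trapped at the first local maximum
global_peak = max((i * 0.001 for i in range(4001)), key=fitness)  # coarse grid scan
print(local_peak, global_peak)
```

Started at 0, the hill climber stops near the first peak (around x ≈ 0.33), even though the grid scan shows the function keeps reaching higher values further right.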

Genetic algorithms always use a so-called fitness function as the measure of goal attainment. Evolution strategies are used to maximize the result of this fitness function, for example to find the highest value of a curve with many local maxima. The evolution strategy begins by initializing all available parameters randomly, and not just once, but many times in parallel.

### The evolution of parameter populations

A whole population of functions is formed, each with different parameters, e.g. 100 different function "individuals". From this population, the 10% of individuals with the highest fitness values are then selected, for example, and all others are discarded. The remaining individuals are further "bred" through cloning, recombination and mutation, and then screened again using the fitness function.

Cloning in this context simply means copying; recombination means that a random part of one individual's parameters is exchanged for the corresponding part of another individual in the current population. In addition, individual parameters can change through random mutations. The recombination and mutation rates, the size of the population, and the percentage that survives each round are hyperparameters specified by the developer.
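Put together, the cycle of selection, cloning, recombination and mutation might be sketched like this. The two-parameter rugged fitness landscape is an assumption made for the example, and the population size, survivor share and mutation settings are exactly the hyperparameters mentioned above:

```python
import math
import random

random.seed(0)

# Hypothetical rugged two-parameter fitness landscape (invented for this sketch).
def fitness(genome):
    x, y = genome
    return math.sin(5 * x) + math.sin(5 * y) + 0.3 * (x + y)

def random_genome():
    return [random.uniform(0, 4), random.uniform(0, 4)]

def mutate(genome, rate=0.5, scale=0.2):
    # Each parameter may drift by a small Gaussian amount, clamped to [0, 4].
    return [min(4, max(0, g + random.gauss(0, scale))) if random.random() < rate else g
            for g in genome]

def recombine(a, b):
    # Each parameter of the child is taken at random from one of the two parents.
    return [a[i] if random.random() < 0.5 else b[i] for i in range(len(a))]

def evolve(pop_size=100, survivors=10, generations=60):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the top 10% by fitness, discard the rest.
        population.sort(key=fitness, reverse=True)
        parents = population[:survivors]
        # Refill the population by cloning, recombination and mutation.
        children = []
        while len(children) < pop_size - survivors:
            a, b = random.sample(parents, 2)
            children.append(mutate(recombine(a, b)))
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Keeping the surviving parents unchanged (elitism) guarantees the best fitness in the population never decreases from one generation to the next.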

The improvements achieved by the evolution strategy eventually become smaller and smaller and hardly change once the global maximum is reached. Finding global maxima or minima in a rugged function landscape is a great strength of genetic algorithms.

Can this optimization also be used for neural networks?

### Genetic Algorithms and Deep Learning

Artificial neural networks can learn independently to answer certain types of questions. Particularly in the rapidly developing area of deep learning, the optimization algorithms used can get stuck in a local minimum while minimizing the error function, so that the "learning" comes to a standstill.

To optimize the individual neuron weights in a neural network, what is known as backpropagation is usually used: with the help of various techniques, each neuron's share of the overall error is determined, and the weight of each of its input channels is then adjusted a little in the direction that is expected to reduce the error.
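As a rough sketch of that idea, here is one backpropagation-style update for a single linear neuron. This is heavily simplified for illustration; real networks chain such updates through many layers via the chain rule:

```python
# One gradient-style update per sample: each weight is nudged a little
# in the direction that reduces this neuron's share of the error.
def train_neuron(samples, lr=0.1, epochs=100):
    w, b = 0.0, 0.0   # weight for the single input channel, plus a bias
    for _ in range(epochs):
        for x, target in samples:
            y = w * x + b          # forward pass (linear neuron)
            error = y - target     # this neuron's contribution to the error
            w -= lr * error * x    # adjust the input weight against the gradient
            b -= lr * error        # same for the bias
    return w, b

# Learn y = 2x + 1 from a few sample points.
w, b = train_neuron([(0, 1), (1, 3), (2, 5)])
print(w, b)
```

After a few hundred such small corrections the weight and bias settle close to 2 and 1, the values that make the error zero on all samples.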

The idea of using genetic algorithms as an alternative way to optimize neural networks came up many years ago. At the time, however, the available computing power, and with it the experience with deep neural networks, was insufficient. In the meantime, the approach has been gaining importance in the area of deep learning.

### Evolution strategies for optimizing artificial neural networks

The genetic evolution of neural networks works as follows: a whole population of neural networks with randomly initialized neuron weights is formed. These networks are tested, and those with the best results are kept and developed further through cloning, recombination and mutation as described above.

The evolution strategy here operates on the neuron weights of the networks. This has the advantage that a network cannot get stuck in local minima, or only with great difficulty. Backpropagation is replaced by the genetic algorithm: instead of steadily adjusting each neuron in one direction within a single network, complete neural networks are grown and developed further in parallel.
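A minimal sketch of this weight evolution, assuming a tiny hand-rolled 2-2-1 network and the XOR truth table as the fitness test (both chosen purely for illustration, not taken from any study mentioned here):

```python
import math
import random

random.seed(1)

# XOR truth table used as the "test" for each network individual.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(weights, inputs):
    # Tiny 2-2-1 network; `weights` is a flat list of 9 numbers:
    # 4 hidden weights, 2 hidden biases, 2 output weights, 1 output bias.
    w = weights
    h0 = math.tanh(w[0] * inputs[0] + w[1] * inputs[1] + w[4])
    h1 = math.tanh(w[2] * inputs[0] + w[3] * inputs[1] + w[5])
    return 1 / (1 + math.exp(-(w[6] * h0 + w[7] * h1 + w[8])))

def fitness(weights):
    # Negative squared error over the XOR table: higher is better.
    return -sum((forward(weights, x) - t) ** 2 for x, t in XOR)

def evolve(pop_size=50, survivors=5, generations=300):
    # A whole population of networks with randomly initialized weights.
    population = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest networks, refill the rest with mutated clones.
        population.sort(key=fitness, reverse=True)
        parents = population[:survivors]
        population = parents + [
            [g + random.gauss(0, 0.3) for g in random.choice(parents)]
            for _ in range(pop_size - survivors)
        ]
    return max(population, key=fitness)

best = evolve()
print([round(forward(best, x)) for x, _ in XOR])
```

No gradient is computed anywhere: the only feedback each network receives is its fitness score, which is exactly why the approach cannot get stuck on a gradient-free plateau the way backpropagation can.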

A few months ago, OpenAI published a study that attempted to reproduce the successes DeepMind had achieved playing Atari games with deep learning and reinforcement learning, this time using neural networks optimized by genetic algorithms. The researchers were indeed able to show that the results of the neural networks "bred" in this way were on par with DeepMind's. Another very interesting aspect of genetic optimization of neural networks is that it parallelizes extremely easily: many individual instances instead of one central training instance.

### Genetic Algorithms and the Evolution of AI

Genetic algorithms can also be used for hyperparameter tuning of neural networks. Finding the optimal hyperparameter configuration of a deep learning model often requires many very time-consuming series of experiments. By using evolution strategies and genetic algorithms, this testing effort could be reduced by up to 80% in some cases.
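As an illustration, hyperparameters can be encoded as a genome and evolved just like weights. The search space and the stand-in score function below are invented for this sketch; in practice the score would come from actually training and validating the model:

```python
import random

random.seed(2)

# Hypothetical search space; real tuning would use the model's own hyperparameters.
SPACE = {
    "learning_rate": [0.0001, 0.001, 0.01, 0.1],
    "batch_size": [16, 32, 64, 128],
    "hidden_units": [32, 64, 128, 256],
}

def score(cfg):
    # Stand-in for an expensive training run; pretends the sweet spot is
    # lr=0.01, batch=32, hidden=128 (purely illustrative).
    return -(abs(cfg["learning_rate"] - 0.01) * 100
             + abs(cfg["batch_size"] - 32) / 32
             + abs(cfg["hidden_units"] - 128) / 128)

def random_config():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(cfg):
    # Mutation: re-draw one randomly chosen hyperparameter.
    key = random.choice(list(SPACE))
    child = dict(cfg)
    child[key] = random.choice(SPACE[key])
    return child

def evolve(pop_size=12, survivors=3, generations=10):
    population = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[:survivors]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - survivors)]
    return max(population, key=score)

best = evolve()
print(best)
```

Instead of exhaustively testing all 64 combinations, only the configurations that survive selection are evaluated further, which is where the reduction in testing effort comes from.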

In summary, genetic algorithms are likely to play a stronger role in many areas in the future, and transferring evolution strategies to the development of AI promises new ways of training and "breeding" artificial intelligence.

Graphic example of a function with many local extremes: Public Domain

Graphic Atari: OpenAI

by Roland Becker