
Overview of Neuroevolution of Augmenting Topologies (NEAT)

Advisor: Michael Adam

This repository contains my seminar paper on Neuroevolution of Augmenting Topologies (NEAT), including its LaTeX source code. The paper intends to give a good overview of the current state of research. Until the paper is done, this README collects interesting potential research resources. Feedback is welcome.

Very Important Resources

Neuroevolution and Broader Resources

  • Primer Stanley - Competitive Coevolution through Evolutionary Complexification (2004) https://www.cs.cmu.edu/afs/cs/project/jair/pub/volume21/stanley04a-html/jairhtml.html
  • Blogpost 2017, The Year of Neuroevolution (2017) (Difficulty: Easy, Audience: Overview, Quality: 3/5) https://medium.com/@moocaholic/2017-the-year-of-neuroevolution-30e59ae8fe18
    > Based on: 'Evolution Strategies as a Scalable Alternative to Reinforcement Learning' (Background Knowledge)
    > Evolution Strategies (ES) can be a strong alternative to Reinforcement Learning (RL), with advantages such as ease of implementation, invariance to episode length, robustness in settings with sparse rewards, better exploration behaviour than policy gradient methods, and ease of scaling in a distributed setting (a minimal ES sketch follows after this list)
    > ES scales extremely well with the number of CPUs available, demonstrating linear speedups in run time
    > The communication overhead of implementing ES in a distributed setting is lower than for reinforcement learning methods such as policy gradients and Q-learning
    > The whole history of deep learning is full of re-invention and resurrection; backpropagation, the main neural network learning algorithm, was itself reinvented several times (http://people.idsia.ch/~juergen/who-invented-backpropagation.html)
    > NEAT is a TWEANN (Topology- and Weight-Evolving Artificial Neural Network)
    > References: Genetic CNN, Large Scale Evolution of Image Classifiers, Evolving Deep Neural Networks, NMode - Neuro-MODule Evolution, PathNet, Evolution Channels Gradient Descent in Super Neural Networks
  • Primer Stanley - Neuroevolution: A different kind of deep learning (2017) https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning
  • Blogpost Introduction to Evolutionary Algorithms (2018) (Difficulty: Very Easy, Audience: Beginners, Quality: 3/5) https://towardsdatascience.com/introduction-to-evolutionary-algorithms-a8594b484ac
  • Blogpost Deep Neuroevolution: Genetic Algorithms are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning (2018) https://towardsdatascience.com/deep-neuroevolution-genetic-algorithms-are-a-competitive-alternative-for-training-deep-neural-822bfe3291f5
    > Mostly references and summarizes Uber's (Deep/Accelerated Neuroevolution) and OpenAI's ('Evolution Strategies as a Scalable Alternative to Reinforcement Learning') research
    > The results indicate that GAs (and random search) are not outright better or worse than other methods of optimizing DNNs, but rather a competitive alternative to add to one's RL tool belt
    > Like OpenAI, they state that although DNNs don't struggle with local optima in supervised learning, they can still get into trouble in RL tasks due to a deceptive or sparse reward signal; it is for this reason that non-gradient-based methods such as GAs can perform well compared to other popular RL algorithms
  • Primer Neuroevolution: A Primer On Evolving Artificial Neural Networks (2018) (Difficulty: Medium, Audience: Advanced, Quality: 5/5) https://www.inovex.de/blog/neuroevolution/
    > References: Large Scale Evolution of Image Classifiers, Evolving Deep Neural Networks
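
Several notes above reference the ES update rule from 'Evolution Strategies as a Scalable Alternative to Reinforcement Learning'. The following is a minimal sketch of that style of update on a toy objective; the objective, hyperparameters, and variable names are illustrative assumptions of mine, not taken from the paper or the blog posts:

```python
# Minimal sketch of an OpenAI-style Evolution Strategy (ES) update:
#   theta <- theta + alpha / (n * sigma) * sum_i F_i * eps_i
# The toy objective and all hyperparameters are illustrative only.
import numpy as np

def fitness(theta):
    # Toy stand-in for an RL episode return: peak at theta == 3.0.
    return -np.sum((theta - 3.0) ** 2)

rng = np.random.default_rng(0)
theta = np.zeros(5)               # parameters being evolved (e.g. NN weights)
n, sigma, alpha = 50, 0.1, 0.02   # population size, noise scale, step size

for step in range(300):
    eps = rng.standard_normal((n, theta.size))     # one perturbation per worker
    F = np.array([fitness(theta + sigma * e) for e in eps])
    F = (F - F.mean()) / (F.std() + 1e-8)          # fitness normalization
    theta += alpha / (n * sigma) * (eps.T @ F)     # gradient-free update

print(theta)  # approaches the optimum at 3.0 in every dimension (up to noise)
```

Note how little needs to be communicated between workers in a distributed run: only scalar fitness values and random seeds, which is the low communication overhead the post highlights.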

NEAT/HyperNEAT Resources

  • Podcast Stanley's podcast interview about NEAT (2018) https://twimlai.com/twiml-talk-94-neuroevolution-evolving-novel-neural-network-architectures-kenneth-stanley/
  • Blogpost NEAT, An Awesome Approach to NeuroEvolution (2019) (Difficulty: Easy, Audience: Beginners, Quality: 4/5) https://towardsdatascience.com/neat-an-awesome-approach-to-neuroevolution-3eca5cc7930f
    > NEAT's original paper focused solely on evolving dense neural networks node by node and connection by connection
    > The progress made in training NNs through gradient descent and backpropagation need not be abandoned for a neuroevolutionary process; recent papers have even highlighted ways to use NEAT and NEAT-like algorithms to evolve neural net structure and then use backpropagation and gradient descent to optimize these networks
    > NEAT deliberately chooses a direct encoding; the representation is a little more complex than a simple graph or binary encoding, but still straightforward: a genome simply holds two lists of genes, a series of nodes and a series of connections (see the genome sketch after this list)
    > Cites: 'Evolving Deep Neural Networks'
  • Blogpost HyperNEAT: Powerful, Indirect Neural Network Evolution (2019) (Difficulty: Easy, Audience: Beginners, Quality: 3/5) https://towardsdatascience.com/hyperneat-powerful-indirect-neural-network-evolution-fba5c7c43b7b
    > DNA is an indirect encoding because the phenotypic results (what we actually see) are orders of magnitude larger than the genotypic content (the genes in the DNA): the human genome has about 30,000 genes coding for roughly 3 billion base pairs, yet the brain has trillions of connections. Obviously, there is something indirect going on here! (an indirect-encoding sketch follows after this list)
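
Since the NEAT entry above describes the direct encoding as just two gene lists, here is a minimal sketch of such a genome in Python; the class and field names are my own illustration, not those of the original paper or any particular library:

```python
# Minimal sketch of NEAT's direct encoding: a genome is exactly two gene
# lists, node genes and connection genes. All names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class NodeGene:
    node_id: int
    node_type: str            # "input", "hidden" or "output"

@dataclass
class ConnectionGene:
    in_node: int              # source node id
    out_node: int             # target node id
    weight: float
    enabled: bool             # disabled genes are kept as historical markers
    innovation: int           # innovation number, aligns genes during crossover

@dataclass
class Genome:
    nodes: List[NodeGene] = field(default_factory=list)
    connections: List[ConnectionGene] = field(default_factory=list)

# A minimal 2-input, 1-output starting genome, fully connected:
genome = Genome(
    nodes=[NodeGene(0, "input"), NodeGene(1, "input"), NodeGene(2, "output")],
    connections=[
        ConnectionGene(0, 2, weight=0.5, enabled=True, innovation=1),
        ConnectionGene(1, 2, weight=-1.2, enabled=True, innovation=2),
    ],
)
```

Structural mutations then act directly on these lists: add-connection appends a ConnectionGene, and add-node disables an existing connection and inserts a NodeGene plus two new connections in its place.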
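
For contrast, the HyperNEAT entry's point about indirect encoding can be sketched the same way: a tiny genotype (a function over neuron coordinates, standing in here for an evolved CPPN) generates a much larger phenotype (a full weight matrix). The coordinate layout and the hand-written pattern function are illustrative stand-ins, not HyperNEAT's actual machinery:

```python
# Sketch of indirect encoding in the spirit of HyperNEAT: a small genotype
# (a function over substrate coordinates, standing in for an evolved CPPN)
# produces a much larger phenotype (a dense weight matrix).
import numpy as np

def cppn(x1, y1, x2, y2):
    # Hand-written stand-in for a CPPN: a couple of parameters describe a
    # whole geometric connectivity pattern (distance-based, symmetric).
    d = np.hypot(x2 - x1, y2 - y1)
    return np.sin(3.0 * d) * np.exp(-d)

# Lay 16x16 = 256 neurons out on a 2D substrate.
coords = [(x / 15.0, y / 15.0) for x in range(16) for y in range(16)]

# Query the "genotype" once per neuron pair to build the phenotype.
weights = np.array([[cppn(x1, y1, x2, y2) for (x2, y2) in coords]
                    for (x1, y1) in coords])

print(weights.shape)  # (256, 256): ~65k weights from a handful of parameters
```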

Unsorted Wikipedia Resources

Research Paper Resources

Unsorted Implementation Resources

Background Knowledge Resources

Other/Unsorted Resources
