
Random genomes and add link mutations will link output nodes to other nodes #57

Open
steampoweredtaco opened this issue Jul 11, 2022 · 6 comments


@steampoweredtaco

Upon inspecting a new random population with a larger max-node count, I discovered that links may use output nodes as inputs to any non-sensor node, including other output nodes. As far as I can tell, this can also occur with the add-link mutations.

In most topologies, including NEAT, I think an output node should be an end state unless the link is a recurrent connection back to another non-output node, and never to itself. If my assumption is incorrect, could you point me to a paper that defines the expected constraints on the node types? What I have read so far seems to imply such constraints but doesn't really discuss them or explore the benefits of constraining output nodes.

These types of connections are likely weeded out in normal experiments, but I suspect their existence significantly complicates and enlarges the search space and degrades the quality or consistency of the network's actions, especially in the cases I've noticed where many outputs activate directly into another output node. In my model of asexual reproduction, it is much less likely that such an output link is lost once it occurs, and the initial populations will randomly contain many of them.

I think it is an issue that this implementation produces non-recurrent link genes from output nodes to hidden nodes, other output nodes, or even the node itself, but I may be wrong.

Perhaps it should be a configurable option not to form non-recurrent output links, especially for mutations, since pruning the new mutations at runtime from client code greatly impacts performance in real-time simulation use cases.
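To illustrate the proposed option, here is a minimal sketch in Go of a link filter that rejects non-recurrent links originating from output nodes (and output self-loops), while still permitting recurrent feedback from an output into the rest of the network. The `Node`, `Link`, and `allowLink` names are invented for this example and are not the goNEAT API:

```go
package main

import "fmt"

// NodeType distinguishes the roles a node can play in a NEAT genome.
// These names are illustrative, not the goNEAT API.
type NodeType int

const (
	Sensor NodeType = iota
	Hidden
	Output
)

type Node struct {
	ID   int
	Type NodeType
}

type Link struct {
	From, To  Node
	Recurrent bool
}

// allowLink rejects non-recurrent links that originate from an output
// node, which is the constraint proposed in this issue. Recurrent
// links from an output back into hidden nodes are still permitted,
// but an output looping onto itself or into another output is not.
func allowLink(l Link) bool {
	if l.From.Type != Output {
		return true
	}
	if l.From.ID == l.To.ID {
		return false // output self-loop
	}
	return l.Recurrent && l.To.Type != Output
}

func main() {
	out1 := Node{20, Output}
	out2 := Node{23, Output}
	hid := Node{5, Hidden}
	fmt.Println(allowLink(Link{From: out1, To: out2, Recurrent: false})) // false
	fmt.Println(allowLink(Link{From: out1, To: out1, Recurrent: true}))  // false
	fmt.Println(allowLink(Link{From: out1, To: hid, Recurrent: true}))   // true
}
```

A check like this could run inside the add-link mutation loop so rejected candidates are resampled instead of being pruned later by client code.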

@yaricom
Owner

yaricom commented Jul 11, 2022

Hello!

A link from an output node to a hidden node, or to itself, can serve as a feedback channel, which can be very useful for certain tasks. Furthermore, it was experimentally shown to produce successful solutions. Many natural physical systems incorporate direct or implicit feedback. Error backpropagation in deep learning can also be considered a form of feedback.

You can take a look at my experiments related to the maze navigation problem in NEAT with Novelty Search. In that experiment, the evolved successful solvers tend to create topologies with recurrent links from the output nodes (direct or through hidden nodes). Such topologies have very good rational explanations based on the particulars of the task to be solved.

However, I would be glad to know if preventing such recurrent links can boost performance in other types of tasks.

Cheers!

@steampoweredtaco
Author

Hi there. I've been thinking about what you said. I agree recurrent connections are helpful. I was more concerned with non-recurrent links from output nodes to other output nodes. That, and output nodes recurrent with themselves, seem rarely beneficial, at least in my use cases; I'm not sure whether that extends to other use cases. These cases have been problematic:

  1. Any output node is recurrent with itself.
  2. Any output node has a link to another output node.
  3. Two output nodes loop with each other through a pair of opposing links.
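To make the three cases concrete, here is a small sketch of a scan that flags them in a genome's connection list. Case 3 falls out of case 2, since each leg of a mutual loop is itself an output-to-output link. The `Link` type and `problemLinks` helper are illustrative, not goNEAT's API:

```go
package main

import "fmt"

// Link is an illustrative connection gene; not the goNEAT API.
type Link struct {
	From, To int
}

// problemLinks flags the three problematic patterns listed above:
// (1) an output self-loop, (2) an output-to-output link, and
// (3) two outputs looping with each other -- case 3 is caught
// because each leg of the mutual loop matches case 2.
func problemLinks(outputs map[int]bool, links []Link) []Link {
	var bad []Link
	for _, l := range links {
		if outputs[l.From] && (l.From == l.To || outputs[l.To]) {
			bad = append(bad, l)
		}
	}
	return bad
}

func main() {
	outputs := map[int]bool{20: true, 23: true}
	links := []Link{{20, 20}, {20, 23}, {23, 20}, {5, 20}}
	fmt.Println(len(problemLinks(outputs, links))) // 3 flagged links
}
```

A scan like this is cheap enough to run per epoch when pruning, though filtering at mutation time (as proposed above) avoids the cost entirely.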

See this network created using the goNEAT library after 1000+ epochs:
[image: evolved network phenotype]
(pink nodes are outputs, green is a hidden node, blue are inputs, yellow is the bias input, dotted edges are recurrent connections)

Notice the loop between 20 and 23. This loop has little chance of ever changing, so 20 and 23 end up in an unbounded cycle where the activation stays pinned at its max or min limit because of the loop. The same behavior, with better mutability, could be achieved with hidden nodes, and this loop structure seems less likely to optimize toward a more reactive solution.
It is made even more difficult by the fact that output node 20 also loops activation back to itself.

I'm using this for AI in a simulation and modified your library for real-time NEAT. Once these types of loops form, the AI's progress seems to get stuck on a single behavior even after many more mutations. I believe this is because of the output-node loops and the fact that they tend to stay in the population once they reach the fittest species/population. With just a single new crossover and/or mutation at each real-time epoch point, the loops are unlikely to change fast enough to provide a benefit (or a further loss) to fitness, so they get stuck and the behavior tied to those outputs tends not to improve again.

Other well-performing implementations of NEAT I've seen that don't seem to suffer from this loop structure do the following:

No node will make a connection with another node in the same layer as itself.

Pretty simple, but it follows from that one rule that outputs don't loop with outputs, and it also prevents a similar issue where hidden nodes on the same layer cause the same kind of recursive looped activation.
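A rough sketch of how that layering rule could be checked, assuming a hypothetical depth assignment where inputs sit at layer 0 and all outputs are forced into the final layer; none of these names are goNEAT's actual API:

```go
package main

import "fmt"

// layersWithOutputs assigns each node a feed-forward depth: inputs
// are layer 0, every other node sits one past its deepest incoming
// source, and all outputs are then forced into the final layer.
// Recurrent links should be excluded from `forward` first. All names
// here are hypothetical, not the goNEAT API.
func layersWithOutputs(inputs, outputs []int, forward map[int][]int, passes int) map[int]int {
	depth := make(map[int]int)
	for _, in := range inputs {
		depth[in] = 0
	}
	// Repeated relaxation; enough passes for the small acyclic
	// graphs sketched here.
	for i := 0; i < passes; i++ {
		for from, tos := range forward {
			for _, to := range tos {
				if d, ok := depth[from]; ok && depth[to] < d+1 {
					depth[to] = d + 1
				}
			}
		}
	}
	max := 0
	for _, d := range depth {
		if d > max {
			max = d
		}
	}
	for _, out := range outputs {
		depth[out] = max // all outputs share the final layer
	}
	return depth
}

// allowed implements the quoted rule: a link may not connect two
// nodes that sit in the same layer.
func allowed(depth map[int]int, from, to int) bool {
	return depth[from] != depth[to]
}

func main() {
	// Inputs 1 and 2, hidden node 5, outputs 20 and 23.
	forward := map[int][]int{1: {5}, 2: {5, 23}, 5: {20}}
	depth := layersWithOutputs([]int{1, 2}, []int{20, 23}, forward, 5)
	fmt.Println(allowed(depth, 20, 23)) // false: outputs share a layer
	fmt.Println(allowed(depth, 5, 20))  // true
}
```

Because both outputs land in the same final layer, output-to-output links are rejected by the same test that blocks same-layer hidden connections.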

It is possible that the design decisions in goNEAT's implementation don't work well specifically with real-time NEAT modifications, where the population evolves continuously instead of all at once. I do like the library, though, and if it were more flexible to configure restrictions like this (not forming loops or output-to-output connections), I'd be able to use it more readily without heavy one-off modifications.

@yaricom
Owner

yaricom commented Aug 1, 2022

Hello!

Thank you for sharing your findings and ideas. It is greatly appreciated and will help to evolve the goNEAT library further.

I've found that the loops you mentioned tend to make the general phenotype structure simpler by encapsulating the correlation between the nodes in the loop, and they also provide a kind of memory for continuous tasks. Almost all successful solutions found in my experiments have some kind of loops, including loops at the output nodes. Thus, I'm not sure that such behavior of the goNEAT library is wrong. Nevertheless, I can assume that for other specific tasks it may be beneficial to forbid loops.

I'll keep your proposal on my list of future improvements. In the meantime, I'm working on performance improvements and some public API changes which I'm planning to release soon as v4 of the library.

Thank you for participating.

@yaricom
Owner

yaricom commented Aug 1, 2022

I have found a great paper that may be interesting with regard to the problems you are trying to solve.

The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities

I saw in the phenotype scheme you provided that most of the inputs are actually not connected at all, i.e., the sensory inputs are effectively ignored. Such a topology can be the result of evolution exploiting a loophole in the fitness function or the experimental data structure. As mentioned in the paper:

> The researchers were surprised to find that high-performing neural networks evolved that contained nearly no connections or internal neurons: Even most of the sensory input was ignored. The networks seemed to learn associations without even receiving the necessary stimuli, as if a blind person could identify poisonous mushrooms by color. A closer analysis revealed the secret to their strange performance: Rather than actually learning which objects are poisonous, the networks learned to exploit a pattern in how objects were presented.

Anyway, the paper has a lot of interesting facts to learn.

@steampoweredtaco
Author

steampoweredtaco commented Oct 11, 2022 via email

@yaricom
Owner

yaricom commented Oct 11, 2022

Hi!

Thank you for sharing your findings. This is really helpful for making the library better. I'll take a look at the optimizations you implemented.
