binaryneuralnet

An elementary feedforward multilayer perceptron neural network designed to pattern-match basic binary operators such as AND, OR, and XOR. The network will usually converge successfully when "hinted" values/express settings are used; this is the best option for users who are interested in the functionality of the program and do not want to do a deep dive into the actual calculations and granular details of neural network training.

Eventually the program will be fully implemented using TensorFlow in Python (tfneuralnetwork.py) for maximum performance in testing, while the ground-up Java code in neuralnetwork.java will serve as a low-level teaching and experimentation tool.

What it does:

The binary operators AND, OR, and XOR are the basis of propositional logic, which is fundamental to a great many topics in mathematics and computer science. When university students take their first course in propositional logic and discrete mathematics, they learn the truth tables for these operators (shown below); this neural network learns the same thing. Given a truth table, the neural network cannot correctly apply the operators in its initial (untrained) state; however, after proper training it can "learn" the correct pattern for each operator and apply it with a high degree of accuracy.
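
For reference, the truth tables the network is asked to learn are (A and B are the two binary inputs):

```
 A | B | A AND B | A OR B | A XOR B
---+---+---------+--------+---------
 0 | 0 |    0    |   0    |    0
 0 | 1 |    0    |   1    |    1
 1 | 0 |    0    |   1    |    1
 1 | 1 |    1    |   1    |    0
```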

As an example, if a user selects the "XOR" (exclusive or) option from the console and does not use custom weights, the results for the 4 possible inputs to the neural network (i.e. 0,1 0,0 1,0 1,1) will simply be random floating-point values x (such that 0 <= x <= 1). These outputs occur because the initial weights in the network are set to random values. Once the network has been trained, which can be done easily by selecting "express" training (this also sets some key training parameters to values suited to successful training, e.g. a sufficiently low target error rate), the network's weights will have been adjusted so that the network produces outputs within the specified margin of error. (Note: this process can take anywhere from thousands to billions of iterative adjustments depending on the margin of error the network must meet.) Once trained, the network should be able to reproduce "XOR" to a high degree of accuracy; for example, when given the inputs 0,1 the code is likely to produce a result such as 0.99xxx, which is considered sufficiently close to the correct answer of 1.
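
As a purely illustrative reading of that margin of error: if the specified margin were 0.01, then for the inputs 0,1 a trained output of 0.993 differs from the target of 1 by 0.007 and would be accepted, whereas an output of 0.95 (off by 0.05) would indicate the network needs further training.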

Note: as of October 2017, neural networks can be saved for later analysis. This will allow for ensemble-style learning as well as other research opportunities.

How it works:

The basis of any neural network is a series of neurons and weights (connections with synaptic strengths between neurons) [see diagram]. A neural network may have i input neurons, o output neurons, and some number of hidden neurons arranged in layers between the input and output neurons. When a neural network is initialized, its weights are set to random floating-point values, so when given an input a it is unlikely to produce the desired output b; however, through the process of training, the network's weights can be adjusted so that it produces an output very near the desired output b. This is achieved by combining several discrete processes iteratively. On each iteration, the error for the network is first calculated using the Mean Squared Error (MSE); assuming the error is above the user-selected threshold, the error contribution of each weight is calculated using gradient descent (a process that can be likened to a ball rolling in a hilly valley trying to find the lowest point). The weights are then updated backwards across the network from output to input (the reverse of the feed-forward direction) using backpropagation, and the process repeats.
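
To make that loop concrete, below is a minimal, self-contained sketch of the same idea: a 2-2-1 sigmoid multilayer perceptron learning XOR with MSE, gradient descent, and backpropagation. This is illustrative only and is not the code from neuralnetwork.java; the class name, learning rate, random initialization, and error threshold are arbitrary choices made for the example.

```java
import java.util.Random;

public class XorTrainingSketch {
    // Hidden layer: 2 neurons x (2 input weights + 1 bias). Output neuron: 2 weights + 1 bias.
    static double[][] wHidden = new double[2][3];
    static double[] wOut = new double[3];

    public static void main(String[] args) {
        double[][] inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] targets  = {0, 1, 1, 0};

        Random rng = new Random();                 // random initial weights
        for (double[] row : wHidden)
            for (int i = 0; i < 3; i++) row[i] = rng.nextDouble() * 2 - 1;
        for (int i = 0; i < 3; i++) wOut[i] = rng.nextDouble() * 2 - 1;

        double learningRate = 0.5, mse = 1.0;
        int epoch = 0;
        while (mse > 0.0001 && epoch < 200_000) {  // stop once MSE is below the threshold
            mse = 0;
            for (int p = 0; p < inputs.length; p++) {
                double[] x = inputs[p];

                // Feed-forward pass.
                double[] h = hidden(x);
                double out = output(h);

                // Error and deltas (MSE derivative times sigmoid derivative).
                double error = targets[p] - out;
                mse += error * error;
                double deltaOut = error * out * (1 - out);
                double[] deltaHidden = new double[2];
                for (int j = 0; j < 2; j++)
                    deltaHidden[j] = deltaOut * wOut[j] * h[j] * (1 - h[j]);

                // Backpropagation: update output weights first, then hidden weights.
                for (int j = 0; j < 2; j++) wOut[j] += learningRate * deltaOut * h[j];
                wOut[2] += learningRate * deltaOut;
                for (int j = 0; j < 2; j++) {
                    wHidden[j][0] += learningRate * deltaHidden[j] * x[0];
                    wHidden[j][1] += learningRate * deltaHidden[j] * x[1];
                    wHidden[j][2] += learningRate * deltaHidden[j];
                }
            }
            mse /= inputs.length;
            epoch++;
        }
        System.out.println("Stopped after " + epoch + " epochs, MSE = " + mse);
        for (double[] x : inputs)
            System.out.printf("%d XOR %d -> %.4f%n", (int) x[0], (int) x[1], output(hidden(x)));
    }

    static double[] hidden(double[] x) {
        double[] h = new double[2];
        for (int j = 0; j < 2; j++)
            h[j] = sigmoid(wHidden[j][0] * x[0] + wHidden[j][1] * x[1] + wHidden[j][2]);
        return h;
    }

    static double output(double[] h) {
        return sigmoid(wOut[0] * h[0] + wOut[1] * h[1] + wOut[2]);
    }

    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }
}
```

As with the full program, a small sigmoid network like this can occasionally settle into a poor local minimum, in which case a run simply needs to be restarted with fresh random weights.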

Save functionality for trained networks operates using the Java Serializable interface, an addition made by U/Blackspade741.
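
The sketch below shows how Serializable-based saving of a trained network typically works in Java; the TrainedNetwork class, its fields, and the file name are placeholders for illustration, not the exact identifiers used in neuralnetwork.java.

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-in for the trained network object; the real class and fields may differ.
class TrainedNetwork implements Serializable {
    private static final long serialVersionUID = 1L;
    double[][] weights;
}

public class SaveLoadSketch {
    public static void main(String[] args) throws Exception {
        TrainedNetwork trained = new TrainedNetwork();
        trained.weights = new double[][] {{0.12, -0.87, 0.44}, {0.91, 0.05, -0.33}};

        // Save the trained network to disk for later analysis or ensemble-style use.
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("network.ser"))) {
            out.writeObject(trained);
        }

        // Load it back in a later session.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("network.ser"))) {
            TrainedNetwork restored = (TrainedNetwork) in.readObject();
            System.out.println("Restored first weight: " + restored.weights[0][0]);
        }
    }
}
```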

Development Plan:

Currently neuralnetwork.java is fully functional while tfneuralnetwork.py (tf ~ TensorFlow) is in development. Once tfneuralnetwork.py is fully functional it will inherit the user interface of neuralnetwork.java and be used purely for demonstration purposes; at that point neuralnetwork.java will be given more granular controls for use in experimentation and teaching. In addition, performance-testing functionality will be added and used to compare the two implementations of the neural network: the bottom-up version and the TensorFlow version. Note: at present tfneuralnetwork.py can be used to generate potential weights for our neural network, but this functionality is in its initial stages and has yet to be validated for accuracy.

Acknowledgements:

This project is largely inspired by and based on information and techniques from Jeff Heaton's videos and lectures on neural networks posted on YouTube: https://www.youtube.com/user/HeatonResearch

I would also like to acknowledge Ray Kurzweil's excellent book "How to Create a Mind", which provided inspiration for this project as well as a broad overview of machine learning. (Amazon link: https://www.amazon.ca/How-Create-Mind-Thought-Revealed/dp/0143124048)

For more technical details, to fill in gaps, and to expand on information from the previously mentioned lectures and videos, I used Wikipedia's excellent in-depth pages on Backpropagation (https://en.wikipedia.org/wiki/Backpropagation), Perceptrons (https://en.wikipedia.org/wiki/Perceptron), Multilayer Perceptrons (https://en.wikipedia.org/wiki/Multilayer_perceptron), Activation Functions (https://en.wikipedia.org/wiki/Activation_function), the Logistic Curve (https://en.wikipedia.org/wiki/Logistic_function), and Gradient Descent (https://en.wikipedia.org/wiki/Gradient_descent).