
Feed Forward Neural Network Single Hidden Layer Hyper Parameterization

Python implementation that explores how different parameters impact a feed-forward neural network with a single hidden layer.

A brief analysis of the results is provided in Portuguese. It was submitted as an assignment for a graduate course named Connectionist Artificial Intelligence at UFSC, Brazil.

In short, sine and cosine samples are fed to the feed-forward network, which tries to learn and predict their outputs. Different numbers of hidden neurons, training instances, learning rates, epochs, and activation functions are tested separately, one parameter at a time. All configurations are trained with gradient descent.

The base case uses 10 neurons in the hidden layer, 200 training instances, 20000 epochs, a learning rate of 0.005, and some added noise (the base-case plot is included in the repository).
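As a rough illustration of this setup, here is a minimal NumPy sketch of the base case, not the repository's actual code: one hidden layer trained with plain batch gradient descent to map x to [sin(x), cos(x)]. The tanh activation, the 0.05 noise level, and the mean squared error loss are assumptions not stated above.

```python
# Minimal sketch (assumptions: tanh hidden layer, MSE loss, 0.05 noise),
# not the repository's actual implementation.
import numpy as np

def train(n_hidden=10, n_train=200, epochs=20000, lr=0.005, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Noisy training data: x -> [sin(x), cos(x)].
    x = rng.uniform(-np.pi, np.pi, size=(n_train, 1))
    y = np.hstack([np.sin(x), np.cos(x)]) + rng.normal(0, noise, (n_train, 2))

    # One hidden layer of n_hidden neurons, linear output layer.
    W1 = rng.normal(0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 2)); b2 = np.zeros(2)

    for _ in range(epochs):
        h = np.tanh(x @ W1 + b1)            # hidden layer forward pass
        out = h @ W2 + b2                   # linear output layer
        err = out - y                       # dMSE/dout, up to a constant
        # Backpropagation: gradients of the mean squared error.
        dW2 = h.T @ err / n_train; db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)    # tanh'(z) = 1 - tanh(z)^2
        dW1 = x.T @ dh / n_train; db1 = dh.mean(axis=0)
        # Plain (batch) gradient descent update.
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    return ((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean()

print("base-case training MSE:", train())
```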

The result folder holds the outputs of the other scenarios, where different numbers of neurons, training instances, epochs, learning rates, and noise levels are tested.
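A one-at-a-time sweep over such scenarios could be driven as sketched below, reusing the `train` helper from the snippet above; the specific neuron counts are illustrative, not the repository's actual grid.

```python
# Vary one hyperparameter while holding the others at their base-case
# values, reusing the train() helper sketched above.
for n_hidden in (2, 5, 10, 20, 50):
    mse = train(n_hidden=n_hidden)
    print(f"{n_hidden:>3} hidden neurons -> training MSE {mse:.5f}")
```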
