Artificial Neural Network and Deep Q-Learning Network from Scratch

Implementation of a Neural Network (MLP) and a Deep Q-Learning Network (DQN) using only the numpy library. The DQN is trained to play the CartPole game.

Multilayer Perceptron (ANN)

Neural Network Construction: Notebook

This notebook walks through constructing a multilayer perceptron step by step.
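As a rough picture of what the notebook builds up, a forward pass through a tiny two-layer perceptron in numpy looks roughly like the sketch below. The layer sizes, sigmoid activation, and variable names are illustrative assumptions, not the notebook's actual code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 4 inputs -> 8 hidden units -> 2 outputs
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((8, 4)), np.zeros((8, 1))
W2, b2 = 0.1 * rng.standard_normal((2, 8)), np.zeros((2, 1))

def forward(x):
    """Forward pass; x is a column of 4 features, shape (4, batch)."""
    a1 = sigmoid(W1 @ x + b1)   # hidden layer activations
    a2 = sigmoid(W2 @ a1 + b2)  # output layer activations
    return a1, a2

hidden, y_hat = forward(np.random.rand(4, 1))
```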

Neural Network Implementation: NeuralNetwork

This file contains the full implementation of the neural network, with momentum added to the weight-update step. To save and load a NeuralNetwork, use save_network and load_network from saveload.py.
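The momentum mentioned above generally follows the classical momentum rule: keep a running velocity of past gradients and move the weights along it. A minimal sketch, assuming one velocity buffer per weight matrix (names and hyperparameters here are illustrative, not the file's exact code):

```python
import numpy as np

def momentum_update(W, grad_W, velocity, lr=0.01, beta=0.9):
    """Classical momentum: accumulate a running velocity of past
    gradients and step the weights along it instead of the raw gradient."""
    velocity = beta * velocity - lr * grad_W
    return W + velocity, velocity

# Illustrative usage with stand-in values
W = np.zeros((8, 4))
v = np.zeros_like(W)
grad = np.ones((8, 4))            # stand-in for a backprop gradient
W, v = momentum_update(W, grad, v)
```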

Deep Q-Learning Network

Train a DQN to Play CartPole: Notebook

This notebook demonstrates how to use the NeuralNetwork to implement the DQN algorithm.
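The heart of that algorithm is building Bellman targets from a replay minibatch and regressing the network onto them. A hedged sketch of the target computation (the predict method and the batch layout are assumptions mirroring the generic DQN recipe, not the notebook's exact API):

```python
import numpy as np

def dqn_targets(q_net, target_net, batch, gamma=0.99):
    """Build Q-learning regression targets for a minibatch of transitions
    (states, actions, rewards, next_states, dones). Both networks are
    assumed to expose predict(states) -> array of shape (batch, n_actions)."""
    states, actions, rewards, next_states, dones = batch
    targets = q_net.predict(states).copy()                 # current Q estimates
    next_q = target_net.predict(next_states).max(axis=1)   # max_a' Q_target(s', a')
    targets[np.arange(len(actions)), actions] = rewards + gamma * next_q * (1 - dones)
    return targets
```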

Custom Gym Environment

Maze Harvest: Environment

Check the Agent Training Notebook to learn more about the environment.
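Since it is a Gym-style environment, interaction presumably follows the usual reset/step loop; below is a minimal sketch with a random policy. The constructor, action count, and the exact return values of step() are assumptions, and the training notebook shows the real interface:

```python
import numpy as np

def random_rollout(env, n_actions, max_steps=500):
    """Roll out one episode with a random policy on an environment that
    follows the Gym-style reset()/step(action) convention, which Maze
    Harvest is assumed to share."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = np.random.randint(n_actions)
        state, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```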

DQN Using TensorFlow to Play Maze Harvest

DQN Using TensorFlow: DQN

Agent Training: Notebook
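In the TensorFlow version, the hand-rolled numpy network is replaced by a small Keras model. A hedged sketch of such a Q-network follows; the layer widths, optimizer, and loss are assumptions, not necessarily what the linked implementation uses:

```python
import tensorflow as tf

def build_q_network(state_dim, n_actions, lr=1e-3):
    """Small fully connected Q-network: maps a state vector to one
    Q-value per action."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(state_dim,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_actions, activation="linear"),  # raw Q-values
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model
```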

Networks Folder

This folder contains pre-trained networks. Refer to the notebooks to learn how to load and use the networks.
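For the numpy networks, loading presumably goes through the saveload.py helpers mentioned earlier. A hedged usage sketch, where the file name and the exact call signatures are assumptions and the notebooks show the real calls:

```python
from saveload import load_network, save_network

# Hypothetical file name; the actual pre-trained files live in this folder.
net = load_network("networks/cartpole_dqn")

# ...use the loaded NeuralNetwork, then optionally save it back.
save_network(net, "networks/cartpole_dqn")   # assumed (network, path) argument order
```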

License

This project is licensed under the terms of the GNU General Public License v3.0 - see the LICENSE file for details.