imhgchoi/pytorch-implementations

PyTorch Implementation Examples

This repository holds source code for machine learning algorithms and modular neural networks implemented with PyTorch.

Any comments or feedback are welcome; email me at imhgchoi@korea.ac.kr

Contents

  1. Gradient Descent : not PyTorch -- simple gradient descent under several different conditions
  2. Logistic Regression : not PyTorch
  3. Deep Neural Networks : predicting handwritten digits from the MNIST dataset
  4. Convolutional Neural Networks : predicting handwritten digits from the MNIST dataset
  5. Recurrent Neural Networks : predicting future stock price trends with an RNN (LSTM cells)
  6. AutoEncoders    
    6.1 Feed Forward AutoEncoder : regenerating MNIST images with a feed-forward AutoEncoder    
    6.2 Convolutional AutoEncoder : regenerating MNIST images with a convolutional AutoEncoder    
    6.3 Beta-Variational AutoEncoder : regenerating MNIST images with a Beta-Variational AutoEncoder.
        I found it hard to train a vanilla VAE, so I adopted the Beta-VAE with an incrementally increased Beta to help convergence.    
    6.4 Sparse AutoEncoder : regenerating MNIST images with a sparse AutoEncoder with 1300 hidden code units    
    6.5 Denoising AutoEncoder : regenerating MNIST images corrupted with Gaussian noise, using a denoising AutoEncoder
  7. Deep Q Network    
    7.1 Feed Forward DQN : training a feed-forward DQN agent on CartPole    
    7.2 Convolutional DQN : training a convolutional DQN agent on CartPole. Referenced here, but the agent failed to master the game
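As a flavor of what item 1 covers, here is a minimal gradient-descent sketch in pure NumPy (my own illustration, not code from the repository; the quadratic objective is an arbitrary choice for demonstration):

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=[0.0])
```

With a learning rate of 0.1 the distance to the minimum shrinks by a factor of 0.8 each step, so 100 steps converge well within float precision; the "several different conditions" in the repository presumably vary things like the learning rate and objective.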

NOTE : All neural network models are built without train/dev/test splits, so they are prone to overfitting.
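Item 6.3 mentions an incremental Beta to help the Beta-VAE converge. A minimal sketch of how such a schedule can be combined with the Beta-VAE objective (my own illustration; the repository's actual schedule, `beta_max`, and warmup length may differ):

```python
def beta_schedule(epoch, beta_max=4.0, warmup_epochs=20):
    """Linearly ramp beta from 0 up to beta_max over the warmup epochs,
    so the KL term is weak early on and the decoder first learns to reconstruct."""
    return beta_max * min(1.0, epoch / warmup_epochs)

def beta_vae_loss(recon_loss, kl_divergence, epoch):
    """Beta-VAE objective: reconstruction loss + beta * KL divergence,
    with beta annealed according to the current epoch."""
    return recon_loss + beta_schedule(epoch) * kl_divergence
```

Starting with a small Beta keeps the KL penalty from collapsing the latent code before the decoder is useful, which is the convergence problem the annealing is meant to address.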

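The DQN agents in item 7 need an exploration strategy when picking actions; the standard choice is epsilon-greedy. A sketch of that technique (a generic illustration, not necessarily the repository's exact code):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon pick a random action (explore);
    otherwise pick the action with the highest estimated Q-value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

In practice epsilon is usually decayed over training, so the agent explores early and exploits its learned Q-function later.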

License: MIT