
Add mini-batch/stochastic gradient descent #3

Open
KenNewcomb opened this issue Feb 8, 2020 · 0 comments
Labels
enhancement New feature or request

Comments

KenNewcomb (Owner) commented Feb 8, 2020

Currently, the only optimizer available is batch gradient descent, where all training examples are used to compute the gradient during each epoch. I'd like to implement stochastic gradient descent, where a single random sample is used to compute each gradient update, and mini-batch gradient descent, where small subsets of the training data (mini-batches) are used to compute the gradient.

Introduce a batch_size parameter, along with a sensible default.

Perhaps these should be in their own class (Optimizer)?
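A rough sketch of what such an Optimizer class could look like, assuming a NumPy-based model that exposes a gradient(X, y) method and a weights array (both hypothetical names, not existing code in this repo). With this convention, batch_size=None recovers the current full-batch behavior, batch_size=1 gives stochastic gradient descent, and anything in between is mini-batch gradient descent:

```python
import numpy as np

class SGD:
    """Hypothetical optimizer sketch: batch_size=None -> full-batch GD,
    batch_size=1 -> stochastic GD, otherwise mini-batch GD."""

    def __init__(self, learning_rate=0.01, batch_size=32):
        self.learning_rate = learning_rate
        self.batch_size = batch_size

    def run_epoch(self, model, X, y):
        n = X.shape[0]
        batch_size = self.batch_size or n   # None means use all examples
        indices = np.random.permutation(n)  # reshuffle once per epoch
        for start in range(0, n, batch_size):
            batch = indices[start:start + batch_size]
            # model.gradient and model.weights are assumed APIs for illustration
            grad = model.gradient(X[batch], y[batch])
            model.weights -= self.learning_rate * grad
```

The fit loop would then just call optimizer.run_epoch(model, X, y) once per epoch, keeping the batching logic out of the model itself.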

@KenNewcomb KenNewcomb added the enhancement New feature or request label Feb 8, 2020
@KenNewcomb KenNewcomb added this to To do in ML Team Board Feb 10, 2020
Development

No branches or pull requests

1 participant