Continuously improving a neural network over time using small batches. #135

Open

BjarkeCK opened this issue Aug 27, 2020 · 1 comment

BjarkeCK commented Aug 27, 2020

Hey, first off, thanks for a fantastic library!

The library is really easy to use if you have a large dataset and want to train a network in one go.

But I'm building a DQN and I want to continuously improve a neural network from small batches of training data, with as little overhead as possible. Is that something that's easily possible in SharpLearning?

Right now, the only way I can see to achieve it is by doing something like this:

var net = new NeuralNet();
// ...

while (true)
{
    // A fresh learner has to be created on every iteration,
    // and Learn copies the data and hands back a new net.
    var learner = new NeuralNetLearner(net, new CopyTargetEncoder(), new SquareLoss());
    // ...
    net = learner.Learn(observations, targets);
}

However, there's a lot of overhead and data copying going on there. Are there better ways to go about it?

Thanks :)

Edit 1: Seems like my example doesn't work either, since the weights are randomized when learning begins.

Edit 2: My second attempt throws a NullReferenceException on net.Forward(input, output). (Although I imagine this is not a very good way to go about it either, and is probably wrong on many levels 😊)

var delta = Matrix<float>.Build.Dense(1, 1);
var input = Matrix<float>.Build.Dense(inputCount, 1);
var output = Matrix<float>.Build.Dense(1, 1);

while (true)
{
    PopulateInput(input);
    net.Forward(input, output);
    var expected = GetExpected();
    delta[0, 0] = (float)(expected - output[0, 0]);
    net.Backward(delta);
}
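
For illustration, here is a minimal sketch of the kind of incremental update this is aiming for, hand-rolled with plain MathNet.Numerics and no SharpLearning types at all. OnlineLinearModel and everything in it are made up for this sketch; PopulateInput and GetExpected are the same placeholder helpers as in the snippet above.

using MathNet.Numerics.LinearAlgebra;

// Hypothetical sketch only: a single linear layer updated one sample
// at a time with plain SGD. Nothing here comes from SharpLearning.
public class OnlineLinearModel
{
    readonly Matrix<float> m_weights;   // shape: [outputCount, inputCount]
    readonly float m_learningRate;

    public OnlineLinearModel(int inputCount, int outputCount, float learningRate = 0.01f)
    {
        // Random initialization happens once here, not on every update.
        m_weights = Matrix<float>.Build.Random(outputCount, inputCount) * 0.1f;
        m_learningRate = learningRate;
    }

    public Matrix<float> Forward(Matrix<float> input) => m_weights * input;

    // One incremental step: square-loss gradient for a single
    // (input, target) pair, applied to the weights in place.
    public void Update(Matrix<float> input, Matrix<float> target)
    {
        var delta = Forward(input) - target;               // [outputCount, 1]
        var gradient = delta * input.Transpose();          // [outputCount, inputCount]
        m_weights.Subtract(m_learningRate * gradient, m_weights); // in-place, no reallocation
    }
}

// Usage: continuously improve from single observations,
// without re-creating a learner or copying the dataset.
var model = new OnlineLinearModel(inputCount, outputCount: 1);
var input = Matrix<float>.Build.Dense(inputCount, 1);
var target = Matrix<float>.Build.Dense(1, 1);

while (true)
{
    PopulateInput(input);                  // placeholder helper from the snippet above
    target[0, 0] = (float)GetExpected();   // placeholder helper from the snippet above
    model.Update(input, target);
}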
mdabros (Owner) commented Oct 11, 2020

Hi @BjarkeCK,

Sadly, there is currently no easy way to do continued learning via the NeuralNet package. The training loop is hidden inside the NeuralNetLearner class, with dependencies on local methods that copy the next minibatch and, much worse, a dependency on calling initialize on all layers at the very beginning of the loop. I have been planning to refactor the NeuralNet project to make it possible to train using a separate minibatch source and an open loop, which would support more flexible use cases like the one you need. However, it is a bit more long term.
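
To make the idea concrete, here is a rough sketch of what such an open loop might eventually look like. Every type and method below is hypothetical and does not exist in SharpLearning today; the Forward(input, output) shape mirrors the signature from the snippet above.

// Hypothetical sketch only -- none of this API exists in SharpLearning yet.
// The point: layers are initialized once, and the caller owns the loop,
// feeding minibatches from any source (e.g. a DQN replay buffer).
var net = new NeuralNet();
// ... add layers ...
net.Initialize(inputShape, seed: 42);            // hypothetical one-time initialization

while (agentIsRunning)
{
    // Hypothetical minibatch source replacing the learner's internal copying.
    var (observations, targets) = minibatchSource.Next(batchSize: 32);

    net.Forward(observations, predictions);       // reuse preallocated buffers
    var lossGradient = loss.Gradient(targets, predictions); // hypothetical gradient API
    net.Backward(lossGradient);
    optimizer.UpdateParameters(net);              // hypothetical optimizer step
}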

Best regards
Mads
