Thank you so much for putting tremendous effort into this GitHub repo.
I understand the work is still in progress, but would you mind helping me find the commit that was used to produce the RL results in your paper, here: https://wandb.ai/sequoia/crl_study?workspace=user- ?
Or pointing me to any other commit with functioning examples of continual learning with reinforcement learning?
Thank you in advance.
Hello there @salmashukry! Thanks for taking a look at the paper and repo, sorry for not getting back to you sooner.
Is your goal to reproduce some of these results as exactly as possible? Or just to run the same kind of Continual Reinforcement Learning experiment?
In the first case, to be honest I'm not 100% sure which commit would be best, but my best guess is probably this one. Sadly, I only picked up the good habit of tagging commits before launching runs after those runs had already been launched. The command-line API has changed a little since then, but the code hasn't changed much, so the experiments should still be entirely runnable from the master branch; the commands would just need to be tweaked a little. Let me know if that's something you want to do.
In the second case, then you should be able to just use the master branch to run a CRL experiment. Did you run into some trouble when using the master branch?
You can run experiments two ways: either in code, or from the command-line.
$ sequoia run <setting_name> (... setting options ...) <method_name>
where <setting_name> here could be any of the CRL settings (see below for how to get more info)
For example, if you want to run a simple experiment, then a command like this would be useful:
$ sequoia run task_incremental_rl --dataset CartPole-v0 --nb_tasks 5 sb3.dqn
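Conceptually, a run like this trains on one task at a time and then evaluates on every task, so you can measure transfer and forgetting. Here is a framework-agnostic sketch of that loop — every name in it is invented for illustration, this is NOT Sequoia's actual API:

```python
# Illustrative sketch of what a task-incremental (continual) RL experiment does.
# All functions here are stand-ins, not Sequoia code.

def make_task(task_id):
    """Stand-in for one environment variant (e.g. one CartPole task)."""
    return {"id": task_id}

def train(method_state, task):
    """Stand-in for training the method on a single task."""
    method_state.append(task["id"])
    return method_state

def evaluate(method_state, task):
    """Stand-in evaluation: perfect score on seen tasks, zero otherwise."""
    return 1.0 if task["id"] in method_state else 0.0

def run_experiment(nb_tasks=5):
    tasks = [make_task(i) for i in range(nb_tasks)]
    method_state = []
    # After training on each task, evaluate on *all* tasks. Row i of the
    # result is the per-task performance after the i-th training phase.
    results = []
    for task in tasks:
        method_state = train(method_state, task)
        results.append([evaluate(method_state, t) for t in tasks])
    return results
```

The lower-triangular pattern this produces (non-zero only for tasks seen so far) is the kind of transfer matrix a CRL setting reports after each training phase.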
For a more involved experiment like the ones in that CRL study, taking this run as an example, you can get the same kind of experiment going now using this:
To see the methods and configuration options available for a setting (for example task_incremental_rl):
$ sequoia run task_incremental_rl --help
To see the hyper-parameters and configuration options for a given method applied on a given setting (for example sb3.dqn):
$ sequoia run task_incremental_rl sb3.dqn --help
This setup isn't really ideal. There's also a sequoia info command that is meant to give you information about a specific setting or method, but it's not great currently. I'll improve it as soon as I can.