Simplify Reinforcement Learning Agent Creation #3648
Comments
(This is my first open-source attempt to help; I apologise if I have broken any protocol, and I am open to suggestions for improvement.) I have tried to implement the template as `template <` and overloaded the `TD3(ReplayType& replayMethod);` constructor from `td3.hpp` in `td3_impl.hpp`; I have expanded upon the constructor `TD3<`.
In `training_config.hpp`, I have implemented the `GetDefaultConfig()` method. I have currently made these changes locally in VS Code, but I am not sure about syntax errors, since I don't know how to compile the entire mlpack library with my modifications. Is there a way I can send you the specific `.hpp` files where I have made the changes? Or should I open a pull request so that my differences with respect to the original code are visible?
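A `GetDefaultConfig()` helper along the lines described above could look like the sketch below. The `Config` struct, its field names, and the default values are illustrative assumptions, not mlpack's actual `TrainingConfig` API:

```cpp
#include <cstddef>

// Hypothetical, simplified stand-in for mlpack's TrainingConfig;
// the fields and default values here are illustrative assumptions.
struct Config
{
  std::size_t updateInterval = 1;  // how often to update the networks
  double stepSize = 0.001;         // learning rate
  double discount = 0.99;          // reward discount factor
};

// A GetDefaultConfig() in this spirit simply returns a Config
// populated with the default hyperparameters above.
inline Config GetDefaultConfig()
{
  return Config{};
}
```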
I'd suggest you build mlpack's main branch first; if you face any problems, feel free to open an issue. Once you can build mlpack, you should be able to follow the RL tutorials.
You should open a PR.
This could also be done with a wrapper function or class: a wrapper that creates a TD3 agent with default parameters. That would be safer than modifying the TD3 class itself, and simpler and more elegant.
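The wrapper-function idea could be sketched as below. The `ReplayBuffer` and `Agent` types, the `MakeDefaultAgent` name, and the default values are hypothetical stand-ins for mlpack's real `RandomReplay` and `TD3` types, used only to illustrate the pattern:

```cpp
#include <cstddef>

// Hypothetical stand-ins for mlpack's replay buffer and TD3 agent.
struct ReplayBuffer { std::size_t batchSize = 32; };

struct Agent
{
  Agent(ReplayBuffer& replay, double stepSize, double discount)
    : replay(replay), stepSize(stepSize), discount(discount) {}

  ReplayBuffer& replay;
  double stepSize;
  double discount;
};

// Wrapper (factory) function: builds an agent with sensible default
// hyperparameters, leaving the Agent class itself untouched.
inline Agent MakeDefaultAgent(ReplayBuffer& replay)
{
  return Agent(replay, /* stepSize */ 0.001, /* discount */ 0.99);
}
```

The advantage of the factory over a constructor overload is that defaults can be added, tuned, or deprecated without touching the agent class's interface.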
This issue has been automatically marked as stale because it has not had any recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions! 👍
What is the desired addition or change?
Simplify the creation of reinforcement learning agents in mlpack by providing default values for common parameters, including network architectures and learning rates.
Defining a TD3 agent

Current approach: manually configure all agent parameters (network architectures, learning rates, training configuration) and pass them to the constructor.

Proposed approach:

```cpp
TD3<Pendulum> agent(replayMethod);
```
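One way the one-argument form could be supported is a constructor overload that delegates to the fully parameterised constructor with default values. The class below is a simplified, hypothetical stand-in for mlpack's `TD3` (its members and defaults are assumptions), showing only the delegation pattern:

```cpp
#include <cstddef>

struct Replay { std::size_t batchSize = 32; };  // stand-in replay buffer

// Hypothetical simplified agent, not mlpack's actual TD3 class.
template<typename EnvironmentType>
class SimpleTD3
{
 public:
  // Full constructor: the caller supplies every hyperparameter.
  SimpleTD3(Replay& replay, double stepSize, double discount)
    : replay(replay), stepSize(stepSize), discount(discount) {}

  // Convenience overload: delegates to the full constructor with
  // defaults, enabling `SimpleTD3<Pendulum> agent(replayMethod);`.
  explicit SimpleTD3(Replay& replay)
    : SimpleTD3(replay, /* stepSize */ 0.001, /* discount */ 0.99) {}

  double StepSize() const { return stepSize; }
  double Discount() const { return discount; }

 private:
  Replay& replay;
  double stepSize;
  double discount;
};
```

Because the short overload delegates rather than duplicating the member-initialiser list, the defaults live in exactly one place.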
What is the motivation for this feature?
The current approach of manually configuring all agent parameters requires extra steps from users who want to quickly set up a basic reinforcement learning agent. Default constructors would simplify agent creation.
If applicable, describe how this feature would be implemented.