Unable to use RL algorithms with continuous action space #49
Comments
Hi @AizazSharif , From the error logs, it looks like the DDPG critic network's concat operation is failing because it is trying to concatenate tensors of different rank.
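A minimal sketch of the failure mode described above, using NumPy and assumed shapes (a batch of rank-2 observations and a rank-1 batch of scalar actions): concatenation fails when the ranks differ, and succeeds once the actions are reshaped to rank 2.

```python
import numpy as np

# Assumed shapes for illustration: the critic concatenates the
# observation batch with the action batch along the feature axis.
obs_batch = np.zeros((32, 84))   # rank 2: (batch, obs_dim)
act_batch = np.zeros((32,))      # rank 1: (batch,) -- scalar actions

try:
    np.concatenate([obs_batch, act_batch], axis=1)
except ValueError as err:
    # Fails: input arrays have different numbers of dimensions (2 vs 1).
    print("concat failed:", err)

# Reshaping the actions to rank 2 makes the concat well-defined.
act_batch_2d = act_batch.reshape(-1, 1)              # (32, 1)
joint = np.concatenate([obs_batch, act_batch_2d], axis=1)
print(joint.shape)                                   # (32, 85)
```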
Thanks for the reply @praveen-palanisamy. I will look into it and let you know.
I also wanted to ask: is it possible to have one agent with discrete actions and another with continuous actions in the same driving scenario? @praveen-palanisamy
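One common way to do this in RLlib's multi-agent API is to register one policy per action-space type and map each agent id to the right policy. The sketch below uses hypothetical policy and agent ids ("car_discrete", "car_continuous", "car1") and an assumed image observation space; it only shows the shape of the config, not a runnable trainer.

```python
import numpy as np
from gym.spaces import Box, Discrete

# Assumed observation space shared by both agents (84x84 RGB frames).
obs_space = Box(low=0, high=255, shape=(84, 84, 3), dtype=np.uint8)

# RLlib-style multi-agent "policies" dict:
# {policy_id: (policy_cls_or_None, obs_space, action_space, config)}.
# One policy has a discrete action space, the other a continuous one.
policies = {
    "car_discrete": (None, obs_space, Discrete(9), {}),
    "car_continuous": (None, obs_space, Box(-1.0, 1.0, shape=(2,)), {}),
}

def policy_mapping_fn(agent_id):
    # Hypothetical agent-id naming: route "car1" to the discrete policy,
    # every other agent to the continuous one.
    return "car_discrete" if agent_id == "car1" else "car_continuous"
```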
Hi @AizazSharif ,
Hi @praveen-palanisamy
I have been working with macad-gym successfully over the past few months using PPO and several other algorithms. Now I am trying to use DDPG from RLlib, which requires a continuous action space.
I have set the boolean "discrete_actions": False in the environment config, but it's still an issue, since the policy function is passing Discrete(9) and I do not know the alternative for a continuous action space.
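For reference, the continuous counterpart of a gym Discrete(9) space is a Box space. The sketch below assumes a 2-D driving action (steer, throttle) bounded in [-1, 1]; the actual dimensions and bounds macad-gym expects may differ.

```python
import numpy as np
from gym.spaces import Box, Discrete

# Discrete(9): nine indexed actions (e.g. fixed steer/throttle combos).
discrete_space = Discrete(9)

# Continuous analogue: a bounded real-valued vector. Assumed layout:
# (steer, throttle), each in [-1, 1].
continuous_space = Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)

print(discrete_space.sample())    # an int in [0, 9)
print(continuous_space.sample())  # a float32 vector of shape (2,)
```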
I also followed the guide mentioned here, but now it's giving me the following error.
error.txt
Any help in this regard would be appreciated.
Thanks.