
Unable to use RL algorithms with continuous action space #49

AizazSharif opened this issue Aug 23, 2021 · 5 comments

Comments

@AizazSharif

Hi @praveen-palanisamy

I have been working on macad-gym successfully over the past few months using PPO and many other algorithms. Now I am trying to use DDPG via RLlib, which requires a continuous action space.

I have set "discrete_actions": False within the environment config, but it's still an issue since the policy function is passing Discrete(9), and I do not know the alternative for a continuous action space.
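Roughly, this is the change I made (a minimal sketch; the other config keys such as scenarios and actors are unchanged and omitted here):

```python
# Minimal sketch of the relevant part of the environment config;
# the remaining keys are left as in the macad-gym examples.
env_config = {
    "env": {
        "discrete_actions": False,  # request continuous (steering, throttle) actions
    },
}
```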
[Screenshots attached: Screenshot from 2021-08-23 19-44-09, Screenshot from 2021-08-23 19-44-23]

I also followed the guide mentioned here, but now it's giving me the following error.
error.txt

Any help in this regard would be appreciated.
Thanks.

@praveen-palanisamy
Owner

Hi @AizazSharif ,
Good to hear about your continued interest and experiments on top of macad-gym.
You did the right thing w.r.t. macad-gym, i.e., setting "discrete_actions": False to make the environment use a continuous action space. Now, w.r.t. the agent's policy, the policy network needs to generate continuous-valued actions of the appropriate shape.
For example, you would create a PPO/DDPG policy whose policy network output space is ~ Box(2) instead of Discrete(9).
Here, Box(2) refers to two continuous-valued outputs (one for steering, another for throttle).
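In gym.spaces terms, a minimal sketch of the two cases (the bounds below are illustrative assumptions; check your env's actual action_space for the real ranges):

```python
import numpy as np
from gym.spaces import Box, Discrete

# Discrete case: 9 predefined (steering, throttle) combinations.
discrete_action_space = Discrete(9)

# Continuous case: 2 real-valued outputs, e.g. steering in [-1, 1] and
# throttle in [0, 1]. These bounds are illustrative, not the env's exact ones.
continuous_action_space = Box(
    low=np.array([-1.0, 0.0], dtype=np.float32),
    high=np.array([1.0, 1.0], dtype=np.float32),
    dtype=np.float32,
)
```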

From the error logs, it looks like the DDPG critic network's concat operation is failing to concatenate tensors of different rank: ValueError: Shape must be rank 4 but is rank 2 for 'car1/critic/concat' (op: 'ConcatV2') with input shapes: [?,84,84,3], [?,8]
This operation is defined in RLlib's DDPG implementation (ddpg_policy.py), which you need to configure to generate actions of the appropriate shape and range (using the example above).
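A rough sketch of a per-agent policy spec in RLlib's multiagent config so the DDPG actor and critic get built for the continuous space; the agent id "car1" and the 84x84x3 observation shape are taken from your error log, and the exact config keys can vary across Ray versions:

```python
import numpy as np
from gym.spaces import Box

# Spaces assumed from the error log above (84x84x3 image observations).
obs_space = Box(low=0, high=255, shape=(84, 84, 3), dtype=np.uint8)
act_space = Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)  # (steering, throttle)

config = {
    "multiagent": {
        # (policy_cls, obs_space, act_space, extra_config); None -> trainer's default policy
        "policies": {
            "car1": (None, obs_space, act_space, {}),
        },
        "policy_mapping_fn": lambda agent_id: "car1",
    },
    # ... rest of the DDPG trainer config ...
}
```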
Hope that helps.

@AizazSharif
Author

Thanks for the reply @praveen-palanisamy. I will look into it and let you know.

@AizazSharif
Author

AizazSharif commented Aug 30, 2021

I also wanted to ask: is it possible to have one agent with discrete actions and another with continuous actions in the same driving scenario? @praveen-palanisamy
As an example, one car is trained using PPO and another using DDPG.

@praveen-palanisamy
Owner

Hi @AizazSharif ,
Missed your new question until now. Yes, you can use different algorithms per agent/car. The RLlib example agents in the MACAD-Agents repository are a good starting point for a multi-agent autonomous driving setting.
You can refer to this sample for a generic PPO/DQN example using RLlib.
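A rough sketch of the pattern, modeled on RLlib's multi_agent_two_trainers example: two trainers share the same multi-agent env and policy set, and each one trains only its own policy. The agent ids, env name, spaces, and policy-class import paths below are assumptions and depend on your Ray version:

```python
import numpy as np
from gym.spaces import Box, Discrete
from ray.rllib.agents.ppo import PPOTrainer
from ray.rllib.agents.ddpg import DDPGTrainer
from ray.rllib.agents.ppo.ppo_tf_policy import PPOTFPolicy      # import path varies by Ray version
from ray.rllib.agents.ddpg.ddpg_tf_policy import DDPGTFPolicy   # import path varies by Ray version

obs_space = Box(0, 255, shape=(84, 84, 3), dtype=np.uint8)      # assumed image observations

# One discrete policy (trained by PPO) and one continuous policy (trained by DDPG).
policies = {
    "car_ppo": (PPOTFPolicy, obs_space, Discrete(9), {}),
    "car_ddpg": (DDPGTFPolicy, obs_space, Box(-1.0, 1.0, shape=(2,), dtype=np.float32), {}),
}

def policy_mapping_fn(agent_id):
    # Assumed agent ids; map each car to the policy it should use.
    return "car_ppo" if agent_id == "car1" else "car_ddpg"

# Each trainer sees both policies but only updates its own via "policies_to_train".
ppo = PPOTrainer(env="my_macad_env", config={
    "multiagent": {"policies": policies,
                   "policy_mapping_fn": policy_mapping_fn,
                   "policies_to_train": ["car_ppo"]},
})
ddpg = DDPGTrainer(env="my_macad_env", config={
    "multiagent": {"policies": policies,
                   "policy_mapping_fn": policy_mapping_fn,
                   "policies_to_train": ["car_ddpg"]},
})

for _ in range(10):
    print(ppo.train()["episode_reward_mean"])
    print(ddpg.train()["episode_reward_mean"])
```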

@AizazSharif
Author

Hi @praveen-palanisamy
Thanks for the reply. I have looked at these examples, but all the agents in an environment share the same type of action space. I couldn't find any example implementation where both discrete and continuous agents run in a multi-agent setting.
