Performance gap in Predator-Prey #9

Open
pengzhenghao opened this issue Mar 16, 2021 · 3 comments

@pengzhenghao

Hi there! Thanks for this excellent repo! The code is really nice and a great lifesaver for other researchers!

I am trying to reproduce the results in the paper "Deep Multi-Agent Reinforcement Learning for Decentralised Continuous Cooperative Control" and find that there is a performance gap between my results and the reported ones. I suspect this might be due to carelessness with the hyper-parameters on my part, so I am looking for help in this issue.

Apart from using the default comix.yaml and particle.yaml configs, I additionally introduced these parameters in the config:

# According to paper F.1
batch_size: 1024
gamma: 0.85
lr: 0.01
rnn_hidden_dim: 64
t_max: 2000000
test_interval: 2000
save_model: True
save_model_interval: 200000
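
As a back-of-envelope note on one of these values (my own check, not from the paper): gamma sets the effective credit-assignment horizon, roughly 1/(1 - gamma), so 0.85 keeps the horizon short while 0.99 stretches it to about 100 steps:

# Rough effective discount horizon, ~1/(1 - gamma): the number of steps
# over which future rewards still carry meaningful weight.
for gamma in (0.85, 0.99):
    print(gamma, round(1.0 / (1.0 - gamma), 1))  # 0.85 -> 6.7, 0.99 -> 100.0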

And this is the result of COMIX in the Continuous Predator-Prey environment with 8 repetitions (no difference in the config, just repeated runs):

[image: learning curves of the 8 COMIX runs]

For reference, this is the learning curve in the original paper:

[image: COMIX learning curve from the original paper]

In detail, I find that the final performance (episode return) of the 8 trials varies drastically:

244.5
2.0
248.0
207.25
1.0
271.25
274.0
163.5

so I guess there are some hyper-parameters I have overlooked, which leads to the failure of some trials.

Could anyone provide some suggestions on this issue? Thanks!!!

@pengzhenghao (Author)

More information: some trials only fail after training for a long time:

[image: learning curves showing some trials collapsing late in training]

@beipeng (Collaborator) commented Mar 19, 2021

Hi, thanks for raising the issue! The hyperparameters you are using for continuous predator-prey look very similar to what we used (assuming you are using batch_size_run=1). We previously ran 10 different seeds for this task and didn't see this performance degradation. But we did see a similar problem when we used gamma=0.99. We think this is probably due to the q-value overestimation bias in QMIX (which can be more severe due to the mixing network); it can be a problem in certain tasks and cause catastrophic performance degradation. So maybe the performance gap you are seeing here comes down to some random seeds. COVDN shouldn't have this problem (it tends to be quite stable). Maybe you can run COVDN and see if you get similar results to ours, to double-check that you are using the same hyperparameter settings. Hope that helps!
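
In case it helps, here is a minimal sketch of the kind of monotonic mixing QMIX uses (an illustration of the general idea, not our exact code): the state-conditioned mixing weights are forced to be non-negative, so any upward bias in the per-agent utilities can only be passed through or scaled up, never cancelled out.

# Minimal sketch of a QMIX-style monotonic mixing network. The hypernetworks
# take the global state and produce the mixing weights; torch.abs keeps the
# weights non-negative, so Q_tot is monotonic in each agent's utility -- which
# also means per-agent overestimation cannot be cancelled by the mixer.
import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks: state -> mixing weights and biases.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim),
                                      nn.ReLU(),
                                      nn.Linear(embed_dim, 1))

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        bs = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2
        return q_tot.view(bs, 1)

# Example: 3 agents, batch of 2.
mixer = MonotonicMixer(n_agents=3, state_dim=10)
q_tot = mixer(torch.randn(2, 3), torch.randn(2, 10))  # -> shape (2, 1)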

@pengzhenghao (Author)

Thanks @beipeng! I am using gamma = 0.85 and batch_size_run = 1.

COVDN indeed performs much more stably than COMIX, and the result perfectly matches the one in the paper.

May I ask how to properly repeat the experiments? I can't find a clear place to insert the random seed. In my experiments, I just ran the same script multiple times with the random seed set globally (like np.random.seed(xx), with xx = 0, 100, 200, ..., 700).
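
Concretely, my repetition loop looks roughly like this (a sketch; seed_everything is my own helper, not repo code, and I also seed random and torch in case the codebase draws from them):

# Sketch of the repetition loop described above: seed every RNG in play,
# not just numpy. seed_everything is a hypothetical helper.
import random
import numpy as np
import torch

def seed_everything(seed):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

for seed in range(0, 800, 100):  # xx = 0, 100, 200, ..., 700
    seed_everything(seed)
    # ... launch one full training run here ...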

Thanks a lot for your reply!
