Evaluate results question #137

Open
AlRodA92 opened this issue Apr 22, 2022 · 0 comments
AlRodA92 commented Apr 22, 2022

Hi,

I have a question about evaluating a trained model. I run the evaluation command as described:

`python3 src/main.py --config=qmix --env-config=sc2 with env_args.map_name=Multi_task_6m1M_vs_12m1M checkpoint_path=results/models/qmix_best/ save_replay=True test_nepisode=5 evaluate=True`

So I run the model in evaluation mode for 5 episodes, but the results for return_mean and the other metrics contain only a single value:

(screenshot: evaluation output showing one value per metric)

I have tried some modifications to the config, but I get the same result.

What I am trying to obtain is as many results as there are episodes, that is, the return and the other metrics for each individual episode. Something along the lines of the sketch below is what I am after.
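My understanding is that the episode runner collects per-episode returns into a list and only logs their mean/std once all test_nepisode episodes have finished, which would explain the single value. A minimal sketch of the kind of change I mean, assuming the standard PyMARL-style `src/runners/episode_runner.py` with a `_log` method and a `Logger.log_stat` helper (names may differ in this repo):

```python
# Sketch only, not the actual file: a possible replacement for the _log method
# in src/runners/episode_runner.py. It assumes the runner already has
# self.logger (with log_stat) and self.t_env, as in upstream PyMARL.
import numpy as np

def _log(self, returns, stats, prefix):
    # Hypothetical addition: write each episode's return as its own stat,
    # so that 5 test episodes produce 5 logged values instead of one mean.
    for idx, ep_return in enumerate(returns):
        self.logger.log_stat(prefix + "return_ep{}".format(idx), ep_return, self.t_env)

    # Existing aggregated logging: a single mean/std over all test episodes,
    # which is why only one value currently shows up per metric.
    self.logger.log_stat(prefix + "return_mean", np.mean(returns), self.t_env)
    self.logger.log_stat(prefix + "return_std", np.std(returns), self.t_env)
    returns.clear()
```

Is something like this the intended way to get per-episode metrics, or is there already a config option for it?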

Thanks!
