DQN with Doom - Agent Performance #82

Open
HWerneck opened this issue Aug 12, 2020 · 0 comments

HWerneck commented Aug 12, 2020

I was testing the code and skipped training so I could check the agent's behaviour with random actions.
I modified the part of the original code where the agent plays after training (the section that gives the trained model 5 attempts) so that it runs without a trained model.
I also replaced the line that selects the action from the network with "action = random.choice(possible_actions)", so the agent picks one of the possible actions at random.

The problem is that the agent never receives any reward other than -1, even though it chooses all three possible actions at random. I checked the code and realised it was supposed to give rewards as follows:
• +101 for a successful hit;
• -5 for a missed shot;
• -1 for every action taken.
But these rewards do not appear to be defined anywhere: not in the code itself, nor in the .cfg or .wad files.
Either that or I missed something.
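
For reference, this is roughly how I am checking it (a minimal sketch, assuming the standard VizDoom Python API and that the tutorial uses the basic.cfg / basic.wad scenario files; adjust the paths if yours differ). It runs one episode with random actions and prints the reward returned for each step:

```python
# Minimal reward check, assuming the standard VizDoom Python API and the
# tutorial's basic.cfg / basic.wad scenario files (paths are placeholders).
import random
import vizdoom as vzd

game = vzd.DoomGame()
game.load_config("basic.cfg")              # scenario config (path is an assumption)
game.set_doom_scenario_path("basic.wad")   # scenario map (path is an assumption)
game.set_window_visible(True)
game.init()

# One-hot encodings for the three buttons defined in the basic scenario
# (move left, move right, attack).
possible_actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

game.new_episode()
while not game.is_episode_finished():
    action = random.choice(possible_actions)
    reward = game.make_action(action)      # reward returned for this single action
    print(action, reward)
print("Total reward:", game.get_total_reward())
game.close()
```

If I understand the scenario setup correctly, the constant -1 would come from a living_reward setting in the .cfg, while the hit/miss rewards would be scripted inside the .wad itself (as an ACS map script), which is why they do not appear as explicit Python code, but I may be wrong about that.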

On top of that, the game is rendered on-screen, but the window never updates according to the actions the agent is choosing.
I'm about to start checking the differences between the code in the tutorial and the basic.py example that ships with VizDoom. Does anyone know what's going on there?
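
In case it is relevant to the rendering problem, this is how I currently understand the window settings (again just a sketch against the standard VizDoom Python API; the mode and the sleep value are guesses for illustration):

```python
# Rendering sketch: make the window visible and slow the loop down so the
# effect of each random action is actually observable on screen.
import random
import time
import vizdoom as vzd

game = vzd.DoomGame()
game.load_config("basic.cfg")                                 # path is an assumption
game.set_screen_resolution(vzd.ScreenResolution.RES_640X480)
game.set_window_visible(True)
game.set_mode(vzd.Mode.PLAYER)   # synchronous: the engine advances only on make_action
game.init()

possible_actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

game.new_episode()
while not game.is_episode_finished():
    game.make_action(random.choice(possible_actions))
    time.sleep(0.05)             # slow down so the window visibly updates
game.close()
```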

@HWerneck HWerneck changed the title DQN with Doom - DQN with Doom - SyntaxError Aug 12, 2020
@HWerneck HWerneck changed the title DQN with Doom - SyntaxError DQN with Doom - Agent Performance Aug 14, 2020