
Question - Is it possible to apply the A3C RL method to this environment? #5

Open · aurotripathy opened this issue Mar 19, 2019 · 2 comments


aurotripathy commented Mar 19, 2019

This is quite nice!
There are several A3C PyTorch implementations for Atari.
Is it possible to do the same with this Truck environment?
Thank you.

aleju (Owner) commented Mar 22, 2019

A3C usually relies on multiple workers playing in parallel. This implementation expects the game to run in a GUI and records its state via screenshots, so running workers in parallel would probably require multiple machines, each running the model, plus all the communication code between them. The implementation is not currently geared towards such a scenario, so supporting A3C would require a significant rewrite of large parts of the code.
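For reference, a minimal sketch of the multi-worker pattern A3C depends on, using torch.multiprocessing in the hogwild style. The TruckEnv name is hypothetical; it stands in for a per-worker environment instance that this repo does not currently provide:

import torch
import torch.nn as nn
import torch.multiprocessing as mp

class Policy(nn.Module):
    def __init__(self, n_inputs=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_inputs, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, x):
        return self.net(x)

def worker(rank, shared_model):
    # Each worker would need its own environment instance here, e.g.
    # env = TruckEnv()  # hypothetical wrapper; one game GUI per worker
    torch.manual_seed(rank)
    local_model = Policy()
    local_model.load_state_dict(shared_model.state_dict())
    # ... roll out episodes with local_model, compute gradients, and
    # apply them asynchronously to shared_model's parameters ...

if __name__ == "__main__":
    shared_model = Policy()
    shared_model.share_memory()  # all workers update this single copy
    processes = [mp.Process(target=worker, args=(rank, shared_model))
                 for rank in range(4)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()

This is exactly the part that is hard with a screenshot-driven environment: each of those worker processes would need its own running copy of the game.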

aurotripathy (Author) commented
Thank you.
If the environment were Gym-like, it would be easy to make it work with any RL algorithm: create the env, render the state, step it with an action, and so on.

import gym

env = gym.make('CartPole-v0')
obs = env.reset()
for _ in range(1000):
    env.render()
    obs, reward, done, info = env.step(env.action_space.sample())  # random action
    if done:
        obs = env.reset()  # start a new episode once this one ends
env.close()
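
To illustrate, a hypothetical Gym-style wrapper around this screenshot-driven environment might look roughly like the following. The class name, the action/observation spaces, and the screenshot and key-press hooks are all assumptions, not part of the actual repo:

import numpy as np
import gym
from gym import spaces

class TruckEnv(gym.Env):
    """Hypothetical Gym-style wrapper around the screenshot-based game."""

    def __init__(self):
        # assumed: a few discrete key combinations, 84x84 RGB screenshots
        self.action_space = spaces.Discrete(9)
        self.observation_space = spaces.Box(
            low=0, high=255, shape=(84, 84, 3), dtype=np.uint8)

    def reset(self):
        # a real wrapper would restart the game here
        return self._get_obs()

    def step(self, action):
        # a real wrapper would send the key press for `action` to the
        # game window, then read back reward and termination info
        obs = self._get_obs()
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info

    def _get_obs(self):
        # placeholder: a real wrapper would capture a screenshot here
        return np.zeros(self.observation_space.shape, dtype=np.uint8)

With such a wrapper in place, the random-action loop above would work unchanged, and off-the-shelf RL implementations could be plugged in.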
