
Why is DQV on-policy? #2

Open
deligentfool opened this issue Jan 13, 2022 · 2 comments

Comments

@deligentfool

I saw that DQV uses samples collected with the behavior policy (the epsilon-greedy policy), not the current policy (the greedy policy). Why do you classify DQV as an on-policy method?

@paintception
Owner

Thanks for your question!

What makes DQV closer to an on-policy algorithm than an off-policy one is its temporal-difference (TD) target, which is computed by the state-value network. That network's goal is to estimate the state-value function V(s), which by definition depends on the policy being followed: V(s_{t+1}) is always an estimate under the current policy $\pi$.

Note, however, that when a memory buffer is used, the on-policy vs. off-policy distinction becomes less clear, since the stored trajectories were collected by many different policies. Still, DQV's TD error is mathematically closer to that of an on-policy algorithm than to that of an off-policy one.
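
For concreteness, here is a minimal sketch (plain NumPy, with illustrative function and variable names that are not taken from this repository) contrasting the two bootstrap targets: a DQV-style target bootstraps from the state-value estimate V(s_{t+1}), which is tied to the policy that produced it, whereas a DQN-style target bootstraps from max_a Q(s_{t+1}, a), i.e. the greedy policy, regardless of how the transitions were collected:

```python
import numpy as np

def dqv_target(r, v_next, done, gamma=0.99):
    # DQV-style TD target: bootstrap from the state-value estimate V(s_{t+1}),
    # which approximates V^pi and is therefore tied to the policy being followed.
    return r + gamma * (1.0 - done) * v_next

def dqn_target(r, q_next, done, gamma=0.99):
    # DQN-style TD target: bootstrap from max_a Q(s_{t+1}, a), i.e. the greedy
    # policy, independent of the behavior policy that collected the data.
    return r + gamma * (1.0 - done) * np.max(q_next, axis=-1)

# Toy batch of two transitions (values are made up for illustration):
r = np.array([1.0, 0.0])
done = np.array([0.0, 1.0])
v_next = np.array([2.2, 3.0])                # V(s_{t+1}) from the value network
q_next = np.array([[2.0, 2.5], [3.0, 1.0]])  # Q(s_{t+1}, a) from the Q-network

print(dqv_target(r, v_next, done))  # [3.178 0.   ]
print(dqn_target(r, q_next, done))  # [3.475 0.   ]
```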

Hope this helps :-)

@deligentfool
Author

I get it now. Thank you very much for your reply and the interesting work!
