I saw that DQV uses samples collected with the behavior policy (the epsilon-greedy policy), not the current policy (the greedy policy). Why do you classify DQV as an on-policy method?
What makes DQV closer to an on-policy algorithm than an off-policy one is its temporal-difference (TD) target, which is computed by the state-value network. The goal of that network is to estimate the state-value function $V(s)$, which by definition depends on the policy the agent is following: the bootstrap term $V(s_{t+1})$ always evaluates the current policy $\pi$.
Note, however, that if a memory buffer is used, this on-policy vs. off-policy distinction becomes less clear-cut, since we then train on trajectories collected by many different policies. Still, DQV's TD error is mathematically closer to that of an on-policy algorithm than to that of an off-policy one.
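To make the contrast concrete, here is a minimal sketch (not the authors' implementation; the `v_net`/`q_net` callables are placeholders introduced purely for illustration) of how DQV's TD target bootstraps from the state-value network, compared with DQN's max-based target:

```python
import numpy as np

# Hedged sketch: `v_net` maps a state to V(s); `q_net` maps a state to a
# vector of Q(s, a) values. These stand in for the (target) networks.

def dqv_td_target(reward, next_state, done, v_net, gamma=0.99):
    # DQV: both the V-network and the Q-network regress towards
    # r_t + gamma * V(s_{t+1}); the bootstrap evaluates the policy that
    # V is estimating, which is what gives the update its on-policy flavour.
    return reward + gamma * (1.0 - done) * v_net(next_state)

def dqn_td_target(reward, next_state, done, q_net, gamma=0.99):
    # DQN (for contrast): the bootstrap is max_a Q(s_{t+1}, a), i.e. it
    # evaluates the greedy policy regardless of which behavior policy
    # collected the transition -- a genuinely off-policy target.
    return reward + gamma * (1.0 - done) * np.max(q_net(next_state))

# Tiny usage example with dummy value functions.
v_net = lambda s: 0.5                    # pretend V(s_{t+1}) = 0.5
q_net = lambda s: np.array([0.1, 0.7])   # pretend Q(s_{t+1}, ·)
print(dqv_td_target(1.0, None, 0.0, v_net))  # 1.0 + 0.99 * 0.5
print(dqn_td_target(1.0, None, 0.0, q_net))  # 1.0 + 0.99 * 0.7
```

The point of the contrast is simply that DQV's bootstrap never takes a max over actions, so its target stays tied to the value of the policy being estimated rather than to the greedy policy.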