
total_loss = actor_loss + 0.5*critic_loss? Why do both the actor and critic networks in PPO update with total_loss? #80

Open
CeibaSheep opened this issue Jan 5, 2022 · 2 comments
@CeibaSheep

In the PPO code's agent.py file,

why compute total_loss = actor_loss + 0.5*critic_loss? I haven't seen this analyzed in the PPO tutorial, nor did I find this operation in the original PPO paper.

Also, why do both the actor and critic networks use the gradient of total_loss? Is this reasonable?

@zichunxx

zichunxx commented May 7, 2022


Hi, have you found a theoretical justification for this? I have the same confusion.

@ecsfu

ecsfu commented Feb 8, 2024


My understanding is that the gradients are still effectively computed separately: when differentiating total_loss with respect to one network's parameters, the terms that don't depend on those parameters are constants whose derivative is zero, so summing the losses gives the same update as computing the two losses separately.
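This can be checked numerically: since critic_loss does not depend on the actor's parameters, it contributes nothing to the actor's gradient. A minimal sketch with toy scalar "networks" and finite-difference gradients (all names and functions hypothetical, standing in for the real networks and autograd):

```python
# Toy scalar stand-ins for the actor/critic objectives (hypothetical).
def actor_loss(theta_actor):
    return (theta_actor - 2.0) ** 2      # toy surrogate objective

def critic_loss(theta_critic):
    return (theta_critic - 5.0) ** 2     # toy value-function error

def total_loss(theta_actor, theta_critic):
    return actor_loss(theta_actor) + 0.5 * critic_loss(theta_critic)

def grad(f, x, eps=1e-6):
    # Central finite-difference approximation of df/dx.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

theta_a, theta_c = 0.7, 1.3

# Gradient of the combined loss w.r.t. the actor parameter ...
g_total_wrt_actor = grad(lambda a: total_loss(a, theta_c), theta_a)
# ... equals the gradient of actor_loss alone, because
# 0.5*critic_loss(theta_c) is a constant w.r.t. theta_a.
g_actor_alone = grad(actor_loss, theta_a)

assert abs(g_total_wrt_actor - g_actor_alone) < 1e-4
```

So as long as the actor and critic share no parameters, a single `total_loss.backward()` produces exactly the same per-network gradients as two separate backward passes; the 0.5 factor only scales the critic's update.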
