
pvlv / boa: put adapting params into (Global?) state, save / load state with weights #304

Open
rcoreilly opened this issue Aug 29, 2023 · 0 comments


A next-level PVLV / boa step is to start adapting / learning key params such as:

  • Expected effort -- this can depend on the context and controls when to give up; the NE neuromodulator is widely thought to be involved in regulating this (implicated in ADHD, etc.). These are currently set to fixed values in the PVLV.Effort.Max* params.

  • Expected reward magnitude -- scaling of DA responses as a function of overall expected rewards is well documented (e.g., 2 drops of juice in the context of a 1-2 drop range produces maximal DA, but in the context of a 2-4 range produces a reduced response). This is a separable factor from the VSPatch prediction of the timing and magnitude of an individual reward -- it depends on the overall context (Niv et al. have studied this). These are currently set in the PVLV.USs.Gain* and VTA params.
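To make the second point concrete, here is a minimal sketch of how such a context-level reward expectation could be adapted and used to scale DA responses. The `RewNorm` type and its fields are hypothetical illustrations, not existing axon / PVLV API -- the actual adapting values would live in the params noted above.

```go
package main

import "fmt"

// RewNorm is a hypothetical holder for an adapting estimate of expected
// reward magnitude in the current context.
type RewNorm struct {
	ExpRew float64 // running estimate of expected reward magnitude
	Rate   float64 // adaptation rate for the running average
}

// Update moves the expected-reward estimate toward the received reward.
func (rn *RewNorm) Update(rew float64) {
	rn.ExpRew += rn.Rate * (rew - rn.ExpRew)
}

// Gain returns a DA scaling normalized by the expected magnitude, so the
// same reward drives a larger response in a lower-reward context.
func (rn *RewNorm) Gain(rew float64) float64 {
	if rn.ExpRew == 0 {
		return rew
	}
	return rew / rn.ExpRew
}

func main() {
	low := &RewNorm{ExpRew: 1.5, Rate: 0.05}  // context with a 1-2 reward range
	high := &RewNorm{ExpRew: 3.0, Rate: 0.05} // context with a 2-4 reward range
	// Same 2-drop reward, larger normalized DA response in the low context:
	fmt.Println(low.Gain(2), high.Gain(2))
	low.Update(2) // estimate drifts toward recent rewards
}
```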

Mechanically, we just need to put the params somewhere appropriate -- in Globals if they are NData-specific, or in a specific layer's params (as in the case of VTALayer) -- and then, critically, save and load these adapting values along with the weights file.
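The save / load step could look something like the following sketch, which bundles the adapting values into the same serialized payload as the weights so they always travel together. The `AdaptParams` struct, its fields, and the JSON layout are all assumptions for illustration; the real implementation would hook into axon's existing weights save / load path.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AdaptParams is a hypothetical container for the adapting values discussed
// above; in practice these would live in Globals or in layer-specific params
// such as those on VTALayer.
type AdaptParams struct {
	EffortMax float64 `json:"effortMax"` // adapted expected-effort ceiling
	USGain    float64 `json:"usGain"`    // adapted US / reward gain
}

// SaveWithWts marshals weights and adapting params into one payload, so the
// adapting state cannot get out of sync with the saved weights.
func SaveWithWts(wts map[string][]float64, ap *AdaptParams) ([]byte, error) {
	return json.Marshal(map[string]any{"weights": wts, "adapt": ap})
}

// LoadWithWts recovers both the weights and the adapting params.
func LoadWithWts(data []byte) (map[string][]float64, *AdaptParams, error) {
	var out struct {
		Weights map[string][]float64 `json:"weights"`
		Adapt   *AdaptParams         `json:"adapt"`
	}
	err := json.Unmarshal(data, &out)
	return out.Weights, out.Adapt, err
}

func main() {
	wts := map[string][]float64{"VTA": {0.1, 0.2}}
	ap := &AdaptParams{EffortMax: 10, USGain: 1.5}
	b, _ := SaveWithWts(wts, ap)
	_, loaded, _ := LoadWithWts(b)
	fmt.Println(loaded.EffortMax, loaded.USGain)
}
```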
