factor out agents? #25

Open
KiaraGrouwstra opened this issue Jun 5, 2018 · 1 comment

Comments

@KiaraGrouwstra

Looking at a few RL libs, I noticed some had their agents as the main program, while e.g. the DAT257x notebooks expressed agents as classes, implementing methods both to pick an action and to learn from the environment feedback.

It seemed to me like agents are tightly coupled with the main program here as well, and I was wondering whether some Haskell equivalent of that Python class approach would be nice.
I'm still new-ish to Haskell, but I had been thinking in the direction of a type class, with a state monad covering the (agent-specific) state and act/learn methods.
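
Something roughly like this sketch is what I had in mind (purely illustrative; the `Agent` class and the `act`/`learn` names are made up here, not anything that exists in this repo):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances #-}

import Control.Monad.State (StateT, runStateT, gets, put)

-- An agent is anything that can pick an action from an observation and
-- update its own (agent-specific) state from environment feedback.
-- 'm' is the agent's monad (e.g. a StateT layer over IO), 'o' the
-- observation type, 'a' the action type, 'r' the reward type.
class Monad m => Agent m o a r | m -> o a r where
  act   :: o -> m a                  -- choose an action for an observation
  learn :: o -> a -> r -> o -> m ()  -- update from (s, a, r, s') feedback

-- Toy instance: an "agent" whose only state is a step counter, kept in
-- StateT over IO, and which always picks action 0.
newtype CounterState = CounterState { steps :: Int }

instance Agent (StateT CounterState IO) Int Int Double where
  act _obs = do
    n <- gets steps
    put (CounterState (n + 1))       -- agent state lives in the state monad
    pure 0                           -- a real agent would consult its policy
  learn _s _a _r _s' = pure ()       -- no learning in this toy instance

main :: IO ()
main = do
  (action, s) <- runStateT (act (42 :: Int)) (CounterState 0)
  print (action, steps s)
```

A real instance would of course keep Q-values or policy parameters in the state instead of a counter; this is just to show the shape of the abstraction.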

Mostly just throwing this out there to get a sense of what the considerations around this have been here. :)

@stites
Contributor

stites commented Jun 5, 2018

That's probably the right way forward (and was starting to be the direction of the algorithms repo).

I was a bit afraid of over-abstracting before introducing function approximators, so I think I left this project thinking that what was really needed were more raw, unabstracted examples in the reinforce-zoo. The idea being that, given enough examples, a more natural abstraction should fall out on its own.
