
Can't find psi-run-per-demand and psi-step-per-demand #3032

Open
CICS-Oleg opened this issue Mar 30, 2018 · 10 comments

@CICS-Oleg

I ran into a problem while experimenting with OpenPsi (and its broken examples in particular): psi-step-per-demand and psi-run-per-demand are reported as "unbound variables". Are there other aliases/functions/variables for them?

@ngeiswei
Member

@CICS-Oleg hopefully @amebel can look into this. Generally speaking, it is good practice to explain in detail how to reproduce the problem (such as which examples in particular), though this might already be enough information for @amebel in this instance; I can't tell.

@linas
Member

linas commented Mar 31, 2018

I believe Amen's answer is that the code has recently (last month) been updated, while the examples still use old interfaces.

Oleg: one important thing to understand about OpenPsi: it is an ongoing experiment, and that experiment has shown that it should be refactored into two distinct parts. One part is a generic rule-selection and planning subsystem that solves the problem "if I want to accomplish some task, what steps or rules carry me towards achieving that goal?" The second subsystem is a human psychological model: "If I feel happy, what goals should I have?" Right now, these two subsystems are tangled with one another.

I tried untangling them. Others have tried also. It remains unfinished work. A key issue with the rule-selection mechanism is that it is currently hard/impossible to select rule sequences that have variables in them. That is, if the plan (for movement, for action, for speaking) has one or more variables, which need to be connected together in going from one rule to the next, then OpenPsi cannot yet do this. I think that maybe Amen is working to fix this, but this is a "hard problem" that needs the correct level of abstraction to work right.
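To make the variable-chaining problem concrete, here is a toy sketch in Python. This is not OpenPsi code; the rule format, fact representation, and function names are all invented for illustration. The point is that chaining two rules requires unifying the variable in the first rule's conclusion with the variable in the second rule's premise, so the binding is carried along the whole sequence:

```python
# Toy illustration (invented, not the OpenPsi API): chaining two rules
# whose pre/post-conditions share a variable, via simple unification.

def is_var(term):
    """Variables are strings starting with '?'."""
    return isinstance(term, str) and term.startswith("?")

def unify(pattern, fact, bindings):
    """Unify a pattern tuple against a ground fact, extending bindings.
    Returns the extended bindings, or None on mismatch."""
    if len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if p in bindings and bindings[p] != f:
                return None
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def substitute(pattern, bindings):
    """Replace variables in a pattern with their bound values."""
    return tuple(bindings.get(t, t) for t in pattern)

# Two movement rules that must share the variable ?obj:
#   rule 1: if (visible ?obj) then (reachable ?obj)
#   rule 2: if (reachable ?obj) then (grasped ?obj)
rules = [
    (("visible", "?obj"), ("reachable", "?obj")),
    (("reachable", "?obj"), ("grasped", "?obj")),
]

def forward_chain(facts, rules):
    """Apply rules to a fixpoint, propagating variable bindings."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, post in rules:
            for fact in list(facts):
                b = unify(pre, fact, {})
                if b is not None:
                    new = substitute(post, b)
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts

facts = forward_chain({("visible", "cup")}, rules)
print(("grasped", "cup") in facts)  # True: ?obj carried across both rules
```

The hard part that the toy glosses over is doing this selection and binding propagation at the right level of abstraction inside the AtomSpace, rather than over Python tuples.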

Another difficult area is in the handling of "demands", where one might want to accomplish several tasks at once. Some demands can be easily factored into distinct parts (stand on one leg and rub your belly) while others cannot be (raise your hand and scratch your knee).
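The factoring question can be phrased concretely: two demands can be pursued in parallel only if the resources (actuators) they need are disjoint. A minimal sketch, with invented demand names and resource sets (not OpenPsi code):

```python
# Toy sketch (invented, not the OpenPsi API): demands factor into
# independent parts only when their actuator sets do not overlap.

demands = {
    "stand-on-one-leg": {"legs"},
    "rub-belly": {"left-arm"},
    "raise-hand": {"right-arm"},
    "scratch-knee": {"right-arm", "legs"},
}

def can_factor(d1, d2):
    """Two demands factor cleanly when they share no actuators."""
    return demands[d1].isdisjoint(demands[d2])

print(can_factor("stand-on-one-leg", "rub-belly"))  # True: independent
print(can_factor("raise-hand", "scratch-knee"))     # False: both need right-arm
```

In practice the conflict test is harder than set disjointness, since demands can interact through dynamics (balance, timing) rather than through a shared actuator.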

Finally, as a model of human psychological state, OpenPsi is very naive.

So: the combination of these three problems means that Amen and others will be continuing to change the code and the APIs. We also need a hero who can understand these meta-issues and make the code actually address them. I am very interested in using the task-planning and task-accomplishment subsystem in a variety of places (robot movement, speech planning, planning for reasoning/deduction).

@WhiteMt

WhiteMt commented Mar 31, 2018 via email

@ngeiswei
Member

Linas said

A key issue with the rule-selection mechanism is that it is currently hard/impossible to select rule sequences that have variables in them. That is, if the plan (for movement, for action, for speaking) has one or more variables, which need to be connected together in going from one rule to the next, then OpenPsi cannot yet do this.

The URE could handle that.

@linas
Member

linas commented Mar 31, 2018

The URE could handle that.

Yeah, I guess it could. OpenPsi does not need to chain rules, merely to select a class of them. A year or two ago, I was trying to string together a fairly small number of chat rules that coordinated chat with motion, and at the time, URE seemed like overkill or too hard to use -- since I only had a small handful of rules.

I am still very very interested in expanding the eva chat and movement prototype. The goal of this expansion would be to identify the "natural" places to wire in OpenPsi, URE, PLN, maybe R2L and maybe ghost, so that it functions end-to-end. The need to make it "actually work" forces a strong constraint and discipline on these otherwise-theoretical debates. For example, it forced me to discover that R2L sliced the data the wrong way for this task. Perhaps R2L could be useful in nearby subtasks, but it could not be central. Exactly how PLN, URE, etc would fit together in this specific robot perception-chat-movement pipeline remains a big unknown to me. I can imagine it, but the details need to be pinned down.

@amebel
Contributor

amebel commented Apr 1, 2018

@CICS-Oleg as Linas mentioned, that is the old interface; I will update the examples in the coming weeks.

@linas I don't know what there is to untangle. The code is already separated: there is (opencog openpsi) and there is (opencog openpsi dynamics). By borrowing from the game industry, we could rename (opencog openpsi) to (opencog goap) (Goal-Oriented Action Planning), or any other name; that would help stop the confusion. We could also replace the term "demand" in (opencog openpsi) with something else; I think "goal" would work.

@ngeiswei What I understood from our discussion in HK is that the URE doesn't work with virtual-links. Of course, code that works with the URE can be written using rules which have explicit implicants. Just remember that the (opencog openpsi) rules written by ghost can have virtual-links. Maybe ArrowLink can be used?

@ngeiswei
Member

ngeiswei commented Apr 1, 2018

The URE supports virtual links as rule preconditions, but not as premises, so maybe it would still work; we would need to look into it.

Regarding updating the examples with the new interface, maybe @CICS-Oleg could do it to familiarize himself with OpenPsi.

lcultx added a commit to lcultx/opencog that referenced this issue Apr 19, 2018
@ngeiswei
Member

I believe this can be closed, @amebel?

@linas
Member

linas commented Nov 29, 2018

Missed reply to @amebel from April, re "untangling": the README.md describes a version of openpsi that no longer exists. It's inaccurate in 120 different ways ... I recognize it as something that I wrote long ago. As far as I know, there is no adequate documentation for openpsi that

-- explains what it is
-- explains how it works
-- explains the kinds of problems that it solves
-- illustrates how to use it.

Opening a distinct bug report for that.

@linas
Member

linas commented Nov 29, 2018

Opened #3351 to cover openpsi documentation. Unrelated to the documentation is "dynamics" -- this was the part that was tangled up, back when I worked on it. In particular, the updater and modulator were ... squonky, and the concept of "emotion" was incorrect; emotions are something different, and they are not what's in openpsi. Once again, I refer to the Ostrofsky paper for a quick review of what "emotions" are. See opencog/eva/README-affects.md for a more accurate description of "emotions".

Anyway, if openpsi can function independently of "dynamics", then the "dynamics" directory should be moved to a different location. If openpsi still depends on dynamics, then I claim it's still "tangled".

@linas linas added the openpsi label Jun 7, 2019