
[feedback] create Labjs Robot #4

Open
vsoch opened this issue Dec 19, 2017 · 1 comment

vsoch commented Dec 19, 2017

If there is a consistent structure for labjs experiments, I'd like to add a robot to run labjs exports! @FelixHenninger I just implemented this for the experiment factory, see a quick explanation here --> https://vsoch.github.io/2017/expfactory-robots/

Is this something that would be useful for labjs? How do users currently test their experiments, or automate things with them?

@FelixHenninger commented:

Hej Vanessa,

thanks for this fantastic suggestion -- robots would indeed be super-cool! As it stands, lab.js has an automated test suite that covers the core library pretty well, but I think study-level tests would be an awesome addition. Let me sketch out some thoughts, hoping they make sense:

Studies in lab.js have a consistent tree-like nested structure built out of individual component objects. Basically, you might build a basic study in pure JS like this, where screens are nested in flow control components:

```js
const screen = new lab.html.Screen({
  content: 'Press "s" or click the button!',
  responses: {
    'keypress(s)': 'foo',
    'click button.xyz': 'bar',
  }
})

const study = new lab.flow.Sequence({
  content: [
    screen,
    /* more nested content ... */
  ]
})

study.run()
```

(The builder is just a fancy UI that outputs the options for every component as an object literal, from which the library creates instances of the corresponding component class using lab.util.fromObject.)
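To make the idea concrete, here is a minimal, hypothetical sketch of that deserialization pattern -- a registry mapping type names to component classes, plus a recursive builder. The registry, the `type` field, and the class stubs are illustrative assumptions, not lab.util.fromObject's actual implementation:

```js
// Hypothetical sketch of the fromObject idea. The stub classes and the
// 'type' field are assumptions for illustration; the real lab.js
// implementation differs.
const registry = {
  'lab.html.Screen': class Screen {
    constructor(options) { Object.assign(this, options) }
  },
  'lab.flow.Sequence': class Sequence {
    constructor(options) { Object.assign(this, options) }
  },
}

function fromObject(spec) {
  const options = { ...spec }
  // Flow components nest further specs in their content array;
  // recurse so children become instances too.
  if (Array.isArray(options.content)) {
    options.content = options.content.map(fromObject)
  }
  return new registry[spec.type](options)
}

const study = fromObject({
  type: 'lab.flow.Sequence',
  content: [
    { type: 'lab.html.Screen', content: 'Press "s" or click the button!' },
  ],
})
```

The nice property for a robot is that the same serialized tree the builder exports is enough to reconstruct (or inspect) the full study structure.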

Because most basic responses are contained in the responses option, a robot could parse these and simulate events, or use the respond method to skip the event handling entirely and trigger the response logic directly (e.g. via screen.respond('foo')). I think that should make it possible to test a study's logic by running it, though this mechanism would break down for very complex studies using custom event handlers or for components that hook into the DOM themselves.
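As a rough illustration of that parsing step, a robot could walk a serialized study and collect the response labels it would later trigger via respond(). This is a standalone hypothetical sketch operating on plain option objects, not a lab.js API:

```js
// Hypothetical robot helper: walk a serialized study tree and collect
// every response label declared in a 'responses' option. The shape of
// the tree (nested 'content' arrays) is assumed for illustration.
function collectResponses(node, out = []) {
  for (const label of Object.values(node.responses || {})) {
    out.push(label)
  }
  // Flow components nest children in a content array; screens have
  // string content, which we skip.
  const children = Array.isArray(node.content) ? node.content : []
  for (const child of children) {
    collectResponses(child, out)
  }
  return out
}
```

A robot driving a live study could then pick one of these labels per screen and call something like `screen.respond(label)` to exercise the study's logic without synthesizing DOM events.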

What's missing, I think, is a good framework for following along with the study as it runs. Currently, sequences (and loops, the other main flow control component) keep track of their directly nested children, but structures are often nested two or more layers deep, and there is no good way to monitor which screen, deep down, is currently being shown. It shouldn't be hard to find the active component by traversing the tree and looking for the active leaf, though it would probably be worthwhile to investigate a nicer mechanism -- for example, setting things up so that a robot gets a callback whenever one screen ends and the next is shown, and can hook into the presentation that way.
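The traversal idea above could look something like this -- a sketch assuming each flow component exposes its children in a `content` array and marks the currently running child with an `active` flag (both field names are assumptions for illustration, not the actual lab.js internals):

```js
// Hypothetical sketch: descend from the root study to the currently
// active leaf component. Assumes flow components hold children in a
// 'content' array and flag the running child with 'active'.
function activeLeaf(component) {
  let node = component
  while (Array.isArray(node.content)) {
    const child = node.content.find((c) => c.active)
    if (!child) {
      // No active child: this flow component is the deepest active node.
      return node
    }
    node = child
  }
  // Leaf components (screens) have non-array content.
  return node
}
```

A robot could poll this after each transition, though a callback fired on every screen change would indeed be the cleaner mechanism.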

Ok, so much for now -- this is something I would love to discuss and help with! I sincerely hope (but very much doubt) that this brief description makes sense; please do let me know where I can clarify or explain things better. Again, thanks so much for your initiative -- I think this would be a great and helpful addition, and I'd love to support it on the library side!

The very best regards,

-Felix
