Dataset streamer #75

Open
agamemnonc opened this issue Jun 7, 2019 · 1 comment
Labels: enhancement (New feature, improved performance, etc.)

Comments

agamemnonc (Contributor) commented Jun 7, 2019

Just an idea :)

I think it would be nice to give the user the opportunity to use the Pipeline infrastructure of axopy to process publicly available datasets (e.g. EMG/EEG, etc.) for offline analyses, or perhaps to 'replay' datasets recorded with axopy.

To implement this, I think the only requirement would be a Dataset DAQ implementation whose read() method works much like an iterator. The user could then subclass this to create custom classes appropriate for the dataset at hand. The actual data could be fed into the dataset object either upon construction or at a later stage. Optionally, there could be a real_time simulation parameter controlling whether a Sleeper.sleep() is called after each read() operation to emulate real-time data recording. A rough sketch of the idea follows below.
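A minimal sketch of what that device could look like (ArrayDataset and its constructor arguments are hypothetical names, not an existing axopy API; time.sleep stands in here for Sleeper.sleep()):

import time

import numpy as np


class ArrayDataset(object):
    """Replays a pre-recorded array as if it were a DAQ device.

    data: array of shape (n_channels, n_samples)
    rate: sampling rate of the recording, in Hz
    """

    def __init__(self, data, rate, read_size=100, real_time=False):
        self.data = np.asarray(data)
        self.rate = rate
        self.read_size = read_size
        self.real_time = real_time
        self._index = 0

    def start(self):
        self._index = 0

    def read(self):
        # behave like an iterator: each call returns the next frame
        if self._index >= self.data.shape[1]:
            raise StopIteration
        frame = self.data[:, self._index:self._index + self.read_size]
        self._index += self.read_size
        if self.real_time:
            # emulate the hardware's real-time update rate
            time.sleep(self.read_size / float(self.rate))
        return frame

    def stop(self):
        pass

A subclass for a specific public dataset would then only need to load the data and pass it to the base constructor.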

If this would be of interest, I am happy to submit a PR.

ixjlyons added the enhancement (New feature, improved performance, etc.) label Jun 13, 2019
ixjlyons (Member) commented

I've thought about this idea as well and have found things like this useful in the past (it's great for demos for visitors, to avoid hardware setup, and it would be valuable for testing as well). A couple of notes:

Is the goal here to swap out a specific hardware device for a file streamer experiment-wide and completely replay an experiment session? This could be done, but I think it would take some fairly deep integration. A task would need to feed the device its own TaskReader in this case, for example, so you would have to avoid overwriting that during the replay. As a side note, I really think the relationship between storage and task should be more flexible, which would make this less of an issue. In general, the debugging capabilities are very lacking, and it takes quite a bit of effort and clever coding to debug complex experiments/tasks, e.g. by using a keyboard for control.

Requiring the user to implement their own class could work and might be somewhat cleaner than what I was thinking, but here's my idea anyway. The dataset streaming device could take any TaskReader (or maybe just an iterator of arrays, to de-couple it from axopy.storage) and iteratively output frames from the arrays. Something like this:

def prepare_storage(self, storage):
    # stream from an iterator over a previously recorded task's arrays
    self.daqstream.daq.set_source(
        storage.require_task(...).iterarray(...), read_size=100)

I think it'd make sense to output frames of data from a single trial until there are none left, then stop (firing off the finished transmitter); the next call to start then increments the file/dataset to stream from. A sketch along these lines is below.
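For illustration, a trial-by-trial streamer might look like this (DatasetStreamer, set_source, and the device interface are assumptions following the snippet above, not an existing axopy API):

class DatasetStreamer(object):
    """Streams frames from one trial's array at a time.

    source: any iterator of 2D arrays (one per trial), e.g. the
    iterarray(...) call sketched above.
    """

    def __init__(self, read_size=100):
        self.read_size = read_size
        self._source = None
        self._trial = None
        self._index = 0

    def set_source(self, source, read_size=None):
        self._source = iter(source)
        if read_size is not None:
            self.read_size = read_size

    def start(self):
        # each start() advances to the next file/trial in the dataset
        self._trial = next(self._source)
        self._index = 0

    def read(self):
        if self._index >= self._trial.shape[1]:
            # trial exhausted -- this is where the wrapping stream would
            # fire the finished transmitter
            raise StopIteration
        frame = self._trial[:, self._index:self._index + self.read_size]
        self._index += self.read_size
        return frame

    def stop(self):
        pass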

One remaining question is what to do after the last trial. The streamer could raise a StopIteration exception that the user has to catch to finish the task.
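For example (a sketch only; run_trial() and finish() are stand-ins for the actual task hooks, not a confirmed API):

def run_trial(self):
    try:
        self.daqstream.start()  # advances the streamer to the next trial
    except StopIteration:
        # no trials left in the dataset: end the task instead
        self.finish()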

This does seem to require quite a few modifications to an existing task to get it working. Any ideas for reducing that? How the user designs the task implementation could make it better or worse, so maybe it's just a matter of providing some good examples.
