I think it would be nice to give the user the opportunity to use the Pipeline infrastructure of axopy to process publicly available datasets (e.g. EMG, EEG, etc.) for offline analyses, or perhaps to 'replay' datasets recorded with axopy.
To implement that, I think the only requirement would be a Dataset DAQ implementation whose read() method works much like an iterator. The user could then subclass this to create custom classes appropriate for the dataset at hand. The actual data would be fed into the dataset object either upon construction or at a later stage. Optionally, there could be a real_time simulation parameter controlling whether Sleeper.sleep() is called after each read() operation to emulate real-time data acquisition.
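To make the idea concrete, here is a minimal sketch of such a Dataset DAQ. All names (`DatasetDaq`, `samples_per_read`, `rate`) are hypothetical and not part of axopy's actual API; the sleep stands in for the Sleeper.sleep() call mentioned above:

```python
import time

import numpy as np


class DatasetDaq:
    """Hypothetical dataset-backed DAQ sketch.

    read() returns successive chunks of pre-recorded data, behaving
    much like an iterator, and raises StopIteration when exhausted.
    """

    def __init__(self, data, samples_per_read, real_time=False, rate=2000):
        self.data = np.asarray(data)  # shape: (channels, samples)
        self.samples_per_read = samples_per_read
        self.real_time = real_time
        self.rate = rate  # assumed sampling rate in Hz
        self._pos = 0

    def read(self):
        start, stop = self._pos, self._pos + self.samples_per_read
        if stop > self.data.shape[1]:
            raise StopIteration("dataset exhausted")
        self._pos = stop
        if self.real_time:
            # emulate the timing of a live recording
            time.sleep(self.samples_per_read / self.rate)
        return self.data[:, start:stop]
```

A user-defined subclass could then override the constructor to load a particular public dataset into `self.data`.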
If this would be of interest, I am happy to submit a PR.
I've thought about this idea as well and have found things like this useful in the past (it's great for demos for visitors to avoid hardware setup and would be valuable for testing as well). A couple notes:
Is the goal here to swap a specific hardware device for a file streamer experiment-wide and completely replay an experiment session? This could be done, but I think it would take some fairly deep integration. A task would need to feed the device its own TaskReader in this case, for example, so you would have to avoid overwriting that data during the replay. As a side note, I really think the relationship between storage and task should be more flexible, which would make this less of an issue. In general, the debugging capabilities are currently lacking, and it takes quite a bit of effort and clever coding to debug complex experiments/tasks, for example by using a keyboard for control.
Requiring the user to implement their own class could work and might be somewhat cleaner than what I was thinking, but here's my idea anyway. The dataset streaming device could take any TaskReader (or maybe just an iterator of arrays, to decouple from axopy.storage) and iteratively output frames from the arrays. Something like this:
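A rough sketch of that device (the class and parameter names here are made up for illustration, not existing axopy API):

```python
import numpy as np


class DatasetStreamer:
    """Hypothetical streaming device wrapping an iterable of trial arrays.

    Each start() advances to the next trial; read() then emits
    fixed-size frames from that trial until it is exhausted.
    """

    def __init__(self, trials, frame_size):
        # trials: iterable of (channels, samples) arrays, e.g. pulled
        # from a TaskReader or just a plain list of arrays
        self._trials = iter(trials)
        self.frame_size = frame_size
        self._trial = None
        self._pos = 0

    def start(self):
        # move on to the next trial's data
        self._trial = np.asarray(next(self._trials))
        self._pos = 0

    def read(self):
        start, stop = self._pos, self._pos + self.frame_size
        if stop > self._trial.shape[1]:
            # trial exhausted -- a real device would fire off the
            # `finished` transmitter here
            raise StopIteration("trial finished")
        self._pos = stop
        return self._trial[:, start:stop]
```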
I think it'd make sense to output frames of data from a single trial until there are none left and then stop (firing off the finished transmitter); the next call to start would then advance to the next file/dataset to stream from.
One remaining question is what to do after the last trial. We could raise a StopIteration exception that the user has to catch to finish the task.
This does seem to require quite a few modifications to an existing task to get it working. Any ideas for reducing that? I think how the user designs the task implementation could make it better or worse, so maybe it's just a matter of providing some good examples.
Just an idea :)