Cache pybids database for dataset to speed up loading large datasets (i.e. narratives) #149

Open · adelavega (Contributor) opened this issue Feb 16, 2022 · 0 comments
For very large datasets, constructing a pybids database is time-consuming, so ideally we would pre-construct a dataset-level cached pybids database.

However, in Neuroscout we need a database that also includes the specific events indexed for a particular analysis. We therefore can't simply cache the dataset-level database and pass it to fitlins, as it would not include those events.

To make this work, Neuroscout-CLI would need to re-create the pybids layout for each dataset, save it in a standardized location (e.g., inside the dataset itself), add the events for that specific analysis, and then pass the cached database location to fitlins.
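The steps above could be sketched roughly as follows. This is a minimal sketch, not an existing Neuroscout convention: the `.pybids_cache/<analysis_id>/` location and the helper names are assumptions, and it relies on pybids' `database_path` argument to `BIDSLayout`, which reuses an existing on-disk index on subsequent calls instead of re-indexing.

```python
from pathlib import Path


def cached_layout_path(dataset_root: str, analysis_id: str) -> Path:
    """Standardized per-analysis location for the pybids database cache.

    Hypothetical convention: store the cache inside the dataset itself,
    under .pybids_cache/<analysis_id>/ (names are assumptions).
    """
    return Path(dataset_root) / ".pybids_cache" / analysis_id


def get_layout(dataset_root: str, analysis_id: str):
    """Build (or reuse) a BIDSLayout backed by an on-disk database cache.

    The resulting database path is what would then be handed to fitlins,
    after the analysis-specific events have been added to the layout.
    """
    from bids import BIDSLayout  # pybids

    db_path = cached_layout_path(dataset_root, analysis_id)
    db_path.mkdir(parents=True, exist_ok=True)
    # If db_path already holds an index, BIDSLayout loads it directly
    # and skips the expensive re-indexing of the whole dataset.
    return BIDSLayout(dataset_root, database_path=str(db_path))
```

Because the cache is keyed per analysis, each analysis gets its own database that can include its specific events without clobbering a shared dataset-level cache.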
