Specify where the AHBA data is located #640
Conversation
It comes in multiple compressed CSV files and is converted into a single HDF5 file optimized for indexing. More details at https://github.com/NeuroVault/NeuroVault/blob/master/ahba_docker/preparing_AHBA_data.py
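For context, the conversion described above can be sketched roughly as follows. This is an illustrative outline only, not the actual `preparing_AHBA_data.py`; the function name, file names, and HDF5 key are hypothetical, and it assumes pandas with PyTables installed.

```python
import pandas as pd


def csvs_to_hdf5(csv_paths, out_path, key="ahba"):
    """Combine gzip-compressed CSV files into one queryable HDF5 store.

    Hypothetical sketch of the kind of conversion the comment describes:
    several compressed CSVs go in, a single HDF5 file comes out.
    """
    with pd.HDFStore(out_path, mode="w", complib="zlib", complevel=9) as store:
        for path in csv_paths:
            df = pd.read_csv(path, compression="gzip")
            # format="table" stores the data in a queryable layout, and
            # data_columns=True builds on-disk indexes for those columns,
            # which is what "optimized for indexing" refers to.
            store.append(key, df, format="table", data_columns=True)
```

Readers could then pull slices back with `pd.read_hdf(out_path, "ahba", where=...)` without loading the whole file.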
Is the HDF5 itself optimized for indexing, or are there specific indexes being built? What are the other files along with it?
That's a lot of questions. Maybe it would help if you provided some context about what you are trying to achieve.
Just trying to figure out the auditory of this image. It would allow more people to discover it if needed.
The secondary goal is to decouple the main Dockerfile from this image. I am almost sure that porting to Python 3 will be impossible on Jessie.
What's an auditory of an image? Python 3 is definitely possible on jessie (see https://github.com/docker-library/python/blob/341c752e5435f4cf4c008fbae67ae4b5b6209a02/3.6/jessie/Dockerfile) |
The auditory of the image is the people who need to work with AHBA data. Those people may appreciate being able to download the data already prepared as HDF5.
No need to make it Python-dependent; it could be just a data image.
As I understand it, there is nothing more important in this image. However, I cannot describe what this data is and how it differs from the raw AHBA release. I can see that it is post-processed, but it is not clear why.