Hello everyone, I would like to share an error I hit when running fMRIPrep with the plain "docker run" command. I have since solved the problem myself, but the extra information may be helpful for the community and other users.
When running "docker run" directly instead of fmriprep-docker, a sqlite3.OperationalError: disk I/O error sometimes occurs. Others have reported similar issues (e.g. #2668, #2643, #2506, #2313), but those solutions did not fix the disk I/O error on our cloud server. I found that changing the NFS mount type from soft to hard resolved the issue on our server.
Traceback (most recent call last):
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1968, in _exec_single_context
self.dialect.do_execute(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 920, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: disk I/O error
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/fmriprep/bin/fmriprep", line 8, in <module>
sys.exit(main())
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/fmriprep/cli/run.py", line 43, in main
parse_args()
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/fmriprep/cli/parser.py", line 786, in parse_args
config.from_dict({})
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/fmriprep/config.py", line 678, in from_dict
execution.load(settings, init=initialize('execution'), ignore=ignore)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/fmriprep/config.py", line 232, in load
cls.init()
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/fmriprep/config.py", line 476, in init
cls._layout = BIDSLayout(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/bids/layout/layout.py", line 176, in __init__
indexer(self)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/bids/layout/index.py", line 148, in __call__
self._index_dir(self._layout._root, self._config)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/bids/layout/index.py", line 226, in _index_dir
self._index_dir(d, config, force=force)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/bids/layout/index.py", line 226, in _index_dir
self._index_dir(d, config, force=force)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/bids/layout/index.py", line 226, in _index_dir
self._index_dir(d, config, force=force)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/bids/layout/index.py", line 199, in _index_dir
config_entities.update(c.entities)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/attributes.py", line 563, in __get__
return self.impl.get(state, dict_) # type: ignore[no-any-return]
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/attributes.py", line 1084, in get
value = self._fire_loader_callables(state, key, passive)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/attributes.py", line 1119, in _fire_loader_callables
return self.callable_(state, passive)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/strategies.py", line 972, in _load_for_state
return self._emit_lazyload(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/strategies.py", line 1103, in _emit_lazyload
lazy_clause, params = self._generate_lazy_clause(state, passive)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/strategies.py", line 858, in _generate_lazy_clause
value = mapper._get_state_attr_by_column(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/mapper.py", line 3551, in _get_state_attr_by_column
return state.manager[prop.key].impl.get(state, dict_, passive=passive)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/attributes.py", line 1084, in get
value = self._fire_loader_callables(state, key, passive)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/attributes.py", line 1114, in _fire_loader_callables
return state._load_expired(state, passive)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/state.py", line 798, in _load_expired
self.manager.expired_attribute_loader(self, toload, passive)
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/loading.py", line 1626, in load_scalar_attributes
result = load_on_ident(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/loading.py", line 482, in load_on_ident
return load_on_pk_identity(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/loading.py", line 668, in load_on_pk_identity
session.execute(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2232, in execute
return self._execute_internal(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2127, in _execute_internal
result: Result[Any] = compile_state_cls.orm_execute_statement(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/orm/context.py", line 293, in orm_execute_statement
result = conn.execute(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1413, in execute
return meth(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 483, in _execute_on_connection
return connection._execute_clauseelement(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1637, in _execute_clauseelement
ret = self._execute_context(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
return self._exec_single_context(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1987, in _exec_single_context
self._handle_dbapi_exception(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2344, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1968, in _exec_single_context
self.dialect.do_execute(
File "/opt/conda/envs/fmriprep/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 920, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) disk I/O error
[SQL: SELECT configs.name AS configs_name, configs._default_path_patterns AS configs__default_path_patterns
FROM configs
WHERE configs.name = ?]
[parameters: ('bids',)]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Additional information / screenshots
First, I found that when using the docker run command, /tmp is used as the working directory for intermediate files. This is fine when storage space is sufficient, but on a cloud server the /tmp directory under the root filesystem is often limited. Using the -w/--work-dir option to specify a working directory on a larger volume helps.
Second, on a cloud server it is common to store neuroimaging data on a NAS while computation runs on a separate compute node. On our server (Ubuntu 20.04), NFS provides access to the data. When we mounted the share on the compute node with the mount command, the default was a soft mount rather than a hard one, and the soft mount appears to cause the disk I/O error. After changing the mount options in /etc/fstab, the error disappeared. The modified /etc/fstab is shown below:
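A representative hard-mount fstab entry looks like the following (the server name and paths are placeholders, not our actual configuration):

```
# /etc/fstab -- NFS share mounted with the "hard" option (placeholder names)
nas.example.com:/export/data  /nfs/z1  nfs4  hard,timeo=600,retrans=2,_netdev  0  0
```

After editing /etc/fstab, unmounting and remounting the share applies the change; on most Linux systems `nfsstat -m` should show the options actually in effect.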
This modification successfully addressed our issue. I would like to share the solution here in case someone else encounters the same error.
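To check whether any NFS share on a node is currently soft-mounted, the options column of /proc/mounts can be inspected. A minimal sketch (the sample lines and paths below are illustrative, not our actual mounts):

```python
def soft_nfs_mounts(mounts_text):
    """Return (mount_point, options) pairs for NFS mounts using soft mounting.

    `mounts_text` uses the /proc/mounts format: whitespace-separated
    device, mount point, filesystem type, and comma-separated options.
    """
    flagged = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue
        device, mount_point, fstype, options = fields[:4]
        # Only look at NFS filesystems (nfs, nfs4, ...)
        if not fstype.startswith("nfs"):
            continue
        if "soft" in options.split(","):
            flagged.append((mount_point, options))
    return flagged

# Illustrative /proc/mounts content; in practice read open("/proc/mounts").read()
sample = """\
nas:/export/data /nfs/z1 nfs4 rw,relatime,soft,timeo=600 0 0
nas:/export/home /home nfs4 rw,relatime,hard,timeo=600 0 0
/dev/sda1 / ext4 rw,relatime 0 0
"""
print(soft_nfs_mounts(sample))  # → [('/nfs/z1', 'rw,relatime,soft,timeo=600')]
```

Any mount point reported by this check is a candidate for switching to a hard mount in /etc/fstab.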
Best.
Kunru
What command did you use?
participants=(sub-GD030 sub-GD031 sub-GD032 sub-GD033 sub-GD034 sub-GD035 sub-GD036 sub-GD037 sub-GD038 sub-GD039 sub-GD040 sub-GD041 sub-GD044 sub-GD045 sub-GD046 sub-GD047 sub-GD049 sub-GD050 sub-HC001 sub-HC002 sub-HC004 sub-HC005 sub-HC006 sub-HC007 sub-HC008 sub-HC009 sub-HC010 sub-HC011 sub-HC012 sub-HC013 sub-HC014 sub-HC015 sub-HC016 sub-HC017 sub-HC018 sub-HC019 sub-HC020 sub-HC022 sub-HC023 sub-HC024 sub-HC025 sub-HC026 sub-HC027 sub-HC028 sub-HC029 sub-HC030 sub-HC031 sub-HC033 sub-HC034 sub-HC038 sub-HC039 sub-HC040 sub-HC041)

docker run -it --rm \
  -v /nfs/z1/userhome/Kunru/data/CR_stress/BIDS:/data:ro \
  -v /nfs/z1/userhome/Kunru/data/CR_stress/fmriprep_C:/out:rw \
  -v /nfs/z1/userhome/Kunru/data/tmpfiles:/work:rw \
  -v /nfs/z1/userhome/Kunru/Freesurfer_license:/license \
  nipreps/fmriprep:23.1.4 /data /out participant \
  --fs-license-file /license/license.txt \
  --participant-label "${participants[@]}" \
  --output-spaces T1w MNI152NLin2009cAsym:res-2 fsaverage fsaverage5 \
  --cifti-output \
  --n-cpus 15 \
  --mem-mb 102400 \
  --work-dir /work \
  --write-graph
What version of fMRIPrep are you running?
23.1.4
How are you running fMRIPrep?
Docker
Is your data BIDS valid?
Yes
Are you reusing any previously computed results?
No