
The following error occurred when I was training the upernet_mpvit_base_160k_ade20k.py file with a single GPU. #15

Open
1islist opened this issue Oct 9, 2022 · 0 comments

Comments


1islist commented Oct 9, 2022

The following error occurred when I was training the upernet_mpvit_base_160k_ade20k.py file with a single GPU.
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
Traceback (most recent call last):
File "./tools/train.py", line 176, in
main()
File "./tools/train.py", line 165, in main
train_segmentor(
File "/export/home/rny/SegNeXt-main/mmseg/apis/train.py", line 110, in train_segmentor
cfg.device,
File "/export/home/rny/.conda/envs/openmmlab/lib/python3.8/site-packages/mmcv/utils/config.py", line 510, in getattr
return getattr(self._cfg_dict, name)
File "/export/home/rny/.conda/envs/openmmlab/lib/python3.8/site-packages/mmcv/utils/config.py", line 48, in getattr
raise ex
AttributeError: 'ConfigDict' object has no attribute 'device'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2698) of binary: /export/home/rny/.conda/envs/openmmlab/bin/python
Traceback (most recent call last):
File "/export/home/rny/.conda/envs/openmmlab/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/export/home/rny/.conda/envs/openmmlab/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/export/home/rny/.conda/envs/openmmlab/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in
main()
File "/export/home/rny/.conda/envs/openmmlab/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/export/home/rny/.conda/envs/openmmlab/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/export/home/rny/.conda/envs/openmmlab/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
elastic_launch(
File "/export/home/rny/.conda/envs/openmmlab/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/export/home/rny/.conda/envs/openmmlab/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
Note: the SegNeXt-main directory that appears in line 9 of the traceback is the code for another paper.
How do I solve this problem?
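
For anyone hitting the same error, a minimal sketch of one possible workaround, based only on the traceback above (apis/train.py line 110 reads cfg.device, which this config never defines): set cfg.device on the loaded config before train_segmentor() is called. The config path and insertion point below are assumptions for illustration, not the repo's actual code.

import torch
from mmcv import Config

# hypothetical config path; substitute the one passed on the command line
cfg = Config.fromfile('configs/upernet_mpvit_base_160k_ade20k.py')

# assumption: newer mmseg tools/train.py scripts set cfg.device themselves;
# setting it manually avoids the AttributeError raised inside apis/train.py
cfg.device = 'cuda' if torch.cuda.is_available() else 'cpu'

# ...then continue with the usual model/dataset setup and the train_segmentor(..., cfg, ...) call

Whether this is the intended fix for this repository is unconfirmed; it only addresses the missing attribute that the traceback complains about.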
