epoch: 2 iter: 659/662 lr: 0.006748 loss: 0.5660 (0.4378)
epoch: 2 iter: 660/662 lr: 0.006748 loss: 0.3179 (0.4366)
epoch: 2 iter: 661/662 lr: 0.006748 loss: 0.4615 (0.4369)
epoch: 2 iter: 662/662 lr: 0.006748 loss: 0.6110 (0.4386)
Traceback (most recent call last):
  File "main_gpu.py", line 190, in <module>
    main()
  File "main_gpu.py", line 123, in main
    for i, (inputs, target) in enumerate(dataset_loader):
  File "/home/lijingyuan/.conda/envs/deeplabv3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 193, in __iter__
    return _DataLoaderIter(self)
  File "/home/lijingyuan/.conda/envs/deeplabv3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 469, in __init__
    w.start()
  File "/home/lijingyuan/.conda/envs/deeplabv3/lib/python3.6/multiprocessing/process.py", line 105, in start
    self._popen = self._Popen(self)
  File "/home/lijingyuan/.conda/envs/deeplabv3/lib/python3.6/multiprocessing/context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/home/lijingyuan/.conda/envs/deeplabv3/lib/python3.6/multiprocessing/context.py", line 277, in _Popen
    return Popen(process_obj)
  File "/home/lijingyuan/.conda/envs/deeplabv3/lib/python3.6/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
  File "/home/lijingyuan/.conda/envs/deeplabv3/lib/python3.6/multiprocessing/popen_fork.py", line 67, in _launch
    self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
I have trained for 2 epochs, but it crashes with 'Cannot allocate memory'. How do I handle this error?
The traceback shows os.fork() failing with Errno 12, which means the host ran out of system RAM while the DataLoader was spawning worker processes, not CUDA memory. Try reducing num_workers (or setting it to 0) and/or lowering your batch size.
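A minimal sketch of that change, using a dummy TensorDataset as a stand-in for the real dataset (the tensor shapes and names here are assumptions for illustration, not from the original script):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy data standing in for the user's real dataset (assumption).
inputs = torch.randn(64, 3, 8, 8)
targets = torch.randint(0, 2, (64,))
dataset = TensorDataset(inputs, targets)

# os.fork() failing with ENOMEM means the *host* ran out of memory while
# forking DataLoader workers. num_workers=0 loads data in the main process
# (no forking at all); a smaller batch_size also lowers the per-batch
# memory footprint.
dataset_loader = DataLoader(dataset, batch_size=8, num_workers=0, shuffle=True)

for i, (batch_inputs, batch_targets) in enumerate(dataset_loader):
    pass  # training step would go here
```

If multiple workers are still wanted, lowering num_workers gradually (e.g. from 8 to 2) is a middle ground: each worker forks a copy of the parent process, so fewer workers means less memory pressure at the start of every epoch.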