Bug description
Sometimes, when exporting a dataset for training, the progress bar dialog stalls at 271 frame images, an error appears in the terminal, and the export never finishes. The only way I have found to get past the error is to close the program and restart it. The error message is confusing: it states that it cannot allocate 23.1 GiB for an array, but I am not attempting to allocate anywhere near that much space, nor are any of my videos that large. The pkg.slp file to be created should not be larger than ~2 GB. I have not identified a specific pattern for when the error occurs, except that it has never happened on the first attempt to export a training dataset after a fresh start of the SLEAP program; it often, but not always, occurs on the second attempt. I can still close the windows and shut down the program from the GUI, so the windows are not frozen.
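For what it's worth, the 23.1 GiB figure matches the size of the decoded frames rather than any file on disk: the array shape in the error below corresponds to 3982 uncompressed 1920x1080 RGB frames. A quick back-of-the-envelope check (frame geometry taken from the error message):

```python
# Sanity check on the reported allocation: geometry taken from the
# error's array shape (3982 frames, 1080x1920 pixels, 3 channels, uint8).
n_frames, height, width, channels = 3982, 1080, 1920, 3

raw_bytes = n_frames * height * width * channels  # 1 byte per uint8 sample
gib = raw_bytes / 2**30

print(f"{gib:.1f} GiB")  # ~23.1 GiB of *decoded* frames, not file size
```

So the allocation is consistent with decoding every requested frame into a single in-memory array; the compressed videos and the resulting pkg.slp are far smaller than that uncompressed representation.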
Expected behaviour
I expect the training export to progress to completion without producing a numpy MemoryError.
Actual behaviour
Traceback (most recent call last):
File "C:\ProgramData\Anaconda2\envs\sleap133\lib\site-packages\sleap\gui\learning\dialog.py", line 814, in export_package
suggested=include_suggestions,
File "C:\ProgramData\Anaconda2\envs\sleap133\lib\site-packages\sleap\gui\commands.py", line 1432, in export_dataset_gui
progress_callback=update_progress if verbose else None,
File "C:\ProgramData\Anaconda2\envs\sleap133\lib\site-packages\sleap\io\dataset.py", line 1997, in save_file
write(filename, labels, *args, **kwargs)
File "C:\ProgramData\Anaconda2\envs\sleap133\lib\site-packages\sleap\io\format\main.py", line 162, in write
return disp.write(filename, source_object, *args, **kwargs)
File "C:\ProgramData\Anaconda2\envs\sleap133\lib\site-packages\sleap\io\format\dispatch.py", line 79, in write
return adaptor.write(filename, source_object, *args, **kwargs)
File "C:\ProgramData\Anaconda2\envs\sleap133\lib\site-packages\sleap\io\format\hdf5.py", line 253, in write
progress_callback=progress_callback,
File "C:\ProgramData\Anaconda2\envs\sleap133\lib\site-packages\sleap\io\dataset.py", line 2373, in save_frame_data_hdf5
frame_numbers=frame_nums,
File "C:\ProgramData\Anaconda2\envs\sleap133\lib\site-packages\sleap\io\video.py", line 1433, in to_hdf5
frame_data = self.get_frames(frame_numbers)
File "C:\ProgramData\Anaconda2\envs\sleap133\lib\site-packages\sleap\io\video.py", line 1117, in get_frames
return np.stack([self.get_frame(idx) for idx in idxs], axis=0)
File "<__array_function__ internals>", line 6, in stack
File "C:\ProgramData\Anaconda2\envs\sleap133\lib\site-packages\numpy\core\shape_base.py", line 433, in stack
return _nx.concatenate(expanded_arrays, axis=axis, out=out)
File "<__array_function__ internals>", line 6, in concatenate
numpy.core._exceptions.MemoryError: Unable to allocate 23.1 GiB for an array with shape (3982, 1080, 1920, 3) and data type uint8
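The traceback ends in np.stack, which materializes every requested frame in one contiguous array. Purely as an illustration of a memory-friendlier pattern (this is not SLEAP's code; read_frame and the output array are hypothetical stand-ins for the reader and HDF5 dataset in sleap/io/video.py), frames can be copied out in small batches so peak memory stays at one batch rather than the whole export:

```python
import numpy as np

def write_frames_batched(out, frame_numbers, read_frame, batch_size=64):
    """Stream frames into a preallocated output array in small batches.

    `read_frame(idx)` is a hypothetical per-frame reader; `out` stands in
    for the on-disk HDF5 dataset that the exporter writes. Peak RAM is one
    batch of decoded frames instead of the entire export.
    """
    for start in range(0, len(frame_numbers), batch_size):
        batch = frame_numbers[start:start + batch_size]
        out[start:start + len(batch)] = np.stack(
            [read_frame(i) for i in batch], axis=0
        )

# Toy usage: 10 tiny synthetic "frames" written through a 3-frame window.
# In real use, `out` would be an HDF5 dataset or np.memmap, not an
# in-memory array.
frames = {i: np.full((4, 4, 3), i, dtype=np.uint8) for i in range(10)}
out = np.zeros((10, 4, 4, 3), dtype=np.uint8)
write_frames_batched(out, list(range(10)), frames.__getitem__, batch_size=3)
print(int(out[7, 0, 0, 0]))  # prints 7
```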
Screenshots
How to reproduce
Go to '...'
Click on '....'
Scroll down to '....'
See error
Your personal set up
Windows 10
SLEAP v1.3.3, Python 3.7.12