622 write fails when we hit maximum disk space #625

Merged

Conversation

AdvancedImagingUTSW
Collaborator

Ran into different problems for each of our image saving formats (TIFF, OME-TIFF, HDF5, N5).

To circumvent all of these errors, some of which did not percolate up in a way that allowed us to handle them with try/except statements, we now put a hard stop on the acquisition if the anticipated file size is larger than the available disk space. A message is passed from the model to the controller, and a GUI dialog pops up to inform the user of the problem.
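As a rough illustration of the check (not the actual aslm implementation), here is a minimal sketch assuming a naive raw-frame size estimate and the standard-library `shutil.disk_usage`; the function names, parameters, and example path are all hypothetical:

```python
import os
import shutil


def estimated_acquisition_bytes(n_frames, frame_shape, bytes_per_pixel=2):
    # Naive estimate: raw frame size times frame count.
    # Ignores metadata overhead and any compression applied by the writer.
    rows, cols = frame_shape
    return n_frames * rows * cols * bytes_per_pixel


def acquisition_fits_on_disk(save_path, n_frames, frame_shape):
    # Compare the anticipated size against the free space on the target drive.
    free_bytes = shutil.disk_usage(os.path.dirname(os.path.abspath(save_path))).free
    return estimated_acquisition_bytes(n_frames, frame_shape) <= free_bytes


if not acquisition_fits_on_disk("acq.tif", n_frames=10_000, frame_shape=(2048, 2048)):
    # In the PR, the model passes a message to the controller, which raises a
    # GUI dialog and stops the acquisition; here we simply raise an error.
    raise RuntimeError("Anticipated acquisition size exceeds available disk space.")
```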

One interesting observation is that the anticipated N5 file size did not match the actual size on disk. After some investigation, it turned out that the N5/Zarr library automatically applies Blosc compression.

Also added a number of numpydoc docstrings so that our Sphinx documentation continues to improve.

[Screenshot: GUI dialog warning that the acquisition exceeds available disk space]

Now throws a GUI box to warn the user that the acquisition exceeds disk space, and stops the acquisition.
By using the debugger in PyCharm, I was able to look at the Zarr group information and identify that Blosc compression is applied automatically. This is why our file sizes have a discrepancy. Will leave as is.
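For reference, the same default can be confirmed outside the debugger; a minimal sketch assuming zarr v2, where Blosc is the default compressor for newly created arrays:

```python
import numpy as np
import zarr

# Create a chunked array without specifying a compressor.
z = zarr.zeros((512, 512), chunks=(128, 128), dtype=np.uint16)

# With zarr v2 defaults this reports a Blosc compressor, e.g.
# Blosc(cname='lz4', clevel=5, shuffle=SHUFFLE, blocksize=0),
# which is why the on-disk N5/Zarr size is smaller than the raw estimate.
print(z.compressor)
```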
@AdvancedImagingUTSW linked an issue Sep 22, 2023 that may be closed by this pull request
Provides default value if delay matrix fails.
self.wait_until_done vs self.wait_until_done_delay

The first is a boolean indicating whether we should delay; the second is a float specifying how long to delay.
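A minimal sketch of how the two attributes are meant to interact; the class and method names are illustrative, not the actual filter wheel code:

```python
import time


class FilterWheelSketch:
    def __init__(self, wait_until_done=True, wait_until_done_delay=0.03):
        # Boolean: should we block after sending a move command?
        self.wait_until_done = wait_until_done
        # Float: how long, in seconds, to block when waiting is enabled.
        self.wait_until_done_delay = wait_until_done_delay

    def set_filter(self, position):
        # ... send the move command to the hardware here ...
        if self.wait_until_done:
            time.sleep(self.wait_until_done_delay)
```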
@AdvancedImagingUTSW
Collaborator Author

I think we can proceed with this. It prevents deeper problems, which are more difficult to address, from arising. If it is considered too aggressive, I could implement a messagebox that warns the user and asks if they want to continue at their own risk, but this would be more involved.

@AdvancedImagingUTSW
Collaborator Author

Let me know what you prefer @zacsimile.

Should we want the option to have the user proceed with the acquisition after being warned, I would use a tkinter.messagebox.askyesno popup. The popup would be triggered by passing a message to the controller, which would make the message box top-level. The model would have to go into a waiting loop while the user interacts with the message box. Then, depending on what the user inputs, we would have to pass the result back from the controller to the model and ultimately to the image writer object to break the waiting loop.
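A minimal sketch of that flow, assuming a threading.Event for the waiting loop; the class names and wiring are illustrative, not the actual controller/model plumbing:

```python
import threading
import tkinter as tk
from tkinter import messagebox


class ControllerSketch:
    def __init__(self):
        self.user_said_yes = None
        self.response_ready = threading.Event()

    def ask_to_continue(self, message):
        # GUI side: show a top-level yes/no dialog and record the answer.
        root = tk.Tk()
        root.withdraw()  # hide the empty root window; only the dialog appears
        self.user_said_yes = messagebox.askyesno("Disk Space Warning", message)
        root.destroy()
        self.response_ready.set()


class ImageWriterSketch:
    def __init__(self, controller):
        self.controller = controller

    def confirm_or_abort(self):
        # Model side: block (the waiting loop) until the controller reports
        # the user's choice, then either continue or stop the acquisition.
        self.controller.response_ready.wait(timeout=60)
        if not self.controller.user_said_yes:
            raise RuntimeError("User declined to continue; acquisition aborted.")
```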

@zacsimile
Collaborator

I think we only need the message box if this commit prevents us from acquiring data in cases where there is sufficient free space. If this PR works on two different microscopes, I think we're clear to merge. Also need to fix a failing test on the Sutter filter wheel.

@AdvancedImagingUTSW
Collaborator Author

I'll fix the test. We can test it out on CT-ASLM-V1, V2, and BT-MesoSPIM.

@codecov

codecov bot commented Sep 29, 2023

Codecov Report

Merging #625 (25a67c8) into develop (56b4533) will increase coverage by 0.03%.
Report is 25 commits behind head on develop.
The diff coverage is 25.71%.

@@             Coverage Diff             @@
##           develop     #625      +/-   ##
===========================================
+ Coverage    48.06%   48.10%   +0.03%     
===========================================
  Files          161      160       -1     
  Lines        16721    16702      -19     
===========================================
- Hits          8037     8034       -3     
+ Misses        8684     8668      -16     
| Flag | Coverage Δ |
| --- | --- |
| unittests | 48.10% <25.71%> (+0.03%) ⬆️ |

Flags with carried forward coverage won't be shown.

| Files | Coverage Δ |
| --- | --- |
| src/aslm/model/data_sources/__init__.py | 55.55% <ø> (+13.88%) ⬆️ |
| src/aslm/model/data_sources/bdv_data_source.py | 95.00% <ø> (ø) |
| src/aslm/model/data_sources/data_source.py | 94.38% <ø> (ø) |
| src/aslm/model/data_sources/tiff_data_source.py | 96.42% <ø> (ø) |
| src/aslm/tools/file_functions.py | 100.00% <100.00%> (+5.47%) ⬆️ |
| .../model/devices/filter_wheel/filter_wheel_sutter.py | 86.84% <50.00%> (-2.20%) ⬇️ |
| src/aslm/controller/controller.py | 0.00% <0.00%> (ø) |
| src/aslm/model/features/image_writer.py | 72.22% <22.22%> (-10.93%) ⬇️ |

... and 7 files with indirect coverage changes

@JinlongL
Collaborator

Tested on ctASLMv1, ctASLMv2, and BT-mesoSPIM by trying to save data on a flash drive with only 537 MB of free space. The ASLM software correctly detected that there was not enough disk space, and the acquisition was terminated before it even started.
[Photo attachment: IMG_7947]

@AdvancedImagingUTSW merged commit 1a3b04a into develop Oct 6, 2023
1 check passed
@AdvancedImagingUTSW deleted the 622-write-fails-when-we-hit-maximum-disk-space branch December 15, 2023 01:41
Successfully merging this pull request may close these issues.

Write fails when we hit maximum disk space
3 participants