
DICOM scale=preserve not working as intended and performance consideration #1918

Open
jpambrun opened this issue Jan 22, 2024 · 0 comments
The implementation of the DICOM image decoder has some limitations and would benefit from a few tweaks.

  1. It uses DCMTK's getOutputData(), which renders presentation pixels rather than returning modality values. For CT images, modality values follow a specific scale of Hounsfield units, where water density shows as 0 and air density is represented by -1000. getOutputData() loses this information; getInterData() should be used instead. Otherwise the scale=preserve option doesn't really work as intended.
  2. getOutputData() is configured to output uint64, which seems inefficient and wasteful. For most modalities the dynamic range is 12 bits at most. I don't know of any device or technology that would produce 64 bits of dynamic range. Even if one did, it's probably best to make this dynamic rather than hard-coded.
  3. Looking at the implementation, it seems to do a string comparison on the scale parameter for every single pixel. This should be hoisted out of the critical loop.
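To illustrate point 1, here is a minimal numpy sketch of the difference between modality values and presentation pixels. The rescale attributes and window settings are hypothetical example values (not taken from the decoder), but the mapping follows the standard DICOM RescaleSlope/RescaleIntercept convention; the windowed 8-bit rendering is analogous to what a getOutputData()-style path produces, and it discards the absolute Hounsfield scale:

```python
import numpy as np

# Hypothetical raw stored values from a CT slice (12-bit range),
# with rescale attributes commonly seen for CT.
stored = np.array([0, 1000, 2000], dtype=np.uint16)
rescale_slope, rescale_intercept = 1.0, -1024.0

# Modality (Hounsfield) values -- what a getInterData()-style path preserves:
# air is about -1000 HU, water is 0 HU.
hu = stored.astype(np.float64) * rescale_slope + rescale_intercept

# Presentation pixels -- an example window/level rendering to 8 bits,
# analogous to getOutputData(); the absolute HU scale is lost.
center, width = 40.0, 400.0
lo, hi = center - width / 2, center + width / 2
rendered = np.clip((hu - lo) / (hi - lo), 0.0, 1.0) * 255.0
rendered = rendered.round().astype(np.uint8)
```

Two different windowings of the same slice would yield different rendered arrays, while hu stays fixed, which is why scale=preserve only makes sense on the modality values.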

I have made most of the proposed changes here some time ago. The scale implementation is missing, but it should be easy to add (and more efficient?) by scaling the output between tf.reduce_min and tf.reduce_max.
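A minimal sketch of that scaling idea, using numpy stand-ins for tf.reduce_min / tf.reduce_max so it is self-contained (the function name and the guard against constant images are my own, not from the linked change):

```python
import numpy as np

def scale_auto(pixels: np.ndarray) -> np.ndarray:
    """Scale pixel values to [0, 1] using the observed dynamic range."""
    lo = pixels.min()  # stand-in for tf.reduce_min(pixels)
    hi = pixels.max()  # stand-in for tf.reduce_max(pixels)
    if hi == lo:       # avoid division by zero on constant images
        return np.zeros_like(pixels, dtype=np.float32)
    return ((pixels - lo) / (hi - lo)).astype(np.float32)

hu = np.array([-1024.0, 0.0, 976.0])  # example Hounsfield values
scaled = scale_auto(hu)
```

Because the reduction runs once per image rather than per pixel, this also sidesteps the per-pixel string comparison from point 3.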
