Why would geometry mismatch occur when mask and image are derived from the same DICOM series? #482
Comments
This is because the default tolerance is too stringent. The easiest fix would be to enable resampling.
@warkentinmatt It may also be possible to fix this by enabling mask correction. What I suspect happened is that your mask has the same spacing/direction as the image, but the size and origin are different, as Slicer now stores the mask by cropping it to the segmented area and adjusting the origin accordingly. This saves memory, but requires you to tell PyRadiomics that it's OK to resample the mask (which, in this case, is equivalent to just padding it until it matches the image size). PyRadiomics does not correct the mask by default, as this serves as a warning about the extra step PyRadiomics would be performing.
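As a sketch of what enabling this would look like: `geometryTolerance` and `correctMask` are real PyRadiomics setting names, but the values here are illustrative, and the extractor call is left as a comment so the snippet stands alone.

```python
# Hedged sketch: the two PyRadiomics settings relevant to this issue.
settings = {
    "geometryTolerance": 1e-4,  # relax the (stringent) default comparison tolerance
    "correctMask": True,        # allow PyRadiomics to resample the mask onto the image geometry
}

# These would be passed to the extractor, e.g.:
# from radiomics import featureextractor
# extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
```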
Thank you both for your responses.

@JoostJM So just to ensure I understand your comments: when Slicer exports/saves an NRRD label map, it crops the size to the extent of the ROI? It does not maintain the original dimensionality of the input volume? When I clear the scene in Slicer, re-import the DICOM series, and then load the NRRD mask, the mask is superimposed on the CT in the proper anatomical location. I had assumed this meant the original dimensions of the CT were maintained in the mask. Is Slicer just smart enough (i.e., using the right metadata) to properly align the mask on the CT?

Also, just so I am clear on what is achieved by resampling the binary mask: my understanding is that if the ROI is centrally located in the image (so the centre of the array contains a few hundred/thousand 1's), then resampling the mask at the borders of the volume with nearest-neighbour interpolation essentially pads the mask with 0's? Is this a correct interpretation?

Thank you for your help.
Yes, this is possible, because NRRD files also contain the origin, i.e. the physical location of the first voxel.
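For illustration, a minimal NRRD header might look like this (all values hypothetical); the `space origin` field is what lets Slicer place a cropped mask at the correct physical location:

```
NRRD0004
type: uchar
dimension: 3
space: left-posterior-superior
sizes: 64 64 40
space directions: (0.7,0,0) (0,0.7,0) (0,0,2.5)
space origin: (-120.5,-95.0,-300.2)
```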
Correct.
The mask can correspond to a sub-region of the image, with the size of the mask's bounding box. It does not need to have the same dimensions as the image. Whether the dimensions match will depend on how you create the segmentation in Slicer and how you export it. If you want more details on how to ensure the dimensions match, let us know. Looking again over the error (the first time I just glanced at it on my phone), here are the differences:
So you have a situation where the absolute value of the origin is slightly off (and the difference is larger than the default tolerance), but also the orientation of one image is flipped in Y relative to the other. This means you cannot remedy the issue by relaxing the tolerance alone; you will need to resample either the mask or the image. This resampling should not alter the pixel values, though: it will effectively be a re-orient operation.
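To see why re-orienting does not alter values, here is a toy 1D sketch (all values invented): flipping an axis only reorders the voxels; nothing is interpolated.

```python
import numpy as np

# A mask stored with one axis flipped relative to the image.
mask_flipped = np.array([0, 0, 1, 1])   # voxel order opposite to the image's
mask_reoriented = mask_flipped[::-1]    # flip the axis to match the image

# Same values, new order -- no interpolation, no value changes.
assert np.array_equal(mask_reoriented, np.array([1, 1, 0, 0]))
assert sorted(mask_flipped) == sorted(mask_reoriented)
```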
@JoostJM Thank you for your response again, very much appreciated. @fedorov How would resampling the mask or image resolve the issue that Y is flipped in one relative to the other? This is not clear to me. My understanding was that resampling was used to ensure the same size/dimensions.
Yes, I would love more information on how to ensure dimensions match, please and thank you. In fact, I would love any information or resources that could strengthen my understanding of some of the concepts we have discussed (i.e., spacing, direction, origin).

Naively, I might think that the origin of an image is the [x, y, z] coordinate of voxel [0, 0, 0], in other words, one of the "corners" (right anterior superior?) of the image array. Do the origin values shown in the log have a tangible interpretation (e.g., millimetres or something)? What do these numbers represent?

Forgive my ignorance; I am self-taught on all things imaging/radiomics, but I want to understand these concepts rather than just implementing solutions to make things work.
@warkentinmatt no worries about all the questions, it is a steep curve for a beginner!
The image direction is essentially a transformation that rotates the coordinate system of the image array (IJK) into the anatomical space (XYZ). In the simple 1D example below, "array index" is the coordinate system of your 1D array, and Left-Right is the coordinate system of the physical space of a 1D world. In Image2, the order of the values in the array is opposite to the direction of the physical coordinate system axis, i.e., the array of voxels is oriented differently. Resampling takes the geometry of an image, defined by the array size, direction, and origin, and samples values from another image at the voxels of that reference geometry. Does this make sense?
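The 1D index-to-physical mapping can be written out numerically as x = origin + direction × spacing × index; all numbers below are invented for illustration.

```python
spacing = 2.0  # mm per voxel

# Image1: direction +1, array index increases in the same sense as the physical axis.
origin1, direction1 = 10.0, +1
x1 = [origin1 + direction1 * spacing * i for i in range(4)]

# Image2: direction -1, array index runs opposite to the physical axis.
origin2, direction2 = 16.0, -1
x2 = [origin2 + direction2 * spacing * i for i in range(4)]

# Both arrays cover the same physical points, just indexed in opposite order.
assert x1 == [10.0, 12.0, 14.0, 16.0]
assert x2 == [16.0, 14.0, 12.0, 10.0]
assert sorted(x1) == sorted(x2)
```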
See the screenshot below for how to export a segment from 3D Slicer into a labelmap (which is effectively a binary image). Note that even if you follow this procedure, it is not impossible that you will still get a geometry mismatch, since the default tolerance values are quite stringent. But the orientation of the image array should be the same.
Here are some resources that might be useful:
Hope this helps!
@fedorov Thank you very much for taking the time to explain these concepts; that was very helpful. It makes a lot more sense, and I appreciate your patience. So it seems to me that resampling has two important but distinct roles in ensuring geometric alignment between two volumes:
Would this be a fair summary of the role of resampling? Also, thank you for sharing these resources, I look forward to working through them to strengthen my understanding. |
@warkentinmatt, almost. Steps 1 & 2 happen simultaneously. What occurs is that you define a grid of points in physical space and then sample your image at those points. If the new grid points do not match exactly with existing grid points, new values are calculated using the interpolation algorithm, based on the surrounding (original) points (voxels) of the image. You can also find an extensive explanation in the IBSI docs, in the section on interpolation. This also contains an image illustrating the resampling grid.
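The "define a grid, then sample" idea can be sketched in 1D with nearest-neighbour interpolation (the usual choice for binary masks, since it never invents intermediate label values); all numbers are invented for illustration.

```python
orig_origin, orig_spacing = 0.0, 2.0
orig_values = [0, 1, 1, 0]                        # voxels at x = 0, 2, 4, 6 mm

new_origin, new_spacing, new_size = 0.0, 1.0, 7   # finer grid: x = 0 .. 6 mm

resampled = []
for i in range(new_size):
    x = new_origin + new_spacing * i              # physical position of new voxel
    j = int((x - orig_origin) / orig_spacing + 0.5)   # nearest original index
    j = min(max(j, 0), len(orig_values) - 1)      # clamp to the original array
    resampled.append(orig_values[j])

assert resampled == [0, 1, 1, 1, 1, 0, 0]
```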
@fedorov @JoostJM I know this issue is closed, and I am deeply appreciative of all the help both of you have provided. I just wanted to circle back to clarify my understanding. After doing a ton of reading through the resources you each provided, and watching a few helpful videos, I am much more comfortable with these concepts.

So in the example that spawned this issue, since the mask was derived from the same image that was being used for feature extraction, the spacing was the same. In other words, a one-voxel step in any of the x, y, z directions carries the same physical meaning for both the mask and the image. In cases where the spacing is the same between a mask and an image, interpolation would not be needed to "fill the gaps", correct? So resampling the mask would simply ensure a geometric match with the image with respect to both the dimensionality of the volume and the way the two physically aligned volumes are array-indexed (with respect to direction and origin). Have I got this correct, finally?

If the spacing were not the same between two volumes, you would need to interpolate voxels in order to get them into geometric alignment. For example, if the image spacing was [1mm, 1mm, 1mm] and the mask's was [2mm, 2mm, 2mm], you would need to interpolate voxels in the mask to "fill the gaps" in physical spacing.

Thanks again for all your help. It has made all the difference.
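The equal-spacing case above can be sketched in 1D (all numbers invented): when a cropped mask shares spacing and direction with the image, aligning it reduces to zero-padding computed from the two origins, with no interpolation of values.

```python
import numpy as np

spacing = 2.0                              # mm per voxel, identical for both volumes
image_origin, image_size = 0.0, 10         # full image grid
mask_origin = 6.0                          # cropped mask starts 6 mm into the image
mask = np.array([1, 1, 1])                 # ROI bounding-box mask

# Offset between the grids, in whole voxels (exact because spacing matches).
offset = int(round((mask_origin - image_origin) / spacing))   # = 3

full_mask = np.zeros(image_size, dtype=int)
full_mask[offset:offset + mask.size] = mask   # "resampling" is pure zero-padding here

assert list(full_mask) == [0, 0, 0, 1, 1, 1, 0, 0, 0, 0]
```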
@warkentinmatt yes, this makes sense, I think you got it right! Another useful resource related to resampling is this page: https://www.slicer.org/wiki/Registration:Resampling |
@fedorov Great, thanks again for all your help. This may not be the time or place, but it is my understanding that you are based out of Boston. Currently, I am a PhD candidate in Toronto, but I am moving to Boston in two weeks for a research fellowship in the Department of Biostatistics at the HSPH. I would love to connect in person and perhaps continue some of these conversations, if you would be at all interested and have the time. Either way, your help has been much appreciated.
Sure, happy to connect and learn more about how you use pyradiomics! Just send me email (it's public on my github profile), and we can meet for coffee. |
Hi,
For context, I have been able to successfully run PyRadiomics from the command-line, both for single image-mask pairs, as well as running batch extraction for tens of image-mask pairs simultaneously with various YAML configs. However, I have run into an issue that I know how to get around, but I'm still curious why it occurs.
The setup: I have a single patient DICOM series with 158 slices, each of dimension 512x512 pixels. I loaded the series into Slicer and contoured an ROI (nodule) and then exported the binary label map as a .nrrd file. Then, I used the command-line tool dcm2niix to convert the same DICOM series to a .nii volume.
When I run pyradiomics from the terminal using the .nii image and .nrrd mask, I get a geometry mismatch as described below. Why would this occur if the mask and image volume were derived from the same DICOM series? Is this an expected occurrence?
As mentioned, two solutions are: (1) save the image as a .nii or .nrrd volume directly from Slicer, which seems to work fine; (2) adjust the tolerance value (though this solution leaves me feeling uneasy). Eventually, I want to perform batch conversion of hundreds of DICOM series to .nii or .nrrd volumes, and I already have masks for these hundreds of series. So I was hoping to use a command-line tool for batch conversion from DICOM to .nii or .nrrd instead of Slicer.
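For batch runs, solution (2) could be expressed in a PyRadiomics params file rather than per-invocation; as a sketch, `geometryTolerance` and `correctMask` are real PyRadiomics setting names, while the values and the rest of this file are illustrative.

```yaml
# Illustrative params file; only the two geometry settings are the point here.
setting:
  geometryTolerance: 0.0001   # relax the default comparison tolerance
  correctMask: true           # let PyRadiomics resample the mask to the image geometry
imageType:
  Original: {}
featureClass:
  firstorder:
```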
Thank you for your help.