Feature request: Voxel sizes for PSF and data #12

Closed
dmilkie opened this issue Feb 7, 2019 · 7 comments

dmilkie commented Feb 7, 2019

This library really rocks!
Feature request: The voxel sizes of the images to be deconvolved do not always match the voxel sizes of the measured PSF. Can these be added as parameters, with the necessary interpolation done on the GPU?

@eric-czech (Collaborator)

Thanks @dmilkie ,

3D image scaling with TensorFlow is a bit of a PIA in my experience, but to at least get a rough sense of the improvement that might come from this, here is a quick benchmark for bilinear interpolation on GPU vs. CPU:

import tensorflow as tf
import numpy as np
from skimage import transform

# Generate ~530MB image with random data for rescaling as well as 
# a target shape at 80% of original
img = np.random.uniform(-1, 1, size=(1440, 1920, 48)).astype(np.float32)
target_shape = tuple((np.array(img.shape)*.8).astype(int))
img.shape, target_shape, img.nbytes
#> ((1440, 1920, 48), (1152, 1536, 38), 530841600)

# Define a method for 3D resize via TF
def resize_volume(new_shape, input_tensor):
    """
    Lifted from https://niftynet.readthedocs.io/en/dev/_modules/niftynet/layer/linear_resize.html
    
    :param new_shape: Target shape (``X, Y, Z``)
    :param input_tensor: Image tensor with shape ``batch, X, Y, Z, Channels``
    :return: interpolated volume
    """
    b_size, x_size, y_size, z_size, c_size = \
        input_tensor.shape.as_list()
    x_size_new, y_size_new, z_size_new = new_shape

    squeeze_b_x = tf.reshape(
        input_tensor, [-1, y_size, z_size, c_size])
    resize_b_x = tf.image.resize_bilinear(
        squeeze_b_x, [y_size_new, z_size_new])
    resume_b_x = tf.reshape(
        resize_b_x, [b_size, x_size, y_size_new, z_size_new, c_size])
    reoriented = tf.transpose(resume_b_x, [0, 3, 2, 1, 4])
    
    squeeze_b_z = tf.reshape(
        reoriented, [-1, y_size_new, x_size, c_size])
    resize_b_z = tf.image.resize_bilinear(
        squeeze_b_z, [y_size_new, x_size_new])
    resume_b_z = tf.reshape(
        resize_b_z, [b_size, z_size_new, y_size_new, x_size_new, c_size])

    return tf.transpose(resume_b_z, [0, 3, 2, 1, 4])

# Open a session and define input/output tensors to best emulate execution time
# without graph construction and session initialization times (but not transfer to GPU memory)
sess = tf.InteractiveSession()
b_img = img[np.newaxis, ..., np.newaxis]
t_in = tf.placeholder(tf.float32, shape=b_img.shape)
t_out = resize_volume(target_shape, t_in)

%%timeit -r 10 -n 1
t_out.eval({t_in: b_img})
#> 585 ms ± 45.7 ms per loop (mean ± std. dev. of 10 runs, 1 loop each)
sess.close()

%%timeit -r 10 -n 1
transform.resize(img, target_shape, mode='constant', anti_aliasing=False)
#> 9.72 s ± 89.1 ms per loop (mean ± std. dev. of 10 runs, 1 loop each)

~10 s on a CPU vs. half a second on a GPU does seem like it's worth working in somehow, though I'm a little torn on how to do it given that it's a one-time transformation and would still be at least an order of magnitude slower than deconvolution itself.

I'm thinking it would probably be best as a standalone context manager with a companion helper function like:

# PSF downscaling factors as X, Y, Z
scale_x, scale_y, scale_z = 1, 1, .8

# For scaling multiple images without recreating the TF graph each time
with flowdec.transform.Transformer() as trans:
    # In a loop over per-channel PSFs
    psf_image_scaled = trans.rescale(psf_image, scale=(scale_x, scale_y, scale_z))

# OR if you don't care about the graph initialization time (which would be negligible with few PSFs)
# a function like this would wrap the above
psf_image_scaled = flowdec.transform.rescale(psf_image, scale=(scale_x, scale_y, scale_z))

Does that make sense to you? Users would still have to do the math on the scaling factors but that would let me continue to avoid having to add anything to the API concerning units or deciding what kind of PSF transforms are detrimental (e.g. upsampling).
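
(For what it's worth, the math on the factors is just the ratio of voxel sizes; a minimal sketch, with made-up voxel sizes in microns:)

# Hypothetical voxel sizes in microns as (X, Y, Z); substitute your own values
data_voxel_size = (0.104, 0.104, 0.5)   # image to be deconvolved
psf_voxel_size = (0.104, 0.104, 0.4)    # measured PSF

# Factors that resample the PSF onto the data grid:
# < 1 downsamples that axis, > 1 upsamples it
scale_x, scale_y, scale_z = tuple(
    p / d for p, d in zip(psf_voxel_size, data_voxel_size))
#> (1.0, 1.0, 0.8)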

dmilkie (Author) commented Feb 8, 2019

Thanks, that makes sense. And it looks like someone is already implementing this idea:
https://github.com/VolkerH/Lattice_Lightsheet_Deskew_Deconv/blob/master/Python/01_Lattice_Light_Sheet_Deconvolution.ipynb

dmilkie closed this as completed Feb 8, 2019
@eric-czech (Collaborator)

Nice, @VolkerH got you covered! I haven't checked in a while, but it's a bummer to see the current public opinion of the likelihood of official OpenCL support in TF as centering around the "probability of alien invasion and finding a way to travel back in time" (tensorflow/tensorflow#22). Sigh...

Anyway, for anyone who encounters this in the future, gputools is likely a better way to manipulate 3D volumes/PSFs prior to deconvolution.
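
Something along these lines should work (a rough sketch only; I'm assuming gputools.scale with a per-axis factor tuple here, so check the gputools docs for the exact signature in your version):

import numpy as np
import gputools  # OpenCL-based GPU array operations

# Example PSF volume in (Z, Y, X) order; replace with your measured PSF
psf = np.random.rand(64, 128, 128).astype(np.float32)

# Rescale on the GPU so the PSF voxel grid matches the data grid.
# Factors are illustrative; the order is assumed to follow the array axes (Z, Y, X),
# so this shrinks the stack to roughly 0.8x as many planes along Z.
psf_scaled = gputools.scale(psf, (0.8, 1.0, 1.0))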

VolkerH (Contributor) commented Feb 8, 2019

wow, I only created the repo a couple of days ago ...
I tried to tag you to let you know about it, but apparently cross-repo tagging is not possible.

I haven't put up a requirements file, but caveat emptor: gputools.affine used to use a different coordinate ordering than scipy.ndimage.affine_transform. This has changed in the develop branch that I've been using; I think the master branch still flips coordinates. See maweigert/gputools#12
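
If anyone wants to check which convention their installed gputools uses, a quick comparison against scipy.ndimage on a small test volume does the trick (a sketch; this assumes gputools.affine accepts a 4x4 homogeneous matrix the way scipy.ndimage.affine_transform does):

import numpy as np
from scipy import ndimage
import gputools

# Small anisotropic volume so that a swapped axis order is easy to detect
vol = np.random.rand(16, 32, 64).astype(np.float32)

# Scale only the last axis; 4x4 homogeneous matrix in scipy's convention
mat = np.diag([1.0, 1.0, 2.0, 1.0])

ref = ndimage.affine_transform(vol, mat, order=1)  # linear interpolation
gpu = gputools.affine(vol, mat)

# Interpolation differences should be small; a flipped coordinate order
# would make this difference large
print(np.abs(ref - gpu).mean())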

VolkerH (Contributor) commented Feb 8, 2019

@dmilkie, I just saw on your profile that you're at Janelia; are you also dealing with LLS data?

dmilkie (Author) commented Feb 10, 2019 via email

VolkerH (Contributor) commented Feb 10, 2019

@dmilkie, thanks for the reply. That's good to know.
I think the deconvolution will still have to happen offline though, even with flowdec.

I had some discussions with David, who runs our lightsheet, and we had some ideas about live deskew visualization that would have me looking into the LabVIEW control software.

I don't want to hijack the flowdec issue for a lattice-specific discussion, though.
If you're interested, we can continue the discussion here:
VolkerH/Lattice_Lightsheet_Deskew_Deconv#11
