Is your feature request related to a problem? Please describe.
I think PyFAI is a great thing to have available, but as there aren't (to my knowledge) people using pyxem for analyzing x-ray data, it might be better to make it an optional dependency. This requires that we:
- Rewrite our own Azimuthal Integrator class
- Write an efficient caking algorithm
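For the optional-dependency side, one common pattern is to defer the import until the feature is actually used. A minimal sketch (the function names here are hypothetical, not existing pyxem API):

```python
import importlib.util


def has_optional(name):
    """Return True if the optional dependency `name` is importable,
    without actually importing it."""
    return importlib.util.find_spec(name) is not None


def require_optional(name, feature):
    """Raise a helpful error when a feature needs a missing optional
    dependency (e.g. pyFAI-backed azimuthal integration)."""
    if not has_optional(name):
        raise ImportError(
            f"{feature} requires the optional dependency '{name}'. "
            f"Install it with: pip install {name}"
        )
```

With this, pyFAI would only need to be installed by users who call the pyFAI-backed code paths.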
Describe the solution you'd like
The nice thing about doing this is that we can make it GPU (CUDA) accelerated, which should be fairly fast and work in line with the template matching code (the main reason for this rewrite).
I've been playing around with a method that first takes a set of control points which define the azimuthal pixels as polygons. It then overlays the cartesian pixel array on the polygons and determines how much of each pixel lies in each polygon. This defines a ragged array of factors to multiply against slices of the data. The nice thing is that this step takes most of the time and can be precomputed.
The array is then sliced once for each azimuthal pixel, and the slices are multiplied by the precomputed factors. It's an embarrassingly parallel problem and would work very well on a GPU. It's also about 2x faster than the `warp_polar` function, with the added benefit that the total intensity is conserved. I was going to try it out on a GPU and see what gains, if any, come from that.
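A rough sketch of the precompute-then-apply idea, for the 1D (radial) case. Note this approximates the per-pixel overlap fractions by supersampling each pixel rather than the exact polygon-clipping described above, and all names here are hypothetical:

```python
import numpy as np


def precompute_weights(shape, center, r_edges, n_sub=4):
    """For each radial bin, approximate the fraction of every cartesian
    pixel that falls inside the bin by sampling each pixel at n_sub**2
    subpixel positions.  Returns a ragged list of (flat_indices, fractions)
    pairs, one per bin; this is the slow part that only runs once."""
    ny, nx = shape
    # subpixel sample offsets within each pixel, centered in each subcell
    off = (np.arange(n_sub) + 0.5) / n_sub
    sy, sx = np.meshgrid(off, off, indexing="ij")
    y, x = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    # radius of every subpixel sample, shape (ny, nx, n_sub, n_sub)
    yy = y[..., None, None] + sy
    xx = x[..., None, None] + sx
    r = np.hypot(yy - center[0], xx - center[1])
    weights = []
    for lo, hi in zip(r_edges[:-1], r_edges[1:]):
        # fraction of each pixel's samples that land in this bin
        frac = ((r >= lo) & (r < hi)).mean(axis=(2, 3))
        idx = np.flatnonzero(frac)
        weights.append((idx, frac.ravel()[idx]))
    return weights


def integrate(image, weights):
    """Apply the precomputed factors: each bin is a weighted sum over the
    (few) pixels that overlap it.  Each bin is independent, so this loop is
    embarrassingly parallel and maps naturally onto a GPU."""
    flat = image.ravel()
    return np.array([flat[idx] @ frac for idx, frac in weights])
```

Because every subpixel sample lands in exactly one bin (when the bin edges cover the whole detector), the fractions for each pixel sum to 1 and the total intensity is conserved, unlike interpolation-based resampling such as `warp_polar`.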