"Fourier" inverse Abel transform #198

Open

stggh opened this issue Apr 10, 2017 · 23 comments

@stggh (Collaborator) commented Apr 10, 2017

A preliminary Python version of Pretzler's Fourier cosine series transform is in my PyAbel fork fourier branch. The file abel/fourier.py is self-contained, and will run with your own PyAbel distribution: python3 fourier.py.
[figure: plot_example_fourier]

I like the method because the concept is simple, and it illustrates how the other basis (basex-like) methods (may) work.

The code would benefit from @DanHickstein's vectorization magic for hbasis. I have yet to tidy up the code, link it to the correct basis storage functions in tools/basis.py, and add documentation and unit tests.

Open for discussion, before I prepare it for PyAbel.

@DanHickstein (Member)

Looks cool! Thanks for looking into this.

How does this compare with the "Fourier Hankel" method?

And how does fitting cosine functions to the data differ from just taking the FFT?

@stggh (Collaborator, Author) commented Apr 11, 2017

Q1:

How does this compare with the "Fourier Hankel" method?

I selected this algorithm from the reference given in issue #61; it appeared to be the method favoured by that user.

I am not familiar with the Fourier-Hankel (FH) method. It looks to be a further step beyond this method, although this reference, Applied Optics Vol. 47, Issue 9, pp. 1350-1357 (2008) (see also the open issue #24, with its publication query), claims that Fourier expansion (FE) is better than FH:

Accuracies of the discrete FE and FH methods have been compared and analyzed. The FE method is very accurate even for a small set of data, while the FH method appears to have systematic negative errors. [...]
Though the FE method has a high accuracy, it is very sensitive to noise and the inversion matrix is difficult to calculate, which restricts its application; a smoothing procedure must be implemented when it has been used. The FH method can only be considered when applying it to data with a large number of points ...

Q2:

how does fitting cosine functions to the data differ from just taking the FFT

This is a good question that is not straightforward to answer. The Pretzler basis function f(r) is in a slightly different form from a standard Fourier series. In principle scipy.fftpack.rfft() should produce the same coefficients as scipy.optimize.least_squares(). However, I have not yet managed to make this happen.

@DanHickstein (Member)

Yeah, in my mind, I always thought of the FT as a magical way to simultaneously "fit" a bunch of cosine (and sine) functions to your data. So, it seems like we should be able to switch out the fitting for an FFT for a big speed gain. But, perhaps I am missing something here.

So, it sounds like this method is somewhat distinct from the FH, which is great. I will try to have a closer look through your code soon.

@stggh (Collaborator, Author) commented Apr 16, 2017

From what I can tell, extracting cosine amplitudes from the FFT spectrum requires some form of peak-finding: fine for a simple oscillating function, but not so clear-cut for a Gaussian shape?
[figure: fft]

Edit: updated figure with correct frequency amplitudes.

@DanHickstein (Member)

Isn't getting the amplitudes of the cosine functions as simple as taking the real part of the FFT?
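
(For anyone following along, a quick check of this claim; the single test cosine and its amplitude below are made up for illustration:)

import numpy as np

# For a pure cosine sampled over an exact number of periods, the normalized
# real part of the FFT returns the amplitude directly at the matching bin.
n = 200
x = np.linspace(0, 1, n, endpoint=False)   # endpoint=False keeps the periods exact
signal = 7*np.cos(2*np.pi*5*x)             # amplitude 7 at frequency 5

An = np.fft.rfft(signal).real/(n/2)        # cosine amplitudes A_0 ... A_{n/2}
print(np.round(An[:8], 3))                 # ~7 at index 5, ~0 elsewhere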

@stggh (Collaborator, Author) commented Apr 17, 2017

You are right: on a frequency scale the coefficients are A0 ... An. I had missed that subtlety.

@stggh (Collaborator, Author) commented Apr 21, 2017

Update:

I have not made much progress on reconciling the cosine series coefficients from scipy.fftpack.rfft with those from the least-squares fit of the cosine series to the (forward Abel transformed) profile.

The end aim is to use the fast rfft to yield the Fourier cosine series coefficients, rather than the (slower?) least-squares fitting.

The following test case is the sum of two cosine functions, 10 cos(2π·2r) + 5 cos(2π·8r). The output from rfft is as expected, with two frequencies at 2 and 8, corresponding to cosine series coefficients An = 10, 5. All good. Thanks, @DanHickstein. See the figure below (left column).

Running the abel/fourier.py inverse Abel transform on the forward Abel transformed cosine (signal) function returns the correct signal (sum of cosines) function. All good. See the right column of the figure.

However, I cannot correlate the coefficients returned from rfft with those from fourier.py. I would hope that they have some similarity, to provide for simple substitution in abel/fourier.py.

[figure: fft5]

[Edit:]
Output:

python3 fft5.py
Least-squares ---------------
Coefficients An from lsq fit:
[ 1.46035527e+01 2.21482816e-01 -7.84832739e-02 3.83092563e-01
-1.01436389e+01 -8.72142698e-02 -7.21669731e-03 5.10639305e-02
-3.73914968e-02 8.67044297e-02 -2.71182916e-02]

Fourier transform------------
frequencies:
[ 0. 0.99502488 1.99004975 2.98507463 3.9800995 4.97512438
5.97014925 6.96517413 7.960199 8.95522388 9.95024876]

Coefficients An from amplitude of fft at frequency n:
[ 0.15 0.18364941 10.12176371 -0.02333343 0.03258162
0.06200434 0.09961855 0.1960748 4.97852142 -0.20016258
-0.0940682 ]

Test code, requires abel/fourier.py:

import numpy as np
from scipy.fftpack import rfft, fftfreq
import matplotlib.pyplot as plt

import abel

def cosine(r):
    return 10*np.cos(2*np.pi*2*r/r[-1]) +\
            5*np.cos(2*np.pi*8*r/r[-1])

# signal
r = np.linspace(0, 1, 201)
signal = cosine(r)

n = len(r)
n2 = n//2

# Fourier transform --------------------
fourier_signal = rfft(signal)

# fourier_signal structure [y(0),Re(y(1)),Im(y(1)),...,Re(y(n/2))] n even
# restructure array to extract real (cosine) coefficients, this appears clumsy(?)
fft_signal = fourier_signal[0]
fft_signal = np.append(fft_signal, fourier_signal[1::2])/n2

# frequencies
freq_signal = fftfreq(signal.size, r[1]-r[0])[:n2]


# Least-squares -------------------

# forward transform using hansenlaw
forward_signal = abel.hansenlaw.hansenlaw_transform(signal, direction='forward')

# inverse Abel transform via fitting Fourier series
# fitting Fourier series to forward transformed profile
abel_inverse_of_forward_signal, An = abel.fourier.fourier_transform(
                                                  forward_signal, Nu=21)


# Results ----------------------
print("Least-squares ---------------")
print("Coefficients An from lsq fit:")
print(An[:11], end="\n\n")
print("Fourier transform------------")
print("frequencies:")
print(freq_signal[:11], end="\n\n")
print("Coefficients An from amplitude of fft at frequency n:")
print(fft_signal[:11])


# Plots ---------------

fig, ((axsignal, axforward), (axfft, axinverse)) = plt.subplots(2, 2)

# input signal 2x cosines with frequencies 2 and 8
axsignal.plot(r, signal)
axsignal.set_title(r"$10\cos(2\pi\, 2r)+5\cos(2\pi\, 8r)$")
axsignal.set_ylabel("signal")
axsignal.set_xlabel(r"radius ($r$)")

# the fast-Fourier transform
axfft.plot(freq_signal, fft_signal[:-1])
axfft.set_title("fft signal")
axfft.set_xticks([2, 8, 50])
axfft.set_yticks([5, 10])
axfft.set_xlabel(r"frequency ($n$)")
axfft.set_ylabel(r"Cosine coefficients $A_n$")
axfft.axis(xmin=0, xmax=50)

# forward Abel transform
axforward.plot(r, forward_signal)
axforward.set_title("forward Abel transform")
axforward.set_xlabel("radius ($r$)")

# inverse Abel transform via abel/fourier.py
# "should" look like the signal input
axinverse.plot(r, signal, '--', label='signal')
axinverse.plot(r, abel_inverse_of_forward_signal.T, label='fourier')
axinverse.set_title("inverse Abel transform")
axinverse.set_xlabel("radius ($r$)")
axinverse.legend(fontsize='smaller')

plt.subplots_adjust(hspace=0.6, wspace=0.3)
plt.savefig("fft5.png", dpi=75)
plt.show()

@stggh (Collaborator, Author) commented Apr 21, 2017

Oh, I now see. The rfft should be taken of the forward transform in order to compare the coefficients An.

@DanHickstein (Member)

So, you're able to replace the fitting procedure with an FFT? This should be much faster.

@stggh (Collaborator, Author) commented Apr 24, 2017

Unfortunately, no. I see no common pattern between the coefficients of the two methods. I think this may be due to the phase factors/complex components of rfft, although the lack of a simple relationship could be the consequence of limited brain power on my part.

At the moment I am packaging abel/fourier_expansion.py for PyAbel. The least-squares approach was employed by Pretzler, and it does allow the basis function f(r) to be tailored to suit a particular problem.

The method is slow, a factor of 2000 slower than two_point(!) for example_fourier_expansion.py, and the low-intensity background has oscillations due to the Fourier-series truncation. Note, though, that the oscillation decreases toward the image centre, the opposite behaviour to the other non-basis methods.

[figure: plot_example_fourier_expansion]

Perhaps this method belongs in a separate subsection of PyAbel inverse Abel transforms, along with onion_bordas: present for completeness, but not necessarily useful(?)

@stggh (Collaborator, Author) commented Apr 29, 2017

Update:

I have removed superfluous row storage for hbasis and fixed the basis storage functions. The fourier_expansion method is now usable; the time relative to two_point is now only ×200 for example_fourier.py ;-). Here is the abel.benchmark comparison. Note that in these timing measurements fourier_expansion.py uses a default Nu = n/10, which may disadvantage it:

In [1]: import abel

In [2]: abel.benchmark.AbelTiming()
Out[2]: 
PyAbel benchmark run on i386

time in milliseconds

=========    Basis Abel implementations ==========

Implementation      n = 301           n = 501       
----------------------------------------------------------
        basex_bs    4901.5           12773.9         
fourier_expansion_bs   12926.4           38194.5         
     linbasex_bs       7.1              18.8         
onion_peeling_bs       3.7               9.1         
  three_point_bs       9.7              27.1         
    two_point_bs       3.7               8.4         


=========  Forward Abel implementations ==========

Implementation      n = 301           n = 501       
----------------------------------------------------------
        direct_C      19.6              80.7         
   direct_Python      87.7             327.0         
       hansenlaw       7.4              17.3         


=========  Inverse Abel implementations ==========

Implementation      n = 301           n = 501       
----------------------------------------------------------
           basex       2.2              21.5         
        direct_C      19.3              84.2         
   direct_Python      99.2             332.7         
fourier_expansion    1013.7            3028.4         
       hansenlaw      17.8              41.0         
        linbasex      45.2             124.4         
    onion_bordas     321.3             888.5         
   onion_peeling       0.5               2.2         
     three_point       0.6               2.4         
       two_point       0.5               2.1  

and the pyabel_benchmark_plot.py (methods dropped once execution time > 5 sec):
[figure: plot_pyabel_benchmarks]

fourier_expansion is the slowest of all the PyAbel methods; however, I think it has a place in the PyAbel stable.

@stggh (Collaborator, Author) commented May 3, 2017

Finally! I have decoded the fft output into coefficients that agree with the least-squares fitting. The index for the real coefficients An appears to be 2*n - 1 (this is just the interleaved output layout of scipy.fftpack.rfft: [y(0), Re(y(1)), Im(y(1)), Re(y(2)), ...]). I will apply this to fourier_expansion.py soon.

A straight comparison of the fit of a cosine series to a two-frequency cosine function (frequencies 2 and 8), using rfft and least_squares:

[figure: fftvslsq]

(Edit: fixed poor code, deleted mouse-triggered duplicated code)

import numpy as np
from scipy.fftpack import rfft
from scipy.optimize import least_squares
import matplotlib.pyplot as plt

import abel

# fit fft, An to the same function

def cosine(r):
    return 10*np.cos(2*np.pi*2*r/r[-1]) +\
            5*np.cos(2*np.pi*8*r/r[-1])
    #return 5+np.cos(2*np.pi*7*r/r[-1])

def series(An, r):
    sum = np.zeros_like(r) 
    for n, an in enumerate(An):
        sum += an*np.cos(2*np.pi*n*r/r[-1]) 
    return sum

def residual(An, r, signal):
    return signal - series(An, r)
    

# signal
r = np.linspace(0, 1, 101)
signal = cosine(r)

n = r.size
n2 = n//2

# Fourier transform --------------------
fourier_signal = rfft(signal)/n2

# scipy.fftpack.rfft returns an interleaved *real* array,
# [y(0), Re(y(1)), Im(y(1)), Re(y(2)), Im(y(2)), ...],
# so the cosine coefficient Re(y(n)) sits at index 2*n - 1.
# (Coefficient extraction adapted from
#  http://stackoverflow.com/questions/4258106/how-to-calculate-a-fourier-series-in-numpy;
#  here the array is already real, so b is zero and unused.)
a0, a, b = fourier_signal[0], fourier_signal[1:-1].real,\
           -fourier_signal[1:-1].imag

# Least-squares -------------------
# fitting Fourier series
An = np.arange(21)

res = least_squares(residual, An, args=(r, signal))
An = res.x

# re-align fft coefficients
fA = np.zeros_like(An)
fA[0] = a0/2
fA[1:] = a[:fA.size*2-2:2]

for n in np.arange(21):
    print("{:2d}  {:5.2f}  {:5.2f}".format(n, An[n], fA[n]))


# Plots ---------------

plt.plot(r, signal, label="signal")
plt.plot(r, series(An, r), label="series")
plt.plot(r, series(fA, r), label="fft")
plt.legend()

plt.savefig("fftvslsq.png", dpi=75)
plt.show()
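
(Aside: a stripped-down check of the 2*n - 1 indexing mentioned above; the single test cosine is just an illustrative choice:)

import numpy as np
from scipy.fftpack import rfft

# scipy.fftpack.rfft packs its output as
# [y(0), Re(y(1)), Im(y(1)), Re(y(2)), Im(y(2)), ...],
# so the cosine coefficient for frequency m sits at index 2*m - 1.
n = 100
x = np.linspace(0, 1, n, endpoint=False)
signal = 7*np.cos(2*np.pi*3*x)        # single cosine, amplitude 7, frequency 3

out = rfft(signal)/(n/2)
print(round(out[2*3 - 1], 3))         # -> 7.0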

@DanHickstein (Member)

Awesome!

The explanation for why the indices of the coefficients from the FFT and from your fitting algorithm don't align is that the FFT is calculating the coefficients for different frequencies than what you are using in your fitting algorithm. If you look at the frequencies in both cases, then the Intensity-as-a-function-of-frequency plots line up perfectly. Just re-interpolate from the FFT to get the intensities at the exact frequencies you desire.

Alternatively, if we make sure to use the same frequencies for the least-squares and the FFT, then things match up.

I hacked up your code a bit to show how this works:

import numpy as np
# from scipy.fftpack import rfft
# FFTs are already in numpy, so no extra scipy import needed
from scipy.optimize import least_squares
from scipy.interpolate import InterpolatedUnivariateSpline
import matplotlib.pyplot as plt

import abel

def cosine(r):
    return 10*np.cos(2*np.pi*2*r/r[-1]) +\
            5*np.cos(2*np.pi*8*r/r[-1])
    #return 5+np.cos(2*np.pi*7*r/r[-1])

def series(An, r):
    sum = np.zeros_like(r) 
    for n, an in enumerate(An):
        sum += an*np.cos(2*np.pi*n*r/r[-1]) 
    return sum

def residual(An, r, signal):
    return signal - series(An, r)
    

n = 100
r = np.linspace(0, 2*np.pi, n)
leastsq_freqs = np.fft.rfftfreq(n, r[1]-r[0])

signal = cosine(r)

# Fourier transform --------------------
fourier_signal = np.fft.rfft(signal)/(0.5*r.size)
fourier_freqs  = np.fft.rfftfreq(signal.size, r[1]-r[0])

fft_DC_term, fft_cos_terms, fft_sin_terms = fourier_signal[0], fourier_signal[1:-1].real, -fourier_signal[1:-1].imag

interpolation = False
if interpolation: 
    # re-interpolate FFT to the frequencies of the least-squares
    # only needed if the fourier_freqs don't match the leastsq_freqs
    fourier_interp = InterpolatedUnivariateSpline(fourier_freqs, fourier_signal.real)
    fA = fourier_interp(leastsq_freqs)
else:
    fA = fourier_signal.real

# Least-squares:
An  = np.zeros_like(leastsq_freqs)
res = least_squares(residual, An, args=(r, signal))
An  = res.x

for n in np.arange(21):
    print("{:2d}  {:5.2f}  {:5.2f}".format(n, An[n], fA[n]))

# Plots ---------------
fig, axs = plt.subplots(2,1,figsize=(6,8))

axs[0].plot(fourier_freqs, fourier_signal.real, marker='o', color='r')
axs[0].plot(leastsq_freqs, An, marker='o', color='g')

axs[0].set_xlabel('Fourier frequency (1/rad)')
axs[0].set_ylabel('Fourier coefficient')

axs[1].plot(r, signal, label="signal")
axs[1].plot(r, series(An, r), label="series")
axs[1].plot(r, series(fA, r), label="fft")
axs[1].legend()
axs[1].set_xlabel('Angle (rad)')
axs[1].set_ylabel('Intensity')

plt.savefig("fft.png", dpi=200)
plt.show()

[figure: fft]

@stggh (Collaborator, Author) commented May 3, 2017

Thanks @DanHickstein!

The problem is that rfftfreq(n, d) returns frequencies [0, 1, ..., n//2]/(n*d), and hence the maximum frequency is always <= 0.5 for step d = 1; i.e., the frequency scale requires some form of calibration via the parameter d, otherwise we do not know where freq = 0, 1, 2, ... occur. It is not clear to me how to define d.

In [22]: np.fft.rfftfreq(10)
Out[22]: array([ 0. ,  0.1,  0.2,  0.3,  0.4,  0.5])
In [25]: np.fft.rfftfreq(20)
Out[25]: 
array([ 0.  ,  0.05,  0.1 ,  0.15,  0.2 ,  0.25,  0.3 ,  0.35,  0.4 , 0.45,  0.5 ])

@stggh (Collaborator, Author) commented May 4, 2017

Do we just normalize the length of our image to 2pi, so d = 2*np.pi/cols?

Edit, no: freq_max = 1/d/2

Edit2: Yes, d = (r[1]-r[0])/np.pi/2 puts the frequencies at the correct location = 2, 8

Edit3: but, then the maximum frequency is pi!
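
(A small sanity check of the d calibration, assuming we want the FFT bin index to read directly as "cycles per span"; choosing d = (sample spacing)/(span) does this, which reduces to Edit2's d when the span is 2π. The span and test frequencies below are arbitrary choices:)

import numpy as np

# With d = (sample spacing)/(full span), rfftfreq returns the bin index
# directly in cycles per span, so a cos(2*pi*2*r/span) component lands at
# frequency 2. endpoint=False keeps the periods exact.
n = 200
span = 2*np.pi
r = np.linspace(0, span, n, endpoint=False)
signal = 10*np.cos(2*np.pi*2*r/span) + 5*np.cos(2*np.pi*8*r/span)

d = (r[1] - r[0])/span                 # = 1/n
freqs = np.fft.rfftfreq(n, d)          # 0, 1, 2, ..., n/2 (cycles per span)
amps = np.fft.rfft(signal).real/(n/2)

print(freqs[:10])                      # [0. 1. 2. ... 9.]
print(np.round(amps[[2, 8]], 3))       # [10.  5.]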

@DanHickstein (Member)

Well, I think that the spatial frequencies that we work with are arbitrary, but they are basically defined in this case by the series function, specifically in this line:

sum += an*np.cos(2*np.pi*n*r/r[-1]) 

I think that this means that we want to have r run from 0 to 2pi in order to have the frequencies of the fft and the least-squares line up.

The maximum frequency is given by the highest frequency that is Nyquist sampled, so it depends on how many points we have in r. It seems fine for it to end up as pi in some situations...

@stggh (Collaborator, Author) commented May 10, 2017

Fail

I now realise that the basis row-intensity function is not a cosine series(!), and hence the fast Fourier transform (rfft) coefficients are not applicable. In the Pretzler model the basis function for the image row-intensity profile is assumed to be (equation numbered as in Pretzler's paper):

[equation image from Pretzler's paper]

i.e. not a cosine series.

rfft may be useful for a forward transform, where f(r) is a cosine series:
[equation image: f(r) expressed as a cosine series]

I will look into this next.
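
(For context, in standard notation rather than Pretzler's equation numbering: the measured row profile is the forward Abel transform of the radial distribution, so even if f(r) is a cosine series, its projection is not one:)

    f(r) = \sum_n A_n \cos\left(\frac{2\pi n r}{R}\right), \qquad
    F(y) = 2 \int_y^R \frac{f(r)\, r}{\sqrt{r^2 - y^2}}\, dr
         = 2 \sum_n A_n \int_y^R \frac{r \cos(2\pi n r / R)}{\sqrt{r^2 - y^2}}\, dr

i.e. each cosine term is smeared by the Abel integral, so F(y) is generally not itself a cosine series in y.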

@stggh (Collaborator, Author) commented May 13, 2017

The forward Abel transform works, providing a third PyAbel method to generate the projection.

Here is a comparison of the forward and inverse Abel transform of the 'curveA' function (see abel/tests) using the Fourier_expansion method:
[figure: fft_test]

@stggh (Collaborator, Author) commented Aug 1, 2017

I take back the fail (above): Fourier cosine coefficients may be obtained using a fast Fourier transform (rfft).

The secret is to duplicate (mirror) the function, to ensure that it is symmetric. I think this is a well-known fact that I had missed. :-(

This test code gist compares the two methods: cosine amplitude coefficients An extracted from a Fourier transform, and from a direct least-squares fit.

 n    lsq-An   fft-An
 0    -0.00     0.00
 1    -0.00     0.20
 2    10.00    10.09
 3    -0.00    -0.00
 4    -0.00     0.06
 5     0.00     0.09
 6    -0.00     0.13
 7    -0.00     0.23
 8     5.00     5.00
 9    -0.00    -0.14
10    -0.00     0.01
11     2.00     1.91
12    -0.00    -0.15
13    -0.00    -0.08
14     0.00    -0.06
15    -0.00    -0.04
16     0.00    -0.04
17     0.00    -0.03
18     0.00    -0.02
19    -0.00    -0.02

The figure shows a sum of three cosines (frequencies 2, 8, and 11), together with the same function recomputed from the fft and lsq coefficients:
[figure: fftvslsq]

PyAbel fourier_expansion method to follow ...
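
(A minimal sketch of the mirror-and-rfft idea described above, separate from the gist; the half-period test cosines here are hypothetical choices:)

import numpy as np

# A half-period cosine series on [0, R] is not periodic on [0, R], so a plain
# FFT leaks. Mirroring the profile gives one full period of an even, 2R-periodic
# function, whose DFT is purely real: the cosine amplitudes drop out directly.
N, R = 201, 1.0
r = np.linspace(0, R, N)
f = 10*np.cos(3*np.pi*r/R) + 5*np.cos(7*np.pi*r/R)

g = np.concatenate([f, f[-2:0:-1]])   # even (mirror) extension, length M = 2N - 2
M = g.size

A = np.fft.rfft(g).real               # imaginary parts ~ 0 by symmetry
A[0] /= M                             # DC term
A[1:] /= M/2                          # cosine amplitude A_n appears at bin n

print(np.round(A[:10], 3))            # 10 at n = 3, 5 at n = 7, ~0 elsewhere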

@liuxunchen commented Aug 26, 2018

Hello,
I want to share with you our work, recently published in a journal, using the Fourier transform (FT) method.
We compared several Abel inversion methods using the PyAbel package. A big thank you.
The comparison results show that, for our study (combustion diagnostics using a few tens of points from a point-to-point laser-absorption scan), the FT method works very well.
[figure: abel_compare]

We also used Tikhonov regularization, based on the PyAbel package. Basically:

Abel transform: A x = b
Tikhonov regularization: append the constraint λ L x = 0

so x is fitted from the stacked least-squares problem A_para_L x = b_0, where A_para_L = [A; λL] and b_0 = [b; 0] (as in the code below).

import numpy as np

def Tik(A, b, k, para_lambda, auto, para_start, para_end, N):
    # Tikhonov regularization: solve the stacked least-squares system
    #   A_para_L x = b_0,  with A_para_L = [A; lambda*L] and b_0 = [b; 0]
    # k is the ratio (< 1) of rows of L to rows of A
    # add_L() builds the regularization matrix L (defined elsewhere, not shown)
    # if auto == True, lambda is chosen automatically from the L-curve;
    # N is the number of lambda values to try between para_start and para_end

    s = A.shape[0]
    L = add_L(k, s)
    m = int(k*s)
    b_0 = np.vstack((b.reshape((s, 1)), np.zeros((m, 1))))
    if auto:
        rho = np.zeros(N)
        eta = np.zeros(N)
        para = np.linspace(para_start, para_end, N)
        for i in range(N):
            A_para_L = np.vstack((A, para[i]*L))
            x = np.linalg.lstsq(A_para_L, b_0.flatten(), rcond=None)[0]
            rho[i] = np.linalg.norm(np.matmul(A, x) - b.flatten())  # residual norm
            eta[i] = np.linalg.norm(np.matmul(L, x))                # solution seminorm
        # curvature of the L-curve (note that the 2nd-derivative array is shorter)
        logrho = np.log(rho)
        logeta = np.log(eta)
        kappa = 2*(np.diff(logrho)[:-1]*np.diff(logeta, 2)
                   - np.diff(logrho, 2)*np.diff(logeta)[:-1]) \
                / (np.diff(logrho)[:-1]**2 + np.diff(logeta)[:-1]**2)**1.5
        para_lambda = para[np.argmax(abs(kappa))]
        print('lambda =', para_lambda)
    A_para_L = np.vstack((A, para_lambda*L))
    x = np.linalg.lstsq(A_para_L, b_0.flatten(), rcond=None)[0]
    return x

DH edited 2018-08-28 to format code.
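
(For readers who want to try the stacked formulation without the add_L/flatten helpers above, here is a self-contained toy sketch; the forward operator, data, and first-difference L are hypothetical stand-ins, not the paper's setup:)

import numpy as np

# Tikhonov-regularized least squares:
#   minimize ||A x - b||^2 + lambda^2 ||L x||^2
# solved as ordinary least squares on the stacked system [A; lambda*L] x = [b; 0].
rng = np.random.default_rng(0)
n = 50
A = np.triu(np.ones((n, n)))                  # toy ill-conditioned forward operator
x_true = np.sin(np.linspace(0, np.pi, n))     # smooth "true" profile
b = A @ x_true + 0.5*rng.normal(size=n)       # noisy data

L = np.diff(np.eye(n), axis=0)                # first-difference regularization matrix
lam = 5.0                                     # regularization parameter (hand-picked)

A_stacked = np.vstack((A, lam*L))
b_stacked = np.concatenate((b, np.zeros(n - 1)))

x_plain = np.linalg.lstsq(A, b, rcond=None)[0]
x_reg = np.linalg.lstsq(A_stacked, b_stacked, rcond=None)[0]

print("rms error, plain lstsq :", np.sqrt(np.mean((x_plain - x_true)**2)))
print("rms error, regularized :", np.sqrt(np.mean((x_reg - x_true)**2)))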

@DanHickstein (Member)

Dear Professor Liu,

Apologies for the delayed reply. Thank you very much for presenting this method and for bringing your paper to our attention! We are delighted that PyAbel has been useful for your research, and that you have made a comparison of so many Abel transform methods.

We are very interested in your results, and would like to discuss them with you. But first I must confess that we are disappointed to see no mention of PyAbel in your manuscript. Instead, all of the methods are presented as if the authors of the manuscript are entirely responsible for their implementation. Since your manuscript deals with the technical details of the various Abel transform methods, it seems inappropriate to omit the fact that the methods were part of PyAbel.

Anyway, moving on to a more interesting discussion. You show that the performance of many methods that are not currently incorporated into PyAbel greatly exceeds that of the methods that are currently implemented. The logical next step is that we should attempt to incorporate these methods into PyAbel! I hope that you can help us understand these methods better and perhaps incorporate them into PyAbel.

I have several questions for you:

  1. Can you remake the figure that you sent using the latest version of PyAbel (v0.8.0)? We have been doing some accuracy benchmarks over the past few months, and we have implemented numerous bug-fixes to the direct and hansenlaw methods that may provide better agreement with analytical results. It would be interesting to see if this improves the performance in your analysis as well.

  2. Are the methods direct, hansenlaw, basex, twopoint, threepoint, and onion_peeling used without modification from PyAbel? Is the "Fourier analysis" method Pretzler's method, as implemented in @stggh's fourier branch, or is it a different implementation? Are the Fourier-Hankel and Tikhonov methods your own implementations?

  3. Can you provide a minimal working example of the tikhonov code that will transform a sample image? It looks like a very simple and practical method that we should incorporate into PyAbel immediately.

  4. How does this Tikhonov method relate to the Tikhonov regularization parameter that can be used in Basex? If you look at line 184 of basex.py you will see that we are currently setting the Tikhonov regularization parameter to zero. Recently (after discussions with Mikhail Ryazanov), I learned that setting this parameter to a higher value can be very important for smoothing the image and eliminating noise at low radius. Unfortunately, we do not currently allow this parameter to be modified in the function call, but you can hack the basex.py file to set the tikhonov factor to something higher, say (100,100) and you will see a large smoothing effect. Note that you will need to manually delete and regenerate the basis sets each time you change the Tikhonov factor. I think that if you use this, then you will get much better agreement for noisy data.

  5. Somewhat related to the previous question: can you comment on the amount of "smoothing" provided by the methods that perform well in the presence of noise (the fourier analysis, fourier expansion, and tikhonov methods)? Basically, I am wondering if the amount of smoothing provided by the transform method is directly related to the performance that you observe in the presence of noise. Indeed, the three-point method provides more smoothing than the two-point method, and the performance is correspondingly increased. I am also curious whether the excellent performance of the fourier analysis method (1.3%) could be matched by, for example, the three-point method if a Gaussian smoothing were applied to the image prior to the transform. In any case, a method with an adjustable smoothing parameter (like the tikhonov method) does seem advantageous, since it can be tailored to reduce the noise in the transformed image.

  6. On page 170, why do you cite the direct methods as Ref. 40? This is the reference for BASEX. We do not know of a reference for the direct method, so if you know of one, please tell us.

  7. Which additional methods would you suggest incorporating into PyAbel? Fourier expansion, Hankel-Fourier, Tikhonov, and Fourier analysis? Which is the highest-priority method?

  8. Would you be willing to help us incorporate one or more of these methods into PyAbel?

Thanks again for your nice comparison and for sending us the manuscript. We look forward to working with you to make PyAbel better and more useful for everyone!

With best regards,

Dan

@DanHickstein (Member)

@liuxunchen, did you get a chance to take a look at these questions?

@liuxunchen commented Dec 5, 2018 via email
