TLS failing for Kepler prime mission data #102

Open
lgbouma opened this issue May 21, 2022 · 5 comments
lgbouma commented May 21, 2022

Describe the bug

When running TLS on light curves from the prime Kepler mission, the algorithm sometimes (but not always) fails to find viable solutions, even though alternative implementations of BLS succeed. The following warning is raised:

transitleastsquares/main.py:205: UserWarning: No transit were fit. Try smaller "transit_depth_min"

and the chi2 at every proposed period is returned as the number of data points in the light curve. Per #86, this happens because TLS does not find any proposed period or duration that gives a lower chi^2 residual than a pure straight line. The point of this bug report is that there is a set of Kepler light curves for which TLS should be finding good, finite solutions, but it appears not to be doing so.

To Reproduce

I'm using TLS version 1.0.31. Attached are two light curves that exemplify the bug:
KOI-7368_npoints50000.csv
KOI-7368_npoints5000.csv

When I run TLS on the former light curve, I get the aforementioned chi2 = 50,000 (the number of data points) at every trial period. When I run it on the latter, which is just the first 5,000 points of the same time series, I get a finite periodogram.

Here's a MWE:

import pandas as pd, numpy as np
from transitleastsquares import transitleastsquares
import multiprocessing as mp
n_threads = mp.cpu_count()

csvpath = 'KOI-7368_npoints50000.csv'
df = pd.read_csv(csvpath)
time, flux = np.array(df.time), np.array(df.flux)

model = transitleastsquares(time, flux, verbose=True)
results = model.power(
    use_threads=n_threads, show_progress_bar=True, R_star=0.88,
    R_star_min=0.85, R_star_max=0.91, M_star=0.88, M_star_min=0.85,
    M_star_max=0.91, period_min=5, period_max=10, n_transits_min=10,
    transit_template='default', transit_depth_min=10e-6, oversampling_factor=5
)

# raises the following warning:
# /home/lbouma/miniconda3/envs/py37/lib/python3.7/site-packages/transitleastsquares/main.py:205:
# UserWarning: No transit were fit. Try smaller "transit_depth_min"

print(results['period'])
# gives nan! (since the chi^2 = 50,000 == the number of data points for all
# proposed periods)

Desktop (please complete the following information):

  • OS: Linux (Debian). Have also seen the behavior on macOS.
  • Python version: 3.7 and 3.8
  • TLS Version: 1.0.31

Additional context

Dependence on parameters that shouldn't matter

This behavior seems to change with both (1) the number of data points in the light curve, and (2) the amplitude of the transit signals in the data. I've described the first (e.g., using 5,000 points, the above code gives finite results). Regarding the second, if one injects a large transit signal into the data, TLS sometimes actually converges and finds the correct period. However, this signal needs to be much larger than transit_depth_min (~2000 ppm depth, versus the 10 ppm default minimum). I think this behavior might suggest that the root problem is related to the default TLS period grid, but I'm not certain.
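
For concreteness, here is a minimal sketch of the kind of injection test described above; the box-shaped transit model and the specific depth/ephemeris values are illustrative assumptions, not the exact code I used:

import numpy as np

def inject_box_transit(time, flux, period, depth, duration, t0):
    # Inject a simple box-shaped transit; depth is in relative flux units.
    phase = ((time - t0) / period) % 1.0
    in_transit = phase < (duration / period)
    flux = flux.copy()
    flux[in_transit] -= depth
    return flux

# e.g., a ~2000 ppm injected signal that TLS does recover, versus the
# ~400-500 ppm signals it misses (hypothetical parameter choices):
flux_injected = inject_box_transit(time, flux, period=7.5, depth=2000e-6,
                                   duration=0.15, t0=time.min())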

The desired behavior

The KOI-7368 light curve I attached has had a stellar rotation signal removed. It contains a real 6.843 day transit with depth ~500 ppm, and also an injected 7.202 day transit with depth ~400 ppm. It would be nice if we could get TLS to find these signals.

Here's the periodogram astropy BLS gives (it takes hours to run because it is not multithreaded):

[Figure: astropy BLS periodogram of the 50,000-point light curve]

Here's the periodogram astrobase BLS gives (with a different searched period grid):

[Figure: astrobase BLS periodogram of the 50,000-point light curve]
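
For reference, a minimal sketch of how such a BLS comparison can be run with astropy's BoxLeastSquares; the period sampling and duration grid here are assumptions, since the exact settings aren't given above:

import numpy as np
from astropy.timeseries import BoxLeastSquares

periods = np.linspace(5, 10, 20000)      # same 5-10 day range as the TLS call
durations = np.linspace(0.05, 0.3, 10)   # assumed trial transit durations [days]

bls = BoxLeastSquares(time, flux)
periodogram = bls.power(periods, durations)  # single-threaded, hence slow
print(periods[np.argmax(periodogram.power)])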

lgbouma added the bug label on May 21, 2022
lgbouma commented May 21, 2022

PS: thank you for writing this very useful code!

hippke commented May 22, 2022

Hi Luke, sorry you're having trouble.
I don't know what's causing this, but I suspect that the signal is overshadowed by outliers.
When I clean the data like this:

from astropy.stats import sigma_clip

flux = sigma_clip(flux, sigma_upper=3, sigma_lower=3)
time = time[~flux.mask]   # drop the clipped points from the time axis too
flux = flux.compressed()  # masked array -> plain ndarray

Then I get a strong signal with a period of 7.20195 d.
The phase-fold looks like a transit:
[Figure: phase-folded light curve at the 7.20195 d period]

This doesn't solve the question, but may be a work-around. Questions to better understand the issue:

  • Can you try more aggressive filtering of your data?
  • For how many light curves does this come up?

lgbouma commented May 23, 2022

Hi Michael,

Thank you for this response! Responding to your questions:

Can you try more aggressive filtering of your data?
Sure, the filtering can be arbitrarily aggressive... up to the usual point of not wanting to overfit transits!

For how many light curves does this come up?
I've tried running TLS on 4 Kepler light curves so far, and I got this behavior for 2 of them -- it worked for Kepler-1627 and Kepler-1643, and failed for KOI-7368 and KOI-7913. The latter two have the weakest transit signals.

Possible solution

I was intrigued, though, that sigma clipping helped mitigate the issue. The power spectrum I got after sigma clipping as you suggested looked like the following (I've even added axis labels; "power" here means SDE):

[Figure: TLS SDE periodogram after sigma clipping, with the default dy]

so indeed, the (injected) 7.202 day signal was recovered. As a quick check, I then tried masking the 7.202 day signal, to see whether the (real) 6.843 day signal would also be recoverable. I again found that TLS yielded a fixed chi^2 at all trial periods -- it failed in the same way. This was surprising, since in the BLS spectra from my initial message the 6.843 day signal was the dominant one!

I thought through this a bit more, and eventually formulated the idea that the issue must be tied to the definition of chi^2, and the comparison being done in core.lowest_residuals_in_this_duration between the transit model and a straight line (permalink). chi^2 only has so many components, so I thought: what if the issue is with the definition of the uncertainties? TLS by default assumes dy = np.std(y). Perhaps this isn't the correct definition for these processed Kepler light curves (the subtraction of stellar rotation signals can yield large residuals, which drive up the standard deviation).

This turned out to be a good idea, I think. If I set the uncertainties to, say, the mean point-to-point RMS across the light curve, I get an SDE spectrum that makes more sense: (a) there's an actual noise floor, and (b) both the 7.202 and 6.843 day signals are present. The same procedure, run a second time with the first signal masked, successfully finds both:

[Figure: TLS SDE periodogram after sigma clipping, with point-to-point dy]

I think this means it is more "user error" than a bug in TLS. However, I wonder whether dy = np.std(y) is the most robust default assumption (validate.py, L40). I'll leave this issue open so that you can consider whether you want to change this assumption in a future patch, but feel free to consider this particular case closed. Thank you for your help!

-Luke

PS: here's the updated MWE that finds both the real planet in the KOI-7368 light curve from above, as well as the injected one. The key fix was passing an explicit set of dy values.

import pandas as pd, numpy as np, matplotlib.pyplot as plt
from transitleastsquares import transitleastsquares
import multiprocessing as mp
from astropy.stats import sigma_clip
n_threads = mp.cpu_count()

def p2p_rms(flux):
    """
    Robust ~1-sigma width of the distribution of the point-to-point
    differences dF_i = F_{i+1} - F_i, measured about their median.
    """
    dflux = np.diff(flux)
    med_dflux = np.nanmedian(dflux)
    up_p2p = (
        np.nanpercentile( dflux-med_dflux, 84 )
        -
        np.nanpercentile( dflux-med_dflux, 50 )
    )
    lo_p2p = (
        np.nanpercentile( dflux-med_dflux, 50 )
        -
        np.nanpercentile( dflux-med_dflux, 16 )
    )
    p2p = np.mean([up_p2p, lo_p2p])
    return p2p

csvpath = 'KOI-7368_npoints50000.csv'
df = pd.read_csv(csvpath)
time, flux = np.array(df.time), np.array(df.flux)

# NOTE: not actually necessary for TLS to converge, but a good idea
# regardless because it lowers the noise floor in the periodogram
flux = sigma_clip(flux, sigma_upper=3, sigma_lower=3)
stime = time[~flux.mask]
sflux = flux[~flux.mask]

dy = np.ones_like(stime)*p2p_rms(sflux)
model = transitleastsquares(stime, sflux, dy=dy, verbose=True)
results = model.power(
    use_threads=n_threads, show_progress_bar=True, R_star=0.88,
    R_star_min=0.85, R_star_max=0.91, M_star=0.88, M_star_min=0.85,
    M_star_max=0.91, period_min=5, period_max=10, n_transits_min=10,
    transit_template='default', transit_depth_min=10e-6, oversampling_factor=5
)

plt.close("all")
plt.plot(results['periods'], results['power'], lw=0.5)
plt.xlabel('period [days]')
plt.ylabel('power')
plt.savefig('tls_mwe_sigclip_run1.png', dpi=400)

from transitleastsquares import transit_mask, cleaned_array

intransit = transit_mask(stime, results.period, 2*results.duration, results.T0)

y_second_run = sflux[~intransit]
t_second_run = stime[~intransit]
t_second_run, y_second_run = cleaned_array(t_second_run, y_second_run)

dy = np.ones_like(t_second_run)*p2p_rms(y_second_run)
model2 = transitleastsquares(t_second_run, y_second_run, dy=dy, verbose=True)
results2 = model2.power(
    use_threads=n_threads, show_progress_bar=True, R_star=0.88,
    R_star_min=0.85, R_star_max=0.91, M_star=0.88, M_star_min=0.85,
    M_star_max=0.91, period_min=5, period_max=10, n_transits_min=10,
    transit_template='default', transit_depth_min=10e-6, oversampling_factor=5
)

print(results2['period'])

plt.close("all")
plt.plot(results2['periods'], results2['power'], lw=0.5)
plt.xlabel('period [days]')
plt.ylabel('power')
plt.savefig('tls_mwe_sigclip_run2.png', dpi=400)

hippke commented May 25, 2022

Hey, that's a very cool find. I totally agree that the issue is non-Gaussian errors. We could add a feature to TLS to calculate the uncertainty in a more sophisticated way, and move away from std in case the errors are non-Gaussian. What is the method p2p_rms in your script? What would you recommend to determine non-Gaussian uncertainty?

lgbouma commented May 25, 2022

I can imagine two alternatives (though I'm sure there are more!).

The simpler approach is to assume that the non-Gaussianity is mostly driven by a few outliers. If so, the standard deviation will be inflated by these outliers, but a statistic like the 84th-minus-50th percentile difference, or its lower-sided counterpart, will not be.

Another way to produce non-Gaussianity is if the user hasn't even attempted to remove a coherent signal that is present in the light curve (e.g., few-day stellar rotation, or scattered light at the edges of TESS orbits). The point-to-point RMS statistic I quoted above is robust to these kinds of trends too, because it works on the point-to-point difference time series, which removes them.

Here's the output of an example code comparing these statistics against the standard deviation:

Seed 42
Gaussian errors (σ=0.05):
P2P RMS: 0.0704, STDEV: 0.0489, 68 percentile: 0.0481
Gaussian errors + 5% outliers:
P2P RMS: 0.0806, STDEV: 0.2176, 68 percentile: 0.0520
Gaussian errors + 5% outliers + line:
P2P RMS: 0.0806, STDEV: 0.2810, 68 percentile: 0.2156

Code that generates this is as follows.

import numpy as np

def sixtyeight(flux):
    """
    Robust ~1-sigma width of the flux distribution: the mean of the
    84th-50th and 50th-16th percentile differences about the median.
    """
    med_flux = np.nanmedian(flux)

    up_width = (
        np.nanpercentile( flux-med_flux, 84 )
        -
        np.nanpercentile( flux-med_flux, 50 )
    )
    lo_width = (
        np.nanpercentile( flux-med_flux, 50 )
        -
        np.nanpercentile( flux-med_flux, 16 )
    )

    return np.mean([up_width, lo_width])

def p2p_rms(flux):
    """
    Robust ~1-sigma width of the distribution of the point-to-point
    differences dF_i = F_{i+1} - F_i, measured about their median.
    """
    dflux = np.diff(flux)
    med_dflux = np.nanmedian(dflux)

    up_p2p = (
        np.nanpercentile( dflux-med_dflux, 84 )
        -
        np.nanpercentile( dflux-med_dflux, 50 )
    )
    lo_p2p = (
        np.nanpercentile( dflux-med_dflux, 50 )
        -
        np.nanpercentile( dflux-med_dflux, 16 )
    )

    return np.mean([up_p2p, lo_p2p])

for seed in [42, 3141]:

    print(42*'=')
    print(f"Seed {seed}")

    np.random.seed(seed)
    n_points = 1000
    scale = 0.05

    err = np.random.normal(loc=0, scale=scale, size=n_points)
    y = np.ones(n_points) + err

    print(f"Gaussian errors (σ={scale}):")
    print(f"P2P RMS: {p2p_rms(y):.4f}, STDEV: {np.std(y):.4f}, 68 percentile: {sixtyeight(y):.4f}")

    outlier_ind = np.unique(np.random.randint(0, n_points, size=50))
    outlier_val = np.random.uniform(low=-0.1, high=0.1, size=len(outlier_ind))

    y[outlier_ind] = outlier_val

    print("Gaussian errors + 5% outliers:")
    print(f"P2P RMS: {p2p_rms(y):.4f}, STDEV: {np.std(y):.4f}, 68 percentile: {sixtyeight(y):.4f}")

    line = np.linspace(-0.3,0.3,n_points)
    y += line

    print("Gaussian errors + 5% outliers + line:")
    print(f"P2P RMS: {p2p_rms(y):.4f}, STDEV: {np.std(y):.4f}, 68 percentile: {sixtyeight(y):.4f}")
