
Add flux calibration #136

Open · wants to merge 43 commits into main
Conversation

kfkaplan
Collaborator

The ultimate goal of add_flux_calibration is to port my absolute (and relative) flux calibration code from plotspec to muler. This includes the ability to read in synthetic stellar spectra using gollum and to estimate slit losses for IGRINS. The basic idea is that, with a standard star, you can match a synthetic spectrum from gollum and use it to flux calibrate your science spectrum. So far I have added the ability to resample synthetic stellar spectra from gollum into muler. There is more work to be done, so this pull request should not be merged until it is ready.

@gully
Member

gully commented Jun 22, 2023

Is there anything specific about the spectrum requiring it to come from gollum? If not, I would recommend making the name more agnostic to the heritage of the spectrum. .resample_gollum() could then become simply .resample().

Currently gollum offers a .resample() method: resampling the model spectrum to the coordinates of an input data spectrum. It appears the spirit of this Pull Request is the same operation, but expressed as a method on the data, rather than a method on the model. That is almost redundant functionality, except that as pointed out above, the input spectrum need not arise from gollum, so I agree that muler users who wish to resample a data spectrum to some other spectrum---of any origin---ought to have a simple way to do so. So, yes, I agree with moving forward on a .resample() method.

I recommend breaking this problem into pieces:

  1. Make a .resample() method on EchelleSpectrum, as you've done, that takes in a single input.
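As a sketch of what such a `.resample()` could boil down to (illustrative only; the helper name is hypothetical, and a flux-conserving resampler such as the one specutils provides would likely be preferable in practice to plain linear interpolation):

```python
import numpy as np

def resample_flux(model_wl, model_flux, data_wl):
    """Resample a model spectrum onto the wavelength grid of a data spectrum.

    A minimal sketch using linear interpolation; specutils offers
    flux-conserving resamplers that would be preferable for real data.
    """
    return np.interp(data_wl, model_wl, model_flux)
```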

We should consider whether the option to blend two spectra in a mixture-model configuration should be supported, and/or treated separately.

The purpose you are describing resembles model grid interpolation, and so the question arises of whether we ought to just directly support Grid Interpolation in gollum. For example, a user could simply request a model spectrum at $T_\mathrm{eff} = 5880$ K, instead of having to state 20% of a 5800 K model plus 80% of a 5900 K model. Grid interpolation is offered through other avenues, like Starfish. Still, I am open to offering Grid interpolation in gollum.
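For concreteness, linear grid interpolation in $T_\mathrm{eff}$ amounts to something like this sketch (hypothetical helper; real grid interpolation, e.g. in Starfish, uses more sophisticated schemes such as spectral emulators):

```python
import numpy as np

def interp_teff(flux_lo, flux_hi, teff_lo, teff_hi, teff):
    """Linearly interpolate between two bracketing model spectra in Teff.

    A sketch of the grid interpolation discussed above; `flux_lo` and
    `flux_hi` are the model fluxes at the two bracketing temperatures,
    already on a common wavelength grid.
    """
    w = (teff - teff_lo) / (teff_hi - teff_lo)  # weight on the hotter model
    return (1.0 - w) * flux_lo + w * flux_hi
```

So a 5880 K request becomes a 20%/80% blend of the 5800 K and 5900 K grid points.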

It may seem harmless to go ahead and add the function anyways, but the maintenance burden of the API grows as we add methods that duplicate functions elsewhere. So my preference would be a disciplined split between muler and gollum that segregates operations on the models in gollum.

@kfkaplan
Collaborator Author

kfkaplan commented Jun 22, 2023

It's not really specific to gollum, but it's needed for gollum to work. There are peculiarities in converting gollum's spectra to EchelleSpectrum and EchelleSpectrumList objects. I'll generalize everything so that it works with gollum, but really you can input anything, as long as it is some sort of specutils Spectrum1D-like object.

You are also right that stacking multiple input spectra resembles interpolating on a model grid. While that was one of the reasons I added the functionality to combine two models, there are other use cases: I have run into cases where I need gollum models at different RVs to account for peculiar line profiles, or needed multiple models to fit spectroscopic binaries or contaminated spectra. I'm going to generalize this so the user can input one spectrum, or an arbitrarily long list of spectra to combine. That keeps things general and covers all the use cases.
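A sketch of what combining an arbitrary list of input spectra could look like (hypothetical helper, not the final muler API; each input is resampled onto a common grid, then summed with weights):

```python
import numpy as np

def combine_models(wavelength, models, weights=None):
    """Combine an arbitrary list of model spectra into one composite.

    `models` is a list of (wl, flux) pairs; each is interpolated onto
    the common `wavelength` grid before the weighted sum, covering the
    binary/contamination/multi-RV use cases with one code path.
    """
    if weights is None:
        weights = np.ones(len(models)) / len(models)
    total = np.zeros_like(wavelength, dtype=float)
    for (wl, flux), w in zip(models, weights):
        total += w * np.interp(wavelength, wl, flux)
    return total
```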

@gully
Member

gully commented Jun 22, 2023

I recommend we make or modify a tutorial to show how this feature could be used-- we'll want one eventually, anyways, and having one now will help me and others more clearly see the use case.

I'll attempt to add one to this PR...

@gully
Member

gully commented Jun 22, 2023

Ok, I uploaded a tutorial to this PR to serve as a sandbox for these ideas. In it I model how one would currently go about flux calibrating with muler and gollum.

One key finding is that we should support more methods on the PhoenixGrid object. That is straightforward to do, and I recommend that.

@kfkaplan
Collaborator Author

Well, adding the ability to read 2D IGRINS spectra into muler turned out to be a lot easier than I thought! Specutils naturally handles N-dimensional spectra, so it was able to handle the 2D IGRINS FITS files just fine, and the muler classes inherit the same ability.

Member

@gully gully left a comment

Neat Pull request, thank you @kfkaplan !
I put some review comments here. There's enough code in here that it makes me wonder whether we should have a dedicated IGRINSSlit class. We could then support quality-assurance operations like .plot(), and it would help modularize some of the functions you have here.

It could look something like this:

class IGRINSSlit(object):  # or pd.DataFrame?
    # do some __init__ here with all the preferred/optional inputs (PA, guiding error, etc.)
    # read in all the json files, etc.
    [...]

    def estimate_fwhm(self):
        [...]

    def make_moffat_model(self):
        [...]

    def plot1D(self):
        [...]

    def plot2D(self):  # or "visualize_scene"
        [...]

    def estimate_slit_losses(self):
        [...]

The benefit here is we can then use our muler method chaining strategy:

slit = IGRINSSlit(filename)
slit.plot2D()

spec.absolute_flux_calibrate(slit).plot()

m = (f_through_slit_K - f_through_slit_H) / ((1/2.2) - (1/1.65))
b = f_through_slit_H - m*(1/1.65)
if print_info:
print('H-band slit throughput: ', f_through_slit_H)
Member

Use log.info instead of print, so that the default behavior is not to print.
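A minimal sketch of what that could look like (the `muler` logger name and the `report_throughput` helper are illustrative, not the actual muler API):

```python
import logging

log = logging.getLogger("muler")

def report_throughput(f_through_slit_H):
    # log.info is silent at the default WARNING level, unlike print,
    # so quiet is the default and users can opt in to the output.
    log.info("H-band slit throughput: %s", f_through_slit_H)
```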




def getIGRINSSlitThroughputABBACoefficients(file, slit_length=14.8, PA=90, guiding_error=1.5, print_info=True):
Member

Method names should be all lowercase and separated by underscores, per PEP 8.

Member

You can use black to fix this, I think. I know this seems pedantic, but it helps future contributors know which identifiers are classes, methods, etc. And it's not necessarily gospel; for example, ABBA could remain capitalized.

slitProfileFilepath: string
Returns the file path to .slit_profile.json file.
"""
if ".spec_a0v.fits" in filepath: #Grab base file name for the uncertainity file
Member

This is fine for now but eventually we will want to replace this with a fix for #144

@@ -71,6 +75,87 @@ def getUncertainityFilepath(filepath):
"Neither .variance.fits or .sn.fits exists in the same path as the spectrum file to get the uncertainity. Please provide one of these files in the same directory as your spectrum file."
)

def getSlitProfileFilepath(filepath, band):
Member

This should not need band as an input, since band can be derived from the filename.

Collaborator Author

This def (now getSlitProfile) needs to be able to grab both bands, even though each file covers only one band. This is for determining the wavelength-dependent slit throughput.

guiding_error: float
Estimate of the guiding error in arcsec. This smears out the PSF fits in the East-West direction.
This should be used carefully and only for telescopes on equatorial mounts.
print_info: bool
Member

We should move towards log.info() for any printing.

Collaborator Author

I tried this and the information didn't print. I almost always want this information output to the command line, so do we want the logging level to default to INFO?
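One possible middle ground, sketched with the standard library's logging module (the function name is hypothetical): keep the library quiet by default, but give users a one-line opt-in that raises the muler logger to INFO:

```python
import logging

def enable_verbose_output():
    """Opt in to INFO-level messages on the command line.

    Rather than defaulting the whole library to INFO, the user (or a
    muler convenience call like this) raises the logger's level.
    """
    logging.basicConfig(format="%(message)s", level=logging.INFO)
    logging.getLogger("muler").setLevel(logging.INFO)
```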

file:
Path to fits file (e.g. spec.fits) from which the slit_profile.json file is also in the same directory.
These should all be in the same IGRINS PLP output directory.
slit_length: float
Member

This could/should be derived from the slit header, based on which telescope IGRINS was on, correct?

json_obj = json.load(json_file)
x = np.array(json_obj['profile_x']) * slit_length
y = np.array(json_obj['profile_y'])
f_through_slit_K = estimate_slit_throughput_ABBA(y, x=x, slit_length=slit_length, slit_width=slit_width, PA=PA, print_info=print_info)
Member

What would it take to propagate the uncertainty from the slit losses?

Collaborator Author

I'm not sure exactly. You could maybe run a Monte Carlo with some distribution of Moffat function positions and parameters and some distribution of estimated guiding errors.
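A rough sketch of that Monte Carlo idea in 1D, with placeholder distributions for the guiding jitter and seeing (all numbers and function names here are assumptions for illustration, not measured values):

```python
import numpy as np

rng = np.random.default_rng(42)

def moffat_1d(x, x0, alpha, beta=2.5):
    # 1D Moffat profile (unnormalized)
    return (1.0 + ((x - x0) / alpha) ** 2) ** (-beta)

def mc_slit_loss_uncertainty(slit_width=1.0, n_draws=500):
    """Estimate the scatter in slit throughput by Monte Carlo.

    Draw Moffat centroid offsets (guiding error) and widths (seeing)
    from assumed placeholder distributions, then measure the spread in
    the fraction of light that lands inside the slit.
    """
    x = np.linspace(-10.0, 10.0, 2001)   # uniform grid, arcsec
    in_slit = np.abs(x) <= slit_width / 2.0
    fractions = np.empty(n_draws)
    for i in range(n_draws):
        x0 = rng.normal(0.0, 0.2)          # guiding jitter (assumed)
        alpha = abs(rng.normal(1.0, 0.1))  # seeing variation (assumed)
        profile = moffat_1d(x, x0, alpha)
        # sums stand in for integrals on the uniform grid
        fractions[i] = profile[in_slit].sum() / profile.sum()
    return fractions.mean(), fractions.std()
```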

gg_init = g1 + g2
fitter = fitting.TRFLSQFitter()
gg_fit = fitter(gg_init, x, y)
if print_info:
Member

log.info

if print_info:
print('FWHM A beam:', gg_fit[0].fwhm)
print('FWHM B beam:', gg_fit[1].fwhm)
#Numerically estimate light through slit
Member

Hypothetically, could this be pre-computed? You always assume the Moffat is symmetrically placed in the center of the slit, correct? (You don't have any way to estimate the position of the centroid relative to the dispersion direction from the profiles alone; you would need the slit-viewing camera for that.)

So you could precompute and cache a grid of "truncated Moffat2Ds". It would be a 3D grid, which is slightly annoying to store. But I imagine it's smooth and could be interpolated.
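As a proof of concept, here is the precompute-and-interpolate idea in 1D (the real version discussed above would be the 3D grid; all names and grid choices here are illustrative):

```python
import numpy as np

def truncated_moffat_fraction(alpha, beta, half_width):
    # Fraction of a slit-centered 1D Moffat profile landing inside the slit
    x = np.linspace(-20.0, 20.0, 4001)
    profile = (1.0 + (x / alpha) ** 2) ** (-beta)
    return profile[np.abs(x) <= half_width].sum() / profile.sum()

# Precompute a cache over seeing width (alpha) once, then interpolate at
# query time; the throughput is smooth in alpha, so this works well.
alphas = np.linspace(0.3, 3.0, 28)
cache = np.array([truncated_moffat_fraction(a, 2.5, 0.5) for a in alphas])

def cached_fraction(alpha):
    return np.interp(alpha, alphas, cache)
```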

Collaborator Author

Yes, the assumption is that the Moffat function is always centered (in the dispersion direction) on the slit. Obviously this is not a perfect assumption, but on average the guiding software tends to keep things symmetrically centered over the slit. The biggest issue is probably when using an off-slit guide star, so it is definitely a problem that can come up.

We could precompute "truncated Moffat2Ds"; I don't see why not. At this point, though, it's really going to be on the user to figure out where the true center is, since writing an automated way to read in a series of SVC images and combine that with the PSFs seen on the spectrometer's detectors seems like an involved task. Ideally, though, that is how it would be done.

Collaborator Author

I actually have tried making a "truncated" Moffat function in 1D by fixing the y axis and making it a parameter in the fit (stop by my office if you want to see the math). Surprisingly, it does not seem to fit the slit profile any better than an ordinary Moffat function. Fitting two Moffat functions, or a Gaussian and a Moffat, per beam gives a much better fit, but then it's not obvious to me how to extract a FWHM from that.
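One way around the no-analytic-FWHM problem is to measure the FWHM of the composite model numerically; a sketch (hypothetical helper, not part of this PR):

```python
import numpy as np

def numerical_fwhm(x, profile):
    """Measure the FWHM of an arbitrary (possibly composite) profile.

    Locates the two half-maximum crossings by linear interpolation, so
    it works for a sum of Moffats, or a Gaussian plus a Moffat, where
    no closed-form FWHM exists.
    """
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]

    def crossing(i_below, i_above):
        # linearly interpolate x at which the profile equals `half`
        frac = (half - profile[i_below]) / (profile[i_above] - profile[i_below])
        return x[i_below] + frac * (x[i_above] - x[i_below])

    left = crossing(i0 - 1, i0) if i0 > 0 else x[0]
    right = crossing(i1 + 1, i1) if i1 < len(x) - 1 else x[-1]
    return right - left
```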

Kyle Kaplan added 4 commits July 11, 2023 13:46
…ectra. Should allow multiple other defs dependent on apply_numpy_mask to also handle 2D spectra.
Kyle Kaplan added 4 commits August 24, 2023 18:44
…ly trim overlapping areas of different orders in a spectrum list so that they can easily be combined by stitch().
…int sources. Should be more accurate now when using .spec2d.fits files instead of the estimates of the slit profiles from the pipeline, due to avoiding the sides of an order where the focus isn't as good.
@kfkaplan
Collaborator Author

@gully So I've run into an interesting issue and I wanted to get your (and others', if anyone wants to chime in) opinion on this. My absolute flux calibration code uses a few external Python packages, namely dust_extinction (https://dust-extinction.readthedocs.io/en/stable/#) and tynt (https://tynt.readthedocs.io/en/stable/). Should I just go ahead and work these into muler, even though it will add to the dependencies, or should I try to avoid bloating the dependency list? It's nice to import this functionality directly, but I could come up with some workarounds.

@gully
Member

gully commented Aug 30, 2023

Thanks for the question @kfkaplan, and well-put. There is a risk of entering dependency hell, so you're right that we should be mindful of---but not too risk-averse to---adding dependencies. I recommend one of three strategies:

1. Make the dependencies optional

For a worked example see lightkurve's interact.py module, which relies upon bokeh heavily, but optionally imports it. This strategy makes it possible to test the code, but not have it break at the installation phase (missing imports).

First try to import the requisite functions inside a try:
https://github.com/lightkurve/lightkurve/blob/7d485b69e9bbe58a1e7ba8d988387dc5d469ab36/src/lightkurve/interact.py#L36

...then, you can more-or-less treat the use of those functions as given after that, until the moment of its first use:
https://github.com/lightkurve/lightkurve/blob/7d485b69e9bbe58a1e7ba8d988387dc5d469ab36/src/lightkurve/interact.py#L1329

...where you catch the conceivable possibility that the dependency does not exist.
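Applied to muler, the optional-import pattern might look like this sketch (`HAS_TYNT` and `get_filter_transmission` are hypothetical names, and the `FilterGenerator().reconstruct()` call reflects my reading of the tynt docs):

```python
# Attempt the optional import once, at module load, without breaking
# installation or testing when the dependency is absent.
try:
    import tynt  # optional dependency
    HAS_TYNT = True
except ImportError:
    HAS_TYNT = False

def get_filter_transmission(filter_name):
    """Look up a filter curve, failing gracefully if tynt is missing."""
    if not HAS_TYNT:
        # ...the moment of first use: catch the missing dependency here
        raise ImportError(
            "This feature requires the optional dependency tynt; "
            "install it with `pip install tynt`."
        )
    return tynt.FilterGenerator().reconstruct(filter_name)
```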

2. Just go ahead and add them as dependencies

Since other functions outside of your use case may also employ aspects of dust extinction or filter-curve inter-comparison, you could simply treat them as useful and needed for the code. I would only recommend this path if you think these dependencies are broadly useful to other parts of the code.

3. Find a workaround

You could imagine a design where functions can accept a tynt-like object and simply expect it to have certain attributes.

def my_func(tynt_like_object):
    # do stuff here ...
    return param * tynt_like_object.my_attribute

In the pseudocode above, you didn't need to import tynt; you simply took for granted that the object handed in has some attribute. I think this strategy should work fine because tynt is actually based on specutils, so its objects are Spectrum1D-like.

Finally, I would recommend asking yourself whether these functions actually belong in gollum, since they most naturally "doctor the model", meaning they operate on precomputed synthetic spectra.

Kyle Kaplan added 3 commits September 22, 2023 23:46
…ctrum to the sci fiber wavelength solution. This produced better sky subtractions.
…r forgets to do it. The sky fiber should always be interpolated onto the science fiber's wavelength solution before sky subtraction, according to the HPF folks.
@kfkaplan
Collaborator Author

I would just like to note that the above 3 commits improve the HPF sky subtraction by interpolating the sky fiber onto the science fiber's wavelength solution before subtracting the sky. This eliminates residuals caused by differing wavelength solutions between the sky and science fibers.
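For illustration, the interpolate-then-subtract step amounts to something like the following sketch (hypothetical helper, not the actual muler implementation):

```python
import numpy as np

def sky_subtract(sci_wl, sci_flux, sky_wl, sky_flux):
    """Subtract the sky fiber after resampling it onto the science
    fiber's wavelength solution.

    Subtracting pixel-by-pixel without this step leaves residuals at
    sharp sky emission lines, because the two fibers' wavelength
    solutions differ slightly.
    """
    sky_on_sci = np.interp(sci_wl, sky_wl, sky_flux)
    return sci_flux - sky_on_sci
```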
