
Suite of validation tests on DC2 extragalactic catalogs #50

Open · yymao opened this issue Dec 14, 2017 · 59 comments
@yymao (Member) commented Dec 14, 2017

This epic issue serves as the general discussion thread for all validation tests on the extragalactic catalogs in the DC2 era.

Note: Please feel free to edit the tables in this particular comment of mine, since we will use them to keep track of the progress of the validation tests.

➡️ Required tests that we have identified (for DC2):

| Test | WGs | Implemented | Validation Data | Criteria | "Eyeball" Check by WG | Issue |
| --- | --- | --- | --- | --- | --- | --- |
| p(e) | WL | ✔️ @evevkovacs | ✔️ COSMOS | ✔️ | ✔️ (WL @rmjarvis) | #14 #81 |
| p(position angle) | WL | ✔️ @msimet | ✔️ uniform | ✔️ | ✔️ (WL @msimet) | #76 #82 |
| size distribution | WL | ✔️ @msimet | ✔️ COSMOS | ✔️ | ✔️ (WL @msimet) | #77 #80 |
| size-luminosity | WL | @vvinuv | ✔️ vdW+14, COSMOS | ✔️ | ✔️ (WL @msimet) | #13 #56 |
| shear 2-pt corr. | WL | ✔️ @patricialarsen | ✔️ CAMB | ✔️ | ✔️ (WL @patricialarsen) | #35 #54 |
| N(z) | PZ, LSS | ✔️ @evevkovacs | ✔️ DEEP2 | ✔️ | ✔️ (PZ @sschmidt23) | #11 #107 |
| dN/dmag | WL, LSS | ✔️ @duncandc | ✔️ HSC | ✔️ | ✔️ (WL @rmandelb) | #7 #47 |
| red sequence colors | CL | @j-dr | ✔️ DES Y1 | ✔️ | ✔️ (CL @erykoff) | #41 #101 |
| CLF | CL | @chto | ✔️ SDSS | | | #9 #102 |
| galaxy-galaxy corr. | WL, LSS, TJP | ✔️ @vvinuv @morriscb | ✔️ SDSS, DEEP2 | ✖️ | ✔️ (LSS @slosar) | #10 #38 |
| color-dependent clustering | CL | ✔️ @yymao | ✔️ SDSS | ✖️ | ✔️ (LSS @slosar) | #73 #100 |
| galaxy bias(z) | PZ | @fjaviersanchez | ✔️ CCL | | | #75 #87 |
| color distribution | PZ, CL, LSS | ✔️ @rongpu | ✔️ SDSS, DEEP2 | | ✔️ (PZ @sschmidt23) | #15 #89 |
| shear-galaxy corr. | TJP, WL | ✔️ @EiffL | ✔️ SDSS | | | #118 |
| stellar mass function | - | ✔️ @evevkovacs | ✔️ PRIMUS | | | #49 |
| cluster stellar mass distribution | CL, SL | @Andromedanita | ✔️ BOSS, CMASS | | | #109 |
| color-color diagram | PZ | ✔️ @nsevilla | ✔️ SDSS | | | #74 #88 |

➡️ Tests that are not currently required but good to have:

| Test | WGs | Implemented | Validation Data | Validation Criteria | Issue |
| --- | --- | --- | --- | --- | --- |
| color-mag diagram | PZ, CL | @DouglasLeeTucker @saharallam | ✔️ SDSS / not required | not required | #40 |
| cluster radial profiles | CL | | | | #63 |
| IA 2-pt corr. | TJP | ✖️ @EiffL @jablazek | | | #42 |
| emission line galaxies | PZ, LSS | ✖️ @adam-broussard | ❓ DEEP2 | | #12 |

Analysis WGs are encouraged to join this discussion and to provide feedback on these validation tests. This epic issue is assigned to the Analysis Coordinator @rmandelb, and will be closed when the Coordinator deems that we have implemented a reasonable set of validation tests and corresponding criteria for DC2.

@yymao, @evevkovacs, and @katrinheitmann can provide support for the implementation of these validation tests in the DESCQA framework. In addition to GitHub issues, discussions can also take place in the #desc-qa channel on LSSTC Slack.

P.S. The corresponding issue in DC2_Repo is LSSTDESC/DC2-production#30

@slosar (Member) commented Dec 19, 2017

I have a general comment regarding three tests: dN/dmag, color-mag, and the color distribution test. These are really testing different aspects of the same thing, namely the full distribution of magnitudes. I.e., if you had a distribution of magnitudes statistically indistinguishable from reality, you would automatically pass all three. The first test looks at 1D histograms of mags, the second is mag vs. dmag, and the third is dmag vs. dmag correlations (where dmag is a delta mag, e.g. u-g, i.e. a color). My worry is two-fold:

  • by using disparate tests, e.g. HSC for one thing and SDSS for another, we could force ourselves into a corner where it would be impossible to satisfy all of them at the same time (due to differing depths).

  • the focus is too much on getting the 1D and 2D projections right, when what we really want is for the general distribution to be correct enough.

So my suggestion would be to merge them into a single test that perhaps has more than one criterion in it.
A possible test would be: for the magnitude vector v = (u, g, r, i, z, y), calculate the median, the mean, and its covariance matrix.
The possible validation criteria could then be (after accounting for depth):

  • the median is within 0.5 mag of the test dataset
  • the mean is within 1 mag of the test dataset
  • the sqrt of the variance in individual bands (diagonal of the cov matrix) is within 20% of the test dataset
  • the two biggest eigenvalues of the cov matrix are within 20% of the test dataset, and the two main eigenvectors are aligned to better than 0.8 (i.e. normalized dot product > 0.8).

I think these are nice overall tests that avoid going into the vagaries of color-color histograms, which will never look perfect from Galacticus but also don't really matter that much for our fundamental science. The mean and scatter of magnitudes are directly connected to the number of detected objects, their SNR, etc., so they are very relevant.
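A minimal numpy sketch of these criteria, assuming `mags_mock` and `mags_data` are depth-matched (N, 6) arrays of (u, g, r, i, z, y) magnitudes (the array names and hard-coded thresholds are just the ones proposed above, not part of any existing DESCQA test):

```python
import numpy as np

def compare_mag_vectors(mags_mock, mags_data):
    """Compare median, mean, and covariance of (N, 6) magnitude arrays."""
    results = {}
    # 1-pt statistics: median within 0.5 mag, mean within 1 mag
    results['median'] = np.all(np.abs(
        np.median(mags_mock, axis=0) - np.median(mags_data, axis=0)) < 0.5)
    results['mean'] = np.all(np.abs(
        mags_mock.mean(axis=0) - mags_data.mean(axis=0)) < 1.0)

    cov_mock = np.cov(mags_mock, rowvar=False)
    cov_data = np.cov(mags_data, rowvar=False)
    # sqrt of the variance in each band (diagonal of cov) within 20%
    sig_mock = np.sqrt(np.diag(cov_mock))
    sig_data = np.sqrt(np.diag(cov_data))
    results['sigma'] = np.all(np.abs(sig_mock / sig_data - 1.0) < 0.2)

    # two biggest eigenvalues within 20%; eigenvectors aligned to > 0.8
    w_mock, v_mock = np.linalg.eigh(cov_mock)  # eigenvalues in ascending order
    w_data, v_data = np.linalg.eigh(cov_data)
    results['eigenvalues'] = np.all(np.abs(w_mock[-2:] / w_data[-2:] - 1.0) < 0.2)
    results['eigenvectors'] = all(
        abs(np.dot(v_mock[:, i], v_data[:, i])) > 0.8 for i in (-1, -2))
    return results
```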

@rmandelb

For photo-z and clusters, they might actually care about some of the color-color distributions though.

I agree with the basic idea that we don't want to paint ourselves into a corner by devising validation tests based on different datasets that turn out to be impossible to satisfy. My feeling had been that we may indeed need to check all these different things, but our validation criteria cannot be super tight, and that's how we avoid painting ourselves into a corner.

@rmandelb

@yymao @evevkovacs - Just to collect some basic progress notes here: SL and SN confirmed they care more about the sprinkler. It's useful for the extragalactic catalogs to be at least vaguely reasonable but the existing validation tests are enough to ensure that. I will continue to work with the remaining analysis WGs.

@rmandelb commented Jan 3, 2018

In this comment I will collect the list of working group contacts for extragalactic catalog validation. Currently several are listed as TBD, so this is not very useful, but I will edit the comment as I get more information:

@sschmidt23

@morriscb is the other contact for PZ, we're planning on discussing tests on Friday and updating shortly after that.

@janewman-pitt-edu

@sschmidt23 pointed out this thread to me -- I want to concur with Rachel on this: for assessing photo-z performance we care much, much more about getting the range of galaxy SEDs correct than the overall luminosities of objects (which is what the magnitude vector is more sensitive to). The distribution of galaxy colors is our best way of assessing the SEDs. I don't expect Galacticus to be perfect at this by any means; rather, the intention of our color tests is to be able to assess which simulations / parameter values improve things vs. make them worse.

@rmandelb

@sschmidt23 @slosar @morriscb @j-dr @erykoff -

Thanks for agreeing to represent PZ, CL, and LSS on the extragalactic catalog validation. As a reminder, in the next 2 days we'd like to have the following:

  • a finalized list of what validation tests are needed for the DC2 extragalactic catalogs for your science; open new issues if anything is missing from the current list in this repository.

  • for each validation test, we need a clear validation criterion and validation dataset. Please comment in the issues about these. Remember that we want to make sure the catalogs can support our planned DC2 work, without being so stringent that the tests become nearly impossible to pass given the current limitations of our mock catalog-making knowledge/methods.

  • ideally, you would have a volunteer from your working group who can implement the test. If you don't, please still post the issue and we can do one further advertisement for volunteers (but we cannot assume there is enough available effort within the CS working group to implement all requested tests themselves).

If you have any questions about defining tests / validation criteria / etc., please comment on here or the relevant issue. I am happy to answer questions, as are @yymao and @evevkovacs . Also, they have tried to make it easy to implement new tests without having to learn all the ins and outs of DESCQA -- see https://github.com/LSSTDESC/descqa/blob/master/README.md .

@rmandelb

@jablazek @elikrause @timeifler @joezuntz -

Please comment in this thread with the name / GitHub ID of the person who will work on the extragalactic catalog validation for your working group (for TJP I believe one person was asked but may not have confirmed; I did not hear a name for WL yet). See the message above this one for what we are asking those people to do, and direct them to this thread.

@timeifler

@rmandelb For WLWG @msimet has volunteered to be the DESCQA liaison.

@jablazek

@rmandelb : @patricialarsen has volunteered for TJP. @timeifler, she has been doing WL-related work as well on proto-DC2 and is interested in coordinating with @msimet.

@timeifler

@rmandelb @jablazek @patricialarsen This is great to hear, Patricia has already reached out to Melanie and myself.

@slosar (Member) commented Jan 16, 2018

@rmandelb @yymao Is there a living list of current tests, or does the list of issues with the "validation test" label simply act as such? Can you elevate my rights so that I can add "validation:required" to some tests, like for example this galaxy bias/clustering test? (Or should I tell you which ones I think are required?)

@rmandelb

@slosar - the list of issues w/ "validation test" label is the living list of tests. I would love to elevate your rights but I'm not fancy enough to do that (I have labeling privs but not "giving other people labeling privs" privs, apparently).

Perhaps @yymao or @evevkovacs can comment on the difference between the "validation test" and "validation test:required" labels; there are far more of the former than the latter, and I'm not sure how to interpret that. Are you wanting the analysis WGs to flag which ones are particularly important so they can be called "required"? I did not quite realize that so I hadn't requested that from anybody.

@evevkovacs (Contributor)

Yes, validation test:required is intended to flag tests which are required by the working groups and which the catalog must pass in order to be satisfactory. Other validation tests are those which have been suggested and may be nice to have but aren't as high priority to implement.

@katrinheitmann

There is also Table 10 in the planning document, which now lives on GitHub: https://github.com/LSSTDESC/DC2_Repo/tree/master/Documents/DC2_Plan. That list provides a quick overview and has the same required/nice-to-have distinction. You can edit that table in principle. Yao seems to be the only one who has the power to help with the labels ...

@yymao (Member, Author) commented Jan 16, 2018

@slosar: the original idea is that the WGs will report to @rmandelb and discuss here on the set of required validation tests, and then @rmandelb will add the labels on them.

However, if this workflow is not efficient, I'm happy to make necessary changes to make things easier!

@slosar (Member) commented Jan 17, 2018

@yymao @rmandelb OK, so Rachel, could you add "validation test:required" to:
#10 (bias/clustering test).
The other two that I count as required, #11 (N(z)) and #7 (dN/dmag), already have it.
Others relevant to LSS in Table 11 would be nice to have, but I wouldn't quite count them as required. Perhaps for DC3.

@rmandelb

Done. Thanks for thinking through which ones are more important than the others for LSS. And I believe the bias/clustering test is also required for PZ to achieve its goals with the clustering-redshift analysis.

@rmandelb

@j-dr and @erykoff - can you please let us know the status of cluster-related extragalactic validation tests? See e.g. this comment: #50 (comment) earlier in this thread for info on what we are looking for.

I'm about to go offline for a day, but Yao, Eve, and others on this thread may be able to answer if you have questions about the process.

@slosar (Member) commented Jan 23, 2018

We had a discussion of validation tests within the LSS group and two main issues arose:

  • We need validation of magnification, and in fact two kinds: galaxies need to be bigger at constant surface brightness (so the total apparent luminosity increases; the kappa term in the distortion matrix), and they need to be moved in the right direction (change in the number density). I couldn't find any issues related to this; the closest one was shear-shear correlations. How do we add this issue? There is no validation test; it really just needs to be implemented correctly, so this is in a way a different kind of issue. So I think validation should be against an external implementation of the same maths. How do we go about doing that?
  • Issues #7 and #11 should really be merged. OK, getting the right colors is one thing, but the way #7 and #11 are now written just confused everyone. I suggest merging them into a test that simply demands that N(z, mag), binned in 2D in sensible bins of delta z and delta mag for some fiducial magnitude (say r), is correct. Then the other magnitudes are controlled via the color tests.

@evevkovacs (Contributor) commented Jan 24, 2018

@slosar Could you please clarify exactly what test(s)/check(s) you are proposing under your first bullet. The galaxy-shape modeling in the extragalactic catalog is very simple. All we have are sizes for the disk and bulge, and we assume n=1 Sersic profiles for disks and n=4 profiles for bulges to get half-light radii. The value of the magnification is given at the galaxy location. I think you are proposing a check that is better done on the image simulation result rather than on the extragalactic catalog, but I may have misunderstood.

What validation dataset and criterion are you proposing to use for the second bullet? Validating a 2D histogram is not as straightforward as validating a 1-pt distribution, and I was wondering what you had in mind.

@rmandelb

@slosar - thanks for the feedback from LSS. To answer your questions:

  1. You are right that this is more of an "is it implemented correctly" issue, rather than one where we are testing the extragalactic catalogs against data. The catalogs have lensed magnitudes and sizes in them, so I guess the question is: do we need to explicitly test that the densities are changing in the correct way at fixed magnitude? Or do we just need to test that unlensed vs. lensed magnitudes and sizes don't have some bug? (Because if those are right, then the density trends being correct for a fixed flux cut seems to follow directly.)

  2. I think you are correct that the dN/dmag test #7 and the N(z) test #11 could be replaced by a test of N(z, mag) for some single band, and this takes care of a number of issues with only testing N(z) and N(mag) in 1D. And, as you said, we can do this in just one band, because we also have tests of color distributions which take care of the other bands. I guess the question is what validation dataset we would have for this new 2D N(z, mag) test. Right now we're using DEEP2 for N(z) down to some fixed mag thresholds, but we're using HSC for N(mag) because it's a larger survey, so we expect fewer issues with cosmic variance and can conveniently chop up the sample into smaller slices in magnitude without too much noise. If we do the full 2D test, then I guess we would have to do some kind of parametric fits to DEEP2 in the 2D plane? Or perhaps use the 1D N(z) in mag bins, and use HSC to set the normalization of N(mag) integrated across z?

@slosar (Member) commented Jan 24, 2018

@evevkovacs @rmandelb Thanks for your quick responses:

  1. Regarding magnification: I think we should be modulating the number densities by actually displacing the galaxies, even if we do it in the Born approximation (if you're doing proper ray-tracing, even better!). This means that for each galaxy you generate the kappa field (integrated mass density), which you can then transform into gamma1, gamma2, and a displacement vector (delta ra, delta dec), which is a non-local and hence somewhat painful operation; a flat-sky sketch of this transformation is included at the end of this comment. Is this being done right now? The galaxy catalog would then have ra, dec, z, etc. and also dra, ddec, kappa, gamma_1, gamma_2 (if you don't have dra and ddec, you have only part of the lensing effect present). If we confirm that these latter quantities have the right correlations (kappa-kappa, kappa-gamma_t, displacement-gamma_r, etc.), that would be good enough for me. CCL can do this, and Alonso could be arm-twisted into doing it. This is in the same category as the shear sign convention test #8.
     While not absolutely crucial, it would be very useful to have proper magnification in DC2 because this could lead to some exciting WG projects.

  2. Yes, I think 1D N(z) in mag bins in absolute counting units (as in number/sq deg, not relative probability) is the simplest thing to do. I think in each mag bin you can then use the data that we have for that bin up to the redshift at which you trust it. I think the criterion should be to be within 20% of the measurement (including counting noise).

If that is OK, I will write both issues, and then Rachel probably needs to close #7 and #11. I think all the work that has already gone into them will of course keep on being very useful.
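A minimal flat-sky sketch of the kappa -> (shear, displacement) transformation described in point 1 above, assuming a periodic 2D convergence map; a production version would work on the curved sky (e.g. with CCL or healpy harmonic transforms), and all names here are illustrative:

```python
import numpy as np

def kappa_to_shear_and_deflection(kappa, pix_rad):
    """Flat-sky, periodic-boundary version of the lensing relations:
    kappa = 0.5 * laplacian(psi), gamma1 = 0.5 * (psi_xx - psi_yy),
    gamma2 = psi_xy, deflection alpha = grad(psi)."""
    ny, nx = kappa.shape
    lx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pix_rad)
    ly = 2.0 * np.pi * np.fft.fftfreq(ny, d=pix_rad)
    LX, LY = np.meshgrid(lx, ly)
    L2 = LX**2 + LY**2
    L2[0, 0] = 1.0  # avoid division by zero; the zero mode is set below

    kappa_hat = np.fft.fft2(kappa)
    psi_hat = -2.0 * kappa_hat / L2
    psi_hat[0, 0] = 0.0  # the mean of the potential is unconstrained

    gamma1 = np.fft.ifft2(-0.5 * (LX**2 - LY**2) * psi_hat).real
    gamma2 = np.fft.ifft2(-LX * LY * psi_hat).real
    alpha_x = np.fft.ifft2(1j * LX * psi_hat).real  # displacement components
    alpha_y = np.fft.ifft2(1j * LY * psi_hat).real
    return gamma1, gamma2, alpha_x, alpha_y
```

Checking that the resulting kappa-kappa, kappa-gamma_t, and displacement correlations match an external prediction (e.g. from CCL) would then be a standard 2-pt comparison.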

@rmandelb

@slosar -

For (1): the catalog has both pre- and post-lensing positions (in addition to the pre- and post-lensing magnitudes and sizes that I mentioned earlier). I assume that there is an intent to use the post-lensing quantities for everything including positions when making the sims.

You are correct that we could use the statistical properties of pre- and post-lensing positions for a flux-limited sample to test these correlations.

For (2): this sounds reasonable to me. HSC can give the overall normalization of the number density across all z in the mag bins, and DEEP2 can give the dN/dz within the mag bins. I agree that it would be best to combine these into a single test rather than having separate dN/dmag and dN/dz tests. @evevkovacs - since you were asking about what Anze intended as well, are you comfortable with this suggestion given his clarification? @slosar - I agree about what needs to be done to the issues, but I want to give Eve a chance to comment on the way you've framed this test before we do that.

@evevkovacs (Contributor)

Patricia Larsen will comment on 1). For 2), we can change the N(z) tests to check the normalization. Can you point me to the datasets?

@evevkovacs (Contributor)

Maybe I am missing something, but I thought that the observational data had redshift information and therefore selection cuts could be made to match what is in the simulated catalog if need be.

@rmandelb

@yymao - true, there is always a redshift cut in mocks. But the issue is that if you're going to i=25, then we expect a few % of objects above z=3, but ~40% of objects above z=1. So if we do a test with a tolerance of 20%, we don't care if the mock has zmax=3. We care very much if it has zmax=1. My concern is about whether the redshift cut is sufficiently low that we're expecting to lose of order 10% or more of the galaxies we'd see in the real survey, in which case the validation test is invalid.

@evevkovacs - we don't in general have redshift information for imaging surveys. We have photo-z, but they are not sufficiently good to use in validation tests. For some of the validation tests we're using here, the validation dataset is SDSS or DEEP2, which provides spectroscopic information. That's why those tests can be defined easily in z ranges. But if our validation dataset is from an imaging survey like HSC, then we can't make the validation test in z ranges.

@rmandelb

@katrinheitmann - I guess I figured we want these tests to be generally useful, so we have to assume that some mocks will have strict z limitations. I'm OK with saying we should design the tests for the ideal case, but if we do that, then I would strongly advocate for a test not to be run at all if the mock catalog (like protoDC2) has some condition that makes the test invalid. For example, all tests that implicitly integrate across all z for a survey the depth of HSC or LSST should be disabled if the mock catalog has a zmax that is too low (say, below z=2). Otherwise we will have the system generating plots that people will use to draw completely wrong conclusions.

Is that possible to do?

@yymao (Member, Author) commented Jan 25, 2018

@rmandelb thanks for the clarification! And to your technical question: yes, a test can decide not to generate any plot (or do whatever alternative thing) if it finds the catalog's max z is, say, less than 2.

Is it fair to say we need the mocks to have max z > 2? We can probably check whether that's sufficient by running the test on Buzzard (max z ~ 2.1). And looking ahead to cosmoDC2, what redshift cut do we think it'll have, @katrinheitmann @evevkovacs?

@evevkovacs (Contributor)

Yes, certainly. The test writer is free to specify conditions as she/he sees fit. For example, it would be simple to set a requirement on the maximum redshift delivered by the catalog, and if that requirement is not satisfied, the catalog is skipped.
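As a rough sketch of how such a condition could look in a DESCQA-style test class (the class name, constructor options, and the exact `TestResult` signature below are illustrative assumptions rather than the vetted DESCQA interface; see the DESCQA README for the real one):

```python
import numpy as np
from descqa.base import BaseValidationTest, TestResult  # assumed import path

class DnDmagTest(BaseValidationTest):  # hypothetical test name
    """Example of skipping catalogs whose redshift coverage is too shallow."""

    def __init__(self, zmax_required=2.0, **kwargs):
        self.zmax_required = zmax_required

    def run_on_single_catalog(self, catalog_instance, catalog_name, output_dir):
        # A test that implicitly integrates over all z at HSC/LSST depth is
        # invalid on a catalog with a low redshift cut, so skip it outright.
        zmax = np.max(catalog_instance.get_quantities(['redshift'])['redshift'])
        if zmax < self.zmax_required:
            return TestResult(skipped=True,
                              summary='catalog zmax = {:.2f} < required {:.2f}'.format(
                                  zmax, self.zmax_required))
        # ... otherwise run the actual comparison to the validation data ...
        return TestResult(passed=True)
```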

@rmandelb

> We can probably check if that's sufficient by run the test on buzzard (max z ~ 2.1).

Sorry if I am missing something, but how can we check whether that's sufficient by running the test on Buzzard?

My proposal would be to take our best understanding of dN/dz at the faintest magnitude limit for which we test dN/dmag, integrate it to find the redshift beyond which we'd be missing more than, say, 5% of the galaxies, and set that as the required max redshift for the dN/dmag test.
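As a toy illustration of that calculation (the parametric form of dN/dz and its parameter below are placeholders, not a fit to DEEP2 or any other dataset):

```python
import numpy as np

# Placeholder dN/dz ~ z^2 exp(-z/z0) at the faintest magnitude limit tested;
# z0 would come from a fit to the validation data.
z0 = 0.4
z = np.linspace(0.0, 6.0, 6001)
dndz = z**2 * np.exp(-z / z0)

cdf = np.cumsum(dndz)
cdf /= cdf[-1]
# Redshift below which 95% of the galaxies lie; a catalog with a lower
# zmax would be missing more than 5% of the galaxies the survey would see.
zmax_required = z[np.searchsorted(cdf, 0.95)]
print('require catalog zmax >= {:.2f}'.format(zmax_required))
```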

@yymao (Member, Author) commented Jan 25, 2018

@rmandelb sorry, you're right. I was thinking that we could just check whether Buzzard matches the HSC dN/dmag, but then, of course, even if it doesn't match, we still wouldn't know whether that is due to insufficient max z or something else.

@janewman-pitt-edu

I think we'll be missing >5% of galaxies with a z>2 cut even by i~23 or so...

@msimet (Contributor) commented Jan 25, 2018

This may not be a complete list, but for the WLWG we will definitely need a power spectrum test (e.g. #35) and an ellipticity distribution test (#14). There was also a suggestion that #14 be done as a function of redshift -- I'm not sure yet if that's required or desired, but I wanted to check whether you'd consider that a separate validation test or an implementation issue for the existing ellipticity distribution test.

@evevkovacs (Contributor)

It would not be a separate test. I am working on the ellipticity distribution; z ranges can be added in a configuration file. Do you have an idea of what z bins would be of interest? That would be helpful in configuring the plots, etc.

@janewman-pitt-edu

@rmandelb : I've gotten further back in my email and seen your pushback directly :) Yes, we certainly shouldn't compare dN/dm to redshift-incomplete samples. However, I don't see that as a reason to drop dN/dm entirely, but rather as a driver to disregard it where it is irrelevant.

We need to keep in mind, though, that dN/dmag/dz will only be at all well-constrained (and not that well, given small survey areas) to r ~ 23 and z ~ 1-1.4. The situation is worse for delta-mag bins than for integrated dN/dz down to a given magnitude, because the latter seems to fit a simple functional form but the former does not (we could differentiate to get a smooth prediction for the number in a delta-mag bin, but I wouldn't have much confidence that it would look all that realistic; summing over a broader range can erase a lot of issues).

@slosar (Member) commented Jan 26, 2018

@rmandelb @janewman-pitt-edu OK, so it seems that HSC is deep enough but lacks z's and has larger cosmic variance, while on the other hand DEEP2 can give some information on redshifts but is incomplete. So I think there are two ways to generate tests:

  • Try to infer N(z, mag) together with uncertainties where we think extrapolation is dodgy. Use this extrapolated information as the "test data" to validate against.
  • Just try to compare directly with the HSC dN/dmag and the DEEP2 N(z) separately, keeping the two tests as we have them now.

I have a slight preference for the first option as it has two advantages: (i) it naturally grows with growing catalogs (i.e. if a catalog doesn't go beyond z=1, fine, you don't compare there), and (ii) if there are internal tensions between the two datasets, they become immediately obvious. My understanding is that Rachel supports this option too... However, I don't feel knowledgeable enough about this to judge if it is doable.

@janewman-pitt-edu

HSC has much smaller cosmic variance than DEEP2...

I think once you break DEEP2 into differential magnitude bins you're already getting dodgy. I believe the N(<m, z) constraints much more.

@janewman-pitt-edu

One follow-up thought: we could implement this as N(<m, z) with a variety of limiting magnitudes rather than as N(m, z) in differential magnitude bins. I think that would work better, and we could use the DEEP2 extrapolations (with a grain of salt) to do them.

@msimet (Contributor) commented Feb 6, 2018

Another pair of required tests from the WLWG: galaxy size distributions and galaxy position angle distributions (assuming our ellipticity distribution test is for |e|, not e1 and e2). We're working on validation criteria now.

@evevkovacs (Contributor)

There is a size-magnitude test under development; see #13.
Galaxy position angles are randomly assigned; see the position-angle distribution in the readiness tests: https://portal.nersc.gov/project/lsst/descqa/v2/?run=2018-01-30_4&test=readiness_protoDC2

@yymao (Member, Author) commented Feb 7, 2018

@msimet does the galaxy size-magnitude test satisfy the WL WG's needs in terms of validating the galaxy size distributions? If not, can you suggest a more direct test?

@evevkovacs It is true that the current galaxy position angle distribution in protoDC2 is just a uniform distribution, but that doesn't mean that it satisfies the WG's needs or that we don't need a validation test for it. @msimet what kind of galaxy position angle distribution would the WL WG want to see?

@evevkovacs (Contributor)

@yymao @msimet Yes, thanks for clarifying. My comment was meant to be a starting point for discussion, not an answer to the issue! In addition to validation criteria, we would also need a validation dataset for whatever test you are proposing.

@msimet (Contributor) commented Feb 7, 2018

For position angle, we don't need anything more complicated than a KS test comparison to a flat distribution.
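For example, a minimal scipy version (the `position_angle` array here is a random stand-in for the catalog values, which live on [0, 180) degrees):

```python
import numpy as np
from scipy import stats

# Stand-in for the catalog's position angles, in degrees on [0, 180)
rng = np.random.default_rng(42)
position_angle = rng.uniform(0.0, 180.0, size=100_000)

# KS comparison to a flat (uniform) distribution on [0, 180)
stat, pvalue = stats.kstest(position_angle, stats.uniform(loc=0.0, scale=180.0).cdf)
print('KS statistic = {:.4f}, p-value = {:.3f}'.format(stat, pvalue))
```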

I'm not sure about the size-magnitude test--was the plan for a full distribution of sizes as a function of magnitude, or just something like the mean? We care about the small-end slope of the size distribution (because it determines the effect of our selection function) and also that sizes are approximately correct so they're the right resolution relative to the PSF. The size-magnitude test should satisfy the latter, but I'm not sure about the former.

I'll speak to the WG about validation data sets for the size issue. In addition, I can contribute code for the KS test if you'd like, and the size test if we need to do something different than what you had planned for magnitude.

@rmandelb commented Feb 7, 2018

@yymao - the position angle distribution test is meant as more of a bug test - it should be flat. If you feel that sufficient bug checks have been carried out that this is unnecessary, please let us know.

@msimet - you should check out #13 for what's been discussed so far re: size/luminosity.

@yymao (Member, Author) commented Feb 7, 2018

@rmandelb I don't think there are unnecessary checks, and bug tests are very important tests 🙂. This one is easy to implement, so let's just do it. We can open a new issue for checking the position angle distribution.

@msimet (Contributor) commented Feb 8, 2018

To answer the earlier question from @evevkovacs: we probably don't need anything finer than the tomographic bins will be; I'm told a good proxy for this would be 5 (1st year) or 10 (10th year) bins in redshift with approximately equal numbers of objects in each bin.
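For reference, a minimal sketch of building such equal-number bin edges from redshift quantiles (the function and its inputs are illustrative):

```python
import numpy as np

def equal_number_bin_edges(redshift, n_bins):
    """Tomographic bin edges with approximately equal object counts per bin,
    e.g. n_bins=5 (year 1) or n_bins=10 (year 10)."""
    return np.quantile(redshift, np.linspace(0.0, 1.0, n_bins + 1))
```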

I think the existing size-luminosity test has some of the information we need, but not all of it; we also want a straight-up dN/dsize so we can see what's happening at the small end. I can code up a test for this, and (probably) use the same COSMOS data used in issue #13 for validation.

@evevkovacs (Contributor)

Excellent. If you are at hack day, I can point you to some test examples that will help with this.

@msimet (Contributor) commented Feb 8, 2018

I'll be there in the morning, so I'll definitely try to find you before I have to leave, thanks!

@evevkovacs (Contributor)

@chto @yymao Sorry for posting this here; I couldn't find an issue for the CLF test. I am having a problem with the test: it is crashing with the error "IndexError: cannot do a non-empty take from an empty axes." I am running it on a new test catalog. Here is the descqa command that I'm using:

./run_master.sh -c 'um_hpx_v0.0' -t CLF_r -p /global/u1/k/kovacs/gcr-catalogs_um

This should work for you too, if you point to my gcr-catalogs, as above. I have checked that 'Mag_true_g_lsst_z0' and 'Mag_true_r_lsst_z0' look reasonable, but this is a very new catalog, so it may be my problem. In any case, it would be good to know why the test is crashing. If there is a newer version of the test that I can use, please point me to it. Thanks.
