Resolve differences in shot counts from Nature paper; improve storage of details of the shot and signal sets #60

Open

felker (Member) opened this issue Jan 7, 2020 · 0 comments

Details reproduced from email correspondence in November 2019.

There are slight discrepancies between the output of guarantee_preprocessed.py from the current version of the code and the figures in Kates-Harbeck et al. (2019) when applied to the JET dataset.

When specifying the following definition of the all_signals dictionary in data/signals.py for the 0D FRNN model for JET carbon wall (CW) training -> ITER-like wall (ILW) testing (jet_data_0D), I end up with 5502 processed shots out of the 5524 raw input files, and 479/488 raw disruptive shots:

# Excerpt from data/signals.py; the signal objects referenced here (q95, li, ...)
# are defined earlier in that file. This is the 16-signal set used for jet_data_0D.
all_signals = {
    'q95': q95, 'li': li, 'ip': ip, 'betan': betan, 'energy': energy, 'lm': lm,
    'dens': dens, 'pradcore': pradcore,
    'pradedge': pradedge, 'pradtot': pradtot, 'pin': pin,
    'torquein': torquein,
    'energydt': energydt, 'ipdirect': ipdirect, 'iptarget': iptarget,
    'iperr': iperr,
}

[PastedGraphic-5: screenshot of the guarantee_preprocessed.py shot counts for the 16-signal set]

This amounts to 8 fewer shots overall vs. the 5510 published in Extended Data Table 2 (see below). Specifically, my results contain 3 more disruptive CW shots in the (train+validate) set, which means there must also be 11 fewer nondisruptive shots: 4 fewer nondisruptive CW and 7 fewer nondisruptive ILW.

One of @ge-dong's .npz files of processed JET shots from October 2019 exactly matches my numbers. I have looked through the Git history of the relevant preprocessing files and cannot find a change to the “omit” criteria that would account for the difference. I assume that the raw input shot lists and data have not changed for the 5524 JET candidates since the paper was published.

I have long suspected that the shot counts and some of the early JET results in the paper predate the addition of the two extra radiated-power signals, Prad,core and Prad,edge, to the JET datasets in the code (even though they do appear in Extended Data Table 1). They were not in our original 8-9 signal set, and the files in /tigress/jk7/best_performance/deep_jet/ do not list them.

If you remove pradedge and pradcore from the all_signals dictionary and run guarantee_preprocessed.py, you end up with 5514 total processed shots, and the ILW set counts exactly match the test-set numbers from the paper, 1191 (174):

[PastedGraphic-6: screenshot of the guarantee_preprocessed.py shot counts without pradcore/pradedge]
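For reference, the corresponding excerpt with those two entries removed would be (a sketch of the same dictionary in data/signals.py, now 14 signals):

# Same as above, minus the two extra radiated-power signals
all_signals = {
    'q95': q95, 'li': li, 'ip': ip, 'betan': betan, 'energy': energy, 'lm': lm,
    'dens': dens, 'pradtot': pradtot, 'pin': pin,
    'torquein': torquein,
    'energydt': energydt, 'ipdirect': ipdirect, 'iptarget': iptarget,
    'iperr': iperr,
}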

In this case there are only 4 extra disruptive CW shots vs. the numbers from the paper, and the split of disruptive shots between the train and validate sets is different, but the latter could simply be due to a change in the random number sequence. So perhaps the Nature paper numbers were generated before the two extra radiated-power signals were added for JET, and some change to the preprocessing since publication now allows 4 additional disruptive shots to avoid being omitted.

[PastedGraphic-4: screenshot of the published shot counts (Extended Data Table 2)]

To-do

The lesson from this is that guarantee_preprocessed.py should always write the details of the omitted shots to a separate .txt file (or add them to the existing processed_shotlists/d3d_0D/shot_lists_signal_group_X.npz); see the sketch after this list:

  • omitted shot numbers (so that the set of all input/raw shot numbers could be reconstructed in conjunction with the included shot numbers)
  • criterion for omission
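
A minimal sketch of that omitted-shot dump, assuming the preprocessing loop records each omitted shot number together with the criterion that triggered the omission (the function, argument names, and call site below are hypothetical, not part of the existing code):

def write_omitted_shots(path, omitted):
    """Write omitted shot numbers and the omission criterion to plaintext.

    omitted: iterable of (shot_number, reason) pairs collected during preprocessing.
    """
    with open(path, 'w') as f:
        f.write('# shot_number\treason_for_omission\n')
        for shot_number, reason in sorted(omitted):
            f.write('{}\t{}\n'.format(shot_number, reason))

# Hypothetical call site, e.g. at the end of guarantee_preprocessed.py:
# write_omitted_shots('processed_shotlists/jet_data_0D/omitted_shots.txt', omitted)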

Plaintext would be good for reproducibility; see #41. Even though this info is contained in the .npz file and/or codebase, it would be good to dump the following to .txt as well (see the sketch after this list):

  • Exact shot numbers for included shots in each of the train/validate/test sets
  • Precise signal path info (in the MDSplus database) for all signals in this signal group
  • Details about resampling, clipping, causal shifting, etc.
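
A sketch of that broader plaintext dump, again with hypothetical names and assumed argument structures (per-split shot lists, a signal-name-to-MDSplus-path mapping, and a dict of preprocessing parameters):

def write_dataset_summary(path, shot_lists, signal_paths, preprocess_params):
    """Dump included shots, signal paths, and preprocessing details to .txt.

    shot_lists:        dict mapping 'train'/'validate'/'test' to shot numbers
    signal_paths:      dict mapping signal name to its MDSplus node/path string
    preprocess_params: dict of resampling/clipping/causal-shift settings
    """
    with open(path, 'w') as f:
        for split in ('train', 'validate', 'test'):
            shots = sorted(shot_lists.get(split, []))
            f.write('[{} set] {} shots\n'.format(split, len(shots)))
            f.write(' '.join(str(s) for s in shots) + '\n\n')
        f.write('[signal paths]\n')
        for name in sorted(signal_paths):
            f.write('{}\t{}\n'.format(name, signal_paths[name]))
        f.write('\n[preprocessing]\n')
        for key in sorted(preprocess_params):
            f.write('{} = {}\n'.format(key, preprocess_params[key]))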

This will also be useful for the real-time inference model. cc @mdboyer
