
A few thoughts and questions about reproducibility #1

Open · cboettig opened this issue Aug 31, 2021 · 10 comments

@cboettig

Hi Dr. Bury,

I stumbled across this GitHub repository and found the analysis and results you present most exciting!
Thanks for sharing this excellent research in your public repository! All in all I found it nicely organized and documented, but I still struggled to follow a few parts, so I thought I'd pass along some quick notes on potential stumbling blocks. As this may be a work in progress anyway, perhaps these notes can help you further improve the reproducibility of the analysis for other readers, who will no doubt be interested in following in your footsteps. Apologies if I've misunderstood or misconstrued anything in my notes below -- I'd definitely welcome your clarification on these issues!

Workflow

It would be nice to document the overall workflow of the project more clearly, from generating the training data and training the classifier through to generating the evaluation data, the final evaluation, and figure creation.

In particular, the repository sometimes includes 'intermediate' data objects, such as the data used to generate the figures, that would be regenerated in a from-scratch reproduction, while at other times it provides no way to access other core 'intermediate' objects, such as the training data library (see below). (For instance, I don't see where the best_model created by dl_train/DL_training.py is used by any other script -- presumably that trained model is used to generate the various data/ml_preds in each of the test_models and test_empirical sub-directories, but I couldn't find where that happens...)

Training data

Consider archiving the training data. While the code provided can in principle regenerate it, this takes some time and may not be fully reproducible. When I attempted to reproduce it, a zip of the entire output directory for 500,000 time series weighed in at 5.6 GB -- not tiny, but not in a range that would be difficult to archive freely in a public repository (e.g. the CERN-backed Zenodo will archive files of up to 50 GB with a permanent DOI).

  • The provided code is not configured with the settings used to generate the actual training data. It appears to be set up to generate only an illustrative example of 2 time series for each of the four types, rather than 500,000 time series. This is reasonable, but ought to be documented more clearly. Ideally, separate execution scripts would be provided that could be run without modification to generate either the 8-time-series example or the full set of 500,000 time series. (The README notes a runtime of ~1 min, but does not make it obvious that this produces only the small subset...)
  • It is not obvious how to configure some elements of this generation -- for instance, how to alter the length of the time series used. This appears to be hard-coded rather than exposed as a runtime argument (see the sketch after this list).
  • I needed several small modifications to make the code actually execute (an additional file copy, and changing some instances of rb to r in some of the text file parsers -- perhaps encoding issues due to Windows/Unix differences?)
  • Regenerating the training data threw a constant stream of warning messages about underflows and other potential numerical problems. (The complete log of console output after generating 500,000 time series weighs in at 6.4 GB before compression -- larger than the full corpus of data actually produced!) While these warnings may be expected, they suggest the results could differ due to numerical issues, and it is difficult for a user to judge their importance.
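
For example, here is a rough sketch of how those settings could be exposed as runtime arguments; the `generate_batch` function, script structure, and flag names are hypothetical stand-ins rather than your actual code:

```python
import argparse

def generate_batch(n_series, ts_len, batch_num):
    """Hypothetical stand-in for the repository's simulation routine."""
    print(f"Would simulate {n_series} series of length {ts_len} per type "
          f"into output/batch{batch_num}/")

parser = argparse.ArgumentParser(description="Generate training time series")
parser.add_argument("--n-series", type=int, default=2,
                    help="number of time series per bifurcation type")
parser.add_argument("--ts-len", type=int, default=500,
                    help="length of each simulated time series")
parser.add_argument("--batch-num", type=int, default=1,
                    help="batch index used to name the output directory")
args = parser.parse_args()

generate_batch(args.n_series, args.ts_len, args.batch_num)
```

The quick example and the full 500,000-series run could then be documented as one-line invocations that differ only in these flags.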

DL Training script

The training script refers to a hard-coded absolute path to an archive that doesn't exist. It would make more sense to align the scripts into a clean workflow: either have the training script refer directly to the relative path training_data/output generated by the run_job.sh script there, or have run_job.sh produce the zip archive at the relative path assumed in dl_train/DL_training.py.

It's a bit unclear how batches are meant to be handled. I believe the intent is to read in all files regardless of which batch they were generated in. For groups.csv and out_labels.csv, this presumably means the per-batch files need to be stacked in the same order each time; the code doesn't appear to handle this, and perhaps reflects only single-batch logic?
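
For what it's worth, here is one sketch of stacking them deterministically across batches; the `training_data/output/batch*` layout is just my guess at your directory structure:

```python
from pathlib import Path
import pandas as pd

# Guessed layout: training_data/output/batch1/, batch2/, ... each holding a
# groups.csv and an out_labels.csv. Sorting on the numeric batch index keeps
# the two files stacked in the same order on every run.
batch_dirs = sorted(Path("training_data/output").glob("batch*"),
                    key=lambda p: int(p.name.replace("batch", "")))

groups = pd.concat([pd.read_csv(d / "groups.csv") for d in batch_dirs],
                   ignore_index=True)
labels = pd.concat([pd.read_csv(d / "out_labels.csv") for d in batch_dirs],
                   ignore_index=True)

groups.to_csv("training_data/output/groups_all.csv", index=False)
labels.to_csv("training_data/output/out_labels_all.csv", index=False)
```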

Perhaps more importantly, the training script provided does not appear to use the hyperparameters reported in the paper. For instance, the code seems to specify 1 CNN layer and no LSTM layers, whereas the reported architecture, if I've followed it correctly, uses 1 CNN layer and 2 LSTM layers.
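
To illustrate what I mean, here is a rough Keras sketch of a 1-CNN + 2-LSTM architecture; the layer widths, kernel size, dropout rate, and sequence length are placeholders I invented, not values taken from your paper:

```python
from tensorflow.keras import layers, models

SEQ_LEN = 500    # placeholder time-series length
N_CLASSES = 4    # the four time-series types

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, 1)),
    # one 1D convolutional layer
    layers.Conv1D(filters=50, kernel_size=12, padding="same", activation="relu"),
    layers.Dropout(0.10),
    layers.MaxPooling1D(pool_size=2),
    # two stacked LSTM layers
    layers.LSTM(50, return_sequences=True),
    layers.LSTM(10),
    layers.Dense(N_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```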

Lastly, though I stuck with installing dependencies from your requirements.txt, I could not get TensorFlow to recognize the val_acc monitor or metric; the code would only execute once that was commented out (see the commit diffs in my fork).
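
For reference, I believe the issue is that TensorFlow 2.x logs the built-in metric as `accuracy`/`val_accuracy` rather than the older `acc`/`val_acc` keys, so the callbacks need to monitor the new name -- roughly as in this toy example, where the model itself is just a stand-in:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Tiny stand-in model, only here to demonstrate the metric/monitor naming.
model = models.Sequential([
    layers.Input(shape=(500, 1)),
    layers.GlobalAveragePooling1D(),
    layers.Dense(4, activation="softmax"),
])

# With metrics=["accuracy"], TF 2.x logs "accuracy"/"val_accuracy"; the older
# "acc"/"val_acc" keys are only produced if you compile with metrics=["acc"].
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    ModelCheckpoint("best_model", monitor="val_accuracy", save_best_only=True),
    EarlyStopping(monitor="val_accuracy", patience=5),
]
```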

Running the trained model

This is perhaps the most important bit and also where I'm currently stuck, which may just be my unfamiliarity with certain aspects of the toolset.

One of the most obvious applications of this work would be to run the trained classifier on other data. I think the archive should include a copy of the trained classifier (the best_model_1_1.pkl folder) created by your dl_train scripts. Ideally, the repository would also include a Python script that takes data in the required format and uses the trained classifier to report the classification probabilities -- exactly how that step is done is still entirely unclear to me, though presumably it is involved in generating the data in the various ml_preds directories. Being able to reproduce and evaluate the performance of the trained classifier on arbitrary time-series residuals is at least as important as generating the training data or training the classifier, but the provided code seems more opaque to me on this point.
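
Concretely, something along these lines is what I'd hope to see, assuming best_model_1_1.pkl is actually a TensorFlow SavedModel directory despite the .pkl suffix (the input file name and preprocessing here are made up):

```python
import numpy as np
from tensorflow.keras.models import load_model

# Assumes best_model_1_1.pkl is a SavedModel directory, and that the residual
# series has already been detrended and padded/resampled to the length the
# classifier was trained on.
model = load_model("dl_train/best_model_1_1.pkl")

residuals = np.loadtxt("my_residuals.csv", delimiter=",")  # made-up input file
x = residuals.reshape(1, -1, 1)                            # (batch, time, channels)

probs = model.predict(x)[0]
for i, p in enumerate(probs):
    print(f"class {i}: probability {p:.3f}")
```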

More Minor Issues

  1. I found the following libraries referenced in various scripts but apparently missing from requirements.txt (and they were not automatically pulled in as upstream dependencies when running pip install -r requirements.txt):

scikit-learn
pandas
matplotlib
plotly
kaleido

These mostly aren't very version-sensitive anyway, but it would still be good to have them listed. Better yet, provide a more comprehensive requirements.txt with the versions of all packages installed in a fresh virtualenv used to regenerate these results.

Additionally, it would be relatively straightforward to include an installation script for auto-07p; I have added one to my fork.

@ThomasMBury
Owner

Hi Dr. Boettiger,
Thank you for taking the time to go through our research and leave these constructive comments! Reproducibility is important to us, so I will make time to address your comments as best I can and report back.
Best,
Tom

@cboettig
Author

cboettig commented Sep 1, 2021

Thanks! And sorry for the long list -- it's mostly things I could figure out anyway, but clarifying them could help. The main place I'm stuck is how to load the best_model saved by the dl_train script and run it on some example time-series data; I couldn't see where that step happens.

@cboettig
Author

@ThomasMBury Great to see your paper out in PNAS today! 🎉 Congrats again on a really nice analysis.

Also, any updates on this thread?

@ThomasMBury
Owner

Thank you @cboettig! I'm still working on making this repo as intuitive as possible for viewers to run on their own systems. So far I've edited the code in /training_data such that:

  1. There are scripts to generate the full set of time series (e.g. submit_multi_batch_500.py), though these are specific to the Slurm workload manager we use to submit to CPU clusters at UWaterloo. Do you have any advice on how to share code written for a specific cluster system? Is it best practice to share a script that would run all the batches on a personal computer (even though this would take ages!)?
  2. The time series length is now a command-line parameter to run_single_batch.sh.
  3. I've stored the training data on Zenodo (see Readme).

I've collected the code from my collaborator who trained the deep learning model, including the pkl files that contain the trained classifiers; it's in the repo under dl_train/. Combining the code to use consistent directory names etc. and writing up the workflow is something I haven't got around to yet -- but it's high on my list of things to do (busy start to term!).

One other question, regarding intermediate files: I uploaded them so viewers could run certain scripts, such as the ROC computations from the predictions made by the DL and EWS, without having to run the longer scripts that actually generate the training data / DL model. Is this appropriate, or should intermediate files be removed altogether?

Cheers,
Tom

@cboettig
Author

Hi @ThomasMBury, thanks for the update! Looking forward to trying it out, but in brief this all sounds very good. A few replies:

In principle the cluster details can be abstracted away (e.g. in the #rstats world, drake and now targets do a reasonable job of handling execution over Slurm or other queues, as well as other parallelization). But I'm not too worried about this part (it worked out just fine for me after a few tweaks, at least). The Slurm script helps document more precisely what you ran, and having the simulated training data on Zenodo means that many users won't have to run it at all.

Having the pkl files is great. Definitely looking forward to trying the trained classifier out on some other time series. If you're really serious about this line of work, I think it wouldn't be a bad idea to consider a little Python packaging infrastructure so we could just install the trained classifier from PyPI (or at least from GitHub) as a standard Python module and call it on a given time series -- but of course all software engineering takes time. Definitely looking forward to seeing the workflow code, though; I think that will be a big help, since that's where I got stuck on my first pass.

Personally I'm all for including intermediate files! I think best practices in the field always include them, together with some make-like workflow setup so that the end products can be quickly re-made from the stored intermediates, while the ambitious user still has the option to run something like make clean to purge the intermediate objects and see whether everything reproduces from scratch. (Again, there's various prior art on this and some literature, but I know it better on the R side. I'm going to take the liberty of tagging @wlandau, who is an expert in these things.)

Cheers,
Carl

@wlandau

wlandau commented Sep 29, 2021

Thanks, Carl. Anything specific I can help with?

Yes, intermediate files help break down the workflow into manageable pieces. targets does this by abstracting each target as an R object and saving it in a data store to be retrieved as an R object later: the intermediate file is there, but to the user it behaves like an R variable. IIRC there are Python pipeline toolkits that do this as well -- maybe Prefect? Definitely Metaflow (although both are more Airflow-like than Make-like).
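
For instance, a toy Metaflow sketch of that pattern (the steps and artifact names are invented for illustration, not tied to this repo):

```python
from metaflow import FlowSpec, step

class ToyTrainingFlow(FlowSpec):
    """Each attribute assigned in a step is stored as an artifact in
    Metaflow's data store and can be retrieved later, so intermediates
    behave like variables rather than files the user manages by hand."""

    @step
    def start(self):
        # stand-in for simulating the training library
        self.timeseries = [[0.1, 0.2, 0.3], [0.3, 0.2, 0.1]]
        self.next(self.train)

    @step
    def train(self):
        # stand-in for fitting the classifier
        self.summary = f"trained on {len(self.timeseries)} series"
        self.next(self.end)

    @step
    def end(self):
        print(self.summary)

if __name__ == "__main__":
    ToyTrainingFlow()
```

You run it with `python toy_flow.py run`, and artifacts from earlier runs can be pulled back out later through Metaflow's client API.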

@wlandau

wlandau commented Sep 29, 2021

https://github.com/pditommaso/awesome-pipeline has a bunch of other pipeline tools, mostly language-agnostic or Python-focused. Trying to remember the other one I saw with file/object abstraction, but it's not coming to me.

@ThomasMBury
Owner

Thanks @wlandau I'll check those out.

@ThomasMBury
Owner

Hi @cboettig, I've spent some time improving the workflow for training the DL classifiers (DL_training.py) and applying them to other time series data (DL_apply.py); the two are now connected via relative file paths. Let me know if you have any other comments or issues running the code. Thanks again for your feedback.

@cboettig
Author

Thanks for the heads up! We'll give it a spin.
