
Proposed Recipes for Large Ensemble pCO2 testbed #219

Open

jbusecke opened this issue Nov 7, 2022 · 7 comments

Comments

@jbusecke
Contributor

jbusecke commented Nov 7, 2022

Dataset Name

Large ensemble pCO2 testbed by @lgloege

Dataset URL

https://figshare.com/collections/Large_ensemble_pCO2_testbed/4568555

Description

This is a collection of randomly selected ensemble members from 4 large ensemble projects.

Each ensemble member was interpolated from its native grid to a 1x1 degree lat/lon grid. The variables are monthly over the 1982-2017 time frame and subsampled to match the coverage of the SOCATv5 data product. Historical atmospheric CO2 forcing is used through 2005, with RCP8.5 thereafter.

This dataset is intended for evaluating ocean pCO2 gap-filling techniques.

License

Unknown

Data Format

NetCDF

Data Format (other)

No response

Access protocol

HTTP(S)

Source File Organization

The data are organized on several levels:

  • There are 5 models that each provide a Large Ensemble (many different members, used to quantify internal variability)
  • For each model there is one file per ensemble member, named <model><member_id>.tar.gz (example)
  • Each of the tar files contains several netCDF files representing different variables


These variables are already concatenated in time.

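For illustration, here is a minimal sketch of assembling one ensemble member locally from the example URL below; the local paths, and the assumption that the archive unpacks to a directory of `*.nc` files (one per variable), are guesses based on the screenshots:

```python
import tarfile
from pathlib import Path

import requests
import xarray as xr

# Example download URL from figshare (see "Example URLs" below).
url = "https://ndownloader.figshare.com/files/16129505"
archive = Path("member.tar.gz")

# Stream the tarball to disk.
with requests.get(url, stream=True) as r:
    r.raise_for_status()
    with archive.open("wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)

# Extract the per-variable netCDF files and merge them into a single dataset.
with tarfile.open(archive, "r:gz") as tf:
    tf.extractall("member")

ds = xr.merge([xr.open_dataset(p) for p in sorted(Path("member").glob("**/*.nc"))])
```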

Example URLs

https://ndownloader.figshare.com/files/16129505

I am actually having some trouble getting these from figshare. Has anyone here had experience pulling files from a collection/dataset on figshare? I'd be happy to dig into the figshare API and parse the HTTP links, but maybe there is something cleverer to do with these archive/DOI repositories like figshare/Zenodo?
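In case it helps, the public figshare v2 API can enumerate a collection's articles and their direct download URLs. A rough sketch, going from the public API docs (pagination beyond page_size=100 and error handling omitted):

```python
import requests

API = "https://api.figshare.com/v2"
COLLECTION_ID = 4568555  # from the collection URL above

# List the articles (individual figshare datasets) in the collection ...
articles = requests.get(
    f"{API}/collections/{COLLECTION_ID}/articles", params={"page_size": 100}
).json()

# ... then list the files in each article to get direct download URLs.
urls = {}
for article in articles:
    for file in requests.get(f"{API}/articles/{article['id']}/files").json():
        urls[file["name"]] = file["download_url"]
```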

Authorization

No; data are fully public

Transformation / Processing

This is pretty straightforward.

I'd suggest having one recipe per model (in a recipe dict) that simply combines the variables by merging them.

There should probably be some rechunking, but I need input from the actual users (cc @hatlenheimdalthea @galenmckinley) on the best chunking structure for their use cases (e.g. are the gap-filling models trained on single-time-step maps or on per-location time series?).
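As a rough sketch of that structure against the pangeo-forge-recipes API as I remember it (FilePattern / MergeDim / XarrayZarrRecipe): the model and variable names and the per-variable URLs below are placeholders, since in reality the variables live inside one tar.gz per member (see the discussion below), and the chunking is a stand-in pending user input.

```python
from pangeo_forge_recipes.patterns import FilePattern, MergeDim
from pangeo_forge_recipes.recipes import XarrayZarrRecipe

models = ["modelA", "modelB"]        # placeholder model names
variables = ["pCO2", "SST", "SSS"]   # placeholder variable names


def make_recipe(model):
    def make_url(variable):
        # Hypothetical per-variable URL; the actual files sit inside one
        # tar.gz per ensemble member, which still needs to be resolved.
        return f"https://example.org/{model}/{variable}.nc"

    pattern = FilePattern(make_url, MergeDim("variable", keys=variables))
    # target_chunks is a stand-in; one time step per chunk favors map-style access.
    return XarrayZarrRecipe(pattern, target_chunks={"time": 1})


recipes = {f"pco2-testbed-{model}": make_recipe(model) for model in models}
```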

Target Format

Zarr

Comments

No response

@jbusecke
Contributor Author

jbusecke commented Nov 7, 2022

I was also unable to find a license for this dataset. I assume it inherits the licenses of the underlying model datasets? Maybe @lgloege can help here.

@galenmckinley

We did put this at BCO-DMO, as required by our NSF funding: https://www.bco-dmo.org/dataset/840334. There are some more references there, in case that is useful.

I don't know anything more specific about licenses, but I concur with your assumption. I hope @lgloege can reply here.

@jbusecke
Contributor Author

jbusecke commented Nov 7, 2022

I have looked into this a bit more, but there is one aspect I am struggling with: each URL points to a tar file that contains multiple netCDF files, which then need to be merged in xarray.
This breaks the assumption that there is a 1:1 mapping between URLs and files. Has anybody solved this previously? @pangeo-forge/dev-team?

@rabernat
Contributor

@martindurant - do you know if it's possible for fsspec to index into a .tar.gz file the way it can with a .zip file? That is the key technical question. If so, we can use the same approach described in #90 (comment) to point at the individual files.

If not, we will not be able to ship this recipe without some more serious refactoring of pangeo-forge-recipes.

@martindurant

Offsets within a gzip stream are not possible. There are no block markers and sequences can even start mid-byte. I had some vague ideas about brute force options to find viable offsets, but nothing has come of them. Much better than tar.gz would be a tar of gzipped files (which would be a static version of zip), but no one does this.

We can already index into tar and zip, and have plans to index into block-compressed files like blosc and zstd (even bzip2!) but never gzip.
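For reference, this is what the existing zip/tar indexing looks like with fsspec's chained URLs; the archive URLs and member names here are hypothetical, and engine="h5netcdf" assumes the members are netCDF4/HDF5 files:

```python
import fsspec
import xarray as xr

# Indexing into a remote zip works: fsspec chains filesystems right-to-left,
# fetching byte ranges over HTTPS and reading the zip's central directory.
with fsspec.open("zip://member.nc::https://example.org/archive.zip") as f:
    ds = xr.open_dataset(f, engine="h5netcdf")

# A plain (uncompressed) tar also works, since member offsets come from headers:
with fsspec.open("tar://member.nc::https://example.org/archive.tar") as f:
    ds = xr.open_dataset(f, engine="h5netcdf")

# But .tar.gz does not: the gzip stream has no block index, so there is no way
# to seek to a member without decompressing everything before it.
```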

@rabernat
Contributor

Martin, thanks for the quick reply! That makes sense.

Just brainstorming workarounds here... @lgloege - is there any chance you could publish a new version of this dataset using .zip files instead of .tar.gz?
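If repacking is on the table, something like the following would do it; the archive names follow the <model><member_id> convention but are otherwise hypothetical:

```python
import tarfile
import zipfile

# Stream members out of the .tar.gz and into a .zip. Because zip compresses
# each member independently and keeps a central index, the result supports
# per-file random access (which a gzipped tar cannot).
with tarfile.open("modelA001.tar.gz", "r:gz") as tf, \
        zipfile.ZipFile("modelA001.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for member in tf.getmembers():
        if member.isfile():
            zf.writestr(member.name, tf.extractfile(member).read())
```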

@galenmckinley

@lgloege tells me he will work on this. He'll let us know when there's a new posting.
