
Dataverse Python Script(s)

Script to automate the process of adding potentially thousands of FITS files to a simple DOI within a Dataverse server. The MIME type of each file is detected automatically, and the appropriate metadata is extracted and added to the description of the DOI. If the MIME type is not recognized, the script assigns the file as binary (application/octet-stream) to prevent upload failure. Several shapefile MIME types are included in the script (application/x-esri-shape & application/x-qgis), but more can be added as needed.
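As an illustration of the detection logic, here is a minimal sketch using the standard-library mimetypes module as a stand-in (the script itself lists mimetype-description as a dependency, and the shapefile extension mappings below are assumptions):

import mimetypes

# Extra mappings for shapefile-related extensions that the default
# mimetypes table may not know about (extension choices are assumptions).
EXTRA_TYPES = {
    ".shp": "application/x-esri-shape",
    ".qgs": "application/x-qgis",
}

def detect_mime_type(path):
    """Guess a file's MIME type, falling back to binary to avoid upload failures."""
    for ext, mime in EXTRA_TYPES.items():
        if path.lower().endswith(ext):
            return mime
    guessed, _ = mimetypes.guess_type(path)
    return guessed or "application/octet-stream"

print(detect_mime_type("observation.fits"))  # a FITS type if known, otherwise the binary fallback
print(detect_mime_type("unknown.xyz"))       # application/octet-stream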

Files

  1. py_add_fits_files_to_dio.py: Rewrite of the original Bash script's behavior. See below.
  2. fits_extract.py: Extracts FITS metadata. See below.
  3. FITS_Description.md: Describes FITS files and what can be extracted from them. See below.
  4. generate_test_files.md: How to generate test files. See below.
  5. grouped_files.py: Groups a large number of files into zip archives. See below.

Using pyDataverse.api added options that would have been difficult to replicate in a Bash script.
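For example, connecting to a Dataverse installation and looking up a dataset with pyDataverse takes only a few lines (a sketch only; the base URL is the JHU archive from the requirements below, and the DOI is a placeholder, not a real dataset):

import os
from pyDataverse.api import NativeApi

BASE_URL = "https://archive.data.jhu.edu"
DOI = "doi:10.XXXX/EXAMPLE"  # placeholder; use the DOI you want to run on

# Authenticate with the API token exported as API_KEY (see the setup section below).
api = NativeApi(BASE_URL, os.environ["API_KEY"])
resp = api.get_dataset(DOI)
print(resp.json()["data"]["id"])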

Requirements

  1. Python 3 (tested on 3.10.12)
  2. The Python libraries "dvuploader" & "pyDataverse" installed
  3. A Dataverse API token (https://archive.data.jhu.edu/dataverseuser.xhtml; click on the API Token tab after logging in)
  4. The DOI to run on
  5. FITS files to process
  • All FITS files need to be together in a single directory (no subdirectories).

There are two setups suggested in this README:

  • Python virtual environment
  • Local setup: installing the script's dependencies with pip install is the conventional way to set up Python scripts, though this approach is becoming less favorable over time.

Set Up 'API_KEY' Before You Start

Before running the scripts, you need to obtain your API token from Dataverse. Setting the 'API_KEY' environment variable is optional, but it means you can pass '$API_KEY' to the scripts instead of typing the token into the terminal.

  1. Navigate to [Site_URL]/dataverseuser.xhtml?selectTab=dataRelatedToMe in your web browser.
  2. Click on the "API Token" tab.
  3. Copy the displayed token string.

Next, set the 'API_KEY' environment variable in your terminal:

For Linux and Mac:

Open your terminal and execute the following command, replacing 'xxxxxxxxxxxxxxxxxxxxxxxxxx' with your actual API token string:

export API_KEY='xxxxxxxxxxxxxxxxxxxxxxxxxx'

To make the 'API_KEY' persist across terminal sessions, you can add the above line to your '~/.bashrc', '~/.bash_profile', or '~/.zshrc' file, depending on your shell and operating system.

For Windows:

Open Command Prompt or PowerShell and execute the following command, replacing xxxxxxxxxxxxxxxxxxxxxxxxxx with your actual API token string:

Command Prompt:

set API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxx

PowerShell:

$env:API_KEY='xxxxxxxxxxxxxxxxxxxxxxxxxx'

To make the 'API_KEY' persist across sessions in Windows, you can set it as a user or system environment variable through the System Properties. This can be accessed by searching for "Edit the system environment variables" in the Start menu. In the System Properties window, click on the "Environment Variables" button, and then you can add or edit the 'API_KEY' variable under either User or System variables as needed.
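Once the variable is set, a script can read it from the environment instead of prompting for it; a minimal sketch of that pattern:

import os
import sys

# Read the token from the environment; fail early with a clear message if it is missing.
api_key = os.environ.get("API_KEY")
if not api_key:
    sys.exit("API_KEY is not set; export it first as described above.")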

Suggested Setup (virtual environment) - not required

Install pipenv to simplify dependency management and provide consistent environments across different installations; it should also avoid version conflicts with libraries that are already installed.

# Install pipenv
python -m pip install pipenv

# Install pyenv (Mac)
brew install pyenv

# Install pyenv (Linux)
git clone https://github.com/pyenv/pyenv.git $(python -m site --user-base)/.pyenv
echo 'export PYENV_ROOT="$(python -m site --user-base)/.pyenv"' >> ~/.bashrc
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(pyenv init --path)"' >> ~/.bashrc
source ~/.bashrc

# Install Python 3.10.12 using pyenv
pyenv install 3.10.12

# Create a virtual environment at a specific Python version
pipenv --python 3.10.12

# Two ways to install packages into the virtual environment.
# Either install the packages manually (note: shutil ships with the Python standard library and is not installed via pip):
pipenv install dvuploader pyDataverse mimetype-description astropy grequests requests
# OR use the Pipfile (preferred).
# This is useful for ensuring consistent environments across different installations.
pipenv install

# Optional: To run the following commands as python instead of pipenv run python
# Run a shell within the virtual environment
pipenv shell
# To exit the shell
exit
# To remove the virtual environment
pipenv --rm

Install libraries (locally)

# Note: shutil is part of the Python standard library and does not need to be installed.
python -m pip install dvuploader pyDataverse mimetype-description grequests requests

# Optional packages (for the fits_extract.py and grouped_files.py scripts)
python -m pip install astropy pandas

Note: "./" is a shorthand notation used by the computer to specify the execution of a file, especially when the file itself indicates that it's a Python script. In simpler terms, "python foo.py" and "./foo.py" essentially perform the same action.

To elaborate further, when running a script with pipenv instead of the local Python installation, you can simply replace the "./" notation with "pipenv run python". This executes the script within the virtual environment managed by pipenv.

For Example

# Run using the "Locally" installed
./py_add_fits_files_to_dio.py --help

# Run using pipenv
pipenv run python py_add_fits_files_to_dio.py --help

File Descriptions

These files can be executed independently or used as dependencies within other scripts. At the time of writing, none of them are called as dependencies.

py_add_fits_files_to_dio.py

Run help for options

# Run using the "Locally" installed
./py_add_fits_files_to_dio.py --help

# Run using pipenv
pipenv run python py_add_fits_files_to_dio.py --help

Support files

These files can be used independently of the main script.

FITS Metadata Extractor (fits_extract.py)

See fits_extract.md for details.
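The general idea of pulling metadata out of a FITS header with astropy looks like this (a sketch, not the actual fits_extract.py code; the keywords shown are common FITS header keywords and may not be present in every file):

from astropy.io import fits

def summarize_fits(path):
    """Return a small dict of commonly used FITS header keywords."""
    with fits.open(path) as hdul:
        header = hdul[0].header  # primary HDU header
        return {
            "object": header.get("OBJECT"),
            "date_obs": header.get("DATE-OBS"),
            "telescope": header.get("TELESCOP"),
            "naxis": header.get("NAXIS"),
        }

print(summarize_fits("example.fits"))  # "example.fits" is a placeholder path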

Generate Test Files (generate_test_files.py)

See generate_test_files.md for details.
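For reference, a tiny FITS test file can be generated with numpy and astropy along these lines (a sketch; the file name, size, and header value are arbitrary and not taken from generate_test_files.py):

import numpy as np
from astropy.io import fits

# Write a small primary HDU filled with random data to disk.
data = np.random.default_rng().random((64, 64))
hdu = fits.PrimaryHDU(data=data)
hdu.header["OBJECT"] = "TEST"
hdu.writeto("test_0001.fits", overwrite=True)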

Mime Type Checker (mimetype.py)

See mimetype.md for details.

Information on FITS files in general

See FITS_Description.md for details.

Grouping to zip large number of files (grouped_files.py)

See grouped_files.md for details.
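A minimal sketch of the grouping idea, chunking a directory's files into fixed-size zip archives with the standard library (not the actual grouped_files.py logic; the chunk size, naming scheme, and directory names are arbitrary):

import os
import zipfile

def zip_in_groups(src_dir, out_dir, group_size=500):
    """Pack files from src_dir into numbered zip archives of at most group_size files each."""
    names = sorted(os.listdir(src_dir))
    os.makedirs(out_dir, exist_ok=True)
    for i in range(0, len(names), group_size):
        archive = os.path.join(out_dir, f"group_{i // group_size + 1:03d}.zip")
        with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
            for name in names[i:i + group_size]:
                zf.write(os.path.join(src_dir, name), arcname=name)

zip_in_groups("fits_files", "zipped_groups")  # placeholder directory names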

Gotchas

  1. Processing order: there is no guarantee of the order in which the system reads in the files. The script sorts file names alphabetically, but that does not mean they will be processed in that order, which matters when file order is important to the user (see the sketch after this list).
  2. Subdirectories with FITS files: if the directory has subdirectories, the expected behavior needs to be discussed and this code modified accordingly.
  3. Large number of files: a large number of files will cause the script to take a long time to run. Ingestion of data in Dataverse is currently handled natively within Java, using a single-threaded process that reads each cell, row by row, column by column. Ingestion performance mainly depends on the clock speed of a single core.
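To illustrate the ordering caveat from item 1: an alphabetical (lexicographic) sort does not match numeric order, so numbered files may not be processed in the order a user expects.

# Lexicographic sorting puts "file10.fits" before "file2.fits".
names = ["file10.fits", "file2.fits", "file1.fits"]
print(sorted(names))  # ['file1.fits', 'file10.fits', 'file2.fits']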

ToDos

...

Troubleshooting

  • error:
    • SystemError: (libev) error creating signal/async pipe: Too many open files
  • solution (for Mac & Linux):
    • ulimit -n 4096

References

  1. Sample FITS File
