
Releases: gdcc/pyDataverse

v0.3.2

10 May 16:01

We are excited to announce the release of the latest patch version of pyDataverse after a significant period of inactivity. This update brings a range of new functionalities and bug fixes, aimed at improving the stability and performance of the pyDataverse library.

The library is now equipped with a CI/CD pipeline to ensure consistent integration with Dataverse. To achieve this, we use the Dataverse Action, which builds on the work of the Dataverse Containerization Working Group to spin up local Dataverse instances with ease. This makes contributions safer and pull requests easier to test.

PyDataverse has also switched from the requests library to HTTPX, a powerful library for performing HTTP requests. HTTPX offers better performance and compatibility and supports async requests, which were previously not possible. For more information on how to use the new async support, please refer to PR #175 for now.
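
A rough sketch of what async usage could look like. The exact interface is documented in PR #175; the async-context-manager pattern below, the awaitable request methods, the base URL and the DOI are all assumptions or placeholders, so check the PR and the docs before relying on them.

```python
import asyncio

from pyDataverse.api import NativeApi

BASE_URL = "https://demo.dataverse.org"  # hypothetical instance


async def fetch_datasets(pids):
    api = NativeApi(BASE_URL)
    # Assumed pattern from PR #175: the API object acts as an async
    # context manager, and request methods can be awaited inside it.
    async with api:
        return await asyncio.gather(*(api.get_dataset(pid) for pid in pids))


# responses = asyncio.run(fetch_datasets(["doi:10.12345/EXAMPLE"]))
```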

Finally, pyDataverse's build and dependency management has moved from setup.py to pyproject.toml. Now the de facto standard for packaging Python projects, pyproject.toml offers numerous advantages over setup.py while remaining compatible with the pip installer.

What's Changed

Fixes

New Contributors

Full Changelog: 0.3.1...v0.3.2

Chat with us!

If you are interested in the development of pyDataverse, we invite you to join us for a chat on our Zulip Channel. This is the perfect place to discuss and exchange ideas about the development of pyDataverse. Whether you need help or have ideas to share, feel free to join us!

PyDataverse Working Group

We have formed a pyDataverse working group to exchange ideas and collaborate on pyDataverse. There is a bi-weekly meeting planned for this purpose, and you are welcome to join us by clicking the following WebEx meeting link. For a list of all the scheduled dates, please refer to the Dataverse Community calendar.

0.3.1

06 Apr 11:47

A small bugfix for #126.

For help or general questions please have a look in our Docs or email stefan.kasberger@univie.ac.at.

Bugs

  • Fix: missing topicClassVocabURI value in Dataset model (#126)

Thanks

Thanks to Karin Faktor for finding the bug.

PyDataverse is supported by AUSSDA and by funding as part of the Horizon2020 project SSHOC.

v0.3.0 - Ruth Wodak

27 Jan 01:47

This release brings big changes to many parts of the package. It adds new APIs, refactored models and lots of new documentation.

Overview of the most important changes:

  • Refactored data models: setters, getters, data validation and JSON export and import
  • Export and import of metadata to/from pre-formatted CSV templates
  • Add User Guides, Use-Cases, Contributor Guide and much more to the documentation
  • Add SWORD, Search, Metrics and Data Access API
  • Collect the complete data tree of a Dataverse with get_children()
  • Use JSON schemas for metadata validation (jsonschema required)
  • Updated Python requirements: Python>=3.6 (no Python 2 support anymore)
  • curl is required only for update_datafile()
  • Transfer pyDataverse to GDCC - the Global Dataverse Community Consortium (#52)

Version 0.3.0 is named in honor of Ruth Wodak (Wikipedia), an Austrian linguist. Her work is mainly located in discourse studies, more specifically in critical discourse analysis, which looks at discourse as a form of social practice. She was awarded the Wittgenstein-Preis, the highest Austrian science award.

For help or general questions please have a look in our Docs or email stefan.kasberger@univie.ac.at.

Use-Cases

The new functionalities were developed with some specific use cases in mind.

See our Documentation for more details.

Retrieve data structure and metadata from Dataverse instance (DevOps)

Collect all Dataverses, Datasets and Datafiles of a Dataverse instance, or just a part of it. The results can then be stored in JSON files, which can be used for testing purposes, like checking the completeness of data after a Dataverse upgrade or migration.
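
A minimal sketch of this use case with the functions named in this release (get_children(), dataverse_tree_walker(), save_tree_data()). The base URL, API token and Dataverse alias are placeholders, and the exact argument names, return shapes and output file names are assumptions; check the User Guide for the details.

```python
from pyDataverse.api import NativeApi
from pyDataverse.utils import dataverse_tree_walker, save_tree_data

BASE_URL = "https://demo.dataverse.org"  # hypothetical instance
API_TOKEN = "xxxxxxxx"                   # hypothetical API token

api = NativeApi(BASE_URL, API_TOKEN)

# Collect the full data tree below a Dataverse (alias is a placeholder).
tree = api.get_children("my_dataverse", children_types=["dataverses", "datasets", "datafiles"])

# Split the tree into flat lists of Dataverses, Datasets and Datafiles ...
dataverses, datasets, datafiles = dataverse_tree_walker(tree)

# ... and store them as separate JSON files for later comparison.
save_tree_data(dataverses, datasets, datafiles)
```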

Upload and removal of test data (DevOps)

For testing, you often have to upload a collection of data and metadata, which should be removed after the test is finished. For this, we offer easy-to-use functionalities.

Import data from CSV templates (Data Scientist)

Importing lots of data from data sources outside Dataverse can be done with the CSV templates as a bridge. Fill the CSV templates with your data, by machine or by hand, and import them into pyDataverse for an easy mass upload via the Dataverse API.
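
A minimal sketch of this workflow, using the names listed in these notes (read_csv_to_dict(), Dataset.set(), validate_data(), json(), create_dataset()). The CSV file name, Dataverse alias, base URL and token are placeholders, and the exact signatures are assumptions; see the User Guide for the full template workflow.

```python
from pyDataverse.api import NativeApi
from pyDataverse.models import Dataset
from pyDataverse.utils import read_csv_to_dict

BASE_URL = "https://demo.dataverse.org"  # hypothetical instance
API_TOKEN = "xxxxxxxx"                   # hypothetical API token

api = NativeApi(BASE_URL, API_TOKEN)

# Each row of the filled-in CSV template becomes one Dataset.
for row in read_csv_to_dict("datasets.csv"):
    ds = Dataset()
    ds.set(row)          # load the row into the model
    ds.validate_data()   # validate against the bundled JSON schema
    api.create_dataset("my_dataverse", ds.json())
```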

Bugs

  • Missing JSON schemas (#56)
  • Datafile metadata title (#50)
  • Error long_description_content_type (#4)

Features & Enhancements

API

Summary: Add other APIs alongside the Native API and update the Native API. A short usage sketch follows the list below.

  • add Data Access API:
    • get datafile(s) (get_datafile(), get_datafiles(), get_datafile_bundle())
    • request datafile access (request_access(), allow_access_request(), grant_file_access(), list_file_access_requests())
  • add Metrics API:
    • total(), past_days(), get_dataverses_by_subject(), get_dataverses_by_category(), get_datasets_by_subject(), get_datasets_by_data_location()
  • add SWORD API:
    • get_service_document()
  • add Search API:
    • search()
  • Native API:
    • Get all child data types of a Dataverse or a Dataset in a tree structure (get_children())
    • Convert a Dataverse ID to its alias (dataverse_id2alias())
    • Get contents of a Dataverse (Datasets, Dataverses) (get_dataverse_contents())
    • Get Dataverse assignments (get_dataverse_assignments())
    • Get Dataverse facets (get_dataverse_facets())
    • Edit Dataset metadata (edit_dataset_metadata()) (#19)
    • Destroy Dataset (destroy_dataset())
    • Dataset private URL functionalities (create_dataset_private_url(), get_dataset_private_url(), delete_dataset_private_url())
    • Get Dataset version(s) (get_dataset_versions(), get_dataset_version())
    • Get Dataset assignments (get_dataset_assignments())
    • Check if Dataset is locked (get_dataset_lock())
    • Get Datafile metadata (get_datafiles_metadata())
    • Update datafile metadata (update_datafile_metadata())
    • Redetect Datafile file type (redetect_file_type())
    • Restrict Datafile (restrict_datafile())
    • Reingest and uningest Datafiles (reingest_datafile(), uningest_datafile())
    • Datafile upload in native Python (no curl dependency anymore) (upload_datafile())
    • Replace an existing Datafile (replace_datafile())
    • Roles functionalities (get_dataverse_roles(), create_role(), show_role(), delete_role())
    • Add API token functionalities (get_user_api_token_expiration_date(), recreate_user_api_token(), delete_user_api_token())
    • Get current user data (get_user()) (#59)
    • Get the API Terms of Use (get_info_api_terms_of_use())
    • Add import of existing Dataset in create_dataset() (#3)
  • Api
    • Set the User-Agent header for requests to pydataverse
    • Change authentication in the request functions (get, post, delete, put): if an API token is passed, it is used; if not, it is not set. The auth parameter is no longer used.
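
A short sketch of the new API surface, one call per API. The class names are the ones the pyDataverse API module uses for these endpoints, and the identifiers, ids and query strings are placeholders; constructor and argument details are assumptions, so check the API reference for your version.

```python
from pyDataverse.api import DataAccessApi, MetricsApi, NativeApi, SearchApi

BASE_URL = "https://demo.dataverse.org"  # hypothetical instance
API_TOKEN = "xxxxxxxx"                   # hypothetical API token

native = NativeApi(BASE_URL, API_TOKEN)
data_access = DataAccessApi(BASE_URL, API_TOKEN)
metrics = MetricsApi(BASE_URL)
search = SearchApi(BASE_URL)

# Native API: list the contents (Datasets, Dataverses) of a Dataverse.
contents = native.get_dataverse_contents("my_dataverse")

# Data Access API: download a Datafile by its id (placeholder id).
datafile = data_access.get_datafile("42")

# Metrics API: total counts for the installation.
totals = metrics.total("datasets")

# Search API: query the installation.
results = search.search("climate")
```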

Models

Summary: Refactoring of all models (Dataverse, Dataset, Datafile).

New methods (a usage sketch follows the list):

  • from_json() imports JSON (like Dataverse's own JSON format) into a pyDataverse model object
  • get() returns a dict of the pyDataverse model object
  • json() returns a JSON string (like Dataverse's own JSON format) of the pyDataverse model object, mostly used for API uploads
  • validate_data() validates a pyDataverse object with a JSON schema
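
A minimal sketch that strings the four methods together for a Dataset. The file "dataset.json" is a placeholder in Dataverse's own JSON format; optional parameters (such as format or schema selection) are left at their defaults and are assumptions.

```python
from pyDataverse.models import Dataset

ds = Dataset()

# Import Dataverse-style JSON into the model object.
with open("dataset.json", "r", encoding="utf-8") as f:
    ds.from_json(f.read())

# Inspect the metadata as a plain dict.
metadata = ds.get()

# Validate the object against the bundled JSON schema.
ds.validate_data()

# Export it back to Dataverse JSON, e.g. for an API upload.
upload_json = ds.json()
```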

Utils

  • Save list of metadata (Dataverses, Datasets or Datafiles) to a CSV file (write_dicts_as_csv()) (#11)
  • Walk through the data tree from get_children() and extract Dataverses, Datasets and Datafiles (dataverse_tree_walker())
  • Store the results from dataverse_tree_walker() in separate JSON files (save_tree_data())
  • Validate any data model dictionary (Dataverse, Dataset, Datafile) against a JSON schema (validate_data())
  • Clean strings (trim whitespace) (clean_string())
  • Create URLs from identifiers (create_dataverse_url(), create_dataset_url(), create_datafile_url())
  • Update read_csv_to_dict(): replace dv. prefix, load JSON cells and convert boolean cell strings
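
A small sketch of the utility helpers listed above. The DOI and file names are placeholders, and the argument order for create_dataset_url() and write_dicts_as_csv() is an assumption; check the utils reference.

```python
from pyDataverse.utils import clean_string, create_dataset_url, write_dicts_as_csv

# Trim stray whitespace from a metadata value.
title = clean_string("  My Dataset  ")

# Build a Dataset landing-page URL from a base URL and an identifier
# (argument order is an assumption; the DOI is a placeholder).
url = create_dataset_url("https://demo.dataverse.org", "doi:10.12345/EXAMPLE")

# Write a list of metadata dicts to a CSV file
# (assumed argument order: data, fieldnames, filename).
rows = [{"pid": "doi:10.12345/EXAMPLE", "title": title}]
write_dicts_as_csv(rows, ["pid", "title"], "datasets.csv")
```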

Docs

Many new pages and tutorials:

Tests

  • Add tests for new functions
  • Re-factor existing tests
  • Create fixtures
  • Create test data

Miscellaneous

  • Add Python 3.8 support; remove Python 2.7, 3.4 and 3.5 (Python>=3.6 required now)
  • Add jsonschema as requirement
  • Add JSON schemas for Dataverse upload, Dataset upload, Datafile upload and DSpace to package
  • Add CSV templates for Dataverses, Datasets and Datafiles from pyDataverse_templates
  • Transfer pyDataverse to GDCC - the Global Dataverse Community Consortium (#52)
  • Improve code formatting: black, isort, pylint, mypy, pre-commit
  • Add pylint linter
  • Add mypy type checker
  • Add pre-commit for managing pre-commit hooks.
  • Add radon code metrics
  • Add GitHub templates (PR, issues, commit) (#57)
  • Re-structure requirements
  • Get a DOI (10.5281/zenodo.4470151) for the GitHub repository

Other

Thanks to Daniel Melichar (@dmelichar), Vyacheslav Tykhonov (Slava), GDCC, @ecowan, @BPeuch, @j-n-c and @ambhudia for their support for this release. Special thanks to the Pandas project for their great blueprint for the Contributor Guide.

PyDataverse is supported by funding as part of the Horizon2020 project SSHOC.

v0.2.1

19 Jun 15:52

This release fixes a bug in the Dataset.dict() generation.

For help or general questions please have a look in our Docs or email stefan.kasberger@univie.ac.at.

Bug Fixes

  • FIXED: accessing the attributes series, socialScienceNotes and targetSampleSize caused an error in Dataset.dict(), because the contained sub-values were stored directly in their own class attributes.

Contribute

To find out how you can contribute, please have a look at the Contributor Guide. No contribution is too small!

The most important contribution you can make right now is to use the module. It would be great if you install it, run some code on your machine, access your own Dataverse instance if possible, and give feedback afterwards (contact).

About pyDataverse

pyDataverse includes a collection of functionalities to import, export and manipulate data and its metadata via the Dataverse API.

-- Greetz, Stefan Kasberger

v0.2.0 - Ida Pfeiffer

18 Jun 03:38

This release adds functionalities to import, manipulate and export the metadata of Dataverses, Datasets and Datafiles.

Version 0.2.0 is named in honor of Ida Pfeiffer (Wikipedia), an Austrian traveler and travel-book author. She went on several journeys around the world, where she collected plants, insects, mollusks, marine life and mineral specimens, and brought most of them back home to the Natural History Museum of Vienna.

For help or general questions please have a look in our Docs or email stefan.kasberger@univie.ac.at.

Features

  • add Dataverse API metadata functionalities (see the sketch after this list):
    • set allowed attributes via a list of dict()
    • import of Dataverse and Dataset metadata from Dataverse API JSON
    • validity check of Dataverse, Dataset and Datafile attributes necessary for Dataverse API upload
    • export Dataverse, Dataset and Datafile attributes as dict() and JSON
    • export Dataverse and Dataset metadata JSON necessary for Dataverse API upload
    • tests for Dataverse, Dataset and Datafile
  • add PUT request and edit metadata request to Api() (PR #8)
  • read in CSV files and convert them to Dataverse-compatible dict()s for automatic import of datasets into a Dataset() object
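
A minimal sketch of the metadata workflow introduced in this release. The attribute names in the dict are illustrative assumptions, and is_valid() stands for the validity check described above; only dict() and json() are named elsewhere in these notes, so treat the rest as assumptions.

```python
from pyDataverse.models import Dataset

# Build a Dataset from a plain dict of allowed attributes
# (attribute names are illustrative assumptions).
ds = Dataset()
ds.set({
    "title": "My Study",
    "author": [{"authorName": "Doe, Jane"}],
})

# Check whether everything needed for a Dataverse API upload is present
# (is_valid() is an assumed name for the validity check described above).
if ds.is_valid():
    metadata_dict = ds.dict()   # export as dict()
    upload_json = ds.json()     # export as upload JSON for the API
```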

Improvements

  • improved documentation: added docstrings where missing, cleaned them up and added examples
  • added PyPI test to tox.ini
  • added test fixtures for frequently used functions inside tests

Dependencies

  • set a minimum requests version: requests>=2.12.0 is now required

Contribute

From the 18th to the 22nd of June 2019, pyDataverse's main developer Stefan Kasberger will be at the Dataverse Community Conference in Cambridge, MA, to exchange ideas about pyDataverse and develop it further. If you are interested and around, drop by and join us. If you cannot attend, you can connect with us via the Dataverse Chat.

To find out how you can contribute, please have a look at the Contributor Guide. No contribution is too small!

The most important contribution you can make right now is to use the module. It would be great if you install it, run some code on your machine, access your own Dataverse instance if possible, and give feedback afterwards (contact).

Another way is to share this release with others who could be interested (e.g. retweet my tweet or send an email).

About pyDataverse

pyDataverse includes a collection of functionalities to import, export and manipulate data and its metadata via the Dataverse API.

https://twitter.com/stefankasberger/status/1140832352517668864

Thanks to Ajax23 for PR #8. Great contribution, and it's always amazing to see the idea of Open Source in action. :)

-- Greetz, Stefan Kasberger

v0.1.1

28 May 16:03

This release is a quick bugfix. It adds requests to the install requirements and updates the packaging and testing configuration.

For help or general questions please have a look in our Docs or email stefan.kasberger@univie.ac.at.

Bugfixes

Improvements

  • cleaned setup.py
  • add badges to index.rst
  • cleaned tools/tests-requirements.txt
  • tox.ini: add python versions, add dist test, add pypitest test, clean up and re-structure configuration
  • update docs

Contribute

To find out how you can contribute, please have a look at the Contributor Guide. No contribution is too small!

The most important contribution right now is simply to use the module. It would be great if you install it, run some code on your machine, access your own Dataverse instance if possible, and give feedback afterwards (contact).

About pyDataverse

pyDataverse includes the most basic data operations to import and export data via the Dataverse API. The functionality will be expanded in the coming weeks with more requests and a class-based data model for the metadata. This will make it easy to import and export metadata and upload it directly to the API.

Thanks to @moumenuisawe for mentioning this bug.

-- Greetz, Stefan Kasberger

v0.1.0 - Marietta Blau

22 May 11:00

This is the initial release of pyDataverse. It offers basic features to access the Dataverse API via Python: create, retrieve, publish and delete Dataverses, Datasets and Datafiles.

Version 0.1.0 is named in honor of Marietta Blau (Wikipedia), an Austrian researcher in the field of particle physics. In 1950, she was nominated for the Nobel prize for her contributions.

For help or general questions please have a look in our Docs or email stefan.kasberger@univie.ac.at.

Features

  • api.py (see the sketch after this list):
    • Make GET, POST and DELETE requests.
    • Create, retrieve, publish and delete Dataverses via the API
    • Create, retrieve, publish and delete Datasets via the API
    • Upload and retrieve Datafiles via the API
    • Retrieve server information and metadata via the API
  • utils.py: File IO and data conversion functionalities to support API operations
  • exceptions.py: Custom exceptions
  • tests/*.py: Tests with test data in pytest, run with tox on Travis CI
  • Documentation with Sphinx, published on Read the Docs
  • Package on PyPI
  • Open Source (MIT)
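
A minimal sketch of the basic API access this release provides. The class name Api and the method names are assumptions based on the feature list above and later releases, and the base URL, token and identifiers are placeholders.

```python
from pyDataverse.api import Api

BASE_URL = "https://demo.dataverse.org"  # hypothetical instance
API_TOKEN = "xxxxxxxx"                   # hypothetical API token

api = Api(BASE_URL, API_TOKEN)

# Retrieve a Dataverse and a Dataset (identifiers are placeholders).
dv_resp = api.get_dataverse("my_dataverse")
ds_resp = api.get_dataset("doi:10.12345/EXAMPLE")

# Responses follow the Dataverse API envelope with "status" and "data" keys.
print(dv_resp.json()["data"])
```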

Contribute

To find out how you can contribute, please have a look at the Contributor Guide. No contribution is too small!

The most important contribution right now is simply to use the module. It would be great if you install it, run some code on your machine, access your own Dataverse instance if possible, and give feedback afterwards (contact).

Another way is to share this release with others who could be interested (e.g. retweet my tweet or send an email).

About pyDataverse

pyDataverse includes the most basic data operations to import and export data via the Dataverse API. The functionality will be expanded in the coming weeks with more requests and a class-based data model for the metadata. This will make it easy to import and export metadata and upload it directly to the API.

Thanks to dataverse-client-python for being the main orientation and input for the start of pyDataverse. Also thanks to @kaczmirek, @pdurbin, @djbrooke and @4tikhonov for their support on this.

-- Greetz, Stefan Kasberger