DLH_utils


A Python package produced by the Linkage Development team in the Data Linkage Hub at the Office for National Statistics (ONS), containing a set of functions used to expedite and streamline the data linkage process.

Its key features include:

  • its scalability to large datasets, using Spark as a big-data backend
  • profiling and flagging functions used to describe and highlight issues in data
  • standardisation and cleaning functions to make data comparable ahead of linkage
  • linkage functions to derive linkage variables and join data together efficiently
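
As a flavour of the API, here is a minimal, hypothetical sketch of standardising a column ahead of linkage. The function name standardise_case and its parameters are illustrative placeholders rather than confirmed package API; see the demo repository below for real worked examples.

from pyspark.sql import SparkSession
from dlh_utils import standardisation

spark = SparkSession.builder.appName("linkage-demo").getOrCreate()

df = spark.createDataFrame(
    [(1, "  joHn "), (2, "JANE")],
    ["id", "forename"],
)

# Hypothetical call: upper-case a column so values compare consistently.
# Check the package documentation for the real function names and signatures.
df = standardisation.standardise_case(df, subset=["forename"], val="upper")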

Please log an issue on the issue board, or contact any of the active contributors, with any issues or suggestions for improvement.

Installation steps

DLH_utils supports Python 3.6+. To install the latest version, simply run:

pip install dlh_utils

Or, if using CDSW, in a terminal session run:

pip3 install dlh_utils

The -U argument can be used to upgrade the package to its newest version:

pip3 install -U dlh_utils

Demo

For a worked demonstration notebook of these functions being applied within a data linkage context, head over to our separate demo repository.

Contributing

This repository adheres to PEP 8 coding standards. These are checked automatically when you make new commits by the repository's pre-commit hooks. To get this working:

  • pip install both flake8 and pre-commit
  • install the git hook scripts by running pre-commit install
  • when adding new git commits, the pre-commit hooks will now run and flag any changes needed to adhere to PEP 8 standards
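
In a terminal session, that setup looks like:

pip install flake8 pre-commit
pre-commit install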

Common issues

When using the jaro/jaro_winkler functions, the error "no module called Jellyfish found" is thrown

These functions depend on the jellyfish package, which may not be installed on the executors used in your Spark session. Try submitting jellyfish to your SparkContext via addPyFile() (see the sketch after this list) or by setting the following environment variables in your CDSW engine settings (ONS only):

  • PYSPARK_DRIVER_PYTHON = /usr/local/bin/python3.6
  • PYSPARK_PYTHON = /opt/ons/virtualenv/miscMods_v4.04/bin/python3.6
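
A minimal sketch of the addPyFile() route looks like this; the archive path is illustrative, so point it at wherever a copy of the jellyfish package lives on your system:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("linkage").getOrCreate()

# Ship a local copy of the jellyfish package to every executor.
# The path below is illustrative -- adjust it to your environment.
spark.sparkContext.addPyFile("/path/to/jellyfish.zip")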

Using the cluster function

The cluster function uses GraphFrames, which requires an extra JAR file dependency to be submitted to your Spark context in order to run.

We have published a graphframes-wrapper package on PyPI that contains this JAR file; it is included in the package requirements as a dependency.

If you are outside ONS and this dependency doesn't work, you will need to submit the GraphFrames JAR file to your Spark context yourself. It can be downloaded here:

https://repos.spark-packages.org/graphframes/graphframes/0.6.0-spark2.3-s_2.11/graphframes-0.6.0-spark2.3-s_2.11.jar

Once downloaded, the JAR can be submitted to your Spark context by setting the spark.jars parameter in your SparkSession config. Note that this must be set when the session is built; setting it on an already-running session has no effect:

spark = SparkSession.builder.config('spark.jars', path_to_jar_file).getOrCreate()
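
For example, a fuller sketch assuming the JAR has been downloaded locally (the path is illustrative):

from pyspark.sql import SparkSession

# Adjust this to wherever you saved the downloaded JAR.
path_to_jar_file = "/home/user/jars/graphframes-0.6.0-spark2.3-s_2.11.jar"

spark = (
    SparkSession.builder
    .appName("linkage")
    .config("spark.jars", path_to_jar_file)
    .getOrCreate()
)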

Thanks

Thanks to all those in the Data Linkage Hub, Data Engineering and Methodology at ONS who have contributed to this repository.

Any questions?

If you need any additional help, or have any feedback on the package, please contact the Data Linkage Hub at Linkage.Hub@ons.gov.uk.
