
Join Slack   |   Documentation   |   Blog   |   Twitter

Test Suites for Validating ML Models & Data


Deepchecks is a Python package for comprehensively validating your machine learning models and data with minimal effort. This includes checks related to various types of issues, such as model performance, data integrity, distribution mismatches, and more.

Installation

Using pip

pip install deepchecks -U --user

Using conda

conda install -c deepchecks deepchecks
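To verify the installation, you can print the installed version (assuming the package exposes __version__, as most Python packages do):

python -c "import deepchecks; print(deepchecks.__version__)"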

Try it Out!

Head over to the Quickstart Notebook (https://docs.deepchecks.com/en/stable/examples/guides/quickstart_in_5_minutes.html?utm_source=github.com&utm_medium=referral&utm_campaign=readme&utm_content=try_it_out) and click the binder or colab badge to get it up and running, and then apply it to your own data and models.

Usage Examples

Running a Suite

A Suite runs a collection of Checks with optional Conditions added to them.

To see it in action, we recommend trying out the Quickstart Notebook mentioned above.

To run an existing suite, all you need to do is import the suite and run it with the required (suite-dependent) input parameters. The list of all built-in suites can be found here.

Let's take the "iris" dataset as an example:

from sklearn.datasets import load_iris
iris_df = load_iris(return_X_y=False, as_frame=True)['frame']

and run the single_dataset_integrity suite, which requires only a single Dataset and can also run directly on a pd.DataFrame, as in the following example:

from deepchecks.suites import single_dataset_integrity
suite = single_dataset_integrity()
suite.run(iris_df)

Running the suite will print its output, which starts with a summary of the check conditions:

Single Dataset Integrity Suite

The suite is composed of various checks such as: Mixed Data Types, Is Single Value, String Mismatch, etc.
Each check may contain conditions (which will result in pass / fail / warning, represented by ✓ / ✖ / !), as well as other outputs such as plots or tables.
Suites, checks and conditions can all be modified (see the Create a Custom Suite tutorial).


Conditions Summary

Status | Check                                      | Condition                                                                                 | More Info
✖      | Single Value in Column - Test Dataset      | Does not contain only a single value for all columns                                      | Columns containing a single value: ['target']
!      | Data Duplicates - Test Dataset             | Duplicate data is not greater than 0%                                                     | Found 2.00% duplicate data
✓      | Mixed Nulls - Test Dataset                 | Not more than 1 different null types for all columns                                      |
✓      | Mixed Data Types - Test Dataset            | Rare data types in all columns are either more than 10.00% or less than 1.00% of the data |
✓      | String Mismatch - Test Dataset             | No string variants for all columns                                                        |
✓      | String Length Out Of Bounds - Test Dataset | Ratio of outliers not greater than 0% string length outliers for all columns              |
✓      | Special Characters - Test Dataset          | Ratio of entirely special character samples not greater than 0.10% for all columns        |

This is followed by the visual outputs of all the checks in the suite, which are omitted here for brevity. The following section shows an example of how the output of a single check may look.

Running a Check

To run a specific single check, all you need to do is import it and run it with the required (check-dependent) input parameters. More details about the existing checks and the parameters they can receive can be found in our API Reference.

from deepchecks.checks import TrainTestFeatureDrift
import pandas as pd

train_df = pd.read_csv('train_data.csv')
test_df = pd.read_csv('test_data.csv')
# Initialize and run the desired check
TrainTestFeatureDrift().run(train_df, test_df)

This will produce output like the following:

Train Test Drift

The drift score is a measure of the difference between two distributions; in this check, the train and test distributions.
The check shows the drift score and distributions for the top 5 features, sorted by feature importance. If available, the plot titles also show the feature importance (FI) rank.
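As noted, feature importance is used when a model is available. A minimal sketch of passing a trained model so the check can sort plots by importance, assuming the CSV files above contain a 'target' label column (the column name and model choice are illustrative):

from sklearn.ensemble import RandomForestClassifier

# Train any scikit-learn-compatible model on the training features
# ('target' is a hypothetical label column in the CSV files above)
model = RandomForestClassifier().fit(
    train_df.drop(columns=['target']), train_df['target']
)

# The model is passed as an additional argument to run
TrainTestFeatureDrift().run(train_df, test_df, model)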

Key Concepts

Check

Each check enables you to inspect a specific aspect of your data and models. Checks are the basic building blocks of the deepchecks package, covering all kinds of common issues, such as:

  • Model Error Analysis
  • Label Ambiguity
  • Data Sample Leakage

and many more checks.

Each check can have two types of results:

  1. A visual result meant for display (e.g. a figure or a table).
  2. A return value that can be used for validating the expected check results (validations are typically done by adding a "condition" to the check, as explained below).
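For example, the return value can be inspected programmatically. A minimal sketch using the Data Duplicates check on the iris_df dataframe from the earlier example (the exact structure of value differs between checks):

from deepchecks.checks import DataDuplicates

# Running a check returns a CheckResult object
result = DataDuplicates().run(iris_df)

# result.value holds the check's return value; for this check,
# the ratio of duplicate samples in the data
print(result.value)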

Condition

A condition is a function that can be added to a Check, which returns a pass ✓, fail ✖ or warning ! result, intended for validating the Check's return value. An example of adding a condition:

from deepchecks.checks import BoostingOverfit
BoostingOverfit().add_condition_test_score_percent_decline_not_greater_than(threshold=0.05)

which will fail when the check runs if the best score achieved on the test set during the boosting iterations exceeds the score achieved in the last iteration (the model's "original" score on the test set) by more than 5%.

Suite

An ordered collection of checks, each of which can have conditions added to it. The Suite displays a concluding report for all of the Checks that ran. See the list of predefined existing suites to learn which suites you can work with directly, and see a code example demonstrating how to build your own custom suite (also sketched below). The existing suites include default conditions for most of the checks. You can edit the preconfigured suites or build a suite of your own from a collection of checks and optional conditions.
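As a rough sketch, building a small custom suite could look like the following, reusing iris_df from the earlier example. The check selection is illustrative, and the condition name (which follows the pattern shown in the conditions summary above) may differ between versions:

from deepchecks import Suite
from deepchecks.checks import IsSingleValue, DataDuplicates, MixedNulls

# Compose a custom suite from individual checks,
# adding a condition to one of them
custom_suite = Suite('My Custom Integrity Suite',
    IsSingleValue(),
    DataDuplicates().add_condition_ratio_not_greater_than(0.05),
    MixedNulls(),
)
custom_suite.run(iris_df)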

What Do You Need in Order to Start Validating?

Depending on your phase and what you wish to validate, you'll need a subset of the following:

  • Raw data (before pre-processing such as OHE, string processing, etc.), with optional labels
  • The model's training data with labels
  • Test data (which the model isn't exposed to) with labels
  • A model compatible with scikit-learn API that you wish to validate (e.g. RandomForest, XGBoost)

Deepchecks validation accompanies you from the initial phase when you have only raw data, through the data splits, and to the final stage of having a trained model that you wish to evaluate. Accordingly, each phase requires different assets for the validation.
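When labels (and models) enter the picture, the data is typically handed to deepchecks by wrapping each dataframe in a Dataset that marks the label column. A minimal sketch, assuming train/test dataframes with a 'target' label column (parameter names may vary slightly across versions):

from deepchecks import Dataset

# Wrap each dataframe, telling deepchecks which column holds the label
train_ds = Dataset(train_df, label='target')
test_ds = Dataset(test_df, label='target')

# Train/test suites and checks then take both Datasets, and optionally a model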

See more about typical usage scenarios and the built-in suites in the docs.

Documentation

Full documentation is available at https://docs.deepchecks.com.

Community

  • Join our Slack Community to connect with the maintainers and other users, and follow interesting discussions
  • Open a GitHub Issue to suggest improvements, report bugs, or share feedback
