A general purpose recommender metrics library for fair evaluation.


reXmeX is a recommender system evaluation metric library.

Please look at the Documentation and External Resources.

reXmeX consists of utilities for recommender system evaluation. First, it provides a comprehensive collection of metrics for the evaluation of recommender systems. Second, it includes a variety of methods for reporting and plotting the performance results. The implemented metrics cover a range of well-known measures as well as newly proposed ones from data mining conferences (ICDM, CIKM, KDD) and prominent journals.


An introductory example

The following example loads a synthetic dataset which has the source_id, target_id, source_group, and target_group keys besides the mandatory y_true and y_scores. The dataset has binary labels and predicted probability scores. We read the dataset and define a default ClassificationMetricSet instance for the evaluation of the predictions. Using this metric set we create a score card, group the predictions by the source_group key, and return a performance metric report.

from rexmex.scorecard import ScoreCard
from rexmex.dataset import DatasetReader
from rexmex.metricset import ClassificationMetricSet

# Load the synthetic dataset with y_true labels and y_scores probabilities.
reader = DatasetReader()
scores = reader.read_dataset()

# A default metric set with the standard classification metrics.
metric_set = ClassificationMetricSet()

# The score card evaluates the predictions with every metric in the set.
score_card = ScoreCard(metric_set)

# Group the predictions by the source_group key and report per-group metrics.
report = score_card.generate_report(scores, grouping=["source_group"])

Scorecard

A rexmex score card allows the reporting of recommender system performance metrics, plotting the performance metrics, and saving them. Our framework provides 7 rating and 38 classification metrics, alongside dedicated ranking and coverage metric sets.
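Because the score card is constructed from a metric set, the same pattern scales to several grouping keys. A minimal sketch reusing the introductory example's API, and assuming, as there, that the report comes back as a pandas table:

from rexmex.scorecard import ScoreCard
from rexmex.dataset import DatasetReader
from rexmex.metricset import ClassificationMetricSet

# Evaluate the synthetic dataset per (source_group, target_group) pair.
scores = DatasetReader().read_dataset()
score_card = ScoreCard(ClassificationMetricSet())
report = score_card.generate_report(scores, grouping=["source_group", "target_group"])
print(report)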

Metric Sets

Metric sets allow users to calculate a range of evaluation metrics for a pair of ground-truth and predicted value vectors. We provide a general MetricSet class and specialized metric sets with pre-set metrics in the following general categories (see the sketch after the list):

  • Rating
  • Classification
  • Ranking
  • Coverage
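As a sketch of how a metric set can be used directly, assuming it behaves like a mapping from metric names to metric functions that take a (y_true, y_scores) pair (an assumption about the internals, not documented API):

import numpy as np
from rexmex.metricset import ClassificationMetricSet

y_true = np.array([0, 1, 1, 0, 1])
y_scores = np.array([0.1, 0.8, 0.6, 0.4, 0.9])

# Apply every named metric in the set to the ground truth - prediction pair.
metric_set = ClassificationMetricSet()
for name, metric in metric_set.items():
    print(name, metric(y_true, y_scores))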

Rating Metric Set

These metrics assume that items are scored explicitly and ratings are predicted by a regression model.

The full list of rating metrics in this set is available in the documentation.
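A minimal sketch of evaluating explicit rating predictions with a RatingMetricSet, assuming generate_report also accepts an ungrouped frame with only the mandatory y_true and y_scores columns:

import pandas as pd
from rexmex.scorecard import ScoreCard
from rexmex.metricset import RatingMetricSet

# Toy data: explicit ratings and the regression model's predictions.
scores = pd.DataFrame({
    "y_true": [4.0, 3.0, 5.0, 2.0],
    "y_scores": [3.8, 2.5, 4.9, 2.4],
})

score_card = ScoreCard(RatingMetricSet())
report = score_card.generate_report(scores)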

Classification Metric Set

These metrics assume that the items are scored with raw probabilities (these can be binarized).

The full list of classification metrics in this set is available in the documentation.
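Binarization itself is a one-liner; a plain numpy illustration (not a rexmex API) of thresholding raw probabilities before applying label-based metrics:

import numpy as np

y_scores = np.array([0.05, 0.92, 0.61, 0.33])

# Binarize with a 0.5 decision threshold: probabilities become hard labels.
y_pred = (y_scores >= 0.5).astype(int)  # -> [0, 1, 1, 0]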

Ranking Metric Set

These metrics assume that the recommender returns items as a ranked list and evaluate the quality of the predicted ordering.

The full list of ranking metrics in this set is available in the documentation.
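As an illustration of what such metrics measure, here is a hand-rolled NDCG@k, a standard ranking metric (a sketch, not the library's implementation):

import numpy as np

def ndcg_at_k(relevance, k):
    # Discounted cumulative gain of the predicted order...
    rel = np.asarray(relevance, dtype=float)[:k]
    dcg = np.sum(rel / np.log2(np.arange(2, rel.size + 2)))
    # ...normalized by the gain of the ideal (relevance-sorted) order.
    ideal = np.sort(np.asarray(relevance, dtype=float))[::-1][:k]
    idcg = np.sum(ideal / np.log2(np.arange(2, ideal.size + 2)))
    return dcg / idcg if idcg > 0 else 0.0

# Ground-truth relevance labels of the items, in predicted rank order.
print(ndcg_at_k([1, 0, 1, 1, 0], k=5))  # ~0.906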

Coverage Metric Set

These metrics measure how well the recommender system covers the available items in the catalog; in other words, they measure the diversity of the predictions.
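For instance, catalog coverage is the share of the catalog that is recommended to at least one user; a self-contained sketch (hypothetical helper, not a rexmex API):

def catalog_coverage(recommended_lists, catalog):
    # Fraction of catalog items appearing in at least one recommendation list.
    recommended = {item for items in recommended_lists for item in items}
    return len(recommended & set(catalog)) / len(catalog)

recommendations = [["a", "b"], ["a", "c"], ["b", "d"]]
print(catalog_coverage(recommendations, catalog=["a", "b", "c", "d", "e"]))  # 0.8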


Documentation and Reporting Issues

Head over to our documentation to find out more about installation and data handling, a full list of implemented methods, and datasets. For a quick start, check out our examples.

If you notice anything unexpected, please open an issue and let us know. If you are missing a specific method, feel free to open a feature request. We are motivated to constantly make RexMex even better.


Installation via the command line

RexMex can be installed with the following command after the repo is cloned.

$ python setup.py install

Installation via pip

RexMex can be installed with the following pip command.

$ pip install rexmex

As we create new releases frequently, upgrading the package regularly is beneficial.

$ pip install rexmex --upgrade

Running tests

$ pytest ./tests/unit --cov rexmex/
$ pytest ./tests/integration --cov rexmex/

License
