Ensembles of knowledge graph embedding models improve predictions for drug discovery

License: CC BY-NC 4.0

This repository contains the source code and data accompanying the paper "Ensembles of knowledge graph embedding models improve predictions for drug discovery".

Overview

In this work, we investigate the performance of ten knowledge graph embedding models (KGEMs) on two public biomedical knowledge graphs (KGs). To date, the standard practice has been to pick the single KGEM that yields the highest precision on its top prioritized links. In this paper, we take a different route and propose ensemble learning on KGEMs for drug discovery: we assess whether combining the predictions of several models leads to an overall improvement in predictive performance. By benchmarking the precision (i.e., the number of true positives among the top predictions) of the ensembles against the individual KGEMs, we show that such ensembles can indeed outperform the original models.

The ten KGEMs investigated in this paper were benchmarked on two public knowledge graphs: OpenBioLink and BioKG.

The figure below shows the distribution of the Precision@100 achieved by each model, trained with different hyperparameter configurations, on the OpenBioLink and BioKG KGs.

Installation

Dependencies

The dependencies required to run the notebooks can be installed as follows:

$ pip install -r requirements.txt

The code relies primarily on the PyKEEN package, which uses PyTorch behind the scenes for gradient computation. If you want to train the models from scratch, it is advisable to first install a GPU-enabled version of PyTorch; installation instructions are available at https://pytorch.org/get-started/locally/.
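Before launching a training run, you can quickly check whether PyTorch can see a GPU; a minimal sketch:

import torch

# Report whether a CUDA-capable GPU is visible to PyTorch.
if torch.cuda.is_available():
    print(f"Using GPU: {torch.cuda.get_device_name(0)}")
else:
    print("No GPU detected; training will fall back to the CPU and be considerably slower.")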

Reproducing Experiments

This repository contains code to replicate the experiments detailed in the accompanying manuscript. Each model is trained on a GPU server using the train_model.py script.

Please note that the trained models are saved in the models directory at the root of this repository, within their respective KG subdirectories.
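For orientation, the sketch below shows how a single KGEM can be trained and saved with the PyKEEN pipeline. It is only an illustration of the kind of run that train_model.py performs; the dataset, model, hyperparameters, and output path are placeholders, not the script's actual configuration.

from pykeen.pipeline import pipeline

# Train a single KGEM (here RotatE) on PyKEEN's built-in OpenBioLink dataset.
# train_model.py uses its own splits and hyperparameters; these values are illustrative.
result = pipeline(
    dataset="OpenBioLink",
    model="RotatE",
    training_kwargs=dict(num_epochs=100),
    random_seed=42,
)

# Persist the trained model and its evaluation metrics.
result.save_to_directory("models/openbiolink/rotate")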

Trained models and predictions

All of the above-mentioned models trained on the two KGs, together with their respective predictions, can be found on Zenodo.

Results and outcomes

We found that the baseline ensemble models outperformed each of the individual ones at all investigated values of K, highlighting the benefit of applying ensemble learning to KGEMs. The figure below shows the Precision@K on the test set for different values of K in the OpenBioLink and BioKG KGs. For predefined values of K, the Precision@K for the top predicted drug-disease triples is displayed for two ensembles (i.e., ensemble-all and ensemble-top5) and two individual KGEMs (i.e., RotatE and ConvE), using the 99th (BioKG) and 95th (OpenBioLink) percentile normalization approach. Although the latter two KGEMs are the two best performing benchmarked models, the ensembles outperform each of them.
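To illustrate the ensembling idea (a rough sketch, not the implementation in src/ensemble.py), the snippet below clips each model's prediction scores at a high percentile, rescales them to a common [0, 1] range, and averages the normalized scores across models before ranking the candidate triples; the percentile value and the toy score matrix are placeholders.

import numpy as np

def percentile_normalize(scores: np.ndarray, percentile: float = 99.0) -> np.ndarray:
    # Clip scores at the given percentile and rescale them to [0, 1].
    cap = np.percentile(scores, percentile)
    clipped = np.clip(scores, None, cap)
    lo, hi = clipped.min(), clipped.max()
    return (clipped - lo) / (hi - lo + 1e-12)

def ensemble_scores(score_matrix: np.ndarray, percentile: float = 99.0) -> np.ndarray:
    # Average percentile-normalized scores across models (rows = models, columns = triples).
    normalized = np.vstack([percentile_normalize(row, percentile) for row in score_matrix])
    return normalized.mean(axis=0)

# Toy example: three models scoring five candidate drug-disease triples.
rng = np.random.default_rng(0)
scores = rng.normal(size=(3, 5))
combined = ensemble_scores(scores)
top_k = np.argsort(combined)[::-1][:3]  # indices of the top 3 prioritized triples
print(top_k)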

Repository structure

The current repository is structured in the following way:

|-- LICENSE
|-- README.md
|-- data (Data folder)
|   |-- kg
|   |   |-- biokg
|   |   `-- openbiolink
|   |-- kgem-params
|   |-- network
|   `-- plots
|-- notebooks (Jupyter notebooks for data processing and analysis)
|   |-- Step 1.0 - Data Pre-processing.ipynb
|   |-- Step 1.1 - Data Splitting.ipynb
|   |-- Step 2.1 - Score Distribution.ipynb
|   |-- Step 2.2 - KGEMs benchmarking.ipynb
|   |-- Step 2.3 - Validation-Test evaluation - Supplementary Table 1.ipynb
|   |-- Step 2.4 - Analyze Prediction Intersection.ipynb
|   |-- Step 3 - Exploration of Normalization methods.ipynb
|   `-- Step 3 - Analyze ensembles.ipynb
|-- requirements.txt
`-- src (Python utilities for data manipulation)
    |-- analysis.py
    |-- constants.py
    |-- ensemble.py
    |-- full_pipeline.py
    |-- get_predictions.py
    |-- models.py
    |-- plot.py
    |-- predict.py
    |-- train_model.py
    |-- utils.py
    |-- version.py

Citation

If you have found our work useful, please consider citing:

Daniel Rivas-Barragan, Daniel Domingo-Fernández, Yojana Gadiya, David Healey, Ensembles of knowledge graph embedding models improve predictions for drug discovery (2022). Briefings in Bioinformatics, 23(6), bbac481. https://doi.org/10.1093/bib/bbac481
