Pre-Training and Meta-Learning of Graph Neural Networks for Brain Network Analysis

This repository contains the implementations of the frameworks presented in the papers PTGB: Pre-Train Graph Neural Networks for Brain Network Analysis (CHIL 2023 Oral) and Data-Efficient Brain Connectome Analysis via Multi-Task Meta-Learning (SIGKDD 2022). Visual overviews of multi-task meta-learning and self-supervised pre-training are given in the repository figures; please refer to the manuscripts for technical details.

Instructions

  1. Running the main.py file starts the pre-training process:
python main.py --dir=<path leading to the project directory>

The only parameter that needs custom handling is the --dir argument, the local path leading to (but not including) the project directory. Other key parameters default as follows (an example invocation is shown after the list):

  • backbone, default = GCN, choices = {GCN, GAT, GIN}. The backbone GNN encoder.
  • rdim, default = 64, type = int. The dimension to which the atlas-mapping pre-processing reduces the data.
  • filename, default = 'pretrained.pth', type = str. The filename used to store the pre-trained parameters; it must end with the .pth suffix.
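
For illustration, a full invocation overriding the defaults might look like the following; the path and flag values here are placeholders, not recommendations:

python main.py --dir=/home/user/workspace --backbone=GAT --rdim=64 --filename=pretrained.pth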
  2. How to prepare custom datasets (a construction sketch follows this list):
  • Store each dataset as a .npy (or other NumPy-loadable) file containing a Python dictionary.
  • Store the datasets for pre-training in the data/source folder, and for fine-tuning in the data/target folder.
  • Delete the empty source.txt and target.txt files originally contained in the folders.
  • For each dataset file, the dictionary must contain at least an 'adj' entry holding the graph adjacencies, with shape $k \times n \times n$, where $k$ is the number of samples and $n$ is the number of nodes/ROIs.
  • The loader also recognizes optional entries such as 'label' for classification labels, 'feat' for node features, and 'conn' for the connectivity matrix derived from the ROIs' 3D coordinates.
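
As a minimal sketch, the snippet below builds and saves a dataset dictionary in the format described above. The array contents, file name, and label values are illustrative assumptions, not part of the repository:

import numpy as np

k, n = 100, 116                           # illustrative: 100 samples, 116 ROIs
adj = np.random.rand(k, n, n)             # required 'adj' entry, shape k x n x n
adj = (adj + adj.transpose(0, 2, 1)) / 2  # symmetrize: brain connectomes are undirected
labels = np.random.randint(0, 2, size=k)  # optional binary classification labels

dataset = {'adj': adj, 'label': labels}   # 'feat' and 'conn' may be added analogously
np.save('data/source/example_dataset.npy', dataset)  # np.save pickles the dictionary

# Loading back requires allow_pickle=True because the file stores a Python object:
loaded = np.load('data/source/example_dataset.npy', allow_pickle=True).item()
assert loaded['adj'].shape == (k, n, n)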
  3. Once pre-training is done, running the test.py file begins the fine-tuning and testing process:
python test.py --dir=<path leading to the project directory>

Please make sure the parameters are set identically to those used in the pre-training main.py run.
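
Continuing the illustrative example above, the fine-tuning call repeats the same placeholder flag values used for pre-training:

python test.py --dir=/home/user/workspace --backbone=GAT --rdim=64 --filename=pretrained.pth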

Dependencies

The frameworks require the following dependencies (the listed versions or newer are recommended):

torch~=1.10.2
numpy~=1.22.2
scikit-learn~=1.0.2
networkx~=2.6.2
scipy~=1.8.0
tqdm~=4.62.3
torch-geometric~=2.0.3
higher~=0.2.1
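
Assuming the list above is saved to a requirements.txt file (not necessarily shipped with the repository), the dependencies can be installed with pip. Note that torch and torch-geometric often need platform-specific wheels, so consult their official installation guides if the plain pip command fails:

pip install -r requirements.txt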

Citation

Please cite our work if you find this repository helpful:

@inproceedings{yang2022data,
  title={Data-efficient brain connectome analysis via multi-task meta-learning},
  author={Yang, Yi and Zhu, Yanqiao and Cui, Hejie and Kan, Xuan and He, Lifang and Guo, Ying and Yang, Carl},
  booktitle={Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages={4743--4751},
  year={2022}
}
@inproceedings{yang2023ptgb,
  title={PTGB: Pre-Train Graph Neural Networks for Brain Network Analysis},
  author={Yang, Yi and Cui, Hejie and Yang, Carl},
  booktitle={Conference on Health, Inference, and Learning},
  pages={526--544},
  year={2023},
  organization={PMLR}
}
