clip2brain


Code from paper "Natural language supervision with a large and diverse dataset builds better models of human high-level visual cortex"

Step 1: Clone the code from GitHub and download the data

git clone https://github.com/ariaaay/clip2brain.git
cd clip2brain

You will also need to download the Natural Scenes Dataset (NSD). For downstream analysis you may also need the COCO annotations; they are optional if you only want to run the encoding models. The annotations used for the analysis are the 2017 Train/Val annotations (241MB).
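As an illustration, the 2017 Train/Val annotations can be fetched from the official COCO server roughly as follows; the destination directory here is just a placeholder, use whatever path you will later reference in config.cfg:

# download and unpack the COCO 2017 Train/Val annotations (destination path is a placeholder)
mkdir -p data/coco_annotations
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip -P data/coco_annotations
unzip data/coco_annotations/annotations_trainval2017.zip -d data/coco_annotations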

(OPTIONAL) Models compared against CLIP (SimCLR, SLIP) can be downloaded from https://github.com/facebookresearch/SLIP?tab=readme-ov-file (ViT-Base variants).

(OPTIONAL) Pycortex DB for the NSD subjects can be downloaded here: https://drive.google.com/drive/folders/1YXAYggtlsfFxMtiqWs0UrMihxI125MHb?usp=sharing

Step 2: Install requirements

requirements.txt contains the packages needed to run the code in this project.

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Install torch and torchvision from PyTorch according to your own system setup.
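For example, a default install usually looks like the line below; check https://pytorch.org/get-started/locally/ for the exact command matching your OS and CUDA version:

pip install torch torchvision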

Step 3: Set up paths in config.cfg

Modify config.cfg to point to your local copies of the NSD data and the COCO annotations. See the current config.cfg for an example.
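A hypothetical sketch of what such an edit might look like; the section and key names below are placeholders, so keep the names that already exist in the repository's config.cfg and only change the path values:

[DATA]
; placeholder keys and paths -- replace with your local locations and the repo's real key names
nsd_root = /path/to/natural-scenes-dataset
coco_annotation_dir = /path/to/coco_annotations/annotations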

Step 4: Reproduce results!

Note: project_commands.sh by default runs all models for all subjects. If you would like to speed things up and only test certain models, comment out the models you don't need (see the sketch after the command below).

sh project_commands.sh
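Commenting a model out just means prefixing its line with #. A purely hypothetical sketch of the pattern; the actual command lines and model names live in project_commands.sh and will look different:

# hypothetical lines illustrating the pattern only
python run_model.py --model clip --subj 1
# python run_model.py --model simclr --subj 1    # commented out, will be skipped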
