BM3 (WWW'23)

PyTorch implementation of "Bootstrap Latent Representations for Multi-modal Recommendation", WWW'23 (official ACM version).

[Figure: Overview of BM3]

Data

Download from Google Drive: Baby/Sports/Elec
The data already contain text and image features, extracted with Sentence-Transformers and a CNN respectively.
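
For a quick sanity check of the downloaded features, here is a minimal sketch; the file names (text_feat.npy, image_feat.npy) are assumptions about the archive layout, not documented in this README:

    # Minimal sketch for inspecting the pre-extracted features.
    # NOTE: the file names are assumptions about the downloaded archive,
    # not documented by this README.
    import numpy as np

    text_feat = np.load("data/baby/text_feat.npy")    # Sentence-Transformers embeddings
    image_feat = np.load("data/baby/image_feat.npy")  # CNN image features

    # Both arrays are expected to have shape (num_items, feature_dim).
    print("text:", text_feat.shape, "image:", image_feat.shape)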

How to run

  1. Put your downloaded data (e.g., baby) under the data directory.
  2. Enter the src folder and run:
    python main.py -m BM3 -d baby
    You may specify other parameters on the command line or in the configs via configs/model/*.yaml and configs/dataset/*.yaml (a config sketch follows the hyper-parameter table below).

Best hyper-parameters for reproducibility

We report the best hyper-parameters of BM3 for reproducing the results in Table III of our paper:

Datasets | layers | dropout | reg_weight
Baby     | 1      | 0.5     | 0.1
Sports   | 1      | 0.5     | 0.01
Elec     | 2      | 0.3     | 0.1
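
As an illustration, these values could be set in the model config; the following is a minimal sketch assuming configs/model/BM3.yaml uses the key names shown (the key names are assumptions, the values come from the table above):

    # Hypothetical excerpt of configs/model/BM3.yaml for the Baby dataset.
    # Key names are assumptions; the values come from the table above.
    n_layers: 1
    dropout: 0.5
    reg_weight: 0.1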

Citation

@inproceedings{zhou2023bootstrap,
author = {Zhou, Xin and Zhou, Hongyu and Liu, Yong and Zeng, Zhiwei and Miao, Chunyan and Wang, Pengwei and You, Yuan and Jiang, Feijun},
title = {Bootstrap Latent Representations for Multi-Modal Recommendation},
booktitle = {Proceedings of the ACM Web Conference 2023},
pages = {845--854},
year = {2023}
}
