
A Study on the Relative Importance of Convolutional Neural Networks in Visually-Aware Recommender Systems

This is the official implementation of our paper A Study on the Relative Importance of Convolutional Neural Networks in Visually-Aware Recommender Systems, published in the Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021.

Authors: Yashar Deldjoo, Tommaso Di Noia, Daniele Malitesta*, Felice Antonio Merra.
*corresponding author

[Figure: (a) AlexNet, (b) ResNet50]

Table of Contents:

  • Requirements
  • Run and evaluate recommendations
  • Datasets
  • Parameters for Image Feature Extractors
  • Visual Recommenders
  • Configuration Files
  • The Authors

Requirements

First, make sure the following are installed on your system:

  • Python 3.6.8
  • CUDA 10.1
  • cuDNN 7.6.4

Then, install all required Python dependencies with the command:

pip install -r requirements.txt
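
As a quick sanity check that the GPU stack is visible to the deep-learning backend, you can run something like the snippet below. This assumes TensorFlow is among the packages in requirements.txt (the CUDA 10.1 / cuDNN 7.6.4 pair matches TensorFlow 2.x GPU builds); adapt it if the requirements pin a different framework.

```python
# Optional sanity check (assumes TensorFlow 2.x is installed via requirements.txt).
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
# Lists the GPUs TensorFlow can see; an empty list means CUDA/cuDNN were not picked up.
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))
```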

Finally, structure the dataset folders as follows:

./data
  amazon_baby_vgg19/
    original/
       images/
        0.jpg
        1.jpg
        ...
  amazon_boys_girls_alexnet/
    original/
      images/
        0.jpg
        1.jpg
        ...

N.B. The dataset folders must follow the <dataset_name>_<cnn_name> naming convention, even though they all contain exactly the same image files: when the state-of-the-art visual-based recommender systems are trained and evaluated on these datasets through Elliot, each CNN variant has to be recognized as a separate dataset.
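
Since the per-CNN folders contain identical files, a helper along the lines of the sketch below can replicate one set of item images under each <dataset_name>_<cnn_name> directory. This is a minimal sketch, not part of the official scripts, and the source-folder path is hypothetical.

```python
# Minimal helper sketch (not part of the official scripts): copy one set of item
# images into a <dataset_name>_<cnn_name>/original/images folder per CNN.
import shutil
from pathlib import Path

def replicate_dataset(dataset_name, cnn_names, source_images, root="./data"):
    source = Path(source_images)
    for cnn in cnn_names:
        target = Path(root) / f"{dataset_name}_{cnn}" / "original" / "images"
        target.mkdir(parents=True, exist_ok=True)
        for img in sorted(source.glob("*.jpg")):
            shutil.copy(img, target / img.name)

# The source folder below is a hypothetical location of the downloaded images.
replicate_dataset("amazon_baby", ["alexnet", "vgg19", "resnet50"], "./raw_images/amazon_baby")
```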

Run and evaluate recommendations

To reproduce the results discussed in the paper, please follow these three steps:

  1. Extract visual features from item images. You can refer to this GitHub repository.
  2. Train and evaluate the visual-based recommenders through this version of Elliot (TO BE MERGED INTO THE MAIN BRANCH SOON).
  3. Evaluate the visual diversity (VisDiv). Again, you can refer to this GitHub repository.

Datasets

| Dataset | k-cores | # Users | # Products | # Feedbacks |
|---|---|---|---|---|
| Amazon Baby* | 5 | 606 | 1,761 | 3,882 |
| Amazon Boys & Girls* | 5 | 600 | 2,760 | 3,910 |

* https://jmcauley.ucsd.edu/data/amazon/

Parameters for Image Feature Extractors

Fully-connected layers

| CNN | Output Layer (script) | Output Shape |
|---|---|---|
| AlexNet | 5 | (1, 4096) |
| VGG19 | fc2 | (1, 4096) |
| ResNet50 | avg_pool | (1, 2048) |
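
For reference, a fully-connected feature of the kind listed above can be extracted with Keras roughly as in the sketch below. This is illustrative, not the paper's extraction script: the layer names 'fc2' (VGG19) and 'avg_pool' (ResNet50) come from the table, the image path is just an example, and AlexNet is not bundled with Keras, so it is handled by the dedicated extraction repository linked above.

```python
# Illustrative sketch of fully-connected feature extraction with Keras (not the
# official script). 'fc2' (VGG19) and 'avg_pool' (ResNet50) are the layers above.
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing import image

base = VGG19(weights="imagenet")
extractor = Model(inputs=base.input, outputs=base.get_layer("fc2").output)

img = image.load_img("./data/amazon_baby_vgg19/original/images/0.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
features = extractor.predict(x)  # shape (1, 4096), matching the table

# For ResNet50, swap in ResNet50(weights="imagenet"), its own preprocess_input,
# and the 'avg_pool' layer to obtain (1, 2048) features.
```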

Convolutional layers (e.g., ACF)

| CNN | Output Layer (script) | Output Shape |
|---|---|---|
| AlexNet | Not necessary | (36, 256) |
| VGG19 | block5_pool | (49, 512) |
| ResNet50 | avg_pool | (49, 2048) |
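
Analogously, the spatial feature maps used by attention-based models such as ACF come from a convolutional layer and are flattened over the spatial grid. The sketch below (again illustrative, using the VGG19 backbone) shows how 'block5_pool' yields a 7x7x512 map that is reshaped into the (49, 512) layout listed in the table.

```python
# Illustrative sketch of convolutional-map extraction for ACF-style models.
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing import image

base = VGG19(weights="imagenet")
conv_extractor = Model(inputs=base.input, outputs=base.get_layer("block5_pool").output)

img = image.load_img("./data/amazon_baby_vgg19/original/images/0.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
conv_map = conv_extractor.predict(x)          # shape (1, 7, 7, 512)
conv_features = conv_map.reshape(1, 49, 512)  # one (49, 512) matrix per image, as in the table
```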

Visual Recommenders

| Model | Paper |
|---|---|
| Visual Bayesian Personalized Ranking (VBPR) | He and McAuley |
| Deep Style | Liu et al. |
| Attentive Collaborative Filtering (ACF) | Chen et al. |
| Visual Neural Personalized Ranking (VNPR) | Niu et al. |

Configuration Files

The Authors

  • Yashar Deldjoo
  • Tommaso Di Noia
  • Daniele Malitesta*
  • Felice Antonio Merra

*corresponding author
