A Survey of Poisoning Attacks and Countermeasures in Recommender Systems


A repository of research on poisoning attacks against recommender systems, as well as their countermeasures. This repository accompanies our systematic review, Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures.

A sortable version is available here: https://awesome-recsys-poisoning.github.io/

Please read and cite our paper: arXiv

Nguyen, T.T., Nguyen, Q.V.H., Nguyen, T.T., Huynh, T.T., Nguyen, T.T., Weidlich, M. and Yin, H., 2024. Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures. arXiv preprint arXiv:2404.14942.

Citation

@article{nguyen2024manipulating,
  title={Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures},
  author={Nguyen, Thanh Toan and Nguyen, Quoc Viet Hung and Nguyen, Thanh Tam and Huynh, Thanh Trung and Nguyen, Thanh Thi and Weidlich, Matthias and Yin, Hongzhi},
  journal={arXiv preprint arXiv:2404.14942},
  year={2024}
}

Existing Surveys

Paper Title Venue Year Note
Poisoning Attacks against Recommender Systems: A Survey arXiv 2024 Focus on benchmarking
Manipulating vulnerability: Poisoning attacks and countermeasures in federated cloud–edge–client learning for image classification KBS 2023 Focus on federated learning
A Survey on Data Poisoning Attacks and Defenses DSC 2022 Not focused on recommender systems
A survey of attack detection approaches in collaborative filtering recommender systems Artificial Intelligence Review 2021 Classic heuristic attacks only
Understanding Shilling Attacks and Their Detection Traits: A Comprehensive Survey IEEE Access 2020 Classic heuristic attacks only
Shilling attacks against collaborative recommender systems: a review Artificial Intelligence Review 2020 Classic heuristic attacks only
A Comparative Study on Shilling Detection Methods for Trustworthy Recommendations SESC 2018 Classic heuristic attacks only
A comparative study of shilling attack detectors for recommender systems ICSSSM 2015 Classic heuristic attacks only

Taxonomy

(Figure: taxonomy of poisoning attacks and countermeasures in recommender systems)


Poisoning Attacks

Poisoning attacks tamper with the training data of a machine learning (ML) model in order to corrupt its availability and integrity. The figure below contrasts the typical process of a poisoning attack with the normal learning process. In the latter case, an ML model is trained on data and subsequently used to derive recommendations, so the quality of the model depends on the quality of the training data. In a poisoning attack, crafted data is injected into the training process, and hence into the model, to produce unintended or harmful recommendations.

(Figure: poisoning attack process vs. normal learning process)
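To make the injection step concrete, the sketch below (not taken from any of the surveyed papers) builds a toy user-item rating matrix, injects average-attack-style fake profiles that max-rate a hypothetical target item, and trains a plain matrix-factorization model on the poisoned data. All sizes, hyperparameters, and the train_mf helper are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy user-item rating matrix (all values here are illustrative assumptions).
n_users, n_items = 200, 50
ratings = np.zeros((n_users, n_items))
observed = rng.random((n_users, n_items)) < 0.10              # ~10% of entries observed
ratings[observed] = rng.integers(1, 6, size=observed.sum())   # 1-5 star ratings

target_item = 7   # item the attacker wants to promote
n_fake = 20       # attack budget: number of injected fake profiles
n_filler = 8      # filler items rated per fake profile

# Average-attack-style injection: each fake profile max-rates the target item and
# rates a few filler items close to their observed means to resemble a normal user.
item_counts = np.maximum((ratings > 0).sum(axis=0), 1)
item_means = np.where((ratings > 0).any(axis=0), ratings.sum(axis=0) / item_counts, 3.0)

fake_profiles = np.zeros((n_fake, n_items))
for f in range(n_fake):
    fillers = rng.choice(np.delete(np.arange(n_items), target_item), size=n_filler, replace=False)
    fake_profiles[f, fillers] = np.clip(np.round(item_means[fillers]), 1, 5)
    fake_profiles[f, target_item] = 5.0

poisoned = np.vstack([ratings, fake_profiles])

# Plain matrix-factorization recommender trained by SGD on the observed (poisoned) entries.
def train_mf(R, k=8, lr=0.01, reg=0.05, epochs=30):
    P = 0.1 * rng.standard_normal((R.shape[0], k))
    Q = 0.1 * rng.standard_normal((R.shape[1], k))
    for _ in range(epochs):
        for u, i in np.argwhere(R > 0):
            err = R[u, i] - P[u] @ Q[i]
            p_u = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * p_u - reg * Q[i])
    return P, Q

P, Q = train_mf(poisoned)
scores = P[:n_users] @ Q.T                                     # predictions for genuine users only
rank_of_target = (np.argsort(-scores, axis=1) == target_item).argmax(axis=1)
print(f"mean rank of target item across genuine users: {rank_of_target.mean():.1f} of {n_items}")
```

The attacker's main levers here are the number of fake profiles and the ratings they contain; the works listed below differ chiefly in how those profiles are generated and optimized.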

Paper Venue Year Type Data Code
Poisoning GNN-based recommender systems with generative surrogate-based attacks ACM TOIS 2023 Model-intrinsic FR, ML, AMCP, LF -
Shilling Black-Box Recommender Systems by Learning to Generate Fake User Profiles IEEE Trans. Neural Netw. Learn. Syst. 2022 Model-agnostic ML, FT, YE, AAT -
Knowledge enhanced Black-box Attacks for Recommendations KDD 2022 Model-agnostic ML, BC, LA -
LOKI: A Practical Data Poisoning Attack Framework against Next Item Recommendations TKDE 2022 Model-agnostic ABT, St, GOW -
PipAttack: Poisoning federated recommender systems for manipulating item promotion WSDM 2022 Model-intrinsic ML, AMCP -
Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios IJCAI 2022 Model-intrinsic ML, ADM Python
FedAttack: Effective and Covert Poisoning Attack on Federated Recommendation via Hard Sampling arXiv 2022 Model-intrinsic ML, ABT Python
UA-FedRec: Untargeted Attack on Federated News Recommendation arXiv 2022 Model-intrinsic MIND, Feeds Python
Triple Adversarial Learning for Influence based Poisoning Attack in Recommender Systems KDD 2021 Model-agnostic ML, FT Python
Reverse Attack: Black-box Attacks on Collaborative Recommendation CCS 2021 Model-agnostic ML, NF, AMB & ADM, TW, G+, CIT -
Attacking Black-box Recommendations via Copying Cross-domain User Profiles ICDE 2021 Model-agnostic ML, NF -
Simulating real profiles for shilling attacks: A generative approach KBS 2021 Model-agnostic ML -
Ready for emerging threats to recommender systems? A graph convolution-based generative shilling attack IS 2021 Model-agnostic DB, CI Python
Data poisoning attacks on neighborhood-based recommender systems ETT 2021 Model-intrinsic FT, ML, AMV -
Data Poisoning Attack against Recommender System Using Incomplete and Perturbed Data KDD 2021 Model-intrinsic ML, AIV -
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction RecSys 2021 Model-intrinsic ML, St, ABT Python
Poisoning attacks against knowledge graph-based recommendation systems using deep reinforcement learning Neural. Comput. Appl. 2021 Model-intrinsic ML, FTr -
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start WWW 2021 Model-intrinsic AMMN, TC Python
Data poisoning attacks to deep learning based recommender systems arXiv 2021 Model-intrinsic ML, LA -
Practical data poisoning attack against next-item recommendation WWW 2020 Model-intrinsic ABT -
Influence function based data poisoning attacks to top-n recommender systems WWW 2020 Model-intrinsic YE, ADM -
Attacking recommender systems with augmented user profiles CIKM 2020 Model-agnostic ML, FT, AAT -
Poisonrec: an adaptive data poisoning framework for attacking black-box recommender systems ICDE 2020 Model-agnostic St, AMCP, ML -
Revisiting adversarially learned injection attacks against recommender systems RecSys 2020 Model-agnostic GOW Python
Adversarial attacks on an oblivious recommender RecSys 2019 Model-intrinsic ML -
Targeted poisoning attacks on social recommender systems GLOBECOM 2019 Model-intrinsic FT -
Data poisoning attacks on cross-domain recommendation CIKM 2019 Model-intrinsic NF, ML -
Poisoning attacks to graph-based recommender systems ACSAC 2018 Model-intrinsic ML, AIV -
Fake Co-visitation Injection Attacks to Recommender Systems NDSS 2017 Model-intrinsic YT, eB, AMV, YE, LI -
Data poisoning attacks on factorization-based collaborative filtering NIPS 2016 Model-intrinsic ML Python
Shilling recommender systems for fun and profit WWW 2004 Model-agnostic ML -

Countermeasures

In this section, we review detection methods in more detail, starting with supervised methods before turning to semi-supervised and unsupervised methods.
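As a concrete illustration of the unsupervised end of this spectrum, the sketch below computes the classic Rating Deviation from Mean Agreement (RDMA) feature for each user profile on synthetic data and flags profiles with unusually high values. The data generator, threshold, and variable names are illustrative assumptions rather than the method of any specific paper in the table.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic toy data (illustrative assumptions): genuine ratings follow a latent item
# quality, while injected shilling profiles rate random fillers and max-rate a target item.
n_users, n_items, n_fake = 300, 40, 15
quality = rng.integers(1, 6, n_items).astype(float)

R = np.zeros((n_users + n_fake, n_items))
observed = rng.random((n_users, n_items)) < 0.15
genuine = np.clip(np.round(quality + rng.normal(0.0, 0.8, (n_users, n_items))), 1, 5)
R[:n_users][observed] = genuine[observed]

target_item = 3
for f in range(n_users, n_users + n_fake):
    fillers = rng.choice(n_items, size=6, replace=False)
    R[f, fillers] = rng.integers(1, 6, size=6)
    R[f, target_item] = 5.0

# Rating Deviation from Mean Agreement (RDMA): average deviation of a profile's ratings
# from the item means, down-weighted by item popularity.
rated = R > 0
item_counts = np.maximum(rated.sum(axis=0), 1)
item_means = R.sum(axis=0) / item_counts

def rdma(profile, is_rated):
    idx = np.flatnonzero(is_rated)
    if idx.size == 0:
        return 0.0
    return float(np.mean(np.abs(profile[idx] - item_means[idx]) / item_counts[idx]))

scores = np.array([rdma(R[u], rated[u]) for u in range(R.shape[0])])

# Flag profiles whose RDMA is unusually high; a real detector would combine several
# features (filler size, rating entropy, time series, ...) or learn a supervised classifier.
z = (scores - scores.mean()) / (scores.std() + 1e-9)
print("flagged profiles: ", np.flatnonzero(z > 2.0))
print("injected profiles:", np.arange(n_users, n_users + n_fake))
```

Supervised detectors replace the threshold with a classifier trained on labeled genuine and attack profiles, typically over several such hand-crafted features; semi-supervised methods combine a small labeled set with the unlabeled bulk of profiles.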

Paper Venue Year Type Data Code
Anti-FakeU: Defending Shilling Attacks on Graph Neural Network based Recommender Model WWW 2023 Unsupervised GOW, YE -
An unsupervised detection method for shilling attacks based on deep learning and community detection Soft Comput. 2021 Unsupervised NF, AMV -
Identification of Malicious Injection Attacks in Dense Rating and Co-Visitation Behaviors TIFS 2020 Unsupervised ML, AMB, LT, Trip -
Recommendation attack detection based on deep learning JISA 2020 Supervised ML -
Detecting shilling attacks with automatic features from multiple views Secur. Commun. Netw. 2019 Supervised NF, ML, AMV -
Detecting shilling attacks in social recommender systems based on time series analysis and trust features KBS 2019 Supervised CI, EP -
BS-SC: An Unsupervised Approach for Detecting Shilling Profiles in Collaborative Recommender Systems TKDE 2019 Unsupervised SYN, AMP -
Detecting shilling attacks in recommender systems based on analysis of user rating behavior KBS 2019 Unsupervised NF, ML, AMP -
Trustworthy and profit: A new value-based neighbor selection method in recommender systems under shilling attacks DSS 2019 Unsupervised BC, AMB -
Quick and accurate attack detection in recommender systems through user attributes RecSys 2019 Unsupervised ML -
UD-HMM: An unsupervised method for shilling attack detection based on hidden Markov model and hierarchical clustering KBS 2018 Unsupervised ML, NF, AMP -
Shilling attack detection for recommender systems based on credibility of group users and rating time series PLOS ONE 2018 Unsupervised ML -
Spotting anomalous ratings for rating systems by analyzing target users and items Neurocomputing 2017 Unsupervised ML, AMP -
Estimating user behavior toward detecting anomalous ratings in rating systems KBS 2016 Unsupervised ML -
SVM-TIA a shilling attack detection method based on SVM and target item analysis in recommender systems Neurocomputing 2016 Supervised ML -
Re-scale AdaBoost for attack detection in collaborative filtering recommender systems KBS 2016 Supervised ML -
Catch the black sheep: unified framework for shilling attack detection based on fraudulent action propagation IJCAI 2015 Unsupervised AMV -
Shilling attacks detection in recommender systems based on target item analysis PLOS ONE 2015 Unsupervised ML, NF, EM -
A novel item anomaly detection approach against shilling attacks in collaborative recommendation systems using the dynamic time interval segmentation technique IS 2015 Unsupervised ML -
A novel shilling attack detection method Procedia Comput. Sci. 2014 Unsupervised ML -
Detection of abnormal profiles on group attacks in recommender systems SIGIR 2014 Unsupervised NF, ML, EM -
HHT–SVM: An online method for detecting profile injection attacks in collaborative recommender systems KBS 2014 Supervised ML -
Shilling attack detection utilizing semi-supervised learning method for collaborative recommender system WWW 2013 Semi-supervised ML -
βP: A novel approach to filter out malicious rating profiles from recommender systems DSS 2013 Unsupervised ML -
A Meta-learning-based Approach for Detecting Profile Injection Attacks in Collaborative Recommender Systems J. Comput. 2012 Supervised ML -
HySAD: A semi-supervised hybrid shilling attack detector for trustworthy product recommendation KDD 2012 Semi-supervised NF, ML -
A clustering approach to unsupervised attack detection in collaborative recommender systems ICDATA 2011 Unsupervised ML -
Unsupervised strategies for shilling detection and robust collaborative filtering User Model. User-Adapt. Interact. 2009 Unsupervised ML -
Unsupervised retrieval of attack profiles in collaborative recommender systems RecSys 2008 Unsupervised ML -
Lies and propaganda: detecting spam users in collaborative filtering IUI 2007 Unsupervised ML -
Toward trustworthy recommender systems: An analysis of attack models and algorithm robustness TOIT 2007 Supervised ML -
Classification features for attack detection in collaborative recommender systems KDD 2006 Supervised ML -
Detecting profile injection attacks in collaborative recommender systems CEC-EEE 2006 Supervised ML -
Attack detection in time series for recommender systems KDD 2006 Unsupervised ML -
Preventing shilling attacks in online recommender systems WIDM 2005 Supervised ML -

Datasets

Abbreviation Dataset
AAT Amazon Automotive
ABT Amazon Beauty
ADM Amazon Digital Music
AIV Amazon Instant Video
AMB Amazon Book
AMMN Amazon Men
AMV Amazon Movie
AMCP Amazon Cell-phone
AMP Amazon Product
BC Book-Crossing
CI Ciao
CIT Citation Network
DB Douban
eB eBay
EM Eachmovie
EP Epinions
Feeds Microsoft News app
FR Frappe - Mobile App Recommendations
FT FilmTrust
FTr Fund Transactions
G+ Google+
GOW Gowalla
MIND Microsoft News
LA Last.fm
LI LinkedIn
LT LibraryThing
ML MovieLens
NF Netflix
St Steam
TC Tradesy
Trip TripAdvisor
TW Twitter [Code]
YE Yelp
YT YouTube
SYN Synthetic datasets

Disclaimer

Feel free to contact us if you have any queries or exciting news. We also welcome all researchers to contribute to this repository and thereby to the knowledge of this field.

If you have other related references, please feel free to create a GitHub issue with the paper information. We will gladly update the repository according to your suggestions. (You can also create pull requests, but it may take some time for us to merge them.)
