
CAPRI: Context-Aware Interpretable Point-of-Interest Recommendation Framework


CAPRI is a specialized framework implemented in Python for evaluating and benchmarking several state-of-the-art Point-of-Interest (POI) recommendation models. The framework ships with these models and algorithms, well-known POI recommendation datasets, and multi-dimensional evaluation criteria (accuracy, beyond-accuracy, and user-item fairness). It also supports reproducible results through adjustable configurations for comparative experiments.

💡 You can have a general overview of the framework on its web-page, or check the documentation on readthedocs.

Workflow of CAPRI

The figure below illustrates the general workflow handled by CAPRI.

CAPRI

🚀 Using the Framework

Do you want to start working with CAPRI? It is pretty easy! Just clone the repository and follow the instructions below:

⏳ The framework works on Python 3.9.x for now, and will be upgraded to newer versions in the near future.

⏳ We are working on making the repository available through the pip package manager. Hence, in future versions, you will not need to clone the framework anymore.

☑️ Prerequisites

Before running the framework, there are a set of libraries to be installed:

- numpy >= 1.26.1
- pandas >= 2.1.3
- scikit_learn >= 1.3.2
- scipy >= 1.11.3
- PyInquirer >= 1.0.3
- typing_extensions >= 4.8.0

Looking for a simpler solution? Simply run the commands below in the root directory after cloning the project:

```shell
# Create and activate a new virtual environment (Python 3.9.x)
python -m venv capri
source capri/bin/activate  # on Windows: capri\Scripts\activate

# Install the dependencies
pip install -r requirements.txt
```

Everything is set. Now you can use the framework! 😊

🚀 Launch the Application

💡 Before you start, make sure your terminal accepts interactive input. The system has an interactive mode (in which you select parameters in the terminal) and a non-interactive mode (in which the default values are used - see the "Default Parameters (non-interactive)" section in config.py). You can switch between these modes by setting isInteractive = True/False in config.py.
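As an illustrative sketch of the mode switch described above (the variable names besides isInteractive are assumptions for illustration - check the actual config.py in the repository for the real defaults):

```python
# Hypothetical excerpt mirroring the structure described for config.py.
# Only isInteractive is named in the documentation; the other variables
# below are illustrative placeholders, not CAPRI's actual identifiers.

# True:  parameters are chosen via terminal prompts.
# False: the "Default Parameters (non-interactive)" values apply.
isInteractive = False

# Example defaults that would be used when isInteractive is False
defaultModel = "GeoSoCa"    # one of the models under /Models/
defaultDataset = "Yelp"     # one of the datasets under /Data/
defaultFusion = "Product"   # fusion operator, e.g. Product or Sum
```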

You can start the project by running the main.py file in the root directory. With this, the application settings are loaded from the config.py file. You can select from different options to choose a model (e.g. GeoSoCa, available in the /Models/ folder) and a dataset (e.g. Yelp, available in the /Data/ folder) to be processed by the selected model, along with a fusion operator (e.g. product or sum). The system starts processing the data with the selected model and evaluates the results. The final results (an evaluation file and the recommendation lists) are added to the /Outputs/ folder, with a file-name template that encodes your evaluation choices. For instance:

```
# The evaluation file with the selected metrics: the GeoSoCa model was run on the
# Gowalla dataset with the Product fusion operator, applied to 5628 users, where the
# top-10 results are evaluated and each recommendation list has a length of 15
Eval_GeoSoCa_Gowalla_Product_5628user_top10_limit15.csv

# The text file containing the recommendation lists for the same configuration
Rec_GeoSoCa_Gowalla_Product_5628user_top10_limit15.txt
```
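The naming template above can be split back into its parts when post-processing results. A small illustrative parser (this helper is not part of CAPRI, just a sketch of the pattern described above):

```python
def parse_output_name(filename: str) -> dict:
    """Split a CAPRI output file name into its components.

    Assumed pattern, based on the examples above:
    <Kind>_<Model>_<Dataset>_<Fusion>_<N>user_top<K>_limit<L>.<ext>
    """
    stem, _, ext = filename.rpartition(".")
    kind, model, dataset, fusion, users, top, limit = stem.split("_")
    return {
        "kind": kind,          # "Eval" or "Rec"
        "model": model,        # e.g. "GeoSoCa"
        "dataset": dataset,    # e.g. "Gowalla"
        "fusion": fusion,      # e.g. "Product"
        "users": int(users.removesuffix("user")),
        "topK": int(top.removeprefix("top")),
        "limit": int(limit.removeprefix("limit")),
        "extension": ext,      # "csv" for evaluations, "txt" for lists
    }

info = parse_output_name("Eval_GeoSoCa_Gowalla_Product_5628user_top10_limit15.csv")
# e.g. info["model"] == "GeoSoCa" and info["users"] == 5628
```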

🧩 Contribution Guide

Contributing to open-source code is a rewarding way to learn, teach, and gain experience. We welcome all contributions, from bug fixes to new features and extensions. Do you want to be a contributor to the project? Read more in our contribution guide page.

Team

CAPRI is developed with ❤️ by:

  • Ali Tourani
  • Hossein A. Rahmani
  • MohammadMehdi Naghiaei
  • Yashar Deldjoo

📝 Citation

If you find CAPRI useful for your research or development, please cite the following paper:

```bibtex
@article{tourani2024capri,
  title={CAPRI: Context-aware point-of-interest recommendation framework},
  author={Tourani, Ali and Rahmani, Hossein A and Naghiaei, Mohammadmehdi and Deldjoo, Yashar},
  journal={Software Impacts},
  volume={19},
  pages={100606},
  year={2024},
  publisher={Elsevier}
}
```

🟢 Versions

  • Version 1.0
    • Implementation of the framework with a simple GUI
    • Supporting well-known models, datasets, and evaluation metrics for POI recommendation
    • Saving the previously executed calculation files for reusability

🟡 TODOs

  • Adding the impact of Weighted Sum Fusion when running models
  • Adding a separate metric evaluations class for fairness
  • Making a pip package for CAPRI (release planning)
  • Improving the docs and ReadTheDocs