ZENODO · Software · 2025 (data sources: ZENODO, Datacite)

A Lightweight Method for Modeling Confidence in Recommendations with Learned Beta Distributions

Authors: Knyazev, Norman; Oosterhuis, Harrie

Abstract

This repository contains the code used for the experiments in "A Lightweight Method for Modeling Confidence in Recommendations with Learned Beta Distributions", published at RecSys 2023 (open access article).

Citation

If you use this code to produce results for your scientific publication, or if you share a copy or fork, please refer to our RecSys 2023 paper:

    @inproceedings{knyazev2023alightweight,
      Author = {Knyazev, Norman and Oosterhuis, Harrie},
      Booktitle = {Seventeenth ACM Conference on Recommender Systems (RecSys '23)},
      Organization = {ACM},
      Title = {A Lightweight Method for Modeling Confidence in Recommendations with Learned Beta Distributions},
      Year = {2023}
    }

License

The contents of this repository are licensed under the MIT license. If you modify its contents in any way, please link back to this repository.

Usage

This code makes use of Python 3 and the following packages: jupyter, matplotlib, numpy, scipy, pandas, tqdm, dotenv, tensorflow==2.12.0 and tensorflow-probability. Make sure they are installed. The code can be accessed by running jupyter notebook . in the project folder and navigating to src/notebooks.

The process to replicate the results reported in the publication consists of four steps:

1. Modify the variable PROJECT_ROOT in the .env file contained in the root directory of this project so that it points to the global path of the root directory (see the sketch below).
2. Run src/notebooks/preprocess_data.ipynb to download and preprocess the dataset used for evaluation.
3. Run src/notebooks/run_models.ipynb to train the models and export test-fold predictions. Each cell trains one model on every one of the 10 train-test splits and, for each run, exports the test-set predictions (and the intermediate representations) to logs/LBD_results/{model_name}/{model_name}-{fold_id}-0/export.
4. Run src/notebooks/RQ{research_question_number} to load the above predictions and obtain the reported numerical results and/or visualizations.

Useful tips:

• By default, training and evaluating different models on multiple folds in src/notebooks/run_models.ipynb is done sequentially. It is also possible to train only some of the models within each runtime by running the chosen cells. Alternatively, all runs for one model can be executed in parallel by setting JOB_TYPE="new_process", or via slurm by setting JOB_TYPE="slurm". For the latter, ensure that src/modules/utils/slurm/slurm_header.txt corresponds to your slurm environment.
• To evaluate a model on a subset of the test folds (e.g. a single fold), the folds can be specified in the model's config under data_params['params']['folds_to_use_outer'], for example [0, 2, 3, 9] (see the config sketch below).
• A single training-evaluation loop can also be executed by calling the function src.modules.training.train_run with appropriate parameters (see the sketch after this list).
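
As a sketch of step 1: the .env file and the way it might be read could look as follows. Only the PROJECT_ROOT variable is documented above; the loading code is an assumption based on the dotenv dependency, and the notebooks may read the file differently.

    # .env (in the project root) -- PROJECT_ROOT must be an absolute path, e.g.:
    # PROJECT_ROOT=/absolute/path/to/this/repository

    # Minimal Python sketch of reading it with the dotenv package:
    import os
    from dotenv import load_dotenv

    load_dotenv()  # reads .env from the current working directory
    project_root = os.environ["PROJECT_ROOT"]
    print("Using project root:", project_root)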
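
A minimal sketch of the fold-subset tip above; everything in this config except the nested data_params['params']['folds_to_use_outer'] key is a hypothetical placeholder, not the repository's actual config schema.

    # Hypothetical model config; only folds_to_use_outer is documented above,
    # the surrounding structure is assumed for illustration.
    model_config = {
        "data_params": {
            "params": {
                # Evaluate on a subset of the 10 train-test splits:
                "folds_to_use_outer": [0, 2, 3, 9],
            },
        },
    }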
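
And a sketch of the single training-evaluation loop: the module path src.modules.training.train_run comes from the repository, but its signature is not documented here, so the argument names below are illustrative assumptions only.

    # Assumed call; consult src/modules/training for the actual signature.
    from src.modules.training import train_run

    train_run(
        model_name="LBD",     # hypothetical: which model to train
        fold_id=0,            # hypothetical: which train-test split to use
        config=model_config,  # hypothetical: the config sketched above
    )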

Keywords

Learning to Rank, Uncertainty Quantification

Impact indicators (provided by BIP!):

• Selected citations: 0. These citations are derived from selected sources; an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
• Popularity: Average. Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
• Influence: Average. Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
• Impulse: Average. Reflects the initial momentum of an article directly after its publication, based on the underlying citation network.