ZENODO
Software . 2025
Data sources: ZENODO

skcpda/bandit-quota: v1.0 Bandit-quota

Authors: Priyank Jayraj

Abstract

Bandit‑Quota (6‑arm) — BEIR (all 13 BEIR datasets)

A lightweight contextual‑bandit retrieval demo that combines six off‑the‑shelf dense encoders with a latency‑aware Thompson‑sampling policy. The pipeline reproduces the headline results reported in our CIKM 2025 resource‑track submission:

```
Bandit   nDCG@10 ≈ 0.704   mean latency ≈ 0.91 s/query
Union‑6  nDCG@10 ≈ 0.491   mean latency ≈ 6.97 s/query
```

Everything lives in a single, self‑contained script — `scripts/bandit_quota_artifact.py` — that you can run on any CPU‑only machine with ≥16 GB RAM.

---

Requirements

- Python 3.9 – 3.12
- `pip install -r requirements.txt` (≈ 900 MB once all HF models are cached)
- No GPU needed — the reranker and encoders run comfortably on a modern laptop.

> Tip: run with `TRANSFORMERS_OFFLINE=1` if you have already cached the models elsewhere.

---

Quick‑start

```bash
# 1) clone and enter
$ git clone https://github.com/skcpda/bandit-quota
$ cd bandit-quota

# 2) (optional) create a virtual env and install dependencies
$ python -m venv .venv && source .venv/bin/activate
$ pip install -r requirements.txt

# 3) run the artifact script
$ python scripts/bandit_quota_artifact.py
```

On first launch the script automatically downloads the BEIR SciFact test split (~9 MB), then produces the per‑arm baselines, the naïve union run, and the Bandit‑Quota scores.
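The latency‑aware Thompson‑sampling policy mentioned above can be sketched as follows. This is a minimal illustration, not the repository's actual implementation: the Beta‑posterior reward model, the `latency_weight` trade‑off knob, and the reward clamping are all assumptions made for the sketch.

```python
import random

class LatencyAwareThompson:
    """Illustrative Thompson-sampling arm selector (not the repo's code).

    Each arm keeps a Beta posterior over a [0, 1] retrieval-quality reward;
    observed latency discounts the reward, so slow-but-accurate arms are
    gradually demoted.
    """

    def __init__(self, arms, latency_weight=0.1):
        self.arms = list(arms)
        self.latency_weight = latency_weight  # hypothetical quality/latency knob
        self.alpha = {a: 1.0 for a in self.arms}  # Beta posterior "successes"
        self.beta = {a: 1.0 for a in self.arms}   # Beta posterior "failures"

    def select(self):
        # Draw one plausible success rate per arm from its posterior,
        # then play the arm with the highest draw.
        samples = {a: random.betavariate(self.alpha[a], self.beta[a])
                   for a in self.arms}
        return max(samples, key=samples.get)

    def update(self, arm, ndcg, latency_s):
        # Latency-aware reward: shrink the quality signal for slow arms,
        # clamped to [0, 1] so the Beta update stays well-defined.
        reward = min(1.0, max(0.0, ndcg - self.latency_weight * latency_s))
        self.alpha[arm] += reward
        self.beta[arm] += 1.0 - reward
```

A usage sketch with the six arm names from the CLI: `bandit = LatencyAwareThompson(["bge", "contr", "mpnet", "gtr", "minilm", "distil"])`, then alternate `arm = bandit.select()` with `bandit.update(arm, ndcg, latency)` after each query.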
Expected terminal tail:

```
=== SciFact test (300 queries) ===
Bandit   nDCG@10 0.7043   mean lat 0.907s
Union‑6  nDCG@10 0.4908   mean lat 6.970s
```

Any other BEIR dataset can be run with commands like:

```bash
python scripts/bandit_quota.py --dataset nfcorpus
python scripts/bandit_quota.py --dataset trec-covid
```

Full list of BEIR datasets:

- TREC-COVID (COVID-19 literature)
- NFCorpus (nutrition facts)
- SciFact (scientific claim verification)
- SCIDOCS (scientific document retrieval)
- FEVER (fact verification)
- Climate-FEVER (climate-change claim verification)
- HotpotQA (multi-hop QA)
- NaturalQuestions (open-domain QA)
- FiQA-2018 (financial QA)
- ArguAna (argument retrieval)
- CQADupStack (forum question duplication) — treated as separate sub-sets (AskUbuntu, SuperUser, ServerFault, Webmasters, etc.)
- DBPedia (entity retrieval)
- TREC-NEWS (news article retrieval)

Command-line interface at a glance

`scripts/rerank_single.py`:

```bash
python scripts/rerank_single.py \
  --arm bge \
  --dataset scifact \
  --topk 200 \
  --rerank 50
```

| Flag | Required? | Default | Accepted values | What it controls |
|---|---|---|---|---|
| `--arm` | yes | – | `bge`, `contr`, `mpnet`, `gtr`, `minilm`, `distil` | Which dense encoder to fire. |
| `--dataset` | no | `scifact` | any BEIR key you've mapped in `URLS` | Target benchmark corpus. |
| `--topk` | no | 200 | positive int | How many hits to pull per encoder before merging. |
| `--rerank` | no | 50 | positive int | How many of the merged hits the MiniLM cross-encoder re-scores. |

Citation

If you build on this work, please cite the resource paper: (to be updated soon)

---

License

Released under the MIT License — see the `LICENSE` file for full text.
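The nDCG@10 figures quoted above follow the standard definition of the metric. The repository presumably relies on BEIR's evaluation utilities; as a reference, here is a self‑contained sketch of the computation for a single query (function name and argument shapes are this sketch's own, not the repository's API):

```python
import math

def ndcg_at_k(ranked_ids, relevance, k=10):
    """nDCG@k for one query.

    ranked_ids: doc ids in the order the system returned them.
    relevance:  dict mapping doc id -> graded relevance (missing ids count as 0).
    """
    # DCG of the system ranking: gain discounted by log2(rank + 1).
    dcg = sum(relevance.get(doc, 0) / math.log2(i + 2)
              for i, doc in enumerate(ranked_ids[:k]))
    # Ideal DCG: the same sum over the best possible ordering of judgments.
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```

A perfect ranking scores 1.0; reported scores like 0.704 are the mean of this per‑query value over all test queries.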

Keywords

BEIR, Information Retrieval, SciFact, latency optimisation
