ZENODO
Dataset
Data sources: ZENODO

Dataset for Reproducible Benchmarking of Multi-Agent Platforms

Authors: Pałka, Piotr


Abstract

This repository contains the data and supplementary materials associated with the article presenting a reproducible Docker-based framework for quantitative evaluation of multi-agent platforms. The dataset supports experiments comparing JADE and SPADE (with Prosody and Tigase XMPP backends) across scalability, distributed deployment, and workload scenarios. It may include raw and processed measurements, logs, configuration files, and scripts used to collect and aggregate performance results related to latency, variability, throughput, scalability, platform readiness, agent creation time, and resource utilization. The repository is intended to support reproducibility, artifact-based validation, and future comparative research on standardized benchmarking of multi-agent platforms.
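The abstract mentions scripts used to collect and aggregate performance results for metrics such as latency, variability, and throughput. As a minimal hypothetical sketch of that kind of aggregation (the function name, data layout, and values below are illustrative assumptions, not taken from the repository):

```python
# Hypothetical aggregation of per-message latency samples, in the spirit of
# the metrics named in the abstract (latency, variability, throughput).
# All names and the data layout are illustrative; the actual repository
# scripts and file formats may differ.
from statistics import mean, stdev

def aggregate_latencies(samples_ms, window_s):
    """Summarize latency samples (ms) collected over a window_s-second run."""
    return {
        "mean_latency_ms": mean(samples_ms),
        "latency_stdev_ms": stdev(samples_ms),   # variability across messages
        "throughput_msg_per_s": len(samples_ms) / window_s,
    }

# Example with made-up measurements from a 10-second benchmark window:
summary = aggregate_latencies([12.0, 15.5, 11.2, 14.8, 13.1], window_s=10.0)
print(summary)
```

Per-run summaries like this could then be compared across platforms (e.g. JADE vs. SPADE) and deployment scenarios.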
