An Uncertainty-Aware Approach To Optimal Configuration Of Stream Processing Systems

Dataset
Jamshidi, Pooyan; Casale, Giuliano (2016)
  • Publisher: Zenodo
  • Related identifiers: doi: 10.5281/zenodo.56238
  • Subject: Big Data | Stream Processing System | Machine Learning | Performance Tuning | Bayesian Optimization | Auto-tuning | Software Engineering | Cloud Computing | Software Variability | Software Performance Engineering | DevOps | Dataset

<p>The datasets in this release support the results presented in the paper</p>
<blockquote> <p>P. Jamshidi, G. Casale, "An Uncertainty-Aware Approach to Optimal Configuration of Stream Processing Systems", accepted for presentation at MASCOTS 2016.</p> </blockquote>
<p>An open-access version of the paper is available at https://arxiv.org/abs/1606.06543</p>
<blockquote> <p>The open-source code is available at https://github.com/dice-project/DICE-Configuration-BO4CO</p> </blockquote>
<p>The archive contains 10 comma-separated datasets of performance measurements (throughput and latency) for 3 stream-processing benchmark applications. These were collected experimentally on 5 different cloud clusters over the course of 3 months (24/7). Each row in a dataset represents a distinct configuration setting for the application, and the last two columns give the application's average performance measured over 10 minutes under that configuration. The datasets contain full factorial, exhaustive measurements of all possible settings within a predetermined interval for each variable. Each dataset is named in the format "<em>benchmark_application-dimensions-cluster_name</em>". For example, "wc-6d-c1" refers to the WordCount benchmark application with 6 dimensions (i.e., 6 configuration parameters were varied), deployed on the c1 cluster (OpenNebula, see Appendix). This resulted in a dataset of size 2880, i.e., collecting the data took 2880*10m = 480h = 20 days!</p>
<p>For more information about the data, refer to the appendix of the paper: https://arxiv.org/abs/1606.06543.</p>
<p>When referring to the dataset or code, please cite the paper above.</p>
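As a minimal sketch of how one of these CSV datasets might be consumed, the snippet below parses rows in the layout described above: configuration-parameter columns first, then the two averaged performance columns (throughput and latency). The sample data and its column names are hypothetical, for illustration only; the actual headers and parameter names come from the files in the archive.

```python
import csv
import io

# Hypothetical excerpt in the style of a "wc-6d-c1" file (fabricated values):
# each row is one configuration setting; the last two columns are the
# 10-minute averages of throughput and latency.
sample = """spouts,max_spout,topology_workers,throughput,latency
1,10,2,8500.0,120.5
2,100,4,9100.0,98.3
"""

def load_measurements(fp):
    """Parse one dataset file: configuration columns first,
    then average throughput and average latency as the last two columns."""
    reader = csv.reader(fp)
    header = next(reader)  # column names
    rows = []
    for record in reader:
        config = [float(v) for v in record[:-2]]          # configuration values
        throughput = float(record[-2])                    # avg throughput
        latency = float(record[-1])                       # avg latency
        rows.append((config, throughput, latency))
    return header, rows

header, rows = load_measurements(io.StringIO(sample))
```

A real file from the archive would be opened with `open("wc-6d-c1.csv")` in place of the in-memory sample.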
