ZENODO
Dataset, 2022
License: CC BY
Data sources: Datacite
Versions: 2



Identification of Performance Changes at Code Level (Jetty Evaluation Dataset)

Authors: Anonymous For Reviewing

Abstract

This is the anonymous reviewing version; the source code repository will be added after the review.

This dataset provides the results of measuring Jetty with the 1,000 artificial regressions, using both Peass and JMH. The creation of the artificial regressions and the measurement process are defined here: https://anonymous.4open.science/r/jetty-evaluation-6F58/ (repository named jetty-evaluation; the GitHub link will be provided after review). An example regression is contained in https://anonymous.4open.science/r/jetty-experiments-202D. We obtained these data from measurements on an Intel Xeon CPU E5-2620 v3 @ 2.40GHz.

The dataset contains the following files:
- regression-results-peass-0.tar.xz (results of the measurement with Peass, part 0)
- regression-results-peass-1.tar.xz (results of the measurement with Peass, part 1)
- regression-results-peass-2.tar.xz (results of the measurement with Peass, part 2)
- regression-results-peass-3.tar.xz (results of the measurement with Peass, part 3)
- regression-results-jmh.tar.xz (results of the measurement with JMH)
- tree-results.tar.xz (metadata of the call trees)

To get the data into a usable format, extract the Peass data into one folder (referred to below as $PEASS_RESULT_FOLDER):

  mkdir peass
  for file in *; do echo $file; tar -xf $file; done
  for i in {0..3}; do mv $i/* .; done

This will yield a folder containing 1,000 folders named regression-$i, where each consists of:
- deps.tar.xz: the regression test selection results
- logs.tar.xz: the logs of the test executions
- results: the traces of the regression test selection and a file named changes_*testcase.json, which contains statistical details of the measured performance change (if present)
- jetty.project_peass: detailed measurement data and logs of the individual JVM starts

To analyse the Peass results, run:

  cd scripts/peass
  ./analyzeChangeIdentification.sh $PEASS_RESULT_FOLDER
  ./analyzeFrequency.sh $PEASS_RESULT_FOLDER

This will take some time, since partial results need to be unpacked for analysis.
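The extract-and-flatten pattern in the commands above (archives unpack into numbered part folders 0..3, whose contents are then moved into one flat directory) can be illustrated with a self-contained toy sketch; the folder contents below are stand-ins, not the real archive contents:

```shell
# Toy sketch of the extract-and-flatten step (illustrative names only).
tmp=$(mktemp -d) && cd "$tmp"
for i in 0 1 2 3; do
    # Stand-in for what extracting regression-results-peass-$i.tar.xz produces:
    mkdir -p "$i/regression-$i"
done
for i in 0 1 2 3; do
    mv "$i"/* .        # flatten the numbered part folders into one directory
    rmdir "$i"         # remove the now-empty part folder
done
ls -d regression-*     # lists regression-0 .. regression-3
```

In the real dataset the first loop is replaced by extracting the four Peass archives, and the result is the flat set of 1,000 regression-$i folders described above.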
The first script will create the following results:

  Correct Measurement: 587
  Not selected changes: 146
  Wrong measurement result: 267
  Wrong analysis (should be 0): 0
  Overall: 1000

and the second will yield the following results:

  Share of changed method on correct measurements: 0.109571 0.11238 32
  Method call count on correct measurement: 15638.4 32853.7 32
  Average tree depth on correct measurements: 1.1022 2.6875 32
  Share of changed method on wrong measurements: 0.17692 0.180365 968
  Method call count on wrong measurement: 711415 180438 968
  Average tree depth on wrong measurements: 1.23239 2.42252 968

To analyze the JMH data, first extract the metadata (the folder will be named $TREEFOLDER):

  tar -xf tree-results.tar.xz

Afterwards, extract the JMH results (the folder will be named $JMH_RESULTS_FOLDER):

  tar -xvf regression-results-jmh.tar.xz

This will yield a folder containing a measurement for each regression, with two files:
- basic.json: the performance measurement result of the basic version
- regression-$i.json: the performance measurement result of the version containing the regression

Afterwards, run the analysis in the jetty-evaluation repository:

  cd scripts/jmh
  ./analyzeFrequency.sh $JMH_RESULTS_FOLDER $TREEFOLDER

Since the regressions are injected into the call tree of the benchmark, there are no unselected changes. The analysis will yield the following results:

  Share of changed method on correct measurements: 0.184631 0.271968 587
  Method call count on correct measurement: 14628.2 4979.6 587
  Share of changed method on wrong measurements: 0.180981 0.235614 267
  Method call count on wrong measurement: 14902.2 4333.58 267
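As a sanity check, the four Peass outcome counts reported above can be summed and turned into a detection rate; this is pure arithmetic on the numbers in this dataset description:

```python
# Outcome counts from the Peass analysis reported above.
counts = {
    "correct_measurement": 587,
    "not_selected": 146,
    "wrong_measurement_result": 267,
    "wrong_analysis": 0,
}
total = sum(counts.values())
print(total)  # 1000, matching the reported "Overall" count
detection_rate = counts["correct_measurement"] / total
print(detection_rate)  # 0.587
```

So roughly 58.7% of the 1,000 injected regressions were correctly measured by Peass in this setup.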

Data will be de-anonymized after the review.

Keywords

software performance engineering, regression benchmarking, performance benchmarking

Metrics (provided by BIP! and OpenAIRE UsageCounts):
  - Selected citations: 0
  - Popularity: Average
  - Influence: Average
  - Impulse: Average
  - Views: 5
  - Downloads: 4