SV-COMP 2022 Competition Results
================================

This file describes the contents of an archive of the 11th Competition on
Software Verification (SV-COMP 2022).
https://sv-comp.sosy-lab.org/2022/

The competition was run by Dirk Beyer, LMU Munich, Germany.
More information is available in the following article:

Dirk Beyer. Progress on Software Verification: SV-COMP 2022.
In Proceedings of the 28th International Conference on Tools and Algorithms
for the Construction and Analysis of Systems (TACAS 2022, Munich, April 2-7),
2022. Springer.

Copyright (C) Dirk Beyer
https://www.sosy-lab.org/people/beyer/

SPDX-License-Identifier: CC-BY-4.0
https://spdx.org/licenses/CC-BY-4.0.html

To browse the competition results with a web browser, there are two options:
- start a local web server using `php -S localhost:8000` in order to view the
  data in this archive, or
- browse https://sv-comp.sosy-lab.org/2022/results/ in order to view the data
  on the SV-COMP web page.

Contents
--------

- index.html: directs to the overview web page
- LICENSE.txt: specifies the license
- README.txt: this file
- results-validated/: results of validation runs
- results-verified/: results of verification runs and aggregated results

The folder results-validated/ contains the results from validation runs:
- *.xml.bz2: XML results from BenchExec
- *.logfiles.zip: output from the tools
- *.json.gz: mapping from file names to SHA-256 hashes of the file contents

The folder results-verified/ contains the results from verification runs and
aggregated results:
- index.html: overview web page with rankings and score table
- design.css: HTML style definitions
- *.xml.bz2: XML results from BenchExec
- *.merged.xml.bz2: XML results from BenchExec, with the status adjusted
  according to the validation results
- *.logfiles.zip: output from the tools
- *.json.gz: mapping from file names to SHA-256 hashes of the file contents
- *.xml.bz2.table.html: HTML views of the detailed results data, as generated
  by BenchExec's table generator
- *.All.table.html: HTML views of the full benchmark set (all categories) for
  each tool
- META_*.table.html: HTML views of the benchmark set for each meta category
  for each tool, and over all tools
- <category>*.table.html: HTML views of the benchmark set for each category
  over all tools
- iZeCa0gaey.html: HTML views per tool
- validatorStatistics.html: statistics of the validator runs
- quantilePlot-*: score-based quantile plots as visualization of the results
- quantilePlotShow.gp: example Gnuplot script to generate a plot
- score*: accumulated score results in various formats

The hashes of the file names (in the files *.json.gz) are useful for
validating the exact contents of a file and for accessing the files from the
witness store.

Other Archives
--------------

Overview of the archives from SV-COMP 2022 that are available at Zenodo:
https://doi.org/10.5281/zenodo.5838498

Verification Witnesses from SV-COMP 2022 Verification Tools.
Witness store (containing the generated verification witnesses)
https://doi.org/10.5281/zenodo.5831008

Results of the 11th Intl. Competition on Software Verification (SV-COMP 2022).
Results (XML result files, log files, file mappings, HTML tables)
https://doi.org/10.5281/zenodo.5831003

SV-Benchmarks: Benchmark Set of SV-COMP 2022 and Test-Comp 2022.
Verification tasks, version svcomp22
https://doi.org/10.5281/zenodo.5720267

BenchExec, version 3.10.
Benchmarking framework

All benchmarks were executed for SV-COMP 2022
https://sv-comp.sosy-lab.org/2022/ by Dirk Beyer, LMU Munich,
based on the following components:

- https://gitlab.com/sosy-lab/sv-comp/archives-2022             svcomp22  a6b18082
- https://gitlab.com/sosy-lab/benchmarking/sv-benchmarks        svcomp22  ad265d07
- https://gitlab.com/sosy-lab/sv-comp/bench-defs                svcomp22  0332884a
- https://gitlab.com/sosy-lab/software/benchexec                3.10      4e8716bd
- https://gitlab.com/sosy-lab/benchmarking/competition-scripts  svcomp22  3c959671
- https://github.com/sosy-lab/sv-witnesses                      svcomp22  e4695d2b

Contact
-------

Feel free to contact me in case of questions:
https://www.sosy-lab.org/people/beyer/
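The hash mappings described above can be used to check that a file in the
archive is bit-for-bit identical to the one recorded at competition time. The
following is a minimal sketch of that idea; the file name `tool.log` and its
contents are hypothetical examples, not files from the archive.

```shell
# Sketch: validate file contents against a recorded SHA-256 hash,
# as the *.json.gz mappings (file name -> SHA-256) make possible.

# Record the hash of a file, as the mappings do (example file, created here):
printf 'example tool output\n' > tool.log
expected=$(sha256sum tool.log | cut -d ' ' -f 1)

# Later, to validate the exact contents, recompute and compare:
actual=$(sha256sum tool.log | cut -d ' ' -f 1)
if [ "$actual" = "$expected" ]; then
  echo "tool.log: content verified"
else
  echo "tool.log: MISMATCH"
fi
```

In practice, the expected hash would be looked up in the decompressed
*.json.gz mapping instead of being computed on the spot, and the same hash can
serve as the key for retrieving the file from the witness store.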