ZENODO
Article, 2021
License: CC BY
Data sources: Datacite


OBPMark (On-Board Processing Benchmarks) – Open Source Computational Performance Benchmarks for Space Applications

Authors: Steenari, David; Kosmidis, Leonidas; Rodriguez-Ferrandez, Ivan; Jover-Alvarez, Alvaro; Förster, Kyra


Abstract

Computational benchmarking of on-board processing performance for space applications has often been done on a case-by-case basis, taking into account only a small subset of devices and specific, often proprietary, applications, which limits domain coverage and reproducibility. While commercial benchmarks exist for embedded systems, they are usually limited to CPUs and are based on synthetic algorithms not relevant to space. Consequently, they are generally unsuitable for assessing highly parallel processors (GPUs, DSPs, etc.) and/or hardware implementations (i.e. ASICs and FPGAs), which are commonplace in space systems. For on-board processing, a number of application types recur across multiple missions. These applications and algorithms often drive the overall computational requirements of the mission, e.g. in the case of image and radar processing, RF signal processing and compression. In each case, there are certain performance metrics – such as the number of pixels processed per second – which are well known and easily understood by designers and users. Finally, with the rise of machine learning in on-board space applications, tasks such as image classification and object detection using SVMs and CNNs are becoming common. OBPMark (On-Board Processing Benchmarks) defines a set of benchmarks covering the typical classes of applications commonly found on board spacecraft. The benchmark suite is publicly available to enable easy comparison of different systems and to quickly down-select possible processing solutions for a mission. It is open source, includes multiple implementations, and is easily extensible, allowing porting and optimization to target platforms, including heterogeneous ones, for fair comparison. Currently, implementations in standard C, OpenMP, OpenCL and CUDA are included.
A technical note defining the algorithms used is also provided, allowing implementers to produce additional dedicated versions; it includes reference inputs and outputs for correctness verification as well as an optional automated launching framework for reproducibility. This also allows the benchmarks to be implemented in FPGAs while ensuring equivalence with the reference implementations. Five categories of benchmarks are defined: 1) Image Processing Pipelines; 2) Standard Compression Algorithms; 3) Standard Encryption Algorithms; 4) Processing Building Blocks; and 5) Machine Learning Inference. In each category, specific benchmarks are included, e.g. both image and radar image compression. Recommended parameters for the CCSDS compression standards 121.0, 122.0 and 123.0 are provided. The processing building blocks include e.g. FIR filters and FFT processing. Two ML applications have been chosen: cloud screening and ship detection. Both will be provided as standard pre-trained machine learning models, in both floating-point and quantized integer form, to allow support for multiple microarchitectures. The specification of OBPMark has been initiated by ESA together with BSC as an open source project to allow transparent and open performance comparison of devices and systems. The project will also maintain a list of available benchmark results in its open repository. The work has been carried out both internally at ESA and at BSC through the ongoing ESA-funded GPU4S activity, whose optimised versions of algorithmic building blocks, implemented in the open source GPU4S Bench benchmarking suite, were used as a basis.
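The FIR filter named among the processing building blocks can be sketched in a few lines of standard C. This is a minimal direct-form implementation, y[n] = Σₖ h[k]·x[n−k], shown only to illustrate the class of kernel; the actual OBPMark kernels define their own interfaces and reference inputs/outputs for verification.

```c
#include <stddef.h>

/* Minimal direct-form FIR filter.
 * x: input samples (nx of them), h: filter taps (nh of them),
 * y: output buffer with room for nx samples.
 * Samples before x[0] are treated as zero (zero-state start-up). */
void fir(const float *x, size_t nx,
         const float *h, size_t nh,
         float *y)
{
    for (size_t n = 0; n < nx; n++) {
        float acc = 0.0f;
        for (size_t k = 0; k < nh && k <= n; k++)
            acc += h[k] * x[n - k];
        y[n] = acc;
    }
}
```

For example, a two-tap moving-average filter h = {0.5, 0.5} applied to x = {1, 2, 3, 4} yields y = {0.5, 1.5, 2.5, 3.5}. Benchmark versions of such kernels are checked against reference outputs like these, which is what makes independently optimized ports (OpenMP, CUDA, FPGA) comparable.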

Keywords

obdp2021, obdp, on-board processing

Metrics (BIP! indicators and OpenAIRE UsageCounts):

  • Selected citations: 3
  • Popularity (current attention, citation-network based): Top 10%
  • Influence (overall/total impact, citation-network based): Top 10%
  • Impulse (initial momentum after publication): Average
  • Views: 89
  • Downloads: 171