Explainable AI for RGB to HyperSpectral CNN Models

Author: Issa, Hamzeh


Abstract

HyperSpectral Imaging (HSI) is a vital tool in many industries and fields. It is, however, costly, time-consuming, and dependent on dedicated hardware. Much research has therefore been devoted to finding alternatives to traditional HSI systems, and one of the most promising is RGB-to-hyperspectral reconstruction. These models are usually CNNs that take a single RGB image and estimate the hyperspectral image of the same scene (in the visible range). Given the availability and ease of acquiring RGB images, such models can dramatically cut the cost and time needed to obtain a hyperspectral image. However, to fully adopt these models we need to establish trust in them (or justified distrust), which requires understanding and explaining how they work, at least on a fundamental level. This is especially important because they tackle a highly ill-posed problem: mapping only 3 RGB bands into a much larger number of bands (typically 31). Users have no evidence of how these models actually perform this estimation, how they estimate the illuminant of the scene to avoid metameric effects, or how they carry out the 'one-to-many' mapping involved. In this thesis, we work on filling this major gap. We take 7 of the most prominent RGB-to-hyperspectral reconstruction models and apply many explainable AI (XAI) methods to understand how they work. We classify these models according to the different ways they perform the reconstruction. We establish points of failure where some or all of the models cannot perform as expected. We establish their spatial feature area in the input image. We investigate what kinds of parameters and features they use and where in the network they use them. We present a theory of how they perform illuminant estimation and provide supporting evidence for it. Finally, we bring all the tests together and break these models down into simpler sub-models that could be replicated by simpler, explainable equivalents.
We also introduce novel modifications to existing XAI methods that allow them to be used in any future hyperspectral model explainability project. The outcomes of this work support the conclusion that these models operate in an intelligible manner, meaning that they can be understood and matched by other, explainable models. However, these models cannot be trusted in all circumstances, since the work shows that they fail consistently under certain conditions. This work does not fully explain these models, as some aspects remain unclear, but it explains many important parts and paves the way for a clearer understanding of these networks.
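The '3 bands in, 31 bands out' expansion described in the abstract can be sketched as a per-pixel linear map, the simplest conceivable 'explainable equivalent' of a reconstruction CNN. This is a hypothetical illustration, not any of the 7 models studied in the thesis: the random weight matrix stands in for learned parameters, and real models use deep spatial convolutions rather than a single 1x1-style linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def rgb_to_hsi_linear(rgb: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Map an (H, W, 3) RGB image to an (H, W, 31) spectral estimate.

    A 1x1-convolution-style linear map: each pixel's 3 RGB values are
    projected independently onto 31 spectral bands. This illustrates
    only the ill-posed 3 -> 31 channel expansion, not a real model.
    """
    assert rgb.shape[-1] == 3 and weights.shape == (3, 31)
    # np.matmul broadcasts over the leading (H, W) pixel grid.
    return rgb @ weights

rgb = rng.random((64, 64, 3))   # a toy RGB scene
W = rng.random((3, 31))         # stand-in for learned parameters
hsi = rgb_to_hsi_linear(rgb, W)
print(hsi.shape)                # (64, 64, 31)
```

Because the map is many-to-one in reverse (many 31-band spectra project to the same RGB triple), inverting it is underdetermined; learned CNNs resolve that ambiguity with spatial context and scene priors, which is precisely what the thesis sets out to explain.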
