Contextual Semantic Interpretability

Diego Marcos; Ruth Fong; Sylvain Lobry; Rémi Flamary; Nicolas Courty; Devis Tuia
Open Access
English
Published: 01 Dec 2020
Publisher: Springer Science and Business Media Deutschland GmbH
Abstract

Convolutional neural networks (CNNs) are known to learn an image representation that captures concepts relevant to the task, but they do so in an implicit way that hampers model interpretability. However, one could argue that such a representation is hidden in the neurons and can be made explicit by teaching the model to recognize semantically interpretable attributes that are present in the scene. We call such an intermediate layer a semantic bottleneck. Once the attributes are learned, they can be re-combined to reach the final decision and provide both an accurate prediction and an explicit reasoning behind the CNN decision. In this paper, we look into semantic bottlenecks that capture context: we want attributes to form groups of a few meaningful elements that contribute jointly to the final decision. We use a two-layer semantic bottleneck that gathers attributes into interpretable, sparse groups, allowing them to contribute differently to the final output depending on the context. We test our contextual semantic interpretable bottleneck (CSIB) on the task of landscape scenicness estimation and train the semantic interpretable bottleneck using an auxiliary database (SUN Attributes). Our model yields predictions as accurate as a non-interpretable baseline when applied to a real-world test set of Flickr images, all while providing clear and interpretable explanations for each prediction.
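The abstract describes a CNN whose head first predicts interpretable attributes and then gathers them into a few sparse groups before producing the final scenicness score. The following is a minimal PyTorch sketch of such a two-layer semantic bottleneck; the ResNet-50 backbone, the layer sizes, the number of groups and the L1-based sparsity mentioned in the comments are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ContextualSemanticBottleneck(nn.Module):
    """Sketch of a CNN with a two-layer semantic bottleneck before the final score."""

    def __init__(self, num_attributes=102, num_groups=8):
        super().__init__()
        # Backbone CNN (any ImageNet-style feature extractor would do here).
        backbone = models.resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        feat_dim = backbone.fc.in_features  # 2048 for ResNet-50
        # Bottleneck layer 1: predict semantically interpretable attributes
        # (e.g. the 102 SUN Attributes), supervised with the auxiliary database.
        self.attributes = nn.Linear(feat_dim, num_attributes)
        # Bottleneck layer 2: gather attributes into a small number of groups;
        # keeping these weights sparse (e.g. via an L1 penalty added to the
        # training loss) makes each group a short, nameable mix of attributes.
        self.groups = nn.Linear(num_attributes, num_groups)
        # Final decision: combine group activations into a single scenicness score.
        self.score = nn.Linear(num_groups, 1)

    def forward(self, x):
        f = self.features(x).flatten(1)
        a = torch.sigmoid(self.attributes(f))  # attribute probabilities (interpretable)
        g = torch.relu(self.groups(a))         # contextual group activations (interpretable)
        return self.score(g), a, g             # prediction plus intermediate explanations
```

For a single image tensor of shape (1, 3, 224, 224), the returned attribute and group activations can be inspected directly to read off which attributes and which group each prediction relied on.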

Subjects by Vocabulary

Microsoft Academic Graph classification: Convolutional neural network, Natural language processing, Bottleneck, Representation (mathematics), Computer science, Baseline (configuration management), Task (project management), Interpretability, Context (language use), Artificial intelligence, Test set

Subjects

PE&RC, Laboratory of Geo-information Science and Remote Sensing, Explainable AI, Interpretability, Sparsity, Laboratorium voor Geo-informatiekunde en Remote Sensing, [STAT.ML]Statistics [stat]/Machine Learning [stat.ML], [INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing, Deep learning, Interpretable deep learning, Landscape scenicness, Ecosystem services, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences

Funded by

ANR | 3IA Côte d'Azur (3IA@cote d'azur)
  • Funder: French National Research Agency (ANR)
  • Project Code: ANR-19-P3IA-0002

ANR | OATMIL (OptimAl Transport for MachIne Learning)
  • Funder: French National Research Agency (ANR)
  • Project Code: ANR-17-CE23-0012