Publication · Preprint · 2016

Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance

Ribeiro, Marco Tulio; Singh, Sameer; Guestrin, Carlos
Open Access · English
Published: 17 Nov 2016
Abstract
At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model's behavior, precision to how accurate humans are in those predictions, and effort is either the up-front effort required in interpreting the model, or the effort required to make predictions about a model's behavior. In this work, we propose anchor-LIME (aLIME), a model-agnostic technique that produces high-precision rule-based explanations for wh...
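As a rough illustration of the coverage and precision notions defined in the abstract, the minimal Python sketch below (not the authors' code; the names predict_fn, anchor, and sample_perturbations, and the uniform perturbation scheme, are all illustrative assumptions) estimates both quantities for a rule-based explanation of a black-box tabular classifier by Monte Carlo sampling: coverage is the fraction of perturbed instances the rule applies to, and precision is the fraction of those on which the model's prediction stays the same.

import numpy as np

def sample_perturbations(feature_ranges, n_samples, rng):
    """Draw random tabular instances, each feature uniform over its range."""
    samples = np.empty((n_samples, len(feature_ranges)))
    for j, (low, high) in enumerate(feature_ranges):
        samples[:, j] = rng.uniform(low, high, size=n_samples)
    return samples

def rule_mask(samples, anchor):
    """anchor maps feature index -> (low, high); True where every condition holds."""
    mask = np.ones(len(samples), dtype=bool)
    for j, (low, high) in anchor.items():
        mask &= (samples[:, j] >= low) & (samples[:, j] <= high)
    return mask

def precision_and_coverage(predict_fn, instance, anchor, feature_ranges,
                           n_samples=10_000, seed=0):
    """Monte Carlo estimates of a rule's precision and coverage."""
    rng = np.random.default_rng(seed)
    target = predict_fn(instance[None, :])[0]        # prediction being explained
    samples = sample_perturbations(feature_ranges, n_samples, rng)
    covered = rule_mask(samples, anchor)
    coverage = covered.mean()                        # how often the rule applies
    if not covered.any():
        return 0.0, coverage
    precision = (predict_fn(samples[covered]) == target).mean()
    return precision, coverage

if __name__ == "__main__":
    # Toy black-box classifier: class 1 whenever the first feature exceeds 0.5.
    predict_fn = lambda X: (X[:, 0] > 0.5).astype(int)
    instance = np.array([0.8, 0.3])
    anchor = {0: (0.6, 1.0)}                         # rule: x0 in [0.6, 1.0]
    ranges = [(0.0, 1.0), (0.0, 1.0)]
    prec, cov = precision_and_coverage(predict_fn, instance, anchor, ranges)
    print(f"precision={prec:.2f}  coverage={cov:.2f}")

In the toy run, the rule "x0 in [0.6, 1.0]" applies to roughly 40% of uniform perturbations (coverage ~0.4) and the model's prediction is unchanged on all of them (precision 1.0), matching the intuition that a good anchor trades some coverage for high precision.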
Subjects
Free text keywords: Statistics - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Learning
