Publication · Other literature type · Preprint · Article · 2019

The challenge of crafting intelligible intelligence

Daniel S. Weld; Gagan Bansal
Open Access
  • Published: 21 May 2019
  • Publisher: Association for Computing Machinery (ACM)
Abstract
Since Artificial Intelligence (AI) software uses techniques like deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, it often results in complex behavior that is difficult for people to understand. Yet organizations are deploying AI algorithms in many mission-critical settings. To trust their behavior, we must make AI intelligible, either by using inherently interpretable models or by developing new methods for explaining and controlling otherwise overwhelmingly complex decisions using local approximation, vocabulary alignment, and interactive explanation. This paper argues that intelligibility is essential, surveys...
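The abstract points to local approximation as one route to explaining an otherwise overwhelmingly complex decision. As a rough, hypothetical illustration only (not the paper's own method), the sketch below fits a LIME-style proximity-weighted linear surrogate around a single prediction of a black-box classifier; the dataset, model choice, and the helper explain_locally are assumptions made purely for demonstration.

```python
# A minimal, hypothetical sketch of "local approximation" (LIME-style):
# explain one prediction of a black-box classifier by fitting a simple,
# interpretable surrogate model in the neighborhood of that instance.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A black-box model whose individual predictions we want to explain.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def explain_locally(model, x, feature_std, n_samples=2000, scale=0.3, seed=0):
    """Fit a proximity-weighted linear surrogate to the model near x."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise scaled per feature.
    neighbors = x + rng.normal(0.0, scale, (n_samples, x.size)) * feature_std
    # Query the black box for its predicted probability of the positive class.
    preds = model.predict_proba(neighbors)[:, 1]
    # Down-weight neighbors far from x so the surrogate stays local.
    dists = np.linalg.norm((neighbors - x) / (feature_std + 1e-12), axis=1)
    weights = np.exp(-(dists ** 2) / 2.0)
    # The interpretable surrogate: a proximity-weighted linear (ridge) model.
    surrogate = Ridge(alpha=1.0).fit(neighbors - x, preds, sample_weight=weights)
    return surrogate.coef_  # local feature importances around x

importances = explain_locally(black_box, X[0], X.std(axis=0))
top = np.argsort(np.abs(importances))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:>25s}: {importances[i]:+.4f}")
```

The sign and magnitude of each surrogate coefficient then give a local, human-inspectable account of which features pushed this particular prediction up or down.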
Subjects
free text keywords: Computer Science - Artificial Intelligence, General Computer Science
Funded by
NSF| RI: Small: Improving Crowd-Sourced Annotation by Autonomous Intelligent Agents
Project
  • Funder: National Science Foundation (NSF)
  • Project Code: 1420667
  • Funding stream: Directorate for Computer & Information Science & Engineering | Division of Information and Intelligent Systems
References (40 total; first 15 shown)

[1] J. R. Anderson, F. Boyle, and B. Reiser. 1985. Intelligent tutoring systems. Science 228, 4698 (1985), 456-462.

[2] D. Bahdanau, K. Cho, and Y. Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In ICLR.

[3] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba. 2017. Network Dissection: Quantifying Interpretability of Deep Visual Representations. In CVPR.

[4] M. Brooks, S. Amershi, B. Lee, S. M. Drucker, A. Kapoor, and P. Simard. 2015. FeatureInsight: Visual support for error-driven feature ideation in text classification. In VAST.

[5] R. Calo. 2014. The case for a federal robotics commission. (2014). https://www.brookings.edu/research/the-case-for-a-federal-robotics-commission/.

[6] R. Caruana, Y. Lou, J. Gehrke, P. Koch, M. Sturm, and N. Elhadad. 2015. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In KDD.

[7] T. Dietterich. 2017. Steps Towards Robust Artificial Intelligence. AI Magazine 38, 3 (2017).

[8] F. Doshi-Velez and B. Kim. 2017. Towards A Rigorous Science of Interpretable Machine Learning. ArXiv (2017). arXiv:1702.08608

[9] M. Fox, D. Long, and D. Magazzeni. 2017. Explainable Planning. ArXiv (2017). arXiv:1709.10256

[10] I. J. Goodfellow, J. Shlens, and C. Szegedy. 2014. Explaining and Harnessing Adversarial Examples. ArXiv (2014). arXiv:1412.6572

[11] B. Goodman and S. Flaxman. 2016. European Union regulations on algorithmic decision-making and a “right to explanation”. ArXiv (2016). arXiv:1606.08813

[12] P. Grice. 1975. Logic and Conversation. In Syntax and Semantics, Vol. 3: Speech Acts. Academic Press, 41-58.

[13] J. Halpern and J. Pearl. 2005. Causes and explanations: A structural-model approach. Part I: Causes. The British Journal for the Philosophy of Science 56, 4 (2005), 843-887.

[14] M. Hardt, E. Price, and N. Srebro. 2016. Equality of opportunity in supervised learning. In NIPS.

[15] L. Hendricks, Z. Akata, M. Rohrbach, J. Donahue, B. Schiele, and T. Darrell. 2016. Generating visual explanations. In ECCV.
