The Challenge of Crafting Intelligible Intelligence

Preprint, English, Open Access
Weld, Daniel S.; Bansal, Gagan (2018)
  • Subject: Computer Science - Artificial Intelligence

Because Artificial Intelligence (AI) software uses techniques such as deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, it often produces complex behavior that is difficult for people to understand. Yet organizations are dep...