Interpreting CNNs via Decision Trees

Preprint · English · Open Access
Zhang, Quanshi; Yang, Yu; Ma, Haotian; Wu, Ying Nian
  • Subject: Computer Science - Computer Vision and Pattern Recognition

This paper aims to quantitatively explain the rationale behind each prediction made by a pre-trained convolutional neural network (CNN). We propose to learn a decision tree that clarifies, at the semantic level, the specific reason for each prediction the CNN makes. …
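The core idea sketched in the abstract — explaining a black-box model's predictions with a learned decision tree — can be illustrated generically by distilling a surrogate tree from the model's own outputs. The sketch below is not the authors' algorithm (which operates on semantic, part-level filter activations inside the CNN); it is a minimal, hypothetical illustration of the surrogate idea, with a toy stand-in for the network and a one-node tree (a decision stump).

```python
# Hedged sketch: distil a depth-1 decision "stump" that mimics a black-box
# classifier on a 1-D feature. All names are hypothetical; this is NOT the
# paper's method, only a generic surrogate-tree illustration.

def black_box(x):
    """Stand-in for a pre-trained CNN's prediction on a scalar feature."""
    return 1 if x > 0.37 else 0

def fit_stump(xs, labels):
    """Pick the threshold that best reproduces the black box's labels."""
    best_thr, best_acc = None, -1.0
    for thr in xs:
        acc = sum((1 if x > thr else 0) == y
                  for x, y in zip(xs, labels)) / len(xs)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr, best_acc

xs = [i / 100.0 for i in range(100)]
labels = [black_box(x) for x in xs]   # query the black box for training labels
thr, acc = fit_stump(xs, labels)      # distil into a one-node "tree"
print(f"surrogate rule: predict 1 if x > {thr:.2f} (fidelity {acc:.2f})")
```

The stump's threshold is itself the explanation: "the model predicts 1 because x exceeded 0.37." The paper generalizes this intuition, with tree nodes grounded in semantic object parts rather than raw feature thresholds.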