Hybrid Decision Making: When Interpretable Models Collaborate With Black-Box Models

Preprint (English, Open Access)
Wang, Tong (2018)
  • Subject: Statistics - Machine Learning | Computer Science - Learning

Interpretable machine learning models have received increasing interest in recent years, especially in domains where humans are involved in the decision-making process. However, a loss of task performance in exchange for interpretability is often inevitable. This performance downgrade puts practitioners in a dilemma: choose a top-performing black-box model with no explanations, or an interpretable model with unsatisfying task performance. In this work, we propose a novel framework for building a Hybrid Decision Model that integrates an interpretable model with any black-box model to introduce explanations into the decision-making process while preserving, or possibly improving, predictive accuracy. We propose a novel metric, explainability, that measures the percentage of data sent to the interpretable model for decisions. We also design a principled objective function that considers predictive accuracy, model interpretability, and data explainability. Under this framework, we develop the Collaborative Black-box and RUle Set Hybrid (CoBRUSH) model, which combines logic rules and any black-box model into a joint decision model. An input instance is first sent to the rules for a decision. If a rule is satisfied, a decision is generated directly; otherwise, the black-box model is activated to decide on the instance. To train a hybrid model, we design an efficient search algorithm that exploits theoretically grounded strategies to reduce computation. Experiments show that CoBRUSH models achieve the same or better accuracy than their black-box collaborators working alone while gaining explainability. They also have smaller model complexity than interpretable baselines.
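The decision flow described in the abstract (rules first, black box as fallback, with explainability as the fraction of instances covered by rules) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the class name `HybridModel`, the representation of rules as `(condition, label)` pairs, and the `predict_one` interface on the black-box model are all assumptions made for clarity.

```python
class HybridModel:
    """Illustrative hybrid of an interpretable rule set and a black-box model."""

    def __init__(self, rules, black_box):
        # rules: list of (condition, label), where condition(x) -> bool
        # black_box: any model exposing predict_one(x) -> label
        self.rules = rules
        self.black_box = black_box

    def predict_one(self, x):
        """Return (label, explained): rules decide first, black box otherwise."""
        for condition, label in self.rules:
            if condition(x):
                return label, True        # instance covered by an interpretable rule
        return self.black_box.predict_one(x), False

    def explainability(self, X):
        """Fraction of instances in X decided by the interpretable rules."""
        covered = sum(1 for x in X if any(cond(x) for cond, _ in self.rules))
        return covered / len(X)
```

For example, with a single rule `age > 60 -> 1` and a black box that always predicts 0, an instance with `age=70` is decided (and explained) by the rule, while `age=30` falls through to the black box, giving an explainability of 0.5 on that two-instance set.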