You are the only possible oracle: Effective test selection for end users of interactive machine learning systems

Article (English, Open Access)
Groce, A.; Kulesza, T.; Zhang, C.; Shamasunder, S.; Burnett, M.; Wong, W.-K.; Stumpf, S.; Das, S.; Shinsel, A.; Bice, F.; McIntosh, K. (2014)
  • Publisher: Institute of Electrical and Electronics Engineers
  • DOI: 10.1109/TSE.2013.59
  • Subject: QA75

How do you test a program when only a single user, with no expertise in software testing, is able to determine if the program is performing correctly? Such programs are common today in the form of machine-learned classifiers. We consider the problem of testing this common kind of machine-generated program when the only oracle is an end user: e.g., only you can determine if your email is properly filed. We present test selection methods that achieve high failure-detection rates even for small test suites, and show that these methods work both in large-scale random experiments using a “gold standard” and in studies with real users. Our methods are inexpensive and largely algorithm-independent. Key to our methods is the exploitation of properties of classifiers that is not possible in traditional software testing. Our results suggest that it is plausible for time-pressured end users to interactively detect failures, even very hard-to-find failures, without wading through a large number of successful (and thus less useful) tests. We additionally show that some methods can find what are arguably the most difficult-to-detect faults in classifiers: cases where a machine learning algorithm has high confidence in an incorrect result.
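The core idea of exploiting a classifier's own outputs as a cheap signal for choosing which tests to show the user can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the paper's actual method or code: it ranks unlabeled candidates by the model's top-class probability and surfaces the least-confident ones for the user to judge. All names (`select_tests`, `k`) and the scikit-learn setup are assumptions made for this sketch.

```python
# Hypothetical sketch of confidence-based test selection: rank unlabeled items
# by the classifier's own confidence and surface the least-confident ones to
# the end user, who acts as the only oracle. Names are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def select_tests(model, candidates, k=10):
    """Return indices (into `candidates`) of the k items the model is least
    confident about, i.e., the most promising tests to show the user."""
    proba = model.predict_proba(candidates)   # shape: (n_candidates, n_classes)
    confidence = proba.max(axis=1)            # top-class probability per item
    return np.argsort(confidence)[:k]         # lowest confidence first

# Toy usage: train on a small labeled set, then pick tests from the remainder.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X[:100], y[:100])
to_label = select_tests(model, X[100:], k=5)  # indices are relative to X[100:]
print("Ask the user to check items:", to_label)
```

A symmetric variant would sort in descending order to probe high-confidence predictions, the kind of hard-to-detect fault the abstract mentions last, where the classifier is confidently wrong.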
