
Code Coverage and Postrelease Defects: A Large-Scale Study on Open Source Projects

Pavneet Singh Kochhar, David Lo, Julia Lawall, Nachiappan Nagappan
Open Access · English
  • Published: 01 Dec 2017
  • Publisher: HAL CCSD
  • Country: France
Abstract
Testing is a pivotal activity in ensuring the quality of software. Code coverage is a common metric used as a yardstick to measure the efficacy and adequacy of testing. However, does higher coverage actually lead to a decline in post-release bugs? Do files that have higher test coverage actually have fewer bug reports? The direct relationship between code coverage and actual bug reports has not yet been analysed via a comprehensive empirical study on real bugs. Past studies only involve a few software systems or artificially injected bugs (mutants). In this empirical study, we examine these questions in the context of open-source software ...
Subjects
free text keywords: open-source, software testing, post-release defects, code coverage, empirical study, [INFO.INFO-SE]Computer Science [cs]/Software Engineering [cs.SE]
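
The central question of the abstract is whether files with higher test coverage tend to have fewer post-release defects. As a rough illustration only (not the authors' actual analysis pipeline), the following Python sketch shows one way such a relationship could be checked, assuming a hypothetical CSV file "file_metrics.csv" with per-file columns "line_coverage" and "postrelease_defects"; all names here are illustrative assumptions.

# Hypothetical sketch: correlate per-file line coverage with post-release
# defect counts. Assumes a CSV named "file_metrics.csv" with columns
# "line_coverage" (0-100) and "postrelease_defects" (integer count).
# Illustrative only; not the study's analysis code.
import csv

from scipy.stats import spearmanr

coverage = []
defects = []
with open("file_metrics.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        coverage.append(float(row["line_coverage"]))
        defects.append(int(row["postrelease_defects"]))

# Spearman rank correlation is a common choice here because defect counts
# are typically skewed and far from normally distributed.
rho, p_value = spearmanr(coverage, defects)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3g}")

A negative rho would be consistent with higher-coverage files having fewer reported post-release bugs; a value near zero would suggest little monotonic association.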