Publication · Preprint · Conference object · 2016

An Exploratory Study of the State of Practice of Performance Testing in Java-Based Open Source Projects

Leitner, Philipp; Bezemer, Cor-Paul
Open Access
  • Published: 06 Oct 2016
  • Publisher: PeerJ
Abstract
The usage of open source (OS) software is nowadays widespread across many industries and domains. While the functional quality of OS projects is considered to be on par with that of closed-source software, much is unknown about the quality in terms of non-functional attributes, such as performance. One challenge for OS developers is that, unlike for functional testing, there is a lack of accepted best practices for performance testing. To reveal the state of practice of performance testing in OS projects, we conduct an exploratory study on 111 Java-based OS projects from GitHub. We study the performance tests of these projects from five perspectives...