
In this paper, we conduct a large-scale statistical study to examine whether the quality of cloned methods differs from that of non-cloned methods. The dataset consists of 4,421 open-source Java projects containing 644,830 cloned and 842,052 non-cloned methods. The study uses 27 software metrics as a proxy for quality, spanning the complexity, modularity, and documentation (code comments) categories. We found no statistically significant difference (p > 0.1) between the quality of cloned and non-cloned methods for most of the metrics; only 3 metrics showed a significant difference. We did, however, find that cloned methods are on average 20% smaller than non-cloned methods.
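The abstract does not name the significance test behind the reported p-values. As a hedged sketch, a nonparametric two-sample comparison such as the Mann-Whitney U test is a common choice for per-metric distributions like these; the sample data below is synthetic and purely illustrative.

```python
import math

def mann_whitney_u(xs, ys):
    """Two-sided Mann-Whitney U test using the large-sample normal
    approximation. Returns (U, p). Ties get average ranks; the tie
    correction to the variance is omitted for brevity."""
    combined = sorted((v, i) for i, v in enumerate(xs + ys))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        # Find the run of tied values and assign them their average rank.
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(xs), len(ys)
    r1 = sum(ranks[:n1])                 # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)            # smaller of the two U statistics
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    # Two-sided p-value from the standard normal CDF.
    p = min(1.0, 2 * 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return u, p

# Hypothetical metric values (e.g. cyclomatic complexity) for two groups.
cloned = [5, 7, 8, 8, 9, 10, 12, 12, 13, 15]
non_cloned = [6, 7, 7, 9, 9, 11, 11, 13, 14, 16]
u, p = mann_whitney_u(cloned, non_cloned)
```

With p > 0.1 the test would, as in the study, fail to reject the hypothesis that the two groups have the same metric distribution.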
