
Computer software fails because of the presence of intellectual faults, ranging from simple coding faults to fundamental design faults. In principle, such faults can be removed permanently once they have been detected through failure of the software, so the software will exhibit reliability growth. The problem considered here is that of forecasting this growth, which includes estimating the current reliability of the program from previous failure data. We begin with a brief description of the software failure process: a non-stationary stochastic process. Several of the best-known software reliability growth models are described, with examples of their performance on real software failure data. They show marked disagreement, revealing a need for methods of comparing and evaluating software reliability forecasts. Several simple techniques for conducting this evaluation are described and illustrated using several different models on real data sets. Finally, it is shown how, in certain circumstances, the predictive accuracy of software reliability models can be improved by a re-calibration technique.
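As a concrete illustration of the kind of reliability growth model the abstract refers to, the sketch below fits the classic Jelinski–Moranda model, in which the i-th interfailure time is exponentially distributed with rate phi * (N - i + 1) for an initial fault count N and per-fault rate phi. This is a hedged example: the specific models and data sets used in the paper are not reproduced here, and the grid-search fitting routine (`fit_jm`) and the illustrative parameter names are this sketch's own, not taken from the paper.

```python
import math

def jm_loglik(times, N, phi):
    # Jelinski-Moranda log-likelihood: the i-th interfailure time t_i
    # is exponential with rate phi * (N - i + 1), i = 1..n.
    ll = 0.0
    for i, t in enumerate(times, start=1):
        rate = phi * (N - i + 1)
        ll += math.log(rate) - rate * t
    return ll

def fit_jm(times, max_extra=200):
    # Fit (N, phi) by maximum likelihood: for each candidate integer N,
    # phi has the closed-form MLE n / sum((N - i + 1) * t_i); search N
    # over a grid from n (every fault already seen) upward.
    n = len(times)
    best = None
    for N in range(n, n + max_extra + 1):
        denom = sum((N - i + 1) * t for i, t in enumerate(times, start=1))
        phi = n / denom
        ll = jm_loglik(times, N, phi)
        if best is None or ll > best[0]:
            best = (ll, N, phi)
    return best[1], best[2]

def predict_next_mttf(times):
    # Forecast the mean time to the next failure from the fitted model:
    # after n observed failures, the residual rate is phi * (N - n).
    n = len(times)
    N_hat, phi_hat = fit_jm(times)
    remaining = N_hat - n
    if remaining == 0:
        return math.inf  # model believes all faults have been removed
    return 1.0 / (phi_hat * remaining)
```

A forecast such as `predict_next_mttf` is exactly the kind of prediction whose accuracy the evaluation and re-calibration techniques in the paper are designed to assess and improve.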
