
doi: 10.1109/6.815440
Twenty years ago, only a handful of visionaries could have predicted that powerful software born of supercomputing would butt its way into almost every desktop PC. Few foresaw the scale of data that would be manipulated, or the complexity of the tasks that would be performed, by software tools costing a few hundred dollars. But now, all developers of technical software take it as given that users may need to process gigabytes of data drawn from a combination of sources: instrument output; archived data; and publicly available materials, such as census data downloaded from the Internet. In this paper, the author argues that, in a sophisticated marketplace, the success of those developers hinges on equipping users to gain ever swifter insight into many reams of data.
