
doi: 10.1007/bf00348718
This paper reviews knowledge representation approaches devoted to the sensor fusion problem, as encountered whenever images, signals, and text must be combined to provide the input to a controller or to an inference procedure. The basic steps involved in deriving the knowledge representation scheme are: (A) locate a representation, based on exogenous context information; (B) compare two representations to determine whether they refer to the same object/entity; (C) merge the sensor-based features from the various representations of the same object into a new set of features or attributes; (D) aggregate the representations into a joint fused representation, usually more abstract than each of the sensor-related representations.
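The four steps (A)–(D) can be read as a simple pipeline. The sketch below is only an illustration of that reading, not the paper's method: the `Representation` class, the dictionary-based context, and the feature-averaging rule are all assumptions chosen for brevity.

```python
from dataclasses import dataclass, field

@dataclass
class Representation:
    """Illustrative sensor-level representation: source sensor, entity label, features."""
    sensor: str
    label: str
    features: dict = field(default_factory=dict)

def locate(raw, sensor, context):
    """(A) Locate: build a representation using exogenous context (here, an id -> label map)."""
    label = context.get(raw.get("id"), "unknown")
    return Representation(sensor=sensor, label=label, features=dict(raw.get("features", {})))

def same_object(r1, r2):
    """(B) Compare: decide whether two representations refer to the same object/entity."""
    return r1.label == r2.label and r1.label != "unknown"

def merge_features(reps):
    """(C) Merge: combine sensor-based features of one object into a single attribute set."""
    merged = {}
    for r in reps:
        for key, value in r.features.items():
            merged.setdefault(key, []).append(value)
    # Average numeric features; keep non-numeric ones as the list of observations.
    return {k: sum(v) / len(v) if all(isinstance(x, (int, float)) for x in v) else v
            for k, v in merged.items()}

def aggregate(reps):
    """(D) Aggregate: group representations by object and produce a fused, more abstract view."""
    grouped = {}
    for r in reps:
        grouped.setdefault(r.label, []).append(r)
    return {label: merge_features(group) for label, group in grouped.items()}

if __name__ == "__main__":
    context = {"obj-1": "vehicle"}  # exogenous context: observation id -> entity label
    cam = locate({"id": "obj-1", "features": {"speed": 10.0}}, "camera", context)
    radar = locate({"id": "obj-1", "features": {"speed": 12.0, "range": 55.0}}, "radar", context)
    assert same_object(cam, radar)
    print(aggregate([cam, radar]))  # {'vehicle': {'speed': 11.0, 'range': 55.0}}
```

In this toy version, association (step B) reduces to label equality supplied by the context; a real system would use spatial, temporal, or statistical matching instead.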
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator. | 50 |
| Popularity | Reflects the "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Top 1% |
| Impulse | Reflects the initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
