
doi: 10.1117/12.969329
This paper presents some basic algorithms for manipulating decision trees with thresholds. The algorithms are based on discrete decision theory; this algebraic approach, in particular, provides syntactic techniques for reducing the size of decision trees. If one takes the view that the object of a learning algorithm is to give an economical representation of the observations, then this reduction technique provides the key to a method of learning. The basic algorithms that support the incremental learning of decision trees are discussed, together with the modifications required to perform reasonable learning when threshold decisions are present. The main algorithm discussed is an incremental learning algorithm which works by maintaining an association-irreducible tree representing the observations. At each iteration a new observation is added, and an efficient reduction of the tree enlarged by that example is undertaken. The results of some simple experiments are discussed which suggest that this method of learning holds promise and may, in some situations, outperform standard heuristic techniques.
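To make the reduction step concrete, the following is a minimal illustrative sketch, not the paper's algebraic reduction system: it implements one standard syntactic reduction rule (collapsing a test whose branches are identical subtrees), applied bottom-up. The tuple encoding of trees and the rule chosen are assumptions for illustration only.

```python
# A leaf is ('leaf', label); an internal node is ('test', attr, low, high),
# where `attr` names the tested attribute and `low`/`high` are the subtrees
# taken on either side of that attribute's threshold.

def reduce_tree(t):
    """Bottom-up reduction: eliminate any test whose two branches
    reduce to the same subtree, since such a test is redundant."""
    if t[0] == 'leaf':
        return t
    _, attr, low, high = t
    low, high = reduce_tree(low), reduce_tree(high)
    if low == high:  # the outcome does not depend on this test
        return low
    return ('test', attr, low, high)

# A redundant inner test on attribute 1 is collapsed away:
t = ('test', 0,
     ('leaf', 'yes'),
     ('test', 1, ('leaf', 'no'), ('leaf', 'no')))
print(reduce_tree(t))  # ('test', 0, ('leaf', 'yes'), ('leaf', 'no'))
```

In the incremental setting described above, a step like this would run after each new observation is grafted into the tree, keeping the maintained representation small.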
