
The present paper deals with strong-monotonic, monotonic and weak-monotonic language learning from positive data as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner always has to produce better and better generalizations when fed more and more data on the concept to be learnt. We characterize strong-monotonic, monotonic, weak-monotonic and finite language learning from positive data in terms of recursively generable finite sets, thereby solving a problem posed by Angluin (1980). Moreover, we study monotonic inference with iteratively working learning devices, which are of special interest in applications. In particular, it is proved that strong-monotonic inference can be performed by iteratively working learning devices without limiting the inference capabilities, while monotonic and weak-monotonic inference cannot.
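For orientation, the sketch below records how these three monotonicity requirements are commonly formalized in the monotonic-learning literature. The notation is assumed here for illustration and may differ from the paper's own definitions: $L$ is the target language, $h_0, h_1, \dots$ the learner's hypothesis sequence, $L(h_i)$ the language described by hypothesis $h_i$, $\mathrm{content}(t[i])$ the set of examples seen through step $i$, and $M$ the learner.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Assumed standard formalizations (not quoted from the paper):
\begin{align*}
  \text{strong-monotonic:} &\quad L(h_i) \subseteq L(h_{i+1})
    \text{ for all } i,\\
  \text{monotonic:}        &\quad L(h_i) \cap L \subseteq L(h_{i+1}) \cap L
    \text{ for all } i,\\
  \text{weak-monotonic:}   &\quad \text{if } \operatorname{content}(t[i+1])
    \subseteq L(h_i) \text{, then } L(h_i) \subseteq L(h_{i+1}).
\end{align*}
% An iteratively working learner is additionally restricted to compute
% its next hypothesis from the previous hypothesis and the current
% example only:
\[
  h_{i+1} = M(h_i, x_{i+1}).
\]
\end{document}
```

Read this way, strong monotonicity is the most restrictive demand (every new hypothesis must generalize its predecessor outright), monotonicity requires improvement only relative to the target language, and weak monotonicity requires it only while the current hypothesis remains consistent with the data seen so far; the final line states the iterative restriction the abstract refers to.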
