
handle: 10261/243425
Argumentation can be used by a group of agents to discuss the validity of hypotheses. In this paper we propose an argumentation-based framework for multiagent induction, where two agents learn separately from individual training sets and then engage in an argumentation process in order to converge to a common hypothesis about the data. The result is a multiagent induction strategy in which the agents minimize the set of examples they have to exchange (using argumentation) in order to converge to a shared hypothesis. The proposed strategy works for any induction algorithm that expresses its hypothesis as a set of rules. We show that the strategy converges to a hypothesis indistinguishable in training-set accuracy from that learned by a centralized strategy.
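The abstract's strategy — two agents learning from separate training sets and exchanging only counterexamples until their rule-based hypotheses agree — can be sketched in miniature. Everything below is a hypothetical simplification for illustration: the `learn` rule inducer, the message format, and the convergence test are our own toy stand-ins, not the paper's actual protocol.

```python
# Toy sketch of argumentation-based multiagent induction (illustrative only;
# the rule learner and exchange loop are simplified assumptions, not the
# paper's protocol).

def learn(examples):
    # Hypothetical rule learner: one rule per feature value, majority label.
    buckets = {}
    for x, y in examples:
        buckets.setdefault(x, []).append(y)
    return {x: max(set(ys), key=ys.count) for x, ys in buckets.items()}

def counterexample(rules, examples):
    # Return an example the given hypothesis misclassifies, if any.
    for x, y in examples:
        if rules.get(x) != y:
            return (x, y)
    return None

def argue(train_a, train_b):
    # Agents attack each other's hypotheses with counterexamples, exchanging
    # only those examples, until neither can mount a further attack.
    a, b = list(train_a), list(train_b)
    while True:
        ha, hb = learn(a), learn(b)
        ce_b = counterexample(ha, b)  # B attacks A's hypothesis
        ce_a = counterexample(hb, a)  # A attacks B's hypothesis
        if ce_b is None and ce_a is None:
            return ha, hb             # shared hypothesis reached
        if ce_b:
            a.append(ce_b)            # B sends only the counterexample
        if ce_a:
            b.append(ce_a)

train_a = [("sunny", "out"), ("rain", "in")]
train_b = [("snow", "in"), ("sunny", "out")]
ha, hb = argue(train_a, train_b)
```

On consistent toy data the loop halts with `ha == hb`, and the shared hypothesis classifies the union of both training sets exactly as a learner trained centrally on that union would — mirroring the accuracy claim in the abstract, though only for this simplified setting.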
Argumentation, Multiagent learning, Induction
| Indicator | Value | Description |
| --- | --- | --- |
| Selected citations | 0 | Citations derived from selected sources; an alternative to the "Influence" indicator. |
| Popularity | Average | The "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. |
| Influence | Average | The overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). |
| Impulse | Average | The initial momentum of the article directly after its publication, based on the underlying citation network. |
