
doi: 10.1007/bf01099299
An intelligent advisory system should be able to provide explanatory responses that correct mistaken user beliefs. This task requires the system to form a model of the user's relevant beliefs and to understand and address feedback from users who are not satisfied with its advice. This paper presents a method by which a detailed model of the user's relevant domain-specific, plan-oriented beliefs can be formed gradually by trying to understand user feedback in an ongoing advisory dialog. In particular, we consider the problem of constructing an automated advisor capable of participating in a dialog about which UNIX command should be used to perform a particular task. We show how to construct a model of a UNIX user's beliefs about UNIX commands from several different classes of user feedback. Unlike other approaches to inferring user beliefs, ours focuses on inferring only the small set of beliefs likely to contribute to the user's misconception. And unlike other approaches to providing advice, we focus on understanding the user's descriptions of perceived problems with that advice.
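To make the abstract's idea concrete, the sketch below illustrates one possible shape of such a belief model. It is a hypothetical Python illustration, not the paper's actual representation: the names `Belief`, `UserModel`, and the `rm -f` exchange are all assumptions. It records plan-oriented beliefs of the form "command C achieves goal G", updates them from user feedback, and flags beliefs that contradict the advisor's own, i.e. the candidate misconceptions an explanatory response would address.

```python
# Hypothetical sketch (not from the paper): a minimal belief model for an
# advisory dialog about UNIX commands, updated from user feedback.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Belief:
    """A plan-oriented belief: a command achieves (or fails) a goal."""
    command: str  # e.g. "rm -f"
    goal: str     # e.g. "delete a write-protected file"
    holds: bool   # whether the believer thinks the command achieves the goal

@dataclass
class UserModel:
    """Tracks only the beliefs made relevant by the ongoing dialog."""
    beliefs: set[Belief] = field(default_factory=set)

    def record(self, belief: Belief) -> None:
        # Retract the contradictory belief, if present, before adding.
        self.beliefs.discard(Belief(belief.command, belief.goal, not belief.holds))
        self.beliefs.add(belief)

    def misconceptions(self, advisor_beliefs: set[Belief]) -> set[Belief]:
        # A user belief whose negation the advisor holds is a candidate
        # misconception to address in the next explanatory response.
        return {b for b in self.beliefs
                if Belief(b.command, b.goal, not b.holds) in advisor_beliefs}

# Example exchange: the advisor recommends "rm -f"; the user objects that it
# cannot remove a write-protected file. Recording that rejection as a belief
# and comparing it against the advisor's beliefs isolates the misconception.
advisor = {Belief("rm -f", "delete a write-protected file", True)}
user = UserModel()
user.record(Belief("rm -f", "delete a write-protected file", False))
print(user.misconceptions(advisor))
```

Run on the `rm -f` exchange, the model reports the single conflicting belief, mirroring the paper's emphasis on inferring only the small set of beliefs relevant to the user's misconception rather than a complete user model.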
| Indicator | Description | Value |
|---|---|---|
| Selected citations | Citations derived from selected sources | 3 |
| Popularity | Current impact/attention (the "hype") of the article in the research community, based on the underlying citation network | Average |
| Influence | Overall/total impact of the article in the research community, based on the underlying citation network (diachronically) | Average |
| Impulse | Initial momentum of the article directly after its publication, based on the underlying citation network | Average |
