
We introduce a method for using images for word sense disambiguation, either alone or in conjunction with traditional text-based methods. The approach is based on recent work on predicting words for images, which can be learned from image datasets with associated text. When word prediction is constrained to a narrow set of choices, such as the possible senses of a keyword, it can be quite reliable, and we use these predictions either by themselves or to reinforce standard methods. We provide preliminary results on a subset of the Corel image database with three to five keywords per image. The subset was automatically selected to have a larger proportion of keywords with sense ambiguity, and the word senses were hand-labeled to provide ground truth for testing. Results on this data strongly suggest that images can help with word sense disambiguation.
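To make the idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes an image-to-word prediction model that scores sense labels for an image, restricts those scores to the candidate senses of an ambiguous keyword, and optionally combines them with scores from a text-based disambiguator. The function name `disambiguate`, the sense labels, and the weighted combination rule are all illustrative assumptions.

```python
# Illustrative sketch only: model outputs, sense labels, and the
# combination rule are assumptions made for exposition.

def disambiguate(image_word_scores, candidate_senses, text_scores=None, alpha=0.5):
    """Choose a sense for an ambiguous keyword.

    image_word_scores: dict of sense labels (e.g. "bank_river", "bank_finance")
        to scores from an image-to-word prediction model trained on images
        with associated text.
    candidate_senses: the narrow set of admissible senses for the keyword.
    text_scores: optional scores from a traditional text-based WSD method.
    alpha: weight on the image evidence when both sources are combined.
    """
    # Restrict image-based word predictions to the candidate senses and
    # renormalize; constraining the choice set is what makes them reliable.
    img = {s: image_word_scores.get(s, 0.0) for s in candidate_senses}
    z = sum(img.values()) or 1.0
    img = {s: v / z for s, v in img.items()}

    if text_scores is None:
        combined = img
    else:
        txt = {s: text_scores.get(s, 0.0) for s in candidate_senses}
        zt = sum(txt.values()) or 1.0
        txt = {s: v / zt for s, v in txt.items()}
        # Simple weighted combination of image and text evidence.
        combined = {s: alpha * img[s] + (1 - alpha) * txt[s]
                    for s in candidate_senses}

    return max(combined, key=combined.get)


# Example: disambiguating "bank" for a Corel-style nature image.
image_word_scores = {"bank_river": 0.7, "bank_finance": 0.1}
text_scores = {"bank_river": 0.4, "bank_finance": 0.6}
print(disambiguate(image_word_scores, ["bank_river", "bank_finance"], text_scores))
```

Under these assumptions, the image evidence alone would pick the sense with the highest constrained prediction score, while the combined form lets image predictions reinforce or override a text-based method.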
