
Aspect Term Extraction (ATE) plays an important role in aspect-based sentiment analysis. Syntax-based neural models that learn rich linguistic knowledge have proven effective for ATE. However, previous approaches mainly focus on modeling syntactic structure while neglecting the rich interactions along dependency arcs. Moreover, these methods rely heavily on the results of dependency parsing and are sensitive to parsing noise. In this work, we introduce a syntax-directed attention network and a contextual gating mechanism to tackle these issues. Specifically, a graph neural network is used to model interactions along dependency arcs. With the help of syntax-directed self-attention, it can operate directly on the syntactic graph and capture structural information. We further introduce a gating mechanism that synthesizes syntactic information with structure-free features, reducing the effect of parsing noise. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on three widely used benchmark datasets.
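To make the two core ideas concrete, below is a minimal sketch (not the authors' released code) of how self-attention restricted to dependency arcs and a contextual gate over syntactic versus structure-free features might look. The function names `syntax_directed_attention` and `contextual_gate`, the gate projection `w_gate`, and all shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def syntax_directed_attention(h, adj):
    """Self-attention restricted to dependency arcs (illustrative sketch).

    h:   (n, d) token representations
    adj: (n, n) binary adjacency matrix of the dependency graph
    """
    n, d = h.shape
    adj = adj + torch.eye(n)               # self-loops so every row has an arc
    scores = (h @ h.t()) / d ** 0.5        # scaled dot-product scores
    scores = scores.masked_fill(adj == 0, float("-inf"))  # keep arcs only
    return F.softmax(scores, dim=-1) @ h   # aggregate features along arcs

def contextual_gate(h_syn, h_free, w_gate):
    """Blend syntactic features with structure-free features.

    A learned sigmoid gate can down-weight h_syn when the parse is noisy.
    w_gate: (2d, d) gate projection (hypothetical parameterization).
    """
    g = torch.sigmoid(torch.cat([h_syn, h_free], dim=-1) @ w_gate)
    return g * h_syn + (1.0 - g) * h_free

# Toy usage: 4 tokens, 8-dim features, a small dependency chain.
h = torch.randn(4, 8)
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float)
h_syn = syntax_directed_attention(h, adj)
out = contextual_gate(h_syn, h, torch.randn(16, 8))
```

Masking attention scores with the adjacency matrix is one standard way to let self-attention "operate directly on the syntactic graph"; the gate then decides per dimension how much syntactic signal to trust, which matches the paper's stated goal of reducing the effect of parsing noise.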
