
The rapid spread of toxic content on social media is a critical problem, yet existing detection models often fail because they ignore a message's context, yielding poor accuracy, especially on veiled aggression. This work proposes a Multi-dimensional Context-aware Model (MCDM) to address this gap. The MCDM integrates the message text with its discursive, user, and temporal context using a modified Transformer architecture (e.g., BERT/RoBERTa). This approach is argued, on theoretical grounds, to substantially improve the identification of toxic content, including sarcastic and other subtle forms of aggression.
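The fusion of text with multiple context channels described above can be sketched in PyTorch. This is a hypothetical illustration, not the paper's implementation: the class name, feature dimensions, and the use of a generic `nn.TransformerEncoder` (in place of a fine-tuned BERT/RoBERTa) are all assumptions made for the sake of a self-contained example.

```python
import torch
import torch.nn as nn

class ContextAwareToxicityModel(nn.Module):
    """Illustrative sketch of a context-aware detector: pooled text
    representations are concatenated with projected discursive, user,
    and temporal context vectors before binary classification.
    All names and sizes here are assumptions, not from the paper."""
    def __init__(self, vocab_size=1000, d_model=64, ctx_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # one shared projection over the three context channels
        self.ctx_proj = nn.Linear(3 * ctx_dim, d_model)
        self.classifier = nn.Linear(2 * d_model, 2)  # toxic / non-toxic

    def forward(self, token_ids, discursive, user, temporal):
        h = self.encoder(self.embed(token_ids)).mean(dim=1)  # mean-pooled text
        ctx = self.ctx_proj(torch.cat([discursive, user, temporal], dim=-1))
        return self.classifier(torch.cat([h, ctx], dim=-1))

# Toy forward pass: batch of 2 messages, 12 tokens each, 16-dim context vectors.
model = ContextAwareToxicityModel()
logits = model(torch.randint(0, 1000, (2, 12)),
               torch.randn(2, 16), torch.randn(2, 16), torch.randn(2, 16))
```

In a real system, the pooled text vector would come from a pretrained encoder and the context vectors from engineered or learned features of the surrounding thread, the author's history, and posting time.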
Hate Speech, Context-Aware Model, Toxic Content Detection, Natural Language Processing (NLP), Social Media Analysis, Cyberbullying
