
The performance of machine translation systems has improved significantly in recent years, but they remain prone to errors. This motivates the need to identify translations that require human review and post-editing. Most current research focuses on quality estimation, that is, predicting translation quality on a continuous numerical scale, which can be difficult to interpret in applied contexts. In contrast, this project focused on predicting binary labels indicating whether a translation contains errors that alter the meaning of the original text, also referred to as critical errors. Such errors are a high priority for human review and post-editing in the vast majority of use cases.
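To make the contrast concrete: a continuous quality-estimation score must still be interpreted (e.g., binarized at some cutoff) before it can drive a review decision, whereas a binary critical-error label is actionable directly. The sketch below is purely illustrative and is not the project's method; the score scale and the threshold value are assumptions.

```python
def flag_for_review(qe_scores, threshold=0.6):
    """Binarize continuous QE scores into review flags.

    Assumes a hypothetical 0-1 scale where higher means worse quality.
    True means the translation likely contains a meaning-altering
    (critical) error and should be routed to a human post-editor.
    """
    return [score >= threshold for score in qe_scores]

# Example: four translations with hypothetical QE scores.
flags = flag_for_review([0.1, 0.7, 0.4, 0.9], threshold=0.6)
# flags -> [False, True, False, True]
```

The need to pick and justify such a threshold is exactly the interpretability burden that direct binary critical-error prediction avoids.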
quality estimation, large language models, pretrained language model, machine translation
