

Using Whole Document Context in Neural Machine Translation
In Machine Translation, considering the document as a whole can help to resolve ambiguities and inconsistencies. In this paper, we propose a simple yet promising approach to add contextual information in Neural Machine Translation. We present a method to add source context that captures the whole document with accurate boundaries, taking every word into account. We provide this additional information to a Transformer model and study the impact of our method on three language pairs. The proposed approach obtains promising results in the English-German, English-French and French-English document-level translation tasks. We observe interesting cross-sentential behaviors where the model learns to use document-level information to improve translation coherence.
Accepted paper at IWSLT 2019
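The abstract does not specify the exact mechanism for injecting whole-document context, so the following is only a minimal illustrative sketch of the general idea: summarize all source tokens of the document into a single vector and mix it into each token representation of the current sentence before the Transformer encoder. The class name `DocContextEncoder`, the averaging-based document summary, and the gating layer are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DocContextEncoder(nn.Module):
    """Hypothetical document-aware source encoder (illustrative only)."""

    def __init__(self, vocab_size: int, d_model: int = 512, nhead: int = 8, num_layers: int = 6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # Projection that mixes sentence-level and document-level information per token.
        self.doc_gate = nn.Linear(2 * d_model, d_model)

    def forward(self, sent_tokens: torch.Tensor, doc_tokens: torch.Tensor) -> torch.Tensor:
        # sent_tokens: (batch, sent_len) token ids of the current source sentence
        # doc_tokens:  (batch, doc_len)  token ids of the whole source document
        sent_emb = self.embed(sent_tokens)                  # (batch, sent_len, d_model)
        doc_summary = self.embed(doc_tokens).mean(dim=1)    # (batch, d_model): crude document summary
        doc_summary = doc_summary.unsqueeze(1).expand_as(sent_emb)
        mixed = self.doc_gate(torch.cat([sent_emb, doc_summary], dim=-1))
        return self.encoder(mixed)                          # document-aware source representations

# Usage sketch:
# enc = DocContextEncoder(vocab_size=32000)
# out = enc(torch.randint(0, 32000, (2, 20)), torch.randint(0, 32000, (2, 300)))
```

The averaged document embedding is only a stand-in for whatever document representation the paper actually uses; the point of the sketch is where such context would enter a standard Transformer encoder.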
- Université Paris Diderot, France
- Aix-Marseille University, France
ACM Computing Classification System: ComputingMethodologies_DOCUMENTANDTEXTPROCESSING
[INFO.INFO-TT] Computer Science [cs]/Document and Text Processing; [INFO.INFO-CL] Computer Science [cs]/Computation and Language [cs.CL]; Computation and Language (cs.CL); FOS: Computer and information sciences


