Publication · Conference object · Preprint · 2019

Controlling the Output Length of Neural Machine Translation

Lakew, Surafel Melaku; Di Gangi, Mattia; Federico, Marcello
Open Access · English
Published: 02 Nov 2019
Comment: To appear at the 16th International Workshop on Spoken Language Translation (IWSLT), 2019
Keywords: Computer Science - Computation and Language
33 references, page 1 of 3

[4] H. Hassan, A. Aue, C. Chen, V. Chowdhary, J. Clark, C. Federmann, X. Huang, M. Junczys-Dowmunt, W. Lewis, M. Li, S. Liu, T. Liu, R. Luo, A. Menezes, T. Qin, F. Seide, X. Tan, F. Tian, L. Wu, S. Wu, Y. Xia, D. Zhang, Z. Zhang, and M. Zhou, “Achieving human parity on automatic Chinese to English news translation,” CoRR, vol. abs/1803.05567, 2018.

[5] S. Läubli, R. Sennrich, and M. Volk, “Has machine translation achieved human parity? A case for document-level evaluation,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 2018, pp. 4791-4796. [Online]. Available: https://aclanthology.info/papers/D18-1512/d18-1512

[6] K. Murray and D. Chiang, “Correcting length bias in neural machine translation,” in Proceedings of the Third Conference on Machine Translation: Research Papers, 2018, pp. 212-223.

[7] X. Shi, K. Knight, and D. Yuret, “Why neural translations are the right length,” in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2016, pp. 2278-2282.

[8] A. M. Rush, S. Chopra, and J. Weston, “A neural attention model for abstractive sentence summarization,” arXiv preprint arXiv:1509.00685, 2015.

[9] Y. Kikuchi, G. Neubig, R. Sasano, H. Takamura, and M. Okumura, “Controlling output length in neural encoder-decoders,” arXiv preprint arXiv:1609.09552, 2016.

[10] A. Fan, D. Grangier, and M. Auli, “Controllable abstractive summarization,” arXiv preprint arXiv:1711.05217, 2017.

[11] Y. Liu, Z. Luo, and K. Zhu, “Controlling length in abstractive summarization using a convolutional neural network,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018, pp. 4110-4119.

[12] S. Takase and N. Okazaki, “Positional encoding to control output sequence length,” in Proceedings of NAACL-HLT 2019, 2019.

[13] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, 2017, pp. 6000-6010.

[14] M.-T. Luong, H. Pham, and C. D. Manning, “Effective approaches to attention-based neural machine translation,” arXiv preprint arXiv:1508.04025, 2015.

[15] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1-9.

[16] M. Johnson, M. Schuster, Q. V. Le, M. Krikun, Y. Wu, Z. Chen, N. Thorat, F. Viégas, M. Wattenberg, G. Corrado et al., “Google's multilingual neural machine translation system: Enabling zero-shot translation,” arXiv preprint arXiv:1611.04558, 2016.

[17] T.-L. Ha, J. Niehues, and A. Waibel, “Toward multilingual neural machine translation with universal encoder and decoder,” arXiv preprint arXiv:1611.04798, 2016.

[18] R. Sennrich, B. Haddow, and A. Birch, “Neural machine translation of rare words with subword units,” arXiv preprint arXiv:1508.07909, 2015.
