
Neural Abstractive Text Summarization with Sequence-to-Sequence Models: A Survey

Shi, Tian; Keneshloo, Yaser; Ramakrishnan, Naren; Reddy, Chandan K.
Open Access, English
  • Published: 04 Dec 2018
Abstract
In the past few years, neural abstractive text summarization with sequence-to-sequence (seq2seq) models has gained a lot of popularity. Many techniques have been proposed to improve seq2seq models, making them capable of handling challenges such as saliency, fluency, and human readability, and of generating high-quality summaries. Generally speaking, most of these techniques differ in one of three categories: network structure, parameter inference, and decoding/generation. There are also other concerns, such as efficiency and parallelism in training a model. In this paper, we provide a comprehensive literature and technical survey on...
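To make the seq2seq framework described in the abstract concrete, below is a minimal sketch of an attentional encoder-decoder summarizer in PyTorch. Everything in it (the class name, layer sizes, and the choice of additive, Bahdanau-style attention) is an illustrative assumption for this sketch, not the implementation of any particular model covered by the survey.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveSeq2Seq(nn.Module):
    """Toy encoder-decoder with additive attention for abstractive summarization.
    Hypothetical sizes; real systems use pretrained embeddings and larger states."""
    def __init__(self, vocab_size=10000, emb=128, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        # Bidirectional encoder reads the source article.
        self.encoder = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        # Decoder cell consumes the previous summary token plus the attention context.
        self.decoder = nn.GRUCell(emb + 2 * hid, hid)
        self.score = nn.Linear(2 * hid + hid, 1)   # additive attention scorer
        self.bridge = nn.Linear(2 * hid, hid)      # init decoder state from encoder
        self.out = nn.Linear(hid + 2 * hid, vocab_size)

    def forward(self, src, tgt):
        # src: (B, S) article token ids; tgt: (B, T) summary ids (teacher forcing)
        enc_out, _ = self.encoder(self.embed(src))            # (B, S, 2*hid)
        state = torch.tanh(self.bridge(enc_out.mean(dim=1)))  # (B, hid)
        logits = []
        for t in range(tgt.size(1)):
            emb_t = self.embed(tgt[:, t])                     # (B, emb)
            # Score every source position against the current decoder state.
            q = state.unsqueeze(1).expand(-1, enc_out.size(1), -1)
            a = F.softmax(self.score(torch.cat([enc_out, q], dim=-1)).squeeze(-1), dim=-1)
            ctx = torch.bmm(a.unsqueeze(1), enc_out).squeeze(1)  # (B, 2*hid)
            state = self.decoder(torch.cat([emb_t, ctx], dim=-1), state)
            logits.append(self.out(torch.cat([state, ctx], dim=-1)))
        return torch.stack(logits, dim=1)                     # (B, T, vocab)

# Usage: cross-entropy against the summary shifted by one position.
model = AttentiveSeq2Seq()
src = torch.randint(0, 10000, (2, 40))   # two toy "articles"
tgt = torch.randint(0, 10000, (2, 12))   # two toy "summaries"
logits = model(src, tgt[:, :-1])
loss = F.cross_entropy(logits.reshape(-1, 10000), tgt[:, 1:].reshape(-1))
```

At training time the decoder is teacher-forced on the reference summary; at test time the same model would be driven by its own predictions, typically with beam search, which is where the decoding/generation techniques the survey categorizes come into play.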
Subjects
free text keywords: Computer Science - Computation and Language, Computer Science - Machine Learning, Statistics - Machine Learning
Funded by
NSF| III: Small: New Machine Learning Approaches for Modeling Time-to-Event Data
Project
  • Funder: National Science Foundation (NSF)
  • Project Code: 1707498
  • Funding stream: Directorate for Computer & Information Science & Engineering | Division of Information and Intelligent Systems
NSF| SCH: INT: Collaborative Research: Data-driven Stratification and Prognosis for Traumatic Brain Injury
Project
  • Funder: National Science Foundation (NSF)
  • Project Code: 1838730
  • Funding stream: Directorate for Computer & Information Science & Engineering | Division of Information and Intelligent Systems
117 references, page 1 of 8

[1] D. R. Radev, E. Hovy, and K. McKeown, “Introduction to the special issue on summarization,” Computational Linguistics, vol. 28, no. 4, pp. 399-408, 2002.

[2] M. Allahyari, S. Pouriyeh, M. Assefi, S. Safaei, E. D. Trippe, J. B. Gutierrez, and K. Kochut, “Text summarization techniques: A brief survey,” arXiv preprint arXiv:1707.02268, 2017.

[3] I. Mani and M. T. Maybury, Advances in Automatic Text Summarization. MIT Press, 1999.

[4] M. Gambhir and V. Gupta, “Recent automatic text summarization techniques: a survey,” Artificial Intelligence Review, vol. 47, no. 1, pp. 1-66, 2017.

[5] R. M. Verma and D. Lee, “Extractive summarization: Limits, compression, generalized model and heuristics,” Computación y Sistemas, vol. 21, 2017.

[6] N. Bhatia and A. Jaiswal, “Automatic text summarization and it's methods - a review,” in 2016 6th International Conference on Cloud System and Big Data Engineering (Confluence). IEEE, 2016, pp. 65-72.

[7] E. Lloret and M. Palomar, “Text summarisation in progress: a literature review,” Artificial Intelligence Review, vol. 37, no. 1, pp. 1-41, 2012.

[8] H. Saggion and T. Poibeau, “Automatic text summarization: Past, present and future,” in Multi-source, multilingual information extraction and summarization. Springer, 2013, pp. 3-21.

[9] A. Nenkova and K. McKeown, “Automatic summarization,” Foundations and Trends® in Information Retrieval, vol. 5, no. 2-3, pp. 103-233, 2011.

[10] D. Das and A. F. Martins, “A survey on automatic text summarization,” Literature Survey for the Language and Statistics II course at CMU, vol. 4, pp. 192-195, 2007.

[11] Y. Wu and B. Hu, “Learning to extract coherent summary via deep reinforcement learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2018.

[12] A. See, P. J. Liu, and C. D. Manning, “Get to the point: Summarization with pointer-generator networks,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2017, pp. 1073-1083.

[13] Q. Zhou, N. Yang, F. Wei, S. Huang, M. Zhou, and T. Zhao, “Neural document summarization by jointly learning to score and select sentences,” in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1, 2018, pp. 654-663.

[14] R. Nallapati, B. Zhou, C. dos Santos, Ç. Gülçehre, and B. Xiang, “Abstractive text summarization using sequence-to-sequence RNNs and beyond,” CoNLL 2016, p. 280, 2016.

[15] A. M. Rush, S. Chopra, and J. Weston, “A neural attention model for abstractive sentence summarization,” in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, pp. 379-389.
