Publication · Conference object · 2020

Style versus Content: A distinction without a (learnable) difference?

Somayeh Jafaritazehjani; Gwénolé Lecorvé; Damien Lolive; John D. Kelleher
Open Access · English
  • Published: 08 Dec 2020
  • Publisher: HAL CCSD
  • Country: France
Abstract
Textual style transfer involves modifying the style of a text while preserving its content. This assumes that it is possible to separate style from content. This paper investigates whether this separation is possible. We use sentiment transfer as our case study for style transfer analysis. Our experimental methodology frames style transfer as a multi-objective problem, balancing style shift with content preservation and fluency. Due to the lack of parallel data for style transfer, we employ a variety of adversarial encoder-decoder networks in our experiments. Also, we use a probing methodology to analyse how these models encode style-relat...
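The abstract describes an adversarial encoder-decoder setup, trained without parallel data, together with a probing analysis of the learned representations. The following minimal PyTorch sketch is not the authors' implementation; all module names, dimensions, and the loss weighting are assumptions, intended only to illustrate the general technique of penalising style leakage in the latent code while optimising reconstruction.

# Hypothetical sketch of an adversarial encoder-decoder for style transfer.
# A style classifier is trained on the latent code; the encoder-decoder is
# rewarded for reconstruction and penalised when the classifier can still
# predict style from the latent. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, tokens):
        _, h = self.rnn(self.emb(tokens))   # h: (1, batch, hid_dim)
        return h.squeeze(0)                 # latent "content" code

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens, latent):
        out, _ = self.rnn(self.emb(tokens), latent.unsqueeze(0))
        return self.out(out)                # per-token vocabulary logits

class StyleClassifier(nn.Module):
    """Adversary during training, or a diagnostic probe after training."""
    def __init__(self, hid_dim=256, n_styles=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hid_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_styles))

    def forward(self, latent):
        return self.net(latent)

def encoder_decoder_loss(enc, dec, adv, tokens, styles, lambda_adv=1.0):
    """Multi-objective loss: reconstruction minus style predictability."""
    ce = nn.CrossEntropyLoss()
    latent = enc(tokens)
    # Reconstruction with teacher forcing (content preservation / fluency proxy).
    logits = dec(tokens[:, :-1], latent)
    rec_loss = ce(logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
    # Adversarial term: the encoder tries to make style unpredictable;
    # the adversary itself is updated separately to minimise this same term.
    adv_loss = ce(adv(latent), styles)
    return rec_loss - lambda_adv * adv_loss

The probing methodology mentioned in the abstract can then be approximated by freezing the trained encoder and fitting a fresh StyleClassifier on its latent codes: the probe's accuracy indicates how much style-related information the supposedly style-free representation still carries.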
Subjects
Free text keywords: [INFO.INFO-AI] Computer Science [cs] / Artificial Intelligence [cs.AI], Adversarial system, Computer science, Fluency, Artificial intelligence, Natural language processing
Funded by
  • ANR | TREMoLo: Language register transformation using linguistic pattern extraction
      • Funder: French National Research Agency (ANR)
      • Project Code: ANR-16-CE23-0019
  • SFI | ADAPT: Centre for Digital Content Platform Research
      • Funder: Science Foundation Ireland (SFI)
      • Project Code: 13/RC/2106
      • Funding stream: SFI Research Centres
Communities
Digital Humanities and Cultural Heritage