Preprint · Conference object · 2017

Structured Attentions for Visual Question Answering

Chen Zhu, Yanpeng Zhao, Shuaiyi Huang, Kewei Tu, Yi Ma
Open Access · English
  • Published: 07 Aug 2017
Abstract
Visual attention, which assigns weights to image regions according to their relevance to a question, is considered an indispensable component of most Visual Question Answering (VQA) models. Although questions may involve complex relations among multiple regions, few attention models can effectively encode such cross-region relations. In this paper, we demonstrate the importance of encoding such relations by showing the limited effective receptive field of ResNet on two datasets, and we propose to model visual attention as a multivariate distribution over a grid-structured Conditional Random Field (CRF) on image regions. We demonstrate how to convert the iterative inference...
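The full text is not included in this record, but the idea the abstract sketches (treating attention weights as marginals of a grid-structured CRF, computed by unrolling iterative inference) can be illustrated. Below is a minimal NumPy sketch using mean field, one standard iterative CRF inference algorithm (the record's keywords also mention belief propagation). The function name, the binary attend/ignore variables, the single scalar pairwise coupling, and the random scores are illustrative assumptions, not the paper's actual architecture, where unary and pairwise potentials would be learned from question and image features.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_attention(unary, pairwise=0.5, n_iters=3):
    """Approximate per-region attention marginals on a grid CRF.

    unary:    (H, W) question-region relevance logits (hypothetical
              input; in the paper such scores would come from question
              and image features).
    pairwise: scalar coupling between 4-connected neighbours (assumed
              shared across edges here for simplicity).
    Returns an (H, W) array of attention probabilities.
    """
    q = sigmoid(unary)                       # initial beliefs q(a_ij = 1)
    for _ in range(n_iters):                 # each sweep = one "recurrent layer"
        msg = np.zeros_like(q)               # aggregate neighbour beliefs
        msg[1:, :] += q[:-1, :]              # from the region above
        msg[:-1, :] += q[1:, :]              # from the region below
        msg[:, 1:] += q[:, :-1]              # from the region to the left
        msg[:, :-1] += q[:, 1:]              # from the region to the right
        q = sigmoid(unary + pairwise * msg)  # mean-field update
    return q

# Example: a 14x14 grid, matching a typical ResNet feature-map layout
scores = np.random.randn(14, 14)
attn = mean_field_attention(scores)

Because each mean-field sweep is a fixed, differentiable function of the previous beliefs, a fixed number of sweeps can be stacked as recurrent layers and trained end-to-end, which is the conversion the abstract refers to.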
Subjects
free text keywords: Computer Science - Computer Vision and Pattern Recognition, Pattern recognition, Question answering, Visualization, Machine learning, Artificial neural network, Computer science, Belief propagation, Inference engine, Artificial intelligence, Encoding (memory), Source code, Inference