Publication · Article · Preprint · Other literature type · 2019

Personalized Saliency and Its Prediction

Xu, Yanyu; Gao, Shenghua; Wu, Junru; Li, Nianyi; Yu, Jingyi
Open Access
  • Published: 01 Dec 2019 · Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 41, pages 2975-2989 (ISSN: 0162-8828, eISSN: 1939-3539)
  • Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Abstract
Nearly all existing visual saliency models to date have focused on predicting a universal saliency map shared across all observers. Yet psychology studies suggest that the visual attention of different observers can vary significantly under specific circumstances, especially when a scene is composed of multiple salient objects. To study such heterogeneous visual attention patterns across observers, we first construct a personalized saliency dataset and explore correlations between visual attention, personal preferences, and image contents. Specifically, we propose to decompose a personalized saliency map (referred to as PSM) into a universal saliency map (referred to as USM) predi...
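A minimal sketch of the decomposition the abstract describes, assuming (as an illustration only, since the abstract is truncated and may not match the paper's exact formulation) that the person-specific component combines additively with the universal one: for an observer p viewing image I,

    PSM_p(I) = USM(I) + Δ_p(I)

where USM(I) is the observer-independent saliency map and Δ_p(I) is a hypothetical person-specific discrepancy map predicted from the image content together with observer p's preferences.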
Subjects
ACM Computing Classification System: Computing Methodologies → Image Processing and Computer Vision
Free text keywords: Computational Theory and Mathematics, Software, Applied Mathematics, Artificial Intelligence, Computer Vision and Pattern Recognition, Semantics, Multi-task learning, Computer science, Salience (neuroscience), Visualization, Visual saliency, Saliency map, Pattern recognition, Convolutional neural network, Feature extraction, Computer Science - Computer Vision and Pattern Recognition
References (52 in total; the first 15 are listed below)

[1] L. Itti, "Automatic foveation for video compression using a neurobiological model of visual attention," IEEE Trans. on Image Processing, vol. 13, no. 10, pp. 1304-1318, 2004.

[2] V. Setlur, S. Takagi, R. Raskar, M. Gleicher, and B. Gooch, "Automatic image retargeting," in Proc. 4th International Conference on Mobile and Ubiquitous Multimedia, 2005, pp. 59-68.

[3] M. M. L. Chang, S. K. Ong, and A. Y. C. Nee, "Automatic information positioning scheme in AR-assisted maintenance based on visual saliency," in Proc. International Conference on Augmented Reality, Virtual Reality and Computer Graphics, 2016, pp. 453-462.

[4] M. Gygli, H. Grabner, H. Riemenschneider, F. Nater, and L. Van Gool, "The interestingness of images," in Proc. IEEE Int. Conf. Comput. Vis., 2013, pp. 1633-1640.

[5] A. Borji, M. M. Cheng, H. Jiang, and J. Li, "Salient object detection: A survey," arXiv preprint arXiv:1411.5878, 2014.

[6] T. Judd, K. Ehinger, F. Durand, and A. Torralba, "Learning to predict where humans look," in Proc. IEEE Int. Conf. Comput. Vis., 2009, pp. 2106-2113.

[7] Y. Li, X. Hou, C. Koch, J. M. Rehg, and A. L. Yuille, "The secrets of salient object segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2014, pp. 280-287.

[8] P. Xu, K. A. Ehinger, Y. Zhang, A. Finkelstein, A. R. Kulkarni, and J. Xiao, "TurkerGaze: Crowdsourcing saliency with webcam based eye tracking," arXiv preprint arXiv:1504.06755, 2015.

[9] X. Huang, C. Shen, X. Boix, and Q. Zhao, "SALICON: Reducing the semantic gap in saliency prediction by adapting deep neural networks," in Proc. IEEE Int. Conf. Comput. Vis., 2015, pp. 262-270.

[10] J. Pan, E. Sayrol, X. Giro-i-Nieto, K. McGuinness, and N. E. O'Connor, "Shallow and deep convolutional networks for saliency prediction," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2016, pp. 598-606.

[11] N. Liu, J. Han, D. Zhang, S. Wen, and T. Liu, "Predicting eye fixations using convolutional neural networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2015, pp. 362-370.

[12] S. S. Kruthiventi, K. Ayush, and R. V. Babu, "DeepFix: A fully convolutional neural network for predicting human eye fixations," IEEE Trans. on Image Processing, 2017.

[13] S. S. Kruthiventi, V. Gudisa, J. H. Dholakiya, and R. Venkatesh Babu, "Saliency unified: A deep architecture for simultaneous eye fixation prediction and salient object segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2016, pp. 5781-5790.

[14] J. Xu, M. Jiang, S. Wang, M. S. Kankanhalli, and Q. Zhao, "Predicting human gaze beyond pixels," Journal of Vision, vol. 14, no. 1, article 28, 2014.

[15] M. Jiang, S. Huang, J. Duan, and Q. Zhao, "SALICON: Saliency in context," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2015, pp. 1072-1080.
