Visual Attention-based Image Watermarking

Article (English, Open Access)
Bhowmik, D.; Oakes, M.; Abhayaratne, C. (2016)

Imperceptibility and robustness are two fundamental but conflicting requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but poor robustness, while high-strength watermarking achieves good robustness but often introduces distortions that degrade the visual quality of the host media. If the distortion caused by high-strength watermarking avoids visually attentive regions, it is unlikely to be noticeable to the viewer. In this paper, we exploit this observation and propose a novel visual attention-based, highly robust image watermarking methodology that embeds lower-strength watermarks in visually salient regions and higher-strength watermarks in non-salient regions. A new low-complexity wavelet-domain visual attention model is proposed that enables the design of new robust watermarking algorithms. The proposed saliency model outperforms the state-of-the-art in terms of both saliency detection performance and computational complexity. In evaluating watermarking performance, the proposed blind and non-blind algorithms exhibit increased robustness to various natural image processing and filtering attacks with minimal or no effect on image quality, as verified by both subjective and objective visual quality evaluation. Improvements of up to 25% against JPEG2000 compression and up to 40% against common filtering attacks are reported over existing algorithms that do not use a visual attention model.
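
To illustrate the core idea of saliency-modulated embedding strength, the following is a minimal sketch, not the authors' exact algorithm: it assumes a precomputed saliency map in [0, 1] and additively embeds ±1 watermark bits into a single wavelet detail subband, using a lower strength in salient regions and a higher strength elsewhere. The function name `embed_watermark` and the strengths `alpha_low`/`alpha_high` are illustrative placeholders, not taken from the paper.

```python
# Minimal sketch of saliency-modulated wavelet-domain watermark embedding.
# Assumptions: a saliency map in [0, 1] is already available (from any
# saliency model); parameter names and values are illustrative only.

import numpy as np
import pywt


def embed_watermark(image, saliency, watermark_bits,
                    alpha_low=0.02, alpha_high=0.10, wavelet="haar"):
    """Embed +/-1 watermark bits into the diagonal detail subband,
    with lower strength in salient regions and higher strength elsewhere."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float64), wavelet)

    # Down-sample the saliency map to the subband resolution.
    sal = saliency[::2, ::2][:cD.shape[0], :cD.shape[1]]

    # Per-coefficient strength: alpha_low where salient, alpha_high elsewhere.
    alpha = np.where(sal > 0.5, alpha_low, alpha_high)

    # Tile the +/-1 watermark bits over the subband and embed additively,
    # scaled by the local coefficient magnitude.
    w = np.resize(watermark_bits, cD.shape).astype(np.float64)
    cD_marked = cD + alpha * np.abs(cD) * w

    return pywt.idwt2((cA, (cH, cV, cD_marked)), wavelet)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    host = rng.uniform(0, 255, size=(256, 256))     # stand-in host image
    saliency = rng.uniform(0, 1, size=(256, 256))   # stand-in saliency map
    bits = rng.choice([-1, 1], size=1024)           # binary watermark
    marked = embed_watermark(host, saliency, bits)
    print("RMSE distortion:", np.sqrt(np.mean((marked - host) ** 2)))
```

In a non-blind setting the detector would have the original image available to invert this embedding rule, whereas a blind detector would not; the paper reports both variants.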
