Publication: Preprint, 2019

CSGAN: Cyclic-Synthesized Generative Adversarial Networks for Image-to-Image Transformation

Kancharagunta, Kishan Babu; Dubey, Shiv Ram
Open Access English
  • Published: 11 Jan 2019
Abstract
The primary motivation of image-to-image transformation is to convert an image from one domain to another. Most research has focused on image transformation for a set of pre-defined domains; very few works have developed a common framework for image-to-image transformation across different domains. With the introduction of Generative Adversarial Networks (GANs) as a general framework for the image generation problem, there has been tremendous growth in the area of image-to-image transformation. Much of this research focuses on a suitable objective function for image-to-image transformation. In this paper, we propose...
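The title suggests a cyclic-synthesized objective in the spirit of cycle-consistent adversarial networks [14]. As a rough illustration only (the paper's actual loss is not reproduced here), a cycle-consistency term penalizes the L1 distance between an image and its round-trip reconstruction G_BA(G_AB(a)). Below is a minimal numpy sketch with toy linear "generators"; all function names are hypothetical stand-ins, not the authors' code:

```python
import numpy as np

def l1(x, y):
    """Mean absolute error between two images."""
    return np.mean(np.abs(x - y))

# Toy stand-ins for the two generators G_AB: A -> B and G_BA: B -> A.
# Real generators are convolutional networks; simple invertible
# linear maps keep this example self-contained.
def g_ab(a):
    return 2.0 * a + 1.0

def g_ba(b):
    return (b - 1.0) / 2.0

a = np.random.rand(8, 8)       # an "image" from domain A
cycled_a = g_ba(g_ab(a))       # round trip A -> B -> A

# Cycle-consistency loss: the round trip should reproduce the input.
cycle_loss = l1(a, cycled_a)
print(f"cycle loss: {cycle_loss:.6f}")
```

Because the toy generators are exact inverses, the loss is numerically zero here; in training, this term is minimized jointly with the adversarial losses of both domains.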
Subjects
ACM Computing Classification System: Computing Methodologies / Image Processing and Computer Vision
free text keywords: Computer Science - Computer Vision and Pattern Recognition
34 references, page 1 of 3

[1] Z. Cheng, Q. Yang, and B. Sheng, “Deep colorization,” in IEEE International Conference on Computer Vision, 2015.

[2] R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in European Conference on Computer Vision, 2016.

[3] T. Guo, H. S. Mousavi, and V. Monga, “Deep learning based image super-resolution with coupled backpropagation,” in IEEE Global Conference on Signal and Information Processing, 2016, pp. 237-241.

[4] J. Chen, X. He, H. Chen, Q. Teng, and L. Qing, “Single image super-resolution based on deep learning and gradient transformation,” in IEEE International Conference on Signal Processing, 2016, pp. 663-667.

[5] Z. Liu, X. Li, P. Luo, C.-C. Loy, and X. Tang, “Semantic image segmentation via deep parsing network,” in IEEE International Conference on Computer Vision, 2015, pp. 1377-1385.

[6] L.-C. Chen, Y. Yang, J. Wang, W. Xu, and A. L. Yuille, “Attention to scale: Scale-aware semantic image segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3640-3649.

[7] L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2414-2423.

[8] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European Conference on Computer Vision, 2016, pp. 694-711.

[9] S. Zhang, X. Gao, N. Wang, J. Li, and M. Zhang, “Face sketch synthesis via sparse representation-based greedy search,” IEEE Transactions on Image Processing, vol. 24, no. 8, pp. 2466-2477, 2015.

[10] L. Zhang, L. Lin, X. Wu, S. Ding, and L. Zhang, “End-to-end photo-sketch generation via fully convolutional representation learning,” in ACM International Conference on Multimedia Retrieval, 2015, pp. 627-634.

[11] X. Wang and X. Tang, “Face photo-sketch synthesis and recognition,” IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 11, pp. 1955-1967, 2008.

[12] R. Tyleček and R. Šára, “Spatial pattern templates for recognition of objects with regular structure,” in German Conference on Pattern Recognition, 2013.

[13] Z. Yi, H. Zhang, P. Tan, and M. Gong, “DualGAN: Unsupervised dual learning for image-to-image translation,” in IEEE International Conference on Computer Vision, 2017, pp. 2868-2876.

[14] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in IEEE International Conference on Computer Vision, 2017, pp. 2242-2251.

[15] A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Conference on Computer Vision and Pattern Recognition, 2005, pp. 60-65.
