Timeline editing of objects in video

Lu, Shao-Ping; Zhang, Song-Hai; Wei, Jin; Hu, Shi-Min; Martin, Ralph Robert (2013)
  • Publisher: IEEE
  • DOI: 10.1109/TVCG.2012.145
  • Subject: QA75

We present a video editing technique based on changing the timelines of individual objects in video: objects remain in their original places but appear at different times. This allows the production of object-level slow-motion, fast-motion, or even time-reversal effects, and is more flexible than applying such effects to whole frames, since new relationships between objects can be created. Because we restrict object interactions to the same spatial locations as in the original video, our approach can produce high-quality results using only coarse matting of video objects. Coarse matting can be done efficiently using automatic video object segmentation, avoiding tedious manual matting. To design the output, the user interactively indicates the desired new life spans of objects, and may also change the overall running time of the video. Our method rearranges the timelines of objects in the video while applying appropriate object interaction constraints. We demonstrate that, while this editing technique is somewhat restrictive, it still allows many interesting results.
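The core operation described above can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows, under simplifying assumptions, how an output frame index might be mapped back to a source frame index for one object, given the object's original life span and the new life span chosen by the user. Stretching the span gives slow motion, compressing it gives fast motion, and a `reverse` flag gives time reversal. The function name and interface are hypothetical.

```python
def remap_time(t, src_span, dst_span, reverse=False):
    """Map an output frame index t to a source frame index for one object.

    src_span: (s0, s1), inclusive life span of the object in the source video.
    dst_span: (d0, d1), inclusive life span chosen for the object in the output.
    reverse:  if True, the object plays backwards (time reversal).
    Returns None when the object is not alive at output time t.
    """
    s0, s1 = src_span
    d0, d1 = dst_span
    if t < d0 or t > d1:
        return None  # object absent from this output frame
    # Normalized position within the new life span (0 at start, 1 at end).
    u = (t - d0) / (d1 - d0) if d1 != d0 else 0.0
    if reverse:
        u = 1.0 - u
    # Linear stretch/compress: a longer dst span samples the source more
    # slowly (slow motion); a shorter one samples it faster (fast motion).
    return round(s0 + u * (s1 - s0))
```

For example, doubling an object's life span from 30 to 60 output frames means output frame 10 samples source frame 5, a 2x slow-motion effect for that object alone. In a full system, the sampled object pixels would then be composited back at their original spatial locations, subject to the interaction constraints the abstract mentions.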
