Scene modelling is central to many applications in our society, including quality control in manufacturing, robotics, medical imaging, visual effects production, cultural heritage and computer games. It requires accurate estimation of the scene's shape (its 3D surface geometry) and reflectance (how its surface reflects light). However, there is currently no method capable of capturing the shape and reflectance of dynamic scenes with complex surface reflectance (e.g. glossy surfaces). This lack of generic methods is problematic as it limits the applicability of existing techniques to scene categories which are not representative of the complexity of natural scenes and materials. This project will introduce a general framework to enable the capture of shape and reflectance of complex dynamic scenes, thereby addressing an important gap in the field.

Current image- or video-based shape estimation techniques rely on the assumption that the scene's surface reflectance is diffuse (it reflects light uniformly in all directions) or assume it is known a priori, limiting their applicability to simple scenes. Reflectance estimation requires recovering a six-dimensional function (the BRDF) which describes how light is reflected at each surface point as a function of incident light direction and viewpoint direction. Due to this high dimensionality, reflectance estimation remains limited to static scenes or requires expensive specialist equipment. At present, there is no method capable of accurately capturing both shape and reflectance of general dynamic scenes, yet scenes with complex unknown reflectance properties are omnipresent in our daily lives.

The proposed research will address this gap by introducing a novel framework which enables estimation of shape and reflectance for arbitrary dynamic scenes. The approach is based on two key scientific advances which tackle the high dimensionality of shape and reflectance estimation. First, a general methodology for decoupling shape estimation from reflectance estimation will be proposed; this will allow decomposition of the original high-dimensional, ill-posed problem into smaller sub-problems that are tractable. Second, a space-time formulation of reflectance estimation will be introduced; this will use dense surface tracking techniques to extend reflectance estimation to the temporal domain, increasing the number of observations available and overcoming the inherent scarcity of observations at a single time instant.

This work will build on the PI's pioneering research in 3D reconstruction of scenes with arbitrary unknown reflectance properties and his expertise in dynamic scene reconstruction, surface tracking/animation and reflectance estimation. This research represents a radical shift in scene modelling which will result in several major technical contributions: 1) a reflectance-independent shape estimation methodology for dynamic scenes, 2) a non-rigid surface tracking method suitable for general scenes with complex and unknown reflectance, and 3) a general and scalable reflectance estimation method for dynamic scenes.
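For reference, the six-dimensional object discussed above is the spatially-varying BRDF. In standard rendering-equation notation (a textbook formulation, not taken from the project text), the outgoing radiance at a surface point is

\[
L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\,(\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i ,
\]

where the BRDF \(f_r\) depends on two dimensions of surface position \(\mathbf{x}\) (parameterised over the surface), two of incident direction \(\omega_i\) and two of view direction \(\omega_o\), giving the six dimensions cited above. Diffuse-reflectance methods collapse \(f_r\) to a per-point constant, which is precisely the assumption the proposed framework removes.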
This will benefit all areas requiring accurate acquisition of real-world scenes with complex, dynamic shape and reflectance, without the need for complex and restrictive hardware setups. Such scenes are common in natural environments, manufacturing (metallic surfaces) and medical imaging (human tissue), but accurate capture of their shape is not possible with existing approaches, which assume diffuse reflectance and fail dramatically in such cases. The project will achieve, for the first time, accurate modelling of dynamic scenes with arbitrary surface reflectance properties, opening up novel avenues in scene modelling. The application of this technology will be demonstrated in digital cinema, in collaboration with industrial partners, to support the development of the next generation of visual effects.
The goal of my Innovation Fellowship is to create a new form of immersive 360-degree VR video. We are massive consumers of visual information, and as new forms of visual media and immersive technologies emerge, I want to work towards my vision of making people feel truly immersed in this new form of video content. Imagine, for instance, what it would be like to experience the International Space Station as if you were there, without leaving the comfort of your own home.

The Problem: To feel truly immersed in virtual reality, one needs to be able to look around freely within a virtual environment and see it from the viewpoints of one's own eyes. Immersion requires 'freedom of motion' in six degrees of freedom ('6-DoF'), so that viewers see the correct views of an environment. As viewers move their heads, the objects they see should move relative to each other, with different speeds depending on their distance to the viewer. This is called motion parallax. Viewers need to perceive correct motion parallax regardless of where they are (3 DoF) and where they are looking (+3 DoF). Currently, only computer-generated imagery (CGI) fully supports 6-DoF content with motion parallax, but it remains extremely challenging to match the visual realism of the real world with computer graphics models. Viewers therefore either lose photorealism (with CGI) or immersion (with existing VR video). To date, it is not possible to capture or view high-quality 6-DoF VR video of the real world.

My Goal: Virtual reality is a new kind of medium that requires new ways to author content. My goal is therefore to create a new form of immersive 360-degree VR video that overcomes the limitations of existing 360-degree VR video. This new form of VR content, 6-DoF VR video, will achieve unparalleled realism and immersion by providing freedom of head motion and motion parallax, a vital depth cue for the human visual system that is entirely missing from existing 360-degree VR video. Specifically, the aim of this Fellowship is to accurately and comprehensively capture real-world environments, including visual dynamics such as people and moving animals or plants, and to reproduce the captured environments and their dynamics in VR with photographic realism, correct motion parallax and overall depth perception. 6-DoF VR video is a new virtual reality capability that will be a significant step forward for overall immersion, realism and quality of experience.

My Approach: To achieve 6-DoF VR video that enables photorealistic exploration of dynamic real environments in 360-degree virtual reality, my group and I will develop novel video-based capture, 3D reconstruction and rendering techniques. We will first explore different approaches for capturing static and, more challengingly, dynamic 360-degree environments, including using 360-degree cameras and multi-camera rigs. We will next reconstruct the 3D geometry of the environments from the captured imagery by extending multi-view geometry/photogrammetry techniques to handle dynamic 360-degree environments. Extending image-based rendering to 360-degree environments will then enable 6-DoF motion within a photorealistic 360-degree environment with high visual fidelity, resulting in detailed 360-degree environments covering all possible viewing directions. We will first target 6-DoF 360-degree VR photographs (i.e. static scenes) and then extend our approach to 6-DoF VR video.
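To make the parallax cue concrete: under a standard first-order pinhole approximation (textbook geometry, not taken from the fellowship text), a sideways head translation \(t\) shifts the image of a point at depth \(z\) by approximately

\[
\delta \approx \frac{f\, t}{z} ,
\]

where \(f\) is the focal length. Nearby points (small \(z\)) sweep across the image faster than distant ones, and it is exactly this depth-dependent image motion that fixed-viewpoint 360-degree video cannot reproduce.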
Project partners: This Fellowship is supported by the following project partners in the UK and abroad. Foundry (London) is a leading developer of visual effects software for film, video and VR post-production, and is ideally suited to advise on industrial impact. REWIND (St Albans) is a cutting-edge creative VR production company that is keen to experiment with 6-DoF VR video. Reality7 (Hamburg, Germany) is a start-up working on cinematic VR video.
SAUCE will research, develop, pilot and demonstrate a set of professional tools and techniques for making content ‘smarter’, so that it is fully adaptive in a broad, unprecedented manner: adaptive to context (which facilitates re-use), to purpose (among or within industries), to the user (improving the viewing experience), and to the production environment (so that it is ‘future-proof’). The approach is based on research into light-field technology; automated classification and tagging using deep learning and semantic labeling to describe and draw inferences; and the development of tools for automated asset transformation, smart animation, storage and retrieval. These new technologies and tools will show that a vast reduction of costs and increases in efficiency are possible, facilitating the production of more content, of higher quality and creativity, for the benefit of the competitiveness of the European creative industries.

Specifically, SAUCE will research and develop:
• Methods of generating smart world-views by capturing and processing light-fields, augmenting them with semantics and making them usable for media production pipelines.
• A framework and tools for automatically classifying, validating and finding smart assets, using deep learning and semantic labeling techniques on 2D and 3D data.
• A framework and tools for the automatic transformation and adaptation of smart assets to new contexts, purposes, users and environments, and for the synthesis of new smart assets from existing ones.
• Real-time control systems for authoring animated content using smart assets, automatically synthesizing new scenes from existing ones and integrating smart assets into virtual production scenarios.
• A storage system for smart assets that combines local and cloud storage so that assets can be delivered irrespective of the user or asset location.

SAUCE will also demonstrate and promote its results, for industry approval, adoption and development by third parties.