Abstract

Creating and setting the right parameters for a virtual camera is crucial to any content-creation process. However, this is not easy, since most current camera models, including the X3D Viewpoint, define the final visualized image through a position and orientation in 3D space. Authors use authoring tools or simple interactive navigation methods (e.g. lookAt or showAll) to ease the process, but in the end they are still moving a 6-DOF (translation and rotation) camera beacon to obtain the final image. We therefore propose a new X3D camera model, the CinematographicViewpoint node, which does not force content creators to move the camera but instead lets them directly specify which objects they would like to see on the screen. We borrow established techniques from film (e.g. the rule of thirds and the line of action) for defining objects and object relations, from which the camera model automatically calculates the final transformation in space. The new camera model additionally includes a model for global visual effects (e.g. motion blur and depth of field), which allows classical film effects to be incorporated into real-time scenes. Combined, the two approaches let content creators produce visual results and camera movements that are much closer to traditional filming, with far less effort. The proposed approach also supports automatic camera movements bound to interactive content, which has not been possible before.
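To illustrate the core idea of the abstract, the following is a minimal sketch of how a declarative, screen-space constraint (here: place one target object on the left rule-of-thirds line) could be resolved into a camera transformation. The function name, its parameters, and the single-axis pan model are illustrative assumptions; they are not part of the X3D specification or the paper's actual solver, which handles multiple objects and relations.

```python
import math

def look_at_with_thirds(target, distance, fov_deg=60.0, screen_x=1.0 / 3.0):
    """Illustrative sketch (not the paper's algorithm or an X3D API):
    place the camera `distance` units in front of the target along +Z,
    then yaw it so the target projects at normalized screen x = screen_x
    (0 = left edge, 0.5 = centre, 1 = right edge)."""
    # Camera position: straight back from the target along the world Z axis.
    cam_pos = (target[0], target[1], target[2] + distance)
    # Map the desired screen coordinate to a horizontal view angle:
    # screen_x in [0, 1] becomes an offset in [-1, 1] across the view frustum.
    half_fov = math.radians(fov_deg) / 2.0
    offset = (screen_x - 0.5) * 2.0
    # Pan the camera the opposite way so the target lands off-centre.
    yaw = -math.atan(offset * math.tan(half_fov))  # radians, about the Y axis
    return cam_pos, yaw

# Frame a character's head height on the left third of the screen.
pos, yaw = look_at_with_thirds(target=(0.0, 1.6, 0.0), distance=5.0)
```

A full solver in the spirit of the CinematographicViewpoint node would treat several such constraints (object placement, line of action, shot size) simultaneously and output a complete 6-DOF transformation; the sketch only shows how one screen-space rule already determines part of that transformation.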
