Abstract
Rendering tasks have become easier thanks to the many 3D graphics cards now on the market, but automatic digitizing of 3D objects remains the key challenge for today's 3D computer graphics. The goal of this paper is to present research avenues towards direct 3D digitizing with multi-view cameras, so that designing 3D objects becomes as simple as 2D scanning.

Recent advances in low-level vision analysis have produced interesting results on automatic neurofocalisation and perceptual grouping, largely thanks to the research of Professor Burnod and his team on neuro-visual hardware models. These techniques are based on multiscale hypercomplex filters such as Gaussian and Laplacian derivatives, and can automatically detect highly informative points or zones in natural scenes, such as vertices, vectors, lips, eyes, and mouths. This information can serve as keypoints for spline control points. These image analysis tools produce neurofocalisation outputs: they simulate the low-level attentional behaviour of the neuro-visual system in the brain. Neurons can be considered functionally as hardware filters, since they compute sums of products. Neuro-visual simulations demonstrate, for example, that neuron outputs directly yield matrix transform coefficients and edge transitions in the brain. One could therefore consider a new kind of filter-based data structure in place of splines/polygons. Such a database could bridge the gap between 3D objects and image processing elements. (In MPEG4/SNHC the corresponding term is VOP, for video object plane: it is the first time that an image element other than a contour, edge, etc., is treated as an object.) Some demonstrations and hardware simulations will be presented. Moreover, these results can be implemented in hardware: the cortical column as defined by Professor Burnod is a set of complex filters which can be simulated and integrated just like classic digital filters.

Such information can be used as input to new modelling systems that design 3D objects without an interactive graphical interface via software modellers. At present, only interactive US commercial products are available. These products require an operator to interactively enter vertex and polygon positions into the system. These tasks are long and tedious, though already somewhat easier than traditional modelling systems, which require a complete construction from mathematical shapes or adaptive meshing. The real take-off for 3D graphics will begin when 3D digitizing is as simple as using a camera or a scanner. Some NASA systems are beginning to automate this digitizing task. Such techniques could enhance classical input design for large sets of objects. Moreover, it would be possible to define new data structures based on generalized multiscale filters rather than splines/polygons, creating a new direct bridge between image analysis and synthesis and enabling hybrid mixed scenes (as in the MPEG4-SNHC goals). Recent simulation models of the neural visual cortex have been developed to merge 3D image and object analysis with 3D synthesis, for ultra-high image compression (MPEG4) and as an analysis research environment.
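As a rough illustration of the multiscale Gaussian/Laplacian filtering the abstract refers to, the sketch below detects keypoint candidates as extrema of the Laplacian-of-Gaussian response across several scales. The function name log_keypoints, the scale list, and the threshold are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: multiscale keypoint detection with Laplacian-of-Gaussian
# filters. Assumed helper, not the paper's method.
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def log_keypoints(image, sigmas=(1.0, 2.0, 4.0, 8.0), threshold=0.05):
    # Scale-normalized absolute LoG response, stacked into a (scale, y, x) cube.
    responses = np.stack(
        [s**2 * np.abs(gaussian_laplace(image.astype(float), s)) for s in sigmas]
    )
    # A keypoint is a local maximum in both space and scale whose response
    # also clears a fraction of the global maximum.
    local_max = responses == maximum_filter(responses, size=3)
    strong = responses > threshold * responses.max()
    scale_idx, ys, xs = np.nonzero(local_max & strong)
    return [(int(x), int(y), sigmas[s]) for s, y, x in zip(scale_idx, ys, xs)]
```

The returned (x, y, sigma) triples are exactly the kind of "highly informative points" that could seed spline control points in the pipeline the abstract envisions.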
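The claim that a cortical column is a bank of filters whose neurons compute sums of products can likewise be sketched in a few lines. The Gabor kernels and the column_response function below are standard simple-cell modelling assumptions chosen for illustration, not drawn from Professor Burnod's model.

```python
# Minimal sketch: a "column" as a bank of oriented filters; each neuron
# output is one sum of products. Illustrative assumption only.
import numpy as np

def gabor_kernel(size, sigma, theta, wavelength):
    # Oriented Gabor kernel: a classic model of a simple-cell receptive field.
    coords = np.arange(size) - (size - 1) / 2.0
    y, x = np.meshgrid(coords, coords, indexing="ij")
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def column_response(patch, n_orientations=4):
    # Each "neuron" output is a single sum of products (a dot product of the
    # patch with its kernel); the column returns one response per orientation.
    size = patch.shape[0]
    thetas = [k * np.pi / n_orientations for k in range(n_orientations)]
    return np.array([
        np.sum(patch * gabor_kernel(size, size / 4.0, t, size / 2.0))
        for t in thetas
    ])
```

Since each response is just a multiply-accumulate over the patch, such a column maps directly onto classic digital filter hardware, which is the point the abstract makes about hardware implementability.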