The REVES team presents its research at SIGGRAPH
The 41st edition of the International SIGGRAPH conference brings together more than 20,000 professionals from around the world in Vancouver from 10 to 14 August for lectures and demonstrations on films and special effects, computer graphics, interactive techniques and video games. On this occasion, the REVES team from the Inria Sophia Antipolis research centre will present its approaches to modeling in computer graphics.
True2Form: 3D modeling from design sketches
True2Form allows designers to create 3D models from a single design sketch. - @inria-reves
Product design, from the inception of an idea to its realization as a 3D concept, is extensively guided by free-hand sketches. True2Form is a sketch-based modeling system that reconstructs 3D curves from typical design sketches.
Our approach combines design and perceptual principles to reconstruct free-form, piecewise-smooth models of man-made objects from a single drawing. In particular, we note that designers favor viewpoints that maximally reveal 3D shape information, and strategically sketch descriptive curves that convey intrinsic shape properties, such as curvature, symmetry, or parallelism. Our algorithm progressively detects and enforces applicable properties, accounting for their global impact on an evolving 3D curve network. Balancing regularity enforcement against sketch fidelity at each step allows us to correct for the inaccuracy inherent in free-hand sketching. The phrase “true to form”, meaning “exactly as expected”, signifies our attempt to reproduce the 3D “form” viewers expect from a 2D sketch.
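The balance between regularity and sketch fidelity can be pictured as a least-squares trade-off. The toy sketch below is not the True2Form algorithm itself; it only illustrates, with an assumed smoothness prior and a made-up `regularize_curve` helper, how a regularity term can pull a noisy free-hand stroke toward the intended clean shape while a fidelity term keeps it close to what was drawn:

```python
import numpy as np

# Toy illustration of the fidelity-vs-regularity trade-off (not the
# actual True2Form algorithm): minimize
#   ||x - sketch||^2 + lam * ||D2 x||^2
# where D2 is the second-difference operator, so large lam favors
# regular (here: smooth) curves and small lam favors the raw sketch.

def regularize_curve(sketch, lam):
    """Closed-form minimizer of the fidelity + regularity objective."""
    n = len(sketch)
    # Second-difference operator D2, shape (n-2, n)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, sketch)

rng = np.random.default_rng(0)
true_line = np.linspace(0.0, 1.0, 50)                # intended straight stroke
sketch = true_line + 0.05 * rng.standard_normal(50)  # free-hand inaccuracy

fitted = regularize_curve(sketch, lam=100.0)
```

With a straight intended stroke, the smoothness prior has zero cost on the true answer, so the regularized curve lands closer to it than the raw sketch does; True2Form plays the same game with richer priors (symmetry, parallelism) on a 3D curve network.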
How to edit a light field?
A light field camera captures the scene from multiple closely spaced viewpoints, allowing the viewpoint and the focus to be changed after capture. - @inria-reves
Light field cameras (such as Raytrix or Lytro) are rapidly gaining popularity for their ability to change viewpoint and focus in a picture after the capture. As the number of captured and shared light fields increases, the need for editing tools arises as well. However, as opposed to the well-established editing of 2D images, user interfaces to edit light fields remain largely unexplored.
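The after-the-fact refocusing these cameras offer is classically done by "shift and add": average the sub-aperture views after shifting each one in proportion to its position in the aperture plane, so that a chosen scene plane lines up sharply. The sketch below is a minimal integer-pixel version of that standard technique (not the editing interface studied here); the light field layout and the `refocus` helper are assumptions for illustration:

```python
import numpy as np

# Shift-and-add refocusing sketch. lf is a 4D array of sub-aperture
# images indexed (u, v, row, col); the "slope" picks which scene depth
# ends up in focus.

def refocus(lf, slope):
    """Average sub-aperture views, each shifted by slope * aperture offset."""
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(slope * (u - cu)))
            dv = int(round(slope * (v - cv)))
            # Integer-pixel shift via np.roll keeps the sketch short;
            # a real implementation would interpolate sub-pixel shifts.
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy check: a single bright point whose apparent position shifts by
# one pixel per aperture step (parallax). Refocusing with the matching
# slope re-aligns the copies; the wrong slope leaves them spread out.
lf = np.zeros((3, 3, 9, 9))
for u in range(3):
    for v in range(3):
        lf[u, v, 4 + (u - 1), 4 + (v - 1)] = 1.0

sharp = refocus(lf, slope=-1.0)   # matching slope: energy concentrates
blurry = refocus(lf, slope=0.0)   # misfocused: energy stays spread
```

Changing `slope` moves the in-focus plane through the scene, which is exactly the "focus after the capture" capability mentioned above.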
We present a thorough study to evaluate different light field editing interfaces, tools and workflows from a user perspective. This is of special relevance given the multidimensional nature of light fields, which may make common image editing tasks complex in light field space. We additionally investigate the potential benefits of using depth information when editing, and the limitations imposed by imperfect depth reconstruction using current techniques. Our study shows that users can edit light fields with our tested interface and tools, even in the presence of imperfect depth. They follow different workflows depending on the task at hand, mostly relying on a combination of different depth cues.