Publication: Geosemantic Snapping for Sketch-Based Modeling

- Article in a journal -
 

Author(s)
Alex Shtof, Alexander Agathos, Yotam Gingold, Ariel Shamir, Daniel Cohen-Or

Published in
Computer Graphics Forum

Year
2013

Abstract
Modeling 3D objects from sketches is a process that requires solving several challenging problems, including segmentation, recognition, and reconstruction. Some of these tasks are harder for humans and some are harder for the machine. At the core of the problem lies the need for semantic understanding of the shape's geometry from the sketch. In this paper we propose a method to model 3D objects from sketches by utilizing humans specifically for semantic tasks that are very simple for humans and extremely difficult for the machine, while utilizing the machine for tasks that are harder for humans. The user assists recognition and segmentation by choosing and placing specific geometric primitives on the relevant parts of the sketch. The machine first snaps the primitive to the sketch by fitting its projection to the sketch lines, and then improves the model globally by inferring geosemantic constraints that link the different parts. The fitting occurs in real time, allowing the user to be only as precise as needed to have a good starting configuration for this non-convex optimization problem. We evaluate the accessibility of our approach with a user study.
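The core snapping idea — refine a roughly placed primitive so its projection fits the sketch strokes, starting from a user placement that is only "as precise as needed" — can be illustrated with a minimal sketch in Python. This is not the paper's implementation; it uses a 2D circle as a stand-in for a primitive's projected silhouette, hypothetical stroke points `pts`, and plain gradient descent on the sum of squared point-to-circle residuals:

```python
import math

def snap_circle(points, cx, cy, r, steps=1000, lr=0.05):
    """Refine a roughly placed circle (cx, cy, r) so it snaps onto
    the sketch points, by gradient descent on the sum of squared
    signed distances from each point to the circle."""
    n = len(points)
    for _ in range(steps):
        gcx = gcy = gr = 0.0
        for (px, py) in points:
            dx, dy = cx - px, cy - py
            d = math.hypot(dx, dy) or 1e-12   # distance to center
            e = d - r                         # signed residual
            gcx += 2 * e * dx / d
            gcy += 2 * e * dy / d
            gr  += -2 * e
        cx -= lr * gcx / n
        cy -= lr * gcy / n
        r  -= lr * gr / n
    return cx, cy, r

# Hypothetical sketch strokes sampled around a circle of
# center (3, 2) and radius 1.5.
pts = [(3 + 1.5 * math.cos(t / 10), 2 + 1.5 * math.sin(t / 10))
       for t in range(63)]

# A rough user placement suffices as the starting configuration;
# the fit pulls it onto the stroke.
cx, cy, r = snap_circle(pts, 2.5, 2.5, 1.0)
```

The non-convexity the abstract mentions shows up here too: a placement far from the stroke can settle in a poor local minimum, which is why the user's rough-but-sensible initial placement matters.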

BibTeX
@ARTICLE{Shtof2013GSf,
  author   = "Alex Shtof and Alexander Agathos and Yotam Gingold and Ariel Shamir and Daniel Cohen-Or",
  title    = "{Geosemantic Snapping for Sketch-Based Modeling}",
  journal  = "Computer Graphics Forum",
  year     = "2013",
  volume   = "32",
  number   = "2",
  pages    = "245--253",
  doi      = "10.1111/cgf.12044",
  url      = "http://diglib.eg.org/EG/CGF/volume32/issue2/v32i2pp245-253.pdf",
  abstract = "{Modeling 3D objects from sketches is a process that requires solving several challenging problems, including segmentation, recognition, and reconstruction. Some of these tasks are harder for humans and some are harder for the machine. At the core of the problem lies the need for semantic understanding of the shape's geometry from the sketch. In this paper we propose a method to model 3D objects from sketches by utilizing humans specifically for semantic tasks that are very simple for humans and extremely difficult for the machine, while utilizing the machine for tasks that are harder for humans. The user assists recognition and segmentation by choosing and placing specific geometric primitives on the relevant parts of the sketch. The machine first snaps the primitive to the sketch by fitting its projection to the sketch lines, and then improves the model globally by inferring geosemantic constraints that link the different parts. The fitting occurs in real time, allowing the user to be only as precise as needed to have a good starting configuration for this non-convex optimization problem. We evaluate the accessibility of our approach with a user study.}"
}

