3D Sketching using Multi-View Deep Volumetric Prediction



Proceedings of the ACM on Computer Graphics and Interactive Techniques, Volume 1, Number 21 - May 2018
Download the publication: deep_sketch.pdf [7.3 MB]
Sketch-based modeling strives to bring the ease and immediacy of drawing to the 3D world. However, while drawings are easy for humans to create, they are very challenging for computers to interpret due to their sparsity and ambiguity. We propose a data-driven approach that tackles this challenge by learning to reconstruct 3D shapes from one or more drawings. At the core of our approach is a deep convolutional neural network (CNN) that predicts occupancy of a voxel grid from a line drawing. This CNN provides an initial 3D reconstruction as soon as the user completes a single drawing of the desired shape. We complement this single-view network with an updater CNN that refines an existing prediction given a new drawing of the shape created from a novel viewpoint. A key advantage of our approach is that we can apply the updater iteratively to fuse information from an arbitrary number of viewpoints, without requiring explicit stroke correspondences between the drawings. We train both CNNs by rendering synthetic contour drawings from hand-modeled shape collections as well as from procedurally-generated abstract shapes. Finally, we integrate our CNNs in an interactive modeling system that allows users to seamlessly draw an object, rotate it to see its 3D reconstruction, and refine it by re-drawing from another vantage point using the 3D reconstruction as guidance.
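The iterative fusion described above can be sketched as a simple loop: a single-view network produces an initial voxel occupancy grid from the first drawing, and the updater network is then applied once per additional viewpoint. The following is a minimal, runnable illustration of that control flow only; `single_view_cnn` and `updater_cnn` are hypothetical toy stand-ins (the paper's actual networks are trained convolutional encoder-decoders), and the grid size is an assumption.

```python
import numpy as np

GRID = 32  # assumed voxel grid resolution for this toy example

def single_view_cnn(drawing: np.ndarray) -> np.ndarray:
    """Toy stand-in for the single-view CNN: maps a line drawing (H x W)
    to per-voxel occupancy probabilities. Returns a uniform grid so the
    fusion loop below is runnable; the real network is a trained CNN."""
    return np.full((GRID, GRID, GRID), 0.5)

def updater_cnn(prediction: np.ndarray, drawing: np.ndarray) -> np.ndarray:
    """Toy stand-in for the updater CNN: refines an existing occupancy
    prediction given a new drawing from another viewpoint. This version
    just blends the prediction with the drawing's mean ink density."""
    ink = float(drawing.mean())
    return np.clip(0.5 * prediction + 0.5 * ink, 0.0, 1.0)

def reconstruct(drawings: list) -> np.ndarray:
    """Iterative multi-view fusion: initialize from the first drawing,
    then apply the updater once per additional viewpoint. Note that no
    stroke correspondences between the drawings are required."""
    prediction = single_view_cnn(drawings[0])
    for drawing in drawings[1:]:
        prediction = updater_cnn(prediction, drawing)
    return prediction

views = [np.random.rand(64, 64) for _ in range(3)]
occupancy = reconstruct(views)
print(occupancy.shape)  # (32, 32, 32)
```

Because the updater takes only the current prediction and one new drawing, it can be applied any number of times, which is what lets the interactive system accept an arbitrary sequence of viewpoints.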


See also


See also the project webpage with code and data.

Acknowledgements and Funding

Many thanks to Yulia Gryaditskaya for sketching several of our results, and for her help on the renderings and video. This work was supported in part by the ERC starting grant D3 (ERC-2016-STG 714221), the Intel/NSF VEC award IIS-1539099, FBF grant 2018-0017, ANR project EnHerit (ANR-17-CE23-0008), research and software donations from Adobe, and by hardware donations from NVIDIA.

BibTex references

@article{Delanoy2018,
  author       = "Delanoy, Johanna and Aubry, Mathieu and Isola, Phillip and Efros, Alexei and Bousseau, Adrien",
  title        = "3D Sketching using Multi-View Deep Volumetric Prediction",
  journal      = "Proceedings of the ACM on Computer Graphics and Interactive Techniques",
  number       = "21",
  volume       = "1",
  month        = "may",
  year         = "2018",
  url          = ""
}
