Combining Voxel and Normal Predictions for Multi-View 3D Sketching
Recent works on data-driven sketch-based modeling use either voxel grids or normal/depth maps as geometric representations compatible with convolutional neural networks. While voxel grids can represent complete objects, including parts not visible in the sketches, their memory consumption restricts them to low-resolution predictions. In contrast, a single normal or depth map can capture fine details, but multiple maps from different viewpoints need to be predicted and fused to produce a closed surface. We propose to combine these two representations to address their respective shortcomings in the context of a multi-view sketch-based modeling system. Our method predicts a voxel grid common to all the input sketches, along with one normal map per sketch. We then use the voxel grid as a support for normal map fusion by optimizing its extracted surface such that it is consistent with the re-projected normals while remaining as piecewise-smooth as possible overall. We compare our method with a recent voxel prediction system, demonstrating improved recovery of sharp features over a variety of man-made objects.
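To give a concrete feel for the fusion step, the following is a minimal, self-contained sketch of the kind of normal-driven surface optimization involved: it fits a single-view height field so that its finite-difference normals match given target normals, with a Laplacian term as a simple smoothness proxy. This is a toy analogue written in PyTorch for illustration only, not the paper's mesh-based, multi-view optimization or its piecewise-smooth regularizer; the function names (normals_from_height, laplacian, fit_height_field) are hypothetical.

import torch

def normals_from_height(z):
    # Finite-difference surface normals of a height field z of shape [H, W].
    dzdx = z[:, 1:] - z[:, :-1]            # [H, W-1]
    dzdy = z[1:, :] - z[:-1, :]            # [H-1, W]
    gx = dzdx[:-1, :]                      # crop to a common [H-1, W-1] grid
    gy = dzdy[:, :-1]
    n = torch.stack([-gx, -gy, torch.ones_like(gx)], dim=-1)
    return n / n.norm(dim=-1, keepdim=True)

def laplacian(z):
    # Discrete Laplacian, used here as a simple smoothness regularizer.
    return (z[1:-1, 2:] + z[1:-1, :-2] + z[2:, 1:-1] + z[:-2, 1:-1]
            - 4.0 * z[1:-1, 1:-1])

def fit_height_field(target_normals, lam=0.1, iters=500):
    # Optimize heights so that their normals match target_normals [H-1, W-1, 3]
    # while keeping the surface smooth.
    H, W = target_normals.shape[0] + 1, target_normals.shape[1] + 1
    z = torch.zeros(H, W, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.05)
    for _ in range(iters):
        opt.zero_grad()
        data = ((normals_from_height(z) - target_normals) ** 2).sum()
        smooth = (laplacian(z) ** 2).sum()
        (data + lam * smooth).backward()
        opt.step()
    return z.detach()

# Toy usage: recover a tilted plane (up to a constant offset) from its normals.
gt = torch.linspace(0, 1, 33)[None, :].repeat(33, 1)
target = normals_from_height(gt)
z = fit_height_field(target)

In the actual system, the surface being optimized is extracted from the predicted voxel grid and the target normals come from re-projecting the per-sketch normal maps into a common frame; the height-field setup above only illustrates the normal-consistency plus smoothness trade-off.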
Acknowledgements and Funding
This work was supported in part by the ERC starting grant D3 (ERC-2016-STG 714221), the CoMeDiC research grant (ANR-15-CE40-0006), research and software donations from Adobe, and hardware donations from NVIDIA. The authors are grateful to the Inria Sophia Antipolis - Méditerranée "Nef" computation cluster for providing resources and support.
BibTeX references
@Article{DCLB19,
  author  = "Delanoy, Johanna and Coeurjolly, David and Lachaud, Jacques-Olivier and Bousseau, Adrien",
  title   = "Combining Voxel and Normal Predictions for Multi-View 3D Sketching",
  journal = "Computers \& Graphics",
  volume  = "82",
  pages   = "65-72",
  month   = "Aug",
  year    = "2019",
  url     = "http://www-sop.inria.fr/reves/Basilic/2019/DCLB19"
}