Single-Image SVBRDF Capture with a Rendering-Aware Deep Network



ACM Transactions on Graphics (SIGGRAPH Conference Proceedings), Volume 37, Article 128, 15 pages - August 2018
Download the publication: Deep Material Acquisition Authors_version.pdf [31.7 MB]
Texture, highlights, and shading are some of the many visual cues that allow humans to perceive material appearance in single pictures. Yet, recovering spatially-varying bidirectional reflectance distribution functions (SVBRDFs) from a single image based on such cues has challenged researchers in computer graphics for decades. We tackle lightweight appearance capture by training a deep neural network to automatically extract and make sense of these visual cues. Once trained, our network is capable of recovering per-pixel normal, diffuse albedo, specular albedo and specular roughness from a single picture of a flat surface lit by a hand-held flash. We achieve this goal by introducing several innovations on training data acquisition and network design. For training, we leverage a large dataset of artist-created, procedural SVBRDFs which we sample and render under multiple lighting directions. We further amplify the data by material mixing to cover a wide diversity of shading effects, which allows our network to work across many material classes. Motivated by the observation that distant regions of a material sample often offer complementary visual cues, we design a network that combines an encoder-decoder convolutional track for local feature extraction with a fully-connected track for global feature extraction and propagation. Many important material effects are view-dependent, and as such ambiguous when observed in a single image. We tackle this challenge by defining the loss as a differentiable SVBRDF similarity metric that compares the renderings of the predicted maps against renderings of the ground truth from several lighting and viewing directions. Together, these novel ingredients bring clear improvement over state-of-the-art methods for single-shot capture of spatially-varying BRDFs.
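The rendering-aware loss described in the abstract can be sketched in a few lines: render the predicted maps and the ground-truth maps under the same random light and view directions, then compare the renderings. The shading model below (a diffuse term plus a Blinn-Phong-style specular lobe with a hypothetical roughness-to-exponent mapping) and the log-space L1 comparison are simplified illustrations of the idea, not the paper's exact in-network renderer:

```python
import numpy as np

def render(normals, diffuse, specular, roughness, wi, wo):
    """Shade a flat sample per pixel under a point light and view direction.

    Maps are (H, W, 3) arrays, except roughness which is (H, W, 1);
    wi and wo are unit light/view directions. Illustrative stand-in
    for the paper's differentiable renderer.
    """
    h = wi + wo
    h = h / np.linalg.norm(h)                       # half-vector
    n_dot_h = np.clip(normals @ h, 0.0, 1.0)[..., None]
    n_dot_i = np.clip(normals @ wi, 0.0, 1.0)[..., None]
    # Specular exponent sharpens as roughness decreases (assumed mapping).
    exponent = 2.0 / np.maximum(roughness, 1e-3) ** 2
    spec = specular * n_dot_h ** exponent
    return (diffuse / np.pi + spec) * n_dot_i

def rendering_loss(pred_maps, gt_maps, n_dirs=9, seed=0):
    """Mean L1 distance between log renderings of predicted and
    ground-truth SVBRDF maps over n_dirs random light/view directions."""
    rng = np.random.default_rng(seed)
    loss = 0.0
    for _ in range(n_dirs):
        # Random directions in the upper hemisphere (positive z).
        wi = rng.normal(size=3); wi[2] = abs(wi[2]); wi /= np.linalg.norm(wi)
        wo = rng.normal(size=3); wo[2] = abs(wo[2]); wo /= np.linalg.norm(wo)
        r_pred = render(*pred_maps, wi, wo)
        r_gt = render(*gt_maps, wi, wo)
        # Log compression tames the dynamic range of specular highlights.
        loss += np.mean(np.abs(np.log1p(r_pred) - np.log1p(r_gt)))
    return loss / n_dirs
```

In the paper this comparison is part of the training loss, so every operation is differentiable and gradients flow through the renderer back to the predicted maps; the numpy version above only illustrates the forward computation.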

More information

Project Webpage: contains additional results, source code and data.

Acknowledgements and Funding

We thank the reviewers for numerous suggestions on how to improve the exposition and evaluation of this work. We also thank the Optis team, V. Hourdin, A. Jouanin, M. Civita, D. Mettetal and N. Dalmasso for regular feedback and suggestions, S. Rodriguez for insightful discussions, Li et al. [2017] and Weinmann et al. [2014] for making their code and data available, and J. Riviere for help with evaluation. This work was partly funded by an ANRT CIFRE scholarship between Inria and Optis, by the Toyota Research Institute and EU H2020 project 727188 EMOTIVE, and by software and hardware donations from Adobe and Nvidia. Finally, we thank Allegorithmic and Optis for facilitating distribution of our training data and source code for non-commercial research purposes, and all the contributors of Allegorithmic Substance Share.

BibTex references

@article{deschaintre2018single,
  author       = "Deschaintre, Valentin and Aittala, Miika and Durand, Fr\'edo and Drettakis, George and Bousseau, Adrien",
  title        = "Single-Image SVBRDF Capture with a Rendering-Aware Deep Network",
  journal      = "ACM Transactions on Graphics (SIGGRAPH Conference Proceedings)",
  number       = "128",
  volume       = "37",
  pages        = "15",
  month        = "aug",
  year         = "2018",
  keywords     = "material capture, appearance capture, SVBRDF, deep learning",
  url          = ""
}
