Material acquisition using deep learning
Texture, highlights, and shading are some of the many visual cues that allow humans to perceive material appearance in pictures. Designing algorithms able to leverage these cues to recover spatially-varying bi-directional reflectance distribution functions (SVBRDFs) from a few images has challenged computer graphics researchers for decades. We explore the use of deep learning to tackle lightweight appearance capture and make sense of these visual cues. Our networks recover per-pixel normals, diffuse albedo, specular albedo, and specular roughness from as little as one picture of a flat surface lit by a hand-held flash. We propose a method whose predictions improve with the number of input pictures, reaching high-quality reconstructions with up to 10 images, a sweet spot between existing single-image and complex multi-image approaches. We introduce several innovations in training data acquisition and network design, bringing clear improvements over the state of the art in lightweight material capture.
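To make the recovered maps concrete: an SVBRDF assigns each pixel a normal, diffuse albedo, specular albedo, and roughness, which can then be shaded under a light. The sketch below renders such maps under a colocated point flash using a standard GGX microfacet model with a simplified visibility term; this is a minimal illustrative model, not a restatement of the paper's exact rendering pipeline, and the function name `render_svbrdf` is hypothetical.

```python
import numpy as np

def render_svbrdf(normals, diffuse, specular, roughness, light_dir, view_dir):
    """Shade each pixel of an SVBRDF under a single distant light.

    normals:   (H, W, 3) unit surface normals
    diffuse:   (H, W, 3) diffuse albedo
    specular:  (H, W, 3) specular albedo
    roughness: (H, W)    perceptual roughness (alpha = roughness**2)
    light_dir, view_dir: unit 3-vectors (equal for a colocated flash)
    """
    # Half vector between light and view directions
    h = light_dir + view_dir
    h = h / np.linalg.norm(h)
    n_dot_l = np.clip(normals @ light_dir, 0.0, 1.0)
    n_dot_v = np.clip(normals @ view_dir, 1e-4, 1.0)
    n_dot_h = np.clip(normals @ h, 0.0, 1.0)

    # GGX normal distribution function
    a2 = (roughness ** 2) ** 2
    d = a2 / (np.pi * (n_dot_h ** 2 * (a2 - 1.0) + 1.0) ** 2)
    # Simplified visibility term (replaces the full Smith G / denominator)
    g = 1.0 / (4.0 * np.maximum(n_dot_l, 1e-4) * n_dot_v)
    spec = specular * (d * g)[..., None]
    diff = diffuse / np.pi  # Lambertian diffuse lobe

    return (diff + spec) * n_dot_l[..., None]

# Example: a flat, half-gray surface seen and lit head-on
H, W = 4, 4
normals = np.zeros((H, W, 3)); normals[..., 2] = 1.0
diffuse = np.full((H, W, 3), 0.5)
specular = np.full((H, W, 3), 0.04)
roughness = np.full((H, W), 0.3)
flash = np.array([0.0, 0.0, 1.0])
img = render_svbrdf(normals, diffuse, specular, roughness, flash, flash)
```

A learned network in this setting plays the inverse role: given one or more flash photographs (outputs of a process like the above), it predicts the four per-pixel maps that explain them.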
BibTeX references
@InProceedings{Des19,
  author       = "Deschaintre, Valentin",
  title        = "Material acquisition using deep learning",
  booktitle    = "SIGGRAPH Asia 2019 Doctoral Consortium",
  number       = "3",
  pages        = "1-4",
  month        = "nov",
  year         = "2019",
  publisher    = "ACM",
  organization = "ACM",
  url          = "http://www-sop.inria.fr/reves/Basilic/2019/Des19"
}