Procedural Modeling of a Building from a Single Image
Computer Graphics Forum (Proceedings of the Eurographics conference) - 2018
Virtual cities are in high demand for computer games, movies, and urban planning, but creating the numerous 3D building models they require is very time consuming. Procedural modeling has become popular in recent years as a way to overcome this issue, yet writing a grammar that produces a desired output is difficult and time consuming even for expert users. In this paper, we present an interactive tool that automatically generates such a grammar from a single image of a building. The user selects a photograph and highlights the silhouette of the target building as input to our method. Our pipeline then automatically generates the building components, from the large-scale building mass down to fine-scale window and door geometry. Each stage of the pipeline combines convolutional neural networks (CNNs) and optimization to select and parameterize procedural grammars that reproduce the building elements visible in the picture. In the first stage, our method jointly estimates the camera parameters and the building mass shape. Once known, the building mass enables the rectification of the facades, which are given as input to the second stage that recovers the facade layout. This layout allows us to extract individual windows and doors, which are subsequently fed to the last stage of the pipeline that selects procedural grammars for windows and doors. Finally, the grammars are combined to generate a complete procedural building as output. We devise a common methodology to make each stage of this pipeline tractable: we simplify the input image to match the visual appearance of synthetic training data, and we use optimization to refine the parameters estimated by the CNNs. We used our method to generate a variety of procedural models of buildings from existing photographs.
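To make the shared "simplify the input, predict with a CNN, refine by optimization" methodology concrete, the Python sketch below shows how a single stage of such a pipeline could be structured. It is a minimal illustration under our own assumptions: the names (run_stage, GrammarEstimate, and the dummy callables) are hypothetical placeholders and do not come from the authors' implementation.

# Hypothetical sketch of the per-stage methodology described in the abstract.
# None of these names are taken from the authors' code.
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class GrammarEstimate:
    grammar_id: int            # which procedural grammar snippet was selected
    params: Tuple[float, ...]  # its parameters after refinement

def run_stage(image,
              simplify: Callable,
              cnn_predict: Callable,
              refine: Callable) -> GrammarEstimate:
    """Shared stage structure: simplify, CNN prediction, optimization refinement."""
    simplified = simplify(image)                       # e.g. silhouette or edge map
    grammar_id, init_params = cnn_predict(simplified)  # grammar selection + regression
    params = refine(grammar_id, init_params, simplified)  # e.g. local parameter search
    return GrammarEstimate(grammar_id, params)

if __name__ == "__main__":
    # Dummy callables stand in for real models, just to make the sketch runnable.
    estimate = run_stage(
        image="photo_with_silhouette",
        simplify=lambda img: img,                     # placeholder image abstraction
        cnn_predict=lambda s: (2, (10.0, 4.0, 3.0)),  # grammar id + initial parameters
        refine=lambda gid, p, s: tuple(round(x, 1) for x in p),
    )
    print(estimate)  # GrammarEstimate(grammar_id=2, params=(10.0, 4.0, 3.0))

In the full pipeline described above, this pattern would be applied three times: for the building mass (jointly with the camera parameters), for the facade layout on the rectified facades, and for the window and door grammars, before the selected grammars are combined into the final procedural building.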
See also
See also our supplementary materials for details on our shape grammars, as well as additional intermediate results.
We would like to thank the authors of DeepFacade [Liu et al. 2017] for helping us generate facade parsing results, and the anonymous reviewers for their constructive suggestions. This research was partially funded by the ERC starting grant D3 (ERC-2016-STG 714221), NSF CBET 1250232 and IIS 1302172, and by research and software donations from Adobe and Google.
BibTeX reference
@Article{NBG18,
  author  = "Nishida, Gen and Bousseau, Adrien and Aliaga, Daniel G.",
  title   = "Procedural Modeling of a Building from a Single Image",
  journal = "Computer Graphics Forum (Proceedings of the Eurographics conference)",
  year    = "2018",
  url     = "http://www-sop.inria.fr/reves/Basilic/2018/NBG18"
}