Improving NeRF Quality by Progressive Camera Placement for Unrestricted Navigation in Complex Environments
Neural Radiance Fields, or NeRFs, have drastically improved novel view synthesis and 3D reconstruction for rendering. NeRFs
achieve impressive results on object-centric reconstructions, but the quality of novel view synthesis with free-viewpoint navigation
in complex environments (rooms, houses, etc.) is often problematic. While algorithmic improvements play an important role
in the resulting quality of novel view synthesis, in this work we show that, because optimizing a NeRF is inherently a data-driven
process, good-quality data play a fundamental role in the final quality of the reconstruction. As a consequence, it is critical
to choose the data samples (in this case the cameras) in a way that eventually allows the optimization to converge to a
solution that supports free-viewpoint navigation with good quality. Our main contribution is an algorithm that efficiently proposes
new camera placements that improve visual quality with minimal assumptions. Our solution can be used with any NeRF model
and outperforms baselines and similar work.
Acknowledgements and Funding
This research was funded by the ERC Advanced grant FUNGRAPH No 788065 http://fungraph.inria.fr. The authors are grateful to the OPAL infrastructure from Université Côte d'Azur for providing resources and support, and to GENCI-IDRIS for HPC resources (Grant 2022-AD011013409). The authors would also like to thank the anonymous reviewers for their valuable feedback.
BibTex references
@InProceedings{KD23,
  author    = "Kopanas, Georgios and Drettakis, George",
  title     = "Improving NeRF Quality by Progressive Camera Placement for Unrestricted Navigation in Complex Environments",
  booktitle = "Vision, Modeling, and Visualization",
  year      = "2023",
  url       = "http://www-sop.inria.fr/reves/Basilic/2023/KD23"
}