NeRFMeshing

Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes

Marie-Julie Rakotosaona1 Fabian Manhardt1 Diego Martin Arroyo1
Michael Niemeyer1 Abhijit Kundu1 Federico Tombari1,2
1Google 2TU Munich
3DV 2024
arXiv

NeRFMeshing extracts meshes with accurate geometry and view-dependent appearance from a collection of posed images.

Abstract

With the introduction of Neural Radiance Fields (NeRFs), novel view synthesis has recently made a big leap forward. At its core, NeRF proposes that each 3D point can emit radiance, allowing view synthesis to be carried out through differentiable volumetric rendering. While neural radiance fields can accurately represent 3D scenes for image rendering, 3D meshes remain the main scene representation supported by most computer graphics and simulation pipelines, enabling tasks such as real-time rendering and physics-based simulation. Obtaining 3D meshes from neural radiance fields remains an open challenge, since NeRFs are optimized for view synthesis and do not enforce an accurate underlying geometry on the radiance field. We thus propose a novel, compact, and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach. Once the radiance field is trained, we distill the volumetric 3D representation into a Signed Surface Approximation Network, allowing easy extraction of the 3D mesh and appearance. The resulting 3D mesh is physically accurate and can be rendered in real time on an array of devices.
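The differentiable volumetric rendering mentioned above composites per-sample densities and colors along each camera ray into a pixel color (and an expected depth). A minimal NumPy sketch of the standard NeRF quadrature; the function name and the toy inputs are illustrative, not the paper's implementation:

```python
import numpy as np

def volume_render(sigmas, colors, ts):
    """Composite densities and colors along one ray (standard NeRF quadrature).

    sigmas: (N,) non-negative densities at the sample depths ts
    colors: (N, 3) RGB emitted at each sample
    ts:     (N,) strictly increasing sample depths
    """
    deltas = np.diff(ts, append=ts[-1] + 1e10)       # spacing between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)          # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # transmittance T_i
    weights = trans * alphas                         # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)    # expected color
    depth = (weights * ts).sum()                     # expected depth (a geometry cue)
    return rgb, depth, weights

# A single near-opaque sample at depth 2.0 should dominate the ray.
sigmas = np.array([0.0, 50.0, 0.0])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
rgb, depth, weights = volume_render(sigmas, colors, np.array([1.0, 2.0, 3.0]))
```

The per-sample `weights` form the depth distribution along the ray, which is exactly the signal a mesh-distillation step can tap for geometric supervision.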

TL;DR: We extract an accurate neural mesh from a pre-trained NeRF model, enabling real-time rendering and physics-based simulations.

Method Overview


We exploit the depth distribution rendered from NeRF to supervise an approximated Truncated Signed Distance Field (TSDF). Learned features are fed to a small appearance network that predicts RGB colors. The surface is extracted with marching cubes, and the appearance features are stored on it. Finally, we render in real time using only the mesh, the appearance features, and the appearance network.
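One way to read the depth-to-TSDF supervision above: given a depth rendered from NeRF along a ray, the truncated signed distance of a sample at depth t is positive in front of the surface, negative behind it, and clipped to a truncation band. A toy NumPy sketch, where the function name and truncation value are our own assumptions rather than the paper's:

```python
import numpy as np

def tsdf_from_depth(ts, rendered_depth, trunc=0.05):
    """Approximate truncated signed distance for samples along one ray.

    ts:             (N,) sample depths along the ray
    rendered_depth: scalar depth rendered from the NeRF for this ray
    trunc:          truncation band; scaled distances are clipped to [-1, 1]
    """
    signed = (rendered_depth - ts) / trunc  # >0 in front of the surface, <0 behind
    return np.clip(signed, -1.0, 1.0)

# Samples straddling a surface at depth 1.0: in front, on, and behind it.
tsdf = tsdf_from_depth(np.array([0.9, 1.0, 1.1]), rendered_depth=1.0)
```

Targets of this form, evaluated at many rays and sample depths, can supervise a signed-distance-style network whose zero level set is then meshed with marching cubes.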

Results

Physics-based simulations:

Real-time rendering:

Real-time rendering of complex unbounded scenes:

Citation

If you want to cite our work, please use:

@inproceedings{rakotosaona2023nerfmeshing,
  title={NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes},
  author={Rakotosaona, Marie-Julie and Manhardt, Fabian and Arroyo, Diego Martin and Niemeyer, Michael and Kundu, Abhijit and Tombari, Federico},
  booktitle={International Conference on 3D Vision (3DV)},
  year={2023}
}