NExF: Learning Neural Exposure Fields for View Synthesis

Google
Conference on Neural Information Processing Systems (NeurIPS) 2025

NExF learns optimal exposure for every 3D point to create consistent, photorealistic views of scenes with challenging high-dynamic-range lighting.

Abstract

Recent advances in neural scene representations have led to unprecedented quality in 3D reconstruction and view synthesis. Despite achieving high-quality results on common benchmarks with curated data, outputs often degrade for captures with per-image variations such as strong exposure changes, which are present, e.g., in most scenes with both indoor and outdoor areas or in rooms with windows. In this paper, we introduce Neural Exposure Fields (NExF), a novel technique for robustly reconstructing 3D scenes with high quality and 3D-consistent appearance from challenging real-world captures. At its core, we propose to learn a neural field that predicts an optimal exposure value per 3D point, enabling us to optimize exposure jointly with the neural scene representation. While capture devices such as cameras select an optimal exposure per image or pixel, we generalize this concept and perform the optimization in 3D instead. This enables accurate view synthesis in high-dynamic-range scenarios, bypassing the need for post-processing steps or multi-exposure captures. Our contributions include a novel neural representation for exposure prediction, a system for jointly optimizing the scene representation and the exposure field via a novel neural conditioning mechanism, and demonstrated superior performance on challenging real-world data. Our approach trains faster than prior work and achieves state-of-the-art results on several benchmarks, improving over the best-performing baselines by more than 55%.

Method Overview

Given images with inconsistent exposure, our method learns a 3D map of the ideal exposure for the entire scene. During rendering, it uses this map instead of the original camera settings to produce well-exposed and consistently lit views from any viewpoint.
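To make the core idea concrete, below is a minimal sketch of such an exposure field as a small PyTorch MLP mapping a 3D point to a scalar exposure value. The depth, width, and absence of a positional encoding are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class ExposureField(nn.Module):
    """Minimal sketch: an MLP mapping a 3D point to a scalar
    (log-)exposure value. Architecture details are assumptions,
    not the paper's exact design."""

    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # one exposure value per 3D point
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (..., 3) world-space positions -> (..., 1) exposure
        return self.mlp(points)

Because the field is a smooth function of position, nearby points receive similar exposures, which is what makes the rendered exposure 3D-consistent across viewpoints.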

Baseline Comparison

Kitchen Scene (comparison sliders): Ignore Exposure · HDRNeRF · GLO Embeddings · NExF (Ours)

Livingroom Scene (comparison sliders): Ignore Exposure · HDRNeRF · GLO Embeddings · NExF (Ours)

While ignoring exposure leads to degenerate view synthesis results, HDRNeRF and GLO Embeddings produce better reconstructions but still contain over- and underexposed scene parts. In contrast, our model produces well-exposed colors across the entire scene. To ensure a fair comparison, all methods use the same ZipNeRF backbone.

Neural Exposure Field Visualization

Instead of a single exposure per image, we optimize a 3D neural exposure field that predicts the optimal exposure at every 3D point, yielding well-exposed colors for all scene parts (see, e.g., the short exposure (dark) predicted for outdoor areas and the longer exposure (white) for indoor areas above).
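As a sketch of how a per-point exposure could enter volume rendering, the snippet below composites the field's predictions along each ray with the standard rendering weights and applies the result as an EV-style gain. The analytic 2**EV tone model is an assumption for illustration; the paper instead conditions the color prediction on exposure through a learned neural conditioning mechanism.

def composite_exposure(weights, point_colors, point_exposures):
    """Composite per-point exposure along each ray with the same
    volume-rendering weights as color, then apply it as an EV-style gain.

    weights:         (num_rays, num_samples)
    point_colors:    (num_rays, num_samples, 3)
    point_exposures: (num_rays, num_samples, 1), log2 exposure from the field
    """
    rgb = (weights[..., None] * point_colors).sum(dim=-2)    # (num_rays, 3)
    ev = (weights[..., None] * point_exposures).sum(dim=-2)  # (num_rays, 1)
    # Assumed tone model: scale by 2**EV and clip to [0, 1]; the paper
    # uses a neural conditioning mechanism rather than a fixed gain.
    return (rgb * torch.exp2(ev)).clamp(0.0, 1.0)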

Exposure-Conditioned View Synthesis

Our model further allows providing an input exposure at test time, and it faithfully adheres to the provided value for both in-distribution and out-of-distribution exposures.
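A possible test-time interface, reusing the sketches above: render either with the learned per-point exposure or with a user-supplied exposure value. The model.scene.query call and all attribute names are hypothetical placeholders, not a released API.

@torch.no_grad()
def render_view(model, rays, exposure_override=None):
    """Hypothetical test-time rendering: `model` bundles a scene
    representation and an ExposureField."""
    # Sample points along the rays and query the scene representation.
    points, weights, colors = model.scene.query(rays)
    if exposure_override is None:
        ev = model.exposure_field(points)  # learned per-point exposure
    else:
        # Condition every sample on the user-provided exposure value.
        ev = torch.full_like(points[..., :1], exposure_override)
    return composite_exposure(weights, colors, ev)

# img_auto   = render_view(model, rays)                          # learned exposure
# img_darker = render_view(model, rays, exposure_override=-2.0)  # -2 EV
# img_bright = render_view(model, rays, exposure_override=+2.0)  # +2 EV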

BibTeX

@inproceedings{niemeyer2025nexf,
  author    = {Niemeyer, Michael and Manhardt, Fabian and Rakotosaona, Marie-Julie and Oechsle, Michael and Tsalicoglou, Christina and Tateno, Keisuke and Barron, Jonathan T. and Tombari, Federico},
  title     = {Learning Neural Exposure Fields for View Synthesis},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2025},
}

Acknowledgements

We would like to thank Peter Zhizhin, Peter Hedman, and Daniel Duckworth for fruitful discussions and advice, and Cengiz Oztireli for reviewing the draft. The results shown above are from the ZipNeRF and HDRNeRF datasets. The website is built on top of the Nerfies template and uses an image comparison slider.