We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images. To model the illumination of a scene, existing inverse rendering works either ignore indirect illumination entirely or model it with coarse approximations, leading to sub-optimal estimates of the scene's illumination, geometry, and materials.
In this work, we propose a physics-based illumination model that first locates surface points through an efficient refined sphere tracing algorithm, then explicitly traces the indirect lights arriving at each surface point based on the reflection direction, and estimates each identified indirect light with an efficient neural network. Moreover, inspired by differentiable irradiance in computer graphics, we apply the Leibniz integral rule to resolve the non-differentiability in the proposed illumination model caused by boundary lights. As a result, the proposed differentiable illumination model can be learned end-to-end together with geometry and material estimation. As a by-product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method performs favorably against existing inverse rendering methods on novel view synthesis and inverse rendering.
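The surface-point location step builds on sphere tracing against a signed distance field. The following is a minimal sketch of plain sphere tracing, not the paper's refined variant: the ray advances by the SDF value, which is the largest step guaranteed not to cross the surface.

```python
import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=128, eps=1e-4, t_max=10.0):
    """March along the ray origin + t * direction, stepping by the signed
    distance, until the surface (|sdf| < eps) is reached or the ray escapes."""
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if abs(d) < eps:
            return p, t  # hit: p lies on the surface
        t += d           # the SDF value is a safe step size
        if t > t_max:
            break
    return None, t       # miss

# Example: unit-sphere SDF, ray shot from z = -3 toward the origin
unit_sphere = lambda p: np.linalg.norm(p) - 1.0
hit, t = sphere_trace(unit_sphere, np.array([0.0, 0.0, -3.0]),
                      np.array([0.0, 0.0, 1.0]))
```

Here the ray hits the sphere at depth t = 2, i.e. at the point (0, 0, -1). The paper's refined algorithm and the per-point neural indirect-light estimator sit on top of this basic loop.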
Our method predicts consistent roughness and realistic albedo by explicitly tracing and estimating indirect lights based on occlusion and reflectance. The trained SDF MLP also predicts reasonable normal maps. Given the estimated albedo, roughness, and geometry, we can render images from arbitrary views with the rendering equation.
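To make the last step concrete, here is a minimal Monte Carlo estimate of the rendering equation for a purely diffuse (Lambertian) surface; the paper's full BRDF also uses roughness and a specular term, which this sketch omits.

```python
import numpy as np

def render_lambertian(albedo, normal, incident_dirs, incident_radiance):
    """Monte Carlo estimate of L_o = ∫ (albedo/pi) * L_i(w) * max(0, n·w) dw,
    assuming incident_dirs are sampled uniformly on the upper hemisphere
    (solid angle 2*pi, so the estimator multiplies the mean by 2*pi)."""
    cosines = np.clip(incident_dirs @ normal, 0.0, None)
    integrand = incident_radiance * cosines
    return (albedo / np.pi) * integrand.mean() * 2.0 * np.pi

# Sanity check: under constant unit radiance, a Lambertian surface
# reflects exactly its albedo.
rng = np.random.default_rng(0)
n_samples = 100_000
z = rng.random(n_samples)                       # uniform hemisphere: z ~ U[0,1]
phi = rng.random(n_samples) * 2.0 * np.pi
r = np.sqrt(1.0 - z**2)
dirs = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=-1)
L_out = render_lambertian(0.5, np.array([0.0, 0.0, 1.0]),
                          dirs, np.ones(n_samples))
```

With enough samples, `L_out` converges to the albedo (0.5), because the cosine integral over the hemisphere equals pi and cancels the 1/pi BRDF normalization.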
We can relight scenes by replacing the environment illumination with Spherical Gaussians (SGs).
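An SG environment represents radiance as a sum of lobes, each defined by an axis, a sharpness, and an amplitude. A minimal sketch of evaluating such a mixture (the function names here are illustrative, not the paper's API):

```python
import numpy as np

def eval_sg(direction, lobe_axis, sharpness, amplitude):
    """One spherical Gaussian lobe: G(v) = a * exp(lambda * (v·axis - 1)).
    The value peaks at `amplitude` when `direction` equals `lobe_axis`."""
    return amplitude * np.exp(sharpness * (direction @ lobe_axis - 1.0))

def eval_sg_mixture(direction, lobes):
    """Environment radiance along `direction` as a sum of SG lobes,
    given as (axis, sharpness, amplitude) triples."""
    return sum(eval_sg(direction, ax, s, a) for ax, s, a in lobes)
```

Relighting then amounts to swapping the learned environment for a new set of lobes and re-evaluating the rendering equation with the estimated geometry and materials unchanged.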
We can edit the estimated albedo and re-render the objects. Some surfaces, such as the air balloons and jugs, are not fully replaced because we select the edited region with a manually set albedo threshold.
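The threshold-based selection described above can be sketched as a simple color-distance mask; this is an assumed reconstruction of the editing step, not the paper's exact procedure.

```python
import numpy as np

def edit_albedo(albedo, target_color, new_color, threshold=0.2):
    """Replace albedo pixels whose RGB distance to `target_color` falls
    below `threshold` with `new_color`; other pixels are left untouched.
    Pixels of the intended surface that drift outside the threshold are
    missed, which explains partially replaced regions."""
    dist = np.linalg.norm(albedo - target_color, axis=-1)
    mask = dist < threshold
    out = albedo.copy()
    out[mask] = new_color
    return out, mask
```

A tighter threshold misses more of the target surface; a looser one bleeds the edit into neighboring materials.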
We can also edit the Fresnel coefficient to visualize how scenes would look if they had mirror surfaces.
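The Fresnel term is commonly modeled with Schlick's approximation, where the base reflectance F0 controls how mirror-like a surface appears; pushing F0 toward 1 makes it reflective at all viewing angles. A minimal sketch:

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation: F = F0 + (1 - F0) * (1 - cos_theta)^5.
    cos_theta is the cosine between the half-vector and the view direction;
    f0 is the reflectance at normal incidence (f0 -> 1 gives a mirror)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```

At normal incidence (cos_theta = 1) the term reduces to F0, and at grazing angles (cos_theta = 0) it rises to 1 for any F0, which is why even dielectrics look shiny edge-on.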
We gradually decrease the roughness to visualize the transition from a matte surface to a glossy one. We choose the air balloons here because the visual effect is most perceptible on them.
@inproceedings{deng2024dip,
author = {Deng, Youming and Li, Xueting and Liu, Sifei and Yang, Ming-Hsuan},
title = {Physics-based Indirect Illumination for Inverse Rendering},
booktitle = {3DV},
year = {2024},
}