Learning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing

Jingsen Zhu1, Fujun Luan2, Yuchi Huo1,3, Zihao Lin1, Zhihua Zhong1, Dianbing Xi1, Jiaxiang Zheng4, Rui Tang4, Rui Wang1, Hujun Bao1
1State Key Lab of CAD&CG, Zhejiang University, 2Adobe Research, 3Zhejiang Lab, 4KooLab, Manycore

SIGGRAPH Asia 2022 (Conference Proceedings)

We present a learning-based approach for inverse rendering of complex indoor scenes with differentiable Monte Carlo raytracing. Our method takes a single RGB image of an indoor scene as input and automatically infers its underlying surface reflectance, geometry, and spatially-varying illumination. This enables photorealistic editing of the scene, such as inserting multiple complex virtual objects and faithfully editing surface materials under global illumination.

Abstract

Indoor scenes typically exhibit complex, spatially-varying appearance from global illumination, making inverse rendering a challenging ill-posed problem. This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo raytracing with importance sampling. The framework takes a single image as input to jointly recover the underlying geometry, spatially-varying lighting, and photorealistic materials. Specifically, we introduce a physically-based differentiable rendering layer with screen-space ray tracing, resulting in more realistic specular reflections that match the input photo. In addition, we create a large-scale, photorealistic indoor scene dataset with significantly richer details like complex furniture and dedicated decorations. Further, we design a novel out-of-view lighting network with uncertainty-aware refinement leveraging hypernetwork-based neural radiance fields to predict lighting outside the view of the input photo. Through extensive evaluations on common benchmark datasets, we demonstrate superior inverse rendering quality of our method compared to state-of-the-art baselines, enabling various applications such as complex object insertion and material editing with high fidelity.
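At the heart of the differentiable rendering layer is an importance-sampled Monte Carlo estimator of the rendering equation. As a minimal, self-contained illustration (not the paper's actual implementation, which is differentiable and GPU-based), the sketch below estimates the irradiance received by a diffuse surface point using cosine-weighted hemisphere sampling; the function names and the constant-radiance environment are illustrative assumptions.

```python
import math
import random

def sample_cosine_hemisphere():
    # Cosine-weighted hemisphere sampling via Malley's method:
    # sample the unit disk uniformly, then project up onto the hemisphere.
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))  # z = cos(theta)
    return (x, y, z)

def estimate_irradiance(radiance_fn, n_samples=4096):
    # Monte Carlo estimate of E = integral of L(w) * cos(theta) dw over the
    # upper hemisphere, using importance sampling with pdf(w) = cos(theta) / pi.
    total = 0.0
    for _ in range(n_samples):
        w = sample_cosine_hemisphere()
        cos_theta = w[2]
        pdf = cos_theta / math.pi
        total += radiance_fn(w) * cos_theta / pdf
    return total / n_samples

# For a constant unit-radiance environment the true irradiance is pi, and
# cosine-weighted sampling recovers it with zero variance (f/pdf = pi exactly).
random.seed(0)
est = estimate_irradiance(lambda w: 1.0)
```

Because the sampling density matches the cosine term exactly, every sample contributes the same value, which is why physically-based renderers importance-sample the BRDF whenever a matching distribution is available.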

Video

InteriorVerse Dataset

We provide a large-scale indoor scene dataset, InteriorVerse, with thousands of well-designed indoor scenes. Our dataset provides synthetic ground truth for material, geometry, and spatially-varying lighting. Get access to our dataset HERE!

Object Insertion

Our method enables inserting virtual objects into the scene image, even highly specular ones!

Scene Editing

Our method also enables editing the materials of the scene, not only albedo color but also roughness and metallic properties!

BibTeX

@inproceedings{zhu2022learning,
    author = {Zhu, Jingsen and Luan, Fujun and Huo, Yuchi and Lin, Zihao and Zhong, Zhihua and Xi, Dianbing and Wang, Rui and Bao, Hujun and Zheng, Jiaxiang and Tang, Rui},
    title = {Learning-Based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing},
    year = {2022},
    publisher = {ACM},
    url = {https://doi.org/10.1145/3550469.3555407},
    booktitle = {SIGGRAPH Asia 2022 Conference Papers},
    articleno = {6},
    numpages = {8}
}