Reconstructing object surfaces from multi-view images or monocular video is a fundamental problem in computer vision. However, much recent research concentrates on recovering geometry alone, through either implicit or explicit representations. In this paper, we shift the focus to reconstructing a mesh together with color. We remove the view-dependent color from neural volume rendering while retaining rendering performance through a relighting network. The mesh is extracted from the signed distance function (SDF) network, and the color of each surface vertex is queried from the global color network. To evaluate our approach, we designed an in-hand object scanning task featuring heavy occlusion and dramatic changes in lighting. We collected several videos for this task, and our results surpass those of existing methods capable of reconstructing a mesh together with color. We also evaluated our method on public datasets, including DTU, BlendedMVS, and OmniObject3D, and it performs well across all of them.
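The mesh-coloring step described above can be sketched in a few lines. This is a minimal, dependency-free illustration, not the paper's implementation: the analytic sphere SDF stands in for the trained SDF network, `global_color` stands in for the trained global color network, and the surface vertices are projected onto the sphere rather than extracted by marching cubes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Analytic SDF of a unit sphere, standing in for the trained SDF network.
def sdf(x):
    return np.linalg.norm(x, axis=-1) - 1.0

# Stand-in for the trained global (view-independent) color network.
def global_color(x):
    return 0.5 + 0.5 * np.tanh(x)  # maps 3D points to RGB values in (0, 1)

# Surface vertices would normally come from marching cubes on an SDF grid;
# here random points are projected onto the sphere to keep the sketch self-contained.
pts = rng.standard_normal((100, 3))
vertices = pts / np.linalg.norm(pts, axis=-1, keepdims=True)
assert np.allclose(sdf(vertices), 0.0, atol=1e-6)  # all points lie on the surface

# Per-vertex color: one query of the global color network per mesh vertex.
vertex_colors = global_color(vertices)
colored_mesh = {"vertices": vertices, "colors": vertex_colors}
```

Because the color network takes only a 3D position, each vertex receives a single color, independent of the viewpoint it was observed from.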
Color-NeuS employs a signed distance function (SDF) network to learn the implicit geometry and a global color network to learn view-independent color. A relighting network compensates for variations that correlate with viewing direction. At inference, only the global, view-independent color is used, yielding a consistent appearance regardless of viewpoint.
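The decomposition above can be sketched with tiny stand-in networks. This is a hypothetical illustration under stated assumptions, not the released code: the MLPs are random-weight placeholders for the trained SDF, global color, and relighting networks, and the training-time color is modeled as the global color plus a view-dependent residual.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(dims):
    """Tiny random-weight MLP standing in for a trained network (weights are untrained)."""
    weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(dims[:-1], dims[1:])]
    def forward(x):
        for i, w in enumerate(weights):
            x = x @ w
            if i < len(weights) - 1:
                x = np.maximum(x, 0.0)  # ReLU on hidden layers
        return x
    return forward

# Hypothetical stand-ins for the three networks described above.
sdf_net     = make_mlp([3, 64, 1])   # 3D point -> signed distance
color_net   = make_mlp([3, 64, 3])   # 3D point -> view-independent RGB
relight_net = make_mlp([6, 64, 3])   # (point, view direction) -> view-dependent residual

def render_color(points, view_dirs, training):
    """Training: global color plus relighting residual. Inference: global color only."""
    base = color_net(points)
    if training:
        return base + relight_net(np.concatenate([points, view_dirs], axis=-1))
    return base

points = rng.standard_normal((4, 3))
dirs = rng.standard_normal((4, 3))
distances = sdf_net(points)                            # geometry query
c_train = render_color(points, dirs, training=True)    # view-dependent, for rendering loss
c_infer = render_color(points, dirs, training=False)   # view-independent, for mesh colors
```

Since the inference path never reads the view direction, the extracted vertex colors are identical from every viewpoint, which is the consistency property the design targets.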
@inproceedings{zhong2024colorneus,
title = {Color-NeuS: Reconstructing Neural Implicit Surfaces with Color},
author = {Zhong, Licheng and Yang, Lixin and Li, Kailin and Zhen, Haoyu and Han, Mei and Lu, Cewu},
booktitle = {International Conference on 3D Vision (3DV)},
year = {2024}
}