The purpose of Shape-from-Template (SfT) is to reconstruct deforming 3D surfaces from an image sequence, given a known initial state (the template). However, most existing methods cannot precisely capture local surface deformations such as folds and wrinkles.
A recent paper published on arXiv.org proposes a novel analysis-by-synthesis SfT method that addresses several limitations of the current state of the art and improves reconstruction accuracy by a significant margin.
The researchers argue that the current challenges in the domain stem from methods being unaware of the physical process of fold formation. The proposed approach therefore models this process explicitly, and its parameters are physically meaningful. In addition, differentiable rendering enables the information contained in the surface texture to be exploited regardless of mesh resolution.
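To give an intuition for what "physically meaningful parameters" means here, the following is a minimal toy sketch (not the paper's actual simulator): discrete stretching and bending energies of a 1D chain of surface points, controlled by a stretching stiffness `k` and a bending coefficient `kb` (both hypothetical names). A fold in the chain raises the energy, which is the kind of quantity a physics-based regularizer can penalize.

```python
import numpy as np

def elastic_energy(points, rest_len, k=10.0, kb=1.0):
    """Stretching + bending energy of a 3D polyline (toy model)."""
    edges = np.diff(points, axis=0)
    lengths = np.linalg.norm(edges, axis=1)
    # Stretching: penalize deviation of edge lengths from rest length.
    stretch = 0.5 * k * np.sum((lengths - rest_len) ** 2)
    # Bending: penalize the turning angle between consecutive edges.
    unit = edges / lengths[:, None]
    cos_turn = np.clip(np.sum(unit[:-1] * unit[1:], axis=1), -1.0, 1.0)
    bend = 0.5 * kb * np.sum(np.arccos(cos_turn) ** 2)
    return stretch + bend

# A flat chain has zero energy; lifting the middle point creates a fold.
flat = np.stack([np.arange(5, dtype=float), np.zeros(5), np.zeros(5)], axis=1)
folded = flat.copy()
folded[2, 2] = 1.0

print(elastic_energy(flat, 1.0))    # flat chain: zero energy
print(elastic_energy(folded, 1.0))  # folded chain: positive energy
```

Because the energy is a smooth function of the point positions, its parameters (`k`, `kb`) and the positions themselves can be optimized with gradients, which is the property the paper's differentiable simulator relies on.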
Experiments show that the proposed method is more accurate than the baselines and recovers local folds at a finer scale.
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera, assuming a previously known 3D state (a template). This is an important yet challenging problem due to the under-constrained nature of the monocular setting. Existing SfT techniques predominantly use geometric and simplified deformation models, which often limits their reconstruction abilities. In contrast to previous works, this paper proposes a new SfT approach explaining 2D observations through physical simulation accounting for forces and material properties. Our differentiable physics simulator regularizes the surface evolution and models material elastic properties such as bending coefficient, stretching stiffness and density. We use a differentiable renderer to minimize the dense re-projection error between the estimated 3D states and the input images, and recover the deformation parameters using an adaptive gradient-based optimization. For evaluation, we record with an RGB-D camera challenging real surfaces exposed to physical forces, with various material properties and textures. Our approach significantly reduces the 3D reconstruction error compared to multiple competing methods. For the source code and data, see this https URL.
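The analysis-by-synthesis loop described in the abstract can be sketched in miniature: render (here, just project) a deformed template, compare against the observed 2D measurements, add a smoothness term standing in for the physics-based regularizer, and descend the gradient of the total loss with respect to the deformation parameters. Everything below is a hypothetical toy setup (pinhole projection of a 1D chain, per-point depth offsets `dz`, finite-difference gradients), not the paper's actual pipeline.

```python
import numpy as np

f = 100.0  # assumed focal length
# Template: a 1D chain of 8 points at depth 5.
template = np.stack([np.linspace(-1, 1, 8),
                     np.zeros(8),
                     np.full(8, 5.0)], axis=1)

def project(points):
    """Pinhole projection of x onto the image axis: u = f * x / z."""
    return f * points[:, 0] / points[:, 2]

def loss(dz, observed):
    """Dense re-projection error + smoothness regularizer (toy)."""
    deformed = template.copy()
    deformed[:, 2] += dz  # deformation parameters: depth offsets
    reproj = np.sum((project(deformed) - observed) ** 2)
    smooth = np.sum(np.diff(dz) ** 2)  # stand-in for elastic energy
    return reproj + 0.1 * smooth

# Synthesize observations from a known "true" deformation.
true_dz = 0.5 * np.sin(np.linspace(0, np.pi, 8))
obs = project(template + np.stack([np.zeros(8), np.zeros(8), true_dz], axis=1))

# Gradient descent with central finite differences (autograd or a
# differentiable renderer would supply these gradients in practice).
dz, step, eps = np.zeros(8), 0.01, 1e-5
for _ in range(1000):
    grad = np.array([(loss(dz + eps * np.eye(8)[i], obs)
                      - loss(dz - eps * np.eye(8)[i], obs)) / (2 * eps)
                     for i in range(8)])
    dz -= step * grad

print(loss(dz, obs))  # substantially below the initial loss
```

The key design point this illustrates is that once rendering and regularization are both differentiable functions of the deformation parameters, recovery reduces to standard gradient-based optimization.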
Research Paper: Kairanda, N., Tretschk, E., Elgharib, M., Theobalt, C., and Golyanik, V., "φ-SfT: Shape-from-Template with a Physics-Based Deformation Model", 2022. Link: https://arxiv.org/abs/2203.11938