Recently, there have been significant improvements in 3D geometry modeling. However, creating fully textured 3D objects remains a challenge.
A paper recently published on arXiv.org proposes Texturify, a method for automatic texture generation on 3D shape collections.
Given a shape’s geometry, Texturify learns to generate different textures on the shape by sampling from a latent texture space. The method uses only a set of images and a collection of 3D shape geometries from the same object class, without any 3D texture supervision.
A generative adversarial network synthesizes textures directly on the mesh surface from the input shape geometry and a latent texture code. The researchers demonstrate Texturify’s effectiveness by texturing ShapeNet chairs and cars, training on real-world imagery.
The method is shown to produce realistic, high-fidelity textures and to outperform the state of the art.
Texture cues on 3D objects are key to compelling visual representations, offering high visual fidelity with inherent spatial consistency across different views. Since the availability of textured 3D shapes remains very limited, learning a 3D-supervised, data-driven method that predicts a texture from 3D input is very challenging. We thus propose Texturify, a GAN-based method that leverages a 3D shape dataset of an object class and learns to reproduce the distribution of appearances observed in real images by generating high-quality textures. In particular, our method does not require any 3D color supervision or correspondence between shape geometry and images to learn the texturing of 3D objects. Texturify operates directly on the surface of the 3D objects by introducing face convolutional operators on a hierarchical 4-RoSy parametrization to generate plausible, object-specific textures. By employing differentiable rendering and adversarial losses that critique individual views as well as consistency across views, we effectively learn high-quality surface texture distributions from real-world images. Experiments on car and chair shape collections show that our approach outperforms the state of the art by 22% in FID score.
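The core operator the abstract describes is a face convolution on a quad mesh: each face mixes its own feature with those of its four edge-adjacent neighbors. The following is a minimal NumPy sketch of that idea only; the mesh connectivity, weight shapes, and aggregation here are illustrative assumptions, not the paper's actual implementation (which operates on a hierarchical 4-RoSy parametrization with learned kernels per neighbor direction):

```python
import numpy as np

def face_conv(features, neighbors, w_self, w_nbr):
    """One illustrative face-convolution layer.

    Each face combines its own feature with the mean of the features
    of its four edge-adjacent neighbor faces (a simplification: the
    paper's operator is direction-aware rather than a plain mean).

    features:  (F, C_in)  per-face feature vectors
    neighbors: (F, 4)     indices of the 4 adjacent faces
    w_self, w_nbr: (C_in, C_out) weight matrices (randomly chosen here)
    """
    nbr_feats = features[neighbors].mean(axis=1)    # (F, C_in)
    return features @ w_self + nbr_feats @ w_nbr    # (F, C_out)

# Toy quad mesh: 4 faces in a ring, each adjacent to its two ring
# neighbors (repeated to fill the 4 neighbor slots).
neighbors = np.array([[1, 3, 1, 3],
                      [2, 0, 2, 0],
                      [3, 1, 3, 1],
                      [0, 2, 0, 2]])

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))      # 4 faces, 8 input channels
w_self = rng.normal(size=(8, 16))
w_nbr = rng.normal(size=(8, 16))

out = face_conv(feats, neighbors, w_self, w_nbr)
print(out.shape)  # (4, 16): one 16-channel feature per face
```

Stacking such layers over progressively coarser face hierarchies gives the generator geometry-aware context before it predicts per-face colors, which are then rendered and judged by the adversarial losses.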
Research Paper: Siddiqui, Y., Thies, J., Ma, F., Shan, Q., Nießner, M., and Dai, A., “Texturify: Generating Textures on 3D Shape Surfaces”, 2022. Article link: https://arxiv.org/abs/2204.02411
Project Page: https://nihalsid.github.io/texturify/