With the development of machine learning and the emergence of text-to-image algorithms such as Stable Diffusion, artificial-intelligence-generated content (AIGC) has attracted much attention in recent years. AIGC has the potential to revolutionise numerous industries, ranging from film to games to animation. While 2D AIGC is relatively mature and has produced satisfying results, 3D AIGC is still under development. This paper presents a new solution for generating textures for 3D models based on Stable Diffusion and ControlNet: it uses a four-view depth or normal map of the 3D model to generate a four-view image of the model, and then projects each view onto the model separately. In this way, consistency across the different views is guaranteed, making exquisite model textures possible in 3D AIGC.
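The four-view conditioning step described above can be sketched as follows. This is a minimal illustration, assuming the four per-view depth or normal maps are tiled into a single 2×2 conditioning image before being passed to a ControlNet-guided Stable Diffusion pipeline; the `tile_four_views` helper and the 2×2 layout are hypothetical assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def tile_four_views(views):
    """Tile four per-view maps (e.g. front, right, back, left) of equal size
    into a single 2x2 grid image suitable as one conditioning input.
    The 2x2 layout is an assumed convention for illustration."""
    assert len(views) == 4, "expected exactly four views"
    h, w, c = views[0].shape
    grid = np.zeros((2 * h, 2 * w, c), dtype=views[0].dtype)
    for i, view in enumerate(views):
        row, col = divmod(i, 2)
        grid[row * h:(row + 1) * h, col * w:(col + 1) * w] = view
    return grid

# Stand-in data: four 256x256 RGB normal maps, one per view.
views = [np.full((256, 256, 3), i, dtype=np.uint8) for i in range(4)]
cond_image = tile_four_views(views)
print(cond_image.shape)  # (512, 512, 3)
```

Generating all four views in a single diffusion pass over the tiled conditioning image, rather than one view at a time, is what lets the model keep the views mutually consistent before each view's region is projected back onto the mesh.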
Four-view image of the 3D model generated by Stable Diffusion based on the four-view normal map.
Another version of the four-view image of the 3D model generated by Stable Diffusion.