The Bartlett
B-Pro Show 2023

3D Model Texture Generation Utilising Stable Diffusion

Project details

Student Dawei Yang
Programme
Year 1

With the development of machine learning and the emergence of text-to-image algorithms such as Stable Diffusion, artificial intelligence-generated content (AIGC) has attracted much attention in recent years. AIGC has the potential to revolutionise numerous industries, from film to games to animation. While 2D AIGC is relatively mature and already produces satisfying results, 3D AIGC is still in development. This paper proposes a new way to generate textures for 3D models based on Stable Diffusion and ControlNet: a four-view depth or normal map of the 3D model is used to generate a four-view image of the model, and the projection of each view is then processed separately. Because all four views are generated together, consistency across views is guaranteed, making detailed model textures possible for 3D AIGC.
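The project itself publishes no code, but the multi-view step can be sketched roughly: the four depth or normal renders are tiled into a single conditioning image, so that one diffusion pass sees every view at once (which is what keeps the generated views mutually consistent), and the generated result is then split back into per-view images for projection onto the mesh. A minimal NumPy sketch of that tiling step, with illustrative function names not taken from the project:

```python
import numpy as np

def tile_views(views):
    """Arrange four (H, W, C) view renders into one 2x2 grid image.

    Feeding this grid to Stable Diffusion + ControlNet as a single
    conditioning image lets one denoising pass see all four views,
    which helps keep the generated texture consistent across views.
    """
    front, back, left, right = views
    top_row = np.concatenate([front, back], axis=1)
    bottom_row = np.concatenate([left, right], axis=1)
    return np.concatenate([top_row, bottom_row], axis=0)

def split_views(grid):
    """Invert tile_views: cut the generated 2x2 grid back into four
    per-view images, ready to be projected onto the 3D model."""
    h, w = grid.shape[0] // 2, grid.shape[1] // 2
    return [grid[:h, :w], grid[:h, w:], grid[h:, :w], grid[h:, w:]]
```

The projection of each recovered view back onto the model's UV map is a separate step, handled per view as the paragraph above describes.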

Video File Showing the Texture Generation Process

Four View Normal Map of 3D Model

Four-view image of the 3D model generated by Stable Diffusion from the four-view normal map.

Another version of the four-view image of the 3D model generated in Stable Diffusion.

Rendering an Image of a Textured 3D Model

Textured UV Map of a 3D Model

The Bartlett
B-Pro Show 2023
26 September – 6 October