Artificial intelligence (AI) is transforming creative industries, most prominently in 2D imagery and large language models (LLMs), where machine learning discovers patterns and workflows by utilising vast amounts of training data. We aim to bridge the gap between the ideation and 3D design phases by generating 3D forms directly from text prompts and 2D images. By training a single-shot image-to-NeRF (Neural Radiance Field) model on a custom-built database of thousands of 3D architectural models and images, we bring this cutting-edge technology into the architectural field. We streamlined this process into a pipeline hosted on GCS (Google Cloud Services): from finding effective prompts and segmenting the generated image, to generating a NeRF and finally applying marching tetrahedra to extract a usable 3D mesh. This approach utilises machine learning (ML) and advanced algorithms to enable architects and designers to input a prompt or image into our pipeline and receive a usable 3D mesh output that can be optimised and sculpted into an architectural object.
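The four-stage pipeline described above could be orchestrated roughly as follows. This is an illustrative sketch only: every function name and data shape here is a hypothetical placeholder, not the actual implementation. In a real deployment each stub would call out to a text-to-image model, an image segmenter, the single-shot image-to-NeRF model, and a marching-tetrahedra mesher respectively.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the text-to-mesh pipeline. Each stage is a stub
# that passes an accumulating state dict forward; a production version
# would replace the stubs with calls to the deployed models on GCS.

@dataclass
class PipelineResult:
    prompt: str
    stages: list = field(default_factory=list)

def generate_image(prompt):
    # Stage 1: text-to-image generation from the user's prompt (placeholder).
    return {"prompt": prompt, "image": "rgb_array"}

def segment_image(state):
    # Stage 2: isolate the architectural object from its background (placeholder).
    return {**state, "mask": "binary_mask"}

def fit_nerf(state):
    # Stage 3: single-shot image-to-NeRF reconstruction (placeholder).
    return {**state, "density_field": "nerf_volume"}

def extract_mesh(state):
    # Stage 4: run marching tetrahedra over the density field to get an
    # editable triangle mesh (placeholder).
    return {**state, "mesh": "triangle_mesh"}

def run_pipeline(prompt: str) -> PipelineResult:
    """Chain the four stages and record which ones ran, in order."""
    result = PipelineResult(prompt=prompt)
    state = generate_image(prompt)
    result.stages.append("generate_image")
    state = segment_image(state)
    result.stages.append("segment_image")
    state = fit_nerf(state)
    result.stages.append("fit_nerf")
    state = extract_mesh(state)
    result.stages.append("extract_mesh")
    result.mesh = state["mesh"]
    return result
```

The chained-state design mirrors the paper's framing: each stage consumes the previous stage's artefact, so any stage can be swapped (e.g. a different segmenter) without touching the rest of the pipeline.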