Originally a successful global model for integrated cultural and business facilities, London's Chinatown has seen its relevance decline significantly in recent years. Responding to a new set of expectations defined by younger generations, the project seeks to reimagine Chinatown as a new beacon of cultural and economic representation for the Chinese community. The notion of 'Linguistic Landscapes' was first explored in Soho, with methods that included language and image recognition through machine learning. The final proposal explores the reorganisation of Chinatown as an augmented urban experience in Canary Wharf: a large-scale network of Chinese follies built from the visual and spatial archive of the original Chinatown, sustained by long-term strategies of feedback accumulation and speech recognition.
The distribution and emotional value of high-frequency tags identified through image recognition from collected social media images.
The model analysed 56,206 images, of which 1,224 contained Chinese elements. The images were divided into six categories: lanterns, stone lions, dragon dances, Chinese gates, Chinese shops, and Chinese food.
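The tallying stage of such a survey can be sketched as below. This is a minimal illustration, not the project's actual pipeline: the classifier itself is replaced by hypothetical pre-computed labels, and all names are assumptions.

```python
from collections import Counter

# The six categories of Chinese visual elements used in the survey.
CATEGORIES = {"lantern", "stone lion", "dragon dance",
              "Chinese gate", "Chinese shop", "Chinese food"}

def summarise(predictions):
    """Tally hypothetical classifier output: a list of (image_id, label)
    pairs, where label is one of the six categories or None."""
    tagged = [(i, lbl) for i, lbl in predictions if lbl in CATEGORIES]
    per_category = Counter(lbl for _, lbl in tagged)
    return len(predictions), len(tagged), per_category

# Toy predictions standing in for the real model's output.
preds = [(1, "lantern"), (2, None), (3, "Chinese gate"), (4, "lantern")]
total, chinese, per_category = summarise(preds)
# total == 4 images scanned, chinese == 3 with Chinese elements
```

In the project itself the same tally would run over the 56,206 collected images, yielding the 1,224 images with Chinese elements reported above.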
Following sound-type theory, each five-minute audio recording was clipped into ten-second segments, each of which was then classified into one of six categories: Chinese, English, other languages, human-made, natural and mechanical sounds.
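The clipping step above implies thirty segments per recording. A minimal sketch of the segmentation arithmetic, with all names hypothetical (the actual audio slicing and classification are not shown):

```python
SEGMENT_S = 10        # clip length in seconds
RECORDING_S = 5 * 60  # each field recording is five minutes long

# The six sound categories used for classification.
SOUND_CLASSES = ["Chinese", "English", "other languages",
                 "human-made", "natural", "mechanical"]

def segment_bounds(duration_s, segment_s=SEGMENT_S):
    """Return (start, end) times in seconds for consecutive clips."""
    return [(t, min(t + segment_s, duration_s))
            for t in range(0, duration_s, segment_s)]

clips = segment_bounds(RECORDING_S)
# 30 clips: (0, 10), (10, 20), ..., (290, 300)
```

Each resulting clip would then be passed to a sound classifier and assigned one of the six labels.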
Clichéd images are transformed into coinages using an image-to-image workflow.
Clichéd images are transformed into coinages using an image-to-point-cloud workflow.
The diagram illustrates how the image-to-coinage workflow is disrupted and hybridised by sound samples captured on site.
Clichéd images are transformed into coinages using image-to-image and image-to-point-cloud workflows.
The final proposal explores an augmented 'Linguistic Landscape' that spreads across London along the River Thames as a cultural linear park punctuated by a series of augmented reality (AR) follies.
The agent-based simulation logic behind the virtual user experience system determines real-world folly locations.
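A toy stand-in for such an agent-based siting logic is sketched below: random-walk agents traverse a grid, and the most-visited cells become candidate folly locations. Every name and parameter here is an assumption for illustration, not the project's actual simulation.

```python
import random
from collections import Counter

def simulate(n_agents=50, steps=200, size=20, n_sites=5, seed=42):
    """Random-walk agents on a size x size grid; the cells they visit
    most often are returned as candidate folly locations."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(n_agents):
        x = y = size // 2  # every agent starts at the grid centre
        for _ in range(steps):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = min(max(x + dx, 0), size - 1)  # clamp to the grid
            y = min(max(y + dy, 0), size - 1)
            visits[(x, y)] += 1
    return [cell for cell, _ in visits.most_common(n_sites)]

folly_sites = simulate()  # five grid cells with the heaviest footfall
```

In the real system the grid would be replaced by the Canary Wharf site model and the random walk by the recorded behaviour of virtual users.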
Users can switch between point-cloud mode and folly mode at any moment via the user interface, and can 'contaminate' point clouds or follies made by others on site and see the outcomes.
The visual real-time programming environment provides an interactive experience with 3D point clouds for users wearing compatible VR headsets.
Instead of adopting the clichéd aesthetic of Chinese colours, the proposal pursues a non-clichéd semantics of non-functional follies, treating all colours with equal weight.
In the vibrant night, the sparkling water contrasts with the AI projection of sound and architectural follies, generated by fusing on-site images with the original Chinatown dataset.
An immersive virtual-world experience, with point clouds, sound-image projection and follies all visible at the same time.