The Bartlett
B-Pro Show 2023





Linguistic Landscapes: Affective Visual and Sound Patterns of Chinatown

Project details

Programme
Cluster RC15
Year 1

Originally a successful global model for integrated cultural and business facilities, London’s Chinatown has seen its relevance decline significantly in recent years. Responding to a new set of expectations defined by younger generations, the project reimagines Chinatown as a new beacon of cultural and economic representation for the Chinese community. The notion of ‘Linguistic Landscapes’ was first explored in Soho, using methods that included language and image recognition through machine learning. The final proposal reorganises Chinatown as an augmented urban experience in Canary Wharf: a large-scale network of Chinese follies built from the visual and spatial archive of the original Chinatown, sustained by long-term strategies of feedback accumulation and speech recognition.

Linguistic Landscapes: Video


Social Media Analysis

The distribution and emotional value of high-frequency tags identified through image recognition in the collected social-media images.

Image Prediction Results

The model scanned 56,206 images, of which 1,224 contained Chinese elements. These were divided into six categories: lanterns, stone lions, dragon dances, Chinese gates, Chinese shops and Chinese food.
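The published figures imply a simple tally over per-image predictions. A minimal sketch of that counting step, assuming a hypothetical classifier whose output maps each image to a category label, or `None` when no Chinese elements are found (`tally_predictions` and the toy labels are illustrative, not the project's actual code):

```python
from collections import Counter

def tally_predictions(predictions):
    """Count images per category, skipping images in which the
    classifier found no Chinese elements (label is None)."""
    counts = Counter(label for label in predictions.values()
                     if label is not None)
    detected = sum(counts.values())  # images containing Chinese elements
    return detected, counts

# Toy output from a hypothetical image classifier:
preds = {"img1": "lantern", "img2": None,
         "img3": "chinese food", "img4": "lantern"}
detected, counts = tally_predictions(preds)
# detected == 3; counts == {"lantern": 2, "chinese food": 1}
```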

Sound Classification Results

Following sound-type theory, each five-minute audio file was clipped into ten-second segments, and each segment was classified into one of six categories: Chinese, English, other, human-made, nature and mechanical.
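The clipping step is simple arithmetic: a five-minute recording yields thirty ten-second windows. A sketch of the windowing, assuming exact segment lengths (the classifier itself is not shown):

```python
def clip_segments(duration_s=300, segment_s=10):
    """Split a recording of duration_s seconds into consecutive
    (start, end) windows of segment_s seconds each."""
    return [(t, t + segment_s) for t in range(0, duration_s, segment_s)]

segments = clip_segments()
# A five-minute file yields 30 windows: (0, 10), (10, 20), ..., (290, 300).
```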

Image-to-Image Workflow

Clichéd images are transformed into coinages using an image-to-image workflow.

Image-to-Point-Cloud Workflow

Clichéd images are transformed into coinages using an image-to-point-cloud workflow.
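The project's image-to-point-cloud pipeline is not documented in detail; one common route is to estimate a depth map from the image and back-project every pixel through a pinhole camera model. A minimal sketch of that back-projection step only (the focal lengths `fx`, `fy` and principal point `cx`, `cy` are hypothetical camera intrinsics):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (nested lists of metres) into 3D points
    using the pinhole model: X = (u-cx)*z/fx, Y = (v-cy)*z/fy, Z = z."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip pixels with no depth estimate
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# 2x2 toy depth map, unit focal lengths, principal point at the corner:
pts = depth_to_points([[1.0, 2.0], [0.0, 1.0]], fx=1.0, fy=1.0, cx=0.0, cy=0.0)
# pts == [(0.0, 0.0, 1.0), (2.0, 0.0, 2.0), (1.0, 1.0, 1.0)]
```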

Coinages Workflow

The diagram illustrates how the image-to-coinages workflow is disrupted and hybridised by sound samples captured on site.

Follies Generation

Clichéd images are transformed into coinages using image-to-image and image-to-point-cloud workflows.

Connecting Chinatown to Canary Wharf: A Linear Park Punctuated by Follies

The final proposal explores an augmented ‘Linguistic Landscape’ that extends across London along the River Thames: a cultural linear park punctuated by a series of augmented-reality (AR) follies.

Agent-Based Simulation

The agent-based simulation logic behind the virtual user-experience system determines real-world folly locations.
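The simulation logic itself is not published; as an illustration of the general idea, the sketch below uses random-walk agents whose accumulated footfall ranks candidate grid cells, with the most-visited cells proposed as folly sites (the grid size, agent count and ranking rule are all assumptions):

```python
import random

def simulate_folly_sites(width, height, n_agents, n_steps, n_sites, seed=0):
    """Random-walk agents accumulate visit counts on a grid;
    the most-visited cells are returned as proposed folly sites."""
    rng = random.Random(seed)
    visits = {}
    for _ in range(n_agents):
        x, y = rng.randrange(width), rng.randrange(height)
        for _ in range(n_steps):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = min(max(x + dx, 0), width - 1)   # clamp to the grid
            y = min(max(y + dy, 0), height - 1)
            visits[(x, y)] = visits.get((x, y), 0) + 1
    ranked = sorted(visits, key=visits.get, reverse=True)
    return ranked[:n_sites]

sites = simulate_folly_sites(width=8, height=8, n_agents=20, n_steps=50, n_sites=3)
# Three in-bounds grid cells with the heaviest simulated footfall.
```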

User Experience Interface

Users can switch between point-cloud mode and folly mode at any moment via the user interface, and can ‘contaminate’ point clouds or follies made by others on site and see the outcomes.

Virtual Reality (VR) View of the Point Cloud

The visual real-time programming environment provides an interactive experience with 3D point clouds for users wearing compatible VR headsets.

Sectional Perspective

Rather than adopting the clichéd aesthetic of traditional Chinese colours, the non-functional follies propose a non-clichéd semantics in which all colours are given equal weight.

Real World Night-Time Perspective

In the vibrant night scene, the sparkling water contrasts with the AI projection of sound and architectural follies, generated by fusing on-site images with the original Chinatown dataset.

Virtual World Perspective

An immersive virtual-world experience, with point clouds, sound-image projections and follies all visible at the same time.



B-Pro Show 2023: 26 September – 6 October