A live audiovisual performance and interactive installation that explores the ambivalent state of self-doubt and reconciliation.

Implementation

Processing was used to generate the visuals, connected to a Kinect that recognizes the performer's contour with OpenCV.
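
The project's source is not reproduced here, but a minimal contour-tracking setup along these lines could be built with the Open Kinect for Processing and OpenCV for Processing libraries; the specific libraries, Kinect model, and threshold value below are assumptions for illustration, not the actual implementation.

import org.openkinect.processing.*;
import gab.opencv.*;

Kinect kinect;
OpenCV opencv;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();                       // depth image makes the body easy to separate
  opencv = new OpenCV(this, 640, 480);
}

void draw() {
  background(0);
  // Threshold the depth image so the performer stands out from the background
  opencv.loadImage(kinect.getDepthImage());
  opencv.threshold(100);                    // assumed value; would be tuned on site
  // Extract and draw the performer's contour
  for (Contour contour : opencv.findContours()) {
    stroke(255);
    noFill();
    contour.draw();
  }
}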

Prototyping generative visuals

The choreography and soundtrack were created by Phyllis Fei.

Interactive Visuals

In addition to body motion, the performer's real-time audio was also fed back into the generative visuals through wireless earbuds. When the performer breathes heavily, the visuals react with a change in magnitude.
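
As a sketch of how such audio reactivity can be wired up in Processing, the example below uses the Sound library's AudioIn and Amplitude classes to map input loudness to the size of a shape; the actual audio chain and breath-to-visual mapping used in the performance are not documented here.

import processing.sound.*;

AudioIn mic;
Amplitude amp;

void setup() {
  size(640, 480);
  mic = new AudioIn(this, 0);    // microphone input; channel 0 is an assumption
  mic.start();
  amp = new Amplitude(this);
  amp.input(mic);
}

void draw() {
  background(0);
  // Louder breathing -> larger visual magnitude (mapping values are illustrative)
  float level = amp.analyze();             // 0.0 .. 1.0
  float magnitude = map(level, 0, 0.5, 10, height / 2);
  noFill();
  stroke(255);
  ellipse(width / 2, height / 2, magnitude, magnitude);
}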

Exhibition

2018, Interactive Media Fall Showcase. Group exhibition at Interactive Media, NYU Abu Dhabi, Abu Dhabi, UAE.

Acknowledgement

This work was completed for the course Sensors, Body, & Motion, taught by Aaron Sherwood at NYU Abu Dhabi.

