faceWah: Applying a wah effect based on face tilt - prototype
Overview
"faceWah" is a prototype application that uses facial movements and expressions to control a wah-wah audio effect. Instead of a traditional physical foot pedal, it aims to create an intuitive and expressive interface for sound creation based on facial tilt and mouth opening.
How it works
This project is built using a combination of openFrameworks and Max/MSP.
1. Face Tracking (openFrameworks)
Using the C++ framework openFrameworks together with the ofxFaceTracker2 addon, the prototype captures a real-time webcam feed and tracks 68 facial landmarks.
2. Parameter Calculation & Transmission
From the tracked landmarks, it calculates two geometric values:
- Nose line tilt (face angle): calculated from the top of the nose bridge and the tip of the nose.
- Mouth opening: The geometric distance between the upper and lower lips.
These values are then scaled and sent over a local network via OSC (Open Sound Control).
3. Audio Processing (Max/MSP)
A Max/MSP patch (wah.maxpat) listens for the incoming OSC data. It maps the face-tilt and mouth-opening values to the parameters of a wah effect (such as the cutoff frequency of a resonant filter), so the audio sweeps and changes dynamically in response to real-time facial movement.