More selected projects

Face DeeJay Beta v0.1

Express yourself and drop beats and visuals!

produced by: Luis Rubim

Face DeeJay

Face DeeJay is a video-DJing app and also a de-stress desk toy. While it was made as a simple, fun app, it is a project born out of an exploration of the expressivity of the human face and my personal interests in video art, sound, artistic movements, archival footage, art history and the history of the 20th century. It also plays with concepts of emotional expressivity and the unpredictability of non-verbal human communication, with its subliminal, subtle variations in facial expression. The app therefore somewhat forces users to control their emotions as they watch footage that may awe, shock or surprise them.

The concept is relatively simple, but the challenge lies in the using: since it is a DJing app, the user has to mix and beatmatch with their facial expressions alone. The app is started by a frown, and frowns (usually an expression of frustration, bemusement or dislike) are used to change sample sets and visuals. Drum loops are triggered by an eyebrow raise (we raise our eyebrows not only in shock or surprise but often subconsciously when we find pleasure in something, such as a beat), and synth layers or extras by moving the head to the left (as when we peer behind something, trying to find out more). To trigger a synth with a visual that appeals directly to the pleasure centres, the user drops the jaw/opens the mouth (as if in awe). The samples outrun the visuals to give the user a chance to beatmatch.
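The gesture-to-trigger mapping described above can be sketched as a simple threshold classifier. This is a minimal, dependency-free sketch: the threshold values and the `classifyGesture` function are hypothetical illustrations, not the app's actual tuned values.

```cpp
#include <cassert>
#include <string>

// Hypothetical threshold values -- the real app tunes these in its own code.
const float EYEBROW_THRESH   = 8.5f;   // eyebrow raise  -> drum loop
const float JAW_THRESH       = 4.0f;   // jaw drop       -> "awe" synth + visual
const float HEAD_LEFT_THRESH = -0.4f;  // head turn left -> synth layer / extras

// Classify one frame of face metrics into a trigger, jaw taking priority.
std::string classifyGesture(float eyebrows, float jaw, float headYaw) {
    if (jaw > JAW_THRESH)           return "awe-synth";
    if (eyebrows > EYEBROW_THRESH)  return "drum-loop";
    if (headYaw < HEAD_LEFT_THRESH) return "synth-layer";
    return "none";
}
```

The priority order here (jaw before eyebrows before head turn) is an assumption made so that one frame yields at most one trigger.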

In short, it also helps you de-stress and relax.

How does it work?

Face DeeJay is built around FaceOSC, although other alternatives, such as Wekinator, were considered. The final decision to use FaceOSC came down to several factors, but mainly the organic, unpredictable quality of its tracking, a characteristic it shares with human non-verbal communication itself.
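FaceOSC streams face metrics as OSC messages with addresses such as `/gesture/jaw` and `/gesture/eyebrow/left`. The following is a minimal sketch of dispatching those addresses into an app-side state struct; the `FaceState` struct and `dispatch` function are hypothetical, and real openFrameworks code would read the values via ofxOsc rather than plain strings.

```cpp
#include <cassert>
#include <string>

// Hypothetical container for the face metrics the app cares about.
struct FaceState {
    float jaw      = 0.0f;
    float eyebrowL = 0.0f;
    float headYaw  = 0.0f;
};

// Route one FaceOSC-style message (address + value) into the state.
void dispatch(FaceState& s, const std::string& addr, float value) {
    if (addr == "/gesture/jaw")               s.jaw = value;
    else if (addr == "/gesture/eyebrow/left") s.eyebrowL = value;
    else if (addr == "/pose/orientation")     s.headYaw = value; // y rotation only, for brevity
}
```

In practice `/pose/orientation` carries three floats; only the yaw component is modelled here to keep the sketch short.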

Built in openFrameworks, the system uses the expressions explained above, currently limited to four (set to threshold values in the code), to trigger sounds and visuals organised into different stages. The stages are arranged in a switch case whose branch is randomly selected by a frown. On a further technical note, the samples and videos are loaded into the system once, and their current playback position is constantly checked to avoid sample retriggers or jittering. Some video clips are also mapped to translate across the screen along with the gestures.
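The stage selection and the retrigger guard described above can be sketched as follows. This is an illustrative, dependency-free version: the `Sample` struct stands in for an `ofSoundPlayer`-style object, and the names and stage count are assumptions rather than the app's actual code.

```cpp
#include <cassert>
#include <cstdlib>

// Stand-in for an ofSoundPlayer-style sample: position runs 0.0 .. 1.0.
struct Sample {
    bool  playing  = false;
    float position = 0.0f;
    void play() { playing = true; position = 0.0f; }
};

const int NUM_STAGES = 4;  // the app currently uses four expression triggers

// Frown handler: pick the next stage at random, as in the switch case.
int pickStage() {
    return std::rand() % NUM_STAGES;
}

// Retrigger/jitter guard: only (re)start a sample once it has run out.
void triggerSample(Sample& s) {
    if (!s.playing || s.position >= 1.0f) {
        s.play();
    }
}
```

The guard is the key detail: while a sample is still mid-playback, repeated gesture detections fall through without restarting it, which is what prevents the jitter mentioned above.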


Future implementations:


Future implementations will include more samples and visuals, different music styles as well as the ability to load own samples and visuals, although the intent is to keep the versions by and large style specific. While inclusion of other means of interaction are being considered, the intent is to keep the app fun and simple, although the potential for other avenues is present. They could perhaps be developed as separate lines of one product (i.e. a desk toy app and a pro app).


Development stages:


The stages/scenes were developed individually and then put together. Below are some samples of the stages:

Stage 2 - and I finally beat the lagging and the memory issues #openframeworks #coding #of







Samples:

Free sample packs from Loopmasters


Archival Footage:


"1920's Dances"

available at:

"Care and Feeding of a Mermaid"

available at:

licensed by: State Library and Archives of Florida, 500 S. Bronough St., Tallahassee, FL 32399-0250 USA. Contact: 850.245.6700.

"Old Car Races 1940s"

available at:

"Old Car Races and Crashes"

available at:

"Baby Bubbles (AKA Corky Dunbar) Strip"