
Corporature

An interactive installation that allows participants to reconnect with the sounds of nature in an entertaining way. 


produced by: Julien Mercier

Concept & background research

Nowadays, most people can recognise more logos and brands than animal species and the sounds they make. “Corporature” aims to reconnect us with the sounds of nature, while making us aware of this fact.

Technologist and philosopher Koert van Mensvoort coined the term “Next Nature” in 1998 to describe various phenomena and ideas. On the Next Nature blog, he gathers articles about technological innovations that resonate with those ideas, starting from the premise that preserving nature is essential, but that we must dare to look ahead and discuss the transformations nature is actually undergoing. “Will we grow meat without slaughtering animals? Embrace the robot as a colleague? Will men have babies one day? We want to go forward – not back – to nature.” (https://www.nextnature.net/about/)

Among the topics discussed on Next Nature, a recurrent one is how brands exploit representations of nature to sell more goods. This is often referred to as “greenwashing”: a company tries to sell a processed product by making it sound “natural.” Apparently, most of us can recognise more brands and logotypes than animal and plant species. Sometimes kids think that flowers in the forest smell like their shampoo, and not the other way around.

Taking these observations as my starting point, I wanted to create a simple interactive installation that makes us aware of this fact and invites us to discuss it.

 

source: Next Nature 

Technically

The program relies heavily on the DoodleClassifier example from the ml4a-ofx collection compiled by machine learning educator Gene Kogan. It uses the AdaBoost classification algorithm to learn how to discriminate between the things it sees through the webcam. It uses a series of addons (ofxCv, ofxCcv, ofxGrt, ofxGui, ofxOpenCv, ofxOsc…) to perform various tasks such as contour finding, building the interface, and sending signals over OSC. It also requires the user to train their own classification model with training data, before saving the model and running it live.
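For readers curious what such a pipeline looks like in code, here is a rough sketch, under the assumption that ofxCcv is used as a feature extractor and ofxGrt's AdaBoost as the classifier (as in DoodleClassifier); the function names, file path and variable names below are my own placeholders, not the project's actual code:

    // Sketch: ofxCcv extracts a feature vector from a cropped card image,
    // GRT's AdaBoost learns to tell the cards apart.
    #include "ofMain.h"
    #include "ofxCcv.h"
    #include "ofxGrt.h"

    ofxCcv ccv;
    GRT::ClassificationData trainingData;
    GRT::GestureRecognitionPipeline pipeline;

    void setupClassifier() {
        ccv.setup("image-net-2012.sqlite3");          // pre-trained convnet used as a feature extractor
    }

    void addTrainingSample(ofImage& cardImage, int classLabel) {
        vector<float> encoding = ccv.encode(cardImage, ccv.numLayers() - 1);
        if (trainingData.getNumDimensions() == 0) trainingData.setNumDimensions(encoding.size());
        GRT::VectorFloat sample(encoding.size());
        for (size_t i = 0; i < encoding.size(); i++) sample[i] = encoding[i];
        trainingData.addSample(classLabel, sample);   // one labelled example per card sighting
    }

    void trainAndSave() {
        pipeline.setClassifier(GRT::AdaBoost());      // the boosting classifier mentioned above
        pipeline.train(trainingData);
        pipeline.save("cards_model.grt");             // reload this instead of retraining every session
    }

    int classify(ofImage& cardImage) {
        vector<float> encoding = ccv.encode(cardImage, ccv.numLayers() - 1);
        GRT::VectorFloat input(encoding.size());
        for (size_t i = 0; i < encoding.size(); i++) input[i] = encoding[i];
        pipeline.predict(input);
        return pipeline.getPredictedClassLabel();     // the label assigned to this card
    }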

In practice, I link logotypes derived from animal representations with the sound the original animal makes in nature. I created a set of cards that users play with in front of a webcam. The webcam feed is passed to the openFrameworks application, which, using machine learning (AdaBoost classification), learns how to discriminate between the various cards; each card is assigned its own label. Up to this point, the code was mostly created by Gene Kogan as part of his ml4a (machine learning for artists) online course.
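The card-spotting step could look roughly like the sketch below, assuming ofxCv's ContourFinder is used to isolate each card in the webcam frame before it is sent to the classifier; the thresholds and sizes are illustrative placeholders, not the installation's actual settings:

    // Sketch: locate each card under the webcam and crop it for classification.
    #include "ofMain.h"
    #include "ofxCv.h"

    ofVideoGrabber grabber;
    ofxCv::ContourFinder contourFinder;

    void setupCamera() {
        grabber.setup(640, 480);                       // the PS3 Eye shows up as a normal grabber
        contourFinder.setMinAreaRadius(30);            // ignore specks smaller than a card
        contourFinder.setMaxAreaRadius(200);
        contourFinder.setThreshold(60);                // separate cards from the background
    }

    void findCards(vector<ofImage>& cardCrops) {
        grabber.update();
        if (!grabber.isFrameNew()) return;

        contourFinder.findContours(grabber);
        cardCrops.clear();
        for (int i = 0; i < contourFinder.size(); i++) {
            cv::Rect r = contourFinder.getBoundingRect(i);
            ofImage crop;
            crop.setFromPixels(grabber.getPixels());
            crop.crop(r.x, r.y, r.width, r.height);    // one crop per card, ready for classify()
            cardCrops.push_back(crop);
        }
    }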

Each card features a common logotype derived from an animal. When a card is placed under the webcam, the sound of the corresponding animal is played. I coded instructions (mostly conditional statements) so that the program launches animal sounds when a given label is assigned; as long as the animal is in the picture, it keeps making sounds. I sampled a few different sounds for each species to avoid repetition, and one of the available sounds is picked every time the label is still assigned.
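A minimal sketch of that sound-triggering logic is given below, assuming one small bank of ofSoundPlayer samples per class label; the file names and the loadSounds/triggerAnimal helpers are hypothetical, not the project's actual code:

    // Sketch: one bank of samples per class label, one sample picked at random per trigger.
    #include "ofMain.h"
    #include <map>

    std::map<int, vector<ofSoundPlayer>> soundBanks;   // class label -> its animal samples

    void loadSounds(int label, const vector<string>& files) {
        for (auto& f : files) {
            ofSoundPlayer p;
            p.load(f);                                 // e.g. "sounds/lion1.wav" (hypothetical path)
            p.setMultiPlay(false);
            soundBanks[label].push_back(p);
        }
    }

    void triggerAnimal(int label) {
        auto it = soundBanks.find(label);
        if (it == soundBanks.end() || it->second.empty()) return;
        int pick = ofRandom(it->second.size());        // avoid always replaying the same sample
        if (!it->second[pick].isPlaying()) {
            it->second[pick].play();
        }
    }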

As long as an animal is within the camera's field of view, its sound is played in a loop. There can be as many animals as the player desires, and they can experiment to find interesting combinations.
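Continuing the sketch above, the “keep making noise while the card is in view” behaviour could be handled in update() by comparing each sound bank against the set of labels detected in the current frame (again, the names are placeholders):

    // Sketch: retrigger sounds for animals still in view, silence the ones that left.
    #include <set>

    std::set<int> presentLabels;                        // rebuilt every frame from the classifier output

    void updateSounds() {
        for (auto& bank : soundBanks) {
            bool present = presentLabels.count(bank.first) > 0;
            bool playing = false;
            for (auto& player : bank.second) {
                if (player.isPlaying()) playing = true;
            }
            if (present && !playing) {
                triggerAnimal(bank.first);              // retrigger with a random sample
            }
            if (!present && playing) {
                for (auto& player : bank.second) player.stop();   // the animal "left" the scene
            }
        }
    }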

 

As they do so, users might notice some strange behaviours that are triggered when certain conditions are met. For instance, when a crocodile (a notoriously ferocious and antisocial creature) arrives somewhere, all of the other animals symbolically “leave” (in our case, they fall silent).

 

Another possible interaction occurs when two animals of the same species meet. At first, nothing out of the ordinary happens. But if the joker card “smooth jazz + red wine” is played, the aroused animals make a lot more noise, suggesting they have found an occupation to overcome boredom.

 


Furthermore, some animals do not get along well. The panda, a notorious communist, won’t speak if the republican elephant is in the same room (an elephant in the room, ha), probably because of the threats of economic sanctions the elephant keeps making against the panda’s motherland. Conversely, if the elephant arrives after a panda is already there, it will refuse to make a sound.

These special situations were coded using simple multi-condition statements, as sketched below. I feel this would be an interesting path to explore further, in order to add levels of complexity to the initially simple interaction.
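As an illustration, that kind of multi-condition check might look like the following, building on the presentLabels set from the earlier sketch; the label constants and the arrival-order flag are placeholders of my own, not the project's actual code:

    // Sketch: decide whether an animal should stay silent given who else is in view.
    enum CardLabel { CROCODILE = 1, PANDA, ELEPHANT, LION /* ... */ };

    bool pandaArrivedFirst = false;                     // updated when a card first appears

    bool isSilenced(int label) {
        bool crocodileHere = presentLabels.count(CROCODILE) > 0;
        bool pandaHere     = presentLabels.count(PANDA) > 0;
        bool elephantHere  = presentLabels.count(ELEPHANT) > 0;

        // the crocodile silences every other animal
        if (crocodileHere && label != CROCODILE) return true;

        // whoever of the panda/elephant pair arrives second refuses to speak
        if (label == PANDA && elephantHere && !pandaArrivedFirst) return true;
        if (label == ELEPHANT && pandaHere && pandaArrivedFirst) return true;

        return false;
    }

In the update loop, isSilenced(label) would simply be checked before triggerAnimal(label) is called.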

 


I built a little wooden frame to hold the PS3 Eye camera at a fixed distance, making it easier to reuse a trained model from one session to the next (fixed distance = fixed picture scale) and turning the piece into a “ready-to-install” kit.

At first, I also intended to project the assigned label (the name of the spotted animal) next to the physical card, so I added a small projector (see below) next to the PS3 Eye camera. Unfortunately, the projection turned out to be so small that it became evident this couldn’t be done in the simple, resourceful way I had hoped.

 

 

Self-evaluation

All things considered, I am fairly happy with the outcome of the project. Technically, I didn’t face many difficulties, because I built on code that was already accessible to me. I also used what I’d learned in Rebecca Fiebrink’s course to explore and understand a real application of machine learning: we had studied how AdaBoost generalises when making predictions, and it was very interesting to develop a real-life application using it. The project also allowed me to explore some important addons such as ofxGui (for interfaces), ofxCv & ofxCcv (for computer vision), and ofxGrt (for gesture recognition).

In openFrameworks, the challenge was mostly to read and understand the code I found in the example, and then add a modest layer of “consequences” on top of the existing classifier application. At some point, I considered turning my addition into an addon to keep it separate and make clear which parts were my work, but time ran short, so instead I simply commented my additions to the previously existing code. I used a series of tools seen in class, but most were simple: sound players, loops, and if statements with many conditions. In short, the simplest building blocks of openFrameworks, but I realised I didn’t need more.

I know I could have made the code more efficient, more elegant and shorter, but I decided to spend more time on the collateral elements (designing the cards, building the wooden frame and sampling sounds from various animal sound databases), since those mattered more for the users’ experience of the installation, whereas users couldn’t really tell how short and clean my code was. If I were to post the project on GitHub, I would spend some time cleaning it up, though.

 

References

— the Next Nature design think tank

— the Animal Sound Archive website

— the Macaulay wildlife media archive

— the DoodleClassifier project, Andreas Refsgaard & Gene Kogan

— the GitHub repository for ml4a (machine learning for artists)

— the Sketchy Database

— Noah’s Ark, China vs USA, the “Our Planet” documentaries, and Babar the elephant, who taught me crocodiles were scary