More selected projects

 

Flocking Encounters

 

This work explored the interaction possibilities between a simulated flocking system and contemporary dancers.

 

Produced by Romain Biros

 

 

Research background

Complex natural phenomena are all around us: what can we learn from them, and how can they be simulated?

In the search for a complex system behaviour that would be aesthetically challenging and inspiring, I turned to the work of Craig Reynolds. He dissected the way a flocking system could be understood, describing the following rules:

  1. Separation: steer to avoid crowding local flockmates
  2. Alignment: steer towards the average heading of local flockmates
  3. Cohesion: steer to move toward the average position of local flockmates

These rules can be programmed in different languages, and with the help of Daniel Shiffman’s tutorial I was able to write them in Open Frameworks and then in OpenCL.
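
To make these rules concrete, here is a minimal CPU-side sketch of the three behaviours in the spirit of Shiffman’s tutorial; the Particle struct, neighbour radius and weights are illustrative assumptions rather than the project’s actual code:

#include "ofMain.h"

// One boid: position and velocity.
struct Particle {
    ofVec2f pos, vel;
};

// Returns a steering force combining Reynolds' three rules (weights are arbitrary).
ofVec2f flockingForce(const Particle& p, const std::vector<Particle>& flock,
                      float neighbourRadius) {
    ofVec2f separation, alignment, cohesion;
    int count = 0;
    for (const Particle& other : flock) {
        float d = p.pos.distance(other.pos);
        if (&other == &p || d > neighbourRadius) continue;
        separation += (p.pos - other.pos) / (d * d + 1e-6f); // steer away, stronger when close
        alignment  += other.vel;                             // average heading of neighbours
        cohesion   += other.pos;                             // average position of neighbours
        count++;
    }
    if (count == 0) return ofVec2f(0, 0);
    alignment /= count;
    cohesion   = cohesion / count - p.pos;                   // steer toward the local centre
    return separation * 1.5f + alignment + cohesion;
}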

A major flaw of modern complex-systems thinking is its tendency to study one system in isolation; this prevents you from foreseeing and fully understanding the range of reactions of the system you are trying to understand.

I then decided to explore how the flocking system I designed would react to an alien body. Using computer vision, I was able to let dancers interact with, and draw inspiration from, their proximity to the autonomous agents they were confronted with.

 

Technical

Flocking:

A simulated flocking system is computationally expensive: the naive algorithm compares every particle with every other. After programming it in Open Frameworks on the CPU only, I was quickly limited in the number of particles and in the aesthetic and interactive possibilities that could emerge from it.

After a bit of research, I decided to teach myself how to use the computing power of the graphics card through OpenCL. Thanks to Memo Akten’s addon ofxMSAOpenCL, I was able to understand how OpenCL could work with Open Frameworks.

OpenCL is a framework maintained by the Khronos Group that allows programmers to write code that runs across CPUs, GPUs and other types of processors. Kernels are written in a language called OpenCL C, which is based on C99. It was a challenging task to get into its complexity and low-level nature, but I eventually managed to port the entire flocking algorithm onto it.
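
To give an idea of what the language looks like, here is a simplified OpenCL C kernel in which each work-item integrates one particle; the buffer layout and parameter names are illustrative assumptions, not the project’s actual kernel:

// Simplified kernel: one work-item per particle, reading and writing global buffers.
__kernel void updateParticles(__global float2* pos,
                              __global float2* vel,
                              const float2 steer,    // pre-computed steering force (simplified)
                              const float maxSpeed,
                              const float dt)
{
    int i = get_global_id(0);
    float2 v = vel[i] + steer * dt;
    float s = length(v);
    if (s > maxSpeed) v = v * (maxSpeed / s);        // clamp to the maximum speed
    vel[i] = v;
    pos[i] += v * dt;
}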

The first result was very satisfying, and after acquiring an eGPU connected to my laptop, I was able to go from 1,000 to more than 20,000 real-time flocking particles.

The investment was a risky move, as I wasn’t sure whether Memo’s addon would detect the new card, but the following lines of the addon allowed me to choose the eGPU as the graphics device for the program to run on:

// or use deviceNumber = -1 to use last device found (best for Macs with multiple GPUs)
// TODO: allow passing in multiple devices
void        setup(int clDeviceType = CL_DEVICE_TYPE_GPU, int deviceNumber = -1);
void        setupFromOpenGL(int deviceNumber = -1);
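
In practice, selecting the device is a single call. A minimal sketch, assuming the addon’s main class is msa::OpenCL and that the OpenCL context is created from the existing OpenGL one:

#include "MSAOpenCL.h"

// Pick the last OpenCL device found, which on a multi-GPU Mac is typically the eGPU.
msa::OpenCL opencl;
void setupCL() {
    opencl.setupFromOpenGL(-1);   // -1 = use the last device found
}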

To be able to change the numerous flocking parameters (cohesion, attraction, separation, speed, etc.) I created GUIs using ofxGui and built an iPhone remote with TouchOSC and ofxOsc. The app was very useful while experimenting with the dancers and allowed me to discover unexpected flocking behaviours, as you can witness in the first seconds of the performance video.
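
For illustration, here is a minimal ofxOsc receiving loop of the kind used to map incoming messages onto the flocking parameters; the OSC addresses, port and variable names are assumptions based on a typical TouchOSC layout, not the project’s actual ones:

#include "ofxOsc.h"

ofxOscReceiver receiver;
float cohesion = 1.0f, separation = 1.0f, maxSpeed = 2.0f;

void setupOsc() {
    receiver.setup(8000);                              // port set in the TouchOSC layout
}

void updateOsc() {
    while (receiver.hasWaitingMessages()) {
        ofxOscMessage m;
        receiver.getNextMessage(m);
        if (m.getAddress() == "/flock/cohesion")   cohesion   = m.getArgAsFloat(0);
        if (m.getAddress() == "/flock/separation") separation = m.getArgAsFloat(0);
        if (m.getAddress() == "/flock/speed")      maxSpeed   = m.getArgAsFloat(0);
    }
}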

Computer vision:

In order for the system to interact with the dancers, I am using two computer vision techniques, optical flow and contour detection from ofxCv, along with a PS3 Eye camera.

The challenge here was sending all the information returned by those two algorithms to the OpenCL kernels (i.e. the GPU functions) in real time. Flow.h and Flow.cpp from ofxCv were modified to include a function that returns the vertices of the detected flow. On the OpenCL side, I randomly assigned the 20k particles to be attracted to the vertices returned by the movement-detection algorithm.

With optical flow, the particles are attracted to movement and are set free again when there is none; the amount of movement also influences the speed of the particles. With contour detection, the particles are attracted to any detected contour and are set free if there is none.
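
A rough sketch of what that attraction can look like inside a kernel; the buffer names and the way targets are assigned per particle are illustrative assumptions:

// Each particle was pre-assigned a random target index on the CPU.
// When no vertices are detected, the particles simply keep their flocking velocity.
__kernel void attractToFlow(__global float2* pos,
                            __global float2* vel,
                            __global const float2* targets,   // vertices from the optical flow
                            const int numTargets,
                            __global const int* targetIndex,  // random per-particle assignment
                            const float strength,
                            const float dt)
{
    int i = get_global_id(0);
    if (numTargets > 0) {
        float2 toTarget = targets[targetIndex[i] % numTargets] - pos[i];
        float d = length(toTarget);
        if (d > 0.0f) vel[i] += (toTarget / d) * strength * dt;  // steer toward the vertex
    }
    pos[i] += vel[i] * dt;
}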

In the main video of the performance, only optical flow was used, but you can see contour detection in action in the testing videos above.

I was also inspired by work from Denis Perevalov and Igor Sodazot, who used Memo’s OpenCL addon to have the particles recreate a static low-resolution image stored in the data folder. After understanding and adapting their code, I was able to decide when a snapshot from the PS3 Eye’s captured image would be reconstructed by the flocking particles. This gave quite interesting and unexpected behaviours, as you can see in the video below.
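
Conceptually, the snapshot feature boils down to giving every particle a target position sampled from a bright pixel of the captured frame. A simplified sketch of that assignment step, with names and the sampling rule as illustrative assumptions rather than Perevalov and Sodazot’s actual code:

#include "ofMain.h"

// Give every particle a home position taken from a bright pixel of the snapshot,
// so the swarm redraws the image when these targets are enabled.
void assignSnapshotTargets(const ofPixels& snapshot, std::vector<ofVec2f>& targets,
                           int numParticles) {
    targets.resize(numParticles);
    for (int i = 0; i < numParticles; i++) {
        int x = 0, y = 0;
        for (int tries = 0; tries < 50; tries++) {            // look for a bright pixel
            x = (int)ofRandom(snapshot.getWidth());
            y = (int)ofRandom(snapshot.getHeight());
            if (snapshot.getColor(x, y).getBrightness() > 128) break;
        }
        targets[i] = ofVec2f(x, y);
    }
}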

When the picture is formed, the particles are still influenced by the dancer’s movement, allowing them to slowly deconstruct the image by moving and reconstruct it by standing still. You can see this behaviour below:

The following add-ons were used:

  • ofxCv
  • ofxGui
  • ofxKinect
  • ofxMSAOpenCL
  • ofxOpenCv
  • ofxOsc
  • ofxPS3EyeGrabber

 

Future improvements

There is still a lot of work to do to integrate the information returned by computer vision and have it influence the particles’ flocking behaviour more deeply. I will also use a Kinect in the future, as it seems much more convenient for getting a clear contour of whatever is shown to the camera.

I want to give the particles more agency when no interaction is happening. Using machine learning, I could make them able to recognise specific movements or objects.

I still need to fully understand how flexible the OpenCL integration is; at the moment I am using a fixed number of particles that are initialised in GPU memory at the start of the program.

I might use this program for purposes other than interacting with dancers; I could also invite the audience to interact with the particles directly.

Aesthetically, I will add a third dimension to give a sense of depth and have the interaction rotate around the y-axis.

OpenCL is also quite popular for image processing; this is another aspect I want to investigate.


Sources

Craig Reynolds and Boids: https://www.red3d.com/cwr/boids/

Flocking algorithm inspired by Daniel Shiffman's instructions: https://www.youtube.com/watch?v=mhjuuHl6qHM

Particle foundation and repulsion algorithm inspired by Memo Akten's particle example: https://github.com/memo/ofxMSAOpenCL/tree/master/example-Particles

Particle attraction to picture key points from Denis Perevalov and Igor Sodazot: https://github.com/kuflex/examples/tree/master/Morph-images