More selected projects

Endless Boundaries

A collaboration between two performers and a creative coder. For this performance I developed motion-tracking software that represents movement graphically, accompanying a physical theatre and contemporary dance piece.


Produced by: Amit Segall

Concept and background research

For this project I teamed up with two female performers who were working on a new piece. Their work was presented as part of “she created her life”, an all-day event made by women to empower women to create the life they would love. I suggested using motion tracking to add a new element and depth to their work. We ended up describing the piece as follows:

When we are born, our assigned sex defines our role in society: how we will behave, what will be expected of us and the boundaries we will face in life. Due to our social-gendered role as women, we struggle today to define what our voice is, and whether it reflects what we want or what is imposed on us. Applying motion-tracking technologies to physical theatre and contemporary dance, we question women's role in the 21st century from both a biological and a social perspective.

I wanted to highlight different movement aesthetics and enhance them with different visuals, telling the story of the piece using graphics as an additional performer on stage. The performers and I gathered visual references, and I explored different techniques for extracting information from cameras, experimenting with several camera types. We knew in advance that we wouldn't have much time to set up during the show and that our dress rehearsal would be on the day of the performance. A few weeks beforehand we visited the venue to understand the stage settings and dimensions, verifying that the screen reached the floor and measuring its width. Taking everything into account, I was ready to start working on the software.

Technical

I faced several challenges when I approached this piece. The first was deciding, before starting, whether to use a depth camera (Kinect) or a standard camera. A depth camera would let me play with different colours, for example projecting black, since infrared sensing works in low-light settings as well [1]. A standard camera, by contrast, requires some lighting to work efficiently and relies on accurate placement. I chose a standard camera because my depth camera could not cope with the stage settings at the venue where we were performing. I ended up using the PS3 Eye camera, and it proved to be the right choice: it is flexible and easy to calibrate to any need. I decided not to use stage lighting and instead projected white as light during the show. This affected my colour choices but reduced the technical challenges and allowed me to control the performance without depending on additional lighting from the venue.

The second challenge was aligning the camera and the projector. I wanted the graphics to represent only what was in front of the projected surface, at its original dimensions. I looked into different cropping and warping techniques to adjust the camera frame before processing it, and ended up using a homography to solve this [2,3]. It took a while to break down and understand the topic, but the result is a dynamic and robust system that lets the performance be transferred easily to spaces of different dimensions while maintaining its interactivity and functionality.

Throughout the process I tried different computer vision algorithms, eventually settling on optical flow to detect direction of movement, combined with frame differencing [3,4]. I used several openFrameworks addons in my software and encapsulated all of them in my own classes, which made it easier to add features to the main software and to debug it. I tested every function separately before putting it all together. One of the addons I used, "ofxFlowTools", uses shaders to create different effects, so I had to learn the basics of shaders in order to adapt them to my liking and needs. Using shaders, I was able to produce richer graphics on a standard computer. The software can run in different modes: automatically, as a performance or installation, or manually, with visuals and settings changed by hand. I added menus and hotkeys to control and change settings on the go rather than hard-coding every small change, which allows quick adjustment and calibration when needed.

Future development

There are a few things I still want to implement in the software. The first is automated homography/camera calibration: using markers to detect the position of the screen and adjust to it accordingly would save time and be more accurate. I would also love to experiment more with shaders and explore different textures and graphics. As the visual artist, I was limited in the graphical effects I could produce in the performance setting (with no lights) because of the camera I chose; some visuals created feedback with the camera and did not come out as I expected, and more experimentation could solve this.

Self evaluation

Overall, I feel I have created a great tool that I will surely keep developing and exploring. Having no experience as a visual artist, it is a big challenge to generate aesthetics that reflect what I imagine with the set of skills I currently hold. For me, this project proved that I have gained new practical skills that extend my craft. In addition, it was great to have the chance to showcase my work in front of an audience. We even have a date for another performance, at the APT Gallery in two weeks (May 5th 2018).

 

References

1. Han, Jungong, Ling Shao, Dong Xu, and Jamie Shotton. "Enhanced Computer Vision with Microsoft Kinect Sensor: A Review." IEEE Transactions on Cybernetics 43, no. 5 (2013): 1318-1334.

2. Moreno, Daniel, and Gabriel Taubin. "Simple, Accurate, and Robust Projector-Camera Calibration." In 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), pp. 464-471. IEEE, 2012.

3. Bradski, Gary, and Adrian Kaehler. "OpenCV." Dr. Dobb's Journal of Software Tools, 2000.

4. Noble, Joshua. Programming Interactivity: A Designer's Guide to Processing, Arduino, and openFrameworks. O'Reilly Media, 2009.

Addons I have used in my project:

ofxFboBlur – by Oriol Ferrer Mesià

ofxFlowTools – by Matthias Oostrik

ofxCv -  by Kyle McDonald

ofxPS3EyeGrabber – by Christopher Baker

Visual references: