
Between The Lines

An interactive projection-mapping piece that uses a Kinect and discarded Bibles and recipe books to define its relationship with the viewer.


produced by: Owen Planchart

Concept and Research

“The syntactical nature of reality, the real secret of magic, is that the world is made of words. And if you know the words that the world is made of, you can make of it whatever you wish.” (Terence McKenna)

This project began as a playful exploration of how we use words as symbols to represent reality. I thought it would be interesting to subvert this idea by using words as images rather than for the meaning they encapsulate. Initially, the projected silhouette of the person in front of the canvas was to be typed out from left to right, with the writing drawn from a randomised quote about the nature of language. However, as I started to experiment with the depth camera, I found myself hitting its natural limits, often by being too close (less than 60cm) or too far (more than 4m). This made me pivot the focus of the work from the general nature of language to the specific way we use it to mark the borders between ourselves and the rest of the world.

We live in an age where the unwritten rules about what defines us as individuals are constantly changing due to technological and attitudinal shifts in society. How do we communicate our boundaries? How do we express consent? What is privacy? This piece translates the hard-coded limitations of a Kinect camera into language cues that tell you where to stand, and projects them onto the blank pages of old Bibles and recipe books. It attempts to give a clear signal about its value system but becomes cluttered with its own messages, forcing us to read between the lines.

Allowing the viewer to explore the piece by moving closer or further away became analogous to the way mankind is groping in the dark around the subject of individual freedom. Gender politics and internet privacy are just two of the many areas in which we simply have no clear guide as to which paradigm to follow.

The Bibles and recipe books were used to represent older structures of values and guidelines, and to deepen the question: how do we codify our own limits in an age of unfettered access to unlimited information?

Technical

This work was made using openFrameworks with two addons: ofxKinect and ofxOpenCV. I created my own Typewriter class but failed to weave it through the Kinect code, as I soon realised that doing so would be too computationally expensive.
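
The class itself never made it into the final piece, but a minimal sketch of the idea, with illustrative names and timings rather than the ones from my actual code, could look like this:

    #pragma once
    #include "ofMain.h"

    // Minimal typewriter effect: reveal a string one character at a time.
    class Typewriter {
    public:
        void setup(const std::string & text, float secondsPerChar = 0.05f) {
            fullText = text;
            interval = secondsPerChar;
            visibleChars = 0;
            lastTick = ofGetElapsedTimef();
        }
        void update() {
            if (visibleChars < fullText.size() &&
                ofGetElapsedTimef() - lastTick > interval) {
                visibleChars++;
                lastTick = ofGetElapsedTimef();
            }
        }
        void draw(float x, float y) const {
            ofDrawBitmapString(fullText.substr(0, visibleChars), x, y);
        }
    private:
        std::string fullText;
        float interval = 0.05f;
        float lastTick = 0;
        std::size_t visibleChars = 0;
    };

The expensive part, and where my attempt stalled, was weaving this per-character drawing through the silhouette extracted from the depth image on every frame.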

One of the most difficult things to achieve was keeping the vectors of strings legible; the different sizes and widths of the words presented the main problem. The messages were grouped into three categories: Too Far, Perfect and Too Close. Each category was then turned into a vector of words with similar meanings, placed randomly across specific depth thresholds (distance at x, y). Using the contour finder could be one way around the legibility problem. I did not want the result to feel too grid-like, so in the end I opted for it to be a little unintelligible and let the viewer explore the piece through movement.
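
As a rough illustration of how the depth thresholds, word vectors and colour coding fit together (the thresholds, word lists and colours below are placeholders, not the ones used in the piece), the drawing loop could work along these lines:

    #include "ofMain.h"
    #include "ofxKinect.h"

    class ofApp : public ofBaseApp {
    public:
        ofxKinect kinect;
        // Placeholder word vectors for the three categories.
        std::vector<std::string> tooClose {"BACK OFF", "TOO NEAR", "STEP AWAY"};
        std::vector<std::string> perfect  {"STAY", "RIGHT HERE", "YES"};
        std::vector<std::string> tooFar   {"COME CLOSER", "LOST YOU", "WHERE ARE YOU"};

        void setup() {
            kinect.init();
            kinect.open();
        }

        void update() {
            kinect.update();
        }

        void draw() {
            ofBackground(0);
            int step = 40; // coarse grid so the words have room to breathe
            for (int y = step; y < kinect.getHeight(); y += step) {
                for (int x = 0; x < kinect.getWidth(); x += step) {
                    float mm = kinect.getDistanceAt(x, y); // depth in millimetres
                    if (mm <= 0) continue;                 // no reading at this pixel
                    std::string word;
                    if (mm < 600) {             // closer than ~60 cm
                        ofSetColor(255, 60, 60);
                        word = tooClose[(int) ofRandom(tooClose.size())];
                    } else if (mm < 4000) {     // the "perfect" band
                        ofSetColor(60, 255, 60);
                        word = perfect[(int) ofRandom(perfect.size())];
                    } else {                    // beyond ~4 m
                        ofSetColor(60, 60, 255);
                        word = tooFar[(int) ofRandom(tooFar.size())];
                    }
                    ofDrawBitmapString(word, x, y);
                }
            }
        }
    };

In practice the random choice would need to be cached per grid cell, or re-rolled only every few seconds; otherwise the words flicker on every frame.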

Instead of using an FBO for the video mapping, I used a simple trick: masking the borders of the canvas with three black rectangles.
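
In other words, the scene is drawn full-screen and everything outside the physical canvas is simply painted over in black. A sketch of that masking step, where the canvas position and size are placeholders for the measured values and the canvas is assumed to sit flush with the bottom of the projected image (hence only three rectangles):

    // Mask everything outside the canvas area with black rectangles,
    // called at the end of draw() after the scene has been rendered.
    void drawMask(float canvasX, float canvasY, float canvasW, float canvasH) {
        ofSetColor(0);
        ofDrawRectangle(0, 0, ofGetWidth(), canvasY);                       // strip above the canvas
        ofDrawRectangle(0, 0, canvasX, ofGetHeight());                      // strip to the left
        ofDrawRectangle(canvasX + canvasW, 0,
                        ofGetWidth() - (canvasX + canvasW), ofGetHeight()); // strip to the right
    }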

Future development

This idea is not fully cooked yet. Connecting string vectors and the Kinect opens up a minefield of potential outcomes. These are some of the paths and improvements that could be pursued:

1) The typewriter effect will definitely be explored in future iterations.

2) Making it interact with text that is pertinent to the viewer, allowing the user to input a word and see it visualised within their own image.

3) Using machine learning to draw the shape of the object being portrayed out of words that relate to that object, perhaps by connecting it to a Wikipedia or Twitter API.

4) Automatically taking screenshots or video whenever people "play" with the piece for longer than a given amount of time, in order to record the interaction.

5) Achieving a similar effect in a more three-dimensional space by using the point cloud (a rough starting point is sketched below).
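
For point 5, a starting point borrowed from the approach in the stock ofxKinect example, rather than from my own code, would be to build an ofMesh of world-space points from the depth data and then replace the points with words:

    #include "ofMain.h"
    #include "ofxKinect.h"

    // Build a point cloud from the Kinect depth data; drawing words at these
    // vertices (instead of points) would be the next step for the piece.
    ofMesh buildPointCloud(ofxKinect & kinect) {
        ofMesh mesh;
        mesh.setMode(OF_PRIMITIVE_POINTS);
        int step = 4; // sample every 4th pixel to keep the mesh light
        for (int y = 0; y < kinect.getHeight(); y += step) {
            for (int x = 0; x < kinect.getWidth(); x += step) {
                if (kinect.getDistanceAt(x, y) > 0) {
                    mesh.addColor(kinect.getColorAt(x, y));
                    mesh.addVertex(kinect.getWorldCoordinateAt(x, y));
                }
            }
        }
        return mesh; // typically drawn inside an ofEasyCam for navigation
    }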

Self Evaluation

I "burnt" a lot of the allocated time on trying to figure out how to do the typewriting effect within the silhouette. In retrospect I wish I had managed my time better and generated a contingency plan from the outset. 

The colour coding came about as I was trying to identify the thresholds between the three stages of closeness, and it became a huge part of why the piece was effective. The level of interactivity the piece attracted was unexpected; it gave me a great sense of accomplishment, as it felt like the code had a life of its own. Overall I am very proud of the work, especially because of the amount of learning it forced out of me as I worked through the conceptual permutations while getting to grips with the tools.

References

I used the Kinect example in the openFrameworks examples folder and watched Daniel Shiffman's series on the Kinect from his Coding Train channel.