Capgras



produced by: Joseph Rodrigues Marsh

Introduction

Capgras is a grotesque computer mirror that manipulates the viewer's reflection, warping and distorting their features into a horrifying new form, one that resembles the original but is altered over time into an unrecognisable composition. Capgras is a playful take on traditional fairground attractions such as the Hall of Mirrors, encouraging viewers to question who it is they see before them. The installation engages the viewer by presenting a reflection that is recognisably their own, yet unfamiliar.

Concept and background research

This project was inspired by delusional misidentification syndromes, in which sufferers hold the belief that the identity of a person, object, or place has somehow changed or been altered. One such syndrome, the Capgras delusion, causes the sufferer to believe that people or animals close to them have been replaced by identical impostors. I wanted to explore this illusion of reality, where what is real remains present but transformed in some way, with the mirror image becoming something recognisable yet uncomfortably different. Inspiration was also drawn from the Twilight Zone episode Mirror Image, where the protagonists encounter identical doubles of themselves, causing them to question whether the people they encounter are real or fabricated.

Technical

To achieve this, I created a computer mirror that distorts the viewer's reflection over time. Live images are captured using a Kinect camera, producing a depth image that is converted into a point cloud of vertices in three-dimensional space. These vertices are stored in an array and manipulated in real time using a four-dimensional noise algorithm: the original position of each vertex is retained, and a noise value is calculated and added to that vertex's position, allowing for smooth movement in the point cloud. The algorithm is written so that over a set period the vertices return to their original positions, creating patterns that loop seamlessly.
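As a rough illustration, the displacement step could be sketched as follows in openFrameworks, assuming the current and original vertex positions are held in parallel arrays (the function and parameter names here are illustrative). Mapping time onto a circle in the last two noise dimensions is one way to make the field return to its starting state once per loop, producing the seamless looping described above.

    #include "ofMain.h"

    void displaceVertices(std::vector<glm::vec3>& verts,
                          const std::vector<glm::vec3>& originals,
                          float t, float loopLength,
                          float noiseScale, float amplitude) {
        // Travel around a circle in the third and fourth noise dimensions;
        // after loopLength seconds the samples repeat exactly.
        float angle = TWO_PI * fmodf(t, loopLength) / loopLength;
        float nx = cosf(angle);
        float ny = sinf(angle);

        for (std::size_t i = 0; i < verts.size(); ++i) {
            const glm::vec3& p = originals[i];
            // Four-dimensional noise: two spatial dimensions plus the time circle.
            float n = ofSignedNoise(p.x * noiseScale, p.y * noiseScale, nx, ny);
            // Offset each vertex from its stored original position, so the
            // true reflection is always recoverable from the originals array.
            verts[i] = p + glm::vec3(0.0f, 0.0f, n * amplitude);
        }
    }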

These vertices are then meshed using Delaunay triangulation via the ofxDelaunay addon. This adds faces to the point cloud, creating a meshed three-dimensional model and a more true-to-life representation of the reflection.
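A hedged sketch of the meshing step, assuming the common ofxDelaunay interface (reset, addPoint, triangulate and a public triangleMesh member); cloudPoints stands in for the captured Kinect vertices.

    #include "ofxDelaunay.h"

    ofMesh meshPointCloud(const std::vector<ofPoint>& cloudPoints) {
        ofxDelaunay triangulator;
        triangulator.reset();
        for (const auto& p : cloudPoints) {
            triangulator.addPoint(p); // triangulated over x/y; z is preserved
        }
        triangulator.triangulate();
        return triangulator.triangleMesh; // the point cloud with faces added
    }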

The application has two modes: real-time and portrait. In real-time mode, the Kinect captures a mask of the viewer and its vertices are updated every frame; viewers see their forms modulate live, with the distortions changing depending on where they stand in relation to the capture device. In portrait mode, the application takes a snapshot every 10 seconds. Initially, the viewer's original reflection is displayed, giving a momentary glimpse of reality; over time, the image is progressively distorted. This encourages the viewer to hold a pose and then inspect their newly created form.
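The snapshot timing in portrait mode might be structured like this inside update(); the member names and ramp rate are hypothetical, but the 10-second interval matches the behaviour described above.

    // Hypothetical members: mode, liveCloud, capturedCloud,
    // lastCaptureTime, distortionAmount, rampRate.
    const float captureInterval = 10.0f; // seconds between snapshots

    if (mode == Mode::PORTRAIT) {
        if (ofGetElapsedTimef() - lastCaptureTime >= captureInterval) {
            capturedCloud    = liveCloud;          // freeze the current reflection
            lastCaptureTime  = ofGetElapsedTimef();
            distortionAmount = 0.0f;               // begin with the true image
        }
        // Progressively distort the frozen snapshot until the next capture.
        distortionAmount += rampRate * ofGetLastFrameTime();
    }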

The application consists of various scenes, or camera angles, chosen autonomously by a Director function as the program runs. When the Director cuts to a new angle, the camera's position is updated from a saved camera preset, created using the ofxCameraSaveLoad addon. Each angle offers the viewer a new perspective, from traditional portraiture to its reverse, where the mesh is flipped and the portrait is displayed from the inside out, building on the layers of surrealism and distortion. The Director also updates the noise values used to create the distortion, choosing from a range of values depending on the current display mode.
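A cut might look roughly like this, assuming ofxCameraSaveLoad's ofxLoadCamera helper (the addon provides ofxSaveCamera and ofxLoadCamera for ofCamera instances); the preset filenames and noise ranges are illustrative.

    #include "ofxCameraSaveLoad.h"

    void cutToNewAngle(ofCamera& cam, float& noiseScale,
                       int numPresets, float minNoise, float maxNoise) {
        // Pick one of the camera positions saved while composing the scenes.
        int preset = (int) ofRandom(numPresets);
        ofxLoadCamera(cam, "cameras/angle" + ofToString(preset) + ".cam");
        // Re-roll the distortion parameters for the new angle.
        noiseScale = ofRandom(minNoise, maxNoise);
    }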

The application uses ofxPostProcessing to add depth of field to the finished image. This was important for the overall visual effect of the project, as the image needed to feel as though it had depth, simulating an effect similar to the distortion caused by curved glass.
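Wiring this up is brief with ofxPostProcessing's pass-based API, sketched here assuming DofPass is visible as in the addon's examples and left at its default settings.

    #include "ofxPostProcessing.h"

    ofxPostProcessing post;

    void setupPost() {
        post.init(ofGetWidth(), ofGetHeight());
        post.createPass<DofPass>(); // depth-of-field pass
    }

    void drawScene(ofCamera& cam, ofMesh& mesh) {
        post.begin(cam); // render the scene through the post-processing chain
        mesh.draw();
        post.end();      // depth of field is applied to the final image
    }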

The application is designed to be projected onto a large white wall within an exhibition space, situated in an area with high foot traffic. The Kinect is embedded roughly at the viewer's eye level. It can capture many forms in a single image, allowing multiple viewers to interact with the installation concurrently.

Future development

My main aim for future development is to make the program run in real time. Currently it runs at around 20 fps when there is a lot of information in the scene, which is not ideal for an installation environment. The possibility of manipulating the vertices in vertex shaders occurred to me during development, but due to the time constraints of this project I was unable to create a functional prototype.

Using multiple Kinect cameras at the same time would allow a richer and more complete model to be captured. Depending on the angle from which the viewer approaches the camera, some areas of the face can be occluded by features such as the nose. This could be alleviated by using multiple Kinects capturing the image in an arc around the projection.

I would also seek to incorporate multiple versions of the same captured image, each with its own unique noise parameters and features, creating a collage effect. This would allow viewers to compare the different created avatars and watch them blend and morph into one another.

Self evaluation

I had initially intended to create a reaction-diffusion shader that would manipulate the vertices captured from the Kinect based on the feed and kill rates of the algorithm, the intention being to create naturally forming patterns across the viewer's reflection. After getting the reaction-diffusion shader to manipulate the vertices of a simple plane, the next step of manipulating Kinect vertices in real time proved a step too far in the time remaining.
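For context, the underlying Gray-Scott model updates two chemical concentrations per cell from their diffusion terms plus the feed and kill rates. A minimal CPU sketch of one update step follows; the shader version would perform the same update per texel, and the parameter values here are typical published ones, not the project's.

    #include <vector>

    void grayScottStep(std::vector<float>& A, std::vector<float>& B,
                       int w, int h,
                       float feed = 0.055f, float kill = 0.062f,
                       float dA = 1.0f, float dB = 0.5f, float dt = 1.0f) {
        auto idx = [w](int x, int y) { return y * w + x; };
        std::vector<float> A2 = A;
        std::vector<float> B2 = B;
        for (int y = 1; y < h - 1; ++y) {
            for (int x = 1; x < w - 1; ++x) {
                int i = idx(x, y);
                // Five-point Laplacian approximates diffusion of each chemical.
                float lapA = A[idx(x-1,y)] + A[idx(x+1,y)]
                           + A[idx(x,y-1)] + A[idx(x,y+1)] - 4.0f * A[i];
                float lapB = B[idx(x-1,y)] + B[idx(x+1,y)]
                           + B[idx(x,y-1)] + B[idx(x,y+1)] - 4.0f * B[i];
                float reaction = A[i] * B[i] * B[i];
                // Feed replenishes chemical A; kill (plus feed) removes B.
                A2[i] = A[i] + (dA * lapA - reaction + feed * (1.0f - A[i])) * dt;
                B2[i] = B[i] + (dB * lapB + reaction - (kill + feed) * B[i]) * dt;
            }
        }
        A.swap(A2);
        B.swap(B2);
    }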

I decided to strip the project back completely, with less focus on the technical aspect and more on the concept. While the finished product is far removed from my initial idea, and although I am happy with the outcome, it is frustrating that I could not realise my original goal. With more time spent on this final concept, I feel I could have achieved a more polished and refined experience that ran smoothly at 60 frames per second.

I would also like to improve the physical manifestation of the project. During testing, the camera angles and depth of field were configured with fixed, hard-coded values, meaning that if the installation were moved or repositioned, these values would need to be updated manually. This makes the piece harder to install than originally planned. When installing it, I also realised it would have been beneficial to consider the placement of the projector, with back projection being more suitable for the project.

References

Kinect Delaunay by Kamen Dimitrov - https://github.com/kamend/KinectDelaunay

Panopticon face grabber by Zach Rispoli - http://cmuems.com/2014a/zjr/12/12/final-project-panopticon/


Dependencies

ofxFaceTracker

ofxOpenCv

ofxKinect

ofxDelaunay

ofxPostProcessing

ofxGui

ofxCameraSaveLoad