Open Black Box
An installation investigating thresholds between visible and invisible outcomes of computer vision processes.
produced by: Oscar Cass-Darweish
Many digital artworks use computer vision techniques that a large portion of their audience cannot appreciate on a technical level. It is unlikely that a human can decipher the logic of a computer vision system purely through experience, even when interacting with it in real time. Because sensor data can be converted and layered, the output of an interactive digital piece is often visually unrelated to the triggering processes the user is interacting with. Apprehension towards digital images is reinforced by the ease with which they can be manipulated, copied and distributed. That invisible processes such as motion detection and facial recognition increasingly extend into our day-to-day lives, where our most personal interactions with technology occur (cameras, phones and social media), highlights the need for a deeper understanding of the tools and code that shape our worlds. It is important for us to understand the points at which we are detectable by machines, so that we have the potential to navigate the environments we exist in with autonomy. My interest is in developing visual systems that allow black-boxed processes to be interrogated and figured out through aesthetic experience, without the viewer needing to fully understand the code behind them.
Concepts and aims
Visual representations of image-based calculations and smart-camera readings, close to their simplest form, are often aesthetically interesting in themselves. Working with OpenCV and openFrameworks allows digital images to be treated as data, to the point that you can see the results of numerical manipulations on them. Seeing the result of subtracting pixel colour values from each other over time, as in frame differencing, gives a visual idea of how machines approach the task of detecting change. It also exposes certain limitations: with a single viewpoint and no depth image, everything happens on the same 2D plane, and the background is only (temporarily) visible when something in front of it moves. Similarly, interacting with the 3D point cloud generated by a depth camera such as the Kinect quickly reveals both the potential and the limits of the technology.
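The frame-differencing operation described above can be sketched in plain C++ (a minimal illustration without openFrameworks or OpenCV; the function name and threshold are my own, not taken from the installation's code):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Frame differencing: subtract each pixel of the previous frame from the
// current one and keep the absolute difference, thresholded so that only
// regions that changed between frames remain visible (white on black).
std::vector<uint8_t> frameDifference(const std::vector<uint8_t>& prev,
                                     const std::vector<uint8_t>& curr,
                                     uint8_t threshold) {
    std::vector<uint8_t> out(curr.size(), 0);
    for (std::size_t i = 0; i < curr.size(); ++i) {
        int diff = std::abs(static_cast<int>(curr[i]) - static_cast<int>(prev[i]));
        out[i] = diff > threshold ? 255 : 0;  // static pixels stay black
    }
    return out;
}
```

A static scene produces an all-black output; only the pixels where something moved light up, which is why the background appears only while something in front of it is in motion.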
My approach for this project was to work towards a system for small-scale, intimate interaction with visual outputs produced from Kinect sensor data, using the calculated distance to the nearest solid surface to trigger and blend between different states. These states are built from different combinations of a real-time point cloud and frame differencing. The IR image was used so that the piece would work in low-light conditions, and presenting it on the display has the added effect of revealing the IR light pattern that the Kinect uses to calculate depth. Different parts of the point cloud are revealed as the viewer stands at different distances, as it rotates and zooms in and out accordingly, while the sensor tilt angle is adjusted to emphasise the effects of frame differencing across the whole frame.
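The distance-triggered blending can be sketched as a simple mapping from the sensor's depth reading to a blend weight between two states; the near/far range values here are hypothetical placeholders, not the ones used in the piece:

```cpp
#include <algorithm>

// Map the distance (in millimetres) to the nearest solid surface, as a depth
// sensor might report it, to a 0..1 weight used to blend between two visual
// states. The near/far thresholds below are hypothetical; in practice they
// would be tuned to the installation space.
float blendWeight(float distanceMM, float nearMM = 600.0f, float farMM = 2000.0f) {
    float t = (distanceMM - nearMM) / (farMM - nearMM);
    return std::clamp(t, 0.0f, 1.0f);  // 0 = fully near state, 1 = fully far state
}
```

The resulting weight could then drive, for example, the rotation and zoom of the point cloud or a crossfade between the point cloud and the frame-differenced image.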
The partially transparent and reflective surfaces used to build the box/display with a Pepper's Ghost effect also serve to extend the possibilities for perceptible interrogation. The structure recalls older forms of technology with related genealogies, while querying the usefulness of smoke and mirrors in exploring metaphysical boundaries.
Influences and development
The Forensic Architecture group and the writings of Eyal Weizman (2017) have demonstrated the value of cross-examining data from sensors and devices from both a technical and a critical perspective, using artistic methods in practice and visualisation. Their work has helped me develop ways of thinking about the potential of ubiquitous hardware and software and the importance of reclaiming ownership over them. I found works by the artist Adam Harvey, such as CV Dazzle (https://cvdazzle.com/), interesting in that they can be thought of as an unexpected outcome of interaction with computer vision, and as a way of taking back control over how we navigate the environments we inhabit. In terms of code, I was mainly influenced by the examples in ofxCv (McDonald, K. 2018) and assignments set by Theo Papatheodorou in classes at Goldsmiths in 2018.
To take this work further, I would like to create clearer distinctions and transitions between the processes of frame differencing and point cloud generation, combining them with additional sensors to enable greater adaptation to different processes. I am also interested in investigating other modes of representing the computer vision methods used to detect features and qualities of objects and spaces.
Harvey, A. (2010–2018). CV Dazzle. https://cvdazzle.com/ [accessed: 20/04/2018]
McDonald, K. (2018). ofxCv. https://github.com/kylemcdonald/ofxCv [accessed: 21/04/2018]
Papatheodorou, T. (2018). Lectures, tasks and code examples at Goldsmiths.
Weizman, E. (2017). Forensic Architecture: Violence at the Threshold of Detectability. MIT Press.