More selected projects

Step Into The Sewers

‘Step Into The Sewers’ is a playful interactive audio-visual piece that sets out to exaggerate the hyperactivity one might encounter when traveling on London's Underground. The piece is a distorted rendering of images and audio recordings that I captured over a series of journeys to work.

produced by: William Parry


When I moved from a quiet town in Lincolnshire to busy London, I found the way people elbow their way around the tube stations during the morning rush hour quite comical. That was until I found myself walking quicker and quicker between platforms, getting frustrated at waiting an extra five minutes because of a canceled train. Regularly traveling to random end-of-line locations around east London, I'd usually clock around an hour's travel to work. I'm not sure if it was the initial rush-hour stress, a lack of speed perception or oxygen deficiency, but the hour felt like 15 minutes. I was, and still am, certain there is some sort of time warp when entering this time-stealing transport. ‘Step Into The Sewers’ is my attempt at expressing the bizarre sense of urgency, rush and lost time that I have experienced during my travels to work.

Technical and background research

The piece is made up of three main components: computer vision, image rendering and audio rendering. The computer vision element begins with input from a Microsoft Kinect, which is processed into a binary image and used as a mask to reveal part of a randomly selected image. To allow separate images to be projected when more than one person is detected by the Kinect, I used OpenCV's blob finder and its boundingRect function, which let me loop over the separate ‘body masks’ and apply the masking technique per person. I adopted this almost reverse chroma-keying effect from the music video ‘Swoon’ by Marcus Lyall and Adam Smith[0]. This music video also inspired me to add a blur effect, found as an openFrameworks add-on by Golan Levin and Kyle McDonald[1].
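The per-person masking step can be sketched without openFrameworks: label each connected blob in a binary mask and take its bounding rectangle, mirroring what OpenCV's boundingRect would return per detected body. This is a minimal standalone sketch, not the project's actual code; the Rect struct and function name are illustrative.

```cpp
#include <algorithm>
#include <queue>
#include <utility>
#include <vector>

// Axis-aligned bounding box, analogous to OpenCV's cv::Rect.
struct Rect { int x, y, w, h; };

// Label 4-connected blobs in a binary mask (rows of 0/1) and return one
// bounding rectangle per blob -- the per-person regions the piece loops
// over to mask a different image for each detected body.
std::vector<Rect> blobBoundingRects(const std::vector<std::vector<int>>& mask) {
    int rows = mask.size(), cols = rows ? (int)mask[0].size() : 0;
    std::vector<std::vector<bool>> seen(rows, std::vector<bool>(cols, false));
    std::vector<Rect> rects;
    for (int y = 0; y < rows; ++y) {
        for (int x = 0; x < cols; ++x) {
            if (mask[y][x] == 0 || seen[y][x]) continue;
            // Flood-fill this blob, tracking its pixel extremes.
            int minX = x, maxX = x, minY = y, maxY = y;
            std::queue<std::pair<int, int>> q;
            q.push({x, y});
            seen[y][x] = true;
            while (!q.empty()) {
                auto [cx, cy] = q.front();
                q.pop();
                minX = std::min(minX, cx); maxX = std::max(maxX, cx);
                minY = std::min(minY, cy); maxY = std::max(maxY, cy);
                const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int d = 0; d < 4; ++d) {
                    int nx = cx + dx[d], ny = cy + dy[d];
                    if (nx >= 0 && nx < cols && ny >= 0 && ny < rows &&
                        mask[ny][nx] == 1 && !seen[ny][nx]) {
                        seen[ny][nx] = true;
                        q.push({nx, ny});
                    }
                }
            }
            rects.push_back({minX, minY, maxX - minX + 1, maxY - minY + 1});
        }
    }
    return rects;
}
```

Each returned rectangle can then be used to crop both the body mask and the randomly chosen image, so every person reveals their own picture.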

The audio element of the piece was created by looping 10 separate audio tracks and varying their playback speed according to how much of a ‘targeting square’, hidden on the display, the computer vision mask covered; the audio was manipulated and played back using ofxMaxim[2]. The background image of the tube map was taken from a web search and color-inverted to darken it; a rectangular selection of this image was then displayed in accordance with the mask's position. The use of blur, image movement, flipping and audio time distortion are all attempts at adding a sense of rush and nausea to the piece. When no one is detected, the piece projects a blank screen and a very slow audio drone which, played through a loudspeaker, sounds remarkably like the droning heard when entering the underground.

Technologies Used

This project was built within openFrameworks and used the following addons:

  • ofxBlur
  • ofxKinect
  • ofxOpenCV
  • ofxMaxim
  • ofxOsc (for back-up)


Self Evaluation

During the initial build of this project I experimented with using a PS Eye webcam and background differencing to create a smooth source of computer vision. After trying multiple lighting setups with no luck, I settled on moving to the Kinect, something I hadn't yet used. This, along with programming audio within the C++ openFrameworks environment, was one of the most useful techniques I developed throughout the project, and I will definitely be exploring both further. During this project I also got the chance to experiment with installation-based artwork essentials like compiling projects for the Raspberry Pi and handling large mixed-media files.


[0] The Chemical Brothers: Further - Marcus Lyall Ltd. [ONLINE] Available at: [Accessed 01 May 2018].

[1] GitHub - kylemcdonald/ofxBlur: A very fast, configurable GPU blur addon that can also simulate bloom and different kernel shapes. [ONLINE] Available at: [Accessed 01 May 2018].

[2] GitHub - micknoise/Maximilian: C++ Audio and Music DSP Library. [ONLINE] Available at: [Accessed 01 May 2018].