my Bitrot, Gut Weed Breath
an auto-ethnographical interactive audio installation developed from my research into Alexis Pauline Gumbs’ philosophy of breath. The sonic environment of the installation is an idea of how I breathe digitally, with you.
Body-rattling bass frequencies combine with personal historical audio fragments and digitally degraded recordings of my breathing. The bass draws the audience into a communal breath while the samples express a connection between my [digital] breath and time. As someone moves through the room they affect the sounds and begin to breathe with me.
Piece Description and Explanation:
Through its incessant displacement of interior and exterior, your breath stretches out from the past into the future, while maintaining your present moment. Your breath is sustained by the breathing of past generations and influences the breathing of future generations. (footnote 1) Breathing is an intergenerational and communal process. This piece forces people to breathe with me.
The bass oscillates at 47.5Hz with a slowly modulating amplitude. When people enter, its gain increases. The bass resonates both the bodies in the room and the room itself, transforming the room into a breathing thing. Scattered around this breathing bass are digitally resynthesized sounds from my history: Irish folk songs are reimagined as shimmering pads, and Jewish folk songs fuse into distorted recordings of my breath.
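A minimal, framework-free sketch of this idea. The 47.5Hz figure is from the piece itself; the LFO rate, gain levels, sample rate, and all names are illustrative assumptions rather than the installation's actual code:

```cpp
#include <cmath>
#include <vector>

// Sine oscillator at the piece's bass frequency, with amplitude slowly
// modulated by an LFO and scaled up when people are present.
struct BreathingBass {
    const double PI  = 3.141592653589793;
    double sampleRate = 44100.0;  // assumed sample rate
    double bassHz     = 47.5;     // frequency used in the piece
    double lfoHz      = 0.1;      // assumed slow "breathing" rate
    double bassPhase  = 0.0, lfoPhase = 0.0;

    // presence: true when the tracker reports people in the room
    double nextSample(bool presence) {
        double gain = presence ? 0.9 : 0.4;              // assumed gain levels
        double lfo  = 0.5 * (1.0 + std::sin(lfoPhase));  // 0..1 swell
        double out  = gain * lfo * std::sin(bassPhase);
        bassPhase += 2.0 * PI * bassHz / sampleRate;
        lfoPhase  += 2.0 * PI * lfoHz  / sampleRate;
        return out;
    }
};

// Render n samples with a fixed presence state.
std::vector<double> render(BreathingBass& b, int n, bool presence) {
    std::vector<double> buf(n);
    for (int i = 0; i < n; ++i) buf[i] = b.nextSample(presence);
    return buf;
}
```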
This piece is an exploration of what digital breathing is. It challenges the tendency to think of digital existence as immortal, and instead probes digital decay (Henriques, 2019). Many of the samples in the piece have degraded slightly; they contain small imperfections from compression or file transfers (footnote 2). Computationally manipulating these imperfections has drawn out a mortal quality in the digital recordings that haunts the room.
This work is programmed in openFrameworks.
The interactivity is achieved with a Kinect V1 in combination with the computer vision library OpenCV. For tracking I used the openFrameworks addon ofxCv (Kyle McDonald) rather than ofxOpenCv because it offered better functionality; the tracker class built into its contour finder was particularly useful.
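The piece uses ofxCv's tracker directly. As a framework-free illustration of the underlying idea, here is a sketch of persistent labelling by nearest-neighbour matching of blob centroids between frames; all names and the distance threshold are my own assumptions, not ofxCv's API:

```cpp
#include <cmath>
#include <map>
#include <vector>

struct Point { double x, y; };

// Assign stable integer labels to detected blob centroids by matching
// each new centroid to the nearest previously-labelled one.
class CentroidTracker {
    std::map<int, Point> tracked;   // label -> last known position
    int nextLabel = 0;
    double maxDist = 50.0;          // assumed matching threshold (pixels)
public:
    std::vector<int> update(const std::vector<Point>& detections) {
        std::vector<int> labels;
        std::map<int, Point> current;
        for (const Point& p : detections) {
            int best = -1; double bestD = maxDist;
            for (auto& [label, prev] : tracked) {
                if (current.count(label)) continue;   // already claimed
                double d = std::hypot(p.x - prev.x, p.y - prev.y);
                if (d < bestD) { bestD = d; best = label; }
            }
            if (best < 0) best = nextLabel++;         // new person entered
            current[best] = p;
            labels.push_back(best);
        }
        tracked = current;                            // lost blobs are dropped
        return labels;
    }
};
```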
I chose to use a Kinect as I wanted this installation to take place in the dark to immerse the audience in the sound world.
Audience members are tracked by the Kinect; their location and movement morph the soundscape. There are triggers placed in 3D space that activate different progressions. These are connected to a projected scanning line that gives the audience a hint as to where the sounds are.
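A sketch of how such a spatial trigger might be checked: each trigger is a sphere in the Kinect's 3D space, and stepping inside it activates a progression. The positions, radius, and names are illustrative assumptions:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// A trigger is a sphere in 3D space; a tracked person standing inside
// it activates the associated sound progression.
struct Trigger {
    Vec3 centre;
    double radius;
    bool contains(const Vec3& p) const {
        double dx = p.x - centre.x, dy = p.y - centre.y, dz = p.z - centre.z;
        return std::sqrt(dx*dx + dy*dy + dz*dz) < radius;
    }
};

// Return the indices of all triggers the tracked person is inside.
std::vector<int> activeTriggers(const std::vector<Trigger>& ts, const Vec3& person) {
    std::vector<int> active;
    for (int i = 0; i < (int)ts.size(); ++i)
        if (ts[i].contains(person)) active.push_back(i);
    return active;
}
```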
To minimise computational expense, scanning only happens when it is needed and is turned off otherwise. The variables used for scanning can be adjusted easily, so this piece could be installed in different locations. I also used background subtraction to improve scanning performance, based on Theo Papatheodorou’s code from this course.
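Conceptually, background subtraction compares each live frame against a stored empty-room frame and keeps only pixels that have changed beyond a threshold. A minimal sketch of that idea; the threshold value and names are assumptions, not the course code:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Subtract a captured background frame from the live frame: pixels whose
// value differs by more than `threshold` are kept as foreground (255),
// everything else is zeroed out.
std::vector<uint8_t> subtractBackground(const std::vector<uint8_t>& live,
                                        const std::vector<uint8_t>& background,
                                        int threshold = 20) {
    std::vector<uint8_t> mask(live.size(), 0);
    for (size_t i = 0; i < live.size(); ++i)
        if (std::abs((int)live[i] - (int)background[i]) > threshold)
            mask[i] = 255;
    return mask;
}
```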
The sound is generated with Maximilian via the ofxMaxim addon*. The majority of the sound is sample-based and uses granular synthesis. mickNoise’s granular example for ofxMaxim helped me understand how to call a granular synth in ofxMaxim (mickNoise, 2019). Some of the samples are songs that relate to my Jewish and Irish ancestry; others are vocal recordings of my own breath taken with an iPhone. The progressions are triggered with some randomness so the piece is not repetitive, and the intensity of the bass helps to engage the audience.
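As a rough, framework-free sketch of the granular idea (the piece itself uses ofxMaxim's granular classes): short windowed "grains" are taken from random positions in a source sample and overlap-added into an output buffer. Grain length, hop size, and names are my assumptions:

```cpp
#include <cmath>
#include <random>
#include <vector>

// Build an output buffer by overlap-adding short Hann-windowed grains
// taken from random positions in a source sample.
std::vector<double> granulate(const std::vector<double>& source,
                              int outLen, int grainLen, int hop,
                              unsigned seed = 1) {
    const double PI = 3.141592653589793;
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> pos(0, (int)source.size() - grainLen);
    std::vector<double> out(outLen, 0.0);
    for (int start = 0; start + grainLen <= outLen; start += hop) {
        int src = pos(rng);                       // random grain position
        for (int i = 0; i < grainLen; ++i) {
            // Hann window smooths each grain's edges to avoid clicks
            double w = 0.5 * (1.0 - std::cos(2.0 * PI * i / (grainLen - 1)));
            out[start + i] += w * source[src + i];
        }
    }
    return out;
}
```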
ofxMaxim was very restrictive compared to the DAWs I’ve worked with, as it starts almost from scratch. This limited me to fairly simple sound design. However, it was quite nice to be restricted: DAWs can be overwhelming because of the number of options, whereas ofxMaxim’s limitations forced me into some creative solutions.
One problem that took a lot of work to fix was making sure the triggers only fired once. Initially they sent a constant stream of ‘on’ messages, which distorted the playback. To fix this I constructed a chain of Boolean values ensuring each trigger only switched once. This functionality is now stable.
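The Boolean chain amounts to rising-edge detection: fire only on the transition from outside the zone to inside it. A compact sketch of that logic, with assumed names:

```cpp
// Latch that converts a continuous "inside the trigger zone" signal into
// a single one-shot event: fires only on the false -> true transition.
class OneShotTrigger {
    bool wasInside = false;
public:
    // Returns true exactly once per entry into the zone.
    bool update(bool inside) {
        bool fired = inside && !wasInside;
        wasInside = inside;
        return fired;
    }
};
```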
One aspect of ofxMaxim I was disappointed with was the reverb. I experimented with the different reverbs but was not fond of any of them. To develop this work I would consider moving the sound design to Max/MSP.
The program can be run in two ways. The first is more controlled and uses samples I have selected. The second chooses from my sample bank randomly, producing a completely new soundscape every run.
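The two modes could be sketched as a single selection function: either return the curated set, or shuffle the bank and take a random subset. Everything here, including the placeholder filenames, is an assumption for illustration:

```cpp
#include <algorithm>
#include <random>
#include <string>
#include <vector>

// Pick the sample set for a run: either the curated selection, or a
// random draw of `count` samples from the full bank.
std::vector<std::string> chooseSamples(bool randomMode,
                                       const std::vector<std::string>& curated,
                                       const std::vector<std::string>& bank,
                                       size_t count, unsigned seed) {
    if (!randomMode) return curated;        // mode 1: fixed selection
    std::vector<std::string> shuffled = bank;
    std::mt19937 rng(seed);                 // fresh seed -> new soundscape
    std::shuffle(shuffled.begin(), shuffled.end(), rng);
    shuffled.resize(std::min(count, shuffled.size()));
    return shuffled;                        // mode 2: random draw
}
```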
Aesthetically this piece sits between a conventional computational sound and something more organic. This aesthetic is influenced by Jon Hassell’s ‘Fourth World Music’. Hassell’s compositional method was to sample his own past playing and to sample musicians from various musical cultures; each sample related to a specific place and time. By creatively combining these sources, Hassell created completely alternative sound worlds. For my work, granular synthesis was particularly effective for achieving similar results because it can completely transform samples while preserving their reverb and recording quality.
An influence for the use of bass in this piece was my experience of Steve Goodman’s [Kode9] sound design at Tania Bruguera’s installation ‘10,148,451’. Goodman used a stack of subs to create strong unsound [inaudible bass frequencies]. Rather than hearing unsound, we feel it. The effect of being shaken by bass makes you focus on the present moment. This helped to engender empathy with the global migrant crisis (the subject of Bruguera’s installation).
Martine Syms’ work Lessons I-CLXXX helped me formulate an idea of how computational practices could be an effective narrative technique. In Lessons, Syms presents a 180-part poem in disjointed fragments: two clips, selected randomly from a bank, are paired together. This results in a shifting collage that can create unexpected relations, which is reflected in my piece by the randomness in which samples are used.
My work is intended as a small part of a wider audio-visual installation exploring breathing. The next step would be to add visual motifs that help guide the audience around the room. In addition to this work I have created a program that can save vocal input as .wav files. I’d like to try both of my programs in combination so that the audience could add their own voice to the installation.
1. Taken from my research project, based on Alexis Pauline Gumbs’ philosophy.
2. I recorded a large set of samples for this project. When I listened through the samples I found that a small selection had been digitally altered: they contained small wobbles and unexpected frequencies. Data rot in my breath, which I think was down to compression in iOS. I became interested in how the rotting parts of these recordings would sound when digitally manipulated, and experimented by time-stretching and pitch-shifting the samples. All of the breathing samples in this piece are the digitally altered ones, which helps to give the breathing its mechanical and ghostly quality.
* ofxMaxim adjustment: following Joshua Batty’s live-input granular synth method, I adjusted the ofxMaxim library so it handled live input better. This allowed me to record live input easily.
ofxSoundObjects, from Roy MacDonald, allowed for recording .wav files.
my code can be found here:
Papatheodorou, T., 2020. ‘Background Subtraction’. Workshops in Creative Coding, Computer Vision.
mickNoise, 2019. ofxMaxim Granular example.
Henriques, J., 2019. ‘Digital Immortality’. AUDINT, Unsound:Undead.