Length of String

Length of String is a generative composition written in Python and Pure Data for three transducer speakers on a small network. As each cycle of the composition runs its course, the instruments all retune the root of their melody to the closest resonant mode of a different length of the room.
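For a sense of the tuning logic: the axial resonant modes of a room dimension of length L fall at f_n = n·c/(2L), where c is the speed of sound. Below is a minimal sketch of how a melody's root might be snapped to the closest such mode. The room dimensions, values, and function names are hypothetical illustrations, not taken from the project's actual code:

    # Hypothetical sketch: snap a root frequency to the nearest axial
    # room mode, f_n = n * c / (2 * L). Dimensions are examples only.
    C = 343.0                          # speed of sound in m/s at ~20 C
    ROOM_LENGTHS_M = [14.0, 9.5, 6.0]  # example length, width, height

    def axial_modes(length_m, n_max=40):
        # First n_max axial modes of one room dimension, in Hz.
        return [n * C / (2 * length_m) for n in range(1, n_max + 1)]

    def closest_mode(root_hz, length_m):
        # The mode of this dimension nearest to the current root.
        return min(axial_modes(length_m), key=lambda f: abs(f - root_hz))

    # Each cycle, a different room dimension retunes the root.
    root = 110.0
    for L in ROOM_LENGTHS_M:
        root = closest_mode(root, L)
        print(f"L = {L} m -> root {root:.2f} Hz")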

Introduction

Throughout this course, DSP (Digital Signal Processing) has become my primary toolkit for creative expression. Early explorations of sound through numerical expressions are easily confined to a conventional personal computer; for me, that was the path of least practical resistance, through packages like Max MSP. While a programmatic approach to sound art still seems irresistible, its lack of apparent physicality has been a real hurdle: how can even a straightforward composition move a room while reflecting the philosophies underpinning the creative DSP? For this reason, the aim of this piece was to reify the immaterial form my practice had taken over the past year. Given a particular space, and treating it as a tool, can a primitive material bridge back to my generative sound practice be found?

Concept and background research

The base appeal of generative sound composition is ethereal. An interplay between spidery deliberation and the whims of conditional chaos surfaces through the process. That sense that something is not totally defined, and subtly ever-changing, is what I strive to amplify in my work. The cost tends to emerge through the degrees of abstraction from materiality that a software tool represents. Sound could be considered information whose expression relies on matter as a medium: it is easier to argue that the air whose compressions we describe as sonic is matter than it is to say a sound has a body in and of itself. Experientially, though, the sense that a noise has a physical presence of its own can feel undeniable. Software can be figured similarly, because we ubiquitously expect not to have to deal with the silicon or electrical impulses on which its ideas rely. Today, the conflation of information and matter is easily taken for granted. Sound and software have this in common in terms of their creative appeal.

When composing the sound for this piece, I borrowed heavily from the melodic language Jonny Greenwood employed for the soundtrack to There Will Be Blood. This influence best reflects the kind of deliberate chaos I find inspiring, and with the film itself revolving around themes of materialism, these motifs felt especially apropos.

Many of my motives for this work are owed to Alvin Lucier's I Am Sitting in a Room, which draws attention so succinctly to the particular conflation of information and matter I am interested in. The interrelation between the recording apparatus and the space makes the recorded voice seem to degrade in a way that can feel more material than it really is.

Work like Ryoji Ikeda's Datamatics steered both my thinking and my aesthetic approach. My hope of materializing information belongs to this work, and my approach to any medium is heavily influenced by Ikeda's aesthetic decisions: the choice of frequencies that lie at the edges of perception, for instance, or the decision to present data sets over time at such a high rate that, without the work's framing, each datum would feel barely different from its neighbors. Through gestures like these, Ikeda takes information and presents it in a way that draws our attention to the outer limits of how we perceive it.

Most of all, a particular image held my attention throughout the conception of this project. Nobuo Sekine's Phase Mother Earth seemed to represent all the notions my practice lacked engagement with: without permission, simply moving some matter from one place to another and documenting the outcome photographically. The material speaks for itself in a brutally simple way. To me, it is important that a medium can anchor ethereal expressions to a certain simple tangibility. So this photograph, popularly discussed as pivotal for the Mono-ha movement, was my core point of reference. I wanted to begin a search for the right dirt, so to speak.

Technical

Transducers and amplifier circuits drove flat surfaces hanging from simple frames; these comprised the speakers. I chose the speaker materials primarily for their easy accessibility. For the two larger speakers I settled on extruded polystyrene: of the sheet materials I tried (thin sheets of wood, various card stocks, acrylic), the polystyrene had the 'driest' sound, while its low density seemed to keep it very resonant. I liked the idea that such a low-cost material could produce what felt like such a raw representation of the DSP sounds actuating it. The metallic sheet for the smaller central speaker is 1 mm thick aluminum. The synth patch was the same across all of the speakers, but the shaking aluminum gave the central one a dramatic, brass-like timbre. It was a little expensive, which went against my goal of building the instruments out of material as ubiquitous as possible, but I loved that timbre, and how the light in the room played off the sheet as it shook.

For sound synthesis, I used Pure Data. I knew I wanted the piece to have a relatively minimal physical footprint, and since Pure Data is a Linux-friendly open-source project, it seemed up to the task. Having grown somewhat accustomed to Max MSP, the main challenge this presented was the absence of many newer, more convenient high-level objects. But I prefer sticking to the vanilla components of DSP tools, so this was actually a welcome change. The patches were instantiated and run on Raspberry Pi Zeros. This particular Pi appealed to me because it is low cost, both in price and in the overhead of its development cycle. That low cost and small physical footprint mattered because I wanted to treat each computer as an individual instrument, and to make each as articulate as I could in terms of available timbres. The caveat of this choice is that the Raspberry Pi and Linux are not necessarily the most stable basis for DSP projects, partly because recent models lack things like an RTC (real-time clock) chip for accurate scheduling. I knew this would make it challenging to keep my Pure Data patch sufficiently performant, but that appealed to me: I wanted to experience DSP with a lower ceiling than I was used to while prototyping the system. As much as I designed new aspects of the system, I was forced to simplify everything to the bare essentials, as ALSA ground to a halt whenever I went overboard with the design. I chose this limitation out of a desire to consider a computer more as a material than as a convenient abstraction of information. The funny thing about all these considerations is that when it finally came time to connect these instruments and sequence them, even with only a simplistic implementation of start and stop messages over wifi, I hated how tightly in sync they managed to be. After the fact, I programmed a little stochastic jitter into the rhythmic distribution of the notes in the Python code, along with a function to keep track of when a pattern ended, so that the rest of the ensemble could wait the right amount of time and start the next cycle back in sync.
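The jitter and resynchronization logic amounts to only a few lines. A minimal sketch of the idea, with hypothetical names and values rather than the project's actual code:

    import math
    import random

    def jitter(onsets_ms, spread_ms=20.0):
        # Nudge each scheduled note onset by a small random offset so
        # the instruments drift audibly against one another.
        return [max(0.0, t + random.uniform(-spread_ms, spread_ms))
                for t in onsets_ms]

    def next_cycle_start(pattern_end_ms, cycle_ms):
        # Round a pattern's end time up to the next cycle boundary, so
        # each instrument waits the right amount of time and the
        # ensemble re-enters the next pattern together.
        return math.ceil(pattern_end_ms / cycle_ms) * cycle_ms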

Control values used by the Pure Data patch were generated with Python, chosen particularly for its convenience in string processing. Musical scores were generated from melodies and parameters defined through functions and arrays, formatted to satisfy the input specification of Pure Data's qlist object. This was a really exciting challenge for my algorithmic thinking: I loved figuring out how to express a melody at a high level using only vanilla Python and its ability to export (and conditionally mangle) text files.
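As an illustration of the format involved: qlist reads a text file of semicolon-terminated messages, each prefixed with a delay in milliseconds relative to the previous line. A minimal sketch of rendering a melody into that format follows; the melody, receiver name, and file path are hypothetical examples, not the piece's own score:

    def mtof(midi_note):
        # Convert a MIDI note number to a frequency in Hz.
        return 440.0 * 2 ** ((midi_note - 69) / 12.0)

    # (midi note, duration in ms) pairs - an example, not the piece.
    melody = [(57, 500), (60, 250), (64, 250), (67, 1000)]

    lines, delay = [], 0
    for note, duration in melody:
        # "delay receiver value;" is the qlist message format.
        lines.append(f"{delay} freq {mtof(note):.2f};")
        delay = duration

    with open("score.txt", "w") as f:
        f.write("\n".join(lines) + "\n")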

To develop the system and keep the three computers relatively in sync, a Local Area Network connected them over wifi. Messages and new data were sent to the Pis as TCP messages, implemented via Python's socket module. I also had to learn to use ssh, scp, pexpect, os, and shell scripting (Secure Shell, Secure Copy, the Python Expect library, the Python OS module, and simple shell scripts respectively) to keep all my code up to date while running the Pis headless (remote terminal access only) in this rather niche context.
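A minimal sketch of the message-sending side, assuming hypothetical addresses and a netreceive object listening on each Pi (in its plain FUDI mode, Pure Data's netreceive expects messages terminated with a semicolon and newline):

    import socket

    # Hypothetical LAN addresses and port for the three Pis.
    INSTRUMENTS = [("192.168.0.101", 3000),
                   ("192.168.0.102", 3000),
                   ("192.168.0.103", 3000)]

    def send_to_all(message):
        # Deliver one semicolon-terminated FUDI message to every Pi.
        for host, port in INSTRUMENTS:
            with socket.create_connection((host, port), timeout=2.0) as s:
                s.sendall(f"{message};\n".encode())

    send_to_all("start")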

Future development

In terms of composing sounds with these instruments, I'd love to explore the use (or avoidance) of resonant modes more directly. I was quite satisfied with how the melody I wrote felt in the church over long periods of time, so for now I was content to stick with using the room's character for tuning. I am more excited about the practical contexts I can transfer this learning to. How would it feel to live-code these instruments hooked into a DJ mixer for more performance-focused endeavors, for instance? Another idea brewing is learning to weatherproof the whole system and begin exploring it outdoors, especially in heavy rain, or near trees on a very windy day. This project can go in a lot of new directions, not to mention the potential for collaboration, perhaps treating the gestures of creative colleagues as novel inputs steering the more stochastic elements of the system. I am also excited to explore DSP closer to the metal, in response to some of the limitations I experienced with the Raspberry Pi Zero; my next port of call will probably be an Arm Cortex-M4 based microcontroller, as I've heard good things about its CMSIS-DSP library.

Self-evaluation

Most of the time developing this system went into software decisions, primarily the Pure Data patch and making it work the way I wanted on each Pi. If I could go back, I would spend much less time shaping this synthetic voice and much more time playing with a wider array of resonant materials for the speakers. Ceramics could have revealed more interesting results than the extruded polystyrene sheets, for example. Other technologies I was curious about, such as solenoids and different kinds of input/output architectures (perhaps involving feedback loops, or sensor data), went unexplored because of the time invested in the software. While I do feel the work has a voice, this consideration of what it says might have come sooner had I allocated my time differently.

In terms of presentation, I would not have placed the monitor within the setup as I did for the show. I wanted the terminal to be visible so that a curious visitor could see which resonant mode the instruments were tuned to. While this did illustrate some of what was going on behind the scenes, I felt it hurt the aesthetic impact of the work more than it aided its conceptual communication.

The outcome of this process yields the effect I aimed for: a piece of music instrumentation which, while retaining an ethereal quality, fits the space and its materials. It has moved my process back to a more tactile place; in this sense, a materialistic approach has helped my work speak for itself a little more. However, the next conceptual step remains. My practice in its current form still eschews considerations of more direct communication, and this is what hasn't worked about the piece: the materials can now begin to speak for themselves, but so far I have only worked hard at considering how they speak, not what they say. From the attached timelapse at the top of this page, it is clear that strangers gave a little time and attention to the materials and the sounds they produced. This small curiosity is what I take as evidence of the material voice I am referring to. But now is an exciting time, because next I want to try to synthesize a clear understanding of what these instruments can really say.

Most importantly, I think that what I have made feels more like a real set of instruments than the computers or speakers comprising them.


Reading:

The way in which I synthesized and developed my ideas was influenced by these books:

Sartre, J.-P., 2010. The Imaginary: A Phenomenological Psychology of the Imagination. Taylor & Francis.

Foucault, M., 2002. Archaeology of Knowledge (Routledge Classics, Volume 3). Routledge.

Code:

The implementation of this project is derived from the example code found at the following sources:

FM Synthesis in Pure Data

Sequencing Parameters With Text Files in Pure Data

Setting Up a Headless Pi

Sending Messages Over a Network in Python

Receiving Control Messages in Pure Data

SSH, SCP, pexpect, os, shell for Management and Prototyping on Networked Raspberry Pis