Interactive Instrument

This is an interactive instrument that functions as both a performance device and an installation, hearing the frequencies in a room and creating an immersive musical soundscape.

produced by: Lou Terry

Note: The work is a sound piece, so to hear it as it is intended, please listen to it through quality headphones or speakers.

Introduction

The frequency spectrum of incoming sounds is analysed, and oscillators respond, creating a ghostly reflection of sounds in the room. Simultaneously, layers of ethereal chime and gong-like samples respond to more pronounced room sounds. These two output levels can be adjusted in relation to one another, giving further room for exploration of the sounds and allowing the work to function in different ways.

Concept and background research

Many electronic instruments are built with an abundance of choice and control in mind. Sounds are replicable each time, and there is rarely randomness. One interesting aspect of acoustic instruments is that they're different every time you play them. A cymbal has infinite variety within it, never sounding the same twice. You also adjust your playing to the instrument's character, in a collaborative act, not a controlling one. I wanted an electronic instrument that was responsive to incoming signal but contained uncontrollability and its own sonic signature.

The piece has two outputs, samples and oscillators, which respond to input signal differently.

The samples, consisting of electronic gong/bell-like sounds, respond to the amplitude of incoming signals. If a sound is loud enough, a sample is triggered. The louder the incoming sound, the higher the sample's pitch generally is, from low, breathy, textural waves to high, shorter-lasting, chime-like sounds. Beyond this pitching, the exact sample triggered is random, lending unpredictability.
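A minimal sketch of this amplitude-gating idea, assuming ofxMaxim's maxiSample API; the RMS calculation, bank split and threshold values are illustrative rather than taken from the actual project code:

```cpp
// ofApp.h members assumed for this sketch (names and values are illustrative):
//   std::vector<maxiSample> lowBank, midBank, highBank; // pre-loaded gong/chime samples
//   float triggerThreshold = 0.2f;

void ofApp::audioIn(float *input, int bufferSize, int nChannels){
    // Root-mean-square amplitude of the incoming block
    float rms = 0;
    for (int i = 0; i < bufferSize; i++){
        float s = input[i * nChannels];
        rms += s * s;
    }
    rms = sqrt(rms / bufferSize);

    if (rms > triggerThreshold){
        // Louder input selects a higher-pitched bank...
        std::vector<maxiSample> &bank =
            rms > 0.6f ? highBank : (rms > 0.35f ? midBank : lowBank);
        // ...but which sample within the bank plays is random
        bank[(int)ofRandom(bank.size())].trigger();
    }
}
```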

The oscillators respond to FFT analysis. The FFT analyses incoming sound, splitting it into its constituent sine waves of various frequencies. These sine waves are then pieced back together using sinewave oscillators, creating what should in theory be the exact input sound but is in reality a ghostly shadow of it. Interestingly, although only sine waves are used, the correct combination of sine waves and amplitudes creates sound that really mimics the character of the input, including sibilances and vowel sounds.
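A simplified sketch of the resynthesis step, assuming ofxMaxim's maxiFFT and maxiOsc; the bin count, FFT size and output scaling are illustrative, and fft.process() is assumed to run on the input elsewhere:

```cpp
maxiFFT fft;          // fft.setup(1024, 512, 256) in setup(); fft.process(input[i]) in audioIn()
maxiOsc oscBank[64];  // one sine oscillator per tracked FFT bin

void ofApp::audioOut(float *output, int bufferSize, int nChannels){
    for (int i = 0; i < bufferSize; i++){
        double mix = 0;
        // Rebuild the sound from sine waves: each oscillator runs at its
        // bin's centre frequency, weighted by that bin's analysed magnitude.
        for (int b = 1; b < 64; b++){
            double binFreq = b * 44100.0 / 1024.0;  // assumes 44.1 kHz, 1024-point FFT
            mix += fft.magnitudes[b] * oscBank[b].sinewave(binFreq);
        }
        output[i * nChannels]     = mix * 0.05;     // rough scaling to avoid clipping
        output[i * nChannels + 1] = mix * 0.05;
    }
}
```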

The adjustable volumes of these two outputs allow both artistic sonic exploration and the work's functioning as an installation. For instance, a performer might like to use a mixture of samples and oscillators with a low input level, so as to pick up only their own actions. Alternatively, raising the input level and the oscillator volume lets the piece mimic resonant frequencies in the room and respond to any sound made in it.
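In code, that balance reduces to two independent gains, something like this illustrative helper (sampleGain and oscGain are assumed names, not from the project):

```cpp
// Hedged sketch: the two output levels as independent, adjustable gains.
double mixOutputs(double sampleOut, double oscOut,
                  double sampleGain, double oscGain){
    // sampleGain up, input level low -> performer-led, percussive mode
    // oscGain up, input level high   -> room-responsive installation mode
    return sampleGain * sampleOut + oscGain * oscOut;
}
```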

The ability to mimic resonant frequencies reminded me of Alvin Lucier's 'I Am Sitting in a Room'. You could turn up my piece and let it feed back on itself until it reaches a stasis.

Technical

The piece uses openFrameworks, but also requires a microphone, an audio interface and speakers to work well. It extensively uses the ofxMaxim addon. This technology and equipment were what I had to hand amidst the pandemic.

I faced many challenges in getting ofxMaxim running smoothly. The addon and its syntax were largely new to me, and documentation that I could understand was hard to find. However, some tutorial videos by Leon Fedden proved useful, helping me achieve FFT analysis and get samples working. They also introduced the use of a struct, which helped in creating pools of oscillators and samples I could call in the audioOut.
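A sketch of what such a struct-based voice pool might look like, assuming ofxMaxim's maxiSample and maxiEnv classes (the names are mine, not from the project or the tutorials):

```cpp
// One voice in the pool: a sample plus its own slow envelope and trigger state.
struct SampleVoice {
    maxiSample sample;       // a loaded gong/chime file
    maxiEnv    env;          // per-voice envelope (slow attack and release)
    int        trigger = 0;  // 1 = fading in, 0 = fading out
};

std::vector<SampleVoice> voices;  // the pool cycled through in audioOut()
```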

One challenge was getting old samples that hadn't run their course to cross-fade with new samples. I got round this by having multiple samples in the struct, each with its own envelope with a slow enough attack and release, triggered on and off, and having the audioOut cycle through all samples (both on and off), each passed to a double through its own envelope.
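Given a pool like the SampleVoice vector sketched above, the cross-fade could work something like this; maxiEnv's adsr(input, trigger) signature is real, but the routing is my reconstruction, not the actual code:

```cpp
void ofApp::audioOut(float *output, int bufferSize, int nChannels){
    for (int i = 0; i < bufferSize; i++){
        double mix = 0;
        // Every voice is always summed; its slow envelope fades it in or out
        // as trigger flips, so old and new samples cross-fade naturally.
        for (auto &v : voices){
            mix += v.env.adsr(v.sample.play(), v.trigger);
        }
        double out = mix / voices.size();
        output[i * nChannels]     = out;
        output[i * nChannels + 1] = out;
    }
}
```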

Editing and production value in the documentation were limited. I had access only to my phone camera, and no video editing software besides iMovie. I created atmosphere by dimming the lights and using silhouettes, which made the film visually engaging within the limits of the technology I had.


Future development

After this pandemic, I'd like to use a condenser mic for this piece. Currently I'm using a dynamic mic, which is designed for picking up one sound source, so it is directional and much less responsive to room sounds than I'd like. The frequency response of the particular model is also bassy, which isn't ideal for the FFT/oscillators, which work better at mid/high frequencies. Once I have access to a MIDI controller, I'd also like to map the sample and oscillator output volumes to it, allowing for easier sonic manipulation whilst performing.
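As a rough sketch of that mapping, assuming the ofxMidi addon and arbitrary CC numbers:

```cpp
// Hypothetical MIDI mapping sketch (ofxMidi addon; CC numbers are assumptions).
void ofApp::newMidiMessage(ofxMidiMessage &msg){
    if (msg.status == MIDI_CONTROL_CHANGE){
        if (msg.control == 1) sampleGain = msg.value / 127.0f;  // fader 1 -> samples
        if (msg.control == 2) oscGain    = msg.value / 127.0f;  // fader 2 -> oscillators
    }
}
```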

I'd love to try this piece with a circular eight-speaker surround-sound configuration, within which participants could walk. It would work well as an installation, perhaps in a dark room. Every movement could be picked up and panned round the speakers. Multiple mics, each responsible for the speaker on the opposite side of the circle, would allow acoustic sounds to be sent across the room. It would be a fun and immersive collaborative style of interaction.
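A speculative sketch of the panning, using a simple equal-power crossfade between the two nearest speakers in the ring (pure illustration; the current piece is stereo):

```cpp
const int NUM_SPEAKERS = 8;

// Pan a mono input to a position around the ring; angle01 runs 0..1 per revolution.
void panToRing(double in, float angle01, float *frame){
    float pos  = angle01 * NUM_SPEAKERS;
    int   a    = (int)pos % NUM_SPEAKERS;  // nearest speaker behind the source
    int   b    = (a + 1) % NUM_SPEAKERS;   // nearest speaker ahead of it
    float frac = pos - floor(pos);
    frame[a] += in * cos(frac * HALF_PI);  // equal-power pan law
    frame[b] += in * sin(frac * HALF_PI);
}
```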


Self evaluation

The biggest obstacle with this piece was that the samples tended not to read correctly and made horrendous noise. At its worst, this happened nine times out of ten when I ran the code, making progress slow. Although I never resolved why, the problem was fixed by loading and playing fewer samples, keeping sample playback speed constant, and applying no effects to them.

This was a slight shame, as there were many ways I attempted to develop the piece: having sample speed adjust to match (or harmonise with) incoming frequencies, adding interesting effects such as modulated delays/reverbs interacting with other elements of the input, or having the sounds generate and morph governed by L-systems, with the x and y coordinates governing frequencies, filters or delay repeats, for instance. However, every time I complicated it, playback failure increased. I therefore kept my mechanisms simple, incorporating as much sonic variety as I could using them, though I do think that if I'd overcome this issue I would have been able to make a more sonically dynamic work.

I think I managed well though: my instrument sounds good, is fun to use, and is both reactive to input signals and has elements of uncontrollability and its own sonic character, which were all initial aims. I think the use of FFT to resynthesise input signals is an inventive and interesting method of synthesis, and I'm particularly pleased with this idea. Its versatility is also a strength.

References

These tutorial videos by Leon Fedden gave me a basis from which to build my code.

Fedden, Leon. 'FFT, Averages & Simple Sequencing', https://www.youtube.com/watch?v=iTJKLMb0qVQ, 2015.

Fedden, Leon. 'A Simple Sampler', https://www.youtube.com/watch?v=NfFWUfXCbsM, 2015.

Fedden, Leon. 'Multiple Oscillators', https://www.youtube.com/watch?v=-hLr6DUdgyk, 2015.