Tones of Light: Darbari
How can we express simply and plainly the relationship between rhythm, sound and light? This is an interactive programme and installation which aims to do just that: to demonstrate that sound and light come from one and the same source, and that the sonic and light spectra exist as representations of events in time.
produced by: Daniel S. Evans
This piece is an interactive sound and light installation which explores the relationship between the speed of human interaction and the high-frequency wavelengths of sound and light. The only interactive element is a single button. The software measures the speed of successive button presses and maps that speed both to the notes of the Indian raga Darbari Kanada and to the spectrum of visible light.
Drones are played, hard-panned in stereo, whilst two pillars of light appear on the left and right of the screen. The left sound corresponds to the left pillar, and likewise on the right. These two elements change one after the other as the participant interacts with the button. The lower the interaction speed, the lower the pitch and the redder the colour; the higher the interaction speed, the higher the pitch and the more blue/purple the colour. Every possibility in between these two extremes also exists. When a pitch and colour change occurs, a text-to-speech function reads out the colour, the pitch and the corresponding note of the Indian raga system. The pitches are mapped to the notes of the raga Darbari Kanada in order to produce sometimes beautiful and sometimes dissonant harmonies (rather than random frequencies with no relationship to one another). Sometimes the colours and notes fall very close together, producing interesting phase-beating patterns in the sound and interesting minimal gradients on screen.
Concept and background research
This project is inspired by a largely minimal aesthetic, one which is obsessed with the simple relationships between taken-for-granted elements. This very rigorous, relational approach comes from artists such as Hanne Darboven, whom I respect deeply. It does not seek to overstate or overcomplicate the relationship between its elements. As such I decided to work with a very simple interaction: a person clicking one single button. It is interesting to see how your relationship with the button develops, and how you begin to gain a tactile understanding of the speed required to reach certain notes and certain colours. The speed becomes something you can lock into.
One source of inspiration seems evident: the Indian raga system of tuning. I decided to use it, firstly, because I have a personal connection to it (I have Indian heritage and have studied Indian classical music before) and, secondly, because the ideology of raga, as a tuning system that locks into fundamental and integral relationships of perception, fits with this project as a whole.
I am also inspired by the work of Russian composer Alexander Scriabin. He was an esoteric and intriguing artist who was obsessed with the occult and the relationships between light and sound, scales and vision. His work Prometheus: The Poem of Fire has the colour of light written into the score; it integrates his own system of scale into the visual performance of the piece. This was only fully realised in 2010, as technology allowed it to happen.
I would also like to acknowledge the inspiration of other composers who work in minimal, slow moving and repetitive forms. I owe them a great deal of gratitude. These include: Eliane Radigue, Ellen Arkbro, Alvin Lucier and others.
This work was made entirely using openFrameworks. I used two libraries: ofxBlur (for the blurry gradient effect in the visuals) and ofxMaxim (for the sound). The biggest obstacle was passing the button data around to all the classes and making sure it was always mapped correctly, but I think I achieved the result I was hoping for. It also took me a while to get the text-to-speech element working, but this was solved by a very useful Github post (see references).
I would love to make it bigger and better. It would be great to include more pitches across a wider range, and to add brightness and volume controls. It would also have been great to set it up as a large-scale installation, but that was not possible in the time frame I had. In particular, I think it would be fun to set this project up on an 8.1 speaker array for 3D audio. I would like to continue working in this sort of aesthetic, although perhaps not on this project specifically. The use of long, extended, minimal tones on a 3D audio system, with complementary and directly relational lighting, would be amazing.
I think I was slightly too ambitious in how I wanted the final piece to look when set up, i.e. as a large-scale installation. This meant it never actually came to fruition. The graphics could also have been slightly more developed; I could have used more complex systems for the gradient effect, but the blooming quality of the blur was so good that I found it hard to compromise. I also wish I could have done more sound design; however, Maximilian is quite a limited audio library, more of a teaching tool, and so I had to work with what I had. Still, I am glad that I largely achieved what I set out to do, and that I never sought to overcomplicate. This meant I had a clear vision which I think comes across smoothly in practice, rather than being overly ambitious (in programming terms) and buggy or difficult to use. In that way I am very proud.
The only piece of code that I used from others was a class I found on Github which hooks into the text-to-speech accessibility function on macOS. Link here