Loglophone explores purposeless play with data and time in meditative environments. In an idealized state, it would function both as an exploratory object for novel environment creation and as an explanatory object for developing intuition for working with data and material.
PROTOTYPE OF A PROTOTYPE
Loglophone is an extension of a former prototype, InFormed, but has significant technical and conceptual developments outside the scope of the original work. Video documentation of InFormed is linked in this subsection as contextual framing for Loglophone's development.
This is my first foray into the sonic space. I have primarily trained as a visual artist - having an appreciation for, but nearly no experience with, creating sonic works. Three main artistic influences - mostly relating to sound - emerged throughout the development process.
During the initial stages of development, the work of Peter Vogel shaped some of the aesthetic and conceptual framing. While Vogel's work is geared towards revealing time structure in sound, I wanted to use sound and light to also tease out the nature of linear time. While my final physical object is visibly distant from Vogel's form-follows-function structures, his influence is notable in the included documentary photographs and in the construction of the photocell sensors.
"Sound for me was just another possibility to show time patterns. I was not interested to make sound sculptures. I wanted to show how the object changes its structure of reaction. If you show a time structure by mechanical or optical means, you cannot see details, but with sound you can hear the smallest details of changing time patterns. So I decided to use sound."- Peter Vogel 1
The last two of my three influences have lived in my head for years - long before these experiments. The second influence is Ambient 1 from Brian Eno's Music for Airports. Perhaps Eno would have created similar prototypes had today's technology existed during the production of the album. A similar interest is noted in Working Backwards:
"Studying fine art in England, Eno was interested
in the possibilities light could offer as an artist’s
medium. However, it quickly became apparent that
light was far harder to manipulate than sound.
There was no equivalent to sound-synthesizers
in the world of light, and, after some interesting
early experiments, Eno turned his attention to music."
The third and final primary artistic influence is The Canyon Wants to Hear C Sharp from Andrew Bird's Echolocations: Canyon. Loglophone is able to create sound textures similar to that of stringed instruments in the style of Bird. I am curious to see how trained instrumentalists would interact with Loglophone. Is the physical interaction familiar? Does the physical interaction feel intuitive? Do I feel drawn to think with this object?
Loglophone was an even balance of prototyping with hardware and software. While in some ways separate practices, each informed the other. My software decisions were greatly influenced by my hardware decisions and vice versa.
As for programming language, I chose to use PureData instead of MaxMSP, the latter being the course standard. After looking at my proposed prototype from various angles, I decided to trade the more feature-, object-, and community-rich MaxMSP environment for PureData's FOSS licensing and ability to run on embedded Linux systems. The succinct PureData turned out to be the best tool for this particular task. While the two languages share an initial creator, they do have a few subtle differences. One example outlined in the notes below is that the languages handle execution order and depth-first message passing in subtly different ways.
Previously, in InFormed, I used Arduino and C++ to make tones with Pulse Width Modulation (PWM). In Loglophone I used the Raspberry Pi Zero (RPi0) and PureData (Pd) to make tones and other sounds with PWM. By default the RPi0 outputs audio through its HDMI port. I was able to study its pinout and reconfigure/redirect the sound output to pin BCM 18 for PWM 0 and to pin BCM 13 for PWM 1. This allowed for simple multiplexing of 2 channels of sound per board (usually the left and right headphone channels). In a future iteration I will probably use a Bela board. For now it was a pleasure to learn more about the Pi and Unix systems.
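On Raspbian this kind of rerouting is done declaratively in the boot configuration. A minimal sketch, assuming the stock audremap device tree overlay - the exact overlay and parameter names vary by kernel release, so check /boot/overlays/README on your image:

```shell
# /boot/config.txt on the RPi0
# Move the PWM audio path off HDMI and onto header GPIOs.
# The pin-pair parameter below is an assumption; confirm the
# supported options in /boot/overlays/README for your kernel.
dtoverlay=audremap,pins_18_13   # PWM 0 -> BCM 18, PWM 1 -> BCM 13
```

A reboot is required for the overlay to take effect.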
I made a number of simple experiments to test the sound quality of transmitting audio through LEDs on the RPi0. In future iterations I will experiment with playing Gregorian chants to create polyphonies through the lights. It seemed like a natural pairing, with many tones reminiscent of organs. Perhaps this would be a tool fit for Anathem.
I then started to experiment with Pd's Objects for sending control Messages. I worked my way all the way up to netsend to create a - more or less - wireless, stand-alone Loglophone. This involved creating a local private network, assigning static IP addresses and auto-join networking for each RPi0, and creating TCP streams between master and child nodes to execute the distributed Pd patch. I also SSHed into each of the 4 RPi0s from my master computer on startup so that I could monitor and execute the GUI-less Pd patches running remotely.
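The per-node bring-up can be sketched roughly as follows. The addresses, hostnames, and patch paths here are hypothetical placeholders, not the ones from my setup; the `-send` startup flag is a standard Pd option for firing a message (here, enabling DSP) as the patch loads:

```shell
# Static IP for a child node, set in /etc/dhcpcd.conf on that RPi0:
#   interface wlan0
#   static ip_address=192.168.4.11/24
#   static routers=192.168.4.1

# From the master machine, launch the headless patch on the child node
# and switch DSP on at startup:
ssh pi@192.168.4.11 'pd -nogui -send "pd dsp 1" ~/loglophone/node.pd &'
```

Keeping each SSH session open also gives you the node's console output for monitoring, as described above.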
Control Messages were sent from the master computer to the child RPi0s. The RPi0s then created audio Signals remotely to run on their own hardware. This sent less data across the network and worked around some known issues with playing many channels of audio on a single machine. This setup also allows for some scaling, but it is definitely not the best overall option for large-scale multiplexing.
As an aside, for fun I even made my own cloned Raspbian Stretch disk image for a Loglophone RPi0 node with the full OS, pre-installed Pd, Loglophone patch files, and preset configs. This made adding/replacing RPi0 nodes on the network basically instantaneous. I thought this could be useful in installation settings if things were to go south during exhibition time.
The basic file structure is as follows: There is 1 Pd file on the master machine that initializes all TCP connections to the child RPi0 nodes. This file also allows you to pick a destination node and send it a control message containing the node number, algorithm number, and root MIDI value for the light to play. There is 1 Pd file on each child node for receiving control messages via TCP and routing the MIDI value to the correct algorithm, contained in its own Pd abstraction. Ideally this would encourage others to write and share their own Pd abstractions for the Loglophone. There is 1 Pd abstraction file for each algorithm on each child node. By executing Pd on the child node with the -nogui flag, we are able to use the RPi0 without any peripherals besides the two LEDs.
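Pd's netsend/netreceive objects speak the simple FUDI protocol: space-separated atoms terminated by a semicolon. A minimal Python sketch of building and sending such a control message to a child node - the function names, port, and message layout (node, algorithm, root MIDI) are my illustration of the scheme described above, not the actual patch internals:

```python
import socket

def control_message(node, algorithm, root_midi):
    """Build a FUDI-style control message for Pd's [netreceive]:
    space-separated atoms terminated by a semicolon and newline."""
    return f"{node} {algorithm} {root_midi};\n".encode()

def send_to_node(host, message, port=3000):
    """Send one control message to a child node over TCP.
    The port is hypothetical; it must match the child patch."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(message)

# Example: tell node 2 to run algorithm 1 with root MIDI note 60 (middle C)
msg = control_message(2, 1, 60)
print(msg)  # b'2 1 60;\n'
```

On the child side, a [route] on the node number would pass the algorithm number and MIDI value on to the matching abstraction.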
While software controls the frequency of the audio signal, the distance of the physical sensor dictates the amplitude and sequencing. This made me rethink how to structure my code compared to the purely software-based programs I wrote at the beginning of the semester. This led me to explore physically performing amplitude modulation and volume control of signals by using multiple sensors instead of code. Below shows the process of creating multiple photo-sensors that could be embedded in a glove or other hand-based objects. One sensor (or many) is able to power a set of headphones with no external power and can interact with Bluetooth modules.
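In software terms, what a hand over the photocell performs is amplitude modulation: the patch fixes the carrier frequency from the root MIDI value, while an envelope in [0, 1] scales it. A minimal Python sketch of the equivalent computation (the function names are mine, and the standard MIDI-to-frequency formula is assumed):

```python
import math

def mtof(midi_note):
    """Standard MIDI-to-frequency conversion (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def modulated_sample(t, midi_note, envelope):
    """One sample of a sine carrier scaled by an amplitude envelope
    in [0, 1] -- the scaling the sensor distance performs physically."""
    return envelope * math.sin(2 * math.pi * mtof(midi_note) * t)
```

Replacing the envelope computation with a sensor moves this multiplication out of code and into the performer's hand.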
In some ways this creation feels like it could be classified as 'digital folk'. It is reminiscent of hang drums and steel drums while existing within a different technological context. I quite like the idea that it is of two worlds. In the future I'd like to develop the physical shape of the Loglophone and interface it with a looper for complex sequencing and jam sessions with the self.