Sounds of the Gold Coast

Sounds of the Gold Coast is a playful audio piece which uses an African sculpture as the interface for music production and manipulation.

produced by: Karen Okpoti

Introduction

The motivation for this project was to explore the range of sounds created in both traditional and modern genres of Ghanaian music, and to demonstrate the vibrant, highly spirited variety of its musical instruments. The project therefore uses a sculpture as a musical instrument and as an interface for sound mixing to recreate Afrobeat sounds. The main artistic inspiration was Kwame Akoto-Bamfo, a Ghanaian artist and sculptor whose portraits commemorate Africans who were kidnapped and imprisoned into slavery, and whose work seeks to show the richness of their communities before that history.

Concept and background research

The Waruwo and Donno are musical instruments I was surrounded by when growing up, and the sounds they generate have inspired much of the Afrobeat, Afro-pop and Afro-techno genres. These instruments come from the Akan tribes in Ghana, and I wanted this project to pay homage to that. The idea was therefore to use a combination of Afrobeat sound samples to create a musical instrument that generates different soundscapes while also distorting and manipulating those sounds like a music mixing device. The name of the project itself pays homage to the history of Ghana, which was known as the Gold Coast before independence in 1957 because of its rich gold mines during colonisation.

Traditional Ghana Music Culture - The Ghana Project 2013.

Technical

During the initial planning stages, one idea was to use the human face as the sound interface. However, there was too much movement, which resulted in glitchiness in the sound samples. The sculpture was a much better and more stable option: the sculpture is responsible for playing the different sound samples, while the hand controls the sound manipulations. I ran separate trials with the PS3Eye camera and my inbuilt webcam; the PS3Eye coped slightly better in adverse lighting conditions. I realised how temperamental the application was depending on the lighting, and this often affected performance.
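A minimal sketch of how the two capture options can be swapped is shown below, assuming the usual ofVideoGrabber / ofxPS3EyeGrabber pattern; the resolution and the toggle flag are illustrative rather than the exact values used.

#include "ofMain.h"
#include "ofxPS3EyeGrabber.h"

class CameraSketch {
public:
    ofVideoGrabber grabber;

    void setup(bool usePS3Eye) {
        if (usePS3Eye) {
            // route capture through the PS3Eye driver instead of the default webcam
            grabber.setGrabber(std::make_shared<ofxPS3EyeGrabber>());
        }
        grabber.setup(640, 480); // otherwise falls back to the built-in webcam
    }

    void update() {
        grabber.update();
        // each new frame is then handed to the OpenCV images used for detection and tracking
    }
};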

Face Detection

I used OpenCV and the OpenCV Haar cascade XML files to detect faces. Initially the aim was to track the face, eyes, nose and mouth individually. However, the results were too glitchy and temperamental, so I decided to use only the Haar face detector and create grid lines within the detected face rectangle to separate the features. This gave me the effect I desired, although I was a little disappointed that it was not an exact detection of the features but rather an estimation of their positions. I also had to decide whether to create identifiers for the different faces detected or identifiers for the blobs, and found that having IDs for the blobs was the simplest solution.
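Below is a simplified sketch of this approach using ofxCvHaarFinder from the ofxOpenCv addon; the cascade file name is the standard OpenCV one, and the grid proportions are illustrative assumptions rather than the exact values used.

#include "ofxOpenCv.h"

ofxCvHaarFinder haarFinder;

void setupDetector() {
    haarFinder.setup("haarcascade_frontalface_default.xml"); // standard OpenCV cascade
}

void findFaceRegions(ofxCvGrayscaleImage& gray) {
    haarFinder.findHaarObjects(gray);
    for (auto& blob : haarFinder.blobs) {
        ofRectangle face = blob.boundingRect;
        // grid lines within the detected face, estimating the feature positions
        ofRectangle eyes (face.x, face.y + face.height * 0.25f, face.width, face.height * 0.20f);
        ofRectangle nose (face.x + face.width * 0.30f, face.y + face.height * 0.45f,
                          face.width * 0.40f, face.height * 0.25f);
        ofRectangle mouth(face.x, face.y + face.height * 0.70f, face.width, face.height * 0.20f);
        // each region is later tested against the tracked blob to trigger its sound sample
    }
}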

Colour/blob tracking

This part was initially very challenging, as I needed to include blob lifespan and persistence to track and join the different blobs. Each blob required an identifier and label, which facilitated tracking. However, the blobs sometimes disappeared, which meant that the sound sample would restart or start glitching. I overcame this by changing the blob distance parameters, thus optimising my code.
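The sketch below illustrates this ID-matching idea, loosely following the nearest-neighbour approach from the Daniel Shiffman tutorial listed in the references, translated to C++; the distance threshold and lifespan values are assumptions.

#include "ofMain.h"

struct TrackedBlob {
    int id;
    glm::vec2 pos;
    int framesMissing; // lifespan counter: how many frames the blob has gone unseen
};

std::vector<TrackedBlob> tracked;
int nextId = 0;
const float maxMatchDist = 60.0f; // pixels; tuning this removed most of the glitching
const int   maxLifespan  = 10;    // frames a blob may disappear before it is dropped

void updateTracking(const std::vector<glm::vec2>& detected) {
    for (auto& t : tracked) t.framesMissing++;
    for (const auto& p : detected) {
        TrackedBlob* best = nullptr;
        float bestDist = maxMatchDist;
        for (auto& t : tracked) {
            float d = glm::distance(t.pos, p);
            if (d < bestDist) { bestDist = d; best = &t; }
        }
        if (best) {                        // same blob as before: keep its ID
            best->pos = p;
            best->framesMissing = 0;
        } else {                           // genuinely new blob: give it a fresh ID
            tracked.push_back({nextId++, p, 0});
        }
    }
    // forget blobs that have been missing for too long
    ofRemove(tracked, [](const TrackedBlob& t){ return t.framesMissing > maxLifespan; });
}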

Sound Manipulation

I used different sound samples and created different parameters for when each sound plays. All of the sound samples used were Afro-inspired, and I had to ensure that there were smooth transitions between each sample. I therefore also included transition sounds and changes in volume and speed to move between the sound samples, so that the sound did not come to an abrupt stop as you touched different parts of the face.
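The sketch below shows the transition idea using openFrameworks' ofSoundPlayer; the sample file names and the fade rate are illustrative assumptions rather than the actual values used.

#include "ofMain.h"

ofSoundPlayer currentSample, nextSample;
float fade = 0.0f; // 0 = only the current sample is heard, 1 = only the next one

void setupSamples() {
    currentSample.load("afrobeat_loop_1.wav");
    nextSample.load("afrobeat_loop_2.wav");
    currentSample.setLoop(true);
    nextSample.setLoop(true);
    currentSample.play();
}

void startTransition() {
    // called when the hand moves to a new part of the face
    nextSample.play();
    fade = 0.0f;
}

void updateTransition(float speed) {
    fade = ofClamp(fade + 0.02f, 0.0f, 1.0f); // gradual crossfade, one small step per frame
    currentSample.setVolume(1.0f - fade);
    nextSample.setVolume(fade);
    currentSample.setSpeed(speed);            // speed is driven by the hand movement
    nextSample.setSpeed(speed);
}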

Optical flow

I used the optical flow code from the computer vision lectures as a base for the volume and speed control. As the hand moves up and down or left to right, it alters the speed and volume of the different sound samples. I had to merge the colour tracking and optical flow capabilities and ensure that these two separate functionalities did not interrupt each other.
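A simplified sketch of this mapping is given below, assuming dense Farneback optical flow from OpenCV (which ofxOpenCv links against); the mapping ranges and function names are illustrative assumptions.

#include "ofMain.h"
#include <opencv2/opencv.hpp>

cv::Mat prevGray;

void updateFlow(ofPixels framePixels, ofSoundPlayer& player) {
    framePixels.setImageType(OF_IMAGE_GRAYSCALE);
    cv::Mat gray(framePixels.getHeight(), framePixels.getWidth(), CV_8UC1, framePixels.getData());

    if (!prevGray.empty()) {
        cv::Mat flow;
        cv::calcOpticalFlowFarneback(prevGray, gray, flow, 0.5, 3, 15, 3, 5, 1.2, 0);
        cv::Scalar avg = cv::mean(flow);  // avg[0] = mean horizontal motion, avg[1] = mean vertical
        float speed  = ofMap(avg[0], -5, 5, 0.5f, 2.0f, true);  // left/right controls playback speed
        float volume = ofMap(-avg[1], -5, 5, 0.0f, 1.0f, true); // up/down controls volume
        player.setSpeed(speed);
        player.setVolume(volume);
    }
    prevGray = gray.clone(); // keep a copy of this frame for the next comparison
}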

Project Preparation

Demonstrating all the sound options and their locations.

Future development and Self Evaluation

All in all, I was very pleased with the outcome of this project, even though I faced many difficulties along the way. I envisioned the model sculpture being a little bigger; however, this is an issue that can be solved very easily at a future date. Furthermore, I do not have much experience using a DAW for sound sample generation, and I would have preferred to create and manipulate the sound samples myself, which I will do when I gain more knowledge of this area so that I can produce more authentic beats and sounds. In future I would also opt to create a unique ID for each new face detected by the camera, as this would simplify my code greatly and allow for more interfaces for sound production or manipulation, for example if I wanted to add another sculpture. I would also want to extend classes and systems to keep my main app as simple as possible.

I faced many issues with the user using their own hand as the touch object, because with OpenCV the human hand is often too large and covers the face, which makes face detection a lot more difficult. Therefore, in the future I would consider using other add-ons and libraries specifically for touch detection, as I do not think this was the most efficient way to create this form of interaction. In a different iteration of this project, I also hope to incorporate different sound generation algorithms to create more minimalistic sound distortions rather than using sound samples. I would also like to improve the audio reactivity of the visuals, as I feel that they could have been more complex, vibrant and fluid.

Link to project video with visuals - https://youtu.be/q6nxo_ky14k

Addons

ofxOpenCv / ofxOpenCvHaar

ofxPS3EyeGrabber

References

Technical references

Daniel Shiffman (Blob tracking) - Video: https://www.youtube.com/watch?v=r0lvsMPGEoY&t=894s. Code: https://github.com/CodingTrain/website/tree/master/Tutorials/Processing/11_video/sketch_11_9_BlobTrackingIDs

Lewis Lepton audio reactive video tutorial - https://www.youtube.com/watch?v=vGZC72fAaBI&t=710s

Lecture 12 WCC - OpenCV colour tracking

Lecture 13 WCC - Computer Vision (part 2).

Conceptual References

Artistic inspiration - Kwame Akoto-Bamfo  - https://kwameakotobamfo.com/

Augmented beat OF project -  https://www.youtube.com/watch?v=bPSqg6Okzdk

Sound samples

All samples used in this project were downloaded from Looperman - https://www.looperman.com/loops/tags/free-afrobeat-loops-samples-sounds-wavs-download.

President Nana Addo speech - Prelinger Archive (2020). https://archive.org/details/GHANAINDEPENDENCEDAYCELEBRATIONS-Ghanasankofa [Accessed 3rd May 2020].