Goldsmiths Digital is a consulting arm of the Department of Computing at Goldsmiths, operating out of the Embodied Audiovisual Interaction Group. We help SMEs by producing technology prototypes through user-centred design and evaluation methodologies.
Goldsmiths Digital has raised £200,000 in its first 18 months of operation, working with approximately 40 SME partners to create prototypes, many of which have become commercial products.
Prototyping is an effective research method that can lead to research outputs of high value and high impact. We treat prototyping as a form of research in the wild, articulating notions of research-led and research-informed practice through the paradigm of the prototype. Prototypes are evaluated using a range of appropriate existing methods. In addition, new evaluation methodologies form part of our research agenda, such as methods for understanding iterative user-centred design research.
We treat research in accessible design as a real-time, in-the-wild method of producing better prototypes. Working equally with disabled and non-disabled communities may lead to research that is more easily reproducible, more easily deployable, and therefore of higher quality. This follows on from existing research on ‘trickle-up’ design, in which users with very specific interaction needs help to evaluate the usability of prototypes intended for non-disabled users, leading to more deployable and usable software and hardware.
Firef.ly is a £500,000 research and innovation project to facilitate the development of a mobile app that applies machine learning to user behaviour in order to make recommendations for travel and trip planning. The company itself has also raised significant sums, approximately US$1 million in total.
Rapid-Mix is a €2.2 million H2020 EU-funded project bringing together research labs and creative companies, with the aim of bringing innovations in interactive technologies to users.
I am the innovation manager for the project, leading other academics at Goldsmiths, Ircam (Paris) and UPF MTG (Barcelona), in partnership with PLUX, Reactable, Roli, Somethin’ Else, and Orbe.
Soundlab Framework Project
Soundlab Framework is funded by NESTA/ACE/AHRC Digital R&D. SoundLab aims to find simple and effective ways to help people with learning disabilities express themselves musically and collaborate with others, using both readily available music technologies and cutting-edge research in interface design and machine learning. We want to show how technologies can be brought together and combined to give users new ways to make music.

We’re carrying out a series of workshops and events where users, developers, educators and members of the project team experiment with different combinations of technologies, in different environments and with different groups of users. Through these sessions we evaluate as a group what works and what doesn’t. Each session will be written up as a series of experiments and posted on the SoundLab site with audio, video and photos of the session and how it went, together with a conclusion or outcome, so that they can be useful to other arts organisations and individuals and over time build into a valuable resource. You can check out the website here :
Maximilian is a C++ audio library designed to be easy to use and teach with. It has a number of interesting features: it avoids block-based processing, generating audio sample by sample and relying on the compiler to perform the necessary optimisations. This makes it easy and flexible, both in terms of development and use.
You can find out about it here :
Sound, Image and Brain :
This project is in two parts. The first is around developing accessible audiovisual software, some of which is open-source. The second is about trying to improve commercial brain-computer interface technology through better algorithms. I’m working with a games company called Roll7, and a company called Neurosky. Below is some blurb about it.
“Through a multidisciplinary approach that draws on perception and cognition, media engineering, therapy, interactive gaming, sound, music and audiovisual arts, this project takes completed research in brain-computer interfaces, audio-visualisation, participation and gaming, and develops it in partnership with industry and public organisations by engaging more fully with those within the public sector who both stand to benefit from, and also contribute to the creation and enhancement of consumer-grade real-time interaction hardware and software for brain-computer interfacing and technology-led creativity.”
This work was based on my previous research project, Cultural Processing, and funded by the Arts and Humanities Research Council.
Cultural Processing :
Cultural Processing is a multidisciplinary approach to thinking about our experience of sounds and images in a way that fuses art and science. Influenced by cybernetics and systems theory, the Cultural Processing project began life as an investigation of the relationship between cognition, perception, audiovisual art and composition, incorporating signal processing and segmentation, brain-computer interfaces (EEG), information retrieval, aesthetic processing, gaming, live electronics, software development, accessibility and experimental electronic arts. It has implications for the study of experimental sound and image practice, but also demonstrates utility with respect to industry and public service.
This work was supported by an AHRC Fellowship in the Creative and Performing Arts.
Really Old Stuff
Brain Computer Interface for Music
This project has featured heavily in international press, with articles in print media, radio and international TV news.
Musicians may soon be able to play instruments using just the power of the mind. Researchers at Goldsmiths, University of London have developed technology to translate thoughts into musical notes.
BBC article on the BCI for Music
This project was produced in collaboration with the Sonic Arts Network (now Soundandmusic.org), Whitefields School for the disabled, the South Bank Centre and London Philharmonic Orchestra. It has also generated a large degree of international press attention, and demonstrates significant potential as a music and speech therapy tool.
Deaf children have been testing software that enables them to see a visual representation of sound waves. Called Lumisonic, the software translates sound waves into circles that radiate on a display. It creates a real-time representation of sound and is designed to elicit responses quickly in the human brain.
BBC Article on Lumisonic – Visualisations for the Deaf and Hard of Hearing
Download Lumisonic for Free from the Sonic Arts Network
Daphne Oram :
I’ve been Director of the Daphne Oram Collection since 2008. Daphne Oram was a pioneering musician, composer, audio engineer and interface designer who had a massive impact on British electronic music history, but died almost completely unacknowledged.
The Daphne Oram Collection
I cut my C++ teeth working on Michael Casey’s Soundspotter.
Strangeloop is an invisible college of friends, and a software company.
A now-defunct software project; at its peak, it attracted over 2,000 downloads a day.