Digital Music Research Network
EPSRC Network GR/R64810/01
DMRN-06: DMRN Doctoral Research Conference 2006
Goldsmiths College, University of London
Submission of paper abstracts: completed (31 May 2006)
Submission of pieces: completed (31 May 2006)
Notification of acceptance (papers and pieces): completed (9 June 2006)
Submission of camera-ready full papers: completed (30 June 2006)
The conference will be held at Goldsmiths College, University of London.
For location and directions, see Goldsmiths: Locations and Directions
The nearest station is New Cross Gate, which is easily accessible by trains from London Bridge and on the East London Line.
Registration is in the Ben Pimlott Building, which is the new building with "GOLDSMITHS" in large letters facing you as you exit New Cross Gate station.
The registration form is here: dmrn06_registration.pdf. Please ensure it arrives no later than Monday 17th July. If you need to email the form rather than post or fax it, you can find its source here.
Goldsmiths College summer accommodation is ideally located and very reasonably priced. For details see the conference services web page.
If you require any further information, please contact Hamish Allan.
ANALYSING MUSICAL AUDIO
In this presentation I will talk about some recent work on the analysis of musical audio signals. Much of this work is directed towards extracting information about the musical objects and events underlying the audio that we hear. In this context, I will discuss tasks such as audio source separation, automatic music transcription, beat tracking, and object-based coding of audio. While I will not delve too deeply into the technical details, I will give some pointers to the techniques being used to tackle these tasks, such as independent component analysis, Bayesian signal modelling and sparse signal representations.
Field Work: From Arm-chair to Open-air Research in Sonic Arts
SEPARATING SOURCES FROM SINGLE-CHANNEL MUSICAL MATERIAL: A REVIEW AND FUTURE DIRECTIONS
Georgios Siamantas, Mark R. Every and John E. Szymanski
The problem of separating multiple audio streams from single-channel polyphonic source material is a very challenging one, since it is extremely underdetermined. Additional information is therefore essential to constrain the infinite solution set, depending on the application. This paper reviews the projects in our group that have addressed this problem, along with related research done elsewhere, illustrating how possible future approaches for the current doctoral study follow naturally from this foundation.
QUANTIFYING VIOLIN TIMBRE
Jane Charles, Derry Fitzgerald and Eugene Coyle
Although much research has been carried out on finding features for instrument recognition systems, little work has focused on the violin's timbre space. This paper uses signal processing techniques to investigate the effect a player may have on sound quality and, more generally, to quantify the violin timbre space. Suitable features from which a computer can assess the quality of a violinist's playing are considered, in particular the spectral flatness and spectral contrast measures. The eventual outcome of this work can be applied in various systems, including a teaching aid for the violin or other bowed string instruments, automatic music transcription, and information retrieval or classification systems.
Phrasing is a primary concern for performers in the process of interpretation, because its structure is associated with the music's formal designs; many empirical researchers have therefore considered the relationship between timing and dynamics in performance and phrase structure (see Todd 1992). Performers' tendency towards dynamic modification at phrase boundaries is most often discussed in relation to timing fluctuation in performance (e.g. Todd 1992; Dunsby 1995). For instance, Todd (1992) created an algorithmic model of tempo and dynamics through a series of filters, calling the relationship between expressive timing and dynamics the `motor action'. Previous empirical studies using Todd's (1992) model of performance include papers by Bruno Repp (1998, 1999a, 1999b, 1999c) on 103 commercially recorded performances of the first five bars of Chopin's Etude in E major.
In this paper, repeated renditions of the second movement of Prokofiev's Cello Sonata Op. 119 by Rostropovich and Richter (the historic première concert of 1950 and the 1955 studio recording) are analysed empirically and theoretically with reference to Todd's `motor action'. I use Sforzando (Johnson 1997) to measure tempo and dynamic modification; the accuracy of my method is approximately ±60 milliseconds. My empirical investigation finds that Rostropovich and Richter execute phrase boundaries in an identical fashion in the repeated renditions. Most phrase gestures in Rostropovich and Richter correspond to Todd's (1992) algorithmic model of `motor action'. Judging from how phrase gestures are shaped in the two performances in relation to my phrase analysis, it can be suggested that Rostropovich and Richter may have perceived the second movement of Prokofiev's cello sonata as neoclassical. To reach more conclusive remarks on the topic, more samples of performances and scores of Prokofiev's cello music should be investigated in a similar way, which will be the next step of the research.
WIND INSTRUMENT SYNTHESIS BY MEANS OF CYCLICAL SPECTRA
Background: The Variophon is a wind synthesizer developed at the Musicological Institute of the University of Cologne in the 1970s and 1980s, based on what was then a completely new synthesis principle: the pulse-forming process. The central idea of this principle is that every wind instrument sound can essentially be reduced to its excitation impulses, which always behave according to the same principles, independently of the fundamental. In a recent project, supported by the German Research Foundation (DFG), it is planned to rebuild the Variophon digitally in an improved version.
Aims: The aim of the software-based modelling of this synthesis principle is twofold: to create an experimental system for analyzing and synthesizing (wind) instrument sounds, and to build a synthesizer that would be an alternative to comparable physical modelling applications. On the one hand, this sound synthesis technique accounts for the place where the sound is generated; on the other hand, just a single breath controller is required to produce all the sound nuances that are possible on a real instrument.
Method: First, the analogue circuits of the Variophon's different instrument modules will be mapped onto a digital representation by means of the analogue circuit simulation software LTSpice. In a second step, the Digital Variophon will be rebuilt in Reaktor, the modular environment made available by Native Instruments, and finally the experimental system will be programmed in C++ by means of the VST development library (Virtual Studio Technology by Steinberg).
Results: The analogue circuit boards of the trumpet and bassoon have already been implemented digitally in LTSpice and NI Reaktor. Time- and frequency-domain analysis of instrument-specific static tones generated in pp, mf and ff, as well as of different intensity sweeps, shows close agreement between the Variophon and the Digital Variophon. As expected at the current phase of the project, spectral differences can be observed in comparison to an original trumpet or bassoon sound, resulting from the limited technical feasibility at the time.
Conclusions: A software-based Variophon makes it possible to bypass these restrictions, for example by synthesizing the excitation impulses of original instruments by means of cosinusoidal or polygonal impulses whose rising and falling edges can be adjusted freely. Furthermore, some important features of the sound production process, such as the multiplicative interconnection between pulse forming and breath noise, can now be taken into account.
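The pulse-forming principle described above can be illustrated with a minimal sketch: a train of identical raised-cosine (cosinusoidal) excitation impulses repeated at the fundamental period, so the impulse shape stays fixed whatever the pitch. All function names and parameter values here are illustrative assumptions, not taken from the Variophon project itself.

```python
import numpy as np

def pulse_forming(f0, duration, sr=44100, pulse_ms=1.5):
    """Pulse-forming sketch: identical raised-cosine excitation impulses
    repeated at the fundamental period; the impulse shape is independent
    of the fundamental frequency."""
    n = int(duration * sr)
    out = np.zeros(n)
    p_len = int(pulse_ms * 1e-3 * sr)              # fixed impulse length
    t = np.arange(p_len) / p_len
    pulse = 0.5 * (1.0 - np.cos(2.0 * np.pi * t))  # cosinusoidal impulse
    period = int(round(sr / f0))                   # samples per fundamental
    for start in range(0, n - p_len, period):
        out[start:start + p_len] += pulse
    return out

tone = pulse_forming(220.0, 0.5)  # 220 Hz 'wind' tone, 0.5 s
```

Changing `f0` changes only the repetition rate, never the impulse itself, which is the core of the pulse-forming idea; a real model would additionally shape the impulse per instrument and mix in breath noise.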
After a brief overview of the simulation of a linear, time-invariant system through digital convolution, the paper describes the various techniques for calculating the impulse response (IR) of the system to be simulated. For each technique, and for each excitation signal, we analyze the positive and negative aspects, and the problems and advantages that can guide the choice of one signal over another for simulating certain kinds of systems. Starting with IR extraction through the reproduction and recording of the Dirac delta (the impulse function), we analyze the advantages of this simple technique and the disadvantages connected with the impossibility of reproducing the impulse function correctly. The second technique discussed in the paper is the white and pink noise one: we reflect on the computational advantages of the FFT algorithm and on the phase problems of pseudo-random noise signals. We then describe the Maximum Length Sequence (MLS) signal, the shift register and XOR used to generate it, the recovery of the Dirac delta through cross-correlation between the original MLS and the one passed through the system, and the problems of this technique, which are strictly linked to the linearity of the system used to measure the IR. Finally, we discuss the sweep signal: a simple sinusoid, modulated in frequency by an exponential function, appears to be the best method for extracting the IR of various kinds of systems. The simplicity of inverting the sweep signal, and its robustness to the non-linearities of the measuring system, make this technique the most suitable for IR measurement of various kinds of systems.
A brief example of IR extraction from a dummy-head system (a Head-Related Impulse Response) then illustrates how this technique can be used for the simulation of all kinds of systems, from old-style compressors and equalizers to the best-sounding rooms.
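The sweep method described above can be sketched as follows, using Farina's well-known exponential-sweep formulation; the function names and parameter values are my own illustrative choices, not the author's implementation. The sweep is played through the system, and the recording is convolved with the time-reversed, amplitude-compensated sweep, leaving the impulse response as a spike.

```python
import numpy as np

def fft_convolve(a, b):
    """Linear convolution via FFT (direct convolution is too slow for long sweeps)."""
    n = len(a) + len(b) - 1
    size = 1 << (n - 1).bit_length()
    return np.fft.irfft(np.fft.rfft(a, size) * np.fft.rfft(b, size), size)[:n]

def exp_sweep(f1, f2, T, sr):
    """Exponential sine sweep and its inverse filter (Farina's method)."""
    t = np.arange(int(T * sr)) / sr
    L = T / np.log(f2 / f1)
    sweep = np.sin(2.0 * np.pi * f1 * L * (np.exp(t / L) - 1.0))
    # inverse: time-reversed sweep attenuated by 6 dB/octave, so the
    # combined spectrum of sweep * inverse is flat
    inv = sweep[::-1] * np.exp(-t / L)
    return sweep, inv

def measure_ir(system, f1, f2, T, sr):
    """Excite `system` with the sweep and convolve the output with the
    inverse filter; the IR appears as a spike near the middle."""
    sweep, inv = exp_sweep(f1, f2, T, sr)
    return fft_convolve(system(sweep), inv)

# sanity check on an identity 'system': a Dirac-like spike appears at index N-1
ir = measure_ir(lambda x: x, f1=50.0, f2=3000.0, T=0.5, sr=8000)
```

A key property, noted in the abstract, is that harmonic distortion products of the sweep end up before the main spike after deconvolution, so they can simply be windowed away.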
The purpose of this paper is to examine and compare multi-channel compositional practices within the scope of electroacoustic music, and to discuss arising issues regarding the compositional methodology, rationale, performance and dissemination of multi-channel works, with examples drawn from the author's compositional output. The paper describes principal theories of musical space and draws parallels between those and compositional practices, discussing the articulation of acoustic space as another expressive dimension of the musical language with reference to specific multi-channel formats.
SHAPING SOUNDS IN YORK MINSTER
This article describes the process of designing and performing an installation at York Minster, taking an existing piece of electroacoustic music as its point of departure. The creation of a soundtrack relating to the location is described, using an asymmetrical disposition of loudspeakers as a flexible method. Strategies for an integrated spatial and temporal evolution over a 12-channel speaker system in the hall are outlined, considering the movement of sound in relation to the movement of the audience. Some conclusions are drawn about the approach used to create the soundtrack from the original piece, as well as the method used for the implementation. Finally, some suggestions are outlined for an improved version of the installation, which could have a longer duration and could take place outside the context of a concert.
This paper introduces the concept of recurrence within acousmatic music, and explores its potential as an approach to both novel composition practices, and the examination of existing musical works. Notions of musical structuring or semblances of formal organisation can often be traced to the perception of recurrent phenomena within a work. The process of recognising returning sound identities and their transformations, drawing links between them, and trying to understand the various interrelationships can be a rewarding aspect of the acousmatic music listening experience. These sound material connections can be made through all manner of perceivable characteristics, including source associations, more subtle spectral attributes, or an evident process of progressive transformation. This paper will explore the concept of recurrence in terms of sound material identity and temporal relationships, and demonstrate its potential application to both compositional thinking and the critical examination of acousmatic works.
ENTROPY BASED BEAT TRACKING EVALUATION
Matthew E. P. Davies and Mark D. Plumbley
In this paper, we present a novel approach to beat tracking evaluation, based on finding the error between automatically generated beat locations and ground truth annotations. The error is normalised to the current inter-annotation-interval, such that the greatest observable error can be ±50% of a beat. We form a histogram of normalised beat error, from which we estimate the entropy as a measure of beat tracking performance, where low entropy indicates accurate beat locations, with the converse true for high entropy. We evaluate the performance of a human tapper in conjunction with three published beat tracking algorithms over an annotated test database and compare the results of our entropy based approach to existing evaluation methods.
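The evaluation idea can be sketched in a few lines, reconstructed from the abstract alone; the bin count and the handling of the local inter-annotation interval are my assumptions, not details from the paper.

```python
import numpy as np

def beat_error_entropy(beats, annotations, n_bins=40):
    """Entropy of the normalised beat-error histogram: low entropy means
    errors are tightly concentrated, i.e. accurate beat tracking."""
    errors = []
    for i, a in enumerate(annotations):
        # error from this annotation to the nearest estimated beat
        e = beats[np.argmin(np.abs(beats - a))] - a
        # normalise by the local inter-annotation interval
        if i + 1 < len(annotations):
            iai = annotations[i + 1] - a
        else:
            iai = a - annotations[i - 1]
        e /= iai
        # wrap into [-0.5, 0.5): at worst half a beat out
        errors.append((e + 0.5) % 1.0 - 0.5)
    hist, _ = np.histogram(errors, bins=n_bins, range=(-0.5, 0.5))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# perfect tracking: every error falls in a single bin, so the entropy is zero
annotations = np.arange(0.0, 10.0, 0.5)
h = beat_error_entropy(annotations.copy(), annotations)  # equals 0.0
```

Unlike a simple hit-rate with a fixed tolerance window, this measure also distinguishes consistently biased tapping (one off-centre bin, still low entropy) from erratic tapping (errors spread across many bins, high entropy).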
ENSEMBLE EMPIRICAL MODE DECOMPOSITION APPLIED TO MUSICAL TEMPO ESTIMATION
Michael Fulton and Prof. J. J. Soraghan
Knowledge of the tempo of a piece of music is not only a very important part of any music transcription system but also has many uses of its own, from automatic segmentation to video synchronisation. The purpose of this paper is to investigate the suitability of Empirical Mode Decomposition (EMD) for this task. EMD has already found uses in many areas, such as speech processing and biomedical applications, where the core physical processes involved in creating the data are of importance. It is for this reason that EMD followed by Hilbert spectrum calculation was applied to metre analysis.
AUTOMATIC CONTENT-BASED HYPERMETRIC RHYTHM RETRIEVAL APPROACH
Jaroslaw Wojcik and Dmitry Zhatukhin
A method based on the recurrence of melodic and rhythmic patterns in various representations of music, together with a hybrid method of hierarchical rhythm retrieval employing the most promising set of ranking methods, has been conceived. On the basis of this novel approach, and of the authors' former research on metric rhythm concerning the rhythmic salience of sounds, an application called DrumAdd is proposed, which accepts a symbolic representation of music as input. The system automatically generates a drum accompaniment to a given melody on the basis of the hypermetric hypothesis ranked first among all hypotheses. The paper also describes other studies on rhythm retrieval with respect to their applicability in a system of automatic drum accompaniment. Details of the experimental setup and the results obtained are presented, and conclusions are drawn concerning the quality of the engineered methods.
SEQUENTIAL INFERENCE OF RHYTHMIC STRUCTURE IN MUSICAL AUDIO
Nick Whiteley, A. Taylan Cemgil and Simon Godsill
This paper presents a framework for the modelling of temporal characteristics of musical signals and an approximate, sequential Monte Carlo inference scheme which yields estimates of tempo and rhythmic pattern from MIDI data. These two features are quantified through the construction of a probabilistic dynamical model of a hidden `bar-pointer' and a Cox process observation model. The capabilities of the system are demonstrated by tracking the tempo of a 2 against 3 polyrhythm and detecting a switch in rhythm in a MIDI performance.
FURTHER ASPECTS OF SIMILARITY
Hamish Allan and Geraint Wiggins
In this paper, we propose that it may often be useful to make judgements on similarity based on subjective perspectives rather than a ground truth. We describe a method of using example sets to capture particular aspects of similarity and find songs similar to those in the example set, discuss some example features from the rhythmic domain, and propose an experimental methodology for testing the effectiveness of the method.
SOUND WRITING AND REPRESENTATION IN A VISUAL PROGRAMMING FRAMEWORK
Jean Bresson and Carlos Agon
This article addresses the issue of the representation and manipulation of sounds in Computer-Aided Composition and presents related works in the OpenMusic visual programming environment.
REAL-TIME INTERACTIVE MUSICAL SYSTEMS: AN OVERVIEW
A. N. Robertson and M. D. Plumbley
We present an overview of developments towards interactive musical systems. A description of an interactive system is given and we consider potential uses for the automation of creative processes within live performance. We then look at the history of research into the problem of automatic accompaniment, discuss a variety of current interactive systems and present some ideas for future research.
Music composition is a complex practice that draws upon a range of human abilities. Situating this practice in a wide range of contexts, including those not traditionally considered relevant, can help us explore and understand the practice of music composition. In turn this can help composers interested in analysing their work to better apprehend and evolve their compositional methods. In particular, knowledge from science and methods provided by information technology are crucial in helping us record and investigate composers' working processes. As a practice-based researcher involved in both composition and software programming, my project explores a range of contexts from which to view music and develops a software system informed by these findings, with the aim of assisting composers in their practice. This paper concentrates on some methodological and aesthetic considerations, but will also touch on the design of relevant software.
A TRANSDISCIPLINARY APPROACH FOR USING A WIND CONTROLLER AS A BIOFEEDBACK DEVICE FOR RESPIRATORY CONTROL
Abhay Adhikari and Dr. Tony Myatt
The objective of this research is to use a wind controller to provide aural biofeedback which will allow the user to develop better respiratory control. This paper investigates the transdisciplinary scope of the research under the following contexts: Identifying the specifications of the wind controller by investigating and combining the unique methodologies of different respiratory techniques and therapies; and exploring sound synthesis and design to provide aural biofeedback. The paper also outlines different applications of the wind controller.
In Greek mythology, Atropos was one of the three Moirae, the Fates, the female deities who supervised fate rather than determined it. Atropos was the fate who cut the thread or web of life. She was known as the "inflexible" or "inevitable" and cut this thread with the "abhorred shears". Although the title is not directly related to the content of the work, it was chosen to reflect compositional processes and their relation to sound materials. Here, the direction of energy, and the movement and positioning in time and in more general structural relationships, is supervised and characterised by the intrinsic morphology of the sounds, as opposed to being deterministically formulated. In this respect, the choice of a Moira's name metaphorically indicates the acousmatic processes involved in the work's composition. Atropos is a highly abstract work and does not refer to anything outside of itself. Original recordings are not traceable in the work's sound world, and although most of the material has been synthetically generated, it exhibits physicality in content, character and behaviour.
The events portrayed in this piece are fictitious, and any resemblance to real events, past, present, or future, is entirely coincidental.
Nikos Stavropoulos was born in Athens in 1975. He studied piano, harmony and counterpoint at the National School of Music and the Nakas Conservatoire in Greece. In 2000 he graduated from the Music Department of the University of Wales, Bangor, where the next year he was awarded an MMus in electroacoustic composition, studying with Dr. Andrew Lewis. He has recently completed a PhD at the University of Sheffield Sound Studios with Dr. Adrian Moore, specialising in tape composition in stereo and multi-channel formats, as well as music for video and live electronics. His works range from instrumental to tape and mixed media. He has composed music for video and dance, and his music has been awarded mentions and prizes at international competitions (Bourges, 2000, 2002; Metamorphoses, Brussels, 2002; SCRIME, Bordeaux, 2003; Musica Miso, Portugal, 2004).
Breathing Space is an acousmatic work that uses the human voice as the only sound source. I have included verbal utterances and various extended vocal techniques but, as the title suggests, the driving force of the work is the breath and its potential to evoke different sensations of space. The piece explores the voice both as an expressive, humane tool of communication and as a more abstract, purely sonorous instrument. In general, Breathing Space continually overlaps the border between literal and metaphorical implications, and also the ambiguous relationship between 'natural' and processed sounds. The form proceeds as a relatively free exploration of these multiple vocal possibilities but comes to pivot on a transformation from intense saturation to extreme reduction.
Nichola is a first year PhD research student in electroacoustic composition studying primarily with Nick Fells at Glasgow University, supported by the AHRC. Her current compositional projects include explorations of the sonorous and expressive potential of voice, instrumental music with timbre as the primary structural determinant, and mixed-media installations in unconventional performance sites. In addition to her studies recent projects include SoundAround, an outreach workshop series with final performance at Perth Concert Hall, and Hold Your Breath, a collaborative, large-scale, soundscape installation for the Clyde Tunnel based on materials created by several community groups in Glasgow.
Early Morning is derived from five piano performances recorded in a variety of spaces over a period of several years. These performances incorporated both traditional and extended instrumental techniques, generating a wide variety of gestural and textural materials. Although these materials informed the overall unity of the piece, sound transformations proved to negate the piano as a recognisable source. Instead, the focus is upon the gradual accumulation and dispersal of spectral detail; these broad contours enhance the spatial impression, suggesting the expansive shaping of physical landscapes. The structure of the piece was inspired by the peaceful awakening of an early morning scene and its illumination in first light.
1st Prize in the Metamorphoses international competition 2006, category A
Adam Stansbie is a composer and sonic artist from Leeds, England. He studied Music Production at the Leeds College of Music, where he is now a lecturer in electroacoustic composition and sound production. He is currently working towards a PhD in Electroacoustic Composition at City University, under the supervision of Denis Smalley. His output is largely acousmatic, although he also composes instrumental music and has scored several short films. His works have been performed and broadcast both nationally and internationally at festivals and events including the 11th International Festival of Electro-Acoustic Music, Cuba; the 10th Santa Fe International Festival of Electroacoustic Music, USA; Festival Internacional de Arte Electronico 404, Argentina; and Art Trail Soundworks Live, Ireland. He recently received a residency prize at the Bourges International Competition 2006 and 1st Prize in the Metamorphoses international competition 2006, category A.
The shifting state of mind between consciousness and unconsciousness is sometimes accompanied by a sense of restlessness and anxiety. Hopefully, these feelings will subside.
Ambrose Seddon has a background in rock and electronic pop music. After graduating with a degree in music from Goldsmiths College, University of London, he spent a number of years teaching, while writing, producing and performing in the band Weevil, with releases through a number of independent record labels. He completed a Masters degree in electroacoustic composition at City University in 2004, and now continues his studies at City University as a PhD student, supervised by Denis Smalley. His acousmatic work has been performed in concert and broadcast on radio around the world.
CURTO DE RESPIRACAO
(Stereo. Total Duration: circa 10 mins)
1. Introduction: (If I had...)
2. Quaver Variations
3. Distension: (... a hI fI)
6. Coda: (Resolution)
- Old Bottles -- New Wine (8-channel)
7. No. 1 (Movts: 2,4&5)
8. No. 2 (Movts: 1&6)
9. No. 3 (Movts: 1-6)
Curto de Respiracao (Portuguese: short of breath, breathless) uses vocal utterances as the basis of musical composition. Each miniature is designed as an individual stand-alone composition, but together they form a cohesive structure in which the sounds of each miniature are interpolated, transformed and woven within the overall framework.
The work encompasses Old Bottles -- New Wine: different variations using simultaneous playback of the stereo movements. Each expresses different sound groups used within the piece. This compositional process is analogous to certain compositional practices found in the motets, rondeaux and virelais of the twelfth and thirteenth centuries.
Technical Note: Curto de Respiracao is a mixed-media work using stereo and octophonic playback. Stereo works are diffused around the system by the composer/performer. Octophonic works are not diffused, and require an equilateral 8-speaker layout to realise the performance considerations.
David is a composer studying for a doctorate in electroacoustic composition at Keele University with Rajmil Fischman and Diego Garro supported by a grant from the university. His research and compositions are concerned with the application of sound transformation in electroacoustic musical discourse.
This piece, made entirely with sounds originated from recordings of the gamelan, has been an attempt to capture some of the body and clarity of the sound of the gamelan orchestra in its soft and loud ensemble. The idea has been to explore the tonal and percussive character of the gamelan using changes in tempi to create transitions from long sounds heard as timbral tonal textures into rhythms and vice-versa. Inspired by some ideas borrowed from Javanese music, the piece evolves in an expanding cycle. This piece was composed for the 25th anniversary of gamelan Sekar Petak at the University of York and is dedicated to its enthusiastic founder Neil Sorrell.
Born in 1972 in Santiago, Chile, he studied acoustics in Chile and the perception of sound in Denmark, where he worked for several years as a researcher in the fields of musical acoustics and computer music. He studied with the Danish composer Anders Brødsgaard and has written mostly electroacoustic music for tape and live-electronics pieces, as well as music for dance and sound installations. His music has been played at festivals in Europe and the Americas. He is presently doing a PhD in composition at the Music Department of the University of York in England, with the composer Ambrose Field as tutor.
For more information see: www.otondo.net
The Call for Paper Abstracts is closed. The call follows for information only.
Submissions will initially take the form of an extended abstract of between 500 and 750 words giving an overview of the intended content. In addition, primary authors are asked to indicate their status as one of the following:
Although paper acceptance will depend on relevance to the conference and academic merit, preference will be given to those submissions where the primary author is a doctoral researcher.
Authors will be invited to present their work either by oral presentation (20 minutes) or as a poster. The conference committee will decide this allocation; however, authors may specify a preference when submitting their abstract.
The DMRN Doctoral Research Conference 2006 solicits contributions on the following topics:
Abstracts should be emailed to email@example.com no later than Wednesday 31st May 2006. Authors will receive email confirmation of whether their proposal has been successful by Friday 9th June 2006.
Authors will then be asked to submit a camera-ready version of their paper (in PDF format) by Friday 30th June 2006. LaTeX and Word templates for the final papers can be downloaded below. Final PDF submissions should not exceed 4 pages. To ensure your PDF document will print reliably in the proceedings, see e.g. the ISMIR 2005 guidelines on Producing Reliable PDF Documents.
The Call for Pieces is closed. The call follows for information only:
The Digital Music Research Network invites the submission of musical pieces to be performed during the DMRN Summer Conference 2006.
The conference aims to promote pieces by young and professional composers, born or living in the United Kingdom, that use digital technology in their creative process. Pieces will be scheduled into either the main conference concert or a composition workshop. The conference committee will decide this allocation; however, composers may specify a preference when submitting their work.
Pieces selected for the main concert will be performed through an 8-channel loudspeaker system in the Great Hall at Goldsmiths College. In contrast, the composition workshop will be a chaired session, and composers will be expected to contribute to discussion regarding their work. All composers participating should attend the conference and perform their pieces themselves.
Submissions should take the following format and fulfil all conditions in order to be eligible:
The conference committee will consider multi-channel works; submissions should be in stereo format with a description of technical requirements and a map showing the speaker configuration. Works involving a performer will also be considered, although allocation is subject to technical and performance availability. Submissions of this type should take the following format:
Wednesday 31st May 2006 - Deadline for submissions of pieces
Friday 9th June 2006 - Notification of acceptance of pieces
Submissions should be sent to:
70A Fountain Road
Final paper templates for both LaTeX and Word can be downloaded below. Please make final submissions in PDF.
|Last Updated: 17 May, 2006. © Queen Mary, University of London 2006|