Gesture Sound Experiments

Ross Bencina

Danielle Wilde

Somaya Langley

This page was last updated on 12 June 2008

On this page you will find:

Introduction
NIME'08 Conference Paper
Videos
Open Questions
Related Links
Acknowledgements

Introduction

This page documents outcomes of a residency undertaken by Ross Bencina, Danielle Wilde and Somaya Langley at STEIM, Amsterdam in July 2007. Our main goal was to explore and experiment with new methods for controlling and performing computerised sound using whole-body gesture.

Our approach was multifaceted and reflected the interests of the collaborators. Considerations included: physicality in the space, sonic and compositional form, structure and aesthetics, conceptual semantics, sensor technologies and applications. These concerns were used as the basis for devising experiments, some of which were undertaken without interactive technology. For example, in the early phases of the residency we experimented with movement-only composition, and later, some of the sound mappings were prototyped by improvising movement to pre-recorded sound.

The residency focused on two sensor technologies: 3-axis accelerometers (deployed using 7 Nintendo Wii Remotes), and a custom wireless ultrasonic range-finding system which we developed to measure the distance between performers. The sensor systems drove various sound synthesis algorithms running in a custom version of AudioMulch, using the Lua scripting language to specify the mappings between sensor data and sound. Max/MSP was used to translate the various sensor data to the Open Sound Control (OSC) protocol with the help of the aka.wiiremote Max external.
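
To give a flavour of how such mappings were specified, here is a minimal Lua sketch of a sensor-to-parameter mapping: a callback receives one accelerometer frame (as translated from OSC), smooths its magnitude with a one-pole lowpass, and scales the result onto a synthesis parameter. The callback and parameter-hook names (on_accelerometer, set_grain_rate) and the numeric ranges are illustrative assumptions, not the actual AudioMulch scripting interface.

    local smoothed = 0
    local SMOOTHING = 0.1                   -- one-pole lowpass coefficient (0..1)

    local function set_grain_rate(hz)       -- stub standing in for the host's parameter hook
      print(string.format("grain rate: %.1f Hz", hz))
    end

    function on_accelerometer(ax, ay, az)   -- one 3-axis accelerometer frame
      local mag = math.sqrt(ax * ax + ay * ay + az * az)
      smoothed = smoothed + SMOOTHING * (mag - smoothed)  -- smooth the magnitude
      local norm = math.min(smoothed / 3, 1)              -- assume a 0..3 g usable range
      set_grain_rate(5 + norm * 95)                       -- map to 5..100 grains per second
    end

    on_accelerometer(0.1, 0.9, 0.2)         -- a frame near resting gravity
    on_accelerometer(1.5, 2.0, 0.5)         -- a vigorous movement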

NIME'08 Conference Paper

The project has been described in our paper published at the 2008 New Interfaces for Musical Expression conference. You can read the abstract below, and download the paper in PDF format here.

Abstract

This paper reports on outcomes of a residency undertaken at STEIM, Amsterdam, in July 2007. Our goal was to explore methods for working with sound and whole body gesture, with an open experimental approach. In many ways this work can be characterised as prototype development. The sensor technology employed was three-axis accelerometers in consumer game-controllers. Outcomes were intentionally restrained to stripped-back experimental results. This paper discusses the processes and strategies for developing the experiments, as well as providing background and rationale for our approach. We describe “vocal prototyping” – a technique for developing new gesture-sound mappings, the mapping techniques applied, and briefly describe a selection of our experimental results.

Full Citation

Ross Bencina, Danielle Wilde and Somaya Langley. "Gesture - Sound Experiments: Process and Mappings." In Proceedings of the 2008 International Conference on New Interfaces for Musical Expression (NIME 2008), Genova, Italy, June 5-7, 2008.

Videos

During our residency we gave a public presentation of a selection of our experimental outcomes. You can view some edited videos of these outcomes by clicking on the images below (MP4 format). The paper linked above contains more detailed discussion and additional technical information about the mappings we used for each experiment.

Head Scrape

Head Scrape Video

A hyper-instrument in which a sound generator is triggered by the motion of one performer's head. The resulting sound is processed by a bank of resonators whose frequencies are modulated by the motion of a second performer. When a highpassed version of the first performer's acceleration exceeds a threshold, a gate is opened which causes a granular glitching sound to be generated. The processing performer wears two sensors, each controlling an amplitude-modulated delay line and a bank of spaced resonators. The modulation rate and resonator frequencies are modulated by the lowpassed velocity magnitude, while performer velocity controls the amount of signal entering the filter bank.
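
The trigger logic can be sketched in a few lines of Lua. Here the highpass is formed by subtracting a one-pole lowpass from the incoming acceleration magnitude; the control rate, lowpass cutoff and threshold are illustrative values, not those used in the patch.

    local CONTROL_RATE = 100                                        -- assumed sensor update rate (Hz)
    local LP_COEFF = 1 - math.exp(-2 * math.pi * 2 / CONTROL_RATE)  -- ~2 Hz one-pole lowpass
    local lp = 1.0                                                  -- initialised to resting gravity (~1 g)
    local THRESHOLD = 0.5                                           -- illustrative threshold, in g

    function process_acceleration(mag)
      lp = lp + LP_COEFF * (mag - lp)           -- slowly varying component
      local highpassed = mag - lp               -- fast transients only
      return math.abs(highpassed) > THRESHOLD   -- true opens the glitch gate
    end

    for _, m in ipairs({1, 1, 1, 3, 1, 1}) do   -- a sudden jerk among stillness
      print(process_acceleration(m))
    end

Running the loop prints false for the still frames and true only for the sudden jerk.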

Motion Shatter

Motion Shatter Video

A smooth continuous drone of Tibetan monks chanting is fed through a granulator. As the performer spins in a circle holding the sensor in an outstretched hand, the sound becomes less smooth. Spinning faster causes the sound to become gritty, and eventually to break up. It is necessary for the performer to spin in circles in an increasingly desperate manner to effect a complete cessation of sound. The controlling signal (lowpassed acceleration magnitude) reduces grain durations (from approximately 500 ms down to 10 ms) while increasing the randomised inter-onset time from 2.6 ms to 500 ms, causing the sound to slowly break up with increased centripetal acceleration.
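
A Lua sketch of this parameter mapping, assuming the lowpassed acceleration magnitude has already been normalised to a control value c in [0, 1]; the linear interpolation and the randomisation scheme are assumptions for illustration.

    math.randomseed(os.time())

    function grain_params(c)                      -- c: normalised control, 0..1
      local duration_ms = 500 + c * (10 - 500)    -- grains shrink: 500 ms -> 10 ms
      local base_ioi_ms = 2.6 + c * (500 - 2.6)   -- inter-onset grows: 2.6 ms -> 500 ms
      local ioi_ms = base_ioi_ms * (0.5 + math.random())  -- randomised around the base
      return duration_ms, ioi_ms
    end

    for _, c in ipairs({0, 0.5, 1}) do            -- at rest, spinning, frantic
      print(string.format("c=%.1f duration=%.0f ms ioi=%.0f ms", c, grain_params(c)))
    end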

Leg Ratchets

Leg Ratchets Video

Sensors are attached to the performer's lower legs. Each leg controls a similar synthesis patch. The patch iterates a pulse generated by gating a granular texture, with pulse rate, transposition and gain modulated by performer acceleration. When the sensor is at rest the pulse is slow, quiet and low in pitch. Movement of the legs results in accelerated pulses or rhythmic modulation. At some point an error was made which resulted in the performer having to move one leg to make sound, and the other leg to stop its corresponding sound. This opened up as-yet unconsidered possibilities, and provided a rich space for performer experimentation.
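
A minimal sketch of one leg's mapping, assuming a normalised acceleration magnitude a in [0, 1]; the pulse-rate, transposition and gain ranges are illustrative, not taken from the patch.

    function pulse_params(a)             -- a: normalised acceleration, 0..1
      local rate_hz   = 0.5 + a * 7.5    -- slow pulse at rest, rapid when moving
      local semitones = -12 + a * 24     -- transposition rises with motion
      local gain      = a * a            -- near-silent at rest
      return rate_hz, semitones, gain
    end

    print(pulse_params(0))               -- at rest:  0.5  -12  0
    print(pulse_params(1))               -- moving:   8.0   12  1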

Blades of Grass

Blades of Grass Video

Each performer wears a Wii Remote aligned to their spine, which is associated with a synthesis patch consisting of processed noise with a resonant filter swept according to the angle and direction in which they are leaning. The sensor's tilt direction is passed through a triangular shaper, which produces a periodic sweep as the performer rotates the tilt of their spine. This is multiplied by the amount the performer is leaning and mapped to the resonant filter's cutoff frequency.
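
In Lua, this mapping might be sketched as follows; the cutoff limits (100 Hz to 5 kHz) and the exponential sweep are assumptions for illustration.

    function triangle(theta)                         -- theta: tilt direction, radians
      local phase = (theta / (2 * math.pi)) % 1      -- wrap one rotation to [0, 1)
      return 1 - math.abs(2 * phase - 1)             -- 0 -> 1 -> 0 across a rotation
    end

    function cutoff_hz(tilt_direction, lean_amount)  -- lean_amount in [0, 1]
      local sweep = triangle(tilt_direction) * lean_amount
      return 100 * (5000 / 100) ^ sweep              -- assumed 100 Hz .. 5 kHz sweep
    end

    print(cutoff_hz(math.pi, 1.0))                   -- leaning fully, mid-rotation: 5000 Hz
    print(cutoff_hz(math.pi, 0.0))                   -- standing upright: 100 Hz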

Speed Harmonics

Speed Harmonics Video

The performer wears a sensor on each forearm. The sound world consists of two resonant harmonically tuned oscillator banks, one controlled by each arm. As the speed of the arms increases (sometimes requiring spinning the whole body), white noise and additional bass are faded in, and comb filters are swept across the spectrum, creating a swooshing sound. Sensor velocity (lowpassed at 4 Hz) sweeps the comb filter between 400 Hz and 4000 Hz with increased performer velocity, while velocity lowpassed at 1 Hz controls the introduction of the white noise and bass boost through a sweeping shelf filter. The filtered velocity signal is also quantised into 10 steps and used to select one of the harmonics of the oscillator bank: the velocity signal is applied to an envelope follower associated with the selected harmonic, which boosts or sustains that harmonic's level. When the velocity no longer excites a particular harmonic, it slowly fades to silence.
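
The harmonic-selection step can be sketched as below, assuming the smoothed velocity v is normalised to [0, 1]; the decay constant and the simple peak-holding follower are illustrative assumptions.

    local NUM_HARMONICS = 10
    local DECAY = 0.995                    -- per-update fade for unselected harmonics
    local levels = {}
    for i = 1, NUM_HARMONICS do levels[i] = 0 end

    function update_harmonics(v)           -- v: smoothed velocity, 0..1
      local selected = math.min(math.floor(v * NUM_HARMONICS) + 1, NUM_HARMONICS)
      for i = 1, NUM_HARMONICS do
        if i == selected then
          levels[i] = math.max(levels[i], v)  -- boost or sustain the chosen harmonic
        else
          levels[i] = levels[i] * DECAY       -- unexcited harmonics fade to silence
        end
      end
      return levels
    end

    update_harmonics(0.55)                 -- excites harmonic 6
    print(levels[6], levels[1])            -- 0.55   0.0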

Tone Change

Tone Change Video

Two performers each perform with two Wii Remotes, one in hand and the other attached to the hip. Each Wii Remote is associated with two sine wave oscillators. One is slightly detuned from the other, with the detune offset increasing from 0.01 Hz to 20 Hz with increased performer velocity. The amplitude of each oscillator pair is modulated by an envelope follower tracking performer velocity. The polarity of the filtered Z velocity is tracked: when the sensor has been at rest and starts moving again in the opposite direction, a new random note from a diatonic scale is chosen. Thus, the performers start and stop to change notes, and move in various ways to articulate their tones, creating slowly modulating random chord sequences.
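
A sketch of the note-change and detune logic for one Wii Remote; the rest threshold, the base octave and the major scale are assumptions for illustration.

    math.randomseed(os.time())

    local SCALE = {0, 2, 4, 5, 7, 9, 11}   -- major-scale semitone offsets
    local REST_THRESHOLD = 0.02            -- |velocity| below this counts as rest
    local was_at_rest = true
    local last_polarity = 0
    local base_hz = 261.63                 -- middle C as an assumed base note

    function update(z_velocity, speed)     -- filtered Z velocity; speed in 0..1
      local moving = math.abs(z_velocity) >= REST_THRESHOLD
      if moving then
        local polarity = z_velocity > 0 and 1 or -1
        if was_at_rest and polarity ~= last_polarity then
          -- movement resumes in the opposite direction: choose a new note
          base_hz = 261.63 * 2 ^ (SCALE[math.random(#SCALE)] / 12)
        end
        last_polarity = polarity
        was_at_rest = false
      else
        was_at_rest = true
      end
      local detune = 0.01 + speed * (20 - 0.01)   -- detune offset: 0.01..20 Hz
      return base_hz, base_hz + detune            -- the oscillator pair's frequencies
    end

    print(update(0.0, 0.0))     -- at rest: holds the current note
    print(update(-0.5, 0.3))    -- restarting downward: picks a new note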

Vocal Prototyping

Vocal Prototyping Video

The aim of vocal prototyping was to challenge our usual ways of thinking about movement and sound, and to begin to understand the kinds of relationships we might make between them. Through this process we generated a substantial amount of material and made concrete steps towards formalising a gesture-sound vocabulary.

We began by exploring a range of processes to develop appropriate sounds. Working individually, we identified sounds from the Freesound creative commons database, which we used as a basis for discussing and understanding the qualities of sonic space we each desired to create. This was followed by free-form sound generation using the voice only; physical performance-making sessions during which we vocalised sounds suggested by movement; and free-form movement and sound generation using the voice and entire body.

Torso Sweep

Torso Sweep Video

Open Questions

In each of the experimental outcomes we strove to maintain a balance in the relationship between movement and resultant sound that was easy to perceive for audience and performer alike. The mappings discussed were intentionally simple. The development of more complex mappings is a clear direction for further investigation.

Engaging the body in performance necessarily raises notions of the body as interface, and, for the audience, physical theatre, or theatre of the body. We feel that it is difficult to escape a theatrical mode of interpretation when confronted with a musical performer without an instrument, which of course also invites a dramaturgical mode of composition. We consider the dialogue between musical and theatrical creation to be a significant area for future development in whole-body gesture sound performance.

As previously observed by Bahn et al., performing with the whole body involves skills not always possessed by musicians; some of the authors are now considering training in this area to continue the research.

Finally, the sensor technology employed so far has been adopted as a pragmatic prototyping aid. We are now investigating options for smaller, wearable sensor platforms.

Related Links

Acknowledgements

We gratefully acknowledge the support of STEIM for hosting this residency. For their financial assistance we thank The Australia Council for the Arts, The Australian Network for Arts and Technology, Monash University Faculty of Art and Design and CSIRO Division of Textile and Fibre Technology.

All contents Copyright ©2007-2008 Ross Bencina, Danielle Wilde and Somaya Langley unless otherwise noted. Re-purposing of content from these pages without explicit permission is prohibited.