Donna Hewitt on Live Vocal Processing
Donna Hewitt is a vocalist, an electronic musician and inventor of the eMic. The eMic is a microphone stand enhanced with sensors, which Donna uses to control audio effects in her vocal performances.
In the video below we learn how Donna uses AudioMulch as a real-time vocal processing platform. The eMic controls audio effects with the help of AudioMulch's flexible MIDI mapping system.
Donna Hewitt talks about live vocal processing with AudioMulch
In this video Donna talks about her invention, the eMic: a gesture-driven controller built into a microphone stand. She demonstrates how she uses the eMic as an interface to control AudioMulch contraptions (via MIDI), which process her vocal input. Donna describes how she uses the eMic to control AudioMulch in a number of ways: to process and add effects to vocal sounds, to switch between patches, and to control surround-sound speaker setups. Donna also demonstrates some of the gestures she uses with the eMic, and talks about the visual element this adds to her performances, allowing her to interact more with the audience.
About Donna Hewitt
Donna Hewitt is a composer of electronic music, a vocalist, and the inventor of the eMic. Donna has performed widely both in Australia and internationally, including at Liquid Architecture 7, the 2011 ICMC in Huddersfield and the 2011 Brisbane Festival. In 2010 she undertook a residency with Julian Knowles at STEIM (STudio for Electro Instrumental Music) in the Netherlands. Donna has been awarded grants from the Australia Council for the Arts to develop the eMic and to create new works for it. Donna also lectures in the Music and Sound department at the Queensland University of Technology.
Donna introduces the eMic: Extended Mic-stand Interface Controller and talks about how she makes music with it.
I first started performing with a laptop and I sat there with a microphone and I'd sing and change the parameters and create a performance in that way, but I really missed being able to stand up in front of an audience and interact in the way I would as a singer. I started working with controllers — I just started originally with a joystick and I would use the joystick to manipulate and change the parameters while I was singing, or vocalizing, or making vocal sounds so I could create textures by processing the sound and I could layer that up. I could make a sound with my voice live, capture that, use that as a bed track and then start to layer other vocal sounds over the top of that. So...you know, I could sort of become a whole band if I wanted to.
I ended up building my own controller and I based it on a microphone stand, a tool that I would normally use in performance. I situated all of these sensors on the stand based on the kind of interactions I would use with the stand while I was performing. There's actually a tilt sensor in the base so I can use that while I am singing and that can be processing my sound in whatever way I choose.
I'm not very good at programming. I'm not sort of a tech-head really, which is one of the things I just love about 'Mulch. It's really easy to use. [Referring to screencast] This is a Pd patch; I'm just conditioning some of the [MIDI] signals there. I spit out the data from that and bring that into AudioMulch.
'Mulch has a really fantastic way of allowing you to shape the outputs that you're getting from this [referring to the eMic controller]. So basically they're all control voltage outputs. They go into a little box that spits out MIDI, so pretty much all of the parameters in 'Mulch can be controlled via MIDI.
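The signal path Donna describes here — sensor voltage in, MIDI Control Change messages out — can be sketched in a few lines. The 0–5 V range and the message-building details below are illustrative assumptions about a generic CV-to-MIDI converter, not the eMic's actual specification:

```python
# Sketch of the CV-to-MIDI step: each eMic sensor produces a control
# voltage, and a converter box turns it into a MIDI Control Change
# message that AudioMulch can then map to any parameter.
# (The 0-5 V range is an assumed, generic sensor range.)

def cv_to_cc_value(voltage, v_min=0.0, v_max=5.0):
    """Scale a control voltage into the 7-bit MIDI CC range 0-127."""
    # Clamp to the expected voltage range, then scale linearly.
    voltage = max(v_min, min(v_max, voltage))
    normalized = (voltage - v_min) / (v_max - v_min)
    return round(normalized * 127)

def cc_message(controller, value, channel=0):
    """Build the raw 3-byte MIDI Control Change message."""
    status = 0xB0 | (channel & 0x0F)  # 0xB0 = Control Change status byte
    return bytes([status, controller & 0x7F, value & 0x7F])

# A half-squeezed grip sensor on controller #19 (the CC number Donna
# assigns to the mic grip later in the video):
msg = cc_message(19, cv_to_cc_value(2.5))
print(msg.hex())
```

Once the data arrives as ordinary Control Change messages, AudioMulch's MIDI mapping can attach any sensor to any parameter without further programming, which is the flexibility Donna is pointing to.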
So one of the things you might like to do is just...put a little filter over the vocals. So I've just got a basic EQ there, and I've just set up some filtering. I've got this little contraption here called a CrossFader, and I can basically just map this gain so that I end up with this filtering happening on the voice in real time. I've already set this up, so I just set up my little MIDI control there. I want it to be controller #19, which is this mic grip here [manipulates the eMic's microphone clip]. You can see that when I squeeze that sensor, the tighter I squeeze it, the more post-filtered voice you're going to have. So I can do that stuff live while I'm performing and kind of play the effects, I suppose. You can use this little graph here [referring to the Parameter Control mapping curve in AudioMulch] to shape it so the sensor's doing exactly what you want in a musical way.
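The mapping curve Donna points to reshapes the raw controller value before it reaches the parameter, so that, for example, a light squeeze does very little and the last bit of pressure brings the effect in quickly. This sketch mimics that idea with a simple power curve; the exponent and output range are illustrative assumptions — AudioMulch's actual curve is drawn freehand in its Parameter Control window:

```python
# Illustrative stand-in for a parameter mapping curve: a 0-127 CC
# value is normalized, bent through a power curve, and scaled onto
# the target parameter's range.

def shape_cc(cc_value, exponent=2.0, out_min=0.0, out_max=1.0):
    """Map a 0-127 CC value onto a parameter range through a power curve."""
    normalized = cc_value / 127.0
    curved = normalized ** exponent  # exponent > 1: slow start, fast finish
    return out_min + curved * (out_max - out_min)

# Squeezing the grip sensor (CC #19) to sweep a filter's wet/dry mix:
for cc in (0, 32, 64, 96, 127):
    print(f"CC {cc:3d} -> mix {shape_cc(cc):.2f}")
```

With `exponent=1.0` the response is linear; values above 1 hold the effect back until the gesture is nearly complete, which is one way a curve can make a sensor behave "in a musical way."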
I want a distorted vocal at the climax of the piece. I want to tilt my stand and have a nice distorted vocal sound. So there you can kind of get more of a distorted sound [demonstrates stand tilt mapped to distortion effect]. Or even just reverb, if I just want to have a little bit of reverb. I've got these distance sensors here [referring to the top of the eMic] and the more I bring my hand out, the more reverb I can have, so I can be playing it [Donna demonstrates controlling the reverb mix level like a theremin gesture]. So you've also got a visual element to what's happening as well...but I can do this away from the computer, which is one of the reasons I wanted to have this mic stand setup.
Typically in performance I could have a timeline with some files that play back, and then I could stand away from the computer and just perform with my microphone, change the parameters and control the processing in real time.
Donna talks about other ways she uses AudioMulch
I can do whatever I want in 'Mulch with those controllers. I can generate a piece and move through patches in 'Mulch using the little buttons, the switches or foot controllers. I can add effects to the live vocal input. As I play the distance sensors it might be introducing some sort of reverb or moving the sound out towards the speakers.
I collaborate. I do performances with other musicians where we might be synced to a clock. So I could be sending them data, or I could send them a split of my vocal input, and I can do timing stuff; with the granulators I can have stuff happening that's working with the other performers.
I've worked with a choreographer. We set up the choreography, then we videoed it. Then, I could sit down at the computer with 'Mulch, I had the MIDI data, I had the visuals...so it's almost like if you had a film in front of you and you were composing to the film.
I've done some work with surround sound - like a four-channel setup. So I might sing and capture the vocal and then I could use the joystick to move that around the speakers, around the space. I've been using 'Mulch since 2001 and there's so many possibilities.
Recorded at Integrate 2010, Sydney, Australia.
Thanks to Andy Stewart and everyone at AT World.
Filmed and edited by Jessica Tyrrell.