Preparation
I spent a day with mezzo-soprano and vocal
artist Frauke Aulbert working on some technical ideas for integrating voice
with analogue electronics. The aim was to bring a working system or palette of
ideas to the LLEAPP workshop and not have to scrabble around trying to make
things work during the workshop itself.
Amplitude Filter
One technique we tried was to use a noise
gate (Drawmer DS-201) key-triggered by a John Edwards noise synth or the
analogue synth, to act like the amplitude modulator used in the RAI Studio in
Milan in the early 1960s. This machine was used by Pousseur to make Scambi, and most likely by Berio in his
pieces with Cathy Berberian. This period of electronic experimentation is
central to my research, so I wanted to try to achieve a practical outcome by
incorporating this technique into my work with Frauke.
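For reference, here is a minimal digital sketch of the key-triggered gating idea: the amplitude envelope of the key signal decides when the voice passes through. It is only an illustration of the principle, not a model of the DS-201; the threshold and time constants are arbitrary placeholder values.

```python
import numpy as np

def key_triggered_gate(voice, key, sr, threshold=0.1, attack_ms=5, release_ms=50):
    """Gate `voice` open whenever the envelope of `key` exceeds `threshold`.

    A crude digital stand-in for a key-triggered noise gate: the key
    signal's amplitude envelope decides when the voice passes through.
    Threshold and time constants are arbitrary illustrative values.
    """
    # One-pole envelope follower on the absolute value of the key signal.
    attack = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(key)
    level = 0.0
    for i, x in enumerate(np.abs(key)):
        coeff = attack if x > level else release
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    gate = (env > threshold).astype(float)
    return voice * gate

# Toy example: a sustained vocal-like tone gated by a bursty "noise synth" key.
sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 220 * t)                        # stand-in for the voice
key = np.random.randn(2 * sr) * (np.sin(2 * np.pi * 3 * t) > 0)  # bursts of noise
gated = key_triggered_gate(voice, key, sr)
```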
The results were fairly awful: the amplitude of the voice signal picked up by
the microphone had to be kept almost constant, and the key-trigger signal was
very hard to control with enough expression to generate any interesting effect
on the voice. On top of this, the only vocal sounds that worked at all were
simple sustained tones, which precluded the huge range and scope that Frauke is
capable of.
Vocoding
Another classic 1960s technique is the use of the vocoder. Using a MAM VF11 was
a way of integrating both of our sounds to produce a composite sound that we
could both influence. In rehearsal this worked well with a wide range of vocal
sounds, although Frauke was uncomfortable with the amount of control ceded to
the electronics. There was no easy way around this without devoting far more
time than we had available to designing a more interactive instrument.
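For anyone curious about what goes on inside such a box, the following is a very small sketch of the general channel-vocoder principle: the voice's band-by-band amplitude envelope is imposed onto the synth signal. It is not a model of the VF11; the band count, frequency range and filter orders are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

def channel_vocoder(modulator, carrier, sr, n_bands=16, fmin=100.0, fmax=8000.0):
    """Very small channel vocoder: the voice (modulator) imposes its
    band-by-band amplitude envelope onto the synth (carrier).

    Only a sketch of the general principle; band count and frequency
    range are arbitrary choices.
    """
    edges = np.geomspace(fmin, fmax, n_bands + 1)   # logarithmic band edges
    out = np.zeros(min(len(modulator), len(carrier)))
    mod = modulator[:len(out)]
    car = carrier[:len(out)]
    # Low-pass filter used as an envelope follower on each rectified band.
    b_env, a_env = butter(2, 30.0, btype='low', fs=sr)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo, hi], btype='bandpass', fs=sr)
        mod_band = lfilter(b, a, mod)
        car_band = lfilter(b, a, car)
        env = lfilter(b_env, a_env, np.abs(mod_band))
        out += car_band * env
    return out / np.max(np.abs(out))

# Toy example: a noise-burst "voice" shaping a naive sawtooth carrier.
sr = 44100
t = np.arange(sr * 2) / sr
carrier = 2 * (t * 110 % 1.0) - 1.0                               # sawtooth at 110 Hz
modulator = np.random.randn(len(t)) * np.exp(-((t - 1.0) ** 2) / 0.1)
vocoded = channel_vocoder(modulator, carrier, sr)
```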
LLEAPP
In setting up for the Tuesday evening performance it quickly became clear that
axiom 5 of the LLEAPP Axioms of Practice (“Live microphones ALWAYS need
help!!!”) was extremely important. The vocoder was almost impossible to
control, and the acoustics of the performance space (Inspace) were no doubt a
large factor in this. Even though I had a mic preamp with high- and low-cut
filters, the mic signal was varying so much that the vocoder signal was either
inaudible or feeding back.
Our decision to have individual speakers and no Front of House had effectively
eliminated the option of using any of the techniques that Frauke and I had
worked on before, so I had to work out different techniques as we went along.
Ensemble Size Matters
In a small ensemble a technical hitch like this would have been much more
problematic than it was in the 13-person ensemble we had formed. The larger
number of players meant that more space was needed, and so each player could
easily sit out for long periods. This helped our situation in two ways. First,
I could try different setups and re-patch my synth without feeling relied upon
to fill out the overall sound. Second, whatever sound I could come up with
could afford to be subtle, quiet, narrow in bandwidth, or even just a slight
enhancement, as the sound world was already likely to be quite full.
Having given Frauke her own speaker, I could also route her mic signal directly
through my own speaker and thereby change her vocal sound purely in its spatial
presence or location. This was often enough of a transformation, and was easily
controllable via a dedicated fader.
Mic Splitting
Originally I had hoped to split the mic signals from both Frauke (voice) and
Emma (violin) and send these to anybody who wanted to process them. For this I
introduced my modular mixer for its first outing from the workshop. Several
others took advantage of the mic signal splitter, and at different points
throughout the workshop Owen Green, Jules Rawlinson, Rob Canning and I all
processed some of the acoustic signals.
When we changed the arrangement of players this was reduced to just me;
however, the splitters allowed me to process the signals without affecting what
went into each player's individual speaker.
Violin
Emma fitted her violin with a Fishman contact mic. This was run through the
Radial PZ-DI box, bought especially for LLEAPP, which provides a suitably high
input impedance (10 MΩ) to get a really good full-frequency-range signal from a
contact or piezo mic. Emma struggled with her setup to begin with for a number
of technical reasons: the signal was originally going through Jules’ soundcard
and was subject to latency; the volume pedal Emma was using was at first
patched between the contact mic and the PZ-DI, thereby presenting the wrong
impedance to the contact mic; the gain staging on the small Mackie mixer used
to set the overall volume for the loudspeaker was badly set up, so the signal
was distorting; and the loudspeaker was set up at head height and very close to
Emma, so she was hearing her own signal disproportionately loudly and playing
more quietly to compensate.
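To illustrate why the input impedance matters: a piezo pickup behaves roughly like a voltage source in series with a small capacitance, so together with the input resistance it forms a high-pass filter with corner frequency f = 1/(2πRC). The capacitance below is an assumed, typical figure, not a measurement of Emma's Fishman pickup.

```python
import math

# Assumed, typical piezo source capacitance (not measured): ~10 nF.
C = 10e-9

# High-pass corner frequency f = 1 / (2 * pi * R * C) for two input impedances.
for R in (10e6, 10e3):   # 10 MOhm (PZ-DI) vs. a typical 10 kOhm line input
    f_c = 1.0 / (2.0 * math.pi * R * C)
    print(f"R = {R:>10.0f} ohm  ->  bass rolls off below ~{f_c:8.1f} Hz")

# Roughly 1.6 Hz into 10 MOhm (essentially full range), but roughly 1.6 kHz
# into 10 kOhm, which would lose most of the violin's body.
```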
Once these setup issues had been addressed and the speaker repositioned, Emma's
signal was much more audible and she quickly gained confidence in using this
setup, which was brand new to her. As she became familiar with the amplified
sound and with the response of the volume pedal, it became much easier for me
to incorporate her signal into my synthesis patches, and through simple ring
modulation, spring reverb, filtering, and mixing with my own signal I was able
to fuse our sounds together quite well a number of times. I used similar
patches with the violin as with the voice, and it was also effective at certain
times simply to amplify the direct violin sound through my own speaker,
extending its spatialisation without further processing.
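Of those processes, ring modulation is the simplest to show: the input is multiplied by a carrier, producing sum and difference frequencies. The sketch below is just that textbook form, not my actual synth patch, and the carrier frequency is an arbitrary example value.

```python
import numpy as np

def ring_modulate(signal, sr, carrier_hz=440.0):
    """Ring modulation in its simplest digital form: multiply the input
    by a sine carrier, producing sum and difference frequencies."""
    t = np.arange(len(signal)) / sr
    return signal * np.sin(2 * np.pi * carrier_hz * t)

# Toy example: a 330 Hz "violin" tone ring-modulated at 440 Hz
# yields components around 110 Hz and 770 Hz.
sr = 44100
t = np.arange(sr) / sr
violin = 0.5 * np.sin(2 * np.pi * 330 * t)
processed = ring_modulate(violin, sr, carrier_hz=440.0)
```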
Space (Axiom 6)
The workshop itself was going very well until the idea of moving everybody
around was suggested by Jan Hendrickse, our musical director. My natural
reaction to this was negative, as I had spent, along with Jules, Lauren and
Owen, most of the previous day setting up my own gear as well as everybody
else's, my own setup being rather larger than normal owing to the mic preamps
and modular mixer needed for mic splitting and signal distribution.
I was in a distinct minority but, along with Lauren Hayes (digital and analogue
electronics and laptop) and Christos Michalakos (drums, percussion and laptop),
I was able to remain in my original place. My reservations were that the new
orientation would undo some of the work we had done on building good
communication skills and techniques between ourselves, as many people were now
out of view and the distances were quite large even between people who were in
each other’s field of view.
As it turned out, the decision to move off the stage and occupy the entire
space was absolutely key in transforming our performance and activating the
whole room. The communication techniques which we had worked on evolved with
our new spatial distribution. We internalised some of these techniques, and
much more of the communication happened not through hand signals or looks but
through audio cues and, it felt to me, through listening much more sensitively
to one another’s playing. I don’t feel that this would have been achieved if we
had not worked on the deliberate and obvious methods of communicating suggested
by Jan, and if we had remained in place in a semicircle on stage. Thus my
resistance to moving was wholly discredited: I had allowed my logistical
concerns to blind me to the creative advantages that a different spatial
distribution could offer.
One other key factor relating the space to the number of players is that it was
very easy to extend the customary communication strategies by getting up and
wandering about during the performance. This was aided by the fact that the
audience was instructed to move about and explore the space rather than just
sit watching the stage. This diffusion of the focus of attention allowed me to
take a portable noise synth and wander about, interacting with a number of
players on the way and just listening to the whole sound from different points
in the room. I played a short duet with Bill Vine and another with Amit Patel
in which we linked up our noise synths to form a hybrid dual-circuit synth.
Other players wandered about too, and I think this also had the effect of
activating the whole space so that, as one audience member said, they felt as
though the music was going on all around them and that they were participating
in something immersive rather than just watching people do stuff on stage.
Acoustic Instruments
Amongst the most important elements of LLEAPP 2013 was the strong presence of
acoustic instruments, including the voice. I feel that the electronics were
able to meet the acoustic sounds in terms of subtlety, timbral content and
expression, but were also able to occupy territory clearly distant from the
acoustic instruments. The electronic processing of the double bass (Adam
Linson), the electric guitar (Rob Canning), the clarinet (Bill Vine), the drums
(Christos Michalakos) and the microphone (Owen Green) acted as a strong and
broad bridge that allowed the violin and voice on the one hand and the digital
and analogue electronics on the other to fit into a coherent sound world. This
is going to sound obvious, but at various stages the acoustic instruments did
what only they can do, and so did the electronic and digital instruments, yet
there was a lot of common ground as well. In this way a really rich sound world
was created, and I think the success of the final performance owes a lot to the
combination of instruments and especially to the inherent expressivity of the
voice.
Conclusion
… need more time…