
Gendèr miking again

The gendèr miking strategy just got simpler again. After some experimentation, it turned out I was getting better results by just whacking all seven mikes in parallel and 'mixing' them with a single 10k resistor. Loads of crosstalk, but for this setup it doesn't really matter. The virtual-earth op-amp design wasn't working out: trying to make it run from a single 9V battery was giving me headaches. This is sounding pretty good, though perhaps with a bit too much percussive thump at the start of the note: I need to find a different way of mounting the mikes, as at the moment they are just blu-tacked to the casing.

trip points one-shot cap

'trip points one-shot cap' is (yet another) piece inspired by (ripped off from? let's say 'alluding to') Louis Andriessen's gritty post-minimalist classic 'Hoketus'.

There are two main building blocks. The first is… I was rummaging around in my box of old electronics, and found an optical theremin I'd built years ago. The IC at the heart of this is a bit of a classic, a Texas Instruments SN76477, a very early chip designed to make sounds for toys and games, also great for musical experimentation.

The second part of the track is itself made up of two layers. At the bottom is a two-second slice (ntfot82.aif) of an improvisation made with… well, to tell the truth, I can't remember! An out-of-tune guitar played with a chopstick, I think, but I'm not sure what I was processing it through, might have been hardware, might have been software. This short file was then sliced up and remixed in SuperCollider (code below).

The final track was composed in Logic 9: no added effects there apart from a bit of fake stereo.


//SuperCollider code
s.boot;

p = "/Users/jsimon/Music/tedsound/prosim/nolap_firstofthese/slices/ntfot82.aif";
b = Buffer.read(s, p, bufnum: 0); // read into buffer 0, which the SynthDef below uses by default
b.play; //quick check

b.free; // eventually

(
SynthDef(\mybuf, { |out, bufnum, rate=1, slices=16, slice=0|
var sig, myenv, env, start, len;
len = BufFrames.kr(bufnum);
start = len / slices * slice;
myenv = Env.linen(0.01, 0.2, 0.1); // attack, sustain, release
sig = PlayBuf.ar(2, bufnum, BufRateScale.kr(bufnum) * rate, startPos: start, loop: 1);
env = EnvGen.kr(myenv, doneAction: 2); // doneAction: 2 frees the synth when the envelope ends
Out.ar(out, sig * env)
}).add;
)

(
a = Pbind(
\instrument, \mybuf,
\slice, Prand((1 .. 16), inf)
);
)

a.play;

(
b = Pbind(
\instrument, \mybuf,
\slice, Pseq((1 .. 16).scramble, inf)
);
)

b.play;

(
c = Pbind(
\instrument, \mybuf,
\slice, Pseq((1 .. 16).pyramid, inf)
);
)

c.play;

// this is medium fab
// need to get \freq or something in the synth also
// also figger out how buffer number allocation works
// could allocate several buffers and switch between?!?
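Sketching answers to a couple of the notes above: the SynthDef already takes a rate argument, so a Pbind can vary pitch per event via a \rate key, and because a Buffer object is sent as its bufnum when used in a pattern, several buffers can be read and switched between with a \bufnum key. A rough sketch, assuming the \mybuf SynthDef above has been added (the second file path is a placeholder, not a real file):

```supercollider
// Hypothetical sketch: per-event rate changes plus switching between two buffers.
(
var b1, b2;
b1 = Buffer.read(s, "/Users/jsimon/Music/tedsound/prosim/nolap_firstofthese/slices/ntfot82.aif");
b2 = Buffer.read(s, "/path/to/another/slice.aif"); // placeholder path
e = Pbind(
\instrument, \mybuf,
\bufnum, Prand([b1, b2], inf), // a Buffer in a pattern converts to its bufnum
\rate, Prand([0.5, 1, 2], inf), // down an octave, as-is, up an octave
\slice, Prand((1 .. 16), inf),
\dur, 0.25
);
)

e.play;
```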

TempoClock.default.tempo = 160/60;

(
d = Pbind(
\instrument, \mybuf,
\slice, Pseq((1 .. 16).pyramid(9), 1),
// careful: pyramid returns arrays of all kinds of different lengths
// 136, 256 and 271 seem to be the three possibilities
// (1 .. 16).pyramid(9).size; -> 256
\dur, 0.5
);
)

d.play;

Max speech munged in Pd

Still at the point of being a tech demo, but my latest text-to-screech project has moved forward a little. Here you can see speech sounds controlled by Max 5 piped into Pure Data. In Pd, I'm using some old tricks with the 'freeze' function in freeverb plus some pitch shifting to play further with the sound. As a potentially interesting wrinkle, the effects in Pd are turned on and off by the words typed in Max: 'reverb', 'freeze' etc.

Hmm. Where to go next?

Gendèr mic prototype 01

 

Up to something a bit different today: electronics! Yum. I'm building a simple op-amp virtual-earth mixer, which I'm going to use to combine the signals from seven cheapo tie-pin mics, one for each pair of keys. A few false starts today (bit rusty on this), but I now have a simple circuit running from a 9V battery, which is producing really a very good sound indeed from a £3 mic. Off to buy six more of them, then…

Routing text-to-speech on the Mac

There are any number of ways of working with the built-in text-to-speech synthesis capabilities on the Mac. All of the music programming languages I use - Max, Pd and SuperCollider - offer ways of doing this, and I've also had great success controlling the output using AppleScript. The problem is that in every case the audio is synthesised by Mac OS itself, which means it is not accessible within an audio environment for further processing.

I was inspired to have another think about this recently by a thread on the SuperCollider list, where somebody was trying to do exactly this by using Jack to route the sound from the Mac back into the application for further processing. What I've started to experiment with is routing the audio into a different application: in the example above, controlling the speech synthesis in Max and passing the audio into Pd. Combined with the facility to pass MIDI from Max to Pd (easy), I think I can see how to make a workable and potentially interesting system. But, for now, just proving to myself that it can be done :)
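For what it's worth, the capture side of that idea is straightforward in SuperCollider too, once an inter-application routing device (Jack or similar) is selected as the input: SoundIn picks up whatever is being routed in, ready for mangling. A minimal sketch, assuming the routed speech arrives on the first input channel:

```supercollider
// Assumes system/app audio is being routed into SC's first input
// via Jack or a similar inter-application audio device.
(
{
var in, shifted;
in = SoundIn.ar(0); // the routed speech signal
shifted = PitchShift.ar(in, 0.2, 1.5); // crude upward pitch shift
(in + shifted) ! 2 // mix dry and shifted, duplicated to stereo
}.play;
)
```

The same pattern applies in Pd, where adc~ plays the equivalent role.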

Yet more text-to-screech

There's quite a history of musicians and sound artists doing creative things with speech synthesis. One of the best known examples is the Radiohead track Fitter Happier from the album OK Computer, and it's not hard to find other cases of commercial artists incorporating this kind of material into tracks.

Very often this has been done on the Mac, which has always had speech synthesis built in. (There's a very interesting anecdote about how speech synthesis came to be included on the very first Macs at the personal insistence of Steve Jobs.) A number of years ago - I can't find the links now - there was a small community of composers who were authoring and releasing 'tracks' which consisted of nothing but SimpleText files, to be 'played back' using the speech synthesis facility. This kind of thing was more effective back then: the earlier versions of the Mac speech system responded in interesting and unpredictable ways to aberrant texts.

I've often used this kind of thing in my own work, and I've coined my own term for it: 'text-to-screech'. Here's an example, this is a track called 'vifyavif wif yavif-oo', which also forms part of the instrumental piece donkerstraat:

I've now started work on another such project. This will be a performance piece, where I will be typing text live: I've done work along these lines before, but the new twist will be to try to find a way to add extra processing to the speech synthesis live, perhaps including sampling and looping. There are some technical problems with doing this on the Mac, however… which I'll make the subject of another post.

The Sloans Project

I saw a great new opera recently, The Sloans Project. Composed by Gareth Williams with a libretto by David Brook, it was set and performed in the historic Sloans Bar and Restaurant. Yes, that'll be opera performed in a pub! The opening scene was a coup de théâtre. As the audience milled about in the bar downstairs, the show just started right there, with a couple at the bar bursting into song, soon answered by another drunken-looking guy at the bar. After that the audience was invited to process to some of the upstairs rooms, where there was a series of three vignettes, followed by a culminating scene in the ballroom.

Gareth of course is an old friend and colleague of mine, with his PhD at the RSAMD – sorry, the Royal Conservatoire of Scotland – running more or less in parallel to mine. Recently he's been ploughing the operatic furrow consistently and with great success. His musical language is very spare and secure, with a great command of vocal writing. In this piece I was drawn in by the unique staging as much as anything else, but I seemed to detect some new thinking in his approach, particularly in scene two, 'Chopin's Ghosts', which collided separate and uncoordinated music in different keys on harp and piano in a very creative and effective way.