
Livecode improvisation with Anne-Liis Poll

As part of the team that organised the third METRIC Improvisation Intensive at the Royal Conservatoire of Glasgow, I did not have as much time as I might have liked to improvise myself. I was pleased however to be joined for an impromptu livecoded session by Anne-Liis Poll, Professor of Improvisation at the Estonian Academy of Music and Theatre:

This did not quite turn out the way I had intended! In recent work I have been looking for a way to respond in code to live human improvisations: this session turned into more of an algorave-ish groove built up from mechanical trumpet sounds, over which Anne-Liis worked with the voice. Even so, it was quite successful. I hope to do more playing with other people along these lines.

Livecoding again

Back at the livecoding again. A couple of weeks ago, a quite successful workshop for the students on the Interactive Composition module at the Royal Conservatoire of Scotland. Coming up: a couple of things. In March there is going to be another long-form online algorave that I'll be contributing a half-hour set to, Friday 16th at 1330 GMT. In April the METRIC Intensive III at the RCS sees staff and students converge on Glasgow for a week of improvisation: again, as well as leading some gamelan improvisation, expect to be SuperColliding as well.

Below, a more-or-less unedited trial run of some new stuff tonight: specifically, a collection of samples made purely from mechanical sounds of my trumpet, close-miked: springs, valve noise, slide pops and so forth.

Giving a workshop in Jakarta

My institution, The Royal Conservatoire of Glasgow, have sent me on a trip to make connections with a number of potential partners in Indonesia, including the UPH Conservatory of Music, the Jakarta Institute of Arts (IKJ), and Institut Seni Indonesia Surakarta (ISI Solo). I'll also be visiting Singapore to see Setan Jawa, and talk to the producers about bringing this to Glasgow for a Festival of Gamelan and the Moving Image that we are planning here for September.

Here's the poster for a workshop I'll be giving at UPH, that will take in a livecoding demo and a performance of Steadily-Stop! alongside an analysis of Antichthon.

Ubuntu Studio, SuperCollider, Dell Inspiron 11 3000 - success!

Having a very positive experience at the moment with Ubuntu Studio 17.04 running SuperCollider 3.8.0 on a £150 refurb 11" Dell Inspiron. Apart from an initial UEFI glitch with getting it to boot, Ubuntu Studio installed easily and works seamlessly so far. The SuperCollider install was made simple by this script install_supercollider_sc3plugins_buntu.sh from @theseansco – thanks Sean! When it came to actually booting SuperCollider, I did not even need to mess around with Jack or anything else at all: everything on the audio side seems to just work. Now to push it a little harder…

Getting ready to be five (/four)

Next Friday I’m going to be taking part in a 24-hour online algorave event, wearefive, to celebrate five years of the algorave movement. By accident or design I’m on back-to-back with co¥ᄀpt (aka Sean Cotterill), who is one of only a couple of us livecoding in pure SuperCollider, rather than the by-now overwhelmingly popular TidalCycles.

Sean has been putting together an interesting set of pages on his approach to livecoding in SC, particularly on the things that need to be set up beforehand. I’ve evolved some similar ideas myself, perhaps a little less organised and more hacky. For interest, I’ve put my current setup files with comments on sccode.org and also a wee example of the kind of code I use.

Admittedly, some of this won’t make sense without the particular arrangement of samples and loops that I use. I’ve recently hit upon the idea of using an array of 32 different drum samples organised roughly in the following pattern:

00 a bass drum sound
01 hi hat
02 a snare
03 a different hi hat (or other hit)
04 a different bass drum sound
… etc

That way, I can make a basic un-ta-ka-ta beat just by stepping through all 32, or segments thereof:

Pseq(~arrayOfHits, inf)
Pseq(~arrayOfHits[4..7], inf)

I’ve also discovered some really quite good longer patterns with this layout, using Pslide:

Pslide(~arrayOfHits,inf,4,3)
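
To give a flavour of how this hangs together, here is a minimal sketch of the idea – the sample folder and the \hit SynthDef below are hypothetical stand-ins for my actual setup files, not the real thing:

(
s.waitForBoot{
    // a hypothetical folder of 32 one-shot drum samples, read as mono
    ~drumbufs = "~/samples/drums32/*.wav".standardizePath.pathMatch.collect { |p|
        Buffer.readChannel(s, p, channels: [0])
    };
    // a very plain one-shot sampler, standing in for my real setup synths
    SynthDef(\hit, { |out = 0, buf, amp = 0.2|
        var sig = PlayBuf.ar(1, buf, BufRateScale.kr(buf), doneAction: 2);
        Out.ar(out, (sig * amp).dup);
    }).add;
};
)

~arrayOfHits = ~drumbufs; // the 32 hits, ordered roughly as described above

// step through all 32 for the basic un-ta-ka-ta beat...
Pbindef(\beat, \instrument, \hit, \buf, Pseq(~arrayOfHits, inf), \dur, 1/4).play;

// ...or just a four-hit segment
Pbindef(\beat, \buf, Pseq(~arrayOfHits[4..7], inf));

// ...or let Pslide walk four-hit windows through the array, stepping on by three each time
Pbindef(\beat, \buf, Pslide(~arrayOfHits, inf, 4, 3));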

Guess we'll see how all of this sounds at the rather unravy time of 0800 GMT next Friday!

Recent livecoding in SuperCollider

Over the winter break I've been spending some time working on my livecoding/algorave setup in SuperCollider. Here's a quick practice run, this is how things are going at the moment.

The most recent idea here is the \warp synth, a granulator slowly reading through a choice of soundfiles. In this particular run, I think the .choose threw up a fragment of a Stokowski Bach transcription https://archive.org/details/J.S.BACH-OrchestralTranscriptions-NEWTRANSFER and perhaps a bit of the theme tune from The IT Crowd as well. A nice background wash of sound behind the rhythmic stuff. For the latter, the samples in the first half of the video are various ones from http://machines.hyperreal.org/manufacturers/ and, in the second half of the run, after I execute ~changesamples, from http://theremin.music.uiowa.edu/MIS.html and http://www.philharmonia.co.uk/explore/sound_samples.
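
The actual synths are in the repo linked below, but as a rough sketch of the idea, a granulator along these lines could be built around the Warp1 UGen – the folder path, envelope and parameter values here are illustrative guesses rather than my exact code:

(
// hypothetical folder of long soundfiles to use as background washes, read as mono
~washbufs = "~/samples/wash/*.wav".standardizePath.pathMatch.collect { |p|
    Buffer.readChannel(s, p, channels: [0])
};
// a pointer crawls once through the chosen file over dur seconds, granulating as it goes
SynthDef(\warp, { |out = 0, buf, amp = 0.1, dur = 60|
    var pointer = Line.kr(0, 1, dur, doneAction: 2);
    var sig = Warp1.ar(1, buf, pointer, freqScale: 1, windowSize: 0.2, overlaps: 4);
    Out.ar(out, (sig * amp).dup);
}).add;
)

// pick one of the soundfiles at random, as described above
Synth(\warp, [\buf, ~washbufs.choose]);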

The synths I'm using and my initialisation file are up on GitHub at https://github.com/tedthetrumpet/supercollider.

Rave the Space

Last night I gave a performance called 'Rave the Space' at Stereo in Glasgow, part of a series of events called INTER run by Iain Findlay-Walsh 'creating a focused, public listening context for deep experiments in / with sound'.

My proposal was to 'perform the soundscape of the venue through the medium of livecoding'. What I did was to visit the venue the day before, at a quiet time, and make some recordings – a fairly typical basement club/rock venue, so I was able to wander onstage, through dressing rooms, behind the bar, into the toilets etc, all the while recording both the ambience and, in some cases, tapping or hitting objects of particular interest – there was a group of CO2 cylinders that were particularly nice.

On the morning of the event, I roughly levelled these recordings, discarded uninteresting ones, cut out handling noise, mobile phone interference, and initial and terminal clicks from the recorder. This left me with nine recordings, each about a minute long.

I decided to challenge myself by doing as little rehearsal for the performance as possible. I had one synth precoded, a slicing sampler. All this does is to take an audio file and play back one of n slices: in the case of these roughly 60 second long files, I used n=64. On previous occasions when I've done livecoding/algorave with found sounds, I've gone through the source audio files carefully in Audacity, looking for particular short sounds that I can then isolate and shape into something resembling a drum hit, then performed with those sounds in place of drum sounds.

The slicing approach used here is deliberately less controlled: it's a matter of luck what sound falls where, as the point at which the file is sliced is quite arbitrary, perhaps falling just on ambience, or half-way through a percussive noise.

The performance was only to be ten minutes, which is not a lot on my timescale of livecoding in SuperCollider! I decided to start with a blank screen: in retrospect, I could have got to better musical gestures faster if I'd had maybe ten or a dozen lines precoded. Nevertheless, in this sit-down and concentrate atmosphere, the blank-page start was quite intriguing for the audience, I think.

The performance mostly went well, although there was one of those moments where I had what looked like a correctly typed line that evaluated correctly but did not seem to be doing anything! I still can't figure out what I was doing wrong.

I'm pleased with this idea and intend to repeat it, particularly the site-specific approach to gathering sounds.

Here's the code, not much to see here:

(//setup
s.waitForBoot{};
SynthDef(\sl, { |out, gate=1, buf, sig, slices=16, slice=0, freq = 261.6255653006, amp=0.1|
    var myenv, env, start, len, basefreq = 60.midicps, rate;
    rate = freq / basefreq;
    len = BufFrames.kr(buf);
    start = (len / slices * slice);
    myenv = Env.asr(attackTime: 0.01, sustainLevel: 1, releaseTime: 0.1);
    sig = PlayBuf.ar(2, buf, BufRateScale.kr(buf) * rate, startPos: start);
    env = EnvGen.kr(myenv, gate, doneAction: 2);
    Out.ar(out, sig * env * amp)
}).add;
t = TempoClock(140/60).permanent_(true);
u = TempoClock(140/60 * 2/3).permanent_(true);
Pbindef.defaultQuant_(4);
Pdefn.defaultQuant_(4);
)
(
~paths = [
"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/bar.aiff", // 0
"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/c02ambience.aiff", // 1
"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/cafe.aiff", // 2
"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/co2.aiff", // 3
"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/corner.aiff", // 4
"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/lane.aiff", // 5
"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/seatingbank.aiff", // 6
"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/space.aiff", // 7
"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/stage.aiff", // 8
"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/stairs1.aiff", // 9
"/Users/jsimon/Music/SuperCollider Recordings/stereoglasgow/stairs2.aiff" // 10
]
)
~thebuf = Buffer.read(s, ~paths[7]);
~thebuf.play
//
Pbindef(\x, \instrument, \sl, \buf, ~thebuf, \slices, 64)
Pbindef(\x).play(t)
Pbindef(\x, \slice, 0)
Pbindef(\x, \slice, 64.rand)
Pbindef(\x, \slice, Pwhite(0,63,inf))
Pbindef(\x, \legato, 1/4)
Pbindef(\x, \dur, 1/4)
Pbindef(\x, \note, Pwhite(0,12,inf))
//
Pbindef(\y, \instrument, \sl, \buf, ~thebuf, \slices, 64)
Pbindef(\y).play(u)
Pbindef(\y, \slice, 0)
Pbindef(\y, \slice, 64.rand)
Pbindef(\y, \legato, 4)
Pbindef(\y, \dur, 1/2)
Pbindef(\y, \note, Pwhite(-12,12,inf))
t.sched(t.timeToNextBeat(4), {u.sync(120/60, 10)});

The Next Station – ‘if only I had’

Tomorrow sees the launch of The Next Station, a project by Cities and Memory to reimagine the sounds of the London Underground. My contribution to this project is an audio work called if only I had, constructed entirely from a 3'42 recording of a train arriving at and departing from Pimlico station.

The title is taken from Spike Milligan’s ‘Adolf Hitler: My Part in his Downfall’:

‘Edgington and I promenaded the decks. Harry stopped: "If only I had a tube." "Why?" "It's quicker by tube."’

… an inconsequential pun that has, for some reason, always stuck in my mind!

I made this piece as a personal study into the possibility of using livecoding techniques in SuperCollider to develop a fixed piece. In recent months I have been very active in exploring coding in this way, particularly in the context of algorave: if only I had leverages these techniques. Here’s some of the code I used, with explanation:

(
s.waitForBoot{
Pdef.all.clear;
Pbindef.defaultQuant = 4;
t = TempoClock.new.tempo_(120/60).permanent_(true);
~path = "/Users/jsimon/Music/SuperCollider Recordings/pimlicoloops/";

This is a remnant of what turned out to be a bit of a false start to the project. My initial idea was to look through the file for shortish sections, in the region of 2-3 seconds long, that, when looped, had some sort of rhythmic interest. This was done offline, using Audacity. I thought it might be interesting to develop the piece by using these fragments almost in the manner of drum loops, and wrote some code to juxtapose them in various ways at different tempi. This didn't really produce anything very effective however: the material is rather dense and noisy, and when looped together the rhythmic interest was lost in a broadband mush of sound.

Instead, I revisited a synth from an earlier project that slices a buffer into 16 pieces for playback:

~bufs = (~path ++ "*.aiff").pathMatch.collect({ |i|  Buffer.read(s, i)});
SynthDef(\slbuf, { |out, buf, slices=16, slice=16, freq=440, sustain=0.8|
    var myenv, env, start, len, basefreq = 440, rate, sig, sus;
    rate = freq / basefreq;
    len = BufFrames.kr(buf);
    start = (len / slices * slice);
    sus = BufDur.kr(buf)/16 * sustain * 1.1;
    myenv = Env.linen(attackTime: 0.01, sustainTime: sus, releaseTime: 0.1);
    sig = PlayBuf.ar(2, buf, BufRateScale.kr(buf) * rate, startPos: start, loop: 1);
    env = EnvGen.kr(myenv, 1, doneAction: 2);
    Out.ar(out, sig * env)
}).add;

As well as experimenting with reverb, I also had a delay effect in here at one point. Again, the nature of the already fairly resonant material meant that this was not that useful. In the end, I only used the reverb at the very end of the piece as a closing gesture.

~rbus = Bus.audio(s, 2);
SynthDef(\verb, { |out = 0, room = 1, mix = 1|
    var sig = FreeVerb.ar(In.ar(~rbus, 2), room: room, mix: mix);
    Out.ar(out, sig)
}).add;
s.sync;
Synth(\verb);

At some point in developing the project, it occurred to me to try playing the sliced material together with the original file. This seemed to be effective, and gave me a clear trajectory for the work: I decided that the finished piece would be the same pop-song length as the original recording. In experimenting with this approach – playing sliced loops in SC at the same time as playing back the whole file in Audacity – I found myself gently fading the original in and out. This is modelled in the synth below: I used an explicit random seed together with interpolated low frequency noise to produce a replicable gesture:

~file = "/Users/jsimon/Documents/ Simon's music/pimlico the next station/Pimlico 140516.wav";
~pimbuf = Buffer.read(s, ~file);
s.sync;
SynthDef(\pim, { |out=0, start=0, amp = 1|
    var sig, startframe, env;
    startframe = start * 44100;
    RandSeed.ir(1, 0);
    env = EnvGen.kr(Env.linen(sustainTime: ~pimbuf.duration - 9, releaseTime: 9));
    sig = PlayBuf.ar(2, ~pimbuf, startPos: startframe, doneAction: 2) * LFNoise1.kr(1/5).range(0.05, 1.0);
    Out.ar(out, sig * amp * env);
}).add;

There was a nice moment in the original where the accelerating electric motors of the departing train created a series of overlapping upward glissandi, sounding very like Shepard tones, or rather, the sliding Risset variation. Looking to enhance this gesture, I tried a couple of my own hacks before giving up and turning to a nice class from Alberto de Campo’s adclib:

~shep = {
    var slope = Line.kr(0.1, 0.2, 60);
    var shift = Line.kr(-1, 2, 60);
    var b = ~bufs[8];
    var intvs, amps;
    var env = EnvGen.kr(Env.linen(sustainTime: 53, releaseTime: 7), 1, doneAction: 2);
    #intvs, amps = Shepard.kr(5, slope, 12, shift);
    (PlayBuf.ar(b.numChannels, b, intvs.midiratio, loop: 1, startPos: 3*44100) * amps).sum * 0.2
};
s.sync;

All of the above is essentially setup material. The gist of the composition was in iterative experimentation with Pbindefs, as can be seen below: trying out different slicing patterns and durations, working with the various segments I'd prepared beforehand in Audacity.

Pbindef(\a, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/2, \buf, ~bufs[1], \note, 0);
Pbindef(\b, \instrument, \slbuf, \slice, Pser((8..15).pyramid(1), 32), \dur, 1/4, \buf, ~bufs[1], \note, 0);
Pbindef(\c, \instrument, \slbuf, \slice, Pser((2..5).pyramid(1), 32), \dur, 1/4, \buf, ~bufs[0], \note, 0);
Pbindef(\d, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/4, \buf, ~bufs[3], \note, 0);
Pbindef(\e, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/4, \buf, ~bufs[3], \note, 12);
Pbindef(\f, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/4, \buf, ~bufs[3], \note, [-12,12,24,36]);
Pbindef(\g, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/4.5, \buf, ~bufs[3], \note, [12,24,36]);
Pbindef(\h, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/5, \buf, ~bufs[3], \note, [-12,12,24,36]);
Pbindef(\i, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(12), 1), \dur, 1/6, \buf, ~bufs[3], \note, [-24,-12,12,24,36]);
Pbindef(\j, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/7, \buf, ~bufs[3], \note, [-24,-12,12,24,36], \amp, 0.3);
Pdef(\k, (instrument: \slbuf, buf: ~bufs[3], slice: 2, note: [-24,-12,12,24,36], amp: 0.5, out:~rbus));

s.sync;
};

The final composition was produced by running and recording the code below, which uses the handy Psym as a way to sequence the gestures developed above. The code by this point is entirely deterministic, and would produce the same piece on every run. No further editing was done, apart from normalising in Audacity.

// start from beginning
fork{
    Synth(\pim);
    2.wait;
    Psym(Pseq("aabaccddeeffghiijk", 1).trace).play(t);
    (60+14+20).wait;
    "shep".postln;
    ~shep.play;
    10.wait; "off again".postln;
    Psym(Pseq("aabaccddeeffghiijk", 1).trace).play(t);
};
)
s.prepareForRecord
s.record
s.stopRecording

Overall, I'm happy with the piece, and glad to have been able to contribute to this very interesting project.

Layering visuals with SuperCollider

When I was at emfcamp last week, I saw a couple of instances of people layering up visuals with their code. Claudius Maximus had that going with his clive system, SonicPi (and Gibber??) can do it out of the box, and Shelly Knotts had some sort of setup for (I think?) doing it completely within SuperCollider, with the cool idea of a webcam pointing down at her hands on the keyboard.

After a bit of thought, I've come up with this, just a still for now:

[screenshot: sclayervisuals.png]

How this works: I used a $10 utility called ScreenCaptureSyphon that can, amongst other things, grab an application window and send it into Syphon. Then, Resolume Arena runs as a Syphon client, which lets me do almost anything including, as in the shot above, pulling in the webcam and colorizing. Not tried it yet, but Arena exposes its interface to OSC, so it should in theory be possible to script visual changes from the SuperCollider IDE.
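
Untested, but from sclang it would presumably look something like this (port 7000 is Resolume's usual default for incoming OSC, and the address path here is only illustrative, to be checked against Arena's actual OSC map):

~resolume = NetAddr("127.0.0.1", 7000); // Arena listening on the same machine
~resolume.sendMsg("/composition/layers/1/opacity", 0.5); // hypothetical address path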

A reasonably concinnitous hack, if I say so myself. (Mind you, it's the first thing I've ever done with my MacBook Air that turns the fan on full blast the whole time!)

Here's my screen

Show us your screens! Ok, well at last maybe I'm ready. Here's five minutes or so of me improvising in SuperCollider that's not as embarrassing as some of my other attempts:

The code is on GitHub if anyone is madly interested.