## Gamelan sounds in GarageBand on iPad

This posting shows how I was able to get the sounds of the pelog half of the Spirit of Hope gamelan here in Glasgow to work in GarageBand on an iPad.

First install an app called ‘SoundFonts’ from the App Store: it’s £4.99.

Next, download the soundfont file: it should end up saved in the Downloads folder in iCloud Drive.

Open the SoundFonts app and tap the + button.

This took me straight to the correct file in iCloud Drive: you might have to browse to find it though. Select the file to import it to the SoundFonts app.

Once you select the sound called ‘balungan_pelog’ you should be able to play it. You might need to change octave to hear all of the instruments: there should be slenthem, demung, saron and peking.

To use the sounds in GarageBand, you need to find the instrument browser – I don’t know what it is officially called, this track browser thing! – and select the ‘External’ pane.

Select the SoundFonts icon.

You can now play and record tracks using the gamelan samples in GarageBand!

## Looking back over the year

The temptation to ‘share’ on proprietary online platforms means that I don’t document my work here as frequently as I should! So, here’s a roundup of some things I’ve produced this year: as much a reminder to myself as anything else.

• February
  • ‘Perang Gagal: a Series of Inconclusive Battles’ at ICLC Limerick
• March
  • two performances at Eulerroom Equinox, a busk followed later by the ‘official’ set
  • track ‘threequal’ on SoundArtist.ru mixtape
• April
  • curated two gamelan soundfonts, released on archive.org
• May
  • Algovoids performance
• July
• October
  • 7 Calls collaboration with Mags Smith of Good Vibrations
• December
  • two sets (sort of!) at Eulerroom Solstice: ‘official’ performance and crazy spontaneous chaotic jam session
  • coming soon
  • coming soon

## Gamelan samples

This is just a quick post to pull together links to a number of places online where I have offered up gamelan sounds for download. It’s all a bit chaotic! Most of these are from the pelog Spirit of Hope instruments in Glasgow, sometimes retuned. Some of them may be from other sources that I’ve forgotten about.

From my perspective, these are all intended to be CC0 ‘No rights reserved’ – you can do what you like with them!

If I’ve accidentally uploaded someone else’s sample here and you want me to take it down, please let me know.

https://freesound.org/people/tedthetrumpet/packs/14/

https://freesound.org/people/tedthetrumpet/packs/1797/

https://archive.org/details/so-h-gamelan-gongs-drums-etc-pelog

https://archive.org/details/sohgamelanbalunganpelog

https://github.com/tedthetrumpet/Perang-Gagal/tree/master/SuperCollider/arum

Here’s the collaboration that Bill Whitmer and I did for Radiophrenia.

## Livecoding with Robert Henderson

Improviser Núria Andorrà visited Glasgow in March to teach on the International Collaboration in Contemporary Improvisation module at the Royal Conservatoire of Scotland. My colleague Una McGlone took the opportunity to organise a gig for her at Hairdressers, in collaboration with a number of Glasgow improvisers.

I did a short set alongside trumpet player Robert Henderson, who I have known for many years: in fact, I know him from the period around twenty years ago when I myself was active as a gigging trumpet player! For this performance, I used a bank of sounds that I had created using purely mechanical sounds from a trumpet: the metal, valves, valve slides and so forth. It’s always slightly problematic livecoding alongside an actual analog musician, as it is not easy to respond particularly rapidly to another player. However, Robert and I enjoyed playing together and managed to create some satisfying musical gestures.

## Reflections on ICLC 2019

Some reflections on the International Conference on Live Coding 2019 in Madrid.

The play-and-tell workshop that I helped put together with Evan Raskob and Renick Bell was, as intended, a low-key and informal way for people to share their individual livecoding practices. Of particular interest to me was Dimitris Kyriakoudis showing how he uses heavily customised keyboard shortcuts in Emacs as a way to be completely fluent when performing: as he put it, ‘typing should not get in the way of livecoding performance’. There were also some very interesting links, to my mind, between his TimeLines system – ‘all music is a function of time’ – and Neil C Smith’s ‘AMEN $ Mother Function’ performance, which worked by chopping up a wavetable as a function of time.

As well as that session, I also had input into a paper entitled ‘Towards Improving Collaboration Between Visualists and Musicians at Algoraves’, co-authored by – deep breath – Zoyander Street, Alejandro Albornoz, Renick Bell, Guy John, Olivia Jack, Shelly Knotts, Alex McLean, Neil C Smith, Atsushi Tadokoro, J. S. van der Walt and Gabriel Rea Velasco. The creation of this paper was itself an interesting process, beginning with a conversation in Sheffield, and then continuing with us writing the paper collaboratively in a shared online space. Guy presented the paper; you can see that here.

A stand-out performance for me was Maia Koenig’s ‘RRayen’, using some sort of hand-held games console. Great energy: I can’t seem to find a video of her performing at ICLC, but here she is doing the piece elsewhere.

Of the many new livecoding systems presented, I was rather taken by Balázs Kovács’s slightly bonkers Makkeróni ‘web-based audio operating system’, kind of like an online shared bash shell that plays music.

Also very interesting were Alejandro Franco and Diego Villaseñor presenting their Nanc-in-a-Can Canon Generator. The cultural background was fascinating, with their intention to reclaim Nancarrow as a Mexican composer, as explained in the talk.

I’d never been to Madrid before, but found it an easy place to be: dry, cold, comfortable and easy to get around. ICLC 2020 is to be in Limerick, fairly local for me, so I’ll be looking to present or perform there as well.

## Raving the netbook again

Once again happily proving to myself how possible it is to work with open-source software on basic hardware. Just upgraded to Ubuntu Studio 18.04 on a refurb 11″ Dell Inspiron netbook, and built SuperCollider 3.9.3 from source. Here’s an algorave-ish test track made using this setup:

https://clyp.it/5d3lo4na

Some new code idioms:

Plazy({Pseq((0..15).scramble,4)}).repeat(inf)

is easier to type than

Pn(Plazy({Pseq((0..15).scramble,4)}))

and similarly

Pseq([2,6,4,7],inf).stutter(32)

is easier to type than

Pstutter(32, Pseq([2,6,4,7],inf))

also

Pseq((0..15).scramble,inf).clump(3)

## Livecoding Erraid

On a number of occasions I have used sounds collected at a particular location as a coherent set of resources for a livecoded set. For the last week I’ve been on retreat with the community on the Isle of Erraid, which has been a welcome break from the city!

One of the features of the island is the ‘observatory’. This is a circular tin structure, about two metres across by three high: a restored remnant from the building of the Dubh Artach lighthouse, which took place there between 1867 and 1872.

The sound world inside this unusual structure is distinctive. I took some recordings (available on freesound.org, or they will be once they finish uploading) that I am going to use in a livecoded SuperCollider improvisation this Monday, during one of the Sonic Nights series at the Royal Conservatoire of Scotland, where staff and students diffuse new electroacoustic works on a multi-channel sound system. If it seems practical, I may stream the performance as well.

## The Next Station – ‘if only I had’

Tomorrow sees the launch of The Next Station, a project by Cities and Memory to reimagine the sounds of the London Underground. My contribution to this project is an audio work called ‘if only I had’, constructed entirely from a 3’42” recording of a train arriving at and departing from Pimlico station.

The title is taken from Spike Milligan’s ‘Adolf Hitler: My Part in his Downfall’:

‘Edgington and I promenaded the decks. Harry stopped: “If only I had a tube.”
“Why?”
“It’s quicker by tube.”’

… an inconsequential pun that has, for some reason, always stuck in my mind!

I made this piece as a personal study into the possibility of using livecoding techniques in SuperCollider to develop a fixed piece. In recent months I have been very active in exploring coding in this way, particularly in the context of algorave: if only I had leverages these techniques. Here’s some of the code I used, with explanation:

```supercollider
(
s.waitForBoot{
    Pdef.all.clear;
    Pbindef.defaultQuant = 4;
    t = TempoClock.new.tempo_(120/60).permanent_(true);
    ~path = "/Users/jsimon/Music/SuperCollider Recordings/pimlicoloops/";
```

This is a remnant of what turned out to be a bit of a false start to the project. My initial idea was to look through the file for shortish sections, in the region of 2–3 seconds long, that, when looped, had some sort of rhythmic interest. This was done offline, using Audacity. I thought it might be interesting to develop the piece by using these fragments almost in the manner of drum loops, and wrote some code to juxtapose them in various ways at different tempi. This didn’t really produce anything very effective, however: the material is rather dense and noisy, and when looped together the rhythmic interest was lost in a broadband mush of sound.

Instead, I revisited a synth from an earlier project that slices a buffer into 16 pieces for playback:

```supercollider
~bufs = (~path ++ "*.aiff").pathMatch.collect({ |i| Buffer.read(s, i) });

SynthDef(\slbuf, { |out, buf, slices=16, slice=16, freq=440, sustain=0.8|
    var myenv, env, start, len, basefreq = 440, rate, sig, sus;
    rate = freq / basefreq;
    len = BufFrames.kr(buf);
    start = (len / slices * slice);
    sus = BufDur.kr(buf)/16 * sustain * 1.1;
    myenv = Env.linen(attackTime: 0.01, sustainTime: sus, releaseTime: 0.1);
    sig = PlayBuf.ar(2, buf, BufRateScale.kr(buf) * rate, startPos: start, loop: 1);
    env = EnvGen.kr(myenv, 1, doneAction: 2);
    Out.ar(out, sig * env)
}).add;
```
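The slicing arithmetic here is simple: each slice’s start frame is the buffer length divided into equal parts. A trivial sketch in plain Python (the helper name is mine):

```python
def slice_start(num_frames, slices, slice_index):
    # mirrors `start = (len / slices * slice)` in the \slbuf SynthDef
    return num_frames / slices * slice_index

# e.g. a buffer of 160,000 frames cut into 16 equal slices
starts = [slice_start(160000, 16, i) for i in range(16)]
```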

As well as experimenting with reverb, I also had a delay effect in here at one point. Again, the nature of the already fairly resonant material meant that this was not that useful. In the end, I only used the reverb at the very end of the piece as a closing gesture.

```supercollider
~rbus = Bus.audio(s, 2);

SynthDef(\verb, { |out = 0, room = 1, mix = 1|
    var sig = FreeVerb.ar(In.ar(~rbus, 2), room: room, mix: mix);
    Out.ar(out, sig)
}).add;

s.sync;
Synth(\verb);
```

At some point in developing the project, it occurred to me to try playing the sliced material together with the original file. This seemed to be effective, and gave me a clear trajectory for the work: I decided that the finished piece would be the same pop-song length as the original recording. In experimenting with this approach – playing sliced loops in SC at the same time as playing back the whole file in Audacity – I found myself gently fading the original in and out. This is modelled in the synth below: I used an explicit random seed together with interpolated low frequency noise to produce a replicable gesture:

```supercollider
~file = "/Users/jsimon/Documents/ Simon's music/pimlico the next station/Pimlico 140516.wav";
~pimbuf = Buffer.read(s, ~file);
s.sync;

SynthDef(\pim, { |out=0, start=0, amp=1|
    var sig, startframe, env;
    startframe = start * 44100;
    RandSeed.ir(1, 0);
    env = EnvGen.kr(Env.linen(sustainTime: ~pimbuf.duration - 9, releaseTime: 9));
    sig = PlayBuf.ar(2, ~pimbuf, startPos: startframe, doneAction: 2) * LFNoise1.kr(1/5).range(0.05, 1.0);
    Out.ar(out, sig * amp * env);
}).add;
```
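The principle of seeding the random generator so that a ‘random’ gesture comes out identically on every render can be sketched outside the DSP context too. A Python sketch, where the breakpoint values stand in for what LFNoise1 would interpolate between:

```python
import random

def fade_levels(seed=1, breakpoints=6, lo=0.05, hi=1.0):
    # A fixed seed makes the "random" fade replicable on every run,
    # analogous to RandSeed.ir(1, 0) before LFNoise1.kr(1/5) in \pim.
    rng = random.Random(seed)
    return [lo + rng.random() * (hi - lo) for _ in range(breakpoints)]

a = fade_levels()
b = fade_levels()  # same seed, so exactly the same gesture
```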

There was a nice moment in the original where the accelerating electric motors of the departing train created a series of overlapping upward glissandi, sounding very like Shepard tones, or rather, the sliding Risset variation. Looking to enhance this gesture, I tried a couple of my own hacks before giving up and turning to a nice class from Alberto de Campo’s adclib:

```supercollider
~shep = {
    var slope = Line.kr(0.1, 0.2, 60);
    var shift = Line.kr(-1, 2, 60);
    var b = ~bufs[8];
    var intvs, amps;
    var env = EnvGen.kr(Env.linen(sustainTime: 53, releaseTime: 7), 1, doneAction: 2);
    #intvs, amps = Shepard.kr(5, slope, 12, shift);
    (PlayBuf.ar(b.numChannels, b, intvs.midiratio, loop: 1, startPos: 3*44100) * amps).sum * 0.2
};
s.sync;
```
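Shepard.kr outputs intervals in semitones, and .midiratio turns an interval into a playback-rate multiplier; the conversion is plain equal-temperament arithmetic:

```python
def midiratio(semitones):
    # SC's .midiratio: semitone interval -> rate ratio, 2^(n/12)
    return 2.0 ** (semitones / 12.0)

# an octave up doubles the playback rate; an octave down halves it
```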

All of the above is essentially setup material. The gist of the composition was in iterative experimentation with Pbindefs, as can be seen below: trying out different slicing patterns and durations, working with the various segments I’d prepared beforehand in Audacity.

```supercollider
Pbindef(\a, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/2, \buf, ~bufs[1], \note, 0);
Pbindef(\b, \instrument, \slbuf, \slice, Pser((8..15).pyramid(1), 32), \dur, 1/4, \buf, ~bufs[1], \note, 0);
Pbindef(\c, \instrument, \slbuf, \slice, Pser((2..5).pyramid(1), 32), \dur, 1/4, \buf, ~bufs[0], \note, 0);
Pbindef(\d, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/4, \buf, ~bufs[3], \note, 0);
Pbindef(\e, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/4, \buf, ~bufs[3], \note, 12);
Pbindef(\f, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/4, \buf, ~bufs[3], \note, [-12,12,24,36]);
Pbindef(\g, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/4.5, \buf, ~bufs[3], \note, [12,24,36]);
Pbindef(\h, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/5, \buf, ~bufs[3], \note, [-12,12,24,36]);
Pbindef(\i, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(12), 1), \dur, 1/6, \buf, ~bufs[3], \note, [-24,-12,12,24,36]);
Pbindef(\j, \instrument, \slbuf, \slice, Pseq((1..8).pyramid(1), 1), \dur, 1/7, \buf, ~bufs[3], \note, [-24,-12,12,24,36], \amp, 0.3);
Pdef(\k, (instrument: \slbuf, buf: ~bufs[3], slice: 2, note: [-24,-12,12,24,36], amp: 0.5, out: ~rbus));
```

```supercollider
s.sync;
};
```
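If I have the shape of pyramid(1) right, (1..8).pyramid(1) produces the growing prefixes of the list, flattened, which is what gives those slicing patterns their stuttering, accumulating feel. A quick Python sketch of my understanding:

```python
def pyramid1(xs):
    # sketch of SC's Array.pyramid(1): [x0], [x0,x1], ... flattened
    out = []
    for i in range(1, len(xs) + 1):
        out.extend(xs[:i])
    return out

slices = pyramid1(list(range(1, 9)))  # as in Pseq((1..8).pyramid(1), 1)
```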

The final composition was produced by running and recording the code below, which uses the handy Psym as a way to sequence the gestures developed above. The code by this point is entirely deterministic, and would produce the same piece on every run. No further editing was done, apart from normalising in Audacity.

```supercollider
// start from beginning
fork{
    Synth(\pim);
    2.wait;
    Psym(Pseq("aabaccddeeffghiijk", 1).trace).play(t);
    (60+14+20).wait;
    "shep".postln;
    ~shep.play;
    10.wait;
    "off again".postln;
    Psym(Pseq("aabaccddeeffghiijk", 1).trace).play(t);
};
)

s.prepareForRecord
s.record
s.stopRecording
```
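Psym’s role here can be sketched as a lookup over the named patterns defined earlier, playing each one out in the order the score string dictates. A hypothetical Python analogue, with toy patterns standing in for the Pbindefs:

```python
def psym(score, patterns):
    # roughly Psym(Pseq("aab...")): for each letter in the score,
    # play out the pattern registered under that name, in sequence
    for name in score:
        yield from patterns[name]

demo = list(psym("aab", {"a": [0, 1], "b": [2]}))
```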

Overall, I’m happy with the piece, and glad to have been able to contribute to this very interesting project.

## First public livecode

Last night I stumbled into my first public outing of some livecoding I’ve been working on in SuperCollider. The context was an improvisation night called In Tandem run by Bruce Wallace at the Academy of Music and Sound in Glasgow. I hadn’t intended to play, as I really don’t feel I’m ready yet, but I had my laptop and cables with me, they had a projector, so…!

I was jamming along with three other people, on bass, guitar and analog synth. It all went by in a blur, but everyone there seemed to think what I was doing was ok – mostly making grooves out of a random collection of drum samples, but running some algorithmically chosen chords as well.

The code is below: this is my screen exactly as I left it at the end of the night, mistakes and all. Toplap say ‘show us your screens’; they don’t say ‘show us your code’… but it seems the right thing to do.

```supercollider
// the end!
// they still going
// if you're curious, this is SuperCollider
// musci programming language
// writing code live is called, er, livecoding
// i'm just starting out
"/Users/jsimon/Music/SuperCollider Recordings/hitzamples/".openOS;
```

```supercollider
(
s.waitForBoot{
    Pdef.all.clear; // clear things out
    ~hitzpath = "/Users/jsimon/Music/SuperCollider Recordings/hitzamples/"; // a folder of samples
    ~hbufs = (~hitzpath ++ "*.aiff").pathMatch.collect({ |i| Buffer.read(s, i) }); // samples into an array of buffers
    t = TempoClock(140/60).permanent_(true); // tempo 140 bpm
    u = TempoClock(140/60 * 2/3).permanent_(true); // tempo 140 bpm * 2/3
    SynthDef(\bf, { |out=0 buf=0 amp=0.1 freq=261.6255653006|
        var sig = PlayBuf.ar(2, buf, BufRateScale.kr(buf) * freq/60.midicps, doneAction:2);
        Out.ar(out, sig * amp)
    }).add; // this whole chunk defines a synth patch that plays samples
};
// Pdef.all.clear;
//"/Users/jsimon/Music/SuperCollider Recordings/".openOS;
// t.sync(140/60, 16);
)

(instrument: \bf, \buf: ~hbufs.choose).play; // play an event using the synth called \bf // pick a randoms sample from the array
(instrument: \bf, \buf: ~z).play;
~z = ~hbufs.choose;

t.sync(140/60, 32); // gradual tempo changes possible
u.sync(140/60 * 2/3, 16);
v.sync(140/60 * 5/3, 16);

Pbindef(\x, \instrument, \bf, \buf, ~hbufs.choose).play(t).quant_(4);
Pbindef(\y, \instrument, \bf, \buf, ~hbufs.choose).play(u).quant_(4);
Pbindef(\z, \instrument, \bf, \buf, ~hbufs.choose).play(v).quant_(4);
Pbindef(\z, \instrument, \bf, \buf, ~hbufs.choose).play(v).quant_(4);

~g1 = {~hbufs.choose}!16; // choose sixteen samples at random = one bar full
~g2 = {~hbufs.choose}!16;
Pbindef(\x, \buf, Pseq(~g1, inf)); // play those sixteen samples chosen
Pbindef(\x, \buf, Pseq(~g2, inf)); // different sixteen, so, a variation.
Pbindef(\x, \dur, 0.5);
~d1 = {2.rand/10}!16;
~d2 = {2.0.rand/10}!16;
Pbindef(\x, \amp, Pseq(~d1, inf));
Pbindef(\x, \amp, 0.2);
Pbindef(\x, \note, Prand((-36..0), inf));
Pbindef(\x, \note, Pseq({(-24..0).choose}!16, inf)); // pitch each sample down by random amount
Pbindef(\x, \note, nil);
Pbindef(\x).resume;
Pbindef(\x).pause;
Pbindef(\z).pause;
Pbindef(\y).resume;

// hmm. blx diminished, that's just C major!
// was using \degree instead of \note, better sounds a bit more like messiaen now :)
~c = {var x = Scale.diminished2.degrees.scramble.keep(4).sort; x.insert(1,(x.removeAt(1)-12))};
// hexMajor thing also works beautifully now!
~c = {var x = Scale.hexMajor6.degrees.scramble.keep(4).sort; x.insert(1,(x.removeAt(1)-12))};
```

```supercollider
// next question might be changing \note, \dur and \root in a coordinated way
(
Pbindef(\k,
    \note, Pstutter(Prand([5,7,9,11,13]*2, inf), Pfunc(~c)),
    \dur, 0.5,
    \root, 3, // best option for feeling of key change
    \amp, Prand((2..5)/70, inf)
).play(t);
)
Pbindef(\k).pause;
Pbindef(\k).pause;
```
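The ~c chord function is doing a neat voicing trick: choose four distinct scale degrees at random, sort them, then drop the second-lowest an octave. A rough Python analogue (the degree list below is illustrative, not the actual contents of SC’s Scale.* objects):

```python
import random

def chord(degrees, rng=random):
    # like ~c: scramble.keep(4).sort, then x.insert(1, x.removeAt(1) - 12)
    notes = sorted(rng.sample(degrees, 4))
    notes[1] -= 12  # drop the second-lowest note down an octave
    return notes

voicing = chord([0, 2, 3, 5, 6, 8, 9, 11])
```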