Routing text-to-speech on the Mac

There are any number of ways of working with the built-in text-to-speech synthesis capabilities on the Mac. All of the music programming languages I use - Max, Pd and SuperCollider - offer ways of doing this, and I've also had great success controlling the output using AppleScript. The problem is that in every case the audio is synthesised by the operating system itself, which means it isn't accessible within an audio environment for further processing.
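
To illustrate the problem, here's a minimal sketch in SuperCollider, assuming the macOS `say` command: the speech is synthesised by the OS and goes straight to the system output, so it never touches a server bus and can't be processed in scsynth.

```
// The OS does the synthesis and plays the result directly;
// none of this audio is available inside the SuperCollider server.
"say 'hello, this audio never reaches the SuperCollider server'".unixCmd;
```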

I was inspired to think about this again recently by a thread on the SuperCollider list where somebody was trying to do exactly this, using Jack to route the sound from the Mac back into the application for further processing. What I've started to experiment with is routing the audio into a different application: in the example above, controlling the speech synthesis in Max and passing the audio into Pd. Combined with the facility to pass MIDI from Max to Pd (easy), I think I can see how to make a workable and potentially interesting system. But, for now, just proving to myself that it can be done :)
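
For what it's worth, this is roughly what the approach from the list thread might look like in SuperCollider, assuming the system output carrying the speech has already been routed back into SuperCollider's first input channel (via Jack or a loopback device) - the routing itself happens outside the language, and the processing here is just an arbitrary example.

```
(
s.waitForBoot {
	// Listen on input channel 0, where the looped-back speech is assumed
	// to arrive, and process it - a simple ring modulation plus reverb,
	// just to prove the routing works.
	{
		var speech = SoundIn.ar(0);
		var wobbled = speech * SinOsc.ar(110);
		FreeVerb.ar(wobbled, mix: 0.4, room: 0.7) ! 2
	}.play;

	// Trigger the speech; the OS synthesises it and sends it to the
	// system output, which Jack (or the loopback device) routes back in.
	"say 'hello from the operating system'".unixCmd;
};
)
```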