Regents Lecturer Concert, CNMAT (March 2011)

Today we look back on my solo concert at the Center for New Music and Audio Technologies (CNMAT) at U.C. Berkeley in early March. It was part of my U.C. Regents' Lecturer appointment this year, which also included technical talks and guest lectures for classes.

This is one of the more elaborate concerts I have done. Not only did I have an entire program to fill on my own, but I specifically wanted to showcase various technologies related to my past research at CNMAT and some of their current work, such as advanced multi-channel speaker systems. I spent a fair amount of time onsite earlier in the week to do some programming, and arrived early on the day of the show to get things set up. Here is the iPad with CNMAT’s dodecahedron speaker – each face of the dodecahedron is a separate speaker driven by its own audio channel.



Here is the Wicks Looper (which I had recently acquired) along with the dotara, an Indian string instrument often used in folk music.



I organized the concert so that the first half focused more on showcasing music technologies, and the second half on more theatrical live performance. This does not mean the first half lacked strong musicality or that the second lacked technological sophistication, but rather reflects which theme was central to each piece.

After a very generous introduction by David Wessel, I launched into one of my standard improvisational pieces. Each one is different, but I do incorporate a set of elements that get reused. This one began with the Count Basie “Big Band Remote” recording and made use of various looping and resampling techniques with the Indian and Chinese instruments (controlled by monome), the Dave Smith Instruments Evolver, and various iPad apps.

Electroacoustic Improvisation – Regents Lecturer Concert (CNMAT) from CatSynth on Vimeo.

The concert included the premiere of a new piece composed specifically for CNMAT’s impressive loudspeaker resources, the dodecahedron as well as the 8-channel surround system. In the main surround speakers, I created complex “clouds” of partials in an additive synthesizer that could be panned between different speakers for a rich immersive sound. I had short percussive sounds emitted from various speakers on the dodecahedron. I thought the effect was quite strong, with the point sounds very localized and spatially separated from the more ambient sounds. It is hard to get the full effect in the video, but here it is nonetheless:

Realignments – Regents Lecturer Concert, CNMAT from CatSynth on Vimeo.

The piece was implemented in Open Sound World – the new version that primarily uses Python scripts (or any OSC-enabled scripting language) instead of the old graphical user interface. I used TouchOSC on the iPad for real-time control.
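To give a sense of how that kind of control script can fit together, here is a minimal sketch, not the actual concert patch. It assumes the python-osc library, and the OSC addresses, ports, and TouchOSC layout (/1/fader1, /dodec/pad, /cloud/gain, /dodec/trigger) are hypothetical placeholders rather than OSW's real address space. A fader on the iPad sets the azimuth of the partial cloud, which is mapped to equal-power gains across the eight surround channels, and a button press triggers a percussive sound on one face of the dodecahedron.

# Minimal sketch: TouchOSC (iPad) -> Python -> OSC messages to the synth engine.
# All addresses and ports below are hypothetical placeholders.
import math
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

NUM_CHANNELS = 8                                   # 8-channel surround ring
synth = SimpleUDPClient("127.0.0.1", 7000)         # assumed port for the synthesis process

def pan_gains(azimuth, num_channels=NUM_CHANNELS):
    """Equal-power gains for a ring of speakers; azimuth is normalized to [0, 1)."""
    pos = azimuth * num_channels
    low = int(pos) % num_channels
    high = (low + 1) % num_channels
    frac = pos - int(pos)
    gains = [0.0] * num_channels
    gains[low] = math.cos(frac * math.pi / 2)
    gains[high] = math.sin(frac * math.pi / 2)
    return gains

def on_fader(address, value):
    # A TouchOSC fader (0..1) sets the azimuth of the partial "cloud".
    for ch, gain in enumerate(pan_gains(value)):
        synth.send_message(f"/cloud/gain/{ch}", gain)

def on_pad(address, face):
    # A TouchOSC button, assumed here to send a face index, fires a short
    # percussive sound on that face of the dodecahedron speaker.
    synth.send_message("/dodec/trigger", int(face))

dispatcher = Dispatcher()
dispatcher.map("/1/fader1", on_fader)
dispatcher.map("/dodec/pad", on_pad)

# Listen for TouchOSC messages from the iPad (TouchOSC commonly sends on port 8000).
BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()

In practice the messages go to whatever synthesis process is listening, OSW or otherwise; the point is simply that all of the real-time control flows over OSC.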

I then moved from rather complex experimental technology to a simple and very self-contained instrument, the Wicks Looper, in this improvised piece. It had a very different sound from the software-based pieces in this part of the concert, and I liked the contrast.

The first half of the concert also featured two pieces from my CD Aquatic: Neptune Prelude to Xi and Charmer:Firmament. The original live versions of these pieces used a Wacom graphics tablet controlling OSW patches. I reimplemented them to use TouchOSC on the iPad.

The second half of the concert opened with a duo of myself and Polly Moller on concert and bass flutes. We used one of my graphical score sets – here we went in order from one symbol to the next and interpreted each one.

The cat one was particularly fun, as Polly emulated the sound of a cat purring. It was a great piece, but unfortunately I do not have a video of this one to share. So we will have to perform it again sometime.

I performed the piece 月伸1 featuring the video of Luna. Each of the previous performances, at the Quickening Moon concert and Omega Sound Fix last year, used different electronic instruments. This time I performed the musical accompaniment exclusively on acoustic grand piano. In some ways, I think it is the strongest of the three performances, with more emotion and musicality. The humor came through as well, though a bit more subtle than in the original Quickening Moon performance.

月伸1 – Video of Luna with Acoustic Grand Piano Improvisation from CatSynth on Vimeo.

The one unfortunate part of the evening came in the final piece. I had originally done Spin Cycle / Control Freak at a series of exchange concerts between CNMAT and CCRMA at Stanford in 2000. I redid the programming for this performance to use the latest version of OSW and TouchOSC on the iPad as the control surface. However, at this point in the evening I could not get the iPad and the MacBook onto the same network. The iPad could not find the MacBook’s private wireless network, even after multiple reboots of both devices. In my mind, this is actually the biggest problem with using an iPad as a control surface – it requires wireless networking, which seems to be very shaky at times on Apple hardware. It would be nice if they allowed one to use a wired connection via the USB cable. I suppose I should be grateful that this problem did not occur until the final piece, but it was still a bit of an embarrassment, and it gives me pause about using iPad/TouchOSC until I know how to make it more reliable.

On balance, it was a great evening of music even with the misfire at the end. I was quite happy with the audience turnout and the warm reception and feedback afterwards. It was a chance to look back on solo work from the past ten years, and look forward to new musical and technological adventures in the future.

Preparing for Regents’ Lecturer presentation, Part 1

I have been busily preparing this weekend for the first of my UC Berkeley Regents’ Lecturer presentations:

Open Sound World (OSW) is a scalable, extensible programming environment that allows musicians, sound designers and researchers to process sound in response to expressive real-time control. This talk will provide an overview of OSW, past development and future directions, and then focus on the parallel processing architecture. Early in the development of OSW in late 1999 and early 2000, we made a conscious decision to support parallel processing as affordable multiprocessor systems were coming on the market. We implemented a simple scalable dynamic system in which workers take on tasks called “activation expressions” on a first-come, first-served basis, with facilities for ordering and prioritization to deal with real-time constraints and synchronicity of audio streams. In this presentation, we will review a simple musical example and demonstrate performance benefits and limitations of scaling to small multi-core systems. The talk will conclude with a discussion of how current research directions in parallel computing can be applied to this system to solve past challenges and scale to much larger systems.
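To make the scheduling idea a bit more concrete, here is a toy sketch in Python of the first-come, first-served worker scheme, with a priority key standing in for the ordering and prioritization facilities. OSW itself is implemented in C++, so this is only an illustration of the concept, and all of the names here are my own rather than OSW's.

# Toy illustration: worker threads pull "activation expressions" off a shared
# priority queue; lower priority values (e.g. audio deadlines) run first, and
# a sequence counter preserves arrival order within a priority level.
import heapq
import threading
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass(order=True)
class Activation:
    priority: int
    sequence: int
    task: Callable = field(compare=False)

class Scheduler:
    def __init__(self, num_workers=4):
        self._heap = []
        self._lock = threading.Lock()
        self._available = threading.Semaphore(0)
        self._counter = 0
        for _ in range(num_workers):
            threading.Thread(target=self._run, daemon=True).start()

    def post(self, task, priority=0):
        """Queue an activation expression for any free worker to pick up."""
        with self._lock:
            heapq.heappush(self._heap, Activation(priority, self._counter, task))
            self._counter += 1
        self._available.release()

    def _run(self):
        while True:
            self._available.acquire()          # block until work is available
            with self._lock:
                activation = heapq.heappop(self._heap)
            activation.task()                  # execute the activation expression

# Example: audio-rate work posted at high priority, housekeeping at low priority.
sched = Scheduler(num_workers=4)
sched.post(lambda: print("process audio block"), priority=0)
sched.post(lambda: print("update meters"), priority=10)
time.sleep(0.5)                                # give the demo workers a moment to finish

The real system also has to keep audio streams synchronized across workers, which is one of the constraints and limitations the talk discusses.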

You can find out more details, including location for those in the Bay Area who may be interested in attending, at the official announcement site.


Much of the time for a presentation is spent making PowerPoint slides:

With slides out of the way, I can now turn to the more fun part, the short demos. This gives me an opportunity to work with TouchOSC for the iPad as a method for controlling OSW patches. We will see how that turns out later.