Outsound Music Summit: Vibration Hackers

The second concert of this year’s Outsound Music Summit, entitled “Vibration Hackers”, featured electronic musical experimentation from Stanford’s CCRMA and beyond. It was a sharp contrast to the previous night in both tone and medium, but had quite a bit to offer.

The concert opened with #MAX, a collaboration by Caitlin Denny on visuals, Nicole Ginelli on audio, and Dmitri Svistula on software development. It was based on the ubiquitous concept of the hashtag as popularized by Twitter. Audience members typed in suggested terms on a terminal set up in the hall. The terms were then projected on the screen and used to search online for videos, audio and textual materials to inform the unfolding performance. Denny used found videos as part of her projection, while Ginelli interpreted results with processed vocals.

#MAX

The idea was intriguing. I would have liked to see a more explicit connection between the source terms and the audio/video output – perhaps it was a result of the projection onto the distorting curtain instead of a flat surface, but the connection wasn’t always clear. It would have also been fun to allow audience members to input terms from their mobile phones via Twitter. But I applaud the effort to experiment artistically with social networking infrastructure and look forward to seeing future versions of the piece.

Next was a set of fixed-media pieces by Fernando Lopez-Lezcano, collectively called Knock Knock…anybody there? Lopez-Lezcano is a master of composition that uses advanced sound spatialization as an integral element, and these pieces presented a “journey through a 3D soundscape”.

Fernando Lopez-Lezcano
[Photo: PeterBKaars.com.]

The result was a captivating, immersive, and otherworldly experience, with moving sounds based on voices, sometimes quite intelligible, sometimes manipulated into abstract wiggling sounds that spun around the space. There was also a section of pop piano, appropriately jarring in the context, which gave way to a thicker enveloping sound and then faded to a series of whispers scattered in the far corners of the space. The team from CCRMA brought an advanced multichannel system to realize this and other pieces, and the technology plus the expert calibration made a big difference in the experience. Even from the side of the hall, I was able to get much of the surround effect.

The next performance featured Ritwik Banerji and Joe Lasquo with “Improvising Agents”, artificial-intelligence software entities that listen to, interpret, and then produce their own music in response. Banerji and Lasquo each brought their own backgrounds to the development of their unique agents, with Banerji “attempting to decolonize musician-computer interaction based on the possibility that a computer is already intelligent” and Lasquo applying his expertise in AI and natural language processing to musical improvisation. They were joined by Warren Stringer, who provided a visual background to the performance.

Joe Lasquo and Ritwik Banerji
[Photo: PeterBKaars.com.]

As a humorous demonstration of their technology, the performance opened with a demo of two chatbots attempting to converse with one another, with rather absurd results. This served as the point of departure for the first piece, which combined manipulation of the chatbot audio with other sounds while Banerji and Lasquo provided counterpoint on saxophone and piano, respectively. The next two pieces, which used more abstract material, were stronger, with deep sounds set against the human performances and undulating geometric video elements. The final piece was even more organic, with subtle timbres and changes that came in waves, and more abstract video.

This was followed by Understatements (2009-2010), a fixed-media piece by Ilya Rostovtsev. The piece was based on acoustic instruments that Rostovtsev recorded and then manipulated electronically.

Ilya Rostovtsev
[Photo: PeterBKaars.com.]

It began with the familiar sound of pizzicato strings that gave way to scrapes and then longer pad-like sounds. Other moments were more otherworldly, including extremely low tones that gradually increased in volume. The final section featured bell sounds that seemingly came out of nowhere but coalesced into something quite serene.

The final performance featured the CCRMA Ensemble, which included Roberto Morales-Manzanares on flute, voice and his “Escamol” interactive system, Chris Chafe on celletto, John Granzow on daxophone and Rob Hamilton on resonance guitar. Novel instruments were a major part of this set. Chris Chafe’s celletto is essentially a cello stripped down to its essential structure and augmented for electro-acoustic performance. The daxophone is based on a bowed wooden element where the sound is generated from friction. The Escamol system employed a variety of controllers, including at one point a Wii.

CCRMA Ensemble
[Photo: PeterBKaars.com.]

The set unfolded as a single long improvisation. It began with bell sounds, followed by other sustained tones mixed with percussive sounds and long guitar tones. The texture became more dense with guitar and shaker sounds circling the room. The celletto and daxophone joined in, adding scraping textures, and then bowing sounds against whistles. In addition to the effects, there were more idiomatic moments with bowed celletto and traditional flute techniques. This was truly an experimental virtuosic performance, with strong phrasing, textural changes and a balance of musical surprises.

I was happy to see such a strong presence for experimental electronic technologies in this year’s Summit. And there was more electronics to come the following evening, with a very different feel.

CCRMA Transitions

We close out the year with one final gig report: my performance at the CCRMA Transitions concert at Stanford University’s computer-music center. The two-night event took place in the courtyard of CCRMA’s building, with a large audience beneath the stars, surrounded by an immersive 24-channel speaker array.

I brought my piece Realignments, which I had originally composed in 2011 for a 12-channel radial speaker and eight-channel hall system at CNMAT as part of my Regents’ Lecturer concert there. Performing this version outdoors, in front of a large audience and clad in a provocative costume, was quite an experience, and you can see the full performance in this video:

The Transitions version of the piece was remixed to use the eight main channels of the speaker array at CCRMA. Once again, the iPad was used to move around clouds of additive-synthesis partials and trigger point sources, which were directed at different speakers of the array. The overall effect of the harmonies, sounds and immersive sound system was otherworldly. I chose this particular costume to reflect that, although I had also used it a couple of weeks earlier in my duo “Pitta of the Mind” with poet Maw Shein Win at this year’s Transbay Skronkathon. I am planning more performances with this character (but not the same costume) in the coming year.

Jean-Claude Risset at CCRMA

A few weeks ago I had the opportunity to see composer and computer-music pioneer Jean-Claude Risset present a concert of his work at CCRMA at Stanford. Risset has made numerous contributions to sound analysis and synthesis, notably his extension of Shepard tones to continuously shifting pitches. In the “Shepard-Risset glissando”, pitches ascend or descend and are continuously replaced at the opposite end of the range, giving the illusion of a sound that ascends or descends forever. You can hear an example here, or via the video below.
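The illusion is simple enough to sketch in code. Below is a minimal, hypothetical rendering (not Risset’s own implementation): octave-spaced sine partials glide downward in log-frequency space and wrap around, while a bell-shaped loudness envelope keeps partials inaudible at the extremes, so new tones fade in at the top as old ones fade out at the bottom.

```python
import numpy as np

def shepard_risset(duration=10.0, sr=44100, n_partials=8,
                   f_min=20.0, octaves=8, cycle=10.0):
    """Descending Shepard-Risset glissando as a mono float array in [-1, 1]."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for k in range(n_partials):
        # Each partial's position in log-frequency space drifts downward
        # and wraps around, so a partial re-enters at the top of the
        # range whenever it disappears at the bottom.
        pos = (k / n_partials - t / cycle) % 1.0        # 0..1
        freq = f_min * 2.0 ** (pos * octaves)           # octave-spaced glide
        # Bell-shaped amplitude over log-frequency: loudest mid-spectrum,
        # nearly silent at the edges, which hides the wrap-around.
        amp = np.exp(-0.5 * ((pos - 0.5) / 0.18) ** 2)
        # Integrate instantaneous frequency to get a continuous phase.
        phase = 2 * np.pi * np.cumsum(freq) / sr
        out += amp * np.sin(phase)
    return out / n_partials

tone = shepard_risset(duration=2.0)
```

Played in a loop, the result seems to fall forever, even though every partial simply cycles through the same eight octaves.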

Sadly, I arrived slightly late and missed much of the first piece, Duo for one pianist (1989-1992), featuring Risset himself on a Yamaha Disklavier piano. The duo arises from computer control of the piano simultaneous with the live human performer. It’s not a simple computer-based accompaniment part, but rather a duo in which the actions of the live performer are interpreted by a program (written in an early version of Max) and inform the computer’s response in real time.

The remainder of the concert featured works for multichannel tape. The first of these pieces, Nuit (2010): from the tape of Otro (L’autre), featured eight channels with meticulous sound design and spatialization. The ethereal sounds at the start of the piece sounded like either frequency modulation (FM) or very inharmonic additive synthesis (actually, FM can be represented as inharmonic partials in additive synthesis, so hearing both techniques makes sense). Amidst these sounds there emerged the deep voice of Nicholas Isherwood speaking in French, and then later in English as well – I specifically recalled the phrase “a shadow of magnitude.” Surrounding the vocal part was a diverse palette of sounds including low machine noise, hits of percussion and wind tones, a saxophone trill, tubular bells and piano glissandi. There were examples of Shepard-Risset glissandi towards the end of the piece.
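The parenthetical about FM and additive synthesis is worth unpacking. By the Bessel-function expansion, a simple FM signal sin(2πf_c·t + I·sin(2πf_m·t)) is exactly a sum of sinusoidal partials at f_c ± n·f_m weighted by J_n(I), so when the carrier-to-modulator ratio is not a simple fraction the spectrum is inharmonic. A quick numerical sketch (all parameters arbitrary, J_n computed from its integral form rather than a library call):

```python
import numpy as np

def bessel_j(n, x, steps=20000):
    """J_n(x) via its integral form: (1/pi) * integral of cos(n*t - x*sin t)."""
    theta = (np.arange(steps) + 0.5) * np.pi / steps   # midpoint rule on [0, pi]
    return np.cos(n * theta - x * np.sin(theta)).mean()

sr = 8000
t = np.arange(sr) / sr                      # one second of samples
fc, fm, index = 400.0, 173.0, 2.0           # non-simple ratio -> inharmonic

# Direct FM synthesis.
fm_signal = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# Equivalent additive synthesis: partials at fc + n*fm, weights J_n(index).
additive = np.zeros_like(t)
for n in range(-20, 21):
    additive += bessel_j(n, index) * np.sin(2 * np.pi * (fc + n * fm) * t)

# The two signals are mathematically identical up to truncation error.
err = float(np.max(np.abs(fm_signal - additive)))
```

The residual `err` comes only from truncating the sum at |n| = 20 and from the numerical integration; the two synthesis techniques really do produce the same waveform.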

The next piece, Kaleidophone (2010) for 16-channel tape, began with similar glissandi, providing an interesting sense of continuity. In this instance they were ascending, disappearing at the top of the range and re-emerging as low tones. Above this pattern a series of high harmonics emerged, like wispy clouds. The glissandi eventually switched to both upward and downward motion, and were subsequently followed by a series of more metallic tones. At one point a loud swell emerged, reminiscent of the distinctive THX announcement at the start of many movies, followed by a series of percussive tones with discrete hits but continuous pitch changes, getting slower and slower. There was a series of piano-like sounds with odd intonations played more like a harp, followed by gong-like sounds reminiscent of gamelan music but with very artificial pitches and speeds. Industrial metallic sounds gave way to a section of tense orchestral music, and then long tones that subtly and gradually became more noisy and inharmonic. A sound like crackling fire seemed to channel the early electronic pieces of Iannis Xenakis. Heavily comb-filtered environmental sounds gave way to eerie harmonies. The constantly changing sounds lulled the listener into a calm state before startling him or her with a burst of loud noise (perhaps the most intense moment in the entire concert). This was followed by machine noises set against a sparse pattern of wind pipes, and a large cloud of inharmonic partials concluded the piece. I had actually not looked in advance at the subtitle in the program, “Up, Keyboards, Percussion I, Percussion II, Winds, Water, Fire, Chorus, Eole”, but my experience of the piece clearly reflected the section titles from perception alone.

The final piece, Five Resonant Sound Spaces for 8-channel tape, began with orchestral sounds: bells and low brass, gongs (or tam-tams), timpani. The sounds seemed acoustic at first, but gradually more hints of electronics emerged: filtering, stretching and timbral decomposition. A low drone overlaid with shakers and tone swells actually reminded me eerily of one of my own pieces, Edge 0316, which was based on manipulations of ocean-wave recordings and a rainstick. This image was broken by a trombone swell and the emergence of higher-pitched instruments. The overall texture moved between more orchestral music and dream-like water electronics. A series of fast flute runs narrowed to a single pure-tone whistle, which then turned into something metallic and faded to silence. All at once, loud shakers emerged alongside granular manipulations of piano sounds – more specifically, prepared piano with manual plucking of strings inside the body and objects used to modify the sound. The sound of a large hall, perhaps a train station, with its long echoes of footsteps and bits of conversation, was “swept away” by complex electronic sounds and then melded together. A series of high ethereal sounds seemed almost, but not quite, to be ghostly voices, and eventually resolved into clear singing voices, both male and female. The voices gave way to dark sounds like gunfire, trains and a cacophony of bells – once again channeling the early electronic work of Xenakis. A breath sound from a flute was set against a diversity of synthesized sounds that covered a wide ground, before finally resolving to a guitar-like tone.


The concert was immediately followed by a presentation and discussion by Risset about his music. His presentation, which included material from a documentary film as well as live discussion, covered a range of topics, including using Max and the Disklavier to perform humanly impossible music with multiple tempi, and marrying pure sound synthesis with the tradition of musique concrète, with nods to pioneers in electronic music including Thaddeus Cahill, Leon Theremin, Edgard Varèse, and Max Mathews (who was present at the concert and talk). He also talked about the inspiration he draws from the sea and landscape near his home in Marseilles. The rocky shoreline and sounds from the water in the video did remind me a lot of coastal California, and made it even less surprising that we could come up with pieces with very similar sounds. He went on to describe his 1985 piece SUD in more detail, which used recordings of the sea as a germinal motive that was copied and shifted in various ways. Percussion lines were drawn from its contours, and he also made use of sounds of birds and insects, observing that crickets in Marseilles seem to sing on F sharp. I did have a chance to talk briefly with Risset after the reception about our common experience of composing music inspired by coastal landscapes.

Overall, this was an event I am glad I did not miss.

Preparing for March 4 Concert, Part 2

On Tuesday, I went to the Center for New Music and Audio Technologies (CNMAT) to continue preparing for the Regents’ Lecture concert on March 4. I brought most of the setup with me, at least the electronic gear:

Several pieces are going to feature the iPad (yes, the old pre-March 2 version) running TouchOSC, controlling Open Sound World on the MacBook. I worked on several new control configurations after trying out some of the sound elements I will be working with. Of course, I have the monome as well, mostly to control sample-looping sections of various pieces.
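Under the hood, TouchOSC talks to applications like Open Sound World by sending Open Sound Control (OSC) packets over UDP. As a sketch of what those packets look like on the wire, here is a minimal OSC 1.0 message encoder; the address `/1/fader1` is just a typical TouchOSC default layout name, not necessarily one I used:

```python
import struct

def osc_message(address, *args):
    """Encode a minimal OSC 1.0 message (float32 and int32 args only),
    the kind of packet TouchOSC emits for each fader or button move."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary.
        b += b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        else:
            raise TypeError("unsupported OSC argument: %r" % (a,))
    return pad(address.encode()) + pad(tags.encode()) + payload

# A single fader update, ready to send over a UDP socket:
packet = osc_message("/1/fader1", 0.75)
```

Each control surface element maps to one such address, which is why remapping the iPad layout to different Open Sound World parameters is mostly a matter of routing addresses.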

One of the main reasons for spending time on site is to work directly with the sound system, which features an 8-channel surround speaker configuration.  Below are five of the eight speakers.


One of the new pieces is designed specifically for this space – and also to utilize a 12-channel dodecahedron speaker developed at CNMAT. I will also be adapting older pieces and performance elements for the space, including a multichannel version of Charmer:Firmament. In addition to the multichannel work, I made changes to the iPad control based on the experience from last Saturday’s performance at Rooz Cafe in Oakland. It is now far more expressive and closer to the original.

I also broke out the newly acquired Wicks Looper on the sound system.  It sounded great!

The performance information (yet again) is below.


Friday, March 4, 8PM
Center For New Music and Audio Technologies (CNMAT)
1750 Arch St., Berkeley, CA

CNMAT and the UC Berkeley Regents’ Lecturer program present an evening of music by Amar Chaudhary.

The concert will feature a variety of new and existing pieces based on Amar’s deep experience and dual identity in technology and the arts. He draws upon sources as diverse as jazz standards, Indian music, film scores and his past research work, notably the Open Sound World environment for real-time music applications. The program includes performances with instruments on laptop, iPhone and iPad, acoustic grand piano, do-it-yourself analog electronics, and Indian and Chinese folk instruments. He will also premiere a new piece that utilizes CNMAT’s unique sound spatialization resources.

The concert will include a guest appearance by my friend and frequent collaborator Polly Moller. We will be doing a duo with Polly on flutes and myself on Smule Ocarina and other wind-inspired software instruments – I call it “Real Flutes Versus Fake Flutes.”

The Regents’ Lecturer series features several research and technical talks in addition to this concert. Visit http://www.cnmat.berkeley.edu for more information.