Outsound Music Summit: Vibration Hackers

The second concert of this year’s Outsound Music Summit, entitled “Vibration Hackers”, featured experimental electronic music from Stanford’s CCRMA and beyond. It was a sharp contrast to the previous night in both tone and medium, but had quite a bit to offer.

The concert opened with #MAX, a collaboration by Caitlin Denny on visuals, Nicole Ginelli on audio, and Dmitri Svistula on software development. It was based on the ubiquitous concept of the hashtag as popularized by Twitter. Audience members typed in suggested terms on a terminal set up in the hall. The terms were then projected on the screen and used to search online for videos, audio and textual materials to inform the unfolding performance. Denny used found videos as part of her projection, while Ginelli interpreted results with processed vocals.

#MAX

The idea was intriguing. I would have liked to see more explicit connection between the source terms and audio/video output – perhaps it was a result of the projection onto the distorting curtain instead of a flat surface, but the connection wasn’t always clear. It would have also been fun to allow audience members to input terms from their mobile phones via Twitter. But I applaud the effort to experiment artistically with social networking infrastructure and look forward to seeing future versions of the piece.

Next was a set of fixed-media pieces by Fernando Lopez-Lezcano, collectively called Knock Knock…anybody there? Lopez-Lezcano is a master of composition that uses advanced sound spatialization as an integral element, and these pieces presented a “journey through a 3D soundscape”.

[Photo: PeterBKaars.com.]

The result was a captivating, immersive and otherworldly experience, with moving sounds based on voices, sometimes quite intelligible, sometimes manipulated into abstract wiggling sounds that spun around the space. There was also a section of pop piano, appropriately jarring in context, which gave way to a thicker enveloping sound and then faded to a series of whispers scattered in the far corners of the space. The team from CCRMA brought an advanced multichannel system to realize this and other pieces, and the technology plus the expert calibration made a big difference in the experience. Even from the side of the hall, I was able to get much of the surround effect.

The next performance featured Ritwik Banerji and Joe Lasquo with “Improvising Agents”, artificial-intelligence software entities that listen to, interpret, and then produce their own music in response. Banerji and Lasquo each brought their own backgrounds to the development of their unique agents, with Banerji “attempting to decolonize musician-computer interaction based on the possibility that a computer is already intelligent” and Lasquo applying his expertise in AI and natural language processing to musical improvisation. They were joined by Warren Stringer, who provided a visual background to the performance.

Joe Lasquo and Ritwik Banerji
[Photo: PeterBKaars.com.]

As a humorous demonstration of their technology, the performance opened with a demo of two chatbots attempting to converse with one another, with rather absurd results. This served as the point of departure for the first piece, which combined manipulation of the chatbot audio with other sounds while Banerji and Lasquo provided counterpoint on saxophone and piano, respectively. The next two pieces, which used more abstract material, were stronger, with deep sounds set against the human performances and undulating geometric video elements. The final piece was even more organic, with subtle timbres and changes that came in waves, and more abstract video.

This was followed by Understatements (2009-2010), a fixed-media piece by Ilya Rostovtsev. The piece was based on acoustic instruments that Rostovtsev recorded and then manipulated electronically.

Ilya Rostovtsev
[Photo: PeterBKaars.com.]

It began with the familiar sound of pizzicato strings that gave way to scrapes and then longer pad-like sounds. Other moments were more otherworldly, including extremely low tones that gradually increased in volume. The final section featured bell sounds that seemingly came out of nowhere but coalesced into something quite serene.

The final performance featured the CCRMA Ensemble, which included Roberto Morales-Manzanares on flute, voice and his “Escamol” interactive system, Chris Chafe on celletto, John Granzow on daxophone and Rob Hamilton on resonance guitar. New musical instruments were a major part of this set. Chris Chafe’s celletto is essentially a cello stripped down to its essential structure and augmented for electro-acoustic performance. The daxophone is based on a bowed wooden element where the sound is generated from friction. The Escamol system employed a variety of controllers, including at one point a Wii.

CCRMA Ensemble
[Photo: PeterBKaars.com.]

The set unfolded as a single long improvisation. It began with bell sounds, followed by other sustained tones mixed with percussive sounds and long guitar tones. The texture became denser, with guitar and shaker sounds circling the room. The celletto and daxophone joined in, adding scraping textures, and then bowing sounds against whistles. In addition to the effects, there were more idiomatic moments with bowed celletto and traditional flute techniques. This was a truly experimental and virtuosic performance, with strong phrasing, textural changes and a balance of musical surprises.

I was happy to see such a strong presence for experimental electronic technologies in this year’s Summit. And there was more electronics to come the following evening, with a very different feel.

CCRMA Transitions

We close out the year with one final gig report: my performance at the CCRMA Transitions concert at Stanford University’s computer-music center. The two-night event took place in the courtyard of CCRMA’s building, with a large audience beneath the stars and surrounded by an immersive 24-channel speaker array.

I brought my piece Realignments, which I had originally composed in 2011 for a 12-channel radial speaker and an eight-channel hall system at CNMAT as part of my Regents Lecturer Concert there. Performing this version outdoors, in front of a large audience and clad in a provocative costume, was quite an experience, and you can see the full performance in this video:

The Transitions version of the piece was remixed to use the eight main channels of the speaker array at CCRMA. Once again, the iPad was used to move around clouds of additive-synthesis partials and trigger point sources, which were directed at different speakers of the array. The overall effect of the harmonies, sounds and immersive sound system was otherworldly. I chose this particular costume to reflect that, although I had also used it a couple of weeks earlier in my duo “Pitta of the Mind” with poet Maw Shein Win at this year’s Transbay Skronkathon. I am planning more performances with this character (but not the same costume) in the coming year.

CCRMA Modulations at SOMArts

A few weeks ago, I attended CCRMA Modulations 2011, an evening of live electronic music and sound installations by CCRMA (the Center for Computer Research in Music and Acoustics at Stanford) and special guests at SOMArts in San Francisco. The event was an eight-hour marathon, though I only stayed for about half the time, seeing many of the installations and most of the live-music performances.

The first part of the evening featured sound sculptures from Trimpin and his students at CCRMA. This particular project, the “Boom Boom Record Player” by Jiffer Harriman, stuck with me.

The output from the record player is used to drive the electromechanical instruments on the right. The instruments were well crafted, and it was particularly fitting to have a classic Earth, Wind & Fire LP on the record player.



Trimpin’s offering featured coin-operated robotic percussion in which the drums included just about every model of Apple notebook computer going back to an early PowerBook (and even earlier, as I think I espied an Apple IIc).




The live-music portion of the evening began with Tweet Dreams by Luke Dahl and Carr Wilkerson. Audience members with Twitter access were encouraged to live-tweet messages to a specific hashtag, #modulations. The messages were then analyzed in real time and the data used to affect the music. As I was planning to live-tweet from this event anyway via iPhone, I was ready to participate. Of course, inviting audience participation like this is a risky proposition for the artists, as one cannot control what people may say. I will freely admit I can be a bit snarky at times, and it came out in some of my tweets. The music was relatively benign, with very harmonic runs of notes – and I exhorted them to “give me something harsh and noisy”. Inspired by another participant, I also quoted lines from the infamous “More Cowbell!” skit from Saturday Night Live, much to the delight of some in the audience. The main changes in the music seemed to be in density, rhythm and some melodic structure, but all within boundaries that kept the sound relatively harmonic and “pleasant.” I would personally have liked to hear (as I suggested via Twitter) more complex music, with some noisy elements and more dramatic changes. But the interaction between the music and the audience was a lot of fun.

The next piece, Sferic by Katharine Hawthorne, featured dance and electronics. It was described as “using radio and movement improvisation to explore the body as an antenna.” The dancers, dressed in black outfits with painted patterns, began the movement to a stream of radio static. The motions were relatively minimalist, and sometimes seemed strained. Gestures included outstretched arms and fingers pointing, with Hawthorne walking slowly as her dance partner Luke Taylor ran more quickly. Rich, harmonic music entered from the rear channels of the hall, and the dancers lowered themselves flat to the ground. The static noise returned, but more crackly and with other radio-tuning sounds, then it became a low rumble. The dancers seemed to be trying very hard to get up. Then they started pointing. The music became more anxious, with low percussive elements. The dance became more energetic and active as the piece came to a close.

This was followed by Fernando Lopez-Lezcano performing Dinosaur Skin (Piel de Dinosaurio), a piece for multi-channel sound diffusion, an analog synthesizer and custom computer software. The centerpiece was a custom analog synthesizer, “El Dinosaur”, that Lopez-Lezcano built from scratch in 1981.

The instrument is monophonic (but, like most analog synthesizers, a very rich monophonic voice), multiplied for the purposes of the performance by audio processing in external software and hardware. The music started very subtly, with sounds like galloping in the distance. The sounds grew high in pitch, then descended and moved across the room – the sense of space in the multichannel presentation was quite strong. More lines of sound emerged, with extreme variations in pitch, low and high. The timbre, continually changing, grew more liquid over time, with more complex motion and rotation of elements in the sound space. Then it became more dry and machine-like. There was an exceptionally loud burst of sound followed by a series of loud whistles on top of low buzzing. The sounds slowed down and became more percussive (I was reminded, as I often am by sounds like this, of Stockhausen’s Kontakte (II)). Then came another series of harsher whistles and bursts of sound. One sound in particular started to resonate quite strongly in the room. Overall, the sound became steady but inharmonic – the timbre becoming more filtered and “analog-like”.

The final performance in this section of the evening featured Wobbly (aka Jon Leidecker) as a guest artist presenting More Animals, a “hybrid electronic / concrete work” that combined manipulated field recordings of animals with synthesized sounds. As a result, the piece was filled with sounds that either were actual animals or were reminiscent of animal sounds, freely mixed. The piece opened with pizzicato glissandi on strings, which became more wailing and plaintive over time. I heard sounds that were either whales and cats, or models of whales and cats. Behind these sounds, pure sine tones emerged, and then watery synthesized tones. A series of granular sounds followed, some of which reminded me of human moaning. The eerie and watery soundscape that grew from these elements was rich and immersive. After a while there was a sudden, abrupt change followed by violent ripping sounds, then more natural elements, such as water and bird whistles. These natural elements were blended with AM modulation that sounded a bit like a helicopter. Another abrupt change led to more animal sounds with eerie howling and wind, a strange resonant forest. Gradually the sound moved from natural to more technological, with “sci-fi” elements such as descending electrical noises. Another sudden change brought a rhythmic percussion pattern, slow and steady, a Latin “3+2+2” with electronic flourishes. Then it stopped, restarted and grew, with previous elements from the piece becoming part of the rhythm.


After an intermission, the seats were cleared from the hall and the music resumed in a more techno dance-club style and atmosphere, with beat-based electronic music and visuals. Guest artists Sutekh and Nate Boyce opened with Bands of Noise in Four Directions & All Combinations (after Sol LeWitt). Glitchy bursts of noise resounded from the speakers while the screens showed mesmerizing geometric animations that did indeed remind me a bit of Sol LeWitt (you can see some examples of his work in previous posts).

Later in the evening Luke Dahl returned for a solo electronic set. It began calmly with minor chords processed through rhythmic delays, backed by very urban, poster-like graphics. Behind this rhythmic motif, filtered percussion and bass sounds emerged, coalescing into a steady house pattern with stable harmony and undulating filtered timbres. At times the music seemed to reach back beyond house and invoke late-1970s and early-1980s disco elements. Just as it was easy to get lost listening to Wobbly’s environmentally-inspired soundscapes, I was able to become immersed in the rhythms and timbres of this particular style. The graphics showed close-ups of analog synthesizers – I am pretty sure at least some of the images were of a Minimoog. I did find out that these images were independent of the musical performance, so we were not looking at the instruments actually being used. I liked hearing Luke’s set in the context of the pieces earlier in the evening: the transition from the multi-channel soundscapes to glitchy noise and then to house-music and dance elements.

I was unfortunately not able to stay for the remaining sets. But overall it was a good and very full evening of music and technology.

Jean-Claude Risset at CCRMA

A few weeks ago I had the opportunity to see composer and computer-music pioneer Jean-Claude Risset present a concert of his work at CCRMA at Stanford. Risset has made numerous contributions to sound analysis and synthesis, notably his extension of Shepard tones to continuously shifting pitches. In the “Shepard-Risset glissando”, pitches ascend or descend and are continuously replaced, giving the illusion of a sound that ascends or descends forever. You can hear an example here, or via the video below.
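For readers curious how the illusion is constructed, here is a minimal sketch of the idea in pure Python – my own illustration, not Risset’s actual implementation, and all the parameter names and values are of my choosing. Several partials spaced evenly across a multi-octave range all glide downward together, each wrapping from the bottom of the range back to the top, while a raised-cosine loudness envelope over the pitch range fades each partial in at one edge and out at the other, hiding the wraps:

```python
import math

def shepard_risset(duration=4.0, sr=8000, n_partials=6,
                   f_low=40.0, octaves=6, rate=0.25):
    """Render a descending Shepard-Risset glissando as a list of samples.

    Each partial slides downward through `octaves` octaves at `rate`
    octaves per second; when it reaches the bottom it wraps to the top.
    A raised-cosine envelope over log-frequency fades partials in and
    out at the edges, which is what creates the endless-descent illusion.
    """
    samples = []
    phases = [0.0] * n_partials          # running phase per partial
    for n in range(int(duration * sr)):
        t = n / sr
        s = 0.0
        for k in range(n_partials):
            # position of partial k, in octaves above f_low, wrapping around
            pos = (k * octaves / n_partials - rate * t) % octaves
            freq = f_low * 2.0 ** pos
            # raised-cosine loudness envelope: silent at the range edges
            amp = 0.5 - 0.5 * math.cos(2 * math.pi * pos / octaves)
            phases[k] += 2 * math.pi * freq / sr
            s += amp * math.sin(phases[k])
        samples.append(s / n_partials)   # keep output within [-1, 1]
    return samples
```

Accumulating phase per sample (rather than computing `sin(2*pi*f*t)` directly) keeps each partial click-free as its frequency glides.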

Sadly, I arrived slightly late and missed much of the first piece, Duo for one pianist (1989-1992), featuring Risset himself on a Yamaha Disklavier piano. The duo comes from computer control of the piano simultaneous with the live human performer. It is not a simple computer-based accompaniment part, but rather a duo in which the actions of the live performer are interpreted by a program (written in an early version of Max) and inform the computer’s response in real time.

The remainder of the concert featured works for multichannel tape. The first of these pieces, Nuit (2010): from the tape of Otro (L’autre), featured eight channels with meticulous sound design and spatialization. The ethereal sounds at the start of the piece sounded like either frequency-modulation (FM) or very inharmonic additive synthesis (actually, FM can be represented as inharmonic partials in additive synthesis, so hearing both techniques makes sense). Amidst these sounds there emerged the deep voice of Nicholas Isherwood speaking in French, and then later in English as well – I specifically recalled the phrase “a shadow of magnitude.” Surrounding the vocal part was a diverse palette of sounds including low machine noise, hits of percussion and wind tones, a saxophone trill, tubular bells and piano glissandi. There were examples of Shepard-Risset glissandi towards the end of the piece.
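The parenthetical above can be made concrete. By a standard Bessel-function identity, a simple FM tone is exactly a sum of sideband partials at fc ± n·fm with amplitudes J_n(I); when fc/fm is not a simple ratio, those partials are inharmonic. The sketch below – my own illustration with arbitrarily chosen frequencies, not anything from the concert – renders the same sample both ways:

```python
import math

def bessel_j(n, x, terms=30):
    """Bessel function of the first kind J_n(x), via its power series."""
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2) ** (2 * m + n) for m in range(terms))

def fm_sample(t, fc, fm, index):
    """Simple FM: sin(2*pi*fc*t + index * sin(2*pi*fm*t))."""
    return math.sin(2 * math.pi * fc * t
                    + index * math.sin(2 * math.pi * fm * t))

def additive_sample(t, fc, fm, index, n_side=12):
    """The same tone as a sum of sidebands at fc + n*fm,
    n = -n_side..n_side, weighted by J_n(index); J_{-n} = (-1)^n J_n."""
    s = 0.0
    for n in range(-n_side, n_side + 1):
        amp = bessel_j(abs(n), index) * ((-1) ** abs(n) if n < 0 else 1)
        s += amp * math.sin(2 * math.pi * (fc + n * fm) * t)
    return s
```

With, say, fc = 200 Hz and fm = 130.81 Hz, the two functions agree to within the truncation error of the sideband sum, and the resulting partials (200, 330.81, 69.19, 461.62 Hz, …) form no harmonic series – hence the “very inharmonic additive synthesis” impression.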

The next piece, Kaleidophone (2010) for 16-channel tape, began with similar glissandi, providing an interesting sense of continuity. In this instance they were ascending, disappearing at the top of the range and re-emerging as low tones. Above this pattern a series of high harmonics emerged, like wispy clouds. The glissandi eventually switched to both upward and downward motions, and were subsequently followed by a series of more metallic tones. At one point a loud swell emerged, reminiscent of the distinctive THX announcement at the start of most movies, and then a series of percussive tones with discrete hits but continuous pitch changes, getting slower and slower. There was a series of piano-like sounds with odd intonations played more like a harp, followed by gong-like sounds reminiscent of gamelan music but with very artificial pitches and speeds. Industrial metallic sounds gave way to a section of tense orchestral music, and then long tones that subtly and gradually became more noisy and inharmonic. A sound like crackling fire seemed to channel the early electronic pieces of Iannis Xenakis. Highly comb-filtered environmental sounds gave way to eerie harmonies. The constantly changing sounds lulled the listener into a calm state before startling him or her with a burst of loud noise (perhaps the most intense moment in the entire concert). This was followed by machine noises set against a sparse pattern of wind pipes, and a large cloud of inharmonic partials concluded the piece. I had not looked in advance at the subtitle in the program, “Up, Keyboards, Percussion I, Percussion II, Winds, Water, Fire, Chorus, Eole” – but my experience of the piece clearly reflected the section titles from perception alone.

The final piece, Five Resonant Sound Spaces for 8-channel tape, began with orchestral sounds: bells and low brass, gongs (or tam-tams), timpani. The sounds seemed acoustic at first, but gradually more hints of electronics emerged: filtering, stretching and timbral decomposition. A low drone overlaid with shakers and tone swells actually reminded me eerily of one of my own pieces, Edge 0316, which was based on manipulations of ocean-wave recordings and a rainstick. This image was broken by a trombone swell and the emergence of higher-pitched instruments. The overall texture moved between more orchestral music and dream-like water electronics. A series of fast flute runs narrowed to a single pure-tone whistle, which then turned into something metallic and faded to silence. All at once, loud shakers and granular manipulations of piano sounds emerged – more specifically, prepared piano with manual plucking of strings inside the body and objects used to modify the sound. The sound of a large hall, perhaps a train station, with its long echoes of footsteps and bits of conversation, was “swept away” by complex electronic sounds and then melded together. A series of high ethereal sounds seemed to be almost but not quite ghostly voices, but eventually resolved into clear singing voices, both male and female. The voices gave way to dark sounds like gunfire, trains and a cacophony of bells – once again channeling the early electronic work of Xenakis. A breath sound from a flute was set against a diversity of synthesized sounds that covered a wide ground, before finally resolving to a guitar-like tone.


The concert was immediately followed by a presentation and discussion by Risset about his music. His presentation, which included material from a documentary film as well as live discussion, covered a range of topics, including using Max and the Disklavier to perform humanly impossible music with multiple tempi, and marrying pure sound synthesis with the tradition of musique concrète, with nods to pioneers in electronic music including Thaddeus Cahill, Leon Theremin, Edgard Varèse, and Max Mathews (who was present at the concert and talk). He also talked about the inspiration he draws from the sea and landscape near his home in Marseilles. The rocky shoreline and sounds from the water in the video reminded me a lot of coastal California, and made it even less surprising that we could come up with pieces with very similar sounds. He went on to describe his 1985 piece SUD in more detail, which used recordings of the sea as a germinal motive that was copied and shifted in various ways. Percussion lines were drawn from its contours, and he also made use of sounds of birds and insects, observing that crickets in Marseilles seem to sing on F sharp. I did have a chance to talk briefly with Risset after the reception about our common experience of composing music inspired by coastal landscapes.

Overall, this was an event I am glad I did not miss.