We close out the year with one final gig report: my performance at the CCRMA Transitions concert at Stanford University’s computer-music center. The two-night event took place in the courtyard of CCRMA’s building, with a large audience beneath the stars, surrounded by an immersive 24-channel speaker array.
I brought my piece Realignments, which I had originally composed in 2011 for a 12-channel radial speaker and eight-channel hall system at CNMAT as part of my Regents’ Lecturer Concert there. Performing this version outdoors in front of a large audience, clad in a provocative costume, was quite an experience, and you can see the full performance in this video:
The Transitions version of the piece was remixed to use the eight main channels of the speaker array at CCRMA. Once again, the iPad was used to move around clouds of additive-synthesis partials and trigger point sources, which were directed at different speakers of the array. The overall effect of the harmonies, sounds and immersive sound system was otherworldly. I chose this particular costume to reflect that, although I had also used it a couple of weeks earlier in my duo “Pitta of the Mind” with poet Maw Shein Win at this year’s Transbay Skronkathon. I am planning more performances with this character (but not the same costume) in the coming year.
Today we look back at the first of my two performances in New York, the appropriately named “AvantElectroExpectroExtravaganza” with a diverse collection of experimental electronic musicians. It was a small and intimate space nestled in a building in an industrial section of Brooklyn, east of Williamsburg. But we had a decently large stage, good sound reinforcement, and a small but attentive audience. And the industrial setting was conducive to both playing and enjoying art and music.
The performance began with a procession by members of the SK Orchestra improvising to the sampled phrases “Hi” and “How are you doing”. For those who are not familiar with the Casio SK-1, it was a small sampling keyboard from the mid 1980s that allowed users to record and manipulate live sounds in addition to its standard consumer-keyboard features. The low fidelity and ease of use now make them coveted items for many experimental electronic musicians. There were no fewer than five of them in the ensemble this evening.
[SK Orchestra. (Click image to enlarge.)]
As they sat down for the main part of the set, the sampled sounds grew more fragmented and processed, mixed with lots of dynamic swells and analog-filter-like sounds. Combined with a wide array of effects, the sounds were quite thick, ranging from harmonic pads to noise to moments that could best be described as “space jam music.” I was particularly watching the articulation with a Morley pedal and how it timbrally and rhythmically informed the sound. Taking advantage of the live-sampling capabilities of the SK-1, they resampled the output from the amps and PA and fed that back into the performance for a slow-motion feedback loop that grew ever noisier and more forceful. The rhythms grew steadier over time, with a driving beat set against the phrase “Holy Jesus!”, and eventually moved into a steady bass rhythm and pattern.
Rhythm was the main theme of the next set featuring Loop B. His theatrical and technically adept performance featured tight rhythmic patterns on found metal objects with playful choreography and beat-based electronic accompaniment. In the first piece, he performed on a large piece of metal salvaged from a vehicle with syncopated rhythms set against an electronic track. This was followed by a piece in which he donned a metal helmet, which he played against more Latin accompaniment.
[Loop B. (Click image to enlarge.)]
Other metal instruments included a wearable tube, as featured in this video clip.
And a return to the original car metal, but with a power drill.
The rhythmic character of the different pieces seemed to alternate between driving electronica and Latin elements, but this was secondary to the spectacle of the live playing. It was a unique and well-executed performance, and fun to witness. It would be interesting to hear what he could do in an ensemble setting with musicians with an equally tight sense of rhythm.
Loop B’s energetic and dynamic performance was followed by a very contrasting set by Badmitten (aka Damien Olsen). It began with eerie, ambient sounds that soon coalesced around watery elements, giving me the sense of sitting near an alien seashore. Pitch-bent tones were layered on top of this, and eventually noises and glitches that deliberately interrupted the ambience. A low-frequency rhythm emerged along with a slow bass line. It seemed that the music was moving from the sea to a forest.
[Badmitten. (Click image to enlarge.)]
The sounds were quite full and luscious, with guitar chords and synth pads. Over time it became darker, with modulated filter sounds and strong hits. Seemingly out of nowhere, a voice speaking in French emerged (which amused French speakers in the audience). The various sounds coalesced into a more steady monotone rhythm with minor harmonies, which started to come apart and become more chaotic. The set concluded with an electric piano solo.
It was then time to take the stage. Fortunately, we had quite a bit of time and space to set up before the show, so most everything was in place and I was able to get underway quickly after checking that the local wi-fi network between the iPad and the MacBook Pro (running Open Sound World) was working. I opened with a new version of the piece Spin Cycle / Control Freak that used the iPad in lieu of the Wacom Tablet from the original version 11 years prior. It worked quite well considering the limitations of the interface – and indeed the more rhythmic elements were easier to do in this case. This was then followed by a stereo version of the piece I composed for eight-channel surround and the dodecahedron speaker at CNMAT back in March. The timbres and expression still worked well, but I think it loses something without the advanced sound spatialization.
[Click image to enlarge.]
Perhaps the best piece of the set was the one with the simplest technology: I connected the output of the Wicks Looper to the input of the Korg Monotron for a pocket-sized but sonically intense improvisation, which you can see in the video below:
I concluded with a performance of Charmer:Firmament from my 2005 CD Aquatic.
The final set of the evening was Doom Trumpet, which did not feature a trumpet. Rather, artist David Smith performed improvised music with guitar and effects set against a video compiled from obscure science-fiction movies. I found myself focused on the visuals, and particularly liked how he opened many of the clips with a highly processed version of the MGM lion. Musically, he layered samples and loops with live guitar performance through a variety of effects. The combination of the music and visuals (which seemed to date from the late 1960s through early 1980s, based on costumes and hairstyles) kept things appropriately dislocated from the source material and more abstract.
[Doom Trumpet. (Click image to enlarge).]
Overall, it was a great night of music, which I was glad to be a part of. A few participants will be part of my next New York show at TheatreLab this coming Saturday, but I certainly hope to cross paths with everyone at concerts in the future.
The theme of this week’s Photo Hunt is digital. Rather than simply use a digital photo – which could be any photo ever taken of Luna – I chose a couple of images that demonstrate the unique opportunities of the medium. A digital photo is really just a stream of numbers, not unlike digital audio, and can be processed in countless ways using digital signal processing or applying other mathematical functions.
For a piece I originally did in 2007, I took one of Luna’s adoption photos from Santa Cruz County Animal Services and applied an algorithm that overlaid these colored bands, as shown above. The color bands were generated using a set of hastily chosen trigonometric and hyperbolic functions applied to the timeline of the animation sequence. These photos are stills from the full animation.
I did these using image and video extensions to Open Sound World – one nice feature of that work was that I could use the same functions for both audio and video, and “see” what a particular audio-processing algorithm looked like when applied to an image. And I would probably use the Processing environment for future visual work, perhaps in conjunction with OSW.
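The exact functions are lost to that hasty experimentation, but the general idea can be sketched in Python. All constants and function choices below are illustrative, not the ones from the 2007 piece:

```python
import math

def band_color(x, y, t):
    """Hypothetical color-band generator: trigonometric and hyperbolic
    functions of pixel position (x, y) and animation time t (seconds).
    Returns an (r, g, b) tuple with components in 0..255."""
    r = 0.5 + 0.5 * math.sin(0.05 * x + 2.0 * t)
    g = 0.5 + 0.5 * math.cos(0.03 * y - 1.5 * t)
    b = 0.5 + 0.5 * math.tanh(math.sin(0.02 * (x + y) + t))
    return tuple(int(255 * c) for c in (r, g, b))

def overlay(pixel, t, x, y, mix=0.4):
    """Blend a source pixel with the generated band color.
    `mix` controls how strongly the bands cover the photo."""
    band = band_color(x, y, t)
    return tuple(int((1 - mix) * p + mix * b) for p, b in zip(pixel, band))
```

Sweeping `t` over the animation timeline is what makes the bands drift across the still photo.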
Weekend Cat Blogging #309 and Carnival of the Cats are both being hosted by Billy SweetFeets this weekend. Perhaps Luna’s animation could be part of one of the dance videos they often feature.
A special note this week. Our friend Judi at Judi’s Mind over Matter (home of Jules and Vincent) has information on how to help animals affected by the storms and tornadoes in the southeastern US. They live in Alabama, not far from the area that was hit hardest by the tornadoes. We’re glad they’re safe and able to provide this information for those who would like to help.
Several pieces are going to feature the iPad (yes, the old pre-March 2 version) running TouchOSC controlling Open Sound World on the Macbook. I worked on several new control configurations after trying out some of the sound elements I will be working with. Of course, I have the monome as well, mostly to control sample-looping sections of various pieces.
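Under the hood, TouchOSC sends plain OSC packets to OSW over UDP on the wi-fi network. As a sketch of the wire format – floats only, with a made-up fader address, and by no means a full OSC implementation:

```python
import struct

def osc_message(address, *floats):
    """Build a minimal OSC packet of the kind TouchOSC sends.
    OSC strings are NUL-terminated and padded to a 4-byte boundary;
    float arguments are 32-bit big-endian. The address below is a
    hypothetical example, not an actual TouchOSC layout."""
    def pad(b):
        # always at least one NUL, total length a multiple of 4
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode())
    msg += pad(("," + "f" * len(floats)).encode())  # type-tag string
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

packet = osc_message("/1/fader1", 0.5)  # one fader at half travel
```

A real setup would hand `packet` to a UDP socket aimed at the laptop’s port; on the receiving side, OSW maps the address pattern to a synthesis parameter.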
One of the main reasons for spending time on site is to work directly with the sound system, which features an 8-channel surround speaker configuration. Below are five of the eight speakers.
One of the new pieces is designed specifically for this space – and to also utilize a 12-channel dodecahedron speaker developed at CNMAT. I will also be adapting older pieces and performance elements for the space, including a multichannel version of Charmer:Firmament. In addition to the multichannel work, I made changes to the iPad control based on the experience from last Saturday’s performance at Rooz Cafe in Oakland. It is now far more expressive and closer to the original.
I also broke out the newly acquired Wicks Looper on the sound system. It sounded great!
The performance information (yet again) is below.
Friday, March 4, 8PM
Center For New Music and Audio Technologies (CNMAT)
1750 Arch St., Berkeley, CA
CNMAT and the UC Berkeley Regents’ Lecturer program present an evening of music by Amar Chaudhary.
The concert will feature a variety of new and existing pieces based on Amar’s deep experience and dual identity in technology and the arts. He draws upon such diverse sources as jazz standards, Indian music, film scores and his past research work, notably the Open Sound World environment for real-time music applications. The program includes performances with instruments on laptop, iPhone and iPad, acoustic grand piano, do-it-yourself analog electronics, and Indian and Chinese folk instruments. He will also premiere a new piece that utilizes CNMAT’s unique sound-spatialization resources.
The concert will include a guest appearance by my friend and frequent collaborator Polly Moller. We will be doing a duo with Polly on flutes and myself on Smule Ocarina and other wind-inspired software instruments – I call it “Real Flutes Versus Fake Flutes.”
The Regents’ Lecturer series features several research and technical talks in addition to this concert. Visit http://www.cnmat.berkeley.edu for more information.
Here Luna poses with TouchOSC on the iPad, which is becoming one of the main control surfaces I will be using to control Open Sound World. Last night I was building the synthesis infrastructure for the new piece, a combination of drum sampling and spatialized additive synthesis – at least four separate additive synthesis models that are algorithmically generated based on input from the iPad. Against this will be electronic drum sounds and an Afro-Cuban rhythm detail. I really won’t know the exact shape of this piece until I work with CNMAT’s speaker array.
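As a rough sketch of what one of these additive models involves – illustrative only, the actual OSW patch is considerably more elaborate – here is a bank of harmonic partials whose spectral envelope and level are shaped by a 2-D control input standing in for the iPad surface:

```python
import math

SR = 44100  # sample rate in Hz

def additive_frame(freq, partials, dur, x=0.5, y=0.5):
    """Minimal additive-synthesis sketch (not the actual OSW patch).
    Sums `partials` harmonics of `freq` for `dur` seconds. The 2-D
    control (x, y) is a stand-in for a touch position: x tilts the
    spectral rolloff (brightness), y scales the overall level."""
    out = []
    for i in range(int(SR * dur)):
        t = i / SR
        s = 0.0
        for k in range(1, partials + 1):
            amp = (1.0 / k) ** (1.0 + 2.0 * x)  # steeper rolloff as x grows
            s += amp * math.sin(2 * math.pi * k * freq * t)
        out.append(y * s / partials)  # normalize and apply level
    return out
```

In the real piece, several of these models run at once and the partial structure itself is generated algorithmically, but the control-to-spectrum mapping is the essential idea.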
I also learned from the Saturday’s performance in Oakland that I will need to refine the control on TouchOSC for the new implementation of my piece Charmer:Firmament. It was very well received, with descriptions like “beautiful” and “meditative”, but it was difficult to control compared to the Wacom graphics tablet. I will try a different mix of controls on the iPad to see if it works better.
I have been busily preparing this weekend for the first of my UC Berkeley Regents’ Lecturer presentations:
Open Sound World (OSW) is a scalable, extensible programming environment that allows musicians, sound designers and researchers to process sound in response to expressive real-time control. This talk will provide an overview of OSW, past development and future directions, and then focus on the parallel processing architecture. Early in the development of OSW in late 1999 and early 2000, we made a conscious decision to support parallel processing as affordable multiprocessor systems were coming on the market. We implemented a simple, scalable dynamic system in which workers take on tasks called “activation expressions” on a first-come, first-served basis, with facilities for ordering and prioritization to deal with real-time constraints and the synchronicity of audio streams. In this presentation, we will review a simple musical example and demonstrate performance benefits and limitations of scaling to small multi-core systems. The talk will conclude with a discussion of how current research directions in parallel computing can be applied to this system to solve past challenges and scale to much larger systems.
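The worker model in the abstract can be sketched in a few lines. This is an illustrative Python analogue, not OSW’s actual implementation: workers pull prioritized “activation expressions” (here, plain callables) from a shared queue on a first-come, first-served basis:

```python
import threading
import queue

class ActivationScheduler:
    """Sketch of a first-come, first-served worker pool with
    prioritization. Real-time tasks would use a lower (more urgent)
    priority number; a sequence counter breaks priority ties so that
    arrival order is preserved."""
    def __init__(self, n_workers=2):
        self.tasks = queue.PriorityQueue()
        self.seq = 0  # tiebreaker: preserves first-come order
        for _ in range(n_workers):
            threading.Thread(target=self._run, daemon=True).start()

    def activate(self, fn, priority=10):
        """Enqueue an 'activation expression' for the next free worker."""
        self.seq += 1
        self.tasks.put((priority, self.seq, fn))

    def _run(self):
        while True:
            _, _, fn = self.tasks.get()
            fn()
            self.tasks.task_done()
```

In a real-time audio setting, the interesting part is exactly what this sketch glosses over: ordering constraints between tasks that share an audio stream, which is where the facilities mentioned in the abstract come in.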
You can find out more details, including location for those in the Bay Area who may be interested in attending, at the official announcement site.
Much of the time for a presentation is spent making PowerPoint slides:
With slides out of the way, I can now turn to the more fun part, the short demos. This gives me an opportunity to work with TouchOSC for the iPad as a method for controlling OSW patches. We will see how that turns out later.
The first night was the Touch the Gear Expo in which the public is invited to try out the musical instruments and equipment of a number of artists from the festival as well as other Outsound events. It was a respectably sized turnout, with a large number of visitors.
[Click to enlarge]
I brought the venerable Wacom Graphics Tablet and PC laptop running Open Sound World for people to play.
[click to enlarge]
It often gets attention during performances, and did so at this hands-on event as well. Because it uses familiar gestures in a visually intuitive way, many people were able to start experimenting right away, making music with phrasing and articulation. I provided a simple example using FM synthesis as well as a chance for people to play a phrase from my piece Charmer:Firmament (which uses additive synthesis).
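For those curious what a minimal FM example looks like, here is a generic two-operator sketch – the parameter values are illustrative, not those of the patch I demonstrated:

```python
import math

SR = 44100  # sample rate in Hz

def fm_tone(carrier=440.0, ratio=2.0, index=3.0, dur=0.01):
    """Two-operator FM sketch: a modulator at carrier*ratio deviates
    the carrier's phase by up to `index` radians. Raising `index`
    adds sidebands (brightness); `ratio` sets whether they fall on
    harmonic or inharmonic frequencies."""
    out = []
    for i in range(int(SR * dur)):
        t = i / SR
        mod = math.sin(2 * math.pi * carrier * ratio * t)
        out.append(math.sin(2 * math.pi * carrier * t + index * mod))
    return out
```

Mapping tablet pressure or position to `index` is what makes the interface so immediately expressive: a single gesture sweeps the timbre from a pure tone to a bright, clangorous one.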
Tom Duff also demonstrated his own custom software in combination with a controller, in this case an M-Audio drum-pad array. One thing we observed in his demo was how much computing power is available on a contemporary machine, like a Macbook Pro, and that for many live electronic-music applications there is more than enough. But somehow, many applications seem to grow to fit the available space, especially in our domain.
There were several demonstrations that were decidedly more low-tech, involving minimal or in some cases no electronics. Steven Baker presented a collection of resonant dustbins with contact microphones.
[Photograph by Jennifer Chu. Click to enlarge.]
The dustbins were arranged in such a way as to allow two performers to face each other for interactive performance.
I enjoyed getting to try out the hand-cranked instruments of the Crank Ensemble:
[Click to enlarge]
Basically, one turns the crank which creates a mechanical loop of sounds based on the particular instrument’s materials. I have seen the Crank Ensemble perform on a few occasions, but never got to play one of the instruments myself.
The body of the instrument is a cardboard box, and one plays it by running a comb over the various metal and plastic elements attached to the box. I spent a few minutes exploring the sounds and textures by running different combs over the elements, including other combs. It was very playable and expressive – I could definitely make use of one of these!
Another variation on the theme of amplified acoustic objects was Cheryl Leonard’s demonstration in which one could play sand, water, wood, and other natural elements:
Returning now to electronics, and a different kind of “elemental music.” CJ Borosque presented her use of analog effects boxes with no formal input. Analog circuits do have some low-level noise, which is what she uses as a source for feedback, resonance, distortion and other effects. Ferrara Brain Pan demonstrated an analog oscillator that can handle very low frequencies (i.e., less than 1Hz!).
There were also several other live-performance electronics demonstrations. Bob Marsh presented the Alesis Air Synth (no longer in production). Performers pass their hand over the domed surface to manipulate sounds. Similar to the tablet, this is a very intuitive and rich interface. Rick Walker demonstrated a powerful new instrument for recording and controlling multiple live loops, with the ability to manipulate rhythm and meter. I look forward to hearing him use it in a full performance soon. Thomas Dimuzio showed a full rig for live electronics performance, which I believe he used at the electronics-oriented concert the following week.
OK, so I have been delinquent in reviewing some of my own recent shows. I was hoping to find photos, but so far I have not found any. It does happen once in a while, even in this hyper-photographic society. In fairness, I have taken photos at many shows I attend, only to find out they were not good enough to post. So, we will just go ahead and use our visual imagination.
Two weeks ago, on the day I returned from China, I participated in Pmocatat Ensemble. From the official announcement:
The Pmocatat Ensemble records the sounds of their instruments onto various forms of consumer-ready media. (Pmocatat stands for “prerecorded music on cds and tapes and things”.) Then, they improvise using only the recorded media. Several different pieces will explore both the different arrangements of recorded instruments and the sound modulation possibilities of the different recording media.
In my case, my pre-recorded media was digital audio played on an iPhone. I used recordings of my Indian and Chinese folk instruments, and I “played” by using the start, stop, forward, rewind, and scrubbing operations.
Other members included Matt Davignon, James Goode, John Hanes, Suki O’Kane, Sarah Stiles, Rent Romus, C. P. Wilsea and Michael Zelner.
Matt Davignon, who organized the ensemble, had composed some pieces which provided much-needed structure and avoided a “mush” of pre-recorded sound. Some portions were solos or duos, with various other members of the ensemble coming in and out according to cues. This allowed for quite a variety of texture and musicianship. I definitely hope the Pmocatat Ensemble continues to perform.
The following Monday, March 16, I curated a set at the Ivy Room Experimental/Improv Hootenanny with Polly Moller and Michael Zbyszynski. I know Polly and Michael from completely different contexts, so it was interesting to hear how that combination would work. Michael played baritone sax and Polly performed new words as well as flute and finger cymbals. I played my newly acquired Chinese instruments, the looping Open Sound World patch I often use, and a Korg Kaoss Pad.
Musically, it was one of those sets that just worked. I was able to sample and loop Polly’s extended flute techniques into binary and syncopated rhythms, over which the trio could improvise. Periodically, I changed the loops, sometimes purposely to something arrhythmic to provide breathing space. Michael’s baritone sax filled out the lower register against the flute and percussion.
We got some good reviews from our friends in the Bay Area New Music community. The following comments are from Suki O’Kane (with whom I played in the Pmocatat ensemble):
Amar had been dovetailing, in true hoot fashion, into Slusser using a small
digitally-controlled, u know, like analog digit as in finger, that totally
appeared to me to be the big red shiny candy button of the outer space ren.
The important part is that he was artful and listening, and then artful
some more. Polly Moller on vocals and flute, text and tones, which had a
brittle energy and a persistent comet trail of danger.
The “big red shiny candy button of the outer space ren” was undoubtedly the Korg mini-Kaos Pad.
Highway 11 in Connecticut is a north-south freeway connecting a major route from Hartford to, well, nowhere. So one moment, you’re happily traveling south on a nice country highway, and then the next moment, you better exit before it turns into a large dirt track and ditch. Or at least that’s the impression I get, having never been there.
It’s quite dramatic, as can be seen in these aerial photos from Greg Amy (we saw a few of his photos before when visiting Yale and New Haven, CT).
It kinda looks like someone just stopped building the highway one day, and forgot to come back and finish. The story, as described on Kurumi’s website and other sources, is that the project simply ran out of funding, and then ran into opposition, though it sounds like plans are now in the works to complete highway 11 to the New London area.
However, the details of CT 11 aren’t really the focus of this article, but rather it serves as a metaphor for the many unfinished projects here at CatSynth. These include:
Finishing my album 2 1/2. There are a few tracks left from this project last February that need to be replaced before releasing the album. I still think it’s doable by late November, but so far I haven’t been able to work much on it during this period of “free time.” Technical problems with my “studio PC laptop” provide at least one excuse.
Although I have been doing work all along on Open Sound World, mostly to support my own music, it’s been quite a while since I have done a full-blown release of the software. It’s hard to feel motivated when most of the feedback reads like this. However, the core software (minus the old user interface) is really solid and musically useful, and I do plan to announce a new direction for the project “real soon.”
I need to do some revisions to my professional/artistic website. At the very least I need to get the performance schedule updated – fortunately, it is already up at MySpace. The goal is to bring it more in harmony with CatSynth and rest of my websites.
I purchased one of the last Kittenettik Fyrall kits from Ciat Lonbarde, but have yet to assemble it. I guess I’ve been waiting to find the right “space”, both literally and figuratively, to do this. If I get on it soon, I might have it done in time for Woodstockhausen.
And of course there are several large articles waiting to be completed and published here at CatSynth, particularly CD reviews, film discussions, and travelogues.
But then again, maybe it’s not so bad that I’m spending time looking for employment.
First, I have to remind myself to ABC: Always Bring a Camera. I missed several photo opportunities before and during our rehearsal in San Francisco on Wednesday. There were some great shots on the new Central Freeway terminal ramp. And then the “kitty moments” during the rehearsal with Polly Moller and John Moreira. I did snap this cell-phone pic of John Moreira's cat Crescenda rolling around among our cue sheets and amps. She and her fellow cat Pearl joined us several times during the rehearsal, but Crescenda's little act stole the show.
Musically, I had a minimal setup – a subset of what I brought to the Skronkathon two weeks ago – just the MacBook, the E-MU 0202 | USB and a MIDI keyboard. The Mac was running the new script-based Open Sound World to process live guitar input. The processing worked quite well, I think, with several wavetables, ring modulation, and a rather nasty little FM algorithm (it's a lot like those distortion-modulation “sound mangler” pedals). Both the guitar and processing needed to fit within pieces with voice, flute and existing electronic material.
The one concern was the frequent OSW crashes – it wasn't a huge problem during the rehearsal because the system can reset itself very quickly (far more quickly than the older UI-centric version), with only a few seconds of dead time. But still, that's not cool. I suspected something related to the MIDI input handling. Fortunately, last night I was able to track down the crashes. They were indeed in the MIDI handling – some issues exposed by the multiprocessing on the Core2 Duo. They were easily found and fixed by playing the patch with a lot of MIDI control, with the laptop and keyboard on the coffee table. Actually, I made some interesting lo-fi music with the built-in mic and speaker and feedback while testing and debugging. This will probably form the basis of my next piece.