I'm a bass player, composer and performer of electronic music. Most often my work is described as "interactive computer music," although increasingly I have problems with that term. I use simple technologies to capture and transmit aspects of my performance gesture to a computer. There are sensors on my bow that capture touch, angle and distance from the bass, and sensors on the bass itself that capture touch as well as standard performance information such as pitch and loudness. The computer takes this data and, through real-time algorithms written in Max/MSP/Jitter (http://www.cycling74.com), uses it to affect streams of live digital signal processing.
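The flow from sensor data to processing parameters can be sketched in a few lines. Everything here is illustrative: the sensor ranges, parameter names and mapping curves are assumptions, standing in for what a Max/MSP patch would do with incoming control data.

```python
# Hypothetical sketch of gesture-to-DSP mapping: raw bow-sensor
# readings (pressure, angle, distance) are scaled into ranges that
# signal-processing parameters can use. All names and ranges here
# are invented for illustration.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a sensor reading into a DSP parameter range."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))            # clamp to the valid range
    return out_lo + t * (out_hi - out_lo)

def map_gesture(pressure, angle, distance):
    """Turn raw bow-sensor data into named processing parameters."""
    return {
        "delay_feedback": scale(pressure, 0, 1023, 0.0, 0.95),
        "filter_cutoff_hz": scale(angle, -90, 90, 200.0, 8000.0),
        "reverb_mix": scale(distance, 0, 100, 0.0, 1.0),
    }

params = map_gesture(pressure=512, angle=0, distance=25)
```

In a real patch the mappings are rarely this linear; the point is only that each gesture stream becomes a continuous control over some aspect of the live processing.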
The resulting sounds are largely transformations of the original bass performance, although there may be other things thrown into the mix as well. To capture the intimacy and instrumental qualities of my performance, I create new speakers that act more like acoustic instruments. These new auditory display devices work especially well when playing in small electronic chamber groups such as "interface," my collaborative ensemble with electronic violinist Dan Trueman, dancer/musician Tomie Hahn, and other like-minded performers (http://www.interfaceimprov.net).
I think of myself as an instrument designer. The systems I create are not so much "interactive" as "resonant" with my performance. They extend my playing and transport me to new and interesting sonic spaces. The creation of instrumental interfaces, digital signal processing algorithms, and new speaker technologies is a compositional act, the result a "composed instrument."
Following are some images, videos and writings from my work and collaborations. Much more can be found on my website (http://www.arts.rpi.edu/crb).
In 2001, I produced "r!g," an album of solo Sbass performance, through the Electronic Music Foundation (http://www.emf.org). Below are three pieces from that album: the title track "r!g"; "Quabbin," an improvisation using largely bowed string bass sounds, named after the Quabbin Reservoir in Massachusetts; and "Mechanique," a noisy, highly processed improvisation combining bass and machine sounds (music up top).
In the same year, Dan Trueman and I released the CD "./swank" on the cycling74 label. This album was based on live duo performances at Mobius Art Space in Boston. The video "Sbass," below, is an excerpt of the performance accompanying the solo bass track of the same name on our album (omitted due to space constraints ~ ed. ~ sorry).
After this time, I created the Sensor Bass and speakers I am currently using. Below are some images of the new "rig".
The development of our third generation of spherical speakers was done with Stephan Moore at RPI, based on original designs by Dan Trueman, his father Lawrence Trueman, and Perry Cook. We have a strong interest in investigating new approaches to speaker technologies and auditory display for immersive performance. Stephan's website details some of the history and development of these speakers (http://www.oddnoise.com/spheres.html), as does the website for our paper "Alternative Voices for Electronic Sound" by Trueman, Cook and Bahn (http://www.music.columbia.edu/%7Edan/alt_voices/index.html).
We have been using the spheres in both live performance and gallery installations.
Software developed in Max/MSP/jitter allows us to move sounds through the above grid of speakers and change their apparent size.
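The idea of moving a sound through the grid and changing its apparent size can be sketched as a simple amplitude-panning scheme. This is an assumption-laden illustration, not the actual interface software: each speaker's gain falls off with distance from a virtual source position, and a spread parameter widens or narrows that falloff, making the source seem small and focused or large and diffuse.

```python
import math

# Illustrative amplitude panning over a speaker grid: gains fall off
# with distance from a virtual source; "spread" controls apparent size.
# The grid layout and falloff curve are assumptions for this sketch.

def speaker_gains(source_xy, speaker_positions, spread=1.0):
    """Return one normalized gain per speaker for a virtual source."""
    weights = []
    for sx, sy in speaker_positions:
        d = math.hypot(source_xy[0] - sx, source_xy[1] - sy)
        weights.append(1.0 / (1.0 + (d / spread) ** 2))
    total = sum(weights)
    return [w / total for w in weights]      # gains sum to 1.0

grid = [(x, y) for x in range(3) for y in range(3)]    # 3x3 speaker grid
narrow = speaker_gains((1.0, 1.0), grid, spread=0.3)   # small, focused source
wide = speaker_gains((1.0, 1.0), grid, spread=3.0)     # large, diffuse source
```

Sweeping `source_xy` over time moves the sound across the grid; modulating `spread` grows or shrinks it.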
Another collaboration that is very important to me is the creation of new interfaces and pieces for dance. In this I have been working primarily with Tomie Hahn. Our two primary pieces are "Streams" and "pikapika."
"Streams" is an interactive sonic context for live performance. Wearing a sensing device developed by Bahn, Hahn freely navigates a virtual sonic geography consisting of synthetic sounds and non-linear poetry. Through her movement, she is able to negotiate and control all aspects of the sonic structure of this virtual soundscape. With each gesture "Streams" recalls bodies of water and land, technology, a flow of information, transmission, and liquid states. Through technology, the performance toys with the ephemeral quality of sound and the physical memory of time, sonic space, and sensory experience.
With a fifteen-year history of collaborative performance behind us, we found that the nature of the technologies employed in "Streams" fundamentally changed aspects of our collaboration regarding movement and sound composition.
Rather than structuring time, as in our previous dance/music collaborations, the conception of "Streams" was based on "composing the body." In this process, physical attributes of the dancer's movement vocabulary were analyzed to extract particularly salient and meaningful gestures. A custom sensor system, unique to this composition, was designed to effectively capture these gestures. A parameter-mapping system was devised allowing the dancer to freely navigate and layer sonic elements to construct a complex texture. The form and texture of the piece thus result from the dancer's movement itself.
The sonic palette employed in "Streams" draws from a combination of real-time digital signal processing, physical modeling synthesis algorithms and stored sound samples of text. At the heart of the computer performance system is a digital model of the filtration characteristics of the vocal tract; all other sounds are passed through this sonic model, evoking the image that, through her movements, the dancer "speaks" the music.
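One common way such a vocal-tract model is built is as a bank of resonant filters tuned to formant frequencies. The sketch below is a minimal illustration under that assumption; the filter design and formant values are generic textbook figures, not taken from the piece.

```python
import math

# Illustrative vocal-tract filtering: a series of two-pole resonators
# tuned to formant frequencies shapes whatever signal passes through,
# giving it a vowel-like color. Values are rough "ah" formants.

def resonator_coeffs(freq_hz, bandwidth_hz, sample_rate=44100):
    """Two-pole resonator coefficients for one formant."""
    r = math.exp(-math.pi * bandwidth_hz / sample_rate)
    theta = 2 * math.pi * freq_hz / sample_rate
    return (1 - r), -2 * r * math.cos(theta), r * r   # b0, a1, a2

def formant_filter(samples, formants):
    """Run the input through the resonators in series."""
    out = list(samples)
    for freq, bw in formants:
        b0, a1, a2 = resonator_coeffs(freq, bw)
        y1 = y2 = 0.0
        for i, x in enumerate(out):
            y = b0 * x - a1 * y1 - a2 * y2
            y2, y1 = y1, y
            out[i] = y
    return out

vowel_a = [(800, 80), (1150, 90), (2900, 120)]        # rough "ah" formants
shaped = formant_filter([1.0] + [0.0] * 63, vowel_a)  # impulse response
```

Feeding the live bass (or any other source) through such a filter bank, and moving the formants with gesture data, is one way a dancer's motion could be made to "speak."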
Other sound sources are drawn from the technique of physical modeling synthesis, which, when paired with physical movement sensors, provides a particularly rich and evocative sonic landscape. The dancer also provides data controlling the construction of an algorithmic, non-linear text drawing from words relating to dreams, "flow," and communication.
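Physical modeling synthesis can be illustrated with the classic Karplus-Strong plucked-string algorithm, chosen here only as a familiar example (the specific models used in "Streams" are not described above): a delay line seeded with noise is repeatedly averaged, damping high frequencies much as a vibrating string does.

```python
import random

# Karplus-Strong plucked string, as a minimal example of physical
# modeling synthesis: the delay-line length sets the pitch, and the
# averaging in the feedback loop acts as the string's natural damping.

def karplus_strong(freq_hz, n_samples, sample_rate=44100, seed=0):
    rng = random.Random(seed)
    period = int(sample_rate / freq_hz)              # delay-line length
    line = [rng.uniform(-1, 1) for _ in range(period)]
    out = []
    for _ in range(n_samples):
        out.append(line[0])
        line = line[1:] + [0.5 * (line[0] + line[1])]  # lowpass feedback
    return out

pluck = karplus_strong(220.0, 2000)   # an A at 220 Hz
```

Because the model has physically meaningful handles (pluck energy, damping, pitch), sensor data maps onto it in an immediately legible way, which is part of what makes physical models so evocative with movement sensors.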
The time-structure of "Streams" is not specified; the dancer is free to explore the sounds according to her feelings in the moment. However, because it draws on a basis of highly specified algorithmic compositional processes, neither is it simply improvised. The composition embodies physical mappings as they relate to this specific dancer's movement vocabulary and the sound-world of the composition, creating a highly personal and moving site-specific statement; "a personal sonic geography."
"Pikapika" is a character influenced by anime and manga, Japanese pop animation and comics. Pikapika embodies movements from bunraku (puppet theater), a movement vocabulary Tomie studied while learning nihon buyo (Japanese traditional dance) pieces derived from the puppet theater. The concept of the sonic punctuation of Pikapika's movements is drawn directly from the bunraku musical tradition; however, the actual sounds are not drawn from the bunraku musical vocabulary. Pikapika dons a new wireless interactive dance system (SSpeaPer) created by Curtis Bahn. SSpeaPer naturally locates and spatializes the electronic sounds to emanate from the speakers mounted on her body. As Pikapika moves, her gestural information is sent by radio to an interactive computer music system. The sounds are then broadcast back to her body, creating a new sort of audio "alias" for her character; a sonic costume.
I would like to thank Shankar Barua for his wonderful work with "The Idea," and for the opportunity to share some of my work with you. More can be found at http://www.arts.rpi.edu/crb.
Dr. Curtis Bahn
Director, iEAR Studios & Associate Professor of Computer Music Composition & Performance
Integrated Electronic Arts Program, Arts Dept, West Hall 114a
Rensselaer Polytechnic Institute, 110 8th Street
Troy, NY 12180-3590