Artistic Program
Co-Creativity in Music, Sound, and AI brings together composers, performers, researchers, media artists, and students to explore emerging forms of artistic practice shaped through interaction between humans and intelligent systems.
Across concerts, workshops, screenings, and discussions, the conference examines how artificial intelligence transforms improvisation, audiovisual creation, embodiment, listening, and collective creativity. The event reflects the interdisciplinary mission of the Experimental Acoustics Research Studio (EARS) and the launch of the EARS InterArts Lab at the Culver Center of the Arts.
By connecting artistic experimentation, research, pedagogy, and public engagement, the artistic program brings together internationally recognized artists and researchers from IRCAM, Stanford University, Georgia Tech, Virginia Tech, and other institutions alongside student projects developed at UC Riverside. We warmly welcome all participants, artists, students, and audiences to this shared environment of listening, experimentation, and creative inquiry.
Artistic Direction: Bradley Butterworth
Technical Director: Nikolay Maslov
Screening Curator: Nikolay Maslov
Schedule of Events
Concert 1
7:00 – 8:00 PM
The EARS Engineers ensemble presents two improvisational performances using custom AI-generated digital instruments developed by students. Guided by instructor Bradley Butterworth, students used Claude.ai to investigate and expand experimental instrument designs through coding, exploring intersections between AI, improvisation, and human-computer interaction.
A live performance using facial gesture tracking and AI-mediated musical control systems. Inspired by Allen Ginsberg’s Howl and the historical emergence of artificial intelligence, the work contrasts cultural chaos and technocratic systems through AI-generated sound and virtual analog synthesis.
Phantom of Utopia II: The Convergence explores a liminal space between reality and imagination through granular synthesis, AI motion tracking, live video processing, and embodied performance. The work investigates illusion and transformation through sound, gesture, and moving image.
Concert 2: IRCAM / REACH Collective
8:00 – 9:20 PM
A work exploring the dynamic relationship between acoustic instruments, electronics, and Somax2 through contrasts of density, resonance, and improvisational interaction.
A semi-improvised work for symphonic bass drum and generative electronics inspired by contemplative philosophy and ritual listening practices.
An improvisational collaboration exploring co-creative interaction with Somax2.
A concluding set of spontaneous improvisations and collaborative interactions.
Workshop 1: Embodied Calligraphy
2:30 – 4:00 PM
Led by Ka Hei Cheng, this interactive workshop explores movement, sound, and AI-assisted co-creation through Chinese Calligraphic Dance, motion tracking, improvisation, and shared audiovisual interaction.
Audiovisual Screening
3:00 – 4:00 PM
Curated by Nikolay Maslov, this screening presents a selection of audiovisual works submitted to the conference’s virtual exhibition Sound, Image & AI. The program highlights experimental approaches to co-creativity involving artificial intelligence, audiovisual systems, generative media, and interactive environments.
Workshop 2: Somax2
4:30 – 7:00 PM
Hosted by the IRCAM REACH collective, this workshop introduces Somax2 as a system for improvisation and composition. Participants will explore interaction strategies, live demonstrations, and collaborative improvisation with AI-driven systems.
Concert 3
8:00 – 9:20 PM
An audiovisual work using EEG brainwave data, AI classification, and generative synthesis to create music and visual environments from neural activity.
A live audiovisual environment exploring liminality, resonance, and technologically mediated sound through violin performance and live electronics.
An improvised interaction between performers and AI-generated musical agents using Somax2.
The conference concludes with an open collective improvisation celebrating experimentation, listening, spontaneity, and co-creativity between human performers and AI systems.
Artist Biographies
Assistant Professor of Teaching in the Music Industry Program at UC Riverside. He is a multi-instrumentalist, composer, audio engineer, music producer, and owner of Studio B Recording in Los Angeles. His work spans world music, jazz, chamber music, and experimental media.
Composer and audio researcher focused on high-density loudspeaker arrays, dynamic timbres, virtual drum machines, and performer-computer interaction. His software includes FFTease and LyonPotpourri. He teaches in the School of Performing Arts at Virginia Tech.
A composer and media artist whose practice integrates sound, artificial intelligence, generative systems, motion tracking, extended reality, and interactive audiovisual performance. Her works have been presented internationally at NIME, ICAD, SEAMUS, and ICMC.
A performance artist and researcher combining dance, ritual, light installation, and data. Rooted in ceremonial life with the Yaqui nation and in fields such as astrophysics and robotics, their work explores altered corporeal awareness and speculative relations.
Electronic musician and senior researcher at IRCAM, where he founded the Music Representation team. His work explores machine musicianship, creative AI, and machine improvisation. He is the recipient of a European Research Council Advanced Grant for the REACH project.
Italian musician, researcher, and improviser specializing in human-machine interaction. A doctoral candidate at Sorbonne Université and researcher in IRCAM’s Music Representation team, contributing to the ERC REACH project and Somax2 development.
Composer and pianist exploring mixed music, real-time interaction, instrumental virtuosity, and the production of spatio-temporal forms in listening space. She has received honors including the Giga-Hertz Prize and the ICMA Audience Award.
Composer and researcher whose work encompasses instrumental, electroacoustic, mixed, and audiovisual music. His research focuses on electronic music composition, improvisation, and the development of tools for mixed and electroacoustic creation.
An internationally active percussion soloist whose repertoire includes more than 400 solo and concerto works. He has collaborated closely with composers including Cage, Berio, Saariaho, Grisey, Donatoni, and Manoury.
Composer and researcher with a background in engineering, composition, and conducting. His work focuses on generative music, creative systems, mathematical models in computer-assisted composition, and listening strategies. He is currently associated with IRCAM and iReMus-Sorbonne.
Romanian composer exploring symbiotic relationships between music, video, and performance. His compositions have been presented internationally by Ensemble Dal Niente, ELISION Ensemble, and JACK Quartet. He is a lecturer at Stanford University’s CCRMA.
Digital media artist working across experimental video, multimedia performance, installation, and interactive art. Her practice explores symbolic and expressive uses of digital visual effects in collaboration with musicians and programmers.
AI researcher focusing on large language models, machine learning, and generative models for music and sound synthesis. He has worked at Stanford University’s Department of Electrical Engineering and AI Lab.
Mixed-realities director and researcher working at the intersection of art, neuroscience, immersive environments, and theatre. Associate professor at the National University of Theatre and Film “I.L. Caragiale” in Bucharest.
Intermedia composer-performer whose work integrates multi-instrumental performance, electroacoustic composition, live-processed visual media, and immersive audiovisual environments. Her work examines technology, affect, identity, and culture.
Associate Professor and Co-P.I. of the Creative Music Technology Lab at the Georgia Institute of Technology. His creative and research work focuses on improvisation, jazz performance, performer-computer interaction, and live computer music.
Guitarist, composer, and educator bridging performance, technology, sound design, and creative research. His practice combines jazz, electroacoustic, and popular music traditions with interactive music systems and algorithmic composition.
About IRCAM REACH Collective and Somax2
The IRCAM REACH collective brings together artists and researchers exploring co-creativity between humans and intelligent systems through improvisation, composition, and live performance. Central to this work is Somax2, an AI-driven improvisation system that listens, reacts, and interacts with performers in real time. Developed at IRCAM, Somax2 functions as a co-creative musical partner capable of generating responsive musical behaviors while remaining deeply connected to live performers and musical corpora. The system has become an important platform for exploring new forms of human–AI collaboration in contemporary music.
This project is made possible with the support of the UC Riverside Center for Ideas and Society. https://ideasandsociety.ucr.edu