CO-CREATIVITY IN MUSIC, SOUND, AND AI

Improvisation, Interaction, Composition

June 5–6, 2026

Artistic Program

Concerts, Screenings, and Workshops at the Culver Center


Co-Creativity in Music, Sound, and AI brings together composers, performers, researchers, media artists, and students to explore emerging forms of artistic practice shaped through interaction between humans and intelligent systems.

Across concerts, workshops, screenings, and discussions, the conference examines how artificial intelligence transforms improvisation, audiovisual creation, embodiment, listening, and collective creativity. The event reflects the interdisciplinary mission of the Experimental Acoustics Research Studio (EARS) and the launch of the EARS InterArts Lab at the Culver Center of the Arts.

By connecting artistic experimentation, research, pedagogy, and public engagement, the artistic program brings together internationally recognized artists and researchers from IRCAM, Stanford University, Georgia Tech, Virginia Tech, and other institutions alongside student projects developed at UC Riverside. We warmly welcome all participants, artists, students, and audiences to this shared environment of listening, experimentation, and creative inquiry.

Paulo C. Chagas
Artistic Director
Bradley Butterworth
Technical Director
Nikolay Maslov
Screening Curator

Schedule of Events

Friday, June 5, 2026

Concert 1

7:00 – 8:00 PM
EARS Engineers (2026) – 25 min
Bradley Butterworth, director | Performers: Rafael Avila II, Ariana Mares, Sonnet Swire, Taylor Taxdal, Ashley Wu

The EARS Engineers ensemble presents two improvisational performances on custom digital instruments developed by students with AI assistance. Guided by instructor Bradley Butterworth, students used Claude.ai to investigate and expand experimental instrument designs through coding, exploring intersections between AI, improvisation, and human-computer interaction.

Moloch whose mind is pure machinery! (2025) – 8 min
Eric Lyon – composition, performance

A live performance using facial gesture tracking and AI-mediated musical control systems. Inspired by Allen Ginsberg’s Howl and the historical emergence of artificial intelligence, the work contrasts cultural chaos and technocratic systems through AI-generated sound and virtual analog synthesis.

The Convergence (2024–26) – 11 min
Ka Hei Cheng – composition, performance | Angel Poveda Yánez – dance

Phantom of Utopia II: The Convergence explores a liminal space between reality and imagination through granular synthesis, AI motion tracking, live video processing, and embodied performance. The work investigates illusion and transformation through sound, gesture, and moving image.

Concert 2: IRCAM REACH Collective

8:00 – 9:20 PM
Taideji (2024) – 15 min
Lara Morciano – composition, piano | José-Miguel Fernández – AI-Agents Somax2Collider, immersive electronics | Thierry Miroglio – percussion

A work exploring the dynamic relationship between acoustic instruments, electronics, and Somax2 through contrasts of density, resonance, and improvisational interaction.

Six Spaces / Resonant Bodies (2026) – 15 min
Mikhail Malt – composition, AI-Agents Somax2 generative electronics | Thierry Miroglio – percussion

A semi-improvised work for symphonic bass drum and generative electronics inspired by contemplative philosophy and ritual listening practices.

REACHing Jeff (2026) – 15 min
Jeff Albert – trombone | REACH collective – AI-Agents Somax2 generative electronics

An improvisational collaboration exploring co-creative interaction with Somax2.

Surprise du Chef (2026)
REACH collective and guests

A concluding set of spontaneous improvisations and collaborative interactions.

Saturday, June 6, 2026

Workshop 1: Embodied Calligraphy

2:30 – 4:00 PM

Led by Ka Hei Cheng, this interactive workshop explores movement, sound, and AI-assisted co-creation through Chinese Calligraphic Dance, motion tracking, improvisation, and shared audiovisual interaction.

Audiovisual Screening

3:00 – 4:00 PM

Curated by Nikolay Maslov, this screening presents a selection of audiovisual works submitted to the conference’s virtual exhibition Sound, Image & AI. The program highlights experimental approaches to co-creativity involving artificial intelligence, audiovisual systems, generative media, and interactive environments.

Workshop 2: Somax2

4:30 – 7:00 PM

Hosted by the IRCAM REACH collective, this workshop introduces Somax2 as a system for improvisation and composition. Participants will explore interaction strategies, live demonstrations, and collaborative improvisation with AI-driven systems.

Concert 3

8:00 – 9:20 PM
Nexus (2025–26) – 7 min
Constantin Basica, Simona Fitcal, Prateek Verma, Alexandru Berceanu

An audiovisual work using EEG brainwave data, AI classification, and generative synthesis to create music and visual environments from neural activity.

Resonant Thresholds (2024) – 7 min
Cecilia Suhr – composition, performance

A live audiovisual environment exploring liminality, resonance, and technologically mediated sound through violin performance and live electronics.

Mirage (2026) – 8 min
Jeff Albert & Anthony Cammarota – composition, performance

An improvised interaction between performers and AI-generated musical agents using Somax2.

Somax2 Collective Improvisation
IRCAM REACH collective, workshop participants, and guest artists

The conference concludes with an open collective improvisation celebrating experimentation, listening, spontaneity, and co-creativity between human performers and AI systems.


Artist Biographies

Bradley Butterworth

Assistant Professor of Teaching in the Music Industry Program at UC Riverside. He is a multi-instrumentalist, composer, audio engineer, music producer, and owner of Studio B Recording in Los Angeles. His work spans world music, jazz, chamber music, and experimental media.

Eric Lyon

Composer and audio researcher focused on high-density loudspeaker arrays, dynamic timbres, virtual drum machines, and performer-computer interaction. His software includes FFTease and LyonPotpourri. He teaches in the School of Performing Arts at Virginia Tech.

Ka Hei Cheng

A composer and media artist whose practice integrates sound, artificial intelligence, generative systems, motion tracking, extended reality, and interactive audiovisual performance. Her works have been presented internationally at NIME, ICAD, SEAMUS, and ICMC.

Angel Poveda Yánez

A performance artist and researcher combining dance, ritual, light installation, and data. Their work, rooted in ceremonial life with the Yaqui nation and informed by fields such as astrophysics and robotics, explores altered corporeal awareness and speculative relations.

Gérard Assayag

Electronic musician and senior researcher at IRCAM, where he founded the Music Representation team. His work explores machine musicianship, creative AI, and machine improvisation. He is the recipient of a European Research Council Advanced Grant for the REACH project.

Marco Fiorini

Italian musician, researcher, and improviser specializing in human-machine interaction. A doctoral candidate at Sorbonne Université and researcher in IRCAM’s Music Representation team, he contributes to the ERC REACH project and the development of Somax2.

Lara Morciano

Composer and pianist exploring mixed music, real-time interaction, instrumental virtuosity, and the production of spatio-temporal forms in listening space. She has received honors including the Giga-Hertz Prize and the ICMA Audience Award.

José-Miguel Fernández

Composer and researcher encompassing instrumental, electroacoustic, mixed, and audiovisual music. His research focuses on electronic music composition, improvisation, and the development of tools for mixed and electroacoustic creation.

Thierry Miroglio

An internationally active percussion soloist whose repertoire includes more than 400 solo and concerto works. He has collaborated closely with composers including Cage, Berio, Saariaho, Grisey, Donatoni, and Manoury.

Mikhail Malt

Composer and researcher with a background in engineering, composition, and conducting. His work focuses on generative music, creative systems, mathematical models in computer-assisted composition, and listening strategies. Currently associated with IRCAM and IReMus-Sorbonne.

Constantin Basica

Romanian composer exploring symbiotic relationships between music, video, and performance. His compositions have been presented internationally by Ensemble Dal Niente, ELISION Ensemble, and JACK Quartet. He is a lecturer at Stanford University’s CCRMA.

Simona Fitcal

Digital media artist working across experimental video, multimedia performance, installation, and interactive art. Her practice explores symbolic and expressive uses of digital visual effects in collaboration with musicians and programmers.

Prateek Verma

AI researcher focusing on large language models, machine learning, and generative models for music and sound synthesis. He has worked at Stanford University’s Department of Electrical Engineering and AI Lab.

Alexandru Berceanu

Mixed-realities director and researcher working at the intersection of art, neuroscience, immersive environments, and theatre. Associate professor at the National University of Theatre and Film “I.L. Caragiale” in Bucharest.

Cecilia Suhr

Intermedia composer-performer whose work integrates multi-instrumental performance, electroacoustic composition, live-processed visual media, and immersive audiovisual environments. Her work examines technology, affect, identity, and culture.

Jeff Albert

Associate Professor and co-principal investigator of the Creative Music Technology Lab at the Georgia Institute of Technology. His creative and research work focuses on improvisation, jazz performance, performer-computer interaction, and live computer music.

Anthony Cammarota

Guitarist, composer, and educator bridging performance, technology, sound design, and creative research. His practice combines jazz, electroacoustic, and popular music traditions with interactive music systems and algorithmic composition.

About IRCAM REACH Collective and Somax2

The IRCAM REACH collective brings together artists and researchers exploring co-creativity between humans and intelligent systems through improvisation, composition, and live performance. Central to this work is Somax2, an AI-driven improvisation system that listens, reacts, and interacts with performers in real time. Developed at IRCAM, Somax2 functions as a co-creative musical partner capable of generating responsive musical behaviors while remaining deeply connected to live performers and musical corpora. The system has become an important platform for exploring new forms of human–AI collaboration in contemporary music.


This project is made possible with the support of the UC Riverside Center for Ideas and Society. https://ideasandsociety.ucr.edu