
CO-CREATIVITY IN MUSIC, SOUND, AND AI

Improvisation, Interaction, Composition

June 5–6, 2026

Conference Program

Paper Sessions, Keynotes, and Panels
Featured Presentations

Keynote Lectures

Keynote · June 5, 3:30–5:00 PM
N. Katherine Hayles
Distinguished Research Professor, UCLA · James B. Duke Professor Emerita, Duke University
Co-creating with AI: Stimulating Human Creativity or Stifling It?
Abstract

Artificial intelligence systems such as Large Language Models learn systems of representation by ingesting vast corpora of human-authored texts. Through attention mechanisms in Transformer architectures, these systems evaluate relationships among tokens, generating context-aware probabilistic structures embedded in high-dimensional semantic spaces.
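The attention mechanism the abstract refers to can be illustrated with a minimal scaled dot-product attention sketch, in which each token's output is a probability-weighted mix of all tokens' value vectors. This NumPy example is illustrative only; the function names and toy dimensions are assumptions, not material from the talk.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns raw scores into probabilities.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: relate every token (query) to every
    # other token (key), then mix value vectors by those probabilities.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # context-aware probability structure
    return weights @ V, weights

# Toy example: 3 tokens embedded in a 4-dimensional space.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = attention(Q, K, V)  # out: (3, 4) mixed values; w: (3, 3) weights
```

Each row of `w` sums to 1, which is the "context-aware probabilistic structure" in miniature: how strongly each token attends to every other token.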

This keynote examines how such processes enable AI systems to infer implicit rules governing complex representational systems. It compares human cognition with machine-based forms of cognition, asking whether AI can be considered cognitive and, if so, in what sense. The talk explores the nature of creativity in AI systems in relation to human creativity, addressing both their potential and their limitations.

N. Katherine Hayles is Distinguished Research Professor at UCLA and James B. Duke Professor Emerita at Duke University. Her work focuses on the relations between literature, science, and technology in the 20th and 21st centuries. She is the author of twelve books, including Postprint: Books and Becoming Computational (2021) and How We Think (2012).

Keynote · June 6, 9:30–11:00 AM
Eric Lyon
School of Performing Arts, Virginia Tech
How to Compose AI Music That Isn't Mid
Abstract

Recent technological developments—particularly the use of GPUs and pre-trained transformer models—have sparked a new wave of AI-based music practices. At the same time, a central aesthetic critique has emerged: that AI-generated music tends toward the average, often described as "mid."

This keynote examines compositional strategies that move beyond this tendency, exploring how artists can engage AI to produce distinctive and compelling musical outcomes. Systems such as Mushroom and SLURP will be discussed as examples of generative approaches to sound processing.

Eric Lyon is a composer and audio researcher focused on high-density loudspeaker arrays, dynamic timbres, and performer-computer interactions. He is the author of Designing Audio Objects for Max/MSP and Pd and Automated Sound Design. Lyon's work has been recognized with a ZKM Giga-Hertz prize and a Guggenheim Fellowship.

Panel Discussion

Panel · June 6, 11:30 AM–1:00 PM
AI and Musical Creativity

This roundtable brings together scholars and music industry professionals to examine the evolving role of artificial intelligence in musical creativity. Topics include the impact of AI on institutions, industries, and labels, as well as its influence on collaboration, genre formation, and artistic practice.

Panelists & Organizers

Kathryn Agnes Huether

Postdoctoral Research Associate, UCLA

Her research examines sound as a political and cultural force, connecting Holocaust and Genocide Studies with media theory and extending to AI, authenticity, and ethics.

mesmi (Emilie Mesmi Chu)

Artist, Producer, and Consultant

Singer-songwriter, producer, and engineer. Mentored by producer 9th Wonder, she runs VATOCA Studios and SOUND OFFF to support emerging Asian American creatives.

Frank Duchêne

University of Applied Sciences and Arts, Belgium

Music producer, sound designer, and lecturer focusing on the evolving relationship between musical creativity, recording technologies, and AI-assisted tools.

Amy Skjerseth & Liz Przybylski (Organizers)

UC Riverside

Scholars bridging ethnomusicology, popular music, material culture, and the cultural impact of technological defaults.

Paper Abstracts

Session 1: Posthuman Voice

June 5, 9:30–10:00 AM
Jörg Holzmann
Posthuman Vocality and the Infrastructural Reconfiguration of Opera

This paper reconfigures current debates on AI in opera by shifting the focus from questions of authorship to the infrastructural conditions that shape operatic experience. The central case study, chasing waterfalls (2022), stages AI as a performing subject capable of generating text and vocal material in real time. Opera emerges as a laboratory for posthuman performance, where voice and agency are continuously reconfigured.

Jörg Holzmann studied classical guitar and musicology. His research centers on the infrastructural conditions of contemporary opera and (dis)embodied vocality, intersecting media theory, performance, and nostalgia studies.

June 5, 10:00–10:30 AM
Paolo Paradiso
From Vocal Body to Vocal Network: AI and the Reconfiguration of Musical Co-Creativity

How does the integration of GenAI reshape concepts of authorship and agency? Applied to case studies like Tomomibot and ULTRACHUNK, this paper explores vocal improvisations between humans and neural networks, where voice and body diffract through technologically mediated space, reconstituting themselves in a socio-technical assemblage.

Paolo Paradiso is a PhD student at the Free University of Bozen-Bolzano, investigating the implications of AI in music education and how emerging technologies reshape vocality and performance.

June 5, 10:30–11:00 AM
Darren Woodland Jr.
Material Synthesis Composition: Speculocultural Technopoiesis

Introducing Material Synthesis Composition (MSC), a methodology for sonic co-creativity in which relational material substrates serve as primary compositional feed alongside humans and AI. Examining works like Organic Memory (Triptych), this paper shows how material affordances generate compositional structure when abstracted via sensor data.

Darren Woodland Jr. is a PhD Candidate in Digital Media at Drexel University. His doctoral research develops methodologies guided by Black epistemologies and sonic identity.


Session 2: AI Systems & Practice

June 5, 11:30–12:00 PM
Garrison Gerard
Ecosystemic Music: Building Systems for Improvisation

Exploring two approaches to developing systems for co-creativity using algorithmic composition and AI to traverse large environmental recording archives. Projects like Resonance Ecology and Sonifying the Arctic demonstrate how AI-mediated systems transform passive acoustic monitoring into interactive frameworks for live performance.

Garrison Gerard is a composer and soundscape ecologist. He has carried out acoustic surveys tracking human noise impact in Patagonia, Iceland, and Denali National Park.

June 5, 12:00–12:30 PM
Yifeng Yvonne Yuan
Glitch Voice: Real-Time Neural Deconstruction of Vocal Meaning

Introducing Glitch Voice, a real-time neural effect unit designed to deconstruct semantic speech into a visceral "glitched" vernacular using IRCAM's RAVE architecture. The system proposes a framework for "Neural Transcoding," prioritizing the gasps, frictions, and raw textures of human vocalization.

Yifeng Yvonne Yuan is a PhD Candidate in Computer Music at UC Santa Barbara. Her research bridges audio DSP programming with experimental music composition.

June 5, 12:30–1:00 PM
Jeremy Francoeur
No Truth, No Lies: Narrative Storytelling Through Memes and AI

Analyzing the online metal musician BOI WHAT and his song "Neon Tide," generated using AI voices simulating SpongeBob characters. This paper argues that AI-assisted voice modulation can expand expressive narrative complexity, framing AI advancements within the context of hypermediacy in popular music.

Jeremy Francoeur is a musicology PhD student at the University of Western Ontario researching the impact of the internet age on music making and self-identity.


Session 3: Cultural & Economic Implications

June 5, 2:00–2:30 PM
Sonnet Swire
Prompt and Consequence: AI-Generated Music as 21st-Century Propaganda

Analyzing how AI-generated songs circulate as symbolic content, encoding nationalist values and functioning as coordinated messaging. By drawing on Christian worship and country anthems, AI tools replicate cultural stereotypes, showing that AI music generation does not neutralize cultural bias but underscores it.

Sonnet Swire is a composer, musicologist, and journalist. She is a PhD student at UC Riverside examining how technology, storytelling, and messaging reveal cultural trends.

June 5, 2:30–3:00 PM
Alvaro E. Lopez
Navigating the Convergence of AI and Music Composition: Labor and Attribution

Exploring scenarios for adaptation, assimilation, and revision of the concept of musical authorship in light of AI. This paper gathers judicial readings on copyright for AI-generated materials, examining how algorithmic style replication occupies the blurry zone between copyright infringement and fair use.

Alvaro Lopez, PhD, is an electronic musician, technology researcher, and composer holding a patent for the Progressive Adaptive Music Generator.

This project is made possible with the support of the UC Riverside Center for Ideas and Society. https://ideasandsociety.ucr.edu