Conference Program
Keynote Lectures
Artificial intelligence systems such as Large Language Models learn systems of representation by ingesting vast corpora of human-authored texts. Through the attention mechanisms of Transformer architectures, they weigh relationships among tokens, producing context-aware probabilistic representations in high-dimensional semantic spaces.
This keynote examines how such processes enable AI systems to infer the implicit rules governing complex representational systems. It compares human and machine forms of cognition, asking whether AI can be considered cognitive and, if so, in what sense. The talk then explores creativity in AI systems in relation to human creativity, addressing both its potential and its limitations.
Recent technological developments—particularly the use of GPUs and pre-trained transformer models—have sparked a new wave of AI-based music practices. At the same time, a central aesthetic critique has emerged: that AI-generated music tends toward the average, often described as "mid."
A primary focus of this keynote is compositional strategies that move beyond this tendency, exploring how artists can engage AI to produce distinctive and compelling musical outcomes. Systems such as Mushroom and SLURP will be discussed as examples of generative approaches to sound processing.
Panel Discussion
This roundtable brings together scholars and music industry professionals to examine the evolving role of artificial intelligence in musical creativity. Topics include the impact of AI on institutions, industries, and labels, as well as its influence on collaboration, genre formation, and artistic practice.
Kathryn Agnes Huether
Her research examines sound as a political and cultural force, connecting Holocaust and Genocide Studies with media theory and extending to questions of AI, authenticity, and ethics.
mesmi (Emilie Mesmi Chu)
Singer-songwriter, producer, and engineer. Mentored by producer 9th Wonder, she runs VATOCA Studios and SOUND OFFF to support emerging Asian American creatives.
Frank Duchêne
Music producer, sound designer, and lecturer focusing on the evolving relationship between musical creativity, recording technologies, and AI-assisted tools.
Amy Skjerseth & Liz Przybylski (Organizers)
Scholars whose work bridges ethnomusicology, popular music, and material culture, with a focus on the cultural impact of technological defaults.
Session 1: Posthuman Voice
This paper reconfigures current debates on AI in opera by shifting the focus from questions of authorship to the infrastructural conditions that shape operatic experience. The central case study, chasing waterfalls (2022), stages AI as a performing subject capable of generating text and vocal material in real time. Opera emerges as a laboratory for posthuman performance, where voice and agency are continuously reconfigured.
How does the integration of GenAI reshape concepts of authorship and agency? Through case studies such as Tomomibot and ULTRACHUNK, this paper explores vocal improvisations between humans and neural networks, in which voice and body diffract through technologically mediated space and are reconstituted within a socio-technical assemblage.
Introducing Material Synthesis Composition (MSC), a methodology for sonic co-creativity in which relational material substrates serve as a primary compositional feed alongside humans and AI. Examining works such as Organic Memory (Triptych), this paper shows how material affordances generate compositional structure when abstracted via sensor data.
Session 2: AI Systems & Practice
Exploring two approaches to developing co-creative systems that use algorithmic composition and AI to traverse large environmental recording archives. Projects such as Resonance Ecology and Sonifying the Arctic demonstrate how AI-mediated systems transform passive acoustic monitoring into interactive frameworks for live performance.
Introducing Glitch Voice, a real-time neural effects unit designed to deconstruct semantic speech into a visceral "glitched" vernacular using IRCAM's RAVE architecture. The system proposes a framework for "Neural Transcoding," prioritizing the gasps, frictions, and raw textures of human vocalization.
Analyzing the online metal musician BOI WHAT and his song "Neon Tide," generated using AI voices simulating SpongeBob characters. This paper argues that AI-assisted voice modulation can expand expressive narrative complexity, framing AI advancements within the context of hypermediacy in popular music.
Session 3: Cultural & Economic Implications
Analyzing how AI-generated songs circulate as symbolic content, encoding nationalist values and functioning as coordinated messaging. In drawing on Christian worship music and country anthems, AI tools replicate cultural stereotypes, demonstrating that AI music generation does not neutralize cultural bias but underscores it.
Exploring scenarios for the adaptation, assimilation, and revision of the concept of musical authorship in light of AI. This paper gathers judicial readings on copyright for AI-generated materials, examining how algorithmic style replication falls within the blurry zone between copyright infringement and fair use.
This project is made possible with the support of the UC Riverside Center for Ideas and Society. https://ideasandsociety.ucr.edu