This was my PhD project in music technology at the Department of Musicology, University of Oslo, from 2008 to 2012.
Occasionally one hears of semi-autonomous instruments. These are typically digital instruments that use machine listening techniques to interact with a performer. Remove the real-time interaction and the performer, and an autonomous instrument is what remains.
A more technical and precise term for the kind of instrument this project deals with is feature extractor feedback systems, or feature-feedback systems for short.
The basic idea can also be described with the metaphor of a musician playing a note while constantly listening to the produced sound and adjusting aspects of the playing technique. Here the aim is not to model actual instruments or the way musicians interact with them. Instead, this project is about building autonomous instruments and exploring their musical potential.
Conceptually, feature-feedback systems can be separated into three units: a signal generator, a feature extractor, and a mapping from the analysed features to the control parameters of the signal generator. Hence there is constant feedback from the sound generated at any moment to the settings that determine how the next moment of sound is generated.
Schematic of autonomous instruments. 'G' represents any signal generator, 'A' a feature extractor, and 'M' a mapping.
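The G/A/M loop described above can be sketched in a few lines of code. This is a minimal illustration, not the implementation used in the thesis: the generator, feature, and mapping chosen here (a sine oscillator, block RMS, and a linear frequency mapping) are arbitrary placeholders for the three units.

```python
import math

def generate(freq, phase, n=64, sr=8000.0):
    """G: produce one block of a sine wave; return samples and updated phase."""
    block = []
    for _ in range(n):
        block.append(math.sin(phase))
        phase += 2.0 * math.pi * freq / sr
    return block, phase

def extract(block):
    """A: a single feature of the block, here its RMS amplitude (roughly 0..1)."""
    return math.sqrt(sum(x * x for x in block) / len(block))

def mapping(feature, lo=200.0, hi=2000.0):
    """M: map the feature back onto a control parameter, here frequency in Hz."""
    return lo + (hi - lo) * min(max(feature, 0.0), 1.0)

# Closed loop G -> A -> M -> G: no user input once it is running.
freq, phase = 440.0, 0.0
trajectory = []
for _ in range(10):
    block, phase = generate(freq, phase)
    freq = mapping(extract(block))
    trajectory.append(freq)
```

With richer generators and features, iterating this loop can produce anything from fixed points to chaotic parameter trajectories, which is exactly the behaviour the project investigates.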
In adaptive audio effects (A-DAFx), effect parameters are controlled by features extracted from an input sound. Feature-feedback systems, by contrast, take their own output as a point of departure; hence they may be thought of as self-organising systems.
Feature-feedback systems are studied from the point of view of chaos theory, nonlinear dynamical systems, and complex systems. The project as a whole aims towards a better understanding of these synthesis models in terms of the relations between parameter spaces and perceptual dimensions.
The word autonomous should be understood roughly as it is used in the context of differential equations: essentially, it means that the user has no real-time control over the instrument.
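To recall the standard mathematical usage (textbook terminology, not specific to this thesis): a system of differential equations is autonomous when the vector field has no explicit time dependence, so the system's evolution is determined entirely by its own state.

```latex
\dot{x} = f(x)      % autonomous: no explicit dependence on time
\dot{x} = f(x, t)   % non-autonomous: explicit time dependence, e.g. external driving
```

The analogy to the instruments is that, once started, nothing external to the system's own state influences how it evolves.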
Self-organised Sound with Autonomous Instruments: Aesthetics and experiments.
Abstract in English and Norwegian.
Smoothness under parameter changes: derivatives and total variation
Poster at the SMC conference in Stockholm, 2013.
The Constant-Q IIR Filterbank Approach to Spectral Flux
Corrected version of a paper presented at the SMC conference in Copenhagen, 2012.
Logistic map with a first order filter
Published in International Journal of Bifurcation and Chaos, Vol. 21, No. 6, June 2011 (preprint version).
Feature Extraction for Self-Adaptive Synthesis
Published in SonicIdeas/IdeasSonicas, Vol. 1, No. 2, pp. 21-28, 2009.
Proceedings of the ICMC 2007, Vol. 1, pp. 283-286. Copenhagen, Denmark.
A review of some common nonlinear filters.
Automated Composition using Autonomous Instruments. Published in Seismograf, 2014.
Automatisk komposition med autonoma instrument. Lydskrift, 2013.
The original Swedish version of the paper above.
Lo-Fi Adventures in Obsolete Media
Reflections on the use of lo-fi techniques in some of my compositions.
Synchronisation (mostly about the Kuramoto model).
Feature Extractor Feedback Systems. Brief summary of some ideas developed in the thesis.
Filterbank Flux. Presented at the SMC conference 2012 in Copenhagen.
Aspiration Noise. Unpublished poster for some course in 2009.
Frequency Shifting Almost Demystified. Demonstration of single sideband modulation with lots of sound examples.
Synchronization of chaotic modules. Discussion of chaos control, system identification and generative modular patches.
A small but complex world of self-organising oscillators. An autonomous differential equation used for the composition of a short piece.