Paper accepted at the 9th Sound and Music Computing Conference, 12-14 July 2012, Aalborg University Copenhagen
Françoise, J., Caramiaux, B., and Bevilacqua, F. (2012). A hierarchical approach for the design of gesture-to-sound mapping. In Proceedings of the 9th Sound and Music Computing Conference (SMC'12), pages 233–240, Copenhagen, Denmark.
Abstract: We propose a hierarchical approach for the design of gesture-to-sound mappings, with the goal of taking into account multilevel time structures in both gesture and sound processes. This allows for the integration of temporal mapping strategies, complementing mapping systems based on instantaneous relationships between gesture and sound synthesis parameters. As an example, we propose an implementation of Hierarchical Hidden Markov Models to model gesture input, with a flexible structure that can be authored by the user. Moreover, some parameters can be adjusted through a learning phase. We show examples of gesture segmentation based on this approach, considering several phases such as preparation, attack, sustain, and release. Finally, we describe an application, developed in Max/MSP, illustrating the use of accelerometer-based sensors to control phase vocoder synthesis techniques based on this approach.
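To make the segmentation idea concrete, the following is a minimal sketch, not the paper's actual model: a flat left-to-right hidden Markov model with one state per gesture phase (preparation, attack, sustain, release), decoded with the Viterbi algorithm over a synthetic 1-D "acceleration energy" signal. All means, variances, transition probabilities, and the signal itself are illustrative assumptions; the paper's system uses a hierarchical HMM with user-authored structure and learned parameters.

```python
import numpy as np

# Phase labels from the abstract; one HMM state per phase (assumption:
# a flat left-to-right topology, simpler than the paper's hierarchical model).
PHASES = ["preparation", "attack", "sustain", "release"]

# Illustrative Gaussian emission parameters (1-D "acceleration energy").
means = np.array([0.2, 1.0, 0.6, 0.1])
var = 0.05

# Left-to-right transitions: each state either stays or advances.
A = np.array([
    [0.8, 0.2, 0.0, 0.0],
    [0.0, 0.7, 0.3, 0.0],
    [0.0, 0.0, 0.9, 0.1],
    [0.0, 0.0, 0.0, 1.0],
])
pi = np.array([1.0, 0.0, 0.0, 0.0])  # gestures start in "preparation"

def log_gauss(x, mu, v):
    # Log-density of a 1-D Gaussian, vectorized over mu.
    return -0.5 * (np.log(2 * np.pi * v) + (x - mu) ** 2 / v)

def viterbi(obs):
    """Return the most likely phase label for each observation."""
    T, N = len(obs), len(PHASES)
    logA = np.log(A + 1e-12)
    delta = np.full((T, N), -np.inf)   # best log-score ending in state j at t
    psi = np.zeros((T, N), dtype=int)  # backpointers
    delta[0] = np.log(pi + 1e-12) + log_gauss(obs[0], means, var)
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] + logA[:, j]
            psi[t, j] = int(np.argmax(scores))
            delta[t, j] = scores[psi[t, j]] + log_gauss(obs[t], means[j], var)
    # Backtrack from the best final state.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return [PHASES[s] for s in reversed(path)]

# Synthetic gesture: low, high, medium, low energy segments.
signal = np.concatenate([
    np.full(5, 0.2), np.full(5, 1.0), np.full(5, 0.6), np.full(5, 0.1)
])
print(viterbi(signal))
# → five labels per phase, in order: preparation, attack, sustain, release
```

In the paper's hierarchical setting, each of these phases would itself contain a sub-model of the gesture's fine time structure; this flat sketch only shows the segmentation principle at the phase level.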