The central idea of the LEGOS project is to cross-fertilize interdisciplinary expertise in gesture-based sound control technologies with neuroscience, especially regarding sensorimotor learning. We believe that these aspects are not sufficiently taken into account in the development of interactive sound systems. A better understanding of the sensorimotor learning mechanisms underlying the gesture/sound coupling is necessary to provide efficient methodologies for the evaluation and optimization of such systems.
The objective of the LEGOS project is to systematically study the quality of the gesture-sound coupling in systems based on gestural interfaces.
The project will rely extensively on an experimental approach, considering three perspectives:
- "Sound control": This first point corresponds to a case of sensorimotor learning where the goal is to produce a given sound through the manipulation of a gestural interface, as in digital musical instruments.
- "Learning to gesture with audio feedback": This second point corresponds to sensorimotor learning where the goal is to perform a gesture guided by audio feedback.
- "Interactive Sound Design": This third point corresponds to sensorimotor learning in the case of tangible interfaces, where the goal is the proper handling of an object.
Paper accepted at the 13th International Table Tennis Federation Sport Science Congress, Paris, May 12-13, 2013.
A potential application of sonification for learning specific Table Tennis gestures will be explained and demonstrated.
E. Boyer, F. Bevilacqua, F. Phal, and S. Hanneton, “Low-cost motion sensing of table tennis players for real time feedback”, 13th International Table Tennis Federation Sport Science Congress, Paris, May 12-13, 2013.
Two posters have been accepted for the Progress in Motor Control IX meeting, to be held at McGill University in Montreal, July 14-16, 2013.
- E. Boyer, Q. Pyanet, S. Hanneton, and F. Bevilacqua, “Sensorimotor adaptation to a gesture-sound mapping perturbation”
- S. Hanneton, E. Boyer, and V. Forma, “Influence of an error-related auditory feedback on the adaptation to a visuo-manual perturbation”
The following poster was presented during the inauguration day of the LABEX SMART, March 26, 2013.
It reports on results of the LEGOS project and on the collaboration between the STMS Lab IRCAM-CNRS-UPMC and ISIR-UPMC.
E. O. Boyer, F. Bevilacqua, S. Hanneton, A. Roby-Brami, J. Françoise, N. Schnell, O. Houix, N. Misdariis, P. Susini, and I. Viaud-Delmon, “Sensorimotor Learning in Gesture-Sound Interactive Systems”
Eric O. Boyer (1, 2)*
Bénédicte M. Babayan (3)
- (1) CNRS UMR 9912 IRCAM, France
- (2) Laboratoire de Neurophysique et Physiologie, CNRS UMR 8119, Université Paris Descartes, UFR Biomédicale des Saints-Pères, France
- (3) Neurobiologie des Processus Adaptatifs, CNRS UMR 7102, UPMC, France
- (4) Institut des systèmes intelligents et de robotique (ISIR), CNRS UMR 7222, UPMC, France
Studies of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a tracking system with six infrared cameras. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand; to accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in the heard hand position. Localization errors were exacerbated by short durations of target presentation but were not modified by auditory feedback of hand position. Long durations of target presentation gave rise to a higher level of accuracy and were accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in the acoustic cues, due to changes in head orientation, to support online motor control. The design of an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.
We held a two-day workshop at IRCAM, May 30th and June 1st 2012, to brainstorm on possible sonifications of everyday objects. Each participant brought two objects, which were analyzed and discussed. On the second workshop day, experiments using sensors and real-time synthesis were carried out on a selection of objects.
More information will be posted soon.
Paper accepted at the 9th Sound and Music Computing Conference, 12-14 July 2012, Aalborg University Copenhagen
Françoise, J., Caramiaux, B., and Bevilacqua, F. (2012). A hierarchical approach for the design of gesture-to-sound mapping. In Proceedings of the 9th Sound and Music Computing Conference (SMC’12), pages 233–240, Copenhagen, Denmark.
Abstract: We propose a hierarchical approach for the design of gesture-to-sound mappings, with the goal of taking into account multilevel time structures in both gesture and sound processes. This allows for the integration of temporal mapping strategies, complementing mapping systems based on instantaneous relationships between gesture and sound synthesis parameters. As an example, we propose an implementation of Hierarchical Hidden Markov Models to model gesture input, with a flexible structure that can be authored by the user. Moreover, some parameters can be adjusted through a learning phase. We show some examples of gesture segmentation based on this approach, considering several phases such as preparation, attack, sustain, and release. Finally, we describe an application, developed in Max/MSP, illustrating the use of accelerometer-based sensors to control phase vocoder synthesis techniques based on this approach.
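The hierarchical modelling in the paper is beyond a short sketch, but the underlying segmentation idea can be illustrated with a flat left-to-right HMM over the four gesture phases, decoded with the Viterbi algorithm. This is a minimal illustrative sketch, not the authors' implementation: the transition probabilities, the Gaussian emission parameters, and the scalar "motion energy" feature are all assumptions chosen for the example.

```python
import math

# Gesture phases from the paper, modelled as a left-to-right chain
PHASES = ["preparation", "attack", "sustain", "release"]

# Illustrative transition matrix: stay in a phase or advance to the next
TRANS = [
    [0.8, 0.2, 0.0, 0.0],
    [0.0, 0.7, 0.3, 0.0],
    [0.0, 0.0, 0.9, 0.1],
    [0.0, 0.0, 0.0, 1.0],
]
# Each phase emits a Gaussian over a scalar motion-energy feature (assumed values)
MEANS = [0.2, 1.0, 0.6, 0.1]
STD = 0.15

def log_gauss(x, mu, sigma):
    # Log-density of a 1-D Gaussian
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def viterbi(obs):
    """Return the most likely phase label for each frame of the observation sequence."""
    n = len(PHASES)
    NEG = float("-inf")
    # The gesture must start in "preparation"
    delta = [log_gauss(obs[0], MEANS[0], STD)] + [NEG] * (n - 1)
    back = []
    for x in obs[1:]:
        new_delta, ptr = [], []
        for j in range(n):
            score, arg = max(
                (delta[i] + (math.log(TRANS[i][j]) if TRANS[i][j] > 0 else NEG), i)
                for i in range(n)
            )
            new_delta.append(score + log_gauss(x, MEANS[j], STD))
            ptr.append(arg)
        delta = new_delta
        back.append(ptr)
    # Backtrack from the best final state
    state = max(range(n), key=lambda j: delta[j])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return [PHASES[s] for s in reversed(path)]

# Example: a rising-then-falling energy profile segments into the four phases
energy = [0.2, 0.25, 1.0, 0.95, 0.6, 0.55, 0.6, 0.1, 0.05]
print(viterbi(energy))
# → ['preparation', 'preparation', 'attack', 'attack', 'sustain', 'sustain', 'sustain', 'release', 'release']
```

In the hierarchical version described in the paper, each phase would itself contain a sub-model learned from recorded gesture templates, rather than a single Gaussian state.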
November 8, 2012 at IRCAM (1:30 pm – 4 pm)
Ircam: Frédéric Bevilacqua (coordinator), Hugues Vinet (scientific director), Patrick Susini (Head of PDS team), Nicolas Misdariis, Olivier Houix, Norbert Schnell, Emmanuel Fléty, Joël Bensoam, Isabelle Viaud-Delmon, Jules Françoise
Neuromouv : Sylvain Hanneton, Agnès Roby-Brami, Nicolas Rasamimanana (Phonotonic)
ANR: Olivier Coucharière
General presentation of the project (slides), F. Bevilacqua
Presentation by Olivier Coucharière (ANR)
Presentation of the scientific aims for the three different types of applications:
- Musical interfaces (slides) (F. Bevilacqua)
- Sonic interaction design (slides) (P. Susini)
- Motor control and learning (slides) (S. Hanneton)