What is this about?
I wonder if it’s possible to create a musical experience in which music robots behave and make decisions based on their interpretation of human gestures captured in real time. How can this type of interaction influence and shape the way music is written and performed?
I would like to explore whether it’s possible to provide an environment where human creative expression, together with programmed robots, can yield new ways of interacting through sound and gesture.
Until now there have been few examples of this type of experiment, mostly due to technological limitations and cost. But recent advances in computer processing power, along with increasing access to new technologies, provide fertile ground for such explorations.