Complex communication for co-located performers within telepresence applications across networks is still impaired compared to performers sharing one physical location. This impairment must be significantly reduced to allow the broader community to participate in complex communication scenarios. To achieve this goal, an artificially intelligent avatar in the form of a musical conductor will coordinate co-located musicians. Improvised Contemporary Live Music performed by a larger ensemble, serving as a test bed, is arguably one of the most complex communication scenarios conceivable, because it requires engaged communication between individuals within a multiple-source sound field that must also be considered as a whole. The results are expected to inspire solutions for other communication tasks.
The avatar system will actively coordinate co-located improvisation ensembles in a creative way. To achieve this goal, Computational Auditory Scene Analysis (CASA) systems, providing robust feature recognition, will be combined with evolutionary algorithms, providing the creative component, to form the first model of its kind. The research results are expected to be significant in their own right and are not bound to telematic applications. With regard to the latter, the proposed system will have a clear advantage over a human musician or conductor, even though intelligent algorithms still lag behind human performance in most other applications, especially when it comes to creativity.
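As a purely illustrative sketch of how such a combination might be wired together (not the project's actual design), the following Python fragment couples a stand-in feature-analysis step with a simple evolutionary loop that proposes conducting cues. The names extract_features, Cue, and evolve_cues are hypothetical placeholders, and the feature extraction is faked with random data rather than a real CASA front end.

# Hypothetical sketch: CASA-style feature analysis feeding an evolutionary
# cue generator. All names and the fitness criterion are illustrative
# assumptions, not part of the proposed system's specification.

import random
from dataclasses import dataclass

@dataclass
class Cue:
    density: float   # desired rhythmic density, 0..1
    loudness: float  # desired dynamic level, 0..1

def extract_features(audio_frame):
    """Placeholder for a CASA front end returning coarse scene features."""
    # A real system would segregate sources and estimate ensemble-level
    # descriptors; here we derive two scalars in 0..1 from the raw frame.
    return {"density": sum(audio_frame) / len(audio_frame),
            "loudness": max(audio_frame)}

def fitness(cue, features):
    """Reward cues that contrast the current scene (a crude 'creativity' proxy)."""
    return (abs(cue.density - features["density"])
            + abs(cue.loudness - features["loudness"]))

def evolve_cues(features, population_size=20, generations=30, mutation=0.1):
    """Evolve a population of candidate cues toward high fitness."""
    pop = [Cue(random.random(), random.random()) for _ in range(population_size)]
    for _ in range(generations):
        # Select the fitter half as parents, then mutate them into children.
        pop.sort(key=lambda c: fitness(c, features), reverse=True)
        parents = pop[: population_size // 2]
        children = [Cue(min(1.0, max(0.0, p.density + random.gauss(0, mutation))),
                        min(1.0, max(0.0, p.loudness + random.gauss(0, mutation))))
                    for p in parents]
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, features))

if __name__ == "__main__":
    frame = [random.random() for _ in range(1024)]  # stand-in audio frame
    cue = evolve_cues(extract_features(frame))
    print(f"Suggested cue -> density: {cue.density:.2f}, loudness: {cue.loudness:.2f}")

In this sketch the evolutionary component supplies variation and selection over possible cues, while the analysis component grounds the fitness evaluation in the current sound field; the actual model would replace both placeholders with the CASA and creative machinery described above.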