Oded Ben-Tal is a London-based composer and researcher working at the intersection of music, computing, and cognition. His compositions range from purely acoustic pieces to interactive, live electronic pieces and multimedia work. In recent years he has been particularly interested in the interaction between human and computational creativities: applying deep learning techniques to folk musics and interrogating the creative capacity of the resulting generative system within the folk tradition as well as outside it; and using AI-inspired approaches in the domain of interactive, live electronic music and joint improvisation between human performers and a semi-autonomous AI system. His work has been supported by grants from the UK's Arts and Humanities Research Council, the Leverhulme Trust, and the Volkswagen Foundation. He is an Associate Professor in the Department of Performing Arts, Kingston University, London.

One, Two, Many is a three-way partnership between two human performers and an Artificial 'Intelligence'. One of the main challenges in human-AI co-creation is the question of motivation, goals, or aesthetic preferences. Cutting-edge AI systems are very good imitation systems, increasingly able to reproduce human-like outputs, but computers are obviously incapable of aesthetic judgements or goals. In One, Two, Many the AI system joins the flute players in realising the piece as a partner defined by an aesthetic preference: that of an impatient listener, one who seeks novelty.

First the AI 'listens' to each flute and evaluates similarity and surprise in real time. Stable input - signals that are predictable or similar to previous signals - increases the 'boredom' in the system. When this boredom rises above a defined threshold, the computer changes its internal settings to achieve musical change and resets the boredom. The computer, therefore, interprets an audio signal and changes its own musical behaviour based on this interpretation: a system that listens and responds according to some aesthetic criteria.

While the realisation of this piece involves a partnership between the human performers and the AI, it is an unequal partnership. First, the AI is only able to transform the flute sounds, not initiate sounds of its own. Second, the aesthetic preferences in the system are obviously rather simple and lack the nuance, sophistication, and experience of a human musician. Since listening is at the heart of the multi-party interaction envisioned, the score is designed to allow the performers a degree of freedom: freedom to respond to and try to influence the AI, as well as the ability to respond to each other. The parts are loosely coordinated - pages act as coordination units - but within each page players have some optional figures, alternatives, and choices of repetition.
The performers, therefore, are able to control, to a degree, the similarity/change in the music and thus interact with the aesthetic 'preferences' of the AI.
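The listen-accumulate-reset cycle described above could be sketched, in highly simplified form, as follows. This is a hypothetical illustration only: the actual system analyses live audio, whereas here the similarity score, the threshold, and the behaviour-switching logic are all invented placeholders.

```python
class ImpatientListener:
    """Toy sketch of the 'boredom' mechanism described in the note.

    Hypothetical simplification: the real system derives similarity and
    surprise from live audio analysis; here 'similarity' is simply a
    number in [0, 1] supplied by the caller for each analysis frame.
    """

    def __init__(self, threshold=5.0, increment=1.0, decay=0.5):
        self.threshold = threshold  # boredom level that triggers a change
        self.increment = increment  # boredom gained from predictable input
        self.decay = decay          # boredom relieved by surprising input
        self.boredom = 0.0
        self.setting = 0            # index of the current musical behaviour

    def listen(self, similarity):
        """Update boredom from one frame; return True if a change occurs."""
        if similarity > 0.7:        # stable, predictable input
            self.boredom += self.increment
        else:                       # surprising input relieves boredom
            self.boredom = max(0.0, self.boredom - self.decay)
        if self.boredom >= self.threshold:
            # Switch to a different musical behaviour and reset boredom,
            # as the note describes.
            self.setting = (self.setting + 1) % 4
            self.boredom = 0.0
            return True
        return False
```

In this sketch, a run of similar frames drives the boredom upward until the threshold forces a change of setting, which is the lever the performers can pull: by varying how similar or different their playing is, they influence when the system 'loses patience'.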