In this chapter I discuss the achievements of the Conductor’s Jacket project, noting both its successes and its shortcomings. I detail a series of follow-up projects and extensions that could be undertaken, and discuss its future possibilities as an instrument. The Conductor’s Jacket project has produced several useful hardware, software, and theoretical artifacts, including four versions of a wearable sensing interface, a multiprocessor architecture for gathering, filtering, and mapping physiological data, a prototype wireless architecture, an extensive analysis of conductor data, a set of interpretive decisions about the most meaningful features, supporting evidence for the theories that were presented, and a collection of compositions and etudes for live performance. But perhaps the most important contribution of this thesis is the underlying generative model that it proposes for musical performance. This thesis presents a method that may be useful for future research into musical interfaces and software systems: go into the ‘field’, collect real, high-resolution data, analyze it for what people do naturally and intuitively, and then synthesize a system that reflects the analytical results. Using analysis and synthesis components in tandem is a powerful combination that has not been fully explored in computer music, and my sincere hope is that future researchers will take up the synthesis-by-analysis model that I have proposed here.
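To make the synthesis-by-analysis workflow more concrete, the following sketch outlines its four stages in Python. It is purely illustrative: every name in it (collect_samples, extract_features, choose_mapping, and so on) is hypothetical, the signal is simulated rather than recorded from sensors, and the mapping rule is an invented placeholder rather than the one used in the Conductor’s Jacket system.

import random
import statistics

# 1. Collect: gather high-resolution data in the field.
# Here a physiological signal (e.g., a muscle-tension envelope) is simulated;
# a real study would record from sensors worn during actual performances.
def collect_samples(n=1000, seed=0):
    rng = random.Random(seed)
    return [abs(rng.gauss(0.5, 0.2)) for _ in range(n)]

# 2. Analyze: filter the raw signal and extract candidate features.
def smooth(signal, window=10):
    # Simple moving average to suppress sensor noise.
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def extract_features(signal):
    # Summarize the signal with features a human analyst might inspect.
    return {
        "mean_level": statistics.mean(signal),
        "peak_level": max(signal),
        "variability": statistics.pstdev(signal),
    }

# 3. Interpret: decide which features are musically meaningful.
# (In practice this step is done by a person studying many performances.)
def choose_mapping(features):
    # Hypothetical rule: overall effort level drives loudness,
    # variability drives articulation sharpness.
    return {
        "loudness": features["mean_level"],
        "articulation": features["variability"],
    }

# 4. Synthesize: map the interpreted features onto control parameters
# of a real-time system (here simply printed as MIDI-like values).
def synthesize(mapping):
    velocity = int(min(mapping["loudness"], 1.0) * 127)
    staccato = mapping["articulation"] > 0.15
    print(f"velocity={velocity}, staccato={staccato}")

if __name__ == "__main__":
    raw = collect_samples()
    filtered = smooth(raw)
    features = extract_features(filtered)
    synthesize(choose_mapping(features))

The point of the sketch is the ordering of the stages, not the particular features or mappings: analysis of naturally occurring behavior precedes and constrains the design of the synthesis system.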
Given the enormous complexity of the human impulse for musical expression and the inherent difficulty of defining it, the Conductor’s Jacket project posed narrow, specific, well-defined questions and demonstrated quantitative results. Within that scope, I think that the quantitative and interpretive parts of this project, namely the thirty-five expressive features and nine hypotheses of expression detailed in chapters 4 and 5, were its strongest contributions. While I acknowledge that the analytical results are preliminary and based on the eye-hand method, I think that they demonstrate the merits of a quantitative approach and its potential to deliver important future contributions to our knowledge about expressive, emotional, and musical communication between human beings.
Secondly, this project was able to make use of an established framework of technique to support its quantitative claims. Since conducting technique is well described in several texts and its structure has a regular grammar, I was able to detail numerous expressive phenomena in the physiological data of conductors. Without such a framework, it would not have been possible to establish a norm and interpret deviations from it in any meaningful way.
However, I am less sanguine about the advancements I made with the synthesis in the Gesture Construction. I think that I made a substantial improvement upon previous tempo-tracking devices, and some of my crescendo techniques worked very well, but I do not think that the final system effectively reflected the character and quality of each gesture. Nevertheless, the Gesture Construction successfully served as a proof of concept of the method that was used to develop it. Truly expressive electronic instruments are possibly still decades away, if the development history of instruments like the violin is any indication; getting expression right by design is an extremely hard problem. I aimed very high in my expectations for the Gesture Construction system, and to the extent that I was able to control quantities like tempo, dynamics, and articulation with some accuracy and musicianship, it made a contribution to the state of the art. In the near term, however, more development is needed. The Gesture Construction will remain an important part of my future work and projects, and I remain hopeful about its prospects.
The behavior-based approach presented in this thesis was intended to push beyond the current limitations of gesture-based interactive systems for the performers who work with them. To the extent that a large set of features was discovered in an established performance tradition and several of them were synthesized in a real-time system, the method was successful. Unlike other projects, which have tended to focus on perceptual, compositional, or improvisational issues, this one focused on performed behaviors, and I believe that this focus contributed to the strength of the result. The natural constraints provided by the musical scores and the pedagogical documents on the performance practice made it possible to study a set of gestures empirically and to determine the meanings associated with them. The careful ordering of the data collection, analysis, interpretation, and synthesis stages was crucial to this success.
Related issues that merit further discussion are presented below; they provide additional insight into why certain choices were made.