2.3 Theories of Expression and Emotion in Music

"Through music, we can get as close as we can get to the inner feelings of another human being. You can actually feel their presence. You almost know how they’re thinking. You are thinking with them."

Unfortunately, there are very few discussions of emotion and music that seem to ring universally true; perhaps this is because the experience of music is deeply personal and perceptual, and difficult to describe in language.

2.3.1 Leonard Bernstein

Ironically, the most widespread notions about music’s expressive capacity come from analogies to language. The linguistic theorist Noam Chomsky identified a universal, genetically endowed capacity for language among humans, an innate linguistic competence governed by what he called a universal grammar. In a series of televised lectures (his 1973 Norton Lectures at Harvard, published as The Unanswered Question), Leonard Bernstein borrowed from Chomsky’s ideas and applied them to music, claiming that there is an innate code buried in musical structure which we are biologically endowed to understand.

He tried to show how the underlying strings (the deep structure, the basic meanings behind the music) are transformed by composers into the surface structure of a composition, paralleling Chomsky’s account of how deep structures in language are transformed into surface sentences.

Bernstein thought that the main difference between language and music is that music amplifies the emotions more effectively, thereby making it more universal. "Music is heightened speech," he wrote. "In the sense that music may express those affective goings-on, then it must indeed be a universal language." Ultimately, however, Bernstein’s Chomskian analogy could not be sustained: music resembles language in some respects but differs from it profoundly in others. He later characterized music as a different kind of communication:

"I wish there were a better word for communication; I mean by it the tenderness we feel when we recognize and share with another human being a deep, unnamable, elusive emotional shape or shade. That is really what a composer is saying in his music: has this ever happened to you? Haven’t you experienced this same tone, insight, shock, anxiety, release? And when you react to (‘like’) a piece of music, you are simply replying to the composer, yes." 2.3.2 Manfred Clynes

While Bernstein’s comparisons with linguistics may not have been fruitful, another theorist found a way to describe musical communication by connecting neurophysiology, gesture, and emotion. In 1977, Manfred Clynes, a concert pianist and neurophysiologist, presented his theory of Sentics, "the study of genetically programmed dynamic forms of emotional expression."

In 1960, Clynes (together with the psychiatrist Nathan Kline) had coined the term "cyborg" to refer to creatures who augment their biological systems with automatic feedback controls. Clynes also adapted cybernetic techniques to the study of physiological regulatory mechanisms, including heart rate, blood pressure, and body temperature. While doing this work he formulated several theories about sensory perception, including his idea of essentic forms: precise dynamic forms that are characteristic of each emotion. One of Clynes’ key insights was that emotions are not fixed states, but rather transitions (spatio-temporal curves) with particular trajectories. He related these forms to musical structure through a theory of inner pulse, which he felt was unique to each composer, a kind of personal signature encoded in the shapes of the pulses on many levels simultaneously.

For Clynes, the inner experience of music is reflected when the electrical impulses in the brain are mechanically transduced, for example, in the expressive shape of finger pressure. Clynes developed this idea after reading about the German musicologist Gustav Becking, whose study showed that when "an experienced musician was asked to follow a musical composition by moving his forefinger in the air – as if to conduct the music – the finger ‘drew’ shapes that seemed to be consistent among different compositions by the same composer."
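To make the idea of an essentic form concrete, the following Python sketch renders two hypothetical emotion curves as pressure-versus-time trajectories, one abrupt and early-peaking, one slow and gentle. The envelope function and all parameter values are invented for illustration; they are not Clynes’ measured forms.

```python
# A minimal sketch of the idea that each emotion has a characteristic
# dynamic form: a pressure-versus-time curve with a distinct trajectory.
# The attack/decay parameters below are invented placeholders, not
# Clynes' measured essentic forms.

import math

def essentic_curve(t, attack, decay):
    """A simple rise-then-fall pressure envelope, sampled on t in [0, 1]."""
    return (t ** attack) * math.exp(-decay * t)

# Hypothetical parameter pairs: a sharp, fast shape versus a slow, gentle one.
FORMS = {"anger-like": (0.3, 6.0), "reverence-like": (2.0, 2.5)}

for name, (attack, decay) in FORMS.items():
    samples = [essentic_curve(i / 10, attack, decay) for i in range(11)]
    peak = max(range(11), key=lambda i: samples[i]) / 10
    print(f"{name}: peaks at t={peak:.1f}, curve={[round(s, 2) for s in samples]}")
```

Running this shows the "anger-like" curve peaking almost immediately and the "reverence-like" curve cresting late, which is the kind of trajectory difference Clynes argued distinguishes one emotion from another.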

During the past fifteen years, Manfred Clynes has been working on an extension of the Sentics project focused more directly on music. His Superconductor software package allows users to delve into the deep interpretive issues of a musical score and modify elements such as pulse, predictive amplitude shape, vibrato, and crescendo. The idea is to bring the understanding and joy of musical interpretation to people who would otherwise lack the opportunity or training to experience it.
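A rough sketch of how one such interpretive layer, the composer-specific inner pulse, might be applied: a short repeating pattern of amplitude and timing weights imposed on quantized beats. The pattern below is invented for illustration and does not reproduce Superconductor’s actual algorithms or data.

```python
# A hedged sketch of the 'inner pulse' idea: a short repeating pattern of
# amplitude and timing weights applied to every bar. The weights below are
# invented for illustration and do not come from Superconductor itself.

# Hypothetical four-beat pulse: (amplitude scale, timing offset in ms).
PULSE = [(1.00, 0), (0.85, -10), (0.92, +5), (0.80, -5)]

def apply_pulse(beat_velocities, pulse=PULSE):
    """Scale each beat's velocity and shift its onset by the pulse pattern."""
    shaped = []
    for i, velocity in enumerate(beat_velocities):
        amp, offset_ms = pulse[i % len(pulse)]
        shaped.append((min(127, round(velocity * amp)), offset_ms))
    return shaped

# Eight quantized beats at uniform velocity 80 acquire a recurring shape.
print(apply_pulse([80] * 8))
```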

2.3.3 Expression "Rules" Research

Many have assumed, as I do, that the greatest part of the emotional power of music lies in variations of tempo, dynamics, and articulation. Several researchers have also assumed that these variations conform to structural principles and have attempted to demonstrate these expression rules. Caroline Palmer has demonstrated some general expressive strategies that musicians use, as have Eric Clarke, Guy Garnett, MaryAnn Norris, Peter Desain and Henkjan Honing, Johan Sundberg, Neil Todd, Carol Krumhansl, and Giovanni De Poli. David Epstein has also discussed principles of expressive variation in his recent book, "Shaping Time," demonstrating that nonlinear tempo changes follow a cubic curve, and that periodic pulsations act as carrier waves. He makes the case that variations in musical structures such as tempo and dynamics constitute movement, and that this movement is highly correlated with emotional responses to music.
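As a concrete illustration of the cubic-curve finding, the sketch below generates a ritardando whose tempo is a cubic function of position within the phrase. The boundary conditions (the start and end tempi, and zero slope at both endpoints) are my own illustrative choices, not Epstein’s measured values.

```python
# A hedged illustration of the observation that expressive tempo change
# follows a cubic curve. The particular cubic chosen here (a smoothstep)
# is an illustrative assumption, not Epstein's fitted curve.

def cubic_tempo(t, bpm_start=120.0, bpm_end=80.0):
    """Tempo at normalized phrase position t in [0, 1].

    Uses the cubic 3t^2 - 2t^3, which has zero slope at both endpoints,
    so the ritardando eases in and eases out rather than changing abruptly.
    """
    s = 3 * t**2 - 2 * t**3
    return bpm_start + (bpm_end - bpm_start) * s

# Sample the curve across a phrase: a gradual ritardando from 120 to 80 bpm.
for i in range(11):
    t = i / 10
    print(f"position {t:.1f}: {cubic_tempo(t):6.1f} bpm")
```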

Robert Rowe has also described these phenomena in two books: Interactive Music Systems and Machine Musicianship (forthcoming from MIT Press). He has written that one of the most important motivations for improving the state of the art in interactive music systems is to build greater musicianship into computer programs for live performance: not only should the programs be more sensitive to human nuance, but the programs themselves must become more musical. A chapter of his upcoming book treats this from an analysis/synthesis point of view; that is, given the general expressive strategies that have been described, can these observations be used to write programs that add expression to a quantized performance? A simple experiment that I did with Charles Tang in 1995 achieved this to a limited extent: we showed that by adding volume and extra time to notes as they ascend above or descend below middle C, one can ‘musicalize’ a quantized MIDI file.
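A minimal Python sketch of that heuristic follows. The per-semitone scaling coefficients are illustrative assumptions; the exact values from the 1995 experiment are not reproduced here.

```python
# A minimal sketch of the 'musicalization' heuristic described above:
# notes gain volume and duration in proportion to their distance from
# middle C (MIDI note 60). The scaling coefficients are illustrative
# guesses, not the values used in the 1995 experiment.

from dataclasses import dataclass

MIDDLE_C = 60  # MIDI note number for middle C

@dataclass
class Note:
    pitch: int        # MIDI note number, 0-127
    velocity: int     # MIDI velocity, 1-127
    duration_ms: int  # quantized note length in milliseconds

def musicalize(notes, vel_per_semitone=1.5, ms_per_semitone=2.0):
    """Add volume and extra time to notes as they move away from middle C."""
    shaped = []
    for n in notes:
        distance = abs(n.pitch - MIDDLE_C)  # semitones above or below middle C
        velocity = min(127, round(n.velocity + vel_per_semitone * distance))
        duration = round(n.duration_ms + ms_per_semitone * distance)
        shaped.append(Note(n.pitch, velocity, duration))
    return shaped

# Example: a rising C-major arpeggio, quantized at uniform velocity/length,
# swells in volume and broadens slightly as it climbs away from middle C.
melody = [Note(p, 64, 250) for p in (60, 64, 67, 72, 76)]
for before, after in zip(melody, musicalize(melody)):
    print(before.pitch, before.velocity, "->", after.velocity, after.duration_ms)
```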

Many others have attempted to describe the relationships between music and emotion. The philosopher Susanne Langer saw a direct connection between music and emotion, writing that "music makes perceptible for experience the forms of human feeling" and that "music is a logical form of expression that articulates the forms of feeling for the perceiver’s objective contemplation." Paul Hindemith wrote that tempi matching the heart rate at rest (roughly 60-70 beats per minute) suggest a state of repose, while tempi exceeding that rate create a feeling of excitation. He considered this phenomenon fundamental to music, and wrote that mood shifts in music are faster and more contrasting than they are in real life.

Other classical musical traditions treat emotion as a crucial element of performance. For example, the Indian philosopher Nandikesvara considered the expression of emotion to be the most important aspect of the performing arts. According to him, performing art forms should have "rasa" (flavor, character) and "bhava" (moods) in addition to rhythmic motions; these are what give the gestures their meaningfulness. As emotions intensify, Nandikesvara describes how they are increasingly expressed in the face and ultimately in the gestures of the performer; in the classical arts of India, these have become particularly stylized. An action or gesture (either in the body, the voice, or decoration) which expresses an emotion or evokes "rasa" is called "abhinaya."
