“I became conscious of an entirely new effect produced by this familiar music. I seemed to feel the music detaching itself and projecting itself in space. I became conscious of a third dimension in the music. I call this phenomenon ‘sound projection’. . .” – Edgard Varèse, 1936 [emphasis mine]
The next several posts will detail some things I had in mind while preparing to compose the Dana/Inchindown pieces, beginning with sound spatialization. From the beginning of this project, it was clear to me that the pieces would need to have a spatial aspect in order to emulate, as closely as possible, the experience of being within the tunnels & being awash in sound.
As an explicit musical parameter, spatialization would seem to be a fairly new area of exploration. Composers such as Ives (Fourth Symphony), Varèse (Poème électronique), Nono (Prometeo), Stockhausen (Helicopter String Quartet) and others utilized it in several of their works in the 20th century. Xenakis also thought a lot about this, in some cases drawing from his experience as an architect to design physical spaces for performance as part of an overall work, most famously in the case of the Philips Pavilion (see “Notes towards an ‘Electronic Gesture’”, published in Music and Architecture, ed. Sharon Kanach; my thanks to Sharon for her recommendations and advice during the research period).
However, much earlier examples of sound spatialization in music can be found in the likes of Monteverdi and even Palestrina. Some have even theorized that space and reverberation have been fundamental influences on the evolution of musical cultures from the earliest times. In his book Buildings for Music, Michael Forsyth writes:
“From early times the acoustics of stone buildings have surely influenced the development of Western music, as in Romanesque churches, where the successive notes of plainchant melody reverberate and linger in the lofty enclosure, becoming superimposed to produce the idea of harmony. Western musical tradition was thus not only melodic but harmonic, even before the notion grew, around A.D. 1000, of enriching the sound by singing more than one melody at once and producing the harmony at source.”
With the proliferation of electronic & electronically-augmented music and the ready availability of loudspeakers & signal-processing tools, the use of spatialization in music has become increasingly widespread. Composer Rui Penha has developed a great piece of software called spatium toward this end. (I will be using spatium in both of the Dana 2017 works; more on that later, but for now check out this video introduction to the software:)
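For those curious about the mechanics, the most basic building block of loudspeaker spatialization is a panning law: a rule for distributing a sound’s energy across speakers so that it seems to sit (or move) somewhere between them. The sketch below, in plain Python/NumPy, is only my own toy illustration of that idea, an equal-power pan between two speakers; it is not spatium’s algorithm, and the function name and parameters are invented for the example.

```python
import numpy as np

def equal_power_pan(mono, position):
    """Place a mono signal between two loudspeakers.

    position: 0.0 = hard left, 1.0 = hard right.
    Returns an (N, 2) stereo array. The channel gains are cos/sin of the
    same angle, so their squares sum to 1 and the total power stays
    constant as the sound sweeps across the field.
    """
    theta = position * np.pi / 2  # map the position onto a quarter circle
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.column_stack([left, right])

# Example: a one-second 440 Hz tone placed two-thirds of the way to the right.
sr = 44100
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = equal_power_pan(tone, 2 / 3)
```

Dedicated tools like spatium go far beyond a simple two-speaker pan, of course, but the underlying gesture of placing a source somewhere in the listening field is the same.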
There are also psychoacoustic aspects to the way we experience space in music. Most of us are familiar with the notion that higher-pitched sounds are perceived as somehow literally existing “higher” in space, regardless of the actual location of the sound source; low, bass-heavy sounds are perceived as physically “bigger”, etc. This spatial mapping of pitch appears to be essentially universal & can be measured. In Sound Structure in Music, Robert Erickson cites a 1968 study by Roffler and Butler demonstrating that, in the absence of visual cues, high-pitched sounds are perceived as being “above” lower-pitched sounds in space. Erickson quotes researcher C. C. Pratt: “. . . prior to any associative addition there exists in every tone an intrinsic spatial character which leads directly to the recognition of differences in height and depth along the pitch-continuum.” (Ch. 6, Timbre in Texture)
We’ll be returning to the idea of spatialization as I begin to detail the composition process in future posts. In the next post, I’ll share some notes on the idea of timbral fusion.