Accidents (1967) for electronically prepared piano, ring modulator, mirrors, actions, black light, and projections
Accidents (1967)--for electronically prepared piano, ring modulator, mirrors, actions, black light, and projections--was composed for performance by David Tudor, whose original and full-length 1968 recording of the work can be heard on compact disc for the first time (CDCM Computer Music Series, "The Composer in the Computer Age, A Larry Austin Retrospective: 1967-94," Centaur Records, CRC2219). An excerpted version of this recording was released on LP as part of Source magazine, Issue 4, 1968.
Accidents was composed at the invitation of David Tudor, to whom the composer recalls responding, "But David, you don't play the piano anymore," to which Tudor said, "Oh, Larry, you'll think of something." And Austin did: Accidents is a live-electronic piece for piano, where the pianist, seated at the keyboard, tries not to make a sound, as he silently depresses the keys. Austin wrote about the process of its performance in Source, "Accidents is an open form. The piece ends when the performer successfully completes every gesture in the piece. Sound is produced through accidental rather than deliberate action; i.e., all notes are depressed silently, and sound occurs only when a hammer accidentally strikes a string. Accidents occur, depending on the key action, the pressure applied to the keys (i.e., the velocity), and the preparation of the strings. The music is read in the conventional way, from left to right through ten 'gestures' and six systems in order. When an accident occurs, the player immediately stops playing that gesture and proceeds immediately to the next. Arriving at the last gesture and trying to complete it, the player returns to each of the gestures in which an accident occurred, always trying to complete them without an accident. With each new try the gesture is begun anew, and each time the uncompleted gestures are read in left-to-right, numbered order....the piece should always be played as fast as possible--at the most hazardous pace, making accidents highly probable.
With successive performances it is possible that the player may develop techniques to avoid accidents; this happening, the performer should counter such gradually acquired technique by playing faster and with more abandon....The piano is prepared as follows: 1) the sustaining pedal is fixed so that it is constantly engaged, lifting the dampers and allowing the strings to vibrate freely; 2) a number of circular, flat, shell wind chimes are placed over the strings so that most, if not all, of the strings are in direct contact with the freely vibrating shells; 3) a large number (at least 16) of contact microphones, guitar pickups, and cartridges are scattered over, but not attached to, the shells, so that they transmit the slightest vibration to as many as six but no less than two speakers spaced about the hall, so that the sound seems not to come from the piano but from other places in the hall."
"The dynamic range of the sounds can neither be controlled nor predicted. In general, however, this will depend on the strength of the sound of the accidents, the sensitivity of the electronic equipment, and, finally, the discretion of the player. The player should have an assistant controlling and varying the strength (slowly and barely perceptible change) of the signals. Feedback is probable and should be exploited by using such a signal as a carrier wave in appropriate ring modulation procedures." The first performance of Accidents was by Frederic Rzewski in a 1967 concert in Rome by MusicaEletronica Viva, the second by David Tudor later that year in Davis, California, on the First Festival of Live-Electronic Music.
AccidentsTwo was commissioned for performance by Montague/Mead Piano Plus. The premiere performance by Philip Mead, piano, and Stephen Montague, sound projection/processing, was presented in June, 1993, on the Platform Three festival of new music in London at the Institute of Contemporary Arts. The recording session for this compact disc took place at the Performance Space Recording Studio, The City University, London, in March, 1994, with Austin as producer and Nye Parry as engineer. Post-production was completed by Austin at the Kunitachi College of Music Sonology Department in Tokyo in April, 1994. The computer music sonic events, stored for performance on computer or DAT, were produced by Austin in the composer's studio, gaLarry.
Austin describes the work's conception: "AccidentsTwo continues, extends, and subsumes the original compositional and real-time performance approach first evolved in Accidents [One], composed in 1967 (see notes). It continues my work in open form, invoking highly evolved improvisational formats and protocols; it extends the musico-technical resources from the live-electronic theater piece toward hypermedia; and it subsumes the actual sounds from Accidents [One], recorded by David Tudor in 1968, transforming those accidents into 73 stored sonic events, computer-processed for performance in AccidentsTwo."
"AccidentsTwo is performed in open, set, or open/set form. In open form, the piece begins, continues, and ends at any point in time, its continuity of musical events and their ordering randomly programmed by the performers or a computer. In set form, the piece begins and ends at specific points in time, its continuity of events and their ordering empirically programmed by the performers. In open/set form, the performers and/or a computer create a continuity of random and empirically ordered events, as Montague and Mead have chosen in this recording. In AccidentsTwo, the continuity and combination of individual musical events are defined and performed in both graphic and sonic terms. An open, set, or open/set selection from the total collection of 51 graphic events in tablature notation is read by both performers, the 1-to-4 score-slide(s) for each graphic event successively advanced by the pianist. Each of the total collection of 73 pre-recorded sonic events is accessed for playback by the sound projectionist by incrementing or decrementing program number indices for each sonic event on a digital audio tape (DAT) or, optionally with a computer, accessing numbered soundfiles stored on disk. Notated graphic events have an open, variable duration ranging from as brief as 20 to as long as 40 seconds and averaging about 30 seconds. Each stored sonic event, of course, has a specific duration, ranging from as short as 10 to as long as 150 seconds and averaging about 30 seconds."
"The sound coloration and spatial texture of AccidentsTwo are dynamic-- combining, alternating, contrasting, projecting, and integrating the amplified/processed sound colors of the piano with constantly moving, stored, pre-processed computer-generated sound colors. As in Accidents [One], piano sounds are produced through accidental rather than deliberate action. The pianist depresses the keys silently, so that sound occurs only when a hammer accidentally strikes a string. The sustaining pedal is engaged freely by the pianist. Accidents--accidental sounds--occur, depending on the key action, the pressure applied to the keys (i.e., velocity), and the sensitivity and proximity of three microphones placed strategically above the strings. The dynamic range of the piano sounds can neither be controlled nor predicted. In general, however, this will depend on the strength of the sound of the accidents, the sensitivity of the microphones and electronic sound processors, and, finally, the discretion of the pianist and the sound projectionist. It is desirable that the sounds of the 'silent' action of depressing the keys be heard as an audible artifact of the piano amplification and sound processing. As in Accidents [One] the scored events of the piece should always be played as fast as possible and at the most hazardous pace, making accidents--sounds--highly probable."
"Processing the piano sounds in real-time involves a multi-effect signal processor and a digital delay processor. The resulting processed sound is mixed and moved dynamically in the performance space with the pre-processed, constantly moving computer music stored and played back on DAT or computer. The notation read and interpreted by both the pianist and the sound projectionist has its analog in traditional musical notation and in graphic plots representing functions over time and space (see booklet cover). The sound projectionist is positioned with the electronic equipment configuration at the center of the performance space. A large projection screen or white flat is positioned upstage from the piano, directly in front of the pianist, the sound projectionist, and the audience. The score-slides are advanced periodically or aperiodically in the chosen order. The length of time for each score-slide seen, before advancing to the next, is advanced at varying rates from 10 to 40 seconds by the pianist with his/her foot, utilizing the remote control device for the projector."
"At the beginning of the performance, the DAT (or computer disk-drive) is cued at the first positive Program Number (PNO) appearing on the first score-slide of the first event, the DAT machine set on "pause", ready to play. When the first accident occurs during that or a subsequent score-slide, the sound projectionist presses "play" on the DAT, causing the currently selected PNO sonic event to begin its playback. When subsequent accidents occur, he/she increments or decrements the PNO by freely choosing one of the numbers appearing on the current slide, ideally (if audibly discerned) the PNO determined by when and where an accident occurred in the course of an event. Upon hearing the next accident, he/she plays back the newly selected PNO. Meanwhile, the sound projectionist is setting the digital delay rate, which changes in every event on each score-slide. The digital delay values range from 0 to 4 seconds. When a positive number goes above 4, the difference between 4 and the greater number is added to zero for the new rate. When a negative number goes below zero, the difference between zero and the lesser number is added to zero for the new rate. Also, meanwhile, the sound projectionist is changing and monitoring the signal processing prescribed by the color bars of the current event, as well as the amplitude level of the overall mix of live and pre-recorded sound. (Note: A completely automated, hypermedia system for real-time performance of AccidentsTwo is currently under development by the composer.)"
Adagio: Convolutions on a Theme by Mozart (2004-5), for clarinet and computer, was commissioned for performance by renowned American concert clarinetist F. Gerard Errante. The piece unfolds in two continuous sections, bridged by an improvised cadenza. The soloist's sounds are amplified, processed, and diffused in the listening space, combined with the interactive playback of the octophonic (optionally, stereo) computer music heard in montage: the listener is immersed in the live and recorded sounds.
All of the sonic materials for Adagio originated from Errante's clarinet recordings made at a fall, 2004, session produced by the composer at DRM Productions, Dallas, Texas, with David Rosenblad as recording engineer. Errante's recordings of Austin's five-part transcription of the slow movement of the Mozart Concerto for Clarinet were paired, one recording serving as the "primary input" file and a second as the "impulse response" file; the "convolution" process multiplies the waveform spectra of the two files together, producing a third, hybrid sound. The effect is a type of cross-synthesis, in which the common frequencies are reinforced. To the composer's ears, provocatively beautiful, ethereal sounds result: quiet sonic reflections of Mozart's masterpiece...passing before our ears.
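Convolution as described, multiplying the spectra of a "primary input" and an "impulse response" recording, can be sketched with NumPy (the arrays stand in for the actual soundfiles; the function name is my own):

```python
import numpy as np

def convolve_cross_synth(primary, impulse):
    """Multiply the spectra of two recordings; frequencies the two
    share are reinforced in the hybrid result (cross-synthesis)."""
    n = len(primary) + len(impulse) - 1            # full convolution length
    spectrum = np.fft.rfft(primary, n) * np.fft.rfft(impulse, n)
    hybrid = np.fft.irfft(spectrum, n)
    return hybrid / np.max(np.abs(hybrid))         # normalize to avoid clipping
```

Multiplying spectra is mathematically the same as convolving the waveforms, which is why the technique carries the name.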
The conception and realization of Adagio is orchestral, in great part because the hybrid, convolved sounds and the way they emerge in the texture of the piece are like orchestral instruments interacting and gently resounding. The clarinetist blends his/her lines and sounds with the computer music, whose essences derive from the twenty composed and transcribed sequences heard in combination and succession through the course of the piece. Adagio was completed in the composer's studio, gaLarry, in Denton, Texas, USA.
The score for art is self-alteration is Cage is... is what I call a uni-word omniostic, where all possible arrangements of the letters of one word--here, C A G E--appear adjacently, allowing one to spell the word, continually in sequence, following appropriate horizontal, vertical, and diagonal paths through the array of the word's letters (see score reproduction). The piece was composed between December, 1982, and January, 1983, and "dedicated to my friend and mentor, John Cage, in his seventieth year." The title of the work was inspired by John's definition of art: "Art is self-alteration." On receiving my birthday present, John sent me a thank-you note saying, "Thank you. I feel changed already."
The performers--in this case, Robert Black, the string bassist, 16 times--trace a path through the omniostic score, playing each note associated with a letter without expression--but not mechanically--quietly changing to the next note when "...self-alteration is Cage is art is...". Each of the sixty-four block letters--sixteen iterations of C A G E--contains a combination of four pitches and/or silences derived by algorithmic program.
The score may be played by four string basses, by four 'celli, by quartet combinations of 'celli and basses, or by quartet multiples of such groups. In this recording, four string bass quartets are heard. Notated pitches are limited to the open strings and the first three natural harmonics on each string, the resultant gamut of pitches totaling sixteen. The four strings of each of the four instruments are tuned, scordatura, to the pitches c, a, g, and e, each instrument tuned to a different combination of the letters, beginning with the lowest string (IV) upward to the highest string (I), as follows: c, a, e, g; e, a, c, g; c, g, e, a; and e, g, c, a.
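The omniostic idea, tracing adjacent paths through a letter array that spell the word, is essentially a word-search grid. A sketch that counts the straight-line spellings of C A G E (the sample grids in the test are my own illustrations, not Austin's actual score, which may also admit paths that turn):

```python
def count_spellings(grid, word="CAGE"):
    """Count the straight-line horizontal, vertical, and diagonal
    paths through a letter grid that spell `word`."""
    dirs = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]
    rows, cols = len(grid), len(grid[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            for dr, dc in dirs:            # try all eight directions
                if all(0 <= r + i * dr < rows and 0 <= c + i * dc < cols
                       and grid[r + i * dr][c + i * dc] == word[i]
                       for i in range(len(word))):
                    total += 1
    return total
```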
Beachcombers was commissioned in 1983 by John Cage and the Merce Cunningham Dance Foundation for performance with the dance work and film Coast Zone, choreographed by Merce Cunningham. The premiere performance by the Merce Cunningham Dance Company took place at the New York City Center Theater on March 18, 1983, performed by John Cage, chanting; David Tudor, electronics; Martin Kalve, cheng; and Takehisa Kosugi, violin. Beachcombers is dedicated to John Cage and Merce Cunningham.
Beachcombers are waves and wanderers. Waves rise, fall, crash, thunder, leaving droplet sheets and turbulent eddies. Wanderers explore, close by the waves, up and down the coast zone, walking slower, faster, slower...shifting, changing, searching. The musicians, provided with cue-tapes and headphones, follow the cue-tones, matching pitches and their pace. The tones fall within the range of a minor-third. The rising and falling lines derive from a sixty-one tone microtonal scale within the third. The tones' pace increases, decreases, gradually from slow to fast to slower to faster to...the tempo of the walk changing from as fast as five paces per second to as slow as three per five seconds. The musicians start their cue-tapes at any point on the tape on either side, following as they wish, conveying the image of a beachwalk...passing one another, walking alongside, reversing direction, stopping, starting.... The cue-tape for voice calls for intoning of text found on both sides of a large deck of cards. Words appear with and mostly without periods and/or commas following. When a word is followed by a period, pause for a while. When followed by a comma, pause briefly. After the deck is thoroughly shuffled, the word-cards are turned, one at a time. Each word is intoned by the musician, matching single tones or groupings of tones on the tape. Instruments which are continuously variable in pitch (not keyed)--string instruments, trombone, voice, electronic devices--are best. The sound image of the surf and the undulating motion of the water are heard both from the tape part and the signal source provided for an envelope follower controlling the shaping of white-noise envelopes. 
The computer music on tape (stereo, 30 minutes total duration) is mixed freely during the performance with the amplified instruments/voices: the dodecaphonic wave-patterns derive from a self-avoiding random walk through complete and segmented sentences written by mathematician Benoit Mandelbrot, applying the "Zipf law of word frequencies." Total performance duration is as open as the time it takes for a leisurely walk on the beach.
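The cue-tones' sixty-one-tone scale within a minor third divides 300 cents into sixty equal 5-cent steps. A sketch of the resulting frequencies (the 220 Hz base pitch is my own illustrative choice):

```python
def microtonal_scale(base_hz=220.0, tones=61, span_cents=300.0):
    """Sixty-one equally spaced tones spanning a minor third (300
    cents): adjacent tones sit 5 cents apart."""
    step = span_cents / (tones - 1)        # 5 cents between neighbors
    return [base_hz * 2 ** (i * step / 1200.0) for i in range(tones)]
```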
BluesAx was commissioned and composed for performance by concert saxophonist Stephen Duke, with composing grant sponsorship from Northern Illinois University and the University of North Texas. Much of the material for both the score and the pre-recorded and synthesized/processed computer music derives from recordings of Steve's sounds and emulations of four legendary jazz saxophonists, created in a six-hour, collaborative session with the composer in January, 1995, in Denton, Texas. In performance, Steve performs on both soprano and alto saxophones, combined with the computer music on tape. He follows a through-composed, precisely timed score, which details the music he reads and improvises with the taped computer music; the nature and notation of the events and music heard on the tape; and the patternings--"licks"--that inform and serve as models for his improvisations.
BluesAx is presented in seven continuous movements, four being interpretive portraits of the great jazz saxophonists Sidney Bechet, John Coltrane, Johnny Hodges, and Charlie Parker, these introduced and framed by three blues "choruses": I. BluesInCameroon; II. Sidney; III. Trane; IV. BluesLude; V. Hodges; VI. Cadenza; and VII. BluesOutParker.
The montage of sounds and music heard on the tape includes saxophone sounds, my sine-tone "BluesHum" orchestra, rainforest and lakeside sounds recorded for the BBC in Kenya, Senegal, and Cameroon, as well as the city sounds of London's Soho, New York's Times Square, and New Orleans's Heritage Festival. All were combined, mixed, processed, and married during the summer of 1995 at the composer's computer music studio, gaLarry, in Denton, Texas, using a NeXTstation computer and the software synthesis languages, csound and cmix.
Canadian Coastlines was commissioned by the Canadian Broadcasting Corporation as a 'radiophonic' composition for synchronized, live radio broadcast performance on CBC Radio, May 10, 1981, from Halifax, Toronto, and Winnipeg, heard here in a recording of that original broadcast. Four voices of an eight-voice canon are performed by eight musicians, the remaining four--'the computer band'--played as digital synthesizer sequences pre-recorded on tape, each voice entering in turn in exact melodic/rhythmic imitation. However, none of the eight voices are performed in the same tempo. Instead, the musicians follow four distinct tempo click tracks, allowing different, concurrent tempos as well as gradually accelerating and decelerating tempos over relatively long spans of time. The click tracks are timed so that the eight voices come into melodic/rhythmic unison--phase--five times during the piece; i.e., the voices momentarily catch up with one another, only the next moment to continue the acceleration or deceleration, as the case may be.
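For steady tempos the phase idea is simple arithmetic: click tracks coincide at common multiples of their beat periods. A sketch under that simplification (the piece's actual tracks accelerate and decelerate, timed so all eight voices phase five times):

```python
from fractions import Fraction
from math import gcd, lcm

def first_unison(tempos_bpm):
    """Seconds until steady click tracks at the given tempos next
    click together: the least common multiple of their beat periods
    (lcm of fractions = lcm of numerators / gcd of denominators)."""
    periods = [Fraction(60, t) for t in tempos_bpm]  # seconds per beat
    num, den = 1, periods[0].denominator
    for p in periods:
        num = lcm(num, p.numerator)
        den = gcd(den, p.denominator)
    return Fraction(num, den)
```

For example, tracks at 60 and 90 beats per minute (periods of 1 and 2/3 seconds) fall back into unison every 2 seconds.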
Fractal is a mathematical term coined by French mathematician Benoit Mandelbrot, used to describe a class of natural distribution phenomena involving the spectral density of a fluctuating quantity and its correlation. In the present piece, such fluctuating quantities are derived from freely concatenated mappings of Canadian coastlines, whose courses form coordinates on a graph and provide data for a compositional algorithm generating melodic contour, interval choice, textural density, dynamic flux, and rhythmic design: musical canonic fractals. The taped canonic voices were designed and generated with a Synclavier Digital Music System in the computer music facilities of the Center for Experimental Music and Intermedia at the University of North Texas, Denton.
Computer music practitioner John Strawn wrote in the Computer Music Journal (Vol. 6, No. 2, 1982) that "Canadian Coastlines...is an intriguing work for ear and mind...The work thins, thickens, cools, and occasionally warms, moves forward and then veers away, joins, separates, soothes and disturbs, and truly allows the listener to enter the process." Present at the same performance during the 1981 International Computer Music Conference in Denton, Texas, John Cage declared to Austin, "It's beautiful! I don't understand it!"
Clarini! is my paean to Bach, whose brilliantly virtuosic trumpet writing has always inspired me, once as an aspiring trumpeter and certainly always as a composer. My movement, marked Strettissimo!, is scored for five differently tuned trumpet quartets, respectively Bb, C, D, Eb, and A (piccolo). I am pleased to have been invited to compose a movement of Par Ses, especially since it is dedicated to my trumpet teacher of years ago at NTSU, John James Haynie.
Continuum (1964) was the first of what became a series of ten works in "open style," composed between 1964 and 1966: seven in Rome, Italy, and three in Davis, California. By "open style", I referred not only to compositional approaches I employed--through-composed, motion pieces with rude, violent gestures dominating--but also to the desired performance approach: directness, freedom, and an un-precious, unpretentious love of sound and movement. Continuum was dedicated "to my friends of the New Music Ensemble," an ensemble we had formed in summer, 1963, to explore free group improvisation. (Dary John Mizelle, who privately was studying composition with me then, became a co-member of the NME and remembers the premiere performance of Continuum.) Continuum was a "sit-down-composition" version of the music the NME improvised as "stand-up-composition." In fact, I composed each instrument's part separately from beginning to end in a kind of composer-performer frenzy, my pencil flying over the staves in virtually real performance time: first the flute part, then the oboe, then the percussion... The piece, in fact, could be performed in the same fashion, "for a Number of Instruments," (i.e., performers): as few as one performer, as many as seven.
The present performance is, as far as I know, the world premiere of this particular combination of six instruments. The piece was named Continuum, because that term connoted both the process of its creation and its moment-form: a slice of the continuum of this ensemble's music, suddenly emerging from its silence for ten or so minutes, returning again to its silent, but never-ending existence...no denouement, no beginning, no ending, just its momentary, sounding existence. (Historical note: Continuum was performed in my first faculty recital, fall, 1979, here at the University of North Texas, Thomas Clark conducting.)
Djuro's Tree continues my sound and theater-piece portraits. All these portraits, since the first in 1966, have been composed for and/or about individual composers and performers. Djuro's Tree is, in contrast, a family portrait of three generations of Serbian mathematicians: Alexandra Kurepa, her father Svetozar, and her great-uncle Djuro (1907-1993). Alexandra speaks of Djuro's influence on her own and her father's careers as mathematicians. Her son Andre tells of his fun at the Adriatic coast every summer. Djuro's family story is set in a dynamically moving octophonic "family" tree of sound, the wind moving through it, the sonic leaves and creaking limbs dramatically animating its soundscape.
Alexandra Kurepa's narrative and her son Andre's story--in both Serbian and English--were recorded by the composer at their home in North Carolina. The sounds of a tree's limbs creaking and its foliage rustling in a strong wind were taken from the BBC Sound Effects Library. The creaking limbs were "helped" by the recording of a squeaking wooden chair from the Jonty Harrison family kitchen in Edgbaston, England. All the sonic materials for the piece derive from these three sources. Materials for the piece were processed in the Electroacoustic Music Studios of the University of Birmingham in England during my June/July, 1997, Magistère de Bourges composer residency there. The work was completed in August/September, 1997, in the composer's studio, gaLarry, in Denton, Texas. Software and hardware systems used in Birmingham included Sound Designer, Soundhack, Audiosculpt, and GRM Tools, on a Macintosh computer. Systems used in Denton included Paul Lansky's rt and cmix and the audio software editors and 8-channel digital i/o programs developed for the Silicon Graphics O2 computer with an 8-channel digital i/o PCI. Djuro's Tree was commissioned by Borik Press.
John explains... (2007) is composed as octophonic computer music, dedicated to the memory of composer John Cage. The piece was completed on September 5, 2007, on what would have been his 95th birthday; it is ten minutes long and based in large part on an excerpted portion of a July, 1966, interview that writer Richard Kostelanetz recorded at John's home in Stony Point, New York. I thank Richard for granting me permission to use the recorded interview as the composition's central narrative. Note: Kostelanetz and Cage were joined toward the end of the interview by writer Susanna Opper. Enjoy.
La Barbara was commissioned and composed for performance by the accomplished and acclaimed singer and composer, Joan La Barbara. The materials for both the vocal score and the pre-recorded computer music derive from and are modeled on a two-hour conversation between the composer and Joan La Barbara, recorded in Santa Fe, New Mexico, in May, 1991. In performance, Ms. La Barbara sings and vocalizes with the computer music on tape. She is guided by a score which maps the events heard on the tape and by her innate sense of timing and improvisational inventiveness.
The computer music on tape is made up of thirty-three moments extracted from our recorded conversation, moments that I chose because they seemed an essence of a facet of her career as a singer/vocalist/composer. The continuity of the piece is faithful to the chronology of our conversation, which moved through three parts, named in the piece as "The Name", "The Sounds", and "The Music"--herself, her singing, her composing. The montage of sounding musical events heard on the tape was created during the summer of 1991 at the composer's computer music studio, gaLarry, in Denton, Texas, using a NeXTstation computer and the software synthesis language, csound.
Les Flûtes de Pan: Hommage à Debussy (2005-6), for flute (piccolo), octophonic computer music, and dancers (optional), was commissioned for performance by flutist Jacqueline Martelle. The soloist's sounds are amplified, processed, and diffused in the listening space, combined with the synchronized playback of octophonic computer music heard in montage: the listener is surrounded and immersed in the live and recorded sounds. All of the sonic materials for Les Flûtes de Pan originated from Martelle's flute, alto flute, and piccolo recordings of sequences I derived from Debussy's solo flute piece, Syrinx (1913). Martelle's recordings of these sequences were paired--one sound recording serving as the "primary input" file and a second recording as the "impulse response" file--and the "convolution" process multiplied the waveform spectra of the two files together, producing a third, hybrid soundfile. The effect is a type of cross-synthesis, in which the common frequencies are reinforced. To me, provocatively beautiful, ethereal sounds result: sonic images...passing before our ears. Les Flûtes de Pan was completed during spring, 2005, through winter, 2006, in the composer's studio, gaLarry, in Denton, Texas, USA.

Claude Debussy composed his solo flute piece, Syrinx, as incidental music for Act III of Psyché, a dramatic poem in three acts by Gabriel Mourey, first performed by flutist Louis Fleury on December 1, 1913. Originally entitled "Flûte de Pan" (1913), the piece was published in 1927 as "Syrinx". Grove's Dictionary of Music states: "Syrinx. Greek term for...Panpipes, that is, a row of hollow pipes sounded by blowing across their tops....In mythology, the instrument is the attribute of PAN, the half-goat, half-man god of shepherds...the central myth as related in Ovid's Metamorphoses: Pan was pursuing the nymph Syrinx, who fled to a river and begged the nymphs there for help.
She was allowed to conceal herself by taking the form of a reed-bed from which Pan subsequently picked the reeds to fashion his pipes. In keeping with its mythology the Syrinx has always had a strongly pastoral connotation...."
Between 1974 and 1993, I was involved in an ongoing project to compose a series of works based on Charles Ives' Universe Symphony, his last, most ambitious, yet uncompleted composition. What I entitle the Life Pulse Prelude is the percussion orchestra layer of the Universe Symphony which, according to Ives' memos, can be performed alone.
The "life pulse" percussion music of the Universe Symphony was very important for Ives. It was the first material sketched for the Universe Symphony between 1911 and 1915; sketches resumed in 1927 and 1928 and from time to time were taken up again until three years before his death in 1954. Of the 36 extant pages definitely part of the Universe Symphony, nine were devoted to material for the "life pulse": twenty percussion parts--one evidently a piccolo--all in different meters and tempi, coming into phase every eight seconds. At those points, a "low, deep, hanging bell" is struck, Ives' "B.U." (basic unit), as he called it. Then, one by one, the other percussion instruments enter to create the complex meter/tempo ratio of 1:2:3:4:5:6:7:8:9:10:11:12:13:14:17:19:22:23:29:31. In the nine sketch pages devoted specifically to the "life pulse", Ives actually notated half of one of the planned ten cycles of music, which, in the latter half, he specifies in exact palindromic reverse. Thus, one complete cycle was realized by Ives, himself. The other nine cycles are described in structural but not notational detail. With this and the structural outline provided by Ives, I have realized the entire "life pulse" music.
The full realization of Ives' "life pulse" music has not been accomplished sooner by me or others for three reasons: 1) the music cannot be performed with accuracy by human performers, unless some means is found to coordinate with precision the twenty different cross-rhythmic tempos and meters coming into phase every eight seconds; 2) the scope and requirements of the concept are formidable, even intimidating, and composers are not likely to take up the task, realistically, unless a performance of same is in the offing; and 3) not many composers relish the prospect of finishing another composer's piece, even if it is by Charles Ives. I took it up though, because Ives' ideas for the Universe Symphony so closely match my own compositional approach: I became Charles Ives' student.
Finally, what I term "the Life Pulse Prelude effect" is: Ives' "durational counterpoint" plus sound-mass/pulsation-mass/event-mass/rhythm-mass/melody-mass plus the phenomenological synthesis of mesmerizing melodic/rhythmic iterations and an incessant improvisatory catharsis! It works. It does, indeed, seem like the life pulse of the Universe.
(Note: For a detailed explication of my interpretation of the "life pulse" music sketches and the method of their realization for modern performance, I refer the reader to my extensive article about same published in the research edition of Percussive Notes, Vol. 23, No. 6, Sept., 1985, pp. 58-84.)
Life Pulse Prelude for live and recorded percussionists (1984/1996)
Based on sketches and plans for percussion orchestra music from a portion of Charles Ives's unfinished 'Universe Symphony' (1911-1951)
This is the second version of the Life Pulse Prelude that I have realized. Commissioned by The Percussion Group of Cincinnati, it is performed by three percussionists playing multiple percussion configurations (see Instrumentation/setup), combined with the remaining percussion parts performed by the sound diffusionist from pre-recorded tape. The first version, realized in 1984, was composed for a 20-member percussion orchestra, heard in live performance and coordinated by 20 individual tempo-tracks heard on headphones.
Since 1974, I have been involved in an ongoing project to compose a series of works based on Charles Ives' Universe Symphony, his last, most ambitious, yet uncompleted composition. What I entitle the Life Pulse Prelude is my realization of the percussion orchestra layer of the Universe Symphony which, according to Ives' memos, can be performed alone. I have studied and transcribed Ives's music and descriptions for his unfinished Universe Symphony (US) from reproductions of the extant unpublished manuscripts preserved in the Charles Ives Collection of the Music Library of Yale University, New Haven, Connecticut, USA. As a composer, I was inspired by the rich musical material found in the manuscripts and intrigued by Ives's open invitation to "somebody" in his memos to carry out his aspirations for the work: "...in case I don't get to finishing this, somebody might like to try to work out the idea...." (Memos, 1972) Ives named the prelude and the three main sections of his US: "Prelude #1"; "I--Past--from chaos, formation of the Waters and Mountains"; "II--Present--Earth and the firmament, evolution in Nature and Humanity"; and "III--Future--Heaven, the rise of all to the Spiritual", the sections respectively referred to by Ives more often as simply Section A, Section B, and Section C.
LudusFractalis was composed as part of the Cybernetic Arts Project 1984, a collaborative intermedia performance combining the creative work of five artists, produced at CEMI: Center for Experimental Music and Intermedia at North Texas State University, Denton, in October, 1984, made possible by a commissioning grant from the Inter-Arts Program of the National Endowment for the Arts. Mime Art Davis, performance artist Jerry Hunt, and composer/photographer Phil Winsor play important creative roles in the video portion, as is seen/heard. The computer music was originally performed live on the Synclavier Digital Music System, heard on tape in the present context. The spoken text declaimed by Jerry Hunt through the course of the piece is drawn in prosodic form from the writings of mathematician Benoit Mandelbrot, whose theories about natural forms and processes have been so important for much of my recent work. In Ludus, itself, seven melodic sequences are heard in montage in continuous variation, the original, intuitively composed sequences serving as models for self-similar, synthetic transformations, creating what I term "musical fractals." LudusFractalis is my second video composition. The first, Transmission One, was composed in 1969 at station KQED-TV, San Francisco, and, at the time of broadcast, was seen on television monitors as part of a concert at Mills College. Ludus is meant for a new performance context: personal viewing of a video tape on a home video system or, alternatively, in a concert setting with several monitors or large projection system.
In MONTAGE, intuitively composed themes and their computer-composed variations are heard in continuous succession, elaboration and combination. Themes I begin in succession, played alone by the violin and, as the tape first enters, in layered exposition. One by one, the themes become Variations I, then Variations II. Well into the piece, the violin presents Themes II, now with variations on tape. Variations III follows, ending the work as fragments of the final variations are played by the violin and heard on tape, both then dissolving to silence.
The computer orchestra on tape was created with digital recording of wind and string instruments, resynthesized into an ensemble of nine hybrid instruments, then scored and realized for digital synthesis on the Synclavier Digital Music System at the Center for Experimental Music and Intermedia, University of North Texas, Denton.
Themes I and Themes II are intuitively composed melodic/harmonic sequences, serving as model data for consequent computer-determined variations. The composer's program creates synthetic variations by invoking an algorithm which 1) analyzes each sequence for pitch, interval and durational content; 2) calculates and sets a frequency table of discrete relations of the sequence's musical "character set"; and 3) creates a synthetic variant of the original sequence according to probabilities of recurrence. MONTAGE was composed for and at the invitation of violinist Robert Davidovici.
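The three-step algorithm described above can be sketched as a simple probabilistic resampler; the theme data and the pitch/duration representation below are hypothetical illustrations, not material from MONTAGE or the composer's actual program:

```python
import random
from collections import Counter

def synthetic_variant(sequence, length=None, seed=None):
    """Zero-order sketch of the scheme described in the note:
    1) analyze the sequence's musical "character set" (here, pitch/duration pairs),
    2) build a frequency table of their recurrence,
    3) draw a new sequence weighted by those probabilities of recurrence."""
    rng = random.Random(seed)
    table = Counter(sequence)                  # steps 1 and 2
    events, counts = zip(*table.items())
    return rng.choices(events, weights=counts, # step 3
                       k=length or len(sequence))

# A hypothetical theme: (MIDI pitch, duration-in-beats) pairs.
theme = [(60, 1.0), (62, 0.5), (64, 0.5), (62, 0.5), (60, 1.0), (67, 2.0)]
variant = synthetic_variant(theme, seed=42)
```

A variant drawn this way reuses only events from the model sequence, with frequent events recurring proportionally more often.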
In Italian, ottuplo means eightfold. Ottuplo! (1998-2000) unfolds in four continuous inter-episodes between two string quartets--one real, one virtual. The quartets call and answer in solos, duos, and quartets (Segnali e risposti); join and resound in eight-string clusters and perfect interval sonorities (Ottacordi e ottavi); freely combine in stormy, contrapuntal flurries (Presto e libero); and conclude in change-ringing peals of string-bells, eight times eight times eight times... (Otta-dia, scampanio). The real quartet is seen, its sound amplified and diffused in the listening space. The virtual quartet is unseen, its sounds heard in three-dimensional, ambisonic image: the listener is placed virtually in the center of the four players, immersed in its sound, just as the sound materials for the piece were encoded in studio with the special ambisonic Soundfield microphone placed in the center of the quartet. This is the first known string quartet composition to combine live performers and ambisonic encoding/decoding for three-dimensional recording and performance technology. Ottuplo! was commissioned and the virtual score recorded by the Smith Quartet. Support and sponsorship for the composition of the piece has also come from the Rockefeller Foundation, USA, and The University of York, UK. In summer, 1998, Austin was awarded a composer residency at the Rockefeller Foundation's study center at Bellagio, Italy, to create the written score for the piece--hence, its inspiration from the resounding church bells, thunder storms, and lake sounds of beautiful Bellagio, heard from Studio Musica in the Villa Serbelloni, on the promontory overlooking Lake Como. In winter, 2000, Austin was invited for a residency at York as a visiting research fellow working with the extensive ambisonic research resources of the Electroacoustic Music Studio there.
Key collaborations at York in recording and research with the ambisonic materials for the piece came from faculty researcher Dave Malham and faculty composer Dr. Ambrose Field, Director of the York EMS. The ambisonic materials for the piece were recorded in the Performance Space Studio, City University, London, with Malham as chief engineer, Nye Parry assisting, and Austin as producer, made possible by the university's Electroacoustic Music Studios, Simon Emmerson, Director. Ottuplo!, in both its real and virtual incarnations, was completed during winter-spring, 2000, in the composer's studio, gaLarry, in Denton, Texas, USA.
The eleven event/complexes of the Quadrants series explore the qualities of instruments or voices combined with continuously changing sonorities heard on tape. The taped electronic music--with the use of a specially designed pulse-wave frequency divider--explores the unique character of the first 256 partials of the sub-harmonic series. Providing a continuously changing, massive sonority, its timbral and structural qualities are integrated and synchronized with the viola's amplified performance of a stream of natural harmonics in successively faster cycles, echoing through the first 16 harmonics of each string. Composed originally for violist Walter Trampler in 1973, the work remained unperformed until its premiere in 1990 by violist George Rosenbaum.
The event/complexes of the Quadrants series function to explore the aural extensities of a given space with amplified/processed instruments or voices, combined with continuously changing sonorities heard on tape. Each event/complex may be performed singly (but always with the tape) or in combination with any or all of the others, simultaneously or successively. The taped electronic music--with the use of a special pulse-wave frequency divider designed in 1972 by composer/inventor/percussionist Stanley Lunetta--explores the unique character of the first 256 partials of the sub-harmonic series. Providing its continuously changing, massive sonority, its timbral and structural qualities are integrated and synchronized with the pianist's and percussionist's amplified/delayed performance of a stream of 'super' harmonics in successively faster cycles, their pitches and rhythms derived mathematically from difference and summation tones created in combination with the electronic music's sub-harmonic partials, changing and advancing, partial by partial, every second of the piece. No. 4 was commissioned by Bowdoin College and first performed in England by pianist Stephen Montague, while No. 9 was commissioned by the Florida State Music Teachers Association and first performed by Robert McCormick. The 1994 revision of the piano music for No. 4 involves technology needed but not available in 1972: a programmable piano--today, the Yamaha Disklavier. In the second half of the original version of this piano event/complex, precisely timed, extremely fast, descending chromatic runs are played directly on the strings, creating a blurred wash of sound. In the 1994 revision--thanks to J. B. Floyd--the runs are programmed to be played by the Disklavier precisely in time in dramatically clear catalysts of sound--a sensation my ear imagined in 1972 and now hears, perfectly performed, in 1994.
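The arithmetic behind the sub-harmonic series and the difference/summation tones mentioned above can be sketched briefly; this is an illustration of the math, not of Lunetta's divider circuit, and the fundamental chosen is an arbitrary example, not a value from the score:

```python
# The sub-harmonic series divides a fundamental rather than multiplying it:
# partial n lies at fundamental/n.  The fundamental here is hypothetical.
fundamental = 4186.0  # Hz, an arbitrary reference frequency for illustration

subharmonics = [fundamental / n for n in range(1, 257)]  # first 256 partials

def combination_tones(f1, f2):
    """Difference and summation tones of two sounding frequencies."""
    return abs(f1 - f2), f1 + f2

# e.g. the first two sub-harmonic partials combined:
diff_tone, sum_tone = combination_tones(subharmonics[0], subharmonics[1])
```

Each successive partial lies closer to its neighbor than the last, which is what gives the series its dense, continuously thickening sonority as partials accumulate.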
The taped electronic music, originally for four-track tape, has been reprocessed and revised as well for this stereo recording, using room simulation programs by Doug Scott, implemented in Paul Lansky's cmix on a NeXT computer.
Redux (2007) re-visits and transforms my own violin music from the 'seventies, 'eighties, and 'nineties via both the computer music convolution process and the exemplary playing/recording of sequences from these pieces by violinist Patricia Strange, for whom the piece is composed. Redux is the fifth in a current series of pieces for virtuoso performers and octophonic computer music, which I have composed since 2001. But Redux will be different from the previous pieces, in that it "plays" on my own previously composed music, rather than varying other "previous" composers' musics, including Purcell, Moussorgsky, Mozart, and Debussy, to be specific. For the technically oriented, the soloist's sounds are amplified, processed, and diffused in the listening space, combined with the synchronized playback of convolved, octophonic computer music heard in montage: the listener is surrounded and immersed in the live and recorded sounds.
ReduxTwo (2007) re-visits and transforms my own piano music from the 'nineties via both the computer music convolution process and the exemplary playing/recording of sequences from these pieces by pianist Joseph Kubera, for whom the piece is composed. ReduxTwo is the seventh in a current series of pieces for virtuoso performers and octophonic computer music, which I have composed since 2001. But ReduxTwo will be different from the previous pieces, in that it "plays" on my own previously composed music, rather than varying other "previous" composers' musics, including Purcell, Moussorgsky, Mozart, and Debussy, to be specific. The soloist's sounds are amplified, processed, and diffused in the listening space, combined with the synchronized playback of convolved, octophonic computer music heard in montage: the listener is surrounded and immersed in the live and recorded sounds.
ReduxThree (2010-11) re-visits and transforms my own clarinet music from the 'nineties via both the computer music convolution process and the exemplary playing/recording of sequences from these pieces by clarinetists Gerard Errante and D. Gause, for whom the piece is composed. ReduxThree is the eighth in a current series of pieces for virtuoso performers and octophonic computer music, which I have composed since 1996. ReduxThree "plays" on my own previously composed music, rather than varying other "previous" composers' musics, including Schoenberg, Purcell, Moussorgsky, Mozart, and Debussy, to be specific. The soloists' sounds are amplified, processed, and diffused in the listening space, combined with the synchronized playback of convolved computer music heard in montage: the listener is surrounded and immersed in the live and recorded sounds. The video has been created by David Stout to accompany the clarinets and the computer music.
RomaDue is an abstract theater piece. Within an explicit context the piece calls for interpretive movement and sound by both the musicians and dancers. The attitude of their improvisational interpretations is suggested by their reaction--before, during, and after a perceived event--to the tape sounds, lighting, the stage properties, the movements/sounds of other players/dancers, the audience, and the nature of the hall. The first version of this piece was composed in 1965 and called Roma: A Theater Piece in Open Style. The tape was realized at the American Academy in Rome in 1964-65. This second version adds dance to the ensemble and concatenates the original tape materials and sonic events with dynamic spatial sonic movement.
¡Rompido! (1993) means torn!, rent! Its title was inspired by actually seeing and hearing a large piece of granite torn in half...a beautiful sound...a violent act. This music I have composed for dance and granite sculpture combines live performance (optional) on the sculpture by a percussionist with computer-processed chunks of sounding granite heard from the pre-composed digital audio tape. All the sounds and combinations of sounds heard on the tape come from recordings made by me in October and November, 1993, at two exhibits of granite sculpture by Texas sculptor Jesus Bautista Moroles and at his Rockport, Texas, studio or "factory", as he sometimes calls it. In fact, the piece is formed in three continuous scenes, each a sound-montage chronicle from my three field trips to record small and large chunks of sound made in the playing of the sculpture and its creation: Scene 1, called GraniteHarp, the Moroles "Granite Landscape" installation at the Fred Jones Museum at the University of Oklahoma in Norman; Scene 2, called ThunderStone, my day of recording the chipping, wedging, polishing, sawing, drilling, and tearing of granite--these very pieces!--in Rockport; and Scene 3, called SteleMusic, the harp-like arpeggios of Moroles' "steles" in the Houston, Texas, exhibit recorded the morning of the opening day, November 19, 1993, of "Tearing Granite: Thunder in the Stone."
For the technically interested, all the tens of chunks of sound I collected were edited, processed, and combined with my Prophet-3000 digital sampler plus my NeXTstation computer, using the software Csound, Soundworks, cmix, and rt in my Denton, Texas, computer music studio, gaLarry. Copyright Borik Press 1993.
Shin-Edo is computer music on tape: excerpts from a sound-poem begun, recorded, and computer-processed at the Kunitachi College of Music Sonology Department, Tokyo, June-July, 1994. Inspired by the dynamic sound-scape and culture of Tokyo that I experienced each day, I explored and recorded those places and sounds that so heightened my sonic acuity. It is this sonic impression--by a fascinated foreigner--that I want to make into a piece about what I call this "Shin-Edo", this "new-inlet".
Realization: NeXT computer, using Vercoe's Csound, Lansky's cmix and rt software synthesis languages, Sonology Department, Kunitachi College of Music, Tokyo, Japan, and the composer's studio, gaLarry, Denton, Texas.
SinfoniaConcertante: A Mozartean Episode (1986) is modeled on the dramatic essence of its classic namesake: the interplay of the chamber orchestra and the computer music narrative; of sweet consonance and angry dissonance; of innocence and duplicity; of pleasure and sorrow. Dualities intrigue me, because they are never completely reconciled, just as polarities in the fortunes of life are never completely understood. The text for the taped narrative heard through the piece is formed from excerpts from ten letters written by Mozart from Mannheim and Paris to his father in Salzburg during a nine-month period from November 22, 1777, to July 9, 1778. The text was recorded in English translation by German actor Stefan Hurdalek, whose voice also serves as the source for all computer music heard on the tape.
The taped computer music was realized at the Center for Computer Music at Brooklyn College, City University of New York, while the composer was guest in residence from February to mid-May, 1986, at the invitation of Charles Dodge, Director of the Center. Special voice analysis/synthesis techniques, developed by composer Paul Lansky of Princeton University, were utilized to achieve the pitched, talking-singing timbres and events. The orchestral score was composed during six winter weeks in 1986 at the MacDowell Colony in New Hampshire and, later, during five spring weeks at Yaddo in New York. The piece was commissioned by the Cleveland Chamber Symphony and given its premiere in October, 1986. Notable performances since then have been presented by the Kansas City Symphony, the Memphis Symphony Chamber Orchestra, the Orchestra of Santa Fe, the Cleveland Chamber Symphony at the 1989 International Computer Music Conference, the Hoboken Chamber Orchestra, the Baltimore Symphony Orchestra, the Ft. Worth Chamber Symphony, and Camerata Zurich (1991 World Music Days). In 1988, Centaur Records released a compact disc recording of the work, included in Volume 1 of the CDCM Computer Music Series.
The Episode: Mozart writes his father of plans to leave Mannheim with his mother for Paris. Mannheim cohorts urge him to compose a sinfonia concertante for them to perform in Paris, promising that the entrepreneur, Le Gros, will surely sponsor its performance. Arriving in Paris and quickly finishing the piece for his friends, Mozart leaves it with Le Gros "to be copied", excitedly expecting its first performance. The score languishes, neglected by Le Gros. Dismayed, Mozart is certain that the Italian composer, Giovanni Cambini, has conspired with Le Gros to prevent the performance of SinfoniaConcertante (K. 297b). Later, in a chance encounter, Mozart, nevertheless, accepts an invitation from Le Gros to compose "a grand symphony" (K. 297, "Paris") for his famous Concert Spirituel series. Mozart, engrossed with his new symphony, delays writing his father the truth of his mother's death. A week later, in the midst of sudden success with his Paris Symphony, he confesses to his father that, "My mother fell asleep peacefully in the Lord." Fatalistic, he declares, "Let us therefore pray a pious Vaterunser for her soul and turn our thoughts to other matters, for there is a time for everything." Those 'other matters' betray his obsession with the public success of his new symphony: "Le Gros is so happy with the symphony that he says it is his very best." However, "the andante didn't please him", and, "in order to satisfy him, I have written a fresh one." Once more, he bends to Le Gros to curry favor. The episode ends, as Mozart consoles his father, writing, "Take comfort and pray without ceasing....This is the only consolation we have."
Singing! (1996-98), commissioned by Thomas Buckner, is a sound-portrait of the life and musical times, to date, of singer Thomas Buckner, with octophonic computer music on ADAT, interpreted as a sound-play, in three parts, by Thomas Buckner, baritone voice. Tom is an amplified-processed sound-player, interpreting and improvising with the computer music heard on octophonic tape. Singing! continues the composer's sound and theater-piece portraits. These portraits, since the first in 1966, have, in great part, been composed for and/or about individual composers and performers and, to a greater or lesser degree, have been based on recorded interviews/conversations with the portrait subject. Through this portrait, Tom's running narrative speaks of growing up as a singer, his early and continued training and influences musically, and his dedication to a career of "singing the music of my own time." Tom's narrative and all the materials for the piece were recorded by the composer in a six-hour vocalization-interview-lunch-improvisation session with Tom in his New York apartment in November, 1996. The three parts of the piece--Warmup, Lunch, and Improv--are, in fact, true to the chronology of the day's events, with the interview-narrative forming the continuity for the whole piece. Recorded materials for the piece were edited and processed in the Electroacoustic Music Studios of the University of Birmingham in England during the composer's June/July, 1997, Magistère de Bourges composer residency there. The composition was completed between November, 1997, and March, 1998, in the composer's studio, gaLarry, in Denton, Texas. Software and hardware systems used in Birmingham included Sound Designer, SoundHack, Audiosculpt, Csound, and GRM Tools, on a Macintosh computer. 
Systems used in Denton included Paul Lansky's rt and cmix and the audio software editors and 8-channel digital i/o programs developed for the Silicon Graphics O2 computer with the SGI 8-channel digital i/o PCI, along with Doug Scott's move program on a NeXTstation.
In the "Capo" section of Sonata Concertante, a complex fabric of sound events develops on tape, while the pianist plays his contrasting "theme" in double octaves, desultorily interrupted by clusters and long trills. The traditional principle of duality in the sonata form is carried out, not successively but simultaneously, the themes' contrasting natures juxtaposed. In the concluding "Cadenza" and "Coda", the pianist and tape together play huge hammerstroke chords, exchanging improvisatory gestures in rhythmic dialogue.
The computer music on tape was realized at the Center for Experimental Music and Intermedia, North Texas State University, utilizing analog-to-digital synthesis, the full 7-octave range of a Steinway grand piano resynthesized and expanded to nine octaves. To coordinate, precisely, the concerted performance of the pianist with the taped computer music, a special cue-track is provided on the playback tape, followed by the pianist during performance.
The Sonata Concertante was commissioned by Bob Houston for Sound Source and composed for its first performance by pianist Yvar Mikhashoff at the North American Festival of New Music in Buffalo, New York, in April, 1984, followed by its New York City premiere in May. Mikhashoff has since presented the work in Sweden and, most recently, at the 1986 International Festival of Experimental Music in Bourges, France. Pianist Adam Wodnicki performed the work in 1985 at CEMI and at the 1985 International Electronic Music Plus Festival held in Austin, Texas. In early 1986 it was performed in Rome in a series organized by composer James Dashow. Pianist Ellen Corver performed the work at the 1986 International Computer Music Conference at The Hague, The Netherlands. Swedish pianist Kristine Scholz performed the work at the 1987 Skinnskatteberg Elektronmusik Festival.
Sonata Concertante is discussed and analyzed in an extensive article by Thomas Clark in Perspectives of New Music, "Duality of Process and Drama in Larry Austin's Sonata Concertante," Vol. 23, No. 1, 1984. The work is available from Borik Press, Raleigh, NC.
Larry Austin's SoundPoemSet (1990-91) is computer music derived from recorded conversations, 1988-89, between the composer and compatriot musical adventurers with distinctively etched musico-technological profiles, including American experimenters Pauline Oliveros, Jerry Hunt, Morton Subotnick, and David Tudor. Each SoundPoem maps its musical form from poetry created from select aphorisms uttered spontaneously during each conversation. Aphoristic utterances were extracted, analyzed, transformed and synthesized with "spectral modeling synthesis" (SMS), a sound analysis synthesis technique based on "deterministic plus stochastic decomposition", developed at CCRMA by Xavier Serra and Julius Smith. In residence at CCRMA in 1989 and invited by Serra to explore the compositional potential of SMS, the composer undertook a marathon of intensive experiments, completing the 107-minute Transmission Two: The Great Excursion (1990), the first composition to use the SMS technique with Serra's LISP program SANSY. SMS materials from TT:TGE are colored in SoundPoemSet with comb filters, "unreal" reverb, and alpass networks using csound and cmix, running on a NeXTstation in the composer's computer music studio, gaLarry. SoundPoemSet is scheduled for release in 1993 on compact disc as part of the "Composers in the Computer Age--II", Vol. 16 of the CDCM Computer Music Series on Centaur Records.
The idea for composing *Stars grew out of my fascination with the symbolic patterning by the ancients of eighty of the eighty-eight constellations seen in the Northern and Southern heavens. I devised a system to derive melodic sequences and unique timbral qualities from each constellation, combined in a composition to form the "future" music tape and digital synthesizer section of my orchestral work, Phantasmagoria (1974-81). It was the un-patterned constellations, as well as the total number of stars visible to our eyes--1,917 was my count!--that continued to intrigue me. In *Stars, exactly 1,917 unique "star-timbres" illuminate a continuously evolving "star-drone" to create my fanciful and, I feel, musical heavens.
While each star-timbre had its distinct set of random values controlling the frequency modulation ramps, amplitude and side-band functions, all shared one of two fm indices. The total duration of each star-timbre (from initiation to disappearance) ranged from as brief as a millisecond to as long as a few seconds, these events also controlled by random selection. Such zero-order stochastic process was carefully skewed, however, by continual and very gradual time-unit fluctuations, slowing and accelerating the "beat" over relatively long time spans and creating what the composer feels are intriguing musical coherences.
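The random parameterization described above might be sketched as follows; every numeric range, the two index values, and the field names are assumptions for illustration, not values from *Stars or the Synclavier realization:

```python
import random

rng = random.Random(0)

# The note says each star-timbre uses one of two fm indices;
# these particular values are hypothetical.
FM_INDICES = (2.0, 8.0)

def star_timbre():
    """One randomly parameterized 'star-timbre' event: random fm ramp
    and amplitude values, one of two indices, and a total duration from
    about a millisecond to a few seconds, all chosen by random selection."""
    return {
        "fm_index": rng.choice(FM_INDICES),
        "carrier_hz": rng.uniform(50.0, 8000.0),
        "ramp": (rng.random(), rng.random()),  # start/end modulation depth
        "duration_s": rng.uniform(0.001, 3.0),
    }

# One event per star visible to the naked eye, per the composer's count.
stars = [star_timbre() for _ in range(1917)]
```

The "skewing" step, gradually slowing and accelerating the beat over long spans, would then be applied to the scheduling of these events rather than to the events themselves.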
*Stars, then, is computer music in both the way its compositional elements are formed and in its digital synthesis: in the first instance, the tempo-skewing of randomly generated parameters of the star-timbres, and, in the second instance, the control of the digital sound output of the Synclavier II digital music system. *Stars was completed in April, 1982, at the Electronic Music Center (now CEMI) at the University of North Texas.
Tableaux: Convolutions on a Theme (2003), for alto saxophone and octophonic computer music, was commissioned for performance by saxophonist and Distinguished Research Professor Stephen Duke with funding from the Graduate School of Northern Illinois University. The piece is an extended, single-movement work, unfolding in three continuous sections: convolutions, improvisations, and remixes. The soloist's sounds are amplified, processed, and diffused in the listening space, combined with the synchronized playback of an octophonic ADAT tape (optionally with a computer) of the computer music heard in a three-dimensional, octophonic montage: the listener is surrounded and immersed in the live and recorded sounds.
All of the sonic materials for Tableaux originated from Duke's saxophone recordings made at a spring, 2003, session produced by the composer at DRM Productions, Dallas, Texas, with David Rosenblad as recording engineer. Through a process of pairing Duke's recordings, using one sound recording as the "primary input" file and a second recording as the "impulse response" file, the "convolution" process multiplies the waveform spectra of the two files together, producing a third, hybrid soundfile. The effect is a type of cross-synthesis, in which the common frequencies are reinforced. To the composer's ears, provocatively beautiful, ethereal sounds result: tableaux sonore...sonic images...passing before our ears.
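The pairing process described above, multiplying the spectra of a "primary input" file and an "impulse response" file, can be sketched with an FFT; the toy sine signals below merely stand in for the saxophone recordings:

```python
import numpy as np

def convolve_sounds(primary, impulse):
    """Spectral convolution sketch: multiply the two files' spectra and
    transform back, yielding a hybrid soundfile in which frequencies
    common to both inputs are reinforced."""
    n = len(primary) + len(impulse) - 1          # full convolution length
    spectrum = np.fft.rfft(primary, n) * np.fft.rfft(impulse, n)
    return np.fft.irfft(spectrum, n)

# Two toy "recordings" sharing a 440 Hz component.
sr = 2000
t = np.arange(sr) / sr
a = np.sin(2 * np.pi * 440 * t)
b = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
hybrid = convolve_sounds(a, b)
```

Because multiplication in the frequency domain equals convolution in the time domain, the result is identical to directly convolving the two waveforms, which is why the shared 440 Hz energy dominates the hybrid.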
The "convolutions on a theme" are all based on a familiar theme and its harmonization, composed originally as part of a 19th-century composer's piano work and later brilliantly orchestrated by a 20th-century composer. Now, a 21st-century composer elaborates. The conception and realization of Tableaux is orchestral, in great part because the hybrid, convolved sounds and the way they emerge in the texture of the piece are like flutes, trumpets, oboes, and strings interacting and gently resounding. The saxophonist blends his/her lines and sounds with the computer music, whose essences derive from the sixteen composed and transcribed sequences heard in combination and succession through the course of the piece. Tableaux was completed during spring-through-fall, 2003, in the composer's studio, gaLarry, in Denton, Texas, USA.
Tableaux Vivants was composed at Graphic Studio in Tampa, Florida, in 1973. It is a set of four multi-media lithographs, blending music notation and drawings, what my artist-collaborator Charles Ringness and I termed "sonographs." The idea was to integrate Ringness' highly symbolic drawings with my own intention to create a performable composition in a viable graphic art context. The first drawing completed by Ringness provided an area within the format where I could inscribe the music score. After drying the print, Ringness then rubbed a brown pigment over the entire surface, giving the final finish a brown sepia appearance. The same process was followed with the remaining three sonographs. Our sonographs have, indeed, been exhibited in galleries and performed in concert halls. The present tape realization of the sonographs heard with the musicians' sequences was created in 1981 with a Synclavier Digital Music System at the Center for Experimental Music and Intermedia at North Texas State University in Denton.
Tárogató! was commissioned by concert clarinetist and tárogató performer Esther Lamneck. It is scored for solo tárogató, dancer, and octophonic computer music on tape. The tárogató is a Hungarian folk woodwind (single-reed) instrument developed during the 19th century, used as a dance instrument and, outdoors, for rallying troops in battle. Much of the material for both the score and the pre-recorded, synthesized/processed computer music derives from recordings of Lamneck's sounds and free improvisations, created in a two-hour collaborative session with the composer in April, 1997, in New York. In performance, Lamneck plays with the computer music, following a through-composed, precisely timed score, which details the music she reads and improvises with the taped computer music; the nature and notation of the events and music heard on the tape; and the patternings that inform and serve as models for her improvisations.
The montage of sounds and music heard on the tape was combined, mixed, processed, and married between March and June, 1998, at the composer's computer music studio, gaLarry, in Denton, Texas, using an SGI O2 computer and the software synthesis languages csound and cmix. The performance part/score was completed in July, 1998, during a month-long composer residency at the Rockefeller Foundation's Bellagio Study and Conference Center in Bellagio, Italy. Computer music systems used included Paul Lansky's rt and cmix, the audio software editors and 8-channel digital i/o programs developed for the Silicon Graphics O2 computer with the SGI 8-channel digital i/o PCI, and Doug Scott's move program on a NeXTstation.
Threnos (2001-2002) is a lament, composed in memory of the victims of the terrorist attacks of September 11, 2001. The piece is a single movement work, unfolding in three continuous sections, heard between two bass clarinet groupings--one real (live), one virtual (computer). The real solo or ensemble of 2, 4, or 8 bass clarinet(s) is seen, its sound(s) amplified, processed, and diffused in the listening space. The virtual ensemble is unseen, its sounds heard in a three-dimensional, octophonic montage: the listener is surrounded and immersed in the real and virtual sounds combined.
Threnos was commissioned by bass clarinetist Michael Lowenstern, who also recorded the materials for the virtual, octophonic computer music. On August 26, 2001, after hearing Michael perform an extraordinary concert of his own music at the Ought-One Festival of Non-Pop Music in Montpelier, Vermont, I exclaimed to him, "I want to write a piece for you and your instrument, combined with computer music!" Two days later, by email, we agreed on a commission for me to compose the piece and arranged to meet in December in Brooklyn to record its basic materials. Michael and his family live in Brooklyn, NY, across the East River from Manhattan. I live in Texas. On the following September 11, as the morning's horrible events were unfolding on television, I emailed Michael, "You're ok, right? Just came to me that your address is not so far from ground zero!!!! Thinking about your piece." He replied, "Yes we're fine. We're across the river. No more view though..." After that, the conception of my piece for Michael became inextricably an expression of mourning for the victims of that infamous tragedy. And the model and inspiration for my lament was Dido's lamento aria for Aeneas from Henry Purcell's 17th century opera, Dido and Aeneas. In December, 2001, Michael and I recorded with bass clarinet the aria and all four parts of the accompaniment. These sequences were later combined and convolved in pairs of lines with my computer music system, forming the basic sound material for the octophonic computer music on tape. One doesn't literally hear the Purcell lament; instead, one hears a quiet, eerie, timbral montage drifting in and out of the sounding texture of bass clarinet lines and sounds. Threnos, in both its real and virtual incarnations, was completed during winter-spring-summer, 2002, in the composer's studio, gaLarry, in Denton, Texas, USA.
The premiere performance of Larry Austin's 107-minute composition for radio, Transmission Two: The Great Excursion (TT:TGE), was broadcast on KPFA-FM, Berkeley, California, in a live, public performance from Hertz Hall on the campus of the University of California, Berkeley, at 8 p.m., Monday, February 26, 1990. Tonight's performance is being broadcast live on KNTU-FM, Denton, Texas. The work is scored for chorus, computer music ensemble, and recorded dialogue. The recorded dialogue heard throughout the work chronicles episodes from conversations recorded between Austin and fellow composers Robert Ashley, John Chowning, Jerry Hunt, Pauline Oliveros, Morton Subotnick, and David Tudor and computer music scientist Max Mathews. They reflect on their own work as composers and music experimenters through the last thirty years of dramatic technological developments in the way music is created and presented.
Austin describes his work as follows: "The premiere broadcast of TT:TGE was the culmination of two years of planning, recording, modeling, computing, and composing the materials for what I call my 'sound movie', or the genre that has come to be called a 'Hörspiel' or 'ear play'. Central to the narrative continuity of TT:TGE are seven episodes, each devoted to a dialogue between me and one of seven longtime friends and fellow composers--musical adventurers with highly original musico-technological profiles. Key excerpts from over eleven hours of recorded conversations are distilled to just over 100 minutes. The Prelude and Episodes 1-4 (Chowning, Hunt, Ashley, Oliveros) form Part I; Episodes 5-7 (Tudor, Mathews, Subotnick) and the Postlude form Part II."
Austin continues: "Central to the musico-dramatic continuity of TT:TGE are choral settings and computer music transformations of select recorded aphorisms uttered spontaneously during the conversations, essences of the protagonists' aesthetics and compositional approaches. The role of the chorus in my 'sound movie' is as choric commentator, not unlike its role in classical Greek theater: musing poetically, reflecting philosophically, interjecting assertively, commenting amusedly, mocking 'rap-ingly', and interpreting between the protagonists of the ongoing dialogue and the audience."
"These aphorisms, with tens more culled from the dialogue, created rich material for a wide range of sonic transformations made possible by the analysis and resynthesis of the voiced sounds with a unique computer music system--SANSY: Sound Analysis System, a specialized analysis/resynthesis/transformation program written for the Symbolics LISP Machine. SANSY was developed at Stanford's Center for Computer Research in Music and Acoustics by Dr. Xavier Serra. As this experimental system was being perfected and tested in 1989, I was invited by CCRMA Director John Chowning and Serra to explore SANSY in the creation of TT:TGE during my composer residency there from May to July, 1989, and again in December, 1989. As such, TT:TGE is the first piece created with SANSY. What I believe to be the unique sonic and musical results 'speak for themselves' through the piece."
"These transformed sonic aphorisms are heard as musical streams of events in diverse timbral/textural combination performed by the three musicians of the computer music ensemble with unique computer music instruments, 'controllers' of two Prophet-3000 16-bit digital 'samplers'. One of the controller-instruments is the Roland A-80 MIDI Digital Keyboard, the second the KAT Digital Percussion Keyboard. Through programming the protocol of MIDI (Musical Instrument Digital Interface) interaction between the controller-instruments and the hundreds of sonic events stored on the Prophet-3000 samplers, the musicians play, process, and layer the sonic aphorisms throughout the performance space."
"I want to thank all those who so generously contributed to the creation and production of TT:TGE, including Charles Amirkhanian, Philip Brett, Chris Chafe, Rick Chatham, Thomas Clark, John Chowning, Richard Friedman, Tony Gnazzo, Jay Kadis, Max Mathews, Randall Packer, Sammy Saul, Rollie Schafer, Douglas Scott, Xavier Serra, Lloyd Sitkoff, Tovar, Todd Winkler, Patte Wood, the CEMI staff, and, of course, the protagonists of my sound movie, Ashley, Chowning, Hunt, Mathews, Oliveros, Subotnick, and Tudor."
"TT:TGE is dedicated to my dear wife, Edna, whose devotion to our family and to me and my music has nurtured and sustained us through our own great excursion of the past 36 years."
Charles Ives's Universe Symphony (1911-51) as realized and completed (1974-93) by Larry Austin for multiple orchestras, in three continuous sections:
PAST--from chaos, formation of the Waters and Mountains
PRESENT--Earth and the firmament, evolution in Nature and Humanity
FUTURE--Heaven, the rise of all to the Spiritual
Premiere performance, January 28, 1994
Premiere recording, January 29, 1994
Cincinnati Philharmonia
Gerhard Samuel, conductor
Permission from the American Academy and National Institute of Arts and Letters, owners of the unpublished manuscripts of Charles Ives's "Universe Symphony", the Charles Ives Society, and the Yale University Music Library to derive materials for realization from the sketches is gratefully acknowledged.
Background: The Universe Symphony was American composer Charles Ives's last, most ambitious, yet uncompleted composition. Sketches for what would have been his fifth symphony were begun in 1911 (or possibly as early as 1908) and continued until 1915; work was resumed in 1927 and 1928, with "...a few notes added from time to time..." (Cowell, 1955) by Ives up until three years before his death in 1954. There are 36 extant sketch pages which I believe are definitely part of the Universe Symphony, including completed music, musical sketches and fragments, and detailed narrative and graphic descriptions concerning the form, continuity, and transcendental aesthetic of the work. Of its creation motif, Ives wrote on a sketch page: "The Universe symphony is an attempt in tones, every form and position known or unknown (to me) as the eternities are unmeasured, as the source of universal substances are unknown, the earth, the waters, the stars, the ether, yet these elements as man can touch them with hand and microscope and labeled as chemicals and atoms, as the eternal motion, life of things and man, their bulk, their destiny. They are not single and exclusive strains, but incessant myriads, for ages ever and always changing, growing, but in ages ever always a permanence--in humans of the earth for a man's lifetime, of life and death and future life--the only known is the unknown, the only hope of humanity is the unseen spirit--what can't be done but what reaching out to do (as we feel like trying it) is to cast eternal history, the physical universe of all humanity past, present and future, physical and spiritual, to cast, then, a 'universe of tones'." (Kirkpatrick, ms. #1852)
I began in 1974 to transcribe the musical material and to study Ives's plan for the Universe Symphony from reproductions of the extant unpublished manuscripts in the Charles Ives Archives of the Music Library of Yale University. I was inspired not only by the rich musical material found in the sketches for the Universe Symphony but also by Ives's open invitation to other composers in his Memos to expand on and even to carry out his aspirations for the work: "...in case I don't get to finishing this, somebody might like to try to work out the idea..." Since 1974, I have completed four extended compositions based on distinct orchestral strata in Ives's US material: First Fantasy on Ives's Universe Symphony: the Earth (1975), for two brass quintets, narrator, and tape; Second Fantasy on Ives's Universe: the Heavens (1976), for clarinet, viola, keyboards, percussion, and tape; Phantasmagoria: Fantasies on Ives's Universe Symphony (1977, revised 1981), for orchestra, narrator, digital synthesizer, and tape; and Life Pulse Prelude (1974-84), for 20-member percussion orchestra. With the completion and performance of these works, I have since worked to incorporate the material and performance techniques developed for these pieces into what has now eventuated in this composed realization of the entire Universe Symphony, certainly Ives's most ambitious and, I believe, his most compelling and visionary work.
The Sketches: Four types of compositional material are found in Ives's sketches for his Universe Symphony: 1) virtually complete scoring (except for details of orchestration, dynamics, articulation, and phrasing); 2) incomplete scoring; 3) virtually complete formal, structural, and aesthetic descriptions of the nature and technical specifications of the work; and 4) brief and often fragmentary musical and textual sketches exemplifying particular aspects or techniques that Ives was conceiving for the work. My intent in transcribing and interpreting Ives's sketches has been to realize and complete both Ives's explicit and implicit compositional, formal, and aesthetic intent for the work. Hence, to the extent intended and possible, I have meant this realization and completion of Ives's Universe Symphony to be experienced and appreciated in performance as a 100% Ives composition.
Validity: From my study and performance experience with Ives's US material, I believe strongly that the following conclusions can be well supported: 1) the instrumentation for the US is comprised of multiple orchestras (sometimes called "groups") of nominal size and primarily made up of related instruments; 2) the continuity for the work is sustained in three uninterrupted sections called "Past (A), Present (B), and Future (C)," with Ives's suggested option of preceding these sections with the ten "life pulse" percussion orchestra cycles, lasting 24 minutes; and 3) the "basic unit" tempo for the entire work is uniformly quarter-note=60 MM. (Note: For a detailed explication of my interpretation of the "life pulse" music sketches and the method of their realization for modern performance, please refer to my article about same in the research edition of Percussive Notes, Vol. 23, No. 6, Sept., 1985, pp. 58-84.)
The myriad details about instrumentation, tempo, and formal continuity in Ives's Universe Symphony have been important for me to sort out, since they have had direct bearing on my work as a composer of fantasies on the material and on the first full, completed realization of one of three macro-layers of the US, the "life pulse" music. Here, Ives calls for a 20-member percussion orchestra, each player performing in a different meter and at a different tempo, coming into metric phase every eight seconds. The other two macro-layers are called by Ives "the Heavens" and "the Earth". The four "Heavens" orchestras, each conducted in different meters and tempos by one of the four assistant conductors, are scored variously for violins, violas, high winds, and solo percussion. The two "Earth" orchestras are the "Rock formation" orchestra, scored for the brass and low winds, and the "Earth chord", scored for the 'celli and contrabasses.
Performance: In late 1974, I devised a 20-player, computer-controlled click-track system, which I used to record the performance of my realization of the first three cycles of Ives's "life pulse" percussion orchestra. That system made it possible to record, in succession, four 5-player realizations. That limited system was superseded in 1984, with the completion of my Life Pulse Prelude, by a 16-track tape machine playing, through headphones, the 12 digital synthesizer-generated prime-number-meter click tracks that enable the 20 percussionists to perform, "live", through the "life pulse" music's complete ten cycles. Now, ten years later, with the realization and completion of the entire Universe Symphony, two additional, distinct click tracks have been added for two of the "Heavens" orchestras, bringing the total of different click tracks to 14 and the number of performers following separate headphone click tracks to 25: twenty percussionists plus five conductors.
As I wrote in my 1985 article, "I believe that composers and percussionists have all, from time to time, experimented with the musical effect created by combining prime number pulses: e.g., clapping two-against-three or even three-against-four or several performers playing in different tempi, coming into phase at some agreed-upon interval. Certainly, composers with access to computer music facilities have enjoyed the ease of exploration of such fascinating rhythmic complexes. Is the notion of combining 20 [here 22!] different meters and tempos, coming into phase every eight seconds, an elementary, even primitive idea? Yes. Was Ives really serious about this experiment? Yes. Is the musical effect...special? Absolutely!"
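The eight-second phase cycle described above can be checked with a little exact arithmetic: if each player subdivides the cycle into a different prime number of pulses, the only onset all players share is the downbeat of the cycle itself. The following is an illustrative sketch (the function name and the particular prime assignments are mine, not Ives's or Austin's actual ones):

```python
from fractions import Fraction

def pulse_times(pulses_per_cycle, cycle_seconds=8):
    """Exact onset times (in seconds, as Fractions) of one player's
    pulses within a single cycle of the given length."""
    step = Fraction(cycle_seconds, pulses_per_cycle)
    return {step * i for i in range(pulses_per_cycle)}

# Give each hypothetical player a distinct prime subdivision of the
# 8-second cycle, in the spirit of the "life pulse" layers.
primes = [2, 3, 5, 7, 11, 13, 17, 19]
layers = [pulse_times(p) for p in primes]

# Because the subdivisions are distinct primes, no two layers share
# any onset except time 0: the players coincide only once per cycle.
common = set.intersection(*layers)
```

Using exact Fractions rather than floats guarantees the coincidence test is not fooled by rounding; the same arithmetic underlies why the separate click tracks can only be reconciled at the eight-second boundaries.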
"Finally, what I term 'the Life Pulse Prelude effect' is: Ives's 'durational counterpoint' plus sound-mass/pulsation-mass/event-mass/rhythm-mass/melody-mass plus the phenomenological synthesis of mesmerizing melodic/rhythmic iterations and an incessant improvisatory catharsis ('incessant myriads', Ives called it). It works. It does, indeed, seem like the life pulse of the Universe." It is, indeed, a "universe of tones".
Sponsorship: I deeply appreciate and thank Maestro Gerhard Samuel and Professor Allan Otte for their unswerving support of my work and their dedicated musical leadership of the Cincinnati Philharmonia and the College-Conservatory Percussion Ensemble in the production of this first performance and subsequent digital recording of Ives's Universe Symphony. I thank the conductors and performers of the orchestra and ensemble for the excellence of their preparation and performance. I thank the Board of Directors of the Charles Ives Society for their enlightened sponsorship and generous subvention for the production of the recording of the Universe Symphony, in particular H. Wiley Hitchcock, Chair; J. Peter Burkholder, President; Ellis Freedman, Secretary; and members Todd Vunderink and James B. Sinclair. And I thank the American Academy and National Institute of Arts and Letters; University of Cincinnati College-Conservatory of Music; Peer International, Inc; American Composers Alliance; and Centaur Records. My special thanks go to Tom Haines and Terry Pender of the University of Cincinnati, whose dedicated expertise and stamina provided the formidable technical setup for the extensive headphone click track system. Last, and most, I thank my wife, Edna, for her steadfast support and help through these years of "living with Ives".
Computer music from the first of three scenes: Richard Armstrong, English reader; Cyril Reade, French reader; Inge Gaida, German reader; Takashi Ohtsu, Japanese reader; English, French, and German readings recorded and, in part, computer-processed at the Banff Centre for the Arts, Banff, Alberta, January-March, 1994; English processed and Japanese recorded at Kunitachi College of Music Sonology Department, Tokyo, May-July, 1994.
Variations...beyond Pierrot is an extended sound-play for soprano, 5 instruments, hypermedia system, and computer music on tape, commissioned by the Canadian ensemble Thira and premiered during their 1994-95 concert season. The computer music on tape involves the recording and processing of text read in English, French, German, and Japanese. The text is taken from the 21 poems Arnold Schoenberg set for his masterwork, "Pierrot Lunaire", poems originally written by Belgian poet Albert Giraud but made famous in German translation by Hartleben, and subsequently translated into English and Japanese. The readings I have recorded, processed, and combined for the piece range from dramatic to highly stylized. Inspired by, but going beyond, Schoenberg's musical melodrama, as he described it, my piece is a kind of multi-lingual dream of essences of the poems. In its completed form, the singer will sing, speak, and speak-sing the poems in all four languages, her voice--as well as the five instruments--processed in real-time using MAX and the ISPW.
During composer Violet Archer's tenure at the University of North Texas, Denton, from 1950 to 1952, I was her student in private composition and piano lessons. As my first real composition teacher, Violet was perfect. She enthusiastically encouraged and guided my efforts, from my Sonatina for violin and piano (1950) to my Concertino for flute, trumpet, and string orchestra (1952), four pieces later: five big pieces in two years! Yes, she instilled in me, early on, a passion to invent, to explore, and to be creatively productive. What fluency and invention I have sustained through the years since then was first nurtured by her challenging model as a prolific and ingeniously inventive composer. This piano piece, Violet's Invention, is composed for her as a small token of thanks to and admiration for her in this, her 75th year. (Note: The premiere performance of Violet's Invention was presented on March 7, 1991, in Concert II of the Society of Composers, Inc., 1991 Region VI Conference, Adam Wodnicki, pianist, in Irons Recital Hall, University of Texas, Arlington, Texas.)
Violet's Invention is a canon whose pitches derive from anagrammatic extrapolations of the letters in Violet Archer's name. Form, rhythmic design and melodic/harmonic continuity were created through a "Violet Archer ordering" of virtually all of the metaphorically appropriate anagrams that can be made with the two words of her name, themselves metaphors for what I sense as the Apollonian and Dionysian sides of her nature and her music.
Performance--The indication, "chromatics exclusive," means that all chromatic alterations affect only the immediate pitch; "col pedale" indicates the pianist's use of the sustaining pedal through the course of the piece to enhance the resonance and contrapuntal quality of the presto sections, contrasted with the quietly ringing sonorities of the subito adagio sections. Only the topmost pitch of the "tr" three-note clusters in the first part of the piece is to be trilled, a half-step higher.
Williams [re]Mix[ed] (1997-2001)*, for octophonic computer music system (ADAT), based on John Cage's Williams Mix (1951-53), for eight magnetic tapes:
The Theme Restored
Six Short Variations: A-city sounds, B-country sounds, C-electronic sounds, D-manually produced sounds, E-wind produced sounds, F-small sounds
The Nth Realization
*Commissioned by the International Institute for Electroacoustic Music, Bourges, France, with sponsorship and support from the John Cage Trust and Peters Edition
The process of creating the original realization of Williams Mix, as Cage explained, involved the precise cutting/splicing of recorded sounds to create eight separate reel-to-reel, monaural, 15-ips magnetic tape masters for the 4-minute 15-second, octophonic tape piece. The 192-page score is, as Cage referred to it, a kind of "dressmaker's pattern--it literally shows where the tape shall be cut, and you lay the tape on the score itself." Cage explained further in a published transcript of a 1985 recorded conversation with author Richard Kostelanetz that "...someone else could follow that recipe, so to speak, with other sources than I had to make another mix." Later in the conversation, Kostelanetz observed, "But, as you pointed out, even though you made for posterity a score of Williams Mix for others to realize, no one's ever done it," to which Cage replied, "But it's because the manuscript is so big and so little known." (Kostelanetz, Cage Explained, Schirmer, 1996, pp. 72-75)
Intrigued by Cage's open invitation to "...follow that recipe...", I embarked on a project in summer, 1997, to create just such a new realization of, and variations on, the 192-page score of John Cage's second tape piece, Williams Mix (1951-53), the first known octophonic, surround-sound tape composition. Prefiguring the development of algorithmic composition, granular synthesis, and sound diffusion, Williams Mix was the first piece completed in the Project for Music for Magnetic Tape (1951-53), established in New York by Cage and funded by architect Paul Williams. Involved as collaborators were, first, pianist David Tudor, then composers Earle Brown, Morton Feldman, Christian Wolff, and electronic music pioneers Louis and Bebe Barron, among others. The score for the piece was completed in October, 1952, as was much of the realization itself for the eight magnetic tapes, which Cage and Earle Brown finally completed on January 16, 1953.
In early 1998 the John Cage Trust provided me with a color-xerographic copy of the 192-page score, as well as associated sketches and commentary by Cage on the compositional process involved in the original (and only) realization for eight magnetic tapes. The Trust subsequently provided me with digital tape copies of the earliest extant generation of the eight reel-to-reel masters of the piece from the Trust's Archive of Cage's works. With the score and tapes I began the restoration and analysis of the precise relation of the recorded sound events with their I Ching-determined parameters in the score. Out of this first, two-year phase came the restoration of the original eight tracks of tape, transferred to the digital, octophonic medium for playback on either computer or eight-track digital tape recorder. This newly restored Williams Mix is heard here as the first movement, The Theme Restored, of my Williams [re]Mix[ed]. Since starting my project I have also been collecting new sounds for a new recorded library of nearly 600 sounds (the actual number of different recorded sounds used in the Cage score is 350, their iterations totaling 2,128), according to Cage's six sound categories of city, country, electronic, manually produced, wind produced, and small sounds.
The final phase of my project is the design and implementation of an interactive computer music program I have named the Williams [re]Mix[er]. Its functionality is modeled on Cage's I Ching compositional processes, extrapolated and applied from my years-long analyses of Cage's score, sketches, and tapes for Williams Mix, as well as his writings and recorded interviews about the piece and his compositional method. In fact, the Six Short Variations and The Nth Realization heard here are the very latest computer-generated output of the Williams [re]Mix[er]. What took Cage and his collaborators months and months of recordings, coin tosses, notation, and thousands of small pieces of tape spliced together to complete the first realization of the Williams Mix score is accomplished--after collecting the recordings and interacting with the program--in only a few minutes of computation time. Indeed, the default settings I have used in designing the Williams [re]Mix[er] are Cage's own parameters for the piece's structure and morphology of sound/silence events. On the last page of the score for Williams Mix, Cage inscribed, "(4 min. 15 sec. +) End 1st Part. N.Y.C. Oct. '52 Splicing finished Jan. 16, 1953." Dare I imagine that John's spirit is slyly laughing now, asking the oracle, "Is this the 2nd Part?"
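The coin-toss chance operations that such a program models can be sketched generically. The traditional three-coin I Ching casting, which Cage used throughout this period, sums three coin tosses per line (heads=3, tails=2) over six lines to select one of 64 hexagrams; a hexagram number can then index a 64-cell chart of parameters. This is an illustration of the oracle procedure only, not the Williams [re]Mix[er]'s actual internals; the function names and chart lookup are hypothetical:

```python
import random

def hexagram(rng=random):
    """Cast one I Ching hexagram by the three-coin method: each of the
    six lines is the sum of three coin tosses (heads=3, tails=2),
    giving 6, 7, 8, or 9; odd sums are yang (1), even sums yin (0)."""
    lines = []
    for _ in range(6):
        total = sum(rng.choice((2, 3)) for _ in range(3))
        lines.append(total % 2)  # 1 = yang, 0 = yin
    # Read the six binary lines as a hexagram number from 1 to 64.
    return 1 + sum(bit << i for i, bit in enumerate(lines))

def choose(chart, rng=random):
    """Use one hexagram cast to pick an entry from a 64-cell chart,
    e.g. a chart of sound categories or splice parameters."""
    return chart[(hexagram(rng) - 1) % len(chart)]
```

Casting one hexagram per decision (sound source, duration, splice shape, and so on) is what made the original realization so laborious by hand, and why a program can generate a complete event list in minutes.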