The Electronic Century Part I: Beginnings
Electronic Musician
Feb 1, 2000 Joel Chadabe
As we enter the 21st century, electronic music is fast approaching its 100th anniversary. This is a good time to look at our roots and get to know how we came to be where we are. This is the first in a series of four articles in which EM explores the instruments, artistic ideas, business concepts, musicians and entrepreneurs, and technical breakthroughs of the century—from the first technological achievements to the synthesizers of tomorrow.
The focus throughout the series is on the technologies that have been used by musicians to expand on the resources available in traditional, acoustic instruments. Where appropriate, each article will also document important musical compositions that have employed these technologies. There's a rich and deep tradition to uncover, so let's begin our journey!
When did electronic music start? It's a question often asked. Was it in 1759, in France, when Jean-Baptiste de La Borde built the Clavecin Electrique, a keyboard instrument that employed static electrical charges to cause small metal clappers to hit bells? Was it in 1874, in the United States, when Elisha Gray invented the Musical Telegraph? In my view, these one-of-a-kind experimental devices, and many others that were built during the 19th century, were merely setting the stage for instruments to come. It is widely agreed, then, that electronic music began at the turn of the 20th century.
Thaddeus Cahill
As the 19th century came to a close, electricity was not yet widely available. Automobiles were rare. Telephone companies were just beginning to lay their cables up and down city streets, and Thaddeus Cahill, a lawyer and entrepreneur in Washington, D.C., had an idea.
Cahill's idea was to build an electronic musical instrument and use it to broadcast music through telephone lines into homes, restaurants, and hotels. In an age when mass musical media such as tapes and discs did not exist, Cahill's Telharmonium was viewed by many as a major innovation in the distribution of music. (See the sidebar "For Your Reading and Viewing Pleasure.")
Events unfolded quickly. In 1897, Cahill was granted his first patent for "The Art of and Apparatus for Generating and Distributing Music Electronically." In 1898, he began to build the first version of the instrument that would later be called the Telharmonium. Cahill found financial backers in 1901, and the New England Electric Music Company was formed. In 1902, the company leased factory space in Holyoke, Massachusetts, and Cahill began to build an improved version of the Telharmonium. In 1905, the New England Electric Music Company signed an agreement with the New York Telephone Company to lay special cables for the transmission of Telharmonium music throughout New York City.
Opening Night
In 1906, the Telharmonium was dismantled and transported to New York City, where it was reassembled in the newly established Telharmonic Hall at 39th Street and Broadway (see Fig. 1). The instrument weighed approximately 200 tons and had to be transported from Holyoke in more than 12 railway boxcars.
The Telharmonium was played by two performers seated at a two-keyboard console that was installed on the ground floor. The sound-generating method was additive synthesis, accomplished by alternating-current dynamos, which were installed in the basement along with the switching system, transformers, and other electrical devices. Sine waves were generated by toothed wheels rotating near inductor coils. As a tooth on the turning wheel came closer to the coil, the voltage in the coil would rise, and then the voltage would dip as a gap between teeth passed the coil. Different wheels produced different harmonics, as the number of teeth on a wheel determined the frequency of the resulting waveform.
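The tonewheel principle translates into a few lines of modern code. Here is a minimal sketch in Python; the teeth counts, rotation speed, and harmonic amplitudes are purely illustrative, not Cahill's actual specifications.

```python
import math

def tonewheel_freq(teeth, revs_per_sec):
    # One voltage cycle for each tooth that passes the coil:
    # frequency = teeth on the wheel x revolutions per second.
    return teeth * revs_per_sec

def additive_sample(t, revs_per_sec, wheels):
    # Sum the sine waves of several wheels, much as the Telharmonium's
    # switching system combined harmonics into a single tone.
    return sum(amp * math.sin(2 * math.pi * tonewheel_freq(teeth, revs_per_sec) * t)
               for teeth, amp in wheels)

# Hypothetical wheels: 4, 8, and 12 teeth spinning at 55 revolutions
# per second give a 220 Hz fundamental plus its 2nd and 3rd harmonics.
wheels = [(4, 1.0), (8, 0.5), (12, 0.25)]
sample = additive_sample(0.001, 55, wheels)
```

Hammond's tonewheel organs, discussed later in this article, relied on the same wheel-count arithmetic at a far smaller physical scale.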
FIG. 2: The electronic instrument invented by Leon Theremin can be heard on various recordings even today.
Telharmonic Hall was opened to the public and press on September 26, 1906. The first broadcast to a restaurant was on November 9, 1906, to the Cafe Martin, on 26th Street between Fifth Avenue and Broadway. At the restaurant, romantic couples seated at tables were treated to Telharmonic sounds through special loudspeakers concealed among the plants on their tables. It was a festive moment.
But then the troubles began. The broadcasts of Telharmonically generated Rossini overtures, through the cables laid by the New York Telephone Company, interfered with telephone conversations. This led the telephone company to terminate its agreement to lay cables, and a crisis ensued. Cahill's business colleagues reacted by forming the New York Cahill Telharmonic Company and seeking a franchise from New York City to lay its own cables. But in the meantime, there were no cables. And without cables, no sounds were broadcast. Without sounds, there were no subscribers to the service. Without subscribers, there was no business. The doors at Telharmonic Hall were soon closed.
Cahill remained undaunted and would not admit defeat. He shipped the Telharmonium back to Holyoke, took control of the company, and made a valiant attempt at a comeback with a third and improved model. Finally, in 1911, the franchise to lay cables was granted. Unfortunately, by then it was too late. The time for the Telharmonium had passed. Other instruments and technologies were capturing the public's attention, and the Telharmonium was no longer newsworthy. In 1914, the New York Cahill Telharmonic Company declared bankruptcy.
It was a sad ending. As both engineer and businessman, Cahill had taken a visionary idea to its limit. He failed because his idea required a technology that was simply not available at the time. The technology did exist, however, for the next major electronic musical instrument, invented just a few years later.
The Magical Instrument
Although the theremin has never sold in huge numbers to a mass market, it was by all other measures a resounding success. The story of the theremin and its inventor, Leon Theremin, is a tale of political intrigue as well as musical invention.
In 1920, while still an engineering student in Moscow, Theremin built a very unusual instrument and demonstrated it to his fellow students. It was a box with two antennas, one extending vertically from the top, the other projecting horizontally from the side (see Fig. 2). Theremin played his instrument by moving his hands in the air. He moved one hand relative to the vertical antenna to control pitch, and the other hand relative to the horizontal antenna to control volume.
At that time, the Russian government was making a major effort to introduce electricity throughout Russia. Since Theremin's instrument was electronic, it attracted attention. After presenting his instrument before a group of Soviet scientists in Moscow in 1921, Theremin was invited to demonstrate it for Communist leader Vladimir Lenin. He carried his instrument to Lenin's office and performed. Lenin then played it himself. As Theremin later recalled, Lenin had a musical ear.
It is reasonable to assume that the meeting in Lenin's office was the beginning of Theremin's involvement as an ancillary contact for the NKVD, the precursor of the KGB. Theremin was given a travel grant to demonstrate his instrument throughout Russia. In 1927, he received support in a very successful concert tour throughout Europe. When he arrived in New York in December, Theremin was welcomed as a celebrity.
Theremin stayed in New York for ten eventful years. He met Clara Rockmore, who became the first and best-known theremin performer. RCA produced theremins for a short time. Leon Theremin found patrons, built instruments himself, performed, worked with others in performance, and married. But he carried out his tasks for the NKVD unsatisfactorily.
In 1938, probably judging that he would be of more use in Moscow than in New York, the NKVD kidnapped Theremin and returned him to Russia. During World War II, Theremin worked on radar. In 1947, after the war, he developed a bugging device for what had become the KGB and, as a reward for his work, received the Stalin prize of 100,000 rubles. In 1991, at the age of 95, he returned to the United States for a brief visit, during which he played a concert at Stanford University and met with old friends. Leon Theremin died in 1993.
Sliding Pitches
Theremin wasn't alone in having the idea of a keyboardless device that could play pitches between the normal notes of a scale. If the theremin had been invented in the MIDI era, it would have been called an alternate controller. And although many keyboard instruments were built during Theremin's early years (including organs of many different types, shapes, and sizes), some of the most interesting instruments, among them the Trautonium and Ondes Martenot, were conceived without a keyboard in mind.
The Trautonium, developed in 1928 by Friedrich Trautwein in Berlin, was something like a horizontal metal violin, played by pressing a wire against a metal bar much as a violin string is pressed against a fingerboard. A second metal bar was used to control the volume and articulation of each note, and timbre was chosen by manipulating an independent bank of switches. Oskar Sala, who studied with Trautwein in Berlin, developed a two-manual version of the original instrument and called it the Mixturtrautonium. Sala was particularly interested in film music and used the device to compose music and sound effects for films, including some sound effects for Alfred Hitchcock's The Birds.
Maurice Martenot developed the Ondes Martenot in 1928 in Paris. In the first version of the instrument, Martenot played it by pulling a ribbon that was attached to a ring placed on his finger, so that as he pulled the ribbon, the pitch changed in a continuous glissando. While he pulled the ribbon, he used his left hand to vary volume and choose timbre settings from a bank of switches. Responding to requests, he later incorporated a keyboard in the instrument. He also added a knee-operated lever, placed under the keyboard, by which a performer could control continuous timbre change.
Neither Trautwein nor Martenot was a businessman. They were inventors. They designed and built their instruments without market analysis, without plans for mass production, and without a business strategy for public success. The Ondes Martenot, more than the Trautonium, did have buyers, but it was manufactured and purchased on a one-by-one basis.
By 1930, the field of electronic musical instruments seemed to have promise, though yet unfulfilled. One major business had failed, one innovative instrument had become well known, a few instruments had small followings, and many others had come and gone without any general public awareness.
Was there a large market for electronic musical instruments? Yes. And it was about to break open.
Commercial Success
Inventor Laurens Hammond designed and manufactured a variety of instruments: clocks, an automatic shuffling bridge table, and eyeglasses for viewing 3-D film. Then, in 1933, he bought a used piano and began to design an electronic organ.
FIG. 3: Hugh Le Caine with his Electronic Sackbut in 1954. Despite its technological innovation, it never reached a mass market.
Unlike Theremin, Trautwein, and Martenot—and the other electronic-instrument inventors who were motivated by the adventure of invention and a fascination with discovering new ways to make music—Hammond was motivated by profit through sales. His goal was to sell organs to a mass market. Like most businesspeople with a similar goal, he approached the problems of design, manufacture, marketing, and sales with a cool-headed eye toward reducing expenses and increasing revenue.
The designs for his organs reflected economy in manufacture. For example, after analyzing concave pedalboards in other organs, Hammond leveled the pedalboard in his design and omitted the pedals that were played the least often. The sounds in his organ were generated by additive-synthesis tonewheels that were refinements of the mechanisms that Cahill had used in the Telharmonium.
The Model A organ appeared in June 1935. Hammond's marketing was pervasive and intense, initially aimed at churches throughout the country. But his organ's distinctive sound was soon found to have just the right quality for jazz and blues—and eventually for rock. Many different models followed, with assorted variations and improvements, to satisfy customers' varied needs. The Hammond B-3, introduced in 1954, has achieved legendary status in the music world. Especially when paired with a Leslie rotary speaker, the B-3 has brought tears of joy to the eyes of many musicians.
Hammond's organ was a major success. It was everywhere. And it demonstrated the existence of a mass market for electronic instruments. But it was limited in the variety of sounds it could produce. From the perspective of a musical-instrument inventor, there was a lot of work to be done.
The Electronic Sackbut
During World War II, Hugh Le Caine worked on microwave transmission at the Canadian National Research Council in Ottawa. In a more relaxed period following the war, he pursued a secret life. Working at home in the evenings, he was building an electronic musical instrument.
In 1948, Le Caine finished a working prototype of what he called the "Electronic Sackbut," a precursor to the voltage-controlled synthesizers to come in the 1960s (see Fig. 3). The Sackbut was capable of great performance nuance, with keys that were sensitive to sideways pressure to change pitch. One note could slide into another, vibrato could be performed by wiggling a finger side to side, or notes could be bent as far as an octave away from the basic pitch. Vertical pressure on a key controlled volume such that gradual attacks could be made. Even more interesting, Le Caine added mechanisms that introduced irregularities into the tones, such as breath sounds, buzzing, or raspiness, to enhance what he called the "monotonous purity" of electronic tones.
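Le Caine's control mapping can be summarized as two functions of key pressure. The sketch below is a loose model of the behavior just described: the octave bend range comes from the text above, while the numeric scales and function name are assumptions for illustration.

```python
def sackbut_voice(base_freq, sideways, vertical):
    """Loose model of the Sackbut's key response (ranges assumed).
    sideways in [-1, 1]: sideways key pressure bends pitch up to an
    octave in either direction; wiggling it back and forth is vibrato.
    vertical in [0, 1]: vertical key pressure controls volume,
    allowing gradual attacks."""
    pitch = base_freq * (2.0 ** sideways)   # +/- one octave bend
    volume = max(0.0, min(1.0, vertical))   # clamp to a usable range
    return pitch, volume

# Full sideways pressure bends A440 up an octave at half volume.
pitch, volume = sackbut_voice(440.0, 1.0, 0.5)
```

The point of the model is that both pitch and volume respond continuously to pressure, which is exactly what made the Sackbut a precursor of the voltage-controlled synthesizer.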
Following a public presentation of the Sackbut and many lectures and demonstrations, Canada's National Research Council established a studio for Le Caine. The studio enabled him to develop electronic musical instruments for manufacture by Canadian companies. This was an affirmation that an electronic-music market truly existed and that this market was beginning to open up.
The RCA Synthesizer
FIG. 4: Renowned classical composer Milton Babbitt mastered the complex workings of the RCA Mark II synthesizer to produce some of the most significant electro-acoustic works of this century.
The next major electronic musical device to come along focused on the ability to make any sound. RCA's concept was to develop an instrument that could substitute for a studio orchestra. The RCA Mark II Electronic Music Synthesizer was a step in that direction, built by Harry Olson and Herbert Belar at RCA's Sarnoff Laboratories in Princeton, New Jersey, and finished in 1957.
The Mark II contained 750 vacuum tubes. It covered an entire wall of a room, horizontally and vertically. It was, in fact, a punched-paper-tape reader that controlled an analog synthesizer. Information was input by using a typewriterlike device to punch holes in a paper roll. The paper roll was then passed through a reader and read by contacts between metal brushes that touched through the holes, thereby closing switches and causing the appropriate machine processes to start or stop.
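The hole-closes-a-switch scheme maps naturally onto a row of bits. The sketch below models one row of the paper roll; the column meanings are invented for illustration, and the Mark II's actual encoding was far more elaborate.

```python
# Each row of the paper roll is a set of possible holes; a hole lets
# a metal brush make contact and close a switch. Model a row as a
# string where '1' is a hole (switch closed) and '0' is unpunched
# paper. These column meanings are hypothetical, not RCA's encoding.
COLUMNS = ["oscillator_on", "octave_up", "vibrato", "envelope_trigger"]

def read_row(row):
    """Return the machine processes switched on by one row of holes."""
    return {name for name, bit in zip(COLUMNS, row) if bit == "1"}

# Four rows pass under the brushes in sequence.
tape = ["1000", "1010", "1011", "0000"]
states = [read_row(row) for row in tape]
```

Each row is, in effect, one step of a sequencer, which is why the Mark II is often described as an ancestor of the software sequencers of the MIDI age.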
Considering the time at which it was built, the Mark II was powerful. But its user interface was a nightmare. In fact, it was so difficult to operate that it had only one primary user. Acquired by the Columbia-Princeton Electronic Music Center in 1959, it was used almost exclusively by Milton Babbitt, composer and professor at Princeton University (see Fig. 4). (Babbitt later remarked, "I've got the patience of Job.") Although the Mark II was seriously damaged by thieves who broke into the studio in 1976, it still exists in the Columbia University Computer Music Center.
The Early Days
In summary, the history of electronic music during the first part of the 20th century comprises the development of the early instruments more than the evolution of electronic musical art. These instruments, by and large, were not associated with innovative musical ideas. Rather, they were continuations of a long lineage of instrument invention, and they were generally intended for playing the same music that was played on traditional instruments. Cahill, for example, had set out to provide mass distribution of music that would include many musical styles, from Rossini overtures to popular music to church hymns.
Most of the early instruments, including the Telharmonium, did not offer composers a lot of new musical possibilities. They did offer some novel sounds, even if they were often difficult to play, and a few avant-garde composers experimented with them. Paul Hindemith, for example, composed a few pieces for the Trautonium. Pierre Boulez and Olivier Messiaen, among other composers, had a passing interest in the Ondes Martenot; in fact, the Ondes Martenot is still occasionally used by contemporary composers. Le Caine himself experimented with music for the Sackbut, but his hit number was a performance of the opening to Gershwin's "Rhapsody in Blue." The sounds of the RCA Mark II were documented on a demo LP made by RCA engineers, but the selections, which included Irving Berlin's "Blue Skies," were not exactly musically innovative.
The RCA Mark II was exceptional in that it did offer new musical possibilities. It offered precision of control, the possibility for substantial complexity in rhythm and texture, and a large palette of electronic sounds. These were the qualities that Milton Babbitt found important, and Babbitt's "Philomel" (1963) and "Vision and Prayer" (1964), both for soprano and taped electronic sounds made with the RCA Mark II, are probably the only masterworks created with the early instruments.
In one way or another, all of the early instruments foreshadowed the future. Cahill's business plan for the Telharmonium presaged Muzak. The Trautonium and Ondes Martenot laid the foundations for pitch bending and microtonality. The Electronic Sackbut was the forerunner of the voltage-controlled synthesizer, and the RCA Mark II Synthesizer, with its punched-paper-tape reader, prefigured the software sequencers of the MIDI age.
There was one exception: the theremin. Among the entire first group of electronic musical instruments, the theremin alone remains viable today in its original form. It has been used in music by the Beach Boys, Led Zeppelin, and many others, and its sound has provided an eerie background to films by the likes of Alfred Hitchcock. It is now lighter and less expensive than it was at the start, and its mechanisms have otherwise been improved by modern technology. But it does today exactly what it did when Theremin played it in New York in 1927.
Joel Chadabe, composer and author of Electric Sound, is president of the Electronic Music Foundation. He can be reached at chadabe@emf.org.
For Your Reading and Viewing Pleasure
There are several excellent resources that you should consider if you'd like more information on the first era of electronic music technology. Here are some book recommendations:
Electric Sound (Prentice Hall, 1996), by Joel Chadabe, discusses developments throughout the 20th century, with excellent coverage of the first 50 years.
Magic Music from the Telharmonium (Scarecrow Press, 1995), by Reynold Weidenaar, is the most thorough book ever published on this amazing instrument; check out the video as well.
Sackbut Blues (Canadian National Museum of Science and Technology, 1989), by Gayle Young, chronicles the life and work of Hugh Le Caine, a fascinating innovator.
To supplement your reading, here are a few recommended recordings:
The Art of the Theremin (Delos) features Clara Rockmore playing transcriptions of music by Rachmaninoff, Saint-Saens, Stravinsky, and others.
Oskar Sala: Subharmonische Mixturen (Erdenklang) includes compositions for the Trautonium by several composers, including Paul Hindemith.
Les Ondes Musicales (SNE) features Genevieve Grenier performing Debussy, Ravel, Faure, Gaubert, and Satie on the Ondes Martenot.
Milton Babbitt (CRI) includes the seminal work "Vision and Prayer," featuring Bethany Beardslee, soprano, and electronic sounds from the RCA Mark II Electronic Music Synthesizer.
We also recommend this videotape:
Clara Rockmore: The Greatest Theremin Virtuosa (Big Briar), produced by Robert Moog and Big Briar, features theremin performances and demonstrations by Clara Rockmore in an informal conversational environment with Robert Moog and Tom Rhea.
These and other interesting items are available from CDeMUSIC at www.cdemusic.org.
The Electronic Century Part II: Tales of the Tape
Mar 1, 2000 Joel Chadabe
FIG. 1: The Rangertone was an early tape recorder developed by Colonel Richard Ranger, who played a key role in bringing German recording technology to the United States at the end of World War II.
When tape recorders were introduced to the market around 1950, composers embarked on a musical revolution. Magnetic recording made it possible for them to record sound sources anywhere in the world—whether a railroad locomotive in Paris or a department store in Tokyo—and arrange them into any order. In fact, the term "tape music" refers to music composed with sounds that have been recorded on tape, then edited into a particular continuity.
The first steps toward inventing the tape recorder took place in 1898 in Denmark, when Valdemar Poulsen developed a device to record sound on a steel wire. In the following years, many people formed businesses—not all of them successful—to exploit Poulsen's invention. By the late 1920s, patents had been filed for magnetic tape, and in 1935, Allgemeine Elektricitäts-Gesellschaft (AEG) demonstrated the first version of a tape recorder at the German Annual Radio Exposition in Berlin. This helped establish magnetic recording as a viable technology. By the late 1940s, Ampex, Rangertone (see Fig. 1), and other companies had been formed to manufacture tape recorders, and Minnesota Mining and Manufacturing (3M) developed an improved magnetic tape.
FOUND SOUNDS
The use of tape recorders to create musical compositions grew out of a tradition that began in the early years of the century. That tradition used "found" sounds rather than composed sounds. As early as 1917 in Paris, France, Jean Cocteau conceived the ballet Parade, which called for the sounds of sirens, a steam engine, and other mechanical devices, as well as music by composer Erik Satie. In 1926, George Antheil used an airplane engine onstage in a Paris performance of his Ballet Mecanique (recently revived in a production by Paul Lehrman at the University of Massachusetts at Lowell).
American composer John Cage, however, was the first to consistently explore the use of nontraditional sounds in music (see Fig. 2). In the tradition of Ferruccio Busoni and Edgard Varese, who earlier in the century had theorized that music might include all sounds, Cage said: "I believe that the use of noise to make music will continue and increase until we reach a music produced by the aid of electrical instruments." In 1939, Cage included a number of variable-speed phonograph turntables in his composition Imaginary Landscape no. 1. (Many of the works mentioned in this article are available on modern recordings. See the sidebar, "Tape-Music Hit Parade," for a list of recommended recordings.) Throughout the 1940s and 1950s, Cage used radios, phonograph records, tin cans, and other nontraditional sound sources in his works.
MUSIQUE CONCRETE
While Cage was largely interested in performance, Pierre Schaeffer (see Fig. 3), a radio announcer at Radio France in Paris, was primarily interested in recording his own work. In 1948, during the course of developing a medium he called "radiophonic sound," Schaeffer completed an important experiment. He recorded railroad locomotives, then combined those sounds into a short composition. Before he had access to tape recorders, he would cut the sounds directly onto plastic discs, play back several sounds simultaneously on different players, and select and mix the sounds as they played.
He named his composition Etude aux Chemins de Fer (Railroad Study). He then coined the term musique concrete to describe his technique of recording and combining sounds. By using the phrase musique concrete ("concrete music"), he hoped to contrast a concrete way of working with sounds with an abstract way of working with them, in which the sounds are represented by notes in a musical score.
FIG. 2: John Cage in his studio on Bank Street, New York City, in 1977. Photo by Rhoda Nathans.
To understand Schaeffer's work, it is important to remember that there was no television in 1948 and that radio was the most universal theater of the time. Dramatic programs, serials, and adventure stories, as well as music and news, were broadcast on the radio, and the sounds in radio programs inspired a high level of creativity.
Schaeffer, in fact, developed the idea of radiophonic sound into an art form. He finished four additional musique concrete studies in 1948 and broadcast them on Radio France on October 5, 1948. The program, called Concert de Bruits (Concert of Noises), was tremendously successful. Composing music using recorded sound was an idea whose time had come.
THE PARIS STUDIO
Encouraged by the positive public reaction to his work, Schaeffer requested and received support from the administration of Radio France. He was then able to hire Pierre Henry as his musical assistant and Jacques Poullin as technician. He was also given support to form a studio specifically to compose musique concrete.
Over the next few years, Schaeffer and Henry collaborated on many projects, among them 1950's Symphonie pour un Homme Seul (Symphony for One Man Alone), one of the first major works in the new medium, and Orphee (1951), a musique concrete opera. In 1951, the first tape recorders arrived at Radio France. Poullin designed different types of recorders to create special musical effects and developed a spatialization system to direct sounds to different loudspeakers around a concert hall. The studio grew through the 1950s and attracted many composers, among them Pierre Boulez, Karlheinz Stockhausen, Luc Ferrari, Olivier Messiaen, and Iannis Xenakis.
Xenakis, in particular, produced several important works using classic musique concrete techniques such as manipulating sounds by varying tape speed or playing sounds backward. The sound sources in Diamorphoses (1957) include earthquakes, airplanes, and bells. In Concret PH (1958), Xenakis modified the sound of smoldering charcoal. For Orient-Occident (1960), he recorded bowed objects, bells, and metal rods, while Bohor (1962) is based on the sounds of Middle Eastern bracelets and other jewelry clanking together.
In 1958, Pierre Henry left Radio France to form an independent studio. Apart from his professional work, he produced many important pieces on his own using musique concrete techniques. Perhaps the most interesting, due to the simplicity of its sound sources, is 1963's Variations pour une Porte et un Soupir (Variations for a Door and a Sigh).
THE COLOGNE STUDIO
Many paths crossed in those early days. Karlheinz Stockhausen, who had come from Cologne, Germany, to study at the Paris Conservatory, worked in Schaeffer's studio. In 1953, Stockhausen returned to Cologne to begin working in the studio newly established by Herbert Eimert at West German Radio, and he soon became the studio's director and principal composer.
FIG. 3: Pierre Schaeffer established the first electronic-music studio in France in the 1940s. Schaeffer is shown here in 1952 with two versions of the phonogene, a variable-speed tape recorder built by Jacques Poullin.
The initial philosophy of the Cologne studio was very different from that of musique concrete. Whereas in Paris sounds were recorded in the real world and recombined by editing as in film, in Cologne, the first idea was to generate sounds electronically by additive synthesis. Considering that the studio owned one sine-wave oscillator, it was a laborious process. Sine waves were made on a 4-track tape recorder, then mixed down onto a single-track recorder. The mix was then bounced to one track of the multitrack recorder, additional sine waves were added on the other tracks, and all the tracks were again mixed down on the single-track recorder. This approach was called elektronische Musik.
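That track-bouncing workflow amounts to building a tone one partial at a time. The following simplified sketch assumes a sample rate, frequencies, and amplitudes chosen only for illustration, and ignores the generation loss that every analog mixdown added.

```python
import math

SAMPLE_RATE = 8000  # illustrative; the studio worked on analog tape
DURATION = 0.01     # seconds of tone, kept short for the sketch

def sine_track(freq, amp):
    # One pass with the studio's single sine-wave oscillator.
    n = int(SAMPLE_RATE * DURATION)
    return [amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

def bounce(mono_mix, new_track):
    # One mixdown: play the existing mono mix alongside a freshly
    # recorded sine wave and record the sum on the single-track machine.
    return [a + b for a, b in zip(mono_mix, new_track)]

# Build a tone partial by partial; these frequencies and amplitudes
# are hypothetical, not taken from any Cologne composition.
mix = sine_track(200.0, 1.0)
for freq, amp in [(400.0, 0.5), (600.0, 0.33)]:
    mix = bounce(mix, sine_track(freq, amp))
```

In the digital version the order of bounces doesn't matter, but on tape each pass also added noise and distortion, which is part of what made the process so laborious.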
Stockhausen began by composing two studies using only electronic sounds. In 1956, he finished Gesang der Junglinge (Song of the Youths), the first major work to be composed in the Cologne studio and one of the first masterworks of tape music. In Gesang der Junglinge, Stockhausen mixed a young boy's voice with electronic sounds so that the words were variously intelligible and completely abstract and musical.
In 1960, he went on to compose Kontakte (Contacts), in which the sounds suggest percussion and piano timbres. During a trip to Japan in 1966, he composed Telemusik (Telemusic) with sounds recorded in Japan, Bali, the Sahara, and other places. He modulated all of the sounds in such a way that their sources are unrecognizable. In 1967 in Cologne, he composed Hymnen (Anthems), in which he electronically processed national anthems from around the world. By this time, Stockhausen's techniques had changed dramatically from working with purely electronic sounds to processing recorded material.
THE BRUSSELS WORLD'S FAIR
Although known primarily as a composer, Iannis Xenakis had a particular nonmusical impact on the history of electronic music. Trained initially as a civil engineer, he had worked since the late 1940s with Le Corbusier, one of the best-known European architects of the time.
In 1956, Philips Corporation, a major electronics company based in Holland, invited Le Corbusier to design its pavilion for the 1958 Brussels World's Fair. Le Corbusier replied, "I will make you a poeme electronique," and asked Xenakis to design the pavilion. Xenakis came up with an idea based on hyperbolic paraboloids (see Fig. 4). During the world's fair, the building was used as a shell for multiple projections of images Le Corbusier had created and as a music playback system that included 425 loudspeakers. The music included Xenakis's Concret PH, which was less than three minutes long, played between performances of Edgard Varese's Poeme Electronique. More than 2 million people attended the event.
In 1957, Philips had invited Varese to create Poeme Electronique in its Eindhoven laboratory. Varese used recordings of traditional musical instruments, percussion, electronic sounds, a singing voice, and various machines. All of the sounds were processed electronically. Poeme Electronique is a definitive statement of musique concrete.
NEW YORK, NEW YORK
While the Paris studio was getting started in the late 1940s, Louis and Bebe Barron established a small commercial studio in New York. They composed several electronic film scores, among them one for the well-known 1956 film Forbidden Planet. They also worked with John Cage in 1951.
FIG. 4: The Philips Pavilion at the 1958 Brussels World's Fair, site of the performance of Varese's Poeme Electronique.
As soon as tape recorders became available, Cage became interested in exploring ways they could be used in composing music. He decided to start what he called the Project for Music for Magnetic Tape. An architect and friend, Paul Williams, was willing to fund the project. In 1951, Cage began working with the Barrons to assemble a large and varied library of taped sounds. He worked first with David Tudor, then with Earle Brown, to cut and splice those sounds into a tape composition, Williams Mix.
The work took place in Cage's loft on Manhattan's Lower East Side. Cage cut the tapes into short pieces, then flipped coins to decide how to order them. Using this method, Cage and Brown finished Williams Mix together. They then worked together on Brown's Octet. In 1954, the Project for Music for Magnetic Tape wound down, partly because the money ran out and partly because Cage moved on to other projects.
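For readers who like to experiment, the coin-tossing procedure can be caricatured in a few lines of Python. The fragment labels and the simple shuffle below are purely illustrative; Cage's actual chance operations, derived from the I Ching, were far more elaborate.

```python
import random

# Illustrative labels for spliced tape fragments, loosely inspired by
# the categories of recorded sound Cage drew on for Williams Mix.
fragments = ["city", "country", "electronic", "manual", "wind", "small"]

def chance_order(fragments, rng):
    """Order fragments by chance rather than taste: a crude stand-in
    for deciding each splice with coin tosses."""
    shuffled = list(fragments)
    rng.shuffle(shuffled)
    return shuffled

# A seeded generator makes the "chance" result repeatable for testing.
splice_order = chance_order(fragments, random.Random(0))
```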
While John Cage and Earle Brown spliced together snippets of tape in lower Manhattan, other events were unfolding uptown. In 1952 at Columbia University, Vladimir Ussachevsky presented a concert that included his first electronic compositions. Shortly afterward, he began working with composer Otto Luening in Bennington, Vermont, and in various living rooms and studios in New York City.
On October 28, 1952, Ussachevsky and Luening presented a concert of their music at the Museum of Modern Art in New York, a significant event because it was the first public concert of tape music in the United States. The program included Ussachevsky's Sonic Contours and Luening's Fantasy in Space. After that, the two men became busy with radio appearances, other concerts, commissions, and fellowships. This flurry of activity led to the establishment of a tape studio at Columbia University in 1955 (see Fig. 5).
The studio flourished. In 1959, with support from the Rockefeller Foundation, Ussachevsky, Luening, and Princeton professor Milton Babbitt established the Columbia-Princeton Electronic Music Center and acquired the RCA Mark II Electronic Music Synthesizer. The center also housed three tape studios, and in the next ten years, more than 60 composers from 11 countries came to New York to work there.
Among them was Mario Davidovsky, who arrived from Argentina in 1960 and became one of the major composers of tape music. Davidovsky's Synchronisms no. 5 (1969), based on an interplay between electronic sounds on tape and live percussionists, is a good example of his style. His Synchronisms no. 6, for piano and tape, won the Pulitzer Prize for music in 1971.
ON TO MILAN
Ussachevsky and Luening's concert at the Museum of Modern Art had another important consequence. Luciano Berio, visiting New York from Milan, Italy, was in the audience and became excited by the possibilities of this new medium. When he returned to Italy a few months later, he met composer and conductor Bruno Maderna, and they decided to work together to explore the potential of tape music. In 1955, they established Studio di Fonologia at the Radio Televisione Italiana (RAI) studios in Milan.
Berio's best-known work to come out of this studio was Omaggio a Joyce (Homage to Joyce), finished in 1958. Berio asked his wife, Cathy Berberian, to recite from chapter 11 of James Joyce's Ulysses. He then processed the words electronically and with tape-recorder manipulations. Of particular interest is how he mixed different versions of the same sound to produce sounds that suggest the meanings of other words.
Many other composers worked at the Milan studio. Henri Pousseur composed Scambi (Exchanges) in 1957 by filtering white noise. In 1958, John Cage visited Milan and composed a tape version of his earlier composition Fontana Mix by using random numbers to determine the length of the tape segments. (While there, Cage distinguished himself by appearing on an Italian television quiz show, correctly answering questions about mushrooms.)
THE END OF THE BEGINNING
John Cage continued his groundbreaking work into the 1970s. In 1972, he composed Bird Cage, which juxtaposed the sounds of birds recorded in aviaries, the sounds of Cage himself singing Mureau (an earlier composition of his based on Thoreau's writings), and sounds recorded randomly from the environment.
FIG. 5: Otto Luening and Vladimir Ussachevsky in the Columbia Tape Studio, about 1960.
In 1979, Cage composed Roaratorio, the largest in scope of all tape music compositions in the number of sounds used and a fitting piece with which to designate the end of an era. In it, Cage recorded, collected, and randomly combined all of the sounds that James Joyce mentions in Finnegans Wake. In performance, the tapes were played while Cage read his own recomposed version of Finnegans Wake. At the same time, Irish musicians played traditional Irish folk music. Roaratorio gathered an enormous variety of sounds—doors closing in Dublin, a river flowing, a glass placed on a bar, a car passing in the street—and assembled them as music.
The idea of using tape to juxtapose sounds in any combination from any source was so powerful that tape studios quickly formed throughout the world. The first round of work was done not only in New York, Paris, Cologne, and Milan, but also in studios formed in London, Tokyo, Buenos Aires, Toronto, Stockholm—in short, everywhere.
It was an exciting time in the history of music, and it seemed to many composers that anything was possible. Around the world they shared the common goal of creating a new kind of music based on the availability of all sounds.
Joel Chadabe, composer, is author of Electric Sound and president of the Electronic Music Foundation. He can be reached at chadabe@emf.org.
TAPE-MUSIC HIT PARADE
The following recommended materials are available from CDeMUSIC at www.cdemusic.org.
John Cage 25-Year Retrospective Concert (Wergo) includes Imaginary Landscape no. 1 and Williams Mix from the Project for Music for Magnetic Tape.
Forbidden Planet (GNP Crescendo) by Louis and Bebe Barron is the original 1956 soundtrack to the famous science fiction film.
Pierre Schaeffer: L'Oeuvre Musicale (EMF Media) brings together all of Schaeffer's musical works, including his collaborations with Pierre Henry.
Xenakis: Electronic Music (EMF Media) includes all of Xenakis's early works.
Pierre Henry (Harmonia Mundi, France) includes Variations pour une Porte et un Soupir, one of the most elegant works of early musique concrète.
Elektronische Musik 1952-1960 (Stockhausen Verlag) includes Karlheinz Stockhausen's Gesang der Jünglinge and Kontakte.
Hymnen (Stockhausen Verlag), by Karlheinz Stockhausen, uses the national anthems of the world as source material.
Mikrophonie I and II; Telemusik (Stockhausen Verlag), by Karlheinz Stockhausen, includes sounds from Asia and elsewhere.
Electro Acoustic Music Classics (Neuma) includes Edgard Varèse's Poème électronique, first performed at the 1958 Brussels World's Fair.
Electronic Music Pioneers (CRI) includes works by Vladimir Ussachevsky and Otto Luening that were played at the Museum of Modern Art in New York on October 28, 1952.
Henri Pousseur (BV Haast) includes Scambi, composed in 1957 in Milan.
Berio/Maderna (BV Haast) includes Berio's Omaggio a Joyce, based on text from James Joyce's Ulysses.
John Cage Bird Cage (EMF Media) is a major collage work by John Cage, based largely on the sounds of birds recorded in aviaries.
Roaratorio (Wergo), by John Cage, includes Cage reading, Irish musicians playing and singing, and all the sounds mentioned in James Joyce's Finnegans Wake.
Pauline Oliveros: Electronic Works (Paradigm) includes I of IV and other early compositions that use tape.
I Am Sitting in a Room (Lovely Music), by Alvin Lucier, uses tape recorders and room resonance to transform words into abstract sounds.
A Sound Map of the Hudson River (Lovely Music), by Annea Lockwood, records the Hudson River from its source in the Adirondack Mountains to the Lower Bay of New York City and the Atlantic Ocean.
You can read more about tape music and the history of electronic music in the book Electric Sound by Joel Chadabe (Prentice Hall, 1996).
The tape recorder, which became commercially available around 1950, made possible a musical revolution because it allowed composers to record sounds and arrange them in any order. Throughout the 1950s and 1960s, advances in technology led many composers to realize there were sonic possibilities beyond magnetically recorded sounds. For these composers, the goal was to compose sound itself, and computers and analog synthesizers provided the means to do so.
The Electronic Century Part III: Computers and Analog Synthesizers
Apr 1, 2000 Joel Chadabe
MUSIC FROM COMPUTERS
FIG. 1: Jean-Claude Risset in 1977. Risset worked at Bell Labs in the early years with Max Mathews and later became head of computer music research at IRCAM in Paris.
The first computer-generated sound was heard in 1957 at Bell Telephone Laboratories in Murray Hill, New Jersey. Max Mathews had finished writing Music I, the first program to generate sounds with a computer, and used it to play a 17-second composition by a colleague, Newman Guttman. Although the piece didn't win any music awards, it was the first computer music composition and marked the birth of digital sound synthesis.
John Pierce, head of the department in which Mathews worked, was interested in the possibilities of sound synthesis. With Pierce supporting his work, Mathews and his collaborators made continued improvements to the Music I program over the next several years, resulting in a series of programs that came to be known as the Music-N series: Music II (1958), Music III (1960), Music IV (1962), and the last in the series, Music V (1968).
Music V was modular and hierarchical in its structure. The software simulated oscillators, mixers, amplifiers, and other audio modules; each module was referred to as a unit generator. The software oscillators functioned by reading waveforms from numerical tables and outputting streams of numbers that represented those waveforms. Numerical outputs from two software oscillators, for example, could then be added together in a 2-input software mixer. The output from the mixer could in turn be scaled in a software amplifier by multiplying it by a fixed number—increasing its amplitude if the multiplier was more than 1 and decreasing its amplitude if the multiplier was less than 1. In the Music V language, a particular combination of unit generators was called an instrument, a sound was called a note, and a sequence of notes was called a score.
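Although Music V itself was a batch program for mainframes, its unit-generator model translates directly into modern code. The following Python sketch is illustrative only, not Music V syntax: a table-reading oscillator, a two-input mixer, and an amplifier, patched together into a simple "instrument."

```python
import math

TABLE_SIZE = 512
# One cycle of a sine stored as numbers; a Music V-style oscillator
# reads a table like this, stepping through it at a rate set by frequency.
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(freq, amp, sample_rate=8000, num_samples=8000):
    """Unit generator: read the waveform table, output a stream of numbers."""
    phase = 0.0
    out = []
    for _ in range(num_samples):
        out.append(amp * SINE_TABLE[int(phase) % TABLE_SIZE])
        phase += freq * TABLE_SIZE / sample_rate
    return out

def mixer(a, b):
    """Two-input mixer: add the streams sample by sample."""
    return [x + y for x, y in zip(a, b)]

def amplifier(signal, gain):
    """Scale a stream by a fixed multiplier (more than 1 boosts,
    less than 1 attenuates), just as the text describes."""
    return [gain * x for x in signal]

# An "instrument": two oscillators into a mixer, then into an amplifier.
note = amplifier(mixer(oscillator(220, 0.5), oscillator(330, 0.5)), 0.8)
```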
Additional work in computer music was being done at the Massachusetts Institute of Technology by Ercolino Ferretti, and also at Princeton University by Hubert Howe, Jim Randall, and Godfrey Winham, who introduced some improvements to Music IV. During the first several years, however, developments in computer music were centered at and around Bell Labs. By the mid-1960s, the field began to expand when centers were established at Stanford University, Columbia University, and elsewhere. With sound synthesis becoming an important direction for musical research, the field of computer music continued to grow well into the '70s.
As if to underline the importance of this new technology, the French government established the Institute for Research and Coordination of Acoustics and Music (IRCAM) in Paris in 1977. Jean-Claude Risset (see Fig. 1), who had worked with Max Mathews at Bell Labs in the '60s, was appointed head of IRCAM's computer music department.
EARLY COMPUTER WORKS
The computer music research at Bell Labs and other institutions provided the backdrop to the first round of creative musical work with computers. From the beginning, John Pierce and Max Mathews had been eager to make contact with musicians, and in 1961 Pierce hired composer James Tenney to come and work at Bell Labs.
FIG. 2: A photo from the late 1960s of Robert Moog (front) and Jon Weiss (back) using a Moog modular system in the R.A. Moog studio, Trumansburg.
Tenney worked at Bell from 1961 to 1964 and completed several compositions during that period. His first was Analog #1: Noise Study, finished in 1961 and inspired by the random noise patterns he heard in the Holland Tunnel on his daily commute between Manhattan and New Jersey. His interest in randomness at that time included using the computer to make musical decisions as well as to generate sound. In Dialogue (1963), Tenney used various stochastic methods to determine the sequencing of sounds.
Tenney continued to develop his stochastic ideas in Phases (For Edgard Varese) (1963), in which different types of sounds are statistically combined. His techniques resulted in sounds with continually changing textures, similar to a fabric made up of a variety of materials in various shapes and colors.
In 1963, Mathews published an influential article on computer music titled "The Digital Computer as a Musical Instrument" in Science. Jean-Claude Risset, at the time a physics graduate student in France, read the article and became so excited by the potential of computer music that he decided to write his thesis based on research he planned to do at Bell Labs. Risset came to Bell in 1964, began research in timbre, returned to France in 1965, and came back to Bell in 1967. He completed Computer Suite from Little Boy in 1968 and Mutations in 1969. Both compositions contain sounds that could not have been produced by anything but a computer.
Meanwhile, at Stanford University in 1963, John Chowning also came across Max Mathews's Science article and became inspired to study computer science. Chowning visited Bell Labs in the summer of 1964 and left with the punched cards for Music IV. He subsequently established, with David Poole, a laboratory for computer music at Stanford. The lab would eventually become the Center for Computer Research in Music and Acoustics (CCRMA), a major center for computer music research. Chowning later went on to develop frequency modulation (FM) as a method for generating sound. His approach to FM, in fact, was licensed by Yamaha in 1974 and was the basis of sound production in many Yamaha synthesizers through the 1980s.
Chowning's early compositions Sabelithe (1971) and Turenas (1972) both simulated sounds moving in space. In Stria (1977), Chowning used the Golden Section to determine the spectra of the sounds. The results were otherworldly—magical, strange, icy, and unlike anything that one could imagine coming from an acoustic instrument.
WAITING FOR A SOUND
James Tenney, Jean-Claude Risset, and John Chowning were among the first composers to work with computers in the 1960s. Many others followed in the 1970s and 1980s, including Charles Dodge, Barry Vercoe, Jonathan Harvey, Larry Austin, Denis Smalley, and Paul Lansky. Yet compared with composers working with the interactive computer systems of today, these pioneers had a job that was far from easy.
FIG. 3: Donald Buchla playing the Electric Music Box in the early 1970s.
They required technical knowledge, perseverance, patience, and the ability to deal with a lot of frustration. The time frame between specifying a musical idea at the computer and hearing the results, for example, was often measured in days or weeks. A composer would accumulate a Music V creation on a digital tape second by second, day by day. When the composition was finished, the digital tape was normally taken to a particular department at Bell Labs, where it was converted into analog signals and recorded onto an audiotape. This process could take up to two weeks to complete.
A serious problem with this way of working was that composers were not able to hear a work as they created it. Many musicians of the time, including those who were attracted to electronics, did not want to deal with the long turnaround times necessary for generating computer music. Another significant problem was that composers and musicians had to know computer programming.
BIRTH OF THE SYNTHESIZER
Analog synthesizers provided a solution. They made possible a new world of sound without the need for programming skills. Synthesizers were also designed for performance and provided an immediacy of response resembling the performance capabilities of traditional musical instruments. Even though synthesizers were based on new technologies, many musicians found them attractive because they had familiar forms and features.
In 1964 three men independently invented analog synthesizers: Robert Moog in Trumansburg, New York; Paul Ketoff in Rome; and Donald Buchla in San Francisco.
That year Robert Moog invited composer Herb Deutsch to visit his studio in Trumansburg. Moog had met Deutsch the year before, heard his music, and decided to follow the composer's suggestion and build electronic music modules. By the time Deutsch arrived for the visit, Moog had created prototypes of two voltage-controlled oscillators. Deutsch played with the devices for a few days; Moog found Deutsch's experiments so musically interesting that he subsequently built a voltage-controlled filter.
Then, by a stroke of luck, Moog was invited that September to the AES Convention in New York City, where he presented a paper called "Electronic Music Modules" and sold his first synthesizer modules to choreographer Alwin Nikolais. By the end of the convention, Moog had entered the synthesizer business (see Fig. 2).
Also in 1964, Paul Ketoff, a sound engineer for RCA Italiana in Rome, approached William O. Smith, who headed the electronic music studio at the city's American Academy, with a proposal to build a small performable synthesizer for the academy's studio. Smith consulted with Otto Luening, John Eaton, and other composers who were in residence at the academy at the time. Smith accepted Ketoff's proposal, and Ketoff delivered his Synket (for Synthesizer Ketoff) synthesizer in early 1965.
Meanwhile, Donald Buchla had begun working with Morton Subotnick and Ramon Sender at the San Francisco Tape Music Center. After designing and building a waveform generator controlled by optical sensors, Buchla conceived of a voltage-controlled synthesizer that incorporated an analog sequencer. Subotnick and Sender requested and received a small grant from the Rockefeller Foundation, and Buchla built the synthesizer and delivered it to the Tape Music Center in the early months of 1965.
Buchla worked closely with Subotnick throughout 1965 to refine the synthesizer, and by the end of the year they had developed what Buchla called the Series 100. In 1966 he formed a company, Buchla and Associates, and began to sell the Electronic Music System (see Fig. 3).
EARLY SYNTH TECHNOLOGY
James Tenney's pioneering work at Bell Labs included the compositions Analog #1: Noise Study (1961) and Dialogue (1963).
The first round of analog synthesizers were voltage-controlled modular systems—a collection of separate modules, each with a particular audio or control function. The audio modules typically included oscillators, noise generators, filters, and amplifiers.
Sounds were normally generated by subtractive synthesis. With this technique, a composer links oscillators in frequency- or amplitude-modulation configurations to generate complex waveforms, then focuses on elements of the sound within the waveform by using filters to subtract partials.
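As a rough illustration of the subtractive idea, the Python sketch below generates a harmonically rich sawtooth and then runs it through a one-pole lowpass filter. Both are drastic simplifications of the analog circuits they stand in for, and the parameter values are arbitrary.

```python
import math

SAMPLE_RATE = 8000

def sawtooth(freq, num_samples):
    """Harmonically rich source, standing in for the oscillator section."""
    period = SAMPLE_RATE / freq
    return [2.0 * ((i / period) % 1.0) - 1.0 for i in range(num_samples)]

def lowpass(signal, cutoff):
    """One-pole lowpass: the 'subtractive' stage, attenuating upper partials."""
    # Smoothing coefficient derived from the cutoff (simple RC model).
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / SAMPLE_RATE)
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

raw = sawtooth(110, 4000)
dark = lowpass(raw, 300)  # close the filter: fewer partials, mellower tone
```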
Typical controllers of the day included envelope generators and keyboards. Buchla employed analog sequencers in his first systems in 1965, and Moog began incorporating them into his systems in 1968.
Analog sequencers are used to generate a series of voltages. The voltage level of each stage in the series is controlled independently by a knob. Each stage is then played in sequence, one after the other, using an oscillator to control the timing. The Moog sequencer, for example, had 24 stages configured in 3 rows of 8.
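The stage-and-clock behavior is easy to model. In the Python sketch below, the eight "knob" voltages are arbitrary values chosen for illustration; in a real system they would be routed to oscillator or filter control inputs, and the clock would be a low-frequency oscillator rather than a loop counter.

```python
# Eight "knob" settings, one control voltage per stage (arbitrary values).
stages = [0.0, 0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25]

def sequencer(stages, num_steps):
    """Step through the stages in order, wrapping around to the first
    stage after the last, emitting one voltage per clock tick."""
    return [stages[step % len(stages)] for step in range(num_steps)]

# Twelve clock ticks: the 8-stage pattern plays once, then starts again.
voltages = sequencer(stages, 12)
```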
Sometimes sequencers were used to automate aspects of a performance. But it was far more common to use a keyboard controller to play an analog synthesizer. Voltages generated by the keyboard controlled the frequencies of the oscillators and filters. Every time a key was pressed, the keyboard triggered an envelope generator that normally controlled the filter and amplifier.
EARLY SYNTHESIZER WORKS
The specific design of each synth—the type of keyboard it used, for example—optimized it for a particular musical and performance approach.
The Moog synthesizer was the most traditional of the three early synths because its keyboard resembled a traditional piano keyboard in size and operation. The keys were approximately the same size as those on a piano, and the case was made of wood. As if to verify the traditional functionality of the Moog keyboard, Wendy Carlos used a Moog synthesizer to record Switched-On Bach (1968).
The Synket was a bit less traditional than the Moog and much more compact and portable. Its keyboard was smaller than that of a normal piano, and each key could be wiggled sideways to bend the pitch. Pianist and composer John Eaton immediately saw its potential and began using the Synket as a performance instrument. In 1965 Eaton composed Songs for RPB, for soprano, piano, and Synket. In April of that year, in what was possibly the first public performance using a synthesizer, Eaton accompanied soprano Michiko Hirayama in a concert at the American Academy in Rome. This author had the particular distinction of turning pages at that concert.
FIG. 4: Peter Zinovieff, about 1971, playing the Synthi 100 in his London studio.
The Buchla modular system was the least traditional of the three synthesizers. Its keyboard was made up of a series of fixed-position capacitance-sensitive metal strips, each of which generated a voltage when touched. Morton Subotnick, who had played a role in the Buchla synth's design, used it extensively.
In 1966 Subotnick relocated to New York City, where Nonesuch Records commissioned him to create a series of works specifically for release as recordings. He had brought a Buchla synthesizer with him and used it to compose Silver Apples of the Moon, the first of the series, in 1967. The Wild Bull and Touch followed in 1968 and 1969, respectively.
Subotnick's approach to composing music was unconventional in that he did not play his creations using a keyboard but instead automated most of the detail with the sequencers. In these compositions, Subotnick functioned more as a conductor, "cueing" the sequencers from moment to moment, turning them on or off, changing connections, and pushing buttons.
TOWARD MAJOR SUCCESS
Because it had been commissioned specifically to appear on recordings sold by a commercial record company, Subotnick's work crossed the line from art music to commercial music. In fact, much of the synthesizer-created music of the day became popular. Wendy Carlos's Switched-On Bach became the hit of 1969 and one of the best-selling classical music recordings ever.
After hearing Chris Swanson, Robert Moog, and others perform a jazz concert in 1969 at the Museum of Modern Art in New York City, Keith Emerson bought a small Moog modular system and used it for the hit song "Lucky Man" on the album Emerson, Lake & Palmer (1970). Eric Siday also used a Moog synthesizer to compose a theme for CBS.
Demand from musicians, in what was clearly a growing market, led to a large number of companies being formed and new products being developed. Peter Zinovieff (see Fig. 4) formed EMS Ltd. in London, for example, and with David Cockerell and Tristram Cary produced the VCS-3, among other synthesizers and devices. Robert Moog, Bill Hemsath, and others developed the portable Minimoog, the first commercially successful synthesizer. Alan R. Pearlman formed ARP Instruments near Boston and produced the modular Model 2500, followed by the integrated and portable Model 2600. Tom Oberheim founded Oberheim Electronics and designed the Four Voice, the first polyphonic synthesizer on the market. Dave Smith formed Sequential Circuits and developed the Prophet-5, an analog synthesizer with digital controls.
Many other companies and products came and went. The '70s saw the market for electronic musical instruments expand, accompanied by the feeling that they would have a profound impact on the way musicians thought of sound and music. It was a very exciting time.
Joel Chadabe, composer and author of Electric Sound, is president of the Electronic Music Foundation. He can be reached at chadabe@emf.org.
FOR YOUR READING AND VIEWING PLEASURE
Several excellent resources offer more information on synthesizers and computer music technology. Here are some recommended books:
The Computer Music Tutorial (MIT Press, 1996), by Curtis Roads, provides in-depth coverage of the technology of computer music.
Electric Sound (Prentice Hall, 1996), by Joel Chadabe, discusses developments in electronic music throughout the 20th century, with excellent coverage of early computer and analog synthesis.
Keyfax Omnibus Edition (MixBooks, 1996), by Julian Colbeck, has a wealth of information on commercially produced synthesizers.
Here are a few recommended recordings to supplement your reading:
John Chowning (Wergo) highlights the composer's early computer music works.
Columbia-Princeton Electronic Music Center 1961-1973 (New World) includes Charles Dodge's Earth's Magnetic Field.
Jean-Claude Risset (INA-GRM) includes Mutations, composed at Bell Labs, as well as Inharmonique and Sud, composed later.
Jean-Claude Risset (Wergo) features Computer Suite from Little Boy, as well as Sud and other compositions for computer sound and acoustic instruments.
Morton Subotnick (Wergo) includes Silver Apples of the Moon and The Wild Bull.
James Tenney: Selected Works 1961-1969 (Artifact) features the compositions that Tenney finished at Bell Labs.
These and other interesting items are available from CDeMusic at www.cdemusic.org.
The Electronic Century Part IV: The Seeds of the Future
May 1, 2000 Joel Chadabe
FIG. 1: Joel Chadabe performing at the New Music New York festival at the Kitchen in New York City in 1979. The antennas are modified theremins that were custom-made by Robert Moog. Here Chadabe is using them to "conduct" the first Synclavier, which is on the table behind him.
At the end of the 1960s, two distinct but parallel paths of technical innovation traversed the field of electronic music. One of the paths, leading toward a future of digital audio and digital signal processing, was computer music. It was neither musically nor technically an easy path to follow. But the difficulties of computer-music development, such as the lack of real-time feedback and the need to specify music in computer code, were offset by the promise of creating any sound imaginable—not to mention the advantages of precise control, repeatability, and nearly indestructible storage.
The other path of technical progress, followed by many musicians, led to the development of the synthesizer. Analog synths, many of which could be played like traditional instruments, opened up a new world of electronic sound in performance. With the help of hugely successful recordings like Wendy Carlos's Switched-On Bach and Keith Emerson's single "Lucky Man," synthesizers were becoming standard in virtually every band's instrumentation.
SYNTHESIZERS OF THE '70S
By the beginning of the 1970s, it was clear that electronic sounds were hot and that electronic music could become a viable industry. In fact, the market exploded during the decade, with many new companies developing new instruments, and the technology itself advanced quickly. As we moved from the transistors of the '60s to the integrated circuits of the '70s, computers and analog synthesizers became less expensive and easier to use, and they were often joined together in what were called hybrid systems.
In several experimental studios—including those at Bell Telephone Laboratories in Murray Hill, New Jersey, and the Institute of Sonology in Utrecht, the Netherlands—computers were used as sophisticated sequencers to generate control voltages for analog synthesizers. Emmanuel Ghent's Phosphones (1971) and Laurie Spiegel's Appalachian Grove (1974) are examples of music created at Bell Labs; Gottfried Michael Koenig's Output (1979) exemplifies music composed at the Institute of Sonology (see the sidebar "Recommended Resources").
The most important trend of the '70s, however, was the increasing accessibility of digital technology. With the invention of digital synths, the analog and digital paths—which had wound their separate ways through the landscape of electronic music in the '60s—began to converge. These new instruments combined the performance capabilities of analog synthesizers with the precision of computers.
In 1972 Jon Appleton was director of the Bregman Studio at Dartmouth College, which housed a large Moog modular system. Appleton asked Sydney Alonso, a faculty member at Dartmouth's Thayer School of Engineering, about using a computer to control this system. Alonso's advice was to forget the Moog and build a digital synthesizer. Together they did, calling it the Dartmouth Digital Synthesizer. Cameron Jones, a student at the college, wrote the software. Alonso and Jones then formed a company called New England Digital and, with Appleton's musical advice, went on to create the Synclavier.
The Synclavier was a computer-and-digital-synthesizer system with an elegantly designed keyboard and control panel. In September 1977, I bought the first Synclavier, although mine came without the special keyboard and control panel that Alonso and Jones had so painstakingly designed (see Fig. 1). My idea was to write my own software and control the computer in various ways with a number of different devices. For example, in Follow Me Softly (1984) I used the computer keyboard to control the Synclavier in a structured improvisation with percussionist Jan Williams. In 1983, Appleton composed Brush Canyon for a Synclavier with both the keyboard and the control panel.
By the late '70s, digital synthesizers were under development at research institutions such as Bell Labs and the Paris-based organizations Groupe de Recherches Musicales and Institute for Research and Coordination of Acoustics and Music (IRCAM). The market was full of analog, hybrid, and all-digital synthesizers, drum machines, and related devices. These products were manufactured by a long list of companies, among them ARP, Crumar, E-mu Systems, Kawai, Korg, Moog Music, Oberheim Electronics, PPG, Rhodes, Roland, Sequential Circuits, Simmons, Synton, and Yamaha. Technology was advancing quickly, the level of creativity was high, a new mass market was emerging, and price was increasingly important. High-end products were quickly adapted to a larger market. When Fairlight Instruments put the first sampler on the market in 1979, it cost about $25,000; by 1981, E-mu's Emulator was selling for $10,000. It was an exciting time, with new and powerful technologies appearing at increasingly affordable prices.
THE BEGINNING OF MIDI
Although innovation, creativity, and adventure were in the air at the end of the '70s, there was also a large measure of chaos in the market. Standardization was nonexistent: if you bought a synthesizer from one manufacturer, you had to buy other products from that same company to maintain compatibility. The marketplace was fragmented, with no fragment large enough to warrant major investment. In the view of Roland president Ikutaro Kakehashi, standardization was necessary to make the industry grow. With a global market unified by a digital standard, a company of any size could develop and sell its products successfully.
FIG. 2: Among the earliest digital samplers was the Ensoniq Mirage. It was supported by numerous third-party manufacturers that offered both hardware accessories and software enhancements.
In June 1981, Kakehashi proposed the idea of standardization to Tom Oberheim, founder of Oberheim Electronics. Oberheim then talked it over with Dave Smith, president of Sequential Circuits, which manufactured the extremely successful Prophet-5 synthesizer. That October, Kakehashi, Oberheim, Smith, and representatives from Yamaha, Korg, and Kawai met to discuss the idea in general terms.
In a paper presented in November 1981 at the AES show in New York, Smith proposed the idea of a digital standard. At the NAMM show in January 1982, Kakehashi, Oberheim, and Smith called a meeting that was attended by representatives from several manufacturers. The Japanese companies, along with Sequential Circuits, were the primary forces behind sustaining interest in the project, and in 1982 they defined the first technical specification of what came to be known as the Musical Instrument Digital Interface, or MIDI. At the January 1983 NAMM show, a Roland JP-6 was connected to a Sequential Circuits Prophet-600 to demonstrate the new MIDI spec. After some refinement, MIDI 1.0 was released in August 1983.
The adoption of MIDI was driven primarily by commercial interests, which meant that the specification had to represent instrumental concepts familiar to the mass market. Because that market was most comfortable with keyboards, MIDI was basically a spec designed to turn sounds on and off by pressing keys. For some musicians, this was a serious limitation, but most felt that the benefits of MIDI far outweighed its shortcomings.
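That key-centric design is visible in the byte format of MIDI itself. The following sketch builds the two channel-voice messages at the heart of the spec, note-on and note-off, each a status byte (with the channel in its low nibble) followed by two 7-bit data bytes:

```python
# A minimal sketch of MIDI 1.0 note-on/note-off channel-voice messages.

def note_on(channel, note, velocity):
    """Build a 3-byte note-on message: status 0x90 ORed with the channel."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note, velocity=0):
    """Build a 3-byte note-off message: status 0x80 ORed with the channel."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x80 | channel, note, velocity])

# Middle C (note number 60) at moderate velocity on channel 1 (index 0):
msg = note_on(0, 60, 100)
print(msg.hex())  # 903c64
```

Everything about a performance that MIDI captures is, at bottom, streams of small messages like these, which is precisely why musicians working outside the key-press paradigm found the spec limiting.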
FROM FM TO SAMPLES
In business terms, MIDI was a smashing success. Its universal format allowed any company—new or established, large or small—to present the world with an original concept of music.
In 1983, Yamaha introduced the first monstrously successful MIDI synthesizer. The DX7 was a hit not only because of its MIDI implementation, but also because it sounded great and was reasonably priced at less than $2,000. To generate sounds, the DX7 used frequency modulation (FM), which John Chowning had developed at Stanford University in 1971 and which Yamaha had licensed in 1974.
FM results when the amplitude of one waveform, called the modulator, is used to modulate the frequency of another waveform, called the carrier. As the amplitude of the modulator increases, the spectrum of the carrier spreads out to include more partials. And as the frequency of the modulator changes, the frequencies of the partials in the carrier spectrum change. In other words, by changing the amplitude or frequency of the modulator, a performer can change the spectrum's bandwidth and the timbre of sounds. The early advantage of FM synthesis was that simple controls could cause major changes, making instruments like the DX7 very popular for live performance.
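The relationship described above can be written as a single sine-inside-a-sine expression. Here is a minimal sketch of simple two-operator FM; the carrier frequency, modulator frequency, and modulation index values are illustrative, not taken from any particular DX7 patch:

```python
import math

def fm_sample(t, fc=440.0, fm=110.0, index=2.0):
    """One sample of simple FM: a carrier at fc whose phase is
    modulated by a sine at fm, scaled by the modulation index.
    Raising `index` (the modulator's amplitude) widens the spectrum;
    changing fm moves the partials, which fall at fc +/- n*fm."""
    return math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

# Render one second of the tone at a 44.1 kHz sampling rate:
sr = 44100
signal = [fm_sample(n / sr) for n in range(sr)]
```

A performer sweeping a single controller mapped to `index` gets exactly the dramatic timbral motion the article describes, which is what made FM instruments so effective on stage.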
Throughout the '80s, Yamaha continued to develop new applications of FM synthesis in a line of instruments, while many other companies, Akai, Korg, and Roland among them, developed their own synthesizers. Roland, for example, released the Juno-106 in 1984 and the D-50 family in 1987. To a growing number of musicians, however, the main disadvantage of synthesized music was that it sounded electronic. As it turned out, most MIDI musicians wanted emulative sounds. They turned to samplers, which allowed any sound, whether trumpet riff or traffic noise, to be recorded and played back at the touch of a key.
In the early '80s, E-mu Systems had broken through the first major price barrier in the sampler market with its $10,000 Emulator. In 1984, Ensoniq introduced the Mirage at less than $1,300 (see Fig. 2). And in 1989, E-mu lowered the bar even further. Its Proteus, a sample-playback device that came with 256 prerecorded samples and an exceptionally simple interface, cost less than $1,000.
The electronic-music industry continued to grow throughout the 1980s. By the early '90s, the market was overflowing with synthesizers, samplers, and other MIDI hardware, but attention was beginning to center on software development.
SOFTWARE BEGINNINGS
A MIDI software industry had already emerged in the mid-'80s. For example, Opcode Systems established itself in 1984 with a MIDI sequencer for the Macintosh and almost immediately expanded its product line to include David Zicarelli's DX7 patch editor. At the same time, other companies were forming and releasing similar software, among them Steinberg Research in Hamburg, Germany, and Mark of the Unicorn in Cambridge, Massachusetts.
FIG. 3: Intelligent Music's M algorithmic software was a remarkable tool for generating music on a Macintosh. The software also had a brief life span on the PC and has recently been reintroduced for the Mac by Cycling '74, a company founded by M's inventor, David Zicarelli.
As personal computers got faster and less expensive and computer-based MIDI sequencers became more commonplace, other MIDI software applications were developed. In 1985, Laurie Spiegel wrote Music Mouse, a program that contained harmony-generating algorithms. In 1986, Zicarelli developed two applications, M and Jam Factory, for Intelligent Music, which continued to develop other interactive-composing programs during the years that followed. Of particular interest was M, an interface of musical icons that controlled algorithms (see Fig. 3). Given a melody or other input, a composer could use M to generate an infinite stream of rhythmic and melodic variations, ending with something distinct and original. For example, I composed After Some Songs, a group of short improvisational compositions for electronics and percussion, by using M to transform some favorite jazz standards.
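M's actual algorithms are its own; purely to illustrate the general idea of interactive composing, the hypothetical sketch below takes an input melody (as MIDI note numbers) and applies simple seeded transformations, transposition, reversal, or fragment echo, to spin out variations:

```python
import random

def vary(melody, seed=None):
    """Generate one variation of a melody given as MIDI note numbers.
    An illustrative stand-in for algorithmic variation in general,
    not a reconstruction of M's algorithms."""
    rng = random.Random(seed)
    notes = list(melody)
    choice = rng.choice(["transpose", "reverse", "echo"])
    if choice == "transpose":
        # Shift every note by a randomly chosen interval.
        shift = rng.choice([-5, -3, 2, 4, 7])
        notes = [n + shift for n in notes]
    elif choice == "reverse":
        # Play the melody backward.
        notes = notes[::-1]
    else:
        # Echo: append a randomly chosen fragment of the melody.
        i = rng.randrange(len(notes))
        j = rng.randrange(i, len(notes)) + 1
        notes = notes + notes[i:j]
    return notes

theme = [60, 62, 64, 67]  # a four-note input melody
variations = [vary(theme, seed=s) for s in range(5)]
```

Feeding each variation back in as the next input yields the kind of endlessly unfolding stream of related material that interactive-composing programs made available to composers.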
In 1985, Miller Puckette went to IRCAM to develop software for the 4X, a digital synthesizer built by Giuseppe di Giugno. By mid-1988, Puckette had developed a graphical modular control language that he called Max. At about the same time, Zicarelli saw a demonstration of Max and, after discussing it with Puckette, developed the language as a commercial product, first with Intelligent Music and then, after Intelligent Music changed direction in 1990, with Opcode Systems. Max was released in late 1990 and remains an essential tool for many customized music software applications.
The first digital audio programs were also developed in the mid-'80s. Among them were MacMix, written by Adrian Freed in 1985, and Sound Designer, a Digidesign product released the same year. In 1986, working in conjunction with a company called Integrated Media Systems, Freed added a specialized hardware device and called the system Dyaxis. And in 1988, taking advantage of increasing computer speeds, larger hard drives, and digital-to-analog converters, Digidesign released Sound Tools, which established an industry standard in audio editing. Digital audio was fast becoming accessible to musicians.
TRENDS INTO THE '90S
Everything expanded throughout the 1990s. The market filled with increasingly sophisticated synthesizers, samplers, drum machines, effects generators, and an enormous variety of modules, each of them doing something different and offering unique sonic possibilities. A large secondary market of patch panels and other MIDI-management gear formed around the needs of professionals with racks of devices. Software applications, including sequencers, patch editors, effects processors, and hard disk recording systems, also permeated the market. In fact, there was so much to learn that the following joke was frequently heard: "What's a band?" "Three guys reading a manual."
Some musicians may have felt a few pangs of nostalgia for analog equipment and sounds, but the 1990s were largely a digital decade driven by the availability of ever-faster microprocessors. Not surprisingly, as personal computers kept getting speedier and more powerful, digital audio became increasingly software based.
But the advances of the 1990s reached far beyond simply editing sound. Digital signal processing (DSP), which allows composers to transform as well as synthesize sounds, became an important component in digital audio systems. One of the most complete DSP applications to appear in the late '90s was MSP, created by Miller Puckette and David Zicarelli and marketed through Cycling '74, a company that Zicarelli formed in 1997 to develop DSP software as well as to make M available again.
Among the pioneers in DSP systems for composers is Carla Scaletti. In 1986, she began creating a software synthesis system that she called Kyma (Greek for wave). By the following year she had extended Kyma to include the Platypus, a hardware audio accelerator built by Kurt Hebel and Lippold Haken that sat alongside a Macintosh, received instructions, and generated sound (see Fig. 4). Scaletti's 1987 composition sunSurgeAutomata demonstrates the sound-processing and algorithmic abilities of the Platypus. By 1990, Scaletti and Hebel had upgraded the hardware to a system called the Capybara. In 1991, they formed Symbolic Sound Corporation and shipped the first complete Kyma system, available initially for the Mac and shortly thereafter for the PC. With its evolving hardware and continual upgrades, Kyma remains one of the most powerful sound-design systems available today.
INTO THE 21ST CENTURY
As we look to the future, it's hard to know which innovations will have the greatest impact on our lives and our work. Although we can assume that digital audio technology will keep improving as computing horsepower increases and prices drop, predicting exactly how this will play out in our studios isn't easy. I asked several leading figures in the music-technology field for their thoughts on what the next decades will bring. Here are some of their predictions:
Craig Harris (composer, author, and executive editor of the Leonardo Electronic Almanac): "New instruments will have enormous flexibility in both the sonic realm and the modes of interaction, such that composers can create in the way that is most effective for them, performers can realize works in ways that work best for their own personal styles, and audience members can benefit from a rich variety of interpretations. This is one realm that distinguishes electronic instruments from traditional instruments, in that there is no preconceived sonic realm or method for interaction that is inherent in the machine. For the first time, we have instruments that will have their limits established more by our imaginations than by the laws of acoustics."
FIG. 4: Pictured (from left to right) are Bill Walker of the CERL Sound Group, Kurt Hebel, and Carla Scaletti with the Platypus at a sound check before a November 1989 concert in Columbus, Ohio.
Carla Scaletti (composer, software developer, and president of Symbolic Sound Corporation): "What seems to interest us is the process of making music. Bootlegs of tour improvisations on original album material are more sought after than the finished albums. Some musicians are beginning to post MP3 versions of 'works in progress' on the Internet, so all of us can witness and participate in the process of exploration and refinement that goes into a 'finished' album. Every album that is released immediately spawns multiple offspring in the form of remixes. Interactive and immersive environments like computer games require music that can be 'traversed' in a multiplicity of ways; each path through the game results in a new piece of music. The 21st century will be 'the composition century' where 'objects' (like finished albums) will be virtually free on the Internet, while the creators of those objects will be highly sought after."
Daniel Teruggi (composer and director of the Groupe de Recherches Musicales in Paris): "If we put our analytical ears on, we see that there is still a great difference between a recorded sound and the sound produced and propagated by an acoustical device. Loudspeakers, microphones, amplifying systems, and digital conversion are the elements of sound processing that still have to achieve what I would call a more 'realistic' image of sound."
David Wessel (researcher and director of the Center for New Music and Audio Technologies at the University of California, Berkeley): "Standard general-purpose processors like the PowerPC and Pentium are now capable of real-time music synthesis and processing. Laptops will become the signal processors and synthesis engines of choice, at the core of the new performance-oriented electronic music instrumentation. I'm confident that we will also see the development of a new generation of gesture-sensing systems designed with music in mind, including a number of the common interfaces like drawing tablets and game controllers adapted for the intimate and expressive control of musical material. And I see, and am beginning to hear, the emergence of an electronic music that might be more akin to chamber music or that of a small jazz group where musical dialog and improvisation play essential roles."
David Zicarelli (software developer and president of Cycling '74): "The computer, synthesizer, and tape recorder have become the new folk instruments of industrialized cultures, replacing the guitar. An overwhelming number of recordings are being produced in the electronica genre right now, and there is no sign that this will stop anytime soon."
You may now be wondering how to take advantage of the resources available to you right away and in the years to come. Start by thinking about what kind of music you want to write and which tools will best help you reach your goals. Explore the Web and read magazines such as EM for new developments in hardware and software. Above all, learn the history: read books on the subject and study recordings by the pioneers as well as the current movers and shakers. A 100-year tradition is waiting to be explored, and the more you know about the past, the better you can shape your own future.
Joel Chadabe is a composer, past president of Intelligent Music, author of Electric Sound, and president of the Electronic Music Foundation. He can be reached at chadabe@emf.org.
RECOMMENDED RESOURCES
For an overview of electronic-music history, read Electric Sound, by Joel Chadabe (Prentice Hall, 1996).
For an overview of MIDI, read MIDI for the Professional, by Paul D. Lehrman and Tim Tully (Amsco Publications, 1993).
The following compact discs feature music mentioned in this article:
After Some Songs (Deep Listening) is a group of Joel Chadabe's abstractions of jazz standards, for computer and percussion.
CDCM Computer Music Series, volume 3 (Centaur), includes Carla Scaletti's sunSurgeAutomata, for the Platypus.
CDCM Computer Music Series, volume 6 (Centaur), includes Jon Appleton's Brush Canyon, for Synclavier.
CDCM Computer Music Series, volume 24 (Centaur), includes Joel Chadabe's Follow Me Softly, for Synclavier and percussion, and Cort Lippe's Music for Clarinet and ISPW.
Computer Music Currents 2 (Wergo) includes Emmanuel Ghent's Phosphones, composed at Bell Telephone Laboratories in 1971.
Gottfried Michael Koenig (BVHaast) includes Koenig's Output, composed in 1979 at the Institute of Sonology.
Women in Electronic Music 1977 (CRI) includes Laurie Spiegel's Appalachian Grove, composed at Bell Labs in 1974.