Paul Lansky



Interview by Joshua Cody, 1996

Mr. Cody: You studied twelve-tone composition with Milton Babbitt, and then you studied with George Perle, who had a more idiosyncratic application of the twelve-tone method. What is your relationship to twelve-tone composition in general? Was there a specific rapport between those theoretical advances and the advances that electronic and computer music made in the 1950s and 60s?
Well, I got involved in computers at a time when the prime motivation for doing so was that we thought it was going to be a way in which the properties of the twelve-tone system could be deeply investigated. The idea was that with computers, one could perform really complicated structural manipulations of sets, rhythms, timbres, and other things that one could never possibly do with instruments. The really interesting thing for me was that as soon as I started to do that, I lost interest in doing it. It was probably the computer, more than anything else, that led me away from twelve-tone music, because as I started to do that, I noticed that anything I did on the computer was much less interesting than the most primitive sounds somebody could make scraping a violin. I got very involved in the early and mid-seventies in using the computer as a sort of camera on the sounds of the world. But it certainly was the excitement of serialism in the sixties that led me to use the computer.


"The kind of stuff that I'm doing is much less sophisticated, and much simpler technologically, than what Michael Jackson does."


As far as your training, did you become interested in computer music as a musician? In fact I know you were a musician, a French horn performer, but as far as composition, did you begin studying quite traditionally? How did you originally involve yourself in electronic music?
There are several answers to that. First of all, when I went to Princeton in the mid-sixties, computer music had just started up. At that point I was very dissatisfied with writing pieces in my studio, waiting six months, and then having somebody play them. I was much more interested in writing them, and hearing them as I was doing it. Also, the whole community at Princeton at that time that was doing this stuff was really exciting: it was a great adventure, and it was fun to be part of the great adventure. That was largely what the motivation was. Also, at the back of my mind, you know, I was draft bait at that point. I figured if I got drafted, perhaps computer programming would keep me out of range of firearms. That was not an insubstantial thought, actually. It was a very tense time.

The whole business with serialism in particular in the sixties was very exciting. Milton Babbitt gave some sensational, legendary courses that have since, actually, been published in a book by the University of Wisconsin Press [Milton Babbitt: Words About Music, ed. Stephen Dembski and Joseph Straus, 1987]. They're quite interesting. Again, as with the computer, it was very interesting to be at the forefront of something, with somebody who felt very strongly that, no matter what else was going on, this was a really substantial, consequential aspect of the way the world was going to be. He's still pursuing it, and that's very interesting stuff. It's just not what I've been doing for the past few years.

I had studied with George Perle at Queens College, but he was very reluctant to talk about his own music at that point. Around 1969, after I was already at Princeton, I was reading George's book, his description of his so-called twelve-tone modal system. So I started to do some experiments with that. I did some experiments which George had never done, and that led to an intensive period of collaboration, because nobody had ever done anything with his system in the thirty years since he had come up with it. George and I worked together very intensively for about three or four years; we had a voluminous correspondence, and we wrote a lot of pieces. I ended up working so deeply in it that I ultimately. . . I won't say that I lost interest in it, but I became interested in other things, and the computer caught hold of my imagination at that point. Dealing with specific pitch structures ceased to be a really focal aspect of the kinds of things I was concerned with; I was more interested in other things. Also, I became less and less interested in working in a method of composition where you go from the abstract to the particular. I was much more interested in starting out with the particular, and chiseling away at that.

When you spoke of moving from dealing specifically with pitch structure to other things, I thought of your comparing your compositional method to photography, dealing with sound as opposed to pitch. I was wondering if you felt any affinities with musique concrète, which was a very important movement in its own right.
That's a very good question. The whole relation between computer music and musique concrète is one that is quite interesting. I found very early on that the spirit and sense of musique concrète was something that was quite fascinating, challenging, and provocative. I thought it was really worth investigating. However, I really didn't like analog tape technology. I was a clumsy person in the studio: putting my fingers on the splicing block was a very risky venture; I never really came out intact. As a matter of fact, I've never actually done any work in an analog studio. I'm strictly a second-generation electronic musician; I didn't like turning knobs, I didn't like splicing tape. I found that the computer was great, because with pencil and paper I could sit down and decide what it was I wanted to do, and then I would go to the computer and program it.

The whole business about musique concrète then became very suggestive, because it struck me that now I could actually start to engage the kinds of things that the original pioneers of musique concrète were imagining, without having to use what I considered an antique technology. I'm sure people who were involved in this would regard the antique technology as the only real way to do it. Maybe they're right. I think there's a difference, subsequently, between the kinds of things I've done and the kinds of things other people have done, with the use of the computer as a kind of aural camera on the sounds of the world. There's a difference between that and the original spirit of musique concrète. Or at least I should say there's a difference for some of us. There are some computer music composers like Denis Smalley who really carry on the tradition of musique concrète. The idea of musique concrète is that you use the sounds of the world as a synthetic engine, and then create your own world noise out of it, which becomes something utterly unlike the sounds of the world; the idea as I understand it is not to retain the flavor of the way things sound in the world, but to create an entirely new sound world out of the sounds of the world. My thinking, however, differs in that I became very interested in the extent to which the computer as an aural camera could help us see the sounds of the world for what they were. We would no longer take things for granted that were easy to take for granted. I started out with speech, which was the obvious one. The first piece I did which I felt really got somewhere in this respect--my Six Fantasies, on a poem of Thomas Campion--took the sounds of somebody reading a poem and made several passes over it. Each pass was designed to isolate and highlight a specific aspect of speech. In the end, you end up with a view of speech that is explicitly musical.
In other words, my intention in doing this was to find the implicit music in world noise and make that explicit.

Was John Cage ever an influence?
I'd have to say not. No, I don't think so. In a sense, what I'm doing is the opposite of what Cage is doing. Cage is saying that in a sense everything, the world, is our music, and so he's teaching us to hear the world as our music. I think the flavor of what we're trying to do is to say that we're extracting from the world noise a music that is very particular and has a lot to do with much more traditional notions of music. The liberating thing about Cage, on the other hand, is just the extent to which he stood up and asserted that this was how he felt about it, and that there was nothing wrong about making assertions of this sort. I think there was a great deal of freedom and liberation that resulted from seeing this kind of model of a composer.


"I write protein-base pieces, and I write silicon-base pieces."


One thing that alienates some people from computer music, I think, is the lack of definition between composer, performer, and instrument. Whereas these are more or less separate entities in the classical tradition, the computer seems to blur these boundaries. For example, if a composer writes a piece with a computer program that he himself developed specifically for that piece, he is, in a way, creating an instrument for himself, one specific to the particular piece that he's working on. Another example is the role of gesture in both these musics. One might say that part of the meaning of a gesture in classical music is generated from the tension of separate entities contacting each other at a particular moment--namely, the performer, the instrument, and perhaps a purely musical substance. As an electronic composer, how do you respond to this situation?


The model of an artist that I found attractive in this domain, at first, was more of a sculptor than a composer. In another sense, one could say, more a filmmaker than a playwright. If one looks at the traditional notion of the composer as analogous to a playwright, a playwright writes a text which is subsequently interpreted by other people. If on the other hand one takes the model of a filmmaker, a filmmaker--whatever his or her particular involvement in the project is--ends up with the finished product. The filmmaker is really the performer in that sense. In fact, as a person who came into music as a performer, and whose real excitement about music was through performance, that's in fact what attracted me to the medium in the first place: that I could function as a performer. I still write what I like to call "protein-based" pieces for protein-based systems, and I write silicon-based pieces; I'm not particularly involved in mixing the two; I regard them as very separate things. And there's a great thrill in writing a script for a person who's going to subsequently perform it. But I really get high sitting home in my studio making sounds and putting them on tape: sculpting sounds on tape. That's for me what making music is all about. I think it's really a cultural model to regard making music as a process which necessarily involves a two-step engagement by a composer and a performer. That's really what's happened in the Western world, but in lots of other cultures and in lots of other places in the world people have always been sitting down and making music themselves. I'm sure there are some cultures in which it's unimaginable that there's any distinction between the person who makes the music and the music itself, so you can't really separate them. In a sense, that's what I like about doing sound on tape. I don't even like to call it computer music anymore, for computers are ubiquitous; everyone has a computer, and everything is a computer.
Your CD player is a computer, your car has about ten computers at this point. Sculpting sound on tape, in a sense, is what I really like to do. I regard myself as a performer--and as a composer, and as a sculptor.

Now the second question you asked is very interesting, because it raises the issue of the integrity of the stuff that's out there on tape. If you've just got stuff on tape, what's the implicit musicality of it? Who's making it? What's it an image of? These are very deep questions, because in our sense of recording over the past seventy or eighty years, recording has largely been regarded as an archival medium. We're used to listening to a recording of something as a recording of an action that took place. When you hear a recording, you're used to listening to it as a document of an event, in which people actually did something. If you look, for example, at a lot of the things people do today to enliven recordings, one of the biggest things is the use of artificial reverberation; almost everyone doing recordings--even electronic music composers, even me--will put electronic reverberation on it. It gives the recording the sense of physical space; it creates the sense that this is something that happened at some time, and was captured on tape. I think that's a legitimate and interesting thing to do; I don't think there's anything wrong with using the computer as a way to capture activities of people engaged in physical action. In fact, that's one of the things I like so much about using real-world sound: what I regard myself as doing is actually going out there and capturing physical actions and putting them on tape and doing things to them that will help you to see them from different perspectives.
To use the film analogy again, it's not all that different from making a movie of somebody doing something. I think that the mistake that's made when one tries to use this analogy is to say that the general way to think of sound on tape is analogous to drawing on film, or doing computer modeling of images on film, when in fact most of what happens on film is photography. With the computer, I think that it's useful to think that most of what happens on the computer--or at least, a lot of what some of us are doing--is going out and capturing physical actions.

Now, there's another side to the coin, which is that a lot of energy and time and money in recent years in computer music has gone into the creation of synthetic instruments. This is something that I haven't been all that interested in, specifically because of the point you make, that a lot of these synthetic instruments do things much too easily. They don't capture a sense of physical action; they don't capture a sense of effort. In my pieces, I really like to create the sense that there's somebody working hard in order to go from C to D; that going from C to D is not an action that one does with the touch of a button; that going from C to D requires some effort and some time and energy.

It's interesting to think of one of the classical reactions people have to electronic music: it strikes them as "outer space" music. I think one of the reasons they regard it as "outer space" music--or perhaps here we're talking about a certain kind of electronic music--is that it actually is made without any effort. There's no way they can attach a physical analog to the sounds that they're hearing.

As we've said, unlike the piano, the computer and electronics begin not with the twelve pitches of the chromatic scale, but with the phenomenon of sound itself, in a more general, and perhaps a more complicated, sense. I was wondering what role is played in your electronic music by pitch, here understood in the classical sense.
A lot of the stuff I've done in recent years has actually been quite tonal. I've been using triads and scales and keys. In fact that was not a conscious decision that I made; I didn't actually decide that I was going to write tonal music, in one way or another. The way it happened was that I found that I wanted to simplify certain aspects of the sound to allow certain other aspects to be concentrated upon. In other words, I really didn't want to. . . . In a way, I'm being evasive by using tonal structures. I didn't want to present a context in which one not only had to parse a really complicated timbral texture and a really complicated rhythmic texture, but also a complicated pitch texture. So in some pieces starting in the mid-eighties, I very consciously thought of triads as bands of pitch color that would go over a period of time. The closest analogy that I had was the paintings of Mark Rothko: a band of blue, a band of yellow. . . . While one listened to that, I would create textures that had a lot of different information. The first piece that really concerned me in this respect was a piece called Idle Chatter. Here, I had lots of voices going all over the place, they were chattering right and left. In order to make any sense out of it, I found I really had to simplify the pitch structure. I started out using a fairly complicated pitch structure, and that listening experience proved to be utterly exhausting; I just got tired, and I couldn't really deal with it. As soon as I decided to place a B-flat triad in the background, I found myself able to listen to all kinds of things. So we have here an issue of a "listening model." In recent pieces, I've started to get more complicated pitch structures. It's hard to say exactly why, or how, or what; I think a lot has to do with the context of the piece. There are certain pieces and certain kinds of material which will lend themselves very well to. . . . 
Oh, an interesting thing I did was a piece called The Lesson, which hasn't been recorded yet, and which was based on a brilliant colleague of mine named J. K. Randall expostulating on the difference between Beethoven and Mozart. It was a very interesting discussion that had to do with the ways that Mozart manipulated your sense of things, and the ways in which Beethoven was very different; Beethoven launches you into what Randall calls "cosmic states," whereas Mozart really knew how to lead you by the nose and tell you exactly where to go. As I was doing the piece, I noticed that if I were to have used a relatively simple pitch structure, it would have made Randall's language seem kind of silly. So I went back to the types of things I had done with George Perle; I got a fairly complicated chromatic voice-leading which was perhaps not exactly either atonal or tonal, but certainly not explicitly B-flat major. I found that that was much more appropriate. In another recent piece called Word Color, based on a Walt Whitman text, I found also that in order to project the sense of the text I had to do certain kinds of things with pitch. So I really don't think of pitch as a given, in any sense. I think of it as a way to adapt to the material at hand. There are a lot of cases in which I find uses for very complicated pitch structures, and other situations in which I might want to do something very simple.

Do you treat pitch differently when writing chamber works?
I'd have to say that I have more trouble with the whole issue when I'm writing chamber works, basically because of the things I just mentioned: I think that my whole frame of mind has a lot more to do with maneuvering around these issues of context and content. I've only done two pure chamber works in the past fourteen or fifteen years. One was a set of choral pieces I did two summers ago, and the other is a piece for marimba and violin that I've just finished. In both of these works, I had a great deal of difficulty figuring out what the pitch domain was doing. Among the choral pieces, one is actually quite tonal; another is not. In the marimba and violin piece, there's a kind of jazz section, then there's a kind of funky atonal section. I think that that's sort of telling me that my whole way of thinking has really been very heavily conditioned by the extent to which I'm using pitch to serve other means, in the electronic medium. That's an interesting question.

Do you feel that electronic music plays a different role in Europe right now than in the United States?
Yes, that's interesting: I think that the United States is probably at the bottom of the list in the way it treats electronic music. For example, in Canada, there's a lot of activity, particularly in Montreal and Vancouver, and in other places. It's really regarded as an exciting, growing field. There are a number of very good composers who are doing really wonderful things in very different ways. It's a big thing in Canada, especially when one looks at the relative size of their population and ours. In this country, people are more interested in Peter Gabriel than in any of these things--but then, that's probably a different issue.

Europe is interesting, also, in that I feel Europe is sort of under the spell of a lot of great institutions there, like IRCAM. But there's a tradition of radio in Germany, in Cologne, that's quite interesting. People have been doing Hörspiel in Europe for a while that is very close to the type of thing that I'm interested in doing--they don't seem to find me interesting, but I find them interesting. In general, in Europe, aside from the influence of IRCAM, which is a very special case, I would say that it's taken fairly seriously. I've found that when I've been in Europe, I seem to be taken much more seriously than I am in the United States.

This is a very general question: having been closely associated with the development of computer and electronic music since the 1960s, as you look back upon its progress, does its course surprise you? Where do you think it might take us, in the future?
Computer and electronic music has outgrown its infancy at this point, I think. I think it's pretty much in its adolescence; it's on the verge of becoming something. I'd like to see it merge with the mainstream of music to an extent that there will no longer be specialty bins in record stores for electronic music; that the music will be identified for what it is, how it sounds, rather than how it's made. We don't really have that many record stores that isolate music according to instrumentation; you basically have composers and performers, that's the way people parse things. And that's certainly the way people parse things in popular music. I think it's unfortunate if you have a music whose culture surrounds the hardware that's used to make it. That's sort of saying that that's the most interesting way one can imagine parsing it.

What's happening today is quite interesting, in that we've seen a convergence of recording technology and computing technology; these things have actually merged to the point where you can go into a record store and buy a CD to dump to your computer; nobody will say, "I've never heard of that before"; it's quite familiar. And all editing is done digitally, and all processing is done digitally; everything is done digitally. I always like to say that in terms of the actual machinery that we're using, the hardware that we're using and even the software that we're using, I think people have got it all backwards. The kind of stuff that I'm doing is much less sophisticated, and much simpler technologically, than what Michael Jackson does. Michael Jackson has much fancier hardware than I have, and he uses much fancier software. If you really want high-tech computer music, that is high-tech computer music! My stuff is just child's play. I may use a few sophisticated techniques that involve algorithmic composition, for example, but Lord knows that Michael Jackson, to use our example, has the power of all the technology that's out there at his fingertips. And he uses it, and he uses it very well. So this image that what I do is somehow at the cutting edge of the computer music revolution is really sort of backwards. Maybe I shouldn't admit this in public, but I guess it's better to admit it sooner than later!


Copyright 1996 by the PNMR Society. This interview was recorded for an exclusive radio broadcast by WNUR-FM, Evanston/Chicago.