Blog

A Composer’s Guide to Compression, Pt. 3

This is the final installment of a three-part series about using compression in recordings of classical music. In part one, I talked about why it’s important for composers to advocate that their music be recorded and mixed using compression. In part two, I discussed the technical side of this issue: what is compression and how does it work? In this last part, I want to provide some context about what the perceived reasons are for not using this technique on classical music.

Why isn’t this already happening?

It’s difficult for me to present the other side of this issue because I have never believed in it. I have an inherent bias against recording music without compression, and I feel passionately that we should be doing it better. With that in mind, I think the best way for me to approach this part of the discussion is to present the arguments against using compression on classical music that I have found to be either the most convincing or the most frequently repeated. I’m also presenting my rebuttal to each of these comments. For what it’s worth, I have never heard an argument that convinced me. If I had, I would put it here.

1. “Using compression on a concert recording makes all sorts of weird things about the concert hall and all kinds of background noises become much more evident in the recording than otherwise.”

This is absolutely 100% true. However, rather than being an argument against using compression, this is really an argument in favor of not recording music during a live performance at all, and instead recording in a studio space and close-mic’ing all the instruments. See #4 below.

This isn’t to say that live recordings aren’t valuable. They are, and this is true in other genres as well. But they are a different kind of product for a different audience than studio recordings are. As a rule, live music and recorded music are different products and should be treated differently. See #3 below.

2. “I want my music to have a wide dynamic range. When I write ppp, I want it to be barely audible. When I write fff, I want it to be overpowering.”

This actually seems like a pretty convincing argument initially and it’s true that using compression limits the dynamic range of a recording. The problem is that this ignores both a truth about the physical properties of sound, and the necessity of compensating for the ways in which people listen to music.

There is a relationship that exists in acoustic instruments between perceivable overtones and amplitude. Any pitched sound (except a sine wave) contains a fundamental and numerous overtones that occur in a particular pattern above the fundamental. The presence and relative amplitude of these overtones is what creates timbre in musical sounds. As the frequency of these overtones increases, their relative amplitude decreases. Further, as the amplitude of the fundamental decreases, so do the relative amplitudes of each subsequent overtone. In short, louder sounds have more audible overtones than quiet ones.

This means that louder sounds have a different timbre than quieter sounds! So, in fact, when a composer writes ppp they’re not just writing a soft sound, they’re also writing a sound with an inherently different timbre. Increasing the volume of a prerecorded sound only makes that timbre louder; it doesn’t alter it. So your “barely audible” ppp will still have the timbre of a quiet sound no matter how loud we make it, and the fff will always have the timbre of fff even when the volume is turned all the way down. Therefore, we can assume that bringing a “barely audible” ppp up to a level that’s actually listenable won’t significantly change the perception of that sound.
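If you want to hear this for yourself, here’s a minimal Python sketch (using numpy and the standard wave module) that builds a “loud” timbre and a “quiet” timbre from the same fundamental by rolling off the upper overtones more steeply for the quiet one. The rolloff numbers are made up for illustration, not measured from any real instrument.

```python
# Illustrative sketch: two tones with the same fundamental, where the "quiet"
# one has its upper overtones rolled off more steeply. Rolloff values are
# invented for demonstration; real instruments are more complicated.
import numpy as np
import wave

SR = 44100          # sample rate in Hz
DUR = 2.0           # duration in seconds
F0 = 220.0          # fundamental frequency (A3)
t = np.arange(int(SR * DUR)) / SR

def tone(rolloff_db_per_partial, n_partials=12):
    """Sum a harmonic series whose partials fade by a fixed dB step."""
    sig = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        amp = 10 ** (-(k - 1) * rolloff_db_per_partial / 20)  # dB -> linear
        sig += amp * np.sin(2 * np.pi * F0 * k * t)
    return sig / np.max(np.abs(sig))  # normalize so both play at the same peak level

def write_wav(name, sig):
    data = (sig * 32767).astype(np.int16)
    with wave.open(name, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        w.writeframes(data.tobytes())

write_wav("loud_timbre.wav", tone(rolloff_db_per_partial=3))    # bright, fff-ish
write_wav("quiet_timbre.wav", tone(rolloff_db_per_partial=12))  # dark, ppp-ish
```

Both files are normalized to the same peak level, but the second one still reads to the ear as a quiet sound, because the timbre, not the volume, is carrying that information.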

The other part of this particular argument is really about the context in which we listen to music. If I am listening in my car, for example, I need to get the quietest sound above the noise floor (the volume of ambient sound) of my car to be able to hear it. If your ppp was recorded at 20 dB, then in my Honda on the highway I need to turn it up at least 20 dB for it to be even “barely audible.” The problem is that by doing that I have also made the explosive fff that’s coming 20 dB louder. If that fff was recorded at 80 dB, now it’s 100 dB!
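Here’s that same arithmetic spelled out, using the rough 40 dB cabin-noise figure from part one. These are illustrative numbers, not measurements of my actual car.

```python
# Back-of-the-envelope version of the car scenario above.
# All numbers are the illustrative ones from the text, not measurements.
noise_floor_db = 40   # rough highway cabin noise (figure from part one)
ppp_db = 20           # level at which the ppp was recorded
fff_db = 80           # level at which the fff was recorded

boost_needed = noise_floor_db - ppp_db        # gain I have to add just to hear the ppp
print(f"Volume boost needed: {boost_needed} dB")              # 20 dB
print(f"The fff now arrives at: {fff_db + boost_needed} dB")  # 100 dB
```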

Good performers naturally make these kinds of adjustments when they play. If a hall is very large, ppp will be louder than it would in a small chamber setting. And fff will be quieter at a house concert than it will be in an amphitheater. Players adjust their dynamics to suit the space in which they perform. This is why we don’t write decibel numbers into the score instead of dynamic markings. The system for notating dynamics is designed to be flexible.

Unfortunately, this is not how recordings work. The relative distance between different dynamic levels is entirely fixed once it is recorded, and can’t be adjusted to suit the listening situation, so we need to provide a sufficiently limited dynamic range that listening is actually possible in a variety of situations. The only way to accomplish this is with compression.

3. “I do this sort of thing to classical music if I’m mixing a movie score. But never to a concert piece.”

Once upon a time I asked my Facebook friends to tell me who their favorite living composer was. Everyone, and I mean literally EVERYONE, who wasn’t deeply versed in contemporary classical music (and even some who were) named a movie composer. The fact that engineers mix movie music differently than concert music, and that everyone loves movie music, is not a coincidence! This is also not a question of compositional language or marketing. Experimental music gets made all the time, even by film composers (consider the scores for Interstellar or The Revenant), and it sells well because people listen to it when it is recorded correctly. This is a question of how listenable the music is in its recorded form. Concert music and film music are the same sounds made by the same instruments. Musically speaking, they are the same thing. The only difference is context. It’s foolish to think that there is something “special and different” about concert music as opposed to film music that necessitates a different technique when they are the same thing.

4. “People want to hear the natural sound of the hall the music is being played in. Compression destroys that.”

I have literally no idea where this came from. No one wants to hear the sound of the hall. The hall sucks. It’s full of coughing, sneezing, talking, cell phone carrying people. That’s not what anyone wants to hear on a classical record. They want to hear the music, not the hall.

OK, I’ll grant you an audiophile or three who spent more money on their stereo system than on their car, but this is at best a niche market. It’s fine to make recordings that cater to that market, but it doesn’t make any sense to record an entire genre a particular way with those three guys in mind. Other than them, if people wanted to hear the sound of the hall, they would be buying records of pop music “as recorded at Carnegie Hall” or whatever. They’re not doing that. And they’re certainly not buying records of classical music bearing that same information.

5. “This is a genre of music for the concert hall, not for recordings.”

There are so many things wrong with this…

First of all, if this is true, why are we recording this stuff at all? Why is this even an issue?

Second, this attitude is another reason that people are turned off by classical music as snooty, elitist, and so on. By saying classical music is only for the concert hall, you are also saying that it is only for people who can afford concert tickets, a suit, a babysitter, and a night off work. It further says that this music is only for people who live in a major metropolitan area where this kind of music is performed, or who can afford to travel to one.

At its best, this way of thinking is, indeed, elitist. At its worst, it’s racist.

Classical music is a beautiful, powerful art form that can and should be made, listened to, and appreciated by anyone and everyone. Recordings are the opportunity that we have to make our music approachable by those who would not normally have the opportunity to hear it. Recordings are the way that we can carry our music into new generations and inspire the people who will create and appreciate the classical music of the future. That isn’t true of the concert hall.

6. “People shouldn’t listen to classical music in cars or at work etc.”

They already do.

Again, this falls into the category of limiting your audience to those who can afford to spend a significant amount of time just listening to your music, or a significant amount of money on the equipment to make it not suck to listen to. I don’t even know musicians, people who are deeply passionate about music, who can spare that kind of time and money. Everyone has a family and friends and a job and a million places they have to be all the time. If we want people to listen to our art, we need to meet them where they are. This is especially true since every other genre is already doing this successfully and winning the ears and wallets of consumers far in advance of anything we put out.

I WANT PEOPLE TO HEAR MY MUSIC! I don’t care where they listen to it, or how, or why. These things don’t matter to me and they aren’t the reason that I write music. I write music so that people will listen to it, and that means that I have an obligation to my audience to make this as easy as I can for them.

____

These are the most common and most convincing reasons that people have given me for not recording and mastering classical music with a compressor. To me, they all seem to be related either to some incorrect assumption about the market for classical music, or to some fundamental misunderstanding of the nature of sound. And none of them convinces me. What’s more, many of them amount to “this is just the way it’s done,” which is a bad excuse for doing things in a lazy way.

That might not be true for you. If not, that’s fine. There are plenty of people who think like you and will make your music the way you want. But you need to know that you are limiting your audience and making your music unapproachable and hard to listen to. When people don’t listen to your music and don’t come to your concerts, you can’t blame them for “not understanding art” or “not being educated enough,” because those things aren’t true and never have been. What’s true is that you presented your product, which you have labored over for hours and hours, in a lazy way that people hate and don’t want to buy.

If you do agree with what I’m saying here, you should know this: those of us who write music and those who perform it are the ones that are driving the recording industry as it relates to classical music. We make industry decisions by spending our money in one place, and not another; by releasing one kind of sound and not another. This is what drives industry trends. We have a responsibility to ensure that our music is presented in the best possible way that it can be, or no one’s going to listen to it.

You have the power to change this kind of trend by taking control of how your music is presented to your audience. Be informed about these processes. Advocate for them to be done correctly by hiring people who will treat your art the way that you want it treated, and by NOT hiring people who try to convince you otherwise.

If classical music is dying, it’s because somewhere down the line we stopped caring about whether or not people listen to what we create. Either that or we never learned the tools that are necessary to compete on a technological level with what the rest of the music world is creating. Or, worse yet, we’ve stopped considering the fact that our music does, in fact, need to compete with the rest of what’s out there.

People want to listen to your music, but you need to let them.


A Composer’s Guide to Compression, Pt. 2

If you’re just tuning in, we’re in the middle of a three-part series about compression in the recording industry and how that applies to classical music. Part one discussed why this is necessary and why we should be advocating for this to happen on our recordings. Today, part two discusses the technical side of this problem by explaining what compression is and how it is used. Next week we’ll discuss the kind of pushback I’ve seen from those in the recording industry on this topic.

What is compression?

A compressor (the tool that engineers use to add compression to a signal) falls into a family of effects processors called dynamic effects. All this really means is that a compressor makes changes to the relative volume of a signal. Other dynamic effects are gating, expanding, and limiting. They’re all closely related and work on the same principles. Simply put, compression is a process that engineers use to limit the dynamic range of a signal. It makes things that are loud a little (or a lot) quieter, and makes things that are quiet a little (or a lot) louder. It does this by automatically sensing the level of a signal, reducing that level (attenuation) if it goes beyond a certain point (threshold), and then turning the overall volume of the signal back up to compensate for this reduction (output gain).

A compressor generally has three very important controls: threshold, ratio, and gain makeup (also sometimes called “out gain” and various other names). There will almost always be several other controls as well (attack, release, knee, etc.), but these three are where the main business of compression gets done.

Compression happens in a three-step process that is embodied by these controls. The threshold control sets the point at which gain reduction begins to occur. Let’s break that down a little bit. The image below is a waveform representation of an audio signal. The horizontal axis represents time, and the vertical axis represents volume. So, what we’re looking at is how the loudness of a signal changes over time.

You can see that this signal starts very quietly, and then something explosively loud happens before a more moderate volume level takes over. The first step in compression involves setting the threshold. In the image below, I have drawn a line at about 0.25 dB. Let’s say that this is where we set our threshold.

This means that any of the signal that goes beyond the red line (is louder than 0.25 dB) will cause the compressor to apply some gain reduction (turn down the volume).

This is where the ratio control comes in. The ratio determines how much gain reduction will be applied once a signal goes beyond the threshold level. This is expressed as a ratio of pre-attenuation to post-attenuation decibels. The control in the picture above has markings like 2:1, 4:1, etc. This means that if the ratio knob is set to 2:1, any signal that is 2 dB above the threshold will be attenuated until it is 1 dB above the threshold; any signal that is 4 dB above the threshold will be attenuated until it is 2 dB above the threshold; and any signal that is 1 dB above the threshold will be attenuated to 0.5 dB above the threshold. The same is true of 4:1: 4 dB in equals 1 dB out; 1 dB in equals 0.25 dB out; and so on.
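If it helps to see that arithmetic spelled out, here’s a minimal Python sketch of the input/output relationship described above. The function name is just my own shorthand; this is the math, not any particular compressor’s implementation.

```python
def level_above_threshold_out(over_db_in, ratio):
    """How far above the threshold a signal ends up after gain reduction.
    E.g. at 2:1, a signal 4 dB over the threshold comes out 2 dB over it."""
    return over_db_in / ratio

for over_db in (1, 2, 4):
    print(f"2:1  {over_db} dB over the threshold comes out {level_above_threshold_out(over_db, 2)} dB over")
    print(f"4:1  {over_db} dB over the threshold comes out {level_above_threshold_out(over_db, 4)} dB over")
```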

For demonstration’s sake, let’s apply a pretty significant ratio, about 5:1, to our waveform. Look what happens.

The very loud thing is now significantly quieter, but the volume of the other elements remains the same!

The last step is gain compensation. The idea behind this is that by turning down the loud parts of a signal, we lose a certain amount of overall volume, so we turn the signal back up post attenuation to compensate for that loss of volume. In this case, I’ll boost the signal back up so that the very loud part is about the same volume it was pre-attenuation. You might be asking why I would turn it back up after I just turned it down. Bear with me for just a moment…

After everything, the overall effect was to compress the dynamic range so that the quiet parts are actually louder, without losing any volume in the loud parts!

You might now be asking why we didn’t just turn the overall volume up. The simple answer is that we couldn’t, because that very loud thing wouldn’t permit it. If we had turned up the overall volume in an attempt to make the quiet parts louder, that loud thing would have gotten louder to the point that it would overload and distort (or hurt someone’s ears, or blow someone’s speakers, etc.). This is why this doesn’t work well in the car: turning the overall volume up to make the quiet parts audible makes the loud parts unmanageable.

This is basically how compression works. There’s a lot of nuance that comes out of this process, but it isn’t necessary to discuss here. It’s also worth mentioning that the images above use relatively extreme settings so that you would be able to actually see the different processes at work. More common compressor settings, those for which I actually advocate, would be more subtle and more difficult to see in action.
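For anyone who wants to poke at this themselves, here’s a bare-bones sketch of the whole three-step chain in Python with numpy. It’s a static, sample-by-sample gain computer with no attack or release smoothing, and the threshold, ratio, and makeup values are arbitrary demonstration settings, so treat it as a teaching toy rather than a usable compressor.

```python
import numpy as np

def compress(signal, threshold_db=-12.0, ratio=5.0, makeup_db=9.0):
    """Toy compressor: threshold + ratio + makeup gain, no attack/release.
    `signal` is a float array in the range -1..1."""
    eps = 1e-10
    level_db = 20 * np.log10(np.abs(signal) + eps)       # instantaneous level in dBFS
    over = np.maximum(level_db - threshold_db, 0.0)      # step 1: how far above threshold
    gain_reduction_db = over - over / ratio              # step 2: apply the ratio
    gain_db = makeup_db - gain_reduction_db              # step 3: makeup gain
    return signal * 10 ** (gain_db / 20)

# Quick demonstration: a quiet passage, an explosively loud one, then a moderate one.
t = np.linspace(0, 1, 44100, endpoint=False)
quiet  = 0.05 * np.sin(2 * np.pi * 220 * t)   # ppp-ish
loud   = 0.90 * np.sin(2 * np.pi * 220 * t)   # the very loud thing
medium = 0.30 * np.sin(2 * np.pi * 220 * t)   # moderate material
x = np.concatenate([quiet, loud, medium])
y = compress(x)
print("overall peak before:", round(float(np.max(np.abs(x))), 3),
      "after:", round(float(np.max(np.abs(y))), 3))
print("quiet-section peak before:", 0.05,
      "after:", round(float(np.max(np.abs(y[:44100]))), 3))
```

Run it and you’ll see the quiet material come up by roughly the makeup gain while the loud burst stays about where it was, which is the whole point of the exercise.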

As mentioned above, compressors also come in a lot of different flavors. One of the most important ones in this discussion is called a limiter. This is usually a tool that’s used in the mastering stage of the recording process, the final stage before audio can be released commercially. Mastering is done to put a final polish on the recording using EQ, and to make sure that it is loud enough and that all the songs on an album are at the same level. This is where the limiter comes in. A limiter is like a compressor with its ratio permanently set at ∞:1. What this means is that, no matter how far above the threshold the music goes, the limiter will turn it down until it is AT that threshold. Imagine the scene in The Lord of the Rings when Gandalf shouts at the Balrog “YOU SHALL NOT PASS!” That’s what a limiter does to an audio signal. This means that you can turn the signal going into the limiter UP and what comes out will still come out at the threshold level, so the result is a signal that is perceptually significantly louder than it was before limiting. Taken to the theoretical extreme, a limiter gives you the ability to make the quietest moment in your music exactly the same level as the loudest moment.
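In code terms, a limiter is just the same math taken to the limit as the ratio goes to infinity. Here’s a minimal sketch of that idea, again as an illustration rather than any specific product.

```python
import numpy as np

def limit(signal, threshold_db=-1.0):
    """Toy brick-wall limiter: nothing comes out above the threshold.
    Equivalent to the compressor sketch above with an infinite ratio, no smoothing."""
    eps = 1e-10
    level_db = 20 * np.log10(np.abs(signal) + eps)
    gain_db = np.minimum(threshold_db - level_db, 0.0)   # only ever turns down
    return signal * 10 ** (gain_db / 20)

# Push a signal 6 dB too hot into the limiter; it still comes out at the ceiling.
x = 2.0 * np.sin(2 * np.pi * np.linspace(0, 1, 44100))
print(round(float(np.max(np.abs(limit(x)))), 3))   # ~0.891, i.e. -1 dBFS
```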

This isn’t actually what I’m advocating for. There needs to be some amount of dynamic contrast in classical music. Crushing a recording to death so that there isn’t any actual contrast at all makes it unpleasant and tiring to listen to. But if the dynamic range of a recording is too large, it’s essentially impossible to listen to. What needs to happen is that the quiet end needs to be brought up to a reasonable level for the most common listening situations, without the loud end becoming oppressive. Compression is the only way to do this.


A Composer’s Guide to Compression, Pt. 1

Those who know me know that I am passionate about how classical music is recorded and that my belief is that it is often not recorded well. You may have even suffered through a rant of mine on this subject, or followed (or even been a part of) one of several discussions I have had on the subject on social media. I’d like to address this issue a little differently today. I think that part of the problem in this is that composers don’t understand what some of the major techniques used in the recording process are, how they work, and why they’re important to how their music is perceived. So, rather than arguing with engineers (which is usually who ends up on the other side of this conversation) I’d like to try and educate composers, and anyone else who’s interested, about one of the major elements that plays a key role in this debate: compression.

I’ll be dealing with this in three installments over the next three weeks. Today’s topic is why this process is important in the way that our music is sold and consumed by listeners. Next week will be a rather technical post about what exactly compression is and how we use it. Finally, the third installment will deal with some of the pushback that I’ve gotten on this issue.

It’s worthwhile to start with a sort of quick-and-dirty definition of compression. In its most basic sense, compression is a dynamic effect that engineers use to limit the dynamic range of an audio signal. Put simply, it makes loud things quieter and quiet things louder. It does this by turning down the volume when a signal gets loud, and then compensating for that gain loss by turning up the overall volume, thus reducing the overall dynamic range. Again, we’ll get into this more next week.

Compression is frequently used on individual tracks of a recording, but much of this conversation hinges around master compression, or compression that occurs in the final stages of creating a recording on the overall mix of several tracks. This kind of compression, which is frequently accomplished with a device called a limiter, is generally used to make the overall volume of a piece of music louder, without causing it to overload and distort.

Why do I want this?

The fact of the matter is that we need to be aware of what people expect from our product. Most people listen to music in their car, at the gym, or at work. In many cases, these are the only times that a person will listen to music at all. None of these situations is particularly conducive to listening to very quiet music. Most cars have an idle cabin noise level of about 40 dB (source). This increases as the car travels faster. An office is at least equivalent and often louder (source). So, in order for someone to actually be able to hear your music in any of those situations, they need to turn the volume up above that level. Turning the overall volume up to make the quiet parts louder than the ambient noise also makes the loud parts louder by the same increment; if the range between quiet and loud is too large, the adjustment that needs to be made can make it difficult to listen to the music without having to frequently readjust the volume.

I have frequently heard people complain about constantly having to adjust the volume control on their stereo when they listen to classical music. In fact, I’ve probably complained about this myself. It’s an extremely common complaint. This is a problem that stems directly from not using compression in the recording process.  This might not seem like that big of a deal, but you have to look at what that really means.

People hate this.

It’s annoying.

When you don’t use compression on your recordings, you are asking for someone to go through something that annoys them, which they hate, in order to be able to listen to your music. So you’re making it hard for people to be able to hear what you spent all those hours working on.

People are also lazy. Unless they have an investment in listening to your music (like, if they’re your mom), if you don’t make it easy for them, they won’t do it. Think about the last time you did something you hated. Would you have been willing to pay for that experience? Would you do it again?

Additionally, you simply can’t really ask people to change their listening habits to suit your needs. Recordings are one of the primary ways we have of distributing our music to a wider audience. In fact, other than concerts, which have an inherently limited scope, recordings are the ONLY way we have of reaching any audience that isn’t substantially musically literate. If you want people to listen to recordings of your music, you have to meet them where they are or they are going to go somewhere else. This is one of the major reasons that classical music is labeled as snobby, elitist, and pretentious: because only people with the time to listen to it in their homes while they do nothing else, or to go to a concert, or with the money to listen to it on expensive stereo equipment that applies compression internally are actually ABLE to listen to it in a meaningful way.

We also have to consider the larger musical marketplace: in essentially every genre of music EXCEPT classical, using compression in the recording process is the norm. In many cases, using hugely intense, destructive compression is the expectation. In fact, for a few decades a thing went on in the recording industry called “the loudness wars,” wherein engineers were pushing the limits of what they were putting out, always trying to make it louder and louder. You can read more on that here if you’re interested. What I advocate for is really a much gentler version of this process, designed simply to make music listenable to a more general audience. If we don’t provide our audience with a listening experience that they enjoy, they will go somewhere else that provides it, and the entire rest of the music industry is already doing that.

The person whose money we’re all competing for, the person who listens to music in their car and hates adjusting the volume knob over and over again, ends up being presented with a choice: listen to your classical music, which requires them either to go outside their normal listening habits or to suffer through turning the volume up and down over and over again, or listen to something else that has been compressed, which they can put on in their car and simply enjoy. This is the choice they make when they spend their dollar. What do you think they’ll actually choose? We need to stop pretending we aren’t in direct competition with the rest of the music industry. We ARE. There’s nothing special about classical music that gives it a pass to exist without an audience. It will die if we do not provide better stewardship of it.


Alpha Performances

Alpha, a new work commissioned by the Keith/Larson Duo, is being performed a bunch coming up. I’m really excited to see this piece come to life in such capable hands.

Feb. 17th 7pm @ Louisville Center for the Arts (801 Grant Avenue, Louisville,  CO 80027) http://www.louisvilleco.gov/visitors/center-for-the-arts

Feb. 19th 6pm @ Church of the Ascension (600 Gilpin St, Denver, CO 80218-3632)
http://www.ascensiondenver.org/

Feb. 22nd 7pm @ Mutiny Information Café (2 So. Broadway, Denver, CO 80209)
https://www.mutinyinfocafe.com/

(Plus a bonus performance of Terry Riley’s In C and Louis Andriessen’s Worker’s Union that I’ll be playing on)


ZAHA Disklavier Performance (UPDATE w/ VIDEO!)

A concert is upcoming very soon at Cherry Creek Presbyterian Church which will feature Evan Mazunik’s ZAHA soundpainting ensemble. Click here or here for more details.

For this concert, Evan asked if I would do some Max/MSP programming to make it possible for the disklavier that the church owns to be played by a computer.

For those who might not know, a disklavier is the more modern version of a player piano. (You can learn more here.) Conrad Kehn turned me on to the idea that a disklavier can take MIDI input from a computer via Finale and Max/MSP.

The goal of the specific programming that I’m doing is really to take control of the piano out of Evan’s hands. It’s all about these layers upon layers of random and stochastic decisions that are made by the computer and fed into the piano. This is a project that I’m really excited about as it embodies an element that tends to be consistent across much of my work: the marriage of digital technology with acoustic instruments.
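My actual patch lives in Max/MSP, but for anyone curious what “random decisions fed into the piano” can look like in code, here’s a rough Python stand-in using the mido library. The port name “Disklavier” is a placeholder; a real instrument will show up under whatever name your MIDI interface reports (check mido.get_output_names()).

```python
# Rough Python stand-in for the idea behind the Max patch: layers of random
# decisions turned into MIDI notes and sent to the piano.
import random
import time
import mido

PORT_NAME = "Disklavier"   # hypothetical port name; substitute your actual MIDI output

with mido.open_output(PORT_NAME) as out:
    for _ in range(100):
        note = random.randint(21, 108)          # anywhere on the 88 keys (A0-C8)
        velocity = random.randint(20, 100)      # random dynamic
        duration = random.uniform(0.05, 1.5)    # random note length in seconds
        out.send(mido.Message('note_on', note=note, velocity=velocity))
        time.sleep(duration)
        out.send(mido.Message('note_off', note=note, velocity=0))
        time.sleep(random.uniform(0.0, 0.5))    # random gap before the next decision
```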

I’ll be making some videos of the disklavier in action and updating this blog with them as the project goes on. Check back for more.

!!UPDATE!!
One of the things I’ve been considering as I work on this project, and really since Conrad brought this up the first time, is what kinds of things a disklavier can do that a person playing a piano cannot. Generally this falls into two categories: speed, and density.
Obviously the “ultimate density” on a piano is all 88 keys being played simultaneously.
Unfortunately, even robot pianos have limitations. And telling one to play all 88 keys at once is one of those limitations.
When I asked it to do this, it did, indeed, do it.

Once.

And the result was impressive.

And then it wouldn’t play anything at all.

Fortunately the Yamaha tech support team is really helpful. So I learned that when you ask a robot piano to do something a little crazy like play all 88 keys at once, this happens:

Yep. That’s a blown fuse. Like, EXTRA blown.

So when that happens, this has to happen:

But the good news is that once both of those things happen, this can happen:

and this:

and this:

You can watch more videos of this in action on my YouTube channel here.

And make sure you come to see ZAHA next weekend!


Improvisation in Stockhausen’s Solo

Years ago I wrote a paper on a piece by Stockhausen called Solo. The paper itself was long and boring, so I’ll spare you a reproduction of it here. I recently suffered through a rereading of it and discovered that there are some interesting thoughts in it about improvisation which I do find worthwhile to explore a bit. One of the most interesting things about Solo is the methodology of improvisation that it asks the player to use, which I believe is a very rare kind of improvisation.

It’s a bit difficult to describe Solo briefly since it is such a complex work. Solo is an electroacoustic work for a single player and feedback delay. The delay times are much longer than those we usually associate with delay as an effect, which tend to be in the millisecond range. Rather, the delay in Solo uses times of multiple seconds, so a whole phrase, or multiple phrases, can be repeated by the delay after the performer has played them.
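If a seconds-long feedback delay is hard to picture, here’s a minimal numpy sketch of the principle. The delay time and feedback amount are arbitrary, chosen only to show a whole phrase coming back several seconds after it was played; they aren’t the values Stockhausen specifies.

```python
import numpy as np

def feedback_delay(dry, delay_seconds, feedback, sr=44100):
    """Minimal feedback delay: every sample is heard again delay_seconds later,
    scaled by `feedback`, then again, and so on as it decays."""
    d = int(delay_seconds * sr)
    out = dry.astype(float).copy()
    for n in range(d, len(out)):            # slow pure-Python loop; fine for a demo
        out[n] += feedback * out[n - d]
    return out

# A one-second "phrase" followed by 20 seconds of silence, through a 6-second delay.
sr = 44100
phrase = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
silence = np.zeros(20 * sr)
result = feedback_delay(np.concatenate([phrase, silence]),
                        delay_seconds=6.0, feedback=0.6, sr=sr)
# The phrase returns at 6, 12, and 18 seconds, quieter each time.
```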

[image: a page of notation from Solo]

The notation consists of six form schemes and six pages of notated music. An example of a page of notation is shown above, and a form scheme is shown below. The player is instructed to letter the pages of notation A-F and place them in order. Since the lettering is left up to the player, the order of the pages ends up being more or less arbitrary. Stockhausen then refers the player to different divisions of the material on each page. Specifically, pages, systems, parts, and elements. Pages and systems have the same definitions that they would in other notated music. Stockhausen defines a “part” as any group of notes contained within a pair of bar lines. This is not called a “bar” or a “measure” simply because the printed music contains both proportional and mensural notation. An “element” is any single normally printed note, any grace note by itself, any group of grace notes, or any single grace note and its associated normally printed note.

[image: one of the form schemes]

The form schemes represent the way in which the player will interpret the notated music. For a performance, only one form scheme is selected to be played. Each of the form schemes is broken into smaller sections made up of cycles and periods. A cycle is the group of periods between two letters as determined in the form scheme. Each form scheme has six cycles which are lettered to correspond generally to the similarly lettered page of notation. So, cycle A is the first cycle of periods on all of the form schemes and generally will contain material from page A of the notation. Periods are smaller groupings within cycles which have time values in seconds assigned to them based on the delay time of the electronics for the corresponding cycle. So, as we can see in the image taken from form scheme II below, in cycle A, there are nine periods of twelve seconds each. Within cycle B there are seven periods of twenty-four seconds each, and so on.

[image: excerpt from form scheme II showing cycles A and B]

A performance of Solo is never a “start at measure one and play to the end” kind of endeavor. Rather, the player is at liberty to select portions of each page to play in a given cycle. Below each cycle there is a group of symbols that tells the player relatively loosely how they should perform the music for that cycle. Stockhausen calls these “what,” “where,” and “how” symbols. A “what” symbol tells a player what size of gesture they should select (systems, parts, or elements); a “where” symbol tells a player from where they should select these gestures (from the current page, the current and the following page, the current and the previous page, or all three); a “how” symbol tells the player how the gestures they select should relate to each other (different, the same, or opposite). The criterion for the “how” symbol is up to the player. So, the player might decide that the “how” symbol relates to pitch. In this case, the “same” symbol would indicate that the gestures within a cycle should all have more or less the same pitch range.
Two additional symbols indicate the length of time a player may pause between periods, and how the player should attempt to relate to the electronics part within a cycle.

The image below is from cycle B of form scheme V. These particular symbols indicate that, within this cycle, the player must draw musical material made up of parts, from pages A, B, and C, which are either the same or different, with medium pauses following each part, and entrances staggered so as to create a polyphonic texture with the electronics.

[image: the symbols below cycle B of form scheme V]

So, in actual performance, the player might play one part from page B, then one from page C, another from A, another from B, and so on until they had played a 45-second period from the cycle. Then the player can take a medium pause before they continue the same process again, trying to create a polyphonic texture as the electronics play back what they played in the previous period.
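One way to appreciate how compact Stockhausen’s instruction set is: the directions for a single cycle fit comfortably in a tiny data structure. This is just my own shorthand for the symbols described above, not anything taken from the score, and the field left blank is something this post doesn’t specify.

```python
from dataclasses import dataclass
from typing import Optional

# My own shorthand for the "what"/"where"/"how" symbols -- not Stockhausen's notation.
@dataclass
class Cycle:
    letter: str                      # which cycle (A-F), tied loosely to a page of notation
    what: str                        # size of gesture: "systems", "parts", or "elements"
    where: str                       # which pages to draw material from
    how: str                         # relationship between gestures
    pause: str                       # allowed pause between periods
    relation: str                    # how to relate to the electronics part
    period_seconds: Optional[float] = None   # delay time / period length, if known
    num_periods: Optional[int] = None        # how many periods the cycle contains

# Cycle B of form scheme V as described above (the 45-second figure comes from the
# performance example; the period count isn't spelled out in this post).
cycle_b = Cycle(
    letter="B",
    what="parts",
    where="pages A, B, and C",
    how="same or different",
    pause="medium",
    relation="staggered entrances, polyphonic with the electronics",
    period_seconds=45.0,
)
```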

Whew! Remember when I said it was difficult to describe this piece simply? There’s actually quite a bit more to the performance of the piece (for example, we haven’t really discussed the electronics at all!), but I think that’s all you’ll need to know for now.

Solo represents an excellent example of what I would call “composed improvisation.” The term itself seems like an oxymoron, but the concept is actually much more common than one might think. For example, virtually all ‘traditional’ jazz is composed improvisation. Jazz players are generally given, or have learned, some kind of chart or lead sheet which contains the chord changes and melody of a piece, and then improvise based on that information.


In fact, it’s fairly common for this same kind of controlled improvisation based on notation to occur in contemporary classical music as well. What I have seen most commonly, and have used the most in my own music, is a section wherein only pitches are notated and everything else is left to the player to decide. An example from my music is shown below. Note that the given pitches can be used in any order, in any octave, with any rhythm, dynamic, articulation and so on.

[image: an excerpt from my music in which only pitches are notated]

These are by no means the only ways that notated improvisation can occur. There are probably as many different ways to utilize these kinds of ideas as there are composers using them. But Solo is actually an example of something very rare in the world of composed improvisation. To work out what that is, we have to take a quick step back.

Music is fundamentally organized into a series of impulses. A note begins on an impulse. That note can be combined with other notes into a larger phrase, which has its own larger impulse. That phrase is then grouped with other phrases to form a section, which has its own, still larger impulse. Sections can be grouped into a large form which we might call a movement, or a complete work, each of which also has its own much larger impulse. Sometimes people refer to this concept of grouping things into larger and larger impulses as “the big beats” of music. I’m deliberately avoiding the word “beat” here because it can be misleading.

This concept is actually alluded to in a TED talk by Benjamin Zander, which you can watch below, and is more scientifically stated by Stockhausen himself in an essay that appears in Perspectives on Contemporary Music Theory, edited by Benjamin Boretz and Edward T. Cone.

Composed improvisation can generally be organized into three levels based on with what level of impulses the player is being allowed to improvise and what levels of impulse have been predetermined. In the first level, the form and the phrases are both predetermined, but the specific notes which are played are up to the performer. In the second level, the form and the specific notes are determined, but the phrases which are constructed out of those notes are up to the performer. In the final level, specific notes and phrases are determined, but the form of the piece is left to the performer.

So, the two forms of composed improvisation that we have discussed thus far are both level-one improvisation. Consider jazz improvisation: the form of the piece and the phrase structure are already given based on the notation within the chart, but exactly which notes are played when is up to the player to decide. Specific notes are undetermined, but the larger impulses are predetermined.

An example of third-level improvisation is the “open form” music found in some of the works of Pierre Boulez, as well as numerous works by Stockhausen (Zyklus and Licht, for example). In this kind of improvisation, while entire sections of notes and phrases are specifically notated, the order in which those sections occur is determined by the performers.

Solo is a rare example of level-two improvisation in which specific notes and gestures are determined, as is the overarching form, but the way those notes and gestures are organized into phrases is left to the player. I have not yet encountered another piece of composed improvised music that contains large-scale, level-two improvisation, even among Stockhausen’s works. What’s more, the performer’s understanding that this work functions as level-two improvisation is absolutely imperative if a performance is to faithfully represent Stockhausen’s intentions for Solo.

For those interested in hearing Solo, below is a recording of me and horn player Briay Condit playing this piece.

The fact that this work is, as far as I am aware, unique in the world of improvised music makes it more meaningful to the canon, and likely explains why the work is so notationally involved and difficult for performers to meaningfully understand. And, frankly, this only begins to deal with the things about this work that are fascinating and misunderstood, which probably explains why my previous paper was so long and boring… perhaps more on this another day.


J A N U S

I’ve recently started another project. It’s an improvisation duo with Jasper Schmich Kinney that we have decided to call J A N U S. The name comes from “J and S” (get it?). The name seems very fitting. In Roman mythology, Janus was a god with two faces who ruled over beginnings, gates, transitions, passages, and endings.

Jasper plays a really fascinating dulcimer that he has detuned and altered in several ways. I do my electronics thing with loops. We’ve played a few gigs thus far, but hadn’t formalized anything until recently.

We’re going to be pretty consistently posting music to social media and other places. Here’s the first taste you’ll find on this site. Enjoy!

Moondog’s 6th Ave. Viking-Style Helmet


Upcoming performance: The Noise Gallery at Dazzle

“The Noise Gallery Presents” Living In The Moment: A musical representation of Life w/ Alzheimer’s
Dazzle Jazz
930 Lincoln Street
Denver, CO, 80203
http://dazzlejazz.com/
Monday, June 16th at 7pm.
$8-$10 tickets available here.

"The Noise Gallery Presents" Living In The Moment: A musical representation of Life w/ Alzheimer’sJoin The Noise Gallery for an evening a live composition that represents the day to day, and minute to minute lives of those living with Alzheimer’s and their caregivers.  Alzheimer’s disease is responsible for a loss of communication between cells, affecting memory, thinking, and communication.  Experience for yourself the sights, sounds, and sensations of living life in the moment.

A portion of the evening’s proceeds will go to support SPARK! Cultural Programs for people with Memory Loss.

The Noise Gallery is Denver’s first fully dedicated Soundpainting ensemble. Made up of some of the area’s best improvisers, classical and jazz players, composers, electronic musicians, weirdos, and visual artist instrument builders, The Noise Gallery is the perfect collective of spontaneous and creative thinking in the art of live composition. Expanding minds and challenging norms, we invite everyone to enter the Gallery and be adventurous listeners.

Soundpainting is the universal multidisciplinary live composing sign language for musicians, actors, dancers, and visual Artists. Presently (2016) the language comprises more than 1200 gestures that are signed by the Soundpainter (composer) to indicate the type of material desired of the performers. The creation of the composition is realized, by the Soundpainter, through the parameters of each set of signed gestures. The Soundpainting language was created by Walter Thompson in Woodstock, New York in 1974.


Classical Music: “Ah! You’re Indians!”

Dinner parties with strangers are notoriously dangerous ground for me, and, I think, for most composers. Inevitably, as the group deals with the appropriate small talk, someone asks “what kind of music do you write?” This question seems innocuous to them; they really only mean it as a way of getting to know me better. They really don’t understand how difficult something like that is to answer. When answering that question, one has to judge not only how much or how little that person knows about music in general, but also how much or how little they actually want to learn about MY music.

My answer should probably be something like this: “I write texture-based chamber, choral, band, and orchestral music that often equally integrates both electronic instruments and acoustic instruments and which is informed by all of the compositional techniques and languages from the last century; the goal of which is to capture a moment, express an idea or emotion, and generally to cause an audience member or listener to have an experience of some kind.”

But that’s a lot.

Maybe I’m underestimating the strangers with whom I attend dinner parties, but I’ve always assumed that’s more than someone wants to hear as an answer to that question. My real answer is this: “I write avant-garde classical music.” It’s short, it’s to the point, and it does, in some way, actually give a person an idea of what my music is like. Moreover, it leaves some openness for more questioning, if someone is actually interested in going down that rabbit hole with me.

Some people would have a problem with my usage of the term “classical” to describe my music. The technical definition of “classical music” is music that was written in Western Europe from about 1750 to 1850. That’s not my music. In fact, that’s not the music of anyone who has been alive in the last 150 years. But this means that there are several generations of composers who have no words to describe their music. The music that we write isn’t pop music, it isn’t jazz, it’s not rock, and if it isn’t “classical,” then what the hell is it? How should we describe it to potential listeners? What can we say that will give them some idea of what we do and also allow them the option of learning more without feeling intellectually alienated by an incomprehensible stream of music-specific terminology?

Several terms have been proposed or used over the years in an effort to remedy this situation. Some call this music “art music,” some “serious music,” even “legitimate music.” The rather offensive implication of these terms is that other genres are “not art,” “not serious,” or “not legitimate.” Some call it “concert music,” which, of course, absurdly means that no other music has ever or will ever be performed in a concert. “Orchestral music” is an attractive candidate, but implies a specific ensemble and excludes others. Can one really say that a piece written for string quartet is “orchestral?” Furthermore, the term “orchestral” tells us very little about what the music sounds like. Composers like Philip Glass and Arnold Schoenberg have both written for, recorded with, and performed with orchestras, but so have Ray Charles and Metallica.

The two most recent candidate terms that I have seen are “notated music” and “composed music.” These two terms came to me via blogs that were mentioned to me by colleagues. They certainly seem attractive at first, but I believe that, just like all the other terms mentioned above, neither actually does an effective job of telling us about the music they are attempting to describe.

“Composed music” comes from music journalist and radio producer Craig Havighurst. You can read his blog on the subject here. “Notated music” ultimately comes from Steve Reich, but is brought up again by Ethan Hein whose blog you should read here.

For those of you who are too lazy to do that (no judgement), here’s the abridged version: Havighurst likes “composed music” because it venerates the composer again. He says it implies music that comes from “a singular mind, fixed and promulgated in written form” as well as a particular restraint and “composure” that is expected of us when we listen to this music. Hein, whose blog is actually an excellent critique of Havighurst’s term, points out the reek of exclusionist privilege that permeates Havighurst’s concept of “composed” music. He also draws attention to the fact that, really, all music is composed in one way or another. Lastly, Hein proposes Reich’s “notated music” as an alternative. There’s actually a lot more to be said here, but it’s not entirely pertinent to this particular conversation, so it will have to wait until another time.

The creators behind these two terms are forgetting, or perhaps ignoring, two extremely important things about genre terminology. The first really has to do with the nature of language. Language is a means of expressing or describing something in the absence of that thing. In other words, the only reason that we use the word “chair” is because at some point in time someone had to refer to a chair without being able to point to one and say “this.” The word “chair” creates in us a series of definitions that we understand about chairs. Probably “a place for sitting” is number one on that list for most of us. But those definitions aren’t inherent to the word itself; they had to be taught to us over time. This is why if I say “chair” to someone who doesn’t speak English, it doesn’t mean anything to them, and similarly why if I say “get off the chair” to my cat, he does absolutely nothing.

This same concept should be applied to genre terminology. We create words to define the differences between different kinds of music. But the terms we create only have meaning if there is a common understanding of their definition. “Composed music” is meaningless to the layperson; as is “notated music.” If I have to explain the definition of the terminology I’m using then I’m back to square one. Why would I waste time doing that, when I could just as easily actually explain my music itself to them? In fact, the only people to whom “classical music” is not an effective descriptor are those with enough musical knowledge that other preexisting musical terminology, like “minimalist” or “post-serial,” is already meaningful and serves as a better descriptor.  These are academic words that only academics are arguing over.

To the layperson, the word “classical” doesn’t mean “music written by Western European men between 1750 and 1850.” It means “music typically composed for acoustic instruments from the orchestral families and/or voices and performed in a particular kind of concert setting.” The proof of this is the fact that the vast majority of people consider contemporary film scores to be “classical” music. Frankly, that description is pretty close to what I do. Adding the words “chamber,” or “electroacoustic,” or “avant-garde” gets the definition close enough that someone will actually know what I’m describing to them and that’s the only point of having words to explain genre.

The second point that those focused on creating new terms for music are forgetting is a product of the first. It is this: we don’t actually get to decide what our music is called. Debussy famously railed against the idea that his music would be classified as “impressionism,” yet every music history textbook that I have ever seen places him in that movement. In fact, John Adams, Arnold Schoenberg and Steve Reich have all attempted to reject the genre labels that have ended up being applied to them. Yet three quick searches for these composers’ names on iTunes reveal this gem:
[image: iTunes search results showing the genre listed for each composer]
It’s probably also worth mentioning that Josquin Des Prez and Gerard Grisey both come up under this same genre in iTunes.

Louis CK makes this point well as he discusses how white people ruined America.

CK’s remark, “ah! You’re Indians!” has come to be my mantra when discussing new terminology for “classical” music. No matter what terms we invent to try and better define what we do, people are still going to call it classical music. People aren’t concerned with the start and end dates of a particular aesthetic movement when they ask what kind of music you write. To correct them about their terminology, or to try and teach them some new definition, is fundamentally disrespectful to the fact that someone just expressed an interest in what you do! If we ever want to make our music relevant to the world at large we need to meet people where they are by describing what we do in ways that actually mean something to them. We have enough battles to fight as living composers without fighting people about the name they call our music.

I don’t care if people call it classical music, as long as they call it something.


News: recording of The Uncurling Nautilus is finished!

The Uncurling Nautilus is finally finished and ready for release!

This is a piece from a few years ago that I’ve had the pleasure of recording with cellist Gil Selinger. The process of recording this was challenging for various reasons, but the result, I think, is quite good.

A low-quality version is included here for you to check out. The full release will be available on iTunes and other streaming services shortly. Stay tuned for more info.