An Expanded Program Note for Total Synthesis: D-Luciferin

The process of writing this piece was unusual for me. Generally speaking, I am most concerned with the sound of a piece and the emotional reaction it generates. Here, since the compositional process was almost algorithmic, with each dimension of the music mapped to a particular dimension of the chemical synthesis, my process and primary concerns were quite different. The work centered on precompositional decisions about which musical features correspond to which chemical features. The primary concern, then, was simply realizing those decisions as accurately as possible while still retaining some element of playability for the musicians.

In this post, I wanted to provide a bit more insight into the precompositional decisions that formed this piece. As I mention in the printed program note, the piece is inspired by the structural changes that occur in a molecule during a chemical reaction. So the idea was to have a musical structure that slowly changed and developed over the course of the piece until the “product structure” was reached at the end, in this case, a musical structure that corresponds to D-luciferin.

In this piece, pitch corresponds to molecular structure as determined by hydrogen nmr when possible, and by carbon nmr or parent mass spectrometry when necessary (for example, since phosphorus oxychloride contains neither hydrogen for h-nmr nor carbon for c-nmr, its mass spectrum was used to determine the pitch structure).

Below are the two h-nmr spectra for the reactants of the first movement: p-anisidine and ethyl oxalate, respectively. Generally speaking, the c-nmr and mass spectrometry data look more or less the same, so these are representative.

p-anisidine h-nmr spectrum

Ethyl oxalate h-nmr spectrum

In order to map these spectra to a pitch collection, I actually just held an image of a keyboard up to the spectra and marked where the peaks aligned with the keys, rounded to the nearest quarter tone. This is shown below.

p-anisidine h-nmr spectrum mapped to the keyboard

ethyl oxalate h-nmr spectrum mapped to keyboard
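For anyone curious what that overlay amounts to in more concrete terms, here is a rough sketch in Python. The peak positions and keyboard range are placeholders rather than the actual p-anisidine data; the point is just the rounding to the nearest quarter tone.

```python
# A rough sketch of the "hold a keyboard up to the spectrum" idea: map each
# peak's horizontal position (0.0 = left edge of the image, 1.0 = right edge)
# onto a keyboard span and round to the nearest quarter tone.
# The peak positions and MIDI range below are placeholders, not real data.

def peak_to_quarter_tone(position, low_midi=36.0, high_midi=84.0):
    """Map a normalized peak position to a MIDI pitch rounded to 0.5 (a quarter tone)."""
    midi = low_midi + position * (high_midi - low_midi)
    return round(midi * 2) / 2

example_peaks = [0.12, 0.35, 0.36, 0.71]          # hypothetical spectrum positions
pitch_collection = sorted({peak_to_quarter_tone(p) for p in example_peaks})
print(pitch_collection)
```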

These two reactants are combined to form the first product in this synthesis. The h-nmr and associated keyboard mapping of that product are pictured below.

h-nmr of XII

keyboard mapping of XII

Having now determined the pitch structures that represented the two reactants in the first movement, the next step was to determine how these structures should shift and change over the course of the movement to form the first product. This was done through simple interpolation, which I did the old-school way with colored pencils. As you can see in the image below, the p-anisidine pitch structure is shown on the left in orange, the ethyl oxalate pitch structure is shown on the left in purple, and the product is shown in black on the right. Each pitch on the left moves by quarter-tone steps to the closest pitch on the right.

Pitch interpolation of the first movement
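In code, that colored-pencil interpolation might look something like the sketch below. The pitch collections are hypothetical (given as MIDI numbers on a quarter-tone grid); each source pitch simply steps by quarter tones toward the closest pitch in the product collection.

```python
# Each source pitch drifts toward the nearest product pitch by quarter-tone
# (0.5 MIDI) steps, one step per stage of the movement. All pitches are invented.

def interpolate(source, targets, stages):
    goals = [min(targets, key=lambda t: abs(t - p)) for p in source]
    for stage in range(stages + 1):
        current = []
        for start, goal in zip(source, goals):
            steps_taken = min(stage, abs(goal - start) / 0.5)   # stop once we arrive
            current.append(start + (0.5 if goal > start else -0.5) * steps_taken)
        yield current

reactant = [60.0, 63.5, 67.0]   # hypothetical reactant pitch structure
product = [61.0, 62.5, 68.5]    # hypothetical product pitch structure
for collection in interpolate(reactant, product, stages=4):
    print(collection)
```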

Obviously this process is simply repeated through all eight movements until the end product is reached.

 

Deciding how to approach rhythm in this piece was a challenge. The solution that I arrived at was to use the molecular structure of the solvents used in the various reactions of the synthesis. I think this makes a kind of sense: rhythm and pitch can be thought of as separate (non-interactive) elements in music. One could say that a rhythm is imposed onto a pitch structure. Similarly, the solvents used in these reactions don’t directly contribute to the structural changes that occur throughout the synthesis. The reactants exist within the solvents.

Further, rhythms are made up of individual values of varying sizes. This is also true of molecules, where the composite molecule is made up of individual atoms of varying sizes. Carbon has an atomic weight of 12, and hydrogen has an atomic weight of 1. If a sixteenth note represents hydrogen, then the value that represents carbon would be a dotted half note (a dotted half note is twelve sixteenth notes). This is ultimately how I mapped solvent molecular structures to rhythmic values.
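As a quick sketch, the mapping looks something like this, using rounded atomic weights and the sixteenth note as the unit:

```python
# Hydrogen = one sixteenth note; every other atom's duration is its rounded
# atomic weight counted in sixteenths (so carbon = 12 sixteenths = dotted half).

ATOMIC_WEIGHT_IN_SIXTEENTHS = {"H": 1, "C": 12, "N": 14, "O": 16}

def durations(atoms):
    return [ATOMIC_WEIGHT_IN_SIXTEENTHS[a] for a in atoms]

# Methanol (CH3OH), read left to right as in the example that follows:
print(durations(["H", "H", "H", "C", "O", "H"]))   # [1, 1, 1, 12, 16, 1]
```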

So, in the case of methanol, which is a solvent used in the fourth, fifth, and eighth movements, three sixteenth notes and a dotted half note correspond to the methyl group on the left side of the molecule, and a whole note and sixteenth note correspond to the oxygen-hydrogen bond on the right side of the molecule. This is shown below.

 

In cases where a solvent contains several different isomers, as with xylene, the three isomers were each mapped separately and assigned to instruments in proportions that reflect the distribution of each isomer within the given solvent, as sketched below.
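A sketch of that idea follows. The isomer proportions and the instrument list are placeholders, not the actual composition of the xylene or the actual ensemble; the point is just the weighted assignment.

```python
# Assign each instrument one of the three xylene isomer rhythms, weighted by
# that isomer's (hypothetical) share of the solvent mixture.
import random

isomer_share = {"ortho": 0.2, "meta": 0.6, "para": 0.2}          # placeholder mix
instruments = ["flute", "clarinet", "violin", "cello", "piano"]  # placeholder ensemble

assignments = {
    inst: random.choices(list(isomer_share), weights=list(isomer_share.values()))[0]
    for inst in instruments
}
print(assignments)
```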

Finally, there are several instances where no solvent was used to dissolve a reactant, or at least none was mentioned in the experimental procedure I followed. In these cases, I allowed the performers to improvise a non-periodic rhythm. The solvents and associated rhythms are shown in the image below.

Solvents and associated rhythms

Other dimensions generally had simpler mappings related to the larger-scale physical elements of the synthesis. The tempo of each movement maps directly to the temperature of the corresponding portion of the synthesis. The only real work was finding a reasonable maximum and minimum tempo to create an effective range across the piece.
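That mapping is just a linear rescale, something like the sketch below. The temperature and tempo ranges shown are placeholders, not the values used in the piece, and the same pattern covers the dynamics mapping described further down.

```python
# Linearly rescale a reaction temperature onto a playable tempo range.
# All numbers here are placeholders.

def rescale(value, in_low, in_high, out_low, out_high):
    return out_low + (value - in_low) / (in_high - in_low) * (out_high - out_low)

step_temperature_c = 110   # hypothetical reflux temperature for one step
tempo_bpm = rescale(step_temperature_c, in_low=-78, in_high=140, out_low=40, out_high=160)
print(round(tempo_bpm))
```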

The length of each movement is proportional to the length of that step in the synthesis. So, if the overall synthesis takes 10 hours (it’s actually much longer) and one step of it takes one hour, that step constitutes about 10% of the overall synthesis time, so the corresponding movement constitutes about 10% of the length of the piece.
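Sketched out, with invented step times:

```python
# Each step's share of the total synthesis time becomes its share of the
# piece's duration. Step lengths and total duration are placeholders.

step_hours = [1.0, 2.5, 0.5, 3.0, 1.5, 0.5, 0.5, 0.5]   # hypothetical, eight steps
piece_minutes = 30                                       # hypothetical total length
total_hours = sum(step_hours)
movement_minutes = [round(piece_minutes * h / total_hours, 1) for h in step_hours]
print(movement_minutes)
```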

Dynamics are mapped to the volume of the reaction: a large reaction is played loudly, and a small one equates to a softer dynamic. This also seemed appropriate since “loudness” can be thought of as the “size” of a particular sound. Again, the only work involved in this mapping was determining the size of each reaction step and finding a dynamic range that seemed reasonable for the music.

Lastly, if a recrystallization step occurred as part of a reaction within the synthesis, the players use extended techniques to brighten their tone, and pause between movements.


The Swarm

I’ve been learning and experimenting with generative art, machine learning, and algorithmic art and music lately. This is me trying out some approaches to swarm/flock/particle dynamics without using the boids algorithm. It’s a little glitchy as well because I’m apparently bad at screen capture. Kinda cool, though.

There’s, I think, a lot more stuff in this area to come. I’m learning a lot, and enjoying it.

First upload for #52fridays (yes, it’s a day late).


500th Anniversary Reformation Commission

This year I was commissioned to write a new work to celebrate the 500th anniversary of the Reformation. The commission came from Cherry Creek Presbyterian Church and Evan Mazunik. The work was performed on Reformation Sunday at CCPC, and the recording is here.

To The Sons of Korah celebrates the 500th anniversary of the Reformation. It is based on the Reformation hymn “A Mighty Fortress is Our God,” which is itself based on Psalm 46. The piece moves gradually through various dissonant collections toward a final, celebratory statement of the hymn itself. The two trumpets are also asked to play in separate parts of the performance space, only physically joining together in the middle of the piece. Both of these effects symbolize the unification and emergence of the church from a period of separation, corruption, and debauchery.

The title pays homage to the authors of Psalm 46, the Korahites, who were an important branch of singers among the Levites. I think it would be interesting for these ancient musicians to see the many different permutations their work has taken over the centuries, not least of all this one.

Unfortunately the recording itself wasn’t amazing because this was performed as part of a service, rather than a concert, but it’s still worth listening to.


Casper College New Music Days

In October I had the pleasure of going out to Casper, Wyoming, to work with Ron Coulter on presenting some of my music and some of my ideas about compression at Casper College New Music Days.

The experience was amazing, and there is at least one blog post still to come out of it, but I wanted to take the opportunity to share the performances from that weekend here. Enjoy!


A Composer’s Guide to Compression, Pt. 3

This is the final installment of a three-part series about using compression in recordings of classical music. In part one, I talked about why it’s important for composers to advocate that their music be recorded and mixed using compression. In part two, I discussed the technical side of this issue: what is compression and how does it work? In this last part, I want to provide some context about what the perceived reasons are for not using this technique on classical music.

Why isn’t this already happening?

It’s difficult for me to present the other side of this issue because I have never believed in it. I have an inherent bias against the way classical music is usually recorded, and I feel passionately that we should be doing it better. With that in mind, I think the best way for me to approach this part of the discussion is to present the arguments against using compression on classical music that I have found either the most convincing or the most frequently repeated, along with my rebuttal to each. For what it’s worth, I have never heard an argument that convinced me. If I had, I would put it here.

1. “Using compression on a concert recording makes all sorts of weird things about the concert hall and all kinds of background noises become much more evident in the recording than otherwise.”

This is absolutely 100% true. However, rather than being an argument against using compression, this is really an argument in favor of not recording music during a live performance at all, and instead recording in a studio space and close mic’ing all the instruments. See #4 below.

This isn’t to say that live recordings aren’t valuable. They are, and this is true in other genres as well. But they are a different kind of product for a different audience than studio recordings are. As a rule, live music and recorded music are different products and should be treated differently. See #3 below.

2. “I want my music to have a wide dynamic range. When I write ppp, I want it to be barely audible. When I write fff, I want it to be overpowering.”

This actually seems like a pretty convincing argument at first, and it’s true that using compression limits the dynamic range of a recording. The problem is that it ignores both a truth about the physical properties of sound and the need to compensate for the ways in which people listen to music.

In acoustic instruments, there is a relationship between perceivable overtones and amplitude. Any pitched sound (except a sine wave) contains a fundamental and numerous overtones that occur in a particular pattern above it. The presence and relative amplitudes of these overtones are what create timbre in musical sounds. As the frequency of these overtones increases, their relative amplitude decreases; and as the amplitude of the fundamental decreases, so do the relative amplitudes of the overtones. In short, louder sounds have more audible overtones than quiet ones.

This means that louder sounds have a different timbre than quieter sounds! So, in fact, when a composer writes ppp they’re not just writing a soft sound, they’re also writing a sound with an inherently different timbre. Increasing the volume of a prerecorded sound only makes that timbre louder; it doesn’t alter it. Your “barely audible” ppp will still have the timbre of a quiet sound no matter how loud we make it, and the fff will always have the timbre of fff even when the volume is turned all the way down. Therefore, we can assume that bringing a “barely audible” ppp up to a level that’s actually listenable won’t significantly change the perception of that sound.

The other part of this particular argument is really about the context in which we listen to music. If I am listening in my car, for example, I need to get the quietest sound above the noise floor (the volume of ambient sound) of my car to be able to hear it. If your ppp was recorded at 20 dB, in my Honda on the highway I need to turn it up at least 20 dB for it to be even “barely audible.” The problem is that doing so also makes the explosive fff that’s coming 20 dB louder. If that fff was recorded at 80 dB, now it’s 100 dB!

Good performers naturally make these kinds of adjustments when they play. If a hall is very large, ppp will be louder than it would in a small chamber setting. And fff will be quieter at a house concert than it will be in an amphitheater. Players adjust their dynamics to suit the space in which they perform. This is why we don’t write decibel numbers into the score instead of dynamic markings. The system for notating dynamics is designed to be flexible.

Unfortunately, this is not how recordings work. The relative distance between different dynamic levels is entirely fixed once it is recorded, and can’t be adjusted to suit the listening situation, so we need to provide a sufficiently limited dynamic range that listening is actually possible in a variety of situations. The only way to accomplish this is with compression.

3. “I do this sort of thing to classical music if I’m mixing a movie score. But never to a concert piece.”

Once upon a time I asked my Facebook friends to tell me who their favorite living composer was. Everyone, and I mean literally EVERYONE, who wasn’t deeply versed in contemporary classical music (and even some who were) named a movie composer. The fact that engineers mix movie music differently than concert music, and that everyone loves movie music, is not a coincidence! This is also not a question of compositional language or marketing. Experimental music gets made all the time, even by film composers (consider the scores for Interstellar or The Revenant), and it sells well because people listen to it when it is recorded correctly. This is a question of how listenable the music is in its recorded form. Concert music and film music are the same sounds made by the same instruments. Musically speaking, they are the same thing. The only difference is context. It’s foolish to think that there is something “special and different” about concert music as opposed to film music that necessitates a different technique when they are the same thing.

4. “People want to hear the natural sound of the hall the music is being played in. Compression destroys that.”

I have literally no idea where this came from. No one wants to hear the sound of the hall. The hall sucks. It’s full of coughing, sneezing, talking, cell phone carrying people. That’s not what anyone wants to hear on a classical record. They want to hear the music, not the hall.

Ok, I’ll grant you an audiophile or three who spent more money on their stereo system than their car, but this is at best a niche market. It’s fine to make recordings that cater to that market, but it doesn’t make any sense to record an entire genre a particular way with those three guys in mind. Other than them, if people wanted to hear the sound of the hall, they would be buying records of pop music “as recorded at Carnegie Hall” or whatever. They’re not doing that. And they’re certainly not buying records of classical music on that basis either.

5. “This is a genre of music for the concert hall, not for recordings.”

There are so many things wrong with this…

First of all, if this is true, why are we recording this stuff at all? Why is this even an issue?

Second, this attitude is another reason people see classical music as snooty and elitist. By saying classical music is only for the concert hall, you are also saying that it is only for people who can afford concert tickets, a suit, a babysitter, and a night off work. It further says that this music is only for people who live in a major metropolitan area where this kind of music is performed, or who can afford to travel to one.

At its best, this way of thinking is, indeed, elitist. At its worst, it’s racist.

Classical music is a beautiful, powerful art form that can and should be made, listened to, and appreciated by anyone and everyone. Recordings are the opportunity that we have to make our music approachable by those who would not normally have the opportunity to hear it. Recordings are the way that we can carry our music into new generations and inspire the people who will create and appreciate the classical music of the future. That isn’t true of the concert hall.

6. “People shouldn’t listen to classical music in cars or at work etc.”

They already do.

Again, this falls into the category of limiting your audience to those who can afford to spend a significant amount of time just listening to your music, or a significant amount of money on the equipment to make it not suck to listen to. I don’t even know musicians, people who are deeply passionate about music, who have that kind of time and money. Everyone has a family and friends and a job and a million places they have to be all the time. If we want people to listen to our art, we need to meet them where they are. This is especially true since every other genre is already successfully doing this and winning the ears and wallets of consumers far in advance of anything we put out.

I WANT PEOPLE TO HEAR MY MUSIC! I don’t care where they listen to it, or how, or why. These things don’t matter to me and they aren’t the reason that I write music. I write music so that people will listen to it, and that means that I have an obligation to my audience to make this as easy as I can for them.

____

These are the most common and most convincing reasons that people have given me for not recording and mastering classical music with a compressor. To me, they all seem to come down either to some incorrect assumption about the market for classical music, or to some fundamental misunderstanding about the nature of sound. None of them are convincing to me. What’s more, many of them amount to “this is just the way it’s done,” which is a bad excuse to do things in a lazy way.

That might not be true for you. If not, that’s fine. There are plenty of people who think like you and will make your music the way you want. But you need to know that you are limiting your audience and making your music unapproachable and hard to listen to. When people don’t listen to your music and don’t come to your concerts, you can’t blame them for “not understanding art” or “not being educated enough,” because those things aren’t true and never have been. What’s true is that you presented your product, which you have labored over for hours and hours, in a lazy way that people hate and don’t want to buy.

If you do agree with what I’m saying here, you should know this: those of us who write music and those who perform it are the ones that are driving the recording industry as it relates to classical music. We make industry decisions by spending our money in one place, and not another; by releasing one kind of sound and not another. This is what drives industry trends. We have a responsibility to ensure that our music is presented in the best possible way that it can be, or no one’s going to listen to it.

You have the power to change this kind of trend by taking control of how your music is presented to your audience. Be informed about these processes. Advocate for them to be done correctly by hiring people who will treat your art the way that you want it treated, and by NOT hiring people who try to convince you otherwise.

If classical music is dying, it’s because somewhere down the line we stopped caring about whether or not people listen to what we create. Either that or we never learned the tools that are necessary to compete on a technological level with what the rest of the music world is creating. Or, worse yet, we’ve stopped considering the fact that our music does, in fact, need to compete with the rest of what’s out there.

People want to listen to your music, but you need to let them.


A Composer’s Guide to Compression, Pt. 2

If you’re just tuning in, we’re in the middle of a three-part series about compression in the recording industry and how that applies to classical music. Part one discussed why this is necessary and why we should be advocating for this to happen on our recordings. Today, part two discusses the technical side of this problem by explaining what compression is and how it is used. Next week we’ll discuss the kind of pushback I’ve seen from those in the recording industry on this topic.

What is compression?

A compressor (the tool that engineers use to add compression to a signal) falls into a family of effects processors called dynamic effects. All this really means is that a compressor makes changes to the relative volume of a signal. Other dynamic effects are gating, expanding, and limiting. They’re all basically related and work around the same principles. Simply put, compression is a process that engineers use to limit the dynamic range of a signal. It makes things that are loud a little (or a lot) quieter, and makes things that are quiet a little (or a lot) louder. It does this by automatically sensing the level of a signal, reducing that level (attenuation) if it goes beyond a certain point (threshold), and then turning the overall volume of the signal back up to compensate for this reduction (output gain).

A compressor generally has three very important controls: threshold, ratio, and gain makeup (also sometimes called “out gain” and various other iterations). There will almost always also be several other controls (attack, release, knee etc.), but these three are where the main business of compression gets done.

Compression happens in a three-step process that is embodied by these controls. The threshold control sets the point at which gain reduction begins to occur. Let’s break that down a little bit. The image below is a waveform representation of an audio signal. The horizontal axis represents time, and the vertical axis represents volume. So, what we’re looking at is how the loudness of a signal changes over time.

You can see that this signal starts very quietly, and then something explosively loud happens before a more moderate volume level takes over. The first step in compression involves setting the threshold. In the image below, I have drawn a line at about 0.25 dB. Let’s say that this is where we set our threshold.

This means that any of the signal that goes beyond the red line (is louder than 0.25 dB) will cause the compressor to apply some gain reduction (turn down the volume).

This is where the ratio control comes in. The ratio determines how much gain reduction will be applied once a signal goes beyond the threshold level. This is expressed as a ratio of pre-attenuation to post-attenuation decibels. The control in the picture above has markings like 2:1, 4:1, etc. This means that if the ratio knob is set to 2:1, any signal that is 2 dB above the threshold will be attenuated until it is 1 dB above the threshold; any signal that is 4 dB above the threshold will be attenuated until it is 2 dB above the threshold; and any signal that is 1 dB above the threshold will be attenuated to 0.5 dB above the threshold. The same is true of 4:1: 4 dB in equals 1 dB out, 1 dB in equals 0.25 dB out, and so on.
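If it helps to see that arithmetic written out, here is a bare-bones sketch of the static gain curve, with no attack, release, or any of the other controls, just threshold and ratio:

```python
# The threshold/ratio arithmetic from above, working directly in dB.
# This is only the static curve, not a real-time compressor.

def compress_db(level_db, threshold_db, ratio):
    if level_db <= threshold_db:
        return level_db                       # below the threshold: untouched
    over = level_db - threshold_db            # how far past the threshold we are
    return threshold_db + over / ratio        # the excess is scaled down by the ratio

# The 2:1 examples from the text: 1, 2, and 4 dB over the threshold
# come out 0.5, 1, and 2 dB over it.
for db_over in (1, 2, 4):
    print(compress_db(-20 + db_over, threshold_db=-20, ratio=2))
```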

For demonstration’s sake, let’s apply a pretty significant ratio, about 5:1, to our waveform. Look what happens.

The very loud thing is now significantly quieter, but the volume of the other elements remains the same!

The last step is gain compensation. The idea behind this is that by turning down the loud parts of a signal, we lose a certain amount of overall volume, so we turn the signal back up post attenuation to compensate for that loss of volume. In this case, I’ll boost the signal back up so that the very loud part is about the same volume it was pre-attenuation. You might be asking why I would turn it back up after I just turned it down. Bear with me for just a moment…

After everything, the overall effect was to compress the dynamic range so that the quiet parts are actually louder, without losing any volume in the loud parts!

You might now be asking why we didn’t just turn the overall volume up. The simple answer is that we couldn’t, because that very loud thing wouldn’t permit it. If we had turned up the overall volume in an attempt to make the quiet parts louder, that loud thing would have gotten loud enough to overload and distort (or hurt someone’s ears, or blow someone’s speakers, etc.). This is why this doesn’t work well in the car: turning the overall volume up to make the quiet parts audible makes the loud parts unmanageable.

This is basically how compression works. There’s a lot of nuance that comes out of this process, but it isn’t necessary to discuss here. It’s also worth mentioning that the images above use relatively extreme settings so that you can actually see the different processes at work. More common compressor settings, the ones I actually advocate for, would be more subtle and more difficult to see in action.

As mentioned above, compressors also come in a lot of different flavors. One of the most important ones in this discussion is called a limiter. This is usually a tool that’s used in the mastering stage of the recording process, the final stage before audio can be released commercially. Mastering puts a final polish on the recording using EQ and makes sure that it is loud enough and that all the songs on an album are at the same level. This is where the limiter comes in. A limiter is like a compressor with its ratio permanently set at ∞:1. What this means is that, no matter how far above the threshold the music goes, the limiter will turn it down until it is AT that threshold. Imagine the scene in The Lord of the Rings when Gandalf shouts at the Balrog, “YOU SHALL NOT PASS!” That’s what a limiter does to an audio signal. This means that you can turn the signal going into the limiter UP and what comes out will still come out at the threshold level, so the result is a signal that is perceptually much louder than it was before limiting. Theoretically, a limiter provides you with the ability to make the quietest moment in your music exactly the same level as the loudest moment.
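In terms of the little sketch above, a limiter is just the ratio pushed to infinity:

```python
# A limiter never lets anything out above the threshold; it's equivalent to
# compress_db(level_db, threshold_db, ratio=float("inf")).

def limit_db(level_db, threshold_db):
    return min(level_db, threshold_db)

print(limit_db(-3, threshold_db=-10))   # comes out at -10: "YOU SHALL NOT PASS"
```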

This isn’t actually what I’m advocating for. There needs to be some amount of dynamic contrast in classical music. Crushing a recording to death so that there isn’t any actual contrast at all makes it unpleasant and tiring to listen to. But if the dynamic range of a recording is too large, it’s essentially impossible to listen to. What needs to happen is that the quiet end needs to be brought up to a reasonable level for the most common listening situations, without the loud end becoming oppressive. Compression is the only way to do this.


A Composer’s Guide to Compression, Pt. 1

Those who know me know that I am passionate about how classical music is recorded and that my belief is that it is often not recorded well. You may have even suffered through a rant of mine on this subject, or followed (or even been a part of) one of several discussions I have had on the subject on social media. I’d like to address this issue a little differently today. I think that part of the problem in this is that composers don’t understand what some of the major techniques used in the recording process are, how they work, and why they’re important to how their music is perceived. So, rather than arguing with engineers (which is usually who ends up on the other side of this conversation) I’d like to try and educate composers, and anyone else who’s interested, about one of the major elements that plays a key role in this debate: compression.

I’ll be dealing with this in three installments over the next three weeks. Today’s topic is why this process is important in the way that our music is sold and consumed by listeners. Next week will be a rather technical post about what exactly compression is and how we use it. Finally, the third installment will deal with some of the pushback that I’ve gotten on this issue.

It’s worthwhile to start with a sort of quick-and-dirty definition of compression. In its most basic sense, compression is a dynamic effect that engineers use to limit the dynamic range of an audio signal. Put simply, it makes loud things quieter and quiet things louder. It does this by turning down the volume when a signal gets loud, and then compensating for that gain loss by turning up the overall volume, thus reducing the overall dynamic range. Again, we’ll get into this more next week.

Compression is frequently used on individual tracks of a recording, but much of this conversation hinges around master compression, or compression that occurs in the final stages of creating a recording on the overall mix of several tracks. This kind of compression, which is frequently accomplished with a device called a limiter, is generally used to make the overall volume of a piece of music louder, without causing it to overload and distort.

Why do I want this?

The fact of the matter is that we need to be aware of what people expect from our product. Most people listen to music in their car, at the gym, or at work. In many cases, these are the only times that a person will listen to music at all. None of these situations is particularly conducive to listening to very quiet music. Most cars have an idle cabin noise level of about 40 dB (source), and this increases as the car travels faster. An office is at least equivalent and often louder (source). So, in order for someone to actually be able to hear your music in any of those situations, they need to turn the volume up above that level. Turning the overall volume up to make the quiet parts louder than the ambient noise also makes the loud parts louder by the same increment; if the range between quiet and loud is too large, the adjustment that needs to be made can make it difficult to listen to the music without frequently readjusting the volume.

I have frequently heard people complain about constantly having to adjust the volume control on their stereo when they listen to classical music. In fact, I’ve probably complained about this myself. It’s an extremely common complaint. This is a problem that stems directly from not using compression in the recording process.  This might not seem like that big of a deal, but you have to look at what that really means.

People hate this.

It’s annoying.

When you don’t use compression on your recordings, you are asking for someone to go through something that annoys them, which they hate, in order to be able to listen to your music. So you’re making it hard for people to be able to hear what you spent all those hours working on.

People are also lazy. Unless they have an investment in listening to your music (like, if they’re your mom), if you don’t make it easy for them, they won’t do it. Think about the last time you did something you hated. Would you have been willing to pay for that experience? Would you do it again?

Additionally, you simply can’t really ask people to change their listening habits to suit your needs. Recordings are one of the primary ways we have of distributing our music to a wider audience. In fact, other than concerts, which have an inherently limited scope, recordings are the ONLY way we have of reaching any audience that isn’t substantially musically literate. If you want people to listen to recordings of your music, you have to meet them where they are or they are going to go somewhere else. This is one of the major reasons that classical music is labeled as snobby, elitist, and pretentious: because only people with the time to listen to it in their homes while they do nothing else, or to go to a concert, or with the money to listen to it on expensive stereo equipment that applies compression internally are actually ABLE to listen to it in a meaningful way.

We also have to consider the larger musical market place: in essentially every genre of music EXCEPT classical, using compression in the recording process is the norm. In many cases, using hugely intense, destructive compression is the expectation.  In fact, for a few decades a thing went on in the recording industry called “the loudness wars” wherein engineers were pushing the limits of what they were putting out, always trying to make it louder and louder. You can read more on that here if you’re interested. What I advocate for is really a much more gentle iteration of this process that is designed simply to make music listenable to a more general audience. If we don’t provide our audience with a listening experience that they enjoy, they will go somewhere else that provides it, and the entire rest of the music industry is already doing that.

The person whose money we’re all competing for, the person who listens to music in their car and hates adjusting the volume knob over and over again, is presented with a choice: listen to your classical music, which requires them to either step outside their normal listening habits or suffer through turning the volume up and down again and again, or listen to something else that has been compressed and that they can put on in their car and simply enjoy. This is the choice they make when they spend their dollar. What do you think they’ll actually choose? We need to stop pretending we aren’t in direct competition with the rest of the music industry. We ARE. There’s nothing special about classical music that gives it a pass to exist without an audience. It will die if we do not provide better stewardship of it.


Alpha Performances

Alpha, a new work commissioned by the Keith/Larson Duo, has a bunch of performances coming up. I’m really excited to see this piece come to life in such capable hands.

Feb. 17th 7pm @ Louisville Center for the Arts (801 Grant Avenue, Louisville,  CO 80027) http://www.louisvilleco.gov/visitors/center-for-the-arts

Feb. 19th 6pm @ Church of the Ascension (600 Gilpin St, Denver, CO 80218-3632)
http://www.ascensiondenver.org/

Feb. 22nd 7pm @ Mutiny Information Café (2 So. Broadway, Denver, CO 80209)
https://www.mutinyinfocafe.com/

(Plus a bonus performance of Terry Riley’s In C and Louis Andriessen’s Worker’s Union that I’ll be playing on)


ZAHA Disklavier Performance (UPDATE w/ VIDEO!)

A concert is upcoming very soon at Cherry Creek Presbyterian Church which will feature Evan Mazunik’s ZAHA soundpainting ensemble. Click here or here for more details.

For this concert, Evan asked if I would do some max/msp programming to make it possible for the disklavier that the church owns to be played by a computer.

For those who might not know, a disklavier is the modern version of a player piano. (You can learn more here.) Conrad Kehn turned me on to the idea that a disklavier can take midi input from a computer via finale and max/msp.

The goal of the specific programming that I’m doing is really to take control of the piano out of Evan’s hands. It’s all about these layers upon layers of random and stochastic decisions that are made by the computer and fed into the piano. This is a project that I’m really excited about as it embodies an element that tends to be consistent across much of my work: the marriage of digital technology with acoustic instruments.
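The actual patch is built in max/msp, so I won’t reproduce it here, but the basic idea, sketched in a few lines of Python using the mido library and a made-up port name, looks something like this:

```python
# A rough analogue of the idea: layered random decisions generating MIDI that
# gets sent to the Disklavier. The port name is hypothetical; the real patch
# lives in max/msp, not Python.
import random
import time
import mido

with mido.open_output("Disklavier") as port:       # hypothetical port name
    for _ in range(32):
        note = random.randint(21, 108)             # anywhere on the 88 keys
        velocity = random.choice([20, 45, 70, 95]) # a second layer of random choice
        port.send(mido.Message("note_on", note=note, velocity=velocity))
        time.sleep(random.uniform(0.05, 0.6))      # stochastic timing
        port.send(mido.Message("note_off", note=note))
```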

I’ll be making some videos of the disklavier in action and updating this blog with them as the project goes on. Check back for more.

!!UPDATE!!
One of the things I’ve been considering as I work on this project, and really since Conrad brought this up the first time, is what kinds of things a disklavier can do that a person playing a piano cannot. Generally this falls into two categories: speed, and density.
Obviously the “ultimate density” on a piano is all 88 keys being played simultaneously.
Unfortunately, even robot pianos have limitations. And telling one to play all 88 keys at once is one of those limitations.
When I asked it to do this, it did, indeed, do it.

Once.

And the result was impressive.

And then it wouldn’t play anything at all.

Fortunately the Yamaha tech support team is really helpful. So I learned that when you ask a robot piano to do something a little crazy like play all 88 keys at once, this happens:

Yep. That’s a blown fuse. Like, EXTRA blown.

So when that happens, this has to happen:

But the good news is that once both of those things happen, this can happen:

and this:

and this:

You can watch more videos of this in action on my youtube channel here.

And make sure you come to see ZAHA next weekend!


Improvisation in Stockhausen’s Solo

Years ago I wrote a paper on a piece by Stockhausen called Solo. The paper itself was long and boring, so I’ll spare you a reproduction of it here. I recently suffered through a rereading of it and discovered that there are some interesting thoughts in it about improvisation which I do find worthwhile to explore a bit. One of the most interesting things about Solo is the methodology of improvisation that it asks the player to use, which I believe is a very rare kind of improvisation.

It’s a bit difficult to describe Solo briefly since it is such a complex work. Solo is an electroacoustic piece for a single player and feedback delay. The delay times are much longer than those we usually associate with delay as an effect, which tend to be in milliseconds. Rather, the delay in Solo uses times of multiple seconds, so a whole phrase, or several, can be repeated by the delay after the performer has played it.

A page of notation from Solo

The notation consists of six form schemes and six pages of notated music. An example of a page of notation is shown above, and a form scheme is shown below. The player is instructed to letter the pages of notation A-F and place them in order. Since the lettering is left up to the player, the order of the pages ends up being more or less arbitrary. Stockhausen then refers the player to different divisions of the material on each page. Specifically, pages, systems, parts, and elements. Pages and systems have the same definitions that they would in other notated music. Stockhausen defines a “part” as any group of notes contained within a pair of bar lines. This is not called a “bar” or a “measure” simply because the printed music contains both proportional and mensural notation. An “element” is any single normally printed note, any grace note by itself, any group of grace notes, or any single grace note and its associated normally printed note.

A form scheme from Solo

The form schemes represent the way in which the player will interpret the notated music. For a performance, only one form scheme is selected to be played. Each form scheme is broken into smaller sections made up of cycles and periods. A cycle is the group of periods between two letters as determined in the form scheme. Each form scheme has six cycles, which are lettered to correspond generally to the similarly lettered page of notation. So, cycle A is the first cycle of periods on all of the form schemes and generally will contain material from page A of the notation. Periods are smaller groupings within cycles which have time values in seconds assigned to them based on the delay time of the electronics for the corresponding cycle. So, as we can see in the image taken from form scheme II below, in cycle A there are nine periods of twelve seconds each. Within cycle B there are seven periods of twenty-four seconds each, and so on.

The top of form scheme II

A performance of Solo is never a “start at measure one and play to the end” kind of endeavor. Rather, the player is at liberty to select portions of each page to play in a given cycle. Below each cycle there is a group of symbols that tells the player relatively loosely how they should perform the music for that cycle. Stockhausen calls these “what,” “where,” and “how” symbols. A “what” symbol tells the player what size of gesture they should select (systems, parts, or elements); a “where” symbol tells the player from where they should select these gestures (from the current page, the current and the following page, the current and the previous page, or all three); a “how” symbol tells the player how the gestures they select should relate to each other (different, the same, or opposite). The criterion for the “how” symbol is up to the player. So, the player might decide that the “how” symbol relates to pitch. In this case, the “same” symbol would indicate that the gestures within a cycle should all have more or less the same pitch range.
Two additional symbols indicate the length of time a player may pause between periods, and how the player should attempt to relate to the electronics part within a cycle.

The image below is from cycle B of form scheme V. These particular symbols indicate that, within this cycle, the player must draw musical material made up of parts, from pages A, B, and C, which are either the same or different, with medium pauses following each part, and entrances staggered so as to create a polyphonic texture with the electronics.

The symbols for cycle B of form scheme V

So, in actual performance, the player might play one part from page B, then one from page C, another from page A, another from page B, and so on until they had filled a 45-second period from the cycle. Then the player can take a medium pause before continuing the same process, trying to create a polyphonic texture as the electronics play back what they played during the previous period.
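To make the procedure a little more concrete, here is a toy simulation of filling one 45-second period in that cycle. The page contents and part durations are invented placeholders; the real decisions are, of course, made by the performer, not by a random number generator.

```python
# Toy model of cycle B, form scheme V: draw "parts" from pages A, B, and C
# until a 45-second period is filled. All contents and durations are invented.
import random

pages = {
    "A": [("A1", 6), ("A2", 9), ("A3", 4)],   # (part label, rough duration in seconds)
    "B": [("B1", 7), ("B2", 5), ("B3", 11)],
    "C": [("C1", 8), ("C2", 6), ("C3", 10)],
}

def fill_period(length_s=45, allowed_pages=("A", "B", "C")):
    elapsed, chosen = 0, []
    while elapsed < length_s:
        page = random.choice(allowed_pages)
        part, seconds = random.choice(pages[page])
        chosen.append(part)
        elapsed += seconds
    return chosen

print(fill_period())
```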

Whew! Remember when I said it was difficult to describe this piece simply? There’s actually quite a bit more to the performance of the piece (for example, we haven’t really discussed the electronics at all!), but I think that’s all you’ll need to know for now.

Solo represents an excellent example of what I would call “composed improvisation.” The term itself seems like an oxymoron, but the concept is actually much more common than one might think. For example, virtually all ‘traditional’ jazz is composed improvisation. Jazz players are generally given, or have learned, some kind of chart or lead sheet which contains the chord changes and melody of a piece, and then improvise based on that information.

A jazz lead sheet

In fact, it’s fairly common for this same kind of controlled improvisation based on notation to occur in contemporary classical music as well. What I have seen most commonly, and have used the most in my own music, is a section wherein only pitches are notated and everything else is left to the player to decide. An example from my music is shown below. Note that the given pitches can be used in any order, in any octave, with any rhythm, dynamic, articulation and so on.

An excerpt from one of my pieces with pitch-only notation

These are by no means the only ways that notated improvisation can occur. There are probably as many different ways to utilize these kinds of ideas as there are composers using them. But Solo is actually an example of something very rare in the world of composed improvisation. To work out what that is, we have to take a quick step back.

Music is fundamentally organized into a series of impulses. A note begins on an impulse. That note can be combined with other notes into a larger phrase, which has its own larger impulse. That phrase is then grouped with other phrases to form a section, which has its own, still larger impulse. Sections can be grouped into a large form which we might call a movement, or a complete work, each of which also has its own much larger impulse. Sometimes people refer to this concept of grouping things into larger and larger impulses as “the big beats” of music. I’m deliberately avoiding the word “beat” here because it can be misleading.

This concept is actually alluded to in a TED talk by Benjamin Zander, which you can watch below, and is stated more scientifically by Stockhausen himself in an essay that appears in Perspectives on Contemporary Music Theory, edited by Benjamin Boretz and Edward T. Cone.

Composed improvisation can generally be organized into three levels, based on which level of impulse the player is allowed to improvise with and which levels have been predetermined. In the first level, the form and the phrases are both predetermined, but the specific notes which are played are up to the performer. In the second level, the form and the specific notes are determined, but the phrases which are constructed out of those notes are up to the performer. In the final level, specific notes and phrases are determined, but the form of the piece is left to the performer.

So, the two forms of composed improvisation that we have discussed thus far are both level-one improvisation. Consider jazz improvisation: the form of the piece and the phrase structure are already given based on the notation within the chart, but exactly which notes are played when is up to the player to decide. Specific notes are undetermined, but the larger impulses are predetermined.

An example of third-level improvisation is the “open form” music found in some of the works of Pierre Boulez, as well as numerous works by Stockhausen (Zyklus and Licht, for example). In this kind of improvisation, while entire sections of notes and phrases are specifically notated, the order in which those sections occur is determined by the performers.

Solo is a rare example of level-two improvisation, in which specific notes and gestures are determined, as is the overarching form, but the way those notes and gestures are organized into phrases is left to the player. I have not yet encountered another piece of composed improvised music that contains large-scale, level-two improvisation, even among Stockhausen’s works. What’s more, a performer’s understanding that this work functions as level-two improvisation is imperative if a performance is to faithfully represent Stockhausen’s intentions for Solo.

For those interested in hearing Solo, below is a recording of me and horn player Briay Condit playing this piece.

The fact that this work is, as far as I am aware, unique in the world of improvised music makes it all the more meaningful to the canon, and likely explains why the work is so notationally involved and difficult for performers to meaningfully understand. And, frankly, this only begins to deal with the things about this work that are fascinating and misunderstood, which probably explains why my previous paper was so long and boring… perhaps more on this another day.