‘Cantor’ at the 2014 MAT End of Year Show

For the 2014 MAT End of Year Show, my piece “Cantor” was performed in the audiovisual concert. Here are the program notes:

“In ‘Cantor’, the mathematical process of removing middle thirds to generate the Cantor ternary set (introduced by mathematician Georg Cantor in the 1870s) inspires a musical process across multiple timescales. Different levels and mappings are layered together to create a sort of Cantor mosaic.”

And the piece itself:

[Audio clip]

You can read about some of my earlier projects and explorations with the Cantor set here and here.

Music Information Retrieval tools for Algorithmic Composition

The field of music information retrieval (MIR), while relatively young, abounds with a variety of interesting mathematical techniques. Many of these techniques take as a point of departure a collection of ‘features’ or ‘feature vectors’; e.g., spectral flux, windowed root-mean-square (RMS) energy, short-time Fourier transform vectors, etc. The hope is to recognize something like genre or mood from properties of these feature vectors.

One concept that is occasionally used is the self-similarity matrix (SSM). Assume that we begin with a set of n feature vectors v_1, \dots, v_n for which we would like to construct the corresponding SSM. By definition, this is the n \times n matrix whose entry at position (i, j) is given by a ‘similarity’ measure (e.g., Euclidean distance) between vectors v_i and v_j. Here is a graphical example of an SSM computed from approximately 100 windows of a spectrogram:

[Image: SSM]
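
For concreteness, here is a minimal sketch (in Python with NumPy, not the exact code behind the figure above) of how such an SSM might be computed from a stack of feature vectors:

```python
import numpy as np

def self_similarity_matrix(features):
    """Pairwise Euclidean distances between rows of an (n, d) feature array."""
    diffs = features[:, None, :] - features[None, :, :]   # shape (n, n, d)
    return np.sqrt((diffs ** 2).sum(axis=-1))              # shape (n, n)

# e.g., an SSM over ~100 spectrogram windows (random data as a stand-in)
windows = np.abs(np.random.randn(100, 513))
ssm = self_similarity_matrix(windows)
```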

 

This matrix may remind you of a related concept from probability. Given a set of states, we can define the stochastic matrix, which determines with what probability we will transition from one state to another. In other words, the entry at position (i, j) of the stochastic matrix is given by the probability of transitioning from state i to state j . Here is an example of a stochastic matrix (note how the rows sum to 1 ):

[Image: stochastic matrix]

Given an SSM, we can obtain a stochastic matrix by normalizing each row to sum to 1. A stochastic matrix can then drive an algorithmic process such as a Markov chain. The main question is determining how to map the different ‘states’ into audio. One approach, assuming the SSM has been computed from a spectrogram, is to map each window to a stochastic frequency cloud, where the various frequencies occur in proportion to the energy in the corresponding bin. Here’s a basic audio example:

[Audio clip]
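
A rough sketch of that mapping (not the exact code behind the clip above) might look like this: row-normalize the SSM and then walk a Markov chain over the spectrogram windows.

```python
import numpy as np

def to_stochastic(ssm):
    """Normalize each row of an SSM so that it sums to 1."""
    row_sums = ssm.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0              # guard against all-zero rows
    return ssm / row_sums

def markov_walk(transitions, start=0, steps=32, rng=None):
    """Sample a sequence of window indices from a row-stochastic matrix."""
    rng = rng or np.random.default_rng()
    state, path = start, [start]
    for _ in range(steps):
        state = rng.choice(len(transitions), p=transitions[state])
        path.append(state)
    return path

# Each visited window could then be rendered as a 'frequency cloud', with
# partials drawn in proportion to the energy in that window's spectrogram bins.
```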

The previous example focused mostly on frequencies, but we can use other typical MIR features to map to other musical parameters, such as amplitude envelopes. For instance, we can compute a windowed RMS of an audio sample to get an idea of where energy peaks are located in time. We may then use the RMS signal itself as an envelope, creating a sound whose contours follow the energy of the original sound through time. Here’s an example:

 

[Audio clip]
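
A minimal sketch of this idea, assuming a mono NumPy signal and a hop size of 512 samples (assumptions for illustration, not the settings used for the clip above):

```python
import numpy as np

def windowed_rms(x, win=1024, hop=512):
    """Windowed root-mean-square energy of a mono signal x."""
    starts = range(0, len(x) - win + 1, hop)
    return np.array([np.sqrt(np.mean(x[i:i + win] ** 2)) for i in starts])

def apply_rms_envelope(target, rms, hop=512):
    """Shape another sound with the RMS contour (linear interpolation between frames)."""
    envelope = np.interp(np.arange(len(target)), np.arange(len(rms)) * hop, rms)
    return target * envelope
```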

Another musical parameter we can control is rhythm. By locating relative extrema (using a simple first-difference approach), we can also approximate onsets. The spacing between those onsets can then be mapped into rhythms. In my example, I randomly choose a spacing with weight corresponding to the intensity of the onset.

[Audio clip]
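
Here is one way such a mapping might look (a sketch of the general idea rather than the code behind the clip; the hop size and sample rate are assumed values):

```python
import numpy as np

def onset_peaks(rms):
    """Local maxima of the RMS signal, found with a simple first-difference test."""
    d = np.diff(rms)
    idx = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    return idx, rms[idx]

def sample_rhythm(idx, heights, n_events=16, frame_dur=512 / 44100, rng=None):
    """Draw inter-onset spacings at random, weighted by the intensity of each onset."""
    rng = rng or np.random.default_rng()
    spacings = np.diff(idx) * frame_dur          # spacing (in seconds) after each onset
    weights = heights[1:] / heights[1:].sum()    # weight by the following onset's height
    return rng.choice(spacings, size=n_events, p=weights)
```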

Finally, there is the parameter of form. Since music typically evolves through time, it is aesthetically important to devise some system whereby the music changes as it unfolds. A purely algorithmic approach, based on self-similarity, is intriguing: starting with an initial SSM, we generate a few seconds of audio using the previously described techniques. That audio is then analyzed to generate a new SSM, which in turn drives new audio. Other data such as RMS can also be updated at each pass. The piece essentially composes itself forever. Of theoretical interest is the question of whether it converges to some steady state (or perhaps ‘converges’ into a steady loop).
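
In outline, the feedback loop might look something like this, reusing the helpers sketched earlier; `synthesize` and `analyze` are hypothetical stand-ins for the synthesis and analysis steps just described:

```python
def self_regenerating_piece(spectrogram, synthesize, analyze, passes=100):
    """Alternate between generating audio from an SSM and re-analyzing that audio.

    `synthesize` and `analyze` stand in for the synthesis and analysis steps
    described above (frequency clouds, envelopes, rhythms; spectrogram analysis).
    """
    for _ in range(passes):
        ssm = self_similarity_matrix(spectrogram)   # from the earlier sketch
        transitions = to_stochastic(ssm)            # row-normalized SSM
        audio = synthesize(transitions)             # a few seconds of new material
        spectrogram = analyze(audio)                # ...which seeds the next pass
        yield audio
```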

Here is an étude based on combining several of the aforementioned ideas:

[Audio clip]

The Devil’s Staircase

The Cantor function, mapping the unit interval to itself, is a relative of the Cantor set, which was described previously on this website. Being uniformly continuous but not absolutely continuous, it is a classic mathematical counterexample: although its derivative is zero almost everywhere, its range still encompasses the full unit interval. Wikipedia has a nice pictorial representation of the iterative process that defines it.

 

[Image: Cantor iteration, courtesy of Wikipedia]

 

One way to understand the Cantor function is to consider the Cantor set probabilistically: the Cantor function is the cumulative distribution function of a random variable uniformly distributed on the Cantor set. But we can also define the Cantor function directly. Consider the Cantor set as the set of all numbers in the unit interval with a ternary expansion whose entries are all either 0 or 2. As mentioned previously, there is then a bijection between the Cantor set and the full unit interval obtained by ‘pretending’ the 2s are 1s and interpreting the numbers as binary expansions. Hence, for an input x that belongs to the Cantor set, we define c(x) via this correspondence. We then extend the Cantor function to inputs x that do not belong to the Cantor set as follows: since x does not belong to the Cantor set, its ternary expansion must contain a 1. By truncating x immediately after the first 1 in its expansion, ‘pretending’ any previous 2s are 1s, and then interpreting the result as a binary expansion as before, we obtain the appropriate output c(x).
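
That definition translates almost directly into code. Here is a minimal sketch using a fixed number of ternary digits (an illustration, not the SuperCollider code discussed below):

```python
def cantor_function(x, digits=32):
    """Approximate c(x) for x in [0, 1] by walking the ternary expansion of x.

    A ternary 2 contributes a binary 1, a ternary 0 contributes a binary 0,
    and the expansion is truncated just after the first ternary 1 (which
    contributes a final binary 1), exactly as described above.
    """
    value, scale = 0.0, 0.5
    for _ in range(digits):
        x *= 3
        digit = min(int(x), 2)      # next ternary digit (clamped so x = 1.0 works)
        x -= digit
        if digit == 1:
            return value + scale    # truncate after the first 1
        if digit == 2:
            value += scale          # 'pretend' the 2 is a binary 1
        scale /= 2
    return value
```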

I’ve recently written a Cantor function UGen in SuperCollider for use in digital audio. The UGen approximates the Cantor function with a recursive sequence of functions that converges to it pointwise, and the user can control the depth of the recursion for more or less precision. Here are a few very primitive demonstrations: to illustrate the recursion, I’ve taken a sine oscillator and modulated its frequency from 200 to 600 Hz using different convergents of the Cantor function. Here they are, for n = 0, 1, 2, and 3:

n = 0: (equivalent to one straight line between 200 and 600; i.e., no plateaus)

[Audio clip]

n = 1: (comprises three straight lines, the middle one being completely flat; i.e., one plateau)

[Audio clip]

n = 2: (three plateaus connected by straight lines)

[Audio clip]

n = 3: (seven plateaus connected by straight lines)

[Audio clip]

(In general, there will be 2^n - 1 plateaus.)
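
For reference, here is a sketch of the recursion behind those convergents (in Python rather than SuperCollider, and not the UGen code itself): f_0(x) = x, and each f_{n+1} squeezes a half-scale copy of f_n into each outer third, with a plateau at 1/2 over the middle third.

```python
def cantor_convergent(n, x):
    """n-th convergent of the Cantor function on [0, 1].

    f_0(x) = x, and f_{n+1} places half-scale copies of f_n over the outer
    thirds with a flat plateau at 1/2 over the middle third, giving the
    2^n - 1 plateaus mentioned above.
    """
    if n == 0:
        return x
    if x < 1 / 3:
        return 0.5 * cantor_convergent(n - 1, 3 * x)
    if x <= 2 / 3:
        return 0.5
    return 0.5 + 0.5 * cantor_convergent(n - 1, 3 * x - 2)

# Sweeping x from 0 to 1 and mapping the output to 200-600 Hz reproduces the
# frequency trajectories heard above, e.g. freq = 200 + 400 * cantor_convergent(3, x)
```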

Of course, those examples aren’t especially interesting from a musical perspective. However, I found that the UGen was quite expressive when used at control rate to manipulate such parameters as envelopes and rhythms. Here is a recording of an improvisation I generated in SuperCollider, using almost exclusively the Cantor UGen to shape the sounds. (The actual audio-rate UGen is a pulse oscillator, and the end result is put through a reverberator.)

[Audio clip]

If others are interested in trying out the Cantor function UGen, I will post the code shortly!


Granular experiments

Recently, I’ve been exploring new ideas based on basic granular synthesis. As part of my work for a digital audio programming course, I composed a short three-part work, ‘Sweep’, which is largely experimental but based on some of these ideas. The audio is generated by a basic sequencer that we designed in C, using the granular synthesis module from an earlier course taught by Andrés Cabrera. The code reads a file in Csound score format (in this case, one generated by a Python script) and sequences it in real time. Each movement is generated from the same Python script, with a few of the basic parameters modulated. The idea is to be able to create lots of different pieces in a modular fashion, so this is just one particular possible realization.
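
As a rough illustration of the pipeline (the p-field layout and file name here are hypothetical; the real score uses whatever fields our sequencer module expects), a Python script can emit Csound-style ‘i’ statements for the C sequencer to read:

```python
import random

def write_score(path, n_grains=400, dur_total=20.0, freq_range=(200.0, 2000.0), seed=1):
    """Emit Csound-style 'i' statements, one per grain, for the sequencer to read.

    The p-field layout here (onset, duration, amplitude, frequency) is a
    hypothetical example, not the exact format used for 'Sweep'.
    """
    random.seed(seed)
    lo, hi = freq_range
    with open(path, "w") as score:
        for k in range(n_grains):
            onset = dur_total * k / n_grains                  # evenly spread onsets
            dur = random.uniform(0.02, 0.2)                   # short grain durations
            amp = random.uniform(0.1, 0.5)
            freq = lo + (hi - lo) * k / n_grains              # slow upward sweep
            score.write(f"i1 {onset:.3f} {dur:.3f} {amp:.2f} {freq:.1f}\n")

write_score("sweep_movement.sco")   # vary the parameters above for each movement
```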

Movement I:

[Audio clip]

Movement II:

[Audio clip]

Movement III:

[Audio clip]

A bolt from the blue

The following game was played in the ICC 5-minute pool against an opponent whose FIDE rating is 2071. I don’t normally annotate or show blitz games that I’ve played for obvious quality reasons, but I uncorked an extremely snazzy tactic in this game that I can’t help but share. (To my opponent’s credit, he was very gracious and also congratulated me on the move.) A steady diet of exercises based on ‘invisible’ moves seems to have finally paid off!

*Spoiler alert* The key position is after White plays 18. g3, so you can try to work it out for yourself if you like! Of course, knowing the critical moment ahead of time makes any tactic easier to spot, which is part of why finding moves like this during a game is so difficult!

Barkley and Doppler

Recently we’ve learned a bit about physical modeling of PDEs in my pattern formation class. Here is a video depicting turbulence in the Barkley model, a type of reaction-diffusion system.

On an unrelated note, here are a few of my results from implementing a simulation of the Doppler effect. In our first example, a source is traveling at about Mach \frac{1}{10} horizontally from right to left across a distance of 200 meters. When it is directly in front of us, the distance between us and the source is 10 meters.

[Audio clip]

In the second example, the source travels twice as fast, twice as far, and we are twice as close. The shift is correspondingly more dramatic.

[Audio clip]

One of the more counterintuitive aspects of the Doppler effect is the situation in which the source is coming directly at us. We never really experience this in real life (unless we are actually getting run over by the source). In this situation, the perceived pitch does not slide continuously from a higher frequency to a lower one but rather makes a discrete jump. (This is because the Doppler effect depends on the projections of the velocity vectors of the source and the receiver onto the line connecting the two; unless the source and receiver are traveling directly toward each other, these projections vary continuously. When they do travel directly toward each other, the projections are constant until they meet, at which point they jump discretely to new values.) In the particular example I have implemented, the source is traveling at Mach \frac{1}{4} . (Warning: this example might be somewhat uncomfortable to listen to!)
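
A minimal sketch of the underlying computation (not my actual implementation): project the source’s velocity onto the line from the source to the listener, and scale the emitted frequency by c / (c + v_s), with the projection taken negative while approaching.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air (assumed value)

def observed_frequency(f_emitted, source_pos, source_vel, listener_pos):
    """Doppler-shifted frequency heard by a stationary listener.

    v_s is the projection of the source velocity onto the source-to-listener
    line, negative while approaching and positive while receding, so the
    observed frequency is f * c / (c + v_s) as described below.
    """
    direction = listener_pos - source_pos
    direction = direction / np.linalg.norm(direction)
    v_s = -np.dot(source_vel, direction)          # negative when approaching
    return f_emitted * SPEED_OF_SOUND / (SPEED_OF_SOUND + v_s)

# In the fly-by examples the projection changes smoothly, so the pitch glides;
# head-on, it is constant until the source passes, so the pitch jumps.
```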

[Audio clip]

Notice that we heard what sounded like a major sixth! This is actually a pretty straightforward consequence of the mathematics behind the Doppler shift. The emitted frequency, f, is scaled by a factor of \frac{c}{c + v_s} , where c is the speed of sound in the relevant medium and v_s is the velocity of the source along the line to the listener, with the convention that it is negative as the source approaches you and positive as it recedes. Hence, the frequency ratio between the approaching sound and the receding sound is given by \frac{c + |v_s|}{c - |v_s|} . In particular, at Mach \frac{1}{4} , this ratio is \frac{\frac{5}{4}}{\frac{3}{4}} = \frac{5}{3} , which is a just major sixth. Other musical intervals are, of course, also easily obtainable; Mach \frac{1}{3} yields an octave, for instance. In general, the reader can verify with some basic algebra that in order to get the frequency ratio P:Q, the source must travel at Mach \frac{P - Q}{P + Q} .
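
The algebra is easy to check numerically (a quick sketch):

```python
def flyby_ratio(mach):
    """Frequency ratio between the approaching and the receding sound."""
    return (1 + mach) / (1 - mach)

def mach_for_interval(p, q):
    """Mach number that makes the fly-by ratio equal to P:Q."""
    return (p - q) / (p + q)

print(flyby_ratio(0.25))        # 1.666... = 5/3, a just major sixth
print(flyby_ratio(1 / 3))       # 2.0, an octave
print(mach_for_interval(3, 2))  # 0.2, i.e. Mach 1/5 gives a perfect fifth
```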

You might have noticed the expression c - v_s in the denominator above and wondered about the situation in which we are travelling Mach 1 or faster, but that’s another story for a future post.


Media mashup

I’ve been working on a bunch of small projects recently that I wanted to share. The first two are related to visualizing Newton fractals in the complex plane. One is a video showing what happens when you move around the three zeros of a cubic polynomial while coloring its corresponding Newton fractal, while the other shows the Newton fractal generated from the function f(z) = \sin z .
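
For readers unfamiliar with the construction, here is a bare-bones sketch (not the code behind the videos, and with the three zeros fixed at the cube roots of unity rather than moving around): iterate Newton’s method on a grid and color each point by the root it converges to.

```python
import numpy as np

def newton_fractal(width=500, height=500, extent=2.0, iterations=40):
    """Index of the root of z^3 - 1 that Newton's method reaches from each grid point."""
    roots = np.exp(2j * np.pi * np.arange(3) / 3)          # the three cube roots of 1
    xs = np.linspace(-extent, extent, width)
    ys = np.linspace(-extent, extent, height)
    z = xs[None, :] + 1j * ys[:, None]
    for _ in range(iterations):
        z = z - (z ** 3 - 1) / (3 * z ** 2)                # Newton step for z^3 - 1
    return np.abs(z[..., None] - roots).argmin(axis=-1)    # nearest root = color index
```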

The next image is a basic implementation of diffusion-limited aggregation.

I also experimented a bit with an iterative edge detection procedure on MAT’s canonical bunny image, which yielded the following video.

In the audio world, I’m currently experimenting with ideas based on granular synthesis and the Doppler effect, so stay tuned for future content!


Fractal Colorings

Last time, we saw some fairly primitive images of the Mandelbrot set. In particular, the coloring scheme was somewhat gaudy: there were noticeable discrete bands in all the images. The reason was in the code, where I had set the value of each pixel to the integer iteration count at which its orbit escaped, creating a discrete range of possibilities. Luckily, there are fancier coloring methods that smooth out these discontinuities. While there are a ton of different implementations out there, I’m just going to mention two: the Hubbard-Douady potential and the normalized iteration count.

In the Hubbard-Douady method, we treat the Mandelbrot set as though it were an electrically charged object and consider the electric potential that it induces. This potential turns out to be \phi(z) = \lim_{n \rightarrow \infty} \frac{\log|z_n|}{2^n} , where z_n is the value of z after n iterations. So one option for smoother coloring is to assign to pixel z the value \phi(z) , or an approximation of it for a large value of n. Here’s an example (following the sketch below).
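
In code, the potential might be approximated like this (a minimal sketch, assuming the usual escape-time loop rather than the exact program behind these images):

```python
import math

def hubbard_douady_potential(c, max_iter=200, bailout=1e10):
    """Approximate phi(c) = lim log|z_n| / 2^n by iterating until |z_n| is huge."""
    z = c
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > bailout:
            return math.log(abs(z)) / 2 ** n    # the limit is nearly reached here
    return 0.0   # never escaped: the potential is 0 on the set itself
```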

 

In the normalized iteration count method, we replace the discrete integer counts used before with a continuous analogue. There are a few variants of this formula, but the simplest maps each point to the real number r = n - \log_2(\frac{\log|z_n|}{\log R}) , where R is the bailout radius and z_n is the first iterate that escapes the disk of radius R centered at the origin. (One way to understand this is to see that it is equivalent to choosing a real value r which satisfies \frac{\log R}{2^r} = \frac{\log|z_n|}{2^n} .) This formula gives real values of r ranging from 0 up to the maximum iteration count, so upon division by that maximum we get a standard color mapping. Here are a couple of examples (again following the sketch below).
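
And a corresponding sketch for the normalized iteration count (same caveats as above); dividing the returned r by the maximum iteration count then gives a value suitable for a smooth palette:

```python
import math

def normalized_iteration_count(c, max_iter=200, bailout_radius=4.0):
    """Smooth escape value r = n - log2(log|z_n| / log R); None if c never escapes."""
    z = c
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > bailout_radius:
            return n - math.log2(math.log(abs(z)) / math.log(bailout_radius))
    return None   # treated as inside the set and colored separately
```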

 

Finally, here’s another spinning Julia video, but with a continuously modified smooth color scheme. I’ve mostly used my own variant of the Hubbard-Douady method in this particular piece.


Mandelbrot Madness

The Mandelbrot set is a very well-known fractal discovered by Benoit Mandelbrot in the late 1970s. Despite its complex appearance, it is generated from an extremely simple iterative procedure. Consider any number c in the complex plane. Write c = c_0 . Now for any positive integer n, define c_n = c_{n-1}^2 + c_0 . Then c belongs to the Mandelbrot set provided that there is a real number r such that c_n is inside the disk of radius r in the complex plane for every n. (It turns out that the Mandelbrot set lives entirely inside the closed disk of radius 2, so that in practice it suffices to check whether |c_n| \le 2 for every n.)

As a quick example to demonstrate, consider c = c_0 = -i . This has distance 1 from the origin. Now c_1 = (-i)^2 + (-i) = -1 - i , which has distance \sqrt 2 from the origin. Next, c_2 = (-1 - i)^2 + (-i) = 2i - i = i , which is back to distance 1 from the origin. One more step gives c_3 = i^2 + (-i) = -1 - i = c_1 , so from this point on the orbit simply cycles between -1 - i and i and will clearly never escape the disk of radius 2. Thus, c = -i belongs to the Mandelbrot set. (An example of a number which would not belong to the Mandelbrot set would be c = 1, which the reader can verify.)

When generating computer images of the Mandelbrot set, it is common to try the iterative procedure a set number of times, and then assume that if we haven’t escaped the disk of radius 2 yet, we never will. Of course this is a bit imprecise, but it is the best we can do with finitely many iterations. Various colorings of the points outside the set also take into account how many iterations it takes before escaping the disk.
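
In code, the standard escape-time test looks something like this (a sketch, not the exact program behind the images below):

```python
def mandelbrot_escape_count(c, max_iter=100):
    """Number of iterations before |c_n| exceeds 2, or None if it never does.

    Points that survive all max_iter iterations are treated as members of
    the set; the escape count for the other points can drive the coloring.
    """
    z = c
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return None

assert mandelbrot_escape_count(-1j) is None      # -i is in the set (see above)
assert mandelbrot_escape_count(1) is not None    # 1 escapes quickly
```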

There are many fancy images of the Mandelbrot set on the web, but it’s always more fun to brew your own. Here are a couple I generated for my pattern formation class:

Another interesting set from the field of complex dynamics is the Julia set. Its formal definition is more complicated, but a special case of Julia sets arises from a procedure very similar to the one that generates the Mandelbrot set. If, instead of adding back c_0 at each iteration, we add back some other pre-chosen complex parameter a , then we generate a family of Julia sets, one for each choice of a (the one-line change to the loop is sketched below). Here’s one with a = \frac{1 + i}{2} :

 
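
A minimal sketch of that variant; the only difference from the Mandelbrot loop is that the added constant a stays fixed while the starting point varies over the plane:

```python
def julia_escape_count(z0, a=(1 + 1j) / 2, max_iter=100):
    """Escape count for z -> z^2 + a starting from z0 (a is fixed over the whole image)."""
    z = z0
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + a
    return None   # treated as belonging to the (filled) Julia set
```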

I also designed an interactive animation in which the user can cycle through a family of Julia sets. Here’s a video of it in action:

And another version of the same concept, zoomed in and colored:

As before, check out my course’s website for other interesting content generated by my fellow classmates.