Fractal Colorings

Last time, we saw some fairly primitive images of the Mandelbrot set. In particular, the coloring scheme was somewhat gaudy — there were noticeable discrete bands in all the images. The reason lay in the code, where I had set the value of each pixel to the integer iteration count at which the point escaped, creating a discrete range of possibilities. Luckily, there are fancier methods of coloring which smooth out these discontinuities. While there are a ton of different implementations out there, I’m just going to mention two: the Hubbard-Douady Potential and the Normalized Iteration Count.

In the Hubbard-Douady method, we treat the Mandelbrot set as though it is an electric field and consider the corresponding electric potential that it induces. This function turns out to be \( \phi(z) = \lim_{n \rightarrow \infty} \frac{\log|z_n|}{2^n} \), where \( z_n \) is the value of \(z\) after \(n \) iterations. So one option for smoother coloring is to assign to pixel \(z\) the value of \( \phi(z) \), or an approximation for a large value of \( n\). Here’s an example.
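As a rough sketch of the idea (not the exact code behind my images), the potential can be approximated by iterating until the orbit passes a large bailout radius; the radius and iteration cap below are arbitrary choices:

```python
import math

def potential(c, max_iter=100, radius=1e10):
    """Approximate the Hubbard-Douady potential phi(c) = lim log|z_n| / 2^n.

    Iterates z -> z^2 + c until |z| exceeds a large bailout radius;
    points that never escape are assigned potential 0 (inside the set).
    """
    z = c
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > radius:
            return math.log(abs(z)) / 2.0 ** n
    return 0.0
```

The returned values decay toward 0 very quickly near the set, so in practice one rescales them (e.g. with a logarithm) before mapping to colors.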

 

In the Normalized Iteration Count method, we try to extend the discrete integer range we had specified before into a corresponding continuum. There are a few variants of this formula, but the simplest is given by mapping each point \( z \) to the real number \( r = n - \log_2\left(\frac{\log|z_n|}{\log R}\right) \), where \(R\) is the bailout radius and \( z_n \) is the first iteration value that escapes the disk of radius \(R\) centered at the origin. (One way to understand this is to see that this is equivalent to choosing a real value \(r\) which satisfies \( \frac{\log R}{2^r} = \frac{\log|z_n|}{2^n} \).) This formula will give us real numbers \(r\) in the interval \( [0, n) \), so that upon division by \(n\) we get a standard color mapping. Here are a couple of examples.
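Here is a minimal sketch of the formula in code, assuming the standard escape-time loop (again, not the exact program used for the images):

```python
import math

def smooth_iter(c, max_iter=100, R=2.0):
    """Normalized iteration count: r = n - log2(log|z_n| / log R), where
    z_n is the first iterate outside the disk of radius R.  Points that
    never escape are assigned the value max_iter."""
    z = c
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > R:
            return n - math.log2(math.log(abs(z)) / math.log(R))
    return float(max_iter)
```

Dividing the result by max_iter yields a value suitable for a color map; in practice a larger bailout radius (say R = 256) makes the correction term smoother.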

 

Finally, here’s another spinning Julia video, but with a continuously modified smooth color scheme. I’ve mostly used my own variant of the Hubbard-Douady method in this particular piece.

 

 

Mandelbrot Madness

The Mandelbrot set is a very well-known fractal discovered by Benoit Mandelbrot in the late 1970s. Despite its complex appearance, it is generated from an extremely simple iterative procedure. Consider any number \( c \) in the complex plane. Write \( c = c_0 \). Now for any positive integer \(n\), define \( c_n = c_{n-1}^2 + c_0 \). Then \( c \) belongs to the Mandelbrot set provided that there is a real number \( r\) such that \( c_n \) is inside the disk of radius \( r\) in the complex plane for every \( n\). (It turns out that the Mandelbrot set lives entirely inside the closed disk of radius \( 2\), so that in practice it suffices to check whether \( |c_n| \le 2 \) for every \( n\).)

As a quick example to demonstrate, consider \( c = c_0 = -i \). This has distance \( 1\) from the origin. Now \( c_1 = (-i)^2 + (-i) = -1 - i \), which has distance \( \sqrt 2 \) from the origin. Next, \( c_2 = (-1 - i)^2 + (-i) = 2i - i = i \), which is back to distance \( 1\) from the origin. Then \( c_3 = i^2 + (-i) = -1 - i = c_1 \), so the sequence cycles between \( -1 - i \) and \( i \) forever, and we will clearly never escape the disk of radius \( 2\). Thus, \( c = -i \) belongs to the Mandelbrot set. (An example of a number which would not belong to the Mandelbrot set would be \( c = 1\), which the reader can verify.)

When generating computer images of the Mandelbrot set, it is common to try the iterative procedure a set number of times, and then assume that if we haven’t escaped the disk of radius \( 2\) yet, we never will. Of course this is a bit imprecise, but it is the best we can do with finite memory. Various colorings of the points outside the set also take into account how many iterations it takes before escaping the disk.
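The escape-time procedure just described fits in a few lines; here's a sketch (the iteration cap is an arbitrary choice):

```python
def in_mandelbrot(c, max_iter=1000):
    """Escape-time test: iterate z -> z^2 + c starting from z = c, and
    assume c belongs to the set if |z| never exceeds 2 within max_iter
    iterations."""
    z = c
    for _ in range(max_iter):
        if abs(z) > 2:
            return False
        z = z * z + c
    return True
```

For an image, one runs this test (or, for coloring, records the escape iteration) once per pixel over a grid of complex values.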

There are many fancy images of the Mandelbrot set on the web, but it’s always more fun to brew your own. Here are a couple I generated for my pattern formation class:

Another interesting set from the field of complex dynamics is the Julia set. Its formal definition is more complicated, but a special case of Julia sets arises from a procedure very similar to the one that generates the Mandelbrot set. If instead of adding back \( c_0 \) at each iteration, we add back some other pre-chosen complex parameter \( a \), then we generate a family of Julia sets for different choices of \( a \). Here’s one with \( a = \frac{1 + i}{2} \):

 

I also designed an interactive animation in which the user can cycle through a family of Julia sets. Here’s a video of it in action:

And another version of the same concept, zoomed in and colored:

As before, check out my course’s website for other interesting content generated by my fellow classmates.

Conway’s Game of Life

One of the early examples of cellular automata to be popularized was John Conway’s ‘Game of Life’. You start with an infinite two-dimensional grid of cells, each of which is in one of two states: ‘dead’ or ‘alive’. (In practice, this is usually implemented by coloring the dead pixels black and the live ones white, or vice versa.) Each cell has eight neighboring cells — the four orthogonally adjacent cells and the four diagonally adjacent cells. Based on the initial position of the live and dead cells, the arrangement ‘evolves’ according to a set of simple rules:

  1. Any live cell with fewer than two live neighbors dies (underpopulation).
  2. Any live cell with exactly two or three live neighbors lives on to the next generation.
  3. Any live cell with more than three live neighbors dies (overpopulation).
  4. Any dead cell with exactly three live neighbors becomes a live cell (reproduction).
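The four rules translate almost directly into code. Here's a sketch of one generation on a finite grid of 0s and 1s, treating everything beyond the edge as dead (one possible boundary convention):

```python
def life_step(grid):
    """One generation of Conway's Game of Life on a finite grid of 0/1
    cells, with everything outside the grid treated as dead."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # count live cells among the (up to) eight neighbors
            n = sum(grid[i][j]
                    for i in range(max(r - 1, 0), min(r + 2, rows))
                    for j in range(max(c - 1, 0), min(c + 2, cols))
                    if (i, j) != (r, c))
            if grid[r][c] == 1:
                new[r][c] = 1 if n in (2, 3) else 0  # rules 1-3
            else:
                new[r][c] = 1 if n == 3 else 0       # rule 4
    return new
```

A handy sanity check: a horizontal "blinker" of three live cells becomes vertical after one step.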

The ‘game’ was invented in 1970 and has been extensively analyzed and explored. Here’s a quick demonstration that I coded in my pattern formation course, taught by Prof. Ted Kim. The initial setup is an R-pentomino, which expands rapidly.

I also made a few modifications as an exploration. One of the major implementation concerns is boundary conditions — although the game in theory is played on an infinite grid, computers work with a finite amount of memory. One solution is to treat the boundary as a dead zone. Another is to use a ‘wrap-around’ effect, essentially mapping the grid to the fundamental polygon of a torus. I decided to try out a more exotic concept, mapping the grid to the fundamental polygon of the real projective plane. In addition, I relaxed the ‘birth’ rules to include having only two live neighbors. Here are the results.
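As a sketch of the wrap-around idea (the torus case; the projective-plane gluing additionally reverses the transverse coordinate whenever an edge is crossed), neighbor lookups just take indices modulo the grid dimensions:

```python
def torus_neighbors(r, c, rows, cols):
    """The eight neighbors of cell (r, c) on a grid glued into a torus:
    indices wrap around modulo the grid dimensions."""
    return [((r + dr) % rows, (c + dc) % cols)
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]
```

With this convention a glider leaving the right edge re-enters on the left, so patterns never hit a dead wall.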

My classmates are also producing interesting content, which you can view on our course blog.

 

 

Sine Wall

It’s been a while since I’ve updated, but I’ve still been trying to come up with new creative content whenever possible. I recently wrote a brief electronic piece as a project for one of my MAT courses. Although it is generated entirely with sine oscillations, it still packs a bit of a punch. Entitled “Wall,” it is more playful and less serious than my previous works. Hope you enjoy!

Brown

In my MAT 201B course, “Programming with Media Data,” I’ve developed an audio/visual representation of different molecular random walks, including a loose model of Brownian motion. The fluctuations in the spatial displacements are mapped to color (in terms of HSV) and pitch (in terms of frequency). The user can interact with the simulation, discretely or continuously varying the temperature to affect the magnitude of the displacements. I’ve created two other stochastic processes in addition to the Brownian motion and included a mode which cycles rapidly between the three random walks, essentially producing a ‘random walk between random walks.’ I also programmed a short piece which illustrates the various aspects of my project in action. You can see a video screen capture of it below.

More musique concrète

I recently finished composing a slightly longer piece, “Crackle,” based on similar themes to “Snap.” In “Crackle,” I’ve used only one sample of myself snapping my fingers in an homage/satire of Steve Reich’s “Clapping Music.” Using Pro Tools again, the sample is developed in a variety of ways, but principally with time-distortion, reverb, and panning.

Musique concrète

Here’s a short electronic piece (“Snap”) I recently composed which is generated entirely from recordings I made of myself (snapping my fingers, tapping a desk, whistling, and singing a drone). I altered the speed of the recordings in Peak and mixed the four tracks in Pro Tools, using a few minimal effects (reverb and panning).

The abc conjecture

Recently, the Japanese mathematician Shinichi Mochizuki announced that he had a proof of the abc conjecture. This claim was significant enough to receive plenty of major news coverage, including from the New York Times. His strategy was to use entirely novel self-developed techniques — so-called inter-universal Teichmüller theory. In fact, practically no one else in the mathematics community is conversant in these techniques, so it will take quite a long time before the proof is sufficiently refereed. If it turns out to be accurate, however, it will undoubtedly be a huge achievement!

So what is the abc conjecture, anyway? Essentially, it lies somewhere in the intersection of additive and multiplicative number theory. Curiously, it’s a statement about one of the simplest equations possible:

\( a + b = c \)

for natural numbers \( a \), \(b\), and \(c\). Naturally, there are infinitely many solutions to this equation. Now, it’s obvious that this equation is additive, but we can give it a multiplicative flavor if we consider the various prime divisors of \(a\), \(b\), and \(c\). In particular, for this conjecture, we’re interested in repeated divisors — cases where \(a\), \(b\), and \(c\) are divisible by perfect powers.

It’s probably helpful to use a concrete example to illustrate some of the ideas here. So suppose \( a = 81, b = 128, c = 209. \) We see that indeed \( a + b = c. \) Now, \( a = 3^4, b = 2^7, \) but \( c = 11 \cdot 19. \) Thus, although \( a \) and \( b \) are divisible by large prime powers, it seems that \( c \) is not. Of course this is simply one example, but in general this sort of pattern appears to be true. Try it out for yourself and see if you can find any counterexamples!

In order to describe the abc conjecture precisely, it will help to introduce a little terminology. Let’s define the radical of a natural number \( n \), denoted \( \mathrm{rad}(n), \)  to be the product of the distinct prime factors of \( n. \) For instance,

\( \mathrm{rad}(12) = \mathrm{rad}(72) = 2 \cdot 3 = 6, \mathrm{rad}(2^7) = 2, \mathrm{rad}(1) = 1, \) etc.

One formulation of the abc conjecture tries to compare \( c \) to \( \mathrm{rad} (abc) \). We can think of this in the following way: to what power must we raise the radical of \( abc \) in order to obtain \( c \)? In other words, in the form of an equation, what exponent \( q \) satisfies

\( \displaystyle (\mathrm{rad} (abc))^{q} = c \)?

The more that \( a \), \( b \), and \( c \) are divisible by repeated primes (so that the radical of \( abc \) is small relative to \( c \)), the larger the value of \( q \) we will need. Solving for \( q \), we obtain

\( \displaystyle q = \frac{\log c}{\log (\mathrm{rad} (abc))} \).

For convenience, let’s call this quantity \( q \) the quality of the abc-solution. We claim that large values of \( q \) arise when \( a\), \(b\), and \(c\) are divisible by high prime powers. It’s worth verifying this claim in order to get some intuition for how this quantity \( q \) is a useful construct. So suppose for the sake of argument that \(a\), \(b\), and \(c\) are all perfect powers. For instance, let \( a = x^n, b = y^n, c = z^n \) for some natural numbers \( x, y, z, n. \) Then we get the infamous Fermat equation \( x^n + y^n = z^n \). Of course, it’s now known that no solutions exist for \( n \ge 3 \), but let’s suppose solutions do in fact exist for arbitrarily large values of \( n \). What is the quality of such an abc-solution? Well,

\(\begin{align*} \log c &= \log (\max(a, b, c))\\
&\ge \frac{1}{3}(\log a + \log b + \log c)\\
&= \frac{1}{3}\log abc\\
&= \frac{1}{3}\log (x^ny^nz^n)\\
&= \frac{1}{3}\log(xyz)^n\\
&= \frac{n}{3}\log(xyz)\\
&\ge \frac{n}{3}\log(\mathrm{rad}(abc)).
\end{align*}\)

So,

\( \displaystyle q =\frac{\log c}{\log (\mathrm{rad} (abc))} \ge \frac{n}{3}. \)

If \( n \) can be arbitrarily large, then so can \( q. \) However, the abc conjecture establishes a restriction on the size of \( q \) as follows.
The abc conjecture: For any real number \(C \) greater than \(1\), all but a finite number of \(abc\)-solutions have a quality \(q\) less than or equal to \(C\).
There are several alternative formulations as well — you can check them out at Wikipedia or MathWorld — but this one, adapted from Barry Mazur’s piece Questions about Number, appealed to me most. Let’s hope that Mochizuki’s proof is successful and opens up new doors in number theory! For further reading and research, here is a list of the consequences of the abc conjecture.
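Both the radical and the quality defined above are easy to compute for small numbers. Here's a quick sketch using trial division, handy if you want to hunt for high-quality triples yourself:

```python
import math

def rad(n):
    """Radical of n: the product of the distinct prime factors of n
    (with rad(1) = 1 by convention)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:          # leftover prime factor
        result *= n
    return result

def quality(a, b):
    """Quality q = log c / log rad(abc) of the abc-solution a + b = c."""
    c = a + b
    return math.log(c) / math.log(rad(a * b * c))
```

The example from earlier gives quality(81, 128) of roughly 0.75, comfortably below 1; the highest quality known, about 1.63, comes from the triple \( 2 + 3^{10} \cdot 109 = 23^5 \).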

Training games

I’ve recently played a couple of training games with a friend of mine from undergrad. Here is one of them, which features an instructive rook-and-pawn endgame.

[Event "ICC"]
[Site "Internet Chess Club"]
[Date "2012.09.14"]
[Round "?"]
[White "Aaron"]
[Black "Jason"]
[Result "1-0"]
[WhiteElo "?"]
[BlackElo "?"]

1.d4 Nf6 2.c4 e6 3.Nf3 Bb4+
{The Bogo-Indian. A solid choice.}
4.Bd2 Qe7 5.g3 Nc6 6.Nc3
( 6.Bg2 Bxd2+ 7.Qxd2 $6 Ne4 8.Qc2 Qb4+ {and Black equalizes.} )
6...O-O 7.Bg2 d6
( 7...Bxc3 8.Bxc3 Ne4 {and the bishop is snagged.} )
8.O-O e5
( 8...Bxc3 9.Bxc3 Ne4 10.Be1 {is now possible.} )
9.Nd5 $1
{Emphasizing Black's failure to capture on c3.}
9...Nxd5 10.cxd5 Bxd2
{Forced. But now White has an interesting choice of captures.}
11.Qxd2 $6
( 11.dxc6 Bh6 12.dxe5 dxe5 13.Qd5 {was the most precise.} )
11...Nxd4 12.Nxd4 exd4 13.Qxd4 $5
( 13.Rfe1 {is simpler, but the game continuation is more combative.} )
13...Qxe2
{Facing the prospects of a passive defense of the weak c7-pawn, Black
chooses instead to grab some material and start a fight. But now White
obtains a massive lead in rook development.}
14.Rfe1 Qb5
( 14...Qg4 15.Re4 )
15.Rac1 Re8
{The idea of Black's Queen placement. Now Black's back rank hangs by a
thread! }
16.Rxe8+
{Finally, I decided on this simple continuation. Trading off a pair of
rooks highlights Black's pathetic rook on a8.}
( 16.Qc4
( 16.Bf1 Qd7 17.Qg4 Qd8 18.Qg5 Bd7 )
16...Bd7
( 16...Qxc4 $4 17.Rxe8# )
)
16...Qxe8 17.h4
( 17.Rxc7 $2 Qe1+ 18.Bf1 Bh3 )
17...Qd8
{Black tries to defend c7, but he cannot hold it for long.}
18.Qc3 Bf5
{Black finally mobilizes, but now the material balance is
reestablished, with White maintaining the active Rook.}
19.Qxc7 Qxc7 20.Rxc7 Rb8
{With perfect play, the position is likely a draw. However, Black's
pieces are so passive that White has excellent practical chances.}
21.Bf3 $1
{Starting a good plan of kingside expansion. In order to create
winning chances, White must activate his King and probe for more
weaknesses.}
21...a5
{Removing the a-pawn from the vulnerable 7th rank.}
22.g4 Bc8
{Not ideal, but if the Bishop abandons the c8-h3 diagonal, White can
continue with Rc7-d7, picking off the d6-pawn. }
23.Kg2 Kf8 24.Kg3 Ke8 25.Be4 $1
{Probing for weaknesses in order to try to gain entry for the King
into Black's position.}
25...h6
{Regardless of whether Black plays h7-h6 or g7-g6, he creates holes in
his sixth rank.}
26.Kf4 Bd7 27.Bf5 Bxf5 $2
( 27...Bb5 {and White has trouble coordinating his King and Bishop,
since both of them would like to occupy the f5-square.} )
28.Kxf5 b5
{Starting the plan of b5-b4 and Rb8-b5. But it's too slow.}
29.g5 $1 hxg5 30.hxg5 b4 31.g6 $1
{The point. White gets into the sixth rank. }
31...fxg6+ 32.Ke6
{White is now 'up a King' and winning.}
32...Kf8 $6
( 32...Rd8 33.Rxg7 Kf8 34.Rxg6 {and White is still winning, but not as
quickly.} )
33.Kxd6 Rb6+
{Basically a spite check, as Black cannot realistically stop White's
d-pawn.}
34.Rc6 Rb7
{This move does lose immediately, but it's hard to criticize since
Black is losing regardless.}
35.Rc8+ Kf7 36.Rc7+
{and Black resigned.}
( 36.Rc7+ Rxc7 37.Kxc7 {and the d-pawn is unstoppable.} )
1-0



Aliasing

Aliasing is a common problem in signal processing, both visually and aurally. If you’ve ever watched the spokes on a wheel turn in a movie, you may have wondered why they sometimes appear to go backward or even stand still. The reason has to do with how film works. Typically, the illusion of continuity is created by rapidly juxtaposing many still photographs, for instance, \( 24 \) per second. (This number is usually called the frame rate.) In other words, one photograph is taken every \( \frac{1}{24} \) of a second. Let’s assume that we are photographing a car whose wheels turn at a rate of \( \frac{1}{4} \) revolution every \( \frac{1}{24} \) of a second. Then on film, the car will appear to be moving forward as normal. But suppose instead that the car is traveling three times faster — that is, its wheels are turning at a rate of \( \frac{3}{4} \) revolution every \( \frac{1}{24} \) of a second. When we observe the film, without sufficient visual context, it will actually appear as though the wheels are turning backward \( \frac{1}{4} \) revolution every \( \frac{1}{24} \) of a second. In other words, the car is moving so fast that the frame rate can’t keep up. In fact, even if the car had only been moving twice as fast — that is, at a rate of \( \frac{1}{2} \) revolution every \( \frac{1}{24} \) of a second — we wouldn’t be able to distinguish whether its motion was forward or backward on film.
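The wagon-wheel arithmetic can be captured in a tiny function (a sketch that ignores spoke symmetry, which in practice makes the effect kick in even sooner):

```python
def apparent_rotation(revs_per_frame):
    """Per-frame rotation as perceived on film: only the fractional part
    of a revolution survives, folded into the range (-0.5, 0.5]."""
    frac = revs_per_frame % 1.0
    return frac if frac <= 0.5 else frac - 1.0
```

Here apparent_rotation(0.75) returns -0.25, matching the backward-spinning wheel described above.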

Thus, if we want to accurately capture the motion of a wheel on film, we actually need to choose a frame rate at least twice the rate of the wheel. (This is the essence of the Nyquist-Shannon sampling theorem.) So a rate of \( 24 \) frames per second can accommodate any wheel spinning at a rate slower than \( 12 \) revolutions per second. If we take the average circumference of a tire to be about \( 67 \) inches, then \( 12 \) revolutions per second corresponds to \( 67 \) feet per second, or approximately \( 45 \) mph. Hence, any motion in excess of \( 45 \) mph may appear distorted by aliasing without a faster frame rate.

All these arguments apply to audio processing as well! In digital signal processing, a continuous sound is sampled by gathering rapid data about discrete instants of the sound, similar to taking rapid photographs of a visual event. The sampling rate (analogous to the frame rate) indicates just how much data are gathered in one second. For example, one common choice of sampling rate is \( 44100 \) samples per second. But if the sampling rate does not exceed twice the frequency of a signal, that signal may be subject to aliasing. As a result, low-frequency signals that were not present in the analog sound can manifest themselves in the digital version. For example, a tone with a frequency of \( 43660 \) Hz could be heard instead as a tone with frequency \( 44100 - 43660 = 440 \) Hz, a concert A.
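The audio case follows the same folding rule. Here's a sketch computing the perceived frequency of a pure tone after sampling:

```python
def aliased_frequency(f, sample_rate=44100):
    """Perceived frequency (in Hz) of a pure tone of frequency f after
    sampling: f is folded (reflected) around multiples of the Nyquist
    frequency, sample_rate / 2."""
    f = f % sample_rate
    return f if f <= sample_rate / 2 else sample_rate - f
```

In particular, aliased_frequency(43660) returns 440, the concert A mentioned above.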

While aliasing is usually a drawback of digital processing that engineers try to avoid, it sometimes has an interesting sound. Here’s a quick little track that demonstrates the concept:

The program is written so that the frequency slides perpetually higher; however, you will eventually start hearing a quasi-random scattering of different pitches due to aliasing.