Pop Video: Visualizing the Riemann Hypothesis and Analytic Continuation

Grant Sanderson • 3Blue1Brown • Boclips

Visualizing the Riemann Hypothesis and Analytic Continuation

20:42

Video Transcript

The Riemann zeta function, this is one of those objects in modern math that a lot of you might have heard of, but which can be really difficult to understand. Don’t worry, I’ll explain that animation that you just saw in a few minutes. A lot of people know about this function because there’s a one-million-dollar prize out for anyone who can figure out when it equals zero, an open problem known as the Riemann hypothesis. Some of you may have heard of it in the context of the divergent sum one plus two plus three plus four, on and on up to infinity. You see, there’s a sense in which this sum equals negative one twelfth, which seems nonsensical if not obviously wrong. But a common way to define what this equation is actually saying uses the Riemann zeta function.

But as any casual math enthusiast who started to read into this knows, its definition references this one idea called analytic continuation, which has to do with complex-valued functions. And this idea can be frustratingly opaque and unintuitive. So what I’d like to do here is just show you all what this zeta function actually looks like and to explain what this idea of analytic continuation is in a visual and more intuitive way. I’m assuming that you know about complex numbers and that you’re comfortable working with them. And I’m tempted to say that you should know calculus since analytic continuation is all about derivatives. But for the way I’m planning to present things, I think you might actually be fine without that.

So to jump right into it, let’s just define what this zeta function is. For a given input, where we commonly use the variable 𝑠, the function is one over one to the 𝑠, which is always one, plus one over two to the 𝑠 plus one over three to the 𝑠 plus one over four to the 𝑠, on and on and on, summing up over all natural numbers. So, for example, let’s say you plug in a value like 𝑠 equals two. You’d get one plus one over four plus one over nine plus one sixteenth. And as you keep adding more and more reciprocals of squares, this just so happens to approach 𝜋 squared over six, which is around 1.645. There’s a very beautiful reason for why 𝜋 shows up here. And I might do a video on it at a later date.
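If you want to check that number for yourself, here is a minimal Python sketch (my addition, not part of the video) that just adds up the first hundred thousand reciprocals of squares and compares the result against 𝜋 squared over six.

```python
# Partial sums of the zeta series at s = 2: adding reciprocals of squares
# creeps up toward pi^2 / 6.
import math

partial = sum(1 / n**2 for n in range(1, 100_001))

print(partial)         # ~1.6449  (still a hair below the limit)
print(math.pi**2 / 6)  # ~1.6449340668...
```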

But that’s just the tip of the iceberg for why this function is beautiful. You could do the same thing for other inputs 𝑠, like three or four. And sometimes you get other interesting values. And so far, everything feels pretty reasonable. You’re adding up smaller and smaller amounts, and these sums approach some number. Great, no craziness here. Yet, if you were to read about it, you might see some people say that zeta of negative one equals negative one twelfth. But looking at this infinite sum, that doesn’t make any sense. When you raise each term to the negative one, flipping each fraction, you get one plus two plus three plus four, on and on, over all natural numbers. And obviously that doesn’t approach anything, certainly not negative one twelfth, right?

And, as any mercenary looking into the Riemann hypothesis knows, this function is said to have trivial zeros at negative even numbers. So, for example, that would mean that zeta of negative two equals zero. But when you plug in negative two, it gives you one plus four plus nine plus 16, on and on, which again obviously doesn’t approach anything, much less zero, right? Well, we’ll get to negative values in a few minutes. But for right now, let’s just say the only thing that seems reasonable. This function only makes sense when 𝑠 is greater than one, which is when this sum converges. So far, it’s simply not defined for other values.

Now with that said, Bernhard Riemann was somewhat of a father to complex analysis, which is the study of functions that have complex numbers as inputs and outputs. So rather than just thinking about how this sum takes a number 𝑠 on the real number line to another number on the real number line, his main focus was on understanding what happens when you plug in a complex value for 𝑠. So, for example, maybe instead of plugging in two, you would plug in two plus 𝑖. Now if you’ve never seen the idea of raising a number to the power of a complex value, it can feel kind of strange at first, because it no longer has anything to do with repeated multiplication. But mathematicians found that there is a very nice and very natural way to extend the definition of exponents beyond their familiar territory of real numbers and into the realm of complex values.

It’s not super crucial to understand complex exponents for where I’m going with this video. But I think it’ll still be nice if we just summarize the gist of it here. The basic idea is that when you write something like one-half to the power of a complex number, you split it up as one-half to the real part times one-half to the pure imaginary part. We’re good on one-half to the real part; there’s no issues there. But what about raising something to a pure imaginary number? Well, the result is gonna be some complex number on the unit circle in the complex plane. As you let that pure imaginary input walk up and down the imaginary line, the resulting output walks around that unit circle.

For a base like one-half, the output walks around the unit circle somewhat slowly. But for a base that’s farther away from one, like one-ninth, as you let this input walk up and down the imaginary axis, the corresponding output is gonna walk around the unit circle more quickly. If you’ve never seen this and you’re wondering why on earth this happens, I’ve left a few links to good resources in the description. For here, I’m just gonna move forward with the what without the why. The main takeaway is that when you raise something like one-half to the power of two plus 𝑖, which is one-half squared times one-half to the 𝑖, that one-half to the 𝑖 part is gonna be on the unit circle, meaning it has an absolute value of one.
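For anyone who wants to poke at complex exponents numerically, here is a small Python sketch (again my addition, not the video’s) showing that the pure imaginary part of the exponent only rotates, and that a base farther from one rotates faster.

```python
# Splitting a complex exponent into real and imaginary parts:
# (1/2)**(2 + 1j) = (1/2)**2 * (1/2)**(1j), and the second factor only rotates.
import cmath

rotator_half = 0.5 ** 1j
rotator_ninth = (1 / 9) ** 1j

print(abs(rotator_half))    # 1.0 (up to rounding) -- it sits on the unit circle
print(abs(rotator_ninth))   # 1.0 as well

# The speed of the walk around the circle is |ln(base)| radians per unit
# step up the imaginary axis, so a base farther from 1 walks faster.
print(abs(cmath.log(0.5)))    # ~0.69
print(abs(cmath.log(1 / 9)))  # ~2.20, roughly three times faster
```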

So when you multiply it, it doesn’t change the size of the number; it just takes that one-fourth and rotates it somewhat. So if you were to plug in two plus 𝑖 to the zeta function, one way to think about what it does is to start off with all of the terms raised to the power of two, which you can think of as piecing together lines whose lengths are the reciprocals of the squares of numbers and which, like I said before, add up to 𝜋 squared over six. Then when you change that input from two up to two plus 𝑖, each of these lines gets rotated by some amount. But importantly, the lengths of those lines won’t change, so the sum still converges. It just does so in a spiral to some specific point on the complex plane.

Here, let me show what it looks like when I vary the input 𝑠, represented with this yellow dot on the complex plane, where this spiral sum is always gonna be showing the converging value for zeta of 𝑠.
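You can replicate that spiral numerically. Here is a minimal Python sketch (my addition) that adds up a couple hundred thousand terms of the series at 𝑠 equals two plus 𝑖 and watches the running total settle down.

```python
# Partial sums of the zeta series at s = 2 + i. Each term 1/n**s has length
# 1/n**2 (set by the real part) and some rotation (set by the imaginary part),
# so the running total spirals in toward one point of the complex plane.
s = 2 + 1j
total = 0
for n in range(1, 200_001):
    total += 1 / n**s

print(total)   # ~1.1504 - 0.4375j, the value of zeta(2 + i)
```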

What this means is that zeta of 𝑠, defined as this infinite sum, is a perfectly reasonable complex function as long as the real part of the input is greater than one. Meaning, the input 𝑠 sits somewhere on this right half of the complex plane. Again, this is because it’s the real part of 𝑠 that determines the size of each number, while the imaginary part just dictates some rotation. So now what I wanna do is visualize this function. It takes in inputs on the right half of the complex plane and spits out outputs somewhere else in the complex plane. A super nice way to understand complex functions is to visualize them as transformations. Meaning, you look at every possible input to the function and just let it move over to the corresponding output.

For example, let’s take a moment and try to visualize something a little bit easier than the zeta function, say 𝑓 of 𝑠 is equal to 𝑠 squared. When you plug in 𝑠 equals two, you get four. So we’ll end up moving that point at two over to the point at four. When you plug in negative one, you get one. So the point over here at negative one is gonna end up moving over to the point at one. When you plug in 𝑖, by definition, its square is negative one, so it’s gonna move over here to negative one. Now I’m gonna add on a more colorful grid. And this is just because things are about to start moving. And it’s kinda nice to have something to distinguish grid lines during that movement. From here, I’ll tell the computer to move every single point on this grid over to its corresponding output under the function 𝑓 of 𝑠 equals 𝑠 squared. Here’s what it looks like.
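As an aside, if you would like a rough, static stand-in for that animation, here is a short Python sketch (assuming numpy and matplotlib are available) that pushes a square grid through 𝑓 of 𝑠 equals 𝑠 squared and plots the bent gridlines.

```python
# Pushing a grid of points through f(s) = s**2 and plotting the curved images
# of the gridlines; a static stand-in for the animation.
import numpy as np
import matplotlib.pyplot as plt

def f(s):
    return s**2

# the sample points called out above: 2 -> 4, -1 -> 1, i -> -1
for s in [2, -1, 1j]:
    print(s, "->", f(s))

t = np.linspace(-2, 2, 400)
for c in np.linspace(-2, 2, 9):
    for line in (c + 1j * t, t + 1j * c):   # one vertical, one horizontal gridline
        w = f(line)
        plt.plot(w.real, w.imag, linewidth=0.7)
plt.gca().set_aspect("equal")
plt.title("image of a square grid under s -> s**2")
plt.show()
```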

That can be a lot to take in, so I’ll go ahead and play it again. And this time, focus on one of the marked points and notice how it moves over to the point corresponding to its square. It can be a little complicated to see all of the points moving all at once. But the reward is that this gives us a very rich picture for what the complex function is actually doing. And it all happens in just two dimensions. So back to the zeta function, we have this infinite sum, which is a function of some complex number 𝑠. And we feel good and happy about plugging in values of 𝑠 whose real part is greater than one and getting some meaningful output via the converging spiral sum.

So to visualize this function, I’m gonna take the portion of the grid sitting on the right side of the complex plane here, where the real part of numbers is greater than one. And I’m gonna tell the computer to move each point of this grid to the appropriate output. It actually helps if I add a few more grid lines around the number one, since that region gets stretched out by quite a bit.
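A crude static version of that picture is easy enough to compute. Here is a sketch (assuming the mpmath library for evaluating zeta, plus numpy and matplotlib) that transforms a few vertical gridlines from the region where the real part is greater than one.

```python
# Transforming a few vertical gridlines Re(s) = const > 1 under the zeta
# function, using mpmath's zeta so we don't have to truncate the sum by hand.
import numpy as np
import matplotlib.pyplot as plt
from mpmath import zeta

ts = np.linspace(-12, 12, 800)
for x in [1.1, 1.25, 1.5, 2.0, 3.0]:        # extra lines bunched near 1
    w = [complex(zeta(complex(x, t))) for t in ts]
    plt.plot([z.real for z in w], [z.imag for z in w], linewidth=0.8)
plt.gca().set_aspect("equal")
plt.title("images of vertical lines Re(s) = const under zeta")
plt.show()
```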

Alright, so first of all, let’s all just appreciate how beautiful that is. I mean, damn, if that doesn’t make you wanna learn more about complex functions, you have no heart. But also, this transformed grid is just begging to be extended a little bit. For example, let’s highlight these lines here, which represent all of the complex numbers with imaginary part 𝑖 or negative 𝑖. After the transformation, these lines make such lovely arcs before they just abruptly stop. Don’t you wanna just, you know, continue those arcs? In fact, you can imagine how some altered version of the function, with a definition that extends into this left half of the plane, might be able to complete this picture with something that’s quite pretty.

Well, this is exactly what mathematicians working with complex functions do. They continue the function beyond the original domain where it was defined. Now as soon as we branch over into inputs where the real part is less than one, this infinite sum that we originally used to define the function doesn’t make sense anymore. You’ll get nonsense like adding one plus two plus three plus four, on and on up to infinity. But just looking at this transformed version of the right half of the plane, where the sum does make sense, it’s just begging us to extend the set of points that we’re considering as inputs, even if that means defining the extended function in some way that doesn’t necessarily use that sum.

Of course, that leaves us with the question, how would you define that function on the rest of the plane? You might think that you could extend it any number of ways. Maybe you define an extension that makes it so the point at, say, 𝑠 equals negative one moves over to negative one twelfth. But maybe you squiggle on some extension that makes it land on any other value. I mean, as soon as you open yourself up to the idea of defining the function differently for values outside that domain of convergence, that is, not based on this infinite sum, the world is your oyster. And you can have any number of extensions, right? Well, not exactly. I mean yes, you can give any child a marker and have them extend these lines any which way. But if you add on the restriction that this new extended function has to have a derivative everywhere, it locks us into one and only one possible extension.

I know, I know, I said that you wouldn’t need to know about derivatives for this video. And even if you do know calculus, maybe you have yet to learn how to interpret derivatives for complex functions. But luckily for us, there is a very nice geometric intuition that you can keep in mind for when I say a phrase like “has a derivative everywhere.” Here, to show you what I mean, let’s look back at that 𝑓 of 𝑠 equals 𝑠 squared example. Again, we think of this function as a transformation moving every point 𝑠 of the complex plane over to the point 𝑠 squared.

For those of you who know calculus, you know that you can take the derivative of this function at any given input. But there’s an interesting property of that transformation that turns out to be related and almost equivalent to that fact. If you look at any two lines in the input space that intersect at some angle and consider what they turn into after the transformation, they will still intersect each other at that same angle. The lines might get curved and that’s okay. But the important part is that the angle at which they intersect remains unchanged. And this is true for any pair of lines that you choose.

So when I say a function has a derivative everywhere, I want you to think about this angle-preserving property. That anytime two lines intersect, the angle between them remains unchanged after the transformation. At a glance, this is easiest to appreciate by noticing how all of the curves that the gridlines turn into still intersect each other at right angles. Complex functions that have a derivative everywhere are called analytic. So you can think of this term analytic as meaning angle preserving. Admittedly, I’m lying to you a little here, but only a little bit. A slight caveat for those of you who want the full details is that at inputs where the derivative of a function is zero, instead of angles being preserved, they get multiplied by some integer. But those points are by far the minority. And for almost all inputs to an analytic function, angles are preserved.
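Here is a small numerical check of that angle-preserving property for the 𝑓 of 𝑠 equals 𝑠 squared example (my own sketch, using finite differences, not anything shown in the video).

```python
# Checking that f(s) = s**2 preserves the angle between two tiny direction
# vectors at a point where the derivative is nonzero, using finite differences.
import cmath

def f(s):
    return s**2

def angle_between(u, v):
    # unsigned angle between two directions in the complex plane
    return abs(cmath.phase(v / u))

s0 = 1 + 1j                      # base point (derivative 2*s0 is nonzero here)
h = 1e-6
u, v = 1, cmath.exp(0.7j)        # two directions meeting at an angle of 0.7 rad

image_u = (f(s0 + h * u) - f(s0)) / h   # where f sends a tiny step along u
image_v = (f(s0 + h * v) - f(s0)) / h

print(angle_between(u, v))              # 0.7
print(angle_between(image_u, image_v))  # ~0.7 -- the angle is preserved
# (At s0 = 0, where the derivative vanishes, this angle would double instead.)
```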

So if when I say analytic, you think angle preserving, I think that’s a fine intuition to have. Now if you think about it for a moment, and this is a point that I really want you to appreciate, this is a very restrictive property. The angle between any pair of intersecting lines has to remain unchanged. And yet, pretty much any function out there that has a name turns out to be analytic. The field of complex analysis, which Riemann helped to establish in its modern form, is almost entirely about leveraging the properties of analytic functions to understand the results and patterns in other fields of math and science.

The zeta function defined by this infinite sum on the right half of the plane is an analytic function. Notice how all of these curves that the gridlines turn into still intersect each other at right angles. So the surprising fact about complex functions is that if you wanna extend an analytic function beyond the domain where it was originally defined, for example, extending this zeta function into the left half of the plane, then if you require that the new extended function still be analytic, that is, that it still preserves angles everywhere, it forces you into only one possible extension, if one exists at all. It’s kind of like an infinite continuous jigsaw puzzle, where this requirement of preserving angles walks you into one and only one choice for how to extend it.

This process of extending an analytic function in the only way possible that’s still analytic is called, as you may have guessed, analytic continuation. So that’s how the full Riemann zeta function is defined. For values of 𝑠 on the right half of the plane, where the real part is greater than one, just plug them into this sum and see where it converges. And that convergence might look like some kind of spiral, since raising each of these terms to a complex power has the effect of rotating each one. Then for the rest of the plane, we know that there exists one and only one way to extend this definition so that the function will still be analytic. That is, so that it still preserves angles at every single point. So we just say that, by definition, the zeta function on the left half of the plane is whatever that extension happens to be. And that’s a valid definition because there’s only one possible analytic continuation.
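The video keeps the actual computation abstract, but if you are curious how one might evaluate part of that extension in practice, one standard route (my addition, nothing the video relies on) is the alternating version of the series, which converges whenever the real part of 𝑠 is positive and agrees with zeta after dividing by one minus two to the one minus 𝑠.

```python
# One standard route into part of the extended region: for Re(s) > 0 (away from
# s = 1 and the few points where the denominator vanishes), the alternating
# "eta" series converges, and zeta(s) = eta(s) / (1 - 2**(1 - s)).
def zeta_via_eta(s, terms=1_000_000):
    eta = sum((-1) ** (n + 1) / n**s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

print(zeta_via_eta(2))    # ~1.6449, matching pi^2/6 where the original sum works
print(zeta_via_eta(0.5))  # ~-1.46, a value the original series could never give you
```

Of course, this formula only reaches part of the plane; the definition of the full extension stays more abstract, which is exactly the point being made next.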

Notice, that’s a very implicit definition. It just says, use the solution of this jigsaw puzzle, which, through more abstract derivation, we know must exist. But it doesn’t specify exactly how to solve it. Mathematicians have a pretty good grasp on what this extension looks like. But some important parts of it remain a mystery, a million-dollar mystery in fact. Let’s actually take a moment and talk about the Riemann hypothesis, the million-dollar problem.

The places where this function equals zero turn out to be quite important. That is, which points get mapped onto the origin after the transformation? One thing we know about this extension is that the negative even numbers get mapped to zero. These are commonly called the trivial zeros. The naming here stems from a long-standing tradition of mathematicians to call things trivial when they understand them quite well, even when it’s a fact that is not at all obvious from the outside. We also know that the rest of the points that get mapped to zero sit somewhere in this vertical strip, called the critical strip.

And the specific placement of those nontrivial zeros encodes a surprising amount of information about prime numbers. It’s actually pretty interesting why this function carries so much information about primes. And I definitely think I’ll make a video about that later on. But right now, things are long enough, so I’ll leave it unexplained. Riemann hypothesized that all of these nontrivial zeros sit right in the middle of the strip, on the line of numbers 𝑠 whose real part is one-half. This is called the critical line. If that’s true, it gives us a remarkably tight grasp on the pattern of prime numbers, as well as many other patterns in math that stem from this.
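If you just want to see these zeros numerically, the mpmath library (an assumption on my part that it is installed) evaluates the continued zeta function and can look up the nontrivial zeros directly.

```python
# Spot-checking the zeros with mpmath's built-in (already-continued) zeta.
from mpmath import zeta, zetazero

print(zeta(-2), zeta(-4), zeta(-6))   # the trivial zeros: all 0
for k in range(1, 4):
    print(zetazero(k))                # 0.5 + 14.1347i, 0.5 + 21.0220i, 0.5 + 25.0109i
```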

Now so far, when I’ve shown what the zeta function looks like, I’ve only shown what it does to the portion of the grid on the screen. And that kind of undersells its complexity. So if I were to highlight this critical line and apply the transformation, it might not seem to cross the origin at all. However, here’s what the transformed version of more and more of that line looks like. Notice how it’s passing through the number zero many, many times. If you can prove that all of the nontrivial zeros sit somewhere on this line, the Clay Math Institute gives you one million dollars. And you’d also be proving hundreds if not thousands of modern math results that have already been shown contingent on this hypothesis being true.

Another thing we know about this extended function is that it maps the point negative one over to negative one twelfth. And if you plug this into the original sum, it looks like we’re saying one plus two plus three plus four, on and on up to infinity, equals negative one twelfth. Now it might seem disingenuous to still call this a sum, since the zeta function on the left half of the plane is not defined directly from this sum. Instead, it comes from analytically continuing the sum beyond the domain where it converges. That is, solving the jigsaw puzzle that began on the right half of the plane.
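And you can check the negative one twelfth value the same way, again assuming mpmath is available.

```python
# The extended function really does send -1 to -1/12, even though plugging -1
# into the original series would give 1 + 2 + 3 + 4 + ...
from mpmath import zeta

print(zeta(-1))   # -0.0833333333333333, which is -1/12
```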

That said, you have to admit that the uniqueness of this analytic continuation, the fact that the jigsaw puzzle has only one solution, is very suggestive of some intrinsic connection between these extended values and the original sum. For the last animation, and this is actually pretty cool, I’m gonna show you guys what the derivative of the zeta function looks like.
