
The Most Difficult Math Problem You've Never Heard Of - Birch and Swinnerton-Dyer Conjecture

Apr 16, 2024
A few weeks ago I came across an article in MIT Technology Review that claimed that quantum computing would break 2048-bit RSA within the next 25 years. For those of you who don't know what RSA actually is, it's basically an encryption algorithm - it's what we use to encrypt things, from secret government apps to what's hidden in your bank account. Now, for most of us it doesn't really matter if, when, or how RSA gets hacked, because 256-bit RSA has already been broken in the past, and for us it doesn't really matter if our credit card could get hacked years in the future.
But the government does care. And that got me thinking: why do we trust an algorithm that is old, obsolete, and exactly the kind of algorithm that a quantum computer can brute force? Now, after doing some research, I discovered that a better and more secure algorithm has already started to make its way into the mainstream. This is elliptic curve cryptography, or ECC. Now, there is a lot of politics associated with ECC (because a single company owns many patents and so it is difficult to discern exactly what we can and cannot use), but the fact is that ECC will probably be the encryption method of the future.

The key length is much shorter, it uses less CPU power, it is faster, and it requires less memory. From the outside it looks like a miracle! But the more I read about these structures, the more I discovered that we actually know very little about these things known as elliptic curves. Most of it revolves around open and unanswered problems, and in this video I want to discuss perhaps the most interesting unanswered problem related to them: the Birch and Swinnerton-Dyer conjecture, which is a Millennium Prize problem and has led to several Fields Medals. So let's get started! But before we talk about the Birch and Swinnerton-Dyer conjecture, I want to take a moment to appreciate number theory.
It is the type of mathematics that I think the layman would be most interested in, simply because the conjectures are so simple to state and yet so intriguing that Gauss once called it the Queen of Mathematics. Fermat's Last Theorem states that a to the power of n plus b to the power of n equals c to the power of n has no whole-number solutions for n greater than 2. The Goldbach conjecture states that every even number greater than 2 can be written as a sum of two primes. From these examples it is easy to see that one of the most fundamental objects of study in number theory has been equations.
What's amazing about this is the fact that even though we've studied them for thousands of years, we still know almost nothing about the vast majority of them (and when I say thousands of years, I mean thousands of years). The first known equations can be attributed to Egypt and China, where texts such as the Rhind Papyrus dealt with linear equations back in 1500 BC. But the most significant advances in the theory of equations were made in the 3rd century AD by Diophantus, a Greek mathematician who created the method of analysis now known as Diophantine analysis and who inspired Fermat to conjecture several of his theorems, including Fermat's Last Theorem, which was in fact scribbled in the margin of Diophantus's book, Arithmetica, with the very useful comment that he had found a remarkable proof of the theorem.
Now, what I would call the key feature of Diophantus's work was that he only cared about the positive integer solutions of his equations, and to this day an equation for which we only care about the integer solutions is still called a Diophantine equation. I think the simplest example of such an equation would be a polynomial. 'Polynomial' literally means 'many parts' in Greek, and a polynomial equation is built solely from a single variable that is added, multiplied, and raised to powers that are non-negative whole numbers, meaning that you never get x inside a fraction or under a root.
In mathematical terms they are simple and not very interesting, due to something known as the rational root theorem. When I first heard about the rational root theorem, my jaw dropped: in your 12 years of schooling you spend a lot of time solving polynomial equations, and you are repeatedly told that you can only solve linear, quadratic, and sometimes cubic equations. You cannot solve equations in which x is raised to a power greater than three. But the rational root theorem says, 'No, you can, and it's actually very simple!' The idea behind the rational root theorem is that any polynomial can be written in terms of x and various constants.
This is the general representation of a polynomial; all you have to keep in mind is that the powers of x are always non-negative integers and the coefficients a_0, a_1, ..., a_n can be any integers we want. It is easy to see that we can write any polynomial in this way. What the rational root theorem says is that any rational solution of this equation must have a factor of a_0 in the numerator and a factor of a_n in the denominator, where a_0 is the constant term and a_n is the coefficient in front of the highest power of x.
After that, it's just a game of trying all the candidate solutions. For example, if we have x^2 + 2x + 1 = 0, then the factors of a_n are only 1 and -1, and the factors of a_0 are also only 1 and -1. This means that the only possible rational solutions to this equation are 1/1 and -1/1, which are just 1 and -1. Plugging those values back into the equation, we get that yes, -1 is a solution. Now, while the rational root theorem is indeed a beautiful piece of mathematics, it is easy to see why it makes polynomials so uninteresting to mathematicians. The fact is that we can already find all the rational solutions of a polynomial, so it is much more fun to delve into the unexplored parts of mathematics and look at those equations that have more than one variable.
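To make that candidate-testing game concrete, here is a minimal sketch in Python (my own illustration, assuming a polynomial with integer coefficients; the function names are made up for this example):

```python
from fractions import Fraction
from itertools import product

def rational_roots(coeffs):
    """All rational roots of a polynomial with integer coefficients.
    coeffs = [a_n, ..., a_1, a_0], highest power first."""
    a_n, a_0 = coeffs[0], coeffs[-1]
    if a_0 == 0:
        # x = 0 is a root; factor out x and keep going with the rest.
        return {Fraction(0)} | rational_roots(coeffs[:-1])
    divisors = lambda n: [d for d in range(1, abs(n) + 1) if n % d == 0]
    candidates = {Fraction(sign * p, q)
                  for p, q in product(divisors(a_0), divisors(a_n))
                  for sign in (1, -1)}
    def evaluate(x):
        # Horner's rule: evaluate the polynomial exactly at x.
        acc = Fraction(0)
        for c in coeffs:
            acc = acc * x + c
        return acc
    return {x for x in candidates if evaluate(x) == 0}

# x^2 + 2x + 1 = 0: the only candidates are 1 and -1, and -1 checks out.
print(rational_roots([1, 2, 1]))   # {Fraction(-1, 1)}
```

The search space is finite: only fractions whose numerator divides a_0 and whose denominator divides a_n can possibly work, so a short loop checks them all.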
As soon as we add y to the mix, things start to go crazy. The most elementary of these equations are those that contain only x and y terms without raising them to any higher power, known as linear Diophantine equations. These equations can be modeled by a straight line, and it is quite easy to determine all their solutions, because there are infinitely many rational solutions. The same applies to equations where x and y are raised to the second power, known as conics. Conic sections are basically slices of cones, so they form figures such as circles, ellipses, parabolas and hyperbolas.
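To get a feel for why a conic with even one rational point has infinitely many, here is a hedged sketch of the classical trick (my own illustration, not something from the video): sweep lines of rational slope t through the known point (-1, 0) on the unit circle x^2 + y^2 = 1, and every slope lands on a second rational point.

```python
from fractions import Fraction

def circle_point(t):
    """Second intersection of the unit circle x^2 + y^2 = 1 with the line
    of slope t through (-1, 0).  A rational t always gives a rational point."""
    t = Fraction(t)
    x = (1 - t**2) / (1 + t**2)
    y = 2 * t / (1 + t**2)
    return x, y

for t in (Fraction(1, 2), Fraction(2, 3), Fraction(3, 7)):
    x, y = circle_point(t)
    assert x**2 + y**2 == 1          # still on the circle, with rational coordinates
    print(t, "->", (x, y))
# t = 1/2 gives (3/5, 4/5): clear denominators and you get the 3-4-5 triangle.
```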
It is also easy to find rational solutions to these equations, because we have a definite algorithm for handling them based on something known as the Hasse-Minkowski theorem; again, there are infinitely many rational solutions. Things really start to get interesting after this. Curves in which the powers of x or y go up to 3 are known as elliptic curves, and they are the most interesting from the point of view of number theory. They look like this* and like this*. The reason is that these equations do not need to have an infinite number of rational solutions: many of them do, but some don't.
This is important, because if you have an equation with powers greater than 3 (such as 4, 5, 6, and so on), then it has been mathematically proven that it cannot have an infinite number of rational solutions: it only has a finite number. This was proved in 1984 by Gerd Faltings, who won the Fields Medal for it in 1986 (keep count, that's the first Fields Medal). We also know of no algorithm to determine those particular solutions. But elliptic curves are different, and the reason is that they are not like conics: they don't simply have either zero or infinitely many rational solutions.
Some of them may also have a finite number of rational solutions; we simply don't know how to determine whether a particular curve has an infinite or a finite number. For a long time there was no elliptic analogue of the rational root theorem or the Hasse-Minkowski theorem. Despite all the advances we have made in mathematics in the last 200 years, we still had almost no idea how that simple cubic worked. But that began to change at the beginning of the 20th century thanks to a mathematician named Louis Mordell. A while ago we talked about Diophantus and his book Arithmetica, which is where Pierre de Fermat wrote his famous Last Theorem.
Now, Louis Mordell was an expert on Arithmetica and was writing his own textbook about those ideas in 1921, during which he made a startling observation: he incorporated a key idea from topology into the theory of elliptic curves. Before this, topology and number theory were about as different as two fields of mathematics could be, but after Mordell's discovery they became inextricably intertwined through these special elliptic curves. I talked a little about topology in my previous video on the Hodge conjecture, and in it I mentioned that structures in topology are grouped according to the number of holes they contain.
What I didn't mention was that the number of holes in a figure is called the genus of that figure. So a sphere has genus zero and a torus has genus one. This is truly an astonishing fact that has exceptional implications for not one but three Millennium problems: the Hodge conjecture, the BSD conjecture, and also the Poincaré conjecture. But the ramifications that Mordell found in number theory are probably the most extraordinary. Mordell conjectured that any curve with genus two or higher has only a finite number of rational solutions. This is the same theorem that Faltings proved in 1984, and it translates directly into the monumental leap that any equation in which the powers of x and y are four or higher has only a finite number of solutions.
But the most brilliant part of his conjecture did not concern curves with a finite number of solutions at all; instead, he generalized the results of the rational root theorem and the Hasse-Minkowski theorem to all curves with an infinite number of solutions, not just those of degree one or two. He claimed that if there existed elliptic curves with an infinite number of rational solutions, then those curves had to have a quick formula for determining the solutions, or an algorithm to generate all of them. The main implication of this was that if you knew some of the solutions, you could figure out the rest.
But that still leaves out one of the most crucial steps in dealing with these curves: how can we tell whether a particular curve has an infinite or a finite number of solutions? Now, at this point we need to define a fairly abstract mathematical construct, so stay with me for a moment. Essentially, the important fact is that one of the major implications of Mordell's conjecture (which is a theorem now, so I think it should be called Mordell's theorem) is that if you have a curve with an infinite number of solutions and you already know some of those solutions, then it is possible to generate all the others using only those first few.
The set of all rational solutions of an elliptic curve is called the Mordell-Weil group of that curve; the group can be infinite or finite depending on the curve. As I said, if the genus is zero, the curve has infinitely many solutions, and if the genus is two or higher, it has a finite number of solutions. What we are most concerned with are elliptic curves, which have genus exactly 1. Now, we could continue without knowing exactly how this solution-generating process is carried out, but I think it is important to understand it, because it helps us grasp the full implications of Mordell's theorem.
Let's start by considering any random elliptic curve together with a pair of rational solutions. There is a very important theorem that says that if we draw a line passing through two rational points, the third point of intersection with the curve is also a rational point. So now we have three rational points. You might think that we only need two starting points to generate all rational solutions, but that's not exactly true, because we may need more than two: it is possible that the two rational points we start with only generate some of the solutions.
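Here is a minimal sketch of that chord step for a curve written as y^2 = x^3 + ax + b (the specific curve and points below are my own example, not ones from the video); exact fractions keep everything rational:

```python
from fractions import Fraction

def third_point(P, Q):
    """Third intersection of the curve y^2 = x^3 + a*x + b with the line
    through two distinct rational points P and Q (assumes different x-coordinates)."""
    (x1, y1), (x2, y2) = P, Q
    m = (y2 - y1) / (x2 - x1)      # slope of the chord, kept as an exact fraction
    x3 = m**2 - x1 - x2            # substitute the line into the cubic; Vieta gives x3
    y3 = y1 + m * (x3 - x1)
    return x3, y3

# Example (my own choice): y^2 = x^3 - 2x with the rational points (0, 0) and (2, 2).
a, b = -2, 0
P = (Fraction(0), Fraction(0))
Q = (Fraction(2), Fraction(2))
x3, y3 = third_point(P, Q)
print((x3, y3))                    # (-1, -1), a brand-new rational point
assert y3**2 == x3**3 + a * x3 + b
```

In the full group law one also reflects the new point across the x-axis before repeating, but the key fact for us is simply that the construction never leaves the rational points.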
We may need two or more additional points to be able to generate them all. But the fact is that we know that if we have enough initial points for any given curve, then we can calculate the entire Mordell-Weil group of that curve. The difficulty again lies in knowing exactly how many of them we need. We call the number of those points the rank of that curve. At this point I want to stop for a second and take some time to appreciate what exactly we have combined to form this wonderful theory of elliptic curves. We've used number theory, algebraic geometry, and topology together!
It's amazing how these different fields of mathematics come together so uniquely in this common theory, but what we're going to add next is absolutely mind-blowing and probably the last thing one would expect to join this particular theory: the Riemann zeta function. Now, I'll be the first to say that that was actually a bit misleading: we won't be using the *zeta function* itself but rather the broader class of functions to which the zeta function belongs, called L-functions. This relates to another very important idea in mathematics which was discovered by Gauss: modular arithmetic. Modular arithmetic was originally called "clock arithmetic" because it is essentially what happens on an analog clock.
If our wall clock says 12:00, then it is a little strange that an hour later it is not 13:00 but 1:00. This is because the time resets every 12 hours on the clock. But there's nothing particularly special about the number 12; we could also have used ten, for example, where after 10 would come 1 instead of 11. We call this number the "modulus". This gives us a very different way of analyzing the number line. Now, this idea is nice and simple, but it is something that will be extremely unfamiliar to the average layman, who would have to train themselves to think this way.
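In code, that "reset" is just the remainder operation; a tiny toy example (the numbers are my own):

```python
# Clock arithmetic: one hour after 12:00 is 1:00, not 13:00.
hour = 12
print(hour % 12 + 1)      # 1  -- working modulo 12

# Nothing special about 12; the same trick works modulo 10, or modulo a prime.
print((7 + 5) % 10)       # 2
```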
But the fact of the matter is that, using this concept, we only need to analyze a few small numbers to model the behavior of much larger ones. Modular arithmetic is one of the most important tools that number theorists use to analyze the properties of structures such as functions. In elliptic curve theory, number theorists count the integer solutions of the curve's equation modulo prime numbers (such as 2, 3, 5, 7, and so on). It is a fairly easy task to do on a computer, and many of you could write such a program at home. But in the 1960s, when mathematicians began looking for solutions modulo prime numbers, they didn't have access to the supercomputers we have today;
their cutting-edge technology was a room-sized computer called EDSAC, which had a total of 576 bits of memory. For reference, laptops in 2020 can reach RAM capacities of over 64 gigabytes, which is about half a trillion bits, an unfathomably larger amount! Of the mathematicians who used the EDSAC, it was a young bridge champion, Peter Swinnerton-Dyer, who was the first to attempt to generate these results modulo prime numbers. He and his supervisor, Ian Cassels, ran the computations through EDSAC and discovered an incredible and unexpected result: the number of solutions they obtained per prime number was linked to a formula previously associated with an exceptionally advanced branch of mathematics that had been the natural progression of calculus over the last 200 years: complex analysis! To illustrate exactly how this method works, we first start by drawing a table and recording the number of solutions per prime. Here I will denote the number of solutions modulo the prime p by N_p, with p in the subscript. Now we can make p as big as we want, since we know that there are infinitely many primes, so we keep looking at the next prime and the next, and we can keep this process of finding the number of solutions going forever. The next interesting part of this dilemma was discovered by Bryan Birch, who was also working with Cassels as a doctoral student.
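Counting N_p really is a program you can write at home. Here is a minimal brute-force sketch (the curve y^2 = x^3 - 2x is my own running example, not the curve Birch and Swinnerton-Dyer actually fed into EDSAC):

```python
def count_points_mod_p(a, b, p):
    """N_p: the number of pairs (x, y) with y^2 = x^3 + a*x + b modulo p,
    counted by plain brute force."""
    return sum(1 for x in range(p) for y in range(p)
               if (y * y - x**3 - a * x - b) % p == 0)

# My running example again: the curve y^2 = x^3 - 2x.
a, b = -2, 0
for p in (2, 3, 5, 7, 11, 13, 17, 19):
    print(f"N_{p} = {count_points_mod_p(a, b, p)}")
```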
He realized that if you take the number of solutions N_p and divide it by the prime p itself, then the fractions N_p/p that you form have a very particular property. Specifically, if you multiply all of these fractions, from 2 and 3 and 5 up to p, together, you find an extraordinary correlation, which I am going to show now. What I have done is take a graph with a logarithmic x-axis, which means that equal steps along the axis get squeezed closer and closer together. The first thing we do is find N_2 for the smallest prime, 2, on this elliptic curve, and then we plot the ratio N_2/2. Next we find the ratio for the next prime, 3, multiply it by N_2/2, and plot *that* product. We repeat this with the primes 5, 7, and so on through increasing primes. Over time, you start to notice that as p gets higher and higher, the points oscillate up and down around a particular line: in statistics, it would be called the line of best fit. This is all perfectly normal behavior, but there's something very suspicious going on here, with the slope of this line turning out to be exactly 1. That's all well and good, until coincidentally (or not) it turns out that the rank of this elliptic curve is also equal to 1.
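Here is a hedged sketch of that experiment, reusing my example curve y^2 = x^3 - 2x and assuming (as standard tables list) that its rank is 1. The heuristic behind the plot is that the product of the N_p/p over primes p up to x grows roughly like a constant times (log x)^rank, so the printed ratio should drift toward the rank:

```python
import math

def count_points_mod_p(a, b, p):
    """N_p for y^2 = x^3 + a*x + b mod p, counted with the Legendre symbol
    (same answer as the brute-force count above, just fast enough for larger p)."""
    if p == 2:
        return sum(1 for x in range(2) for y in range(2)
                   if (y * y - x**3 - a * x - b) % 2 == 0)
    total = 0
    for x in range(p):
        fx = (x**3 + a * x + b) % p
        if fx == 0:
            total += 1                              # only y = 0 works here
        elif pow(fx, (p - 1) // 2, p) == 1:
            total += 2                              # fx is a square mod p: two choices of y
    return total

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# Heuristic behind the plot: prod_{p <= x} N_p / p ~ C * (log x)^rank,
# so log(product) / log(log x) should drift toward the rank as x grows.
a, b = -2, 0                                        # y^2 = x^3 - 2x, assumed rank 1
running_product = 1.0
for p in primes_up_to(3000):
    running_product *= count_points_mod_p(a, b, p) / p
    if p > 500:                                     # ignore the noisy small primes
        print(p, round(math.log(running_product) / math.log(math.log(p)), 3))
```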
This is a rather surprising observation, and it could easily be dismissed at first as mere coincidence, but once we start analyzing more and more curves, we discover that each time the slope of the line is equal to the rank of the curve. As Bryan Birch and Peter Swinnerton-Dyer continued to analyze more and more curves, they eventually became convinced that this calculation would always lead to the rank of the curve. In 1965 they formalized this conjecture and announced it in a paper titled 'Notes on Elliptic Curves. II'. In it, Birch and Swinnerton-Dyer definitively linked the rank to complex analysis: they observed that several elliptic curves could be modeled with something known as an L-function, and they became convinced that this could be successfully applied to all elliptic curves.
The Riemann zeta function is also one of these special L-functions, although it is arguably the *most* special. We now know that we can assign a unique L-function to each elliptic curve, but at the time Birch and Swinnerton-Dyer's conclusions were controversial, as this was not known to be true. To give you an example, for the curve x^3 + y^3 = 5, the L-function is this peculiar series. From here we began to venture into muddy waters. In this function, when we enter the value s = 1, we get a result of the form Omega*S, where Omega is a standard integral that is not that difficult to solve and S is a characteristic number of that curve.
For example, in the L-function we saw before, L(1) equals precisely 1.03313660856, and the integral Omega also happens to be exactly that value, leaving S = 1. So now that we are familiar with some analysis, we can finally express the BSD conjecture in its simplest form: an elliptic curve has infinitely many rational solutions if and only if S = 0. This statement can be shown to be completely equivalent to Birch and Swinnerton-Dyer's original observation that the slope of the line obtained by plotting the products of the ratios N_p/p is equal to the rank of the curve. Now that we know what the BSD conjecture is, it's time to ask the million-dollar question: is it true?
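For reference, the way the conjecture is usually written today (a standard formulation from the literature, not a phrasing taken from the video) is in terms of the order of vanishing of the curve's L-function at s = 1:

```latex
% Birch and Swinnerton-Dyer conjecture, rank form: the rank of the group of
% rational points E(Q) equals the order of vanishing of L(E, s) at s = 1.
\[
  \operatorname{rank} E(\mathbb{Q}) \;=\; \operatorname{ord}_{s=1} L(E, s) .
\]
% In particular (the "weak" form this video is using):
\[
  E(\mathbb{Q}) \text{ is infinite} \iff L(E, 1) = 0 .
\]
```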
Well, if you ask the mathematical community, the answer seems to be an overwhelming "yes"! I think the simplest indication that the BSD conjecture makes sense is the fact that mathematicians have found curves with enormous ranks; the largest is 18, found by Noam Elkies in 2006, which is this monstrosity. We're not really sure at this point whether the rank of an elliptic curve can be arbitrarily large, but it seems likely that there is no upper limit. In 1990, the Russian mathematician Victor Kolyvagin showed that the BSD conjecture is true for all curves of rank 0 and 1. Research on curves of rank two or higher has been done primarily statistically, and in 2010 Manjul Bhargava and Arul Shankar showed by extensive analysis that the average rank of an elliptic curve is less than 7/6, for which Manjul Bhargava won a Fields Medal.
What this implies is that almost all curves have rank equal to zero or one. In fact, it has been conjectured that, statistically, 100% of elliptic curves have rank zero or one, but in a mathematical sense that doesn't mean much, since we are dealing with infinitely many curves; it simply means that curves of rank two or higher are so rare that they are essentially statistical anomalies. Bhargava and Shankar showed that at least 62.5% of elliptic curves have rank 0 or 1. I think the BSD conjecture is one of those strange problems that acts as the definitive crossover between all these different fields of mathematics.
While it is fundamentally a number theory problem, it has profound implications in fields ranging from statistics to complex analysis, and it acts as the basis for all these other incredible theories. Hundreds of articles have been published assuming that the conjecture is true. A final proof would legitimize thousands of claims, prove countless theorems, and push our understanding of the number-theoretic framework to places it has never gone before. The Birch and Swinnerton-Dyer conjecture is right up there with the Riemann hypothesis as one of the most important problems in the history of mathematics, and solving it would be the cherry on top of the great mathematical theories of the 20th century.
Thanks for watching.
