
Can programmers do math? What is a real number, really?

May 17, 2024
Before time... was hashable, I made a controversial statement: "...and programmers have no idea how to do math." Given my enormous fame, many would say that I have a responsibility to own up to the defamation I utter. I'm sure the programmers will be fine: this slight towards them is probably about as significant as the difference between 0.1 + 0.2 and 0.3. However, if you are one of the affected programmers, wait: an apology will come at the end of the video.

On a surface level, real numbers are pretty intuitive: a real number is just an integer, plus an infinite sequence of digits after the decimal point. However, our intuition sucks.
That's why mathematicians worry about proofs. Specifically, we find it very difficult to discern between "really big finite" and "infinite." Deep down, we are all inherently ultrafinitists, no matter how hard we try. You might object that you don't have this difficulty, and maybe you're right and I'm just stupid, but intuition breaks down pretty quickly when we run out of finite handles on things. Mathematics puts us in a rather difficult situation: we want to talk about infinite things, but we only have finite tools. The things we talk about need finite representations, and proofs have finite length. On the other hand, I just said that a real number has a possibly infinite and arbitrary sequence of digits after the decimal point.
In particular, the digits of a real number may not follow any pattern or rule for generating subsequent digits. "I know, this is like pi, right?" "...somewhere within this string of decimals is your date of birth, your locker combination, your social security number... it's all there, somewhere." First of all, the normality of pi (which would make that claim true) is an open problem. Second, pi admits finite representations: like this one, or, if you want to read off hexadecimal digits, this one. "So real numbers can carry infinite information, big deal," I hear you say. "After the tenth digit, we can't even tell the numbers apart." You say that, but the precision you need obviously depends on the context.
You can say that 1 and 1.00000000001 and 0.99999999999 are practically indistinguishable, differing by only 10^-11, but my monomial function f(x) = x^(10 trillion) is definitely not artificial and is completely natural: f(1) is 1, but going up by 1/10^11 takes you to about 26.88 tredecillion, and going down by 1/10^11 takes you to about 37.2 quattuordecillionths. Fine, fine, my example is quite artificial. I can't help it; I am a pure mathematician; all my examples are artificial. Who do you think comes up with the elementary school problems in which Jane buys 160 identical spherical watermelons 50 cm wide, travels 3 km west and 4 km north to reach her hemispherical igloo with a radius of 10 m, and then asks whether all those watermelons fit in her home, assuming that each watermelon gains 1 ml of volume per meter traveled by absorbing snow?
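If you want to check my arithmetic, here's a quick sanity test in ordinary Python floats. The exponent 10^13 is my reading of the quoted figures (e^100 ≈ 2.688 × 10^43 and e^-100 ≈ 3.72 × 10^-44); the float results are only accurate to a few digits, which is rather the point of the whole video.

```python
import math

N = 10 ** 13                  # the monomial's exponent (my assumption to match the figures)
up = (1 + 1e-11) ** N         # ≈ e**100  ≈ 2.688e43  (26.88 tredecillion)
down = (1 - 1e-11) ** N       # ≈ e**-100 ≈ 3.72e-44  (37.2 quattuordecillionths)
print(up, down)
print(math.exp(100), math.exp(-100))   # the limits the powers approach
```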
In any case, the average person's practical understanding of real numbers depends crucially on the assumption that suitably precise approximations are good enough for everyday life. This is why most people seem perfectly satisfied with floating-point numbers, despite their... interesting artifacts. I know some of you are programmers and you're still waiting for that apology I promised you. While you wait, consider this technical interview question: you're building a numerical math library for some wealthy clients, but they need higher precision than long doubles offer. "Runtime and memory are not an issue," they say, "but our application requires arbitrary precision." Arbitrary precision means you can say goodbye to the floating-point model.
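The artifact from the opening jab is easy to reproduce; 0.1 and 0.2 have no exact binary representation, so their sum misses 0.3 by a hair:

```python
from decimal import Decimal

s = 0.1 + 0.2
print(s == 0.3)       # False
print(s)              # 0.30000000000000004
print(Decimal(0.1))   # the value actually stored for "0.1"
```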
So, what are you going to do? Perhaps the most natural way to implement real numbers to arbitrary precision is to encode them as a pair, consisting of the integer part of the number and a generator that produces as many digits of the fractional part as you could ever want. This meets the arbitrary precision requirement, so it seems perfect. However, how would you implement real number addition in this model? If we are given two real numbers generated digit by digit, there is a subtlety that makes it difficult to generate the sum digit by digit: carry propagation can originate well below the digit of interest, so it may be necessary to generate arbitrarily many digits before you can even get the integer part of the sum right.
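To make the pair encoding concrete, here's a minimal sketch (my own illustration, not the video's code) of 1/3 and 2/3 as digit generators, showing the carry trouble: their naive digit-wise sum looks like 0.999..., and no finite prefix of it ever tells you that the true sum's integer part is 1.

```python
from itertools import islice

def third():
    """Fractional digits of 1/3: 3, 3, 3, ..."""
    while True:
        yield 3

def two_thirds():
    """Fractional digits of 2/3: 6, 6, 6, ..."""
    while True:
        yield 6

# A real number as (integer part, digit generator):
x = (0, third())
y = (0, two_thirds())

# Naive digit-wise addition gives integer part 0 and digits 9, 9, 9, ...
# i.e. 0.999..., but x + y is exactly 1: after any finite number of
# digits, a carry could still bump the integer part.
digits = [a + b for a, b in islice(zip(third(), two_thirds()), 5)]
print(digits)  # [9, 9, 9, 9, 9]
```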
Addition is a very important thing to do well, and it's probably also important to keep it simple, so this complication won't do. Runtime may not be an issue, but we can't have an addition algorithm that can take infinite time to reach finite precision. Luckily, memory is not a problem. The idea behind spitting out the fractional part of a real number digit by digit is to give a sequence of finite approximations of the real number, so why don't we allow a real number to be encoded as any sequence of finite approximations? We can even control the precision of the sequence: for example, we can require the nth term of the sequence to be within 1/2^n of the true value.
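Here's one way this model might look in Python (a sketch under my own conventions: a real number is a function taking n to a rational within 1/2^n of the true value). Addition just queries each summand one level deeper, and the error bound works out: two errors of at most 1/2^(n+1) sum to at most 1/2^n.

```python
from fractions import Fraction

def real_from_value(v):
    """Wrap an exact rational as an approximation sequence (error 0 <= 1/2**n)."""
    v = Fraction(v)
    return lambda n: v

def add(x, y):
    # x(n+1) and y(n+1) are each within 1/2**(n+1) of their targets,
    # so the sum is within 2/2**(n+1) = 1/2**n: exactly the bound the
    # model demands for the nth approximation.
    return lambda n: x(n + 1) + y(n + 1)

x = real_from_value(Fraction(1, 3))
y = real_from_value(Fraction(1, 6))
s = add(x, y)
print(s(10))  # 1/2
```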
With this model of real numbers, we can define addition easily. As an application, let's use our implementation to calculate the square root of two. Since we know that the square root of two lies between 1 and 2, we can use a binary search: we update the lower or upper bound to the midpoint, depending on whether the midpoint squares to something greater than two or less than two, and repeat until the bounds reach the precision we want. The same idea can be used to calculate the square root of any non-negative rational number. Now that we can calculate the square root of any rational number, we can extend this to calculate the square root of any real number.
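The bisection just described can be sketched like this (again my own code, using exact rational arithmetic so no precision is silently lost):

```python
from fractions import Fraction

def sqrt_rat(a, n):
    """Approximate the square root of a non-negative rational a,
    with error at most 1/2**n, by bisection."""
    a = Fraction(a)
    lo, hi = Fraction(0), max(a, Fraction(1))   # invariant: lo**2 <= a <= hi**2
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid > a:
            hi = mid
        else:
            lo = mid
    return lo

r = sqrt_rat(2, 20)
print(float(r))  # ≈ 1.41421
```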
For simplicity, assume that x is greater than or equal to 1. In this case, if x_n is within epsilon of x, then the square root of x_n is within epsilon of the square root of x (since |√a − √b| = |a − b|/(√a + √b) ≤ |a − b| when a, b ≥ 1). Therefore, we can leverage our square root function for rationals to obtain a square root function for reals: to calculate the square root of x to within 1/2^n, we take a 1/2^(n+1) approximation of the square root of the rational number x_(n+1). This is a general pattern: if we want to define a function f from R to R, we can start by defining the function on the rational numbers and then use f(x_n), for an appropriately large n, to approximate f(x).
However, this pattern only works if f is continuous. We used binary search to approximate the square roots of rational numbers, so a natural question you might have is why we don't also use binary search over the rest of the reals. Well, here's the problem: while most arithmetic can be implemented in our real number model, we can't implement any of the comparison operations. Specifically, how would you implement the is_pos function, which simply checks if a number is positive? You might think, "Well, x is positive if and only if there exists some n large enough for x_n to be greater than 1/2^n." Since x_n is within 1/2^n of the true value of x, being greater than 1/2^n means that x itself is greater than zero. "Therefore, maybe you can implement is_pos as follows." Although this implementation behaves well for all positive and negative x, the function runs forever if we feed it the number zero.
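A sketch of that attempted implementation in our model (my own code; the video shows its own version):

```python
from fractions import Fraction

def is_pos(x):
    """x(n) is a rational within 1/2**n of the real x. This terminates with
    the correct answer for every nonzero x, but loops forever on zero:
    seeing x(n) inside the window [-1/2**n, 1/2**n] never rules out either
    'x is 0' or 'x is a tiny nonzero real'."""
    n = 0
    while True:
        if x(n) > Fraction(1, 2 ** n):
            return True            # then x > x(n) - 1/2**n > 0
        if x(n) < -Fraction(1, 2 ** n):
            return False           # then x < x(n) + 1/2**n < 0
        n += 1

t = is_pos(lambda n: Fraction(1, 1000))   # True, after ~10 iterations
f = is_pos(lambda n: Fraction(-1, 7))     # False
# is_pos(lambda n: Fraction(0))           # would never return
print(t, f)
```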
Basically, if the function is still running after n iterations of the loop, we can't tell whether the input x is zero or a real number that is merely close to zero (within 1/2^(n+1), say). We encounter a very similar problem if we try to represent real numbers as a sequence of bits. This reflects an amazing fact of constructive mathematics: there is no algorithm that decides whether a real number is positive. This fact is often displayed in a much more provocative way, by claiming that all functions from R to R are continuous, but this is just media hyperbole. The truth is that constructive mathematics cannot prove the existence of a discontinuous function from R to R, since this would require constructing such a function.
The source of the problem in defining a discontinuous function lies not in the discontinuity itself, but in the fact that its domain is the set of real numbers. There is no difficulty in constructing a discontinuous function from the rationals to R. In particular, is_pos is really simple to define, assuming that the rational numbers are encoded as a ratio of integers where the denominator is positive. As promised, I owe an apology. My previous video suggested that the elementary proof of the intermediate value theorem provides an algorithm for solving equations of the form f(x) = 0 using binary search, but as we just saw, we cannot perform binary search on a general real number.
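For contrast with the real-number case, here's how trivially positivity decides on the rationals (my own sketch; Python's Fraction normalizes to a positive denominator, matching the encoding assumed above):

```python
from fractions import Fraction

def is_pos_rat(q):
    """On the rationals, positivity is decidable: with q = p/d and d > 0,
    just test the sign of the numerator. No loops, no waiting."""
    return q.numerator > 0

a = is_pos_rat(Fraction(3, -7))        # False (normalized to -3/7)
b = is_pos_rat(Fraction(1, 10 ** 100)) # True, no matter how tiny
print(a, b)
```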
So, in addition to all the lying and deceit already prolific in that video, I also lied in the first half: although I emphasized that the elegant proof does not come with an algorithm, in truth neither proof is constructive. For this infraction, as a Canadian, I have to say "sorry, eh?" If you're still here, thanks for watching! On your way out, thank PHScience for calling me out on my blatant deception. If you liked the video, some recent literature reviews suggest there is a way to express it. If... "Hey, what about when you said that programmers have no idea how to do math? What's up with that?"
