
What does the second derivative actually do in math and physics?

May 01, 2024
…says this: you don't have to know what's going on anywhere outside of a little ball; if you want to know what the potential is here, tell me what it is on the surface of any ball, no matter how small (you don't have to look outside, just tell me what's in the neighborhood) and how much mass is in the ball. The rule is this... The man you just saw speaking is physicist Richard Feynman, giving a lecture at Cornell in 1964. I remember seeing this in my dorm room my first year of undergrad, and being taken aback by Feynman's "average over a ball" concept, whatever that means.
In this video, I want to really dig into what Feynman is talking about, and in the process, we will develop a deep intuitive understanding of what the second derivative actually does in mathematics and physics, and why it appears in the Schrödinger equation, E&M, and other places. As a heads up, I'll assume that you are familiar with Taylor series, to the extent that you know that we can expand a function as such. Now, before we dive in, I want to tell everyone about a really interesting opportunity from some Harvard physics PhD students working with the Harvard Quantum Initiative.

To celebrate World Quantum Day, they are running a quantum short contest on their HQI blog. This is a contest open to absolutely anyone, regardless of their training or experience in physics. Basically, you are invited to create a short video on a quantum topic of your choice, using your creativity to explain some aspect of quantum physics. After you submit your entry, you have a chance to win Harvard merchandise and even a trip to Harvard to explore its quantum research facilities and meet the scientists driving quantum research. This is a really interesting opportunity run by really passionate people, so check out their blog and enter the competition if you're interested;
I'll link it in the description. The deadline is May 14, 2024, so good luck if you decide to submit something! Now, to begin our journey on the second derivative, I think we should quickly review our intuition about the first derivative. Essentially, let's say we have some variable x and a point x_0. Similarly, let's say we have some function f(x), where I've indicated where f(x_0) falls on this number line. If we move x a little further away from x_0, we will consequently move f(x) a little further away from f(x_0). Then, if we take the change in f(x) and divide it by the change in x, we intuitively obtain the first derivative at the point x_0.
Formally you have all the limits and all that, but this is the intuition. So the first derivative intuitively tells us how much f changes when we change x by a small amount. So what about the second derivative? What does it tell us? Well, we're usually taught that it tells us how the first derivative changes when we change the input by a small amount. But... this characterization kinda sucks. I don't want to know what the second derivative tells me about the first derivative; I want to know what it tells me about the function itself! So... how do we try to intuitively understand the second derivative?
Well, this is where we're going to follow Feynman's lead, so let's delve into the "average over a ball" concept he was talking about, first in one dimension. Let's say we have some function, and let's look at a particular point x_0. What I want to do is look at the points right next to x_0, both at a distance dx. Note that this is what a "ball" is in one dimension: it is all the points a distance dx away from x_0. Now, trusting Feynman again for a moment, I want to know if the value of f at the points near x_0 is on average greater or less than the value of f at x_0.
Here we see that they are both greater, but how do we quantify this? One way to measure this is by calculating the average of f for the points around x_0 (where I've used this fancy double parenthesis to indicate the average) and then subtracting the value of f at x_0. This should indicate how much higher or lower, on average, the points around x_0 are. Take a second to make sure you understand what this quantity represents. This may seem like a random expression, but let's stick with it for a moment. First, let's calculate the value of this average term.
The average of the two values around x_0 is calculated exactly as we'd expect: by adding them and then dividing by two. Now, remember that dx is supposed to be quite small, which should inspire us to Taylor expand both quantities around the point x_0. The Taylor series of the point to the right of x_0 can be written as follows, while the series of the point to the left of x_0 can be written similarly. Now, if we add the two, note that the terms with an odd power of dx will cancel, leaving only the terms with an even power of dx.
So, we get the following. Note that using a first-order expansion for both terms would not have worked here. That usually works, but notice that the first-order terms cancel out! We'll have more to say on this in a moment, but keep it in mind. So, dividing by 2, we get that the average of the points around x_0 can be written as follows. Now we can subtract the point at the center, f(x_0), from both sides. To continue, I'll divide both sides by dx^2. Now, let's take the limit as dx goes to zero on both sides.
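The cancellation of odd powers can be checked symbolically. Here is a quick sketch (not from the video) using sympy, expanding a generic function f(x_0 + dx) and f(x_0 - dx) and adding the two series:

```python
import sympy as sp

x0, dx = sp.symbols('x0 dx')
f = sp.Function('f')

# Taylor expand f(x0 + dx) and f(x0 - dx) around dx = 0, then add:
total = (f(x0 + dx).series(dx, 0, 5) + f(x0 - dx).series(dx, 0, 5)).removeO().expand()

# The odd powers of dx cancel; only even powers survive.
print(sp.collect(total, dx))
```

The constant term comes out to 2*f(x0) and the dx and dx^3 terms vanish, which is exactly the cancellation used above.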
Note that all terms with dx^2 and higher on the right side will go to zero. If we then move the ½ in front of the second derivative to the left side, we are left with the following. This is a really interesting result: we have found that the second derivative at a point is related to the average of the values around that point, minus the value of the function at that point. And if we take a moment to think about this result, it should make a lot of sense. Remember that we usually use the second derivative to study the curvature of functions.
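As a quick numerical sanity check (my own sketch, not from the video), we can test this relationship on a concrete function, say f(x) = sin(x), whose second derivative we know exactly:

```python
import math

def second_derivative_via_ball_average(f, x0, dx=1e-4):
    """Estimate f''(x0) as 2 * (average of f over the 1D 'ball' {x0-dx, x0+dx} minus f(x0)) / dx^2."""
    ball_average = (f(x0 + dx) + f(x0 - dx)) / 2
    return 2 * (ball_average - f(x0)) / dx**2

x0 = 0.7
estimate = second_derivative_via_ball_average(math.sin, x0)
exact = -math.sin(x0)  # d^2/dx^2 sin(x) = -sin(x)
print(estimate, exact)
```

For small dx the two numbers agree to many decimal places, which is the limit statement above in action.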
If a function is concave up, then at any point the values of the function around that point are on average higher, so the limit we derived a few moments ago is positive, giving us a positive second derivative. And if a function is concave down, then at any point the values of the function around that point are on average lower, so the limit is negative and we get a negative second derivative. So this whole "averaging over a ball" thing is really just a way to quantitatively capture the curvature of a function, which turns out to be related to the second derivative.
Now, what is the curvature of a straight line? Zero! Which explains why the first-order terms in the Taylor expansions cancelled: those terms represent the linear part of the function, which contributes nothing to the curvature. So we see that the second derivative of a function of a single variable has a really clear geometric interpretation in terms of the average value around a point, minus the value at the point itself. Now, suppose we wanted to extend this idea to three dimensions. How would we do it? This necessarily becomes a multivariable calculus problem, but we can guess what the solution would be in this case.
In three dimensions, to find the average value of a function around a point, we would look at a small sphere of radius dx around that point and take the average value over all the points on that sphere. Then, we would simply subtract the value at the center of the sphere from that average. Our claim is that this is related to some three-dimensional version of the second derivative. It turns out that this is one hundred percent correct! In three dimensions, the corresponding expression is the following, where the second derivative in three dimensions is written using this symbol here, called the Laplacian.
The only difference is that the 2 has become a 6 (and in fact, that number is always twice the number of dimensions you are in). Those of you who have taken a multivariable calculus course will recognize the Laplacian and know how to calculate it, but for anyone who hasn't, just take it to mean a second derivative in 3 dimensions, which it really is. For those of you interested in a derivation of this fact, which you're entitled to want, I have included a link to a clean derivation in the description. This is one of those expressions that is much better suited to a derivation on paper, where you can see each calculation step, rather than displaying a hundred equations on a screen for 20 minutes.
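To make the 3D claim concrete, here is a small Monte Carlo sketch of my own (the test function is just an arbitrary quadratic I picked) comparing the sphere average minus the center value against dx²/6 times the Laplacian:

```python
import math
import random

def sphere_average(f, center, radius, n_samples=100_000, seed=0):
    """Average f over a sphere's surface, sampling uniform random directions.

    Each direction is used in an antithetic pair (u and -u), which cancels the
    linear part of f exactly -- the same trick as averaging the two sides in 1D.
    """
    rng = random.Random(seed)
    cx, cy, cz = center
    total = 0.0
    for _ in range(n_samples):
        # A normalized 3D Gaussian vector is a uniformly random direction.
        gx, gy, gz = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        n = math.sqrt(gx*gx + gy*gy + gz*gz)
        ux, uy, uz = radius*gx/n, radius*gy/n, radius*gz/n
        total += f(cx + ux, cy + uy, cz + uz) + f(cx - ux, cy - uy, cz - uz)
    return total / (2 * n_samples)

# Test function: f = x^2 + y^2 + 2z^2, so laplacian(f) = 2 + 2 + 4 = 8.
f = lambda x, y, z: x*x + y*y + 2*z*z
center, dx = (0.3, -0.5, 1.0), 0.01

lhs = (sphere_average(f, center, dx) - f(*center)) / dx**2
rhs = 8 / 6  # laplacian(f) / 6
print(lhs, rhs)
```

Both sides come out to about 1.33, matching the "2 becomes a 6" rule.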
That said, the intuition for this equation is exactly the same as in the one-dimensional case. Taking this expression as fact, we can now use it to intuitively derive some of the most important equations in modern physics, without doing tedious calculations. For example, let's say we have an electric charge distribution defined by a function rho(x). How would we come up with an equation for the potential energy function generated by this charge distribution? Although this seems like a difficult problem, we can use our new intuition to attempt it. Let's say we have a region of negative charge.
If we take a positive charge and move it away from the region of negative charge, the attractive force means we have to do work on it, so it gains electrical potential energy as we move it away from this region, similar to how lifting something gives it more gravitational potential energy. Likewise, if we have a region of positive charge and we move our particle away from this positive region, the positive charge will be repelled, so it loses potential energy. So, let's use this intuition to form a differential equation! Let's say we have our potential energy function defined over all of space, and let's look at some point x_0.
We can then examine the average potential energy on a small sphere around x_0. If the potential energy is on average greater at the points around x_0, then our intuition tells us that there should be some negative charge here, because this means that our particle gains potential energy as it moves away from x_0. Likewise, if the potential is lower at the points around x_0, then our intuition indicates that there is some positive charge here. Summarizing all this, we can use our physical intuition to guess that perhaps an equation of the following form is correct: the second derivative of the potential (which tells us on average how much larger the potential is around any given point) should be proportional to the negative of the charge density at that point x_0.
Take a second to digest this, and you'll see that it exactly describes the conclusions we drew a moment ago. And it turns out that this is exactly right; in fact it is one of Maxwell's equations written in terms of the potential, with A equal to one over the permittivity of free space. So we have intuitively derived an equation without the need for any sophisticated electromagnetic theory. Although we were lucky that the dependence on rho was not something more complex, you would be surprised how often nature seems to choose the simplest expression. Now, since this channel has been dedicated to quantum mechanics in the past, we can also use our new understanding to develop greater intuition about quantum phenomena.
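We can also check this guessed relationship symbolically for one concrete case. Below is my own sketch (not from the video), in Gaussian-style units where the proportionality constant is 4π rather than 1/ε₀: for a spherically symmetric Gaussian charge density, the known potential φ = erf(r/(√2σ))/r does satisfy ∇²φ = -4πρ.

```python
import sympy as sp

r, s = sp.symbols('r sigma', positive=True)

# Potential of a unit Gaussian charge of width sigma (units with k = 1):
phi = sp.erf(r / (sp.sqrt(2) * s)) / r

# For a spherically symmetric function, laplacian(phi) = (1/r) d^2/dr^2 (r * phi):
laplacian_phi = sp.diff(r * phi, r, 2) / r

# The Gaussian charge density itself (total charge 1):
rho = sp.exp(-r**2 / (2 * s**2)) / ((2 * sp.pi) ** sp.Rational(3, 2) * s**3)

# Poisson's equation in these units: laplacian(phi) = -4*pi*rho
print(sp.simplify(laplacian_phi + 4 * sp.pi * rho))  # → 0
```

The difference simplifies to zero, so the second derivative of the potential really is proportional to minus the charge density here.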
Specifically, I want to examine the kinetic energy operator in quantum mechanics, which in terms of position can be written as the second derivative. Now, even if you've never seen this before or never taken a quantum mechanics course, we can still understand why this quantity should represent kinetic energy. To summarize some quantum physics, the wave function is a function that tells us the probability amplitude of where our particle is. With just this, we can use our second derivative intuition to derive some quantum facts. First, let's assume that the position wavefunction of our particle looks like a simple Gaussian.
For now, let's assume that it is a fairly narrow Gaussian, which means that our particle is well localized around some point in space. So how do we interpret the kinetic energy operator acting on this wave function? Well, if we take this relationship as fact for the moment, then at the peak of our wave function, all the points around it are on average much smaller, so, using our newly developed intuition, we expect the second derivative to be a large negative number, which, when multiplied by the negative sign out front, means that this quantity is a relatively large positive number. I say relatively because hbar is small, but the narrower we make the Gaussian, the larger we make this number.
Now, this should be a somewhat shocking result. Although it is a bit dubious to interpret an operator at a single point, this statement is approximately true in the region where our particle is located. So why should this surprise us? Well, all we did was define where our particle was located in space; nowhere did we specify how it was moving or in what direction. So... why and from where does our particle obtain this kinetic energy? What we are seeing here is, in fact, a manifestation of Heisenberg's uncertainty principle. Note that our particle is very confined in space, so we have a really low uncertainty in its position.
The uncertainty principle then dictates that we must be wildly uncertain about the momentum of our particle, which can therefore take on large magnitudes, increasing the kinetic energy of our particle. And in fact, if we evolve this initial quantum state in time, the solution we would obtain is a Gaussian that spreads over time, since that extra kinetic energy coming from the momentum uncertainty pushes our particle outward. So now we can see that the kinetic energy operator in quantum mechanics not only measures how fast our particle is moving, but also carries information about how the uncertainty principle affects its motion: the more localized our wave function is, and therefore the lower the average values around its peak, the more its motion and energy are shaped by momentum uncertainty.
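The narrowing-Gaussian claim can be made quantitative with a short symbolic computation (my own sketch, in units with ħ = m = 1): for a Gaussian wave function of width σ, the expected kinetic energy comes out to 1/(8σ²), blowing up as the packet narrows.

```python
import sympy as sp

x = sp.symbols('x', real=True)
s = sp.symbols('sigma', positive=True)

# Gaussian position-space wave function of width sigma (unnormalized):
psi = sp.exp(-x**2 / (4 * s**2))

# Kinetic energy operator in units hbar = m = 1: T = -(1/2) d^2/dx^2
T_psi = -sp.Rational(1, 2) * sp.diff(psi, x, 2)

# Expectation value <T> = ∫ psi * T psi dx / ∫ psi^2 dx
expectation_T = (sp.integrate(psi * T_psi, (x, -sp.oo, sp.oo))
                 / sp.integrate(psi**2, (x, -sp.oo, sp.oo)))

print(sp.simplify(expectation_T))  # → 1/(8*sigma**2)
```

This matches the minimum allowed by the uncertainty principle for a Gaussian (Δx = σ, Δp = 1/(2σ), so ⟨p²⟩/2 = 1/(8σ²)): squeeze σ and the kinetic energy grows.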
I think this is absolutely fascinating, and it gives us some intuition about how the uncertainty principle fits into the Schrödinger equation via the second derivative. Now, before we conclude the video, I encourage you to think about how you can use this understanding of the second derivative to build new intuitions. For example, think about what the differential heat equation actually says. Likewise, in quantum physics, higher energy wave functions tend to have smaller and smaller wavelengths: how can we understand this now? I'll let you think about it! As physicists and mathematicians, I hope I have given you another tool that you can use to understand our world, the same way Feynman did for me many years ago.
As always, if you have any questions, feel free to leave a comment and I'll do my best to answer them. I hope everyone had a good quantum day!
