
Lecture 19: Dynamic Programming I: Fibonacci, Shortest Paths

May 23, 2024
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

Let me start a new and exciting topic: dynamic programming. Yes, very exciting. In fact, I'm very excited, because dynamic programming is my favorite thing in the world in algorithms, and it will be the next four lectures. It's very exciting. It has many different facets. It's a very powerful, general design technique. We don't talk a lot about algorithm design in this class, but dynamic programming is very important, and it also takes a little while to get used to.
We like to introduce it in 6.006. So, in general, our motivation is designing new algorithms, and dynamic programming, also called DP, is a great, very general, powerful way to do this. It's especially good at, and intended for, optimization problems, things like shortest paths: you want to find the best way to do something. Shortest paths: you want to find the shortest path, the minimum-length path. If you want to minimize or maximize something, that's an optimization problem, and typically good algorithms for solving them involve dynamic programming. That's a somewhat broad statement. You can also think of dynamic programming as a kind of exhaustive search, which is usually a bad thing because it leads to exponential time. But if you do it in a clever way, via dynamic programming, you typically get polynomial time. So one perspective is that dynamic programming is roughly careful brute force. It's a funny combination, a little counterintuitive, but we take the idea of brute force, which is to try all possibilities, do it carefully, and get to polynomial time.
There are a lot of problems where essentially the only known polynomial-time algorithm is via dynamic programming. It doesn't always work; there are some problems where we don't think polynomial-time algorithms exist, but when it's possible, DP is a good, sort of general approach to it, and we're going to talk a lot about dynamic programming. There are many different ways to think about it; we'll look at a few today. We're going to warm up today with some pretty easy problems that we already know how to solve, namely computing Fibonacci numbers, which is pretty easy, and computing shortest paths. Then in the next three lectures we'll get to more interesting examples, where it's pretty surprising that you can even solve the problem in polynomial time. Probably the first burning question on your mind, though, is why is it called dynamic programming? What does that even mean?
I used to have this spiel about how, well, you know, programming refers to the — I think it's a British notion of the word — where it's about optimization. Optimization in American English is something like programming in British English, where you want to set up the program, the schedule for your trains or something like that. I think that's where the programming terminology originally came from, but I looked up the actual history. Why is it called dynamic programming? Dynamic programming was invented by a guy named Richard Bellman, whom you may have heard of: Bellman, of the Bellman-Ford algorithm. This is actually the precursor to Bellman-Ford, and we're going to see Bellman-Ford come up naturally in this setting. So here's a quote about him. It says: Bellman explained that he invented the name dynamic programming to hide the fact that he was doing mathematical research. He was working at a place called RAND, under a Secretary of Defense who had a pathological fear and hatred of the term research. So he settled on the term dynamic programming, because it would be difficult to give it a pejorative meaning, and because it was something not even a congressman could object to. Basically, it sounded cool. So that's the origin of the name dynamic programming. Why is it called that? Who knows.
I mean, now you know, but it's a weird term; just take it as it is. It may make some kind of sense, but okay. Let's start with this example of how to compute Fibonacci numbers, and maybe before we get into that, I'll give you a sneak preview of what you might think dynamic programming is. This equation, so to speak, is going to change throughout today's lecture, and eventually it will settle on a more precise perspective. The basic idea of dynamic programming is to take a problem, split it into subproblems, solve those subproblems, and reuse the solutions to your subproblems. It's like a lesson in recycling. Okay, so we'll see that in Fibonacci numbers. So you remember Fibonacci numbers, right? The number of rabbits you have on day n, if they reproduce.
We mentioned them before, when we talked about AVL trees, I think. This is the usual recursive definition, or recurrence, of Fibonacci numbers; it's a definition of what the nth Fibonacci number is. So let's suppose our goal, the algorithmic problem, is to compute the nth Fibonacci number, and I'm going to assume that it fits in a word, and therefore basic arithmetic, addition, whatever, costs constant time per operation. So how do we do it? Everyone knows how to do it, in many ways, but I'm going to give you the dynamic programming perspective on things. This is going to seem a little obvious, but we're going to apply exactly the same principles that we'll apply over and over in dynamic programming; here it is in a very familiar setting. So we'll start with the naive recursive algorithm, and that is: if you want to compute the nth Fibonacci number, you check whether you're in the base case.
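The definition referred to here was written on the board and isn't captured in the transcript; it is the standard one:

```latex
F_1 = F_2 = 1, \qquad F_n = F_{n-1} + F_{n-2} \ \text{for } n > 2
```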
I'll write it this way: f is just our return value; you'll see why I write it this way in a moment. Then you return f. In the base case it's one; otherwise you recursively call Fibonacci of n minus one, recursively call Fibonacci of n minus two, add them together, return that. Is this a correct algorithm? Yes. Is it a good algorithm? No, it's very bad: exponential time.
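Here is a minimal Python sketch of that naive recursive algorithm (a reconstruction following the transcript, not a verbatim copy of the board code):

```python
def fib(n):
    """Naive recursive Fibonacci -- correct but exponential time."""
    if n <= 2:
        f = 1                        # base case: fib(1) = fib(2) = 1
    else:
        f = fib(n - 1) + fib(n - 2)  # two recursive calls, one addition
    return f
```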
How do we know it's exponential time, other than from experience? Well, we can write the running time as a recurrence: T(n) represents the time to compute the nth Fibonacci number. How do I write the recurrence? It's a nice throwback to the first divide-and-conquer lectures. I hear whispers... yes: T(n) = T(n-1) + T(n-2) plus a constant. To compute the nth Fibonacci number we have to compute the (n-1)st Fibonacci number and the (n-2)nd Fibonacci number — those are these two recursive calls — and otherwise we do a constant number of additions and comparisons; all of those operations take constant time. So that's the recurrence. How do we solve it? Well, one way is to notice that this is the Fibonacci recurrence, so the solution grows like Fibonacci: there's a plus-constant in there, but in particular T(n) is at least the nth Fibonacci number, and if you know your Fibonacci facts, that's the golden ratio to the nth power, which is bad. We had a similar recurrence for AVL trees. Another way to solve it is just a sanity check: say, well, T(n) is at least 2 T(n-2), because T is monotone — the bigger n is, the more work you have to do, since to do the nth thing you have to do the (n-1)st thing. So we can lower-bound the T(n-1) term by T(n-2), and now these two terms become one. Now it's easy: you see that you're multiplying by two every time you subtract two from n. How many times can I subtract two from n? n over two times, before I get down to a constant. So this is at least 2 to the n over 2 — I mean, times some constant, which is what you get in the base case. So I guess I should say Theta.
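Written out, the running-time argument of the last two paragraphs is:

```latex
T(n) = T(n-1) + T(n-2) + O(1) \;\ge\; 2\,T(n-2)
\;\implies\; T(n) = \Omega\!\bigl(2^{n/2}\bigr)
```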
So it's Theta of that; it's at least that big, and the right base of the exponent is phi, the golden ratio. Okay, so that's a bad algorithm; we all know it's a bad algorithm. But I'm going to give you a general approach for making bad algorithms like this good, and that general approach is called memoization. This is a dynamic programming technique, and I'm going to call this the memoized dynamic programming algorithm; I'll call the dictionary memo. The idea is simple: whenever we compute a Fibonacci number, we put it in a dictionary, and then when we need to compute the nth Fibonacci number, we first check: is it already in the dictionary?
Have we already solved this subproblem? If so, return that answer; otherwise, compute it. Okay, and you'll see the transformation is very simple: these two lines are identical to these two lines, so you can see how the transformation works in general. You can do this with any recursive algorithm. The memoization transformation of that algorithm: we initially make an empty dictionary called memo, and before doing any computation we check whether this version of the Fibonacci problem, compute f(n), is already in our dictionary. If that key is already in the dictionary, we return the corresponding value. And then, once we've computed the nth Fibonacci number, if the lookup didn't apply, we store it in the memo table: we say, well, if you ever need to compute f of n again, here it is; and then we return that value. Okay, so this is a general procedure that can apply to any recursive algorithm — with no side effects, I guess, technically. And it turns out this makes the algorithm efficient.
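A Python sketch of the memoized version just described (reconstructed; the dictionary is called memo, as in the lecture):

```python
memo = {}

def fib(n):
    """Memoized Fibonacci: each subproblem is computed only once."""
    if n in memo:                    # already solved this subproblem?
        return memo[n]
    if n <= 2:
        f = 1
    else:
        f = fib(n - 1) + fib(n - 2)
    memo[n] = f                      # write the answer on the memo pad
    return f
```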
Now there are many ways to see why it's efficient. In general, maybe it helps to think about the recursion tree. If you want to compute fib(n) in the naive algorithm, we compute fib(n-1) and fib(n-2) completely separately; to compute fib(n-1) we compute fib(n-2) and fib(n-3); to compute fib(n-2) we compute fib(n-3) and fib(n-4); and so on. You can see why that's exponential in n, because we're only decreasing n by one or two each time. But then you notice, hey, these two fib(n-3)s are the same; you really only have to compute it once, and that's what we're doing here: the first time you call fib(n-3) it does work, but once that's done, when you get to this other recursive call, this just gets cut off. There's no tree below it. There would have been some recursive calls here, but we don't do them, because the answer is already in the memo table. In fact, the same thing already happens with fib(n-2): this whole subtree disappears, because fib(n-2) has already been done. So it's clear why this is better. In fact you can argue that this call is going to be free, because you've already done the work over here. But I want to give you a very particular way to think about why this is efficient, which is the following. You could write a recurrence for the running time here, but in some sense recurrences aren't the right way to think about this, because recursion here is kind of a rare thing: if you call Fibonacci of some value k, you will only make recursive calls the first time you call Fibonacci of k, because from then on it's in the memo table and you won't recurse. So you can think of there being two versions of calling Fibonacci of k.
The first time, which is the non-memoized version: that call does recursion, does work. And from then on you're doing memoized calls of Fibonacci of k, and those cost constant time. So the memoized calls cost constant time, and we can think of them as basically free: when you call Fibonacci of n minus 2, because it's a memoized call, you really don't pay anything for it. I mean, you're already paying constant time to do the addition and whatever, so you don't have to worry about the time; there's no recursion happening there. Okay, and then what we care about is the number of non-memoized calls, which is the first time you call Fibonacci of k, and that's n — no theta necessary. We're going to call Fibonacci of 1 at some point, we're going to call Fibonacci of 2 at some point, and the original call is Fibonacci of n; all of those will be called at some point. That's pretty easy to see. But in particular, certainly at most this: we never call Fibonacci of n plus 1 to compute Fibonacci of n. So there are at most n calls; in fact there will be exactly n non-memoized calls, the ones we have to pay for. How much do we have to pay?
Well, if you don't count the recursion — and that's the point of this setup — if you ignore recursion, then the total amount of work done here is constant. So I'll say the non-recursive work per call is constant, and therefore I claim that the running time is constant... sorry, linear. Constant would be pretty amazing. This is actually not even the best algorithm; as an aside, the best algorithm for computing the nth Fibonacci number uses only log n arithmetic operations, so you can do better. But if you want to see that, you should take 6.046. Okay, today we're getting to linear, which is a lot better than exponential.
So why linear? Because there are n non-memoized calls, and each of them costs constant time; it's the product of those two numbers. Okay, this is an important idea, and it's so important I'm going to write it down again in a slightly more general framework for dynamic programming in general. I also didn't say why it's called memoization. The idea is that you have this memo pad where you write down all your scratch work — that's this memo dictionary — and memoizing means writing things down on your memo pad. I didn't make it up; it's another crazy term. It means remember. And then you remember all the solutions you've computed, and you reuse those solutions. Now these solutions are not really solutions to the problem I care about; the problem I care about is computing the nth Fibonacci number. To get there, I had to compute other Fibonacci numbers. Why? Because I had a recursive formulation. This is not always the way to solve a problem, but usually when you're solving something, you can split it into subproblems, as we call them. They're not always of the same flavor as your original goal problem, but there's some kind of related parts, and this is the big challenge in designing a dynamic program: figuring out what the subproblems are. The first thing I always want to know about a dynamic program is: what are the subproblems? Somehow, they are designed to let you solve your actual problem. And the idea of memoization is: once you solve a subproblem, write down the answer; if you ever need to solve that same subproblem again, reuse the answer. So that's the core idea, and in this sense dynamic programming is essentially recursion plus memoization. So in this case, the subproblems are Fibonacci of 1 through Fibonacci of n. The one we care about is Fibonacci of n, but to get there we solve these other subproblems. In all cases, if this is the situation — for any dynamic program — the running time is going to be equal to the number of different subproblems you might have to solve, times the amount of time you spend per subproblem.
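As a formula, the rule of thumb just stated is:

```latex
\text{time} \;=\; \#\text{subproblems} \;\times\; \text{time per subproblem}
\qquad (\text{counting each recursive call as } O(1))
```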
Okay, in this situation we had n subproblems, and for each of them we spent constant time. And when I measure the time per subproblem — which in the Fibonacci case I claim is constant — I ignore the recursive calls. That's the key: we don't have to solve recurrences with dynamic programming. Yay, no recurrences needed. Okay, so I don't count recursions. Obviously I don't count memoized recursions; and the reason I don't count any recursion is that I only need to account for each subproblem once, because after the first time I solve it, it's free. So I count how many different subproblems I need to solve.
Those are the expensive calls, the ones where I do some amount of work, but I don't count the recursions inside them, because otherwise I'd be double-counting. I only want to count each subproblem once, and then this formula takes care of everything. So it's a simple idea. In general, dynamic programming is a super simple idea; it's nothing fancy, it's basically just memoization. There's one extra trick we'll throw in, but that's the idea. Okay, let me give you another perspective. This is the one that's probably taught most commonly. I'm not a particular fan of it; I really like memoization, I think it's a simple idea, and as long as you remember this formula here, it's very easy to work with. But some people like to think about it this way instead, so you can pick whichever way
seems more intuitive to you. Instead of thinking of a recursive algorithm, which in some sense starts at the top of what you want to solve and works its way down, you can do the reverse: you can start at the bottom and work your way up, and this is probably how you normally think about computing Fibonacci numbers, or how you learned it before. I'm going to write it in a slightly funny way. The point I want to make is that the transformation I'm doing — from the naive recursive algorithm, to the memoized algorithm, to the bottom-up algorithm — is completely automatic.
I don't really have to think to do it; it's easy. This code is exactly the same as this code and as that code, except I've replaced n by k, just because I need a couple of different values here — I want to iterate over the values up to n — and then there's some stuff wrapped around that code, which is just the formula. There's a little bit of thought that goes into this for loop, but that's it. This does exactly the same thing as the memoized algorithm. It might take a little thinking to realize it, but if you unroll all the recursion that happens over here and write out the order in which things actually get computed, this is exactly what's happening. This code does exactly the same additions, exactly the same computations, as this one; the only difference is how you get there. Here we're using a loop; here we're using recursion. But the same things happen in the same order.
There's really no difference between the codes. This code is probably going to be more efficient in practice, because you don't make as many function calls... hold on, I made a small mistake there: this isn't a function call, it's just a lookup into a table. Here I'm using a hash table to keep things simple, but of course you could use an array; either way, they're both constant time with a good hash function. Okay, so is it clear what this is doing? I think I made a little typo: we need to compute f1 up to fn, which in Python is that range, and we compute them exactly as before, except that now, instead of recursing, I know that when I'm computing the kth Fibonacci number, I've already computed the previous two.
Why? Because I'm computing them in increasing order. Nothing fancy. So I can just do this, and the solutions will be waiting there. If they weren't, I'd get a KeyError and I'd know there was a bug; but in fact I won't get a KeyError here. I will have always already computed these things, stored them in my table, and then I iterate. At the end I've solved all the subproblems, f1 through fn, and the one I cared about was the nth one. Okay, so it's very simple.
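A Python sketch of the bottom-up version (a reconstruction; the dictionary `fib` stands in for the table being filled in increasing order of k):

```python
def fib_bottom_up(n):
    """Bottom-up Fibonacci: the same additions as the memoized
    version, in the same order, driven by a loop instead of recursion."""
    fib = {}
    for k in range(1, n + 1):
        if k <= 2:
            f = 1
        else:
            f = fib[k - 1] + fib[k - 2]  # already in the table: increasing order
        fib[k] = f
    return fib[n]
```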
I really don't want to go through this transformation for every problem we do; I'm doing it for Fibonacci because it's so easy to write the code explicitly, but you can do it for all the dynamic programs we'll cover in the next four lectures. Okay, so now let me give you the general case; this was the Fibonacci-specific version. In general, the bottom-up version does exactly the same computation as the memoized version, and what we're really doing is a topological sort of the subproblem dependency DAG. In this case the dependency DAG is very simple. Let me draw it: to compute fn,
I need to know fn minus one and fn minus two; if I know those, I can compute fn. Then there's fn minus three, which is needed to compute these two, and so on. So you can see what this DAG looks like. I've drawn it conveniently so that all the edges go from left to right, so this is a topological order, left to right, and I just need to do f1, f2, up through fn, in order. It's usually totally obvious what order to solve the subproblems in; in general, what you should have in mind is that we're doing a topological sort here. We just did it in our heads, because it's so easy — usually it's very easy, just a for loop, nothing fancy. Oh, I'm missing an arrow; there. All right, let's do something a little more interesting, shall we?
One thing we can do from this bottom-up perspective is save storage space in the algorithm. We don't usually worry about space in this class, but it does matter. Here we're building a table of size n, but in fact we only ever need to remember the last two values. So you could just store the last two values, and each time you compute a new one, throw out the oldest. If you think about it a little, you realize you only need constant space: still linear time, but constant space. And that's often the case: from the bottom-up perspective, you can see what you really need to store, what you need to keep track of.
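A sketch of that constant-space refinement (my reconstruction, not board code):

```python
def fib_two_vars(n):
    """Bottom-up Fibonacci remembering only the last two values:
    still linear time, but constant space."""
    a, b = 1, 1             # fib(1), fib(2)
    for _ in range(n - 2):
        a, b = b, a + b     # compute the next value, drop the oldest
    return a if n == 1 else b
```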
I guess another nice thing about this perspective is that the running time is totally obvious: this is clearly constant time, so this is clearly linear time. Whereas with the memoized algorithm you have to think about when it's memoized and when it's not. I still like that perspective, because with this rule you just multiply the number of subproblems by the time per subproblem and you get the answer; but it's a little less obvious than code like this. So, you know, pick whichever way you like to think about it. Okay, now on to shortest paths. As usual, I'm thinking about single-source shortest paths: we want to compute delta(s, v), the weight of the shortest path from s to v, for all v. Okay, I'd like to write this, initially, as a naive recursive algorithm, which I can then memoize, and which I can then bottom-up-ify.
I just made that word up. So how could I write this as a naive recursive algorithm? It's not so obvious. But first — I could just tell you how, as if, you know, an oracle told you: here's what you should do; and then we'd step back and think about how we would have gotten there. Or we could just figure out the answer. Preferences? Figure it out. Okay, no divine inspiration allowed. So let me give you a tool. The tool is: guessing. This may sound silly, but it's a very powerful tool.
The general idea is: suppose you don't know something, but you'd like to know it. What's the answer to this question? I don't know. How am I going to answer the question? Guess. Okay? It's a tried and tested method for solving any problem. I'm hamming it up a bit here; the algorithmic concept is: don't just try any guess, try them all. Okay, also pretty simple — I said dynamic programming was simple. Try all the guesses. This is central to dynamic programming. I know it sounds obvious, but if I want to fix my equation here: dynamic programming is approximately recursion plus memoization plus guessing. Memoization, which is obvious; guessing, which is obvious: those are the central concepts of dynamic programming. I'm trying to make it sound easy, because usually people have trouble with dynamic programming; but it is easy. Try all the guesses — that's something a computer can do great. This is the brute-force part, but we're going to do it carefully. Well, not that carefully; I mean, we're just trying all the guesses, and we take the best one. It's important that we can choose one to be called best; that's why dynamic programming is good for optimization problems, where you want to maximize something or
minimize something: you try them all, and then you can forget about all of them and reduce down to one thing, which is the best one, or a best one. Okay, so now I want you to try to apply this principle to shortest paths. I'm going to draw a picture, which may help. We have the source s, we have some vertex v, and we'd like to find the shortest path from s to v. Okay, suppose I want to know what this shortest path is; suppose this was it. You have an idea already? Yeah: well, I could look at all the places I could go from s, and then look at the shortest paths from there to v; so we could call this, I don't know, s prime. So here's the idea: there's some hypothetical shortest path. I don't know where it goes first, so I'll guess where it goes first.
I know the first edge must be one of the outgoing edges of s. I don't know which one: try them all. Very simple idea. For each of them, you could somehow compute the shortest path from there to v; just do that, and take the best choice for what that first edge was. So this would be the guess-the-first-edge approach. It's a perfectly good idea; it's not quite the one I want, because unfortunately the source changes, and so, while this would work, it would be a little less efficient given that I'm solving single-source shortest paths. So I'm going to tweak that idea slightly and guess the last edge instead of the first edge. They're really equivalent; if I did it the first way, I'd basically be solving single-target shortest paths, which we talked about before. So I'm going to draw the same picture. I want to get to v.
I'm going to guess the last edge; call it (u, v). I know it's one of the incoming edges of v — unless s equals v, in which case there's a special case. As long as this path has length at least one, there is a last edge. What is it? I don't know. Guess. Try all possible incoming edges of v, and then recursively compute the shortest path from s to u, and then add on the edge (u, v). Okay, so what is the shortest path from s to u? It's delta of s comma u: it looks the same. It's another subproblem that I want to solve.
There are V subproblems here that I care about. So I take that, I add on the weight of the edge (u, v), and that should hopefully give me delta of s comma v — well, only if I was lucky and u happened to be the right choice. In reality I'm not lucky, so I have to minimize over all edges (u, v). So this is what we're minimizing over: the choice of u; v is already given here. I take the min, over all incoming edges, of the shortest path from s to u plus the weight of the edge (u, v). That should give me the shortest path, because this part gives me the shortest path from s to u, and then I add on the edge I need to get to v. Whatever the shortest path is, it uses some last edge (u, v), so there has to be some choice of u that's the right one — that's the good guess that we're hoping for. We don't know which one the good guess is, so we just try them all; but whichever one it is, this will be the weight of the path. It takes the best path from s to u, because subpaths of shortest paths are shortest paths — optimal substructure. So this part will be delta of s comma u, this part is obviously w of (u, v), and so this will give the right answer, we hope. It certainly will.
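Written as a formula, the recurrence just derived is:

```latex
\delta(s, v) \;=\; \min_{(u,v) \in E} \bigl( \delta(s, u) + w(u, v) \bigr),
\qquad \delta(s, s) = 0
```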
I mean, this is the analogue of the naive recursive algorithm for Fibonacci, so it's not going to be efficient. I mean, it is an algorithm, right? You could treat this as a recursive call instead of just a definition; then it's a recursive algorithm. How good or bad is this recursive algorithm? Terrible — very bad, I should say. It's definitely going to be exponential without memoization. But we know how to make algorithms better: we memoize. So I claim you know how to write this as a memoized algorithm: to define the function delta(s, v), first check whether (s, v) is in the memo table; if so, return that value; otherwise, do this computation — where this is a recursive call — and then store the result in the memo table. I don't think I need to write that down; it's just like the memoization code over there, only now there are two arguments instead of one. In fact, s isn't changing, so I only need to store the result indexed by v, instead of by (s, v). So, is this a good algorithm? I claim memoization makes everything faster; is this a fast algorithm? Not so obvious, I guess. How many people think, yes, this is a good algorithm? How many people think it's a bad algorithm? It can't be worse, definitely better... so, three for yes, zero for no. How many people aren't sure, including the yes votes? Good. All right, it's not so tricky. Let me draw you a graph. Suppose we want to compute delta(s, v); let me give these guys names, a and b. Oh, sorry, I should have put a base case here too: delta of s comma s equals zero, let's say. To compute delta(s, v) we look at the incoming edges of v — there's only one — so we need delta of s comma a, plus the weight of that edge. Okay, to compute the shortest path to a, we look at all the incoming edges of a. There's only one, so we need delta of s comma b. Now I want to compute the shortest path to b. Well, there are two ways to get to b: one of them is delta of s comma s, coming from s; the other way is delta of s comma v. Do you see a problem? Yeah: delta of s comma v is what we were trying to figure out. Now, you could say, oh, that's okay, because we're going to memoize our answer to delta of s comma v and then we can reuse it here. Except that we haven't finished computing delta of s comma v; we can only put it in the memo table once we're done. So when this recursive call happens, the memo table has not been set, and we're going to do the same thing over and over and over. This is an infinite algorithm. Oops. Not so good. So it's going to take infinite time on graphs with cycles. For DAGs, for acyclic graphs, it actually runs in V plus E time; that's the good case. In this situation we can use the formula: time equals the number of subproblems times the time per subproblem. So I guess we have to think about that a little bit. Where's my code? Here's my code. The number of subproblems is V: I'm always reusing subproblems of the form delta(s, something), and the something could be any of the V vertices.
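A Python sketch of this memoized recursion (a reconstruction under an assumed input format: `incoming[v]` is a list of (u, w) pairs, one per incoming edge of v; as just argued, it only terminates on DAGs):

```python
import math

def make_delta(incoming, s):
    """Memoized single-source shortest paths by guessing the last edge.
    Terminates on DAGs only: a cycle makes delta call itself before
    its memo entry has been written."""
    memo = {s: 0}                          # base case: delta(s, s) = 0

    def delta(v):
        if v in memo:                      # subproblem already solved?
            return memo[v]
        # guess the last edge (u, v): try all incoming edges, take the best
        memo[v] = min((delta(u) + w for (u, w) in incoming.get(v, [])),
                      default=math.inf)
        return memo[v]

    return delta
```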
How much time do I spend per subproblem? That's a little trickier: it's the number of edges incoming to v. So the time for the subproblem delta(s, v) is the in-degree of v, the number of incoming edges of v. This depends on v, so I can't just take a simple product here. What this really means is that you sum, over all subproblems, the time per subproblem. So the total time is the sum over all v in V of the in-degree of v, and we know that's E, the number of edges. Okay, actually it's the in-degree plus one for each vertex, so this is E plus V. Handshaking lemma again. Okay, so we already knew an algorithm for shortest paths in DAGs that ran in V plus E time; this is another way to do the same thing. If you think about it enough, this memoized algorithm is essentially doing a depth-first search to do a topological sort, and running one round of Bellman-Ford. So we had topological sort plus one round of Bellman-Ford; this is kind of all in one. This min should be seen as the Bellman-Ford relaxation step, or the shortest-paths relaxation step.
This min is actually doing the same thing, so it's really the same algorithm, but we're coming at it from a different perspective. Okay, but I claim I can use this same approach to solve shortest paths in general graphs, even when they have cycles. How am I going to make a general graph look like a DAG? Well, what was the lesson learned here? The lesson is that subproblem dependencies should be acyclic; otherwise we get an infinite algorithm. For memoization to work, that's what you need; it's all you need. Okay, we've almost seen this already, because I said that to do a bottom-up algorithm you do a topological sort of the subproblem dependency DAG; I already implied it should be acyclic. Okay, we just sort of forgot — I hadn't told you yet. So for this to work, for DP to work, for memoization to work, it had better be acyclic. If it's acyclic, then this is the runningning time. So this is all general. Okay, so somehow I need to take a cyclic graph and make it acyclic.
In fact, we've already done this in recitation. So if I have a graph — let's take a very simple cyclic graph — one thing I can do is explode it into multiple layers. You did this on quiz two in various forms; it's like the only interesting thing you can do with shortest paths. If you want to make a shortest-paths problem harder, you require the path to go through k copies of the graph. Okay, I'm going to do it in a particular way here, which I think you've seen in recitation, which is to think of this axis as time, or however you want, and make all the edges go from one layer to the next layer. This should be a familiar technique. The idea is that every time I follow an edge, I go down to the next layer, and this makes any graph acyclic.
Done. What the heck does this mean? What does double rainbow mean? Okay, I don't know how I've gone this long in the semester without referring to double rainbow; it used to be my favorite. Okay, so here's what it means: delta sub k of (s, v) — I'll define this first. This is a new kind of subproblem, which is: what is the weight of the shortest path from s to v that uses at most k edges? So I want it to be shortest in terms of total weight, but I also want it to use at most k edges total. Here's s, and I'm always going to use this copy of s, at the top layer; then this is going to be v in the zero-edge situation, this is v in the one-edge situation, and so on; so maybe I'll call these v sub 0, v sub 1, v sub 2. If I look at this v, the shortest path from s to it is delta sub 0 of (s, v): well, there's no way to get there with zero edges. The shortest path from here to this one:
this is the best way to get there with at most one edge. And the shortest path from here to here — well, if I also add in some vertical edges, I guess, cheating a little — this is the best way to get from s to v using at most two edges. And then you get a recurrence, which is the min over the last edge, just like before. I'm just copying that recurrence, but realizing that the s-to-u part uses one fewer edge, and then I use the edge (u, v). Okay, so that's our new recurrence. By adding this parameter k, I've made the subproblem dependencies acyclic. Unfortunately, I've increased the number of subproblems: the number of subproblems is now V squared. Technically, V times V minus one... sorry, really V squared; let me get it right. k starts at zero, and what I care about, my goal, is delta sub V minus 1 of (s, v), because from the Bellman-Ford analysis I know that I only care about simple paths, paths of length at most V minus 1.
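The new recurrence, written out (the vertical edges correspond to the first term, which allows using fewer than k edges):

```latex
\delta_k(s, v) = \min\Bigl( \delta_{k-1}(s, v),\;
\min_{(u,v) \in E} \bigl( \delta_{k-1}(s, u) + w(u, v) \bigr) \Bigr),
\qquad \delta_0(s, s) = 0,\quad \delta_0(s, v) = \infty \ \text{for } v \ne s
```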
I should have said that before: I'm assuming there are no negative-weight cycles. If you assume that, then this is what I care about. So k ranges over 0 to V minus 1: there are V choices for k, and V choices for v, so the number of subproblems is V squared. How much time do I spend per subproblem? Well, just like before: the in-degree. Where did I write it... here: the in-degree of v. So what I'm really doing is summing the in-degrees over all v, which gives E, and then multiplying by V, for the V choices of k. So the total running time is V times E. Very familiar: this is the Bellman-Ford algorithm again, and this is where the Bellman-Ford algorithm comes from.
It comes from this view of dynamic programming. So now we've seen yet another way to do Bellman-Ford. It may seem familiar, but in the next three lectures we're going to see a lot of problems that can succumb to the same approach, and that's great.
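A Python sketch of this DP, which is Bellman-Ford in disguise (a reconstruction under an assumed input format: `vertices` is a list of vertex names and `edges` a list of (u, v, w) triples; not the lecture's code):

```python
import math

def bellman_ford_dp(vertices, edges, s):
    """delta[v] after round k = weight of shortest s-to-v path
    using at most k edges.  V-1 rounds of E relaxations: O(VE) time."""
    incoming = {v: [] for v in vertices}
    for (u, v, w) in edges:
        incoming[v].append((u, w))

    delta = {v: math.inf for v in vertices}
    delta[s] = 0                            # delta_0: only s is reachable
    for _ in range(len(vertices) - 1):      # k = 1 .. V-1 covers all simple paths
        prev = delta
        # vertical edge (reuse the at-most-(k-1) answer) or guess the last edge
        delta = {v: min([prev[v]] + [prev[u] + w for (u, w) in incoming[v]])
                 for v in vertices}
    return delta
```

For example, `bellman_ford_dp(['s', 'a', 'b'], [('s', 'a', 1), ('a', 'b', 2)], 's')` returns the shortest-path weights from s, assuming no negative-weight cycles.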
