
LangGraph 101: it's better than LangChain

May 07, 2024
Today we're going to take a look at LangGraph. LangGraph is another library within the LangChain ecosystem, and what it allows us to do is create very custom and flexible agents, which I think are the near-term future of AI. You can create agents with the core LangChain library, and I've talked a lot about that before, but after experimenting with both LangChain and LangGraph, I'm still forming an opinion, although I think LangGraph is simply the more powerful solution for building agents. Let's take a quick look at the general concept behind LangGraph. As you may have guessed from the name, LangGraph puts a big emphasis on graphs. LangChain thought of agents as objects that you attach some tools to, insert some prompts into, and so on.
LangGraph instead thinks of an agent as more of a graph. It has an initial starting point, which could be any kind of executable function: it could be an agent, it could be an LLM, or it could be some other executable function. From there, instead of just ending with that agent, we can control what happens next. Below it we could have something like a search tool so we can do RAG: we go down to our search tool, it gives us information based on what the agent or chain has decided to do, and from there we can continue.
We could end up at another LLM (most likely it is an LLM), and that LLM may have a prompt where it consumes the context you got from your search tool along with the initial query from your user; it consumes both, generates a response, and that response goes to the end node. This is a very simple example, but looking at it I want to point out a few things: each circle here is a node, each arrow is an edge, and everything in our graph basically consists of nodes and edges in some combination.
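To make that concrete, here is a minimal sketch of how a linear graph like this might be wired up with LangGraph's StateGraph API. The node names and the trivial state and functions are my own placeholders, not the ones from the video:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    input: str
    output: str

def agent(state: State):
    # decide what to do with the user's query
    return {"output": f"plan for: {state['input']}"}

def search(state: State):
    # fetch some context based on the agent's decision
    return {"output": state["output"] + " + retrieved context"}

def llm(state: State):
    # consume the context and the query, produce the final response
    return {"output": state["output"] + " -> final response"}

graph = StateGraph(State)
graph.add_node("agent", agent)
graph.add_node("search", search)
graph.add_node("llm", llm)
graph.set_entry_point("agent")      # the starting point
graph.add_edge("agent", "search")   # each arrow in the diagram is an edge
graph.add_edge("search", "llm")
graph.add_edge("llm", END)          # the end node

runnable = graph.compile()
print(runnable.invoke({"input": "what is AI?"}))
```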
Now, that was an incredibly simple agent, so let's look at something a little more realistic; I'll still keep it simple, but a little more realistic. We start with an agent up here, and this agent has some tools attached, but it's going to use them via function calling, so it doesn't actually run a tool itself; it just generates the input for whichever tool we should use. So I'm going to attach to this agent the schema for a final answer, because I'm going to create a slightly different final response than usual.
I'm going to have a final answer where you get the usual result from your agent plus a citation, so it's a JSON output in a particular format, and we want to enforce that. I'm also going to attach the search tool from before. Now, this agent will generate which of these tools we should use and the parameters we should pass to it. Because there are two tools here, we have two alternative paths we could follow, and this is another component of LangGraph, called a conditional edge. A conditional edge is what it sounds like: an edge, or a couple of edges, that is conditional on some condition being met. The condition here is whether our agent has decided to go to the search tool or to the final answer tool. To implement a conditional edge we need something called a router, and that router is basically an if/else statement: based on it we go one of two ways, either to our search node here or to our final answer, which then goes to the end node.
With the search node, we perform a search and return a response. What will that response look like? Well, we generate some context: the result is some text we've pulled from somewhere, not yet formatted into a natural language response. So we need to pass it through another LLM (we'll say an LLM rather than an agent, because an agent makes sense when we have multiple tools; in this case there is only one tool we want it to use, the final answer tool). The only thing the final answer tool does is create that output format for us, so we say: okay, you need to provide an answer and a citation, and from there we have our final answer, which goes to the end node.
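As a sketch of the idea (the state key and node names here are placeholders of mine, not from the video):

```python
def router(state: dict) -> str:
    # a router is essentially an if/else statement that returns
    # the name of the next node to run
    if state["agent_out"] == "search":
        return "search"
    return "final_answer"

# it gets attached to the graph as a conditional edge, roughly:
# graph.add_conditional_edges("query_agent", router)
```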
Again, this is a very simple graph; there's nothing particularly complex about it, but it's much more flexible and much easier to work with than, at least, what I'm used to in LangChain. I haven't been using it for long, so I think there's still a lot of potential to uncover in LangGraph, but when it comes to building agents, so far this is by far my preferred library. Now, with the overview done, let's take a look at how we implement all of this in code, so let's get into this notebook.
There will be a link in both the description and the comments so that you can open this same notebook and follow along. There are a few extra things we need to do here, and the first is to install pygraphviz. pygraphviz is not required to use LangGraph; it's only required to display the graphs you build with it. So if you're just developing with this library you don't need it, but for this example, to walk you through things, it's really nice to be able to visualize what you're building. We also have some libraries we need here; they're all from the LangChain ecosystem, and we're going to use OpenAI.
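The setup step would be something along these lines; the exact package list is my guess at what the notebook installs:

```python
# pygraphviz is only needed for drawing the compiled graph,
# not for using LangGraph itself (it also needs graphviz system libraries)
!pip install -qU langgraph langchain langchain-openai langchainhub pygraphviz
```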
We initialize that and move on to the state of our graph. There are several ways to do this: you can create your own graph state, which is what we're doing here, or you can use a built-in one; I believe there are message-based states built into the library that you can use. It depends on what you want to do; I prefer this method because you define exactly what's in the state and it's easier to understand.
This has been inspired by a very good video from Sam on the same topic of LangGraph; he made a very good introduction, and I'd actually recommend that as well. What we're doing here is defining this agent state, and as we go through each node of the graph, this agent state travels with us, so any new information, for example whatever we retrieve from our search tool, gets stored here. You'll recognize the intermediate steps if you've used LangChain agents before; they're very typical: between the user asking a question and the result they get, there can be multiple steps, as we saw in the graph, and their information goes here. Another thing we have is agent_out, which is the output of an agent; nothing sophisticated there. Then there's input, which is the user's input, and we could also have a chat history here, although I'm leaving it out to keep things as simple as possible.
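A minimal sketch of what such a state definition might look like; the field names follow the video's description, while the exact types are my assumptions:

```python
import operator
from typing import Annotated, TypedDict, Union
from langchain_core.agents import AgentAction, AgentFinish

class AgentState(TypedDict):
    input: str  # the user's query
    agent_out: Union[AgentAction, AgentFinish, None]  # the agent's latest output
    # Annotated with operator.add so each step appends rather than overwrites
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]
    # a chat_history field could live here too; omitted to keep things simple
```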
With that said, let's build what I think is a great agent. I'm also working on a much bigger LangGraph agent at the moment, which will be interesting and much more complex, and I think it will show a bit more of what this library can be used for, but this is a good introduction. The first thing I want to do for the graph is define the different nodes. The two tool nodes I want to create are the search tool and the final answer tool. Normally we would implement a full RAG pipeline here, but I'm just going to emulate that. This is the information we're going to "retrieve" from our emulated RAG pipeline; it comes from a paper, and we have the title of that paper, the abstract, the authors, and the source, so it will be up to our final answer LLM, or the initial agent, to decide how to construct the citation when it gets this information, and of course the answer as well. So that's the information for our emulated search tool, and now we define the search tool itself. When we define a tool, we use this @tool decorator, and here we give the name of the tool we're passing in.
Then we have what the tool takes as input: here the schema is just a single query parameter, and we give a description. That description is partly for us, but it's really for the LLM: our agent will read it and decide which tool to use, and how to use it, based on what we put there. This is where you would put your actual RAG logic, but I'm just going to return the emulated content. Then we also have that final answer tool. The final answer tool doesn't actually do anything; as you can see here, it just returns an empty string. The reason it doesn't do anything is that I don't need it to do anything: the only thing I'm using it for is to tell my LLM, or agent, to use this structure when generating a final response. When an agent uses a tool, all it does is generate what the input to that tool should be; it doesn't actually run the tool.
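A sketch of what those two tool definitions might look like; the docstrings and field names are my paraphrase of what's described, and the document contents are elided:

```python
from langchain_core.tools import tool

@tool("search")
def search_tool(query: str):
    """Searches for information on the topic of artificial intelligence.
    Cannot be used to research any other topics."""
    # emulated RAG: always return the same hardcoded "retrieved" document
    return "Title: ...\nAbstract: ...\nAuthors: ...\nSource: ..."

@tool("final_answer")
def final_answer_tool(answer: str, source: str):
    """Returns a natural language response to the user in `answer`, and a
    `source` citing where the information came from."""
    # intentionally does nothing: it only exists to force the output structure
    return ""
```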
So that's all we need: we just need this format. We give a short description explaining what we want in each of the two fields, and the LLM will produce that for us, giving us the final answer in the structure we need. We then convert both tools to the OpenAI tools format, which is the latest version of OpenAI's function calling, and you can see how that works here. If we look at the converted search tool, this is the information we pass to our LLM: it says this is the search tool, to use it you need to output the name "search", then we have the description, the function, and the inputs to that function, and it says it searches for information on the topic of AI and so on, which is exactly what I wrote in the decorator. So we have that; now let's go down a little further and initialize our first agent.
For that first agent I'm actually going to use LangChain itself to implement a very simple, typical OpenAI tools agent. All it has is a prompt pulled from the LangChain Hub, our LLM, which is an OpenAI GPT-3.5 chat model (I'm actually not sure what the default is now; I assume it's still gpt-3.5-turbo), and our tools, meaning the final answer tool and the search tool. You'll need to pass your OpenAI API key here, and then you can run it. We can test it very quickly to confirm it works, so I'm going to ask it "what are EHI embeddings?". We run that, and you can see the result here: the agent action for the search tool, and this is what we'll use in our router for those conditional edges later on. So we look at the tool element there, then take the input of that tool as keyword arguments to our function, so query, and into that query we pass "EHI embeddings". Of course, we're just emulating RAG here, so it won't do anything other than return that fixed text to us, but that's all we'd really need. I'm also just showing you here what we would actually be passing along: we take the function, we have our arguments, and we have the name.
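A hedged sketch of that agent setup; the hub prompt name and the model are my assumptions about what's being used:

```python
from langchain import hub
from langchain.agents import create_openai_tools_agent
from langchain_openai import ChatOpenAI

# a prompt for OpenAI tools agents, pulled from the LangChain Hub
prompt = hub.pull("hwchase17/openai-tools-agent")
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0, api_key="YOUR_API_KEY")

query_agent_runnable = create_openai_tools_agent(
    llm=llm,
    tools=[final_answer_tool, search_tool],
    prompt=prompt,
)
```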
Next we define the nodes for our graph. We've defined the tools and the agent; now we just need to define the functions that will be executed as nodes within the graph. We have run_query_agent, which consumes our state and runs our query agent, the one defined above that makes the decision between the final answer and the search. Then we have a function that runs our search, so this is the function for RAG, which, obviously, is emulated again. And we have our router, which is what we use with our conditional edge to decide which direction to go. That covers the first component of our graph, basically all of this.
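As a rough sketch, those node functions and the router might look something like this, given the state definition above. This is a hedged illustration: the exact shape of the agent output, and how the tool arguments get extracted from it, will differ in practice.

```python
def run_query_agent(state: AgentState):
    # run the decision-making agent on the current state
    agent_out = query_agent_runnable.invoke(state)
    return {"agent_out": agent_out}

def execute_search(state: AgentState):
    # take the arguments the agent generated and run the (emulated) search
    action = state["agent_out"][-1]
    out = search_tool.invoke(action.tool_input)
    return {"intermediate_steps": [(action, str(out))]}

def router(state: AgentState):
    # return the name of the tool the agent chose, or "error" if it
    # answered directly without making any tool call
    if isinstance(state["agent_out"], list) and state["agent_out"]:
        return state["agent_out"][-1].tool  # "search" or "final_answer"
    return "error"
```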
But we also have these other connections to cover: we've defined the search tool, but we still need to define the final answer LLM. Something quite useful we can do is create our LLM, bind a tool to it (the final answer tool), and then tell the LLM that it must call that tool. This just helps reduce the chance of it hallucinating and doing anything other than using the tool, because we want to force it to generate the structure we'd like. I'll show you how: here we have our llm, the one we defined earlier (it's just our chat LLM), and we bind tools to it; we bind just that one final answer tool, and then we set the tool it has to use, saying it must use the final answer tool. Then we define a function to handle that, our RAG final answer. We take the user input from our state, and we also take the context from the intermediate steps, and feed both into a very simple prompt that just contains the context and a question. We invoke our prompt, so we run the LLM, and it generates a function call to our final answer tool, which we then return. The final piece we'll add, which I didn't visualize earlier, is error handling. The issue with the current query agent is that it's not required to make a function call, so sometimes, basically when you say something short and conversational like "hi" or "hello", the agent will want to respond directly without using any tools. You could argue this is very rare, but it will still happen occasionally. To handle it, we create another function, which will also use this final answer LLM: the router will look at the output of our agent, expecting to see the search or final answer tool being used, and if we don't see either of those tools being used, we assume there's an error and force the use of that final answer. It's pretty simple, so that's what we do.
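Sketched out, that forced tool call and the two functions that use it could look roughly like this; the prompt wording and state keys are my assumptions:

```python
# bind the final_answer tool and force the LLM to call it every time
final_answer_llm = llm.bind_tools([final_answer_tool], tool_choice="final_answer")

def rag_final_answer(state: AgentState):
    # combine the retrieved context with the user's query and force a
    # structured final_answer tool call
    query = state["input"]
    context = state["intermediate_steps"][-1][1]
    out = final_answer_llm.invoke(f"CONTEXT: {context}\nQUESTION: {query}")
    return {"agent_out": out}

def handle_error(state: AgentState):
    # fallback for when the agent made no tool call at all
    out = final_answer_llm.invoke(f"QUESTION: {state['input']}")
    return {"agent_out": out}
```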
Now that we've built all those nodes and put all that logic in, it's time to put everything together and build the graph. We initialize our graph first with this StateGraph object, passing it the agent state we defined above. Then we add our nodes: the query agent, the search tool, the error handling tool, and our final answer, which basically acts as a formatter, taking our original user query and the output of the RAG tool, putting them together, and producing a final response for us. We also define where in the graph we start, which is with our query agent, so we set that as the entry point, and we run it. Then we have to define our edges. To define an edge we use add_edge (I think it's literally called add_edge), and we just say where we're coming from, which would be our X here, and where we're going to. Because we've defined the nodes in our graph using strings,
we also refer to most of them as those strings: the agent, for example, which we defined as "query_agent", is a string, and that's exactly what we'd put in as X here. The one exception is always our end node: END is a special constant we import from LangGraph rather than a string we made up ourselves, so we pass in the actual END object, which we'll see in a moment. Here you can see some examples of what I was talking about: we add an edge between our search node and our RAG final answer node, and then another between the RAG final answer node and the end node. I have some repetition here, so let's remove that. Okay, we import that END node and add our edges, which is what I'm doing here. These are the individual edges that will be taken: if you go to the search node, you then go to the RAG final answer node; if you go to the error node, you go to the end node; and the same again here. The other thing we have is the conditional edge, which I mentioned before: with a conditional edge you can go in different directions depending on a particular condition. The starting point for it is the query agent, which passes to our router, and the router decides which direction we go: it outputs a string, which will be "search", "error", or "final_answer", and based on that output we go to the search node, the error node, or the end node.
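Put together, the graph assembly might look roughly like this; the node names are my labels based on the walkthrough:

```python
from langgraph.graph import StateGraph, END

graph = StateGraph(AgentState)

graph.add_node("query_agent", run_query_agent)
graph.add_node("search", execute_search)
graph.add_node("error", handle_error)
graph.add_node("rag_final_answer", rag_final_answer)

graph.set_entry_point("query_agent")

# the conditional edge: the router's return value picks the next node
graph.add_conditional_edges(
    "query_agent",
    router,
    {"search": "search", "error": "error", "final_answer": END},
)

# the plain edges
graph.add_edge("search", "rag_final_answer")
graph.add_edge("rag_final_answer", END)
graph.add_edge("error", END)
```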
Once we've done all that, we can compile our graph, so we run that. And then, with pygraphviz, which we installed earlier, we can visualize what we've created. You can see our starting point, or entry point, which is just the query agent. From there we go to our router, which decides where we should go next: if there's an error, we go to the error node, which will force a final answer in that structured output; otherwise we go to our final answer tool or to the search tool. If we go to the search tool, we do our emulated RAG on that node, then pass the state, which includes both the retrieved context and our original query, to the final answer LLM, and then we're done.
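Compiling, drawing, and invoking the graph might look something like this. A hedged sketch: draw_png needs the pygraphviz install from earlier, and the exact input keys depend on how you defined the state.

```python
from IPython.display import Image

runnable = graph.compile()

# render the graph; this is the part that needs pygraphviz
Image(runnable.get_graph().draw_png())

# run the agent end to end
out = runnable.invoke({"input": "what is AI?", "intermediate_steps": []})
print(out["agent_out"])
```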
It's pretty simple. Now let's see how well it works. We can see the path it decides to take here: we ask "what is AI?", and since our search tool's description says it should be used when someone asks about AI, we hit our entry point, the query agent, which always passes to the router, and the router decides that we should use the search tool. It decides that based purely on the output of our query agent: all it does is read the query agent's output and then decide which direction to go. Then we run our search, then we go to our final answer LLM, and then we finish. We can see what we got: there's the answer, and we also have our source. Now, the source it's using here is not actually our EHI embeddings source, because EHI embeddings relate to how we do RAG and search in AI, but that document doesn't describe what AI is. So the LLM isn't actually using our information, even though we've given it that information; instead it's using something it has memorized, which is the Wikipedia page on artificial intelligence, and this link should actually work; they usually do.
I'm always amazed at how much LLMs remember, like blog sites or random websites; obviously this one is less impressive, but they sometimes produce some crazy, great links. So we have that; now let's try something else. I'm going to ask "what are EHI embeddings?" so we can test its citation ability and see what it gives us. Okay, so it's using the context we gave it: this is basically just information from the context from the emulated RAG pipeline, and then we have the source here, and that source will actually work; you can see that's the document. This is a relatively new paper, and we're using an older GPT-3.5 model, so I don't think it's in the current turbo model's training data, although they keep changing them, so it could be. In any case, you can also use the older models and they'll do the same thing. Okay, cool. I can keep asking questions: "tell me about these embeddings", and it will, and it gives me a citation again.
I suppose for the citation here I would have liked it to generate a fuller citation, you know, pulling the authors together and whatever else, and you could ask it to do that. Obviously this use case is a bit silly, because in reality you'd probably just extract the source of the document that's been used, since we're only returning one document, which is pretty easy. But anyway, we have that, and it's a useful thing, because when the model returns things from its own memory you can see where it was trained to find that information, or where it remembers it from, which is kind of interesting. Okay, this is where it would normally break if we didn't have that error handling.
You can see we're handling this error. If we didn't have it, the model would just try to output a normal sentence; here we see that the handler actually forces it to use that answer-plus-source format, which is pretty cool. I can literally beg this agent not to use this format and it still will, which I think is cool; it's useful depending on what you're building, of course, but it's a nice little thing we have there. So yes, that's LangGraph. This is a very basic example, and there's a lot
more you can do with LangGraph, like building much more complicated agents than what we've done here. But at the same time you can also build these simple agents and customize them, as we did with the emulated RAG, the final answer, and the error handling. We built all of that and it's not that complicated, and you can add many nodes and many different functions and build something quite sophisticated without too much difficulty. One thing a lot of people say, and to some extent I understand, is that LangChain is very complicated: there are, like, a million ways to do one thing. I'm not saying LangGraph is perfect, but I think it's a lot more refined, and we just showed that to build a graph you add the edges and you add the nodes. Yes, there are a lot of different ways to build those nodes, but the logic is pretty intuitive and easy to follow, and just very extensible.
I would use basically the same functions whether I'm building this very simple agent or some super large research agent with a million different sources of information; we'd use roughly the same functions without much difference, we'd just be putting a lot more into them, and that's something I quite like so far about LangGraph. The other thing is the ability to really control what your agent is doing. With LangChain, it's all hidden behind abstractions, and there are still abstractions here,
I won't lie, but they feel a lot more useful and a lot less frustrating than LangChain's abstractions, which I appreciate. And while it can be a bit complex to get started, after a few hours I think it gets pretty intuitive, and that's something I like about this library. So for now I'm building agents with LangGraph rather than with core LangChain, although of course there are still a lot of LangChain components here, and I'm sure I'll continue to use them for a long time to come. But this is the way I'm building the logic, or the routes, inside agents, and I think it works pretty well. That's it for this introduction to LangGraph; as I mentioned, we'll make more LangGraph videos and build some more complicated things, but that's it for this intro. I hope it was a useful and interesting video; thank you very much for watching, and I'll see you again in the next one. Goodbye.
