
Lecture 20, The Laplace Transform | MIT RES.6.007 Signals and Systems, Spring 2011

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: In the last series of lectures, when discussing filtering, modulation, and sampling, we saw how powerful and useful the Fourier transform is. Starting from this lecture, and during the following ones, I would like to develop and exploit a generalization of the Fourier transform, which will not only lead to some new and important insights into signals and systems, but will also remove some of the restrictions that we have had with the Fourier transform.
The generalization that we will talk about in the case of continuous time is called the Laplace transform, and in the case of discrete time, it is called the z-transform. What I would like to do in today's lecture is start with the case of continuous time, that is, a discussion of the Laplace transform, continue this in the next lecture, and then develop the z-transform for discrete time. And also, as we go forward, exploit the two notions together. Now, to introduce the notion of the Laplace transform, let me remind you again of what led us to the Fourier transform.
We developed the Fourier transform by considering the idea of representing signals as linear combinations of basic signals. And in the Fourier transform, in the case of continuous time, the basic signals that we selected for the representation were complex exponentials. And in what we had referred to as the synthesis equation, the synthesis equation corresponded, in effect, to a decomposition of x of t as a linear combination of complex exponentials. And of course, associated with this was the corresponding analysis equation that gave us the amplitudes associated with the complex exponentials. Now, why did we choose complex exponentials? Well, let's remember that the reason was that complex exponentials are eigenfunctions of time-invariant linear systems, and that was very convenient. Specifically, if we have a time-invariant linear system with an impulse response h of t, what we have shown is that that class of systems has the property that if we put in a complex exponential, we get out a complex exponential at the same frequency and with a change in amplitude. And this change in amplitude did, in fact, correspond, as we showed as the discussion progressed, to the Fourier transform of the impulse response of the system. So the notion of decomposing signals into complex exponentials was very intimately connected, and the Fourier transform was very intimately connected, to the eigenfunction property of complex exponentials for time-invariant linear systems.
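For reference, the Fourier transform pair being recalled here, written with the x of j omega notation that this lecture adopts a little later, is:

```latex
% Synthesis equation: x(t) as a linear combination of complex exponentials
\[ x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(j\omega)\, e^{j\omega t}\, d\omega \]

% Analysis equation: the amplitude associated with each complex exponential
\[ X(j\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt \]
```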
Well, complex exponentials of that kind are not the only eigenfunctions of time-invariant linear systems. In fact, what you've seen before is that if we take a more general exponential, e to the st, where s is a more general complex number, not just j omega but sigma plus j omega, then for any value of s, the complex exponential is an eigenfunction. And we can justify that by simply substituting into the convolution integral. In other words, the response to this complex exponential is the convolution of the impulse response with the excitation. And notice that we can split this term into a product, e to the st times e to the minus s tau.
And the term e to the st can be pulled out of the integration. And consequently, by simply carrying out that algebra, you reduce this integral to an integral with an e to the st factor outside. So, just following the algebra, what we would conclude is that a complex exponential with any complex number s generates, as output, a complex exponential of the same form multiplied by whatever this integral is. And this integral, of course, will depend on what the value of s is. But that's all it will depend on. Or put another way, all of this can be denoted as some function h of s that depends on the value of s.
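Written out, the algebra just described is:

```latex
% Response of a time-invariant linear system with impulse response h(t)
% to the input e^{st}
\begin{align*}
y(t) &= \int_{-\infty}^{\infty} h(\tau)\, e^{s(t-\tau)}\, d\tau \\
     &= e^{st} \int_{-\infty}^{\infty} h(\tau)\, e^{-s\tau}\, d\tau \\
     &= H(s)\, e^{st}
\end{align*}
```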
Then, finally, e to the st as the excitation of a time-invariant linear system generates a response which is a complex constant that depends on s, multiplying the same function that excited the system. So what we have is the eigenfunction property, more generally, in terms of a more general complex exponential, where the complex factor is given by this integral. Well, in fact, what this integral corresponds to is what we will define as the Laplace transform of the impulse response. And in fact, we can apply this transformation to a more general function of time that may or may not be the impulse response of a time-invariant linear system.
And then, in general, this transformation of a function of time is the Laplace transform of that function of time, and it is a function of s. So the definition of the Laplace transform is that the Laplace transform of a function of time x of t is the result of this transformation on x of t. It is denoted as x of s, and with shorthand notation like the one we had with the Fourier transform, we then have the time function x of t in the time domain and its Laplace transform x of s, and these represent a transform pair. Now, let me remind you that developing that mapping is exactly the process we initially went through when developing the mapping that ended up giving us the Fourier transform.
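In symbols, the definition being given is:

```latex
% The (bilateral) Laplace transform of x(t)
\[ X(s) = \int_{-\infty}^{\infty} x(t)\, e^{-st}\, dt, \qquad s = \sigma + j\omega \]
```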
Essentially, what we've done is broaden our horizon or our notation a little bit. Instead of simply pushing a complex exponential through the system, we've pushed through a more general time function e to the st, where s is a complex number with a real part and an imaginary part. Well, the discussion we've gone through so far, of course, very much parallels what we went through for the Fourier transform. The mapping we end up with is called the Laplace transform. And as you can well imagine, and perhaps have already recognized, there is a very close connection between the Laplace transform and the Fourier transform.
Well, to see one of the connections, observe that if we look at the Fourier transform expression and the Laplace transform expression, where s is now a general complex number sigma plus j omega, these two expressions are identical if sigma is equal to 0. If sigma is equal to 0, so that s is just j omega, then this transformation is the same: we substitute in s equals j omega, and this is what we get. What this tells us, then, is that if we have the Laplace transform, and if we look at the Laplace transform at s equals j omega, then that, in fact, corresponds to the Fourier transform of x of t.
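That is:

```latex
% Setting sigma = 0, i.e. s = j omega, reduces the Laplace transform
% to the Fourier transform
\[ X(s)\big|_{s = j\omega} = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt = X(j\omega) \]
```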
Now this poses a bit of a notation problem and it's very easy to solve. But it's something you have to focus on for a second to understand what the problem is. Note that on the left side of this equation, x of s represents the Laplace transform. When we look at that with sigma equals 0 or s equals j omega, our natural inclination is to write that as x of j omega, of course. On the other hand, the right side of the equation, that is, the Fourier transform of x of t, we normally write as x of omega.
That is, focusing on omega as the variable. Well, there's a slight awkwardness here, because here we're talking about a j omega argument and here we're talking about an omega argument. And a very simple way to approach this is to simply change our notation for the Fourier transform, recognizing that the Fourier transform, of course, is a function of omega, but, in fact, it is also a function of j omega. And if we write it that way, then the two notations come together. In other words, the Laplace transform at s equals j omega simply reduces, both mathematically and notationally, to the Fourier transform.
So, the notation that we will now adopt for the Fourier transform is one in which we express the Fourier transform no longer simply as x of omega, but with j omega as its argument. A simple notation change. Now, here we see one relationship between the Fourier transform and the Laplace transform. That is, the Laplace transform for s equal to j omega reduces to the Fourier transform. We also have another important relationship. In particular, the fact that the Laplace transform can be interpreted as the Fourier transform of a modified version of x of t.
Let me show you what I mean. Here, of course, we have the relationship we just developed. That is, at s equal to j omega, the Laplace transform reduces to the Fourier transform. But now let's look at the more general expression of the Laplace transform. And if we substitute in s equals sigma plus j omega, which is the general form of this complex variable s, and apply some algebra, splitting it into the product of two exponentials, e to the minus sigma t multiplied by e to the minus j omega t, we now have this expression where, of course, in both there is a dt.
And now when we look at this, what we see is that it is, in fact, the Fourier transform of something. What is that something? It is no longer x of t; it is the Fourier transform of x of t multiplied by e to the minus sigma t. So if we think about these two terms together, this integral is simply the Fourier transform. It is the Fourier transform of x of t multiplied by an exponential. If sigma is greater than 0, it is an exponential that decays with time. If sigma is less than 0, it is an exponential that grows with time.
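Carrying out that substitution explicitly:

```latex
% Substituting s = sigma + j omega into the Laplace transform
\[ X(\sigma + j\omega)
   = \int_{-\infty}^{\infty} \bigl[ x(t)\, e^{-\sigma t} \bigr]\, e^{-j\omega t}\, dt
   = \mathcal{F}\bigl\{ x(t)\, e^{-\sigma t} \bigr\} \]

% which converges when the exponentially weighted signal is
% absolutely integrable:
\[ \int_{-\infty}^{\infty} \bigl| x(t)\, e^{-\sigma t} \bigr|\, dt < \infty \]
```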
So we have this additional relation, which tells us that the Laplace transform is the Fourier transform of an exponentially weighted function of time. Now, this exponential weighting has an important consequence. In particular, let us remember that there were convergence issues with the Fourier transform: the Fourier transform may or may not converge, and for convergence, what is required is that the time function we are transforming be absolutely integrable. Now, we can have a function of time that is not absolutely integrable because, let's say, it grows exponentially as time increases. But when we multiply it by this exponential factor that's built into the Laplace transform, that actually brings the function back down for positive time.
And we will impose absolute integrability on the product of x of t times e to the minus sigma t. And so the conclusion, and an important point, is that the Fourier transform of this product can converge even though the Fourier transform of x of t does not. In other words, the Laplace transform can converge even when the Fourier transform does not converge. And we will see that, and we will see examples of that, as the discussion progresses. Now let me draw your attention to this fact, although we will not analyze it in detail.
Namely, to the fact that this equation, in fact, provides us with the basis for figuring out how to express x of t in terms of the Laplace transform. In fact, we can apply the inverse Fourier transform to this, and account for the exponential factor by taking it to the other side. And if you go through this, and you will have the opportunity to do so both in the video course manual and in the text, you will end up with a synthesis equation, an expression for x of t in terms of x of s.
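The synthesis equation referred to, integrating along a vertical line with real part sigma inside the region of convergence, is:

```latex
% Inverse Laplace transform (synthesis equation)
\[ x(t) = \frac{1}{2\pi j} \int_{\sigma - j\infty}^{\sigma + j\infty} X(s)\, e^{st}\, ds \]
```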
And that now constructs x of t from a linear combination of basic functions or signals that are not necessarily of the form e to the j omega t, but more general exponentials e to the st. Okay, well, let's look at the Laplace transforms of some time functions. The examples that I will analyze are all examples that are worked out in the text, so I don't want to focus on the algebra. What I would like to focus on are some of the issues and their interpretation. First, let's look at the example in the text which is Example 9.1: the exponential x of t equal to e to the minus at times the unit step u of t.
If we take the Fourier transform of this exponential, then, as you well know, the result is 1 over j omega plus a. But that doesn't converge for every a; in particular, it converges only for a greater than 0. What that really means is that for the Fourier transform to converge, it has to be a decaying exponential; it can't be a growing exponential. If we apply the Laplace transform to this instead, applying the Laplace transform is the same as taking the Fourier transform of x of t multiplied by an exponential, and the exponential that we multiply by is e to the minus sigma t.
So in effect, taking the Laplace transform of this is like taking the Fourier transform of e to the minus at times e to the minus sigma t. And if we carry that out, just working out the integral, we end up with a Laplace transform which is 1 over s plus a. But just like the Fourier transform, it will not converge unconditionally. What happens is that the Laplace transform will only converge when the Fourier transform of this weighted signal converges. In other words, it is when a plus sigma is greater than 0. So we would require, if I write it here, that a plus sigma be greater than 0.
Or, equivalently, that sigma be greater than minus a. So for the Laplace transform of this, we have an expression 1 over s plus a. But we also require, in interpreting that, that the real part of s be greater than minus a, so that, in essence, the Fourier transform of x of t multiplied by e to the minus sigma t converges. That is why it is important to recognize that the algebraic expression we obtain is only valid for certain values of the real part of s. So for this example, we can summarize: this exponential has a Laplace transform, which is 1 over s plus a, where s is restricted to the range of the real part of s greater than minus a.
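Worked out explicitly, Example 9.1 reads:

```latex
% Example 9.1: x(t) = e^{-at} u(t)
\begin{align*}
X(s) &= \int_{0}^{\infty} e^{-at}\, e^{-st}\, dt
      = \int_{0}^{\infty} e^{-(s+a)t}\, dt \\
     &= \frac{1}{s+a}, \qquad \operatorname{Re}\{s\} > -a
\end{align*}
```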
Now, we haven't had this problem before of restrictions on what the value of s is. With the Fourier transform, it either converged or it didn't converge. With the Laplace transform, now that we have more flexibility, there are certain values of the real part of s for which it converges and certain values for which it does not. The values of s for which the Laplace transform converges are called the region of convergence of the Laplace transform. And it is important to recognize that when specifying the Laplace transform, what is required is not only the algebraic expression, but also the domain or set of values of s for which that algebraic expression is valid.
Just to underscore that point, let me draw your attention to another example from the text, which is example 9.2. In Example 9.2, we have an exponential for negative time and 0 for positive time. And if you follow the algebra there, you end up with a Laplace transform expression, which again is 1 over s plus a. Exactly the same algebraic expression that we had for the previous example. The important distinction is that now the real part of s is restricted to be less than minus a. And so, in fact, if we compare this example to the one above, let's look again at the answer we had there.
If you compare those two examples, here the algebraic expression is 1 over s plus a with a certain region of convergence. Here the algebraic expression is 1 over s plus a. And the only difference between those two is the domain or region of convergence. So now there's another complication or twist. Not only do we need to generate the algebraic expression, but we must also be careful to specify the region of convergence over which that algebraic expression is valid. Now, later in this lecture, and also as the discussion of the Laplace transform progresses, we will begin to see and understand more about how the region of convergence relates to various properties of the time function.
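Summarizing the two examples side by side, and taking the left-sided signal of Example 9.2 to be minus e to the minus at times u of minus t, which is what its description here suggests:

```latex
% The same algebraic expression, distinguished only by the ROC
\[ e^{-at}\, u(t) \;\longleftrightarrow\; \frac{1}{s+a},
   \quad \operatorname{Re}\{s\} > -a \]   % Example 9.1, right-sided

\[ -e^{-at}\, u(-t) \;\longleftrightarrow\; \frac{1}{s+a},
   \quad \operatorname{Re}\{s\} < -a \]   % Example 9.2, left-sided
```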
Well, finally, let's look at one additional example from the text, and this is Example 9.3. It consists of a function of time which is the sum of two exponentials. And although we haven't talked formally about the properties of the Laplace transform yet, one of the properties we will look at, and it is relatively easy to develop, is the fact that the Laplace transform of a sum is the sum of the Laplace transforms. So in fact, we can get the Laplace transform of the sum of these two terms as the sum of the Laplace transforms.
So for this one, we know from the example we saw earlier, Example 9.1, that it is of the form 1 over s plus 1 with a region of convergence which is the real part of s greater than minus 1. For the second one, we have a Laplace transform that is 1 over s plus 2 with a region of convergence that is the real part of s greater than minus 2. So for the two together, we have to take the overlap of those two regions. In other words, we have to take the region that is common to both the real part of s greater than minus 1 and the real part of s greater than minus 2.
And if we put them together, then we have a combined region of convergence, which is the real part of s greater than minus 1. So this is the expression. And for this particular example, what we have is a ratio of polynomials: there is a numerator polynomial and a denominator polynomial. And it is convenient to summarize them by plotting the roots of the numerator polynomial and the roots of the denominator polynomial in the complex plane. And the complex plane in which they are drawn is called the s-plane. So we can, for example, take the denominator polynomial and summarize it by representing the fact that it has roots at s equals minus 1 and at s equals minus 2.
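As a sketch of the algebra, assuming the two exponentials are e to the minus t times u of t and e to the minus 2t times u of t, which matches the poles and zero quoted here:

```latex
% Example 9.3 (assumed form): x(t) = e^{-t} u(t) + e^{-2t} u(t)
\[ X(s) = \frac{1}{s+1} + \frac{1}{s+2}
        = \frac{2s+3}{(s+1)(s+2)}, \qquad \operatorname{Re}\{s\} > -1 \]
% zero at s = -3/2; poles at s = -1 and s = -2
```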
And I've done that in this picture by putting an x where the roots of the denominator polynomial are. The numerator polynomial has a root at s equal to minus 3/2, and I have represented it with a circle. These are the roots of the denominator polynomial and this is the root of the numerator polynomial for this example. And also, for this example, we can represent the region of convergence, which is the real part of s greater than minus 1; that is, in fact, this region here. Note also that if I draw just the roots of the numerator and denominator polynomials, I would need additional information to specify the algebraic expression completely.
That is, a multiplying constant in front of everything. Well, this particular example has a Laplace transform that is a rational function. That is, there is a polynomial in the numerator and another polynomial in the denominator. And in fact, as we will see, Laplace transforms that are ratios of polynomials form a very important class. In fact, they represent systems that can be described by linear constant-coefficient differential equations. You shouldn't necessarily, and in fact probably don't, see why that's true just now; we'll come to that later. But it means that Laplace transforms that are rational functions, that is, the ratio of a numerator polynomial to a denominator polynomial, become very important in the discussion that follows.
And in fact, we have some terminology for this. The roots of the numerator polynomial are called zeros of the Laplace transform, because, of course, those are the values of s at which x of s becomes 0. And the roots of the denominator polynomial are known as the poles of the Laplace transform. And those are the values of s at which the Laplace transform blows up. That is, it becomes infinite. If you think about setting s equal to a value where this denominator polynomial goes to 0, of course, x of s becomes infinite.
And what we would expect, and of course we will see this to be true, is that wherever that happens, there must be some problem with the convergence of the Laplace transform. And in fact, the Laplace transform does not converge at the poles, that is, at the roots of the denominator polynomial. In fact, let's focus on that a little more. Let's examine and talk about the region of convergence of the Laplace transform and how it is associated with both the properties of the time function and the locations of the poles of the Laplace transform. And as we will see, there are some very specific and important relationships and conclusions that we can draw about how the region of convergence is constrained and associated with the locations of the poles in the s-plane.
Well, to begin with, we can, of course, claim, as I just did, that the region of convergence contains no poles. In particular, if I think about this general rational function, the poles of x of s are the values of s where the denominator is 0, or, equivalently, where x of s blows up. And of course, that implies that the expression no longer converges there. Well, that's one statement we can make. Now, there are some others. And one, for example, is the claim that if I have a point in the s-plane that corresponds to convergence, then in fact any line in the s-plane with that same real part will also be a set of values for which the Laplace transform converges.
And what is the reason for that? The reason is that s is sigma plus j omega, and the convergence of the Laplace transform is associated with the convergence of the Fourier transform of e to the minus sigma t multiplied by x of t. And that convergence only depends on sigma. If it only depends on sigma, then if it converges for some value of sigma for some value of omega, then it will converge for that same sigma for any value of omega. The conclusion, then, is that in the region of convergence, if I have a point, then I also have a line.
And what that suggests is that when we look at the region of convergence, it actually corresponds to strips in the complex plane. Now, we can also tie the region of convergence to the convergence of the Fourier transform. In particular, since we know that the Laplace transform reduces to the Fourier transform when the complex variable s is equal to j omega, that is, when sigma equals 0, the Fourier transform of x of t converging is equivalent to the statement that the Laplace transform converges for sigma equal to 0. In other words, the Fourier transform of x of t converges exactly when the region of convergence includes the j omega axis in the s-plane. So we have some statements linking the locations of the poles and the region of convergence. Let me make another claim, which is much more difficult to justify, and I won't try; I'll simply state it. The region of convergence of the Laplace transform is a connected region. In other words, the entire region consists of a single strip in the s-plane; it cannot consist of a strip here, for example, and a strip there. Well, let me emphasize some of those points a little more.
Suppose I have a Laplace transform, and the Laplace transform I'm talking about is a rational function, which is 1 over s plus 1 times s plus 2. So here is the pole-zero pattern, as it is known, in the s-plane: the locations of the roots of the numerator and denominator polynomials. Of course, here there is no numerator polynomial to speak of. Shown here are the roots of the denominator polynomial, which I have represented with these x's. And this is the pole-zero pattern. And from what I have said, the region of convergence cannot include any poles and must correspond to strips in the s-plane.
And, furthermore, it must be a single connected region and not multiple regions. And so, with this algebraic expression, the possible choices for the region of convergence consistent with those properties are as follows. One of them would be a region of convergence to the right of this pole. A second would be a region of convergence that lies between the two poles, as I show here. And a third is a region of convergence that is to the left of this pole. And because I said, without proof, that the region of convergence must be a single strip, it cannot be multiple strips.
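Listing these for the transform at hand:

```latex
% For X(s) = 1/((s+1)(s+2)), the three possible regions of convergence
\[ \operatorname{Re}\{s\} > -1 \]        % to the right of the rightmost pole
\[ -2 < \operatorname{Re}\{s\} < -1 \]   % the strip between the two poles
\[ \operatorname{Re}\{s\} < -2 \]        % to the left of the leftmost pole
```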
In fact, we could not consider, as a possible region of convergence, what I show here; this is not a valid region of convergence. There are only three possibilities associated with this pole-zero pattern. That is, to the right of this pole, between the two poles, and to the left of this pole. Now, to take the discussion further, we can, in fact, associate the region of convergence of the Laplace transform with some very specific characteristics of the time function. And what this will do is help us understand, for various choices of the region of convergence, the interpretation that we can impose on the associated time function.
Let me show you what I mean. Suppose we start with a time function like the one I give here, which is a time function of finite duration. In other words, it is 0 except in some time interval. Now let us remember that the Fourier transform converges if the time function has the property of being absolutely integrable. And as long as the amplitudes remain finite, with a signal of finite duration we are not going to run into any difficulty here: the Fourier transform will converge. And now the question is, what can we say about the region of convergence of the Laplace transform?
Well, the Laplace transform is the Fourier transform of the time function multiplied by an exponential. So we can ask whether we can destroy the absolute integrability of this by multiplying by an exponential that grows too fast or decays too fast. And let's take a look at that. Suppose that this function of time is absolutely integrable, and multiply it by a decaying exponential. So this is now x of t multiplied by e to the minus sigma t, if I think about multiplying these two. And what you can see is that, for positive time, thinking informally, I'm helping the integrability of the product because I'm pushing this part down.
For negative time, unfortunately, I am growing things. But I don't let them grow indefinitely, because there is a time before which this equals 0. Likewise, if I had a growing exponential, then for negative time, for this part, I'm making things smaller. Of course, for positive time that exponential grows without bound, but the function of time stops at some point. So the idea is that for a time function of finite duration, it doesn't matter what kind of exponential you multiply by, whether it goes this way or that way; due to the fact that essentially the limits of the integral are finite, I have the guarantee that I will always maintain absolute integrability.
And so, in fact, for a time function of finite duration, the region of convergence is the entire s-plane. Now, we can also make statements about other types of time functions. Let's look at a time function that I define as a right-sided time function. A right-sided function of time is one that is 0 up to some time, and then continues after that, presumably out to infinity. Now, let me remind you that the whole issue here with the region of convergence has to do with the exponentials by which we can multiply a function of time and have the product end up being absolutely integrable.
Well, suppose that I multiply this function of time by an exponential that, let's say, decays: an exponential e to the minus sigma 0 t. What you can see intuitively is that if this product is absolutely integrable, and if I were to increase sigma 0, then I'm doing even better for positive time, because I'm pushing things down further. And although things could get worse for negative time, that doesn't matter, because before some time the product is equal to 0. So if this product is absolutely integrable, then if I choose an exponential e to the minus sigma 1 t, where sigma 1 is greater than sigma 0, that product will also be absolutely integrable.
And we can draw an important conclusion about the region of convergence from that. In particular, we can state that if the time function is right-sided and convergence occurs for some value sigma 0, then in fact we will have convergence of the Laplace transform for all values of the real part of s greater than sigma 0. The reason, of course, is that if sigma 0 increases, then the exponential decays even faster during positive time. Now, what that means, thinking of it another way in terms of the region of convergence as we might draw it in the s-plane, is that if we have a point that is in the region of convergence, corresponding to some value sigma 0, then all values of s to the right of that point in the s-plane will also be in the region of convergence.
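A one-line version of that argument, assuming x of t is zero before some time T:

```latex
% If x(t) e^{-\sigma_0 t} is absolutely integrable and \sigma_1 > \sigma_0,
% then for every t > T:
\[ \bigl| x(t)\, e^{-\sigma_1 t} \bigr|
   = \bigl| x(t)\, e^{-\sigma_0 t} \bigr|\, e^{-(\sigma_1 - \sigma_0) t}
   \le e^{-(\sigma_1 - \sigma_0) T}\, \bigl| x(t)\, e^{-\sigma_0 t} \bigr| \]
% so absolute integrability, and hence convergence, is preserved.
```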
We can also combine that with the statement that, for rational transforms, there can be no poles in the region of convergence. If we put those two statements together, we end up with the statement that if x of t is right-sided and its Laplace transform is rational, then the region of convergence is to the right of the rightmost pole. So we have a very important idea here, which tells us that we can infer some property of the function of time from the region of convergence. Or, conversely, if we know something about the function of time, that is, if it is right-sided, then we can infer something about the region of convergence.
Well, in addition to right-sided signals, we can also have left-sided signals. And a left-sided signal is essentially a time-reversed right-sided signal. In other words, a left-sided signal is one that is 0 after some time. Well, there we can make exactly the same kind of argument. That is, the signal runs off to infinity in the negative time direction and stops somewhere in positive time, and suppose I have an exponential that I can multiply it by to make that product absolutely integrable. If I then choose an exponential that decays even faster for negative time, so that it pushes things down even further, I improve integrability even more.
And you might have to think about that a little bit, but it's exactly the mirror image of the argument for right-sided signals. The conclusion, then, is that if we have a left-sided signal and we have a point, a value of the real part of s, that is in the region of convergence, then in fact all the values to the left of that point in the s-plane will also be in the region of convergence. Now, similar to the statement we made for right-sided signals, suppose x of t is left-sided and, in fact, we are talking about a rational Laplace transform, which we normally will be.
Then, in fact, we can claim that the region of convergence is to the left of the leftmost pole, because we know that if we find a point that is in the region of convergence, everything to the left of that has to be in the region of convergence, and we cannot have poles in the region of convergence. You put those two statements together, and it says that the region of convergence is to the left of the leftmost pole. Now, the final situation is one in which we have a signal that is neither right-sided nor left-sided. It extends to infinity in positive time and it extends to infinity in negative time.
And what you have to recognize is that if you multiply by an exponential that decays very quickly in positive time, it will grow very quickly in negative time. Conversely, if it decays very quickly in negative time, it grows very quickly in positive time. So there is the notion of trying to balance the value of sigma. And, in effect, what that says is that the region of convergence cannot extend too far to the left or too far to the right. Put another way, for a two-sided signal, if we have a point that is in the region of convergence, then that point defines a strip in the s-plane that takes that point and extends it to the left until it hits a pole, and extends it to the right until it hits a pole.
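Summarizing the relationships developed so far, for rational Laplace transforms:

```latex
% How the ROC relates to the sidedness of x(t)
\begin{align*}
&\text{finite duration:} && \text{ROC is the entire } s\text{-plane} \\
&\text{right-sided:}     && \operatorname{Re}\{s\} > \text{(real part of rightmost pole)} \\
&\text{left-sided:}      && \operatorname{Re}\{s\} < \text{(real part of leftmost pole)} \\
&\text{two-sided:}       && \text{a vertical strip bounded on each side by a pole}
\end{align*}
```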
Then we begin to see that we can tie together properties of the region of convergence and whether the time function is right-sided, left-sided, or two-sided. And you will have the opportunity to examine this in more detail in the video course manual. Let's conclude this lecture by talking about how we can obtain the time function given the Laplace transform. Well, if we have a Laplace transform, we can, in principle, recover the time function by recognizing the relationship between the Laplace transform and the Fourier transform, and using the formal expression for the inverse Fourier transform, or, equivalently, the formal expression for the inverse Laplace transform found in the text.
But more typically, what we would do is what we have also done with the Fourier transform, which is to use simple Laplace transform pairs together with the notion of a partial fraction expansion. And let's go over that with an example. Suppose I have a Laplace transform as I've indicated here by its pole-zero plot, with a region of convergence that is to the right of this pole. And what we can identify from the region of convergence, in fact, is that we are talking about a right-sided function of time. So the region of convergence is the real part of s greater than minus 1.
And now, looking at the algebraic expression, we have the algebraic expression for this, as I've indicated here, equivalently expanded in a partial fraction expansion, as I show below. So if you just combine these terms, it's the same as this, and the region of convergence is the real part of s greater than minus 1. Now, this is the sum of two terms, so the time function is the sum of two time functions. And the region of convergence of the combination must be the intersection of the regions of convergence associated with each one. Recognizing that this is to the right of the poles, that immediately tells us that each of these two terms corresponds to the Laplace transform of a right-sided function of time.
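A sketch of where this is heading, assuming, consistent with the pole-zero description here, that the transform is 1 over the product of s plus 1 and s plus 2:

```latex
% Assumed transform: X(s) = 1/((s+1)(s+2)), with ROC Re{s} > -1
\[ \frac{1}{(s+1)(s+2)} = \frac{1}{s+1} - \frac{1}{s+2} \]
% Both poles lie to the left of the ROC, so both terms are right-sided:
\[ x(t) = e^{-t}\, u(t) - e^{-2t}\, u(t) \]
```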
Well, let's look at it term by term. The first term is the factor 1 over s plus 1, with a region of convergence to the right of this pole. And this corresponds algebraically to what I have indicated. And this, in fact, is similar to, or a special case of, the example we pointed out at the beginning of the lecture, that is, Example 9.1. And then we can just use that result. If you remember that example, or refer to your notes, we know that the time function of the form e to the minus at times the unit step gives us the Laplace transform 1 over s plus a, with the real part of s greater than minus a.
And this is the Laplace transform of the first one. Or, sorry, this is the inverse Laplace transform of the first term. If we now consider the pole at s equal to minus 2, here is the region of convergence that we originally started with. In fact, having removed the pole at minus 1, we can extend this region of convergence up to this pole. And now we have an algebraic expression, which is minus 1 over s plus 2, with the real part of s greater than minus 1, although, in fact, we can extend the region of convergence up to the pole at minus 2. And the inverse transform of this, now again referring to the same example, is minus e to the minus 2t times the unit step.
And if we simply put the two terms together, adding the one we have here to the one we had before, we have the total inverse Laplace transform, which is that. Basically, what happened is that each of the poles has contributed an exponential factor. And because the region of convergence is to the right of all of those poles, that is consistent with the notion that both terms correspond to right-sided functions of time. Well, let's focus for a second or two on the same pole-zero pattern. But instead of a region of convergence that is to the right of the poles, as we had before, we will now take a region of convergence that lies between the two poles.
And I'll let you work through this more leisurely in the video course manual. But when we carry out the partial fraction expansion, as I have done here, we would now associate with this pole a region of convergence to its right, and with this pole a region of convergence to its left. And then what we would have is the sum of a right-sided function of time contributed by the pole at minus 2, of the form e to the minus 2t for positive t, and a left-sided function of time contributed by the pole at minus 1, of the form e to the minus t for negative t. And so, in fact, when we decompose this using the partial fraction expansion, being very careful to associate the region of convergence to the right of this pole and to the left of that pole, we will end up, when we are done, with the sum of a right-sided term and a left-sided term, as sketched below.
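A minimal worked version, assuming the same transform as in the previous case, now with the strip between the poles as the region of convergence:

```latex
% X(s) = 1/((s+1)(s+2)), with ROC -2 < Re{s} < -1
\[ \frac{1}{s+1}, \ \operatorname{Re}\{s\} < -1
   \;\longleftrightarrow\; -e^{-t}\, u(-t) \]    % left-sided term
\[ -\frac{1}{s+2}, \ \operatorname{Re}\{s\} > -2
   \;\longleftrightarrow\; -e^{-2t}\, u(t) \]    % right-sided term
\[ x(t) = -e^{-t}\, u(-t) - e^{-2t}\, u(t) \]
```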
And you'll look at this a little more closely when you sit down with the video course manual. Okay, well, what we've seen, pretty quickly, is an introduction to the Laplace transform. And one point to highlight again is the fact that the Laplace transform is very closely associated with the Fourier transform. In fact, the Laplace transform for s equal to j omega reduces to the Fourier transform.
But more generally, the Laplace transform is the Fourier transform of x of t with an exponential weighting. And there are some exponentials for which that product converges; there are other exponentials for which that product does not have a convergent Fourier transform. This introduces into the discussion of the Laplace transform what we call the region of convergence. And it is very important to understand that when specifying a Laplace transform, it is important to identify not only the algebraic expression, but also the values of s for which it is valid. That is, the region of convergence of the Laplace transform.
Finally, what we did was match some properties of a time function with things we can say about the region of convergence of its Laplace transform. Now, like the Fourier transform, the Laplace transform has some very important properties. And out of these properties come some mechanisms for using the Laplace transform for systems such as those described by linear constant-coefficient differential equations. But most importantly, the properties, as we understand them better, will help us use and exploit the Laplace transform to study and understand time-invariant linear systems. And that's what we'll move on to next time.
In particular, we'll talk about properties and then associate with time-invariant linear systems much of the discussion we've had today regarding the Laplace transform. Thank you.
