
Economics 421/521 - Econometrics - Winter 2011 - Lecture 7 (HD)

As for autocorrelation, we started on this last time. One thing I didn't say then: this class hits you pretty hard over the head at the start with heteroskedasticity, and if you've kept up to this point you'll be fine. If you're struggling a little, know that we have some technical points ahead of us, but it won't get substantially harder than this. That's the course, and that's how it is. We're not on a gradient that keeps climbing until it gets really, really difficult, so stick with it; it's important that you keep at it every day and every week, but it's not going to keep getting harder and harder until everyone quits or can't handle it. Hopefully that's a confidence builder. Anyway, autocorrelation: we talked about the assumptions last time. What number did I get up to, C8?
Does anyone remember? Okay. The two that were important were the ones we called C6 and C7. C6 says that u_t and u_s are uncorrelated for all t ≠ s. Of course u_t is correlated with itself, it moves perfectly with itself, but for any period other than the same one the errors are uncorrelated, and this assumption will often be violated. The other one, C7, says that the u_t's and the x_t's are not correlated. We'll get to that one in a moment, but first I want to repeat some things you did in 420, hopefully from this book, and focus a little on this second one. The message is that we are using time series data, and these are the two assumptions most frequently and commonly violated with time series data. So I want to explain the nature of the violations, how these assumptions get violated when we have time series data, and then we can talk about testing, and then we can talk about corrections.
I will have good news for you about the corrections: I'm skipping something this year that's pretty hard, which will make it easier. So let me look at this in a little more detail and remind you how it works. Suppose the model is y_t = β_1 + β_2 x_{2t} + u_t for t = 1, 2, …, T; I'm using t now because we have time series data. Don't let that confuse you, we're just indexing with time instead of with i, and T is the number of observations. So that's our basic model, and what you learned in 420 is that the OLS estimator of β_2 is β̂_2 = Σ(x_t − x̄)(y_t − ȳ) / Σ(x_t − x̄)². We want to look at bias, so I need to use the trick that isolates β_2 plus the inner product between the x's and the u's, because we want to see this correlation. So we use the fact that y_t − ȳ = β_2(x_t − x̄) + (u_t − ū).
I'll use the same symbols the book uses. Where does that come from? If you take this model, what is β̂_1? It's β̂_1 = ȳ − β̂_2 x̄_2. Remember, if I plug in the estimator for β_2 here, I get that equation. So take β̂_1 = ȳ − β̂_2 x̄, put it back into the original model, group the terms in x̄, and you get the expression above; strictly speaking you have to use the true values here rather than the estimates, so substitute the averaged true model into this equation.
Oh, and I'll leave the error term in; it's probably a good idea to have the error there, or we won't see the problem. What is the mean of u? Zero, that's one of our assumptions, so ū is zero; it's really (u_t − ū), but the bar term drops out, so effectively there's no bar there. So now we can write β̂_2 = [Σ(x_t − x̄)·β_2(x_t − x̄) + Σ(x_t − x̄)u_t] / Σ(x_t − x̄)². Let me get out of the way here; sorry, and I keep writing i's when I mean t's.
I finally trained myself to write t's here when I need t's, and now I'm slipping the other way around. You've seen this before, I hope: when I multiply those two terms out, I get β̂_2 = β_2 Σ(x_t − x̄)² / Σ(x_t − x̄)² + Σ(x_t − x̄)u_t / Σ(x_t − x̄)², which is just β_2 + Σ(x_t − x̄)u_t / Σ(x_t − x̄)². Now take the expectation of this; to show the estimator is unbiased, we take the expected value. Having said that, nothing here is difficult; this is review, I hope. So the expected value of β̂_2 is β_2 plus... oh, I spilled coffee on my notes, that was cool. Actually, did the book use an a for this?
Did they make a substitution here? Do you remember the substitution the book uses? Okay, what the book does is define a_t ≡ (x_t − x̄) / Σ(x_t − x̄)²; the three bars mean this is a definition. Then β̂_2 = β_2 + Σ a_t u_t, which should look familiar. So we take the expected value: E[β̂_2] = β_2 + E[Σ a_t u_t]. Now, if u_t and a_t are uncorrelated, what can we do? We can pass the expectation through the product, and if that holds, then E[β̂_2] = β_2 + Σ E[a_t]·E[u_t].
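As a quick check, here is a minimal Python sketch (mine, not the lecture's; the sample size and parameter values are illustrative assumptions) verifying the decomposition β̂_2 = β_2 + Σ a_t u_t on simulated data:

```python
import numpy as np

# Minimal sketch (illustrative values): verify beta2_hat = beta2 + sum(a_t * u_t).
rng = np.random.default_rng(0)
T = 200
beta1, beta2 = 1.0, 0.5
x = rng.normal(size=T)
u = rng.normal(size=T)
y = beta1 + beta2 * x + u

xd = x - x.mean()                                        # x_t - x_bar
beta2_hat = (xd * (y - y.mean())).sum() / (xd**2).sum()  # OLS slope

a = xd / (xd**2).sum()                                   # a_t as defined above
print(beta2_hat, beta2 + (a * u).sum())                  # identical up to rounding
```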
Really I should do this in two steps: take the expected value of the product, and then distribute the expectation. Remember, when two things are uncorrelated or independent, the expected value of the product is the product of the expected values: E[AB] = E[A]·E[B]. If they are correlated, you cannot do that. So only when a_t and u_t are uncorrelated is the product of the expectations equal to the expectation of the product; when they are correlated, you just can't distribute that expectation. But once we can, what is E[u_t]? Zero. a_t is not zero, but a_t times 0 is, so this will equal β_2 if u_t and a_t are independent, or I guess uncorrelated is all I need; we'll just follow the book.
Actually, I prefer to say independent there; these need to be independent. I'm hemming and hawing because we have normal errors, and when we have normality uncorrelated and independent are the same thing, but you don't always have normality, so I want to say it right. This is the crucial assumption here; this is what I want to focus on now. There are two cases, and note, this is the key, that a_t depends on all the x's from 1 to T, not just x_t. So if I really put just x_t there... sorry, that's not right; why didn't anyone correct me instead of keeping quiet?
You know, I know better than to line the cup's hole up with the seam; that's why this thing leaks. It's not like I don't drink coffee every day. Okay, here's the point: a_t depends on every x, not just x_t, so u_t must be independent not just of x_t but of all the x's from t = 1 to t = T, and again that comes from the sum in the denominator. So here's what I want to show you; I'll just state the results and then we'll go back and dig into them. There are two results. First: if x_t and u_t are correlated, we get the wrong answer, and even when we let N go to infinity, even with an infinite amount of data, we would still get the wrong answer. That is the serious problem: when x_t and u_t are correlated contemporaneously, the estimator is biased and inconsistent.
The second result covers the more common case: x_t and u_t are contemporaneously uncorrelated, but u_t is correlated with x's at other dates. Last night I thought that at this point you were going to be a little confused about what I'm talking about, so I thought a picture might help; I won't claim it's worth a thousand words. Picture your x_2 column running from x_{2,1} up to x_{2,T}, like spreadsheet data, and next to it a u column with u_t in some row. If u_t is correlated with the x in its own row, that's when you get biased and inconsistent.
I wrote it too small to see: that case is biased and inconsistent. Now, suppose instead that u_t is not correlated with the x in its own row but is correlated with the x's in other rows; then you are fine on that score, you have satisfied C7. So there are three cases. If u_t is uncorrelated with all the x's, that's the best case: the estimator is unbiased and consistent. The least problematic violation is when u_t is correlated with some of the x's but not with the contemporaneous one; then you become biased but you stay consistent. The worst case is when u_t is correlated with the contemporaneous x: then you have no consistency and you have bias, so the estimator is biased and inconsistent. Now let me give you three example models so we can see how these cases arise, and then I'll go ahead and start showing you how to test for this.
I just want to set up where we're headed. There are three cases: errors uncorrelated with the x's at all dates, contemporaneously correlated, and correlated only outside the contemporaneous period. So let's write down models for the three cases. Our basic model will be y_t = β_1 + β_2 x_{2t} + u_t, and you can extend this to more regressors. Think of these x's and u's as uncorrelated; the problem here is C6: u_t and u_s are correlated. You'll see that u_t and u_{t−1} are correlated, because u_t = ρ u_{t−1} + e_t; in fact ρ is the correlation coefficient, the degree of correlation. Usually in these models ρ will be less than one in absolute value, something like 1/2, so today's error is half of yesterday's error, or 0.7 of yesterday's, plus a new shock. In this case you keep unbiasedness and consistency; what you don't have is efficiency, because of this problem. So in this model, what about the OLS estimator of β_2?
It will be unbiased and consistent, but it is not efficient, because u_t and u_{t−1} are correlated; that is the assumption C6 we have been talking about, while C7 is the other one we're talking about here. So the problem with this first model is C6 only; there is no correlation between the errors and the x's. We will come back to this model again and again; the tests we perform will be based on it. Now let me write down two other models that may have some problems. Here is the one I want to use; it's a different model, and let me explain the problem. It is y_t = β_1 + β_2 x_{2t} + β_3 y_{t−1} + u_t, with a lagged dependent variable, because you have slow adjustment and you have persistence: if GDP was high yesterday, it tends to be high today, so you tend to see this pattern.
I like this model quite often. But if I write out y_{t−1}, it is a function of u_{t−1}: y_{t−1} = β_1 + β_2 x_{2,t−1} + β_3 y_{t−2} + u_{t−1}. So u_{t−1} is inside y_{t−1}, and in general u_{t−j} is inside y_{t−j}. That's a problem, because y_{t−1} has u_{t−1} in it, so y_{t−1} is correlated with u_{t−1}. Now think of this regressor as z_t, where z_t is a time-t variable that simply turns out to be y_{t−1}.
This could be yt- 2 or your 3 or your 4 or something like that, it doesn't matter, but what I have here is that a Time T variable is correlated with another error. this is this case here the UT is not correlated with YT is correlated withanother X but now we have one of the out of contemporaneous correlation if this were YT, which would be silly because there is YT here, but suppose it were, then it would be correlated with UT simultaneously if this has UT minus one and UT hasn't even happened this is not related to this thing, but it is correlated with u t minus one and T, so it is correlated with u, so U is correlated with one of the be independent of all things on the right side, this is not so, then we could think of a third case which is that model and this model together, suppose we have YT is beta 1 + beta 2 x2t plus beta 3 and t - 1 plus u and U is the row u t -1 + e t so if I substitute that in my model it will be YT is beta 1 + beta 2 x2t + beta 3 and t -1 plus the row U tus1 + b t these two terms here will be our big problem in this case because u t minus1 is in YT minus1 so now a variable on the right side is correlated with u those two things are correlated there is UT right here so YT minus one and UT are correlated before it was YT minus1 this was different, it was YT minus one and UT minus one, the UT was fine, but now that I made UT have UT minus one and tus1 has U tus1, so they are correlated, this model actually has both problems, this is correlated. with that and those YT minus one is correlated with things outside of contemporary like before and the contemporary version, so that model is our most problematic model, we will use this model here.
I should have said: the first model is unbiased; this second one is biased but consistent, so what we lost relative to the first model is unbiasedness while keeping consistency, and the bias again is because one of the y's is correlated with one of the u's non-contemporaneously. The third model is the biggest problem: it is biased and inconsistent. Okay? These details can be a little confusing, and this particular material probably doesn't quite ring a bell yet, so you'll have to look at it a little more closely. But the basic idea is: when the u's are uncorrelated with all the x's, you're okay; when the u's are correlated with the x's, it's not so bad if the correlation is outside the current time period; where you have real trouble is when the correlation is contemporaneous. Try to capture that idea and keep coming back to it as we go through this.
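A rough Monte Carlo sketch of that ranking (my own illustration, not the lecture's; all parameter values are assumptions, and x_{2t} is dropped to keep it short) suggests how the third model's bias refuses to shrink as T grows:

```python
import numpy as np

# Sketch: OLS on y_t = beta1 + beta3*y_{t-1} + u_t with AR(1) errors
# u_t = rho*u_{t-1} + e_t. With rho > 0 the estimate of beta3 stays
# above the truth even for large T (biased AND inconsistent).
rng = np.random.default_rng(1)

def avg_beta3_hat(T, rho=0.5, beta3=0.5, reps=500):
    est = []
    for _ in range(reps):
        e = rng.normal(size=T)
        u = np.zeros(T)
        y = np.zeros(T)
        for t in range(1, T):
            u[t] = rho * u[t - 1] + e[t]
            y[t] = 1.0 + beta3 * y[t - 1] + u[t]
        X = np.column_stack([np.ones(T - 1), y[:-1]])   # intercept, y_{t-1}
        est.append(np.linalg.lstsq(X, y[1:], rcond=None)[0][1])
    return np.mean(est)

for T in (50, 200, 1000):
    print(T, avg_beta3_hat(T))   # stays well above 0.5 at every T
```

With ρ > 0 the estimate settles near (β_3 + ρ)/(1 + β_3 ρ), roughly 0.8 here, rather than 0.5; that persistent gap is the inconsistency the lecture describes.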
I'm going to go over these again, so we'll come back; we'll do the third model on Thursday in more depth. We'll start with model one and look at them one by one in more detail, but I wanted to give you these three cases and establish where we're headed on bias and consistency. Now we know we need to test for autocorrelation, because if it is present it causes problems: at the very least, even in the best case, OLS is inefficient. Oh, and all of these are inefficient too; the later models are not only biased or inconsistent but inefficient as well. The first one might look fine since the errors and x's are uncorrelated, but it is still inefficient. So there are all kinds of problems with these models once you get lagged dependent variables or autocorrelation. Okay, let's start with the simplest case; let's go back to case one, which is this, and get rid of that. Here are the three variations; you don't have to copy all of this down, but quickly: there's the base model, which is fine.
If I add the autocorrelated error, I now have a problem with C6, because the errors are correlated. If I then add y_{t−1}, I get problems because it is correlated with lagged errors, and if it is also correlated contemporaneously I have all kinds of problems. Once I add the lagged dependent variable, that's when things start to get messier, and there are two cases there: either the errors are autocorrelated or they are not. Those are the three cases we did. So there's the base case without any of this; now we're doing the simplest extension, just adding autocorrelation, which violates C6, the assumption that the errors are uncorrelated. Again, in that particular case OLS is unbiased and consistent but inefficient; in the lagged-variable cases it is biased, possibly inconsistent (you just wrote this down), and inefficient. Now let me talk about the first case a little more, because this is a big problem. Remember, in your first assignment you regressed money on income and the interest rate, or something like that, and you got a t-statistic of 120 or so; a huge t-statistic.
What happens when you have positive serial correlation? What do I mean by positive? I mean ρ > 0 here. The standard errors are too small. So look at your t-statistic, t = (β̂ − β_hypothesized)/σ̂_β̂. The numerator is fine: β̂ is unbiased, that's not a problem, and it's consistent. But the estimated standard error in the denominator is too small relative to what it should be, because it is biased. What does that do to your t's? It makes them too big. So when you see huge t-statistics, your first inclination should almost always be: I have an autocorrelation problem somewhere in my model, and it's giving me bias. You get biased estimates of the standard errors; they are too small, so the t's are too large, and the F's are too large as well. It ruins your test statistics and makes you think you have a much, much better fit than you actually have, so you're jumping up and down for joy.
"I got the best model ever," and then you correct for serial correlation and all your t's become insignificant; that can happen, trust me. This is a big problem, something we need to worry about. The other condition I really need here is that the x's themselves are positively correlated over time, trending, but that's usually the case in macroeconomics, so I'm not going to emphasize that condition. To sum up: unbiased and consistent, but inefficient; you get the wrong standard errors, the t's are biased upward, the F's are biased upward, the chi-squares are biased upward, and everything looks much better than it actually is. So what do we do?
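Here is a small simulation sketch of that standard-error problem; everything in it (ρ = 0.8, a trending regressor, a true slope of zero) is an illustrative assumption of mine, just to dramatize the huge-t warning:

```python
import numpy as np

# Sketch: true slope is zero, errors are AR(1) with rho = 0.8, and the
# regressor trends. The usual OLS standard error is then too small, so
# the nominal 5% t-test rejects far too often.
rng = np.random.default_rng(2)
T, rho, reps = 100, 0.8, 2000
x = np.arange(T) / T                      # trending regressor
xd = x - x.mean()
too_big = 0
for _ in range(reps):
    e = rng.normal(size=T)
    u = np.zeros(T)
    for t in range(1, T):
        u[t] = rho * u[t - 1] + e[t]
    y = 1.0 + u                           # true beta2 = 0
    b = (xd * (y - y.mean())).sum() / (xd**2).sum()
    resid = (y - y.mean()) - b * xd
    se = np.sqrt(resid @ resid / (T - 2) / (xd**2).sum())  # usual OLS s.e.
    too_big += abs(b / se) > 1.96
print(too_big / reps)   # far above the nominal 0.05
```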
You usually get serial correlation in these models because of partial adjustment mechanisms. Say the interest rate decreases at some point and investment increases; it usually increases over time, so the change is distributed over time and you only get a partial adjustment in the current period, some dynamic relationship. So it is very common to get, say, y_t depending on lagged y_t, or a relationship like that, purely from partial adjustment: investment today is half of what it was yesterday plus some new innovation. You are getting serial correlation, persistence in the process, just because the variable does not fully adjust within the current time period. If you look at a pricing equation and you don't have full price adjustment, you will get p_t depending on lagged p's, or an error structure like the one above. So one way to get serially correlated errors is simply through partial adjustment mechanisms, which are very common in macro.
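For the record, here is a minimal partial-adjustment derivation in the spirit of that story; the notation (a target level y_t^* and an adjustment speed λ) is mine, not the lecture's:

```latex
% Partial adjustment sketch (notation is illustrative):
% the target y_t^* depends on x_t, but only a fraction lambda
% of the gap to the target closes each period.
\begin{align*}
  y_t^*            &= \alpha + \beta x_t \\
  y_t - y_{t-1}    &= \lambda\,(y_t^* - y_{t-1}) + u_t, \qquad 0 < \lambda < 1 \\
  \Rightarrow\ y_t &= \lambda\alpha + \lambda\beta\,x_t + (1-\lambda)\,y_{t-1} + u_t
\end{align*}
```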
Because we don't think we have perfectly flexible prices or perfectly flexible GDP, we can't close a GDP gap in a day. So there is persistence, and that persistence generates these autoregressive terms. The other way you commonly get autocorrelation is if your model is misspecified. Say GDP is supposed to be in the model and I leave it out, so the term β_3·GDP_t gets folded into the error. If GDP is serially correlated, and surely it is, today's GDP depends on yesterday's GDP, you will find serially correlated errors purely from your misspecification. Similarly, if your model is actually nonlinear and you fit a linear regression to it, you will see runs of positive errors, then negative, then positive; it will look like serial correlation.
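A quick sketch of that misspecification story (the quadratic data-generating process and all numbers are my assumptions): fit a straight line to curved data and the residuals come out in long runs of one sign, so the Durbin-Watson statistic we're about to define lands far below 2 even though the true errors are white noise:

```python
import numpy as np

# Sketch: the truth is quadratic, the fitted model is linear, the true
# errors are i.i.d.; the residuals still look serially correlated.
rng = np.random.default_rng(3)
T = 100
x = np.linspace(0.0, 1.0, T)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.05 * rng.normal(size=T)

X = np.column_stack([np.ones(T), x])      # misspecified linear fit
b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b

dw = np.sum(np.diff(resid)**2) / np.sum(resid**2)
print(dw)   # close to 0: looks like strong positive serial correlation
```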
So serial correlation may genuinely be part of the model, through partial adjustment mechanisms; or it may be there because you tried to fit a nonlinear relationship with a linear model, or you left something out, you misspecified the model in some way. That makes it diagnostic: when you find serial correlation, you have to think about what kind of problem you suspect you have, whether it's a misspecification or something structural, and we'll talk more about that as we go. But mostly those are the ways serial correlation arises: through misspecification, or through these partial adjustment mechanisms that are common in macro. Your errors tend to be persistent: if you have a trend model of GDP, your errors tend to be correlated over time, so persistence in the GDP series alone gives you highly correlated errors within the model, and that persistence gives you a model like this, or a model with a lag term here; we'll talk about those differences later too. Okay, so how do you test for this?
Have you seen a number in your regression output called the Durbin-Watson statistic? It's always there; now let's use that thing. You can quickly test for autocorrelation with it: if it's close to two, you should jump up and down and say yahoo, I have no problem; if it's close to zero or close to four, problems. So let me explain the tests. We will use two: the Durbin-Watson test, which is for first-order autocorrelation, and the Breusch-Godfrey LM test for higher-order serial correlation. Let's look first at the Durbin-Watson test, which is only for first-order autocorrelation; I haven't explained that term yet, but this test won't work for anything except first order. The model we are going to use here is the simplest one, y_t = β_1 + β_2 x_{2t} + u_t, where the error follows
u_t = ρ u_{t−1} + e_t, with |ρ| < 1, which is what makes the process what we call stationary. If that's not true, the errors will explode over time, and I don't want to get into that. There are more general error processes, which we will get to later; this one is called an AR(1), a first-order autoregressive process. If instead I had u_t = ρ_1 u_{t−1} + ρ_2 u_{t−2} + ρ_3 u_{t−3} + e_t, that would be an AR(3), a third-order autoregressive process, because it has three lags. There is actually a way to write these as polynomials, a third-order polynomial in that case, and that's where the terminology comes from: they really are polynomials.
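Here is a short simulation sketch of those two processes; the ρ values are illustrative assumptions, chosen to keep both series stationary:

```python
import numpy as np

# Sketch: simulate an AR(1) and an AR(3) error process.
rng = np.random.default_rng(4)
T = 500
e = rng.normal(size=T)

u1 = np.zeros(T)            # AR(1): u_t = 0.7*u_{t-1} + e_t
for t in range(1, T):
    u1[t] = 0.7 * u1[t - 1] + e[t]

u3 = np.zeros(T)            # AR(3): u_t = 0.5u_{t-1} + 0.2u_{t-2} + 0.1u_{t-3} + e_t
for t in range(3, T):
    u3[t] = 0.5 * u3[t - 1] + 0.2 * u3[t - 2] + 0.1 * u3[t - 3] + e[t]

# first-order sample autocorrelations: both clearly positive
print(np.corrcoef(u1[1:], u1[:-1])[0, 1],
      np.corrcoef(u3[1:], u3[:-1])[0, 1])
```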
Those polynomials are in something called the lag operator, which we won't talk about at all; I guess I just did, so I didn't entirely lie. I'm mumbling, so I'm not saying anything important; ignore me. Does anyone know what a moving average process is? That's one more thing you could do: you could put lags on the e_t's. We are not going to address this at all. When you put lags on the e_t's, that's what's called an MA, or moving average, process; you can also mix the two and get what's called an ARMA, an autoregressive moving average process, which has both kinds of lags.
What we have here is an MA(0): no lags on e. If I add, say, δ_1 e_{t−1}, that's an MA(1) term; with three AR lags and one MA lag I'd have an ARMA(3,1). So there are also these moving average processes, and we are not going to deal with them; the book mentions them, so I thought I should too. Okay, here is the Durbin-Watson test. How do you do it in practice? You read it off the printout, because it's always there. But how is it actually computed? There's our model, so step one is to estimate y_t = β_1 + β_2 x_{2t} + … + β_K x_{Kt} by OLS (we'll generalize a little to K regressors), and remember that u_t follows the AR(1) process above; that's what we want to test. So our null is H_0: ρ = 0, and the alternative is H_1: ρ ≠ 0, because if ρ is zero there is no serial correlation, and if ρ is not zero, there is; if that term is not there, we are fine. So you estimate by OLS and then save the residuals û_t, which takes us to step two.
Step two is the calculation of the test statistic. The Durbin-Watson statistic is D = Σ_{t=2}^{T} (û_t − û_{t−1})² / Σ_{t=1}^{T} û_t². I'll explain why very soon, but for now just keep in mind that 0 < D < 4; that must be true, and I'll show it later. Right now I just want to give you the mechanics of the test, not the theory behind it. This is how the test is done, and the regression package computes it for you; whether you could build it yourself in EViews, I don't know, that's a little harder.
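Putting steps one and two together, a minimal sketch on simulated data (ρ = 0.6 and everything else here is an illustrative assumption):

```python
import numpy as np

# Step 1: estimate by OLS and save the residuals u_hat.
# Step 2: D = sum_{t=2..T}(u_hat_t - u_hat_{t-1})^2 / sum_t u_hat_t^2.
rng = np.random.default_rng(5)
T, rho = 200, 0.6
x = rng.normal(size=T)
e = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + e[t]
y = 1.0 + 0.5 * x + u

X = np.column_stack([np.ones(T), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
uhat = y - X @ b

D = np.sum(np.diff(uhat)**2) / np.sum(uhat**2)
print(D)   # well below 2, as expected with positive serial correlation
```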
Maybe all of you could do it in EViews; it's certainly doable, and you can imagine doing it by hand. Okay, step 3 is actually two tests, so let's get more specific. Step 3A: suppose I want to test H_0: ρ = 0 against positive serial correlation, H_1: ρ > 0; this is one-tailed. There is a table in the back of the book that gives you two values depending on n and K, where n is the number of observations and K is the number of variables (thanks for the funny look; I had those backwards). From the table you get an upper value d_U and a lower value d_L. For this test you only have to worry about the range from 0 to 2, the range of positive serial correlation. From d_U up to 2 you cannot reject: if you're close to two, you're fine, no problem. From 0 to d_L you reject: if D comes out below that lower critical value from the table, you reject. Between d_L and d_U you don't know; the test is inconclusive. So there is an inconclusive range, and when the test is inconclusive we will do an LM test. To summarize: 0 to d_L, reject; d_L to d_U, inconclusive; d_U to 2, do not reject. In practice, you take the printout, note the number of observations, go to the back of the book, get d_U and d_L, and compare the D on the printout with those values. Is this a one-sided or two-sided test? One-sided; the hypotheses are ρ = 0 against ρ > 0, one tail. Now step 3B: suppose I want to test ρ = 0 against ρ < 0; that's the other tail. You could put them together and do a two-tailed test, since sometimes you don't know whether it's positive or negative, but in economics we rarely see negative serial correlation. It rarely happens that GDP is above trend today, below trend tomorrow, above trend, below trend, alternating like that; variables just don't behave that way. Sometimes, if you look year over year... have you ever heard of a cobweb model, where you over-adjust, then over-correct, and over-adjust again? You can write models where that happens, and then, honestly, you do get negative serial correlation in the data; that happens mostly in micro data. In economics we far more often see positive serial correlation, so it's almost always the one-tailed test in 3A; we rarely run 3B, because we simply believe ρ is positive or zero and not negative.
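The 3A decision rule might look like the sketch below; the d_L and d_U numbers in the example call are placeholders, not real table values, so in practice you still look them up for your n and K:

```python
# Sketch of the one-tailed Durbin-Watson decision rule (rho = 0 vs rho > 0).
def dw_positive_test(D, dL, dU):
    if D < dL:
        return "reject: evidence of positive serial correlation"
    if D < dU:
        return "inconclusive (fall back on the LM test)"
    return "fail to reject: no evidence of positive serial correlation"

# Example with made-up critical values, for illustration only:
print(dw_positive_test(0.9, dL=1.50, dU=1.65))
```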
Someone asked why that range is 0 to 2. We'll show it in a minute, but the key fact is that D ≈ 2(1 − ρ̂): when ρ is one you get zero, when ρ is minus one you get four, and when ρ is zero you get two. So perfect positive correlation gives zero, perfect negative correlation gives four, no correlation at all gives two, and D runs from 0 to 4 with 2 in the middle.
I haven't shown you that yet, but that's the whole point. Okay, for the 3B test you do the mirror image on the range from 2 to 4, using 4 − d_U and 4 − d_L; let me check the ordering, 4 − d_L should be the bigger one, yes, I want to make sure I get that right. From 4 − d_L up to 4 you reject; from 2 up to 4 − d_U you do not reject; and in between, again, is the inconclusive range, and if it's inconclusive we have another test we can do. So across the whole line you have reject, inconclusive, fail to reject, fail to reject, inconclusive, reject. If you were doing a two-tailed test you would use all of this, but these are almost always one-tailed on the positive side. So just have it in your notes: when the Durbin-Watson test is inconclusive, we will do the LM test, which we'll discuss next. The LM test is never inconclusive; it's not as good in other respects, but it is never inconclusive, so that's how the inconclusive case gets resolved. And in case anyone was going to ask, we haven't yet talked about correcting the problem.
Fixing it is not going to be very difficult in EViews: you'll add AR(1) as one of your regressors and it will fix it; we'll get to that. Okay, so why is D between 0 and 4, and why is 2 the dividing line? We're going to need something along the way, so let me set it up; we won't prove everything yet. Remember our model has u_t = ρ u_{t−1} + e_t. Suppose I want an estimate ρ̂ of that correlation. I can estimate it as ρ̂ = Σ_{t=2}^{T} û_t û_{t−1} / Σ_{t=1}^{T} û_t², and what I hope you remember is that this can be written as the covariance of u_t and u_{t−1} over the variance of u_t. It's approximate in this case, because I'm missing a term, but basically it's true: if I multiply top and bottom by 1/(n − K), the denominator becomes a variance estimate and the numerator becomes a covariance estimate; those are just definitions you should have from your other class. This is actually a regression coefficient: if I run a regression of û_t on û_{t−1}, this ratio is the slope. Compare it with the OLS slope formula, Σ(x_i − x̄)(y_i − ȳ) / Σ(x_i − x̄)², with the means equal to zero and everything indexed by t instead of i: it's the same on both sides, covariance over variance. That's the definition we need as we go through this. Now take the Durbin-Watson statistic, D = Σ_{t=2}^{T} (û_t − û_{t−1})² / Σ û_t² (make sure you write it down), and just multiply out the numerator: D = [Σ û_t² − 2 Σ û_t û_{t−1} + Σ û_{t−1}²] / Σ û_t²; that cross term is the one coming for us. Imagine multiplying top and bottom by 1/(n − K). What is the first piece an estimate of? A variance of the residuals, an estimate of σ². The last piece is also an estimate of σ², just based on one fewer observation, because its sum runs from 2 to T rather than 1 to T; it's still an estimate of σ².
I just said that one sum goes from 2 to T and the other from 1 to T; to be careful about that, I could pull out a factor like (n − K − 1)/(n − K). What is that really close to? One; it's like 99 over 100, almost exactly one, so the approximation is good. So, piece by piece: the first sum estimates σ², the last sum estimates σ², the middle term estimates 2 times the covariance of u_t and u_{t−1}, and the denominator estimates σ². So D ≈ (σ² + σ² − 2·Cov(u_t, u_{t−1})) / σ² = 2 − 2·(covariance over variance) = 2(1 − ρ̂), because covariance over variance is exactly the ρ̂ I wrote down earlier, the one I erased; that's what I started with. This is really an asymptotic statement: these are all estimates, but as T goes to infinity the correction factor goes to one, the variance estimates go to σ², and the covariance estimate goes to the true covariance. So D ≈ 2(1 − ρ̂): if there is perfect positive correlation, ρ = 1 and D = 0; if there is no correlation, ρ = 0, and what does D equal? Two.
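A quick numerical check of that approximation, on a simulated series with an assumed ρ of 0.5:

```python
import numpy as np

# Sketch: D should be close to 2*(1 - rho_hat).
rng = np.random.default_rng(6)
T, rho = 1000, 0.5
e = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + e[t]

D = np.sum(np.diff(u)**2) / np.sum(u**2)
rho_hat = np.sum(u[1:] * u[:-1]) / np.sum(u**2)
print(D, 2 * (1 - rho_hat))   # nearly identical for large T
```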
That's why two is the no-problem point: as the statistic gets close to two, we don't have correlation, we don't have any problem, and that's why we don't reject when the statistic is close to two. There are only two t's in the statistic, t and t − 1, which is a cute way to remember it, and it actually works both ways: if there is perfect negative correlation, the rarest case, ρ = −1 and D = 4. So as we approach four we have perfect negative correlation, as we approach zero we have perfect positive correlation, and near the middle we have no problem at all. Moving away from two we go from failing to reject, to inconclusive, to rejecting once we get close enough to zero or four. So when I ask you on an exam to show that D is between 0 and 4, this is what you need to know. You say: okay, what's the statistic?
And you're done: just multiply it out and collect terms; the squared terms are variances, the cross term in t and t − 1 is the covariance. I'm not going to make you calculate the statistic in EViews; I've never seen an econometrics package that didn't spit it out as part of the normal output, and it's a bit tedious to do by hand, so there's really no reason for me to force you. That doesn't mean you shouldn't know how it's calculated; I do want you to know these things, but we're not actually going to compute it ourselves. Practically speaking, it's simple: look at the printout, look up the values at the back of the book, compare them, and draw a conclusion.
This helps you understand what you are doing and why you are doing it that way. I was hoping there would be a question, and there was one: is it 1 − ρ or 1 − ρ̂ here? Yes, technically it's ρ̂, a hat; since I'm using an approximation I wrote ρ, but technically these have hats. I've been a little lazy with my hats, so is it better to write it with the hat or without? Technically you should write it. Same with the quick algebra here: these are not true σ²'s, they are really σ̂²'s, and it's really an estimated covariance, because those come from hats. I just didn't want to pile on even more notation and confuse things.
I should have put approximation signs in there and left it at that, but as I was waving my hands I thought it would be clearer without the hats; maybe it wasn't. Technically those are estimates, and the statement is exact as T goes to infinity, which is how we should really say it: then I can use equality, the correction factor goes to one, the variance estimates go to σ², and the covariance estimate goes to the true covariance. With n large enough the approximation error is just small. Okay, I have five minutes left and I'm going to use them, but I don't want to start a new section, so let me show you one more thing about ratios like Σ(x_t − x̄)(y_t − ȳ) over Σ(x_t − x̄)², indexed by t or by i, it doesn't matter.
Anyway, multiply top and bottom by 1/(n − K). What is the numerator then? An estimate of a covariance; there are only two variables up there, X and Y, so it's an estimate of the covariance of X and Y. And the denominator is an estimate of the variance of X. When I said ρ̂ is the covariance over the variance, my head said: that has to be a regression coefficient, because I know that's what a slope is. And it turns out that, in fact, if I run the regression of û_t on û_{t−1}, the slope estimate is this ρ̂. So the ρ̂ we've been talking about is actually the coefficient of a regression of û_t on û_{t−1}; it gives us the correlation between those two variables, as if you had run that regression. A regression coefficient can always be written as a covariance over a variance.
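One last sketch confirming that numerically (simulated series, ρ = 0.4 assumed; the two denominators differ by a single end term, which is exactly the "missing a term" caveat from before):

```python
import numpy as np

# Sketch: the no-intercept slope from regressing u_t on u_{t-1}
# matches the covariance-over-variance estimate of rho.
rng = np.random.default_rng(7)
T = 500
u = rng.normal(size=T)
for t in range(1, T):
    u[t] += 0.4 * u[t - 1]        # give the series AR(1) persistence

slope = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1]**2)   # regression slope
rho_hat = np.sum(u[1:] * u[:-1]) / np.sum(u**2)      # lecture's formula
print(slope, rho_hat)             # close for large T
```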
It was supposed to take six minutes, not two. But hey, don't let me keep doing this.
