Economics 421/521 - Econometrics - Winter 2011 - Lecture 6 (HD)

Well, let's start while the screen is warming up. Today, the first thing: it was spectacularly reckless of me to try to do linear algebra in the last class. I shouldn't have done it; it only added to the confusion. Did anyone here have any idea what I was talking about? I probably could have done it in my office just as easily, so I apologize for that decision; I don't know what happened to me there. Ignore that part of the video. We're not going to do that kind of algebra again; we're going to do regular algebra, not more linear algebra. Part of the reason I say that is that, looking at your work, I realized that at this point you don't have these procedures completely written down. So the idea today is, first, to write out the procedure once again in its entirety, then go to the computer and work an example, and I encourage you to ask questions every time you don't follow a step I'm doing, so that you understand this and can do your homework. Once you can do it in EViews it will become much clearer, because you will have the procedure in your head and, somewhat inductively, you'll realize why you're doing it the way you're doing it. I think it will start to make sense, but you have to internalize the procedure first before you can start making the connections. So let's go over the procedure for correcting heteroskedasticity, and then we will demonstrate how to carry it out.
So far what we've learned is that when you have heteroskedasticity, there are problems with OLS. It's not a bias problem, it's an efficiency problem: there is something better, since OLS is no longer the most efficient estimator. So we need a way to test for it, and we came up with several tests. There's the Goldfeld-Quandt test: we divided the sample into three parts, threw away the middle, and looked for differences in the variances of the two outer groups. We analyzed some LM tests, in which model A, model B, and model C are used as the basis of the test. We also analyzed a very simple multiplicative model, sigma-squared_i equals sigma-squared times x_i squared, or something like that, as a base. And we looked at White's test. So basically there are three families of tests: the Goldfeld-Quandt test; the LM tests with the three models, maybe four if you include that simple one; and White's test. I'm not going to illustrate Goldfeld-Quandt today; we'll do White's test and the correction based on one of those three models, and the others should be relatively simple at that point. So that's the goal. Let's recall the steps for doing this, and let me use my notes so I don't write it any differently than I did before.
I think last time I only did it for part A, so let me generalize; make sure you have all the steps in your notes (you already have most of them). These are the steps to correct for heteroskedasticity. The main thing we want to do is divide by the standard deviation at each observation, so we need an estimate of the standard deviation; most of these procedures are about getting that estimate by doing a new regression. So the first step is to run the regression of y on a constant and the X's. That gives you the estimated parameters.
Those estimated parameters allow you to calculate an estimated error: u-hat_i = y_i minus beta-hat_1 minus beta-hat_2 x_2i minus ... minus beta-hat_K x_Ki. We take the estimated betas times the X's and subtract them from the y to obtain an estimate of the error. Notice also that later in the procedure we will need the predicted values of some variables. How is the predicted value obtained? Well, y is the predicted part plus the error, so if I want just the predicted values I can take y minus the residual; the program gives you the residual, so this is an easy trick.
So if I just want the fitted part, I don't have to calculate it term by term; I can get it easily from the computer: y minus the residual gives me the predicted value at each X. We'll use that again later, so make sure you understand it. I'll say this over and over: the residual is just the resid variable in the program, so it's not hard to find. Now there are three versions of step 3: 3a, 3b, and 3c. For 3a you take u-hat, square it, and run a regression on a constant and z_1 through z_P, and this would also be the basis for an LM test of heteroskedasticity.
This is where you'd compute your nR-squared test if you were actually going to test; this is the step where you do the test and see if you need to continue. We've already done the testing, we know we have heteroskedasticity, so we won't repeat it, but this is where you'd test whatever model you have. So, for 3a, you run the regression u-hat_i squared = alpha_1 + alpha_2 z_2i + ... + alpha_P z_Pi + error; you use the estimate u-hat squared as the dependent variable, and that regression gives you the alpha-hats. Now comes the step that people have trouble with; the procedure gets to this point and people get stuck.
We have to use the fitted value of this regression. I'm going to give it a name: sigma-hat squared, because what we need is sigma-hat-squared_i = alpha-hat_1 + alpha-hat_2 z_2i + ... + alpha-hat_P z_Pi. (Oops, I wrote the indices starting differently; this should be 1, and the next should be 2, just like we indexed it in the past; let me fix that rather than change our indexing.) And you get this in the program as u-hat squared minus the residual: if you run this auxiliary regression you get a residual, and just as we said, the predicted value is the dependent variable minus the residual. That gives the predicted value for this regression, and those predicted values are what we want. We want the variances to follow this model; we proposed a model for the variances, so we need the variances that the model provides us.
These are the variances predicted by the model we have imposed on the data. Many people want to use u-hat squared itself as their estimate of sigma squared, but those are not the values we want; we want the fitted values, the ones that satisfy the model we have imposed on the data. To say it again: this is a step people stumble on, so make sure you follow what we just went through. From here, you use this to get sigma-hat_i: it's just a square root, so take the square root, and then you'll divide by sigma-hat_i.
So: y_i over sigma-hat_i = beta_1 (1 over sigma-hat_i) + beta_2 (x_2i over sigma-hat_i) + ... + beta_K (x_Ki over sigma-hat_i) + u_i over sigma-hat_i; we divide the entire model by sigma-hat_i. I'm being a little careless with my writing here, but this is what we call the star model: y-star_i = beta_1 x-star_1i + beta_2 x-star_2i + ... + u-star_i. As we showed last time, the starred error u-star is homoskedastic, because we have divided by the thing that varies, and that fixes it. So that is the procedure; it's actually not that difficult.
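Since the class does this in EViews, treat the following only as a compact restatement of the board steps: a minimal Python sketch of the 3a procedure, under assumptions of mine. The array names y (dependent variable), X (constant plus regressors), and Z (constant plus the variance-model variables) are made up for illustration, with statsmodels standing in for the regression commands.

```python
# Sketch of the FGLS steps for model 3a:
# sigma_i^2 = alpha_1 + alpha_2*z_2i + ... + alpha_P*z_Pi
import numpy as np
import statsmodels.api as sm

def fgls_3a(y, X, Z):
    # Step 1: the original regression; keep the residuals u_hat
    u_hat = sm.OLS(y, X).fit().resid
    # Step 3a: auxiliary regression of u_hat^2 on the Z's
    aux = sm.OLS(u_hat**2, Z).fit()
    # The fitted values are the variances the model imposes on the data;
    # abs() guards against negative fitted values, which 3a does not rule out
    sigma_hat = np.sqrt(np.abs(aux.fittedvalues))
    # Divide everything, including the constant, by sigma_hat and rerun OLS
    y_star = y / sigma_hat
    X_star = X / sigma_hat[:, None]
    return sm.OLS(y_star, X_star).fit()  # the "star" regression; no new constant
```

If you did want the test at step 3a, the nR-squared statistic would just be `len(y) * aux.rsquared`.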
Okay then: how would it change if it were 3b? We would go back to the model where sigma itself, not sigma squared, is linear in the Z's: sigma_i = alpha_1 + alpha_2 z_2i + ... + alpha_P z_Pi. So here we take the absolute value of u-hat_i in that step. Remember, in 3a the underlying model was for sigma squared, and u-hat squared was our estimate of that; this second model is for sigma itself, and our estimate of sigma_i is the absolute value of the residual. So what changes is the regression I would run now.
I realize you have to write more than me, so I need to go slowly here. Run the original regression, take the absolute value of the residuals u-hat_i, and run a regression of that on the Z's. Now, instead of getting sigma squared, you get sigma, because the absolute value of u-hat_i is what we are using as a proxy for sigma_i; it's our proxy variable. Then we get sigma-hat_i from the fitted values of this regression, and everything else is the same once we have sigma.
Nothing else changes; that's the only difference with model B. All that changes is how you get sigma-hat_i. The procedure doesn't depend on the model beyond getting sigma-hat_i: depending on the model, how you get sigma changes, but once you have it you divide by it, and the result is BLUE. And if we knew the true sigmas we wouldn't have to do any of this; we would just divide by the actual values. But we don't know them; that's the problem. Okay, so we get sigma.
Question: when I formed it, was it the fitted value, dependent variable minus residual, or is that just this case? Sorry, yes, let me be clear: in this case the fitted value from the auxiliary regression is equal to sigma-hat_i directly, whereas in the last case, 3a, it was sigma-hat_i squared. And in the program you still take the absolute value of the residual here; you don't square it, because the model here is for sigma, not sigma squared. In 3a, squaring made them all positive; here the absolute value does that job.
Yes, there was a question about this in the lab yesterday. Be cautious here: if you want to use one uniform procedure for all three models, that's fine, but technically, in 3a you don't take the absolute value of the dependent variable because it's already positive. Now, one thing that can go wrong in these procedures, and I should have put this step in earlier: it is possible for the fitted values to be negative. Nothing restricts the prediction from the auxiliary regression to be positive. Most of the time the fitted values will be positive, but they may be negative, so this is the step where you often want to take the absolute value.
You may have done it here just to be sure you don't have a negative variance; if so, that's exactly what you want to do, and I left it out, so let me add that small complication now rather than later. With 3c, one of the advantages of the third model, which we haven't talked about yet, is that you will never get a negative variance; negativity is only a problem with models A and B. Model C will never give you a negative variance. So it's a good idea to routinely take the absolute value here, although it's not always necessary at this step.
These fitted values can be negative, so one problem is negative predictions; use the absolute value. You probably can't read my writing up there, but it says: this prediction could be less than zero, so use the absolute value. You may have taken the absolute value where you don't need it, and not taken it where you do need it; that's why I'm giving the whole class this advice. Now let's do model C. Let's see what we want here: we want the log of u-hat_i squared. In this case, what you regress is log(u-hat_i squared), and what you're estimating is the log of sigma squared; that's what you get when you take the log of u-hat_i squared. I'm erasing the board as I go, so let me give the note-takers a moment.
Okay, a little chance to catch up. Here's what you do in this case: you use this to get sigma-hat_i, but it's a little different now. Sigma-hat_i is the square root of e raised to the fitted value; let me find it in my notes.
The model is sigma-squared_i = e^(alpha_1 + alpha_2 z_2i + ... + alpha_P z_Pi), so to get sigma-hat_i we have to bring in the exponential. What you do is obtain the predicted value of log(u-hat squared): fitted value equals dependent variable minus residual, the same trick as before. That difference is the fitted right-hand side here, and notice that what you get from it is not sigma squared itself; it's an estimate of the log of sigma-hat squared. Technically, you're getting an estimate of log(sigma squared). Okay, people, am I communicating or not? Let's stop at this step: we run the regression of log(u-hat squared) on a constant and the Z's, so we get log(u-hat squared) = alpha_1 + alpha_2 z_2 + ... plus an error.
So run that regression right there, and now we need the predicted value of log(u-hat squared), the predicted value from this model; that's what this step gives us. To recap the sequence: I run the original regression, I form log(u-hat squared), and I run the secondary regression, the auxiliary regression as it's called. Now I need the predicted value, so I take log(u-hat squared) minus the residual of the auxiliary regression, and that gives me the predicted values. Run the original model, get its residuals, take the log of the squared residuals from the original regression, regress that on all your Z's, then take the predicted value of that regression, which is an estimate of log(sigma-hat-squared_i). With that estimate we have to get back to sigma-hat. If I have an estimate of log(sigma squared), I raise e to it, e^(alpha-hat_1 + alpha-hat_2 z_2 + ...), and then I take the square root of that to get sigma-hat. The hard part in this case is untangling the regression to get back to sigma-hat, and once you have sigma-hat, you make your star variables and that's it: run your regression. The hard part here is getting sigma-hat.
We have sigma-hat squared, so just take the square root. Maybe I should do this as example two as we go through them. The good thing about this model is that the fitted variances will always be positive; you will never get a negative variance, because e to anything is positive. How can I clarify this further? I think we just have to start doing it; that's when we'll see how to do it. So let's do it on the computer, because I think you'll see how everything fits together very simply. I'm going to work a problem exactly parallel to your homework.
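Before the live demo, here is the same kind of sketch for model 3c, under the same made-up y, X, Z assumptions as before; the only new steps are the log on the way in and the exponential on the way back out.

```python
# Sketch of model 3c (multiplicative heteroskedasticity):
# regress log(u_hat^2) on the Z's, then exponentiate the fitted values.
import numpy as np
import statsmodels.api as sm

def fgls_3c(y, X, Z):
    u_hat = sm.OLS(y, X).fit().resid
    log_u2 = np.log(u_hat**2)                    # proxy for log(sigma_i^2)
    log_sigma2_hat = sm.OLS(log_u2, Z).fit().fittedvalues
    sigma_hat = np.sqrt(np.exp(log_sigma2_hat))  # e to the fitted value, then square root
    return sm.OLS(y / sigma_hat, X / sigma_hat[:, None]).fit()
```

No absolute value is needed here: the exponential is always positive, which is the advantage mentioned above.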
You have model 3a on the homework; I'll do model 3b, and if, when we're done with 3b, you're still looking at me with the faces I'm seeing now, we'll move on to 3c and see how it goes. I suspect we'll end up doing it, but we'll see. And I should issue my typical warning: I'm about to go live here, and we'll see what happens, so I might make a mistake. Let's hope the projector comes back on. I'm going to erase all of this and rewrite it as we go along, so you have the steps alongside the computer work.
This all looks confusing, a lot of scribbling, and I think too scary, so let's clear it off. Okay, I already opened EViews... and now the projector is off. What is it with this thing? I just turned it back on, and it won't turn on; this is exactly what you expect to happen. I think after you turn it off, it won't let you turn it back on for a few seconds, because it doesn't want to blow that expensive bulb. Okay, while we wait, let me start over on the board. If you remember, we have a data set with salary and years, and our model is log(salary_i) = beta_1 + beta_2 years_i + beta_3 years_i squared + u_i, and the data set runs from observation 1 to 222. Now I'm going to model the variance; here's the variance model.
This is model B, where sigma_i = alpha_1 + alpha_2 years_i + alpha_3 years_i squared. So to estimate this I take the absolute value of the residuals: my estimate of sigma_i will be the absolute value of u-hat_i. Very good, I have connected the projector, surprise. Okay, here we go: in EViews, create a new workfile, undated, observations 1 to 222. I need to import the data set: import from Excel, go to the correct directory for problem three, open, and it's salary and years; the order looks right.
Now let's generate the log of salary, so log salary equals log(salary), and generate years squared. Now, before we run this first regression, let me show you something. Do a quick estimate and look at the Options: the first option says heteroskedasticity-consistent coefficient covariance, with White listed. Check that box, run the regression, and you've done the White correction; you're done correcting for heteroskedasticity. The White correction is that easy. But I want you to do the correction yourself the first time; after that, you can use that option. On the homework you can't just check the box; you have to make the corrections yourself. There's also a quick way to do the testing, which I'll show you in a moment.
If all we were doing was the White correction, I would just check that box, run the regression, and we would be done; the White fixes are pretty easy. But we're not going to do that. What I want to do here is run log salary on a constant, years, and years squared. Now I need to calculate the absolute value of the residuals from this regression. If you're doing 3a at this point, you'd square the residuals; I need the absolute value, so I'm going to say: generate absuhat equal to the absolute value of resid.
Now they should all be positive, even though the residuals themselves are sometimes negative; I took the absolute value because standard deviations cannot be negative. Now I can run the auxiliary regression. Everyone with me? Before I do, let me show everyone the quick testing option: you can do White's test very quickly here. After you run your regression, it's under View, Residual Tests, White heteroskedasticity test. (I don't use this program in my day job, so I don't know where all the amazing features are.) You press that button after you run your regression and you get the White test for heteroskedasticity; then you could go back, rerun the regression checking that covariance box, and you've corrected it. So the White approach is pretty simple. There it is: there's the test, there's your nR-squared, the number of observations times R-squared, and the significance level; everything works. So on your homework you can check your results pretty easily. Okay, let's go back to where we were; I hope I didn't mess things up by doing that.
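For those following along outside EViews, both conveniences, the White test and White standard errors, have rough statsmodels counterparts. This is a sketch assuming the same y and X arrays as before, with X containing the constant, years, and years squared.

```python
# White's test and White (heteroskedasticity-consistent) standard errors
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

ols = sm.OLS(y, X).fit()
lm_stat, lm_pval, f_stat, f_pval = het_white(ols.resid, X)
print(f"White test: nR^2 = {lm_stat:.2f}, p-value = {lm_pval:.4f}")

# Same betas, corrected standard errors: the checkbox from the Options screen
robust = sm.OLS(y, X).fit(cov_type="HC1")
print(robust.summary())
```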
I don't think I did. Okay, I need to do a quick estimate of this regression: the absolute value of u-hat on a constant, years, and years squared. So I'm running the auxiliary regression right here. Now resid holds the errors from the regression I just ran. This is the part I said seems to confuse people, the part to be careful with: now I need the predicted value, the fitted value from this auxiliary regression. I need sigma. I used the absolute value of u-hat as a proxy for sigma, but right now I need sigma-hat_i itself, so let me call it sigma and say: generate sigma equal to absuhat minus resid. Is that correct?
Yes. So here's what I'm doing: we just ran the regression |u-hat_i| = alpha_1 + alpha_2 years_i + alpha_3 years_i squared + error, and I'm taking the dependent variable, absuhat, minus the residual. The reason for this equation is that it gets the predicted value from the model. Once I have sigma, I'm ready to form my star variables and run my regression, so now I just need y-star, x1-star, x2-star, x3-star. So I'll say: generate logsalarystar equal to logsalary divided by sigma. Next I need the constant star, which is 1 divided by sigma, because the constant term gets divided by sigma too; so I just generated that variable.
I just got that variable: 1 over sigma-hat. Now I'm going to do the same for years. (I'm not going to click that other thing; I have no idea what it does, and I'm afraid to find out in front of a class.) So: years star is years divided by sigma-hat, and finally years squared star is years squared divided by sigma-hat. And now the last step: run the stars on the stars. Quick estimate: logsalarystar on constantstar, yearsstar, and yearssqstar, and no intercept, because we already divided the constant by sigma; don't put one in. Run that regression, and those are the corrected values right there: the heteroskedasticity-corrected, BLUE estimates.
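A side note for the Python followers: forming the star variables by hand, as above, gives exactly the same coefficients as weighted least squares with weights 1/sigma-hat squared, so a one-line check of the board procedure might look like this (sigma_hat as in the earlier sketches):

```python
import statsmodels.api as sm

# WLS with weights 1/sigma_hat^2 is the star regression in one step
wls = sm.WLS(y, X, weights=1.0 / sigma_hat**2).fit()
print(wls.params)  # should match the OLS coefficients on the starred variables
```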
Okay, I'm seeing nods; that's what I want to see. Let's do model C for fun. First a recap: the model I used here was model B, where sigma_i = alpha_1 + alpha_2 z_2i + ... + alpha_P z_Pi, and the Z's were just years and years squared. So we ran that model; now the other model. Let me do this one just to reinforce it: I'll take it up to the point where I get sigma-hat, and I'll let you form the stars and run the regression.
As we form the star variables, note that the coefficients are the same old coefficients. By dividing by sigma, we haven't changed the coefficients; we are estimating the same slope parameters as before. The interpretation changes slightly, because what you mean by a unit change is now standardized, but the coefficient itself won't change, because you standardized both sides. It's like changing feet to inches: if you change feet to inches on one side only, you'll change the beta, multiplying or dividing it by 12; but if you divide both sides by 12 to go from feet to inches, you won't change the coefficient at all. Basically you're just changing the units of measurement, observation by observation, when you divide. That's essentially the intuition for why you get the same coefficients.
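Written out once, the point is just that the same betas appear on both sides of the transformation:

```latex
y_i = \beta_1 + \beta_2 x_{2i} + u_i
\quad\Longrightarrow\quad
\frac{y_i}{\sigma_i} = \beta_1\frac{1}{\sigma_i} + \beta_2\frac{x_{2i}}{\sigma_i} + \frac{u_i}{\sigma_i}
```

Dividing every term by sigma_i rescales the variables carrying the betas, but leaves the betas themselves untouched.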
Hmm, I need the residuals again; resid only holds the residuals from the most recent regression, and I needed to save them as a named series from the first regression, which I didn't do. So I'll have to rerun the first regression; let's rerun it very quickly. I have it here somewhere. There's no need to retype it; I just don't have the residuals anymore.
We just did the quick estimate again: log salary on a constant, years, and years squared. Now resid holds the right residuals; the first one is 0.046187, which matches the absolute value we had, four six one eight seven. Great. Now I need to square them and then take the log: generate usq equal to resid squared, and then generate loguhatsq equal to the log of usq; and I should say, in this package the natural log function is log. So now I have log(u-hat squared), which is an estimate, a proxy, for log(sigma squared), which is what I really want.
Now I run this regression: log(u-hat squared) on a constant, years, and years squared; I'm running that right here. Okay. Now I need the predicted values. What I need is an estimate of log(sigma squared), and the fitted value will be alpha-hat_1 + alpha-hat_2 years_i + alpha-hat_3 years_i squared. To get the predicted value (there's probably an easier way to do it; this is the way I've been doing it because it makes you think about what's happening), generate logsigmasqhat equal to loguhatsq minus resid. I'm using that trick again, taking the dependent variable minus the residual to get the predicted value, like we said before. You're saying I typed it wrong?
Oh, thanks; it should be the u-hat series there. We'll find out in a second if it worked. It's okay now. I have logsigmasqhat, which is what I want; I just formed that variable. Next I need sigma-hat squared, and the key to that is undoing the log: sigma-hat squared is e raised to this series, so I'll use the exp function. I didn't plan to write this line, but: generate sigmasqhat equal to exp(logsigmasqhat). So now I have sigma-hat squared. And how do I get sigma-hat?
Take the square root, and then I would divide by sigma-hat and run the stars. Now, you look bored, which is either a good sign for once, or you've lost all hope of understanding this, which is a bad sign. I hope it's a good sign; everyone with a computer open is following along in EViews, okay. Questions: I forgot to do something at one step. Any idea what it was, on the previous model? Right: taking the absolute value of the fitted sigmas to make sure they're all positive. I don't think any came out negative here, and remember, with model C the fitted variances can't be negative anyway. And now the projector is basically all turned off again.
Yeah, right, it's no fun turning that thing back on. This keeps me from playing on the Internet; otherwise I'd be reading my email during class. Okay, so that's pretty much it for the corrections. Before we move on to chapter 12: any questions about heteroskedasticity? Okay, let's move on to time series data and then autocorrelation. Chapter 12 is mainly about autocorrelation, which is another violation of our basic Gauss-Markov assumptions. One of the assumptions we made was that the variances were constant, and the last chapter was about what happens when they're not.
The answer: you divide by an estimate of the standard deviation, which makes the variances constant. The hard part is getting that estimate, but once you have it in your hands, the solution is very easy: you just divide and run the regression. The hard part is getting sigma-hat. So we know what to do in that case. Now, another assumption we have is that our errors are not correlated over time, which is often a very, very bad assumption, especially in economics. Often, when you are above trend, say GDP is higher than normal, you can expect that next quarter you'll be above trend as well; your error tomorrow is related to your error today, the same way today's temperature is related to tomorrow's. There's persistence, so often the errors will come in runs: the patterns will basically follow some kind of path where you're above normal for a while, then below normal for a while, above, below, and so on. The point is that this error and this error are not independent. If I know it's 10 degrees warmer than normal today, that tells me something about tomorrow. If I know the economy is in a big recession, it's unlikely that we'll have full employment tomorrow; it's unlikely that 9.6 percent unemployment will turn into, you know, 6 percent next month. Even though initial unemployment claims came in at a level that is very good news relative to the recent past, that still isn't enough to get us out of this recession.
New unemployment claims data came out today. We'd like to see claims below 400,000; when they get to around 400,000, that's generally when labor markets are creating jobs rather than losing them. They've been around four or five hundred thousand, even higher at points through the recession, and lately, in the last two or three weeks, they've started to come down; today they were four hundred and five thousand, the first time they've been in that range, and we don't suspect it's a data issue. So let's hope that by next spring, once claims are down around 400,000, the economy is creating more jobs than it's losing; that would be good news. But the data tend to move in these wave patterns. Anyway, what do we do about it?
The problem is that OLS treats all these errors as equally important: each one as if it were completely new information you didn't have before. But it's not completely new information; much of it you already knew. If you were above the trend yesterday, you know you'll likely be above trend today, so the fact that you observe a big error doesn't tell you much that's new. OLS gets this wrong because it doesn't know the errors are correlated over time, so it treats each error as equally informative when, in fact, you're not adding much new information with each new error, because you already knew much of it. A common model is u_t = rho u_{t-1} + e_t. Suppose rho is 0.8: that means 80 percent of today's error carries over from yesterday's. There's persistence, 0.8 of yesterday's error, and then there's just a little bit of innovation, so randomness moves you and you don't get exactly the same observation. Most of today's error is just decayed old information; e_t is the new information, and that's what we want to isolate.
Once we have isolated the new information, least squares does things right; but if we base our minimization of the sum of squared u's on the assumption that all those u's are independent when they're not, we'll get the wrong answer. So OLS makes a bad assumption here, and it leads to bad inferences. Now, I'm worried I'll run out of time before I get to a part of this that bears on your homework, so I'm going to jump out of the order of my notes, tell you one thing, and then back up to the consequences of ignoring serial correlation. This is about next week's assignment, due Tuesday. Suppose you have serial correlation; the other name I'll use a lot is autocorrelation, the error correlated with itself. The errors are correlated through the relationship u_t = rho u_{t-1} + e_t, and you can actually write out the covariance matrix: the variance of each u_t is sigma-squared_e over (1 minus rho squared), and the correlations are powers of rho, with rho, rho squared, and so on fanning out toward the corners. Now suppose you have serial correlation and you run OLS anyway. One of the properties is that the coefficients remain unbiased and consistent, but not if there is a lagged dependent variable.
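Written out, the covariance matrix just described, for u_t = rho u_{t-1} + e_t with |rho| < 1, is:

```latex
\operatorname{Var}(u) \;=\; \frac{\sigma_e^2}{1-\rho^2}
\begin{pmatrix}
1 & \rho & \rho^2 & \cdots & \rho^{T-1}\\
\rho & 1 & \rho & \cdots & \rho^{T-2}\\
\vdots & \vdots & \ddots & & \vdots\\
\rho^{T-1} & \rho^{T-2} & \cdots & \rho & 1
\end{pmatrix}
```

The rho's are the correlations, decaying by powers as you move out toward the corners.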
I'm going to start using t subscripts now, because we're in time series data rather than cross-section data, and I'll show you in a moment why this is true. We're saying there's an exception here: if I run y_t = beta_1 + beta_2 y_{t-1} + beta_3 x_t + u_t, with y appearing lagged on the right-hand side, you no longer have unbiased estimators, and whether they are still consistent depends; that's a problem we'll explain later. As long as the lagged dependent variable is not there, OLS is still unbiased and consistent, but it is inefficient throughout. Even when it's unbiased, this is like heteroskedasticity: it's inefficient.
You'll get it right on average, so the coefficients are fine in terms of not being biased or inconsistent, but they are inefficient, and the estimated variance will generally be too small. So there is a better estimator; there is a BLUE estimator when OLS isn't it, and it will again be a GLS estimator, though this time we won't do it with linear algebra. And the third thing is that the standard errors are incorrect: the standard errors of the betas, the things you use to get the t-statistics and the F-statistics, are wrong. And often, in economics, there is a downward bias when you have what we call positive serial correlation, that is, when today's error is positively related to yesterday's. So in that model, if u_t = rho u_{t-1} + e_t and rho is between 0 and 1, the standard errors are usually too small. That's also the story behind the t-statistic of 129.
On the regression in the first assignment, I said: don't worry about the t-statistic, we'll talk about that later. You should have gotten a huge t-statistic, around 129. The reason you got such a big t-statistic is serial correlation in that model that you haven't corrected for. The estimated standard errors shrink a lot, and when you calculate t, the estimated coefficient minus the hypothesized value over the standard error, the denominator is too small when there's positive serial correlation, so the t-statistic is too large. Serial correlation makes you think a variable matters when it may not. So the standard errors are biased downward, too small, when you have positive serial correlation. Now, it is possible to have negative serial correlation, and then the standard errors are too large, but we don't see that very often in economics. In general, when you're above the trend you stay above the trend; you don't get that back-and-forth flipping behavior. So most of the time we have positive serial correlation, which shrinks the standard errors and ruins inference: everything seems significant when in reality it may not be, so you think variables matter when they don't, your t-statistics are too big, and you make bad inferences.
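A small simulation makes the inflated-t point concrete. This is my own illustrative sketch, not from the lecture: x and y are built to be unrelated (true slope zero), the errors are AR(1) with rho = 0.8, and the naive OLS t-test rejects far more often than the nominal 5 percent.

```python
import numpy as np

rng = np.random.default_rng(0)
T, rho, reps, rejections = 100, 0.8, 2000, 0
for _ in range(reps):
    e = rng.standard_normal(T)
    u = np.zeros(T)
    for t in range(1, T):
        u[t] = rho * u[t - 1] + e[t]       # AR(1) errors, positive serial correlation
    x = np.cumsum(rng.standard_normal(T))  # a persistent regressor, unrelated to u
    y = u                                  # true slope on x is zero
    X = np.column_stack([np.ones(T), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (T - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    rejections += abs(b[1]) / se > 1.96    # naive OLS t-test at the 5% level
print(f"Rejection rate: {rejections / reps:.2f}")  # well above the nominal 0.05
```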
Okay, now our next goal is to understand why you have this problem and why you are biased in the lagged-dependent-variable case, so let's back up. (I guess I forgot that assignment wasn't due yet; I was thinking it was the one due immediately.) The chapter starts by restating the assumptions under which the estimators are BLUE; we wrote them on the first day. The first ones don't need discussion now; nothing new there. The one to look at says the X's are nonstochastic: they are not random variables; the things on the right-hand side are chosen by the experimenter. We talked about this on the first day. But look at the example I just wrote: this y_{t-1} is a random variable, right there on the right-hand side. We say it's predetermined, because the dice were rolled in the past, but it is a random variable. And this type of model is very, very common; you see it in time series all the time, in macroeconomics all the time: GDP today is related to GDP yesterday, unemployment today to unemployment yesterday, interest rates, all those kinds of things. The fact that we have that lagged y there tells us that we have random variables on the right-hand side, so we can no longer maintain this assumption.
That alone, and I'm going to skip a part here, doesn't cause big problems on its own. There are some convergence conditions and technical requirements, but for the most part they are pretty broad conditions. The fact that a regressor is random is not, by itself, a problem, as long as it's not correlated with the error u; then it will be fine. So on its own, that's generally not where the problem will be. Linearity: no change there. The error has mean zero: still no change; that's not the problem here. Assumption five was homoskedastic errors; that was last chapter, so that's covered. Now we're heading to the new problem. Before, we said that u_t and u_s are independent when t is not equal to s: the errors are independent. That was our assumption, and it's generally not a problem with cross-sectional data. If at a point in time I'm measuring all the houses in the city, or all the house prices in the nation, that's what we call cross-sectional data; there is something called spatial autocorrelation that can affect such data, but for the most part this is not a problem in the cross-sectional data we've been using so far. It is a problem in time series, and a very, very common one.
We've explained why today's error is related to yesterday's, and so a common model that we will use a lot is u_t = rho u_{t-1} + e_t, and you need the absolute value of rho to be less than 1 to keep that thing from exploding, to make it a stationary process, as we'll see. So that's going to be a big problem we'll have to deal with. The next assumption is also a big problem in time series data: that the u's and the X's are uncorrelated.
Let me give you an example. I will talk about this in more detail later, but just to show you: suppose y_t = beta_1 + beta_2 y_{t-1} + beta_3 x_t + u_t and u_t = rho u_{t-1} + e_t, so we have correlated errors. What we're saying is that y_{t-1} and u_{t-1} are correlated: yesterday's y contains yesterday's error. So if I write today's error as rho u_{t-1} + e_t, then the regressor y_{t-1} and the error term u_t are correlated, because both contain u_{t-1}. Clearly there is a correlation, and that's a problem: it means our estimates are biased. Whenever you have correlation of the X's with the errors, the OLS estimates are all biased, and that is a big, big problem.
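Here is a sketch of the bias in the lagged-dependent-variable case just described; again my own illustration, dropping the x_t term for simplicity. With beta_2 = 0.5 and rho = 0.8, the OLS estimates center well above the true 0.5.

```python
import numpy as np

rng = np.random.default_rng(1)
T, reps, beta2, rho = 200, 2000, 0.5, 0.8
estimates = []
for _ in range(reps):
    e = rng.standard_normal(T)
    u = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        u[t] = rho * u[t - 1] + e[t]    # u_t = rho*u_{t-1} + e_t
        y[t] = beta2 * y[t - 1] + u[t]  # y_t = beta_2*y_{t-1} + u_t
    X = np.column_stack([np.ones(T - 1), y[:-1]])  # regress y_t on constant, y_{t-1}
    b, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    estimates.append(b[1])
print(f"Mean OLS estimate of beta_2: {np.mean(estimates):.3f} (true value {beta2})")
```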
Let me do the next assumptions real quick, because they're not a problem. The last one, which I numbered 10 on the first day because the book I had listed eight assumptions, is that the errors are normally distributed. That's for testing; you don't really need it for Gauss-Markov, and I'm not sure why it's in that list, but we'll just go ahead and assume it. That's not the problem here; it's not the distribution of the errors that's causing us trouble. Next time I'll show you why we have bias with the lagged dependent variable. Do you feel more confident about the material now? Good, this helped. I can't figure out what you don't know without getting feedback, so please let me know what I need to do to help you.
