Economics 421/521 - Econometrics - Winter 2011 - Lecture 2 (HD)

As you can see here, I posted assignment 1 on the web page. I gave a bit of an extension for the first one, but it's essentially the same as last year's, so if you pulled out last year's homework and used it as a guide they would be nearly identical. That's okay, but in the future, right after class on Tuesday, I'm going to post the assignment for the next week, and it will be discussed in the lab that week; maybe some examples will show you things related to the problems you're solving, to help you do them, especially once we get into more complicated techniques. They're always due the next week in the lab, and I'll take it from there. This assignment should be things you've already done: some hypothesis testing, a short answer on the Gauss-Markov theorem we talked about last time, and then an estimation problem. Oh, and I have the wrong browser open again, but I also posted the video from the first class, and that took a little longer than usual.
I'll try to finish them by the next day. It takes 6 to 12 hours to process a video after class: I have to do a bunch of things, upload it to YouTube, and then once it gets to YouTube it takes them a couple of hours to reprocess it, so I can't finish them right after class, but usually by the next morning they're all done and ready to go, so that will work for us. Let's see, just go there... sorry, I keep forgetting this is a 64-bit system and there's no 64-bit YouTube or 64-bit Flash; anyway, that's how it is. The video is in high definition, so if you really want to go crazy you can put it in higher definition. It sometimes takes a lot of processing, but it should be available at 1920, so if you put it in full screen it should clear up; you want to be able to see everything if you can't see it in the smaller versions. If for some reason it's too wide for your web page, just click through to YouTube and you can watch it there at the right size. One thing people like to do with these videos, especially people with language difficulties, is that once the whole video has loaded you can scroll through it and use your notes to figure out what you're looking for: just find the place in the lecture where you had trouble. A lot of people watch that part over and over again until they understand it, or go back to something they missed in class, so you don't have to watch the whole thing. And that brings up something I forgot to talk about last time with these videos.

I really hope you use them as a complement and not a substitute for coming to class; they're meant to fill in the gaps. I've been doing this for three or four years. There are big costs and big benefits to these videos. My students are doing better than ever, but the incentives are kind of bad. In 470 I solved it in part by doing homework in class, so that people had to show up at least once a week, and by finding ways around people who would grab the assignments and run off or give them to their friends, that kind of thing. It works here, though; there's no such thing in this class. But if people start taking advantage of the posted videos, trying to just watch them at home and skip class, I'll probably stop
doing it, because that's not really the way I want to run this, and I hope you don't use the videos as a substitute for coming to class. But you know, sometimes people have a doctor's appointment, some people are traveling, people have these problems getting here, and for that the videos are great: you can fill in the gaps. I also have students who never come anyway and never did before these videos, and those students do much better now because they get the material in a way they never got it before. The best students in the class get resentful, because they came to class, they did the work, they got up in the morning or whatever, and sometimes I think it's unfair, and I don't have a good answer for you about what makes sense. But my goal is to maximize learning and to distinguish who knows the material from who doesn't, and in the end how you learn it is not that important, so there is that argument about maximizing learning. Honestly, though, the people who skip class may have a hard time getting a C or a B; if they don't come to class there's a reason for it, and it's usually a sign of deeper educational problems, so if you're a top performer you shouldn't worry about being taken advantage of that way. Anyway, I hope it's generally okay, and that's enough of that.
Today we are going to talk about hypothesis testing, which should be mostly review, but not entirely. I need to wrap up a little bit from last time and then we'll move on to hypothesis testing. A lot of people teach this differently. If you take this class at UC San Diego, you won't hear about t and F tests; there you only hear about Wald tests and other types of tests, they don't believe in t and F tests there. So if you took the class there, you wouldn't even know what a t or F test is, which is why I'm going to go over a bit of what t's and F's are and how to do them.
Okay, so let's finish up from last time before we continue to hypothesis testing; there are two things I didn't cover last time. One is a pretty simple assumption. These are still the assumptions that make the Gauss-Markov theorem work, and this assumption is that the regression model is correctly specified. This class is actually mostly about misspecified models: how to detect them, how to fix them, what to do about them, what the consequences are, all that kind of stuff. But one assumption behind the Gauss-Markov theorem is that you have the right model and there are no measurement problems; if you're measuring people's income or wealth, you're measuring it exactly right, all that kind of thing. If that's not true, then we're going to have problems with the models.
As you've seen, omitted variables can cause bias. Including too many variables is not that big of a problem: you're probably better off including irrelevant variables than leaving out variables that belong in the model. So if you're going to make a mistake, you probably want to add too much rather than too little, because the consequence is just that the power of your tests is reduced, rather than your estimates being biased. You've already looked at some of these specification questions, and we're going to look at a lot more, but for now we simply take as one of our assumptions that the model is correctly specified.
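To make that trade-off concrete, here is a minimal Python sketch, not from the lecture, using simulated data and made-up names and numbers; it only illustrates the point about omitted versus irrelevant variables.

```python
# Sketch (not from the lecture): omitted-variable bias vs. an irrelevant regressor,
# using simulated data. All names and numbers are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
x2 = rng.normal(size=n)
x3 = 0.8 * x2 + rng.normal(size=n)          # x3 belongs in the model and is correlated with x2
y = 1.0 + 2.0 * x2 + 1.5 * x3 + rng.normal(size=n)

# Omitting x3 (a relevant, correlated variable) biases the coefficient on x2 ...
omit_fit = sm.OLS(y, sm.add_constant(x2)).fit()
print(omit_fit.params[1])                   # roughly 2 + 1.5*0.8 = 3.2, not 2: biased

# ... while adding an irrelevant regressor leaves the estimates unbiased;
# the only cost is somewhat less precision (slightly larger standard errors).
x_irrelevant = rng.normal(size=n)
full_fit = sm.OLS(y, sm.add_constant(np.column_stack([x2, x3, x_irrelevant]))).fit()
print(full_fit.params[1:3])                 # close to (2, 1.5): unbiased
```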
Well, the last assumption is that there is no perfect multicollinearity: each X has to add new information to the model. What is perfect multicollinearity? It's when a variable is an exact linear combination of other variables. Say X2 is just three times X3 (the exact numbers, and whether a constant is involved, aren't important here). Then X2 doesn't really add any new information to the model: if you were to throw it in, everything it could possibly tell you is already in X3, so it's completely redundant. Now, what happens in OLS if you have Y equals beta 1 plus beta 2 X2 plus... sorry, I'm still distracted by last night. I was pulling out of a parking lot at La Cabeza, you know, where you can park under the building, and a guy on a skateboard came down the hill and completely wiped out the side of my truck, dinged up the door.
I guess I'm still a little irritated that he doesn't have the money to pay for it; that's going to be fun. So, perfect multicollinearity. Let's say your model is Y equals beta 1 plus beta 2 X2 plus beta 3 X3, and we have perfect multicollinearity. What's the intuition for what goes wrong? Mathematically you get a singular matrix somewhere, but what's the intuition? Exactly; it's what we talked about last time. OLS looks for a movement in X2 all by itself, with nothing else moving: this goes up by one, Y goes up by beta 2. Then it finds a movement in X3 alone and sees how Y moves, and it finds all of those unique movements in the data. That's the intuition, and if those movements are there, OLS works fine.

But if X2 moves perfectly with X3, OLS says: oh man, these two only ever go up together, they never go up individually, so when they both go up and Y goes up, do I assign the effect to this variable or to that one? OLS just throws up its hands and says, I don't know, I can't solve this, there is absolutely no possible way to solve it, and it doesn't work at all; you don't get a model, because you can't tell the variables apart.

Now, if the relationship isn't exact, you're fine, because sometimes when X2 moves it moves by itself while X3 stays fixed. With imperfect correlation, rather than perfect multicollinearity, there will be some unique movements in this variable and some in that one, and as long as you have some unique movement you'll be okay. So basically, once you get away from an exact relationship, OLS can still work. But if, say, the variables have a 90% correlation, then only 10% of the time do they move independently; it's like having one tenth as much data. Imperfect multicollinearity is really a small-sample problem: it doesn't make the thing collapse, it doesn't make the procedure not work, but it's like having almost no observations, because you have so little independent variation, and the consequence is really high standard errors on your betas. The solution in that case: there are rules of thumb, like dropping a variable if the correlations exceed a certain amount, but the best solution is simply to get more data if you can. If you get ten times more data with that 90% correlation (this is very approximate, it doesn't work exactly like that), then you have that many more independent variations to help you tell the effects apart. The more data you have, the more independent variation you have, and the easier it is to separate them. So multicollinearity is basically a small-sample problem; when it becomes perfect, it's more than a small-sample problem, the procedure totally collapses. Okay, so we have to assume there is no perfect multicollinearity.
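Here is a small Python sketch of that intuition, again with simulated data and invented numbers; it only illustrates the difference between perfect multicollinearity and merely high correlation.

```python
# Sketch (illustrative only): perfect vs. near-perfect multicollinearity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x2 + rng.normal(size=n)

# Perfect multicollinearity: x3 is an exact linear combination of x2, so the
# regressor matrix loses rank (a singular X'X) and OLS cannot separate the betas.
x3_exact = 3.0 * x2
X_exact = sm.add_constant(np.column_stack([x2, x3_exact]))
print(np.linalg.matrix_rank(X_exact))       # 2 instead of 3: the procedure breaks down

# Imperfect (but very high) multicollinearity: OLS still runs, but the tiny amount
# of independent variation left makes the standard errors on both slopes blow up.
x3_noisy = 3.0 * x2 + 0.05 * rng.normal(size=n)
X_noisy = sm.add_constant(np.column_stack([x2, x3_noisy]))
print(sm.OLS(y, X_noisy).fit().bse)         # very large standard errors on the slopes
```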
All right, let's switch to hypothesis testing. Jeremy is a Bayesian econometrician; he probably bit his tongue and taught you classical statistics, and that's what we're going to do too, but it's not what Jeremy usually does. He's more of what we call a Bayesian, which is a different way of looking at these problems, though he may not have talked about that. We are going to take the classical approach, so let me write out the classical steps of hypothesis testing. The first step is to formulate two competing hypotheses, such as beta equals 0 versus beta not equal to 0, beta equals 11 versus beta not equal to 11, or beta equals minus 7 versus beta greater than minus 7. Two competing hypotheses are all you need. Our goal is to find a rule that tells us which of those two hypotheses we should favor, which of the two the data point to, and for that we need to derive a test statistic and its sampling distribution. What you want in a test statistic is something that under the null is either very small or very large; it doesn't matter which.
You just need something like that, where under the alternative you get the opposite. Say you want something that is very small under the null hypothesis and gets bigger and bigger as the alternative becomes more and more true; what you want is a statistic that discriminates between the two hypotheses, so that if H0 is true you get a small number. We'll do one like this later: we look at the restricted sum of squared residuals minus the unrestricted sum of squared residuals. Remember that for the F test you have a restricted model and an unrestricted model, and if the restrictions are true, that is, if the null is true, those two sums of squares will be very close together; in that case we accept the null, because we get a small number. If the restrictions are false, the restricted sum of squares will be quite different from the unrestricted sum of squares and you'll get a big difference, and once we know the distribution we'll be able to determine how large that difference needs to be before we abandon the null and accept the alternative. So we're just trying to give ourselves a rule to choose one of the two outcomes, H0 or H1, without being tempted to go back and manipulate the results to favor one hypothesis over the other. The big idea is to give ourselves a decision rule that discriminates between the two hypotheses and that you cannot manipulate ex post, or even ex ante. That is an important part, because when you've spent a year on your thesis and you need something to be significant and it comes out just barely short of significant at the 5% level you chose,
you'll be tempted to restate the test at 10% instead of 5%, and that is what we want to prevent you from doing. You might as well not do the test at all if, after the fact, you're going to do whatever is necessary to get the result you need to write your thesis, finish your grant, or make your boss happy. What we're trying to do is insulate ourselves against that kind of thing. So we're going to derive a decision rule about when to reject and when not to reject; that is, we define a rejection region and a non-rejection region for our statistic, and from that we get our decision rule.
That lets you actually choose one of the two competing hypotheses. You can design your own test statistic: you simply find something that is small under the null and larger under the alternative, and you figure out how it is distributed, which is the hard part. Once you know its distribution, you just find the five percent line, or the ten percent line, and that's your decision rule: anything beyond that line you reject, anything on this side you don't reject. So the real trick of a test statistic is finding something that moves the way you want it to move and knowing its distribution, and that's a lot easier these days because you can do it with Monte Carlo and other kinds of techniques.
You can use numerical methods instead of theoretical methods to find the distributions; it's absolutely easier than it used to be, and again the idea is to try to remove all bias from the process. Okay, so let's do an example. Suppose we want to test whether an increase in government spending causes an increase in the interest rate; then we need a model. I'm going to write down a simple model and I hope you'll let me assume that this simple model is the world, that is, a correctly specified model. If I were actually doing this as real work I wouldn't use this particular model, but it will serve as an example. So let's assume this correctly models the process that generates the data: the interest rate is r_t = beta 1 + beta 2 G_{t-1} + u_t, where G_{t-1} is government spending last period. Whether the lag is exactly right doesn't matter here; just take that model as given. What you do beforehand, before you look at the data, is determine what model you're going to use, and that can take a lot of work to get right; we'll talk later about how to test specific models, but let's assume we have it. So choose the model beforehand, and choose the test beforehand. Now, technically, the question as I posed it is a one-sided test.
I'm asking whether government spending raises the interest rate, so the natural alternative is beta 2 greater than zero. For the moment, though, I'm going to do a two-tailed test, beta 2 equal to zero versus not equal to zero, instead of greater than zero, because it's easier to start there; we'll get to the one-tailed test momentarily. Now we need to know the distribution. It turns out that the estimator, and hence the test statistic, inherits its distribution from u_t, so to perform the test we're going to have to make a distributional assumption about the u_t's. Let's assume that u_t is normally distributed with mean 0 and variance sigma squared; u_t ~ N(0, sigma squared) is the standard assumption.
Now I know what the distribution of the test statistic is. In this case the test statistic is going to be a t statistic; we'll do a little more on that in a second. And because the error is normal, the distribution of the estimator is also normal, so I guess we've got that figured out. (I reserve the right to say what I want about you if you're rude enough to get up in the middle and walk out... no, we're fine, we talked beforehand, he's a nice guy, it really is fine.) So we need the test statistic, and we know that the estimator is normally distributed from your earlier class.
I'm not going to go over that, but we know the statistic is going to be normal under the null and centered at zero. So the third step is to find the rejection region: you find the points such that if the test statistic falls in those regions, you reject. It's really that simple, and I'll do a more detailed example in a second with a regression model. Well, this is a regression, so the regression gives you beta 2 hat and sigma hat of beta 2 hat. What is that?
That's the standard error of the estimator, from your previous experience. What I drew here is the distribution; well, that's not drawn quite right, I'm sorry. So we take these and form t: t equals the estimated value, beta 2 hat, minus the hypothesized value, which in this case is zero, divided by the standard error. Great, here we go; now I know what this is. But what does it measure? In words, not forgetting the mathematics, what are the words? Why is it a good test statistic? What is it measuring?
It measures how many standard deviations the estimate is away from the hypothesized value. Say the estimate is ten and the standard error is two: we get a t of five, which says the estimate is five standard deviations from the hypothesized value. So we have a test statistic that gets larger the further the estimate gets from the hypothesis, which is just what we want. Now we need its distribution. If we knew sigma and divided by the true standard deviation, the statistic would be normal, but when you divide by the estimated standard error it's t distributed; in your last class you should have shown, or at least learned, that this is t distributed and why. So we've found a statistic that is small under the null, when beta really is zero, and the less true the null is, the larger it becomes; it measures how many standard deviations we are away, and roughly, if we get more than about two standard deviations in either direction, we say the null doesn't look true, we're not going to accept it, we're going to go with the alternative: the null seems too far from the truth to accept. All tests work that way: small under the null, large under the alternative, find the distribution; and this one has a nice interpretation as a number of standard deviations. Then you simply find the rejection region, which is based on a size you choose beforehand. It could be a 5% rejection region or a 10% rejection region, those are the popular ones, but you choose it beforehand, and then if the statistic is in the middle you don't reject, and if it's in either of the tail regions you reject, and that's really all there is to it. Say n is 25. Remember that the shape of the t distribution depends on n. That's because you're estimating the standard error, and the precision of that estimate depends on n; the numerator by itself is normal and doesn't depend on n, because sigma would be the exact value and the percentiles wouldn't be a function of n. So when you estimate the standard error, the precision of the t statistic depends on n, and the distribution depends on n: with small n you need a wider distribution to reflect the extra uncertainty in that estimate. With n equal to 25 and alpha equal to 0.05, the edges of the rejection region here are about 2.08 and minus 2.08, so any t statistic larger than that in absolute value is a rejection, and any value smaller than that in absolute value is a failure to reject. What are the degrees of freedom?
In this case it's 23: the degrees of freedom are n minus K, where K is the number of regressors, here beta 1 and beta 2, so K is 2, and n is 25, so this is the t distribution with 23 degrees of freedom. Oh, you know what I just did? This isn't right. In my notes my real example has 4 betas; I decided to make it simpler on the board, so n minus K in my notes is 21 while on the board it's 23. I didn't bring the book with me, so I can't look up the other critical value; the 2.08 I wrote is actually the one that corresponds to 21 degrees of freedom.
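Here is a minimal sketch of that calculation in Python, using the lecture's illustrative numbers (estimate 10, standard error 2, hypothesized value 0, n = 25) and assuming two regressors, so 23 degrees of freedom.

```python
# Sketch of the t test described above, with the lecture's illustrative numbers.
from scipy import stats

beta_hat, se, beta_null = 10.0, 2.0, 0.0
n, K = 25, 2                                  # assuming two regressors (beta 1 and beta 2)
t_stat = (beta_hat - beta_null) / se          # (10 - 0) / 2 = 5 standard errors away
df = n - K                                    # degrees of freedom = n - K = 23
t_crit = stats.t.ppf(1 - 0.05 / 2, df)        # two-tailed 5% critical value, about 2.07
print(t_stat, t_crit, abs(t_stat) > t_crit)   # well past the cutoff: reject beta = 0
```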
Sorry, that's my fault; that's what I get for trying to simplify things on the fly. You can stop smiling and being jovial now, after being completely silent before. So what I did here, are you okay with this? My numbers are a little off; they would have been right if I had just added, you know, a beta 3 X3 and a beta 4 X4 to the model. My apologies for that, I'll continue. What if we had chosen the hypothesis differently? What if beta 2 greater than zero were our alternative? What does the rejection region look like then?
In that case, when should we reject? Let's assume alpha is 0.05 again, but now this is a one-tailed test. We only reject the null in favor of the alternative when we have reason to believe beta is positive; as the statistic goes in the negative direction, that's perfectly consistent with the null, so in that direction we never reject. Only when the statistic grows big enough in the positive direction do we abandon zero and go with a positive number, and where that cutoff falls is what the distribution tells us: we put all of the 0.05 in the right tail. If it really were 21 degrees of freedom I could tell you the exact number, but I'm not going to write it down this time; this point is t critical. Yes sir? Okay, pretty good, someone looked it up: the earlier two-tailed number for the board example should have been about 2.06 and minus 2.06. Thank you very much.
I appreciate that. Notice that when the degrees of freedom increased from 21 to 23, the limits got narrower, from about 2.08 to about 2.06. They got narrower because we have more degrees of freedom, more precision, which is exactly what should happen: more precision tightens your confidence intervals.
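A quick sketch of those critical values, just to show how the one- and two-tailed 5% cutoffs tighten as the degrees of freedom rise from 21 to 23:

```python
# Sketch: 5% one- and two-tailed critical values for the t distribution.
from scipy import stats

for df in (21, 23):
    two_tail = stats.t.ppf(1 - 0.05 / 2, df)   # about 2.08 and 2.07
    one_tail = stats.t.ppf(1 - 0.05, df)       # about 1.72 and 1.71
    print(df, round(two_tail, 3), round(one_tail, 3))
```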
Well, most people have made it this far before; the next topic is where some people got burned in the first course, so it's probably worth going over carefully, and that's joint hypotheses. So I'm going to take the model Y = beta 1 + beta 2 X2 + beta 3 X3 + beta 4 X4 + u, and now let's have a joint hypothesis: beta 3 equals beta 4 equals 0, versus the alternative that beta 3 or beta 4 is not 0. Yes? (Technically, I watched last year's video last night and I spent about five minutes trying to figure out how the "and/or" worked there; yeah, it's pretty poorly written.) It should read: either beta 3 or beta 4 or both are non-zero, at least one of them is different from zero. That is the alternative. There are two ways to do this test: there is a t test version and there is also an F test version. We're going to do the F test version, because the t test version does not work when you have more than one constraint, even though the book walks through a t test version of these tests.
I'm not going to bother with that, because once you get past one constraint it's no good. There are a variety of ways to do this, but I'm going to use the restricted/unrestricted approach, so we need the unrestricted model and the restricted model. This is the unrestricted one, so we're all set there. In the restricted model you impose the null. I'll start with a simple case, and in a minute I'll do one with two linear restrictions, something like beta 3 plus beta 4 equals 4 and beta 2 minus beta 1 equals 3; we'll talk about how to handle those. The main way to do it is to substitute the constraints from the null into the model and estimate that model as if they were true. So the restricted model here is Y_t = beta 1 + beta 2 X2. I keep writing t subscripts because the class I teach right after this, in the afternoon, is time series econometrics,
and over in that class everything uses t; these should really be i subscripts, that's what I mean. Thank you, thank you; one of these days this will all look better. So what's the procedure here? You estimate the unrestricted model and save the residual sum of squares; I think this book uses RSS with a subscript U, unrestricted RSS, as the symbol. There was a stray K in my notes from another book, but I'll stick with the book's names because that's more intuitive, and I want the intuition here. Then you estimate the restricted model and you get RSS with a subscript R. One warning, in case you get confused: some books use ESS for the error sum of squares and others for the explained sum of squares, and some books use RSS for the residual sum of squares while others use it for the regression sum of squares, so RSS and ESS could mean either depending on the book. I think I have it right for this one.
I think they use RSS for this in this book, so we'll use that, and what we mean by it is the sum of squared residuals. Okay, so what is the F statistic you form? F equals RSS restricted minus RSS unrestricted, divided by the number of restrictions (the term on top gets divided by the number of restrictions), all divided by RSS unrestricted over n minus K; that is, F = [(RSS_R - RSS_U) / q] / [RSS_U / (n - K)]. Okay, why does this work? Let's talk about the intuition. If the null is true, what does imposing the restriction do to the sum of squared residuals? Almost nothing, because if the null is true, when we estimate the unrestricted model those coefficients were going to come out near zero anyway. In the restricted model I impose exactly 0; in the unrestricted model they won't be literally 0, you might get 0.01 or minus 0.03, but you'll get a very close estimate. So under the null this difference in the numerator is close to 0. When will it be bigger? If you have one false restriction, the difference grows; if you have two constraints it's probably higher, because there are two ways it can go wrong; with three constraints it will probably be even higher, because there are three ways it can go wrong. So what we're doing is normalizing by the number of restrictions.
Basically, this distance increases with the number of constraints, so we normalize by the number of constraints, and that's really all you need for a test statistic; it's exactly what we want. A note on notation: the K in the denominator is the number of parameters in the unrestricted model, the big model, and the number of restrictions up top is usually written with a small q, kept separate from the capital K. I'm going to write it this way because the vast majority of books do it this way; this is the standard way in the literature and the way it should be done, even if a particular book does it a bit differently.
This is one of those cases where I just want to orient your thinking toward the standard setup. The only reason we divide by that denominator is to turn this into something whose distribution we know: with it, the statistic has an F distribution, which looks like this. What we're going to do is put alpha in the tail. If we get something larger than F critical, we reject; anything smaller than F critical, we don't reject. We expect some deviation here even under the null, because in a sample those coefficients won't be literally zero; like I said, you'll get a 0.01 or a minus 0.03, so you'll get some difference, but it's just noise and we don't want to reject because of it.
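Here is a minimal Python sketch of the whole restricted/unrestricted procedure on simulated data for the model Y = beta 1 + beta 2 X2 + beta 3 X3 + beta 4 X4 + u, testing beta 3 = beta 4 = 0. Everything here is made up, and in this simulation the null is actually false, so the F statistic should land past the critical value.

```python
# Sketch (simulated data): the restricted/unrestricted F test for b3 = b4 = 0.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
n = 200
x2, x3, x4 = rng.normal(size=(3, n))
y = 1.0 + 0.5 * x2 + 0.3 * x3 + 0.0 * x4 + rng.normal(size=n)   # b3 = 0.3, so H0 is false

unrestricted = sm.OLS(y, sm.add_constant(np.column_stack([x2, x3, x4]))).fit()
restricted = sm.OLS(y, sm.add_constant(x2)).fit()                # impose b3 = b4 = 0

q, K = 2, 4                                   # number of restrictions, betas in the big model
F = ((restricted.ssr - unrestricted.ssr) / q) / (unrestricted.ssr / (n - K))
F_crit = stats.f.ppf(0.95, q, n - K)          # 5% critical value of F(q, n - K)
print(F, F_crit, F > F_crit)                  # expect a rejection here
```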
We only reject when the difference is big enough to be significant. There's another way to write this. You can use R squared, since the output you're handed doesn't always give you the sums of squares: F equals R squared unrestricted minus R squared restricted, divided by the number of restrictions, all over 1 minus R squared unrestricted, divided by n minus K. That's not as intuitive, but it's a way to do it with the R squareds. Okay, let me try another one: beta 2 plus 2 beta 3 equals 4, versus beta 2 plus 2 beta 3 not equal to 4. How do you do it? The same way.
Plug the constraint into the model, estimate the restricted and unrestricted models, and form your F statistic. So let's impose the constraint. You can solve for beta 3 or for beta 2, it doesn't make any difference; solving for beta 2, beta 2 equals 4 minus 2 beta 3 under the null. Yes? No, I don't have a good reason for that particular hypothesis, it's just made up. But, for example, you could have a Cobb-Douglas production function, take logs, and then test alpha plus beta equals one; that's the kind of thing, and there we'd have to find capital and labor series that don't move perfectly together, okay?
So, substituting into the model, we have (and boy, I have to get away from that t subscript) Y equals beta 1 plus (4 minus 2 beta 3) times X2, plus beta 3 X3, plus beta 4 X4, plus u. Did I do that right? You seem like a pretty sharp group, so I trust you to find my mistakes. Now what do I do? I'm going to multiply it out, and I'll do it in great detail: I move everything that doesn't have a beta attached to it to the left side and leave everything that has a beta attached to it on the right side. You always do the same thing; just expand this. This is Y equals beta 1 plus 4 X2 minus 2 beta 3 X2 plus beta 3 X3 plus beta 4 X4 plus u. Anything that has a beta on it stays on this side, but any term that is purely data moves to the left. So Y minus 4 X2 equals beta 1, and then, grouping, plus beta 3 times (X3 minus 2 X2)
plus beta 4 X4, plus u. What did I do wrong? Yes, yes, in the restricted model it's X3 minus 2 X2... oh, this X3, thank you, yes, I appreciate it. I'll have to tell my terrible-and-incompetent story: there are studies showing that if you do all your classes like this and get everything perfect, people stop paying attention, but if you make little coefficient errors like that from time to time, everyone looks, because they don't want to get their notes wrong, and if they really pay attention, learning actually increases when you make a few mistakes during the lecture, because then people don't tune out. That one wasn't on purpose, though.
I wish I could claim I was just checking whether you were paying attention. So how do you actually run this? I'm going to run Y tilde equals beta 1 plus beta 3 times X tilde plus beta 4 X4. How are these variables created? Well, you might have, for example, an Excel spreadsheet with a column for Y, a column for each X, and a constant. I want to make Y tilde, so what will Y tilde be? The formula would be something like = A1 minus 4 times C1, if Y is in column A and X2 is in column C; then there's another column for X tilde, which would be = D1 minus 2 times C1, with X3 in column D. Then, you know, you grab that little corner of the cell, double-click it, and it magically copies the formula all the way down.
You get the idea. For some reason, showing how to do it in an Excel spreadsheet conveys how you would actually run this regression. So you form these two new variables, Y tilde and X tilde. How many betas are there in the original model? Four. How many betas are there in the restricted model? One, two, three. So how many restrictions are there? One, and that trick will always work: just count the betas. So what's my procedure here? Estimate the unrestricted model, estimate the restricted model, form the F statistic, find the critical value, and reject or don't reject.
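Here is the same substitution done in Python rather than Excel: a sketch with simulated data in which the restriction beta 2 + 2 beta 3 = 4 is true, so the F statistic should come out small. The column names are just placeholders.

```python
# Sketch: impose b2 + 2*b3 = 4 by regressing (y - 4*x2) on a constant, (x3 - 2*x2), and x4.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(3)
n = 150
data = pd.DataFrame({c: rng.normal(size=n) for c in ["x2", "x3", "x4"]})
# Generate y so the restriction holds: b2 = 2, b3 = 1 gives b2 + 2*b3 = 4.
data["y"] = 1.0 + 2.0 * data["x2"] + 1.0 * data["x3"] + 0.5 * data["x4"] + rng.normal(size=n)

unres = sm.OLS(data["y"], sm.add_constant(data[["x2", "x3", "x4"]])).fit()

data["y_tilde"] = data["y"] - 4.0 * data["x2"]      # the "spreadsheet column" for Y tilde
data["x_tilde"] = data["x3"] - 2.0 * data["x2"]     # the "spreadsheet column" for X tilde
res = sm.OLS(data["y_tilde"], sm.add_constant(data[["x_tilde", "x4"]])).fit()

q = 4 - 3                                           # betas unrestricted minus betas restricted
F = ((res.ssr - unres.ssr) / q) / (unres.ssr / (n - 4))
print(F, stats.f.ppf(0.95, q, n - 4))               # small F: fail to reject the true restriction
```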
So for the application, what do we compute? The sums of squared residuals, the RSS's. Thank you. Actually, speaking of RSS, I meant to say this last time: on the class website, if you use RSS readers like Google Reader and that kind of thing, subscribe to the class RSS feed. The moment I post an assignment, anything for the class, or a video, you'll get a notification through your feed. The videos also appear on my Twitter account; I don't put the class assignments there, because I have other followers, but I do put the videos there, so if you're interested you can get the videos from there as well, just follow it on Twitter. And if you want, I can set one up just for the class, but no one has asked me to, so there it is.
Okay, so in the t test version, what you end up doing is running a regression that includes the linear combination and then testing whether that linear combination is zero. The constraint beta 2 plus 2 beta 3 minus 4 should be 0 under the null, so you end up with a model like the Y tilde one we have, Y tilde equals beta 1 plus beta 3 times (X3 minus 2 X2) plus beta 4 X4, plus that combination times X2, and you do a t test on that coefficient. In effect, you take the unrestricted model and the restricted model, subtract them, and whatever is left has to equal 0; you impose that on the original model, and that's what you actually end up running. Testing the constraint directly like that gives pretty much the same answer, because we know that t squared is F, so with one constraint you'll basically get the same thing. But when you have more than one constraint you can't even do it this way;
you can't test a joint hypothesis that way. That explanation was a little loose; to really understand it, the book has a whole section on how to do this with t tests that lays out what I just sketched. Okay, where are we?
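For what it's worth, most packages will also test a linear combination directly; here is a hedged statsmodels sketch on simulated data (the column names x2, x3, x4 are placeholders, not anything from the lecture).

```python
# Sketch: testing linear combinations directly from a fitted regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
data = pd.DataFrame(rng.normal(size=(150, 3)), columns=["x2", "x3", "x4"])
data["y"] = 1 + 2 * data["x2"] + 1 * data["x3"] + 0.5 * data["x4"] + rng.normal(size=150)

fit = sm.OLS(data["y"], sm.add_constant(data[["x2", "x3", "x4"]])).fit()
print(fit.t_test("x2 + 2*x3 = 4"))     # one restriction as a t test
print(fit.f_test("x2 + 2*x3 = 4"))     # the same restriction as an F; F = t**2
print(fit.f_test("x3 = 0, x4 = 0"))    # a joint hypothesis has to go through the F version
```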
You feel great. Keep asking the questions. Keep repeating correcting my mistakes. We'll all have more fun that way. So how do we do this? A simple subsection of both restrictions on the right. This one tells us that beta 2 is equal to 6 minus 2 3 4 and this one tells us that Beta 1 is equal to 1 plus K 3. Why get what you want to try? set of 2 why would you have a 1? I feel like you would have less clarity about what is good. You are interested in knowing if they are simultaneously true. It is a joint hypothesis that is possible to make.
Let me answer your question with a simpler example. Suppose the hypotheses are beta 2 equals 0 and beta 3 equals 0, and you do a t test on each one. If we look at the confidence regions, the two t tests together give you a rectangle in (beta 2, beta 3) space; when you do the F statistic, the confidence region you get is an ellipse. The tests are almost identical, but there are regions where they can differ, and the reason is that in one case you're testing a joint hypothesis and in the other you're testing them individually. For the most part they give the same answer. So when do you get in trouble with the individual t tests? It's something we just talked about: multicollinearity. The reason for using an F test is that with substantial multicollinearity you can do t tests and fail to reject that each coefficient is 0 individually, while the F test rejects that they are jointly zero. Because of that multicollinearity problem we prefer the F test to the t tests for joint hypotheses, even though they will largely give the same answers. Why not just do the t tests? Because they aren't testing whether the restrictions are jointly true, true at the same time. With a single restriction it is exactly like the t test, because F is t squared, but with more than one you can fail to reject each individual restriction while the joint F still rejects. It's the jointness of the hypothesis that makes the difference, and in the presence of multicollinearity you can get completely different answers from the individual tests and the joint test. So when we test a joint hypothesis we use the F to avoid the multicollinearity problem. Okay, so let's impose this.
Yes, Gary? Hmm, I'm debating whether I want to introduce that complication: if the constraint involved the constant, yes, that would change things, and it would change them in a way I'd have to be careful about; I didn't plan for that and I haven't tried it, so I don't know exactly what comes out. So, substituting, Y_i equals (1 plus beta 3) plus (6 minus 2 beta 4) times X2i, plus beta 3 X3i, plus beta 4 X4i, plus u_i. Anything without a beta on it goes to the left, anything with a beta stays on the right, and you group all the betas together: Y_i minus 1 minus 6 X2i equals beta 3 times (1 plus X3i)
plus beta 4 times (X4i minus 2 X2i), sorry, I needed to group, plus u_i. Notice that this restricted model has no constant: the restriction pins the constant down, it's known, so when I estimate the restricted model I want to leave the constant out. So I form this column in Excel, that column in Excel, and that one, and I run Y tilde equals beta 3 times X3 tilde plus beta 4 times X4 tilde. Sometimes it's not obvious how many restrictions you have, because they involve the same betas and all sorts of weird combinations, and it's just not clear by eye whether something is a new restriction or not, so counting the number of betas is the way to tell: four betas unrestricted minus two betas restricted gives two restrictions. So again, you estimate the restricted model, you estimate the unrestricted model, and F equals the restricted sum of squared errors minus the unrestricted sum of squared errors, over the number of restrictions, divided by the unrestricted RSS over n minus K.
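Here is a sketch of that two-restriction case in Python, with simulated data chosen so both restrictions hold; note the restricted regression has no constant, exactly as above.

```python
# Sketch: impose b1 - b3 = 1 and b2 + 2*b4 = 6, then form F with q = 2 restrictions.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(4)
n = 120
x2, x3, x4 = rng.normal(size=(3, n))
# True coefficients chosen so both restrictions hold: b1 = 2, b2 = 4, b3 = 1, b4 = 1.
y = 2.0 + 4.0 * x2 + 1.0 * x3 + 1.0 * x4 + rng.normal(size=n)

unres = sm.OLS(y, sm.add_constant(np.column_stack([x2, x3, x4]))).fit()

y_tilde = y - 1.0 - 6.0 * x2
X_r = np.column_stack([1.0 + x3, x4 - 2.0 * x2])    # no constant in the restricted model
res = sm.OLS(y_tilde, X_r).fit()

q, K = 2, 4                                         # 4 betas unrestricted minus 2 restricted = 2
F = ((res.ssr - unres.ssr) / q) / (unres.ssr / (n - K))
print(F, stats.f.ppf(0.95, q, n - K))               # small F: the restrictions are true here
```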
Where do these things come up? Suppose you have a cross-section of firms. Remember the Cobb-Douglas production function from Glenn's class, which he's probably teaching in the other room right now while you're missing it: output Y equals A times L to the alpha times K to the beta. It just tells us that given this amount of capital, this amount of labor, and this amount of technology, here's the output you're going to get, and in these models, if alpha plus beta equals one, we have what's called constant returns to scale. I want to test whether we have constant returns to scale or not; I won't separate decreasing from increasing returns here, I'll just test against "not constant." We need to convert it to a linear model first, so how can I do that? Take its logarithm: log Y_i equals log A, which I'll call a prime (that shouldn't bother anyone too much), plus alpha log L_i plus beta log K_i plus u_i; if the error enters multiplicatively as e to the u, its log is just u, since log and exp are inverse functions, remember those. So this is standard: you take your spreadsheet with Y_i, L_i, and K_i, compute log Y, log L, and log K, and use those as your data. Your null here is alpha plus beta equals one.
Your alternative is that alpha plus beta is not equal to one. So what would you do? How do you test it? Take the constraint and solve it, say alpha equals one minus beta, and plug it in. (This will be the last thing for today, by the way; we'll start heteroskedasticity on Tuesday, and hopefully that will be something new for you.) So plug it in: log Y_i equals a (I'm going to call log A prime just "a"; it's a constant, a parameter) plus (1 minus beta) times log L_i, plus beta log K_i, plus u_i. Then, taking the log L_i term to the other side, log Y_i minus log L_i equals a plus beta times (log K_i minus log L_i) plus u_i. That's the restricted model; put tildes on it if you like, Y tilde equals a plus beta X tilde, and there's your restricted model. Then just build up your F stat and go to town.
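And here is the constant-returns-to-scale test as a Python sketch on simulated firm data; all numbers are invented, with alpha + beta = 1 built in, so the test should fail to reject.

```python
# Sketch: testing constant returns to scale in a logged Cobb-Douglas production function.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(5)
n = 300
logL = rng.normal(size=n)
logK = 0.5 * logL + rng.normal(size=n)      # capital and labor correlated, but not perfectly
alpha, beta = 0.7, 0.3                      # alpha + beta = 1: constant returns by construction
logY = 0.5 + alpha * logL + beta * logK + 0.2 * rng.normal(size=n)

unres = sm.OLS(logY, sm.add_constant(np.column_stack([logL, logK]))).fit()
res = sm.OLS(logY - logL, sm.add_constant(logK - logL)).fit()   # alpha = 1 - beta imposed

q, K = 1, 3
F = ((res.ssr - unres.ssr) / q) / (unres.ssr / (n - K))
F_crit = stats.f.ppf(0.95, q, n - K)
print(F, F_crit, F > F_crit)                # expect: fail to reject constant returns
```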
I've been ignoring that side of the room; I find myself turning this way more than that way, so please remind me that we have people sitting over here. Any questions? Okay, there's no point starting a new chapter with two minutes left, so, sorry again about the start of class, and I'll see you next time.
