
Can ChatGPT Plan Your Retirement? | Andrew Lo | TEDxMIT

May 24, 2024
So I'd like to start by giving all of you an investment decision. I want you to choose between two investments, Investment A versus Investment B. Investment A is an investment where you will earn $240,000 for sure, free and clear. Investment B is a lottery ticket where you could win a million dollars with 25% probability, but you win nothing with 75% probability. Since this is a quantitative crowd, I'll help you calculate the expected value of B: the expected value is $250,000, but you don't get the expected value, you get a million or nothing. Higher expected return, but higher risk. So, by a show of hands, how many of you would choose the sure thing?
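
(For readers following along: the expected-value arithmetic behind that $250,000 figure is a one-liner. The probabilities and payoffs below are exactly the ones stated in the talk; the Python is just an illustration.)

```python
# Expected values of the two gain-side choices (Investment A vs. B).
p_win = 0.25
ev_a = 240_000                               # sure thing
ev_b = p_win * 1_000_000 + (1 - p_win) * 0   # lottery ticket

print(f"E[A] = ${ev_a:,}")     # E[A] = $240,000
print(f"E[B] = ${ev_b:,.0f}")  # E[B] = $250,000
```
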
And how about B? Well, let's state for the record that most of you chose the sure thing, with some hands here and there choosing the risky option B. Very well. Now let me ask you to make another investment decision between two other options. Investment C is a certain loss of $750,000. Investment D is a lottery ticket where you lose a million dollars with 75% probability and lose nothing with 25% probability. In this case the expected values are identical, minus $750,000, but in the case of D you can't lose 750; you lose a million or nothing. I teach MBA students, and when I give them this example they get very angry; their response is, we don't want either, no thanks. But you can imagine a situation where we have to choose the lesser of two evils. So how many of you would choose the sure loss C, by a show of hands?

How about D? Wow, let's state for the record that the vast majority of this room chooses D. Now, this looks like just a matter of risk preferences; there doesn't seem to be a right or wrong answer. But let me show you what most of you chose. Those of you who chose A and D: those two options combined are equivalent to a single lottery ticket that pays you $240,000 with 25% probability and costs you $760,000 with 75% probability. How did I get that? If you picked A, you get $240,000 for sure. But if, in addition to A, you also chose D, there is a 25% chance that you lose nothing on D, in which case you keep the $240,000, and there is a 75% chance that you lose a million on D, in which case you are left with minus $760,000. That is how I got this combination A and D.
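
(A quick sanity check in code: the state-by-state payoffs of A-plus-D, and of the B-plus-C combination discussed next. The payoffs are the ones from the talk.)

```python
# State-by-state payoffs of the two portfolios: {A, D} vs. {B, C}.
# With 25% probability the "win" state occurs (B pays out, D loses nothing);
# with 75% probability the "lose" state occurs (B pays nothing, D loses $1M).
A, C = 240_000, -750_000           # the two sure outcomes
B = {"win": 1_000_000, "lose": 0}  # lottery B
D = {"win": 0, "lose": -1_000_000} # lottery D

a_plus_d = {s: A + D[s] for s in ("win", "lose")}
b_plus_c = {s: B[s] + C for s in ("win", "lose")}

print(a_plus_d)  # {'win': 240000, 'lose': -760000}
print(b_plus_c)  # {'win': 250000, 'lose': -750000}
# B+C beats A+D by exactly $10,000 in every state of the world.
```
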
Now, the few of you who chose B and C, this is what you would have gotten: exactly the same odds of winning and losing, 25 against 75. But look at this: when you win, you win $250,000, not $240,000, and when you lose, you lose $750,000, not $760,000. In other words, the choice that most of you did not make, B and C, is actually equivalent to A and D plus $10,000 for sure. So, by a show of hands, how many of you would still choose A and D? If so, see me later; I'd like to do a little trade with you. This is a phenomenon that two famous psychologists, Kahneman and Tversky, discovered and called loss aversion.
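
(Loss aversion has a standard formalization that the talk doesn't go into: the value function of Kahneman and Tversky's prospect theory. The sketch below uses the parameter estimates from their 1992 paper, alpha ≈ 0.88 and lambda ≈ 2.25; those numbers are an addition here, not from the talk.)

```python
# Prospect-theory value function (Tversky & Kahneman, 1992):
# gains are valued as x**alpha, losses as -lam * (-x)**alpha.
# Concavity over gains favors the sure thing A; convexity over losses
# favors gambling on D; lam > 1 means losses loom larger than gains.
ALPHA, LAM = 0.88, 2.25

def value(x: float) -> float:
    """Subjective value of a gain or loss of x dollars."""
    return x ** ALPHA if x >= 0 else -LAM * (-x) ** ALPHA

print(value(100.0))   # ~57.5: the felt value of winning $100
print(value(-100.0))  # ~-129.5: losing $100 hurts over twice as much
```
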
They did this experimentally with Stanford undergraduates, so the prizes were much smaller; I had to add some extra zeros because I teach MBA students and I had to make it meaningful. It turns out this phenomenon is embedded in all of us, in all of our human preferences, because we can't handle losses. That's why it's called loss aversion, and that's why financial economists have realized there are opportunities to essentially extract money from this audience by making these trades. If you think this is an unrealistic example, imagine a multinational investment bank whose London office is faced with options A versus B while the Tokyo office is faced with options C versus D. Locally there doesn't seem to be a right or wrong answer, but the globally consolidated book will show a very different story: we can create all kinds of arbitrage opportunities, free lunches, to take money from you using complicated financial engineering like this. So this is not a good thing, and you want to understand how to avoid it. It turns out this is part of a much broader phenomenon that I wrote an article about with my students a few years ago, which we technically call the panic factor.
What happens is that when you face losses in the stock market, you tend to get scared, take your money out, and put it in cash. During the financial crisis of 2008, between the fourth quarter of 2008 and the first quarter of 2009, the S&P 500, the US stock market, fell about 50% from peak to trough. Your 401(k), if invested in the stock market, became a 201(k) during that period, and investors freaked out and took out their money. Now, that's not so bad, because the decline continued for quite a while, so you ended up avoiding some of the losses. In fact, five years after the financial crisis I gave a talk about this, and afterwards one of my former students came up to me and said:
Prof, I just want to get some advice from you. I really enjoyed your talk, and I want to let you know that, as a money manager, I took all of my clients' money out of the market in the fourth quarter of 2008, because I wasn't sure when the bleeding was going to stop. And I said, good for you, that was very smart, you saved your investors some money; why do you need my advice? And he said, well, it's been five years now; do you think it's time to put the money back into the market? That's the problem: we are afraid of losses and we act irrationally. So what if we asked ChatGPT what we should do if we lost 25% of our savings?
Well, if you do that, this is the answer you would get. It's a long list, and some of it is reasonably sensible: stay calm, avoid making impulsive decisions. But follow the list, and point number four says rebalance your portfolio. Really? After this loss, you want me to start rebalancing in the midst of an illiquid market? Or point five, consider dollar-cost averaging, which means buying more at the bottom than at the top. You would want all investors who have lost money to do that?
Actually, if you gave that generic advice to every client who meets this criterion, you could be prosecuted for not taking your clients' particular needs into account. That's bad advice. What about GPT-4? Now this gets really interesting. If you ask GPT-4 the exact same question, you'll get a list of recommendations that is pretty impressive. In fact, this list is better than some of the financial advice my friends have received from professional financial advisors, and that's interesting, because it raises the possibility that we could use large language models to provide financial advice. What if your financial advisor knew how your portfolio was performing at all times, day or night, 24/7? What if your financial advisor digested and read every financial news story ever published?
What if your financial advisor was available to talk to you at any time, you were never on hold, and your advisor was completely trustworthy and looked after your best interests only? That's the potential of large language models. Now, rich people don't need this; they already have it. The people who need financial advice the most are the ones financial advisors are not interested in having as clients, and that is the opportunity: if we can solve this problem for all those who are underserved, it would have a tremendous impact on their well-being. So can large language models really serve as trusted financial advisors?
I'll explain what I mean by that in a minute. Thankfully, I've been working with two wonderful students: Jillian Ross, a PhD student here at CSAIL, and Nina Gersberg, who has a master's degree in engineering. Based on the work we're doing, we hope to answer that question. Our research program consists of three parts, and I'm not going to talk about all three. The first part is whether large language models have expertise in a specific domain. The second is whether they can provide personalized financial advice. But the third, and I think the most challenging, is the ethical nature of large language models: can you get a large language model to be completely trustworthy? Now, it turns out financial advice is not the only thing we are focusing on, although it is an ideal test: we have about 15,000 financial advisors in this country who manage something like 114 trillion dollars for 62 million clients, and there are many more people who need advice and don't get it. Bad advice can cause a lot of harm, as can revealing private information you don't want disclosed, so these issues are at the forefront, not just for financial advice but for all types of advice: medical, accounting, legal, virtually any kind of human interaction where you are looking for some kind of knowledge transfer. This will be relevant to what we are talking about today, and the focus on financial advice lets us narrow our scope so that we can find precise answers to the question I am about to ask.
In the third part, can we get large language models that are ethical and trustworthy? It turns out that in the legal profession there is a term for this, called fiduciary duty. A fiduciary is an individual who looks out for the interests of other people before his or her own. For example, the portfolio manager who manages your retirement assets is a fiduciary and is supposed to put your financial interests ahead of their own. Can we get large language models to satisfy that criterion? Now, if you think about it, there are several ways in which we have already imposed those kinds of standards on humans.
In practically every financial organization in the world, the profession has a code of conduct and a set of ethical standards that its members must adhere to; they have to focus on the best interests of their investors. The question is: can we get software to do the same? It turns out that in computer science there is a term for that, called the alignment problem: can you get an AI to be aligned, in terms of its behavior, with yours? Now, there are many different aspects of human behavior, so you have to ask carefully what type of behavior. We are going to focus on a couple; I don't have time to go through them all, but I'll give you a few examples to illustrate whether or not we can tell if a large language model is aligned correctly.
I'm going to do this through a game called the ultimatum game. Economists devised it as a way to understand the nature of human interaction in very specific economic settings, so let me tell you how it works. Suppose we have a proposer, say me. The proposer's job is to propose to another person how to divide a certain amount of money that the proposer has access to, say $10. My job is to propose a division of that $10, and your decision is simply to accept or reject the proposal. If you reject the proposal, neither of us gets the money, but if you accept it, the money is divided according to what we agreed on. So, for example, if I offer you $5 out of the 10 and you say yes, I accept, then the money is actually divided between us.
But if I offer you something else and you say no, I reject it, then no one gets the money. So let's play this game right now, for real. Okay, I need a volunteer who is willing to play with me. Could you come here for a second? We need money, so let's get money from Jillian Ross. Jillian, could you come here? It's not totally frivolous; given that she is an expert in generative AI and a PhD student at MIT, she will be making money soon. So, Fernando, I'm going to propose something to you, and the question is whether you accept or not. I propose: I'll give you five cents and I'll take the rest. Do you accept? You get 5 cents and I keep $9.95, you're okay with that? That's not okay? I'm sorry, I guess we won't get the money. I propose $4? No? Okay, end of the game. Thanks, and sorry, Jillian. Thank you very much, Fernando.
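
(To make the setup concrete, here is a minimal sketch of one round of the ultimatum game as one might pose it to humans or language models. The acceptance threshold is an illustrative assumption, not a number from the talk.)

```python
# Minimal ultimatum game: a proposer offers a share of a fixed pot;
# the responder either accepts (both get paid) or rejects (both get $0).
POT = 10.00

def play(offer: float, min_acceptable: float) -> tuple[float, float]:
    """Return (proposer_payoff, responder_payoff) for one round."""
    if offer >= min_acceptable:
        return POT - offer, offer   # accepted: pot is split as proposed
    return 0.0, 0.0                 # rejected: nobody gets anything

# A purely payoff-maximizing responder would accept any positive offer,
# but human responders routinely reject offers they consider unfair.
print(play(offer=0.05, min_acceptable=0.01))  # (9.95, 0.05) -- accepted
print(play(offer=0.05, min_acceptable=4.00))  # (0.0, 0.0)   -- rejected
```
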
So how do large language models behave? It turns out that most humans offer around a 40% split, and it turns out that is usually enough to get people to agree. Large language models are not all there yet, but some are, and through many other examples like this, we can map the behavior of large language models and compare it to the way humans behave. Ultimately, this will allow us to shape large language models to be completely trustworthy, so that we feel comfortable with them, very much like the way we learn the golden rule on the playground: do unto others as you would have them do unto you. Eventually, as people grow up, they learn a different version of the golden rule in the business world, which is that those who have the gold generally make the rules. So the question is: can we really come up with large language models that help augment, or even replace, trusted financial advisors?
The answer, we hope, will come within a year. Thank you very much.
