The 4 things it takes to be an expert

Apr 19, 2024
- Do you pull this trick at parties? - Oh no. It's a terrible party trick. Here we go. 3.141592653589793... - This is Grant Gussman. He watched an old video of mine about how we think there are two systems of thought. System two is the conscious, slow, effortful system. And system one is subconscious: fast and automatic. To explore how these systems work in his own head, Grant decided to memorize one hundred digits of pi. - Three eight four four six... - Then he kept going. He has now memorized 23,000 digits of pi in preparation to challenge the North American record. - ...95493038196... It's 200. (Derek laughs) - That's amazing.
For a long time, I've wanted to make a video about experts. This is Magnus Carlsen, the five-time world chess champion. I show him chess boards and ask him to identify the games in which they occurred. - This looks a lot like Tal versus Botvinnik. (playful music) - Oops. - Well, this is, obviously, game 24 from Seville. (laughing) - Now I'm going to play out an opening. Stop me when you recognize the game, and see if you can tell me who was playing black in this one. - Well... (playful music) I'm sure you've seen this opening before. - Well, that would be Anand. (laughs) - Against whom? - Zapata. - How can you do this?
It seems like a superhuman ability. Well, decades ago, scientists wanted to know what makes experts like chess masters special. Do they have incredibly high IQs, much better than average spatial reasoning, and greater short-term memory capacity? Well, it turns out that, as a group, chess masters are not exceptional on any of these measures. But an experiment showed that their performance was far superior to that of amateurs. In 1973, William Chase and Herbert Simon recruited three chess players: a master, a Class A player (an advanced amateur), and a beginner. A chess board was set up with about 25 pieces placed as they would be during a real game.
And each player was allowed to look at the board for five seconds. They were then asked to replicate the configuration from memory on a second board in front of them. Players could take as many five-second glances as they needed to match their board. From the first glance, the master was able to remember the positions of 16 pieces. The Class A player could remember eight, and the beginner only four. The master also needed only half as many glances as the Class A player to get his board perfect. But then the researchers laid out the board with pieces in random positions that would never appear in a real game.
And now the chess master performed no better than the beginner. After the first glance, all players, regardless of rank, could remember the locations of only three pieces. The data are clear: chess experts don't have better memories overall, but they do have better memories specifically for chess positions that could occur in a real game. The implication is that what makes chess masters special is that they have seen a lot of chess games, and over that time their brains have learned patterns. So instead of seeing individual pieces in individual positions, they see a smaller number of recognizable configurations.
This is called "chunking." What we have stored in long-term memory allows us to recognize complex stimuli as a single thing. For example, you recognize this as pi instead of a string of six unrelated numbers or meaningless scribbles. - There is a wonderful sequence that I really like, which is three zero one seven three. Which to me means that Stephen Curry, number 30, won 73 games, which is the record, in 2016. So: three oh one seven three.
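As a loose illustration of the idea (not from the video, and with a hypothetical pattern store), chunking works a bit like compressing a string against a lookup table of things you already know: the more patterns long-term memory holds, the fewer items working memory has to juggle.

```python
# Minimal sketch of chunking (illustrative only): a digit string collapses into
# far fewer items when "long-term memory" already holds matching patterns.
# The pattern store below is hypothetical.

KNOWN_PATTERNS = {
    "3141592653": "first ten digits of pi",
    "30173": "Curry, #30, won 73 games (2016)",
}

def chunk(digits: str) -> list[str]:
    """Greedily replace known patterns with single chunks; unrecognized digits stay as single items."""
    chunks, i = [], 0
    patterns = sorted(KNOWN_PATTERNS, key=len, reverse=True)  # try longest patterns first
    while i < len(digits):
        for pattern in patterns:
            if digits.startswith(pattern, i):
                chunks.append(KNOWN_PATTERNS[pattern])
                i += len(pattern)
                break
        else:
            chunks.append(digits[i])  # nothing recognized: one digit = one item to hold
            i += 1
    return chunks

digits = "314159265330173"
print(f"{len(digits)} raw digits -> {len(chunk(digits))} chunks: {chunk(digits)}")
# 15 raw digits -> 2 chunks: ['first ten digits of pi', 'Curry, #30, won 73 games (2016)']
```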
At its core, expertise is recognition. Magnus Carlsen recognizes chess positions the same way we recognize faces. And recognition leads directly to intuition. If you see an angry face, you have a pretty good idea of what's coming next. Chess masters recognize board positions and instinctively know what the best move is. - Most of the time I know what to do. I don't have to figure it out. - Developing an expert's long-term memory takes a long time. 10,000 hours is the rule of thumb popularized by Malcolm Gladwell, but 10,000 hours of practice alone is not enough. There are four additional criteria that must be met. And in areas where these criteria are not met, it is impossible to become an expert. So the first is many repeated attempts with feedback.
Tennis players hit hundreds of forehands in practice, chess players play thousands of games before becoming grandmasters, and physicists solve thousands of physics problems. And everyone receives feedback: the tennis player sees whether each shot clears the net and lands in or out, the chess player wins or loses the game, and the physicist gets the problem right or wrong. But some professionals do not get repeated experience with the same type of problem. Political scientist Philip Tetlock selected 284 people who make a living commenting on or offering advice about political and economic trends. These included journalists, foreign policy specialists, economists, and intelligence analysts.
For two decades, he peppered them with questions like: Would George Bush be re-elected? Would apartheid end peacefully in South Africa? Would Quebec separate from Canada? Would the dot-com bubble burst? In each case, the experts rated the probability of the various possible outcomes. And by the end of the study, Tetlock had quantified 82,361 predictions. So how did they do? Pretty terribly. These experts, most of whom had graduate degrees, performed worse than if they had simply assigned equal probabilities to all outcomes. In other words, people who spend their time, and make their living, studying a particular topic produce predictions that are worse than chance.
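The video doesn't spell out how "worse than chance" was measured; work like Tetlock's is usually described in terms of a proper scoring rule such as the Brier score. Here is a minimal, hypothetical sketch, with an invented question and probabilities, of how a confident forecaster can score worse than someone who just spreads probability evenly:

```python
# Hypothetical sketch: how a confident forecaster can score worse than "chance"
# (spreading probability evenly). Scoring rule: Brier score, lower is better.
# The question, outcome, and probabilities below are invented for illustration.

def brier(probs: list[float], actual: int) -> float:
    """Mean squared difference between forecast probabilities and what actually happened."""
    return sum((p - (1.0 if i == actual else 0.0)) ** 2 for i, p in enumerate(probs)) / len(probs)

# A three-outcome question; suppose outcome index 2 is what actually happened.
uniform_forecast = [1/3, 1/3, 1/3]          # "chance": equal probability on everything
pundit_forecast  = [0.70, 0.20, 0.10]       # confident in the wrong outcome

print("uniform:", round(brier(uniform_forecast, actual=2), 3))  # 0.222
print("pundit: ", round(brier(pundit_forecast, actual=2), 3))   # 0.447 -- worse than chance
```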
Even in the areas they knew best, the experts were not significantly better than non-specialists. The problem is that most of the events they have to predict are unique. They have not had the experience of going through these or very similar events many times before. Even presidential elections occur infrequently, and each takes place in a slightly different environment. Therefore, we should be wary of experts who do not have repeated experience with feedback. (upbeat music) The next requirement is a valid environment: one that contains regularities, making it at least somewhat predictable. A gambler playing roulette, for example, may have thousands of repeated experiences with the same event.
And for each one, they get clear feedback on whether they won or lost, but you wouldn't rightly consider them an expert, because the environment has low validity. A roulette wheel is essentially random, so there are no regularities to learn. In 2006, legendary investor Warren Buffett offered to bet a million dollars that he could pick an investment that would outperform the best hedge funds on Wall Street over a 10-year period. Hedge funds are pools of money actively managed by some of the brightest and most experienced traders on Wall Street. They use advanced techniques such as short selling, leverage, and derivatives in an attempt to deliver huge returns.
And as a result, they charge significant fees. One person accepted Buffett's bet: Ted Seides of Protege Partners. For his investment, he selected five hedge funds. Well, actually, five funds of hedge funds: in total, a collection of more than 200 individual funds. Warren Buffett took a very different approach. He chose the most basic and boring investment imaginable: a passive index fund that simply tracks the value-weighted performance of America's 500 largest public companies, the S&P 500. They started the bet on January 1, 2008, and immediately things weren't looking good for Buffett. It was the beginning of the global financial crisis, and the market crashed.
But the hedge funds could change their holdings and even profit from market declines. So they lost some value, but not as much as the market average. The hedge funds stayed ahead for the next three years, but by 2011, the S&P 500 had caught up. And from then on, it wasn't even close. The market average skyrocketed, leaving the hedge funds in the dust. After 10 years, Buffett's index fund had gained 125.8% versus 36% for the hedge funds. Now, the market's behavior was not unusual during this time. With annual growth of eight and a half percent, it almost matched the long-term average of the stock market.
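As a quick sanity check on those figures, using only the numbers quoted above, 8.5% annual growth compounded over the ten years of the bet comes out very close to the 125.8% total gain:

```python
# Sanity check on the figures quoted above for the 10-year bet.
annual_growth = 0.085                        # "eight and a half percent" a year
total_gain = (1 + annual_growth) ** 10 - 1   # compounded over the 10 years
print(f"{total_gain:.1%}")                   # ~126.1%, in line with the 125.8% quoted

# And in reverse: the annual rate implied by a 125.8% total gain over 10 years.
implied_rate = (1 + 1.258) ** (1 / 10) - 1
print(f"{implied_rate:.2%}")                 # ~8.49% a year
```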
So why did so many investment professionals, with years of industry experience, research at their fingertips, and strong financial incentives to perform, fail to beat the market? Well, because stocks are a low-validity environment. In the short term, stock price movements are almost completely random. So the feedback, although clear and immediate, doesn't really reflect anything about the quality of the decision making. It is closer to a roulette wheel than to chess. Over a 10-year period, around 80% of all actively managed mutual funds fail to outperform the market average. And if we look at longer periods of time, that underperformance rises to 90%. And before you say, "Well, that means 10% of managers have real skill," consider that simply by chance, some people would beat the market anyway. Portfolios picked by cats or by throwing darts have been shown to do precisely that.
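To see how luck alone can produce a crop of "market beaters," here is a purely illustrative simulation. Everything in it is assumed (the fee, the size of the random year-to-year deviation from the market); the only point is that even managers with zero skill sometimes come out ahead after ten years.

```python
# Purely illustrative simulation with made-up numbers (fee, year-to-year spread):
# even managers with zero skill sometimes beat the market over 10 years by luck.
import random

random.seed(0)
N_MANAGERS, YEARS, FEE = 10_000, 10, 0.015   # assumed 1.5% annual fee

def beats_market() -> bool:
    relative = 1.0                           # value relative to the market index
    for _ in range(YEARS):
        noise = random.gauss(0.0, 0.06)      # zero skill: pure random deviation from the market
        relative *= 1 + noise - FEE
    return relative > 1.0

winners = sum(beats_market() for _ in range(N_MANAGERS))
print(f"{winners / N_MANAGERS:.0%} of zero-skill managers still beat the market over {YEARS} years")
```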
Plus, there are nefarious practices, from insider trading to pump-and-dump schemes. Now, I don't mean that there are no expert investors. Warren Buffett himself is a clear example. But it is much harder to become an expert investor because of the low validity of the environment. Brief side note: if we know that stock picking will generally produce worse results in the long term, and that what active managers charge in fees is rarely offset by better performance, then why is so much money invested in individual stocks, mutual funds, and hedge funds? Well, let me answer with a story. An experiment was carried out with rats and humans, where there is a red button and a green button, either of which can light up. 80% of the time, the green button lights up.
And 20% of the time, the red button lights up, but at random, so you can never be sure which button will light up next. The task of the subject, whether rat or human, is to guess in advance which button will light up, by pressing it. If the rat guesses correctly, it gets some food. And if it guesses wrong, it gets a mild electric shock. The rat quickly learns to press only the green button and accept the 80% win rate. Humans, on the other hand, mostly press the green button, but from time to time they try to predict when the red light will come on. And as a result, they are right only 68% of the time.
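That 68% is just the arithmetic of probability matching: if you guess green 80% of the time and red 20% of the time, independently of which button actually lights up, your expected hit rate is 0.8 × 0.8 + 0.2 × 0.2 = 0.68, versus 0.80 for always guessing green. A minimal check:

```python
# Probability matching vs. the rat's strategy of always picking green.
p_green = 0.8                                # the green button lights up 80% of the time

matching = p_green * p_green + (1 - p_green) * (1 - p_green)  # guess in proportion to the lights
always_green = p_green                                        # always guess the likelier button

print(f"probability matching: {matching:.0%}")      # 68%
print(f"always guess green:   {always_green:.0%}")  # 80%
```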
It is difficult for us to accept average results. We see patterns everywhere, even in randomness, and so we try to beat the average by predicting the pattern. But when there is no pattern, that is a terrible strategy. Even when patterns do exist, timely feedback is needed to learn them. And YouTube knows this, so within the first hour after you publish a video, they tell you how its performance compares to your last 10 videos. There are even confetti fireworks when the video is number one. I know it seems silly, but you have no idea how powerful this reward is, and how much effort YouTubers put into chasing this supercharged dopamine hit.
To understand the difference between immediate and delayed feedback, psychologist Daniel Kahneman contrasts the experiences of anesthesiologists and radiologists. Anesthesiologists work alongside the patient and get immediate feedback: is the patient unconscious, with stable vital signs? With this immediate feedback, it is easier for them to learn the regularities of their environment. Radiologists, on the other hand, do not get quick feedback about their diagnoses, if they get any at all. This makes it much harder for them to improve. Radiologists typically correctly diagnose breast cancer from X-rays only 70% of the time. Delayed feedback also appears to be a problem for college admissions officers and recruiting specialists.
After admitting someone to college or hiring someone at a large company, you may find out how they did only much later, or never. This makes it harder to recognize the patterns of an ideal candidate. In one study, Richard Melton attempted to predict the grades of freshmen at the end of their first year of college. A group of 14 counselors interviewed each student for between 45 minutes and an hour. They also had access to high school transcripts, several aptitude tests, and a four-page personal statement. For comparison, Melton created an algorithm that used only a fraction of that information as input: just high school grades and one aptitude test.
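The video doesn't show the formula itself. Prediction rules of this kind are often just a weighted sum of the standardized inputs, so here is a hypothetical sketch of that general shape; the weights, means, spreads, and example numbers below are placeholders, not values from the study:

```python
# Hypothetical sketch of a grade-prediction formula of the general kind described:
# a weighted sum of standardized high-school GPA and one aptitude-test score.
# All weights, means, spreads, and example inputs are placeholders, not study values.

def standardize(x: float, mean: float, sd: float) -> float:
    return (x - mean) / sd

def predicted_gpa(hs_gpa: float, test_score: float) -> float:
    z = 0.5 * standardize(hs_gpa, mean=3.0, sd=0.5) \
        + 0.5 * standardize(test_score, mean=1000.0, sd=200.0)
    return max(0.0, min(4.0, 2.8 + 0.6 * z))   # assumed baseline and spread, clipped to 0-4

print(predicted_gpa(hs_gpa=3.6, test_score=1250))  # roughly 3.5 for this made-up applicant
```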
However, the formula was more accurate than 11 of the 14 counselors. Melton's study was published along with more than a dozen similar results in a variety of other areas, from predicting who would violate parole to who would succeed in pilot training. If you've ever been denied admission to an educational institution, or been turned down for a job, it feels like an expert has weighed your potential and decided that you don't have what it takes to succeed. I was rejected twice from film school and twice from a theater program. So it's comforting to know that the gatekeepers of these institutions are not great predictors of future success.
So if you are in a valid environment and get repeated experience with the same events, with clear and timely feedback on each attempt, will you definitely become an expert after about 10,000 hours? Unfortunately, the answer is no. Because most of us want to be comfortable. For many tasks in life, we can become competent in a fairly short period of time. Take driving a car, for example: initially it is quite challenging. It occupies all of system two. But after about 50 hours, it becomes automatic. System one takes over, and you can do it without thinking much. After that, spending more time driving does not improve performance.
If you wanted to keep improving, you would have to try driving in challenging situations, such as new terrain, higher speeds, or difficult weather conditions. Now, I've been playing the guitar for 25 years, but I'm not an expert, because I usually play the same songs. It's easier and more fun. But to learn, you must practice at the limit of your ability, going beyond your comfort zone. You have to use a lot of concentration and methodically, repeatedly try things you're not good at. - You can practice everything exactly as it is and exactly as it is written, but at a speed where you have to think and know exactly where you are, what your fingers are doing, and how it feels. - This is known as deliberate practice.
And in many areas, professionals do not practice deliberately, so their performance does not improve. In fact, sometimes it declines. If you have chest pain and walk into a hospital, would you prefer the doctor to be a recent graduate or someone with 20 years of experience? Researchers have found that medical students' diagnostic skills increase with their time in medical school, which makes sense: the more cases you have seen with feedback, the better you can detect the patterns. But this only works up to a point. When it comes to rare heart or lung diseases, doctors with 20 years of experience actually diagnose them worse than recent graduates.
And that's because they haven't thought about these rare diseases in a long time, so they are less able to recognize the symptoms. Only after a refresher course were the doctors able to diagnose these diseases accurately. And you can see the same effect in chess. The best predictor of skill level is not the number of games or tournaments played, but the number of hours spent in serious solitary study. Players spend thousands of hours alone learning chess theory, studying their own games and those of others, and playing through compositions, puzzles designed to help them recognize tactical patterns.
In chess, as in other areas, it can be difficult to force yourself to practice deliberately, and that's why coaches and teachers are so valuable. They can recognize your weaknesses and assign tasks to address them. To become an expert, you have to practice for thousands of hours in the uncomfortable zone, attempting things you can't do yet. True expertise is amazing to watch. It seems like magic to me, but it isn't. In essence, expertise is recognition. And recognition comes from the incredible amount of highly structured information stored in long-term memory. Building that memory requires four things: a valid environment, lots of repetition, timely feedback, and thousands of hours of deliberate practice.
When those criteria are met, human performance is astonishing. And when they are not, there are people we consider experts who in reality are not. (techno sound) If you want to become an expert in STEM, you must actively engage with problems. And that's what you can do with Brilliant, the sponsor of this video. Check out this computer science course, where you can discover the optimal strategy for finding a key in a room. And you'll quickly learn how you can replicate your strategy in a neural network. Logic is another great course that I find mentally challenging.
You go from thinking you understand something to actually understanding it. And if it feels difficult, that's a good thing. It means you are being pushed out of your comfort zone. This is how Brilliant facilitates deliberate practice. And if you ever get stuck, you'll always have a helpful hint at hand. So don't fall into the trap of feeling comfortable doing what you already know how to do. Get into the habit of being uncomfortable and learning something new regularly. That is the path to lifelong learning and growth. So I invite you to check out the courses at Brilliant.org/veritasium, and I bet you'll find something there you want to learn.
Plus, if you click right now, Brilliant is offering 20% off an annual premium subscription to the first 200 people who sign up. So I want to thank Brilliant for supporting Veritasium, and I want to thank you for watching.
