
There is No Algorithm for Truth - with Tom Scott

May 31, 2021
Imagine that tomorrow Google announces it has invented a machine learning system that can distinguish fact from fiction, that can tell truth from lies. And since this is a magical thought experiment, in this thought experiment the machine learning system is right, all the time. And then they say they've connected it to Google search results and YouTube recommendations. So now, when someone searches for vaccines, only the scientific consensus appears. All the anti-vaccine material is on pages six and seven, where no one is going to look. YouTube videos about conspiracy theories start being recommended less and less, and the ones that debunk them keep appearing at the top.
It still matters whether a search result is attractive, relevant and well referenced, but now it also objectively matters whether it is true. And then you realise that this new system, this algorithm, disagrees with you on something really important and produces a result that you find abhorrent. Now, I'm struggling to find an emotive example for this that isn't too sensitive for an audience like this. So, since this is the Royal Institution in London and it's September 2019, let's imagine that this machine learning system concludes that it would be a good idea to have a no-deal Brexit. (*audience laughter*) Okay, no.
Don't applaud that, it was a low blow. That was a low blow, and there may be people here who actually believe that, so for those people, please imagine the opposite conclusion: that this system believes it would be a good idea to completely dissolve the UK into Europe. Look, this is a system that, in this thought experiment, produces objective truth, and it disagrees with one of your fundamental core values. Most likely, either, A: you just walk away. "Oh, okay, I guess I'll listen to it less and less, and I guess we'll just have to deal with that." Or, B: you decide that you need to shout louder instead, and that the algorithm is wrong.
So for the next hour, I want to talk about the state of science communication, and communication in general, for an audience that has wildly different ideas and areas of knowledge about how that works right now. I want to talk about why some science communication seems to reach everywhere and much of it doesn't, and to give advice to people trying to reach out and spread their truth, whether that's small individual groups publishing content, or large corporations and charities trying to spread a message. That will also include a dive into the concept of parasocial relationships.
Those strange one-way relationships that appear a lot in modern media. Because if you want to understand how to reach an audience, you need to understand what that audience is looking for. And I want to talk about a particular set of things that increasingly govern the media we consume and everything in our lives: algorithms. The ones that recommend the videos you see on YouTube, the search results on Google, the order your Twitter timeline appears in, and basically everything that includes recommendations on social media these days. And, from the corporations' point of view, what advertising to show you.
Now, there are a couple of caveats here. I have generalized this talk as much as I can, and I've run it past quite a few people in my industry, but I speak from a position of success. I'm lucky enough to have 1.9 million subscribers on YouTube right now. We didn't quite hit two million in time. Now, subscriber count is not a particularly useful metric; that's more a function of how long you've been on the platform and how many one-hit wonders you've had. It's more honest to say that my average science communication video gets between a quarter of a million and a million views.
And those could be about linguistics, which is what my degree is actually in. They could be about the basics of computer science, where I'm self-taught but have my scripts reviewed by experts. Or they could be about infrastructure and science, the interesting things in the world; that's where I go out there and defer to people who know what they're talking about. Now, my degree is in linguistics, with a master's in educational studies research. But ultimately I'm in this position because I spent 15 years throwing things at the Internet before anything worked. I was very lucky that the thing that turned out to work was science communication, going out and telling the world about things that interest me.
I'm even luckier that it involved filming on location. In recent years I've been lucky enough to experience zero gravity, I've been to the Arctic, and I've flown with the Red Arrows. And yes, that is mostly a brag, and mostly an excuse to show off the best photo of me that has ever been taken in my life. Have you ever looked at a photo of yourself and thought, it's all downhill from here? Because that's it. One more caveat: those of you who have attended a Royal Institution lecture before will know that there are generally two types.
There's one where a researcher with a PhD and an associate professorship talks about their research, and there's another where someone from arts and culture shares their experience. This is more of the latter. Some of what I say will be opinion, not fact, and hopefully this audience will be able to tell the difference. There are points here where I'll explicitly say that I don't have all the answers. And I also want to add a note about conflicts of interest: my company makes a lot of its revenue from ads that appear in and around YouTube videos.
Which means my rent is indirectly paid by Google Ireland. I can't imagine why they're in Ireland; there's no reason at all why they would have settled there instead of the UK. No one at Google has seen a word of this talk. They don't even know I'm doing it. I'm not an employee of theirs, but ultimately, as much as I'm willing to irritate that company and bite the hand that feeds me, they are indirectly paying my rent. I can try to represent the people who haven't been so lucky, the people the algorithm has turned on, but I'm pretty happy with them at the moment, and a lot of other people aren't. So, anyway, that's the plan.
This is the state of science communication in the English-speaking world at the end of the second decade of the 21st century. And to understand how it works, we have to start with the algorithm. "Algorithm", in this context, means something quite different from what people with a background in mathematics and computer science might think. "The Algorithm", referred to in the singular, is the almost anthropomorphised name given to this collection of machine learning systems. Last year I attended a conference for science communicators and, after about three or four hours, we realised we had to ban the word from conversation, because while many of the people on YouTube complained endlessly about it, anyone who wasn't on the platform simply found it confusing, and we wouldn't shut up about it.
So when I talk about "the algorithm", I'm talking about this kind of almost magical black box of code. The idea is that you set up this black box, give it a list of human-selected examples, and it works out their distinguishing characteristics, producing some kind of categorisation system. Then, when you present it with novel examples, it categorises them and learns from the feedback. And those distinguishing features may be completely new, or completely unknown, to humans. One of Google's recent AI projects was analysing retinal photography, and a recent paper claims that, with surprising accuracy, it was able to look at retinal photographs and determine sex with 97% accuracy, age to within four years, and better-than-chance estimates of smoking status, blood pressure and major adverse cardiac events.
Now, ophthalmologists can't detect any of those things themselves, and they're not really sure how the machine did it. It's okay to be skeptical of some of those claims. Maybe there's a difference in the metadata; maybe the retinal photography machine, I guess it's technically a camera, is configured or focused differently depending on some of those attributes. But the paper does a good job of covering its bases there, and maybe a human could also be trained to detect those differences. The thing is, no one ever bothered, when you can just look at the chart next to the patient.
But the simplest black-box machine learning system is essentially an image categoriser. You give it a lot of pictures of cats and a lot of pictures of things that aren't cats, and then you ask: is this new image a cat? Which seems like it would be useless, until you try to design a filter for adult content and feed it porn and non-porn, and the goal is to have a classifier that can look at a photo it has never seen before and work out whether it should be shown to all ages or not. Of course, it's not that simple. There are stories after stories of machine learning systems that have failed due to bad training data or, more likely, biased training data.
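As an aside, the train-then-categorise loop described above can be sketched in a few lines. This is a deliberately toy version, not what YouTube or Google actually run: the "images" here are invented two-number feature vectors, and the "learning" is just a nearest-centroid rule.

```python
# A toy version of the "is this a cat?" classifier described above.
# Real systems learn features from pixels; here we use invented
# two-number feature vectors purely to illustrate the loop:
# human-selected examples in, a categorisation system out.

def centroid(points):
    """Average of a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(cats, not_cats):
    """'Training' here is just remembering the average of each class."""
    return {"cat": centroid(cats), "not_cat": centroid(not_cats)}

def classify(model, example):
    """Label a novel example by whichever class average it sits closest to."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], example))

# Human-selected training examples (entirely invented numbers).
cats = [[0.9, 0.8], [0.8, 0.9], [0.95, 0.85]]
not_cats = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25]]

model = train(cats, not_cats)
print(classify(model, [0.85, 0.9]))   # near the cat examples -> "cat"
print(classify(model, [0.05, 0.1]))   # near the non-cat examples -> "not_cat"
```

The point of the sketch is the shape of the system, not the method: whatever features the real black box discovers, the outside world only ever sees "cat" or "not cat".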
And this is a little outside my scope; I know Hannah Fry covered it in her lecture here a while ago, but I have an example that is very close to my heart. YouTube uses a machine learning system to try to detect whether videos are suitable for advertisers to place their ads next to. It was released too quickly, before it was completely ready, because YouTube had one of the many small scandals they have and needed to do something to appease their advertisers. So this is a very condensed summary based on unofficial conversations, hints and rumors.
I'm not breaking any NDA here, but the story goes that they fed the machine learning system a big block of videos that were definitely 100% safe for advertisers, and then a big block that definitely wasn't, and they told the system to be pretty conservative, because it was just a first line of defense: if it deemed your video inappropriate, it could send it to humans for review. And this, in my industry, is quite controversial, but I don't think it's an unreasonable solution to a very difficult problem. YouTube has 500 hours of... I'm not saying "content".
YouTube has 500 hours of video uploaded every minute. That's a human lifetime every day. It's not unreasonable for a machine learning system to be the first line of defense, as long as there's human review behind it, and today it's working reasonably well, with a few high-profile exceptions. But the problem, I was told, was that there was a bias in the training data. Videos... I should say channels, people. People who talked about LGBT issues were more likely to talk explicitly about sex in some of their videos. Not all, not most, but enough that the machine learning system found a correlation between people talking about LGBT issues and people talking about explicit sex.
Again, only in a small number of videos, but enough that, when the machine learning system figured out that something had to do with being gay, it deemed it more likely to be unsafe. Now, YouTube's CEO said in a recent interview: "We work incredibly hard to make sure that when our machines learn something, because a lot of our decisions are made algorithmically, our machines are fair." I know some YouTube employees. I'm friends with some YouTube employees. I think they work incredibly hard to minimise that bias, but it's still there. Algorithmic bias is a major concern for all machine learning systems.
Systemic biases in the rest of the world already made it onto social networks without machine learning intervening. Of the top 10 highest-earning creators on YouTube right now... well, as of 2018, as of last year, of the top 10, all are men. And I'm very aware, as I stand at the Royal Institution giving this talk, that one of the reasons I got that audience in the first place, one of the reasons I ended up here, is because I'm a white man with a British accent, which sounds authoritative. Trying to make sure AI doesn't inherit these systemic biases is an incredibly difficult job, and it's one for Hannah Fry and her team, not for someone with a degree in linguistics.
When YouTube turned that recommendation engine over to machine learning, they configured it to increase watch time, and that's what they told everyone: if people stayed watching your video to the end, even if it was 20 minutes long, then the system considered it good. At that point they ran into Goodhart's law: "When a measure becomes a target, it ceases to be a good measure." So people made longer videos and put everything important at the end, forcing people to watch all the way through, and worse videos got recommended as a result. Now, YouTube's official line is that they reward high-quality videos that keep people on the platform.
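Here is a toy illustration of that Goodhart's-law failure, with entirely invented numbers, not real YouTube data: once "average minutes watched" becomes the target, a padded 20-minute video beats a tight 5-minute one even though viewers enjoy it less.

```python
# Invented numbers illustrating the watch-time metric being gamed.
# Two videos: a tight 5-minute explainer most people finish, and a
# padded 20-minute video where the important bit is at the end, so
# even bored viewers sit through most of it.

videos = {
    "tight 5-min explainer": {"length_min": 5, "avg_fraction_watched": 0.90},
    "padded 20-min video":   {"length_min": 20, "avg_fraction_watched": 0.60},
}

def watch_time(v):
    """The measure-turned-target: average minutes watched per viewer."""
    return v["length_min"] * v["avg_fraction_watched"]

# Rank by the target metric, highest first.
ranked = sorted(videos, key=lambda name: watch_time(videos[name]), reverse=True)
print(ranked[0])  # the padded video wins: 12 minutes watched vs 4.5
```

Notice that the padded video "wins" despite a far lower completion rate; the metric rewards length, so creators are pushed toward length.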
And that may not be videos on the same channel, by the same creator, or in the same genre. In 2017, Jim McFadden, who was then technical lead for YouTube recommendations, talked about the new engine they had, which came out of a department wonderfully named Google Brain. "One of the key things it does," he says, "is it's able to generalise. Whereas before, if you watched this video from a comedian, our recommendations were pretty good at saying, here's another one just like it. But the Google Brain model figures out other comedians who are similar but not exactly the same, even more adjacent relationships.
It's able to see patterns that are less obvious." When choosing videos to recommend, one of the basic ideas of the new model is: if someone comes to YouTube because of your video, that's marked as good; and if someone doesn't leave YouTube after your video, if they don't stop watching, that's also good. So the black box picks up those signals and works out what will keep people on the platform, what will keep them watching videos and, most importantly for Google, seeing the ads in between. By the way, apparently Google also serves more ads to people who are more tolerant of ads.
If you want ads to appear less often before your videos, skip more of them! That was bad advice from someone who makes money from this! Because you have to remember that all these big companies, Google, Facebook, Twitter, are essentially advertising companies. Almost all of their revenue comes from being the largest marketing, advertising and targeting operation the world has ever seen. Their ideal is that every ad is perfectly targeted to you. As you all know, they're not there yet. But it turns out that if you reward videos that keep people on the platform, you end up with conspiracy theories and clickbait.
Yes, all the companies that have algorithms are working on fighting disinformation, because they're aware that it's a public relations disaster for them. But they're doing it in English first; I'll come back to that later. From a creator's perspective, "The Algorithm" is often seen as something of a Skinner box, an operant conditioning chamber: a food-pellet dispenser that might give you some money if you press the lever hard enough. Google, YouTube and Twitter will never tell you exactly what the algorithm is looking for, because every bit of information they give out is another opportunity for people to abuse it and spam it.
But from my perspective, the pellets come out randomly, and we all develop superstitions, and we all keep pressing the lever. I was lucky enough this year to have a conversation with Google's Head of Recommendations Product, the person responsible for the algorithm. It seems the idealised version of what they want those recommendations next to your video to be is, and this is something that shows my age, a bit like a TV guide, a bit like the Radio Times. Every channel should always have something that is interesting to you, all the time. There should be a layer of transparent glass between the audience and what they want to see. And the question is, as they weigh all those videos, how much do they put their finger on the scale? How much do they build that TV guide for the worthy, honest, truth-seeking version of you, and not for the clickbait-conspiracy version of you? Because ultimately, yes, sometimes you want to watch a documentary, but sometimes you want to watch someone trip over and hurt themselves. "You've Been Framed!" existed for a reason.
And I'm not just talking about YouTube; I mean Twitter, I mean every algorithmic system out there. If all they recommend is quick, short doses of dopamine that just pull you in and spit you out, that's not a long-term survival strategy. That spirals toward the lowest common denominator, which ultimately hurts the world. But if everything they recommend is worthy, thoroughly fact-checked and educational, then only a minority of people will watch it, and the rest will go somewhere else, to the company that does have the other stuff. In finding that balance, finding that solution... I said there would be analogies to older media.
That's what television commissioners still do; it's what the people who program YouTube's algorithm are trying to do. And let's be clear: it's an unsolvable problem. There is no magic balance point in the middle that makes this work perfectly. It's about finding the least bad option. You can't have a successful platform that is exclusively clickbait, but you also can't have a successful platform without clickbait, because either way, advertisers will leave, viewers will get tired, and it's not sustainable long-term. A lot of research has been done on the effects of the algorithm, formal and informal, showing how very, very easily you can go from something apolitical, click, and find something slightly political.
Then click again and find something that's a bit clickbait but still honest; then maybe something about moderately conservative politics that's a little disingenuous; and then, on the next click, something about why Hillary Clinton is evil and Donald Trump is the greatest thing that has ever happened to the universe, or vice versa. The most notable recent research into online radicalization is a deep dive from The New York Times and, as I say, the companies are working in English first, because The New York Times looked at radicalization in Brazil and included one of the most sobering things I've read in a long time: "Right-wing YouTubers had hijacked already-viral conspiracies about Zika and added a twist: women's rights groups, they claimed, had helped engineer the virus as an excuse to mandate abortions." Quoting Zeynep Tufekci, who was referring to research in The Wall Street Journal: "Videos about vegetarianism led to videos about veganism.
Videos about jogging led to videos about running ultramarathons. It seems you are never 'hardcore' enough for the YouTube algorithm. It promotes, recommends and spreads videos in a way that seems to constantly raise the stakes. Given its billion users, YouTube may be one of the most powerful radicalizing instruments of the 21st century." Ben McOwen Wilson, managing director of YouTube in the UK, obviously disagrees with that. In an interview, he said that the platform "reduces the spread of content designed to mislead people and raises up authoritative voices." Note that he did not say those voices were true.
Or that those voices were right. He said they were authoritative. There's an old Jonathan Swift quote, and it took a lot of research to prove it actually was a Jonathan Swift quote: "Falsehood flies, and the truth comes limping after it." Quick show of hands: who saw this tweet? A few people, yes, around ten or twelve in the audience. I mean, look, it did big numbers. The idea that you store the Marmite jar on its side to make it easier to get out, that did big numbers. Who here saw the confession and the retraction? Okay, about half as many people.
Well, sure, that's harmless enough. By the way, that's now the Marmite guy; that's how he's known. Everyone assumed someone had fact-checked it. It's not that important. But everyone always assumes that, especially if it agrees with our preconceptions. I mean, the long-term solution is to teach information literacy in schools, but as real-world education policy, that's about as useful as saying we should reduce our carbon emissions. It's true; how do we get there? I don't have the answer to that. Douglas Adams, in his novel "Dirk Gently's Holistic Detective Agency", came up with the idea of a fictional piece of software called "Reason", a program that lets you specify in advance the conclusion you want to reach, and only then feed it all the facts.
And the program's job, which it could do with great ease, was simply to construct a plausible series of logical steps to connect the premises to the conclusion. In the novel, it was sold to the Pentagon, highly classified, and explains why they spent so much on Star Wars missile defense. If you want to be convinced of something, YouTube, Twitter and all the other networks will happily find you people to convince you. If you have lost your faith in God, there will happily be dozens of evangelical preachers from all sorts of denominations who will bring you back into the fold, and dozens of angry atheists who will make sure you stay out of it.
Take your pick of which direction you want to fall. If you want to know who's to blame, or what's wrong with the world right now, I can find you hundreds of people who will make you apoplectically angry at billionaires, immigrants, or both. Whichever path you choose, there will be someone with authority to tell you. So how do we define that authority? There used to be gatekeepers. Well, it just seemed like there used to be gatekeepers. Science communication happened in magazines, on television and on radio, which meant there was no formal peer review system, but you knew there were researchers behind it, and standards, and quality control to keep these things fairly accurate.
There were professionals. There was David Attenborough, there was James Burke, and many other men, and they were all men, with authority, with the BBC to back them up. Which is certainly true, but for every Planet Earth or Connections, somewhere on some channel there would be Ancient Mysteries, or In Search Of..., or Ancient Aliens. The Bible code became a phenomenon in the 1990s: the idea that there were messages hidden in the Scriptures that could be found if you just stepped through the text, skipping exactly the right number of letters each time. The Daily Mail loved it.
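That letter-skipping trick, an "equidistant letter sequence", is simple enough to sketch. The sample sentence below is my own contrivance, built so that a particular start point and step size "find" a word, which is exactly how the trick works: with enough starting points and step sizes, a long text will contain almost anything you go looking for.

```python
# The "Bible code" trick: start at some letter and step through the
# text a fixed number of letters at a time. With enough start points
# and step sizes, a long text will "contain" almost any short word.

def equidistant_sequence(text, start, step, length):
    """Take `length` letters from `text`, beginning at index `start`
    and skipping `step` letters each time. Non-letters are ignored."""
    letters = [c.lower() for c in text if c.isalpha()]
    return "".join(letters[start::step][:length])

# A contrived sample, chosen so every 3rd letter spells something.
sample = "rice pea pod"
print(equidistant_sequence(sample, 0, 3, 4))  # prints "read"
```

The lesson, of course, is the opposite of what its proponents claimed: finding such sequences is evidence of combinatorics, not hidden messages.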
They put it right on the front page, right up above the masthead. Not because it was true, but because they knew it sold newspapers. We had gatekeepers. We definitely had gatekeepers. But I'm not convinced that, from the public's perspective, it was that different from what we have today. Online authority often comes from having an audience. You know, why am I here giving this talk today? It's not because I've done 15 years of painstaking research on this topic and now present to you my doctoral thesis. It's not because I have a particularly wide or deep range of knowledge.
It's because I've worked with the Royal Institution before, and you know I can present. I don't pretend to know your motivations, Sean, but I think it's safe to say that I'm here in part because you knew you could sell tickets and you knew you'd get some online clicks by having my name attached. I'm not saying that's all of it; I'm just saying it was probably a consideration. There's a reason that every major British space documentary over the last 15 years has been presented by Professor Brian Cox. Except they're not just documentaries about space. That's Wonders of Life! It was about natural history.
Sorry, I just have to take a moment. Oh, well, there it is. That joke would have been much better if it had come in at the right time. Can we take a minute to appreciate how bad a Photoshop job that is, by the way? The lighting doesn't match, he has a halo around his head, and that right arm isn't even cut out properly! That's the official BBC press photo! Anyway, this was billed as a physics-based approach to natural history, and yes, it covered some pretty advanced physics concepts, but let's be honest, from a TV commissioner's perspective, it's a bit of a stretch.
It was almost certainly an excuse to get Brian Cox on television, in front of the audience that wants him, to get those ratings. I'm sure it was well researched, I'm sure they did a very good job, but Wonders of Life with some extremely qualified biologist-and-astrophysicist no one had ever heard of wouldn't have had as many people watching, and it would have been a lot harder to commission. What does it cost to pick someone less qualified but known to the audience, instead of the best person for the job? Sister Wendy was not formally an art historian, her degree was in English, but in the 1990s she presented five BBC series on art history, because the public liked her and listened to her.
In its obituary for her, The New York Times said: "Her insightful, unscripted commentary connected emotionally with millions of people." Dani Beck, a Norwegian neuroscientist, recently said the following: "Your desire for a platform or interest in various sciences does not supersede your ethical responsibility and duty. Do not speak on topics you are not qualified to address. There are others who know better and can do the job. Consider amplifying their voice instead." And I agree with that, which is why the rest of this lecture will be presented by... no, it won't, it won't. But just for a moment you believed me, and how did it feel, to have paid to come here and see that?
In my first videos, my educational series featured "the stereotypical guy who hasn't done any research and thinks he knows everything, spouting facts without sources." There are still many people who stand in front of a green screen, or use voiceover and footage, and don't provide citations or references, and just ask people to trust them. I quickly learned not to do that. But the problem is, that's what the audience wants. A loyal audience, whether on YouTube, Facebook or Twitter, doesn't want to see someone else's voice amplified. If I hit the retweet button on Twitter and just take someone else's message, with their face, and send it to my audience, few people will pass it on.
If I quote-tweet instead, taking their message and adding a bit of useless commentary around it, with my face on it, that goes a lot further: more people engage with it, more people see it. On YouTube, creators get retention statistics: you can see, on average, where people stay with a video and where they leave. I'll note that this graph goes up to 120 percent. There are a couple of reasons for that. One is that sometimes people go back and watch a bit again, which counts twice, so it can go higher; but it's also because the YouTube graphs API just isn't very good at percentages.
There's a big drop in the first few seconds up there; that's people saying "oh, I don't like that", or "I don't like him", or "this is not what I thought it was". There's a big drop at the end, at what's now called the end card and used to be called the credits. But other than that, there's a constant downward slope as people get bored, and the algorithm will look favorably on you if that slope is less steep. But if I go somewhere on location and hand over to experts, where experts are doing most of the video, that graph will be steeper and retention will be lower.
People get bored more quickly and share the video less. So how much do you pander to your audience? How much do you tell them what they want to hear, especially when, on YouTube, there is literally a dollar amount attached to every video and every view? How much do you make what your audience wants, at a cost to society? You know, it's not just viewers who get sent down a radicalization rabbit hole; so do creators, when you see the clickbait that brings in the numbers and the money. It can be very, very tempting to simply cut your losses.
Double down on the clickbait and embrace it; make the money while it's coming in, make hay while the sun shines, because in a couple of weeks it could all be gone. But the light that shines twice as bright lasts half as long. In 1988, Jimmy Cauty and Bill Drummond, the KLF, wrote "How to Have a Number One the Easy Way: The Manual". They'd had a novelty pop song top the charts a few months earlier, and The Manual was a tongue-in-cheek, detailed guide to success in the music industry: how to reach number one with no money and no musical talent. Which, they themselves said, is what they had achieved.
Almost all of it is outdated now; there's a wonderful section a couple of chapters in which says that, by the mid-90s, someone in Japan will have invented a machine that lets you do all of this from home, without needing a recording studio. That was correct. But right from the start, some of it is as true now as when it was written: "Most number one hits are achieved early on in an artist's public career, before they have been able to establish their reputations and build a solid fan base. Most artists are never able to recover from having one, and it becomes the millstone around their necks that all subsequent releases are compared to." "Either the artists will be destroyed in their attempt to prove to the world that there are other facets to their creativity, or they willingly succumb and spend the rest of their lives as a travelling freak show, peddling nostalgia for those now distant, carefree days." The Cheeky Girls come to mind. The KLF wrote that ten... twenty years before YouTube came along, and it's exactly true for people online: a number one doesn't create a career, it can kill a career.
If you only have one hit, that's all anyone will want to see. A single video doesn't make you a YouTube star; a single tweet doesn't get you a book deal. Well, to be honest, that hasn't happened in five years anyway. A number one just makes you the person who repeats that catchphrase until everyone gets tired of it. What you want to do is build a catalogue of minor hits over time, earn a little respect, hone your craft, learn your trade, and then, once you have a bit of an audience, you can start aiming higher. Bands, singers and the music industry are actually a very good analogy for how the Internet works today.
After all, you know, Brian Cox plays stadium shows and tours all over the world. For people hoping to communicate science to the world, it's like starting a band, launching a music career, or going solo as a singer. It won't pay the rent for the first few years, or decades, or maybe ever. And even if you're one of the lucky ones, even if you make it, it won't set you up for life. And if you're an organization trying to spread your message: well, I used to think people had a built-in corporate bullshit detector. I used to think that anything an institution or an advertising agency published was doomed to fail simply because it wasn't authentic, and that's not true.
That's not true at all. Remember what I said before: 500 hours of video uploaded, mostly unwatched, every minute. You know, 82 years of video content a day, and most of it took no time or effort to produce, but each of those videos has more or less the same chance as that project your organization spent six months and a million dollars on. The most popular recurring series I do now is very simple; it's deliberately fast-paced. It's a ten-minute single-take monologue to camera about computer science. The camera doesn't move. The camera doesn't cut. There are none of those jump cuts that a lot of people use.
Now, I can completely understand why people use them — it's much easier. Sometimes there are occasional graphics, but mostly it's just me and the camera. And that breaks all the rules established by television when I was growing up; it breaks all the rules that the corporate media think they need. To understand why this works — it's not about 'spectacle' but about a person explaining — we have to talk about parasocial relationships. The term 'parasocial relationship' was coined by these two, Donald Horton and Richard... Wall... Wohl... I really should have practised that. The term means that there is a difference between the spectator — the viewer, as we'd call them now — and the performer — what we now call the creator. The viewer is emotionally invested in the person on the screen, but the person on the screen has no idea the viewer exists; and even if they do, there's such an imbalance of power there that it couldn't be a friendship.
Parasocial relationships are not something new, and neither is turning them into money: any celebrity who has ever had an official fan club with a membership fee was doing exactly that. Actually, in the late '80s and early '90s there was a trend among celebrities to set up phone numbers that were their 'personal number' or their 'personal voicemail', and some of them made a lot of money from it. Here's Corey Feldman and Corey Haim — and if you think I didn't track down that ad just so I could play it and stop it halfway through for everyone's attention, you're dead wrong. "You can listen to their private phone messages and get their personal number, where you can leave them a message of your own.
Two dollars for the first minute, forty-five cents for each additional minute. Ask your parents before calling." Yeah... I haven't tried the number. I can't imagine it still works. But those celebrities didn't have Twitter. Twitter is effectively someone's personal number. You know that a celebrity is always on their phone, always typing away, sending their thoughts out to the world. So why not reply and send them a message? They might notice. They might... they might even respond to you personally. You know, you... you might get their attention, and suddenly it's not a weird parasocial relationship anymore.
They're your friend. More than that: because they're on social media, ah... you know, you can try to get that attention. You can try to get that... that exaggerated fandom — but it's also performative. It can also be competitive, because now all the fans can see each other doing this. I have a couple of friends who are on the receiving end of that kind of scary Beatlemania-style fandom — the kind of kids who screamed at Take That concerts — except it's not organised through magazines and small fan clubs now. These aren't kids who come to a concert, scream at their idols, and then disperse.
They have notifications turned on for when their idol tweets, so they can get the first reply. They have group chats where the only thing the people in them have in common is that they're all fans of this one person. If you've ever wondered why... why kids would rather sit and watch a stream of someone playing a video game instead of just playing the game themselves: it's not about the game, it's about the person playing it. A stream of a video game on its own isn't interesting. It's someone you know — or someone you think you know — playing a video game. It's just hanging out with a friend, with a chat
alongside where a load of friends are hanging out. You're all just hanging out together — except one of you has a lot more power and influence than the others, and is, often indirectly, asking for money. You'll notice I'm not using slides during this part. I don't want to call out any particular person for what is basically just standard practice now. Patreon, and Twitch subscriptions, and YouTube memberships — all these tools to raise money for individual people — are not inherently a bad thing. Patreon meant that science communicators can support themselves even though sometimes their content isn't advertiser-friendly. People who talk about sex education, mental health, or antique weapons can get paid for their work, and maybe not even have to hold down a separate job, because there are people in the world who have thought: this should exist, and I'm willing to donate money to make that happen.
Animators, writers, podcasters — the kind of people who work incredibly hard and are only able to do what they do because people have decided to support them. That's something brilliant. But where it starts to get unsettling, where it starts to get a little strange, is when it's not about supporting someone's craft but about selling friendship. And if you think it's weird when I put it that way: yeah, you should. It's... it's really weird. If you watch one of the really popular video game streamers on Twitch — which is a platform that's just video game streaming — you'll see them.
They're almost always talking. They're watching the chat. They're reacting to the messages coming in. They're reading them aloud. They're responding to them. They're shouting out names they've seen. They're greeting people who have been part of that community for a long time. They're being friendly and open, for hours at a time — exhausting emotional labour. They'll thank anyone who sends them a tip or, better yet, subscribes, because in a world where Netflix costs $13 a month, a subscription to a single person on Twitch can cost $5, $10 or $25 a month — and depending on what you pay, that can affect the perks you get and how much attention you receive.
If someone renews their subscription — 'resubs' — they can choose to have it announced across the whole stream, with a little animation saying how long they've been subscribed. You'll see people on Twitch say, "Hey, thank you for being part of the cult for four years." That's... literally language I heard while researching this, and it seems normal to anyone embedded in that culture. Now, if you're a science communicator, it won't be quite like that — but it might be behind-the-scenes access. You know, you might get to watch someone's videos early and leave comments, or get your name in the credits. It might give you access to a special private chat room just for members. Or, if you're giving someone maybe $50 a month, maybe there's a private video chat with the creator, just for people who spend that kind of money.
Maybe that's fine. Maybe that's just where social norms are heading now, and I'm the old man looking at it going: what the hell are the kids doing? That could be the case. But it can essentially be selling friendship — and, again, it doesn't have to be that way. There are people who use it exclusively — a lot of people who use it exclusively — as a way to fund their work. But here are some of the tips that Patreon gives on how to get more people signing up for monthly subscriptions: "Bring your audience along for the ride" by sharing photos, videos and anecdotes from your life. "Become vulnerable, within reason."
"An emotional connection to you as a creator can be key to turning a fan into a patron." An emotional connection can be key to turning a fan into a patron. There's a very, very blurry line between being a fan of someone's work and being a fan of someone — and that's a line I find really uncomfortable, because part of my brain doesn't do parasocial relationships. It never has. Maybe there's something wrong up here. But for me there's a big distinction between "I'm a fan of X's work" and "I'm a fan of X". I know that linguistically one can be shortened to the other.
But to me those are very separate concepts. I like the work of the mentalist magician Derren Brown. I have blatantly copied some of the... techniques he uses in his talks. Some of the tricks — not, like, magic tricks, but some of the rhetorical tricks — shamelessly, from his live shows: the idea of setting something up at the start, letting the audience forget it, and then bringing it back at the end and revealing it's the key to everything. I shamelessly ripped that off from Derren Brown, more than once. I really enjoy his work. I think he's a great performer. But I couldn't care less about the man himself, because I don't know him.
He's a stranger. Parasocial relationships, and everything about the way these one-way, one-to-many relationships work, blur that line in the service of greater profit. I have a strong memory from... from when I was a kid — maybe about this high — of being asked at school to write something about a personal hero. And I didn't have one. In retrospect, with the benefit of adulthood, obviously, I should have talked about my parents — but while the kids around me were writing about sports heroes or actors or whoever, I was just stumped. It wasn't until I was much older that I realised that, for most people, liking someone's work and liking someone are the same thing. That's blindingly obvious to most people, I'm sure, but to young me, that was a revelation. ...And that was revealing something personal and vulnerable, within reason. That's helping to create an emotional connection between the audience and me. Check out the collection buckets that'll be by the door on your way out. ...God, it's like a cold breeze just blew into the room there. Right. — So why have I covered all that in such detail?
What does all that mean? What does it have to do with science communication? Very early in the history of television, it was discovered that the people who were good at getting an audience were not the ones who said "Ladies and gentlemen!" — they were the ones who said "Hey. Hello." It wasn't the people who went "How is everyone tonight?!"; it was the people who looked into the camera and said, "How are you?" It's the difference between talking to the audience and talking to the viewer. There was a difference between a nature documentary with archive footage and a voiceover, and a David Attenborough nature documentary.
There's a difference between a series about the wonders of life, and Brian Cox's Wonders of Life. Sound quality, accuracy, video quality: they all matter, but not as much as having someone on the screen that the audience can connect with. My friend... Dr Simon Clark vlogged his PhD at Oxford — for the more old-school people in the audience, 'vlog' essentially means video diary. Um, Simon now has a PhD in atmospheric physics. In 2018 he stopped making personal videos about his life in favour of science communication, and wrote this post on why and how. Here's a quote from it: "The motivation to watch someone's struggle with the monumental task of researching the PhD was primarily what drew viewers to watch...
I was the product. By this I mean that my lived existence on earth was a commodity, something that could be bottled, refined and sold." Simon's recent science communication videos are really good, but some of his audience didn't stick around when he went from talking about his life to talking about his work. He's had to build a new audience — one interested in his post-academic career and the topics that interest him now. He's getting there. He's doing very well. But Dr Clark is in the minority, because he's qualified to talk about his subject.
There are countless people out there who speculate, or repeat wrong facts, or just outright lie — or try to sell essential oils by claiming they cure cancer. Hopefully they'd be outweighed by people like Dr Clark, who would be authoritative. But often that's not the case. Authority often comes from having an audience, and having an audience all too often comes from that parasocial emotional connection with people. If you're going to try to talk about science to the world as an anonymous voice, or as a corporation just saying words, it doesn't really matter how rigorous your sources are or how innovative your research is.
You have to tell people the human story in it — preferably yours — and, to some extent, you have to be parasocial. Now, you might think: well, on television we had gatekeeping, we had certain standards. Sure. It may well have still been about parasocial relationships, and there were occasional failures, but at least the people we related to were... even if they weren't technically qualified — and again, to be clear, most of the people doing this today are extremely qualified, and even if they aren't, there's a whole team digging behind the scenes — but TV is still the medium that gave us most of the ghost hunters and the hauntings. And even with most of the ghost hunters, you weren't watching because you're interested in ghosts.
You were watching to see the presenter go "aagh!" at something that wasn't there. The online world is often seen as an uncontrolled, unmoderated place where anyone can say anything about anyone — but recent years have shown that's not the case either. Which brings me to the last main part of this: it's about echo chambers and Nazi bars. The final piece of the puzzle in figuring out why some lucky broadcasts go around the world and others don't is to talk about the people they're broadcast to — the groups in which someone can choose to amplify your voice or condemn it; where it's passed from person to person to person, from group to group. And some things spread because they're interesting or entertaining, and for that reason alone.
But often it's because they support existing views — because they reinforce the group's views — or because they're diametrically opposed to the group's views and can create bonds over despising them. Those are the two extremes of online moderation. Let's talk about the Nazi bar first. This is what happens when sites set themselves up as some kind of bastion of free speech where... anything legal — and... and by that, they usually mean legal under the laws of the United States — anything legal can be posted freely. And, as you'll see, it's created by the kind of well-intentioned libertarian tech thinking coming out of Silicon Valley. Reddit, which is one of the main centres of discussion, at least among tech-savvy Americans, is perhaps the canonical example.
It was set up as... as this perfect bulwark of free speech: if it's legal, Reddit will let you say it. The individual communities there may have their own rules, but beyond that, it's a meritocracy — the best ideas will rise to the top. At which point the Nazis inevitably arrived. Ah... and that's not just a label. I'm not just... just smearing people with right-wing views there. I'm literally talking about modern neo-Nazis, and unlike most... like many European countries, the United States does not have a law against inciting religious or racial hatred. So that was legal, and Reddit let them say it — free speech.
They said it could be countered with more free speech — and, unsurprisingly, that lasted until it started affecting their bottom line. When advertisers finally started having problems with them, when there was finally a crackdown on the openly, astonishingly racist discussion there, this was the quote from one of Reddit's co-founders: "We didn't ban them for being racist. We banned them because we have to spend a disproportionate amount of time dealing with them." This isn't unique to Reddit, by the way. Facebook's moderation has been equally poor and inconsistent, and they've somehow gotten away with it.
The inevitable conclusion of "let anyone say anything" is that the worst people — who have finally found a place that will let them in — drive out the more careful and cautious. The discussion then leans a little more towards their views, which means the more moderate people leave, so it swings a little further in that direction, and the cycle goes on and on and on until, eventually, either only the worst people are left, or the moderators decide to kick out the Nazis. This is... this is the analogy of the Nazi bar. The local pub could be the great... the best place in town, but if they let the Nazis hang out in the basement...
You don't want to go in there. Or at least, you won't tell your friends that you go in there. In 2015, Reddit surveyed its users and found that the number one reason its users don't recommend the site — even though they use it themselves — is that they want to avoid exposing their friends to hate and offensive content. So let's look at the other extreme: the echo chamber. For a better analysis of this, I'd refer you to the work of Walter Quattrociocchi and the people he works with at the data science and complexity laboratory at Ca' Foscari University of Venice.
I've mispronounced at least one of those words; I don't know which one. In an echo chamber, there is definitely no free speech: no dissent is allowed. You see it in places like Facebook groups for multi-level marketing schemes and... and anti-vaxxers, where everyone has to accept the group's philosophy or be branded a shill or an enemy. Anyone with a dissenting opinion is shouted down by a large crowd, all of whom support each other's views, whether those views match reality or not. And if all dissent is banned, you end up with a similar problem: the more obsessed, more extreme, radical believers kick out the people who aren't so sure, so the discussion, on average, starts to tolerate more extreme obsession and less dissent, and the cycle goes on and on and on. If those two failure modes sound similar: well, they more or less are.
Both extremes are harmful — and please note that I am NOT talking about political alignments here. I'm talking about the extremes of moderation policy: allowing anyone to say anything, or not allowing any kind of dissent. And every major company that hosts discussion has to choose where it falls on that scale. So: this is a post from a Florida-based natural medicine clinic that sells homeopathic vaccines. I've... anonymised them as best I can, for obvious reasons — their phone number shouldn't be up here. I think all of us here can agree that this is dangerous, and there will be a wide range of opinions on whether it should be legal.
Although one part they did get right: those are definitely safe for ages 5 and up. There's a concept on Twitter called 'the ratio'. It's the relationship between the number of replies you get, the number of likes, and the number of retweets. If your retweet count is highest, you've said something that resonated, that people want to signal-boost and send out to the world. If your like count is highest, you've said something poignant, personal, or emotional that people want to sympathise with but might not want to send to their friends. If your reply count is highest — as it was on that tweet — you've probably said something that a lot of people disagree with.
Getting replies like that is... it's called "getting ratioed", and that tweet about the homeopathic vaccine was thoroughly ratioed. Those 132 replies are all people mocking it, or sometimes calmly stating why homeopathic vaccines are a bad thing, along with the scientific consensus. I think we can agree, at the Royal Institution, that it's probably a good idea for homeopathic vaccines to get some pushback. But here's the thing: 132 people suddenly replying to that company — which normally gets literally no replies to anything — is algorithmically indistinguishable from a pile of mass abuse directed at one vulnerable person. If some horrible person with a moderate following says "hey, I hate that guy — go get them", and their followers pile on, then machine learning systems can't distinguish between that abuse and a company getting blowback for selling homeopathic flu vaccines.
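To make that "algorithmically indistinguishable" point concrete, here is a toy sketch — hypothetical code, not any real platform's system — of a classifier that sees only the three engagement counts described above. The function name and the numbers are invented for illustration; the point is that deserved pushback against a quack clinic and a dogpile on a vulnerable person produce identical inputs.

```python
def dominant_reaction(replies: int, likes: int, retweets: int) -> str:
    """Return which engagement count dominates a post: 'the ratio'."""
    counts = {"replies": replies, "likes": likes, "retweets": retweets}
    return max(counts, key=counts.get)

# Two very different situations, as seen by a count-only system:
quack_clinic    = {"replies": 132, "likes": 4, "retweets": 2}  # deserved pushback
dogpiled_person = {"replies": 132, "likes": 4, "retweets": 2}  # mass abuse

print(dominant_reaction(**quack_clinic))  # "replies" -- the post got ratioed
print(quack_clinic == dogpiled_person)    # True -- indistinguishable from counts alone
```

Any policy built on signals like these inherits that ambiguity, which is exactly the trade-off the next section describes.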
Any policy decision designed to reduce abuse on Twitter — to prevent vulnerable people from being attacked en masse, which is vital and necessary — will also help the snake-oil peddlers. I don't know a way around that. Policy decisions on community standards are often seen as drawing a line somewhere between the echo chamber and the Nazi bar, and there's this idea that if the company just pushes the line a little bit that way, or pulls it a little this way, there will be a perfect solution that keeps everyone happy. And I'm very sorry to say it, but that's not true.
The echo chamber and the Nazi bar are not polar opposites. They're both on a gradient, and they overlap in the middle. You can't choose the best option; you can only choose the least worst. A slight tangent, but I think that's part of the centralisation of the web that has occurred over the last 20 years. In the early 2000s, each discussion site was on a different server, run by a different person, perhaps in a different country, with completely different rules — and in that way it mirrored the systems we have in real life. You know, one of those sites might well allow vulgar abuse and back-and-forth argument; another might ban someone for even mild swearing. And that's how it works in the real world.
You know, there's a big difference between the conversation football fans have chanting on their way into a stadium, and the conversation in the hallowed halls of the Royal Institution. I mean, I guess after hours it gets a little more complicated — I haven't been here since last Christmas — but that different register, those different social norms, hold true for a lot of the bubbles inside Twitter, Facebook and YouTube. You can have community norms within smaller subgroups, but all of those subgroups are centralised on platforms run by huge, mostly American, corporations.
So the community standards have to be standardised across all of them, and they have to be standards that the corporation — or its advertisers, or its VC backers — will support. You know, you can't set massive cross-platform policies that allow every type of discussion — or, in the case of YouTube comments, any type of discussion. The problem is that while the communities may be small, the platforms are too big and too centralised. Because, you know, some people just go to Twitter and think: this is a nice little coffee shop, a friendly space with some people I know, and I'm just going to talk to my friends. And suddenly *boom* — you know what...
It's... they get this attack from these "other people" who use Twitter as a massive shouting-match forum, who will seek out anyone using a particular hash... tag, anyone with certain opinions, and yell at them — because that's the way they use the platform. And some people have tried to federate this. There's a... a network called Mastodon, which is basically Twitter spread across several different servers. You know, each one has its own different rules, and they all talk to each other. But running a server is complex and expensive, and joining Twitter is free and easy — why wouldn't you do that?
There's something called Discord, which is the closest thing I think there is to those old web forums, those old bulletin boards. Each discussion section is... it's closed, and it's private, and it's just separated from the rest of the world. It sounds like... sounds like a great plan. Sounds... sounds brilliant. And, you know, it may work for many small groups, but you still have a single centralised login — sorry, not federated across the network. Unintended consequences abound if you try to tinker with these things. YouTube recently had an algorithm change that, you know, tries to raise up authoritative voices.
Suddenly, if you were watching videos about climate change, you might be sent to something like the Royal Institution's videos, which are about scientific consensus. Which means that, suddenly, all the climate deniers who were already entrenched in their views were being sent to videos like the Royal Institution's — and suddenly, under each of those videos, there are unscientific, badly-thought-out comments. And as a viewer, you can scroll down a bit and go: oh, there are my people — they're the ones who are right. Just as you would if the algorithm had decided that your fundamental beliefs were wrong.
Which brings us back to the beginning. I'm pretty sure the person running that homeopathic flu clinic genuinely thinks they're doing good for the world. Their core beliefs are at odds with reality, but that has never stopped people believing things. People can take what is... what is just a saline injection, and think it's effective, and spread the flu — and that, in the worst case, is lethal. And I would say that, in a perfect world, the tech companies that facilitate that discussion have a moral imperative to reduce or remove messages like that. But it can't be that clear-cut, because we're not perfect — and neither are the people who run them.
Hell, they're not perfect now. Ideally, the algorithms for Facebook, or YouTube, or... for any other company, would think a little further ahead. Their goal would be to increase profits in the long term — at least, that's what the corporations would like, if they understood public relations and, you know, they... could decide what to do. And from a humanist perspective, the ideal algorithm would be, you know, helping humanity survive in the long term. It would suppress conspiracy theories and fake news, but allow enough entertainment and nonsense that we'd still pay attention. In 1950, Isaac Asimov wrote a story called "The Evitable Conflict".
It became the final chapter of I, Robot, and it was about giant supercomputers running the world's economies. They were called "the Machines", and in the story they are not perfectly efficient. They are not perfectly designed. They're making little mistakes here and there. It turns out — spoilers, spoilers for a book from 1950 — they're programmed to protect humanity, and they know us better than we know ourselves. A little nudge here, a little nudge there, and humanity is less likely to destroy itself — and the people who believe the Machines are doing that are more likely to be dismissed as conspiracy theorists. We don't have machine learning systems like that.
We don't have that kind of artificial intelligence — at least, not yet — and, you know, we can't... we can't tell a computer program: "These are the odds that humanity survives into the next century and beyond. Improve them." If we ever have a machine learning system like that — the kind of superintelligence that could theoretically steer broad swaths of humanity — then the world is about to change so much that fake news will be the least of our worries. And if anyone ever does build one, I can only hope its goal isn't to maximise one company's profits. But until someone develops an algorithm that can do all that, it's up to us.
I know it's a very cheesy note to end on, but it's like we are that system. It's up to us to check the facts before we pass them on. It's up to everyone to create things that are honest, that don't smear the truth. It's up to the few of us who create and train those algorithms to understand bias, and to make sure we're not creating conspiracy rabbit holes. And it's up to those of us who create things to weigh those commercial demands of clickbait and drama against honesty and truth, and to help the world not become a horrible pit.
The only "algorithm for truth" we have right now is ourselves. My name is Tom Scott. Thank you very much. (*applause*)
