
Henry Blodget Leads A Panel On Facial Recognition Technology | Davos 2019

Welcome, everyone. I'm Henry Blodget, editorial director of Business Insider. I'm very proud and happy to be working with Microsoft and the rest of the panel on facial recognition. This is one of those technologies that is advancing at such a rapid pace that even if you're tuned into technology, you often don't have a good idea of how fast it's advancing, how positive it can be for the world, and how potentially dangerous it can be as well. We have a wonderful panel this morning to talk about this: Brad Smith, president of Microsoft; Marietje Schaake, member of the European Parliament; Ken Roth, executive director of Human Rights Watch; and Kay Firth-Butterfield, head of artificial intelligence and machine learning at the World Economic Forum, here in Davos. So to start off, Brad, can you give us an overview of where the technology is, what it can do for us, and what it can do to harm us?
Just frame this for a few minutes to get the conversation going.

I will, but on behalf of Microsoft let me also say thank you for coming, and welcome this morning. Arriving before dawn always seems to be part of every day in Davos. On behalf of the company, it's great to welcome you. We have Eric Horvitz here, who leads Microsoft Research and who has probably forgotten more about artificial intelligence than I have ever learned; it's great to have him here. Penny Pritzker, who is a member of our board of directors, is here as well, and we are delighted to have you all.

I'll take a few minutes to talk about artificial intelligence and the explosion, so to speak, of public policy problems it is creating. I always think it's worth starting, as we think about this topic, by reflecting on something we all know but never really consider: we all developed the ability to recognize people's faces in our first months of life. We could distinguish between our mother and our father, or a sibling and a babysitter, and this happens very, very early in life and continues throughout life, even if we can't always remember everyone's name, especially in a place like Davos. Well, here we are in the year 2019, and it turns out that more and more computers can do the same thing as human beings.
Facial recognition has been a topic of discussion among computer scientists since the 1960s, so why now? Why is it exploding in this decade? I think it's because four different technological developments have really come together.

One is the improvement in 3D and 2D cameras and the ubiquity of cameras, not just closed-circuit cameras but cameras everywhere, in everyone's pockets and bags, on the phones we carry around. But it's more than that: it's the massive accumulation of data that is possible thanks to the cloud, and the availability of extensive computing power and cloud data storage, which has made that computing power available to almost anyone and, by definition, the ability to use much more data than before. And finally, there are the advances in artificial intelligence and machine learning, particularly with neural networks.

So suddenly we discovered that computers can do what humans can do, and in part it is because it turns out that our faces can be reduced to a set of mathematical equations. As we all know, our faces are unique: there are different distances between our pupils, our noses have unique shapes, there is the shape and size of our smile, there is the cut of our jaw. All of this can be reduced to mathematics and ultimately to algorithms, and this technology is rapidly changing the world. We will talk about the challenges it creates, but I think it's helpful to remember that it is doing a lot of good in the world and will do a lot of good in the world.
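To make the "faces reduced to mathematics" idea concrete, here is a minimal sketch, not from the panel, of how modern systems typically work: a neural network maps each face image to a vector of numbers, and two faces are declared a match when their vectors are close enough. It uses the open-source Python face_recognition library; the file names and the exact workflow are illustrative assumptions.

```python
# Minimal sketch: a face image is reduced to a 128-dimensional vector
# (an "encoding"), and two faces match when the distance between their
# vectors is small. Uses the open-source face_recognition library;
# file names here are hypothetical placeholders.
import face_recognition

# Load two photos; assumes one face is visible in each image.
known_image = face_recognition.load_image_file("known_person.jpg")
probe_image = face_recognition.load_image_file("camera_frame.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# Euclidean distance between the vectors: smaller means more similar.
distance = face_recognition.face_distance([known_encoding], probe_encoding)[0]

# 0.6 is the library's default tolerance; where this threshold is set
# trades false matches against false non-matches.
TOLERANCE = 0.6
print(f"distance = {distance:.3f}, match = {bool(distance < TOLERANCE)}")
```

Every policy question raised in the rest of the panel, from accuracy to bias to surveillance, ultimately traces back to how such encodings are computed, stored, and compared at scale.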
There are many different scenarios emerging, and some of them are worth pointing out. For consumer convenience, National Australia Bank is working to allow a customer to walk up to an ATM and, instead of having to use a credit or debit card, simply have their face recognized and then key in a PIN for security purposes, which is a relatively simple scenario. A more impactful scenario, if you will, is taking place at the National Institutes of Health near Washington, DC. There is a genetic disorder that is more likely to affect people from Africa or Asia; it can cause heart and kidney problems, but it also manifests in facial features, and it turns out that facial recognition technology can often diagnose a patient who has this disease more quickly than doctors acting alone. A third scenario is something we have been working on at Microsoft, what we call Seeing AI. It's an application that uses a phone's camera, connects it to the cloud, and lets a blind person know through a headset what is happening in the world around them, including the ability to recognize someone who has just entered the conference room to join a meeting. So all of these things are improving people's lives.

Then there is the other side of the coin, and just as a framework, there are three topics we will have the opportunity to talk about, and there are certainly more: questions about privacy; about bias and discrimination; and a range of questions and concerns about human rights and democratic freedoms. So this is rightly bursting into public discourse, both in terms of what technology companies are being asked to do and what governments need to do, not only in terms of their own policies but also in terms of a new generation of laws and regulation. That's what we are here to discuss.
Great, thanks Brad. So Ken, to start: we were talking before we came out about some of the dangers of this, and what immediately came up was autonomous weapons with facial recognition technology, and Ken said no, no, let's talk about the more immediate risks. So Ken, why don't you walk us through that first?

Well, first I think it's important to recognize, as Brad said in his opening, that facial recognition technology has advantages. The point is to recognize that certain problems will arise and that regulation is needed sooner rather than later, because it is very difficult to go back. So the real questions are about what practices we should institute and require as facial recognition technology is deployed.
Let me give you some examples of the problems I see. Imagine combining facial recognition software with various kinds of psychological interpretation, which is already happening. Imagine you're an employer, you have 3,000 applications, and one thing you do is have facial recognition software look at the applicants and decide who will be a good worker, who will get along with the others, who has the right personality for your company. You can easily imagine the prejudice that comes into that, because in interpreting who is agreeable and who is not, it is very easy for race, ethnicity, or other differences to enter the calculation. You need regulation to prevent that if you're going to use this kind of psychological matching with facial recognition software.

And it's not just the United States that would use this. Imagine there's a demonstration. It's impossible to have the police sit there and figure out who is present; they can try to take pictures, but it's hard to piece everything together. But if you do what China is trying to do, which is create a database of everyone's faces, suddenly you can retroactively figure out who the dissidents were, who you should go pick up, who you put into re-education. So you have to think globally about who might be adopting this.

I think we also have to think in terms of privacy, because we're not used to thinking about privacy as something that applies on the street, especially Americans. I'll come to this a little later, but we've been conditioned by the strange way the US Supreme Court has interpreted privacy law to think that what happens in public is not private; the terms almost seem antithetical. But in fact, if you think about your life, you assume a certain degree of anonymity because it is impossible for the police to follow everyone. My headquarters is in New York. If I go out on the street, in theory the police can follow me, but good luck with that, with the millions of people around. How are they going to reconstruct my life? It would take a massive deployment of police resources that is simply not feasible. So I have some privacy just in my anonymity in a city. Facial recognition software can change all of that, because suddenly, if you have cameras
everywhere, you don't need a person to go through all of this; they can simply search for my face wherever it appears and reconstruct my life using this software. So how do we feel about that? In my view, that invades my privacy, even though the laws don't yet say so, and I think it points to the need for regulation, because the sense of privacy and anonymity is something many of us have come to value, and it could be lost as facial recognition software is deployed by law enforcement agencies.

Okay. Marietje, your thoughts on this, building on what Ken is saying.

First of all, I think it's great that Microsoft is proactively pursuing this discussion.
It's not easy to develop technologies on the one hand and invite discussion about their sensitivities on the other, so I think that's great. But as you mentioned, these technologies are already widely used. Anyone who has come to Europe or the United States through an airport has had to look at a camera before the gates open at passport control, and years ago I saw the deployment of military-grade cameras in stadiums, so that a photograph could be taken and you could retroactively assess whether someone threw something onto the field or started a fight. So I think we are already past the point of being able to assume anonymity and privacy in public spaces, which I think is a deep concern, and we shouldn't look at facial recognition technologies in isolation but at the cross-referencing of data. You have location tracking on many of your devices; there is a lot going on, opaquely rather than transparently, in terms of data collection and data sales. We need much more clarity about the purpose, the responsibility, and the protection of this data.

And of course, when we look at human rights, that is the first point of concern. As a member of the European Parliament, there have been many occasions when I have met human rights defenders who need to be in a position to deny that they have met me, because otherwise it could have very serious consequences for them. Or, more generally, there is the freedom to do things without being tracked 24/7. So while I'm very happy we're having this discussion, I think we shouldn't pretend that we can still be ahead of the curve; the technology is already here, and I wish we had had this discussion long before. I would also raise the purpose of checks and balances and, though maybe this is taboo in a business-dominated community, limiting the use, or even prohibiting the use, of these technologies in certain cases.

We'll come to bans. But sorry, Kay, yes.

I wanted to echo Brad that there are some really good uses for these technologies. With my background in human rights, and having worked on human trafficking, one of the big success stories has been in trafficking in persons, where it helps find victims. But with my Forum hat on, I'm obviously concerned about these technologies for all the reasons you and the other panelists have mentioned.
Georgetown University published a report in 2016 finding that one in two American adults is in a police face database in the United States, and that was three years ago. So we have to think about regulation, and again I applaud Microsoft for doing this work, in the surveillance space and also, I think, in our homes. One of the projects we are doing at the Forum, with UNICEF, concerns the devices our children use, AI-enabled toys with facial recognition technology, and the impact of that on our society, as data that should be protected is vacuumed up by these devices. So I think we have to think about the whole pipeline, the private space and the public space.

So let me play devil's advocate a little, because as soon as a topic like this comes up, privacy, surveillance, everyone suddenly has 1984 in mind: it's going to be terrible if we're tracked all the time. But taking the perspective of honest law enforcement, or of journalism where you really want to find out what happened, isn't it possible to say it's a good thing that we have cameras everywhere, and that we are all learning we have to exhibit good human behavior all the time, because if we don't, we can be immediately exposed to the world? In circumstances like this one we tend to be polite and cordial to each other most of the time; wouldn't this technology help with that? And shouldn't law enforcement be able to say, in the stadium, let's find out who threw something onto the field? Are we reacting too quickly and saying, oh, it's just going to be abused, it's terrible?
A lot depends on one's definition of good human behavior. Yes, I think we would all agree that good human behavior means not throwing something onto the field at a sporting event under most circumstances. But I think most people in this room would also argue that the people who went to the center of Kyiv to stand up for democratic rights were showing good human behavior. Most democratic movements really draw on the ability of people to peacefully assemble, to stand up and speak for themselves. Yet it is also a world where, as we all know, in certain countries people can be arrested and imprisoned or even executed if they do that today. So how do people manage it? Often people approach a city square and cover their faces with a scarf so they can't be recognized. But if there are omnipresent cameras that can recognize you as soon as you walk out your front door, they can identify you before you ever put on that scarf, no matter where you are. That is a very different world.

And so, for example, last year we had the opportunity to do a deal to provide facial recognition technology to a government that wanted to deploy it throughout the national capital, in a country where it was clear that human rights would not be protected. We did not feel comfortable moving forward with that transaction. At the same time, when you're in business you have to ask whether this simply means the next company will close that deal, even if the next company would rather not have done it either. We passed, but I would prefer that no one make that deal, and that requires creating a broad set of international norms, which is not an easy task. I think it has to start with protecting the rights of people in the countries where you always seek to protect people's rights.

In that sense, I think one of the most interesting developments of 2018 was a five-to-four decision by the United States Supreme Court (Carpenter v. United States) holding that the police need a search warrant to access your cell phone's location records. The idea is that we all travel with our phones; in effect, we all voluntarily travel with the equivalent of the ankle bracelets that courts used to require a finding of guilt before imposing, and the phone companies have records of everywhere we go. So the Court decided, in an opinion written by Chief Justice Roberts, that this ability to track people in public places was so intrusive that the authorities must obtain a search warrant before getting that information. I think the question now is whether our faces deserve as much protection as our phones, and whether we want a world where, as we have urged, authorities are required to get an independent determination of probable cause or something similar, or can act only when there is an imminent threat of death or serious injury. There have to be some limits, or I think we could find democratic freedoms at risk. It is not difficult to see how this is a big challenge in countries where there is no rule of law or no democratic rights, but it matters in our own societies, where the rule of law prevails, as well.

There is the notion of necessary and proportionate security measures that respect universal human rights. The whole idea of rights being universal is that they should be upheld even when someone is suspected of the worst of all crimes, so the idea that security can simply take precedence over other interests, especially rights, is something I really reject. I think we also have to consider an individual's rights and their ability to enforce those rights. Imagine that a facial recognition system has identified you on some kind of risk list, or on a list of crime suspects.
How empowered are you, as an individual, to challenge that designation? On what basis are you supposed to defend yourself against a state that is empowered by these systems and that has access to these databases, with or without proper checks and balances? We have to ask how vulnerable the individual is and how we are going to make sure he or she is protected. Moreover, with the databases that collect all this facial recognition data, we still see data breaches on a massive scale every few weeks, without any consequences. There is no accountability. So I think the responsibility of companies that handle this kind of sensitive data should also be part of the concerns around regulation, with minimum requirements to prevent negligence and attacks with massive consequences.

If I may, I would like to pick up on the US Supreme Court case that Brad mentioned and explain how revolutionary it is, because the tradition in the United States, which has unfortunately shaped the rest of the world's view of these things, comes from a 1979 decision, from a completely different era, before cell phones, in which the Court ruled that when I dialed a phone number I had no expectation of privacy in the number I dialed, because I had shared it with the phone company. The US government drove a truck through that narrow view of privacy: all the data you share with the internet company, all the data you share with the phone company, everything on your mobile phone, there was no privacy interest because you had shared it. The communication itself was private, but who you sent it to, all of this other data, the metadata, was not, and that is how the government collected all of it and claimed a right to see it.

Now there has been a reaction to that, and that is what this Supreme Court case reflects: yes, with this tracking device I am sharing everything with the phone company, they know exactly where I am, but I still have a privacy interest in where I was. This historical data is private unless they obtain a search warrant; in other words, unless the government can show probable cause to believe the data will be evidence of a crime. That is the same standard used when the government wants to search my house or even arrest me. So it was very important to leave behind this idea that sharing, if that is even the right word in today's world, where by necessity you need these devices and there is no real option but to use a mobile phone, so we are not sharing voluntarily, we are just living in the modern world, is an exception to privacy.

We don't know where this is going. This was a really important first foray, but my guess is that, whether through the Supreme Court, and we don't know what we can count on from the Supreme Court now, or through legislation that reflects the public's view of what privacy should be, we are going to move further toward a world where the mere fact that you are not sitting in a closet does not end your privacy, where you have privacy rights in many things you do in public, I would suggest even just walking around, absent some reason for the government to believe you are involved in criminal activity. So what I would like to see regulating facial recognition software is something similar: requiring probable cause before the government can get this data. Now, you can argue about whether they should be able to collect the data in the first place. With metadata, the phone companies and internet companies already have all of it; it is being stored at least for a while, and the question is more about when the government gets access to it. In the case of facial recognition software, whether they should store that data in the first place is a real question; but if they are storing it, what should be required before they have access to it? I think some kind of warrant requirement is the way to preserve privacy even as the use of facial recognition software becomes increasingly ubiquitous.

And briefly, what does this mean for private companies that already collect this data? Governments are obliged to comply with these checks and balances, and I think it is very valuable how this case sheds new light on the powers of law enforcement agencies. But if there are global companies with access to facial recognition systems and databases of people's faces around the world, then it is great that we have laws in democracies, but that does not mean companies will be subject to them worldwide, or that we will restrict commercial uses in the same way as law enforcement uses.
That is where I would like to get to, talking about governance, regulation, and global governance, because it takes a long time to regulate, and one of the things that is problematic in this area is that by the time two years have passed, which could be the time it takes to regulate, the problems have advanced exponentially. That is why we are working at the Forum on a plan for a kind of agile governance, bringing together governments, companies, civil society, and academics to co-create best practices or policy frameworks that are then piloted by our government partners. So one way I can see to start to address this is to invite companies to work with us, as you were describing, or to invite one of our governments to take the opportunity to explore ways of creating frameworks for legislation or regulation that do not have to go through a full parliamentary process in order to start making a difference in this space.
And how does this regulation actually happen? Technology is moving so fast, and even in the United States the government is far behind on many of the problems that have arisen in the last five to ten years. Do we wait for test cases to be brought before passing legislation? Brad, you said now is the time to act; how can we act in advance?

Well, I'm hopeful that we can see some new law created in this space in 2019, and I think one of our first opportunities will be at the state level in the United States. The states are the laboratories of democracy, and I don't think you need the answer to every question to start legislating in the areas where the answers are now clear. So when it comes to certain issues around bias, we think there is legislation that could usefully stimulate the market, and for the protection of privacy and the protection of democratic freedoms, there are legislative provisions we will be advocating and supporting. In general, when you think about privacy at large, Europe tends to be the leader in protecting privacy from use by companies, and the United States tends to be the leader in protecting privacy from access by governments, for constitutional reasons particular to each continent. But I think we can expect more action in both spaces.

The truth is that Europe already has many tools under the General Data Protection Regulation, and Europe can leverage them; data protection authorities may already have enough authority under current law, and can therefore act faster.
I think that is a very good starting point overall, also illustrated by this Supreme Court case, which basically takes existing legal rights and simply says that they extend to this technology, instead of starting from scratch, which is cumbersome and also risks being unsustainable: today we talk about facial recognition, but there will be other technologies in the future. The rights do not change, but the way they are enforced, through regulators and by building on existing provisions, does. I think the way forward is to look more at basic principles and then empower regulators, data protection authorities, or others to enforce them, whatever the environment or the technology in scope. So we should get started; it is really remarkable how long it has taken, given the use of this technology. To take just one example, going back to AI-enabled toys: Germany banned these devices because it already had a law on the books on spy devices, and it was able to say that these toys are spy devices within the meaning of the law. So there are some laws in place.

I think one of the most interesting opportunities to think about the role the law can play, in a very agile way, is in addressing the issue of bias. There has been significant academic work, led by some of the people in this room, that has documented that facial recognition systems certainly started out with higher accuracy rates for Caucasian men than for women or people of color. And if you think about it, no customer really wants to deploy a system that is biased or discriminatory, so a well-functioning market will, in my view, further stimulate the technological advances that are under way to reduce these bias problems. But then you have to ask what it takes to have a well-functioning market, and the experience in the United States is instructive.
In November, the National Institute of Standards and Technology (NIST) released a report evaluating facial recognition technologies from more than 40 companies, including Microsoft. It evaluates accuracy, and what it documented were tremendous advances in the technology over the course of four years. But not every company made its technology available for testing; in fact, there are companies with well-known names that decided not to. Then you step back and consider: in the automotive world we care about the safety of cars, and we can read tests that compare the safety of cars; when you go to the supermarket, before you buy something you can pick it up and understand what it will mean in terms of nutrition. Existing laws have required companies in certain markets to be transparent and publish certain information. So one idea we are very excited about is a legal provision that would require a company that wants to participate in the facial recognition market, first, to be transparent in explaining how its technology works, and second, to ensure that its technology can be tested, through an application programming interface or in some other way, by the academic community or by third-party testing services.

What is interesting from this perspective, at least to me, is that we don't have to pass this law everywhere. We just have to pass it somewhere, in a jurisdiction, a state in the US or a country in Europe, that has enough market impact that a company says: if I'm going to be in the facial recognition business, I have to be in that jurisdiction, and therefore I'm going to have to make my technology available for testing. Once it can be tested anywhere, the results can be shared everywhere. So look for opportunities where a government can move faster and accelerate market forces in a positive way at a global level. That, to me, is the kind of step we can take to ensure that this technology is used responsibly, which is fundamentally what I think most people want.
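As an illustration of the third-party testing described here, the following is a hedged sketch of what an independent tester might run against a vendor's verification API: submit labeled pairs of photos and report the resulting error rates. The endpoint, request format, and response field are invented for illustration, not any real vendor's API.

```python
# Hypothetical sketch of third-party accuracy testing through a vendor
# API. The URL, request format, and "match" response field are invented
# for illustration; a real audit would use the vendor's documented API
# and a large labeled benchmark.
import requests

API_URL = "https://api.example-vendor.com/v1/verify"  # hypothetical endpoint

def vendor_says_match(photo_a: str, photo_b: str) -> bool:
    """Ask the vendor's service whether two photos show the same person."""
    with open(photo_a, "rb") as fa, open(photo_b, "rb") as fb:
        resp = requests.post(API_URL, files={"a": fa, "b": fb}, timeout=30)
    resp.raise_for_status()
    return resp.json()["match"]

# Labeled benchmark pairs: (photo, photo, ground-truth same person?).
pairs = [
    ("pair_000_a.jpg", "pair_000_b.jpg", True),   # placeholder files
    ("pair_001_a.jpg", "pair_001_b.jpg", False),
    # ... a real evaluation would use thousands of pairs
]

false_matches = 0      # system said "match" for two different people
false_non_matches = 0  # system said "no match" for the same person
for a, b, same_person in pairs:
    predicted = vendor_says_match(a, b)
    if predicted and not same_person:
        false_matches += 1
    elif not predicted and same_person:
        false_non_matches += 1

negatives = sum(1 for *_, s in pairs if not s)
positives = sum(1 for *_, s in pairs if s)
print(f"false match rate:     {false_matches / negatives:.3%}")
print(f"false non-match rate: {false_non_matches / positives:.3%}")
```

Because the output of such a test is just numbers, results can be published and compared across vendors, which is the market mechanism the proposed transparency law is meant to enable.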
That is, you know how to start identifying biases and if not you can get into the algorithm enough to test for biases. This is a problem and we have to say it from the beginning. I would go further, although I think it is always necessary for people to have the right to appeal if they have been negatively affected by some AI program and at that time it is necessary to have a person who hears the appeal, you know how to say: I am going to appeal to the AI program and it will spit me back in the AI ​​response, you know and who knows who is making what decision, it's not good enough, you know, so what we really need, is it you?
Learn a set of principles that start with transparency and testability, but then learn when people need to be aware. You know when a person can be a victim of software and there are certain decisions that we really shouldn't leave to AI. software, there are certain decisions, like the decision to kill, that you always have to have a human in the loop, you know and we were, this is probably a different panel for later in the day, but you know, there's this thing you know with these completely weapons autonomous companies whose facial recognition software would be a key part because if you're going to activate some kind of kill robot to try to find the suspect and kill him, which is very predictable, we're trying to ban this, but it's pretty predictable, how Do you find out who the person is?
Through facial recognition software. So you have to ask whether you always have a person in the loop before the final decision to kill is made.

And in fact, I totally agree. It's another point where we have urged that legislation require that before organizations use facial recognition technology to make decisions that have this kind of significant impact on people's lives (certainly a decision to kill, but also just the decision to let you into the country, the decision to let you into the store or the stadium, the decision to deny you credit), they take what comes out of the computers and subject it to meaningful human review. That means having people trained in how to interpret those results, and giving the affected person the opportunity to be heard by someone who can say: no, it's not me, this is a mistake you're about to make.
I think these are all useful tests for consumer transparency, but the real question, of course, is one of oversight that is also independent of individual people, because people can be biased too. I was thinking about the past, when line-ups were very frequently used in the US to identify criminal suspects. Especially when racial discrimination was still embedded in the laws, someone was often identified on the basis of skin color, because the one person with that skin color standing in the line-up would automatically be identified as the suspect. So humans come with question marks as well. But at the end of the day, I think an adequate articulation of the rule of law in the context of new technologies requires oversight mechanisms that are robust no matter which person, no matter which government, politically inclined to the left or to the right, is in power, and that really provide checks and balances and redress to everyone, indiscriminately. I think that is another level, let's say, beyond testing for the sake of transparency so that people can see which product they would prefer.
The two sit on top of each other, but they connect, in the sense that oversight of algorithms is increasingly important, also in the interest of democracy; if we look at social networks and the ranking of certain feeds, for example, that is a very broad topic that I think is very urgent to address.

So we have a few more minutes left, and we would love to take some questions from all of you. Go ahead.

Hello, I'm Bibop Gresta, the mohawked president and co-founder of Hyperloop Transportation Technologies. My company has an incredible opportunity: working on the fifth mode of transportation, we have a blank canvas to write on. We are imagining the passenger of the future as a "naked passenger," someone who doesn't need a ticket, who simply uses biometrics to have a frictionless experience: you walk in and, through a blockchain, magically receive a series of services not yet conceived by anyone. On the safety side it will be an amazing trip, but the problem we are facing is that this creates a real barrier for people who want to stay out of it, because it is integrated into the service, integrated into the station, integrated into everything. There is an implicit requirement to accept being monitored and recognized, and this conflicts deeply with my view of the world, in which you should always be able to opt out, always have the option to say no. Yet much of what we are designing has built into it the implicit need for the user to say yes. How do you deal with this contradiction?
It's the same question that comes up with the tracking device you carry around, your phone. As a company, there are certain questions you need to ask. You may need to store the data long enough to recognize a person, but do you need to store it beyond the initial journey they take? Because you obviously know where it leads. Are you going to your psychiatrist's office? To your HIV treatment? There are many things that people do in public that are nevertheless private. So even if you have to store it for the day, or whatever, you have to ask: what if the government wants to come and get it? That is where we need regulation. It shouldn't take just a subpoena, which is basically a prosecutor signing a document, to get that data. If I really used your service and they want to know where I went, they should be required to get a search warrant; they should be required to show probable cause to believe that I am involved in criminal activity. And that goes beyond your company: if you get a subpoena today, you will have to turn over the data, but that shouldn't be the law. The law should be that they have to present a search warrant before you turn over that data. But before you even get there: why do you keep the data at all?
I would keep it only to the extent that it is absolutely necessary for your business purposes, and don't sell it to someone for advertising purposes so they can target the person and pull them into their store next week.

Hi, I'm Joy Buolamwini, the founder of the Algorithmic Justice League. We've done some of the foundational work exposing bias in facial recognition technology, and we have a new paper about to be published showing that Amazon's system has significant bias; The New York Times will cover it tomorrow, and they're selling to intelligence agencies. You mentioned earlier the National Institute of Standards and Technology, an institution focused on assessing how capable these technologies are. In the new study that came out of NIST, Amazon wasn't one of the companies that appeared, so we ran our own test. Their systems are biased, gender bias and racial bias; the numbers come out tomorrow.

But I also want to put pressure on the National Institute of Standards and Technology, because if we're going to have government agencies actually setting these standards, we have to challenge them too. If we go back and look at the benchmarks: in 2007 a benchmark was released that became a gold standard in facial recognition, and Facebook was really excited because it did very well on it, 97 percent. But when you looked at that benchmark, it was actually more than 77 percent male and over 83 percent white. We tested the NIST benchmarks as well, and they were about 75 percent male and 80 percent lighter-skinned. So even if we have these standards, we need to make sure that when the reports come out we have the demographic information, and this is important especially when we talk about what kinds of bans or limitations we need, because otherwise we can have a false sense of progress. So, where should we have bans? I think there should be some.

Brad, do you want to address that?

First of all, I have nothing to say about Amazon, but it is actually good to have scrutiny of the standards, and even to have alternatives among the standards, because you're right, Joy: we all get excited when we do well on a benchmark, but if the benchmark is not particularly good, we probably shouldn't get too excited.

Now, on moratoriums: earlier we were talking about lethal autonomous weapons, but we should think about other lethal applications. This is where I believe there should be a moratorium on law enforcement use right now, given everything we know about the biases. A company called Axon is providing, or looking to provide, facial analysis technology in police body cameras, so we also need to think about areas where lethal outcomes can be contemplated. In the UK, for example, they have produced real-world numbers that go beyond benchmarks: Big Brother Watch UK showed in May 2018 false positive match rates of over 90 percent in real-world matching; more than 2,400 innocent people were wrongly matched, and there was even a young Black man who was stopped, asked for his ID, and produced it, but they didn't believe him, and there is a case ongoing. So, going forward, what can we do as a global community to really think about where these bans should apply? We have something called the Safe Face Pledge, which three companies have already signed, committing, among other things, not to supply facial analysis technology for lethal autonomous weapons. If you're interested, I'm willing to engage, but I want to hear more about what you think we can do, particularly around bans and categorical areas we shouldn't touch.
In fact, I think it is important to identify certain scenarios where we feel the technology is not mature enough to be used, even with good intentions, without risks of bias; scenarios where we do not believe the technology should be used without meaningful human review; and some cases where we do not think the technology should be used at all. It's a natural phenomenon: when you start getting into this kind of field, you learn case by case, and I applaud the work that you and others are doing to push everyone to think harder. What we're finding, as we work through this with potential customers who come to us and say they want to deploy this technology, is that categorical bans can sometimes be too broad to work effectively. I'll give you two real-world scenarios.

One was a law enforcement scenario: an agency wanted to use facial recognition technology inside a prison. It was a very small community of people, relatively speaking, hundreds of people, and they wanted to track when people moved from one part of the prison to another, so they knew where the prisoners were. We looked at that and concluded there was no real risk of bias: you could achieve a high accuracy rate, the people involved had freedom that was by definition already limited, and the Bureau of Corrections was using it in a way we felt comfortable with.

At the same time, we had another scenario: a police department wanted to use facial recognition technology to take a photo of literally everyone stopped in a car, for any purpose, to decide whether there was a match, in which case the person would be taken downtown for further questioning about other crimes. In that second situation, we concluded that the risk of bias was such that, in all likelihood, people would be taken downtown for crimes they never committed, because they had been misidentified. So we sat down with the police department and said: look, this is not a scenario where we would feel comfortable giving you our technology. We also said this is a scenario where we don't think you should feel comfortable using this technology for this purpose. And at the same time we said: we will work with you, so you can see how the technology evolves, we can keep talking, and you can learn about all the different considerations that will need to be thought through going forward.

So yes, I think it's very important for companies in this space to be thoughtful and responsible, to move forward on certain uses and not on others. It does become a little challenging, because of course we weigh different trade-offs, including the ones you raise, and we may say we can be comfortable with the 92 percent where you would say a different approach is needed for the 8 percent; that may make sense, at least for us. In our view, what this really points to overall is that we want this technology to be used responsibly, and that requires the people who develop it and deploy it to think carefully about the questions being created. And what I don't want to see in the world today, among other things, is what we would call a race to the bottom, where responsible companies end up losing business in a market that can tilt, given the data-intensive nature of this business, toward companies that decide they won't be responsible or, more likely, don't think about it at all and simply have a philosophy that says: hey, if it's legal, I'll provide it, and if people don't like it, they'll make it illegal.
I think that could be very problematic for the world, so it's good for all of us to be pushed.

I would like to look at your question a little differently, because you mentioned false positives, which are to some extent a flaw in the technology. But suppose the technology works perfectly, and it is getting better. Suppose facial recognition can determine whether or not you are LGBT, and think of the consequences that would have in certain countries if people cannot prevent it from being used to imprison or otherwise persecute them. So we should not only look at false positives, which are a big challenge (and if the technology is not working well, governments should obviously ask whether it is useful at all); we should also ask what happens when it works perfectly. There, I think, we need benchmarks for minimum standards that do not depend on the goodwill of companies, or on how they position themselves as acting responsibly or not, because there will always be companies that are happy to join the race to the bottom, and there will always be countries where the authorities do not care about false positives, because arresting five more people without probable cause is fine with them; it happens every day, whether based on facial recognition or not. So I think democratic countries, countries based on the rule of law, should really set the standard here: work with companies to drive these standards, work with civil society and expert initiatives, and really build multi-stakeholder initiatives to establish these minimum benchmarks before it is too late, and to show leadership. This is not a technology that has to wash over us while we all stand helpless. Of course data will be collected on people, but it is also about how long it can be stored and for what purposes it can be used. And I would predict that there will be more regulation of specific uses than of the technology in general, also because the technology is already out there; the cat is out of the bag, and you cannot really put it back.
We are almost out of time. Any final comments?

Let me say this: when the race-to-the-bottom argument comes up, some companies, and I'm not saying Microsoft here, use it as an argument to say, well, what's the point, because someone else is going to undercut us, so we might as well take the business. The answer in those situations is to push for laws and regulation, not to join the race to the bottom. We have seen in a number of situations, including technology transfer, that this can actually work, and shaming those who are racing to the bottom can be one way to try to deter them. In terms of possible bans: given the current state of the software, I would say don't deploy it where there will be immediate actionable use. A retrospective kind of use, with a search warrant, has many more safeguards than simply putting it on a police officer's jacket and making arrest decisions on that basis. We also really need bans on sales to certain kinds of governments. It's implicit in what you're saying, but think about today's classic dystopia, China's social credit system, where facial recognition software is combined with a mass of other data they collect to give people scores that are basically indicative of political and social trustworthiness, scores that will determine where you are going to live, where you can send your children to school, whether you get a passport. It's a method of social control that doesn't require prison, using this new ability to collect data on everyone, and I think the answer in those situations should be: don't sell it. China is now developing its own, but you can imagine little Chinas popping up all over the world, and Microsoft, and the world, shouldn't facilitate that.

And finally, let me say that we at the Forum are working on forms of global governance, and Brad has kindly agreed to join as co-chair of the Global AI Council, to really think through some of these issues with countries, businesses, civil society, and academics. I am actively looking to recruit a country to pilot some of the regulatory ideas we've been talking about on facial recognition, so I hope we make some progress.

Okay. Thank you all very much for coming; it's great to have you, and thanks to our panel.
