Speaker 1: Get in touch with technology with TechStuff from howstuffworks.com.

Hey there, and welcome to TechStuff. I'm your host Jonathan Strickland, senior writer with howstuffworks.com, and today we're going to examine a controversial topic, namely how law enforcement is using facial recognition software and the problems that this raises. Now, before I dive into the topic, I want to make a couple of things very clear at the very beginning. First is, I'm biased. I think the use of facial recognition software is problematic even if you have regulations in place, but I'm mostly talking about unregulated use, because we really haven't established the rules and policies to guide the use of facial recognition software in a law enforcement context. So that's problem number one: I have a very strong opinion about this, and I'm not gonna shy away from that. It's really unjustifiable to have unregulated use of facial recognition software in law enforcement contexts. So I want to make that clear out of the gate, that I have this bias, and if that's an issue, that's fair. But at least I'm being honest, right? I'm not presenting this as if it's completely objective, unbiased information. I own this. You don't have to tell me; I know it already.

Next, this is largely going to be a US-centric discussion so that I can talk about details. But please know that there are a lot of these types of systems all over the world, not just in the United States, and a lot of these places have similar issues to the ones I'm gonna be talking about here in the US. I'll just be focusing more on US stories to make specific points, because this is where I live.

Now to explain what I'm actually talking about here. Back in two thousand eight, the FBI undertook a project that cost more than an estimated one point two billion dollars (that's billion with a B) to replace what was called the Integrated Automated Fingerprint Identification System, or IAFIS.
IAFIS had only been in place since nineteen ninety-nine, and I've talked about fingerprints in a previous episode. The IAFIS was an attempt to create a US-wide database of fingerprint records, so that if you were investigating a crime and you had lifted some prints from the crime scene, you could end up consulting this database and see if there are any matches in place to give you any leads on your investigation. The project the FBI undertook was meant to vastly expand that capability by adding a lot more data to the database, not just fingerprints but other stuff as well, and the new system is called the Next Generation Identification, or NGI. It includes not just fingerprints but other biographical data and biometric information, including face recognition technology. So a lot of images are included in this particular database. As part of this project, the FBI incorporated the Interstate Photo System, or IPS, so you have NGI-IPS; that typically is how it's referred to. That system includes images from police cases as well as photos from civil sources that are not necessarily related to crimes. Now, that's not the only way the FBI can scan a photograph they've taken that relates to a case in some way for a match against this massive database, but more on that in a little bit.

Now, the general process of searching for a match follows a pretty simple pattern, although the details can be vastly different depending upon what facial recognition software you are using at the time. You first start with an image related to a case, and this is called the probe photo. It is the one you are probing with, for lack of a better term. You don't know the identity of the person in the photograph, typically, or at least you might have suspicions, but you don't necessarily know for sure. So you've got a picture of an unknown person in this photograph.
You then scan that photo, and you use facial recognition software to analyze the picture and to try and find a match in this larger database. It starts searching all of the images in this database, looking for any that might be a potential match. Depending upon the system and the policies that are in use, you could end up with a single photo returned to you, or you could end up with dozens of photos. These would all be potential matches with different degrees of certainty for a match. You might remember in past episodes I've talked about things like IBM's Watson, which would come up with answers to a question and assign a value to each potential answer, and the one that had the highest value, assuming it's above a certain threshold, would be submitted as the answer. So it's not so much that the computer quote unquote knows it has a match; it suspects a match based upon a certain percentage, as long as it's over a threshold of certainty. Or you might end up with no photos at all. If no match was found, or nothing ended up being above that threshold, the system might say, I couldn't match this photo with anyone who's in the database.

A study performed by researchers at Georgetown University found that one in every two American adults has their face captured in an image database that is accessible by various law enforcement agencies, including, but not limited to, the IPS. In fact, the IPS has a small number of photos compared to the overall number represented by databases across the US. Now, this involves agencies at all different levels: federal, state, even tribal law enforcement for Native American tribes. That ends up being about a hundred seventeen million people in these databases, many of whom, in fact a large percentage of whom, have no criminal background whatsoever. Their images are also in these databases, and this raises some big concerns about privacy and also accountability. So in today's episode, we're going to explore how facial recognition software works,
as well as talk about the implementation for law enforcement and the reaction to this technology, and you'll probably listen to me get upset and a little het up about the whole thing in general.

All right. So first, before we leap into the mess of law enforcement, because it is a mess, that's just a fact, let's talk first about the technology itself. When did facial recognition software get started, and how does it work? Well, it's related to computer vision, which is a subset of artificial intelligence research. If you look at artificial intelligence, a lot of people simplify that as meaning, oh, this is so that you can teach computers how to think like people. But that's actually a very specific definition of a very specific type of artificial intelligence. When you really look at AI and you break it out, it involves a lot of subsets of abilities, and one of those is the ability for machines to analyze imagery and be able to determine what that imagery represents. In a way, you could argue it's teaching computers how to understand pictures. It's also really challenging, and this is one of the object lessons that I use to teach people why artificial intelligence is really tricky. It requires more than just pure processing power. I mean, processing power is important, but you can't solve all of AI's problems just by throwing more processors at it. You have to figure out, at a software level, how to leverage that processing power in a way that gives computers this ability to identify stuff based upon imagery. So a computer might be able to perform far more mathematical operations per second than even the cleverest of humans, but without the right software, it can't tell a picture of a seagull from, say, a semitruck. You have to teach the computer how to do this.
So let's say you develop a program that can analyze an image and break it down into simple data to describe that image, and then you essentially teach a computer what a coffee mug looks like. You take a picture of a coffee mug, you feed it to a computer, and you essentially say, this data represents a coffee mug. You then would have to try and train the computer on what that actually means. The computer does not now know what a coffee mug is. It will recognize that specific mug, in that specific orientation, under those specific lighting conditions, assuming that you've designed the algorithm properly. But it's way more tricky than that. What if, in the image that you fed the computer, the coffee mug's handle was facing to the left with respect to the viewer, but in a future picture the handle is off to the right instead of to the left, or it's turned around so you can't see the handle at all, because it's behind the coffee mug? What if the mug is bigger or smaller, or a different shape? What if it's a different color? Image recognition is tough because computers don't immediately associate different objects within the same category as being the same thing. So if you teach me, Jonathan, what a coffee mug is, and you show me a couple of different examples, saying this is a coffee mug, but this is also a coffee mug even though it's a different size and a different shape and a different color, I'll catch on pretty quickly, and it won't take very many coffee mugs for me to figure out: all right, I've got the basic idea of what a coffee mug is. I know what the concept of a coffee mug is now. But computers aren't like that. You have to feed them thousands of images, both of coffee mugs and of not coffee mugs, so that the computer starts to be able to pick out the various features that are, let's say, the essence of a coffee mug versus things that are not related to being a coffee mug.
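To make that concrete, here's a minimal sketch of the training idea in Python using scikit-learn. The folder names, the 32x32 image size, and the classifier choice are all placeholder assumptions for illustration, not how any production vision system is actually built:

```python
# Minimal sketch of the "coffee mug vs. not a coffee mug" training idea.
# Folder names and the 32x32 size are invented; a real system would use
# far more data and far richer features than raw pixels.
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.linear_model import LogisticRegression

def load_folder(folder, label):
    vectors, labels = [], []
    for path in Path(folder).glob("*.png"):
        img = Image.open(path).convert("L").resize((32, 32))
        vectors.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
        labels.append(label)
    return vectors, labels

mugs, mug_labels = load_folder("mugs/", 1)         # positive examples
other, other_labels = load_folder("not_mugs/", 0)  # negative examples

clf = LogisticRegression(max_iter=1000)
clf.fit(np.array(mugs + other), np.array(mug_labels + other_labels))

# The model only generalizes as far as its training set: a mug at a new
# angle, size, or color may still fool it, exactly as described above.
mystery = np.asarray(
    Image.open("mystery.png").convert("L").resize((32, 32)),
    dtype=np.float32).ravel() / 255.0
print("mug probability:", clf.predict_proba([mystery])[0][1])
```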
It takes hours and hours and hours of work of training these computers to do it, so it's a non-trivial task, and this is true of all types of image recognition, including facial recognition. Now, to get around that problem, you end up sending thousands, countless thousands, millions maybe, of images of what you're interested in while you're training the computer. And the nice thing is computers can process this information very, very quickly, so while it takes a lot, it doesn't take, relatively speaking, that long. It's not as laborious a process as it could be if computers were slower at analyzing information. You might remember a story that kind of illustrates this point. Back in two thousand twelve, there was a network of sixteen thousand computers that analyzed ten million online images, and as a result, it could do the most important task any computer connected to the Internet should be expected to do: it could then identify cat videos, because it now knew what a cat was, or at least the features that defined catness, as in the essence of being a cat, not a character from The Hunger Games. Even then, there were times when the computer would get it wrong. Either it would not identify a cat as being a cat, or it would misidentify something else as being a cat because its features were close enough to cat-like to fool the computer algorithm.

A major breakthrough in facial recognition algorithms happened way back in two thousand one. That's when Paul Viola and Michael Jones unveiled an algorithm for face detection, and it worked in real time, which meant that it could recognize a face as it appeared on a webcam. And by recognize, I mean it recognized that it was a face. It didn't assign an identity to the face. It didn't say, oh, that's Bob. It said, oh, that is a face that is in front of the webcam right now.
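As it happens, a descendant of that Viola-Jones detector ships with the OpenCV library, which comes up next, and using it takes only a few lines. Here's a rough sketch; the webcam index and file names are placeholders:

```python
# Rough sketch: running OpenCV's stock Haar cascade face detector,
# a direct descendant of the Viola-Jones approach, on one webcam frame.
# Note that it only reports that faces are present, never whose they are.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)   # 0 = default webcam; a placeholder choice
ok, frame = capture.read()      # grab a single frame
capture.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"found {len(faces)} face(s)")  # positions only, no identities
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.png", frame)    # hypothetical output file
```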
The algorithm soon found its way into OpenCV, which is an open source computer vision framework, and the open source approach allowed other programmers to dive into that code and to make changes and improvements, and it enabled rapid prototyping of facial recognition software by other computer scientists, who helped advance computer vision further. There were Bill Triggs and Navneet Dalal, who published a paper in two thousand five about histograms of oriented gradients. Now, that was an approach that looked at gradient orientation in parts of an image, and essentially it describes the process of viewing an image with attention to edge directions and intensity gradients. That's a complicated way of saying the technique looks at the totality of a person, and then a machine learning algorithm determines whether or not that is actually a person or not a person. And a bit later, computer scientists began pairing computer vision algorithms with deep learning and convolutional neural networks, or CNNs. To go into this would require an episode all by itself. Neural networks are fascinating, but they're also pretty complicated, and I've got a whole lot of topics to cover today, so we can't really dive into it. You can think of an artificial neural network as designing a computer system that processes information in a way that's similar to the way our brains do. The computers are not thinking, but they are able to process information in a way that mimics how we process information, or a semi-close approximation thereof. That's a really kind of weak way of describing it, but again, to really go into detail would require a full episode all by itself.
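Going back to histograms of oriented gradients for a second: OpenCV also bundles a HOG-based person detector in the spirit of the Dalal-Triggs paper, so the idea can be sketched in a few lines. The input file name here is hypothetical:

```python
# Sketch of HOG-style person detection in the spirit of Dalal and Triggs,
# using the pretrained people detector that ships with OpenCV.
# "street_scene.png" is a hypothetical input image.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("street_scene.png")
if image is not None:
    # The descriptor summarizes edge directions and intensity gradients
    # in each window; a linear SVM then votes "person" or "not a person".
    boxes, weights = hog.detectMultiScale(image, winStride=(8, 8))
    print(f"{len(boxes)} person candidate(s) found")
    for (x, y, w, h) in boxes:
        print(f"  at ({x}, {y}), size {w}x{h}")
```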
Typically, facial recognition software uses feature extraction to look for patterns in an image relating to facial features. In other words, it searches for features that resemble a face, the elements you would expect to be present in a typical face: so eyes, nose, a mouth, those would be major ones, right? Then the software starts to estimate the relationships between those different elements. How wide are the eyes? How far apart are they from each other? How wide is the nose? How long is the jawline? What shape are the cheekbones? These sorts of elements all play a part as points of data, and different facial recognition software packages weight these features in different ways. So it's not like I could say all facial recognition software looks at these four points of data as its primary source; it varies depending upon the algorithm that's been designed by various companies. And part of the problem that we're going to talk about is that law enforcement agencies across the United States are not relying on a single facial recognition software approach. Different agencies have different vendors that they work with, so just because one might work very well doesn't necessarily mean its competitors work just as well. And that's part of the problem.

Now, all of these little points of data I'm talking about, these nodal points and how they relate to one another, all of that gets boiled down into a numeric code that you could think of as a face print. This is supposed to be a unique representation, a compilation of all of these different points boiled down into numeric information. Then what you would do is you would have a database of faces. So if you want to find a match, you would feed the image you have, the probe image, into this database, and the facial recognition software would analyze the probe photo. It would end up assigning this numeric value, and it would start looking through the database for other numeric values that were as similar to the probe's as possible, and start returning those images as potential matches, or candidates; they tend to use the term candidate photos. Otherwise, you'll either get no match at all, or you get a false positive: you will end up getting an image of someone who looks like the person whose image you submitted, but is not the same person. That does happen.
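Here's a small sketch of that face print idea: each face reduced to a vector of numbers, with matching done as a similarity search over the database. The 128-number size, the 0.7 threshold, and the random stand-in data are all invented for illustration; every vendor has its own features, scoring, and cutoffs:

```python
# Sketch of the "face print" idea: every face becomes a vector of numbers,
# and matching is a similarity search over those vectors.
import numpy as np

rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 128))       # stand-in enrolled face prints
names = [f"person_{i}" for i in range(1000)]  # stand-in identities

def candidates(probe_print, top_k=20, threshold=0.7):
    # Cosine similarity between the probe print and every enrolled print.
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    p = probe_print / np.linalg.norm(probe_print)
    scores = db @ p
    best = np.argsort(scores)[::-1][:top_k]
    # Only scores over the threshold count; an empty list is the system
    # saying "I couldn't match this photo with anyone in the database."
    return [(names[i], float(scores[i])) for i in best if scores[i] >= threshold]

probe = rng.normal(size=128)  # face print computed from the probe photo
print(candidates(probe) or "no candidates above threshold")
```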
And that's the basic way that facial recognition software works. But keep in mind, different vendors all use their own specific approaches, like I said, and some could be less accurate than others. Some might be accurate for specific ethnicities and not as accurate for others. That's a huge problem. So it gets complicated. Even when I'm talking in more general terms, you have to remember that there are a lot of specific incidents and specific implementations of facial recognition software that have their own issues. So I'm gonna be as general as I can. I'm not going to call out any particular facial recognition software vendors out there; I'm more going to talk about the overall issues that various organizations have had as they've looked into this topic.

Now, there are plenty of applications for facial recognition that have nothing to do with identifying a person. I mentioned earlier that there was the one for a webcam that could identify when a face was in front of the webcam. This wasn't to identify anybody. It was, again, just to say, yes, there's somebody looking into the webcam at this moment, which by itself can be useful and have nothing to do with identification. There are plenty of digital cameras out there, and camera phone apps, that can identify when there's a face looking at the camera. Again, it's not necessarily to identify that person, but rather to say, oh, well, this is a face; the camera is most likely trying to focus on this person, so let's make this person the point of focus and not focus on something in the background, like a tree that's fifty yards back. Instead, let's focus on the person who's in the foreground. So that's pretty handy, and again, there's nothing particularly problematic from an identification standpoint, because that's not the purpose of it.
But then you also have other implementations, like on social media, which allow you to do things like tag people based upon an algorithm recognizing a person. Facebook is a great example of this, right? If you upload a picture of one of your Facebook friends onto Facebook, chances are it's giving you a suggestion to tag that photo with the specific person in mind. That may not be that problematic either, depending upon how your friend feels about pictures being uploaded to Facebook. Some people are very cautious about that, and of course, you know, I always recommend you talk to anybody before you start tagging folks in Facebook photos, just to make sure they're fine with it. I say that as a person who has done it and then noticed that some of my tags got removed by the people I tagged later on, which taught me I should probably ask first, rather than give them the feeling that they need to go and remove a tag or two.

We've also seen examples of this simple implementation of facial recognition going awry. Google Street View will blur out faces, for example, in an effort to protect people's identities while Street View cars are out and about taking images. This makes sense. Let's say that you are in a part of town that you normally would not be in, for whatever reason. You might not want your picture to be included on Google Street View, so that whenever anyone looks at that street from that point forward, they see your face on there, you know, coming out of, I don't know, a Wendy's. Maybe you are a manager for Burger King; that would look bad. Or, you know, lots of other reasons that obviously can spring to mind as well. You don't want to violate someone's privacy. But Google Street View would also blur out images that were not real people's faces, like images on billboards or murals.
Sometimes, if it had a person's face on a mural, the face would be blurred out even though it's not a real person, it's just a painting. Or, in September two thousand sixteen, CNET reported on an incident in which Google Street View blurred out the face of a cow. So Google was being very thoughtful to protect that cow's privacy. But what about matching faces to identities? In some cases, again, it's seemingly harmless, as when you want to tag your friends, but when it comes to law enforcement, things get a bit sticky, particularly as you learn more about the specifics. And we'll talk about that in just a second, but first let's take a quick break to thank our sponsor.

All right, let's first start with the FBI's Interstate Photo System, or IPS, because this one has perhaps the least controversial elements to it. When you really look at it, it's still problematic, but not nearly as much as the larger picture. The system contains images from criminal cases, like mug shots and things of that nature, but it also includes some photos from civil sources, like ID applications, that kind of thing. The Government Accountability Office, or GAO (there are going to be a lot of acronyms and initialisms, I should say, in this episode, so I apologize for that), did a study on this matter just in two thousand sixteen, so not that long ago, and published its report on facial recognition software use among law enforcement, specifically the FBI, because the GAO is a federal agency, so they were concerned with the federal use of this. The database contained about thirty million photos at the time of the GAO study, so thirty million pictures are in this database. Most of those images came from eighteen thousand different law enforcement agencies at all levels of government; that includes the tribal law enforcement offices. About seventy percent of all the photos in the database were mug shots.
More than eighty percent of the photos in that database are from criminal cases, so that means that less than twenty percent were from civil sources. In addition to that, there were some cases, plenty of them, where the database had images of people both from a civil source and from a criminal source. So I'll give you a theoretical example. Let's say that sometime in the past, I got nabbed by the cops for grand theft auto. Because I play that game. But let's say that I stole a car, which we already know is a complete fabrication because I don't even drive. But let's say I stole a car and I had moved the car across state lines, so it became a federal case. Therefore my criminal information is included; my mug shot would be included in this particular database. On a related note, my ID also is in that database, as a civil image, not as a criminal image. Well, in my case, they would tie those two images together, because they refer to the same person and I had been involved in a criminal act. So while I would have an image in there from a civil source, it would be filed under the criminal side of things. This is important when we get to how the probes work. Now let's say you have been perfectly law-abiding this whole time, and your ID is also in this database, but it's just under the civil side of things. Since you don't have any criminal background, it's not connected to anything on the criminal side. So when it comes to probes using the IPS, your information will not be referenced, because the FBI policy is, when it's running these potential matches with a photo that's been gathered as part of the evidence for an ongoing investigation, they can only consult the criminal side, not the civil side, with the exception of any civil photos that are connected to a criminal case. As in my example, those are fair game.
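In code, that search-scope rule might look something like this sketch; the record fields are invented for illustration:

```python
# Sketch of the search-scope rule described above: criminal photos are
# searchable, and civil photos are searchable only when tied to a
# criminal file. The record fields here are invented.
from dataclasses import dataclass

@dataclass
class PhotoRecord:
    person: str
    source: str                          # "criminal" (mug shot) or "civil" (ID)
    linked_to_criminal_file: bool = False

def searchable(record: PhotoRecord) -> bool:
    if record.source == "criminal":
        return True
    return record.linked_to_criminal_file  # civil photos otherwise off limits

records = [
    PhotoRecord("Jonathan", "criminal"),                             # mug shot
    PhotoRecord("Jonathan", "civil", linked_to_criminal_file=True),  # his ID
    PhotoRecord("you", "civil"),                                     # law-abiding
]
print([r.person for r in records if searchable(r)])  # civil-only ID is skipped
```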
So it might run a match, and it turns out that my photo from my state-issued identification card is a better match than the mug shot is. That's gonna be fine, because those two things were both attached to a criminal file in the first place. But let's say that it would have matched up against you. Since you didn't have a criminal background, and since the only record in there was from a civil source, the match would completely skip over you. It wouldn't return your picture, because your image is off limits in that particular use. Very important, because it's an effort to try and make sure this facial recognition technology is focusing just on the criminal side, not putting law-abiding citizens in danger of being pulled up in a virtual lineup. At least not using that approach. That's the problem: that's not the only way the FBI runs searches. In fact, that might not even be the primary way the FBI runs searches when they're looking for a match to a photo that was taken as part of evidence gathering in pursuing a case.

But let's say that you are an FBI agent and you've got a photo, a probe photo, and you want to run it for a match. What's the procedure? You would send off your request to the NGI-IPS department, and you would have to indicate how many potential photographs you want back; how many candidates do you want? You can choose between two candidate photos and fifty candidate photos. These are photos of different individuals, by the way. It's not just, here's a picture of Jonathan on the beach, here's a picture of Jonathan in the woods. No, it's more like, here's a picture of Jonathan, here's a picture of a person who's not Jonathan but also kind of matches this particular probe photo you submitted, and here are forty-eight others. The default is twenty, so if you don't change the default at all, you will get back twenty images
that are potential candidates matching your probe photo, assuming that any are found at all. It is possible that you submit a probe photo and the system doesn't find any matches at all, in which case you'll just get a null result. You might also get less than what you asked for, if only a few photos met the threshold for reliability. Now, we call them candidate photos because you're supposed to acknowledge the fact that these are meant to help you pursue a line of inquiry in a case; they are not meant to be a source of positive identification of a suspect. So, in other words, you shouldn't run a facial recognition software probe, get a result back, and say, that's our guy, let's go pick him up. That's not enough. It's meant to be the start of a line of inquiry, and whether or not it gets used that way all the time is another matter. But the purpose of calling it a candidate photo is to remind everyone this is not meant to be proof of someone's guilt or innocence.

The FBI also allows certain state authorities to use this same database, and different agencies have different preferences. In the GAO report that I talked about earlier, the authors noted that law enforcement officials from Michigan, for example, would always ask for the maximum number of candidate photos, particularly when they'd use probe images that were of low quality. So let's say you've got a picture captured from a security camera, and the lighting is pretty bad, and perhaps the person wasn't facing dead-on into the camera. You might ask for the maximum number of candidate photos to be returned to you, knowing that the image you submitted was low quality, and therefore any match is only potentially going to be the person you're actually looking for. And again, this is all just to help you with the beginning of your investigation. It's not meant to be the "that's our guy" moment that you would see in, say, a police procedural that would appear on network television in prime time.
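Here's a tiny sketch of those request rules, with the two-to-fifty range and the default of twenty baked in; the scoring side is stubbed out, since it belongs to whatever matcher is in use:

```python
# Sketch of the NGI-IPS request rules as described: you ask for between
# 2 and 50 candidates (default 20), and you may get fewer, or none,
# depending on what clears the matcher's reliability threshold.
def request_candidates(scored_matches, requested=20):
    requested = max(2, min(50, requested))  # the allowed range is 2..50
    # scored_matches: (person_id, confidence) pairs from the face matcher,
    # already filtered to those above the system's threshold.
    ranked = sorted(scored_matches, key=lambda m: m[1], reverse=True)
    return ranked[:requested]               # possibly fewer, possibly empty

matches = [("candidate_a", 0.91), ("candidate_b", 0.84), ("candidate_c", 0.79)]
print(request_candidates(matches))          # asked for 20, only 3 cleared
```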
The FBI also has a policy that all returned candidate photos must first be analyzed by human specialists before being passed on to other law enforcement agencies. Up to that point, the entire process is automatic, so you don't have people overseeing the process while it's probing the database. But once the results come in, human analysts who are supposed to be trained in this sort of thing are supposed to look at each of those returned candidates and determine whether or not they really do resemble the person in the probe photo that was submitted in the first place, and if they don't, they are not supposed to be passed on any further down the chain.

Now, so far, this probably doesn't sound too problematic. The FBI has a database containing both criminal and civil photographs, but when it runs a probe, it can only use the criminal photos, or the civil ones that are attached to criminal files. Candidate photos are only supposed to be used to help start a line of inquiry, not to positively identify suspects, and everything has to be reviewed by a human being. That sounds fairly reasonable. But even if you're mostly okay with this approach, which still has some problems we'll talk about in a bit, things get significantly more dicey as you learn more about the FBI's policies. For example, they have a unit called Facial Analysis, Comparison, and Evaluation Services, or FACE. This is part of the Criminal Justice Information Services division, or CJIS, a department within the FBI, and FACE can carry out a search far more wide-reaching than one that just uses the NGI-IPS database. FACE uses not only that database but also external databases when conducting a search with a probe photo. So let's say, again, you're an FBI agent and you have an image that you want to match.
You want to find out who this person is. Maybe it's just a person of interest and doesn't even necessarily have to be a suspect; could be that, hey, maybe this person can tell us more about this thing that happened later on. Well, you could follow the NGI-IPS procedure, which would focus on those criminal photographs, or you could submit your image to FACE. FACE then would search dozens of databases holding more than four hundred eleven million photographs, many of which are from civil sources. So NGI-IPS has thirty million; all of them together, four hundred eleven million pictures. And again, a lot of those pictures just come from things like passport IDs, driver's licenses, sometimes security clearances, that sort of stuff. You know, these databases hold a lot of law-abiding citizens who have no criminal record, and the images just have nothing to do with any sort of criminal act, but they're in these databases. These external databases belong to lots of different agencies at both the federal level and state level. So you've got state police agencies, you've got the Department of Defense, you've got the Department of Justice, you have the Department of State, and again, they contain photos from licenses, passports, security ID cards, and more. Your submission would then go to one of twenty-nine different biometric image specialists. They would take that probe photo, run a scan through these various databases, and look for matches. Here's another problem: each of these systems has a different methodology for performing and returning search results, which makes this even more complicated. For example, I talked about how the NGI-IPS system gives you a return of between two and fifty candidate photos.
Right, well, the Department of State will return as many as eighty-eight candidate photos if they are all from visa applications from people who are not US citizens. So you can get up to eighty-eight pictures from visa applicants, or you could get just three images from US citizen passport applicants, because that's a hard limit: they can only return three candidate photos from US citizens who applied for passports, but they can return up to eighty-eight visa application photos. The Department of Defense will whittle down all of their candidates into a single entry. So, in other words, for the Department of Defense, if you query that database with your probe photo, you will only get one image back; they will cull all the other ones and give you the most likely match out of all the ones that they find in their search. Some states will do similar things, where they will narrow down which images they will return to you. Some of them will just give you everything they've got; every match that comes up, they'll just return it back to the FBI. So it's very complicated. You can't really be sure what methods people are using to be certain that the potential matches they have represent a good match, a good chance that the person they've returned is actually the same one who was in the probe photo. At any rate, you as an FBI agent wouldn't get all of these at all. All of these photos that would come back, they would come back to that biometric analyst over at FACE. So you send your request to FACE; FACE takes care of the rest. They get back all these results, then they go through the results and whittle them down to one or two candidate photos, and they send those on to you, the FBI agent. So by the time you get it, you only see one or two out of the potentially more than one hundred images that were returned on this search.
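Put together, that aggregation step might be sketched like this, using the per-source caps from the episode; the source names and data shapes are invented:

```python
# Sketch of the aggregation a FACE specialist performs, with the
# per-source caps described above baked in.
SOURCE_LIMITS = {
    "state_dept_visa": 88,      # Department of State, visa applicants
    "state_dept_passport": 3,   # Department of State, US passport applicants
    "dod": 1,                   # Department of Defense returns a single entry
    "ngi_ips": 50,              # FBI's own database, upper bound
}

def gather(results_by_source, pass_along=2):
    pooled = []
    for source, matches in results_by_source.items():
        limit = SOURCE_LIMITS.get(source, len(matches))
        pooled.extend(matches[:limit])        # each source enforces its own cap
    pooled.sort(key=lambda m: m[1], reverse=True)
    return pooled[:pass_along]                # the specialist forwards one or two

example = {"dod": [("dod_best", 0.88)], "state_dept_passport": [("p1", 0.80)]}
print(gather(example))  # the agent only ever sees this short list
```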
But you might ask, well, how frequently does this happen? How often is the FBI looking at images, including pictures of law-abiding citizens, in these virtual lineups? It can't be that frequent, right? Well, again, according to that GAO report, the FBI submitted two hundred fifteen thousand searches between August two thousand eleven, which is pretty much when the program went into pilot mode and started to be rolled out more widely, and December two thousand fifteen. From August two thousand eleven to December two thousand fifteen, thirty-six thousand of those searches were on state driver's license databases. So it happens a lot: thirty-six thousand times. Chances are, if you are an adult in America, it's roughly a coin flip that your image was looked at at some time or another by an algorithm comparing it to a probe photo in the pursuit of information regarding a federal case, or in some cases state cases, because the FBI has also allowed certain state law enforcement agencies access to this approach.

Now, according to the rules, the FBI should have submitted some important documents to inform the public of their policies and to lay down the regulations, the rules, the processes that they would have to follow in order for this to be fair, for it to not encroach on your privacy or violate civil liberties or civil rights. Without those rules, the use of the system is largely unregulated, which can lead to misuse, whether it's intentional or otherwise. The Government Accountability Office specifically pointed out two different types of notifications that the FBI either failed to submit or was just very late in submitting. The first is called a Privacy Impact Assessment, or PIA. Now, as that name suggests, a PIA is meant to inform the public about any potential conflicts with privacy with regard to methods for collecting personal information. The FBI did submit a PIA for its next generation system, but they did it back in two thousand eight, when they first launched the NGI-IPS.
According to the Government Accountability Office, the FBI made enough significant changes to the system to warrant another PIA. Anytime you make a significant revision to your personal information systems, you have to submit a new PIA, because things have changed, and according to the GAO, the FBI failed to do that for way too long. Now, ultimately the FBI would publish a new PIA, but by that point, the Government Accountability Office said, they had delayed so long that it made things more problematic. Because during the whole time that they were supposed to have submitted this, they were actively using the system. It wasn't like this was a system being tested; it was actually being put to use in real cases, and that kind of violates... well, it doesn't kind of. It violates the Privacy Act of nineteen seventy four, which states that when you make these revisions, you're supposed to file a PIA before you put it into use. According to the GAO, the FBI failed to do so.

And also, the longer you wait to file this, the more entrenched those uses become. So if you put a system in place, you build everything out, you've actually taken the time to do it, and then you publish a PIA, then to any objections that are raised you could say, well, we've got a system now, and it cost one point two billion dollars to put it in place. It's gonna cost more money, taxpayer money, for us to alter it, to remove it, to change it. You could argue against any move to amend the situation. And the GAO says that's not playing cricket, or, for my fellow Americans, not playing fair. So that's a problem. But then there's another one. There's a second type of report called a Systems of Records Notice, or SORN. The Department of Justice was required to submit a SORN upon the launch of NGI-IPS, but didn't do so until May fifth, two thousand sixteen.
The GAO criticized both the FBI and the Department of Justice for failing to inform the public of the nature of this technology and how it might impact personal privacy. But wait, there's more. The GAO report also accused the FBI of failing to perform any audits to make certain the use of facial recognition software isn't in violation of other policies, or even to make sure it doesn't violate the Fourth Amendment rights of U.S. citizens. Now, for those of you who are not US citizens, you might wonder what this actually means. Well, the Fourth Amendment is supposed to protect us against unreasonable search and seizure, and part of that means law enforcement can't just demand to search you for no reason. And some have argued that using facial recognition software without a person's consent, using it invisibly and at widespread scale, essentially amounts to crossing that line. Now, in the United States, we've got plenty of examples of troublesome policies that seem to overstep the bounds established by the Fourth Amendment, but that's a tirade for an entirely different show, probably not a TechStuff, maybe a Stuff They Don't Want You to Know.

There are a couple of laws in the United States that are important to take note of here besides that Fourth Amendment. One of them I just mentioned, the Privacy Act of nineteen seventy four, and the other is the E-Government Act of two thousand two. The Privacy Act sets limitations on the collection, disclosure, and use of personal information maintained in systems of records, including the ones that law enforcement agencies use. The E-Government Act is the one that requires government agencies to conduct PIAs to make certain that personal information is handled properly in federal systems, and the GAO report alleges that the FBI's policy wasn't aligned with either of those.
Part of this accusation depends upon the fact that the FBI was using FACE in investigations for years before they updated their SORN. According to the Privacy Act, agencies must publish a new SORN upon the establishment or revision of the system of records. This is what I was talking about earlier, except I think I said PIA earlier when actually I meant SORN. That's entirely my fault, because I didn't write it in my notes and I was talking extemporaneously. But SORN is what I should have said. The FBI argued that it was continuously updating the database to refine the system. But the GAO's argument was that you could be continuously updating the system and argue, well, we don't want to publish a SORN after every tiny revision because it's wasteful and time consuming. The GAO's counter to that is, yeah, but you were using this tool in actual cases. If you were developing this, let's say, in a department where you're not using real cases, you're just gradually tweaking the system so that it's more and more accurate in a controlled environment, that's one thing. But if you're actively making use of this system in real-world investigations, you absolutely must adhere to these laws, because to do otherwise is in violation of laws that were passed in the United States. So you can't have it both ways. You can't continuously tweak a system and put it to official use and not also file these reports. You could argue the FBI was trying to have its cake and eat it too. That's an expression I think I actually used properly.

All right, we've got more to talk about, but it's time for us to take another quick break to thank our sponsor.

All right. So, the Government Accountability Office criticizes the FBI and various other agencies for failing to establish the scope and use of its facial recognition technology.
But that's just the tip of the iceberg, because the GAO report goes on to make an equally troubling point: that the FBI had performed only a few studies on how accurate these facial recognition systems were in the first place. So, in other words, not only was this a poorly defined, unregulated tool, but it's a tool of unknown accuracy and precision, which is terrifying when you think about it. Now, according to the report, the FBI did perform some initial tests before they deployed the NGI-IPS, and then occasionally did a couple of tests when they made some changes. But there were problems with these tests. For one thing, they were limited in scope, and they didn't represent how the system might be used out in the real world. When they were actually running these tests, they ran on about nine hundred thousand photographs in the database. So they took a subset of the photos that they had, they took nine hundred thousand of them, and they ran probe tests using photos that they knew either were or were not represented in that group of nine hundred thousand. However, you've got to remember the full database is more than thirty million images, so something that works on a smaller scale may not work once you scale it up.

For another, the tests did not specify how often incorrect matches would come back, so you didn't know how many false positives were there, because the FBI wasn't tracking false positives. They were only concerned with how frequently they were getting a match to, you know, an actual image. So the way they test this is: you've got nine hundred thousand images, they've got a probe image, they know for a fact that the probe image is inside that database, and then they run the search to see if the system sends that image back. And their threshold was an eighty-five percent detection rate for a positive match. So, in other words, it went like this. Let's say you need to conduct a test of this system.
This is one way you would determine whether or not you had that detection rate. Let's say you have a hundred probe photos that you've taken of one person, and you know this person's face is in that database; you know it's going to be among those nine hundred thousand or so images. So then you submit your query. If you have an eighty-five percent detection rate, then eighty-five of those probe photos should come back with a match, and that match should be the actual person you're looking for. That's what they meant by an eighty-five percent detection rate: that eighty-five percent of the time, an image that is in their database would be pulled due to a facial recognition software search.

Now, during this testing phase, the FBI reported that they met this threshold. They used a subset of, it was actually nine hundred twenty-six thousand photos, when they were testing it, and they said that they had an eighty-six percent detection rate, so they actually were exceeding what they had set as their threshold. But that just meant that eighty-six percent of the time, the actual match for probe photos showed up in a group of fifty candidate images. So you would get forty-nine other images that were not your match. The match would be there eighty-six percent of the time, along with forty-nine other images. So we know that the system works if you are asking for the maximum number of candidates. Remember, in the FBI system, you can ask for between two and fifty, but fifty is the max. But what happens if you ask for fewer images? What if you said, no, I want twenty returns? What's the accuracy then? The FBI can't tell you, because they do not know. According to the FBI, they did not run tests to see what would happen if you decreased the number of candidate photos you asked for. They only ran tests on the maximum number of candidate photos.
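If it helps to see the arithmetic, here's a tiny sketch of how a detection-rate test like this works. It's a hypothetical harness, not the FBI's actual test code; the data layout is something I've made up for illustration.

def detection_rate(trials, k):
    """trials is a list of (true_identity, ranked_candidate_ids) pairs,
    where the true identity is known to be somewhere in the gallery.
    The detection rate at list size k is the fraction of probes whose
    true match shows up in the top k candidates returned."""
    hits = sum(1 for true_id, ranked in trials if true_id in ranked[:k])
    return hits / len(trials)

# The FBI measured this only at the maximum list size:
#   detection_rate(trials, k=50)  -> reported as 0.86
# The unanswered question is whether that number holds when you request
# a shorter list:
#   detection_rate(trials, k=20)  -> never measured

And note that since the top twenty candidates are always a subset of the top fifty, the detection rate at twenty can only stay the same or drop, which is exactly why testing only at fifty tells you nothing reassuring about shorter lists.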
And keep in mind, the default for any search is twenty photos, so the default is less than what they tested, and they never tried to see if the eighty-six percent detection rate held true at these lower numbers. That's a huge issue. On top of that, the FBI didn't go so far as to determine how frequently its system would return false positives to probes. So they never paid attention to how many times they got responses that didn't reflect the actual identity; they didn't keep track of it. Now, according to the FBI, the purpose of the system is to generate leads, not to positively identify persons of interest, so it shouldn't come as a big surprise, or you shouldn't even care, if it returns a lot of false positives, because, hey, this technology isn't meant to be the smoking gun that says, here's the evidence that will put this person away. It's meant to just create a lead. So why do you care how many false positives it returns?

As if being looped in on an official inquiry when you had nothing to do with it isn't disruptive or stressful or anxiety-provoking. I don't know about you guys, but if I had a federal agent show up at my door asking me weird questions about a case I had no connection to, because my image had popped up in one of these searches when I have nothing to do with it, when it just so happens that I look enough like a photo that's being used in the case to warrant this, I would probably find that pretty disruptive in my life. So I would care about false positives. The FBI, at least according to this GAO report, apparently didn't think it was that big a deal. Now, the GAO points out that it is a big deal, and that they're not the only ones to think so.
The National Science and Technology Council and the National Institute of Standards and Technology both state that, in order to know how accurate a system is, you need to know two pieces of information: not just the detection rate, which the FBI claims is eighty-six percent, at least when you're asking for fifty candidates, but also the false positive rate. You have to know both of them in order to understand how accurate a system is. Knowing only one of those pieces of information isn't enough to state whether a system is accurate or not; you have to know both. So not only does the FBI not have a grasp on how accurate their system is if you're asking for fewer than the maximum number of candidates, they also don't know how often it returns false positives. So the FBI has no way of knowing how accurate this facial recognition software is.

Considering that it's being used to further actual investigations, official investigations of the FBI and also of other state agencies that have access to the system, that is beyond problematic. If you cannot say with any degree of certainty that the system is above a certain threshold of accuracy, why are you using it? Because, I mean, it has the potential to dramatically impact people's lives, and potentially lead people down a pathway that could result in false accusations and imprisonment. The person who is actually responsible might totally get away with something because of this. This is a real problem. And then the thing is, it might be a perfectly accurate system, but we don't know that, because we haven't tested it. So until we test it, we cannot just assume that it's accurate enough. That's not acceptable when people's lives are at stake. This is where my bias doesn't so much creep in as it kicks open the door and makes itself at home on your couch. But I digress.
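To show why both numbers matter, here's one more small hypothetical sketch in the same vein as the last one. The split into "mated" and "non-mated" probes is standard biometric-testing vocabulary, but the code itself is mine, not anything from the FBI or NIST.

def error_rates(mated_hits, nonmated_alarms):
    """mated_hits: for each probe whose match IS in the gallery, True if
    the system returned that match (this gives you the detection rate).
    nonmated_alarms: for each probe whose match is NOT in the gallery,
    True if the system returned a candidate anyway (a false positive)."""
    detection_rate = sum(mated_hits) / len(mated_hits)
    false_positive_rate = sum(nonmated_alarms) / len(nonmated_alarms)
    return detection_rate, false_positive_rate

# A system can report an impressive detection rate and still bury
# investigators in wrong faces: if nearly every non-mated probe also
# comes back with a confident-looking candidate, the false positive
# rate is huge, and "86 percent" by itself tells you nothing about it.

The GAO's complaint maps directly onto this: the FBI measured only the first number, so the second one, and therefore the system's real accuracy, is unknown.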
The GAO report also goes into great detail about how this accuracy issue really can have a clear impact on people's privacy, their civil liberties, their civil rights. They also cite the Electronic Frontier Foundation, the EFF, which says that if a person is brought up as a defendant in a case, and it is revealed that they were matched by a facial recognition system, it puts a burden on the defendant to argue that they are not the same person as was seen in a probe photo, that they are not the same one the system has identified. And if you cannot reliably state how accurate your system is, because you don't know how frequently it returns false positives, you have unfairly burdened the defendant. Like, if you're the FBI and you say, we have an eighty-six percent detection rate, but you don't admit, oh, by the way, we don't know how many false positives we get on any given search, the implication you have given is: we're pretty sure this is the right guy. And again, they argue that this is meant to be a point of inquiry, but you could easily see how it could also be used by a lawyer to argue that a defendant is in fact the person responsible for a crime, and they may not be. And because you don't know the accuracy of the system, using the system to argue for a match like that is irresponsible. There's no accountability there.

Now, not only has the FBI failed to establish the accuracy of its own NGI-IPS system, it has also not assessed the accuracy of all those external databases that are used whenever they take the FACE approach. There are no accuracy requirements for these agencies, so there's not, like, a threshold they have to prove they meet in order to be part of this. That's a huge problem. While each agency might be accurate, with no testing procedure in place it's impossible to be certain of that.
And since these databases include millions of people with no criminal background, and they all use different facial recognition software products, this is a huge issue. You could be put in a virtual lineup simply because you look enough like someone else that a computer thinks you are in fact the same person. The GAO report concludes with a host of recommendations for future actions, including addressing the problem of the FBI being so slow to publish those updated PIAs in a timely manner, and creating a means to assess each system's accuracy. The Department of Justice read the report and then responded, disagreeing with several points that the GAO report made, including arguing that the FBI and the Department of Justice published information when it made the most sense, when the system had been tweaked and finalized, more or less. However, by that time, again, they had been using that system for real-world cases throughout the entire process. So it seems to me to be kind of a weak argument. You can't really say, like, hey, it wasn't finished until then, that's when we published it, if you're also saying, hey, we used that for real cases to go after actual people. You can't have it both ways and not maintain accountability.

At any rate, that kind of gets us to the end of the Government Accountability Office report. But that's not the end of the story. In March of two thousand seventeen, Congress held some hearings about this, and boy howdy, were some congresspeople very upset with the FBI. On both sides of the aisle, you had Democrats and Republicans really chastising the FBI for their use of facial recognition software and arguing that it could amount to an enormous invasion of privacy, as well as endangering the civil liberties of U.S. citizens. So people who have dramatically different political philosophies were agreeing on this point. So it wasn't really a partisan issue
in this case. And it got pretty ugly, but probably not as ugly as the Georgetown University report that was published in late two thousand sixteen. This is an amazing report. Both the Government Accountability Office report and the Georgetown University report are available for free online. I will warn you, collectively they're about two hundred pages, so if you want some light reading, you can check them out there. They are quite good, both of them, and they're very accessible. Neither of them is written in, like, crazy legalese that will make it impossible to understand; they're written in very plain English. Now, it was in the Georgetown University report that it was revealed that one in every two American adults has their picture contained in a database connected to law enforcement facial recognition systems. And this report goes far beyond just the FBI, all the way down to the state and local systems that are implementing their own facial recognition databases, and many of them have no understanding of how it might impact the civil liberties or privacy of citizens.

The report is the summary of a study that lasted a full year, with more than one hundred records requests to various police departments. They looked at fifty-two different law enforcement agencies across the United States, and the report assessed the risks to civil liberties and civil rights, because up until this report was filed, no such study had been made, which is a huge problem. You don't know the impact of the tool you've created until after it's been put in use for a while. That's an issue. Ideally, you think all this out before you implement the procedure. And their findings were pretty upsetting. For example, the report found that some agencies limit themselves to using facial recognition within the framework of a targeted and public use, such as using it on someone who has been legally arrested or detained for a crime. And in this case, you're talking about a totally above-board approach.
You're assuming that everyone is following the law as regards apprehending and charging a suspect with a crime, and maybe that person is unwilling or unable to tell you what their identity is, and in that case, you would use this facial recognition software in order to figure out who you are dealing with. That's largely a legitimate case. You know, the Georgetown University study didn't say that's bad. They actually said, no, that makes sense: it's targeted, it's public. But you could have a more general, invisible approach, for example, using facial recognition software in real time on a closed-circuit camera pointed at a city street, where you're literally picking up people as they pass by. They're not people of interest; they're just people going about their day. And if you're running facial recognition software on such a feed, you are potentially invading privacy and stepping on civil rights and civil liberties.

So even if you were to argue for this real-time use, where you're just looking at people as they pass by, maybe a little name pops up every now and then as the system recognizes a person that matches a file in the database, it's easy to imagine a scenario in which such a technology could be abused. Either it picks up somebody mistakenly, it thinks it identifies someone but in fact it's a totally different person, and then you end up establishing a person's location by mistake. Like, it's not really where they were, but because the system has identified a person as being at X place at Y time, you have then supposedly established that person's location, when in fact that person might be across town, or not even in the same state, because of a misidentification in the system. That's one problem. But think about this, and this is a scary scenario: imagine a situation in which a group of people are discriminated against by a government agency. Let's say they have a legitimate gripe. It's completely legitimate.
They're victims of unfair treatment. So a group of them and some of their allies get together in a public place for a peaceful protest, to raise awareness of this issue and to confront the government agencies that have discriminated against them. This is all perfectly legal according to the U.S. Constitution. They're not doing anything illegal; they're assembling on public grounds in order to practice free speech. But it's not hard to imagine a government agency using a camera with this sort of facial recognition software to identify people who are in the crowd, in order to use that as leverage in the future for some purpose or another, even if it's just to say, we know you were there, and to put that kind of pressure on a person in order to essentially squelch people's freedom of speech. So this is a First Amendment issue, not just a Fourth Amendment issue.

Now, that might sound like a dramatic scenario, like some Big Brother issue, it's Orwellian, but it's also entirely within the realm of possibility. From a technological standpoint, there's nothing that would prevent us from doing this, or prevent an agency from doing this. And even without the evil-empire scenario in place, you still have the problematic issue of treading on civil liberties just by having such technology available and unregulated. You don't have rules to guide this sort of stuff. The Georgetown report found that only one agency out of the fifty-two that they looked at has a specific rule against using facial recognition software to identify people participating in public demonstrations or free speech in general. So only one agency actually has rules against that. Now, that doesn't mean the other fifty-one agencies are regularly using this technology to monitor acts of free speech, but it also doesn't mean that they can't. They don't have rules against it.
Only one agency out of the fifty-two. People are being watched and identified without any connection to a crime in these cases, and it's pretty terrifying. The Georgetown report also found that no state had yet passed a law to regulate police use of facial recognition software. No state in the US. There are fifty of them, and none of them have passed any regulations, any laws, to regulate the use of facial recognition software. So without rules, how do you argue whether someone's misused or abused a system? You have to have rules so that you know what is allowed and what is not allowed. With no rules, the implication is that everything is allowed until it isn't. That's a huge, dangerous problem.

The report also pointed out that most of these agencies lacked any sort of methodology to ensure that the accuracy of their respective systems was decent. The report stated that out of all the agencies they investigated, only two, the San Francisco Police Department and South Sound 911 from the Seattle area, had made decisions about which facial recognition software they were going to incorporate in their offices based on accuracy rates. That was not a consideration for all of the other agencies, at least not the ones they asked. Moreover, the report points out that facial recognition companies are also trying to have it both ways. So, for example, they cite a company called FaceFirst. Now, FaceFirst advertises a specific accuracy rate, but it simultaneously disclaims any liability for failing to meet that accuracy rate. So it's kind of like saying, we guarantee these tires; tires are not guaranteed. Not quite like that, but similar. So again, this is according to the Georgetown University report. That's a problem, for a company to sell itself on a performance threshold but then say, hey, you can't hold us to that performance threshold we sold you on. That's a little dangerous there, too.
Then the report goes on to talk about the human analysts, you know, the ones I was talking about earlier, that are supposed to be a safeguard. Human analysts are supposed to take the images that are returned by these automated systems and manually review them to make sure that they do or do not match that probe photo. That was the whole thing to begin with. But it turns out, according to this report, those human analysts are not that accurate. In fact, they're no better than a coin flip, literally. The report cites a study that showed that if analysts did not have highly specialized training, they would make the wrong decision on a potential match fifty percent of the time. Literally a coin flip. That's ridiculous.

Now, the report found only eight agencies of the fifty-two used specialized personnel to review images, in other words, people who presumably have actually received that highly specialized training necessary to make more accurate decisions regarding these photos. And the report states that there's no formal training regime in place for examiners, which is a major problem for a system that's already in widespread use. So not only do you need highly specialized training, there's no formalized approach to give or receive that highly specialized training. We know you need it, but we haven't developed the best practices to actually deliver on it. So meanwhile, you've got human analysts who are making mistakes half the time while reviewing these photos. And if you wondered whether facial recognition systems would disproportionately affect some ethnicities over others, the answer to that is a resounding and dismaying yes. The report found that African Americans would be affected more than other ethnicities, according to an FBI co-authored study that was cited by this Georgetown University report.
Several facial recognition algorithms are less accurate for black people than for other ethnicities, and there's no independent testing process to determine if there's a racial bias in any of these facial recognition systems. So no one has developed a test to make certain that a system is in fact accurate regardless of a person's age, gender, or race. Without being able to verify that it is accurate across all parameters, you have opened up an enormous can of worms, and you are disproportionately affecting people just because of their race, because your system does not address that properly.

The report also points out that information about the systems in use had not been generally available to the public. In fact, of all the fifty-two agencies that they contacted, only four had publicly available use policies. So, in other words, only four of the fifty-two could tell you what their general policy was as far as facial recognition software goes. That's less than ten percent of all the agencies they looked at. And only one of those agencies, which was the San Diego Association of Governments, had legislative approval for its policy. All the others were just self-appointed policies that had not passed through any kind of official legislative process. Finally, the report asserted that most of these systems did not have an official audit process to determine if or when someone misuses the systems. Nine agencies reported that they did have a process, but only one provided Georgetown with any evidence that they had a working audit system, and that was the Michigan State Police, by the way, who said, we have an audit system, and here's proof that it actually works the way we said it did. So good on you, Michigan State Police, for having that system in place and being able to back it up.
Now, the Georgetown University report also urged some major changes in the way law enforcement uses facial recognition, including an appeal to Congress to create clear regulations to define the parameters of when such a system could be used. They also called for companies to publish the processes that test their products' accuracy regardless of race, gender, and age, to remove that possibility of bias. And if we're being really super kind and generous toward law enforcement, we could say this is just another case where technology has clearly outpaced the law. We see that all the time: driverless cars, artificial intelligence, lots of different technologies are advancing far faster than legislation can keep up with. All right, that's fair, we see it happen. However, it's particularly troublesome that this is happening within law enforcement, which is already employing this technology before we've developed the policies to guide it. It's one thing to say, someone's out here working on a driverless car, and we need to start thinking about how we're going to regulate that in the future; maybe right now we say you aren't allowed to operate your driverless car until we figure this out. That's fair. It's another thing to say, there's this technology that could potentially impact people's lives, and we're allowing law enforcement to use it while we try and figure out the rules. That's, at best, a problem.

And as I said at the top of the show, I'm really just talking about the United States with particulars here, but this is happening all around the world. There are lots of governments around the world that are incorporating facial recognition software into law enforcement. So while I'm using specific US examples in this podcast, the same is true for lots of other places. Of course, the laws that protect citizens can be different from country to country, and in some cases there might not be very many outlets for citizens to voice their concerns, or it might even be dangerous to do so.
But this is something I think we need to be aware of. I'm not generally the kind of person who tells you that you're being watched or, you know, that you should be paranoid. But I'm also not the person to just sit back and let something go on when I feel like it's potentially more of a problem than a solution.

All right, that's it. I'm done. It's an important topic, and it's one that's still developing, obviously. Perhaps once legislation has been passed, once regulations are in place, once we have more definition about what law enforcement agencies can and cannot do with this technology, maybe I'll revisit this topic and talk about whether or not it works, whether or not it is still a good idea or a bad idea, or whether there are any other problems that we did not anticipate when we made this podcast. But for now, we can conclude this, and I hope to do a more zany, happy, fun TechStuff for our next episode.

In the meantime, if you have any suggestions for future topics for TechStuff, email me. My email address is tech stuff at how stuff works dot com, or you can always drop me a line on Twitter or Facebook. The handle for the show is TechStuff HSW at both Facebook and Twitter. You can also go to twitch dot tv slash tech stuff to watch me live-stream this show. If you want to see me make mistakes live on camera and hear about these podcasts about a month before they actually publish, you can check it out; I record on Wednesdays and Fridays. Twitch dot tv slash tech stuff for more details, and I will talk to you guys again really soon.

For more on this and thousands of other topics, visit how stuff works dot com.