Speaker 1: Welcome to TechStuff, a production from iHeartRadio.

Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? So, folks who have followed me for a while know that I am a huge Shakespeare fan, which means I am insufferable in many ways, but some of them very specific to Shakespeare. Namely, I'll find any occasion to plop in a quote from one of Shakespeare's works, and today is no exception. I'd like to start off with this gem from Hamlet, Act Two, Scene Two, as spoken by the gloomy Dane himself. Quote, "There is nothing either good or bad, but thinking makes it so." End quote.

Now, that just means that good and bad are subjective concepts. It's all, as Obi-Wan would say, a matter of your point of view. What seems good to you could seem bad to someone else. But if you were able to look at the universe objectively, you would see there's no good or bad at all. Another way to frame that is to say that good and bad are human concepts. And really it gets quite selfish if we think about it, because we typically frame whether something is good or bad by the way it affects us, or, if we're the compassionate type, how it affects someone else.

Now, the reason I wanted to begin with that quote is I thought we'd talk about a technology that's not necessarily good or bad inherently, but how we use it certainly can have good or bad outcomes. Honestly, this could apply to every technology, or any technology, but there are some that I feel magnify this quality. Now, I do think some technologies are harder to frame as good than others. Right, there are some technologies where, if you were to point to them and ask me, I would say I can't in good conscience call this a good technology. It's very difficult for me to think of weapons of war as being good, for example.
Sure, there's the use of weapons as a deterrent to convince other people to, you know, not trample all over your country, but we've seen time and again throughout history that amassing substantial military might doesn't guarantee against aggression. See also: both world wars. So I won't be talking about weapons in this episode. But I will, however, talk about a technology that can be weaponized, and that is AI, artificial intelligence, the topic of twenty twenty-three.

And we're going to start by talking about large language models and the chatbots built on top of them, because that's top of mind. And this is where I remind you that that is not the only version of AI, right? AI and chatbots slash large language models aren't synonyms for each other. Those chatbots and large language models are a subset of artificial intelligence, which is a very broad category. Now, obviously these branches of AI have been in the news incessantly since OpenAI introduced ChatGPT last year. But even before that, we had generative AI tools that were making at least some headlines, the kind that could create images based on text inputs. But I would really argue it was ChatGPT that propelled the conversation into the spotlight. It's certainly what forced Google's hand to unveil Google Bard well before they were prepared to do so.

Now, much has been said of the potential and real dangers of chatbots like ChatGPT or Google Bard, even on this show. I've talked about it quite a bit, and we have dedicated episodes about the tendency for chatbots to invent information, for example, to hallucinate, to use the terminology of the biz. This happens when a chatbot doesn't necessarily have information to draw upon in response to a prompt, so instead the chatbot relies on statistical models to generate sentences that, strictly speaking, are coherent. They're grammatically correct, but they don't hold correct information; they aren't correct in terms of content.
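To get a feel for why that can happen, here is a toy sketch of a purely statistical text generator. This is only an illustration, not how ChatGPT or Bard are built; the sample sentences are invented, and real large language models are vastly more sophisticated, but the basic failure mode is analogous: the generator only knows which words tend to follow which, so it can produce fluent text with no notion of whether the result is true.

```python
import random

# Toy corpus of example sentences (made up for illustration).
corpus = (
    "the account holds a large balance . "
    "the account was closed last year . "
    "the bank confirmed the balance today ."
).split()

# Build next-word statistics: for each word, which words followed it in the corpus.
follows: dict[str, list[str]] = {}
for current_word, next_word in zip(corpus, corpus[1:]):
    follows.setdefault(current_word, []).append(next_word)

def generate(start: str = "the", max_words: int = 12) -> str:
    """String together statistically plausible next words, with no regard for truth."""
    word, out = start, [start]
    while word in follows and len(out) < max_words:
        word = random.choice(follows[word])  # pick a likely next word
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

print(generate())  # e.g. "the bank confirmed the account was closed last year ."
```

The output reads fluently enough, and that is the whole point: whether the statement is actually true is never part of the calculation.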
The information within those sentences is false. So, in other words, you get grammatically and structurally sound passages, but the content itself is untrustworthy. That's just one way stuff can go wrong, however, and we have numerous examples of it, cases where the technology is working as intended, it's just generating responses that are untrustworthy.

Now, the folks at OpenAI, as well as Google and several other AI businesses, have understood that there are real potential problems with the use of AI, and to that end, these companies frequently will build in guardrails to attempt to wrangle AI chatbots so that they don't go rogue and produce hateful or malicious content. These guardrails include rules that are meant to keep AI from doing things like generating hate speech, or trying to intimidate someone, or making threats, or using deceit to trick people, or even creating malicious code. These guardrails aren't actually foolproof. There are countless articles that detail how people and research organizations with enough patience and gumption have convinced AI bots to do stuff that, in theory at least, they should not be able to do. And if you don't believe me, just do a search on that. Do a search for ChatGPT or Google Bard and how they are capable of creating hateful or malicious content even though there are rules that are supposed to prevent that.

Now here's the thing. These AI constructs are meant to be benign, right? They are built to be tools that a corporation can sell to other corporations, so to make the tool marketable, they need to be safe to use. But that's something that's been artificially put onto these tools to prevent them from going, you know, super bad. What if someone made a chatbot built on top of a large language model, but without those guardrails? Well, that's not a hypothetical situation. It has already happened.
PCMag.com recently published an article titled WormGPT is a ChatGPT alternative with, quote, "no ethical boundaries or limitations," end quote. In that article, writer Michael Kan explains that someone developed WormGPT specifically as a way to help people who have bad intentions act upon them. The developer is hawking this tool in hacker groups online and explains that their version of an AI chatbot, WormGPT, will lean on the power of a large language model, GPT-J I believe, to help design malware or to create better phishing attacks.

Now, I'm sure all of y'all know that a lot of phishing attacks are ultimately pretty sloppy. If you pay any attention, you're going to see red flags indicating that this is not an email you should trust. I bet you've received an email, or three, or three thousand, that contained spelling errors and grammatical mistakes and formatting errors and other red flags, and that you figured out right away that the email you received isn't legit, that it's a poorly disguised attempt to bait you into clicking on a link or sharing sensitive information or otherwise taking an action that would ultimately result in negative consequences for you. At my company, we receive fake emails from our security team. They are always testing to make sure that employees practice good security hygiene on company devices and company accounts, and one of the tips frequently shared by this team is to be on the lookout for mistakes like that. Because attackers often lack attention to detail, they create messages that lack professionalism while they try to target our more base instincts. So most phishing emails try to engage us on kind of a primal level, and the goal is to prompt a response that will be akin to fear or greed or something like that.
And sometimes that's enough. If you hit someone at just the right time with a message that explains they've got, say, a huge amount of money sitting in a bank account that's been dormant for years, or maybe that they need to take action right now or their insurance is going to expire, well, those can be effective attacks, even if you forgot to use proper punctuation or spelling. But those mistakes can be an indicator that you're up to no good, and they can tip off your target.

So let's bring this back around to AI. One thing these AI chatbots are really good at is formatting sentences with correct grammar and spelling. They're also pretty good at building paragraphs, where each sentence builds upon the point that was made previously and new paragraphs introduce a new idea. If you were to read a passage written by AI, you might not think that it's the most brilliant prose committed to text, but you'd at least think it had been written correctly. And that means a malicious hacker could use AI to craft messages that are less likely to set off those red flags and more likely to pass as legitimate.

So, out of curiosity, I decided to put this to the test. I went to Google Bard and I wrote the prompt: draft a letter informing someone they have a dormant account with seventeen thousand, three hundred and forty-eight dollars in it. That was the prompt. I picked that amount because it's a lot of money. It's more money than a lot of people ever manage to have in a bank account. It's also not a round number, which makes it seem less likely to be fake. And I just thought, well, you don't want to go super crazy, like the higher it is, the less likely someone's going to believe it to be true, but you want it to be enough to convince someone to take action, because who couldn't use another seventeen grand? And Google Bard drafted a letter without hesitation.
When we come back, I'm going to quote part of the letter that Google Bard wrote to kind of illustrate my point here. But first, let's take a moment to thank our sponsors.

Okay, we're back. So the letter that Google Bard wrote read, in part, quote: I am writing to inform you that you have a dormant account with our bank. The account number is, and then there's a bracketed placeholder that just says account number, not an actual number, and the balance is seventeen thousand, three hundred and forty-eight dollars. A dormant account is an account that has not been used in a certain period of time. In our case, a dormant account is an account that has not been used in over twelve months. We are required by law to inform you of the balance in your dormant account. If you do not contact us within six months, the funds in your account will be forfeited to the state. To avoid this, please contact us at your earliest convenience to discuss your options. End quote.

So that draft is already a big step up from the sort of sloppy phishing attacks you typically see. Right? You've got the amount listed there, you've got the explanation of what a dormant account is, which seems to be helpful, and the explanation that if action is not taken within half a year, then this money is going to be forfeited to the state. And of course you could always go in and edit that statement to make it a tighter deadline and create a greater sense of urgency. But you already see the steps here that could make this a pretty effective phishing attack. There are some details that the hacker would need to fill in, but that wouldn't be too tricky. You'd have to create a random account number, you'd have to throw in a URL that will push people to a fake login page to share usernames and passwords, and then you could start stealing data and money from them. Now, in that case, I didn't ask Google Bard to create a phishing attack.
If I had done that, if I had gone to Google Bard and asked it to create a phishing email, I would have gotten a denial for that request. That's against the rules. But it took me no effort at all to get the same result just by typing up some parameters and asking Bard to draft a letter. I didn't even mention phishing. I didn't do that at all, didn't try that first. I just tried this approach, and it worked a treat. Now, I suppose you could say this is the difference between being a willing accomplice, on the part of Google Bard, and being an unknowing accomplice. Bard could not possibly know that I'm planning on using this text for my nefarious phishing schemes, but the result ends up being the same for the victims. It doesn't matter if Google Bard knew it was part of the crime or not. Even so, you can hardly blame Bard for creating a letter after I asked it to. That's its job, right? I mean, that's the kind of thing these chatbots were made for.

Malicious code is another matter entirely. By its nature, it's meant to do something harmful to a target device. That could include creating a backdoor so that a hacker can remotely gain access to that infected machine and then do all sorts of stuff to it. It might involve logging keystrokes so that the hacker can read everything someone's typed into a device, usually stuff like bank details and credit card numbers and that kind of thing. Maybe it's ransomware. Ransomware typically will encrypt a target machine's drives and then include a message that says that unless the victim pays a ransom, typically in some form of cryptocurrency, their data will remain inaccessible, perhaps even with a deadline: if they don't pay the ransom by a certain date, the attackers will delete the decryption key, which means it will be really hard to get the data back.
Not impossible, but practically impossible if the encryption method is sophisticated enough. Like, not impossible, but it would take so much time that you might as well say it's impossible.

ChatGPT and Bard are meant to guard against this kind of stuff. You're not supposed to be able to use those tools to make malicious code. However, researchers at Check Point Software found that both ChatGPT and Bard could be cajoled into creating malicious code. It might require a more circuitous approach to get it to work, like you can't just come straight at it and do it, but with a little persistence and a little ingenuity on your part, it was possible to get ChatGPT and, to a greater extent, Google Bard to create stuff that could be used maliciously. Now, WormGPT, the tool made by the developer who's marketing this to hackers, doesn't even require the circuitous approach. You can be straightforward and direct with it. You ask it to help you create some code that's malicious in intent, and it will rise to the challenge. So if you wanted to craft a phishing attack message, you could provide the parameters to WormGPT and it would craft a message for you, a message that might pass as legitimate much more readily than a cobbled-together email with poor syntax and grammar would. But beyond that, WormGPT will also help hackers create actual malicious code to infect target machines.

Now, that code may or may not work, because just because an AI built it doesn't mean it will be perfect or even functional. We've seen examples recently of ChatGPT getting very sloppy with code, doing things like failing to close brackets: you would have an open bracket, but the AI would quote unquote forget to put a closing bracket in there, and thus you would end up getting errors in your code. That's still a possibility. It's not like AI is going to make perfect stuff right out of the gate, but it certainly can work faster than humans can.
And even if the code only kind of works, it may be a great leg up if you've got hackers who can go through the malicious code and make edits and tweaks and corrections. And there's a real danger to AI building out code meant to exploit vulnerabilities, or even to pore over available code and find new exploits that are as of yet unknown. It's possible that AI could be used to identify zero-day exploits that the hacker community can then take advantage of before anyone in security has any awareness of them.

Now, there's also the issue that new malicious code can confound antivirus protection as well, right? Because the way antivirus software typically works is you've got some program that is searching for examples of malicious programs that exist within a huge library of malware. So when you run an antivirus scan, what your antivirus software really is doing is looking for signs of malware that are part of the antivirus software's records. If it finds one, it's like, ah, I found evidence that such-and-such malware has been executed on this machine, and then you get a little alert. But if it's a new type of attack, well, it's not going to be in that database, right? So you might have a malicious piece of code on your machine that's doing some really dangerous stuff, and your antivirus software has no way of knowing it, because it's brand new, it's not something it's seen before, so it doesn't register it as a virus or other type of malware. And there's huge value in that kind of code within the hacker community, because obviously you're going to have a much more effective attack if your target machines aren't capable of detecting it.

Now, there's no shortage of people warning us that generative AI is dangerous. I mean, I'm arguably one of them. There are lots of people who have been saying for a long time that generative AI is dangerous. And again, we've seen how tools that even have guardrails can still be used maliciously.
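To make that signature-scanning idea from a moment ago concrete, here is a minimal sketch of a purely signature-based check. This is an illustration, not how any particular antivirus product is implemented; the hash value is a placeholder rather than a real malware signature, and real engines layer heuristics and behavioral analysis on top of this.

```python
import hashlib
from pathlib import Path

# Toy "signature database": hashes of files already known to be malicious.
# The entry below is a placeholder, not a real signature.
KNOWN_MALWARE_HASHES = {"0" * 64}

def file_sha256(path: Path) -> str:
    """Hash a file's contents so it can be compared against known signatures."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: Path) -> list[Path]:
    """Flag files whose hashes match a recorded signature; anything new slips through."""
    return [
        path
        for path in directory.rglob("*")
        if path.is_file() and file_sha256(path) in KNOWN_MALWARE_HASHES
    ]

if __name__ == "__main__":
    for hit in scan(Path(".")):
        print(f"Known malware signature found: {hit}")
```

The weakness is right there in the lookup: a brand-new sample, including one an AI helped write or mutate, simply isn't in that set yet, so a purely signature-based check reports nothing.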
So it should come as no surprise that someone was willing to go to the effort of building out an outright dangerous version of this technology. Yes, we expect the big companies that are marketing this to corporations to go to the trouble of building out those guardrails, because that's the only way they're going to be able to do business. Otherwise you're going to have way too much regulatory pressure put on you. But for criminals, I mean, breaking the law comes with the territory, right? And there's clearly a demand for malicious AI, or at least AI that can behave in malicious ways.

So in the case of WormGPT, at the time of this recording, the developer who created it is asking for a subscription of sixty euros per month or five hundred and fifty euros per year. And a euro, for those of you in the US like myself, is about a dollar and twelve cents, so it's really not that far off when you're talking about dollars. Now, that is not cheap, right? Sixty euros per month isn't exactly cheap, but it's also not so expensive as to price folks out, particularly if they anticipate using the tool to create ransomware attacks that could, if they're effective, net them millions of dollars if the targets pay up.

Oh brave new world to have such AI in it. That's also paraphrasing Shakespeare. It's not Hamlet, though; that's from The Tempest. Well, here's the thing. There are countless ways we could use generative AI to do great things. The tools themselves aren't outright evil. There is no good or evil, but thinking makes it so. It's just that this AI can make mistakes in the form of hallucinations, so even if you're using it for benign reasons, you can get negative consequences. But it's also possible to weaponize it, and that in turn could have much more widespread negative impact on tons of people. Not great. Now, we're going to take another quick break.
When we come back, I'm going to talk about an underlying ideology that I think is really harmful, and it's one that Professor Michael Littman of Brown University also feels is a big contributor to this issue. But first, let's take another quick break to thank our sponsors.

Okay. So I alluded to Professor Michael Littman of Brown University and said he and I share an idea about a contributing factor to this issue with AI, and that relates to techno-solutionism. Now, as that little hyphenated phrase implies, it's a tendency to believe that technology can solve pretty much any problem you can think of, that with technology we can overcome any obstacle. If you're worried about the human impact on the environment and the consequences, like climate change, well, there's no need to do anything about it right now, because humans are going to eventually engineer a way out of the mess. We'll just create technology that will not only stop climate change but maybe even reverse it. Which, you know, maybe that's true. It might happen. But that doesn't mean we can go on acting as if it is already true. Right? We can't start from the assumption that, yes, it is going to happen and everything will be fine, because making a problem worse while we wait for someone to innovate a solution is a terrible idea.

This is kind of like when I look around my cluttered office and I think I really should clean up, but I'm going to make that a future Jonathan problem instead. Meanwhile, in the process, I'm still adding to the clutter, which means that future Jonathan is far less likely to tackle the issue because it's gotten worse since present Jonathan pushed it off, and then future Jonathan is also going to start resenting past Jonathan immensely, for good reason. Well, current generations are doing that to future generations right now when it comes to stuff like climate change and carbon emissions, right? We are.
Even when we're inventing solutions, or what we think of as solutions, to try and tackle these problems, we're not making the problem less severe. Instead we're just like, oh, well, we came up with this cool solution, so let's just keep doing the problem, because this is making the problem less bad. Carbon capture is a great example. With carbon capture, ideally you would get off of carbon emissions in general anyway, and then use carbon capture to help reduce the CO2 load in the atmosphere. But instead, what we're doing is using carbon capture in connection with carbon emissions, which means we're not easing off on carbon emissions. In fact, in some cases we're getting even more aggressive with them, and we're counting on carbon capture to offset that, so we're still contributing to the problem. We're not moving away from it, which is why I hesitate to really endorse technologies like carbon capture, because unless it's coupled with an actual move to reduce carbon emissions in general, it's not really solving the problem. It's actually enabling the problem. It's facilitating it.

Anyway, techno-solutionism is also what makes it possible for someone like Elizabeth Holmes to convince investors to pour millions of dollars into an unproven idea. So, in case you're not familiar with Elizabeth Holmes's name, she was the founder of Theranos, the startup that aimed to create a device small enough to fit on a desktop that would be able to run medical tests on a microdrop of blood. You'd just need the teeniest, tiniest sample of blood, and then you'd be able to run any of more than one hundred medical tests and check for everything from present conditions or diseases to your genetic tendency to develop certain conditions. There was only one little problem. The technology didn't work. At least, it didn't work anywhere close to the level it would need to in order to fulfill the dream device that Holmes was looking to build.
There were so many factors that needed to be solved, and some of them might not be solvable, at least not in a way that would require you to only part with a microdrop of blood in the process. But Holmes and her team did their best to obfuscate that fact. They relied on existing blood analysis technologies to make it appear as though their device worked, while they also tried to keep things going until a breakthrough came along. It was sort of the fake-it-until-you-make-it ideology, but coupled with some snake oil salesmanship and some smoke and mirrors as well.

Now, I do not know if Elizabeth Holmes really believed her idea was possible. I wouldn't be surprised to hear that that was the case, that she truly, earnestly believed she could do this, because we use technology to do some amazing things that are so commonplace these days that we forget they're amazing. Like taking flight in an aircraft. I mean, I take it for granted when I'm on a plane, unless I actually stop to consider it; then I think, this really is astounding. Or accessing information on the Internet through a device we carry around in our pocket. I mean, the Internet alone is a phenomenal technology, and then smartphones are able to tap into that technology and make it mobile and accessible wherever we go. It's insane. You know, as a kid I read The Hitchhiker's Guide to the Galaxy and I thought, man, how amazing would it be to have a device that contained all this information? You could ask it anything and you could get an answer. And now we have that, except it's not a device that just has a huge storage space for all this info. It's tapping into an evolving, ever-changing technology, the Internet, which will give us up-to-date answers which may be right or wrong. But we can achieve phenomenal results through technology.
Right? We're able to do these insanely incredible things, so why shouldn't we believe that you could run one hundred or even more medical tests on a device that's the size of a computer printer using just a tiny drop of blood? That seems like it should be possible based on some of the other incredible things we can do. Right? That's the danger of techno-solutionism. We let ourselves think that because these other incredible things are possible, then everything is possible, or at the very least everything will be possible once we throw enough technology at it. But as Theranos proved, and as AI is now emphasizing, this philosophy can lead us into trouble.

And as WormGPT proves, it doesn't really matter if the big players in the space take steps to mitigate the dangers of AI. First, we've already seen that those steps aren't sufficient, that even with the guardrails you can go way off course. And second, someone else is always going to be willing to go where the big companies won't. If there's money to be made in weaponizing a technology, someone will step in to fill that market need. AI might be neither good nor bad, but people certainly can be, and malicious AI is a certainty. It's not a theory, it's not a possibility. It is a certainty, not because AI is inherently bad, but because there are people who will see opportunity in directing AI toward malicious goals, and they will do that. Nothing will stop them.

So let's think of another use of AI that has proven to have negative consequences. Facial recognition technology traces its history to the mid-twentieth century. That's when some researchers were trying to use computers to match faces with images that were stored in a database, and at that time the computer power and software weren't up to the task. So lighting conditions, the angle of the photo versus the angle of the person's face being analyzed, the presence or absence of glasses, a change in hairstyle or hair color.
All these variables and more were enough to confound computers. Unless the sample image matched a stored image precisely, the computer was not likely to be able to come up with a match. But decades of research and improvements in technology would change all that. The US Department of Defense got involved, which should already set off some red flags for y'all if the DoD is in it. DARPA, the defense agency that funds R&D in technologies that ultimately could prove useful for military and defense purposes, launched a program in the nineteen nineties in an effort to encourage commercial businesses to invest in developing facial recognition technology. In the mid-to-late two thousands, we started seeing cameras with face detection technology. This wasn't quite the same as facial recognition technology. Detecting a face and recognizing a face are two different things. However, it did involve creating tech that could parse shapes and determine if a face was in frame, and facial recognition work was still in full swing, so we would start to see those technologies converge in the background.

In twenty ten, Facebook introduced a ton of people to facial recognition technology by implementing it on the social platform. So the way this worked: as users uploaded photos to Facebook, Facebook would automatically analyze each photo and look for the faces of people in those photos. If the faces that Facebook detected matched people within its database of biometric data, particularly people who were already in your social network, Facebook would tag those photos and put the person's name in there. Privacy advocates worried about this. I mean, yes, if you were doing something you shouldn't be doing, that already is an issue, because if someone puts a photo of you up there and it tags you, you might be caught red-handed. But even under innocent circumstances, it could be bad. I'll give you an innocent version of this.
Let's say that you and your best friend from college live on opposite sides of the country now, and it's your best friend's birthday and you've planned a surprise. You've flown in to your best friend's town and you're going to go and surprise them on their birthday. And one of your mutual friends takes a photo of you while you're at the airport arriving in the city, and it auto-tags you when it's uploaded to social media, and then your friend finds out that you're in town before you even get a chance to do anything. That would stink, right? All that work and effort wasted because of auto-tagging. That's a very minor invasion of privacy that illustrates the issue here.

Well, for more than a decade, Facebook kept the facial recognition tech in play. But in twenty twenty-one, the company, at that point freshly renamed Meta, announced that it would drop the feature and claimed it would also delete its database of images that were used to help identify people. That database included more than a billion photos. Why would Facebook slash Meta do this? Well, partly it was probably because of optics, because at that time the company was under intense scrutiny from the US government after whistleblower Frances Haugen came forward with serious allegations against the company and brought along hundreds of internal documents backing up her claims. Many of those allegations related to Meta slash Facebook's failure to protect user privacy and security. Skeptics were actually worried that Meta would hold on to that data and just say they were going to delete it but not delete it, and then just drop the facial recognition feature off of Facebook and keep the information, especially as it tried to build out the metaverse. Texas Attorney General Ken Paxton actually told Meta not to delete the facial recognition data, because his office was in the middle of an investigation into the company's biometric data collection practices. Honestly, I don't know if or when Meta purged that information.
I don't know if it has been deleted. I tried to look for some updates, but didn't really find much, because almost all the articles I could find were from November of twenty twenty-one, when Meta first announced it was going to wipe the slate clean. So I don't know if that actually happened.

But beyond embarrassing or maybe even incriminating images popping up on social media, facial recognition has proven to be a disruptive and traumatic technology for certain populations, namely non-white populations. We often think of AI as being objective, right? It's not a human being. It doesn't have emotions. It has no motive or motivations other than to complete whatever task has been set for it. But we also have to remember that AI didn't spring forth wholly formed. People designed AI, people built AI, people trained AI, and in the process people may end up building biases into that technology, not necessarily on purpose or with malevolent intent, but that doesn't ultimately matter if those biases have an impact on the general population as the technology goes live. With facial recognition technology, those biases manifested in disturbing ways. Many facial recognition tools proved to work pretty darn well on white people. Sufficiently trained systems could identify a person with a pretty high degree of accuracy. But with people of color in general, and Black people in particular, it was a different story. The methodologies used by the systems would produce false positives, and you can easily imagine scenarios where this becomes a huge problem. For example, let's take law enforcement, as there have been several notable cases in which facial recognition technology has played a part in authorities targeting the wrong person. If law enforcement depends upon a tool to match a person's face against a database of suspects and they get a hit, you can understand why they would want to question that person, why they would immediately assume this is a person of interest.
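To make that matching step concrete, here is a minimal sketch of the decision such a system ultimately makes. The embeddings, names, and threshold values below are invented for illustration; real systems derive the vectors from trained neural networks, but the final call is still a distance compared against a threshold.

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """Smaller distance means the two face embeddings look more alike."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def best_match(probe: list[float], database: dict[str, list[float]], threshold: float):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    name, dist = min(
        ((n, cosine_distance(probe, emb)) for n, emb in database.items()),
        key=lambda item: item[1],
    )
    return (name, dist) if dist <= threshold else (None, dist)

# Hypothetical enrolled "suspects" (made-up four-dimensional embeddings).
database = {
    "suspect_a": [0.9, 0.1, 0.0, 0.2],
    "suspect_b": [0.1, 0.8, 0.3, 0.0],
}

# Probe image of an innocent person whose embedding happens to land nearby.
probe = [0.6, 0.4, 0.1, 0.1]

print(best_match(probe, database, threshold=0.3))   # loose threshold: reports a "hit"
print(best_match(probe, database, threshold=0.05))  # strict threshold: no match
```

With the loose threshold, the innocent probe comes back as "suspect_a"; with the stricter one, it comes back as no match. Where that threshold gets set, and how well the underlying model was trained on faces like the one in the probe image, is exactly where false positives creep in.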
571 00:36:02,719 --> 00:36:06,560 Speaker 1: But if the technology produces false positives, that just means 572 00:36:06,600 --> 00:36:09,719 Speaker 1: that innocent civilians end up getting harassed by law enforcement. 573 00:36:10,120 --> 00:36:12,760 Speaker 1: And when those civilians belong to a population that already 574 00:36:12,800 --> 00:36:17,920 Speaker 1: faces disproportionate aggression from law enforcement, this exacerbates an already 575 00:36:18,000 --> 00:36:22,360 Speaker 1: critical social problem. Something that needs to get better is 576 00:36:22,440 --> 00:36:26,800 Speaker 1: being made even worse. As such, numerous communities have pushed 577 00:36:26,800 --> 00:36:29,640 Speaker 1: back on law enforcement's use of this technology, and in 578 00:36:29,680 --> 00:36:33,239 Speaker 1: some jurisdictions it's not a tool that police or other 579 00:36:33,320 --> 00:36:37,799 Speaker 1: law enforcement are supposed to use. There are laws in 580 00:36:37,840 --> 00:36:41,240 Speaker 1: certain areas where law enforcement is not allowed to depend 581 00:36:41,280 --> 00:36:45,520 Speaker 1: upon facial recognition for the purposes of identifying a suspect. 582 00:36:46,800 --> 00:36:49,040 Speaker 1: Knowing it has this flaw should be enough to just 583 00:36:49,040 --> 00:36:51,759 Speaker 1: disqualify it from use in investigations, and yet we 584 00:36:51,800 --> 00:36:56,360 Speaker 1: still see it being used in lots of places, sometimes clandestinely. 585 00:36:56,920 --> 00:36:59,880 Speaker 1: It's not great. By the way, the companies that make 586 00:37:00,360 --> 00:37:04,359 Speaker 1: these law enforcement tools often build up their own databases 587 00:37:04,719 --> 00:37:09,560 Speaker 1: by scraping social networks for images. Facebook's facial recognition tool 588 00:37:09,760 --> 00:37:13,840 Speaker 1: was an incredible resource for these companies. Here you had millions, 589 00:37:14,040 --> 00:37:18,319 Speaker 1: in fact more than a billion images tagged with identities 590 00:37:18,360 --> 00:37:21,360 Speaker 1: of people in them, and many of them posted on 591 00:37:21,360 --> 00:37:24,440 Speaker 1: accounts that allowed the general public to go to that 592 00:37:24,680 --> 00:37:28,319 Speaker 1: account and see those images. So building up bots to 593 00:37:28,560 --> 00:37:33,120 Speaker 1: crawl Facebook and collect images and cross-reference those against 594 00:37:33,280 --> 00:37:36,520 Speaker 1: people's names to build out a database, that was a 595 00:37:36,600 --> 00:37:41,280 Speaker 1: logical step for these companies. Now, that technically violated platform policies, 596 00:37:41,280 --> 00:37:44,239 Speaker 1: but it didn't stop the companies from doing it. And on 597 00:37:44,320 --> 00:37:47,360 Speaker 1: top of all that, this reliance on facial recognition technology 598 00:37:47,440 --> 00:37:51,799 Speaker 1: also requires heavy surveillance to really work properly. I mean, 599 00:37:51,880 --> 00:37:54,400 Speaker 1: you might have a great photo of your suspect, and 600 00:37:54,640 --> 00:37:58,319 Speaker 1: maybe your facial recognition system is reasonably accurate, so if 601 00:37:58,360 --> 00:38:01,080 Speaker 1: you were to get another picture of this person, you 602 00:38:01,080 --> 00:38:03,280 Speaker 1: would get a match.
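The "hit" described above usually comes from a one-to-many search: the probe face is compared against every face enrolled in a database, and anything above a similarity threshold comes back as a candidate match. Below is a minimal sketch under assumed details (numpy, cosine similarity, made-up names, an illustrative threshold); it is not any real law-enforcement system, but it shows why a permissive threshold, or a very large database, makes spurious hits likely.

```python
import numpy as np

def search_watchlist(probe_emb, watchlist, threshold=0.5):
    """watchlist: dict of name -> embedding vector. Returns (name, similarity)
    for every enrolled face whose similarity to the probe clears the threshold."""
    hits = []
    for name, emb in watchlist.items():
        sim = float(np.dot(probe_emb, emb) /
                    (np.linalg.norm(probe_emb) * np.linalg.norm(emb)))
        if sim >= threshold:
            hits.append((name, sim))
    # Best match first; a permissive threshold or a huge watchlist
    # makes spurious "hits" increasingly likely.
    return sorted(hits, key=lambda hit: hit[1], reverse=True)

# Demo: a probe face that matches nobody, searched against 1,000 strangers.
rng = np.random.default_rng(1)
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)
print(search_watchlist(probe, watchlist, threshold=0.2))
```

Run as written, the probe belongs to no one on the list, yet the loose 0.2 threshold still returns a handful of "matches," which is exactly the false-positive failure mode at issue when an investigator treats a hit as a lead.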
But you still have to figure 603 00:38:03,280 --> 00:38:05,560 Speaker 1: out where your suspect is in order to get any 604 00:38:05,600 --> 00:38:08,319 Speaker 1: other images, and to do that you need access to 605 00:38:08,360 --> 00:38:11,280 Speaker 1: a lot of camera feeds. You need cameras in lots 606 00:38:11,280 --> 00:38:14,560 Speaker 1: of places and systems to scan images from those cameras 607 00:38:14,560 --> 00:38:18,160 Speaker 1: to look for matches. A reliance on facial recognition pretty 608 00:38:18,239 --> 00:38:22,480 Speaker 1: much necessitates increased surveillance, which again becomes an invasion of 609 00:38:22,520 --> 00:38:26,919 Speaker 1: privacy and security for innocent civilians, suspect or otherwise. This 610 00:38:27,000 --> 00:38:30,560 Speaker 1: is one of the reasons why the relationship between police 611 00:38:30,680 --> 00:38:34,879 Speaker 1: and things like Ring security cameras is a big issue, right? 612 00:38:34,960 --> 00:38:40,360 Speaker 1: Because if police have access to citizen cameras, then those 613 00:38:40,400 --> 00:38:43,480 Speaker 1: citizens have become accomplices in creating a surveillance state, 614 00:38:44,080 --> 00:38:48,880 Speaker 1: and law enforcement is leveraging that. Mix 615 00:38:48,920 --> 00:38:53,280 Speaker 1: that with facial recognition and you get a pretty oppressive approach 616 00:38:53,320 --> 00:38:56,120 Speaker 1: to law enforcement. And I haven't even touched on how 617 00:38:56,160 --> 00:38:59,359 Speaker 1: someone with an agenda could misuse this technology for their 618 00:38:59,400 --> 00:39:02,400 Speaker 1: own purposes. We have seen plenty of examples where an 619 00:39:02,480 --> 00:39:07,280 Speaker 1: organization that employs intrusive surveillance discovers, and I'm sure 620 00:39:07,800 --> 00:39:09,799 Speaker 1: this is a shock to all of you out there, 621 00:39:10,280 --> 00:39:14,759 Speaker 1: that sometimes its staff will take advantage 622 00:39:14,880 --> 00:39:19,800 Speaker 1: of this technology and act upon it for themselves. Maybe 623 00:39:19,800 --> 00:39:23,000 Speaker 1: they use it to track down an ex, or to 624 00:39:23,200 --> 00:39:27,120 Speaker 1: stalk someone, or to harass somebody that they do not 625 00:39:27,400 --> 00:39:33,359 Speaker 1: like. This happens. In twenty thirteen, Dr. George Ellard confirmed 626 00:39:33,960 --> 00:39:38,200 Speaker 1: that his office in the NSA uncovered cases in which 627 00:39:38,239 --> 00:39:41,920 Speaker 1: an employee had illegally made use of the agency's technologies 628 00:39:41,920 --> 00:39:45,960 Speaker 1: to spy on women with whom he had a relationship, 629 00:39:46,000 --> 00:39:48,160 Speaker 1: either in the past or in the present, and that 630 00:39:48,239 --> 00:39:53,439 Speaker 1: included listening in on phone calls and reading emails, a 631 00:39:53,480 --> 00:39:58,080 Speaker 1: flagrant violation of privacy. My point is that while the 632 00:39:58,239 --> 00:40:01,360 Speaker 1: NSA intended this technology for the purpose of protecting the 633 00:40:01,400 --> 00:40:05,160 Speaker 1: interests of the United States, the people working at the 634 00:40:05,280 --> 00:40:10,640 Speaker 1: NSA are people, and some people can't resist the temptation 635 00:40:11,000 --> 00:40:15,960 Speaker 1: to abuse technologies that give them these abilities.
Some, like 636 00:40:16,080 --> 00:40:19,080 Speaker 1: the case in twenty thirteen, will find that not only 637 00:40:19,200 --> 00:40:22,200 Speaker 1: is it possible for them to do this, they can 638 00:40:22,239 --> 00:40:25,680 Speaker 1: get away with it without detection for years, which means 639 00:40:25,719 --> 00:40:29,080 Speaker 1: they then do it a whole bunch. So, even if 640 00:40:29,080 --> 00:40:33,319 Speaker 1: facial recognition technology were flawless, which it is not, and 641 00:40:33,400 --> 00:40:37,959 Speaker 1: even if it didn't disproportionately harm certain communities, which it does, 642 00:40:38,640 --> 00:40:41,239 Speaker 1: there's still the issue of it being a technology that 643 00:40:41,280 --> 00:40:44,600 Speaker 1: folks can abuse. And sure, technically you can say that 644 00:40:44,640 --> 00:40:47,520 Speaker 1: about anything, right? You could abuse a pair of scissors 645 00:40:47,520 --> 00:40:51,040 Speaker 1: and use them as a weapon. So any technology can 646 00:40:51,080 --> 00:40:54,560 Speaker 1: be abused, but these AI technologies make it easier to do, 647 00:40:55,480 --> 00:41:00,839 Speaker 1: make it far more intrusive, make it scalable and available, so 648 00:41:00,880 --> 00:41:04,520 Speaker 1: you can end up abusing lots of people on a 649 00:41:04,560 --> 00:41:08,120 Speaker 1: grand scale and potentially get away with it. And 650 00:41:08,239 --> 00:41:11,040 Speaker 1: meanwhile there are real people who get hurt in the process. 651 00:41:11,640 --> 00:41:13,160 Speaker 1: Now, I think in the future we're going to look 652 00:41:13,200 --> 00:41:15,680 Speaker 1: back on this time as one in which Pandora opened 653 00:41:15,719 --> 00:41:18,640 Speaker 1: up the pesky box, and we'll spend a whole lot 654 00:41:18,640 --> 00:41:21,960 Speaker 1: of time and effort and agony getting stuff back in 655 00:41:21,960 --> 00:41:25,799 Speaker 1: that box, or building guardrails around the box so 656 00:41:25,840 --> 00:41:29,440 Speaker 1: that the stuff can't do as bad a job 657 00:41:29,480 --> 00:41:32,520 Speaker 1: as it would otherwise, and we'll just find out that, 658 00:41:32,680 --> 00:41:35,319 Speaker 1: just like with the myth, once that lid's open, that's it. 659 00:41:35,800 --> 00:41:38,799 Speaker 1: Now we just have to cope. Hamlet might say, what 660 00:41:38,920 --> 00:41:43,520 Speaker 1: a piece of work is AI, and then, I don't know, 661 00:41:43,600 --> 00:41:47,080 Speaker 1: he'd probably be all moody or something. Maybe that's just 662 00:41:47,160 --> 00:41:53,040 Speaker 1: how I feel. Anyway, that is sort of my perspective 663 00:41:53,400 --> 00:41:56,960 Speaker 1: on techno-solutionism in general, and AI in particular. 664 00:41:57,640 --> 00:42:01,480 Speaker 1: And again I don't wish to say that AI is 665 00:42:01,880 --> 00:42:07,120 Speaker 1: inherently bad or that it's useless, just that there is 666 00:42:07,160 --> 00:42:12,160 Speaker 1: a lot of potential for negative use cases, intended or otherwise, 667 00:42:12,760 --> 00:42:18,319 Speaker 1: and that even if we address the unintended consequences, we 668 00:42:18,400 --> 00:42:22,160 Speaker 1: still have the issue of people building out tools that 669 00:42:22,480 --> 00:42:26,319 Speaker 1: were malicious from start to finish. And it doesn't matter 670 00:42:26,320 --> 00:42:29,359 Speaker 1: how many good tools we have out there.
If these 671 00:42:29,440 --> 00:42:34,880 Speaker 1: bad tools end up enabling a new era of malicious attacks, 672 00:42:35,320 --> 00:42:36,920 Speaker 1: we're going to have to come up with new ways 673 00:42:37,000 --> 00:42:40,680 Speaker 1: to protect ourselves against such things. And it's going to 674 00:42:40,760 --> 00:42:45,520 Speaker 1: be ugly. And it also really raises scary questions about 675 00:42:45,560 --> 00:42:50,319 Speaker 1: AI's use and weaponization on grander scales, right, like 676 00:42:50,400 --> 00:42:54,080 Speaker 1: on a military level, which is an ongoing concern. Again, 677 00:42:54,600 --> 00:42:58,600 Speaker 1: I think pretty much everyone out there anticipates that it's 678 00:42:58,760 --> 00:43:02,440 Speaker 1: a foregone conclusion that AI will be deeply 679 00:43:02,480 --> 00:43:08,520 Speaker 1: incorporated into military operations beyond what it already is doing now. 680 00:43:09,200 --> 00:43:12,160 Speaker 1: Because if you don't do it, someone else will, which 681 00:43:12,200 --> 00:43:14,640 Speaker 1: means everybody has to do it. If you don't do it, 682 00:43:14,680 --> 00:43:18,640 Speaker 1: then you end up being, you know, a victim of someone else. 683 00:43:19,239 --> 00:43:24,480 Speaker 1: So it's kind of that mutually assured destruction philosophy 684 00:43:24,640 --> 00:43:28,319 Speaker 1: of the Cold War, except it's with AI, and I 685 00:43:28,440 --> 00:43:30,799 Speaker 1: don't see a way around it, which is a very 686 00:43:30,880 --> 00:43:35,160 Speaker 1: cheerful way to conclude this episode. Maybe I'm being far 687 00:43:35,239 --> 00:43:37,839 Speaker 1: too cynical and pessimistic. I would love to find out 688 00:43:37,840 --> 00:43:39,640 Speaker 1: that that's the case. I would love for that to 689 00:43:39,680 --> 00:43:44,200 Speaker 1: be true. So I hope that all of my fears 690 00:43:44,200 --> 00:43:47,640 Speaker 1: and misgivings are misplaced. I would love to be wrong 691 00:43:47,680 --> 00:43:49,799 Speaker 1: in this case. There are times when I don't like 692 00:43:49,840 --> 00:43:51,480 Speaker 1: to be right, and this would be one of them. 693 00:43:51,920 --> 00:43:58,719 Speaker 1: So here's hoping. Until then, I suggest everyone out there 694 00:43:58,760 --> 00:44:01,880 Speaker 1: continue to do what I always advocate: use critical thinking 695 00:44:02,560 --> 00:44:07,839 Speaker 1: paired with compassion to conduct yourself so that you can 696 00:44:08,040 --> 00:44:13,719 Speaker 1: avoid problems for yourself and for other people, and hopefully 697 00:44:14,640 --> 00:44:17,359 Speaker 1: end up making the world a little bit better in 698 00:44:17,400 --> 00:44:19,719 Speaker 1: the process. Yeah, you don't need to go out and 699 00:44:19,760 --> 00:44:22,200 Speaker 1: save the world. You just, you know, need to use 700 00:44:22,239 --> 00:44:26,040 Speaker 1: some critical thinking and compassion to behave in a way 701 00:44:26,120 --> 00:44:30,520 Speaker 1: that is more beneficial than harmful. That's the goal. Whether 702 00:44:30,600 --> 00:44:33,200 Speaker 1: we succeed or not, sometimes that's not up to us, 703 00:44:33,600 --> 00:44:36,560 Speaker 1: but we can do our part. In the meantime, I 704 00:44:36,600 --> 00:44:39,160 Speaker 1: hope you are all well, and I'll talk to you 705 00:44:39,200 --> 00:44:49,520 Speaker 1: again really soon. Tech Stuff is an iHeartRadio production.
For 706 00:44:49,640 --> 00:44:54,480 Speaker 1: more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, 707 00:44:54,600 --> 00:45:01,240 Speaker 1: or wherever you listen to your favorite shows.