Speaker 1: Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you?

Speaker 1: So, in the late eighteenth century, there was a man named Wolfgang von Kempelen, and he had a clever idea. He really wanted to knock the proverbial socks off of Maria Theresa, the Empress of Austria. Moreover, he wanted to make a more spectacular display than an illusionist named Francois Pelletier, who had performed for the Empress to great renown. Kempelen was not impressed. He was like, huh, I'm gonna show Frank up. I'm gonna make something that's really gonna rub his face in it, and the Empress is gonna think I'm her favorite.

Speaker 1: So, fueled by a competitive and perhaps petty spirit, Kempelen came up with an invention that some would call the Mechanical Turk. Now, I hesitate to call it a machine, but the machine consisted of a large, table-like cabinet, and on top of this cabinet was a chessboard, and standing behind the cabinet was a mechanical man dressed in the Western European concept of traditional Turkish attire. If you were to open the cabinet doors, you would reveal a mass of gears and cogs and such, so it looked as though everything was mechanical. Kempelen claimed that this machine could play an expert game of chess against any opponent, and as it turned out, the machine performed very well and won more games than it lost. But it was all a trick. The machine wasn't really a machine, or at least it wasn't a machine that did any work. Instead, hidden inside the cabinet, concealed by these gears and cogs, was a cramped human chess player, and that player could manipulate the Turkish figure and play chess from below the chessboard. So it was an actual human being who was actually playing these games against people. It wasn't some mechanical construct. The Turk only seemed to be a chess-playing machine.
Speaker 1: Now, this was way back in seventeen seventy. Today, in twenty twenty four, we still have to deal with companies and entrepreneurs peddling artificial intelligence that, when you look at it more closely, is really relying on plain, old, reliable human intelligence. Why? Well, the short answer is money. It seems like not a day goes by in twenty twenty four that doesn't include at least one news story about how artificial intelligence is going to completely change our lives. And the stories run the gamut of hyperbole, from doomsday prophecies about weaponized AI making battlefield decisions, to company executives saying that AI programs are a viable alternative to hiring actual human beings, to optimists who describe a Star Trek-like utopia in which AI handles all the dull stuff and leaves us to experience the world as a never-ending series of adventures. I'm not sure if any of those scenarios are what's actually in store for us, but I do know things are going to be messy for a good long while.

Speaker 1: But AI is such a buzzy term, and with big companies like Google, Microsoft, Amazon, Apple, Meta and more all stomping relentlessly forward to make AI the next big thing, there are literally billions of dollars pouring into various AI pursuits. Now, with that much money and enthusiasm at play, it's no wonder that dozens of startups attempting to cash in on the gold rush have cropped up in recent years. And some of those companies might actually be making genuine strides toward advancing AI or implementing it in a useful way. Some might just be jumping on the opportunity to get some of that sweet, sweet VC cash, since AI is the new metaverse slash NFT slash virtual reality slash three-D technology thing. What I'm saying is that we've been through this hype cycle many, many times before. The term AI itself is incredibly useful if you want to, you know, sell some snake oil, because AI as a term is still a bit vague.
Speaker 1: Like, the term AI is seventy years old at this point, and yet we don't have an easy definition for what artificial intelligence really is. It's kind of like our definition for actual intelligence. We don't have a super great explanation for that either. We have ways of describing parts of it, but we don't really have a holistic, perfect encapsulation of what intelligence is. So how could we do that for artificial intelligence? You don't even have to make an AI application or implementation to take advantage of the opportunities that this vague state of affairs creates. You just call whatever thing you're trying to sell AI, and you let the hype do the work for you. Because people don't understand it fully, you probably aren't going to get called out on it unless you're really sloppy, which means you can make hay while the sun shines and then get out of town when the clouds roll in.

Speaker 1: So today I thought we would talk a bit about fake artificial intelligence, or perhaps we should call it artificial artificial intelligence, which in a way comes back around to just plain old intelligence, because we're going to chat about some cases in which a person or group of people passed off stuff that isn't really AI, but was rather powered by human intelligence, onto unsuspecting targets. First, however, let's do a quick refresher on AI, because I find that the term is so broad and so overused that it's really starting to lose its meaning. These days, as consumers, you and I are most likely to encounter AI in the form of generative AI. That's the hotness right now. And I know I'm old. I use phrases like "the hotness." Sorry. But this is artificial intelligence that's capable of generating something; thus you have generative AI. Now, the something might be written text, it might be spoken words, it might be music, it might be a sketch or a painting. And there's no denying that generative AI can be really impressive when it works well.
Speaker 1: It seems to be able to do the same sort of things we humans can, though there remain questions regarding how much of that is thanks to the various AI programs cribbing from actual human work. You'll often hear human artists argue that generative AI borrows liberally, or outright steals if we're being more forthright about all this, and does so from human artists. You know, AI doesn't magically know how to paint something in a specific style, or, maybe even more specifically, to mimic a particular artist's style and technique. The AI quote unquote knows how to do this because it has been trained on countless examples of actual human-generated art. That's a real problem, because it could mean that the AI is lifting from real artists and thus potentially putting real artists' livelihoods at stake.

Speaker 1: But there are tons of other implementations that have nothing, or at least little, to do with generative AI. So, for example, facial recognition technology is a discipline under artificial intelligence. The basic task is to compare an incoming signal with a database of image records. This is relatively trivial if the incoming signal is one that matches the database record precisely. In other words, let's say that you've got an incoming signal where the camera angle, the lighting, the distance from the subject, all of that stuff is the same as the reference image that you have in your database. Then the computer can very quickly say, yes, these two are a match. Typically, it does get trickier if you are moving away from whatever types of faces the AI had been trained upon. But it gets even trickier if conditions are different between the incoming signal and the reference. So, an example I often give, and this isn't facial recognition, this is image recognition: imagine you have a coffee mug. Let's say that first we have a picture of a coffee mug. It's sitting on a table. The mug's handle is pointing to the right, you know, to our right, as we look at the picture.
Speaker 1: The mug is dark red in color. The body of the mug is essentially just a simple cylinder. There's no writing on it or anything. This is the reference image that we are using. It's the one that's in our database. Now imagine that you've pointed a camera at a coffee mug, and this mug is an oversized coffee mug, and it's off-white in color, and it has the words "World's Greatest Podcaster" on the mug, and the handle's pointed to the left, not the right. And this mug actually isn't a perfect cylinder. Let's say that it kind of curves outward from the base, sort of like a bowl more than a cylinder. And if I ask you what this thing is, you would quickly say, oh, that's a mug, or maybe, that's a coffee mug. You'd say that right away. You would recognize this. But it doesn't match the reference picture in our database perfectly, right? Like, it doesn't look exactly like, or even close to, the red reference mug we have in our database. It's got features that make it a mug, and you, as a human being, can naturally apply your knowledge of those features to identify a coffee mug. Even if you've never seen that specific mug before, you immediately know, oh, that's a coffee mug. Even if the coffee mug's form deviates from others you've encountered in the past, you're able to apply your intelligence to say, that's a coffee mug. But a computer cannot do that, not on its own. It has to be fed hundreds of thousands, or even millions, of images of coffee mugs, all in various shapes and sizes and colors and orientations to the camera and more. Even then, there's no guarantee that the computer will be able to identify a new image of a coffee mug that deviates from this collection of reference material.
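To make that concrete, here is a minimal sketch of the naive approach, assuming nothing fancier than a pixel-by-pixel comparison (the toy arrays and the naive_match function are hypothetical, purely for illustration): it happily confirms an exact match and then fails the moment the mug moves or changes color.

```python
import numpy as np

def naive_match(incoming, reference, tolerance=0.0):
    """Declare a match only if the images differ by no more than
    `tolerance` on average, pixel by pixel."""
    if incoming.shape != reference.shape:
        return False  # can't even compare differently sized images
    return float(np.mean(np.abs(incoming - reference))) <= tolerance

# Stand-in for the dark red reference mug photo in our database.
reference = np.zeros((64, 64, 3))
reference[20:44, 20:44, 0] = 0.8  # a dark red block for the mug body

# Identical conditions: trivially a match.
print(naive_match(reference.copy(), reference))  # True

# Shift the "mug" two pixels and make it off-white: the naive
# comparison fails, even though a human still sees a mug instantly.
incoming = np.zeros((64, 64, 3))
incoming[22:46, 22:46, :] = 0.9
print(naive_match(incoming, reference))  # False
```

Real systems avoid this trap by learning features from those millions of training images rather than comparing raw pixels, which is exactly why the training data matters so much.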
Speaker 1: So we can help computers by applying metadata to information. We might take a photo of a new coffee mug and apply metadata labels to this image so that a computer can quickly reference the metadata and then pull up our new photo of a coffee mug when we ask for it. But this is not the same thing as quote unquote knowing that it's a coffee mug. That would be more like using a reference index in order to pull up the matching image. It doesn't involve the image itself; it just involves the metadata about the image.
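Here is a tiny sketch of what that kind of metadata lookup amounts to (the photo_index dictionary and find_photos helper are made up for illustration): the computer retrieves the right file by tag matching alone, without ever examining a single pixel.

```python
# A toy metadata index: retrieval by tags, not recognition of content.
photo_index = {
    "mug_0001.jpg": {"object": "coffee mug", "color": "dark red"},
    "mug_0002.jpg": {"object": "coffee mug", "color": "off-white",
                     "text": "World's Greatest Podcaster"},
    "cat_0001.jpg": {"object": "cat", "color": "tabby"},
}

def find_photos(index, **wanted):
    """Return filenames whose metadata includes every requested label."""
    return [name for name, tags in index.items()
            if all(tags.get(key) == value for key, value in wanted.items())]

print(find_photos(photo_index, object="coffee mug"))
# ['mug_0001.jpg', 'mug_0002.jpg'] -- the index did the work, not "knowing"
```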
Speaker 1: So facial and image recognition are just one of thousands of different AI implementations that have nothing to do, or very little to do, with generative AI. They might ultimately have stuff to do with generative AI, but that's because of convergence. It's not that they're the same thing; it's that these different disciplines are converging into new implementations.

Speaker 1: Alan Turing, the great computer scientist, theorized that machines might one day be able to take all available information in a given situation and apply reasoning to that situation in order to reach conclusions, similar to how we humans operate, and he wrote about it in a paper titled "Computing Machinery and Intelligence." It was all hypothetical at the time, since computers were still quite primitive back then. For one thing, they lacked the capability of persistent memory. I'll explain more in just a moment, but first let's take a quick break to thank our sponsors.

Speaker 1: Okay, we're back. What was I talking about? All right, persistent memory, and I should get myself some of that. Anyway, what I meant by that, with Alan Turing and the lack of persistent memory, is that the computers of Turing's day could execute a command, but they couldn't quote unquote remember what it was they just did. They would just perform an operation, and they would continue to perform that operation on new incoming data until you changed all the factors of the computer, which often involved physical switches and cables and plugs and stuff. Like, it was a big deal to set a computer up to run calculations, so you couldn't naturally build upon an outcome and then do a new operation. You had to do a lot of work in order to make that happen. But Turing thought there would come a day when computers would be able to do this. They'd be able to complete a task, create an outcome, and then take that outcome and perform new tasks upon it, all with the goal of some specific outcome further down the line, like ten or twenty steps further along.

Speaker 1: It was only after a whole bunch of different smarty-pantses from different disciplines advanced the technology of computing that machines could actually have something that resembled memory, let alone this capability of taking in information and then being able to reason. From transistors to integrated circuits to computer languages, et cetera, a lot of different pieces had to come together in order to even make this a possibility. So in nineteen fifty-six, the Dartmouth Summer Research Project on Artificial Intelligence saw boffins from across the young discipline of computer science gather to talk about researching concepts relating to machine intelligence. This was the conference that serves as the official birthplace of the term artificial intelligence. While there were a lot of people really excited about the idea, and many people attending the conference felt pretty sure that machines would one day reach a point where they could simulate human intelligence, there was no agreement on exactly how this would happen. No one proposed any standards or anything like that, and because of that, the following decades would see various researchers pursue their own pathways toward a common goal. So everyone kind of knew where they wanted to go, but they weren't in agreement as to how they were going to get there, and so there was a lot of different work being done, in different approaches, toward artificial intelligence.
Speaker 1: In the nineteen sixties, a computer scientist and programmer named Joseph Weizenbaum created an early chatbot called ELIZA. This chatbot is exceedingly primitive by today's standards, and it gave the illusion of understanding communications from a human being. But in fact, ELIZA was really just spouting off responses using some rudimentary pattern recognition and substitution strategies. So in a very superficial, and not particularly useful, way, ELIZA could chat with humans. Now, I think ELIZA is a really important early step in artificial intelligence. I would also say it's not very intelligent. It's following a pretty simple set of rules in an effort to simulate conversation, and limited conversation at that. And while we have much more sophisticated chatbots now, ones that can draw on immense libraries of information and use complicated statistics to select words and word order, ultimately they're kind of doing the same thing. They are using rules to create an illusion of intelligence.
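For a sense of just how simple that pattern-recognition-and-substitution trick can be, here is a minimal ELIZA-style sketch (the rules are invented for illustration; they are not Weizenbaum's original script): it matches a regular expression, splices the user's own words into a canned template, and understands nothing.

```python
import re
import random

# A few toy ELIZA-style rules: match a pattern, then echo pieces of
# the user's input back inside a canned response.
RULES = [
    (re.compile(r"i feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.+)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r".*"),
     ["Please tell me more.", "I see. Go on."]),
]

def reply(user_input):
    """Pattern recognition and substitution -- no understanding involved."""
    for pattern, responses in RULES:
        match = pattern.match(user_input.strip())
        if match:
            return random.choice(responses).format(*match.groups())

print(reply("I feel trapped inside a chess cabinet"))
# e.g. "Why do you feel trapped inside a chess cabinet?"
```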
Speaker 1: But the projects I really want to talk about don't necessarily even do that much. They are creating the illusion of artificial intelligence, because that's a field that's getting crazy amounts of investment. Sometimes these companies are doing it because they don't yet have the money to really dive into AI. So it's not that they want to deceive; it's that until they get the investment to do the thing they want to do, they can't do it. AI is expensive. The processing you need in order to run complicated AI implementations is considerable, and most people don't have access to that, especially if you're just starting up a company. So sometimes an AI startup is not using AI, not in an effort to deceive investors, but rather as a placeholder, with the intent of using AI later on when it becomes feasible to do so. Sometimes the company just needs to do a lot of early work before it can launch whatever AI tool it has in mind, and this early work needs to be done by humans. There might be a lot of generating training data, or that kind of thing, and for that you might employ a bunch of people to do it, and eventually you will develop your AI tool. But again, it's not like AI tools just spring up fully formed, ready for you to make use of them. So again, there are cases where a quote unquote AI startup isn't using AI, but it's not necessarily an attempt to mislead people.

Speaker 1: But sometimes it might just be a scam. It might just be an effort to tap into people's enthusiasm and excitement around a buzzy term, with no intent of ever doing any significant work within the AI field. And this has been going on for quite a few years now. Back in twenty eighteen, a writer named Qian Zhecheng, and I apologize for my pronunciation, I am notoriously terrible about this, wrote a piece for SixthTone.com called "AI Company Accused of Using Humans to Fake Its AI." The company in question was iFlytek, that's little i, big F, l-y-t-e-k, and among other things, iFlytek was offering AI-powered interpretation services, or at least that seems to be what the claim was. So the product was supposed to provide real-time interpretation and translation services, and it was demonstrated at things like international events. But then a man named Bell Wang came forward and claimed that he was part of a group of interpreters who did the actual work. Essentially, they were posing as the interpretation software. Wang's accusations centered around a symposium called the twenty eighteen International Forum on Innovation and Emerging Industries Development. Catchy. At this forum, there was a professor from Japan who gave a presentation. His presentation was in English, and his words were being transcribed in real time by a speech-to-text program from iFlytek and displayed on a screen behind him.
Speaker 1: So as he spoke, his words in English were showing up behind him, but next to the English transcription, his words appeared as a Chinese transcription, written in Chinese characters, also supposedly handled by iFlytek's incredible technology. Now, this is amazing, and not just because you're talking about translation. I mean, we have translation apps out there, right? We've got translation tools where you can speak into an app and have it generate an actual response in another language. This was incredible because it's not just translation but interpretation, meaning that the turns of phrase the speaker used were being interpreted and then translated into Chinese so that the Chinese translation would make sense. Because, obviously, the sayings and idioms that we use in one language do not necessarily translate to another. If I say it's raining cats and dogs, English speakers know what I mean is that it's raining really hard. Non-English speakers, if they saw that translated literally, would wonder why animals are falling from the sky, when that's not what I literally mean when I say it's raining cats and dogs. So interpretation requires an extra step. It's not just translating word for word.
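Here is a crude sketch of that extra step, just to make the distinction concrete (the IDIOMS table and interpret function are hypothetical; real interpretation systems are vastly more sophisticated): idioms get mapped to their plain meaning before any literal, word-for-word translation happens.

```python
# Map idioms to their plain meaning first, so a literal translator
# downstream doesn't report animals falling from the sky.
IDIOMS = {
    "raining cats and dogs": "raining very hard",
}

def interpret(sentence, idioms=IDIOMS):
    """Rewrite known idioms into plain language before translation."""
    result = sentence.lower()
    for idiom, meaning in idioms.items():
        result = result.replace(idiom, meaning)
    return result

print(interpret("It's raining cats and dogs"))
# "it's raining very hard" -- now safe to translate word for word
```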
Speaker 1: And in fact, what was appearing behind the speaker was an interpretation of the speaker's words. The problem was, Bell Wang says, it was really him and his colleagues who were doing the work of that interpretation and translation. Wang pointed to the fact that the professor's accent was fairly strong. He had a Japanese accent as he was giving his English presentation, and the real-time English speech-to-text program from iFlytek ran into some issues with this. The program would sometimes misinterpret what the professor was saying, and so the transcript had errors in it. If you were reading along while the professor was speaking, you would see, oh, the program thought he said this one thing, but in fact he was saying this other thing. But the Chinese interpretation of the professor's words did not include these mistakes. The Chinese translation was accurate, and that's because Wang and his colleagues were translating accurately. They were listening to what he was saying, interpreting it, translating it, and then putting it up in Chinese text, so the appropriate Chinese interpretations were displaying, not the mistaken speech-to-text output. Wang said that iFlytek never really acknowledged the use of human interpreters at that event, and that the implication was the technology was doing all the heavy lifting. Wang said this made him feel very uncomfortable, to be part of what he felt was a deceptive presentation. And it's interesting, because that same piece in Sixth Tone quotes iFlytek executives who have essentially said that machines are not a suitable replacement for human interpreters, and that it's far more likely that the future of interpreting will involve humans and machines working together, rather than machines replacing humans outright. Now, perhaps the iFlytek representatives at the International Forum were a bit overzealous in promoting the work of their company, but it feels a lot like the Mechanical Turk. You know, at a casual glance, you have a machine that's doing this incredibly complex action, but if you take a closer look, you see that humans are powering the real process behind the scenes.

Speaker 1: Then there's Olivia Solon's twenty eighteen piece in The Guardian, titled "The Rise of Pseudo-AI: How Tech Firms Quietly Use Humans to Do Bots' Work." Now, I love how Solon frames her piece by saying, quote, "Some startups have worked out it's cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans," end quote. That, I feel, is bang on the money. She did a great job with this article. We humans are really versatile. We have evolved to be that way. It's not that we're special.
Speaker 1: We have millions of years of evolution behind us that have shaped us to be like this. But we have to put that same work into machines in order to make machines perform in versatile ways, and that is a considerable amount of work. We haven't been working with computers for millions of years; we've only been doing it for a few decades. So companies like OpenAI and Google and such are spending billions of dollars to achieve that goal. It is not at all easy, and it doesn't always go smoothly. So some startups use humans in the early days almost as a way to show the proof of concept for their end product. So sure, right now, humans are the ones doing whatever it is, like the coding or the translating or whatever the startup AI is focused on. But further down the line, well, that's going to be bots. In fact, maybe it will have to be bots, because if the startup were to take off and become a big company, then it could become too expensive to rely on humans to do all the work that needs to be done when you're operating at scale.

Speaker 1: So there's a danger in doing this as a startup, right? Like, if you're doing it early on, you're saying, I'm going to be transparent with you: right now we have human beings doing this work, but what we're working on is developing AI to do the work instead. This is how we're presenting it to you, and we want you to be aware of our goals and our strategy. If it turns out that whatever they want to do is too hard to do by AI, like it's just too hard to develop the AI to accomplish this goal, and the company is getting big because people value whatever the process is, you've kind of shot yourself in the foot. Like, yeah, you might become successful, but you might not be profitable, because you can't switch to AI. You never figured that part out, and scaling up means that you're employing so many humans to do the work that you're not being efficient and you're not really making a profit.
Speaker 1: That's a real issue, especially if people are still associating your company with AI and you're still not doing AI stuff. So it's a dangerous path to go down, even if you're being sincere at the beginning. Now, Solon also mentions a piece in The Wall Street Journal that uncovered how Google would work with third-party companies and allow them access to user email inboxes. The identities of the people who owned those emails would be masked, but it would mean that these third-party companies could essentially read emails and stuff, which seems like a bad idea, right? Why would Google let that happen? Well, the research was largely focused on the field of AI-generated responses, you know, like using AI to fire off a quick reply to someone rather than having to compose a message yourself. But in order to train AI to be able to do this, humans have to do it first, and that meant other human beings were reading Gmail users' emails. So maybe they would read the email to make sure that the AI-generated response was appropriate based upon the email it was responding to. Even if you mask the identity of the people who are sending and receiving these emails, that still seems a bit sketchy, doesn't it? Because I don't know about you, but I typically assume other folks aren't allowed to read messages that were sent to me. I mean, we have rules about that with physical mail. You would imagine the same thing applies to electronic mail. Those messages being sent might include really sensitive information. So let me give you a personal example. This year, as I'm sure many of you know, I've been dealing with a lot of medical issues, and I don't mind sharing that if I do so on my own terms. But I don't want people to be able to read the messages that are coming to me from my various doctors. And sure, the actual identities of users were redacted, so my identity would be masked in such an email.
Speaker 1: But I'm sure you're all aware that it does not take that many points of data to be able to identify someone. It's pretty easy to do, actually. There was a famous case, more than a decade ago now, where a researcher showed that she could use three points of data to identify like eighty percent of the people in the United States based upon those three data points. Now, those were specific data points, like zip code and things like that. But my point stands: it does not take a lot of information to be able to identify a specific person, so having your ID masked is not that big of a comfort to me.
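To see why so few data points are enough, here is a toy sketch (the records are fabricated, and I'm assuming zip code, birth date, and sex as the three quasi-identifiers): you just count how many "anonymous" records are unique on those fields alone.

```python
from collections import Counter

# Fabricated "anonymized" records: no names, just three quasi-identifiers.
records = [
    {"zip": "30301", "dob": "1976-11-02", "sex": "M"},
    {"zip": "30301", "dob": "1976-11-02", "sex": "F"},
    {"zip": "30305", "dob": "1981-03-14", "sex": "F"},
    {"zip": "90210", "dob": "1969-07-20", "sex": "M"},
]

counts = Counter((r["zip"], r["dob"], r["sex"]) for r in records)
unique = sum(1 for n in counts.values() if n == 1)
print(f"{unique} of {len(records)} records are unique on just three fields")
# Every unique combination can be linked back to one specific person.
```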
Speaker 1: Solon also cites an older example in her article, one from two thousand and eight, and this was of a company in the UK called SpinVox that claimed to use technology to convert speech so that customers could have their voicemails converted into text messages. But a BBC reporter named Rory Cellan-Jones said SpinVox was actually sending these voicemail recordings to call centers in Africa, which was already questionable under UK and EU law (at the time, the UK was still in the EU), and that humans were actually transcribing the voicemails into text. Solon also cited Bloomberg reports made in twenty sixteen of companies like x.ai using humans posing as chatbots for the purposes of calendar scheduling services. And she mentioned a company called Expensify, which made a business expense management tool reportedly using AI scanning technology to handle receipts. But it turned out that at least some of those receipts were transcribed not by a machine, but by humans working for Amazon's crowdsourced labor business. That's a business which has the appropriate name: Amazon's Mechanical Turk. I kid you not. All right, we've got more to talk about, but I'm running a bit long, so let's take another quick break, and we'll be back to chat about some more fake AI.

Speaker 1: Okay, we're back. Next up, I want to talk about an article written by James Vincent for The Verge. This was in twenty nineteen, and the article is titled "Forty Percent of AI Startups in Europe Don't Actually Use AI, Claims Report." So the report that is mentioned in that headline came from a venture capital firm in London called MMC, and MMC looked into nearly three thousand AI startups across thirteen different EU member states and found that forty percent of them weren't actually using AI in a way that was quote unquote material to their business. In fact, the guy who wrote the report, a man named David Kelnar, went even further. He said that in those cases, quote, "we could find no mention of evidence of AI," end quote. Yowza. Not just no evidence of AI, no mention of evidence of AI. That's not good.

Speaker 1: The piece does go on to give at least some slack to some of the companies that were included in this study, because it points out that, you know, the AI designation didn't necessarily come from the startups themselves. Rather, independent industry analysts may have categorized some of these startups as falling into the AI bucket, but it wasn't coming from the company; it was coming from these independent analysts. So, in other words, it wouldn't be fair to, like, walk up to an executive from one of those startups and say, hey, your company doesn't even use AI. The executive might just look a little confused and then say, uh, we never claimed it did. So I don't want to paint with too broad a brush here. I don't want to suggest that forty percent of these twenty-eight hundred and some odd companies are purposefully trying to trick people. Some of them are, I'm sure, but not all of them. Sometimes it's literally because some other yahoo said, oh, that startup, that belongs in AI. So this same venture capital firm, MMC, gets a shout-out in another article I read while researching this episode. This article is by Lauren Hamer in twenty twenty one, and she wrote it for Chip.
Speaker 1: The article is titled "How to Spot When a Company Is Trying to Peddle You Fake AI," and Hamer cites MMC Ventures just like the Verge piece did. In this article, MMC Ventures says that startups that are in the AI space tend to attract up to fifty percent more investment dollars than startups that are not in the AI space. So again, we see this is where the money is. Like, if you know that AI companies are getting, at least sometimes, fifty percent more investment than non-AI companies, you're probably gonna start scrambling to figure out, how can I shove the AI into my business idea? Because I want to be able to get it funded, and there's only so much funding money out there. You're fighting for a pool. It's a big pool, but it's a pool of investment dollars. And if you know that people are more likely to invest in companies that are related to AI, then you are incentivized to make sure your company is positioned to at least appear to be AI-related. And I imagine that this number has actually grown since twenty twenty one. I don't think that this has diminished at all, as we saw other hype trains derail over the last few years. Like I mentioned at the top of the show, NFTs, that was a big thing briefly, but it totally and spectacularly failed. And then the metaverse, that was a really big thing for, like, a few months, and lots of investors got really excited about that. Not to say that metaverse development has stopped. It's still going on, but it's nowhere close to the level of hype that it was a couple of years ago. That means that since then, a lot of people have shifted over to AI as the next money ticket. And I'm curious what gap, if any, exists between startups that claim to be in the AI space and those that don't, as far as funding goes. I would imagine that it's more dramatic than it was in twenty twenty one.
Speaker 1: Well, in twenty twenty three, the US government began to weigh in on startups making AI claims, and not just startups, companies in general making AI claims. Specifically, the Federal Trade Commission, or FTC, published a blog post titled "Keep Your AI Claims in Check." This is, again, in twenty twenty three, and the blog post is a warning to companies that are attempting to fake it until they make it in the AI space. The FTC post reads, quote, "When you talk about AI in your advertising, the FTC may be wondering, among other things, are you exaggerating what your AI product can do?" And then it also asks, are you promising that your AI product does something better than a non-AI product? And then, on top of that, it asks, are you aware of the risks? And finally, it asks, does the product actually use AI at all? So the implication here is that the FTC might call on an AI startup, or any other company, to prove its claims, and if the company is unable to do this, the FTC might impose penalties on that company.

Speaker 1: The FTC is also not the only government agency in the United States getting involved. The Securities and Exchange Commission, or SEC, brought charges against two different investment firms, one called Delphia Incorporated and another called Global Predictions Incorporated. This was a matter that was just settled earlier this year, in March twenty twenty four. The charges stated that both of these companies had made quote "false and misleading statements about their purported use of artificial intelligence" end quote. So, as I said, the two companies each settled with the SEC just this past March, and in total they paid four hundred thousand dollars in civil penalties. Now, obviously, with the recent explosion of AI startups, there are lots of similar articles coming out about being wary of AI claims. One is by Pauline Tomaer, I believe that's how you say Pauline's name. It's a twenty twenty three piece, and it's in a blog called BQool.
583 00:37:15,239 --> 00:37:20,080 Speaker 1: It's spelled B-Q-O-O-L. The post is titled How to 584 00:37:20,160 --> 00:37:23,359 Speaker 1: Spot the Fake AI Claims. That's a good one. And 585 00:37:23,400 --> 00:37:28,360 Speaker 1: then Sheikar Quatra has an article titled The AI Hype Machine: 586 00:37:28,600 --> 00:37:31,200 Speaker 1: When Companies Fake It Till They Make It. I found 587 00:37:31,239 --> 00:37:34,560 Speaker 1: that one on LinkedIn, actually. It's also where I first 588 00:37:34,840 --> 00:37:38,040 Speaker 1: saw the term AI washing, and once I saw that term, 589 00:37:38,080 --> 00:37:40,520 Speaker 1: I thought, oh, well, of course, that's a perfect phrase, 590 00:37:40,680 --> 00:37:44,000 Speaker 1: because we're already familiar with stuff like greenwashing. That's when 591 00:37:44,040 --> 00:37:47,239 Speaker 1: a company claims to follow eco-friendly processes but in 592 00:37:47,280 --> 00:37:50,280 Speaker 1: fact fails to live up to those promises. AI 593 00:37:50,440 --> 00:37:53,560 Speaker 1: washing is similar. A company uses AI to drive interest 594 00:37:53,600 --> 00:37:56,120 Speaker 1: in and support for the business, even if the company itself 595 00:37:56,120 --> 00:37:59,239 Speaker 1: has little, if anything, to do with AI. Now, Quatra's 596 00:37:59,239 --> 00:38:02,640 Speaker 1: piece is largely a warning to potential investors that it 597 00:38:02,719 --> 00:38:06,000 Speaker 1: behooves you to examine a company's claims closely and to 598 00:38:06,080 --> 00:38:10,040 Speaker 1: employ critical thinking before handing over a sizable chunk of change. 599 00:38:10,120 --> 00:38:13,120 Speaker 1: Of course, that's true no matter what business a startup 600 00:38:13,239 --> 00:38:16,480 Speaker 1: might be in. But the frenzy around AI creates the 601 00:38:16,520 --> 00:38:19,200 Speaker 1: sense that if you do not act now, you're going 602 00:38:19,239 --> 00:38:22,000 Speaker 1: to be left behind, and you'll sit there while your 603 00:38:22,040 --> 00:38:25,480 Speaker 1: neighbors and co-workers all make millions of dollars and 604 00:38:25,520 --> 00:38:28,520 Speaker 1: they move out to live on solid gold yachts or something, 605 00:38:28,760 --> 00:38:31,560 Speaker 1: and you're stuck at home doomscrolling through your various 606 00:38:31,560 --> 00:38:34,960 Speaker 1: social media accounts. So don't give in to the FOMO, y'all. 607 00:38:35,160 --> 00:38:39,400 Speaker 1: But my warning goes beyond investors. My warning is for 608 00:38:39,560 --> 00:38:42,880 Speaker 1: all of us out there. We always need to remember 609 00:38:42,920 --> 00:38:45,719 Speaker 1: to use critical thinking. I say that as someone who 610 00:38:46,080 --> 00:38:49,240 Speaker 1: often forgets to use critical thinking. It's terrible. 611 00:38:49,600 --> 00:38:51,120 Speaker 1: I say it all the time. When I do use 612 00:38:51,160 --> 00:38:54,359 Speaker 1: critical thinking, I'm always thankful for it. But the point is, 613 00:38:54,400 --> 00:38:57,239 Speaker 1: this is a skill you exercise. It's not something that 614 00:38:57,360 --> 00:39:00,759 Speaker 1: just passively happens in the background. You've got to employ it, 615 00:39:00,920 --> 00:39:03,279 Speaker 1: and we have to remember to do that. We need 616 00:39:03,320 --> 00:39:06,759 Speaker 1: to remember to ask questions, and we have to examine 617 00:39:06,760 --> 00:39:09,200 Speaker 1: the answers that we receive.
And we need to do 618 00:39:09,239 --> 00:39:11,920 Speaker 1: this for a lot of reasons. The top reason is 619 00:39:11,920 --> 00:39:14,279 Speaker 1: probably just that you don't want to get tricked, you know, 620 00:39:14,360 --> 00:39:16,319 Speaker 1: unless you're at a magic show, in which case that's 621 00:39:16,320 --> 00:39:19,920 Speaker 1: exactly what you want. But typically getting tricked means someone 622 00:39:20,000 --> 00:39:23,200 Speaker 1: is taking advantage of you, and that's not cool. But 623 00:39:23,320 --> 00:39:25,800 Speaker 1: another good reason is that we need to look into 624 00:39:25,880 --> 00:39:29,400 Speaker 1: how a company is actually doing its business. For example, 625 00:39:29,440 --> 00:39:33,200 Speaker 1: if that business involves relying on call centers or data 626 00:39:33,200 --> 00:39:37,319 Speaker 1: centers located in developing countries, and it all depends upon 627 00:39:37,480 --> 00:39:42,320 Speaker 1: severely underpaid staff working insane hours to do the things 628 00:39:42,320 --> 00:39:45,359 Speaker 1: that a company claims AI is doing, well, that comes 629 00:39:45,400 --> 00:39:49,200 Speaker 1: across as mightily unethical to me. I've seen far too 630 00:39:49,280 --> 00:39:53,239 Speaker 1: many stories about people enduring terrible working conditions while the 631 00:39:53,320 --> 00:39:57,520 Speaker 1: companies that are exploiting those people are posting record profits 632 00:39:57,520 --> 00:40:00,960 Speaker 1: and shareholder returns, all while claiming that AI is the 633 00:40:01,040 --> 00:40:04,359 Speaker 1: cornerstone of their business. That just strikes me as inherently 634 00:40:04,480 --> 00:40:08,560 Speaker 1: unethical, and really downright evil if we're being honest about it. So 635 00:40:08,640 --> 00:40:10,680 Speaker 1: I feel like critical thinking is important, not just for 636 00:40:10,760 --> 00:40:13,359 Speaker 1: our own welfare, but for that of people who live in 637 00:40:13,480 --> 00:40:17,759 Speaker 1: other countries. Like, I do want them to find gainful employment, 638 00:40:17,800 --> 00:40:19,799 Speaker 1: but I want them to find employment that's 639 00:40:19,840 --> 00:40:22,719 Speaker 1: not, you know, exploiting them to the point where they're 640 00:40:23,080 --> 00:40:27,000 Speaker 1: falling apart, and meanwhile these companies are posting record profits. 641 00:40:27,320 --> 00:40:31,080 Speaker 1: It's good to remember that AI can be dangerous, not 642 00:40:31,200 --> 00:40:35,440 Speaker 1: just through misuse or weaponization, or through having AI replace 643 00:40:35,600 --> 00:40:39,040 Speaker 1: real folks at their jobs, though those are real dangers. 644 00:40:39,120 --> 00:40:41,759 Speaker 1: I mean, my old colleagues in the editorial department of 645 00:40:41,800 --> 00:40:44,839 Speaker 1: HowStuffWorks dot com found themselves out of a job when 646 00:40:44,880 --> 00:40:47,719 Speaker 1: the site shifted to AI-generated articles. If I had 647 00:40:47,760 --> 00:40:49,680 Speaker 1: still been there, I would have been one of them. 648 00:40:49,920 --> 00:40:53,080 Speaker 1: But yeah, AI can be dangerous even when the AI 649 00:40:53,120 --> 00:40:56,160 Speaker 1: itself isn't real. Or I guess that really just points 650 00:40:56,200 --> 00:40:59,279 Speaker 1: out that humans can be dangerous and deceptive. But we 651 00:40:59,440 --> 00:41:03,160 Speaker 1: kind of knew that already, didn't we?
Anyway, I hope 652 00:41:03,200 --> 00:41:05,480 Speaker 1: you learned something in this episode. I hope you go 653 00:41:05,560 --> 00:41:07,839 Speaker 1: and read some of those articles I mentioned, because they 654 00:41:07,880 --> 00:41:13,879 Speaker 1: are really well done and they illustrate some specific examples 655 00:41:13,960 --> 00:41:17,760 Speaker 1: of what I'm talking about, and they might help you spot 656 00:41:17,800 --> 00:41:20,680 Speaker 1: it when it happens again, so that you ask the tough 657 00:41:20,760 --> 00:41:24,000 Speaker 1: questions and you do examine those answers. In the meantime, 658 00:41:24,080 --> 00:41:26,480 Speaker 1: I hope all of you out there are doing well, 659 00:41:26,880 --> 00:41:36,320 Speaker 1: and I'll talk to you again really soon. Tech Stuff 660 00:41:36,400 --> 00:41:40,919 Speaker 1: is an iHeartRadio production. For more podcasts from iHeartRadio, visit 661 00:41:40,960 --> 00:41:44,520 Speaker 1: the iHeartRadio app, Apple Podcasts, or wherever you listen to 662 00:41:44,560 --> 00:41:49,000 Speaker 1: your favorite shows.