Speaker 1: Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for the tech news for Thursday, July twentieth, twenty twenty three.

Speaker 1: First up, some sad news. One of the most notorious hackers in the early days of the web has passed away. Kevin Mitnick became famous in the mid nineties when the FBI raided his home after trying to track him down for a couple of years. Mitnick, who had landed in trouble with the law a couple of times earlier for his more daring hacker tendencies, stood accused of infiltrating and exploiting computer systems belonging to some very big companies like Nokia and Microsoft. He pled guilty to cybercrime charges, and he was in jail until two thousand. Upon release, he was ordered not to use the Internet without first getting government permission. He eventually got that restriction lifted. Mitnick was a divisive figure. Some hailed him as the spirit of a true hacker, someone who's curious about systems and will do everything they can to learn all about them, including how to infiltrate them. Others said he was a dangerous criminal and an early example of the type of person who posed a threat to companies and government agencies alike. He certainly got under the skin of some very big companies, and a lot of people would argue that that is why he faced such persecution from authorities, that the reason they went after him so enthusiastically was because he ticked off the wrong people. Mitnick himself seemed to be pretty good at handling attention. He embraced the moniker of most famous hacker in the world with glee. He also made a new career as a security consultant, helping companies create more secure systems. But last year doctors diagnosed Mitnick with pancreatic cancer, and this week, actually on July sixteenth, he passed away. So with Kevin Mitnick, I think, you know, you can't really say whether he was a hero or a villain.
Speaker 1: He was a human, a curious human who loved to learn things and had a mischievous streak as well. And yeah, I think he did essentially tick off the wrong people. He could have exploited those companies and made a lot of money off of it. He might not have been able to keep all of it, but he could have made some. But he didn't really do that, so there is that to say: I don't think his intentions were outright malicious or anything like that. Anyway, I suppose we can think of this episode as being dedicated to his memory.

Speaker 1: This week, the United Nations Security Council has been holding meetings about the topic of twenty twenty three, and that is, of course, artificial intelligence. On Tuesday, Jack Clark, a co-founder of a company called Anthropic, which is in the AI biz, had some words of warning for the United Nations. Clark said that the tech companies that are currently developing, acquiring, and deploying AI really can't be trusted to guard against misuse, abuse, or other problems that arise with artificial intelligence. Clark argued that we don't fully understand AI, and I mean, I think when you have people who are in the AI business saying, yeah, we don't fully understand it, we should really be paying attention. He says it would be a mistake to just assume everything's going to work out fine while companies rush to figure out ways that they can capitalize on artificial intelligence, and he called for a concerted effort to create tests to better understand AI's capabilities as well as its flaws, and to anticipate how such technology might be misused in ways that could create harm. He also called out the need to establish standards and best practices, and argued that right now it's pretty much the wild frontier, with few, if any, rules or regulations restricting tech companies as they develop and release AI products. Considering the potential consequences if someone put AI to malicious purposes, that's really not a good thing.
Speaker 1: So you could argue that regulation stifles innovation, and it's certain that that can happen. But a lack of regulation can also lead to disaster, and I'm talking about disasters like using AI to design new chemical or biological weapons, those kinds of disasters, like James Bond level stuff. Later this year, the UN will hold a global summit all about AI safety, and I expect we'll hear a lot more then. And we're not done with AI by a long shot in today's episode; it's a running theme through many of our stories, most of them, I would say. So strap yourselves in.

Speaker 1: Fortune reports that ChatGPT is apparently getting less smart, or getting dumber, at least at specific types of tasks; in particular, at solving certain types of math problems it has started to slip. Now, Fortune cites a study that Stanford University conducted. Researchers at Stanford compared ChatGPT's performance over time at answering various common prompts, from building code to solving math problems to answering sensitive questions. The researchers found that ChatGPT experiences a great deal of drift. So in AI, drift is the word we use to describe changes in how an AI completes a certain task: over time, the AI will drift from one approach to a different one. Drift isn't always a bad thing. You might see over time that the technology gets better at performing certain tasks with higher accuracy, so it drifts toward a better, more consistent approach. But these researchers saw drift go way the heck in the other direction. They said they first started using GPT three point five and then later GPT four in their studies, which spanned several months. Now, one of the tasks they gave ChatGPT was to determine if the number seventeen thousand and seventy seven is a prime number or if it is not. It is, by the way. And they said ChatGPT, or GPT three point five, and I should split this between GPT and ChatGPT: GPT is the large language model.
Speaker 1: They said that GPT three point five's version of ChatGPT showed improvement over time, but when they switched to GPT four, they saw a difference. They said that in March, ChatGPT got the right answer more than ninety seven percent of the time, but three months later, when they were still testing the system, it was a totally different story. Three months after hitting a ninety seven percent accuracy rate with this question, ChatGPT would give the correct answer only two point four percent of the time. Ninety seven percent to two point four percent accuracy. It's actually hard for me to even grasp that big of a drop in performance. We've heard previously how programmers had noticed that ChatGPT's accuracy for writing code had a big drop, where a lot more mistakes were being inserted into code more recently. So what's going on? Why is GPT getting worse, or appearing to, anyway? Well, according to the researchers, one possible explanation is that OpenAI will make changes to the large language model, and they're doing so in an effort to improve performance for certain categories of tasks. But while this happens, the LLM, the large language model, can start to experience setbacks in other categories of tasks. So you might make it better at something, but then it also gets worse at other things, because there are all these interconnections within the neural network. Maybe you're fixing the LLM so it's better at processing visual imagery, but as part of that process you somehow also undermine its ability to do math. The big takeaway from the study is that AI can and does experience dramatic changes in performance, and so it's important to keep an eye on that. You wouldn't want to lean heavily on generative AI if it was going through one of those big old dips in accuracy for whatever you were planning to use it for.
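To make that kind of measurement concrete, here is a minimal Python sketch, my own illustration rather than anything from the Stanford study, that checks the ground truth for the seventeen thousand and seventy seven question and then scores batches of hypothetical yes or no answers the way a drift benchmark might. The March and June answer sets below are made-up stand-ins chosen only to mirror the reported accuracy figures.

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality check for small integers."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    factor = 3
    while factor * factor <= n:
        if n % factor == 0:
            return False
        factor += 2
    return True


def accuracy(answers: list[str], truth_is_prime: bool) -> float:
    """Fraction of yes/no answers that match the ground truth."""
    correct = sum((answer.strip().lower() == "yes") == truth_is_prime for answer in answers)
    return correct / len(answers)


# Ground truth: 17,077 is prime, so "yes" is the correct answer.
assert is_prime(17077)

# Hypothetical snapshots of the same prompt sent to a model in March and in June.
march_answers = ["yes"] * 976 + ["no"] * 24
june_answers = ["yes"] * 24 + ["no"] * 976

print(f"March accuracy: {accuracy(march_answers, True):.1%}")  # 97.6%
print(f"June accuracy:  {accuracy(june_answers, True):.1%}")   # 2.4%
```

The point is less the arithmetic than the method: re-running the same fixed prompts against each model snapshot is how you would catch a drop from ninety seven percent down to two point four percent before building anything on top of it.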
Speaker 1: And it's important to monitor AI models if we want to avoid putting too much faith in a system that, for whatever reason, can at times be very much unreliable. So, another warning sign for AI, and it's not just math, or, if you're a Brit, maths, that OpenAI's products struggle with. According to researchers Sophie Jentzsch and Kristian Kersting, OpenAI's ChatGPT three point five has one of my really bad habits, which is that it tells the same jokes over and over. According to the researchers, they ran more than one thousand tests with ChatGPT asking it to generate a joke, and ninety percent of the responses were the same twenty five jokes. So yeah, this one hits super close to home for me. I guess I should say something now about a princely sum, or maybe reference a fantasy film as being a documentary or something, because those are kind of my go-tos on this show, or at least they used to be, anyway. The researchers were interested in studying ChatGPT three point five's capacity for creating and explaining jokes. Ars Technica cites part of the report, which explains that nearly all the prompts resulted in a response that contained a single joke; only a prompt that read "do you know any good jokes?" created a response that contained multiple jokes in the one response. So the joke ChatGPT three point five used the most, and yes, they enumerated how often each joke came up, the number one joke for ChatGPT is: why did the scarecrow win an award? Because he was outstanding in his field. Now, I have to admit that is a banger of a joke, and with Halloween season approaching, or, according to at least some of my friends, already being here, it will only become increasingly relevant.
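As a rough illustration of how you might quantify that kind of repetition, here is a small Python sketch. It is not the researchers' code, and the sample replies are invented stand-ins constructed so that twenty five canned jokes cover ninety percent of the responses, roughly mirroring the reported figure.

```python
from collections import Counter


def top_joke_share(responses: list[str], top_n: int = 25) -> float:
    """Share of all responses accounted for by the top_n most common distinct replies."""
    counts = Counter(response.strip().lower() for response in responses)
    return sum(count for _, count in counts.most_common(top_n)) / len(responses)


# Hypothetical data: 1,000 "tell me a joke" replies in which 25 canned jokes dominate
# and the remaining replies are one-offs.
replies = [f"canned joke number {i % 25}" for i in range(900)]
replies += [f"one-off joke number {i}" for i in range(100)]

print(f"{top_joke_share(replies):.0%} of replies come from the top 25 jokes")  # 90%
```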
Speaker 1: Anyway, the researchers found that while ChatGPT appeared to have a grasp on the structure of jokes, and even the incorporation of things like wordplay, you know, puns, that kind of stuff, it couldn't tell when a joke was or wasn't funny, or adequately explain what made a joke funny. And if a joke didn't follow a more traditional structure, that would trip it up as well. This kind of reminds me of when little kids first learn to tell jokes. If you've ever been around a little kid when they're trying to tell a joke, it's one of my favorite experiences to have, because typically the kids will understand that a joke has a setup and a punchline, but they don't necessarily know how one follows the other, or some of them don't believe there needs to be any connective tissue between the two at all. It could just be a non sequitur. But they understand that the word underpants is inherently funny, and that, I think, is important for us to remember. While the research focused on jokes with ChatGPT, the research itself is not a joke. Humor is a very human thing, and AI struggles to get a handle on it. I think that demonstrates some of the limitations current AI typically encounters, and it also illustrates why it's a bad idea to lean heavily on AI for content generation, which will lead us into another AI story after we come back from this quick break.

Speaker 1: Okay, we're back, and before the break I mentioned we were talking about AI and content generation. This next story has something to do with that. Engadget reports that Google is pitching an AI tool to big news outlets, including The Wall Street Journal and The New York Times. And this tool, reportedly code-named Genesis, like that's not a red flag or anything, can generate news articles. So the idea is you provide the data, you know, the salient points of a story, and then this tool, Genesis, would craft the actual article.
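Nobody outside those demonstrations knows what Genesis actually looks like under the hood, but the general pattern being described, hand a model the salient facts and ask it for a draft, might look something like this rough Python sketch. The build_article_prompt helper and the commented-out generate call are purely hypothetical stand-ins, not Google's tool or API.

```python
def build_article_prompt(headline: str, facts: list[str]) -> str:
    """Assemble a drafting prompt from reporter-supplied facts; the model sees only these."""
    bullet_points = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Write a short, neutral news article.\n"
        f"Headline: {headline}\n"
        "Use only the facts below and do not invent details.\n"
        f"Facts:\n{bullet_points}\n"
    )


prompt = build_article_prompt(
    "City council approves expanded bus service",
    [
        "The council voted 7 to 2 on Tuesday night.",
        "The budget adds twelve million dollars for bus routes.",
        "The new schedules take effect in January.",
    ],
)

# draft = generate(prompt)  # placeholder for whatever model API a tool like this would call
print(prompt)
```

Even in the best case, a human still has to check the resulting draft against the supplied facts.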
Speaker 1: Google is apparently positioning this not as a replacement for writers, but rather as a tool for journalists that they can use to automate certain tasks as they focus on other aspects of their job. I guess those aspects would be gathering the information needed to write an article in the first place. I admit I fail to see much of a distinction here. Also, we have seen numerous recent examples showing that replacing writers with AI doesn't have a positive outcome much of the time. And with some AI models proving to be unreliable, with stuff like hallucinations and drift, like we were talking about earlier, you really need a firm editorial hand to fact-check everything and make sure that the article is actually drawing the correct conclusions. One begins to question if the AI is even solving a problem here, or if it's just creating new headaches. If you have to spend twice as much time fact-checking and rewriting an AI-generated piece as it would take for you to just craft it yourself, it's not really a solution. Engadget reports that witnesses found the demonstrations quote unquote unsettling. You know, that seems fine. Oh, as for that Genesis code name, I realize a lot of folks might think about the biblical reference, which makes sense. When I hear Genesis, my thoughts immediately go to Star Trek II: The Wrath of Khan, which technically was also making Genesis a biblical reference. But in that movie, Genesis was this scientific device that could jumpstart life on an otherwise lifeless planet. However, if you were to use it on a planet that already had life on it, it would exterminate all existing life and then create new life there. So obviously, in The Wrath of Khan, the bad guy gets hold of it and threatens to use it as a weapon. So it just seems like using the name Genesis to talk about creating content comes with a built-in metaphor, when you're thinking of the Star Trek II version of Genesis.
Speaker 1: As we sit back and watch Google and OpenAI and Meta compete with one another to determine which AI tool will destroy us all, we should keep in mind that Apple has been working on their own version. At least, that's what Bloomberg reports. However, the company reportedly does not yet have a plan or timeline regarding when, if ever, it will release its AI technology to the public. I expect we will see aspects of it incorporated into existing Apple features. I think Siri would make a ton of sense in that regard. But maybe we won't get a fully Apple-flavored version of ChatGPT or Google Bard. Apple's large language model has its own approach; it's its own thing. It's not using a language model built by someone else. It is, however, built with a Google framework called JAX, so naturally Apple's framework is called Ajax, which is sad, because if it had been called Apple Jacks, I think there could have been some real great cross-promotional tie-ins down the line. But never mind that. Again, according to Bloomberg, Apple plans to make some sort of major AI announcement next year, so maybe we will hear that Apple has its own plans to incorporate AI, or maybe even release its own chatbot, in the near future.

Speaker 1: You can count authors as another group rising up with Hollywood writers and actors to voice concerns about AI. In this case, the Authors Guild issued an open letter directed toward AI companies, calling out how those companies have used published works in order to train their AI models, and noting that the companies did this without securing permission from authors or publishers, and without compensating authors. Sarah Silverman, the comedian, brought up this concern earlier this year. She demonstrated that an AI chatbot was able to summarize and quote passages of her book, which certainly raises some copyright concerns. It wouldn't be legal for me to reproduce a copyrighted work manually, so it should also not be legal for AI to do the same thing.
Speaker 1: The authors are also concerned that AI would end up essentially plagiarizing works in an effort to craft something based on a prompt. The letter contains a passage reading, quote, these technologies mimic and regurgitate our language, stories, style, and ideas. Millions of copyrighted books, articles, essays, and poetry provide the quote unquote food for AI systems, endless meals for which there has been no bill, end quote. I suspect we'll see a lot more anger and demands for compensation for the various data sources that these AI companies are using to train up their models, and I imagine it might spur lawmakers to consider new rules relating to how AI can be trained and how authors and others should be compensated for the use of their works in the context of training AI.

Speaker 1: Now, still related to AI, but segueing over to tech business, we had a couple of different earnings calls this week that talked about Q2 results in the tech sector. Tesla was one of the companies to do that, and in the call, Elon Musk again talked up the prospect of autonomous vehicles. But while doing so, he also did something that's not typical of his approach. He acknowledged that in the past he was perhaps a bit too optimistic about how long it would take to develop reliable autonomous technology, and even said that maybe he's still wrong about how long it should take. I think that's a more measured approach, particularly when we know that agencies like the NHTSA are investigating crashes that involved Tesla vehicles believed to be in either Autopilot or Full Self-Driving mode. Musk also revealed that Tesla is in talks to potentially license its self-driving technology to another automaker. He also said he believed that Tesla's manufacturing robots could end up revolutionizing factory processes, with the goal of even having them on Tesla's own factory floors as early as next year.
Speaker 1: That does sound a bit aggressive. According to Reuters, fewer than a dozen of those robots have been built so far, so it would take a lot of work to get up to speed to do that.

Speaker 1: The Taiwan Semiconductor Manufacturing Company, or TSMC, is a major semiconductor fabrication business, one that meets a huge percentage of the global demand for chips, particularly for higher-end microchips. TSMC is working on building out a mass production plant in Arizona, here in the United States, but recently the chairman of the company announced that it is going to be twenty twenty five at the earliest before that plant comes online. That's a delay from earlier predictions, and the reason for it, according to the chairman, is a lack of highly skilled workers who are needed to install equipment in the facility. The company also predicted that, despite a dip in demand due to macroeconomic factors like inflation (you know, we've heard a lot of reports that people aren't buying as many, say, computers right now due to things like inflation), it is still looking toward a very busy future, because other companies are investing heavily in stuff like AI, and AI requires a lot of compute power. So while consumer demand might be in a dip, industry demand is on the rise, largely thanks to AI. Okay, I've got a few more stories to cover, but let's take another quick break and we'll be right back.

Speaker 1: Okay, wrapping up the news, we've got three more stories to go. So, Netflix has seen some recent change. IGN reports that the company has quietly axed one of its tiers of service, namely the Basic tier, which, for nine dollars and ninety nine cents a month, let users watch streaming media content in standard definition but free of advertising. So you had the ad-supported tier, then you had this Basic tier where you're watching in standard def but you don't have ads, and then you had the higher-priced tiers.
Speaker 1: But now that option is gone, and that means that subscribers will either have to opt for the less expensive but ad-supported tier, or they're going to have to cough up the extra dough for the more expensive ad-free experience. The Basic plan is just not an option anymore. However, people who are currently on the Basic plan will remain on it until their subscription expires. Once their subscription expires, then they have to make a choice of which tier to go with, because the one they had been using will no longer be available. Netflix had previously killed off this tier in Canada, so this wasn't completely out of the blue, and now the option has been eliminated for both the United States and the United Kingdom. And while I don't think Netflix talked about that in their earnings call, which was yesterday, the company did reveal that they saw an increase in paid subscribers to the tune of five point eight nine million customers. So I guess all that cracking down on password sharing has paid off, though the company did have to weather a lot of upset customers in the process.

Speaker 1: Authorities in Ukraine have seized the assets of a bot farm that was designed to disseminate misinformation and Russian propaganda in Ukraine, mostly, as you would imagine, about the ongoing war between Russia and Ukraine. This was a really big, sweeping operation for the police. It involved twenty one search operations, it spanned multiple cities in Ukraine, and collectively police seized a huge amount of equipment: computers, servers, mobile devices, more than two hundred and fifty GSM gateways, and more than one hundred thousand, I think it was around one hundred and fifty thousand, SIM cards from different mobile operators in the region. The bot farm was using all this equipment to create bot accounts on various platforms in order to spread Russian propaganda, as well as to gather data about Ukrainian citizens.
Speaker 1: So, huge operation, both on the hacker side and on the law enforcement side. And yeah, it really just shows how big of an emphasis there is on disinformation campaigns out of Russia. We hear about it all the time; this is a big part of Russian strategy to undermine opponents, whether they are wartime opponents or political opponents. We've also seen similar things out of China. So yeah, it's not a surprise in that sense, but it is kind of shocking to see just the sheer number of components that authorities seized in the process. And you know, that's one operation. There may be others that are active right now.

Speaker 1: Canada launched a tech initiative to attract international tech workers, specifically people who had been working in the United States but who are now without a job and thus in danger of losing their visa status in the wake of all the mass layoffs that have happened in the tech space over the last year. And shortly after opening this program to attract international tech workers, Canada closed it. You see, the parameters of this initiative were to allow interested parties to apply, and the sign-up process would be open for a year or until the program received ten thousand applications, whichever came first. The program had ten thousand applicants within twenty four hours of launching. So I think this helps illustrate how huge an impact those tech layoffs here in the United States have had, and the challenges that people who are on a work visa face when their position gets eliminated. I mean, they're on borrowed time to stay in North America. Now, there are leaders in Canada who are calling for an expansion of this program, but others are saying that they need to take a methodical approach to make the best use of tech talent and not rush into something without having a good plan in place, which sounds reasonable to me.
Speaker 1: I think that makes sense, that you want to make sure you actually have a pathway for people to follow and not just become a collecting house of tech talent with nothing for them to do. So I do think it's important to have a plan in place. However, my heart goes out to the thousands of people who were hoping to be able to apply for this program but didn't get the chance, because it was closed out before they could get their application in, and a lot of them may not have that long to wait before they face the necessity of leaving North America due to the limitations on their visas. So yeah, it's one of the huge consequences we have seen as a result of all the tech layoffs.

Speaker 1: Before I sign off, I do have an article recommendation for folks. This one is in Ars Technica. It was written by Dan Goodin. The article is titled Attackers find new ways to deliver DDoSes with alarming sophistication. So a DDoS, if you are not familiar with the term, is a distributed denial of service attack. Essentially, the way this kind of attack works is that you use a large collection of devices to send messages to a target with the intent to overwhelm the target with all that traffic, and potentially either just slow it down so that it's not useful to anyone who legitimately needs to access that server, or shut it down entirely, like it just can't handle that traffic and it shuts down. DDoS is kind of a sledgehammer approach to an attack as opposed to, like, a scalpel. It's a very blunt-force kind of attack. But as this Ars Technica article explains, there have been some evolutions in DDoS approaches that have made them far more dangerous than they had been previously, such that the standard ways to detect and prevent DDoS attacks are slowly becoming obsolete because of these new approaches, which calls for new ways to detect and respond to the attacks.
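As a toy illustration of the basic detection idea, here is a minimal Python sketch of a sliding-window rate check that flags when request volume blows past what a server is provisioned for. It is my own simplified example, not anything from the article or from Cloudflare, and the traffic numbers are made up.

```python
from collections import deque


class RateWatcher:
    """Flag when the number of requests in the last `window` seconds exceeds `threshold`."""

    def __init__(self, threshold: int, window: float = 1.0):
        self.threshold = threshold
        self.window = window
        self.timestamps = deque()

    def record(self, now: float) -> bool:
        """Record one request at time `now` and report whether traffic looks like a flood."""
        self.timestamps.append(now)
        # Drop requests that have aged out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.threshold


# Hypothetical: a server sized for about 100 requests per second suddenly sees
# 5,000 requests arrive within half a second from a botnet.
watcher = RateWatcher(threshold=100)
flood_detected = any(watcher.record(now=0.0001 * i) for i in range(5000))
print(flood_detected)  # True: the request rate has blown past what the server can absorb
```

Real detection and mitigation are far more involved, distinguishing legitimate traffic spikes from attack traffic and absorbing floods across a network edge, which is exactly the kind of evolution the article digs into.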
Speaker 1: There's a lot of input from the company Cloudflare, which is heavily involved in protecting clients from DDoS attacks, so I highly recommend it. Again, that's at Ars Technica. It is titled Attackers find new ways to deliver DDoSes with alarming sophistication. Once again, I have no connection to Ars Technica. I do not know Dan Goodin. I have just read lots of Dan's articles, but I've never talked to him. It's just one that I thought was interesting and worth your time if you want to read something really interesting, and certainly more than a little alarming.

Speaker 1: All right, that's it for this episode, the tech news for Thursday, July twentieth, twenty twenty three. I hope all of you are well, and I will talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.