Welcome to TechStuff, a production from iHeartRadio.

Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? This is the tech news for the week ending on May twenty fourth, twenty twenty four, and we've got a lot of AI stories this week, so let's get to it.

Early this week, the world learned of a dispute between actor Scarlett Johansson and OpenAI, the company behind the ChatGPT chatbot, among other things, and here's how it breaks down. Johansson says that in twenty twenty three she was contacted by OpenAI CEO Sam Altman, who asked if the company could license her voice for the purposes of creating a digital assistant, something similar to Siri or Alexa, but built on OpenAI's AI model. Johansson had starred, or at least her voice had starred, in the film Her, in which a man falls in love with his AI-enabled operating system, played by Johansson. She said she was not interested. Then she said that this year, just two days before OpenAI was going to hold a keynote event about this digital assistant, Sam Altman reached out again to ask her to reconsider. She says she didn't actually speak with him on this one, and the implication was that she still had not changed her mind; she didn't want to license her voice. So then the keynote happens, and OpenAI debuted its digital assistant, which has a selection of voices that you can choose from, but one of those voices, called Sky, sounded an awful lot like Scarlett Johansson. The actor was, quote, shocked, angered, and in disbelief that mister Altman would pursue a voice that sounded so eerily similar to mine, end quote. Altman denied that they had trained the AI on Johansson's voice at all. The company has said that they actually used a different actor.
They showed footage of this actor speaking, but the actor's face was blurred out, which raises other questions about whether or not that's actually the person talking. But anyway, out of respect for Johansson, they said they would take down that particular Sky voice, which, again, if it's not her voice, is odd, right? Why take down a voice if it's not that person's voice? And if the other actor presumably did sign an agreement to have their voice licensed for this, then that's a different matter. Anyway, Altman's claims of innocence aren't helped by the fact that he appeared to directly reference the film Her on X, formerly known as Twitter, and did so the same day that the assistant debuted. So if he's giving a nod to a movie in which Scarlett Johansson does the voice, it does seem to imply that perhaps she had some involvement with the actual digital assistant. Anyway, that's where things stand now. There's the possibility of Johansson pursuing legal action, but honestly, I haven't heard very much firmly one way or the other, and there are questions about whether that would even be possible if, in fact, OpenAI did use this other actor's voice as training material for the AI. But it's another moment with AI that highlights the potential threat the technology poses to creatives.

Meanwhile, another departure from OpenAI made news this week. Gretchen Krueger, who had served as a policy researcher at the company, posted on X that she had left OpenAI, and that she had resigned before news broke that Ilya Sutskever, a co-founder and one of the board members who had ousted and then reinstated Sam Altman as CEO, had also left the company.
So she said she had made this decision independently, did not know that Sutskever was stepping away, but that she just felt she could not work for the company anymore, and the reason she felt that way was mostly out of concern that the company was ignoring safety protocols, among some other things as well. She said the company was not doing its part to live up to the principles that OpenAI was founded upon, such as transparency and, quote, mitigations for impacts on inequality, rights, and the environment, end quote, among other things. I'm including this story in the lineup because it really is showing a pattern at OpenAI. Numerous people connected to safety have left the company in recent months. It's not just been a couple of very high-ranking executives; some safety researchers have left as well, and this should be something of a red flag that OpenAI isn't being so thorough when it comes to developing AI in a safe and responsible way, which, again, was the mission statement for the original nonprofit version of OpenAI. Of course, when we say OpenAI today, we're largely talking about the for-profit version, not the not-for-profit company.

Speaking of leaving OpenAI, it turns out that the company has some measures in place to make that a really difficult decision for an employee, or at least it did have those measures in place until word got out about them and the company was shamed into changing things. Vox reports that employees leaving OpenAI are frequently compelled to sign exit documents, and among other things, these exit documents allegedly threatened to dissolve the employee's vested equity in the company if that employee says anything negative about OpenAI. OpenAI is valued at around eighty billion dollars, that's billion with a B, and obviously, for each individual employee who has equity, that can represent a huge chunk of money. We're talking maybe millions of dollars for some of these folks.
So the implication here is that OpenAI will hold that money hostage in return for exiting employees promising that they're not going to bad-mouth OpenAI. And you might think, huh, vested equity, not potential equity, but vested equity? That sounds like you're at the point where those assets definitely belong to the employee and not the company. Since this documentation has come to light, OpenAI has walked things back a bit, with Sam Altman himself saying that he felt ashamed of it all and that he also totally didn't know about it, despite the fact that some of these various documents had C-suite executive signatures attached to them, which, I don't know, seems like the kind of thing a CEO should know about. Anyway, Altman posted that, quote, we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement or don't agree to a non-disparagement agreement. Vested equity is vested equity, full stop, end quote. That seems like a reasonable thing to do. I'm just scratching the surface of the story, though; there's so much more to it, and to really dive in, I highly recommend you read Kelsey Piper's piece on vox dot com. It is titled "Leaked OpenAI documents reveal aggressive tactics toward former employees."

Over at Google, the Internet had a field day with some rather concerning results from the company's AI Overviews product. This is Google's AI-enhanced search feature, in which AI-curated information appears above some search results, and folks have noticed that the AI has offered up some pretty weird and sometimes dangerous suggestions. For example, if you were to google how to make pizza so that the cheese doesn't just slide right off, one person found that Google's answer was to add glue to the recipe to keep that cheese in place, which is a big no-no. But another one was even more concerning.
There was someone who was asking about how to sanitize a washing machine, and essentially the suggestion that the AI Overview made amounted to mixing up chlorine gas in the washer. Now, in case you didn't know, chlorine gas is very poisonous and it can kill you. In another example, it was clear that the AI Overview was essentially plagiarizing content, because it was for a smoothie recipe, and the answer that the AI gave included the phrase "my kid's favorite." Now, presumably the AI does not have children, but the smoothie recipe that it pulled from did use that phrase. So again, it looks like the AI is actually just directly lifting something from a source rather than synthesizing information. Right? That's the promise we get with generative AI, that it's synthesizing stuff and then presenting it to us in a way that we can understand. But when you see instances like this, it seems to suggest that, well, there's a lot more copying and pasting going on than synthesizing, at least in some cases, and that's not a good look.

Over at Meta, Yann LeCun, the chief AI scientist, has said that while large language models are interesting, they're not going to lead to AGI, which is artificial general intelligence. That's the kind of AI you find in science fiction stories, in which, you know, robots or chatbots start to think for themselves. So LeCun has said that the large language model branch of AI isn't going to get us there. LeCun says that generative AI models essentially have the intelligence of a house cat, which, if any cats are listening to this podcast, I would just like to point out it was LeCun who said that. I think that you are a very good kitty. I'm just covering my bases here. LeCun said that the chatbots built on LLMs are, quote unquote, intrinsically unsafe.
Now, by that, he means that a model is only as good as the data that you use to train it, and that if the data has unreliable or wrong stuff in it, the LLM will reflect that. It doesn't have the ability to discern between what is reliable and what is not, so you end up with an AI model that sometimes gives you incorrect responses, but with the confidence of someone who seems to really know what they're talking about. LeCun has also expressed that people leaving OpenAI over safety concerns are perhaps blowing things out of proportion. I take issue with that. I agree with LeCun that saying things like "we're dealing with intelligence that we don't really understand" is perhaps overblowing things; I think that was really LeCun's main point. But I counter that it doesn't actually require high intelligence for an entity to become dangerous, and if a company continuously undermines safety, the matter of how intelligent the AI agent is could be a moot point. It could still be really dangerous. Okay, we've got a lot more news to get through. Before we get to that, let's take a quick break to thank our sponsors.

We're back. Imagine that you are a computer science student who creates a studying tool that makes use of AI, and your school is so impressed that it awards you and your research partner a ten-thousand-dollar prize for coming up with a great business idea. Then that same school says you are suspended or expelled because of that exact same tool they gave you ten grand for. This is the story of Emory University, which is located in my hometown of Atlanta, Georgia, and two students who built an AI-powered studying tool that they called Eightball. Now, the tool can do stuff like analyze coursework and create flash cards and practice tests, so it helps you study, and it can retrieve information from a university tool called Canvas.
Canvas is not specific to Emory University; it's a tool that's available to universities, and it's a platform to which professors can upload class materials like coursework. So the idea is that teachers use Canvas to distribute coursework to their students, and Eightball can actually pull information directly down from this platform. Emory's Honor Council decided that Eightball amounted to cheating and that the students were accessing Canvas without university permission, which is an accusation the students have denied. Now one of the students is bringing a lawsuit against the school, arguing that the school itself knew of and approved of their work, as evidenced by that hefty ten grand the school awarded the two students for this project, and the student says that the university has no evidence that anyone ever used Eightball to cheat. So we will see how this unfolds.

Now, let's loop back around to AI-powered operating systems. That's how we started this whole episode off. Well, Microsoft has been aggressively pushing AI features into Windows eleven in preview mode, so it's not being rolled out as a general feature yet, but it is a preview feature. One of the things Microsoft has pushed AI to do is a feature called Windows Recall. Essentially, it means the AI is taking snapshots of what's going on on your PC every few seconds. That can include everything from which programs you're running to any tabs that you have open in your browser, all that kind of stuff. It will just take a snapshot of that, and then you can search through the snapshots and look through your history of activity on your computer. Further, while Microsoft Edge users will have at least some controls that allow them to filter what is or is not captured by the tool, anyone who's using a different browser will not necessarily have that same luck.
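Just to make that snapshot-and-search idea a bit more concrete, here's a minimal sketch of the general pattern in Python. To be clear, this is not Microsoft's Recall code; the library choice (the third-party mss screenshot package) and every name in it are assumptions for illustration. The loop grabs the screen every few seconds, saves a PNG, and logs the timestamp and file path in a small SQLite index you could search later.

```python
# Illustrative sketch only: a periodic screen-snapshot loop with a searchable
# SQLite index. Not Microsoft's Recall implementation; "mss" is an assumed
# third-party screenshot library, and all names here are hypothetical.
import sqlite3
import time
from datetime import datetime

import mss        # pip install mss
import mss.tools

def run_snapshotter(db_path="snapshots.db", interval_s=5, max_shots=3):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS snapshots (taken_at TEXT, png_path TEXT)")
    with mss.mss() as grabber:
        for i in range(max_shots):
            taken_at = datetime.now().isoformat(timespec="seconds")
            png_path = f"snapshot_{i}.png"
            shot = grabber.grab(grabber.monitors[0])  # entire virtual screen
            mss.tools.to_png(shot.rgb, shot.size, output=png_path)
            con.execute("INSERT INTO snapshots VALUES (?, ?)", (taken_at, png_path))
            con.commit()
            time.sleep(interval_s)
    con.close()

if __name__ == "__main__":
    run_snapshotter()
```

The privacy worry in the story is exactly what you would expect from a loop like that: whatever happens to be on screen, from any app or browser, ends up in that local index.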
So you might be in incognito mode, but it's still going to get captured by Windows Recall. This has led some to argue that Microsoft is trying to push more people to adopt Edge as their browser of choice, because that's the browser that actually does have the filter. But whether that's the case or not, plenty of people have come forward to criticize Windows Recall. While Microsoft says the snapshots are encrypted, some cybersecurity folks worry that Windows Recall will create a new target for hackers. Imagine being able to pull snapshots off a target computer and learn about things like login credentials or credit card information, that kind of thing. Now, a lot of sites mask that stuff, but some don't, and so there's a real worry that Windows Recall will become a security and privacy vulnerability that will just encourage more hacking attacks. Some critics have even wondered what the use case is for this tool in the first place. I mean, you can use it to search through past activity, but to what end? Richard Speed of The Register also points out that this feature is likely going to run into compliance issues with the EU's GDPR. So will Microsoft walk this feature back, never to speak of it again? It wouldn't surprise me, but we'll have to wait and see.

X, aka Twitter, has made a change, and it doesn't have anything to do with AI, so we're off the AI stuff now. You will no longer be able to see which posts someone else has liked. You will still be able to see which posts you have liked, and you'll be able to see who has liked your posts, but you won't be able to see what old Jim Bob over there has hit like on, and Jim Bob's not going to be able to see what you've hit like on. So why make this change? Well, according to the director of engineering at X, quote, public likes are incentivizing the wrong behavior.
For example, many people feel discouraged from liking content that might be edgy in fear of retaliation from trolls, or to protect their public image, end quote. And I can see how that could be helpful. If I were still on X, and if I were using my account to, say, like posts made by activists in the LGBTQ community, I might prefer it if trolls who just want to harass people didn't see that I was supporting that, although I think public support is really helpful in those cases. On the flip side, if, let's say, you're, I don't know, a justice on the Supreme Court, as a hypothetical example, you might not want people being able to see that you've liked comments that appear to confirm a political bias one way or the other, since you're supposed to be impartial. I'm not saying that's happened, just saying that's a use case. Now, I don't think this change really means much to me personally, because I left Twitter ages ago and I have no plans to go back. I feel that Twitter has largely continued to move in a direction that is just completely in opposition to the values I have. I'm not saying my values are right, just that they're very different from the ones I see on Twitter. But for those people who still are on X, I can see how this could be a welcome change, where, you know, it's just one less thing for you to worry about getting hassled over.
Zack Whittaker of TechCrunch has a piece about how at least three Wyndham hotels in the United States have had consumer-grade spyware installed in their check-in systems, which means those check-in systems have essentially been capturing screenshots of the login process, not that different from what we were talking about with Windows Recall, and then storing these screenshots in a database that hackers can access from anywhere in the world. That means hackers can comb through these screenshots to get personal information about guests, including things like potentially credit card information, or at least partial credit card information. The spyware is called pcTattletale, and it's usually marketed as a way to keep an eye on someone like, you know, your husband or wife or your kids or partner, because you don't trust them and you want to see what they've been getting up to. You know, healthy, wholesome stuff like that. TechCrunch reports that a flaw in the app makes it possible for anyone in the world to access these screenshots if they know how to exploit the flaw, and that despite a security researcher trying to contact the developers behind this app, there's been no response to their inquiries and the flaw has remained in place. TechCrunch chose not to reveal the specifics about these three hotels in order to prevent retaliation against employees of those hotels, who may not be at fault, because we don't know why the spyware was on the computers in the first place. It could be that there was a manager who was just trying to make sure that their employees weren't goofing off while on the job. It could be that hackers used social engineering to trick staff into installing something on their computers that definitely shouldn't be there. We don't know.

The Pentagon has revealed that Russia has launched something into space that's in the same orbit as a US government satellite, and further, that this something is likely a counterspace weapon of some sort.
The presumption is that this Russian spacecraft has the ability to attack satellites in low Earth orbit. So, you know, fun stuff.

Okay, let's end with a fun story. This one hits me right in the nostalgia. When I was a kid, I had an Atari twenty six hundred game console, and then much later, after the video game crash of nineteen eighty three, I got a cousin's old Intellivision system and a dozen or so games, plus a dozen or so controller overlays, only some of which corresponded with the actual games I had. So I felt like I had the best of both worlds, despite the fact that by that time the Nintendo Entertainment System was out and was undeniably the superior game console. Anyway, this week Atari, which I should add is not really the same company as the one that was around in the late seventies and early eighties, announced that it had acquired the Intellivision brand and a bunch of Intellivision games from a company that, up until now, was called Intellivision Entertainment LLC. For several years, Intellivision Entertainment, which has also gone through major changes in ownership and isn't really the same company anymore, has been trying to release a home video game console called the Amico. Atari did not purchase the rights to the Amico, so the company Intellivision Entertainment LLC will continue, but it will change its name. I don't know what it's changing its name to yet, but it's going to change its name. The Intellivision brand and all that stuff is what goes over to Atari, and Atari will have purchased the legacy Intellivision system and games, so we should see that incorporated in some way in the not-too-distant future.

I have a couple of articles of suggested reading for y'all before I sign off. First up is Mike Masnick's piece on Techdirt. It's titled "The Plan to Sunset Section 230 Is About a Rogue Congress Taking the Internet Hostage If It Doesn't Get Its Way." I did an episode about Section two thirty back in December twenty twenty.
It's a piece of legislation that was drafted to give protection to Internet platforms in order to allow the Internet to grow, but now Congress is debating sunsetting that protection by the end of next year. Masnick's piece explains why that would be a very bad thing. The other suggested article I have is on The Verge, by Lauren Feiner and Jess Weatherbed, and it's titled "The US government is trying to break up Live Nation-Ticketmaster." The piece explains how Live Nation has created an insular ecosystem that is reportedly anticompetitive and locks artists and venues into using Live Nation and Ticketmaster systems, and how the US government is possibly going to bring that to an end. That could be welcome news to all y'all out there who are sick of paying so-called convenience fees that are almost as much as the show ticket price itself. I count myself among you.

It is Memorial Day weekend here in the United States. There will be rerun episodes on Monday and Wednesday, because I'll be out of town. There'll be a new episode of TechStuff next Friday. I hope all of you celebrating Memorial Day have a safe and happy holiday. I hope everyone else out there has a great weekend, and I'll talk to you again really soon.

TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.