Speaker 1: Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? It's time for the tech news for the week that ends on Friday, October twenty-fifth, twenty twenty-four, and there are quite a few pieces this week about OpenAI.

Speaker 1: Kylie Robison of The Verge has an informative article titled "Departing OpenAI leader says no company is ready for AGI," and that's a really good start for our OpenAI discussion. So, first of all, AGI stands for artificial general intelligence. It's an example of a term that on the surface sounds fairly straightforward, but when you start to get into the weeds, you find out it's actually really vague and poorly defined. Generally, AGI refers to a computer system that's capable of processing information in a way that's similar to how we humans think. That's the most vague way of defining it. Now, AGI does not necessarily mean superhuman intelligence, though that's typically where the conversation ends up going. An AGI system could theoretically be pretty darn stupid, in fact, but still process information in a way that mimics how we humans do it. Anyway, it's hard to define AGI because it's hard to just define regular old intelligence, like how do you do that before you even get into artificial intelligence?

Speaker 1: But recently OpenAI dissolved its AGI Readiness team, and that in turn follows a decision the company made earlier this year to nix its Superalignment team. Both of these teams focused on ways to develop AI responsibly and to identify and mitigate potential risks relating to AI. Turns out that's not necessarily a revenue driver, right? Putting limitations on your development process can be a real drag. Sure, those limitations might be there in order to prevent AI implementations from doing really bad stuff, but come on, what are the odds that's going to happen?
Speaker 1: Oh, we don't know the odds? We actually don't know how likely it is that AI could mess things up? Well, then that's just as good as knowing there's no chance that things can go wrong, am I right? Just a note: I am not right. Anyway, snark aside, this move is yet another nail in the coffin of the original mission for OpenAI.

Speaker 1: Now, you might recall that way back when it was first founded, OpenAI was just a simple, humble, little country nonprofit company with the mission to conduct AI research in a way that stood to benefit humanity the most with the least likely potential for bad consequences. I don't know why I decided to go to simple country lawyer mode for that, but anyway, later on, OpenAI spun off a so-called capped-profit company, and this company would in turn still be governed by the larger nonprofit organization. It's just that the for-profit company would be operated for profit to help fund the research. And it didn't take very long for the for-profit arm to eclipse the nonprofit side, and the for-profit side essentially shed itself of the shackles of restrictions and caution. Why? Well, the big reason is that AI, as I have said many times, is wicked expensive. OpenAI churns through money at a rate that is hard for me to conceive. We're talking billions of dollars per year spent just to run operations, and I think they were on track to lose like five billion dollars this year before they ended up having a massive fundraising round. Nonprofits are just incapable of raising enough money fast enough to meet that kind of demand. Like, no one's going to pour money into a drain like that over and over, you know, perpetually. So again and again, OpenAI has chosen the pathways that are most likely to lead to heaps of cash while downplaying the concerns of people within and outside the company about the potentially dangerous decisions being made.
Speaker 1: In Robison's piece in The Verge, she covers how the former senior advisor for AGI Readiness, Miles Brundage, warns that no company in the world, including OpenAI, is prepared for AGI. Further, he warned that governments around the world really need access to unbiased experts in the field of AI while they are actually formulating policies and regulations, because as it stands, the people who are extremely passionate about guiding policy tend to be folks like Sam Altman, the CEO of OpenAI, and as you might imagine, these people are not unbiased. It really creates a conflict of interest, right? It seems like a pretty safe bet that Altman's guidance to leaders around the world will focus on policies that will help, or at the very least not hinder, OpenAI, while potentially having a larger impact on the company's competition. So Brundage is essentially saying that the fact that OpenAI is making these decisions should cause some concern, and leaders around the world need to be savvy when they are consulting with various experts, making sure that the advice they receive isn't being guided by self-interest.

Speaker 1: Another good read is an opinion piece from John Herrman of Intelligencer. I guess you could argue that it's not really an opinion piece, but I feel like Herrman injects a lot of his own opinions in it, and I am pretty much on the same page as Herrman. Well, anyway, the article is titled "Microsoft Has an OpenAI Problem." Now, I mentioned in an earlier TechStuff news episode that Microsoft seems to have had kind of a rethink when it comes to its relationship with OpenAI. In Facebook terms, I think we would say that the current relationship status would be "it's complicated."
Speaker 1: So Microsoft has invested billions of dollars, like nearly fourteen billion bucks, in OpenAI so far, and Herrman points out that the agreements between the two companies happened relatively quickly and without a whole lot of thought as to how that process of funding would actually work, which seems pretty loosey-goosey to me. And now Microsoft and OpenAI's partnership hinges on whether OpenAI develops AGI. So essentially the agreement is that this partnership between the two companies will dissolve if OpenAI creates an AGI product of some sort, allegedly because OpenAI has a real, deep concern that a company like Microsoft might misuse such a powerful and potentially destructive tool as artificial general intelligence. And sure, I think that's possible. Like, I think if Microsoft had access to AGI, there could be some pretty negative consequences. But it's not like I have an enormous amount of faith in OpenAI either. It's not like I look at them and think, oh no, I want them to have the keys. I don't want anyone to have the keys. I don't want it to be a thing. But I guess that's not an option. Anyway, when you're talking about a company like OpenAI, which has actively been dismantling the systems it put in place to ensure safe development of artificial intelligence, you can't really be advocating that they're the ones who should be, you know, the stewards of AGI.

Speaker 1: Herrman cites a Times article that points out that OpenAI's board of directors actually has the authority to determine what AGI is and when, if ever, OpenAI achieves it. So, at least hypothetically, even if outside parties would disagree with the definition that OpenAI's board comes up with, that wouldn't matter. If OpenAI said, oh, we did it, we created AGI, then even if literally no one else in the world said that's definitely AGI, it would be enough for OpenAI to sever its relationship with Microsoft.
Speaker 1: Now, the implication here is that this isn't really some sort of protection for OpenAI to make sure that AGI is not unleashed upon the world in, like, the next build of Windows or something. It's really more about creating kind of a switch, a kill switch, a bargaining tool with Microsoft, so that when it comes to renegotiating with Microsoft, OpenAI is able to essentially say, hey, if you don't give us more money, we'll say we made AGI, and then that's it, we walk, and you can't do anything about it. So that's a possibility. I don't know if it's a reality, but it's possible. It's almost enough to make one jaded, isn't it? Anyway, the article's well worth a read. Again, it's "Microsoft Has an OpenAI Problem," and it's in Intelligencer by John Herrman. Okay, I've got a lot more news to go through, but first, let's take a quick break to thank our sponsors.

Speaker 1: All right, we're back, and David E. Sanger of The New York Times has a piece titled "Biden Administration Outlines Government Guardrails for AI Tools," and that brings us back around to those government-backed policies relating to the development and deployment of artificial intelligence. So essentially, these guardrails lay out the scenarios in which using AI would be appropriate and allowed, and the scenarios in which it really, absolutely should not be allowed. So, for example, using AI to detect and deter cybersecurity threats? That's pretty okay. Using AI to develop a new generation of fully autonomous weaponry? That one falls into the no-no category. The guardrails put a lot of responsibility on individual departments and agencies to conduct their own reviews and determine in which use cases AI might be appropriate and in which ones it would not be. There's a lot more to the guardrails; there's like thirty-eight pages of material that's available for the public to read.
Speaker 1: It includes a call to have the US attract more AI experts to the United States rather than have them work for a rival nation like China or Russia. But at least some of the information about these guardrails is classified, so I have no idea what could be in those parts. Also, as Sanger points out, the efficacy of these decisions is somewhat unknown, since the United States is currently in an election year and the next president might just make all these guardrails go away, so we don't really know if this is going to matter at all in the long run. Fun times.

Speaker 1: The Attorney General for the state of Montana has filed a new lawsuit against TikTok. It's a new lawsuit, but it's an old tune. They're accusing the company of purposefully promoting addictive and harmful content to young users through the recommendation algorithm. The Attorney General alleges that TikTok essentially has lied about the nature of promoted content in violation of the Montana Consumer Protection Act. So essentially, the Attorney General says that TikTok knew that users, including young users, would encounter content that's just unsuitable for people who could be as young as thirteen years old, but pretended that it had safeguards in place to prevent such young users from encountering, you know, mature or disturbing content. An unnamed spokesperson for TikTok disputes the allegations and says they are, quote, inaccurate and misleading, end quote. You might recall that previously the state of Montana actually banned TikTok entirely, but then a judge later overturned that ban on the grounds that it violated the First Amendment, that's the right to free speech. Several other states have filed similar lawsuits against TikTok with similar accusations. Essentially, it boils down to TikTok's algorithm promoting inappropriate and sometimes outright dangerous content to impressionable young users. Meanwhile, of course, TikTok is facing down a, well, a ticking-clock situation here in the United States.
Speaker 1: Currently, the app faces the possibility of a nationwide ban unless its Chinese parent company, ByteDance, divests itself of TikTok. So it might be that by this time next year, lawsuits against TikTok, at least here in the United States, will no longer really be relevant, or at least not as straightforward.

Speaker 1: Speaking of bans and social media, Norway is increasing the minimum age allowed for folks to use social media. Previously the minimum age in Norway was thirteen, but now it's going to be increased to fifteen years old out of a concern that social media can have a harmful impact on young users. Of course, making that law is one thing, but enforcing it is entirely something else. I mean, as Miranda Bryant of The Guardian reports, the Norwegian Media Authority found that more than half of nine-year-olds in Norway are already on social media, and as you go up in age, the percentage increases. So already there are tons of kids in Norway who are actually using social media when they're underage, like they're below the age of thirteen, even though the law says they ain't supposed to. So apparently part of the plan is to change the way that social media companies verify the age of users in Norway, so that it's not a trivial task to just fib about one's own age. Admittedly, that's a pretty low bar. I mean, if I were a kid and I was trying to make an account on a social media platform, and the age verification was just asking me, hey, when's your birthday, I could probably do some pretty quick math and fudge that date enough so that it would let me in. Anyway, while I have my doubts about how effective this new policy is going to be, I absolutely do feel empathy for those who want to shield kids from the ravages of the algorithm, because there are days when the various recommendation algorithms that I encounter make me want to go and hide in the woods, possibly in Norway.
Speaker 1: Allison Morrow of CNN has a piece titled "Almost anything goes on social media as long as it doesn't make billionaires feel even a little bit unsafe." And yeah, billionaires live by different rules than everybody else. In fact, they often are the ones who decide what the rules are. So while people like Elon Musk are extremely vocal about the importance and sanctity of free speech, they're also quick to say "no, not like that" if the free speech includes stuff that is potentially harmful to them. So the case Morrow is focusing on is that of Jack Sweeney. He's a college student who has created numerous social media profiles on different platforms that are dedicated to tracking specific billionaires' private jet routes, which, by the way, is publicly available data. Sweeney is not hacking into the mainframe or anything like that. All he's doing is scraping data from public sources and posting it to social media. Anyone could do this. But various platforms have removed Sweeney's accounts, often without giving him any forewarning or explanation, and when asked, they usually give a pretty hand-wavy explanation that the information poses a risk to certain individuals. I mean, if you happen to know that Elon Musk flew into your town, you might track him down and assault him or something, or maybe, you know, just yell at him because your Cybertruck don't go no mo'. I guess that's the fear. But, like, how do you know where the person specifically is? You just know where their jet landed. Anyway, Morrow points out that the various platforms seem to have very little problem with hosting stuff that could be extremely harmful to the rest of us, whether that's misinformation or hate speech or whatever. That stuff, well, that stuff's not likely to touch the billionaires, so there's no reason to worry about it. Let that stay up on those profiles. We all just have to fend for ourselves. But publicly available information about billionaires? Now you're talking about dangerous information. Anyway, read the article.
Speaker 1: You can already tell what my opinion is about all this. I'll just quote Morrow directly for this little bit, because you know we're aligned on this. Quote: "It's just not clear that Meta cares as much about its users' privacy and well-being as it does about Zuckerberg's." End quote. Preach, Morrow.

Speaker 1: The US Federal Trade Commission created new rules this past summer that give the agency the power to fine people who post, buy, or sell fake reviews online. I'm sure you're aware that fake reviews are a real problem. Once upon a time, I think the average person could go to an online marketplace like Amazon and lean on user reviews when making purchasing decisions. But over time, companies have found ways to incentivize folks to write good reviews. Sometimes it's fairly innocent. You know, you might have a little card in the box that your product came in saying something like, hey, if you like this, please consider writing a review. That seems pretty innocuous, but it can quickly get more murky, like you might get one that says, hey, if you write a five-star review, we'll give you a coupon for some other product or service. And sometimes it just gets downright, overtly wrong, like, write a positive review for this thing that you might not even use or own, and we will pay you money. Well, the new rules that the FTC passed are now in effect, and essentially, if you are caught trading in fake reviews, you could face a fine of up to fifty-one thousand, seven hundred and forty-four bucks, and that includes AI-generated reviews. If they find that you've been doing that, you could be paying some pretty hefty cash. Now, on top of that, companies are not allowed to suppress negative reviews. They're also not allowed to promote positive reviews that they suspect are fake. Now, how the FTC plans to go about proving that a company was aware that a review was fake, I'm sure that'll be challenging on a case-by-case basis. But the rule is a hefty piece of writing.
Speaker 1: It is one hundred and sixty-three pages long, and no, I have not read the whole thing, so I'm not prepared to give my full opinion as to whether this is a good idea or a bad idea, or a good response or an ineffective response. I need to be able to read the whole thing before I come up with that. So it looks like I've got my reading material for my upcoming flights. Yay. Okay, I've got a couple of articles I want to recommend for your reading pleasure. One is a Jon Brodkin article on Ars Technica, and it's titled "Cable companies ask Fifth Circuit to block FTC's click-to-cancel rule." I think that's the least surprising headline I've ever read. Obviously, cable companies don't want there to be a straightforward click-to-cancel route for users. But yeah, read the article, because it explains what the situation is and how the specific courts that were chosen could end up having a massive impact on this potential rule. And finally, I have one other reading recommendation. It's an article by John Walker of Kotaku. It is titled "Roblox's new child safety changes reveal how dangerous it's been for years." It's an eye-opening piece that applauds these safety changes, but it does ask, hey, how is it that this wasn't already a thing? So check that out as well. I hope all you out there are doing well, and I will talk to you again really soon.

Speaker 1: TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.