Speaker 1: Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio. And how the tech are you? It's time for the tech news for Tuesday, March twenty-first, twenty twenty three. First up, Google's bug hunting team found eighteen zero day vulnerabilities in Samsung hardware, specifically the Exynos chipsets. That's E-X-Y-N-O-S. So zero day vulnerabilities, as the name suggests, are vulnerabilities that the vendor doesn't yet know about, meaning there have been zero days to develop and release a fix, and potentially they are actively being exploited by hackers. And the Exynos chipsets are Arm-based system-on-a-chip designs, and they serve as the brains of stuff like certain smartphones and wearables and even some car systems. The team has been uncovering these vulnerabilities since late twenty twenty two, so it's not like these are brand new discoveries in all cases, but it has been ongoing, and at least four of them are considered severe vulnerabilities, potentially allowing an attacker to target any affected device as long as that attacker knows the phone number associated with the Exynos modem in that device. So if someone happened to own a smartphone with one of these chips as its processor, an attacker could potentially perform remote code execution, or RCE, on that phone just by knowing that phone number. So you wouldn't even have to be tricked into doing anything, right? A lot of vulnerabilities we hear about involve someone taking an action that they shouldn't have. They get a prompt, they respond to it, and that in turn creates the opportunity for an attacker to compromise a device. In this case, it literally just means that the attacker has to have the phone number of the target device. This is a massive security flaw, obviously, because then the attacker could force the victim's phone to run all sorts of different types of code.
Speaker 1: They could have the phone execute code that turns it into an espionage device, similar to the Pegasus stuff we talked about with iPhones a couple of years ago, or maybe they could lock the phone in a sort of ransomware attack. Now, as for the other fourteen flaws, because there were eighteen total and four of them were super extreme, the other fourteen are also bad news, but they don't pose the same critical risk as the four that I'm alluding to. Now, patches are going out in security updates, but patches are things that different companies roll out at different scales and on different timetables, so not all of the affected devices are necessarily able to be patched right now. That's a really serious issue. Companies like Google have rolled out patches and security updates already, so as long as you're up to date on your security patches, you have presumably blocked off this potential attack vector. But the proposed workaround for those who are still waiting to get a patch is pretty extreme. It essentially involves shutting down both the Wi-Fi calling and voice over LTE functions of your phone, so essentially it means removing the phone from your phone. And I get it, a lot of y'all never use a phone as a phone anyway, right? Like, a voice call is probably the last thing you would use your phone for. But it does strike me as particularly weird to remove the phone part from a phone. Anyway, some of the devices affected include the Pixel six and Pixel seven from Google. A whole bunch of different Samsung mobile devices are affected, which isn't a surprise. They include the S twenty two, the M twelve, the M thirteen, the M thirty three, and a whole bunch of devices that are in the A series of Samsung mobile devices, not all A series, but a lot of them. Plus any vehicles that have an Exynos Auto T5123 chipset or any wearables that have the Exynos W920 chipset are affected by this vulnerability, which is a big old yikes.
Speaker 1: By the way, this is a good reminder to install legitimate security updates when you get them, because security updates can address critical vulnerabilities like this one and prevent you from becoming a victim in the future. The only thing is that you should verify that a security update you receive is real first. So, like, do just the bare minimum research to make sure that, yes, whatever company it is has actually issued a security update. So if you are using, say, a Google Pixel device, just make sure that Google did in fact issue a security update. Just do a quick search to make sure that is a thing that has happened, and then, yeah, install that as soon as you can, because it can potentially cut off attack vectors. Sometimes these updates can also affect other things on a device, which is a pain in the butt, but it's still better than having your phone compromised. So, you know, just a word out there. Google has recently banned a China-based app from the Google Play Store. It is not TikTok, though we will talk about the besieged video-based social network a little later in this episode. No, it's Pinduoduo. Now, essentially, this app matches consumers with merchants for various products. It started out as an agricultural thing and went on to stuff like groceries, and now it's pretty much everything. So why is Google removing the app? Well, according to Google, the most recent version of the app contains malware of some sort, so Google suspended the app out of a concern for security. Reuters reached out to Pinduoduo and received a response claiming that Google had alerted the company that the app had been suspended, but that Google did not give any details as to why the app was suspended. Anyway, this isn't the first time that Pinduoduo has been in legal hot water here in the United States. One of the major issues that is often connected to it involves the counterfeiting of various goods.
Speaker 1: So essentially, the claim is that Pinduoduo tends to match folks up with merchants who sell knockoff versions of designer or luxury goods. That is a big business in China as well, just creating counterfeit goods that mimic high-end designer goods. It's a big problem in the designer goods industry. I can't pretend like I fully understand it, because I'm basic. I don't do, like, the whole designer goods thing, because I have no class. In other Google news, Reuters also reports that Google reps have claimed it was just an accident that the company deleted certain communication records, records that could have been used as evidence against it in an upcoming trial in which the US Justice Department is bringing antitrust charges against Google. So essentially, the Department of Justice is saying, hey, there's all these records that we potentially should have had access to, and you're deleting them. It's essentially like you're shredding documents because you anticipate a raid. And Google is saying, no, we didn't mean to delete anything. We were actually taking reasonable steps to preserve everything, but some stuff slipped through the cracks. Now, the heart of the lawsuit involves claims from the DOJ and most of the states here in the US that Google has been using anti-competitive strategies to prevent any other company from getting a foothold in the search business. The DOJ suspects that Google has denied access to or deleted documents that would lend credence to its case against the company, and Google says, ah, that's ridiculous, we've already shared millions of documents with you. Now, I don't have eyes on this one. I have no idea if Google's claims are with or without merit. I don't know if Google, you know, intentionally deleted files that would have been damaging to the company's case, or if this truly was just something that happened without any malice involved. I will say it's not a good look. I mean, anytime documents go missing, that isn't exactly a ringing endorsement of your claims of being innocent.
Speaker 1: But yeah, I don't know one way or the other. TechCrunch has a piece titled Facebook political microtargeting at center of GDPR complaints in Germany. So GDPR, if you are not in the know, is a set of regulations centered around protecting EU citizens' personal information. And if you're a digital company and you want to do business in the EU, or have any sort of business practices within the European Union, complying with GDPR is a really big deal. In fact, there are entire consulting companies out there designed to help other companies make sure that their business practices are in alignment with GDPR, because if you don't comply with it, then your company can get hit with massive fines or even be denied the ability to do business within the European Union, which would be a huge deal, right? So in this case, what has happened is that Facebook has used microtargeting to direct political advertisements to German citizens without first getting their express consent to make such use of their personal data. And the EU assigns different levels of importance to different kinds of personal information, so things like your political leanings, that's way, way, way up there. That's highly protected information. So there is a high bar to clear if you are going to make use of that personal information in any way. And according to the complaints, Meta slash Facebook totally failed to do that. Like, they should have gotten the express consent of each and every person whose information they used to sell microtargeted ads, and they didn't do that. And arguably just as bad, if not worse, is that every single political party in Germany took advantage of Meta's disregard for the GDPR regulations. So whether it was a far-right political party, far-left, or anything in between, all of them used microtargeting to deliver advertising messages, including messages that contradicted each other from the same party, just based on the microtargeting of individuals.
Speaker 1: So, for example, one party might have sent out a message to someone who was identified as being very pro climate change regulation, saying, we're the party that is pursuing climate protection over everything else, so believe in us. But that same party might have identified someone else as being more in tune with personal freedom over regulation, and told them, we believe in the climate just like you do, but we also believe you should be able to travel wherever you want, whenever you want, so we're about protecting the climate without infringing on personal liberty. These two messages cannot coexist from the same party, right? You cannot stay true to both simultaneously. If you're telling one person we're going to protect the climate at all costs, and you're telling someone else we'll protect the climate, but we're not gonna, you know, we're not going to cramp your style, those are two messages that cannot be true at the same time. And the point is that if the same party is delivering these two contradictory messages, they are lying to someone, at least one person but possibly everybody, and ultimately it undermines the democratic process, because you cannot trust the messages coming out of a political party if those messages contradict one another. It destroys democracy itself.
Speaker 1: Fun times. Anyway, an advocacy group, noyb, has really cranked up the pressure here, and I would argue for a very good reason, in that noyb is putting pressure on EU officials to enforce the rules under GDPR. Like, they've created these rules, and noyb is saying, clearly these rules are not being followed, so it's your job to get in here and actually enforce them, and to require things like explicit consent before any of this kind of data can be used for microtargeted advertising, as well as, you know, putting some pressure on the political party system within Germany to obey the laws that these political parties are actually, you know, supposed to uphold in the first place. So yeah, it's a pretty big exposé when you look at it in those terms, and I'm very curious where it goes from here. Okay, we've got some more news to cover. Before we get to that, let's take a quick break. Russia is gearing up for its own presidential election that will happen in twenty twenty four, and now election officials in Russia have a new directive: ditch the iPhone. Reuters has reported that the Kremlin has ordered all officials connected with the twenty twenty four Russian presidential election, which I am sure will be absolutely straightforward and fair, to stop using Apple iPhones, reportedly due to concern that Western intelligence agencies could be working with Apple, or maybe even without Apple's knowledge and participation, to leverage iPhones in order to gather intelligence. I mean, this could sound familiar, because we certainly have worried about Russia doing the exact same sort of thing, not through the iPhone, but through various means, to inject misinformation and to affect elections across the United States, to undermine the democratic process. So Russian officials are kind of worried that the same thing's going to happen in Russia. Now, I don't have a lot of faith in the fairness of the Russian election system, if I'm being totally honest, but I also think that perhaps the officials have a point.
Speaker 1: Now, I don't think that Apple would work so closely with US authorities on this. I don't think Apple would comply and allow their phones to be turned into surveillance devices. Apple has a history of resisting US government efforts to leverage Apple's abilities to do this kind of thing. Apple famously resisted the FBI's efforts to create a backdoor entry point for Apple's systems, saying that, no, that compromises user security and privacy, and it compromises our reputation with our customers. So I don't think Apple would necessarily be on board for any official attempt to surveil a foreign election process. However, we have seen companies, surveillance companies, creating apps that exploit vulnerabilities in iOS devices in the past. You know, we saw that with an Israeli company creating the Pegasus spyware that would turn an iPhone into a spy device just by sending a message through iMessage. Apple has since patched out that vulnerability, but the point is we have seen that in the past. And really, anytime you're talking about relying upon a device that's going to send data out of the country you're in to another country that might be, if not outright hostile, at least wary of the one you live in, there are concerns, right? Because if you don't have full control of the information chain, then there are worries about how that could affect national security. So while I don't think everything's on the up and up in Russia, necessarily, I do also understand the concern about officials relying upon technology that is created in a country that at least is wary of Russia, if not outright hostile at times. So yeah, I get the security concern. It's not really a surprise. I don't even think it's really an indictment against Apple, because again, Apple does not have a reputation of working alongside the US government to that degree.
Speaker 1: But allegedly all election officials will need to be free of iPhones by April first. Stanford has introduced Alpaca AI, which, according to researchers, performs on a level similar to GPT three point five, the previous generation of OpenAI's language model. But the surprising thing is that these researchers spent about six hundred dollars developing their Alpaca AI. You know, OpenAI has had billions of dollars of investment poured into it, and apparently these Stanford researchers were able to achieve an outcome similar to the previous generation of GPT for just six hundred bucks. So they started with Meta's open source LLaMA 7B language model. This is Meta's own large language model. It is not the same one that OpenAI uses, and Meta allows academics to request access to these models. The 7B model is the smallest of the LLaMA models and the cheapest to work with, and that's the one the researchers went with, so they saved some money by going with that. Then they used OpenAI's GPT to generate sample conversations, around fifty two thousand of them, to fine-tune their own AI model. So they made use of OpenAI's official GPT tools to do that, which cost a few hundred bucks. Then they made some tweaks and some adjustments, and they ended up with an AI language model that can perform really well against GPT itself, like almost at the same level. I think in something like one hundred and eighty tests, ninety showed their AI model outperforming GPT, and in eighty nine cases GPT outperformed the AI model, so they're essentially neck and neck. As the website New Atlas points out, the research shows that if you have the right tools at your disposal, you can potentially create your own large language model and then build stuff on top of that, like chatbots and whatnot. And while Meta limits access to LLaMA to academics and such, the problem is someone leaked the model online.
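To make that recipe a little more concrete, here is a minimal, hypothetical sketch of the general two-stage approach described above: have a larger commercial model generate instruction-and-response pairs, then fine-tune a small open model on them. This is not the Stanford team's actual code; the teacher model, the tiny stand-in base model, the prompt format, and the training settings are all illustrative assumptions.

# A rough sketch (not the Alpaca authors' code) of the two-stage recipe:
# 1) have a larger "teacher" model generate instruction/response pairs,
# 2) fine-tune a small open causal language model on those pairs.
# Model names, prompts, and hyperparameters below are illustrative assumptions.

from openai import OpenAI                      # pip install openai
from datasets import Dataset                   # pip install datasets
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Stage 1: generate a tiny synthetic instruction dataset (Alpaca used ~52,000 examples).
seed_tasks = [
    "Explain what a zero-day vulnerability is in one paragraph.",
    "Summarize why installing security patches on a phone matters.",
]

def generate_example(task: str) -> dict:
    """Ask the teacher model to answer a task; the pair becomes training data."""
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for whichever teacher model is used
        messages=[{"role": "user", "content": task}],
    )
    return {"instruction": task, "output": reply.choices[0].message.content}

records = [generate_example(t) for t in seed_tasks]

# Stage 2: fine-tune a small open model on those pairs.
base = "EleutherAI/pythia-70m"  # tiny placeholder standing in for LLaMA 7B
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

def to_features(rec):
    # Pack instruction and response into one prompt; real recipes also mask
    # the prompt and padding tokens out of the loss, which is skipped here.
    text = f"### Instruction:\n{rec['instruction']}\n### Response:\n{rec['output']}"
    enc = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()  # standard causal LM objective
    return enc

train_ds = Dataset.from_list(records).map(to_features)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="alpaca-style-sketch",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=train_ds,
)
trainer.train()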
Speaker 1: So while you officially have to be an academic to get access to LLaMA, if you're the unethical type, you can just grab the model online. It is available out there, and you can start to develop your own language model and your own work built on top of that. This means we could be at the beginning of an era of chatbot explosions and AI applications, all powered by similar but independent language models. It's entirely possible that some, or even many, of these will lack the restrictions that we see OpenAI putting on top of its own products like ChatGPT. You know, OpenAI works hard to try and limit things like, you know, racist or misogynistic or transphobic language from being generated through its tools, and, you know, obviously we've also seen issues with the reliability of information generated through tools like ChatGPT. So the fear is that these new tools, which could be pretty easy to make yourself based upon this research project, won't have those safeguards in place, and that we could see a lot more misinformation or abuse of such systems, maybe using systems to try and trick people with phishing attacks. All sorts of different tactics that could be harmful could be coming down the road, because we've already seen the barrier to entry is not that high, and if you're able to figure out a way to exploit that and make a whole lot of money, maybe you'd be willing to take that unethical route and create a whole new generation of victims out there. Oh brave new world, to have such chatbots in it. And on the topic of ChatGPT, recently a bug started showing users the title and short description of ChatGPT sessions that were not their own. You would look at your history and start seeing stuff that you definitely did not do. So users were having the titles and short descriptions of their chat sessions shared with other users without their consent, and this was not by design.
Speaker 1: This is a good time to remind everyone that if you are using ChatGPT, you should not share sensitive information in ChatGPT conversations, because you never know when those conversations could become public. So, if you were using ChatGPT to, I don't know, maybe draft a resignation letter, but maybe you aren't seriously considering leaving your job, you were just kind of toying with the idea, and you have ChatGPT draft a resignation letter on your behalf, well, that could end up being awfully awkward if your chat session title and description then gets shared with somebody else without your permission, and now people have the impression that you are leaving your job. That could create all sorts of complications in your life that you probably don't want to have to deal with. That's just one example. What if you were drafting, I don't know, a love letter to your crush? I mean, like, there's all sorts of things that you could have been using ChatGPT to do, even just for fun, that you probably don't want people to see. So in my case, if people were to see my history with ChatGPT, they would probably see how I was trying to get ChatGPT to compose a Shakespearean sonnet about AI. Or there was one time when I asked it to create an episode of TechStuff. I was just curious what would happen, and I told it to create an episode of TechStuff about ChatGPT, and I had already done an episode about ChatGPT, so it's not like I was trying to get out of doing my work, y'all. That's not what this was about. I was just curious what would happen. Like, would it actually sound like one of my episodes? The answer is no, it did not sound anything like one of my episodes. It created an interview with a fake ChatGPT representative.
Speaker 1: So if I had actually recorded the episode, I would have had to invent a person who supposedly worked at OpenAI who would be talking about ChatGPT in an authoritative way, which seems unethical at the very least. But anyway, OpenAI shut down ChatGPT for a short while yesterday evening, and they took the chat history offline for a while. They are reportedly starting to roll that back into service, but it might be a while before all users are actually able to access their chat histories, so it may be a while before you're able to look back on, you know, the stuff you have been talking about with ChatGPT in the past. The Verge's Tom Warren has a great article titled Microsoft's new Copilot will change Office documents forever. This, again, kind of falls in line with AI and chatbots. So Tom Warren was in a Microsoft Teams meeting with Microsoft reps to talk about Copilot. That's Microsoft's AI feature that's going to be incorporated into all Office products moving forward. During the meeting, the reps turned Copilot on to do stuff like take notes about the Microsoft Teams meeting. At the conclusion of the meeting, Warren actually received those notes, which included a recap of all the questions he had asked during the interview. And it sounds like Copilot can help users take advantage of Office products as if the user were an elite user. Maybe you are an elite user in one or two Office products, or maybe all of them, or maybe you know someone who is, like someone who just knows all the different tricks to create the most snazzy and effective presentation, or they're able to create really helpful graphs and charts out of Excel sheets. Like, there's an art to it. You know, you can learn the basic steps and do a fine job, but to do something really effective requires a little more work.
Speaker 1: And you know, it sounds to me like Copilot can actually help you achieve those things, and not only help you do them, but also teach you how to do them yourself. So it's not just like a magic wand that makes things better, but a tutorial tool that can help you learn how to take advantage of features that maybe you never even knew existed. It's kind of like the old Microsoft Clippy, but on steroids. Warren also mentions that the reps from Microsoft acknowledge that the tool is good, but it's not perfect, that it can make mistakes, it can generate stuff that isn't appropriate, and that human review and interaction is still very much a needed part of the process. You can't just step back and let Office do all the work for you. This is a collaborative approach to work, not one where AI takes over for you. I recommend you pop on over to The Verge and read the piece to learn more. I do think the messaging from Microsoft is pretty good here, that these tools are meant to augment, but not replace, your own skills. That is the approach to AI that I find the most intriguing, and certainly far less concerning than the doomsday scenarios we often hear about how AI is going to take all of our jobs. The Verge also has a piece titled Text-to-video AI inches closer as startup Runway announces new model, and yeah, the article is pretty much what it sounds like. Just as DALL-E and Midjourney can generate images based on text inputs, Runway can generate short snippets of video, and soon Runway Gen-2 will release and show the progress of this technology. It is not a high-fidelity or photorealistic experience. So far, the video I've seen has been fairly low resolution and has elements in it that don't look real, so you can tell it's, like, computer generated. But the demonstrations are pretty fascinating.
Speaker 1: You just type in a simple description of what you want to see, and, you know, how descriptive you are, both about the scene you want to see and the kind of camera work you want, will affect the clip that Runway generates. Runway also offers a suite of AI-powered video editing tools that can do stuff like help remove a background from a video or stabilize a camera and all that kind of stuff. But yeah, with Gen-2, the big thing is that it's able to just create entire video clips. Obviously, AI generated video opens up new opportunities for abuse, so it's not all wine and roses here. Like, there are some real, potentially harmful uses of this technology. But the actual videos so far, again, aren't photorealistic, and they sometimes include stuff that, you know, gives it away that it's a computer generated clip. So it's not quite to the point where we can never believe anything on video ever again, but we're getting there. Okay, I've got a few more stories to cover. Before I get to that, let's take another quick break. Last week, I mentioned that after Baidu demonstrated its own Ernie chatbot, investors were initially disappointed in the company and we saw stocks fall. They recovered a bit afterward, as Baidu opened up Ernie to a small group of testers who demonstrated their own interactions with the bot, and so investors started to come around a little bit on that. And Reuters reports that the chatbot has some pretty big blind spots that are not going to surprise you in the least. Namely, if you ask the chatbot questions about, say, Xi Jinping, the president of China, then Ernie Bot will say it hasn't learned how to answer those kinds of questions. Similarly, Reuters found that if they asked about, you know, sensitive or taboo subjects, such as the events at Tiananmen Square in nineteen eighty nine, Ernie would suggest changing the subject.
Speaker 1: This is pretty much in line with how China limits and censors Internet content on politically sensitive subjects. And in the interest of fairness, I should point out that OpenAI's version of ChatGPT has its own limitations. When that model was first put out, it would not answer questions about developing news or breaking news or anything, because it only had access to information from twenty twenty and earlier, as I recall. So there were limitations on what it could talk about, but it wasn't necessarily censorship so much as it was OpenAI trying to avoid issues where their tool could be used to frame news with a particular perspective or bias. So it's not unheard of here either, I should say. Also, Microsoft's Bing incorporation of ChatGPT lifted those limitations; you could ask about developing news on that platform. While TikTok continues to argue for its own existence in the US, the company has introduced new rules about AI deepfake videos. A lot of thematic similarities in our stories today. The new rules say that synthetic and manipulated media must be clearly disclosed as such. So, in other words, if a video uses deepfake technology, it needs to make that clear in the video itself. Otherwise, it might be seen as attempting to pass itself off as the legit article, and thus will be subject to moderation, like being deleted or whatever. Further, deepfakes of private individuals are strictly off the table. You can't make a deepfake of some private citizen. That is against the rules. The only deepfakes that are allowed are of public figures, and again, they have to be disclosed as being deepfakes. And in deepfake videos, you cannot make the public figure do stuff like endorse something, because that is a violation of the rules. It is creating a perceived relationship where none actually exists. You also can't have a deepfake version of a celebrity doing stuff like calling for violence or spreading hate speech.
Speaker 1: These are against the terms and conditions of TikTok, so those would also be in violation of the rules. I think this would be an important step even if TikTok were not facing political pressure from all corners at the moment, but certainly, as there is unprecedented scrutiny directed at TikTok right now, these new rules I think are a necessity, especially in the wake of descriptions of stuff like the Runway Gen-2 tool that's coming out. Deepfakes are already a big problem. The ability for AI to replicate voices is a big problem. It's just going to get worse, so creating rules about these things is really, really important. Italy's AGCM, which is a consumer watchdog group, has launched an investigation into TikTok, alleging the platform does not do enough to stop videos that promote harmful content, ranging from videos that promote eating disorders to ones encouraging suicide. So these videos violate the terms and conditions of TikTok. It is against the rules. But what the watchdog is saying is that TikTok isn't doing enough to enforce those rules. The AGCM is focusing on TikTok Technology Limited. That's a branch of TikTok that's based in Ireland, and it's responsible for consumer relations in Europe. And the event that apparently prompted this investigation was a string of viral videos about a quote unquote French scar challenge. The fact that I had to look up what the heck that was is really a bummer. And if you're curious, essentially it involves pinching the skin along a line to create the illusion that you have a faded but permanent scar, typically on your face. But this can actually cause permanent damage itself. If you're pinching hard enough, you can actually create a permanent line on your face, and that fake scar essentially becomes a real scar. That's not super cool. The AGCM expressed concern not just about this particular trend, but about the overall problem of such content going viral without moderators stepping in to enforce the rules. Amazon is laying off another nine thousand employees.
Speaker 1: This is after a previous round of layoffs affected eighteen thousand staff, which means this year Amazon will have laid off about nine percent of its global corporate workforce. That's a lot of people, you know, twenty seven thousand people. The upcoming layoffs are expected to impact divisions like Twitch, Amazon Web Services, or AWS, which previously was one of the most profitable divisions within Amazon, and advertising departments, among others. CEO Andy Jassy cited an uncertain economy as the reason the company is holding these layoffs. It's a popular phrase to yell right now. Apparently around four hundred, actually more than four hundred, of those cuts will affect Twitch, which recently saw CEO Emmett Shear resign his position. It's hard for me to feel bad about a company as huge as Amazon, but I definitely do feel empathy for all the people affected by these layoffs, both the ones directly affected and their coworkers, who then have to figure out how to proceed without the input of the people who were just laid off. And I definitely hope all those who are directly affected by these layoffs land on their feet in this increasingly uncertain economic climate. That is a very rough position to be in, and I wish you all the best. Amazon itself is facing several government investigations, and Politico reports that some of these could lead to lawsuits brought against Amazon by the federal government of the United States. So Amazon's already involved in state level lawsuits regarding things like consumer laws and accusations of anti-competitive practices, such as promoting Amazon-owned brands over competitors when you go into search results, but these new lawsuits would be federal level charges, not state level, so it would significantly turn up the heat on Amazon. Some of the potential moves could force Amazon to divest itself of companies that it has already acquired. One of them could potentially push Amazon to divest itself of iRobot, the company that makes the Roomba vacuum cleaners.
Speaker 1: Amazon may also face charges alleging that Amazon products like the Ring security doorbell system potentially violate the Children's Online Privacy Protection Act, or COPPA, and there are other investigations that could conceivably lead to different types of lawsuits. It sounds like it's a multi-pronged attack, and a sign that the US government is taking a much harder regulatory stance when it comes to big companies in general, but specifically big tech companies. Okay, that's it. That's the news for Tuesday, March twenty-first, twenty twenty three. Hope you are all well. If you would like to reach out to me, do so on Twitter. The handle for the show is TechStuffHSW. Or you can download the iHeartRadio app. It's free to download, free to use. Type TechStuff in the little search field and you'll see the TechStuff podcast page pop up. Click into that and you'll see a little microphone icon, and if you use that, you can leave a voice message up to thirty seconds in length to let me know what you would like to hear in the future, and I'll talk to you again really soon.

Speaker 1: TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.