Speaker 1: Welcome to TechStuff, a production from iHeartRadio.

Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for the tech news for Thursday, January nineteenth, twenty twenty-three. And last year it seemed like the front half of every tech news episode was dominated by Elon Musk's on-again, off-again, and then totally on-again quest to buy Twitter. But so far this year, the honor really goes to ChatGPT and OpenAI, which is the startup that's behind the controversial chatbot.

So first up today, Sam Altman, the CEO of OpenAI, gave an interview with StrictlyVC in which he addressed some rumors about the upcoming next generation of the GPT language model, a.k.a. GPT-4. We're currently on GPT-3.5. Altman warned that people are bound to be disappointed in the new language model because the hype around it has grown to a point that's just impossible to live up to. And I think Altman in general has been pretty good about acknowledging this kind of stuff. A lot of folks, including myself, have pointed out how ChatGPT falls short in lots of ways, and I don't think Altman would disagree. He has been pretty forthcoming about this sort of stuff, and he's not making claims that ChatGPT is flawless or even skilled at every kind of writing. So I guess it's unfair for folks like myself to say it can't even write a Shakespearean sonnet, even though it can't. I mean, maybe if I kept at it, maybe if I kept tweaking prompts, I could massage ChatGPT to a point where it could produce an actual Shakespearean-style sonnet. But anyway, Altman wouldn't commit to when this next model will launch. He said it will launch when it's ready, which I also think is a pretty responsible take, and he admitted that there were concerns about how ChatGPT might be used by students to try and cheat on schoolwork.
But he also pointed out that when calculators became a thing, we didn't just abandon teaching students how to do math. Instead, we took calculators into account, and while doing so, the way we taught it started to change. That's a kind of interesting take. Now, honestly, I think ChatGPT could be useful for, say, English teachers to show how a generated essay might fall short of actual real work done by a student who's taking the subject seriously. It could be useful to explain to students the difference between real critical thinking and surface-level observation. And this way you wouldn't have to take an actual student's essay and then embarrass that student in front of the rest of the class and say, see, little Timmy over there totally doesn't understand the role that Falstaff plays in Henry the Fourth, Part Two, just look at this terrible work, shame on you, little Timmy. Instead they could say, look, here's this essay written by ChatGPT. Here are the places where the observations being made are pure surface level and they don't really say anything. This is the sort of stuff you need to avoid and think about when you make your own work. And I get that I'm putting a lot on teachers there, and that they're already overworked and undervalued, and that should change too, just throwing that out there.

Now, one thing about OpenAI that has undeniably caused harm is the process by which the company has tried to filter out objectionable material. So in the past, even with GPT, we've seen examples of how chatbots can produce truly offensive material, stuff that ranges from racism and sexism and hate speech and calls for violence to descriptions of truly horrific acts that I can't even begin to describe on this show. And obviously, any company making a chatbot wants to avoid the chatbot creating these kinds of situations, even if it's just because it's bad optics.
Right, if the reason is, oh, we don't want to do that because it'll hurt our investment, well, at least they still don't want to do it. Now, as you may know, ChatGPT generates responses to queries by referencing information from a massive database of scanned material from across the Internet, and there are complex rules at play that guide ChatGPT's responses, but ultimately those responses depend heavily on the repository of scanned information. Now, the problem is, and I'm sure you've realized this, the web is home to some really terrible communities, ones where truly awful material is shared and sometimes celebrated, and ChatGPT cannot magically tell the difference between what is acceptable and what is unacceptable. So it needs people to essentially tell the model what is and isn't right, and it has to be able to identify certain stuff as falling into categories of content that are forbidden and then filter anything like that out of its responses.

Anyway, OpenAI's approach was to outsource this work of tagging offensive material, identifying it, meta-tagging it, so that ChatGPT could avoid such stuff, and they went with a company called Sama, S-A-M-A. This company had also worked with Facebook in the area of content moderation, so it would hire out people to go through Facebook posts and flag any that violated Facebook's policies. Now, the way Sama handles this is to employ people in Kenya to do the work at a salary that breaks down to between a dollar thirty-two and two dollars per hour. By the way, Sama was paid around twelve dollars fifty cents per hour in this case, and the people actually doing the work are getting a dollar thirty-two to two dollars of that. Time magazine indicates that a receptionist in Nairobi makes around a dollar fifty-two per hour, so these are pretty low-paying jobs. And in return for this hourly rate, employees were divided into three groups to read passage after passage of just truly the worst stuff you can imagine.
Each group focused on a specific kind of offensive material. There was a hate speech group, a sexual abuse group, and a violence group, and I'm sure there was plenty of overlap between these three. Employees would read and tag each passage for eight or nine hours a day, and as you can imagine, being plunged into that kind of work and being expected to meet certain deliverables, right, to tag a certain number of passages each day, really took its toll. It was psychologically taxing, to say the least, and many employees reported being traumatized by the work, and a few of them argued that Sama, despite what it claimed, was not providing adequate counseling services. And this whole story reminds us that behind the surface of these AI programs, there's actually this huge human contribution to make them work. Like, yes, it's remarkable that this program is composing these texts in response to our queries, but to make that possible, a lot of people had to put in countless hours of work, some of them doing work that was truly traumatic.

The story gets even more complicated. Sama recently ended its contracts with OpenAI ahead of schedule, probably because the company was already being taken to task for how it relied on poor people in developing countries to moderate content on Facebook, which included the content moderators being subjected to truly horrific content, including videos and pictures, and we're talking about violence and sexual abuse and child endangerment and worse. In other words, I don't think Sama made this choice out of concern for its workers. The workers also found themselves either out of a job entirely or shifted to other, lower-paying work, so they've actually come out poorer for this as well. And some of them were saying, like, yeah, the work is terrible, it takes a toll, but I need to provide for my family and now I'm not able to. So it's a very grim story.
If you would like to read about it further, I recommend the article in Time. It is titled "OpenAI Used Kenyan Workers on Less Than Two Dollars Per Hour to Make ChatGPT Less Toxic."

And for our third story about ChatGPT, let's talk about how some cybersecurity researchers were able to get the chatbot to create a new strain of malware, and not just any malware, but polymorphic malware, meaning it can take many forms. You can have a core structure that can then be tweaked so that you get different generations of malware based on the same core, but they can be different enough that an antivirus program would have trouble detecting new variants, thus extending the useful life of the malware. Like, you can keep using this core malware over and over by making these relatively small changes and keep sending out waves of the stuff. So this is bad news for the security side of things, but it's also important research, because if the good guys don't know about it, then the bad guys can make greater use of it.

The researchers said that the web-based version of ChatGPT is a little more challenging to use because it's meant to filter out anything that would result in the creation of malicious tools. You're not supposed to use ChatGPT to do it, and it tries to prevent you from doing it. However, the team said that if they just kept restating their requests and if they made them more authoritative, they could eventually work around this barrier and convince ChatGPT to do their bidding. Which is kind of concerning, this idea that persistence and a change in tone will allow a user to sidestep safety parameters. That's not great. But what's arguably worse is the team found the API, the application programming interface version of ChatGPT. This is the version that's meant to let you incorporate ChatGPT functionality into other applications.
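As a side note, to make "incorporating ChatGPT functionality into other applications" concrete, here is a minimal sketch of what calling the API from your own code looked like around this time, using the openai Python package and its completion-style endpoint. This is not the researchers' code, and the model name, prompt, and parameters here are illustrative placeholders only.

    # A minimal sketch (not the researchers' code) of using the OpenAI API from
    # another application, circa early 2023. Assumes the `openai` Python package
    # is installed and an API key is available in the environment.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def ask_model(prompt: str) -> str:
        """Send a prompt to a completion-style model and return the generated text."""
        # Model name and parameters are illustrative placeholders.
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=200,
            temperature=0.7,
        )
        return response["choices"][0]["text"].strip()

    if __name__ == "__main__":
        print(ask_model("Summarize today's tech news in one sentence."))

The point is simply that the API path runs whatever prompt the calling application supplies, which is why any missing guardrails on that side matter so much.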
That version appeared to have no such restrictions at play in the first place. So, in other words, it was essentially raring to go when the time came to make malware. The security firm CyberArk says that it will continue to do research on this new development, and that it plans to release some of the source code of the malware that was created to the cybersecurity community for the purposes of education. And yeah, the bots are learning how to create cyberattacks now.

All right, we're gonna take a quick break. When we come back, we'll have some more news.

We're back. Now, last Tuesday, this past Tuesday, I talked about how crypto is currently in a bit of a recovery phase, possibly because the crypto community thinks that the macroeconomic situation is improving. But there's no telling if that recovery phase is going to be sustainable just yet, and it appears that the big companies are still very much in belt-tightening mode, which suggests that at least these companies expect things to be tough for a while yet. This week we heard Microsoft is laying off around ten thousand employees between now and the end of March. A company spokesperson indicated that marketing and sales will feel the impact more than engineering, and gamers will also be sad to learn that the Xbox and Bethesda divisions are among those that will be affected. Satya Nadella, the CEO of Microsoft, said that consumers are starting to get more frugal with their digital spending, and that while the early stages of the pandemic saw an increase in digital spending, we're now seeing kind of a reversal of that trend as people start to ask, do I really need this digital subscription, and start to make choices in a way to kind of scale down their own expenditures.

And on a related note, Amazon continues to cut costs, this time by planning to sunset a charitable donation program that has been in place since two thousand thirteen.
It's called AmazonSmile, and it would let customers designate a charity, one that Amazon had verified, and that charity would then receive a percentage of every eligible purchase that the customer made on the platform. So instead of going to www.amazon.com, you would go to smile.amazon.com, and otherwise everything else would be the same as your typical Amazon experience. I actually made use of this program myself. I selected a local theater, as in a stage theater, here in Atlanta to be the recipient, because it's a nonprofit organization. Anyway, now Amazon is saying that this program was not as effective as the company had hoped, and that was because these donations were, you know, fractions, like a percentage of each eligible purchase, and it was spread across thousands of charities. No single charity received very much money. The company said that the average amount donated to a charity amounted to around two hundred thirty dollars. I just checked mine, and over the years that I've been using this, and I have been using it for several years now, my purchases have amounted to about one thousand, seven hundred fifty dollars in donations to this theater. But that's stretched across years, right? So it's not like this theater got a check for almost two grand and was like, wow, what a huge donation. It's more like, you know, in a month they might get a check for a couple of bucks. So Amazon kind of has a point here, in the sense that this was probably not the most effective charitable platform. It says that it will still support various causes, but it will make more focused decisions on things that, quote unquote, make meaningful change.

The Data Protection Commission, or DPC, in Ireland took aim at Meta again, this time fining WhatsApp five point five million euros, which is just a smidge under six million US dollars, and it was for a breach of the EU's strict privacy laws.
At issue is how WhatsApp has been leveraging the personal information of users in the EU while trying to improve the company's services. So this isn't about targeted advertising, for once. It's about using this personal information in a way to beef up the service's own features and capabilities. Apparently WhatsApp's methods just didn't go far enough to protect the personal information of EU citizens, though the Reuters report I read didn't go into really any detail about the nature of the violations themselves. Meta has been the target of the DPC several times now and has had to pay fines on multiple occasions for how it has continued to handle, or, depending on your point of view, mishandle, data pertaining to EU citizens. I don't know at what point things would change, because the fines, while they're substantial, in the big picture of Meta's, you know, financial situation, rarely amount to anything that would raise an eyebrow on a massive corporate spreadsheet. So, yeah, it's yet another example of Meta doing what Meta does best, which is handle the personal information of its users in a way that's ultimately irresponsible.

Meanwhile, Global Witness, a human rights watchdog agency that we've talked about several times on this show, revealed that Meta's Facebook allowed ads calling for violence in Brazil following an already violent series of protests in that country. All right, so first let's get some background. In elections late last year, the leftist politician Luiz Inácio Lula da Silva defeated his right-wing opponent, former President Bolsonaro, in a runoff, but Bolsonaro took a page from the far, far-right playbook and refused to concede the election. Da Silva took office at the beginning of this year, and then some of Bolsonaro's supporters stormed government buildings in these violent riots in an attempt to have the election overturned. The violence lasted several hours, and this prompted Meta to declare Brazil a high-risk region, at least temporarily.
Now, that's supposed to mean that Meta's properties, including Facebook, take a much more restrictive approach with regard to content moderation and ad approval, in an effort to mitigate things like hate speech and calls for violence. But Global Witness found that just days after the riots, Facebook accepted fourteen of sixteen fake ads that Global Witness created, and these ads called for violence against da Silva and his supporters. That is definitely not a good look. Facebook should not have accepted those ads, but it was totally prepared to run fourteen of sixteen of them. Global Witness reps say that this shows how Meta fails to live up to its obligations in these kinds of cases, and it's something we've seen play out around the world, particularly in non-English-speaking countries. There's long been this accusation against Meta that the company pays very little attention to content moderation outside of major English-speaking markets like the United States and the EU, and yet by doing this, by ignoring these other regions, the company is facilitating serious harm to populations by not cracking down on things like hate speech and calls for violence. Global Witness says that YouTube performed substantially better than Meta did and denied ads that had similar language and messaging in them. And just to set minds at ease, Global Witness did cancel these ads before they could actually go live, so no users encountered these ads that violated Facebook's policies and called for violence, because Global Witness pulled them before they could go live on the site.

Okay, I've got a couple more stories to get to, but before I get to those, let's take another quick break.

We're back. So I've got a story about video games. It's not necessarily a positive one.
So one of the ongoing issues in the world of video games involves what comes along with leaning on the games-as-a-service business model, that is, releasing a game that has elements in it that allow a company to continue to generate revenue from players over time. So in the old, old days, the way you made money if you were in the video game business was you sold as many copies of a video game as you possibly could. But once you sold a copy of a video game, that was kind of it. That was the end of the transaction. Then once you started getting into games that had a subscription model, like MMORPGs, there became this new way to make money. First you'd sell the game, then you would continue to make money from the game by collecting subscription fees. And this started to open up possibilities where, you know, you look at this and you think, all right, we can continue to make a profit from these games. We do have to also put effort in to continue to support the game, so there is an ongoing cost as well. It's not like, you know, we work one day and then we just make passive income for the rest of our lives. That's a myth. But it did open up these opportunities, and that's when we started to see other ways to make money on ongoing titles, and that involved stuff like being able to pay for upgrades, cosmetic upgrades for your character so that they could look a specific way and have a specific style. And most gamers, I don't think, object to this, like the idea of, yeah, you know, if you want, you can pay a dollar and you get this new skin for your character and it looks really cool. I think most gamers are like, yeah, you know, I don't care about that, so I'm not going to spend the money, or, yeah, it's a dollar, I'll spend a dollar and help support this game, and plus get this cool skin. Most people don't have objections to that.
But then you have the darker side of this, where, you know, they offer up items or outfits or just, you know, elements that give players an advantage in return for some cold, hard cash. You buy something that boosts your character's performance, either temporarily or permanently, and a lot of players refer to this as the pay-to-win style, right, where you can get an advantage over better players just by spending money, and it has a really bad rep in the video game world. Generally speaking, it's heavily frowned upon, because people who put in the time and who genuinely love the game will find themselves frustrated with folks who just have more discretionary income and who are spending more on the title in order to get the advantage. And that's very frustrating, because we deal with that enough in real life, y'all. But anyway, along with these monetization strategies come concerns that the games, you know, gamify stuff so that players are kind of lured into spending money that otherwise they wouldn't, or there might be elements of gambling at play, such as the purchase of a loot box. So a loot box doesn't guarantee you a specific item within the game. It gives you a chance of getting different items, and the more common items tend to have a larger chance, and the more rare items tend to have a smaller chance, so it becomes kind of like a gamble. And a lot of countries have looked into this and even gone so far as to call it gambling and outlaw loot boxes. Well, now the EU is kind of looking in on this too, as well as other elements of games as a service that could have a negative impact on gamers' mental health. So this doesn't mean that the EU is going to vote against loot boxes, or make those, you know, illegal, or regulate them in any way. We haven't gotten to that step yet, but they have adopted a report that is looking into these kinds of things, which could be the first step toward such an outcome.
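As an aside, the loot box mechanic described above boils down to a weighted random draw. Here is a minimal sketch in Python of how such a drop table might work; the item names and odds are made up for illustration and don't come from any particular game.

    import random

    # Hypothetical drop table: item name -> relative weight (illustrative numbers only).
    DROP_TABLE = {
        "common skin": 60,
        "rare emote": 30,
        "epic weapon": 9,
        "legendary mount": 1,
    }

    def open_loot_box(table: dict[str, int]) -> str:
        """Return one item via a weighted random draw, so rare items land far less often."""
        items = list(table)
        weights = [table[item] for item in items]
        return random.choices(items, weights=weights, k=1)[0]

    # Simulate opening 1,000 boxes to see how rarely the top-tier item actually shows up.
    results = [open_loot_box(DROP_TABLE) for _ in range(1000)]
    print({item: results.count(item) for item in DROP_TABLE})

Buying a box is effectively buying one spin of that weighted draw, which is why regulators keep comparing it to gambling.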
So I imagine there are some video game companies out there that are watching this with some anticipation, because if the EU does come to the conclusion that stuff like loot boxes amounts to gambling and that this has to be regulated, and maybe that also creates restrictions on who can buy the games, that's going to have a massive impact on the industry. So it's again an ongoing issue that is still kind of shaking out in different places around the world. Maybe in five or ten years we're going to see dramatically different strategies for monetization to avoid these kinds of, I guess, what Obi-Wan Kenobi would call Imperial entanglements, although he meant it in a bad way, whereas I think, like, maybe we do need to rein this in a bit, because I think it does get a bit predatory.

And finally, a group of academic researchers have proposed that social networks should employ a new approach to their recommendation algorithms. These are the sets of rules that determine what content to promote to each user at any given time. Now, it's hardly controversial to say that, as of right now, recommendation algorithms tend to promote harmful material, or at least material that is more likely to create larger divisions between different populations of people, such as people who have different political leanings. The purpose of the algorithm is not to sow discord. It's not to watch the world burn, it's not to create chaos. The purpose of the algorithm is simply to keep users engaged and, more importantly, to keep them on the respective platform for as long as possible. Keep them stuck to your product, because that's how you make money. You want them to stay on Facebook or Twitter or YouTube or whatever for as long as you possibly can. It's unfortunate that the stuff that tends to sow discord is particularly good at keeping people glued to these platforms. So I guess what you can say is that the algorithms themselves are not immoral.
They are amoral. There's no moral judgment given to the kinds of content promoted. The algorithm is just looking for a result; it doesn't care how it gets it, the ends justify the means. Well, the researchers suggest that perhaps purposefully designing algorithms that promote content that builds bridges between people, rather than foments animosity, could still keep engagement levels high while simultaneously mending fences between different populations. That if the content that's promoted supported stuff like actual discussion and good old debate, rather than accusations and dehumanizing portrayals of the other side, we might see less divisiveness outside of the online world as well, because a lot of the outside world is taking its cues from what's happening in the online world. And the argument is that if we could make some conscious steps to improve things, we could see the benefits well beyond the native platforms. This is a nice thought. I would love to see it happen. The cynic in me worries that it wouldn't work, but the optimist part of me would love to see some real effort made on something like this, because the worst scenario is we could try it and it doesn't work, right? Like, that's the worst thing that could come out of it. Whereas the best thing that could come out of it is that we could actually see online communities become less polarizing, and perhaps this could also extend to other areas of our lives, and maybe this would mean we would start to recognize where we are in agreement, as opposed to only pointing out where we have massive differences and then just escalating that to the point where people start to be harmed in the process. So I'm skimming over some of the details here, but you can read the whole paper. The whole white paper is available online. In fact, it has its own, you know, kind of vanity URL, and it's bridging.systems. And that's it for the news today, Thursday, January nineteenth, twenty twenty-three.
I hope you are all well. If you have any suggestions for topics I should cover in future episodes of TechStuff, please reach out to me and let me know what those are. You can do that in a couple of ways. You can go to Twitter and send me a tweet; the handle for the show is TechStuffHSW. Or, if you prefer to leave me a voice message, you can download the iHeartRadio app. It's free to download, it's free to use, and you just go over to the little search feature there, you type in TechStuff, that'll take you to the TechStuff page in the app, and there you're gonna see a little microphone icon. If you click on that, then you can leave a voice message up to thirty seconds in length and let me know what you would like to hear in future episodes, and I will talk to you again really soon.

TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.