Speaker 1: Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for the tech news for Thursday, March second, twenty twenty three. And AI is on the brink of changing the world for the better, with the potential to boost the world economy by nearly sixteen trillion dollars in just a few years. Or it's all overblown and far less impressive than you think. My first two stories take these very different perspectives.

Speaker 1: So first up, from Markets Insider, is a piece that's titled Artificial intelligence is on the brink of an iPhone moment and can boost the world economy by fifteen point seven trillion dollars in seven years, Bank of America says. Now, come on, Markets Insider, you can at least save some of the content for the actual article. It doesn't all have to go into the headline. But yeah, Bank of America sent out a message to clients outlining why the financial institution believes that AI is poised to change things forever, similar to how the iPhone helped transform the web from something most folks access through computers into a mobile experience. And honestly, that transformation was huge. Anyone who was working in web-based content at the time can tell you that, and about how it was so incredibly disruptive. We still see that today, with companies offering up web page services designed to scale properly no matter what kind of device connects to the page. That doesn't even touch the rise of the app developer market, which really didn't exist in any significant way before the iPhone. Anyway, Bank of America says AI is about to do something on the same scale, and surely lots of companies are pushing AI prominently. That's something that we're going to be revisiting throughout the early part of this episode. But I take some issue with the conclusions.
Speaker 1: For example, the Bank of America note includes this bit, quote, It took ChatGPT just five days to reach one million users, one billion cumulative visits in three months, and an adoption rate which is three times TikTok's and ten times Instagram's. The technology is developing exponentially, end quote. Okay, that conclusion does not follow the premise. You know, I think you could say that the appeal of AI, the curiosity people have, the eagerness they have to try it out, the enthusiasm around it, all of that developed very quickly. Perhaps you could even argue it developed exponentially. But that is not the same thing as saying the technology itself is developing exponentially. I think conflating these two things is dangerous, because it creates this heightened expectation for a technology that, depending upon its use, has often proven to be far from perfect or reliable. I do think it's undeniable that companies are going to continue to invest huge amounts of money in developing AI. That is unavoidable. It is happening and will continue to happen. But I would caution against making the assumption that it means we're going to see incredibly rapid development in the space. That might happen, but we might also just see more iterative improvements rather than these big revolutionary leaps. A huge increase in attention is not the same as an increase in a technology's capabilities.

Speaker 1: Meanwhile, Alex Shephard at The New Republic has a different take from the Bank of America approach. It is far less bullish. Alex wrote an article titled Artificial Intelligence Is Dumb. Okay, so this is a much more straightforward, simple title, and Shephard's argument is that AI is in the early stage of the Gartner hype cycle that we've talked about in recent episodes. That enthusiasm, excitement, and expectations are on the rise, but if you spend any meaningful amount of time with tools like ChatGPT, you come to the realization that the actual experience doesn't quite live up to the hype surrounding it.
Speaker 1: Shephard spends much of the article using ChatGPT as the primary example, and argues that while the chatbot is remarkably more advanced than ones that preceded it, conclusions like the ones Bank of America has come to are largely unfounded. That the belief that we're on the precipice of disruptive transformation is really based on nothing more than conjecture and hypotheses, and that we lack any actual evidence to say that we are in fact on that precipice. Now, you could argue that these are the sort of things that we really can't assess until they've actually happened. But really, it's only with hindsight that we can say that moment was when everything changed, because as you're living through the moment, you don't have enough perspective to judge whether or not it's that pivotal. It's only after the fact that you can make that assessment. I do think Shephard makes some very good points, but I also find the arguments of the article to be a little too narrow and reductive. Shephard says that those who claim AI is going to have transformational impact on absolutely everything are making quote unquote insane claims. But since the article almost exclusively focuses on ChatGPT, I feel that leaving out all the other manifestations of AI undermines this argument, because we're already seeing how AI is transforming the world, both in good and bad ways. It can help optimize processes, which might not be as world-shattering as, I don't know, facilitating meaningful conversations between different countries, but it does have an impact. We've seen it in stock trades, right? We've seen micro trading and ultra-fast stock trades that are making economic impacts that are honestly still kind of difficult for us to get our minds wrapped around. And then we've also seen how AI can exacerbate social problems, like the use of facial recognition technology among law enforcement.
Speaker 1: That kind of AI can really crank the knob on already difficult problems, like the fact that people of color are disproportionately targeted by law enforcement here in the United States. So while I think some versions of AI are undeniably dumber than what the hype suggests, we also have to remember AI does not manifest in just one way. AI is not just ChatGPT. It's not just the idea of a seemingly sentient computer like HAL in two thousand and one. It's in all sorts of stuff, from robotics to stock trading to assisting surgeons with medical procedures. So I think we have to avoid using a specific product as the gateway to criticizing the general field. It's too reductive, and it doesn't really help us get a deeper understanding of what's actually happening.

Speaker 1: Moving on to another version of AI. Earlier this week, Microsoft researchers unveiled Kosmos-1, that's Kosmos with a K. This is a form of multimodal AI that, according to the researchers, can solve visual puzzles. It can recognize text visually, so it's not, you know, reading it as plain data; it's reading the text like a person would. So you could have, like, a picture that has text in it, and this would be able to distinguish what that text is. It can analyze pictures and be able to tell what's in those pictures and describe them. It can have natural language interactions. And that could mean we're about to have another change in how CAPTCHAs work. You know, CAPTCHAs are those tools that websites and other services use to determine whether or not you're human. And you know, you sometimes will encounter a CAPTCHA that will ask you to do something like select all the images that have a fire hydrant in them, or a crosswalk, or a motorcycle, or whatever. Well, that's because that's a task that most humans can do fairly easily, but bots traditionally have a hard time doing it. Kosmos-1, it seems, could potentially complete those kinds of CAPTCHAs.
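To make that idea concrete, here is a rough sketch of how an image-selection CAPTCHA could be attacked with a multimodal model. It's purely illustrative: Microsoft has not published a public API for Kosmos-1, so the model call below is a made-up stand-in, and the names are invented for the example.

```python
# Illustrative sketch only: "ask_model" is a stand-in for a multimodal model
# call (image plus text prompt, returning yes/no). Microsoft has not released
# a public Kosmos-1 API, so nothing here reflects a real interface.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Tile:
    tile_id: int
    image_bytes: bytes  # one square from the CAPTCHA grid


def solve_selection_captcha(
    tiles: List[Tile],
    target: str,
    ask_model: Callable[[bytes, str], bool],
) -> List[int]:
    """Return the IDs of tiles the model believes contain the target object."""
    prompt = f"Does this image contain a {target}? Answer yes or no."
    return [t.tile_id for t in tiles if ask_model(t.image_bytes, prompt)]


# Dummy stand-in so the sketch runs on its own; a real system would send the
# image and the prompt to the multimodal model and parse its answer.
def fake_model(image: bytes, prompt: str) -> bool:
    return b"hydrant" in image


grid = [Tile(1, b"bus"), Tile(2, b"hydrant"), Tile(3, b"crosswalk")]
print(solve_selection_captcha(grid, "fire hydrant", fake_model))  # -> [2]
```

The wrapper logic is the easy part; the open question is how reliably a model like this can actually classify the images.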
Speaker 1: Kosmos-1 could analyze images and determine which of them, if any, have the important feature in them. The whole history of CAPTCHAs is actually a swinging pendulum between foiling AI and then creating AI that's capable of foiling the CAPTCHA, so this is nothing new. Anyway, the Kosmos-1 system was given tasks that included writing captions for images, like trying to describe what the image showed, and it even took a visual IQ test. Essentially, what the researchers did was they fed answers to a visual IQ test to the Kosmos-1 model and asked Kosmos-1 whether or not the answer was correct. Now, according to the researchers, the AI scored below thirty percent on that visual IQ test. That's a pretty dang low score. Technically, I think it was between twenty-two and twenty-six percent. That's not good, but it is better than chance, so it's better than just picking an answer at random. So that suggests that this could be a starting point from which this model will improve over time. Microsoft has indicated that the company plans to make Kosmos-1 available to developers in the future, though at the time I'm recording this, there's not a timetable mentioned about when that might happen. This is a different approach than the generative pre-trained transformer of GPT. So again, we're looking at different ways that AI manifests. It's not always in just one single direction. There are so many different disciplines that are involved in AI, and many of them are approaching AI from a very different angle, and there's no telling which versions are going to end up being the most dominant further on, or if it will truly be a convergence of all these different disciplines that ultimately produces the AI that we're thinking of that will be truly transformational.

Speaker 1: The US Federal Trade Commission, or FTC, has its own concerns about AI, and in this case, it has more to do with the way companies are marketing their services by including mentions of AI.
Speaker 1: The FTC is concerned that companies could be overpromising or misleading people by leaning on a trendy, buzzworthy term and concept. If you need to get investors to pour money into your startup, well, you know, just start using the term AI in there. Even if AI doesn't really make sense or you're not really using it, you're bound to snag a few fish with that approach, because AI is such a crazy popular concept right now. That's the kind of thing that the FTC is concerned about: folks who are trying to cash in on a popular but largely misunderstood technology. And as I've said many times, when you have excitement mixed with a lack of information or knowledge or understanding, what you have is the perfect condition for scam artists, or, if not outright scams, at least unethical folks who don't mind leaning a little heavily on ignorance in order to make some money. So, if there's something that sounds really exciting, like a huge investment opportunity, but you don't actually understand the underlying approach, whether it's a technology or otherwise, huge red flag, y'all. Huge red flag. I don't care if it's an NFT or if it's AI. It is something you need to take a step back from and start asking critical questions to get a better understanding. And it might turn out to be totally legit, which is awesome. But if it's not totally legit, it will benefit you to have taken that step back. So the FTC is essentially sending a message out there, and it is saying, hey, be sure any claims y'all are making about AI in your products and services are substantive, or else we're going to ask you to prove it, and if you can't prove it, you're gonna be in trouble.

Speaker 1: Mashable reports that Google layoffs have affected all sorts of employees that you wouldn't expect, like robots. I mean, I'm talking about actual robots. We usually worry about robots taking our jobs. Rarely do we think about them losing their jobs.
Speaker 1: All right, so the robots in question are one-armed robots from the Everyday Robots team that was within Google. This team had been working on robot systems that could operate in consumer applications, and Google was actually making use of them in the Google HQ to do odd jobs, like cleaning surfaces such as counters and that kind of thing, or taking stuff to recycling bins. But it now sounds like this project has been dissolved, and in addition, the robots themselves have been shut down and packed away. So times are tough even for the machines out there, I guess. Okay, we've got some more news stories we're going to be covering, but first let's take a quick break.

Speaker 1: All right, we're back. We still have another AI story, because, like I said, it has just become the big tech topic for twenty twenty three, unless something massive changes later in this year, which is entirely possible. I suspect that end-of-the-year roundups in various tech podcasts are going to be talking about how this was the year of AI hype. But switching over to Apple: the company has a very well-earned reputation for having an obtuse process for approving apps on its iOS platform. You can read countless descriptions among app developers of encountering frustration as they have submitted apps to Apple, only to find them rejected, and often with not enough direction for them to be able to make informed changes to the app so that it could actually pass. But recently, Apple sent a communication to one app developer called BlueMail that was planning on pushing out an update to its existing email application, and this update would have incorporated an AI-powered feature that could assist with language tools. Think of something kind of similar to ChatGPT that could help you construct an email message.
Speaker 1: Apple has delayed this upgrade rollout, citing concerns that the AI could end up generating inappropriate content and that the app developer needs to take that into consideration, since children could be using the app. So Apple is telling BlueMail that if it wishes to incorporate this AI feature, it also has to change the app so that the app is now going to be restricted to users who are seventeen years or older, just in case the AI starts to generate offensive messages or material that could be considered harmful for kids. BlueMail has protested this decision. The company has argued that there are already apps on the iOS platform that are not held under the same sort of restriction, but that have some similarity in capability. And the company says that if it is forced to offer this email app with that restriction, the age restriction, that this harms visibility and discoverability, and it hurts the app's performance in the marketplace. Now, I do not doubt that there is an uneven landscape among apps on iOS. I don't think it's fair at all. I think there are far too many inconsistencies with apps that get the green light and apps that are prevented. I do not think it's a very transparent process at all. But I also think concerns about generative AI have some validity to them. If you just take the Nothing Forever show on Twitch, that's the AI-generated show that creates an endless Seinfeld episode, that's proof that without careful guidelines and controls, you can run into problems. So for those who don't remember, Twitch actually temporarily banned the Nothing Forever channel because they had temporarily reverted to an earlier version of GPT when they encountered some technical issues, and the earlier version of GPT did not have the content restrictions that the more recent version had, and the show began to generate content that was homophobic in nature. They violated Twitch's policies. They got a ban.
Speaker 1: Well, that shows that these AI tools can end up being problematic. I know that's a word we use to the point where people complain about its use, but it's an appropriate word in this case. So I do get the concern, but I also can sympathize with BlueMail's argument that it's already an unfair playing field on iOS. I don't think anyone comes out a winner in this one.

Speaker 1: Now, to talk about TikTok a bit, the American Civil Liberties Union, or ACLU, has issued a statement protesting US House Bill one one five three. Now, this proposed bill would, according to the ACLU, quote, effectively ban TikTok in the US, end quote. This isn't about removing TikTok from government devices, but, according to the ACLU, about banning TikTok outright, along with similar platforms. So the official description of the bill is, quote, to provide a clarification of non-applicability for regulation and prohibition relating to sensitive personal data under the International Emergency Economic Powers Act and for other purposes, end quote. I wish I could tell you more about the text, but when I went online to read it, it had not yet been uploaded to the database, so I haven't been able to actually read the bill. The ACLU says that the US Congress, quote, must not censor entire platforms and strip Americans of their constitutional right to freedom of speech and expression, end quote. And yeah, the right to free speech is one of the fundamental core values of the United States, hence the First Amendment to the Constitution. But this is a complicated issue, because TikTok critics worry that the app is, on the back end, essentially performing as a data siphon and pulling in information that the Chinese government can then use as intelligence. And this information includes personal information about users, things like employer information, government information, and more.
Speaker 1: Like, people are using TikTok all over the place, so potentially, if you are gathering intelligence, you could comb through TikTok and look for stuff that could give you an advantage in that arena. So generally speaking, I tend to side with the ACLU on most topics, but this one is a little tricky, and I'm not sure where I land on this. I do have concerns about data security with TikTok. But then again, as a lot of people have pointed out, TikTok's practices are really not all that different from other platforms like Meta, YouTube, etc. It's just that those companies aren't owned by a Chinese company, right? But they are gathering the same kinds of information and more, and they're definitely exploiting it. So you can make a strong argument that we've already decided that handing over information to platforms is fine, and therefore it would be unfair to single out TikTok just because its parent company happens to be based in China. Plus, obviously, freedom of speech is critically important. I'm not really sure if banning a platform falls into the bucket of restricting free speech, but then I'm no constitutional expert either. Also, there's nothing stopping someone else from making a similar app. In fact, we've seen that, right? Instagram, YouTube, Snapchat, and others have all introduced features that are extremely similar to what TikTok does. I think it's fair to say some of these have outright tried to copy what TikTok does, to varying degrees of success. So I don't know that eliminating a platform amounts to the same thing as eliminating Americans' free speech. But again, I am not an expert on the subject matter, so I don't know. I am genuinely conflicted. I do not know what to think about this particular topic.

Speaker 1: TikTok itself is introducing features that are meant to limit screen time for younger users. So all TikTok users who are under eighteen will get a message when they hit sixty minutes of screen time in a day.
Speaker 1: Once this rolls out, at that point the user will see a prompt asking for a passcode before they can continue watching content on TikTok. However, they can also disable the feature entirely, but after one hundred minutes of screen time in a day, they will receive a prompt that requires them to create a new daily limit. Now, I'm not sure how effective this will actually be at limiting screen time, because, to me, it sounds mostly like something that the average user would just kind of roll their eyes at and then disable and then continue on, unless parents set the passcode and they don't tell their kids what the passcode is. But then, if the user can actually just disable the feature, I find it hard to believe that most folks will say, ah, thank you, TikTok, where did the time go? I shall now go outside to take in the fresh air and play at sport or something. I guess you could say I'm skeptical that this is going to make much of a difference. Some of the other features potentially could help parents keep an eye on how much time their kids are spending on the app, and that at least allows for intervention if usage spirals out of control. I'm glad I don't have kids, because I don't know how I would approach this one either. I'm also glad that TikTok was not a thing when I was a kid, because I have a feeling I would have been a hardcore addict of TikTok if I had had the opportunity to access it back when I was a kid.

Speaker 1: In the UK, a man named Duncan McCann has lodged a formal complaint with the country's Information Commissioner's Office, or ICO, accusing YouTube of collecting information about the videos that children are watching within the UK, and this is against the ICO's Children's Code. YouTube responded by saying that the platform has never been intended for children under the age of thirteen, and that accounts that are registered to young users follow protocol.
Speaker 1: They don't collect data on young users if that's the account that's connected to YouTube, and, for the younger kids, there's also the YouTube Kids platform, which also does not track activity or collect personal information. But McCann's argument is that a lot of kids are accessing YouTube on family accounts or on family devices that are under a parent's account, and that these kids, as they use the app, have their data and activity tracked. And you might be thinking, well, yeah, if YouTube is being told an adult is in charge of the account, then YouTube is going to treat the activity on that account as if it were any adult using it. So obviously it's going to track all the information. That's the YouTube business model. And you might wonder what McCann's solution to this problem is. Essentially, he says the ideal solution would be to create an opt-in system in which only accounts that are registered to adults would have the option to agree to having their activity tracked, kind of similar to how Apple approaches app tracking (there's a little sketch of that idea below). So it becomes an opt-in system, and McCann believes that only a minority of users would ever opt into such a system, and I'm pretty sure he's right. But I also bet that if you forced that change on YouTube, it would result in such a drastic impact on the company's revenue that they would have to make drastic changes to operations, or else it would become too expensive to run the business. Keep in mind, they are hosting hundreds of new hours of content every single minute. So as it stands, this matter is going to test the ICO Children's Code. The UK only put that code into operation in twenty twenty, so it's a pretty young set of rules.
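For what it's worth, the opt-in rule being described is simple enough to sketch. This is only an illustration of the idea; the account fields and function name are invented for the example and do not reflect any real YouTube data model.

```python
# Illustrative sketch of an opt-in tracking rule: tracking is off by default,
# and only an account registered to an adult who has explicitly agreed gets
# tracked. The fields are invented for the example, not a real data model.
from dataclasses import dataclass


@dataclass
class Account:
    registered_to_adult: bool
    opted_in_to_tracking: bool = False  # default is no tracking


def may_track_activity(account: Account) -> bool:
    return account.registered_to_adult and account.opted_in_to_tracking


print(may_track_activity(Account(registered_to_adult=True)))                              # False
print(may_track_activity(Account(registered_to_adult=True, opted_in_to_tracking=True)))   # True
print(may_track_activity(Account(registered_to_adult=False, opted_in_to_tracking=True)))  # False
```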
Speaker 1: And this, to me, starts to raise larger questions, because if you start with the premise that a child could possibly access a particular device or account that belongs to an adult, does that mean that all online services from here on out have to be designed in such a way as to assume by default that a child could be accessing them? Like, do you have to start thinking, well, a child might have gotten hold of their parents' iPhone, and because of that, we need to design this so it's child-friendly? Because obviously that would end up impacting everything. There are tons of apps that are not appropriate for children, whether because of their content or their use. I mean, like, banking apps would not be appropriate for children, right? So if you start from the premise that you have to assume a child could be using this, and therefore you can't be tracking data or usage, it would mean that tons of things would have to change. So I'm very curious to see how this develops, because I don't see it as being sustainable. All right, I've got four more stories to cover, so we're going to take another quick break. When we come back, we will wrap up tech news for this week.

Speaker 1: Okay, we're back, and here is a quick Airbnb story. I sometimes resist including Airbnb stories on TechStuff because the company kind of is a tech company and kind of isn't. I mean, ultimately it is a tech company. It's just that our experience with Airbnb is more on the actual, like, either hosting a property or staying at a property, and not so much thinking about the back end that's making all this happen. However, in the case of this particular story, I think it really qualifies as a tech company. So sometimes Airbnb will issue a ban on a user and prevent that person from ever being able to make a reservation at an Airbnb property. There are a few reasons why Airbnb would do this.
Speaker 1: They might do it if someone has been reported as violating the rules, like if a host says, hey, you know, I opened up my home to this renter and they ended up causing an enormous amount of damage that I'm now going to have to address. That might be a reason. In some cases, it might be a background check. Airbnb does partner with a company that does rapid background checks. If that background check reveals that a person has a criminal history, that could be a reason to get a ban. In fact, in a few cases, the quote unquote criminal history has been one where someone was found guilty on a misdemeanor charge that wasn't remotely related to rental of property. There was one story about how someone had a misdemeanor for having their dog off a leash in an area that required dogs to be leashed, and that alone prevented them from being able to stay at an Airbnb, and that does seem like it might be an overreach. On the one hand, you can understand how a company like Airbnb would err on the side of draconian caution, because Airbnb ultimately is matching prospective customers with hosts, and Airbnb does not own the property. In most cases, a host owns the property. So if Airbnb allows some, I don't know, TV-tossing rock star to totally trash a host's home, there would be some pretty major problems. And Vice reports that Airbnb's policies now extend to folks who haven't broken any rules and don't have a criminal past, but who have been found to associate with someone who has already received a ban on Airbnb. So let's just say, for example, that you happen to be friends with somebody who occasionally makes bad life choices, and this person goes and does something that gets them banned by Airbnb. Well, then you might find the next time you try to book a place that you've been banned by association, because Airbnb did a little quick background check and saw through Instagram that you and this friend of yours had been on multiple trips together.
Speaker 1: And they're like, oh, well, they travel with this person who we've already banned, so now we're going to ban them too, even though they haven't been found to have done anything wrong themselves. Now, there is an appeals process, but it's not very transparent. If you would like to learn more about this, I recommend the Vice/Motherboard article. It's called Airbnb Is Banning People Who Are Closely Associated with Already Banned Users.

Speaker 1: Over in Texas, Tesla has announced during an investor call that it will offer Texas Tesla owners an overnight home charging package that costs thirty dollars a month for unlimited overnight charging of their Tesla. This is to encourage Tesla owners to recharge their vehicles at night, and, further, it will depend heavily on electricity generated by wind farms, so it comes from a sustainable source. Tesla executive Drew Baglino pointed out that, quote, Texas has a ton of wind, and in Texas, the wind blows at night, end quote. According to Insider, the average monthly cost to charge a Tesla would typically amount to around fifty-six dollars a month, so thirty dollars a month would be a bargain. Now, there are restrictions. Only people who happen to live in an area of Texas that allows homeowners retail choice in electricity providers would be able to qualify, and they will already have to have a Tesla Powerwall battery installed in their home. So they have to meet these qualifiers first before they can be part of this particular incentive package.

Speaker 1: Now, on that same investor day call where we got this incentive announced, Elon Musk himself said that Tesla's humanoid robot program, called Optimus, is one that he believes will lead to a future in which humanoid robots could potentially outnumber humans in a greater than one-to-one ratio. He also said, quote, you could sort of see a home use for robots, certainly industrial uses for robots, humanoid robots, end quote. I respectfully disagree.
Speaker 1: I think we've seen tons of examples of how humanoid robots are not always the best approach. In fact, they rarely are the best approach. Now, hear me out. The reason industrial processes are the way they are is in large part because we humans have certain abilities and certain limitations. For example, before we got to industrial robots, the way we built a car was not necessarily the best way to build it, full stop. It was the best way to build it based upon what we humans can do. But then we could also design robots to do stuff that humans can't do, which means we can actually make those processes better and more efficient and safer and less expensive, because we can start from scratch and design an idealized industrial process that isn't limited by the capabilities, or lack thereof, that human beings possess. Robots don't have to be humanoid at all. And in fact, making robots humanoid means the machines end up having similar, but not identical, limitations to human beings. So why would we limit ourselves to this? Why would we choose the humanoid robot approach if it means that we have to make all these other considerations just to make them work? Plus, it turns out creating a really good humanoid robot is exceedingly difficult. Then you have to take into account how humans and robots will interact in social settings. You might spend a ton of time making a robot that works great in a laboratory setting and then find that once you put it into the same environment with humans, there are tons of problems that crop up that you didn't anticipate, because you didn't take into account how humans would react to this machine. I guess what I'm saying is that I'm far more skeptical about humanoid robots being super useful, at least in the near term, because I'm not convinced they fix many problems, and in fact they might make some stuff a whole lot harder.
Speaker 1: Finally, DARPA, which is the US Department of Defense's agency that funds technology intended to advance the US's defense capabilities, has announced an initiative called the Speed and Runway Independent Technologies program, or SPRINT. According to the agency's director, Stephanie Tompkins, the goal is to develop aircraft that can take off and land without a runway, but also still have excellent speed and mobility. How the aircraft achieves these goals is not part of the brief, and that makes sense. DARPA's method is to propose an engineering challenge, like, this is the goal we want to achieve. Then it comes down to various companies and research institutions to attempt to meet that goal, often taking very different pathways to try and achieve it. DARPA is really more about awarding contracts for these jobs. The agency itself is not some sort of skunkworks laboratory. Instead, it's more of an administrator that evaluates proposals from various sources and then chooses which ones to fund. As for why the Department of Defense would want runway-independent aircraft, it's likely to make certain that the US would be capable of fielding aircraft even if an enemy were to target, say, military runways, because as it stands now, satellite information has pretty much blown the cover off of military runways and airfields. Once upon a time, there were secret airfields and secret runways on military installations that people just weren't aware of, at least not widely aware of. But satellite imagery has really changed that pretty dramatically, and even the fabled Area fifty one was not immune to this. You can easily imagine scenarios in which you might want to, say, evacuate people from a region. Maybe there's a natural disaster, maybe there's a military threat, and you want to send rescue operations to help evacuate the area, but you might not have access to a runway to land and then take off with your evacuation aircraft. So having a way to land in those kinds of conditions would be absolutely critical.
Speaker 1: It will be interesting to see how respondents will propose different solutions to this problem, because, again, DARPA did not specify anything. There was no mention of vertical takeoff and landing or any related technology. So we might end up seeing some really innovative solutions to this issue, and that's fascinating. In fact, I would argue that a lot of the technological advances we've seen from DARPA projects came as a result of DARPA defining the problem but giving all the different parties involved the freedom to craft their own solution to that problem. Really interesting stuff.

Speaker 1: All right, that's it for the news. If you have suggestions for topics I should cover in future episodes of TechStuff, feel free to reach out to me. One way to do that is to go over onto Twitter and tweet to the handle TechStuff HSW. Another way is to download the iHeartRadio app. It's free to download, free to use. Type TechStuff in the little search field. It'll take you over to the TechStuff page. You'll see a little microphone icon there. If you click on that, you can leave a voice message up to thirty seconds in length. I look forward to hearing from you, and I'll talk to you again really soon.

Speaker 1: TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.