Speaker 1: Welcome to TechStuff, a production from iHeartRadio.

Speaker 1: Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are ya? It's time for the tech news for Tuesday, January 17, 2023, and let's start off with cryptocurrency. Now, I'm sure most of y'all know I'm pretty skeptical of crypto in general, and I've got a lot of bones to pick with cryptocurrency. But I also think it's only fair to report on when the market gets some momentum in a positive direction. So Reuters reports that Bitcoin is up 26 percent since the beginning of this year and has climbed above the $20,000-per-coin mark for the first time in months. Other crypto coins are also performing well, many of them kind of, you know, following in Bitcoin's footsteps. And some so-called meme coins, which others would argue are trash coins, are going absolutely bonkers. You're seeing like a 5,000 percent increase in value for some of these. Now, I should add that these meme coins are usually worth fractions of a penny, so even a small amount of growth ends up looking huge if you talk about percentage, right? If it went from 0.0000008 to 0.0000010, that's a huge jump in percentage, but in actual money it's not much at all, right?
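To make that percentage point concrete: percentage change is just the price difference divided by the starting price. A quick illustration in Python, using the example prices above (illustrative numbers, not any real coin's price):

```python
old_price = 0.0000008  # dollars per coin, from the example above
new_price = 0.0000010

pct_change = (new_price - old_price) / old_price * 100
print(f"Percent change: {pct_change:.0f}%")  # Percent change: 25%

# The move per coin is only $0.0000002, so a dramatic-sounding
# percentage can correspond to almost no actual money unless you
# hold an enormous number of coins.
```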
Speaker 1: Also, Reuters rightfully points out that these meme coins tend to be even more unstable than other types of crypto, and the value can drop just as quickly as it can go up. See also the coin that was all the rage for about a month a couple of years ago before the bottom fell out. Reuters also cautions that this upturn may not sustain itself, that the recent gains in cryptocurrency are probably connected to an overall positive outlook on the global economy and people making a bet that inflation is, at least for the moment, kind of done. Obviously, any big change could send values down again, and as we all learned with Russia's invasion of Ukraine, sometimes it's just an unpredictable global event that can really have a huge impact on the economy. So nothing is ever certain, and the changes are not always predictable. So I guess what I'm saying is crypto appears to be recovering, and I think I said last year that I didn't believe that the setbacks we saw in late 2022 were the death knell for crypto in general. It was just a real reckoning for crypto and blockchain and related topics. So I suppose we're gonna have to keep an eye on stuff like how governments are going to treat the crypto exchange Binance, for example, to see where crypto goes in 2023, because it's not out of the woods, right? There are plenty of governments around the world that are taking a more critical view of cryptocurrency and looking at imposing regulations. So while we're seeing some improvement right now, that could be just a short-term gain. It's too early to say, but I don't know, maybe we'll see something kind of akin to what we saw in 2021, where the value really went up dramatically in the early part of the year.

Speaker 1: Now we've got several stories about AI, and when I said I thought AI was going to be a big topic in 2023, I didn't necessarily mean it would dominate headlines in mid-January. I suspect we're gonna see AI coverage calm down a bit later this year. Like, I think it's always going to be a big part of the news in 2023, but I don't think it's going to be quite as dominant once things initially calm down. Right at the moment, it's sort of the scary new thing that has media outlets jumping to cover, and I don't really think it's necessarily that scary. It's just that's how it's being treated.

Speaker 1: But first up: Getty Images, which maintains a huge library of stock photography and artwork that people can license to use for their works, is suing the company Stability AI.
Speaker 1: Getty Images accuses Stability AI of scraping the library of images at Getty Images in an effort to train an AI art tool called Stable Diffusion. So, quick explanation here. One discipline within artificial intelligence is machine learning, or you might even argue that it's adjacent to artificial intelligence; the Venn diagram has a lot of overlap. And machine learning, just like AI, can take lots of different forms. But one common practice in machine learning is to teach a computer model how to do whatever it is the model is supposed to do by feeding the model lots and lots of information. For example, let's say you wanted to teach an AI system to recognize images of cats. Well, you would want to feed the model tens of thousands of pictures of kitty cats, but you would also want to feed lots of pictures that have no cats in them at all to the model so that it could tell the difference, and you would continuously tweak the model so that it would get better and better at distinguishing which images actually have cats in them. Well, if you want an AI image generation program that makes images that are actually, you know, recognizable as the sort of stuff you prompted the AI model to create, you have to feed it lots and lots of images, right? Otherwise you could just get these random shapes and colors that don't look like anything, or that look like really disturbing versions of whatever it was you wanted to create.
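The "feed it labeled pictures and keep tweaking" loop described above looks roughly like this in practice. This is a minimal sketch in Python with PyTorch, with a toy model and randomly generated stand-in data; nothing here is from Stability AI's actual code:

```python
import torch
import torch.nn as nn

# Toy cat/not-cat classifier: 64x64 RGB images in, one score out.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 128),
    nn.ReLU(),
    nn.Linear(128, 1),  # positive score means "cat"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in for a real labeled dataset: random "images", random labels
# (1.0 = cat, 0.0 = no cat). A real project would load actual photos.
loader = [(torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8,)).float())
          for _ in range(10)]

for images, labels in loader:
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)  # how wrong was the model?
    optimizer.zero_grad()
    loss.backward()                 # which way should each weight move?
    optimizer.step()                # "tweak the model" a tiny bit
```

Each pass through the loop nudges the model toward telling the two classes apart, which is the continuous tweaking the episode describes.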
Speaker 1: So Getty Images has a truly enormous supply of photographs. They stretch across decades. Some of them are work-for-hire images, where photographers have set up, you know, like scenes in a studio, and they just shoot tons and tons and tons of photographs and then sell those photos to Getty Images. You get some pretty wacky versions of that. If you've ever seen pictures of a woman wearing a silvery bodysuit and a metallic visor while she's holding an ear of corn, you know what I'm talking about. And yes, that's a real stock image. Most of you, I'm sure, have seen it somewhere at some point. Other images come from, like, paparazzi who take pictures of celebrities and notable figures and then license or sell their photographs to Getty. And what Getty is saying is that Stability AI used software to scrape Getty's images and copy them, without paying for them, without paying a license for them, and then used those images to train AI that will ultimately be used to create a competing product, right, a competing service. Like, instead of going to Getty Images to license a stock photograph that represents whatever it is you want, you would go to Stable Diffusion and have it generate an image based on what you want. But Getty says, well, first of all, you're using our images to train your computer to do this, and then you're going to introduce something that competes against our own business; that's an unfair business practice. Now we're starting to get into an area where the law is really lagging behind. The law is not designed to deal with this kind of intellectual property issue, so it does sound like this method is creeping in on a violation of intellectual property laws, or at least tiptoeing around them. But without actual law or court precedents with AI, it's a gray area at best. My guess, though, is this year is going to be one where we start to see that gray turn into more black and white sooner rather than later, because it is becoming a pressing and relevant issue.

Speaker 1: On a similar note, a group of artists have joined to file a class action lawsuit against three companies: DeviantArt, Midjourney, and our buddy Stability AI that we just talked about. Like Getty, these artists argue that the companies have made illegal use of copyrighted works in order to train their respective AI models to generate images.
Speaker 1: Ars Technica has a great article about this, and they cite an AI analyst named Alex Champandard, who has pointed out some potential problems with the lawsuit. For one, the lawsuit states that, quote, "When used to produce images from prompts by its users, Stable Diffusion uses the training images to produce seemingly new images through a mathematical software process. These 'new' images are based entirely on the training images and are derivative works of the particular images Stable Diffusion draws from when assembling a given output. Ultimately, it is merely a complex collage tool," end quote. So they're really saying that there's nothing original in any image that Stable Diffusion produces, that everything has come from one element of one of the images in its massive mound of training images. That's not really how generative AI works. The language says the AI is just taking this massive amount of content as a starting point and then using that to create a new image, almost like, well, let me take the nose from this one, and the eyes from this one, and this smile and that hairstyle, and then putting them all together to make the quote-unquote new image. But that's not actually how these models work; it's an oversimplification. It's more akin to how a human artist would study works made by other people before they start producing their own work. They would not necessarily be copying someone else's work directly. I mean, that could be an exercise to see if you can master the same techniques as some other artist, but if you're making your own work, you're not trying to copy someone else's work. Instead, you're using other people's work as sort of an inspirational launching ground on how to proceed. That could include things like a color palette, the brush technique, the perspective, all these sorts of things. So that's a little bit closer to what these AI models are doing.
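For the curious, here is the flavor of what a diffusion model like Stable Diffusion is actually trained to do, in a heavily simplified PyTorch-style sketch: it learns to predict the noise that was mixed into a training image, rather than storing images or cutting pieces out of them. The `model(noisy, t)` signature and the `alpha_bar` noise schedule here are assumptions of this sketch, not Stability AI's real code:

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(model, images, alpha_bar):
    """One simplified training step for a denoising diffusion model.

    alpha_bar: 1-D tensor of cumulative noise-schedule products,
    one entry per timestep (an assumption of this sketch).
    """
    t = torch.randint(0, len(alpha_bar), (images.shape[0],))  # random noise levels
    noise = torch.randn_like(images)
    a = alpha_bar[t].view(-1, 1, 1, 1)

    # Blend each training image with pure noise according to its level.
    noisy = a.sqrt() * images + (1 - a).sqrt() * noise

    # The model's only job is to guess the noise. Learning to undo
    # noise at every level is what later lets it turn pure noise into
    # a brand-new image, guided by a text prompt.
    predicted = model(noisy, t)
    return F.mse_loss(predicted, noise)
```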
Speaker 1: And Champandard points out that the lawsuit could lead to nothing if the defense is able to argue that what they're being accused of doing is just not true. But then, as I mentioned earlier, the real issue is that there isn't law that outlines the parameters by which generative AI can rely upon existing copyrighted works as training material. I mean, presumably, if we're looking at the written word, most writers have read a lot before they start writing. So as long as a writer is not plagiarizing some other writer, would you argue that a writer owes money to all the authors that they themselves have read before they produce their own work? Because surely that reading influences your own style; it influences your own sensibilities. So if we take this to an extreme, you would say, ah, you owe money to anyone who has influenced you in the production of your own work. That's clearly not realistic or, you know, reasonable. But when it comes to AI, it gets a lot more tricky, because you're literally using these materials to train up the AI to make its own. So, yeah, we're gonna have to keep a close eye on how this develops throughout this year. We're gonna take a quick break. When we come back, we'll talk about some more tech news.

Speaker 1: We're back, and we're not done with AI yet. Several news outlets have reported that the ChatGPT 3.5 system successfully passed sections of the US bar exam. That's the exam that lawyers have to pass here in the United States before they can lawyer, or something. I understand that the bar exam is very challenging and that for lots of lawyers it may take more than one attempt to pass. Now, to be clear, ChatGPT did not pass the full exam, and it is certainly not currently allowed to practice law.
Speaker 1: Rather, it passed sections on evidence and torts. And I was disappointed to find out torts are not pastries filled with fruit. Mmm, raspberry torte. The program fell short when it came to taking the full exam: it scored 50.3 percent on a test that sets a minimum passing grade of 68 percent. And I don't think anyone is expecting the US legal system to recognize AI lawyers in the near future, but some in the field have already expressed concern about this, and then others have said these tools could be really useful as a way to augment human lawyers as they go about their jobs. For example, this kind of chatbot might be able to generate a first draft of deposition questions, but a lawyer would then subsequently go through this draft and refine it. So, in other words, the chatbot could do what we hear advocates say AI is good for all the time: it can make jobs easier and more efficient, but it doesn't actually replace the human behind the job. Of course, for it to do that, it has to work well enough to be useful rather than distracting or counterproductive, and as we've seen with ChatGPT, it's really impressive, but it's not there yet.

Speaker 1: AI-powered chatbots are not likely to disappear either. In fact, I'm sure we're gonna hear a lot more about them. Microsoft just announced that it is increasing access to ChatGPT, making it generally available to its customers. In an earlier news episode, I mentioned how Microsoft is apparently planning a massive investment in OpenAI, that's the startup behind ChatGPT, and effectively it would mean Microsoft would end up buying just a skosh under half ownership of the startup. Well, Microsoft has not yet confirmed those reports, but it has indicated that it will let more Microsoft customers make use of the ChatGPT tool via Microsoft cloud services. What this means in the near and long term, I can't really say.
Speaker 1: I sure do hope that we do not get an era of AI-generated news and entertainment, because that would be kind of the opposite of what we really want AI to do. Like, we want it to augment, but not replace. So I don't want to see it doing stuff like what we saw with CNET, where they just assigned AI these writing jobs and it churned out low-value articles. I don't really want to see that. I mean, obviously, what I want to see is high-value articles created by talented human writers. But yeah, it would be pretty darn hard to be a professional creative in a world where we accept the level of creative output generated by AI as being desirable. It's very hard already to make your living as a creative with all the competition that's out there. It would be even more difficult if we say, yeah, this AI stuff, it's not great, but it gets the job done, so let's just go with it. That would make it even harder.

Speaker 1: Musician Nick Cave knows what I'm talking about. A fan sent him an AI-generated song that mimicked the musician's style. And for those of you not familiar with Nick Cave, of Nick Cave and the Bad Seeds fame, he wrote the song "Red Right Hand," which is featured heavily in the Scream franchise; it's also in a lot of other media. He's known for dark, gothic, and often melodramatic ballads and other styles of songs, and I recommend listening to him. Depending upon your mood and the song you've picked, you'll either find him really intriguing or very, very silly, or perhaps both. Anyway, Nick Cave very much did not like the AI-generated piece. He said, quote, "Songs arise out of suffering, by which I mean they are predicated upon the complex internal human struggle of creation, and, well, as far as I know, algorithms don't feel," end quote. He also said the song was quote "a grotesque mockery of what it is to be human, and, well, I don't much like it," end quote.
Speaker 1: And, you know, I find it difficult to disagree with him. I think it's pretty true. The song had lyrics like, quote, "In the depths of the night, I hear a call, a voice that echoes through the hall. It's a siren's song that pulls me in, takes me to a place where I can't begin," end quote. It's not exactly meaningful. I think I mentioned in an earlier news episode that I actually tried to have ChatGPT create a haiku, and the program produced something that superficially looked like a haiku, but it lacked any poetic value, and further, it didn't adhere to the structure of a haiku poem. And generally that's what I've found when I've used ChatGPT. Honestly, I find that a little surprising. Like, I would think the rules part would be something that a program would be better at handling. Clearly that's not what ChatGPT was intended for, so, I mean, I've gotta cut it some slack in that sense. But when it comes to works of art that have a specific form and structure that you're supposed to follow, I would think computer programs would be better at doing that. It wouldn't necessarily mean the stuff created would be any good, but it would at least adhere to the rules of the art form. So, for example, with sonnets, it would adhere to the number of verses and the rhythm of them and the rhyme scheme. But it doesn't. So, yeah, it's kind of surprising to me, except for the fact that I do know ChatGPT wasn't exactly designed to be a sonnet-producing machine.
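He has a point about the rules part: checking the five-seven-five syllable structure of a haiku is exactly the kind of mechanical rule a short program handles well. A rough sketch in Python, using a crude vowel-group heuristic for syllables (a real counter would use a pronunciation dictionary):

```python
import re

def rough_syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels, discount a silent final "e".
    w = word.lower()
    count = len(re.findall(r"[aeiouy]+", w))
    if w.endswith("e") and count > 1 and not w.endswith(("le", "ee", "ye")):
        count -= 1
    return max(1, count)

def looks_like_haiku(poem: str) -> bool:
    lines = [ln for ln in poem.strip().splitlines() if ln.strip()]
    counts = [sum(rough_syllables(w) for w in re.findall(r"[a-zA-Z']+", ln))
              for ln in lines]
    return counts == [5, 7, 5]

poem = "An old silent pond\nA frog jumps into the pond\nSplash! Silence again"
print(looks_like_haiku(poem))  # True, at least by this rough count
```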
Speaker 1: But yeah, we're not at the point where these programs are capable of generating artistic content that has real merit to it. It doesn't mean we won't get there, and it doesn't mean that, you know, it can't at least mimic the lower forms of pop art. And I don't mean to dismiss pop art, because I love a lot of pop art, but it's hard to argue that certain popular songs aren't trivial and simple when they very much are. But yeah, I'm with Nick Cave on this one.

Speaker 1: Okay, moving away from AI: Meta is in the news again, and for once, Meta is the one doing the suing. So in this case, Meta is suing a company called Voyager Labs, and Meta says that Voyager Labs created 38,000 fake accounts as part of an effort to scrape Facebook and Instagram and other social networks for user data. So Meta says that the effort pulled data from more than half a million Facebook pages and Facebook groups, including stuff like the posts people were putting onto Facebook, their photos, their friends lists, and any other information that was set to be publicly viewable. And, well, people who may have set their profiles so that only their friends can view them, you have to remember, they could still end up popping up on other pages. So even if you are someone who isn't on Facebook at all, some of your information may be up there just because some friends of yours shared some stuff. So Voyager Labs is a company that's based in Israel, and it describes itself as an AI-powered investigation company, and it mostly works with agencies like law enforcement or military organizations, and one of those is apparently the Los Angeles Police Department. But it's not just here in the United States; this company does business all around the world. So Facebook has long maintained that scraping its sites for data violates Facebook's policies, that you are not allowed to do that. You cannot create tools that are meant to go across all of Facebook and just pull down as much data as you possibly can. Only Facebook is allowed to collect personal information on that kind of scale and then exploit it. I guess what I'm saying is I'm just grouchy. I hate all this data collection stuff. It doesn't matter to me who ends up using it. I just think it's a bad thing that we've allowed to happen in general.
Speaker 1: Now, admittedly, it is way more scary when the agency that's making use of this information is, say, law enforcement or the military, because we've got very long histories of those kinds of institutions disproportionately harming specific populations. So I don't mean to diminish this; it is definitely scarier that this is being used in relation to things like law enforcement. But in general, I just think this massive amount of data collection and analysis is inherently harmful, whether it's a law enforcement agency that's relying upon it or it's the platform itself, like Facebook or Instagram. It's just that the older I get, the less comfortable I am that we allowed this to happen and that it has grown to the extent that it has. Okay, I don't mean to be a fearmonger. Let's take another quick break. When we come back, we'll have some other news stories to talk about.

Speaker 1: We're back. Over in the UK, Parliament is drafting an amendment to the Online Safety Bill. That's a bill that's intended mostly to protect children against harmful content that is proliferated upon the Internet, and this particular amendment is likely to have tech executives sweating. So a group of lawmakers proposed an amendment last year that would seek criminal charges against tech executives who failed to protect children from harmful content on their respective platforms. So, in other words, Mark Zuckerberg could be held criminally responsible for allowing harmful material to perpetuate on Facebook, and if the court could show that he didn't do enough to protect children, then he could face criminal charges for that in the UK. So this whole thing created kind of a lively debate within Parliament between the desire to protect children, which is obviously important, and protecting freedom of speech, which is also really important.
Speaker 1: In fact, that's why we have these concepts like safe harbor on the Internet, where a platform is not necessarily responsible for the content that its users post to that platform, because the platform didn't generate it; it's just a place where people gather. So this caused kind of a struggle within Parliament, and ultimately the UK Prime Minister has indicated that what they're going to do is create a similar amendment to the proposed one, one that aims for the same goal and will hold tech executives responsible for a failure to protect children, and that will then be drafted and amended to the Online Safety Bill. The original proposed amendment will be withdrawn. I can't pretend to fully understand the political process here, except that maybe the goal is to have a more focused amendment put in place. Anyway, this would be a really dramatic step if it does in fact go through, which it seems like will be the case, and it should really worry anyone who is a leader in a social network, because it could mean that they could potentially be charged as a criminal in the UK in the future if they failed to live up to the Online Safety Bill's requirements to protect children. And again, I think protecting children is absolutely critical. I have worried about social networks' effects on children, even beyond the really obvious stuff. But yeah, this is a dramatic step, and we'll have to see how it plays out. It also points to the kind of precarious position that the UK Prime Minister is in, because his own party is not fully united. So sometimes there are going to be cases where they're going to be making some compromises that they otherwise probably wouldn't if they had a united party behind them, the Conservative Party in this case. So, yeah, it probably means we're gonna see some more political drama over in the UK this year.
Speaker 1: Next up: the Sony Walkman is coming back, sort of. So, for those of y'all too young to remember, the original Walkman was a portable cassette player. It really helped promote music cassettes, as being able to take your music on the go was a new thing when the Walkman first debuted back in 1979. You know, remember, we didn't have streaming services or MP3s and such back then. We had cassettes that stored music on magnetic tape that would sometimes get tangled up, and then you'd have to untangle the tape and use a pencil to wind it back up again. And we liked it! Anyway, the new Walkman models aren't cassette players at all. Instead, they are digital media players, kind of like a fancy version of an iPod, and Sony is positioning them as being able to provide lossless audio quality, meaning you get, you know, CD-quality audio out of these babies instead of the super-compressed stuff that you would get if you were just listening to your basic streaming service. Of course, this will depend upon how the music was encoded in the first place; it can't just magically make all music sound better.
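For a sense of the gap he's describing: uncompressed CD audio has a fixed, easy-to-compute bitrate of sample rate times bit depth times channels. A quick back-of-the-envelope in Python (the 256 kbps figure is just a ballpark for lossy streaming, not any particular service's number):

```python
# CD audio parameters (the "Red Book" standard).
sample_rate = 44_100  # samples per second
bit_depth = 16        # bits per sample
channels = 2          # stereo

cd_kbps = sample_rate * bit_depth * channels / 1000
print(f"CD audio: {cd_kbps:.0f} kbps")        # CD audio: 1411 kbps

lossy_kbps = 256  # ballpark rate for a typical lossy stream
print(f"Ratio: {cd_kbps / lossy_kbps:.1f}x")  # Ratio: 5.5x

# Lossless formats like FLAC shrink the CD stream without discarding
# anything; lossy codecs get much smaller by throwing away detail the
# encoder judges you're unlikely to hear.
```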
Speaker 1: And you might wonder: is this product going to do well? Will it take off? Now, I'm a little skeptical. I think most folks have proven that they value convenience more than they do quality. I include myself in this category. I mean, obviously we would all love our music to sound as good as it possibly could; that, I think, goes beyond question. Like, we'd all want that. But if we're making a choice between that and being able to have our music whenever and wherever we like, we're probably gonna go with the second choice. Unless you're an audiophile, in which case you might just be like, I can't even, I can't even stomach the thought of listening to highly compressed audio. But, I don't know, maybe we're headed back to an age where people rely on specific technologies to perform specific tasks. It has been years since I've carried a dedicated digital media player, because my phone can do all of that stuff. But maybe we're seeing that trend take a turn. Maybe we're going to head back to where people start to go with more simple phones, and they go to dedicated digital media players, and they separate out their tech again. That would be weird to me, but I don't understand young people anyway, so maybe this is what young people are doing. I don't get out much.

Speaker 1: Finally, tomorrow officially marks the point where Google will end the Google Stadia service. Now, if you're not familiar, Stadia is, and pretty soon I'll have to say was, a cloud-based streaming gaming platform with a WiFi-connected controller. So subscribers could purchase this game controller, and then, using a compatible system like a television that has Chromecast capabilities, they could access a selection of game titles. They could also purchase games to add to their virtual library, and then they could access that game at any time. But Stadia struggled and flailed around a bit, and Google decided to pull the plug. However, the company is doing some stuff to take a little bit of the sting out. For one thing, Google says it will release a quote "self-serve tool to enable Bluetooth connections on your Stadia controller" end quote. So at least that means Stadia owners will be able to pair their otherwise useless controllers with other systems, such as a PC, and use the Stadia controller as a standard wireless gamepad kind of device. This is a pretty big deal, because previously there were only two ways you could connect the Stadia controller to something else.
Speaker 1: You could use a physical USB cable, but you know a lot of folks hate using wired controllers, so that was a non-starter for a lot of people. Or you could use WiFi, but nothing else uses WiFi as a connectivity standard for game controllers, so unless Google enables Bluetooth support, the controller just can't connect wirelessly to other devices. Anyway, it's good to hear that Google is at least making some efforts to keep Stadia controllers from transitioning from that thing you never used because you forgot about the service into e-waste. It extends the useful life of the hardware, and that's important. It also means that once that Bluetooth connectivity is enabled, I can use my Google Stadia controller with my computer and play games on it without having to, you know, use an Xbox controller or something like that. So I'm all for it.

Speaker 1: And that's it. That's the tech news I have for you for Tuesday, January 17, 2023. If you have suggestions for topics I should cover in future episodes of TechStuff, please reach out to me. You can download the iHeartRadio app and navigate over to the TechStuff page using the search field. There's a little microphone icon; click on that and you can leave a message up to thirty seconds in length. Let me know what you would like to hear. Or, if you prefer, head on over to Twitter and send me a message using the handle TechStuffHSW, and I'll talk to you again really soon.

Speaker 1: TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.