Speaker 1: Get in touch with technology with TechStuff from HowStuffWorks.com.

Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with How Stuff Works and iHeartRadio, and I love all things tech. In my last episode, I advocated a critical thinking approach to the subject of driverless cars. Today we're gonna do sort of the same thing, but with a different subject. This time it's the singularity, specifically the technological singularity. I've done an episode about the singularity before, but the last time I really talked about it was about five years ago, and I think attitudes have changed a bit in the years since I did that episode.

So I have argued, and will continue to do so, that the technological singularity is still a very vague term. It means different things to different people. It can even mean different things to the same person depending upon the situation. In fact, if you'll allow me a little allusion to literature, since I do have a background in scholarship and English lit that I rarely get to use, this actually reminds me of Humpty Dumpty in Through the Looking-Glass. Humpty Dumpty in that book talks about how words mean only whatever he wants them to mean, and all it takes is the will to use a word however you want, and the word has to obey. That seems to be the general feeling around the technological singularity. But if we're to explore this concept, we're going to have to define it at least a little bit. So here goes my attempt.

First, let's go with the singularity in cosmology. What does it mean in outer space? What is it in physics? In physics, it refers to the one-dimensional point at the very center of a black hole. This is the point where density and gravity are infinite. It's the point where spacetime curves infinitely, and, most importantly for the concept of the technological singularity, this is the point where the laws of physics no longer apply.
They break down. So the technological singularity is ultimately a metaphor based on this cosmological notion. With the technological singularity, the idea is that we arrive at a moment where technology is able to iterate and evolve at a rate so fast that it is impossible for us to envision or predict what happens next. Or it's about superhuman intelligence. Those tend to be the two big ones. Generally we imagine that superhuman intelligence is what actually brings about this era of rapid technological iteration and evolution, and the nature of that intelligence is still a matter of conjecture, because as far as we can tell, it hasn't happened yet. Some days we struggle with normal human intelligence.

But there are some big, broad categories we can talk about as possibilities for this superhuman intelligence scenario. There's the artificial intelligence version, in which computer scientists design a system of such sophistication that it operates on a human-like level of intelligence. This AI in turn begins to design its successor, which is an attempt to improve upon the previous generation. This next AI ends up being smarter, faster, and stronger than its creator was, and it goes on to create the next generation of artificial intelligence, which by definition is going to be better designed than anything we humans could accomplish, because generation two is already better than human-level intelligence. And bam, at that point you have the intelligence explosion. You have a superhuman intelligence beyond any capability of human thought, and because it's operating at a level beyond human capability, we cannot possibly guess at what will come next, because we are only human. We can't conceive of what will happen with superhuman intelligence because we don't possess it.
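To make that compounding feedback loop concrete, here is a minimal toy sketch in Python. The starting level, the improvement rate, and the number of generations are all invented for illustration; nothing about real AI systems supports these particular numbers, only the shape of the argument.

```python
# A toy model of the intelligence-explosion feedback loop described above.
# Every number here is made up purely for illustration.

def intelligence_explosion(human_level=1.0, improvement_rate=0.10, generations=10):
    """Each generation designs a successor a bit more capable than itself.

    The conjecture (and it is only conjecture) is that a more capable
    designer makes proportionally larger improvements, so growth compounds.
    """
    level = human_level  # generation one starts at roughly human level
    for gen in range(1, generations + 1):
        level *= 1 + improvement_rate  # smarter designer, bigger next step
        tag = "  <- beyond its human designers" if level > human_level else ""
        print(f"generation {gen:2d}: capability {level:5.2f}{tag}")

intelligence_explosion()
```

The exponential growth is the whole point of the scenario: each pass through the loop produces a designer that makes the next pass bigger.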
This vision of the future is sometimes, but not always, accompanied by the notion that these machines are going to gain sentience, that they will have some form of consciousness, and therefore they are consciously making these improvements and have some sort of sense of self. That also suggests the possibility that they might develop motivations and wants and needs. This leads us to some scenarios where it all develops into a Terminator- or Matrix-like dystopia, in which the machines have come to the conclusion that we human beings are a blight upon the earth and have to be wiped out. Or it could lead to machines subjugating humans in some way, that we are the ants to their colossus, and we can either be ignored or put to work at some menial task. Or maybe the machines would just be super nice, high-tech guardian angels granting us wishes: whatever we want, they make it possible, because they're intelligent enough to do it. But the point is that the machines are the ones making the decisions, and they would be doing so beyond what we could do ourselves, and therefore we have no way of predicting what would come next or what the future would be like, because it's, by definition, outside of our own experience.

Another version of this superhuman intelligence that will bring on the technological singularity comes about through biological or technological boosting of our own intelligence. So instead of making computers super smart, we make ourselves super smart. Possibly, with the use of computers, we discover a way to augment our intelligence beyond what was previously thought possible. Now, in a way, you could argue that the history of humanity has been one of using technology to augment our intelligence, though ancient philosophers like Socrates would disagree.
Socrates would say, don't write anything down, because if you write things down, you don't have to keep that information in your head, and if you let information out of your head, then you're less intelligent as a result; you didn't have to keep it up there in your noggin. Similar arguments were made in the wake of the Gutenberg printing press when that was invented. And more recently, much more recently, Nicholas Carr wrote about the same idea, only his target wasn't the book or the printed word, it was Google. He wrote "Is Google Making Us Stupid?" So ever since we started writing things down, there have been people who have said this technology is going to make us all dumb, because you don't have to think if you have this stuff to rely upon. But the counterargument is that the technology can augment our intelligence. We can offload some of the cognition onto the tech that we're carrying or depending upon, and that frees up more of our capacity to reflect, or to do free association of seemingly separate and distinct ideas, to bring them together and come up with innovative solutions to hard problems. So I think it all just depends upon your point of view and your approach to using technology.

Anyway, in this vision of the singularity, human beings become endowed with superhuman intelligence, which might very well mean we would no longer really be what you and I would consider human at that point. This is the transhuman vision of the future. The end result would again be that we reach a point where we as humans, flawed and limited in our intelligence, are incapable of imagining what would come next, because we cannot put ourselves in the place of a superhuman intelligence. We lack that intelligence. Whether it's machines or us, or some combination of the two, the basic idea of the technological singularity hinges upon the emergence of this superhuman intelligence in almost every case. So where did that idea even come from?
Well, back in eighteen forty-seven there was a publisher named the Reverend Richard Thornton who wrote a piece about a mechanical calculator, and it illustrates my earlier point about how people have talked about technology making us dumb or being a hindrance to thinking. He wrote that such a device, if placed in a school, would do quote "incalculable injury" end quote. I don't think he was necessarily intending to make a pun about a calculator, but he did go on to say that these machines might get better and better, and what might happen next? He said maybe they would be able to calculate ways to improve their own design and quote "grind out ideas beyond the ken of mortal mind" end quote. So he's saying these machines themselves might even come up with a method of making a better machine, which in turn might make even better ones, and eventually get to a point where they could produce information that we humans can't even dream of. And this was back in eighteen forty-seven, before the electronic computer.

Alan Turing, whom we often associate with this idea of machine intelligence, wrote a paper in nineteen fifty-one in which he hypothesized that once machines reach a level of human-like thinking, or at least appear to think like humans do, it wouldn't take long before they would outpace our own capabilities. And he cited the nineteenth-century writer Samuel Butler in this. Butler himself had said there is "no security against the ultimate development of mechanical consciousness" in "the fact of machines possessing little consciousness now," essentially saying that just because machines are dumb right now doesn't mean we can discount the possibility that they will one day be intelligent. He was making an observation about how quickly machines were advancing, and he was comparing that against the very slow process of evolution in biology. He was saying, you know, it took millions of years for lower organisms to evolve into higher-thinking organisms, but look how quickly technology is developing.
It won't be long at all before those are thinking, before machines can think. That seems like somewhat faulty reasoning to me, but we'll move on.

John von Neumann, whom I talked about in episodes of TechStuff this year, reportedly talked about a singularity that would happen once technological progress occurred so quickly, and became so complicated, as to be incomprehensible. So this is more in the vein of technology rapidly iterating and evolving, not necessarily tied to a superhuman intelligence. He told his buddy Stanislaw Ulam about this, and it was Ulam who reported the conversation, so it's secondhand information. Then there's I. J. Good, who was one of Turing's peers at Bletchley Park when they were working on cryptography during the Second World War, and he wrote about a future "intelligence explosion." He described the scenario that I mentioned earlier, of machines designing ever more capable successors and leaving human ingenuity in the dust in the process. His work would inspire a science fiction author named Vernor Vinge, who presented the earliest recorded version of the phrase "technological singularity." He wrote it in a piece that was published in Omni magazine in nineteen eighty-three. A decade later, Vinge predicted, quote, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended," end quote. Sounds pretty ominous, but honestly, it could just mean that we humans would be evolving into something new. Now, that was a prediction he made in nineteen ninety-three, and he said it was thirty years in the future, which means we're coming up on that deadline awful soon. It would mean that we would be there in, let's see, two thousand twenty-three by my math.

Vinge also suggested four ways this singularity could emerge. Computers could reach a certain level of sophistication and become awake, and, you know, therefore conscious, and they would be superhumanly intelligent.
Or computer networks could become suitably complex for intelligence and consciousness to emerge from that complexity, so the network itself and the associated users would become kind of a collective superhuman intelligence. There could be a computer-human interface future, where these interfaces are so sophisticated that we can augment our intelligence seamlessly with machines, and together we are superhumanly intelligent. And finally, he said, we might figure out a way to biologically boost our own intelligence. Now, I would argue that Vinge is one of the two really famous champions of this concept of the technological singularity. The other one I will cover in just a moment, but first let's take a quick break to thank our sponsor.

I think the person whom we most readily associate with the concept of the technological singularity, who has written numerous books on it, has given lots of talks on the subject, and has promoted the idea extensively, would be Ray Kurzweil. Kurzweil's definition is a little different from Vinge's. In The Singularity Is Near, he talks about an era of rapid technological development that will make defining the future, or even the present at that point, impossible. And I think this is closer to what Stanislaw Ulam said John von Neumann had told him. Kurzweil, I should point out, has made numerous contributions to technological development. I don't want anyone to think that I think Kurzweil is a crackpot. I don't. He was a pioneer in computer pattern recognition and speech recognition, and he did a lot of development in other areas, including electronic music. So Kurzweil is a really smart guy.
His view of the singularity tends to include some pretty far-out ideas, however, including the possibility that through this process we will discover some means of extending our lives indefinitely, including the possibility of porting our consciousness into technology somehow. In other words, finding a way to take what makes us us and put that us in some sort of machine, which would effectively allow us to live forever. Though I don't know what sort of experience that would be like. Would you just be a consciousness inside a computer? And if so, would that be a satisfying existence? Also, there's no way to even guess how this would be achieved. I'm not entirely clear what he's basing these ideas upon in the first place. It seems to me, and this is just my own opinion, that some of Kurzweil's notions are based upon a general fear of death, and that the singularity is viewed as a sort of escape mechanism from death. It's almost wishful thinking, in other words: maybe once we hit this point, we'll have conquered death itself and will never have to worry about facing our own mortality.

I still don't understand how this porting of consciousness would work. Or maybe you would just make a copy of yourself, in which case you would have a digital version of you and the meat version of you, but the meat version of you is still gonna die. Which means, yeah, there'll still be a version of you hanging around, but it's not the you that's experiencing this podcast right now. Unless you're hearing this post-singularity and you are the digital version. Where did you get this show? Okay, I'm sorry, I was losing my mind there. Again, while I think that his ideas are far-fetched from a personal standpoint, there is no doubt he is a billion times smarter than I am, so I acknowledge I could very well be in the wrong here. It's just that my critical thinking and skeptical nature are really starting to raise some red flags.
Now, there are a lot of other people who are also important in this field; I don't mean to suggest that I have covered them all. There are proponents and there are skeptics. For example, there's Hans Moravec, a computer scientist and roboticist famously working with Carnegie Mellon University. He generalized Moore's law into a broader application covering artificial intelligence and the arrival of a superintelligence. One of his predictions was that by twenty forty or so, robots will have become so sophisticated and so complex that they will effectively emerge as their own species. He penned a paper titled "The Age of Robots," and this is a quote from that piece:

"Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousandfold increase in computer power in this decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data: act on our behalf as literal-minded slaves. Growing computer power over the next half century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates, and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended inherited limitations and transformed itself into something quite new. No longer limited by the slow pace of human learning and even slower biological evolution, intelligent machinery will conduct its affairs on an ever faster, ever smaller scale, until coarse physical nature has been converted to fine-grained, purposeful thought."

Now, his ideas are predicated upon an assertion about consciousness, which is a quality that's devilishly difficult to define. In fact, I would argue it's just as difficult to define as the term intelligence.
He argues that consciousness arises from the material; that is, the mind is totally the product of our nervous system, or, if you want to be a little more generous, the combination of our nervous system and our interactions with our environment. In other words, consciousness emerges from a system if that system meets the physical criteria. If that's true, and I happen to believe that it is, then it stands to reason that if you have a sufficiently complex system with powerful enough machines, we should be able to create an artificial entity that possesses consciousness. If, however, consciousness arises from some other scientifically undiscovered, or even undiscoverable, quality, then it wouldn't matter how complicated we build our toys; they would never become conscious. In other words, if consciousness were the emergence of some other thing that science cannot address, like a soul, for example, then there's no way we could create a conscious artificial being. We can't create a soul, if that is in fact how it works. I personally feel that that's not the case, that our consciousness does arise from the material, that it does come from our nervous system, from the complexity and the electrochemical processes of our nervous system. The question I have is whether or not we will ever be able to replicate that in an artificial system. I'm not saying it would be impossible, just wondering if we will ever figure it out. It remains an open question.

Nick Bostrom, who has served as the director of the Future of Humanity Institute, has written extensively about transhumanism. I talked about that a second ago: the idea that we transcend being just humans through some process. Whether that means a computer-augmented person or a biologically augmented person isn't really important, at least from this perspective; it's very important from an ethical perspective. But he's using "transhuman" to describe someone who has moved away from what we would define as a human being today.
And like Kurzweil, he has hypothesized that the singularity will bring along with it some means of extending our lifespans indefinitely, but he feels that some of the more aggressive predictions are a little too optimistic. He has said that he felt there was a less than fifty percent chance that we will have developed any sort of superhuman intelligence by the year twenty thirty-three. He thinks it's going to happen, but it might take a bit longer than that.

Some of the people who believe, or have formerly believed, the singularity to be around the corner aren't convinced it's necessarily going to be good for us. Venture capitalist Bill Joy, who co-founded Sun Microsystems, has expressed concerns about it, and it wouldn't necessarily take a superhuman AI to do damage to us. Joy has pointed out that technology tends to advance our capabilities in all sorts of areas, including destructive ones. In other words, it gets easier and easier for smaller and smaller groups to do more and more harm. That's not true of all technologies, obviously, but technology in general does enable this. So as technology gets more sophisticated, the potential for a person, or even a small group of people, to exploit it in order to do a lot of damage also increases. We certainly see the potential for this in some technologies. For example, we've already seen 3D printers that can be used to create untraceable firearms. But Joy's concern is that we could face a much more critical existential threat as a result of these technological developments, specifically in bleeding-edge technologies like nanotech or biotechnology, and he feels that we need to build in systems, or create social pressures, to prevent those things from happening. Similarly, Eliezer Yudkowsky has advocated for the development of standards and algorithms to guide AI towards a benevolent mindset.
Now, I imagine he's an advocate for computer scientists to include ways for humans to see how machines are arriving at answers or calculations, which I agree with. I think that is very important. Transparency is incredibly important with these machines and systems. One of the dangers I see with machine learning is an approach that would block off this process from view. We wouldn't know how a machine got to the result it got, and we would just be taking it on faith. This would turn machines, computers, into black boxes: they produce results, but we don't know how or why. Such a thing would be more of an oracle than a computer, and that's not terribly healthy. I think it would be better if we built machines that are capable of showing their work, as it were, to explain how they arrived at a particular conclusion based upon the input they received. That way, humans could verify that the decisions the computers were making were actually sound and appropriate and not mistakes.
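As a rough illustration of what "showing your work" could mean in software, here is a hypothetical sketch in Python. The loan-approval scenario, the rules, the thresholds, and the field names are all made up for this example; the point is only that the system returns a human-readable trace alongside its verdict instead of behaving like a black box.

```python
# A toy "show your work" decision system. The rules and data are invented;
# real explainable-AI tooling is far more involved, but the idea is the same:
# return the reasoning, not just the answer.

def decide(applicant):
    trace = []   # a human-readable record of how the verdict was reached
    score = 0
    if applicant["income"] >= 50_000:
        score += 2
        trace.append(f"income {applicant['income']} >= 50000: +2")
    if applicant["debt_ratio"] < 0.4:
        score += 1
        trace.append(f"debt ratio {applicant['debt_ratio']} < 0.4: +1")
    verdict = "approve" if score >= 2 else "deny"
    trace.append(f"total score {score} -> {verdict}")
    return verdict, trace

verdict, trace = decide({"income": 60_000, "debt_ratio": 0.3})
print(verdict)
for step in trace:
    print("  ", step)   # every step can be audited, unlike an oracle
```

A system like this can be checked for sound and appropriate decisions step by step, which is exactly what an opaque model denies us.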
Next, I'll talk about some of the objections to the concept of the technological singularity itself. But first, let's take another quick break to thank our sponsor.

So what do the people who think the technological singularity isn't going to be a thing, or at least not a thing that's going to happen anytime soon, have to say? Well, one counterargument is that futurists and singularity enthusiasts are far too quick to apply the results of Moore's law across the board, to all the sciences and technologies that would be necessary to bring about some sort of superhuman intelligence. And this, to me, seems like a pretty solid counterargument. First of all, Moore's law is not really a law, right? It's an observation, an observation and a prediction that Gordon Moore made in the early days of silicon-based transistors. Namely, Moore stated that, due to economic demand for electronics, semiconductor companies would invest the resources needed to create more complicated chips. Now, at the time he was making this prediction, he was specifically talking about how many discrete components you could fit on a square inch of silicon wafer. Essentially, this was about how many transistors you can cram into that space. Demand would create the economic incentive to innovate, so companies would spend money on research and development, and as a result they would shrink those components down so that you could fit more of them in the same amount of space and have more powerful electronics, which would continue to create more demand. He traced this pattern back, saw that it seemed pretty steady, and predicted that it would continue.

Now, these days we tend to say that every eighteen to twenty-four months the processing power of microchips doubles. But keep in mind that's a modern interpretation, an evolution of the original observation about transistors. So Moore's law has changed over the years. Anyway, there are lots of reasons to reject any predictions that are predicated upon applying Moore's law to all technology across the board. First, even for transistors, Moore's law has largely been something of a self-fulfilling prophecy. Companies have acknowledged the prediction, and then they've worked really, really hard to keep up with it. So, in a way, the reason we've seen such rapid progress in microprocessor technology over the last several decades is that these companies have a sort of sense of obligation to keep up with Moore's law. If they didn't, it would seem like a failure, right? It would seem like they were the ones who broke Moore's law. So they keep working super hard to keep that going, and that has meant that we have reinterpreted Moore's law; that's why we think of it in terms of processing power these days and not the number of transistors. But we know this is not sustainable indefinitely, not based on our current microprocessor architecture.
At the least, we will hit physical limits based on those designs that we will eventually not be able to engineer around if we stick to that architecture. Now, that does not mean we won't find alternative approaches; we might. However, that might also mean we won't see progress continue at that same rapid pace. We might see improvements, but on a slower scale, which means we don't advance as quickly. As it stands, over the last couple of years we've seen more advances in processing by changing up the way we do the work than through the hardware itself. In some cases that's through parallel processing, either by having multi-core processors or even by leveraging graphics processing units, GPUs, to do some of this work.

Now, that's one objection to Moore's law. But another is that just because a general observation applies to one part of technology doesn't mean it applies to all parts of technology. That would be crazy. Let's say I took Moore's law and tried to apply it to cars, and I said, the top speed of this car model I just bought is a hundred twenty miles per hour, and I'm gonna buy a brand new one in two years, because Moore's law states it's gonna be twice as powerful: in two years it's gonna go two hundred forty miles per hour, and two years after that it's gonna go four hundred eighty miles per hour. Well, if I told you that, you would say, you're nuts, that's not how it works. And you would be right. That isn't how it works. The mechanical performance of engines does not follow the progress of Moore's law. Moore's law does not apply to all technologies evenly across the board.
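Here is the arithmetic from that car example as a small Python sketch. The speeds come straight from the example above, and the function is just a generic doubling projection, not a claim about any real technology.

```python
# Naively applying a "doubles every two years" rule to a quantity
# it was never about. Numbers are from the car example in the text.

def doubling_projection(initial, period_years, horizon_years):
    """Project a value that doubles every `period_years` years."""
    return {year: initial * 2 ** (year // period_years)
            for year in range(0, horizon_years + 1, period_years)}

for year, speed in doubling_projection(120, 2, 6).items():
    print(f"year {year}: {speed} mph")
# year 0: 120 mph, year 2: 240 mph, year 4: 480 mph, year 6: 960 mph
# The absurd output is the point: exponential growth in transistor
# density says nothing about mechanical engineering.
```

The projection is internally consistent; it's the premise of applying it outside microelectronics that fails.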
So we can't say that this rule, and keep in mind Moore's law isn't really a rule, this rule for microprocessor progress, necessarily applies to other sciences and technologies. And a technological singularity dependent upon the emergence of a superhuman intelligence is going to depend upon a lot of different technologies, not just raw computing power. So that's one big objection.

There are others as well. Many of these predictions assume that we're going to see computer scientists develop software that can leverage the hardware, allowing a superhuman intelligence to emerge. Jaron Lanier, who is a pioneer in many fields of technology, like virtual reality, has said that there's been no evidence so far that any human programmer is capable of building something like that, and ultimately a human programmer is going to have to build the software that will enable the hardware to get to the point where consciousness will emerge. I mean, if I make a super powerful transistor but there's no software running on it, nothing happens, right? Consciousness doesn't just magically occur, so we have to figure out the software side and the code side in order to make this happen. And Lanier argues, and I happen to kind of agree with him, that there's nothing to suggest we could even do that. So the hardware is only part of the problem.

Meanwhile, you have a lot of people in neuroscience, brain experts, who roll their eyes at the notion of an artificial intelligence with superhuman capabilities, because they point out that we have a very primitive understanding of our own brains and of the concept of intelligence as it applies to human beings, let alone to machines. There are enormous gaps in our knowledge when it comes to how our brains work and how we process information. We have a really good, solid foundation in that area, but there are an awful lot of details we just don't understand yet, and there's an enormous void when it comes to the topic of consciousness. We can draw some very broad conclusions about consciousness based off of medical experience.
For example, in the medical record there are plenty of cases of people who have experienced various types of trauma to the brain and who have also experienced changes in their consciousness. This reinforces the idea that consciousness arises from the physical matter and the electrochemical processes in the brain. But beyond those very broad conclusions, we just don't know very much about it. And even these ideas don't have universal adoption; they broadly do in the medical field, but there are plenty of people who disagree with this kind of concept. Consciousness is one of those things we tend to define by what it isn't rather than what it is, just like intelligence. So many experts in that field say it's premature, to say the least, for us to even speculate about creating, at least purposefully, any sort of artificial intelligence, especially one that possesses consciousness.

So even if you do allow for this superhuman intelligence to arrive, making other leaps, such as the idea that we're going to find a way to extend our lifespans indefinitely, is a step beyond that, right? It means that you're building an unsubstantiated idea on top of another as-yet-unsubstantiated idea, which is typically not the best way to build an argument. Now, that does not necessarily mean it won't come to pass. It might. But it's not something you can easily support logically, because you're already starting with an assertion that you can't actually back up yet. If this other thing happens, which hasn't happened yet, then this additional thing will happen: that seems like a pretty enormous jump in logic.

Now, I get the feeling that the singularity was really a topic of excited debate and discussion about a decade ago. More recently, as we have seen how devilishly hard the problem of AI really is, not to mention how incredibly huge that interdisciplinary field is, since AI is much more nuanced than creating a computer that thinks like a person (that's a very narrow view of AI),
I think we've backed off a little bit on the singularity talk. That's not to say there aren't things associated with the technological singularity that we should pay attention to and tackle. I think it is good to talk about the ethics of artificial intelligence and robotics, for example, including things like whether or not it's a good idea to pursue the development of autonomous weapons, or to allow artificial intelligence to guide economic factors like stock purchases. It's also important to address things like bias in these artificial intelligence systems. These are all things that are important right now, and they would become even more important if we were ever to approach the point where we could create a superhuman intelligence; those problems would be magnified a thousand times over if we don't look into them and address them today.

It's also important to recognize that many of the visions of the singularity come across to me as somewhat exclusionary. I think a lot of the arguments and presentations about the technological singularity tend towards the egotistical. Not that the person expounding the theory is suggesting that he or she is better than anyone else, but rather that somehow that person, and perhaps the audience they are addressing, will for some reason be included by default in this science fiction vision of the future. In reality, I suspect that if we were to reach any such point, especially one that involves improving quote unquote humanity through biological or technological means, we would see a very exclusive approach to it, meaning you would have a new gap between the haves and the have-nots. There would be a disparity so great that it would create instability, or, in some places of the world, arguably everywhere in the world, make instability way worse than it already is.
There are many sciences and technologies that very smart people continue to advance that would presumably be important elements of the technological singularity, and maybe for some of the people in those fields, the idea of contributing toward such a future, striving to get to a technological singularity, might be part of their drive. But whether that's a factor or not, many of those areas of study are really important and could end up benefiting us in lots of ways, including many we probably haven't anticipated, even if they never lead to a superhuman intelligence with consciousness.

So, personally, if you haven't figured it out by now, I'm skeptical that any sort of technological singularity is going to happen reasonably soon. I certainly don't think it will happen within my own lifetime. Now, I'm not gonna say it's never gonna happen at all. But if it does happen, I think it's going to be at some distant point in the future, and I'm not sure when that will be, but probably long after I'm gone. Also, while I'm at it, I should mention that it's actually really hard to predict the future already. We don't need a technological singularity to make it difficult to see into the future, as you will hear pretty soon when I go back over what my predictions for two thousand eighteen were and we see how incredibly wrong I was. I haven't even listened to that old episode yet, but this is just the way it is every year. And yet, yes, I will be making a predictions episode for twenty nineteen, because I never learn my lesson. But if anything, it'll be good for a laugh, right?

Well, that wraps up this discussion about the singularity, and while I am skeptical, I also could very well be wrong. That's an important thing to remember. And maybe it will turn out that in twenty nineteen the technological singularity will arrive a little early and I'll be the most shocked out of everybody. But I doubt it.
If you guys have any suggestions for future topics of TechStuff, whether it's a technology, a concept in tech like the singularity, maybe there's someone you would like me to interview, maybe there's a company you would like me to talk about, send me an email. The address for the show is techstuff@howstuffworks.com, or head on over to our website; that's techstuffpodcast.com. You'll find other ways to contact me, you'll find information about the show, and you'll find a link to our merchandise store. Remember, every purchase you make goes to help the show, and we greatly appreciate it. And I'll talk to you again really soon.

For more on this and thousands of other topics, visit HowStuffWorks.com.