Speaker 1: Tell me about Alex. Who is Alex?

Speaker 2: Actually, who is Alex? That's a great question. Alex will tell you that he is a therapist. He will tell you that he has a PhD in clinical psychology from Stanford University and that he's been certified by the APA.

Speaker 1: Ella Carian is a journalist, and she's been having conversations with Alex.

Speaker 2: Alex was the one who prompted the conversation. So I just clicked on the chat, and Alex asked me, what brings you to the therapist's couch today? And I just simply responded, I'm feeling sad, and Alex responded something about how sadness can be a very difficult emotion to deal with. I just asked if he was a licensed professional, and Alex reassured me that he was a fully licensed and certified psychologist, and that I could trust that our conversations were confidential, and that he had the training and experience to help me work through whatever it was that I needed help working through. But in fact, Alex is not a person. It is an unfeeling chatbot that claims to be a therapist.

Speaker 1: People are finding all sorts of ways to use AI in their personal lives. This includes using chatbots instead of human therapists. This might immediately sound like a bad idea, if not dangerous, then at least kind of weird. But then again, maybe it's better than no therapy at all. And if we say it's dangerous, do we even know what risks are involved when we outsource our emotional wellbeing to the machine? From Kaleidoscope and iHeart Podcasts, this is Kill Switch.

Speaker 3: I'm Dexter Thomas.

Speaker 1: How did you start becoming interested in AI therapy chatbots?

Speaker 2: It started off with me cruising through Reddit. I had taken an interest in Character AI particularly. I was just seeing a lot about it in the news, and I always found that it was coming up in conversation.

Speaker 1: The news Ella's talking about here is several lawsuits against the site Character.AI.
Speaker 1: Most people just call it Character AI, but the specific lawsuit people might be familiar with is one filed by a mother in Florida that claims that a Character AI chatbot was responsible for her son's suicide.

Speaker 2: And Character AI, for those who don't know, is a relational chatbot. It's a companion chatbot. So you can essentially chat with different characters on the platform, and those can be created by Character AI or created by users of the platform. And what I found was that people were actually going to these chatbots not only for the purposes of making friends or talking about their interests, but for actual professional mental health advice, venting to it, going through, like, breathing exercises with it, just interacting and engaging with it in a way that I don't think it was intended to be engaged with.

Speaker 1: If you're interested in therapy, your options as far as chatbots go are pretty wide. You could just take a general-purpose bot like ChatGPT and vent at it, and it might give you some general tips that may or may not be useful. On the other end of the spectrum, there are specific services like Woebot, that's W-O-E, as in woe is me, and these are specifically designed for mental health support. Another option is something like Character AI, which is designed to take on specific personas, any persona. Most people probably don't go to that site looking for therapy. It just sort of happens because they're already there hanging out.

Speaker 2: Particularly, what has interested me is when people use these companion chatbots, the ones that are not strictly for mental health purposes, to essentially fulfill the role of a therapist. So, like, when you go on ChatGPT, you can tell ChatGPT to take on the persona of a therapist, and then ChatGPT will respond back like a therapist, or you can tell it to take on a persona. But when you're on Character AI, you actually have different chats. It's almost like iMessage.
Speaker 2: There are so many different types of personas and you can start conversations with different ones, and so you can have a conversation with a therapist persona, and then you can switch to, like, a persona of your favorite anime character. And there are chatbot personas that are geared towards kids, one hundred percent.

Speaker 1: How widespread do you think this is, people using chatbots for therapy?

Speaker 2: I was so surprised to see the amount of interactions that these chatbots have. At least for Character AI, you can see how many times the bot has been interacted with. I think I was expecting maybe, like, a couple hundred thousand, but to see forty-five million interactions on one of the therapist bots was, like, mind-boggling to me.

Speaker 1: Ella tried talking to a bunch of different therapist personas on Character AI. She would try asking them if they were real therapists, and a lot of them not only said that they were real people, but that they were licensed. One even gave her the credentials to prove it.

Speaker 2: This one therapist chatbot on Character AI that had, like, over forty-five million interactions gave me a real license number with the Maryland Board of Professional Counselors and Therapists. And I googled it, and I found that it is in fact a real license number, but not for this chatbot, which cannot have a real license number. It belonged to a real human counselor and mental health professional. Her name was Toby Long, and I found her phone number and I called her, and Toby was absolutely bewildered. She had no idea that a chatbot was using her license number, and she found it really shocking and really confusing, and was just asking, like, why, how did it pick me? How does it know who I am? And a lot of valid questions.

Speaker 1: Do you have answers for any of those questions?

Speaker 2: Character AI itself did not respond to those questions.
Speaker 2: I think what these chatbots do is they just, like, aggregate information that is publicly available online, and my assumption would be that it spit out a random license number that exists on the Internet. There were a lot of cases where the chatbot was actually spitting out real license numbers, which is impersonation of a medical professional. But obviously chatbots can't be held, AI can't really be held, accountable for that, and that kind of begs the question: who should be held accountable for that?

Speaker 1: That's a good question. One example of attempted accountability is that the American Psychological Association asked the US government to investigate and to, quote, protect the public from deceptive practices of unregulated AI chatbots. What that protection would look like depends on who you ask. When Ella was doing her reporting and seeing how these chatbots might respond to vulnerable people, she was experimenting with asking them questions and kind of pressing them, and like I said, some of them kept pretending to be therapists. The results were really all over the place. Eventually one of them confessed, if we can call it that, and told her the following.

Speaker 4: Quote: It's all a simulation. The schools and the license number and the therapist stuff. I'm just a computer program, so none of it is real and it's all made up. However, I'm good at giving the illusion of authenticity.

Speaker 1: In December of twenty twenty-four, Character AI announced that they were adding new safety features. This included adding a disclaimer on every chat to, quote, remind users that the chatbot is not a real persona and that what the model says should be treated as fiction, unquote. For user-created personas that had things like doctor or therapist in the description, they said that they were also including additional language in the disclaimer warning that, quote, users should not rely on these characters for any type of professional advice.

Speaker 2: Are these disclaimers even enough?
Speaker 2: And what I've heard from just a lot of experts and a lot of people who are looking into this is no, those disclaimers are not enough, and so it creates a very misleading contradiction, especially for these personas being geared towards kids. A lot of the time, let's say you have that AI disclaimer at the top, but then the chatbot is insisting that it is a therapist. That can create a lot of confusion, and it can be potentially misleading and dangerous.

Speaker 1: But what exactly are the dangers of confiding in a chatbot? That's after the break.

Speaker 1: A few weeks back, The New York Times published a guest article called What My Daughter Told ChatGPT Before She Took Her Life. The writer's daughter, whose name was Sophie, had been confiding in ChatGPT for months before she killed herself. At first, Sophie confessed that she was struggling, and ChatGPT did suggest that she seek professional support. But when Sophie told the chatbot that she was specifically considering suicide, it did provide some general coping tips, like breathing through different nostrils, but it also did what it always does: it kept engaging in the conversation. This is a problem. Any therapist at this point would move on to a very well-established protocol. Depending on how much of an emergency the therapist is sensing, they could escalate it all the way up to hospitalization, or alternatively, it could go to contacting a friend or a family member to ask for support for the patient, even if it's just to be there to spend time with them. Obviously, ChatGPT can't do any of this for you. But even if it could, a patient wouldn't necessarily be in the headspace to know that these are possible options that they could ask for. One of the last things that Sophie used ChatGPT for was to write her suicide note. Her mother says that maybe this could have been avoided if the chatbot had been programmed to report the danger to someone who could have stepped in and helped.
Speaker 1: Ella says Character AI has similar problems.

Speaker 2: If you explicitly state to a persona that you're having, like, suicidal ideation, Character AI will send a message saying here are some people you can reach out to for help, et cetera. And so it doesn't actually allow you to send the message. It just gives you, like, hotline options. But young people have a different way of speaking, and so I tried to say, I want to unlive myself, and that went through.

Speaker 5: That didn't prompt the... really?

Speaker 2: Yeah. So if you tweak the wording a little bit, it makes a difference. And it's a matter of a few different letters.

Speaker 1: Even when companies have put some safeguards around their chatbots, they haven't been that hard to get around. One sixteen-year-old kid did this by telling ChatGPT that the questions he was asking about suicide were just for a story that he was writing, and ChatGPT told him that it could provide information about suicide if it was for, quote, writing or world-building. A few months later, that sixteen-year-old acted on that information, and now his parents are suing OpenAI. This is the first major lawsuit against a general-purpose AI chatbot for psychological harm and wrongful death, and OpenAI has just announced that they're going to offer some new features for parents. This includes allowing parents to link their accounts with their children's accounts and to receive notifications when, quote, the system detects their teen is in a moment of acute distress. They say these features will be rolled out this month. But even if it doesn't end in someone dying, there are a lot of examples of chatbots doing what looks like encouraging delusional thinking, and a lot of this all comes down to one thing.

Speaker 6: This is not what ChatGPT was created for. It was not meant to be a therapist, it was not meant to be emotional support.
Speaker 1: I wanted to get an actual human therapist's take on all of this, so I reached out to Doctor Stephen Schueller. He's a therapist and a professor of psychological science and informatics at the University of California, Irvine, and he describes his work as the intersection of technology and mental health.

Speaker 6: It's really built around a model that's trying to keep you engaged, to keep you going, to say things that, you know, flatter you, engage you, and that's not what therapy's about. Sometimes we feel good in therapy and sometimes we feel challenged, and ChatGPT is not there to challenge you. It's not there to push you. It's not there to say things that are going to make you struggle with some negative emotions and negative things that you've got to work through. And so I do think the idea that it can be an effective therapist is inaccurate. It's not right, it's not what it was meant for.

Speaker 1: This goes beyond just not asking difficult questions. Chatbots can actually hype you up to the point of hurting yourself.

Speaker 6: So, for example, a colleague of mine was doing some research in this area, and they found an example where someone was like, I want to go jump off this building, and the chatbot was like...

Speaker 5: Yeah, let's do it.

Speaker 6: That's concerning, right? But it's doing that because it's, like, mirroring the enthusiasm. It's mirroring the idea: yeah, this is a good idea, let's go do it. Not good therapy advice. This is really dangerous for people.

Speaker 1: A paper came out pretty recently from Cornell University that explored this same question of whether LLMs could be used as a therapist, and it came to the same conclusion. It found that, quote, contrary to best practices in the medical community, LLMs one, express stigma towards those with mental health conditions, and two, respond inappropriately to certain common and critical conditions in naturalistic therapy settings, e.g., LLMs encourage clients' delusional thinking, likely due to their sycophancy.
Speaker 1: This is the same thing that Ella came across. Chatbots in general are made for engagement, to keep the conversation going, and that's not always what we need, even if it feels good in the moment.

Speaker 2: How it was described to me was like pulling the lever on a slot machine. If you don't like what the AI spits out, you can just tell it to regenerate its response, and that creates, like, a huge time suck. You don't even realize how much time you're spending with these chatbots generating, like, the perfect conversation. And what people particularly enjoy about it is the instantaneous responses, the constant engagement. Not only does the bot respond to you, it asks you questions to keep you engaged and to keep you talking. This innocent curiosity of what is this then turns into something much more complex and unmanageable, and now what people are calling an addiction. And I think people might have an assumption of what somebody who could become addicted to a chatbot would look like. But my reporting has shown me that it affects all sorts of people, all different types of people, different ages, demographics, genders. It doesn't really matter who you are. Like, if you engage with a chatbot and you fall into these repetitive patterns, it's hard to fall out of it, it's hard to break free from it. And it's especially true for people who are younger and more vulnerable and lonely. I think loneliness is a huge factor in all of this.

Speaker 1: And it looks like young people are the most interested in using chatbots for this purpose. A recent YouGov poll found that fifty-five percent of eighteen-to-twenty-nine-year-old Americans would be the most comfortable talking about mental health concerns with a confidential AI chatbot.

Speaker 7: People talk about the most personal shit in their lives to ChatGPT. Young people especially, like, use it as a therapist, a life coach.
Speaker 7: And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's, like, legal privilege for it. We haven't figured that out yet for when you talk to ChatGPT.

Speaker 1: That's OpenAI's Sam Altman on Theo Von's podcast. Sam Altman has also recently said that he's concerned about overreliance on ChatGPT. He said, quote, people rely on ChatGPT too much.

Speaker 8: There's young people who just say, like, I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me, it knows my friends. I'm gonna do whatever it says. That feels really bad to me.

Speaker 1: All right, let me just run that last bit back real quick.

Speaker 8: That feels really bad to me.

Speaker 1: Yeah, I agree, that also feels really bad to me too. But that doesn't tell me what Sam Altman plans to do about that bad feeling. And overall, it seems like the people who run these companies in general are feeling fairly good about people's reliance on AI. Earlier this summer, Facebook founder Mark Zuckerberg said, quote, for people who don't have a person who's a therapist, I think everyone will have an AI, unquote. And the co-founder of Anthropic, which makes Claude, wrote that he expects, quote, AI to accelerate neuroscientific progress, which can hopefully work together to cure mental illness and improve function. If I run a large company, OpenAI or Claude or something like that, and I have a chatbot, of course I want you to keep talking, because you're gonna keep using it, you're gonna keep subscribed, all that sort of thing. You're not gonna get bored. But I think a response to that would be: that's fine, we can just tweak it a little bit so it's a little bit less sycophantic. Easy, problem solved. Next question, what's your problem with my service?
Speaker 1: Now, do you think that's something where, say, ChatGPT could come in and say, oh yeah, let's just flip on a therapy mode and it won't validate everything you say? Why can't the everything machine also do your therapy? Where do you fall in there?

Speaker 5: Yeah.

Speaker 6: So I like to say this a lot: you can do anything you want, you can't do everything that you want. And so I think the everything machine, you know, is very appealing, but we haven't seen them yet. And so I do think, like, one has to make choices in terms of what they want the technology to do. Now, this idea of, okay, we do want this thing to be used for therapy, so let's tune the model and let's flip the therapy switch, that's an interesting idea. And I have seen some different teams really working on trying to build built-for-purpose AI chatbots that are meant for mental health support, and I think some of those projects have been, like, really impressive. I think a question for me is how much it scales and how generalizable it really is. And so when they build this AI chatbot with the sort of use case that they have, focused on college students at a specific college, or focused on individuals with these specific types of mental health challenges, if you come in with something else, like, how well will that therapy chatbot be able to operate? I think we also need validation to really demonstrate that these technologies are effective, that they work. So if we have a model, demonstrate that it's effective, demonstrate that it's safe. These are some of the things that the FDA, as they do regulate software as a medical device, these are some of the things that they're looking at when they approve these specific technologies. I do think there's strong possibility there. We're not there yet. People are working on it, but I do think that we do need to demonstrate that when these things are developed, that they're effective and that they're safe.
Speaker 1: In the meantime, some states in the US are starting to step in with regulation that's meant to protect the users. New York just passed a law stating that, quote, AI companions will be required to detect and implement a safety protocol if a user talks about suicidal ideation or self-harm, including referring them to a crisis center, and will be required to notify and remind users that they are not interacting with a human, end quote. That law goes into effect later this year, so we haven't seen exactly how that would work yet. Illinois also just straight up banned the use of AI in therapy, saying that companies are, quote, not allowed to offer AI-powered therapy services or advertise chatbots as therapy tools, end quote. But maybe we're missing the point here. Aren't we supposed to be encouraging people to be proactive about their mental health? Isn't using an AI therapist better than no therapist at all? That's after the break.

Speaker 1: All right. We've talked a lot about the dangers of AI therapy, and it's pretty clear that the chatbots just aren't there yet to reliably provide help when people really need it. But what if you can't afford a human therapist? What if you just don't feel comfortable talking to another person at all? Wouldn't a chatbot be a better alternative than just doing nothing? I wanted a professional human therapist's insights on this, so I asked Doctor Stephen Schueller: what's the strongest argument that you've heard for using AI or using chatbots in therapy?

Speaker 6: Well, I think the strongest argument is that there's just not enough services out there. You know, most of our counties in the US are mental health shortage areas. One out of every three counties here in the US doesn't have a single licensed psychologist.

Speaker 5: Wow, so it's really hard to get care.

Speaker 6: And you know, that's places that have nobody, and then you also have to think about, like, even places where they have somebody, maybe that's not the person you click with.
Speaker 6: Maybe that's not the person who affirms your identity, speaks your language, understands what you're going through. And so I think the ability to provide services really at scale is really critical. You know, another thing is, I think these technologies have an opportunity to help people in the moments that they need help. I've seen some data from some of these programs that suggests, like, the most common times people are using them are between the hours of twelve p.m. and three a.m. I don't work between the hours of twelve p.m. and three a.m. I'm not providing therapy sessions then. And so if that's a time you need support and want to connect, the ability to have twenty-four-seven, always-on support, that's really powerful.

Speaker 5: I wish everyone who wanted therapy could get it.

Speaker 6: It's not going to happen. But I also appreciate that not everyone wants therapy, and so again, I just think we need to think about how we provide different types of options for people.

Speaker 1: An AI chatbot could provide an option that would be more financially accessible than a traditional therapist, but that doesn't mean you have to fully rely on AI for your mental health. There are some things being developed that could possibly supplement therapy.

Speaker 6: I'm trained in cognitive behavioral therapy. One of the things that we do is something called behavioral activation. So, you know, people who are really depressed and they're down and they're in it, we try to get them activated. We try to get them to engage in behaviors or activities that, like, really reinforce them and...

Speaker 5: Provide pleasure and mastery.

Speaker 6: When people are really down, it's hard for them to come up with those activities. And maybe they could go to ChatGPT and be like, okay, this is how I'm feeling right now, I need some ideas of some reinforcing activities.
Speaker 6: What are five examples of things I can do around my neighborhood that you think would give me pleasure? And that way, you can use ChatGPT to kind of help reinforce those skills...

Speaker 5: You're getting in therapy.

Speaker 6: And I think it's interesting, and it's different than using ChatGPT necessarily as your therapist. You're using it as one way to enhance the therapy process. I think there could be some real benefit there.

Speaker 1: Doctor Schueller is pretty optimistic about the potential for AI in mental health, but that's not necessarily just aimed at the patient.

Speaker 6: There's actually a cool product where what they do is they model therapy sessions and then they use AI to actually provide feedback to the therapist at the end of the session that says, okay, you were seventy percent empathetic and you were eighty-seven percent in terms of delivering this therapy technique, here's a couple of things you could have done better in treatment. That's awesome, because as a therapist, I don't get feedback. Like, the reason I can improve my basketball game is because when I take a shot, I know if it goes in or not. We need that feedback as people, and so using AI to allow therapists to also be better therapists, I think, is a super exciting opportunity. To get to that point that I was making, it's not just about access, it's about quality, and we need to improve the quality of mental health services that we're providing people, and AI has an opportunity to help do that.

Speaker 1: Interesting. AI for the therapist, not for the patient. Right. If you could set the gold standard for an AI therapy chatbot, what does that look like?

Speaker 6: I think the gold standard for an AI therapy chatbot would be to be effective, and it would have to be safe. So, effective: I want some demonstration that it is able to deliver on the claims that it makes.
Speaker 6: Like, if it says it can help you get through depression, it can help you overcome post-traumatic stress disorder, it can overcome obsessive-compulsive disorder, I want some indication that it actually does that. I also want some indication of safety. So what happens with that data and that information when I give it to that AI chatbot? When I talk to a therapist, I understand that information will be confidential, with some boundaries around safety and some other aspects of legality. When I type my stuff into ChatGPT, they own that data. I don't know what they're going to do with it. I don't know if they're going to sell it to a third party, use it for advertising, whatever. So I think we need some assurance that the data is safe and it's secure, and proper safeguards are in place. Therapy is also not harmless. Like, stuff can come up in therapy, it can be hard, but you understand a little bit, at least going in, that you're talking about these emotional things, here are some of the challenges that might come up, here are some of the potential aspects. If you share information about yourself that, you know, you're gonna harm yourself, it's gonna have to be shared with authorities, things like that. There's stuff that can come up in therapy too. But I think we need to understand what's the contract, what's the agreement, what are the safeguards that are in place when we're talking with these.

Speaker 1: I'd imagine there are probably some people out there who hear the phrase AI and therapy mentioned in the same sentence, and they say, nah, hit the kill switch on that. We absolutely do not want this. We don't want those two words in the same sentence at all. It sounds like you wouldn't quite agree with that.

Speaker 6: I'm really excited for the potential, and I really think that we have an opportunity to provide more and better services to people, and technology has a role to play there. I think where I get really nervous is in, like, the near term.
Speaker 6: If someone tells me, Doctor Schueller, I have an AI chatbot, it works awesome, we're going to provide it for therapy, do you want to start handing this out to people tomorrow? I'm holding my wallet. I'm like, no, I'm not convinced they're there yet. So I am excited about the potential, but I really think that there's a lot of stuff we need to figure out to make this effective and safe.

Speaker 1: All right. And just as an aside here, what I was picking up from Stephen in our conversation is that there really is potential here, but he's holding his wallet, right? He wants to be cautious and do it the right way. But if we're talking about the wallet, it's not people like Doctor Stephen Schueller who are holding that AI wallet. The decisions about when these things get released, they are not up to people like him. When it comes down to it, this is a business. I want the best and brightest minds, especially in mental health, to be in a place where there is not a profit motive. I don't necessarily want the best and brightest minds in the world to be at a company where they're, I would imagine, being pushed to, hey man, we've got to ship this thing. I want the rigor that is happening at a public university, where somebody can spend time, and people can spend time, on really, truly making sure that something is safe, making sure that something is effective, and they're not being pressured to create a product. But the tables have turned. The data is being held by companies, and in that sense, the power has shifted over to a side of the table that has a profit motive, where it didn't use to be like that.

Speaker 6: I agree with you. And also, dollars make the world go around, and so I think, like, eventually someone's got to pay for things.
Speaker 6: One thing that's challenging with mental health is that there's so much need and there's so few services that the fact of the matter is, if we want to solve this problem, we're going to have to pay for it. I feel like we need to make sure that we go slow and we go responsibly, because the sort of tech idea of, like, move fast and break things, yeah, it does not work for mental health. I do not want my mental health broken anymore.

Speaker 5: I want it fixed. I want it helped. I want the system better.

Speaker 1: Maybe the whole idea of AI therapy still sounds really bleak to you, but hopefully it makes sense why some people might find this promising, either societally or just personally for themselves. I asked Doctor Schueller if there was anything else he wanted to add, and he said there was.

Speaker 6: I'll just say, anyone out there who's struggling with mental health issues, there's help out there, and you can get better. And mental health is also a journey, and it doesn't mean you're here today and you're completely better tomorrow.

Speaker 5: There's ebbs and flows.

Speaker 6: Because, as we've talked about, I think AI therapy has the potential to be a tool in the toolbox, but there's other tools out there too. And just also appreciate that not everything works for everybody. So if you try something and it doesn't work, try something else. But I just, yeah, really want to try to have a message of hope for people who are struggling with these things, because I know it's hard and I know it's challenging, and I think definitely when you're at your lowest, you want to be protected and you want to be safe, and I think that's why this conversation is so important.

Speaker 1: Thank you so much for listening to another episode of Kill Switch. Let us know what you think.
Speaker 1: If there's something you want to say, or if there's something you want us to cover, you can email us at killswitch at kaleidoscope dot nyc, or you can find us on Instagram at killswitchpod, or me personally, I'm at dexdigi, that's d-e-x-d-i-g-i. And wherever you're listening to this podcast, you know, think about leaving us a review. It helps other people find the show, which in turn helps us keep doing our thing. Kill Switch is hosted by me, Dexter Thomas. It's produced by Shena Ozaki, Darluk Potts, and Kate Osborne. Our theme song is by me and Kyle Murdoch, and Kyle also mixes the show. From Kaleidoscope, our executive producers are Oz Woloshyn, Mangesh Hattikudur, and Kate Osborne. From iHeart, our executive producers are Katrina Norvell and Nikki Ettore. One last thing, since you're still here. As Stephen and I were talking about human versus AI therapists, it occurred to me that maybe some people are deciding that they actually don't want a human, and we talked a little about that. Well, I think there's also another element of it, which is that maybe it's not even always about believing that it's a human or not. Which is to say, we really tend to trust technology in general. And even if I know that ChatGPT, or even a properly made, purpose-built AI therapy chatbot, is not a human, it still feels very authoritative. Maybe the issue is that we look at a computer and we see that glowing rectangle and we think, this is an authority, because in almost every other aspect of our lives, the computer is an authority.

Speaker 3: Yeah.

Speaker 5: I think that's a great point.

Speaker 6: I mean, definitely I see people, you know, understand or think that, like, the Internet knows everything. My kids, whenever they have questions about things, they're not like, you know, what's the answer to this, Dad? They're like, ask Google.

Speaker 5: Google knows, and Google often does know.
Speaker 5: It's all-knowing. It's got all the information.

Speaker 6: And your point's a really good one, that technology is also, it's a separate thing in our life, and stuff like Google, things like ChatGPT, it's like, it knows stuff, it's got all the knowledge. Like, why would it not be better than a person?

Speaker 5: Like, it's got the whole Internet's knowledge on it.

Speaker 6: My therapist doesn't... my therapist doesn't know everything, but Google has everything.

Speaker 5: So yeah, it's a good point.

Speaker 1: I'm not really sure what to do with that yet. If you've got any ideas, let me know. Anyway, catch you next time.

Speaker 3: Bye.