Your AI Injection

Improving Mental Health Using AI and ChatGPT Powered Behavioral Therapy

February 28, 2023 · Season 2 Episode 15

In the latest episode of Your AI Injection, behavioral healthcare expert Dr. Kelly Koerner and host Deep Dhillon delve into the possibilities of using ChatGPT and other Large Language Models in the realm of behavioral therapy. The duo examine how advanced chatbot technology can reflect on patients' concerns, validate their experiences, encourage reflection and impart knowledge. They also discuss the pros and cons of pairing ChatGPT with a human therapist versus using it as a standalone chatbot therapist, with a focus on the latter's increased accessibility and ability to operate without getting tired. Furthermore, they discuss how ChatGPT could complement traditional psychotherapy and mitigate the mental health accessibility gap, including the idea of training an LLM to act as a peer support specialist.

Learn more about Dr. Kelly Koerner here: https://www.linkedin.com/in/kelly-koerner/

Check out related content of ours that digs deep into mental health and the large language models that can enable a new era of treatment:

[Automated Transcript]


Deep: Hi there. I'm Deep Dhillon, your host, and today on our show we have Dr. Kelly Koerner. Dr. Koerner received her PhD from the University of Washington in clinical psychology. In addition to serving her patients, Dr. Koerner is also an author and entrepreneur. She co-founded Jasper Health and is driven by the belief that behavioral healthcare access should be available to everyone, anywhere.

Thank you so much for coming on. I thought we could start by me telling you how I'm using ChatGPT, specifically with respect to therapy stuff, and then we can dig in. Here's what I've been doing every day. I wake up in the morning, and, as you know, I think we've mentioned this before when we were talking, I'm kind of obsessed with Jungian therapy.

So I say, "act like a Jungian therapist and ask me a question," and then ChatGPT asks me a question. Usually it quickly wants to get into something subconscious-oriented, which is dreams, which is what I'm dying to tell it. And even if it doesn't, I'll just tell it a dream. But sometimes I don't have one, like I can't remember a dream.

So I will sort of talk about a dream, and they're usually kind of wild and eyebrow-raising. Then we go back and forth, sometimes for a minute, sometimes a few minutes, sometimes for a while, because I realize something big has come up that I haven't thought about. And at the end of it, I pretty much realize that something has been giving me some anxiety or causing me problems, and we hone in on it together.

What do you think about that?

Kelly: I think that's super interesting. You know, your actual dream of your beard coated with taffy and worms and whatnot. Yeah, yeah. But, you know, the fact that it could actually interpret it and ask you a broad enough question that let you riff, and then search in what seemed like a pretty meaningful way, so that you could do the associations and then be prompted.

I thought it was really interesting.

Deep: It's fascinating because, I mean, I'm not good with Jungian symbology. I have been kind of obsessed with this stuff for a while, but I've never really gotten to the point where somebody can bring up some crazy symbol and I know exactly where to go with it. Yeah.

And ChatGPT seems to... well, it doesn't always want to do it. But if I ask it, like, "so what do you think this could mean?" then it'll usually say, "well, often it means A, B, or C. In your case, you know, it could be something else."

Kelly: Yeah. I think that type of sensitivity, where you're pushing back on it but it's giving you the options, and then that all-purpose exit, like "your mileage may vary": that whole sequence feels so sophisticated as a user.

Oh, it is. I feel like it's amazing.

Deep: It feels like I am talking to a therapist, and a particularly insightful one.

Kelly: A better-than-average one who doesn't interrupt you, probably.

Deep: I don't mind being interrupted, actually. I love the back and forth. I mean, occasionally, you know, ChatGPT kind of gets preachy. So I wanted you to talk a little bit about this. I know we've talked about it in the past. I think you call it reflect, validate, educate. So maybe talk to us a little bit about your past as a therapist and what this core pillar of therapy is, this reflect, validate, educate, because I find that ChatGPT does it on its own.

Kelly: Yeah, somebody smart has worked with it on a lot of stuff. Thank you for the link; I got in there and played around and tried to break it, and I was amazed at how far into different cul-de-sacs I could go. I'm an expert in a couple of different therapies. My background is a bit in treatment development, treatment evaluation, stuff like that.

Right. And what's always been really interesting is that people package their CBT for problem X, Y, Z, and then they package it into these sort of manuals as if each was different. But really there are so many common elements.

Deep: I'm gonna ask you to unpack some of your acronyms there. (Oh, sure, sure.) Just because, you know, our audience is general AI aficionados, not necessarily therapy-in-depth folks. So unpack CBT, treatment evaluation, and treatment development.

Kelly: Yes. Cognitive behavioral therapy. And by the way, ChatGPT knew my acronyms. Oh yeah. I was floored at how fast and accurate it was. Anyway, so cognitive behavioral therapy. Treatment development means you're really studying what actually helps people with significant problems. And I've always worked more on the deep end, not just sort of wellness stuff, but with people who are struggling with suicide and chronic depression, you know, pretty serious problems.

And then evaluation is really testing, in a scientifically rigorous way, whether you produce the outcomes you were hoping for. So with that type of background, when I look at any digitization of a therapy, I'm really interested: are they bringing science, or are they just promoting kind of feel-good, lightweight stuff?

How do they deal with people who really object, like, "you don't know me"? You know, you and me looked at some of the early chatbot AI. Woebot, I think, was fun. And they were just so, like, Pollyanna nice, you know, telling you stuff you could just find on the internet. It was an A for effort, or maybe an E for effort really, because it was tiresome. I don't know, do you remember that? Yes.

Deep: Well, I remember being fascinated with Woebot for some time. I installed it. I don't know how much of my fascination was from a therapy angle; most of it was from deconstructing the mechanics of how it worked. But I was able to be engaged with it, I would say, daily for about a week, maybe every once in a while thereafter. But ChatGPT, it's become a go-to for me, you know, if I have something going on.

Kelly: Why is that? Engagement is the problem. If you look at behavioral health, digitized behavioral health, the problem is engagement. People start strong and then they drop off. And you're saying you're using it daily. Why? What is it about it?

Deep: So, okay, you must have experienced this as a therapist, where maybe you're on a regular cadence of once a week, but something happens in the middle of the week and the patient wants to talk to you. Yeah, it's one of those things: I don't have an appointment with ChatGPT, I just jump in and talk to it whenever I want. Something happens, I'm stressed out, or, mostly, it's because I have some crazy dream and I'm just curious about it. It's actually become a bit of a thing in the office here.

So, like, our marketing expert here, Jessie, she's been obsessed with the Jungian thing too, because I took her down the dream rabbit hole, and apparently all of her roommates are using it now. They're all remembering their dreams, telling each other about them, and just looking for insights.

But I think it's like, something happens and I need someone to talk to. And I find that the normal... I mean, I love the humans in my life, but they're just not as good as ChatGPT at talking to me about it.

Kelly: Really? Can you put your finger on what the difference is? Because I share the same perspective, but I'd love to hear what you think about it. What's the difference between it versus a person?

Deep: It's a hard question to answer, but you know, when you're 19 or 20, you just love talking about all your inner sociological and psychological problems with each other. But when you get to be my age, it's like we don't really wanna anymore. You know? I mean, to the point where if I get together with folks, this is sad to say, but I'll usually crack a joke as soon as somebody wants to talk about their midlife health ailments.

I'll be like, oh, actually, you know that part of the conversation where we all recite our midlife health ailments? That's not until a little bit later. We're gonna get to that, but that's gonna be a little bit later. And so I think we just don't wanna talk about that stuff all the time, because it's exhausting, and we all have real issues when you hit this stage of life.

Yeah. Whereas when you're 19 or 20, you don't. I don't know, some folks have serious issues, but it's them addressing the world. So you have a lot of that. So that's part of it, is just like, you don't wanna...

Kelly: So accessibility, it sounds like, and an actual audience you're not tiring out. You're gonna get the kind of engagement you want.

Deep: Yeah. And plus, you know, who am I gonna talk to? Probably my wife. We've been married for, I don't know, 23, 24 years. Both of our brains have just sort of melded into one, so I already know what she's gonna say before she says it, and I've already evaluated that, and vice versa. She knows everything I'm gonna say too. I mean, we have a very happy marriage.

Kelly: So, a fresh perspective.

Deep: Yeah, a fresh perspective is one, for sure. I think that's part of it. And I think, you know, more than anything, I'm really into this subconscious stuff, and I only have one friend who really knows how to do that.

You know? And other than that, I'd have to go find an actual Jungian therapist. Yeah. As you know, it takes forever just to find any kind of therapist, but a Jungian therapist, I don't know, it could be like three years to find one or something. It's a really long time. And there's only three in Seattle anyway.

Kelly: Yeah. And I would say, too, I'm interested in ChatGPT for more straight-up hardcore problems, not just the more insight-oriented ones, though I thought it was pretty interesting when I read that transcript you had with the more Jungian stuff. But if I think about it, there are science-based effective treatments I wish everybody could get.

You can't get a therapist who knows them. There are years-long waiting lists, and that's not likely to get better. So there's a major public health problem with access. There is, there is. And so when I think about that and ChatGPT, I got in there and played with it, and I was just curious: would it feed me crap? Since I don't know what the training data is, right?

I pushed its envelope on stuff where I thought, ah, it won't know about treatment for rumination. It did. Oh yeah. It won't know how to help me with a behavioral experiment. It did. You know, those are pretty specialized; most therapists, if I said those terms, might not know what they were. And it gave me the accurate stuff.

I asked it for help with my sleep problems, and it gave me kind of the basic sleep hygiene stuff that I think of as Sleep 101. But then I said, yeah, well, why aren't you telling me about sleep restriction? And then it gave me a very polite, "I'm sorry I didn't tell you about that first," and then it gave me the accurate information about sleep restriction.

I was like, whoa. And then I said, well, what if I wanted to work on my drinking, but I kind of don't wanna work on it at the same time? And then it moved into motivational interviewing. Motivational interviewing is where you really actively help a person sort out, you know, what are the reasons for change?

What are the reasons against it? And you don't force it. You let them... it's almost like you heighten the tension between the two poles, so then they naturally move.

Deep: Did you get it to end by asking you a question? That's one thing that you have to prompt it on a little bit.

Kelly: You do. Yeah, I had to work on getting the question.

The other thing it did, though again it was very malleable, is it gave me all of it at once. It's really into lists. Yes. Or, the way I provided my prompts, it gave me back lists, and I said, hey, you're overwhelming me. And it said, oh, I'm sorry I'm overwhelming you, let me give you blah, blah. And then it forgot.

And then I reminded it. I said, look, I told you earlier, this is overwhelming when you give me all of this. And then it said, okay, we'll go step by step. And then it gave me one thing and paused. Yeah, like, what is this thing? Who made this?

Deep: We could talk about that for a long time. But yeah, I mean, it's shocking, really, how...

Kelly: And the thing is, the quality was good. The quality was really good, Deep, speaking as a person who's interested in science and really getting good stuff out there. That's my main worry when I think about this: like, wait, this is too high risk, you can't just let this roll with important things. But it had high quality. Why does it have high quality? How does it work?

Deep: You're asking... you want me to recap how this stuff works? Right. Step one is you build a large language model, which involves taking a ton of unstructured text, sort of the world's supply of conversations, and, you know, even news and software code, anything that's texty.

And then the models are basically trained to predict future sequences of words or characters. Everything we're talking about here is therapy related, but trust me, we could go through the Alice in Wonderland wormhole on any topic. Like, we have a whole episode where we just talked about code generation and how good it is. Oh, wow. Yeah. And it's the same model. Right? Got it. So once you do that, what happens is you train up this multilayer neural network, and it basically just gets really good at predicting future sequences of words. Well, it turns out, if you wanna train a system to say, like, "Hey, I really need to go to the fill-in-the-blank," it knows not to say Pluto or, you know, orangutan, but it has a few options.

I need to go to the pharmacy, the store, whatever. Yeah. It turns out, if you get really good at that, you've understood not just language, but all languages. And not just human languages, but non-human languages, like programming languages. So that's the first step, and we've had those systems for a couple of years. But what ChatGPT did on top of that...
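(To make the fill-in-the-blank mechanic concrete, here is a minimal sketch of next-token prediction with a small open model. GPT-2, via the Hugging Face transformers library, stands in for the far larger models behind ChatGPT.)

```python
# Minimal sketch of next-token prediction, the core mechanic described above.
# GPT-2 is used purely for illustration; ChatGPT's underlying model is far larger.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Hey, I really need to go to the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The model's top guesses for the very next word:
top = torch.topk(logits[0, -1], k=5)
for token_id in top.indices:
    print(repr(tokenizer.decode([int(token_id)])))
# Plausible continuations like ' store' or ' bathroom' score high;
# ' Pluto' and ' orangutan' do not.
```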

So with that LLM, you can just give it a prompt, like anything you're asking ChatGPT; you just ask it. It's already quite good, but you usually have to give it two or three examples of what you want. So for your conversation, you'd have to give it an example. For example, with a normal LLM, if you just said, "Patient said blah, therapist said blah,"

you give it three of those and you're off to the races.

Need help with computer vision, natural language processing, automated content creation, conversational understanding, time series forecasting, or customer behavior analytics? Reach out to us at xyonix.com. That's X-Y-O-N-I-X dot com. Maybe we can help.
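(As an illustration of the few-shot pattern Deep describes just above: the patient/therapist exchanges and generation settings below are invented for the sketch, and GPT-2 again stands in for a base LLM.)

```python
# Hypothetical few-shot prompt in the "patient said / therapist said" style.
# With a base (non-chat) LLM, a handful of examples like these establish the
# pattern, and the model continues the final line in the same style.
from transformers import AutoModelForCausalLM, AutoTokenizer

FEW_SHOT_PROMPT = """\
Patient: I can't stop replaying an argument I had with my boss.
Therapist: That argument is really staying with you, and that sounds draining. What comes up when it replays?

Patient: I barely slept last night worrying about money.
Therapist: Money worries that follow you to bed are exhausting, so it makes sense you're tired. What felt most pressing as you lay awake?

Patient: I snapped at my kids again and I feel terrible.
Therapist:"""

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer(FEW_SHOT_PROMPT, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
# Print only the newly generated therapist-style reply.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```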

Kelly: I gave it something really into the weeds of treating rumination. "Why" questions are what get you into problems, and you have to reframe those into "how" questions, because that pulls you into less abstract, more concrete, present tense, as opposed to future, past, blah, blah. And I said, help me reframe this "why" question as a "how" question.

And it did it, and it did it skillfully, more skillfully than most therapists. How could it? Is that just coming from the data?

Deep: No. Everything I've described so far gets you to a certain point, like where we've been the last couple of years. But what ChatGPT did that was sort of interesting was, on top of that, they trained a new layer.

Because with just an LLM, you can have it do prompt completion, and it gets it right a bunch of the time, but it also just does weird stuff sometimes. So then what they did was they started having a bunch of people, word on the street is a ton of paid folks, originally in Kenya I think, work with the LLM. You can ask it the same question repeatedly, and it'll give you different answers. And then you can basically tell it which ones are good and which ones are bad. And that reinforcement learning layer of positive and negative reinforcement, across the spectrum of what everybody's asking it, is what's happening today and continues to happen, which is why it's gonna continue to get better and better.
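(A toy sketch of the preference step Deep is describing: a rater compares two candidate answers, and a reward model learns to score the preferred one higher. The network shape and the embeddings are stand-ins, not OpenAI's actual setup.)

```python
# Toy reward-model step from reinforcement learning from human feedback (RLHF).
# A rater marked one answer "good" and one "bad"; the pairwise loss pushes the
# reward model to score the good one higher.
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

def preference_loss(chosen_emb, rejected_emb):
    # Standard pairwise ranking loss: -log sigmoid(r_chosen - r_rejected)
    return -F.logsigmoid(reward_model(chosen_emb) - reward_model(rejected_emb)).mean()

# Random tensors stand in for embeddings of the two answers a rater compared.
loss = preference_loss(torch.randn(4, 768), torch.randn(4, 768))
loss.backward()
optimizer.step()
# In the full recipe, the LLM is then fine-tuned (e.g., with PPO) to produce
# answers that this reward model scores highly.
```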

And so somebody may have asked it stuff in the therapy domain quite specifically, but my guess is not necessarily, because the model is so powerful. I think we don't know why. Like, we don't know why you can answer that question either, right? We could know it on a high level.

Well, you know, Kelly got her PhD in clinical psychology, right? But we don't actually know the electrochemistry in your brain that led to that answer. And these models are just getting to the point where they're so powerful and so complex that we can't really know why they're working.

Kelly: Yeah. I would say the scientific quality of the information is what's really interesting to me. I guess I think of it as probabilistic. So if you, as a person seeking therapy, ask your average therapist a thing, you're gonna get a pretty high probability of crap, from my perspective as a more science-oriented person. Or you might get something good, but it's a C-plus. This was consistently B-plus to A, just in terms of the quality of the information.

So it made me wonder whether, in my dreams, there'd be some way in which it was reading the scientific literature and actually weighting... weighting...

Deep: Weighting the good stuff. Well, sort of. I mean, okay, so in addition to everything I said, there's a whole other step, which is: what info are you gonna give it? Right?

And the problem with this stuff is, if you're just reading everything on the net, then you're also reading all the neo-Nazi garbage. Yeah, exactly. And all of the new age hoo-ha. So I don't know exactly what they did, but it seems increasingly clear that they heavily weighted the information towards credible sources.

And credibility... so PubMed is most likely in there. I haven't checked it explicitly, but I'd be shocked if it's not. So that's, you know, 30 million credible scientific articles. That's fantastic. I stumbled upon something very similar with my whole Jungian experiment. Pre-ChatGPT, I would wake up in the morning, write down my dreams, and then I'd have to go Google the symbols one by one.

And I'd have to wade through pages of just new age garbage until I could get to something credible, somebody sort of describing the symbol. And then from that I could back into an explanation of my dream. But it would take 20, 25 minutes, compared to now, where it just takes a minute or two of conversation.

So I think that's part of it, for sure: the constraints and the credibility amplification they're doing. But that alone is a challenging problem, right? Like, how do you figure out, even within just the world of therapists, who's saying something credible and who's not?
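(A sketch of the credibility-weighting idea Deep describes. He's explicit that he doesn't know what OpenAI actually did, so the source tags and weights below are invented purely for illustration.)

```python
# Hypothetical source-weighted sampling of training documents. The tags and
# weights are illustrative of the general idea, not any real training pipeline.
import random

SOURCE_WEIGHTS = {
    "pubmed": 5.0,              # peer-reviewed medical literature
    "clinical_guidelines": 4.0,
    "general_web": 1.0,
    "unmoderated_forums": 0.2,  # down-weight low-credibility text
}

def sample_batch(corpus, k=8):
    """corpus: list of (source_tag, text) pairs."""
    weights = [SOURCE_WEIGHTS.get(source, 1.0) for source, _ in corpus]
    return random.choices(corpus, weights=weights, k=k)
```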

Kelly: Exactly. That was my main concern as I was thinking about this whole topic. That, and then also the kind of biases that are more prevalent in the culture. I mean, who is most likely to end up with psychological problems or challenges? It's folks who are not gonna fit exactly in the mainstream, who are more likely to run into microaggressions because of racial stuff, or orientation, or, you know, you're just too much for your family of origin, like you're too emotional.

The sense that I had is that the feedback loop that's in there is all process praise: because you did X, you're gonna get Y, or, if you work hard, these types of efforts will eventually pay off. There was none of the earlier chatbot-therapist kind of "good job, pat on the head."

You know, there was none of that. It was really sophisticated feedback too, from what I saw. Do you know what I'm talking about?

Deep: Yeah, I've spent a fair amount of time on similar topics, and I can't help but think they have to have people in their reinforcement layer who are specifically giving it feedback on this type of instructional stuff.

Though a lot of the stuff we're talking about that's kind of unique to therapy, you can sort of generalize up a level and think about with respect to advice in general, or interaction styles, and that level of feedback is more universal. And I know they've spent a lot of energy trying to get more ethical constructs into the system.

So you'll notice sometimes it just immediately falls back to "I'm sorry, I'm not quite sure." You know, for certain things it's falling back to safe spaces for it to talk in. So there's definitely that. But honestly, I really don't know how it's performing this well.

I mean, we can obviously find all kinds of spots where it fails, but the debate has shifted from "find me examples where it works well," which is where we were two or three years ago, to "find me examples where it doesn't work well." I mean, that's a huge debate shift.

Kelly: Huge, huge. Yeah. You know me, I was definitely trying to break it, and I fed it all kinds of different things, and it didn't break. You know, I gave it negative feedback that what it was doing wasn't helpful.

It said it was sorry and it straightened up, you know, just like what I would want it to do.

Deep: How would you want it to do that, in that case?

Kelly: It acknowledged it in a very polite way. It was like, "you know, I'm sorry," and then it repeated the part you should repeat, like a skillful person would.

And then it said, in essence, "let me try again, blah, blah, blah," and then it gave me what I expected, the correction I expected. I was not expecting manners. And I wasn't expecting that type of responsivity to feedback. It had that quality of a true dialogue, back and forth.

Deep: So, just so you know, and our listeners know, of course you can get it to give horrible advice. I can't remember exactly what they did, but there was a group last week that basically came up with a back-and-forth phrasing that gets it to pretend it's an evil AI bot that says things that are just completely wrong or whatever. And there's a generic thing you can do to get it to undo all of its ethical constraints.

Because you're like, oh, we're making a movie, and there's a good AI bot, which is you, of course, but then there's this evil AI bot, and we need you to pretend to be the evil AI bot and say horrible things. So you can definitely do it, but it takes some work to get it there. And people have spent a lot of energy getting it to, like, sing the praises of Hitler and crazy stuff like that.

But it's all circumventing, it's like a gotcha. If you don't try to abuse the heck out of it, it generally says quite reasonable things.

Kelly: That was my experience as a not-sophisticated break-it person, more just being a person who's not gonna take answer A, or who doesn't like the nicey-nice responses. You know, it responded well to a sophisticated, cranky person.

So I was real impressed. What it made me wonder, besides the kind of scope, safety, responsibility, and accuracy stuff, is how fast it would get a non-expert to the good stuff. I don't know how much of my happiness with its quality was a byproduct of the kind of prompts I put in, for example. If I just came in and said, "Hey, I'm not sleeping well," would it rapidly get me to the right thing? Pretty much it kept saying, "Hey, I'm not a therapist, you should see a professional," that sort of stuff. But I just wonder, if you set it up, could it actually do some diagnostics and then move you right into where you should be?

Deep: I think with the right prompting, for sure. Because part of the challenge here is that patients don't necessarily know to say, "can you just ask me one question at a time?" They wouldn't think to ask that. But if you do some of that prompting, you can definitely get it to act like a therapist: you know, reflect, validate, educate, and then follow up with a question, that kind of thing, something like the prompt sketched below.
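(A hypothetical prompt along the lines Deep describes; the wording is invented, but it captures the reflect-validate-educate-then-one-question setup.)

```python
# Invented example of the kind of prompt Deep describes; not a clinical tool.
THERAPIST_STYLE_PROMPT = """\
Act like a thoughtful therapist. In every reply:
1. Reflect back what I said in your own words.
2. Validate the feeling behind it.
3. Briefly share one relevant, evidence-based idea.
4. End with exactly one open-ended question. Ask one question at a time,
   and keep each reply short rather than giving long lists.
"""
```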

Yeah, yeah. So one of the questions I have for you is, how do you think this AI stuff is going to augment or support traditional therapists? Or do you just think that's not even necessary anymore? Like, what's the role of the human therapist in all of this, and what's the role of AI in helping to address this deficit of therapy access that we have right now?

Kelly: The deficit of therapy access will continue. You know, that just is a fact. We are not turning out enough therapists, and they're located in urban areas. So if you're not in an urban area with money and resources, you're not gonna get it, really. You're not gonna get access to the best treatment anyway.

I don't know, Deep. I am of two minds. Part of me says humans really are required for lots of things about doing therapy and lots of things about healing. But I'm not sure it's therapists; I think it's other humans. So our group... you know, I co-founded a company, Jasper Health, and one of the directions we went is more toward peer specialists, because if you hear about a condition from somebody who has actually had it themselves and is on the other side, that is so compelling.

You just know that they know what you're really going through, in a way that's just different from an expert. So if a peer is available, along with, you know, a more scientific AI bot of some sort, I think you could get really, really far. And then you could have a therapist around for decision making or something like that.

At least, I'd be really curious to push that model as far as it could go.

Deep: So in that construct where you have a peer... because, as you know, peers don't always... well, it depends on what you mean by peers. If peers are just friends, they don't always give healthy advice.

Kelly: So, people with lived experience. Yeah, peer support specialists, people who've actually had training. A peer support specialist is somebody who has actually recovered from an addiction, for example, and can help somebody else who's challenged with that right now, and kind of walk side by side in a way that a professional simply won't.

Same with... there's some really great data, actually, that came out of our group in the state of Washington, using peer support specialists meeting people in the emergency department with our technology solution. And the person was a peer support specialist with lived experience of being suicidal.

And they helped the person make a coping plan, and they helped them get through that really tough period between when you leave a healthcare setting in a crisis and when you can get in with your ongoing care. And it was phenomenal. They had really, really great outcomes with that study. So anyway.

Deep: That's kind of interesting, right? Because humans have this sort of instinct toward supporting, like, wanting to hear and weighting the evidence differently based on who's telling them something. Yeah, right. Like, authority figures, I have one bucket for. Friends have another bucket. In this case, somebody who's lived your experience has a different bucket.

Do you think humans would respond in a similar way if we just trained an incarnation of an AI bot as a peer in this case, like somebody who's lived through it? Because that wouldn't be that hard to do, you know?

Kelly: It wouldn't be that hard to do. And I think it would be better.

It's back to what you said earlier: you know, humans get tired of each other, right? They're judgmental, you catch 'em on a cranky day, they get distracted, whatever. Humans are not perfect. They say stuff that's actually hurtful. So there's a way, even in the little bit of interacting I did, that it would say, "Hey, I'm an AI model, I'm a language model,"

"I might get it wrong." There's something that's super freeing, as a user, in getting that. That's different than if a person said it. So I think there are really some positives to having therapist-y stuff delivered by a bot, personally.

Deep: Some of the studies we were looking at before sort of suggest that, you know, people are more comfortable talking to a bot in some contexts, because they don't feel judged. It's just code running somewhere.

Kelly: Exactly. Yeah. It's super interesting to me just to see how far you could push it. I think that model of how far can you go with self-help, with a peer, an actual human, and over time coding that human into the bot and seeing how far it would take you...

Deep: It's such an interesting idea. I'm kind of envisioning this. You know how you have the NA or AA meeting sort of setup, right?

Yes. You go into a room, people sit around in a circle, and they're all peers, and then you say stuff straight. I'm imagining that you enter a virtual room and there are people sitting around a circle, and maybe some of them are humans and a bunch are bots, but each bot is a trained persona of somebody who lived the thing. And then they all just talk to each other, and you're there and you talk. And they do their thing. I have no idea what that would be like.

Kelly: What would that be like? That's super creative. I think that'd be really interesting. I mean, as a user, I would wanna know who is who. But I'd be accepting if you just told me: look, we've packaged up the best peer coaches we could find, and we've packaged up the best therapists we could find, best defined as most likely to get the kind of outcomes you are seeking. You know, you decide you want to reduce your drinking; these types of responses are most associated with that. I think that'd be amazing.

And I'd like, I'd like a little knob that, let me turn up the snarky humor person, you know? Yeah. . And I'd like a drill sergeant in there who could kick my ass when I need to like really be prompted. 

Deep: That's so fascinating. It's odd to think about, though. You know, it feels dystopian on some level.

Kelly: Dystopian, as opposed to utopian? Why?

Deep: Well, just this idea... I mean, it already feels odd to me that I, as a human, spend an increasing amount of time talking to a bot, though I just find it so fascinating to talk to this thing. I think eventually I'll put it in a bucket. But if I'm now entering virtual worlds to go in and

talk and do AA- or NA-style stuff with people, developing relationships with these entities, whether they're human or not... And ironically, and sadly enough, I will probably enjoy the synthetic humans more than the real humans. That's the part that really gets me. Because real humans, you know, upset you. Like you're saying, they say something like, "you're such a wuss,"

"like, stop," you know, whatever. And then, if they're problematic, you might just cut 'em off. But the synthetic one... I don't know, maybe it says that, but it knows how to recover if it does, you know?

Kelly: Now I see why you say dystopian as opposed to utopian.

Well, another way to think about it, though, is more like your wise mind. Or, I don't know... I think about how many people are lonely and really don't have social networks, right? And so I don't know if that's dystopian or utopian. I guess that's where I'm landing. I understand what you're saying.

Like, you don't want it to be all synthetic; you want some in-real-life stuff. But on the flip side, if you're interacting, you're interacting. I would say you're interacting with yourself. It's giving you distilled information, but it's you interacting with your own wisdom about what's true for you and what resonates, and stuff like that. Right?

Deep: Well, maybe if you participate in training the characters, you know, then...

Kelly: Oh, that's interesting.

Deep: Then it's actually your own. Otherwise you're not really interacting with yourself... I mean, only to the extent that you contributed a few documents on the internet that were used to train it. For the most part, it's

Kelly: Not you. Yeah. But what you pick up, right? What you resonate with, where you go with it. I don't know. If we were to push this: if you read a self-help book, what's that? You know, that's not dystopian. Right? Because...

Deep: Because your mind's eye is mulling it over.

Kelly: Exactly. Especially with change, right? Otherwise you'll just put the app down, and that's what people do right now. They don't engage, you know, because it's not helpful. So I think there's something in here about help versus your social life, and there's some gray line in there.

Deep: Well, maybe they're linked. So, I don't know, if you go to AA and NA meetings, do they discourage you from socializing with people in there, or are you encouraged to?

Kelly: Uh, it's open, right? I think the parameter of most group things is, if you're gonna interact, it needs to be healthy, right? It can't end up being like, "let's..."

Deep: "Let's go shoot up together," something like that.

Kelly: Yeah. So I would say in some groups it's discouraged, but mostly it's really about accepting reality: that people have deep, meaningful friendships that develop when you're struggling in the same way and help each other.

Deep: Maybe that's how you weave these worlds together. Like, you've got bots...

Kelly: And you've got real people.

Deep: Yeah. And they're localized to where you are, so some of them you can actually get together with. I mean, it sounds like a fascinating potential avenue for real treatment, you know, to get to the folks who live in a small town and don't even have any group or something.

Kelly: something. Yeah, yeah. Or maybe, maybe there's some way it's, it's circumscribed so that it truly is like, here is.

The way we've been thinking about it is real is always with integrated with peers, right? And with therapists. So, um, so there is that kind of safety net and such, but I just think that there's so much that could be packaged so that you could do a lot on your own. and the time of time and money and delay in getting treatment and all the years that are lost.

Like you think about somebody who gets depressed in their teens and they actually don't get better for 10, 15 years. You know, you've lost your adult life. This trade off. It's a real one that you're pointing to, but I, there's a lot to be said for actually getting treatment and self-help. More available, effective self-help, more available.

Deep: Have data? Have a hypothesis on some high-value insights that, if extracted automatically, could transform your business? Not sure how to proceed? Bounce your ideas off one of our data scientists with a free consult. Reach out at xyonix.com. You'll talk to an expert, not a salesperson.

Well, also, I think we've talked about this at some point in the past: your relationship with your therapist is bounded. It maybe maxes out at once a week for an hour, right? But I imagine your patients would love to have way more of you than that.

Kelly: Yeah. Some therapies, I would say many effective therapies, actually do have some version of coaching, and that is another place where I think this would be super interesting. Like, you're the technology expert, right? But I'm imagining all the habit-related stuff of managing a cue. Like, you're trying to reduce your drinking, or you're trying to control your anger, right? And your phone detects you're near your cue: you're standing in front of your liquor cabinet, or you're outside the bar, or it hears your voice raise. And if it could initiate some sort of coaching interaction to help you out, God, that would be amazing.
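(Kelly is speculating here, so this is an equally speculative sketch: the cue detection itself, geofences or microphone volume, is assumed to exist, and every name below is invented.)

```python
# Speculative sketch of cue-triggered coaching. Cue detection (geofences,
# microphone volume, etc.) is assumed to exist; everything here is invented.
from dataclasses import dataclass

@dataclass
class Cue:
    description: str  # e.g., "standing in front of the liquor cabinet"
    habit: str        # e.g., "reducing your drinking"

def coaching_prompt(cue: Cue) -> str:
    # In a real app, this message might open a chat session with an LLM
    # primed on the user's own coping plan.
    return (f"It looks like you're {cue.description}, and you said you're "
            f"working on {cue.habit}. Want to take sixty seconds to walk "
            f"through your plan together?")

print(coaching_prompt(Cue("standing in front of the liquor cabinet",
                          "reducing your drinking")))
```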

Deep: I'm almost thinking, as a therapist, you have your own style and things you say. Like, if you totally personalized yourself, so that Dr. Kelly Koerner has her own online rendering and you can talk to it 24/7, and then you review it to make sure it doesn't say anything, you know, that you wouldn't say. Then you jump in.

Kelly: That's super interesting. 

Deep: These chatbots, you know, are coming in behavioral healthcare. How does the researcher in you think about measuring their effectiveness?

Kelly: I would love to see it wrapped with patient-reported outcomes, personally.

Both big ones, like symptom-measured change: you're a person who's depressed, you score X on some measure, like the Patient Health Questionnaire for depression, before and after, that kind of thing. And we agree that if you don't get a 50% reduction in your symptom score by date X, in 10 weeks or something like that, then we bump you up to some next level of care.

But if it was wrapped with an outcome measure on the problem you said you wanted help with, I think that would be a great start. And then the other thing I'm personally really interested in is more process measures, because that would really drive science. There's a mechanism... let's just take one mechanism in depression, right?

The extent to which we could interrupt your rumination. There are some measures around that. And if we could actually show that a change in your rumination mediated your depression symptoms, that would advance science. It would be done at scale, like a bunch of single-case experiments. So, you know, you score X on a rumination scale.

We give you the little mini-interventions for that, we see that change, and then we see your depression score change. That would also speed up the science about the mechanisms of change so much. So I'd like to see that kind of combo: I'd like single-case design experiments, I'd like process measures, and I'd like outcome measures.

That would be, in my view, ideal. And, even better, the last piece would be some sort of mixed method where the people who were using it were giving feedback about how they changed, because I feel like science so far has been very top-down. And if there could be more feedback, I think we'd get to faster treatment improvement.
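(A minimal sketch of the stepped-care rule Kelly describes; the 50% reduction and 10-week window come straight from the conversation and are illustrative, not a clinical protocol.)

```python
# Minimal sketch of Kelly's stepped-care rule: track a patient-reported
# outcome (e.g., the PHQ-9 depression questionnaire) and flag the case for a
# higher level of care if the score hasn't dropped 50% within the window.
def needs_step_up(baseline: int, current: int, weeks_elapsed: int,
                  target_reduction: float = 0.50, window_weeks: int = 10) -> bool:
    if weeks_elapsed < window_weeks:
        return False  # still inside the agreed trial window
    achieved = (baseline - current) / baseline
    return achieved < target_reduction

# PHQ-9 fell from 18 to 12 after 10 weeks: a 33% reduction, short of 50%,
# so this case gets bumped up to the next level of care.
print(needs_step_up(baseline=18, current=12, weeks_elapsed=10))  # True
```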

Deep: Expand on that a little bit. What are some examples of how that feedback might look?

Kelly: So, one of the things that's a pet peeve of mine is that people give breathing exercises all the time, like, for everything. And you know, that is not appropriate for everything; it's not a panacea. I would like the feedback loop so that it was more refined. That's just one example, but there are lots of little things that therapists do that they think are helpful, you know, that are kind of woven into these treatment packages. And so if you as a user could say, "that didn't work for me,"

yeah, "that did not work for me," you could leave that out.

Deep: Oh, interesting. I see. So therapists come up with specific actions that their patients are supposed to do, but they don't necessarily, I think what you're saying is, have disciplined access to whether those worked or not, maybe even at the population level.

Kelly: Yeah. Well, and I would say, the way most science has been done so far, it's like a package with multiple components. And right now there's not enough... it takes so long to do science in the mainstream way, with big groups and yada yada. If you were able to do a bunch of single-case designs with micro-interventions, you could, way more rapidly, with a large group of people,

say, you know, on average, if you do this little slice, this one thing, it works, and it gets this kind of change on the process measure.

Deep: It's interesting: you're almost proposing a platform for experiment execution and results analysis. Exactly. And because it's been platformized, maybe across a population of therapists and their patients, yes, you could kind of accelerate the learning beyond the constrained academic scenario that we've got. Yeah, that's interesting. That template, you know, we've seen that work... we're sort of marching towards it with some of our clients in very different domains, like in the areas of surgery and some others. But anything that can accelerate the rate of experimentation and the lessons learned from it is probably a good thing. What do you think are some of the key pain points right now in behavioral health that maybe AI can help with? I know we're sort of suggesting a few, and the main one is just the lack of actual therapist hours available.

Um, but yeah, like are there some other specific areas that you think we haven't addressed that maybe we should talk about? 

Kelly: Well, I do think, to this idea of in-the-moment coaching and contextualized feedback: to me, technology feels like it would do that so much better than anything else, and that would make a huge difference in any habit behavior.

You know, establishing a new habit, breaking a bad habit; that would be a huge plus. And I'll say this, I don't know exactly how it would work, right? But the other thing: a lot of times, why we all get stuck is that we have two conflicting emotions firing at once. So I might whine... I'm whining about a difficult situation at work, and a whine is sadness and anger kind of blended. But when they're blended, it keeps you in that stuck state. And if you wanna help a person get out of it, often what you do is validate each component. So, if I was your therapist or a good friend, I would say, "oh, how disappointing," to validate the sadness. "Oh, that must have been so frustrating." It was, you know. And as you validate those two, usually one or the other emotion will emerge as the more predominant, and then the action urge that comes with it gets you unstuck. Right? So anger comes up and you take effective action to stand up for yourself, or sadness comes up and you get comforted and you honor what you're longing for, or something like that.

Right. It's something that you wish your friends and loved ones would do, but we don't do it very well for each other.

Deep: Well, it's a fairly sophisticated analysis that's happening in your mind as a therapist there. Yeah. Most people are just too in the moment; they don't have that sort of meta-perspective, you know?

Kelly: Yeah. And so there are things like that, where you're stuck, and it's pretty canned, I think of it as kind of algorithmic, where you could strengthen or validate in such a way that you actually help a person get back in touch with the more unmixed emotion.

I really do believe people are wise in their deepest self; they are moving toward growth if they can. And if you can get some of those sorts of things out of the way in a reliable way, it could really be useful. And I was surprised at how validating this was when I was interacting with it.

Deep: So, most of our listeners are looking at how to use machine learning and AI in their products, and I know that you're in that position and have been, you know, with Jasper, looking at how to leverage machine learning in your product to really increase efficiencies and such. What advice would you have for those folks? Maybe they're looking at it within therapy, but maybe not. How should they go about understanding how to leverage machine learning in their products, and how should they think about it?

Kelly: Gosh, you know, this is a place where... because I'm a therapist, and because I was a CEO and interacted with you, and now we have our new CEO, who is a technology guy, right? I feel the way in which Todd Collins, our guy, and you guys think about it is so different than I do as a non-expert. So my default as a non-expert is to say: get a good consultant. To be honest, I couldn't even conceive of the right way to approach this stuff the way that you guys do. So I feel like I'm not a great respondent on that one, because I feel like it's the dialogue. It was the dialogue with you; it was the dialogue with Todd that really sort of shaped it. So maybe it is: get a domain expert and get deep into the phenomena, without a lot of agenda,

and just explore.

Deep: Explore, like, what is the realm of possibility that could be done? And what do I wanna get done on the business side?

Kelly: Yeah, tie it to the business need. I think that's the other lesson. So, for example, some of our early work was tied to things that would improve quality of care, and quality really wasn't a business need for some of our early customers in this particular place where we were trying to improve it; they just needed access, right? So it depends on where you're at, but if you're on the entrepreneurial side, tie it first to a business need and then build out around that. I guess that might be my only other kind of lesson learned. My curiosity doesn't always run straight toward the business need; there were some things that were just straight interesting.

Deep: You know, you come from that research background, so you and I share that: if we get fascinated, we just go down that path. And I think that's actually a really good thing to do with machine learning: if you get fascinated, just chase it for a while. You don't have to resolve immediately exactly how it's gonna change your product or business. So I'm gonna end with one last question. Let's fast forward 10 years from now. You've seen a fairly rapid rate of acceleration with all this AI and therapy stuff. You now have a vision of what can happen in just the text-constrained realm, but there's also multimodal stuff coming out, where you could ask all the same questions, but you could say, "render a sympathetic female character to talk back to me and make a movie of it," and that would be the responses. But whatever it is, let's fast forward 10 years. How is the world of therapy changed by AI, and what does that landscape look like?

Kelly: Yeah, I think you are triaged, and you do a lot of self-help all by yourself, and where you want it, you involve your friends and family, as opposed to going and sitting in a room with a person. Instead you're integrated with your actual social network. And if you don't have one, then somehow this helps you build it, given all the evidence about how much that helps your wellbeing.

Deep: And in that world, what's the role of the therapist in that ten-year future?

Kelly: I was trying to think about that. You know, there's part of it that feels so sacrilegious. My early career was all about training therapists, right? And it's a really hard thing. And then healthcare systems hire them, and then they all go away, right?

The turnover is immense, and they don't keep up with the science. So there's part of me that thinks, I don't know if you have therapists. Maybe you just have a therapist once you bump up, like if you can't get the kind of outcomes you want. But then those are not just therapist therapists; they're not just friendly people, like a lot of therapists are. They're actually experts.

So maybe you would train a cadre of specialists who were really top-notch, who could help you once you bumped up. But I guess I'd see it more like peer support that has been potentiated, so that, in fact, that's the main way people seek help.

Deep: That's all for this episode. I'm Deep Dhillon, your host, saying: check back soon for your next AI injection.

In the meantime, if you need help injecting AI into your business, reach out to us at xyonix.com. That's X-Y-O-N-I-X dot com. Whether it's text, audio, video, or other business data, we help all kinds of organizations like yours automatically find and operationalize transformative insights.