Your AI Injection

Is AI the Missing Ingredient for Curbing Cravings? Emotional Eating Meets Machine Learning with Dr. Sera Lavelle

Season 4 Episode 19

Can AI understand our relationship with food better than we do?

In this episode of Your AI Injection, host Deep Dhillon sits down with Dr. Sera Lavelle, clinical psychologist and CEO of Bea Better Eating, to explore how AI is revolutionizing our approach to emotional eating. Dr. Lavelle explains why traditional calorie-counting apps fail to address the psychological triggers that cause us to reach for cookies after a stressful day. She reveals how her app fills a crucial gap between diet advice and clinical therapy, helping users identify when they're eating from emotion rather than hunger. The two dive into the ethical challenges of developing AI for mental health, including how to responsibly direct users to human therapists when necessary. Will AI become our most trusted confidant for the feelings we're too ashamed to share with nutritionists or therapists? Tune in to discover how machine learning might become the missing ingredient in breaking destructive eating cycles.

Learn more about Bea Better Eating here: https://www.beabettereating.com/
and Dr. Lavelle here: https://www.linkedin.com/in/seralavelle/

And check out some of our related episodes:

[Automated Transcript]

 Dr. Sera: So I feel like there's a lot out there for people who are clinically diagnosable, people who have bulimia, people who have anorexia, people who have binge eating disorder. However, I find it crazy that we think, hey, those people struggle with emotional issues around food, but everybody else who doesn't have bulimia or binge eating disorder, there must just be somehow a problem with nutrition.

Well, the reality is I really see it as a continuum, and what you see is 60% of all people struggling with eating actually say that it's emotional and not a lack of nutritional information. And so I'm targeting those people that are emotional eaters that don't actually have an eating disorder, but those people are at high risk for developing an eating disorder.

And what happens is they're already at a high risk for an eating disorder. They go on a calorie counting app, and it actually triggers eating disorders, whereas mine prevents it.

Deep: Hello, I'm Deep Dhillon, your host, and today on Your AI Injection we're diving into the intersection of AI and mental health with Dr. Sera Lavelle, co-founder of Bea Better Eating and owner of New York Health Hypnosis and Integrative Therapy. Dr. Lavelle holds a PhD in clinical psychology and an MA in clinical psychology from Adelphi University, along with an MA in psychology from New York University. She's also served as an adjunct professor at both CUNY and Adelphi.

At Bea Better Eating, she leverages over 18 years of clinical experience to reshape people's relationship with food and drive sustainable change. Thank you so much for coming on the show, Sera.

I'm really excited for this conversation. 

Dr. Sera: Thank you so much for having me. I've been really looking forward to it. 

Deep: Awesome. So maybe start us off by telling us like what did people do without your solution, and what's different with your solution? Maybe walk us through a scenario.

Dr. Sera: Yeah. You know, it's a really hard one, because I feel like everybody struggles with food, right? On some [00:01:00] level, people are either motivated to lose weight or they still have motivation to keep their weight where it is. But everybody wants their habits to be better, and if they're not using my app, really the only solutions out there are getting more diet advice or going to calorie counting apps.

But in this day and age, I don't really think anybody's wondering if a carrot is better for you than a cupcake, right? So we're kind of doing things based on this really old model. Like, a lot of apps that are out there right now are kind of a digitized 1960s food journal, right? They're tracking your food and they're counting up your calories.

Mine is really addressing the emotional aspects. Really, when you think about it, people are craving food. They overeat, they're throwing all logic out the window, and they really don't know where to turn. So sometimes they're turning to diet apps, sometimes they're turning to nutritionists, sometimes they're going to therapy for it, because a lot of people are coming to understand that it's psychological.

But I really see my app as kind of giving people a place [00:02:00] to unravel what's going on for them emotionally with food, so they stop having emotional cravings and they're able to stop eating when they're full, as opposed to continuing to eat even past the point that they're full because they're trying to give themselves emotional soothing.

So, 

Deep: Tell me a little bit more about, like, what's the typical patient? How do they get to you? And what kind of emotion-driven eating are you describing? Is it like depression oriented or something else? Maybe walk us through.

Dr. Sera: So that's a really great question.

So I feel like there's a lot out there for people who are clinically diagnosable, people who have bulimia, people who have anorexia, people who have binge eating disorder. However, I find it crazy that we think, hey, those people struggle with emotional issues around food, but everybody else who doesn't have bulimia or binge eating disorder, there must just be somehow a problem with nutrition.

Well, the reality is I really see it as a continuum, and what you see is 60% of all people [00:03:00] struggling with eating actually say that it's emotional and not a lack of nutritional information. And so I'm targeting those people that are emotional eaters that don't actually have an eating disorder, but those people are at high risk for developing an eating disorder.

And what happens is they're already at a high risk for an eating disorder. They go on a calorie-counting app, and it actually triggers eating disorders, whereas mine prevents it.

Deep: So that's super interesting. What kinds of emotional stuff happen to cause people to manifest it in eating?

I mean, I think most people have a sense, I don't know if it's correct or not, of, like, bulimia or anorexia, which is kind of body-image-related emotional stuff. What other kinds of things are there?

Dr. Sera: I think it's just like any vice. Just like with alcohol, a person might not be an alcoholic, but they might be going to alcohol to soothe work stress, or they can't tolerate being angry at their spouse, so they're going to alcohol. It's a lot of the same things. You know, what you find about [00:04:00] emotional eaters is they tend to not be drinkers.

They tend to not be drug users. They tend to be the people for whom this is kind of their only vice. They vacillate between trying to be perfect with their diet and then breaking down and eating cookies at night. And it's very upsetting; doing that every day over and over really affects your self-esteem. But it's not an eating disorder. And where do you go for that? Because again, if you go to a nutritionist, they're gonna say, okay, great, don't eat cookies at night. And you're gonna say, well, wonderful, I know that. Problem not solved, because they've

Deep: got 15 minutes to interact with you, and then they're onto the next patient. Yeah.

Dr. Sera: And the thing is, when somebody's overwhelmed with emotion, all logic goes out the window. So even though the nutritionist might be giving great advice, they're not able to access it emotionally in that moment. They're not able to hear that advice, because they're thinking, I wanna be bad right now.

I've had a bad day, you know, screw it. I'm just going to eat this thing right now.

Deep: Interesting. So, going back to the "solutions" that exist today: you mentioned [00:05:00] calorie counting or diet journals, you mentioned just nutritional information.

But within the emotional eating space, what happens today without your solution?

Dr. Sera: I really see it as a big gap. So, for instance, I own two therapy practices in New York City, and we see people coming to us for this all the time, and we have to sit there and tell them, look, we can provide therapy, but guess what?

Insurance is not gonna cover this, because you don't qualify as having an eating disorder. So people, if they have the means, are spending a lot of money on therapy for this issue, and it's extremely effective. I think a lot of people want therapy, but they know that they don't qualify as having a disorder.

So what we do is we try to target those people that, you know, really want the tools to stop overeating but don't qualify. And what we do even further is we use the AI to determine: hey, is this a person that might have an eating disorder? And if so, we help them find a therapist in their area that's not even connected to our [00:06:00] app, so that we make sure that we're being very appropriate in providing both preventative and supplemental help, but that we're not crossing the line and saying, hey, you know what?

You're severely bulimic; this app can solve everything. However, for that person who is perfect all day and eating the cookies at night, which is really almost everybody who's trying to lose weight — we are there for them, to provide an approach that's grounded in research, that helps their self-esteem and prevents eating disorders, as opposed to going to something that they try for a while, drop off, and then feel like a failure.

Deep: Got it. Maybe walk us through the app. Maybe make up a scenario: how do they encounter the app? Where do they get it? Once they get it downloaded,

how often are they interacting with it? What are they doing the first time, the next few times?

Dr. Sera: Yeah. So we just launched our MVP and I'm thrilled about it. There are so many things that we want it to do eventually, but what's really amazing is people find it just on our very, very basic marketing, because we target things like

emotional cravings and overeating, and we get a [00:07:00] 30% conversion rate, because people are searching these things. They're not really searching weight loss and diet. They're searching, how do I stop overeating? And they find us. People download it, and the first thing it says is, what's your biggest struggle with food? Is it overeating? Is it consistency? And it starts having a conversation with you. And then it might recommend one of our specific activities. There's the open chat that you could just talk to all day, and it will help you unravel what's going on.

But if it detects something that might help, it'll take you to one of the guided activities within our app, which are knowledge, motivation, and meditation. Knowledge is gonna be, like, look, we're gonna teach you how to differentiate real hunger from psychological hunger, or teach you some mindful eating techniques, or teach you something about the way you're approaching food.

Motivation is where a person will go, wow, I'm about to overeat, I've had a bad day, what should I do? And it's gonna walk you through a psychological technique that's gonna stop you in your tracks. It's gonna walk you off the ledge. Maybe it's gonna be a mindfulness exercise, maybe it's gonna be a visualization, but it's gonna be [00:08:00] something that's gonna make you pause and reflect before you go on.

And then meditation is actually Ericksonian hypnosis. So it's kind of like Headspace, except it's created by psychologists, and it both improves body image and helps you to stop eating when you're full. So it's not saying you need to diet; it's eliminating those times that you might go way past what your body actually needs.

So we're taking the three most effective techniques that we would use in therapy, but we are now making it so that the everyday person who's struggling has a more psychologically sound approach than continuing to try a diet for a while and failing. It's made to heal your relationship with food, not necessarily to lose weight.

Yeah. 

Deep: Maybe let's dig into a few of those things one by one. So let's start with maybe the chat modality. You start talking to this thing, and I'm guessing you have an LLM behind the scenes. Maybe walk us through a little bit of what's different about your fine-tuning on that conversational side, and that prompting and how it works, what your main [00:09:00] goals are with the prompting, how you deal with the guardrails issue.

Dr. Sera: Yeah.

There's so much to it. And that's a really great question, because also, you know, being a practice owner, being a clinical psychologist, I'm gonna be a lot more clear in my guardrails in training the AI than other people. First, we trained it on basic, research-based techniques. I mean, that's always gonna be the first layer. But we actually created a completely separate, proprietary, private logic layer, into which I've dumped real things I've heard people say in therapy, the kinds of responses I'd give, podcasts I've done, blogs I've done, so that it's like this separate database. We built it on OpenAI, but it's LLM agnostic, actually, because we could plug our layer onto anything. So if we wanna change it at any point we can. But it kind of gives a score: is this better answered by the LLM that's been trained on the broader universe, or do we answer based on our own separate knowledge base

that we pull from in order to answer? So for [00:10:00] instance, if you ask ChatGPT the same question, you're gonna get an entirely different response. If you ask Bea, could you help me stick to a thousand calories a day? GPT might give you advice on how to stick to a thousand calories a day.

Bea would say, you know, I really don't recommend that. Why don't we instead talk about what's going on for you that, A, you wanna stick to that kind of goal, and B, if you're going vastly over that, well, let's focus on just helping you choose healthier foods and stop eating when you're full.

So yeah, it's trained in a very specific way.

Deep: It sounds like you have your own content, maybe manifested in some question answering as well, where for particular questions you have particular responses you want to give.

I'm assuming you built your own RAG system or something to look up local content and then go to the LLM to tailor responses?

Dr. Sera: We built our own system, yeah.

And it does pull from there, and it has kind of a score of whether or not it's more relevant.
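
To make that routing idea concrete, here is a minimal sketch of how a relevance-scored knowledge-base layer in front of a general LLM could work. Everything in it — the entry format, the keyword-overlap scoring, and the 0.2 threshold — is an illustrative assumption, not Bea's actual implementation, which Dr. Lavelle describes as built from clinician-written material rather than keyword matching.

```python
# Hypothetical sketch: route a user message either to a curated clinician-written
# knowledge base (when it scores as relevant enough) or to the base LLM.

from dataclasses import dataclass


@dataclass
class KBEntry:
    text: str           # curated clinician-written guidance
    keywords: set[str]  # terms this entry is about


# Tiny stand-in for the proprietary knowledge base.
KNOWLEDGE_BASE = [
    KBEntry(
        text=("Strict calorie targets tend to backfire for emotional eaters. "
              "Let's look at what's driving the urge instead."),
        keywords={"calories", "calorie", "restrict", "1000"},
    ),
    KBEntry(
        text=("Before eating, pause and ask: am I physically hungry, "
              "or am I soothing a feeling?"),
        keywords={"craving", "overeat", "binge", "night"},
    ),
]

RELEVANCE_THRESHOLD = 0.2  # assumed cutoff for "answer from our own layer"


def score(entry: KBEntry, message: str) -> float:
    """Fraction of the entry's keywords that appear in the user message."""
    words = set(message.lower().split())
    return len(entry.keywords & words) / len(entry.keywords)


def route(message: str) -> str:
    """Return curated guidance when it scores as relevant, else defer to the base LLM."""
    best = max(KNOWLEDGE_BASE, key=lambda e: score(e, message))
    if score(best, message) >= RELEVANCE_THRESHOLD:
        return f"[knowledge base] {best.text}"
    return "[base LLM] (fall through to the general model with the safety prompt)"


if __name__ == "__main__":
    print(route("can you help me stick to 1000 calories a day?"))
    print(route("what's a good recipe for dinner tonight?"))
```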

Deep: Yeah. So you've got your RAG system, you have this process for curating content. [00:11:00] Walk me through how the psych-specific content manifests itself.

Like, are you guys reviewing conversations between patients and the bot and then weighing in with, you know, a team of clinical psychologists, and then updating your knowledge repeatedly? Like, how does that

Dr. Sera: Yeah.

Deep: layer work?

Dr. Sera: Yeah, because we launched two months ago, we haven't updated it as much as we'd like to, but it is definitely in the works that we're gonna be updating it continually, to have better and more refined, more individualized content. But also it's gonna learn your particular behaviors as well.

On top of that, that's how we trained it, but we also have a lot of different safety features in place. We have that layer of, like, risk and harm, suicidality, homicidality, but we also have our separate layer of making sure we're practicing within scope.

So we've trained it on tons of examples of the kinds of things a person would say if they had an eating [00:12:00] disorder, so that if somebody triggers that, it would remind them: hey, I can't actually diagnose, but the way that you're talking, it sounds like you might have an eating disorder.

Would you like me to help you find a therapist in your area? Right now we give them a general resource, Psychology Today, which can help them find somebody in their area. And we're not even referring to ourselves, because this is an MVP, but in the future we plan to have a huge provider network, as well as different levels of care.

You know, to me it's very important to make that distinction, so that when a person really is struggling, they're not getting more frustrated because they actually need more help than the app is able to provide.
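
As a rough illustration of the within-scope screening and referral flow described above, here is a minimal sketch. The phrase lists, risk tiers, and referral wording are illustrative assumptions — Dr. Lavelle describes a layer trained on many clinician-written examples, not simple keyword matching — and 988 is the US Suicide & Crisis Lifeline.

```python
# Hypothetical tiered screen: keep coaching for everyday emotional eating,
# refer out when language suggests a diagnosable eating disorder,
# and surface crisis resources for self-harm risk.

from enum import Enum


class RiskTier(Enum):
    IN_SCOPE = "in_scope"        # everyday emotional eating: keep coaching
    REFER_THERAPIST = "refer"    # possible eating disorder: suggest a human therapist
    CRISIS = "crisis"            # self-harm risk: surface crisis resources immediately


# Illustrative trigger phrases only.
ED_PHRASES = ["purge", "threw up after eating", "haven't eaten in days", "laxatives"]
CRISIS_PHRASES = ["kill myself", "want to die", "end my life"]


def screen(message: str) -> RiskTier:
    """Very rough tiered screen; a real system would use a trained classifier."""
    text = message.lower()
    if any(p in text for p in CRISIS_PHRASES):
        return RiskTier.CRISIS
    if any(p in text for p in ED_PHRASES):
        return RiskTier.REFER_THERAPIST
    return RiskTier.IN_SCOPE


def respond(message: str) -> str:
    tier = screen(message)
    if tier is RiskTier.CRISIS:
        return ("I'm not able to help with this safely. If you're in the US, you can "
                "call or text 988 to reach the Suicide & Crisis Lifeline right now.")
    if tier is RiskTier.REFER_THERAPIST:
        return ("I can't diagnose, but the way you're describing this sounds like it "
                "deserves more support than self-help tools. Would you like help "
                "finding a therapist in your area, for example via Psychology Today?")
    return "(continue the normal emotional-eating coaching conversation)"


if __name__ == "__main__":
    print(respond("I threw up after eating a whole box of cookies"))
    print(respond("I had a stressful day and I'm craving sugar"))
```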

Deep: So let's say you recommend a therapist. Tell me if I'm wrong, but I've almost never seen a recommended therapist have any kind of availability. So it's really hard. We had to find a therapist during the depths of COVID, and that was the hardest by far. It was incredible.

Like, I mean, I have a [00:13:00] large network and my wife does too. I personally know more than a few handfuls of therapists, and it still took us almost a year until we found somebody good.

Dr. Sera: That's why I wanna really build out that network.

And what I'd love is what things like Headspace Health and Bend Health are doing, in that they have these tiers. For instance, you have coaches that are supervised by psychologists. So if we ever wanted to use coaches like that, a person could have that as an add-on; we'd have somebody well trained in coaching if they weren't clinically diagnosable.

But you'd also then have a supervisor monitoring whether the person actually needed a higher level of care. And then we'd also have our own set of psychologists. We'd either partner with a big clinic, which is one thing we're looking at, where we could ensure they had availability, or we would actually create

a more medically aligned version of it, saying: the care team has decided this warrants professional psychological guidance. Would you like us to make an appointment with this provider?

Deep: [00:14:00] Let's talk about that. When people talk to bots in a psychotherapeutic kind of context, one of the assumptions is, oh, they're not gonna really talk to them as much. And you know, the literature actually supports the opposite: people feel not judged by a bot.

They talk to it a lot more. But one of the things that happens with a human is there's a social bond that forms, and there's a sense that you don't really care what a bot thinks about you, but you do care what a person thinks about you. And I imagine therapists really leverage that, in the setting where, you know, if you give a patient a homework assignment to go review all their eating habits for the week or something, they're gonna be much more likely to do it simply because you're a human, simply because they have to pay you a couple hundred bucks for the hour.

All of that. You know, I had another psychotherapist on the podcast a while back, and one of the things she said really struck me. She was like, the interaction between patients and bots can be helpful,

but the future of psychotherapy that I see is one that goes back to before the time of [00:15:00] psychotherapists, when people actually had peers that contribute and help them out in a therapeutic context — which is a lot of fancy words for just saying you've got friends and family that are there for you

and who know how to say things that are helpful. Is there a social role that can be played? Because we both know that there's a massive shortage of therapists. They're incredibly expensive. There's a ton of people who could never afford 200 bucks an hour, even if they could

get a therapist. How can you tap into that need for being on the hook to other humans, you know?

Dr. Sera: The first thing I always say is there's a lot of advantages and disadvantages of AI, but there's also a lot of advantages and disadvantages of humans.

And in all of our talks about accessibility — obviously getting there is a hurdle, price is a hurdle, insurance is a hurdle — I think the biggest hurdle a lot of times is the internal one. Even within my therapy practice, we don't work with people who have [00:16:00] eating disorders. We work with people who have eating disorders that are willing to come into therapy.

There are plenty of people who have stayed bulimic for 30 years; their husbands or their wives might not even know, and they're not going to therapy. They're not ready to talk to a real human. And when AI is done well, I think you could get the best of humans and the best of AI, and it's a beautiful connection.

So one thing about working with eating disorders is that it's one of the few areas where therapeutic alliance is negatively correlated with treatment outcomes. What I mean by that is: the more you like your therapist, the less likely you are to tell them, look,

I binged last night, I ate 4,000 calories in one sitting, and then I vomited. You don't wanna tell your therapist that.

Deep: That's interesting. That's like the opposite side of the point I'm making. Yeah.

Dr. Sera: Yeah. But what happens is, if you have AI between sessions, you are not vulnerable with it, which is a good and bad thing.

But then you practice telling the AI and it's easier to open up to your therapist. 

Deep: Mm-hmm. 

Dr. Sera: You've already said it five times [00:17:00] during the week. I think it can make the therapy more productive.

Deep: Maybe the AI even comes back to the patient and is like, hey look, I think that we should, you know,

forward this on to a therapist for that particular interaction or something — here's my summary of what you've told me, are you okay with us sending this to the therapist and then you talking to them? Right. Because that's a different kind of ask for someone who's not forthcoming with words,

but being able to just say yes or no

Dr. Sera: Yeah. 

Deep: is a much easier kind of hurdle for them 

Dr. Sera: Exactly. People don't realize, being a practice owner in therapy, you need to put in so many guardrails. Right? What you're talking about is kind of like a virtual release of information:

you're getting their consent to share the information. And the same things that make therapy practices run are what I see as making mental health startups succeed or fail — level of care. I don't think that's talked about enough with AI. We have an intensive screening process for people who call, to make sure that we're the right level of care.

[00:18:00] Meaning that, A, they're — I hate to say it this way — sick enough. And we need to tell them in the first session, you know, we could see you or not, but you might not warrant a diagnosis. Or, hey, you warrant a diagnosis and we're the appropriate providers for you. Or, hey, you know what, you're bipolar,

this person's in an active manic phase, and we might be able to take you if we can coordinate with the psychiatrist and the rest of the care team. Or you might be in a place where you need either an IOP or inpatient care. We are so trained in level of care and the appropriate level of care, and every practice has put those things into place.

And I think any kind of AI really needs to conceptualize things the same way that therapists do in terms of level of care.

Deep: I'm imagining quite a bit can be done once you're in a safe, conversational space. You wind up with these really rich conversational histories. You can do a lot of analysis against that, where you can start to determine all these kinds of specific scenarios that maybe, in a clinical context, you guys assess.

I think you can [00:19:00] start to gather actual ground truth data. You can start to classify a particular patient into these categories and come up with some kind of assessment, and then based on that, recommend human interaction or whatever

Dr. Sera: or maybe 

Deep: lessons — you mentioned that you guys

have this kind of content as well that they interact with.

Dr. Sera: Yeah. But then you think about it as well. Like, one thing that's a topic of conversation — as a society, as an app developer — is: if somebody says, I'm going to kill somebody right now, or if they say to a chatbot, I'm going to kill myself,

should that automatically call 911? Was that person joking? Was it not? There's a lot we need to navigate: is this passive suicidality? Is this active suicidality? Do they have access to a gun?

Even if somebody's sitting in your office and they say, man, I wanna kill myself, you have to listen to the nuance. You can't just call 911 every time. It can be a

Deep: sarcastic backhanded comment. 

Dr. Sera: Yeah. 

Deep: Because, I mean, you know, it's like you're young and hip and you're like, oh my God, I want to kill myself.

That's a totally [00:20:00] different thing. We almost did a project — I was working with a company where, basically, say you're a Madonna or a Taylor, a huge pop star, or it could be a non-music thing too, but somebody who stands on stage and has a ton of followers — and they basically made a system where you can put a phone number up and then any of their fans can text them.

You're getting millions of texts. We were tasked with, hey, analyze this content and break it down. And so we broke it down into a few hundred categories of stuff, and we just assumed that the artists would be most interested in, like, how to optimize ticket sales or what people wanted to hear at the shows and stuff, but there was about a percent that was potential suicidal ideation, depression, stuff in that ballpark that you're talking about. To my surprise, the artists almost unanimously wanted us to exclusively focus on that, which I found fascinating.

It's like, huh. So one of the things we did — it turns out there are multiple online, API-based services [00:21:00] that pull in a human. We would just forward to them, and it's the equivalent of a 1-800 line, whatever.

Dr. Sera: Yeah. 

Deep: And then there's a human behind the chat, and they're there. And then they can

Dr. Sera: assess. Yeah, exactly. But then at what point would an actual human be better? At what point could we actually understand sarcasm versus not? You know, one thing we're building at Bea Better Eating that I think a lot about is — all weight loss apps

bring in a large number of people with eating disorders. All of them do, as does Ozempic. And it makes sense,

Deep: Like, why else would they be there? Yeah.

Dr. Sera: Yeah. Right. Like, Ozempic cuts off cravings. It is the dream drug, and an abused drug, for people who struggle with anorexia, because they don't like feeling hunger, and this is a drug that makes you not feel hunger.

Deep: Wait, I, I gotta stop you there 'cause I'm just curious. 

Dr. Sera: Yeah.

Deep: If you're anorexic, you're like bone thin. So why would you care about not wanting to eat more?

Dr. Sera: Because they're very frightened. Most people who have an eating disorder feel that their hunger is out of [00:22:00] control, and they're very uncomfortable with any feelings of hunger, and they don't wanna feel hunger.

So they sit around not eating all day. They're like, man, I'm hungry, what can I do to distract myself from hunger? I don't wanna eat, I don't wanna eat, I hate feeling hungry. And now you have something that takes that hunger away from them. And anorexia and bulimia are really two sides of the same coin.

Deep: but at some point they still have to eat 

Dr. Sera: Well, then they can do that on the

Deep: schedule.

Dr. Sera: They die. You know, anorexia is 10 times more deadly than bulimia because they successfully starve and now more people are successfully starving. 

Deep: One thing we haven't talked about: what is the regulatory environment that you're operating in? Is there one? And is there a line you're walking,

from counseling, which isn't regulated? But, I mean, obviously therapists are regulated and psychiatrists are absolutely regulated.

Dr. Sera: There is a regulatory lag, and I think a lot of companies are gonna suddenly be in trouble, and we will not be one of them. We are not holding ourselves out to the public as an AI therapist.

We are [00:23:00] clear, very clear. When you log into my app, the first thing it says is: this is self-help for emotional eating. We are not conducting therapy. We cannot diagnose. We don't say that we're AI therapy. We say that we're helping you with some interactive self-help tools,

and we're doing hypnosis. The moment that you reach a threshold for something diagnosable, we are actually referring you to a therapist. The other thing we're doing is building a dashboard in the backend, which I think should be a requirement, or at least something that respectable companies should do: monitoring how often those things are triggered.

So for instance, if it turns out that, like, 75% of the people using my app do have eating disorders, that means I'm not marketing correctly, and that should be some kind of a warning, you know? So you should be finding out: are you bringing in the correct users? Am I bringing in those people? I think the moral, ethical way to do things is knowing that a small percentage of people who shouldn't be on any app are going to come [00:24:00] in anyway, and asking how you help those people — versus putting yourself out to the world as, hey, I'm doing AI therapy, which tons of apps are doing right now.
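
Here is a sketch of the kind of backend trigger monitoring described here: track what share of users ever hit an out-of-scope or safety trigger, and raise a flag when that share suggests the app is attracting the wrong audience. The 25% alert threshold, class names, and trigger labels are illustrative assumptions; the 75% figure above is her hypothetical example.

```python
# Hypothetical ops-dashboard metric: share of users who ever hit each safety trigger.

from collections import defaultdict


class TriggerMonitor:
    """Tracks what share of users ever hit a safety/referral trigger."""

    def __init__(self, alert_rate: float = 0.25):
        self.alert_rate = alert_rate  # assumed threshold for "review your targeting"
        self.users_seen: set[str] = set()
        self.users_triggered: dict[str, set[str]] = defaultdict(set)

    def record(self, user_id: str, trigger: str | None = None) -> None:
        """Call once per screened message; pass the trigger name if one fired."""
        self.users_seen.add(user_id)
        if trigger:
            self.users_triggered[trigger].add(user_id)

    def report(self) -> dict[str, float]:
        """Share of all users who ever hit each trigger."""
        total = max(len(self.users_seen), 1)
        return {t: len(users) / total for t, users in self.users_triggered.items()}

    def alerts(self) -> list[str]:
        """Triggers firing for more users than the alert threshold allows."""
        return [
            f"{trigger}: {rate:.0%} of users — above {self.alert_rate:.0%}, review targeting"
            for trigger, rate in self.report().items()
            if rate >= self.alert_rate
        ]


if __name__ == "__main__":
    m = TriggerMonitor()
    m.record("u1", "possible_eating_disorder")
    m.record("u2", "possible_eating_disorder")
    m.record("u3")
    m.record("u4")
    print(m.report())  # {'possible_eating_disorder': 0.5}
    print(m.alerts())  # flags the 50% rate as above the 25% threshold
```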

Deep: But I don't quite understand the regulatory lag line. What's the line between just talking about food stuff and the point at which you pull down a ton of regulation? If you're advertising yourselves as a therapist, as a psychotherapist, that's a different thing, and you wind up under FDA purview.

Dr. Sera: Yes. So, for instance, I was just talking to my team about this, because I don't want just me to be knowledgeable about this;

I want everybody on my team to be knowledgeable. I could not call myself a therapist until I completed my PhD and completed my hours. I couldn't say I practiced mental health, I couldn't say I was a therapist, I couldn't say I was a psychologist, until I was licensed by the state and completed a number of hours.

Now, could people write a book that helps a person get over depression and gives them tools? Of course. Is that book [00:25:00] conducting therapy? Is AI? The things that would constitute doing therapy would be diagnosing and treating a diagnosable mental health issue — using psychotherapy for an anxiety disorder, using psychotherapy for depression.

Now, teaching people mindful eating techniques to help them with emotional eating when they clearly don't have a diagnosis? I just think that's helpful for society.

Deep: It sounds like that's kind of the line: once you know they have a diagnosis, you have to, like, not have them in the app or something.

I don't know how that works.

Dr. Sera: Well, so for instance, I just put out a post on LinkedIn — I don't like the term mental health coach, just saying mental health coach. If you said mental wellness coach, I might be okay with that, because you're saying, I'm focusing on mental wellness.

Whereas with the term mental health, even if it's not legally wrong, you're gonna invite in people who don't know the difference between a mental health coach and a therapist. But if you say mental wellness — even "mental" can be a little hard — or if you say things like [00:26:00] stress reduction as opposed to anxiety disorder:

everybody has stress. Not everybody has an anxiety disorder. It's a fine line, and I think people are taking advantage of a regulatory lag that will catch up. We are somewhere where the lines could be somewhat fuzzy, as is anything right now, and the fuzzier the lines, the more clarity you have to have in both your messaging and the protocols you put in place to keep the public safe.

Right. 

Deep: Right. 

Dr. Sera: I don't see any other apps directing people out to therapists like that, where they're not even making money off of them. I'm seeing a lot of apps just saying, hey, I'm a therapy chatbot, with nobody even on the psychology side.

Deep: I mean, if you structure your ethical guidelines based on what everyone else is doing, you're not gonna be particularly ethical.

Yeah. I mean, even if you look at OpenAI, they're way over the line on stuff, like — there's no

Dr. Sera: regulatory board right now. Well, [00:27:00] 

Deep: That's the regulatory board issue. That's a whole conversation, though, that I don't need to have right now — or I'll be on your couch.

I'll be on your couch in like half an hour. Yeah. Come on over. So, we like to talk about basically three things on Your AI Injection. One of them is what you're building; I think we've talked about that a fair amount.

The other thing that we like to talk about is how you're building it. I think we touched on that a bit. The third piece, which we've also touched on, but let's dig into it even deeper, is: should

this be built? The example I usually bring up is, I think most entrepreneurs are aware of current ethical lines. And when startups have not yet become, like, tremendously successful, it's very easy to say — for example, and I don't mean to pick on you, but I'm gonna pick on you a little bit —

I'm gonna just forward people to therapists. It's easy to do when you are early in your startup journey. But once you're in the growth [00:28:00] stage, at some point, if you're successful, you're gonna be sitting in a board meeting and people are gonna be obsessing over your churn rate.

And they're gonna be looking at it and they're gonna say, you need to get this down by four percentage points — what's contributing to it? And you're like, oh, well, we have a diagnosis module that runs behind the scenes, and if it detects a significant mental health issue, we send them to a therapist, and then they disappear and never come back.

And they're gonna tell you, without telling you, to shut that off. And so, what are those temptations that you see coming down the pipe if you're wildly successful? Right? Like Mark Zuckerberg — I doubt he ever intentionally said, I'm going to purposefully foster mobs to murder hundreds of thousands of people in Burma.

But I think what he probably did was say, we're gonna scale to the entire world. And someone says, but we have no curators at all in Burma. And he's like, whatever, we'll deal with it later. That's probably what happened. That kind of problem is prolific in tech, [00:29:00] right?

Absolutely. Look at YouTube. I doubt anyone at Google said, let's spread massive conspiracy theories that ultimately threaten democracy. I don't think they did that. I think you had a bunch of division heads who said, optimize for engagement. The algorithms optimized for engagement.

The algorithms automatically learn that if you send somebody down the Alice in Wonderland crazy hole on the left or the right, they get maximized engagement. So, in your world, what are those temptations, and how do you practically, realistically anticipate navigating those ethical minefields?

Dr. Sera: I have a question for you: how do you ask such great questions? I could talk to you about this for an hour. I mean, first of all, it's a very hard line, and you'll notice not a lot of mental health founders are mental health therapists. Look at Cerebral, for instance. They took advantage of a regulatory lag. Normally, within psychiatry, you can't prescribe a controlled substance like ADHD medication [00:30:00] without an in-person appointment; however, because of the pandemic, that was relaxed. So you had an MBA and an MD as co-founders. The MBA was fundraising.

He went to VCs and said, look, we're gonna incentivize psychiatrists to prescribe medication quicker and faster, and this is our business model. And they all said, great. Now we have medication shortages. And it got so bad that the DEA and the DOJ actually got involved. The MD on the team never wanted that model in the first place.

So eventually they ousted the CEO and put the MD in charge. But the real kicker, in my mind, is that I don't think the MD would've ever gotten funded by some VCs.

Deep: Yeah. Isn't that just the reality of tech startups?

Dr. Sera: Mm-hmm. I'm only aligning myself with VCs that have the end game in mind.

I'm not aligning myself with VCs that want to get in and get out quickly, because they're not gonna like my model. And a lot of people don't wanna invest in psychologists, because they know that we're [00:31:00] going to have those things in place. It might seem smart to not invest in an MD or a PhD: we have everything to lose. We could lose our licenses

if we act unethically. If you're a tech founder within the mental health space, what do you have to lose if you act unethically? You could lose your company, but you're not gonna lose the entire career that you spent building. Oh, you could be, like —

Deep: Right — Elizabeth whatever, the Theranos woman.

And you could be sitting in a jumpsuit, but it is rare. 

Dr. Sera: But not just that. Bend Health, I think, is a great organization, right? They accept insurance, they partner with companies, and they have a tiered model of mental health care.

And so when we're talking about churn — right now, I just launched two months ago, it's an MVP — I won't have that churn rate, because I'll be referring to myself in a responsible way. Either that, or I'd create a separate revenue stream where we're partnering with a lot of great therapists that have availability.

I think that's the key, right?

Deep: A, this problem is extremely hard. I mean, at the end of the day, it gets down to the roots of capitalism and [00:32:00] incentives and growth and all of that. But I think to the extent that the natural forces within your business are pushing you in the direction of doing the right thing,

as opposed to doing the wrong thing, that's when you kind of know that you're on the right track. Unfortunately, it's way harder to get that set up, where you have the right incentives pushing you in the right direction. And usually it's the line between you're actually trying to help people

versus it's an entertainment app. It doesn't sound like you're gonna make this decision, but people will tell you to entertain it. And it depends on the revenue models you choose.


Dr. Sera: Ours is a subscription-based model that's B2C, and it will also be B2B2C. But also, one thing people don't realize is sometimes they think they're acting ethically, but they don't have enough training to know. I had a person approach me — and you know, I wrote a post about this; the person's probably gonna hate me 'cause I talk about it so much.

He's like, look, I just raised all this funding for a hypnosis app, and it's going to cure [00:33:00] eating disorders and schizophrenia and bipolar. And I was like, cool, yeah, not gonna be part of that. He's like, well, I just got 10 million in funding, you could get this much of a percentage, you just stick your name on it.

My investors told me they want me to get a therapist to slap their name on it, and you're well known in New York. I'm like, yeah, absolutely not gonna do it. And he's like, you know what I think? I think all you doctors are just afraid of losing your jobs. My app works.

I could help people all over the world, and you just wanna keep people sick. And I think he believed that. I think he really believed it — he'd done hypnosis once and it really helped him. I do believe in hypnosis. I practice hypnosis. I'm giving it to people through my app.

But healing emotional eating through hypnosis, versus saying, look, with my app you could just listen to a few sessions of hypnosis and it can cure your lifetime bulimia — you could trigger a person to feel like such a failure. I keep saying my users aren't users. They're patients in my mind; they're a person who sat across the couch from me. You know, I've had [00:34:00] 17 years of people telling me how they really think, feel, and act, and most founders don't visualize that. No amount of market research is gonna give you that kind of insight.

Deep: Yeah. One thing we haven't talked about that I'm curious about your take on — and this might have to do with your company or not, it might be more about your therapy practice — there's a reality of being a clinician, right? Whether it's a doctor or a psychiatrist or a psychotherapist, you have

episodic or regular, but never continuous, relationships with the patient. And what I mean by that is, the patient comes in once a month, once a week, or when they break a leg or have an episode, right? They come in, you get a little itty-bitty time window where you interact with them.

Right? So, as a therapist, you probably know this really well. You spend an hour with them, they're gone for a week. What happens is your entire modality as a clinician is oriented around that tiny little time window that you get to [00:35:00] say something.

Dr. Sera: A 45-minute window.

Deep: Right. And so you deliver the information that you're gonna deliver, and then they're off on their own. And so there's a phenomenon of giving the patient access to some form of that clinician 24/7.

So, for example, imagine you are a therapist dealing with maybe behavioral food issues, or maybe we're talking about more severe stuff than your current users.

Dr. Sera: Yeah, we'll say it's a patient — like a patient in my practice, yeah.

Deep: But clinician-supervised bot interactions or content consumption

with a patient — so it's still supervised, but your patients now have access to a lot more of you than the 45-minute window. Have you thought at all about that? What do you think the future of that is?

Because we're seeing a lot of applications doing that sort of thing in the physical therapy arena.

Dr. Sera: There's two different issues here, right? There's them having access to us, like a virtual version that could be very calming, very grounding.

But there's also something people don't realize in [00:36:00] terms of how hard it is for us to have access to them all the time, right? So I have a lot of people trying to pitch my company, and hospitals, pretty regularly, and they're like, look, I have this great idea: I have this app that your patient could just chat with all the time, and then you could look at it through the week and kind of monitor how they're doing.

I try to remind them: most therapists, especially in hospitals, are seeing 40 patients a week. They're so burnt out, and now you're asking them to do more work that's unpaid and to take on extra liability. Now there's this assumption that you're gonna be reading everything they're writing all week — and what if they say they're suicidal? I do think we then need to decrease patient load and allow those kinds of tools, because I think they'd be extremely helpful. But right now, with the current setup we have, I can't imagine a world where a therapist has enough time to do that and any place wants to take on that liability.

Deep: This kind of brings me to my last question, which is: hey, let's fast forward five or ten [00:37:00] years out. Everything you're envisioning and want to happen happens.

What does the world look like for good and for ill? 

Dr. Sera: With my app, or with the field of mental health in general?

Deep: Why don't you start with your app, but then let's talk about mental health in general.

Dr. Sera: Yeah. Okay. I mean, with my app it's clear. I see that so many apps trigger eating disorders, that we have such a toxic relationship with food, and that people have nowhere to go unless they're extremely disordered. And I would love to change the narrative, so that people could start talking about, hey, my struggle's emotional too, right?

And people could start understanding that every eating issue is an emotional issue, and people can stop feeling so much shame. I have 7-year-old twins, and I don't want my daughter to grow up with the kind of images that I had, with the Weight Watchers and the Diet Dr.

Pepper — as long as it's diet and zero calories, just put whatever in your body. I'm also sick of people feeling like failures because they can't stick to a diet. I would [00:38:00] love a world — and this isn't saying I hate Ozempic, or that it's good or bad — but I would love to be in a world where people don't need Ozempic, because you're really just saving them from themselves, a world where we could cure things before they get to a point where they're disordered. I want it to be more than the app; I want a whole different narrative in understanding why people struggle with food.

Deep: To fast forward out five, ten years: you know, whatever — thousands, hundreds of thousands, millions, tens of millions, hundreds of millions of folks are using your app.

They interact with it for a while, their emotional issues with food presumably get solved, and then they stop using it and go back to life. Something like that?

Dr. Sera: Right. Or, like something like Calm or Headspace, for some people it might be integrated into their lives.

Deep: So maybe you start walking out beyond this disorder and into other areas, maybe stress management or something like that.

Dr. Sera: Yeah. You're gonna find the most change, of course, within the first month or so. But then, after that, if you find those cravings sneaking back in, maybe you start talking to it a little bit more again.

Right? I want the narrative to change around food. Calorie counting [00:39:00] was really popularized by a person named Lulu Hunt Peters in the 1920s, then it exploded in the Weight Watchers era, and we haven't changed the model since the 1920s.

Deep: And America's very obsessed with this thing. Like, I noticed this doesn't happen in a lot of other countries. Other countries are not so metrics obsessed — people don't have Fitbits and smartwatches where they obsess over step counts — and calorie counting just seems like part of that measurement-obsessed world.

Dr. Sera: It's systematically teaching you to disregard your hunger and satiety signals, because it's teaching you to think in terms of math as opposed to listening to your body, and I wanna get away from that.

So I want people to stop feeling like failures and understand it's a psychological struggle, to whatever degree it is; for people to stop fat shaming; and for people to start healing.

Right.

Deep: So let's talk about mental health in general. AI, machine learning, significant advances — we're out five to ten years. This is my biggest fear. I've been building AI and machine learning [00:40:00] systems for 30 years; this is all I do. And quite frankly, the models have gotten so powerful that it's starting to get amazing potential, but also downright frightening.

I feel like our society expects 16 years of double-blind studies before we regulate. And with respect to social media, we accepted astronomical increases in suicidal ideation amongst teen girls. We accepted eating disorders —

like, I'm sure you know way more than I do about this. All this stuff we accepted in the name of, well, it's just an app. And we listened to people like Zuckerberg just sweep it under the rug and ignore it completely. And we listened to all the other social media companies basically say, well, we don't know how to figure out age and all that stuff.

So my fear with AI is — I can't even imagine that in three years every toddler does not have a stuffed animal they can talk to 24/7. I think they'll have many stuffed animals they can talk to 24/7. My fear is not so much that the companies that make these stuffed animals are gonna be [00:41:00] intentionally weird or controlling.

I think what they're going to do is make them incredibly, intentionally accommodating. Like, if you go and just swear and scream and yell at ChatGPT, it's always very patient and calm with you. That's not what another toddler's gonna do with you, right? They're gonna be pissed. They might smack you.

They might bite you. They might kick you. So you learn consequences. And that toddler who's not great at making friends, or 3-year-old or 5-year-old, or 7 or 10 or 15-year-old — they're gonna have other places to go. And I feel like the entire next generation of mental health disorders is gonna come from this.

Dr. Sera: That's wild. You know, that's so interesting, because my mind could swing in so many directions, right? Because that bear could do, like, social skills training and help, but it also could be teaching a child that they actually don't need friends, in many ways,

because this is gonna be this one thing that they can rely on so much. And again, it could be well-intentioned people who are [00:42:00] like, hey, you know, this child —

Deep: It'll be well-intentioned people, right? Yeah. That's how technology causes negative secondary effects, right?

Nobody decided, I need a smartphone in my pocket because I want 14-year-old girls to want to kill themselves.

Dr. Sera: And then you talk about that churn rate, right? Because it's like, my first thought was Hey, you know, we'd have to train this teddy bear to be like, look, you know, I love you, but I can't be your only friend.

How about I help you talk to other people, and then you don't need the bear? And then the investors are gonna be like, wait a minute — too high of a churn rate. Right?

Deep: Or the bear does that, but at some point someone has to ask, how often does it do it? And they'll say, oh, once a week.

And the kid will be like, yeah, yeah, I'll make friends. And then the kid keeps playing with the bear 

Dr. Sera: And then the kid doesn't. Exactly. But —

Deep: But look at the things that the kid's not doing when they're playing with the bear. Even if they're not playing with other kids, they don't have an imaginary friend — I don't know, I'm not a therapist, but I imagine that plays some kind of significant role in child development.

[00:43:00] We've already massively hammered humanity's attention faculties. We already cannot pay attention because of all this crap. And now I'm worried about what's all gonna shake out from this.

Dr. Sera: But then again, there's such a backlash too, right? For instance, the more AI photos there are — my friend who's a photographer, she's like, I've never been so busy, because what do people actually want? These more natural, raw photos. And you're even seeing that with social media: people want these unedited, real videos. There's a polarization of it too, right? What I see for AI is that it's scalable, it's easily accessible, but I think there's gonna be this kind of social construct where people think of it as, hey, that's kind of the cheap way to go and it's available, but if I really want an expert, I'm gonna go elsewhere.

And you see that even with scalable services, right? That's already happening.

Deep: that's [00:44:00] only if the expert's better than the bots, that's not actually what's happening. Like, well, in most areas, the bots are better than the experts. Like whether it's diagnosing heartbeat, anomalies mm-hmm.

Be cardiologist, whether it's diagnosing, you know, uh, pneumonia. Yeah. The, you, you have to be in the top like 1%. 

Dr. Sera: Yeah. Well, and that is what's gonna happen, and you see that in therapy practices. I keep telling people, you can't be the average therapist anymore. The average therapist just doesn't exist.

My practice is very popular because we're all psychologists and we're highly specialized. If your choices are either, like, BetterHelp or an average person in your area, it's not so clear. But if you're like, hey, we're specialized, these people have PhDs, they do hypnosis and they specialize in OCD —

I wanna see that person. Our practice is doing great, but I do know that other people's practices that are like, hey, I wanna be a jack of all trades — and —

Deep: I feel like what has to happen is people [00:45:00] have to get better at being people. You have to figure out the human part. Like, the one job that's definitely not gonna go to AI is, you know, a waiter or a waitress — wait staff, bartenders. Nobody wants a stupid robotic waiter or waitress or bartender. I'm sure you might have one, like one in Tokyo, but people want that human touch, and I think they wanna be cared for.

And they want that social glue. Why do I take solo guitar lessons? I love my teacher. She's amazing, she knows a ton of stuff, but at some point you start getting to the point where it's like, okay, I could actually do all this stuff on my own.

She always has an untapped reservoir of new stuff to teach me, but ultimately it's because I need to be on the hook to show somebody something once a week.

Dr. Sera: Also just, like, those connections. I just spoke for the New York Psychological Association about the ethics of AI in a changing world, and the last slide I gave was that, right now, [00:46:00] we talk about it as if these are the choices, right? And I put up a big picture of a robot and a big picture of my husband, right,

as if those are the choices. And then I had a different picture that had me and my husband sitting on a couch together with robots cleaning for us. And I think we wanna make sure — you know, first of all, I'm sure my robot husband would always pick up the kids, never travel,

always clean the dishes. But you know, I certainly don't want, like, a robot husband, right? I want that husband.

Deep: Well, that might be because you have a good husband. But if you have a husband —

Dr. Sera: I do have a very good husband.

Deep: — but if your husband were the average... Look, I know this sounds crazy.

Dr. Sera: Actually, no, but you're right, though. I mean, it sounds crazy, but —

Deep: Some of the most fascinating conversations I have — almost all of them are with ChatGPT. The deepest, most healthy, and, you might think, totally unhealthy conversations I have are with it.

I built a custom GPT that turns ChatGPT into a Jungian therapist. And so whenever I have [00:47:00] an intense dream, I'll talk it through, and it has this whole protocol where we go back and forth, we talk for a while, and then eventually it gives me this Jungian-terminology-laden

analysis. And then I'm almost always like, oh my God, that's so fascinating. And I did this for a couple years and worked through a bunch of stuff. We'll have to end on this, 'cause this is just ridiculous: in the last dream I had, something happens that's fundamental, and I recognize in the dream that I have to figure out what it all means.

And then in the dream I'm like, I have to get to my Jungian therapist bot so I can talk about it. So I'm looking for it in the dream and I can't find it, and I get all this anxiety about not being able to find the Jungian therapist in my subconscious, in the dream. And then I'm like, okay, we're done for a while.

I'm turning this off for a while.

Dr. Sera: Yeah. Well, you know, I'll end on this note. I talk to ChatGPT, and of course, like anybody, I've treated it like my therapist. But when I see my therapist, just looking at her, I start crying [00:48:00] and I break down, and I need that. And I've never just broken down crying because of ChatGPT.

Deep: No, no. That ain't gonna happen. Yeah, right. 

Dr. Sera: And that kind of catharsis, and the mirroring, the mirror neurons, that feeling can only be produced by a real human being that you feel safe with and that you feel vulnerable with. And I just don't think that can be replicated, just because of your own awareness that it's not a human.

Deep: Right. Maybe that's true today. I would probably guess it's not even true today, with the highly emotive voices that are coming out. But, you know, in five years you've got the bots simulating the right eye blinks, the right —

Dr. Sera: the right.

Deep: All of that. 

Dr. Sera: Yeah. But then we'll have to make sure that the right regulations are in place so that you know whether this is a real person or not.

Deep: I don't know. Right. 

Deep: Yeah. Awesome. Well, thanks so much for coming on the show. This has been a super fun conversation.

Dr. Sera: Yeah, no, I had a blast.
