Your AI Injection

Using AI To Assist Physicians With Dr. Tom Lendvay

September 22, 2021 · Season 1, Episode 10

This episode of Your AI Injection features Dr. Tom Lendvay, a physician at Seattle Children's Hospital and an associate professor of urology at the University of Washington. Alongside his work in the clinical setting, Tom has been involved with a number of AI-driven startups. We spent time talking about the important areas and ways that AI is impacting healthcare, and what the future looks like from the perspective of an AI-savvy physician. We also dive into our microbiomes and how AI might help replenish a weakened gut microbiome.

Automated Transcript

Deep: Hi there, I'm Deep Dhillon. Welcome to Your AI Injection, the podcast where we discuss state-of-the-art techniques in artificial intelligence, with a focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful. Welcome back to Your AI Injection. This week I have Dr. Tom Lendvay with me to talk about AI in medicine. Automation is rapidly changing the medical field by creating new treatment options and making clinical practices more accurate and efficient. Our team here at Xyonix has worked with a lot of medical startups; at times we've built systems to, for example, automatically analyze videos and text reviews of surgical procedures, and we also built a smart stethoscope that automatically detects heartbeat anomalies. So I'm really excited to finally be talking about this on the podcast. Tom's a physician in the Seattle Children's Hospital urology department and an associate professor of urology at the University of Washington. Alongside his work in the clinical setting, Tom's been involved with a number of AI-driven startups. We're going to talk about important areas and ways that AI is impacting health care, and what the future looks like from the perspective of an AI-savvy physician. So, Tom, tell me, how did you get involved with tech startups, and what type of role has AI played in these startups?

Dr. Lendvay: Yeah. So we spun out a company at the University of Washington called Crowd-Sourced Assessment of Technical Skills, or C-SATS. It was led by our CEO, Derek Streat, whom we co-founded it with, along with a biostatistician, Brian Comstock, and Lee White and Tinkerbell Eskew, the grad students. We built it up as basically a software-as-a-service platform with a hardware component that allowed us to take video of surgery out of the OR, chop it up into the key pieces or steps of a case, present it to reviewers, and then score it. What that allowed us to do is build a vast trove of structured, annotated data on surgical performance that we could then use to train machine learning algorithms to eventually automate some of those assessment processes. Like: is there a lot of blood in the field of view? Is there too much cautery or too much thermal energy being used, which could potentially be bad for a patient because you could inadvertently burn something? Or how was the suturing being performed? That's how artificial intelligence can play a role in our profession, and in surgery in particular. And what we were striving for: hospitals and physicians and providers want to buy things, but they also need solutions. They need insight. And that's where I see artificial intelligence providing such helpful feedback to clinicians in practice, and not just in surgery, but in many other areas.

Deep: Let's talk about that a little bit. What are some parts of it? Because obviously we can't clone a physician's brain in its entirety, but there are certain types of things that you do really regularly where machines can actually do quite a good job and help pick up some of the heavy lifting. Where are you seeing some of the exciting stuff on that front?

Dr. Lendvay: Right. So when it comes to vision recognition or identifying movement signatures, that is an area that is perfect for machine learning algorithms and artificial intelligence, because the patterns, the movements that surgeons perform, actually confer patient outcomes. When you're watching a video of a surgeon, you might say, oh, that's a good performer versus not a good performer, but that's also something you can automate by training machine learning algorithms to identify those signatures and then loop it back to the surgeon in real time.

Deep: Yeah. I mean, I think the manual dexterity concept translates to all kinds of things. You think about a basketball player in middle school versus a Michael Jordan: Michael Jordan's going to be as good with his left hand as he is with his right, and certainly better than any of us. You see that sort of differentiation a lot, and you're talking about some really objective things to measure, right, that let you sort of, well, when somebody

Dr. Lendvay: says that this surgeon, or this excavator driver, or this basketball player, to use your analogy, is really smooth or is really deficient, those are general terms. But what does that exactly mean? What AI allows is that you can distill, or deconstruct, what is smooth, what is efficient, down to the motion, to the gestures of the hands and the instruments. That's something you can get when you track instruments or capture video of a performance and then train the machines to be able to tell: this is the signature of smoothness. And what is smoothness? Well, it's deceleration toward a target, it's acceleration between spots, it's the shortest distance between point A and point B. These are all metrics that can be captured. It's the distance traveled, the time traveled, the angular momentum of the instruments. We look at a surgeon operating and our brain is doing that interpretation, but what we want to do is provide objectivity about it, so that you can present it back to a user in a very automated way.
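
To make those metrics concrete, here is a minimal sketch, in Python, of how economy of motion and smoothness might be computed from a tracked instrument-tip trajectory. It is an illustration only, not C-SATS code, and the sampling rate, units, and the jerk-based smoothness proxy are all assumptions.

```python
# Illustrative only: motion metrics of the kind described above, computed from a
# tracked instrument-tip path. `positions` is assumed to be an (N, 3) array of
# x/y/z samples recorded at a fixed rate (here 30 Hz).
import numpy as np

def motion_metrics(positions: np.ndarray, hz: float = 30.0) -> dict:
    dt = 1.0 / hz
    velocity = np.diff(positions, axis=0) / dt        # per-sample velocity vectors
    speed = np.linalg.norm(velocity, axis=1)          # scalar speed profile
    accel = np.diff(speed) / dt                       # scalar acceleration profile

    path_length = speed.sum() * dt                    # total distance traveled
    straight_line = np.linalg.norm(positions[-1] - positions[0])
    economy = straight_line / path_length if path_length else 0.0  # 1.0 = perfectly direct

    return {
        "path_length": path_length,
        "economy_of_motion": economy,                 # closer to 1 = shorter route from A to B
        "mean_speed": speed.mean(),
        "jerkiness": np.abs(np.diff(accel)).mean(),   # rough proxy for (lack of) smoothness
        "duration_s": len(positions) * dt,
    }
```

In a real system these summary numbers would be computed per step of the case and fed back to the surgeon alongside the reviewer scores.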

Deep: So we're talking a little bit about video and imagery and vision in the context of surgery, but there are other places you can use it too, right? Like, you think about it, the otoscope hasn't changed much in the last 50 years or so, but you can put some machine learning behind it. Can you talk to us a little bit about other applications? Say you've got an ear, nose, and throat doc, for example, looking in there, trying to diagnose an ear infection or something in particular. You can also start, in the context of the practice in the hospital, to walk down the stack, where maybe nurse practitioners or other folks, with some of this assistive technology, could be assisted by these capabilities. How do you see the relationship between a health care provider, not just a physician, and these AI assistants?

Dr. Lendvay: Well, one is to reduce, or I should say increase, the probability that the diagnosis is correct. OK, so for example, to use your case of looking in the ear and checking whether there is truly an ear infection or not by evaluating the eardrum: that is something where you would think, oh, well, there's science behind that, and here's what it looks like when it's actively infected and here's what it looks like when it's not. But it's not that clean. Again, medicine is an art, not a science. There are very obvious yes and very obvious no diagnoses, but the gray in between is where I think AI can assist the clinician in increasing the probability of a correct diagnosis, because you can use machine learning algorithms to interpret thousands of images of ears. And yes, you could say, well, I've been a practicing ear, nose, and throat surgeon or a pediatrician for 25 years, and that person has looked in a thousand ears and has been able to follow the patients over time.

Deep: But that's not the same thing as looking in three hundred million ears, which is where some of these models are eventually going to get to. Yes.

Dr. Lendvay: And if you want to accelerate the learning curve of junior physicians: it's not really acceptable that medicine is a profession where you're actually in your prime when you're 20 years out of your training. Like, why does it take so long? Why does the learning curve take so long? What I think AI can help with is accelerating the learning curves for all these providers, so that they're not making the mistakes they would otherwise have to learn from right off the bat. I mean, there is a lot of value in making mistakes and failing, for sure. But you can think, in medicine, really, you

Deep: don't want your medicine to fail. None of us want somebody doing a robotic surgery on us that runs to eight hours on a da Vinci system, something like that.

Dr. Lendvay: Right, it's not fair, it's not good medicine. It's not helpful to the patient or to health care, you know, society-wide. So I really believe that's where AI can help. It's basically increasing the probability of correct diagnoses and correct decisions. So that's the assistance piece to it all. And you had mentioned vision recognition. AI is already used in pathology to make quicker and more automated decisions about what the pathologic findings are. These are static images, microscopic slides, nothing's moving. It's actually the lowest-hanging fruit for artificial intelligence, followed by dermatology, where you have a static picture of some lesion on your skin and you don't have that much variability in the lesions that are out there. And to your point about having 300 million images, you can capture many, many images rapidly, and it's easy to store them; a static image is actually much easier than video to start with. So that's the low-hanging fruit.
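
As an illustration of the static-image case described here, the following is a rough sketch of fine-tuning a small pretrained image classifier on labeled eardrum photos. The folder layout, class names, and training schedule are hypothetical, and it assumes PyTorch and torchvision are installed.

```python
# Sketch only: fine-tune a pretrained CNN to classify static otoscopy images.
# Assumes a hypothetical directory layout: data/otoscopy/{infected,normal}/*.jpg
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("data/otoscopy", transform=tfm)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from a small pretrained backbone and swap in a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # infected / normal

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                          # a real model would train far longer, with validation
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```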

Deep: I mean, when I think about it, I think about it like this: one physician can be as great as one human physician can be, but these AI systems, especially in these narrow roles as pattern recognizers, can start to be as great as the sum of all of our physicians, right? Because you're getting that much more exposure, that many more things to look at. You said one thing that I kind of want to dig in on: you mentioned this notion of expert disagreement. Can you talk to us in a little bit more detail, like, what are the patterns that cause physicians to disagree?

Dr. Lendvay: Training bias plays a big role, which means that I went to this residency and my mentors trained me this way, and you went to that residency and you were trained that way, or in your medical school or your early career. For example, you were trained to use a lot of electrical or thermal energy to stop bleeding in an area, where another person was trained to use very little because it could damage some surrounding tissues. The first surgeon says, well, to my mentors, bleeding was the worst thing and they wanted to stay away from it. Another surgeon says, well, bleeding was not that bad and it didn't impact patient outcomes, but by not using so much thermal energy, I preserved some very critical structures in the area that could potentially be damaged. So training bias plays a big role to start with. Then there are literally signatures of surgeons that are different. We always called it the essence, the essence of surgery that each individual surgeon has, and sometimes a surgeon's personality is translated into their movements. Somebody who tends to be impatient a lot may actually be an impatient surgeon and have hurried movements. Likewise, somebody who is very deliberate in the things they do outside of the operating room may be extremely deliberate to the point where they barely move, where they make such small little movements all the time, and that's translated into how they operate. Those are personality differences that actually can change how you physically move and operate. And then there's just experience. People who have a lot of experience know exactly where they need to go, how they need to get there, and with what forces and movements. To an outside observer it might first look like, whoa, that's really fast, I'm not comfortable with how fast that surgeon's moving, and yet the patient does fine. But the point is, there are different types of surgeons, and that's another reason why you get disagreement when you have experts watching another surgeon operate.

Deep: Are those the areas that you think are ripe for machine learning, or are there other areas, maybe a very different way of thinking about it, that you think are ripe?

Dr. Lendvay: I mean, my lens is through the surgical skills assessment and video recognition piece, but I think the much bigger use of AI is in clinical decision support services, or systems. That's kind of the catch-all term for the use of artificial intelligence for making diagnoses: ingesting the reams of data from the electronic medical record and longitudinal data to be able to predict certain outcomes in patients at the acute level, say in the intensive care unit. The difference in somebody from a Tuesday to a Thursday, based on all the data that's being captured, and understanding whether there's a trend toward something going wrong, or toward something working out better so the patient gets discharged out of the intensive care unit. I think that's where AI support services will be most beneficial, more broadly adopted, and needed, I should say. It must be a combination of the patient's body position in the bed, their demeanor, how they look at you, how they are looking at their surroundings, their skin tone and coloration, smells, all these things. And they could be super subtle. I say smells; it's not like you walk in and somebody who's sick smells bad and somebody who's healthy doesn't. You're synthesizing all these things. If we can put metrics around all those aspects of what we're interpreting, that's the value proposition for AI: ingesting huge amounts of data, putting it together, identifying patterns, and then correlating that, of course, with outcomes. We have to train on the ground truth: the patient who has this color, this response to the surgeon, this blood pressure through the night, for this diagnosis that they were admitted with over this period of time; ninety-six percent of the time, this is going to happen,
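
One very simplified way a clinical decision support model might turn ICU time-series data into a deterioration risk is sketched below. The file names, column names, and the outcome label are hypothetical placeholders for EMR extracts, and a real system would need far more careful feature engineering and validation.

```python
# Sketch only: summarize each patient's vital-sign streams into features and
# fit a classifier that outputs a probability of deterioration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def featurize(vitals: pd.DataFrame) -> pd.DataFrame:
    """Collapse per-patient vital-sign streams into a few summary features."""
    g = vitals.groupby("patient_id")
    return pd.DataFrame({
        "hr_mean": g["heart_rate"].mean(),
        "hr_max": g["heart_rate"].max(),
        "map_min": g["mean_arterial_pressure"].min(),
        "map_slope": g["mean_arterial_pressure"].apply(lambda s: s.diff().mean()),  # crude Tuesday-to-Thursday trend
        "urine_total": g["urine_output_ml"].sum(),
    })

# Placeholder extracts from the EMR.
vitals = pd.read_csv("vitals.csv")
outcomes = pd.read_csv("outcomes.csv").set_index("patient_id")["deteriorated"]

X = featurize(vitals)
y = outcomes.loc[X.index]

model = GradientBoostingClassifier().fit(X, y)
risk = model.predict_proba(X)[:, 1]   # probability of deterioration for each patient
```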

Deep: And is that the goal? I can see a few different goals, but one of them is probably to arm the physician with more insights and information and, to go back to your phrase, to do something like reduce the probability of misdiagnosis.

Dr. Lendvay: Yeah, yeah. So the answer is yes, and it has to be done. There are some requirements, if I may use kind of an engineering term, like systems requirements, for AI in health care. I did not invent these, but I totally agree with them. There was an opinion piece in JAMA in 2018 by Edward Shortliffe, who's out of Columbia's biomedical informatics department, and his co-author, who was at IBM Watson. They basically distilled the applications of AI and said, here are six requirements that have to be met for AI to really be adopted in health care. One is that black boxes are unacceptable. There has to be transparency, because physicians and providers in general are somewhat suspicious: if they can't tell how you got to that answer, if they can't see the equation mapped out, it lends some uncertainty to the value of the AI algorithm, or whatever system they're interacting with. It's a high-stakes profession, and we always want to do what's in the best interest of the patient. To then rely on something where you no longer have complete control, particularly algorithms where you don't really understand how the output was achieved, that's one of the places where, on a decision, you're going

Deep: to default to your own training, right? If something says X and you disagree with it, at most you're going to look into it a little bit more. But if the model is saying it's X because I noticed A, B, and C, then you might be inclined to go look specifically at A, B, and C because of that "because." That's right. So that explainability is something we talk a lot about on the podcast, because it's not only physicians that want this. And part of the challenge with respect to machine learning and AI systems is that, and this is a rough generalization, but generally the better the models are, the less explainable they are.

Dr. Lendvay: Yes, I know, it's like a neuron, it's like a brain. Yeah. You can't explain a brain.

Deep: No, it's very difficult, but we're starting to; it's a hot, active area of research and we're certainly trying to get a lot better at it. A lot of times we're sort of reverse engineering the details, and a lot of that's in how you build the model. Like, if you need to see visual component A, B, and C... I don't know if that's what we're talking about. I think it's just

Dr. Lendvay: it's just displaying to the user, the provider, what went into it, at least. What are the inputs? Like, you know, if it's the urine output, the blood pressure, ten hours' worth of data, the nurse's report. Basically, that's part of the transparency that will build trust for the clinician.

Deep: Yeah. But you know, we've worked on these systems together a fair amount. You know how it often ends up just being the image, or just the audio signal; it's really raw, low-level stuff. So if I come to you as a physician and ask, what went into this MRI diagnosis? Well, a bunch of pixels from the MRI image. You might say, well, that's not good enough, I need

Dr. Lendvay: more. I mean, yes, but you can do that. There can be another, kind of secondary, explanation about it: yes, it's the pixels from the static MRI image, but in particular it's looking for shadings and brightness and those types of things, because they correlate to soft-tissue signatures of disease or not. So I think there's a way it can be distilled for a provider to build some trust, right?
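
A minimal sketch of that kind of secondary explanation is an occlusion map: cover one patch of the image at a time and measure how much the model's confidence drops, so the provider can see which regions the prediction actually depended on. The model, patch size, and class index here are assumptions.

```python
# Sketch only: a simple occlusion-based saliency map for any image classifier.
import torch

def occlusion_map(model, image, target_class, patch=16):
    """image: (1, C, H, W) tensor. Returns an (H//patch, W//patch) importance grid."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image), dim=1)[0, target_class].item()
        _, _, H, W = image.shape
        heat = torch.zeros(H // patch, W // patch)
        for i in range(0, H - patch + 1, patch):
            for j in range(0, W - patch + 1, patch):
                covered = image.clone()
                covered[:, :, i:i + patch, j:j + patch] = 0   # black out one patch
                p = torch.softmax(model(covered), dim=1)[0, target_class].item()
                heat[i // patch, j // patch] = base - p       # drop in confidence
    return heat  # large values mark regions the prediction depended on
```

Overlaying this grid on the original image is one way to show a clinician "it's these shadings, in this region," rather than just "it's the pixels."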

Deep: I think what you're pointing out is a really important point, which is that it's not enough to just train these systems to do the end diagnosis. You have to actually back up and analyze, and maybe isolate, the building blocks of the diagnosis. So you've got to be able to say, these are the shadings and the other pieces that let you make that conclusion. They all have to be part of the training data generation process. You see that a lot when people are coming to a problem anew; they try to go straight to the big kahuna, straight to the end conclusion. For these complex machine learning tasks, that's usually difficult to do and more error-prone. But I think what you're bringing up is the really important part that a lot of practitioners don't realize, which is that you actually don't want just the answer; you actually need to know why.

Dr. Lendvay: Right, right. Another aspect of AI in health care, another requirement, is that it can't take a lot of time to interpret or to deduce. Basically, it's got to go right into the workflow of a provider. Everybody's clinically very busy, and it can't be some laborious process that you have to boot up. It has to be seamless, which means it has to pull data from the EMR and the nursing reports, and it has to pull data out of the operating room, and it has to require minimal inputs from the provider. That's tough, because people who work on and build machine learning algorithms really value the clinical context: what the physician and the provider see is important data. It's not just the blood pressure; it's what the patient looks like to the provider, and why the belly is a little distended, things that are not really metrics we capture on a piece of paper or in a computer system. So it's a balance between making sure you are leveraging the experience and knowledge of the provider, what they are telling you or inputting, without requiring such a huge burden of time from the provider for the algorithm to work.

Deep: Yeah. Or in the consumption of the output of the algorithm, right?

Dr. Lendvay: That too, yeah, if it's too complex. That gets to the next point, which is complexity: how complex it is to use, and the complexity of the output. It has to make sense, it has to resonate with the provider, it has to be relevant. It has to provide essential insights, not superfluous findings; you know, the kind of output that would satisfy an engineer developing some algorithm. The end output matters. That's really important. And then there's how it's delivered, how that knowledge, that information, is delivered. The user interface has to be good, it has to be easy, and it has to be respectful. That's another point brought up by the group who wrote the opinion piece: providers don't want to be talked down to. I mean, an algorithm isn't going to say "you're an idiot, doctor" in a Siri voice, but the point is that it has to be delivered in a way that's non-threatening. You're not lording it over the provider that the algorithm is correct. It has to be kind of a

Deep: collaboration. Yeah, I mean, that makes sense, right? Because very few companies go after systems that go straight to diagnosis. It's all about assisting the physician

Dr. Lendvay: Assist, but not replace. Yeah.

Deep: Because I think we as a society are not willing to have a machine supersede our physicians. We want our physicians to give us the diagnosis, and we want them to filter all of these different devices and outputs and lab results, et cetera; they're passing all of that through their experience and training filters and making the statement. We want that even if there's a very fancy, highly accurate version that maybe is even more accurate than physicians on average. We still want it filtered through that process, and I think what you're getting at is that part of these systems being successful involves them convincing, in essence, that physician of the right thing.

Dr. Lendvay: Yeah. And the physician wants to be, kind of metaphorically, heard, just like a patient wants to be heard by their physician or their provider, who is listening to the patient and ingesting what the patient is relaying to them. In the same way, if I anthropomorphize the relationship between AI and the provider, it's that the AI algorithm or the system is listening and taking into account what the provider is also inputting.

Deep: Perhaps you're not sure whether AI can really transform your business. Maybe you don't know what it means to inject it into your business. Maybe you need some help actually building models. Check us out at xyonix.com. That's x-y-o-n-i-x.com. Maybe we can help.

Dr. Lendvay: One more element, which is that it's rooted in science. So if the algorithm says the probability is 99 percent that something is what it is, that has been confirmed and peer reviewed; it's reliable and reproducible, and it's safe, meaning it's providing guidance or assistance that doesn't harm the patient. Those sound obvious: well, yeah, OK, of course. But providers really are a conservative bunch, we all are, and again, because it's a high-stakes profession, you are dealing with people's health and their lives. It really needs to be proven, and we need to see the evidence that it works.

Deep: Yeah. In other words, it's not enough to build systems where you've measured their efficacy in your own way. It has to have gone through a peer-reviewed process that's accepted and digested within the medical community, which is why a lot of startups in this arena move a little bit slower: you've got to take the time to get the NIH funding, to do the proposal, to get the grant, to do the study, to get the study approved, get your data, publish, and do all of that with institutional credibility, and all the other stuff you don't have to do if you're just trying to make a system that slings ads faster or better.

Dr. Lendvay: I'm going to put an asterisk next to that, though, because I do think providers need to come halfway in this journey with AI and medicine, which is that it can't rely on the stodgy, long timelines for incorporating new information that we have all grown used to. We need to be more nimble, and I think all of us could do a better job of embracing, or trying to understand; again, listening. In this case, listening to the AI. It works both ways.

Deep: Tell me a little bit about what you've seen. I mean, you're obviously at the forefront of, I don't know, techie physicians, a physician who's really into technology. But all of this happens in a doctor's office, where we don't think of the doctor as being at the forefront of tech. What are you seeing out there in terms of physicians and their engagement level with some of these state-of-the-art techniques we're talking about, with technology in general, but really with AI in particular? And what are some of the hesitations, and maybe lack of hesitations, that you're seeing?

Dr. Lendvay: Yeah, I would say that what I can speak to most intelligently is when we applied the surgical assessment to actual surgeons in the field, and the responses we got. I would tell you, getting back to some of those bullets I mentioned, those requirements for any AI platform or decision support system, that surgeons are skeptical. That's a fact. And there has to be what appears to be a human component. There has to be some little feature in the system that seems remotely like another provider was giving that advice; whether it's real or not, it has to at least be perceived as coming from an actual person. The other piece is the intensive care unit, interpreting all the data coming in from a patient over a period of time to point toward a potential trend of badness happening. I think ICU docs and providers are becoming more open to that as a possibility, because there are some pretty compelling data about how the development of sepsis over a period of time can sometimes be detected through these platforms. Now, personally, I have not seen them, I've not worked with them; it's only through reading peer-reviewed journals that I've seen this. But my only direct interaction with surgeons who have to ingest information produced by automated or machine learning algorithms is on the surgical skill side of things.

Deep: One of the realities of medical practice is this synchronous medicine thing that happens: the patient comes in, sits with a physician for fifteen minutes, some number of minutes, the physician observes during that time window, the patient goes away, eventually comes back, and the same thing happens again. With modern technology, we have the ability, when patients are away, to gather data, to get more information, to glean stuff, so that when they come back the physician has a lot more nuance than what they can get just from looking at them and the current blood tests and other diagnostics they can perform. Can you talk a little bit about what you see as the future for this kind of asynchronous physician-patient data gathering to assist the physician? Do you see that as a thing, and one that can be taken advantage of to provide better medicine and health care? And if so, how?

Dr. Lendvay: Yeah. On whether providers are going to embrace asynchronous care, where data can actually be ingested from the home of a patient before a provider sees them, or between visits, with some interpretation going on, again that clinical decision support process, to help so that the next time the provider sees the patient they have more information: whether providers want that or not, it's coming, because the people who want it are the patients. Although patients love to see their providers, and they like to take time and they like to know they've been heard, they would also like efficiencies. So imagine being able to capture, say, a video from your mobile device at home that can be easily imported. Let's use the example of physical therapy or orthopedics, when you're dealing with arm motion or you have shoulder pain. I can tell you what's going to happen: you are going to go into the doctor's office and they are going to do some maneuvers to see, does this hurt, does that hurt, can you do this, can you bend that way? That in and of itself takes some time to do, and yes, it's important for the provider to lay their hands on the person, to use that term, in many cases. But there is still so much opportunity to have some of the diagnosis, or the differential diagnosis as it were, already narrowed down, because it might make health care more efficient. Now the provider can say, you know what, I don't need to see you and then order an MRI because I've done the exam and feel that you need an MRI; that's another visit. Instead, you do this home evaluation and send the data, and they say, no, it's not an MRI you need, all you need is a plain X-ray, and let's get that before you roll into the provider's office, and now you have all that information ready to go. So I actually see it as a big upside for patients and ultimately providers, because providers want to take care of patients that fall within their expertise. They don't necessarily want to take care of somebody who doesn't have a problem that that provider is primarily suited to take care of, and there's a lot of missed triaging that goes on. So I think that kind of home health care and home diagnosis is a growing area in the United States.

Deep: At least there's an elephant in the room: COVID-19 has certainly been a catalyst. Look at how many patients are now not physically coming into the office

Dr. Lendvay: and don't want to. They don't have to.

Deep: And that Pandora's box is open, and there's so much possibility to do great things. But we want to get more information to the physician than just what they can see through a camera on their laptop, right? We can have devices at home that a patient could operate: something to look in their eyes, something to look in their ear, something to, you know... How do you see that reality of this kind of remote medicine evolving? What does it look like in 10 years, from your vantage point? I mean, it could be

Dr. Lendvay: It could be the pocket knife of medical diagnostic systems that sits in your home. It could be something where all those things you brought up, looking in the ear and looking in the eyes and checking your blood pressure, are in some very simple little unit or box that allows people to have measurements taken in the privacy and comfort of their own home, and to have that data imported to a central repository, or to a repository at your provider's office, that can then start chewing on the information and provide some insights to the provider before they have to weigh in. It probably will obviate the need for many visits. There's a lot of follow-up that doesn't have to occur in person. A ton of follow-up. In surgery alone, just the wound care, the post-operative wound-check visit that has to happen; that is already being outsourced to more mid-level providers. Ultimately that can be done by something in the privacy of your own home that ingests the information visually and/or contextually, and then basically spits out: this wound is doing fine, healing well, progressing as expected, versus this one needs to come in and be seen.

Deep: Whereas if you've got eyes and ears and other sensors in the patient's house, then those visits might become unnecessary. And the flip is also true, where something will get caught earlier, in between those visits, and we can rush somebody in and get it dealt with as soon as possible, based on maybe a simple sensor that they use to gather the input required for whatever procedure they had. You are listening to Your AI Injection, brought to you by xyonix.com. That's x-y-o-n-i-x.com. Check out our website for more content, or if you need help injecting AI into your organization. So, Tom, tell me a bit more. You have a fascinating project looking at the human microbiome. Tell me a little bit more about this project.

Dr. Lendvay: Yeah. So first, at a high level, the microbiome is the compilation of all the organisms in a certain area of our body. The one that's gotten the most press has been the intestinal or gut microbiome, which is all the bacteria, fungi, and viruses living inside of us, of which there are over a trillion in an individual, spread across thousands of different species. It's not a coincidence that we have evolved to have this rich set of passengers in our bodies to help do stuff for our overall wellness. There's a microbiome in the mouth, there's a microbiome on the skin. And when organisms lose the diversity of their microbiome, they die. I think in the next five to ten years, we will be looking at diseases through the lens of what's going on in somebody's microbiome. The intestinal microbiome in particular is really interesting. It drives metabolism, it drives immunity, it drives our behaviors. There's what's called the gut-brain axis: people who have depression, people who have mood disturbances, that could actually, in part, be driven by changes in the type of bacteria and the community that's in your intestines, and we're understanding more and more about this. There is a particular problem that we are facing as a society, which is that we are totally inundated with antibiotics and chemotherapy. These are atomic bombs to our healthy gut microbiome and the equilibrium that's been created. There are 80 million people in the United States who suffer from intestinal health disorders, and a lot of those are due to these chemicals that change the microbiome. Half a million people get a very severe intestinal infection called Clostridium difficile that kills 44,000 people in the United States, and it's basically due to a whole community of bacteria being wiped out by these chemicals, leaving some of the bad actors to overgrow. But there is an effective management, which is replenishing a microbiome with somebody else's healthy microbiome, called fecal microbiota transplant, or FMT, or banking your own microbiome in advance of some insult so that you can basically re-up afterwards. The process of actually managing the sample and getting it into a patient is not good. It's horrible; the user experience is bad for everybody. And what's really interesting is that the data about which microbiome is best for which person is not clear, and this is where AI comes into play: being able to ingest the unbelievably granular microbiome analysis data, coupled with wellness data for the patient, coupled with their electronic medical record and clinical conditions, and their outcomes in general after some intervention. Those are the huge troves of data that AI loves to chew on. Microbiome data is so rich and so deep, but we don't know how to find the signals yet. Ultimately, in this particular situation of having your microbiome disrupted, it would be wonderful if we could figure out who is the best donor for the best recipient, and when is the right time to replenish your microbiome, depending on whether you have a disease process with peaks and valleys, or ebbs and flows. So ulcerative colitis patients or Crohn's disease patients get flare-ups, and then they are in a period of remission where they're not sick.
And maybe that's the right time to bank their microbiome and then ultimately replenish during a flare-up. And when is that flare-up happening? It will take machine learning algorithms to ingest and derive insights from all these data. Who's the best donor for the best recipient? Right now we have solid organ donor matching programs, like kidney transplants and heart transplants, that are based on blood signatures, on how the cells from one patient and the cells from the other react with each other; that determines whether you're a good match. The microbiome is so much more complicated because there are so many actors. So what we need is to point artificial intelligence at ingesting and making insights from all this microbiome data that's going to be coming out. Our company, Microbiome X, is actually working on both the understanding and the reporting, as well as an intervention: being able to provide a seamless, very easy way to do a microbiome transplant. That's something that's very exciting to me. It's not in the surgical skills space, but it is an area where I believe the health care paradigm will shift, and we will be looking at diseases differently.
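
To illustrate the donor-recipient matching idea in the simplest possible terms, here is a hypothetical sketch that represents each donor-recipient pair by their taxa-abundance profiles and trains a model on past transplant outcomes. The file names, columns, and the engraftment label are invented for illustration; this is not Microbiome X code.

```python
# Sketch only: rank banked donors for a recipient using past FMT outcomes.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Rows = samples, columns = relative abundances of many taxa (placeholder files).
donors = pd.read_csv("donor_taxa.csv", index_col="donor_id")
recipients = pd.read_csv("recipient_taxa.csv", index_col="recipient_id")
pairs = pd.read_csv("past_transplants.csv")   # donor_id, recipient_id, engrafted (0/1)

def pair_features(d_id, r_id):
    """Combine a donor profile and a recipient profile into one feature vector."""
    d, r = donors.loc[d_id].values, recipients.loc[r_id].values
    return np.concatenate([d, r, np.abs(d - r)])   # both profiles plus their difference

X = np.stack([pair_features(row.donor_id, row.recipient_id) for row in pairs.itertuples()])
y = pairs["engrafted"].values

model = RandomForestClassifier(n_estimators=300).fit(X, y)

# Score every banked donor for one new recipient (id is a placeholder).
candidates = np.stack([pair_features(d, "new_patient_001") for d in donors.index])
scores = model.predict_proba(candidates)[:, 1]
best_donor = donors.index[np.argmax(scores)]
```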

Deep: So say you've got this microbiome data for a set of patients. How do you start to reverse engineer it? Like, are you looking for very healthy patients so you can study the attributes of their microbiomes, versus very unhealthy patients where you're looking at the attributes of their microbiome? How do you start to connect that, to ultimately make this patient match that you're describing?

Dr. Lendvay: So, you can't start with the healthy, because the healthy are so different and so varied from one another; one healthy person's microbiome doesn't look at all like another healthy person's. When you go to particular illnesses, very severe illnesses, the microbiomes start looking alike. So you start your journey down machine learning with the known, right, where there's ground-truth data, where the variety isn't that high just yet, and you start getting signal from that. Then you start branching out to the not-so-healthy but somewhat healthy, and then you go out to the healthy, and then you couple that with what happens over time, because microbiomes change over time. I eat different foods over time, my microbiome changes. I do a different exercise program, my microbiome changes. I travel to an area where dysentery rates are high, my microbiome changes. I get a fecal transplant, my microbiome dramatically changes. And the question is, what from the donor was retained in the recipient, and what environment was in the recipient to start with that may have led to the donor's microbiome taking, or engrafting? Because we're not dealing with just a few bacteria; we're dealing with thousands of species of viruses and fungi and bacteria. I think that's why AI is, and will be, so valuable in this area, because you're dealing with data that is incomprehensible for an individual human, and maybe even for simple computer algorithms, to chew on.

Deep: Is there a role to be played, then, for things like food sources, things you eat, that can help you alter your microbiome? I'm thinking of probiotics, yogurt, all that stuff. Currently we have a rough, fuzzy "hey, eat more yogurt" kind of guidance, but it sounds like you're going for something more targeted.

Dr. Lendvay: It needs to be more targeted. First of all, to use your probiotic example, there really are very few, if any, randomized controlled trials that show that probiotics work. Anecdotally, people will say, oh, I do kombucha, I've tried this culture pill and it worked fine, and I felt much better after an antibiotic course. But there's no real hard evidence that it works, and it's because those bacteria don't really take within the gut. But to answer your question: yes, there are ways of linking what's in the diet, or what you're doing, to what's happening in your microbiome. It's not just the organisms; it's also the chemicals around the organisms, the waste products of the bacteria that have eaten the food you've put in your intestines. Those are chemicals that can actually be detected as well. You can start seeing signatures: you have a lot of this, which means you must be eating this and promoting this type of bacteria within your gut, and it turns out that every time this happens, this wellness change happens, like you feel more energy or you feel less energy. There are so many dimensions of data that can be ingested. That's why machine learning algorithms and AI will be so important in this space, because it is so multidimensional.
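
As a toy illustration of working with that many dimensions, the sketch below compresses high-dimensional microbiome and metabolite measurements into a few principal components and checks which ones track a self-reported energy score. All file and column names are hypothetical.

```python
# Sketch only: reduce thousands of taxa/metabolite columns to a few components
# and see which compressed signatures move with a wellness rating.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

samples = pd.read_csv("stool_samples.csv")           # placeholder: taxa + metabolite columns
energy = samples.pop("self_reported_energy")          # placeholder: 1-10 wellness rating
features = StandardScaler().fit_transform(samples.select_dtypes("number"))

pca = PCA(n_components=5)
components = pca.fit_transform(features)

# Which compressed signatures correlate with how the person says they feel?
for i in range(5):
    corr = pd.Series(components[:, i]).corr(energy.reset_index(drop=True))
    print(f"component {i}: correlation with energy = {corr:.2f}")
```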

Deep: All right, I think that's all we've got time for. Thanks so much, Tom, for coming on. This has been an awesome episode. That was Dr. Tom Lendvay today. If you're interested in learning more about the intersection of AI and health care, you can check out some of Xyonix's own projects on our website. You can find those at xyonix.com/projects. That's all for this episode. I'm your host, Deep Dhillon. Check back soon for your next AI injection. In the meantime, if you need help injecting AI into your business, reach out to us at xyonix.com. That's x-y-o-n-i-x.com. Whether it's text, audio, video, or other business data, we help all kinds of organizations like yours automatically find and operationalize transformative insights.