Your AI Injection
Is AI an ally or adversary? Get Your AI Injection and learn how to transform your business by responsibly injecting artificial intelligence into your projects. Our host Deep Dhillon, longtime AI practitioner and founder of Xyonix.com, interviews successful AI practitioners and domain experts to better understand how AI is affecting the world. AI has been described as a morally agnostic tool that can be used to make the world better, or to harm it irrevocably. Join us as we discuss the ethics of AI, including both its astounding promise and sizable societal challenges. We dig in deep and discuss state-of-the-art techniques with a particular focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful. Need help injecting AI into your business? Reach out to us @ www.xyonix.com.
Using AI to Detect Motor Delays in Infants: Interview with Bharath Modayur
This week we had the chance to talk to Bharath Modayur, founder of Early Markers, a startup that is using AI to detect developmental delays in infants. We chatted with Bharath about some of the machine learning involved in Early Markers, and about the challenges the company has faced and the importance of the work they are doing.
Diagnosis of developmental problems in infants is most impactful when done early. Unfortunately, it is difficult and resource-consuming for new parents to bring their newborns in for motor-developmental screenings. Early Markers is using AI-driven body pose estimation to make screening for motor development easy, accessible, and efficient.
To learn more about Early Markers, visit their website here: Early Markers
To find out more about how to use pose analysis to analyze activities, visit our website: Human Pose Analysis
If you are interested in reading about other applications of body pose estimation: Using AI to Improve Sports Performance & Achieve Better Running Efficiency
Automated Transcript
Deep: Hi there, I'm Deep Dhillon. Welcome to Your AI Injection, the podcast where we discuss state-of-the-art techniques in artificial intelligence, with a focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful. Welcome to this week's episode of Your AI Injection. This week we've got with us Bharath Modayur, who received his Ph.D. from the University of Washington, where he is currently an affiliate assistant professor. We've got Bharath on to chat about his startup Early Markers, which is leveraging AI technology to detect developmental issues in infants. All right, Bharath, thanks so much for being here. I'm super excited to talk about Early Markers. Let's get started: why don't you tell us a little bit about your early inspiration for the project?
Dr. Modayur: Yeah, I mean, first of all, it is exciting to talk to you guys. I always get supercharged after these conversations, and typically I need a booster conversation in a few weeks. So I'm quite excited to talk to you about this. The short story is that my expertise is, you know, I'm a computer vision guy, mid-nineties Ph.D. from the University of Washington. And there were not really that many avenues at that time to put the computer vision skills you learn in grad school to something practical. So I hadn't done much with computer vision for the earlier part of my career, and right around 2012 or so there were some demos at the University of Washington Electrical Engineering Department, and they were doing pose estimation on adults. So just figuring out where the body joints of adults are as they move around, typically in an upright position. And they had applications for surveillance and suspicious activities, et cetera. A colleague and I put together a proposal to DARPA at the time to develop something along those lines, and the ideas were accepted. And then we realized that you're kind of transferring all your IP over to DARPA, and it's not something that I really understood, so I backed off a little bit, and some mentors in my circle kind of pushed me towards health care and the UW autism department. So I hooked up with researchers there and looked for ways we could use computer vision video processing to look for early markers, early signs of autism. That was kind of the germination of the idea. The key thing being that there was a lot of anxiety, especially for parents that already have a child with autism: for the younger sibling, the likelihood goes up to 17 percent if your older sibling is diagnosed with autism. So how can we help them look for signs early? And what can parents do at home to engage with the child to develop these skills, like motor skills and social skills? So that was kind of the idea.
Deep: Walk us through, first, what's the benefit of early diagnosis, and then what are the markers you're looking for through the video signal? And how do they actually predict, you know, these conditions?
Dr. Modayur: Right. So, I mean, we have branched off from looking for something very specific to autism to generalized developmental atypicalities and developmental delays. That's what we're focused on. But in general, early detection can lead to early intervention, and early intervention is certainly linked to better health outcomes. So, for instance, 15 to 20 percent of children have a developmental or behavioral disability or disorder, and only a third of them get diagnosed before they enter the school system. And early detection and early intervention can save society up to like $100,000 in social service costs. So that is the motivation for looking for delays and looking for markers early, so you can get to a diagnosis early and get to intervention early. It's better health outcomes, and you'll save money as well. Again, I want to be clear about the fact that markers are not diagnosis, right? Just because you find something that is a marker for, I don't know, diabetes doesn't mean you have diabetes. That's a distinction we typically have trouble with when we write NIH proposals, the conflation of markers with diagnosis; we have to be very clear about which is which. The earliest markers you can typically see are motor related, right? Then social and communication skills come later for the infant. And the motor markers that are linked specifically to autism are things like delayed development and less time spent in advanced postures like tummy time. Then later, you have atypical gait and asymmetry. And there is this study, again, not a big study, but it gave tantalizing clues, about how the infant lies in the crib. You look at the posture of the infant to see whether it is symmetric or asymmetric, and the more time the kid spends in asymmetric poses in the crib, the higher the correlated risk for autism. So these are markers; again, just something to look out for, and cumulatively they point to a specific condition. And motor is interesting because there is a lot of development in other domains that cascades from development of motor skills. They call it the sticky mittens experiments, where they attach something to the infant, like three to five months old, something like a Velcro glove, if you will. It allows the kid to reach and essentially grab even before they have the skills to grab. And then longitudinally, they look at these infants that have been trained to reach for things and how they develop at 13 to 15 months old, and they find that their ability to explore objects is clearly improved because of that early training. It kind of shows that there is a causality where motor development leads to development in other domains like language and communication and social interaction. Just the ability to sit independently and point to things is kind of like prosocial bidding; it leads to interaction with adults. And the ability to crawl independently, and later walk, allows the baby to explore the environment and interact with the environment, and just those interactions drive development in other domains. So motor development is kind of pivotal.
And from a computer vision perspective, it's also convenient for us to passively look at a baby and glean things, instead of actively fitting them with wearables and stuff.
Deep: So before we dig into how we're doing this with machine learning: how does autism typically get diagnosed today, and what role does this kind of infant-level behavioral analysis or postural analysis play?
Dr. Modayur: Typical diagnosis is, on average, I think, at three-plus years of age, even though diagnosis can be performed at 18 months. And if we're not talking about autism specifically, the earlier things you can find in terms of delayed achievement of milestones, it just keeps the parent engaged, so that during well-child visits, or just through observations and interactions, the parent is able to look for these early signs, right? And that is especially crucial for a parent that already has a familial history of autism, so they can look for these early markers, if you will.
Deep: Right? Yeah, I'm kind of reading between the lines here, but I think the thesis is that people are not identifying these conditions early enough, and there is a gap today between when they could be identified and when they actually are. Maybe there are some parents whose first or earlier child has a condition, so they know to look for it, but there are a lot of folks who don't. And if we can identify these folks, because we can make it lower cost and more convenient, they don't necessarily need to know ahead of time what to look for; then they can take their child to their physician and maybe get routed to some kind of behavioral or occupational specialist, or what have you. Is that basically the point here?
Dr. Modayur: Yeah, the convenience and passive observation, and not really expecting the parent to know everything, and guiding the parent through activities at home that kind of promote development along all these domains, right? So even the diagnosis of autism, it's now possible to do it younger than, say, 10 years ago. But the parents have to be motivated or concerned for you to get there. Like, for instance, the American Academy of Pediatrics recommends developmental screening for infants, especially motor screening, and there are about six annual well-child visits recommended in the first year of life. On average, parents make it to 2.2, which means you're not even being seen by a clinician. And the more tools we can provide for the parent and the clinician, to perhaps even remote consult and be able to look at the baby without the parent having to go to a clinic, all of those are lowering the barriers and exposing the child to being observed by a clinician early and often.
Deep: Right. And maybe even for a little more time, because that 15-minute observation window once every few months might not be enough for the physician to notice anything?
Dr. Modayur: Correct. And that goes to the heart of what we have developed and what we have in the pipeline: it's about being able to observe the baby in a naturalistic environment, which is home, and being able to observe the baby for longer periods of time and more often. You don't know what the circumstances will be during a clinic visit, the mood of the baby at the time; it's a compressed time frame, and the baby may not be in the best state to express whatever skills he or she has, right? So the more opportunity you give the infant to be observed and kind of express their skills, the better off you are in recording something that's more accurate than a clinical-setting observation.
Deep: You are listening to Your AI Injection, brought to you by xyonix.com. That's x-y-o-n-i-x.com. Check out our website for more content, or if you need help injecting AI into your organization. Let's change gears a little bit. So for the sake of our audience, we're going to talk about an AI technique called body pose analysis. If you think about the human body, you've got all these joints on it; you've got a skeleton that you can lay over that and track over time. So you can see the joints moving across time, and then you can start to do things like assess whether or not there's some asymmetry like Bharath's talking about, or whether a baby is lying down or crawling, et cetera. So maybe let's start on the skeleton extraction itself, the pose extraction. What are some of the challenges with babies versus adults with respect to actually getting the pose out in the first place?
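For readers who want to make the skeleton idea concrete: below is a minimal sketch of a single-frame left/right asymmetry measure computed from 2D joint keypoints. The joint names, coordinates, and the mirroring heuristic are illustrative assumptions, not Early Markers' actual method.

```python
import numpy as np

# Hypothetical 2D keypoints (x, y) from any off-the-shelf pose estimator;
# joint names follow a common COCO-style convention.
keypoints = {
    "left_shoulder": (112, 80),  "right_shoulder": (168, 82),
    "left_elbow":    (100, 130), "right_elbow":    (180, 150),
    "left_hip":      (120, 170), "right_hip":      (160, 171),
    "left_knee":     (118, 230), "right_knee":     (175, 205),
}

def limb_angle(a, b):
    """Angle of the segment a->b relative to the image x-axis, in degrees."""
    (ax, ay), (bx, by) = a, b
    return np.degrees(np.arctan2(by - ay, bx - ax))

def side_asymmetry(kp):
    """Mean absolute angular difference between mirrored left/right limbs.

    Larger values suggest a more asymmetric posture in this single frame;
    a real system would aggregate over many frames.
    """
    pairs = [(("left_shoulder", "left_elbow"), ("right_shoulder", "right_elbow")),
             (("left_hip", "left_knee"), ("right_hip", "right_knee"))]
    diffs = []
    for (l1, l2), (r1, r2) in pairs:
        left = limb_angle(kp[l1], kp[l2])
        # Mirror the right limb so a perfectly symmetric pose yields equal angles.
        right = 180.0 - limb_angle(kp[r1], kp[r2])
        diffs.append(abs((left - right + 180) % 360 - 180))
    return float(np.mean(diffs))

print(f"asymmetry score: {side_asymmetry(keypoints):.1f} degrees")
```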
Dr. Modayur: Yeah. I mean, they are more varied, different proportions, and squishy, if you will. When we started this out, there was a Phase I NIH grant, and we had this idea to be able to recognize where the baby is, where the baby's joints are, and what the baby is doing in terms of the specific activities the baby's engaged in. That was the thesis, and then we came down to implementing the technology. A brief history: we approached it from what you would call a classic computer vision, or machine vision, approach, where you're trying to look for specific features in the image. Does this look like an elbow joint, does this look like fingers, does this look like a face? Then you extract these features, you put them together, and you constrain them and say, well, these features can't just occur anywhere. If you're pretty sure that that's the head, you know that below it will be the shoulder, et cetera; you're constrained by human anatomy. And of course, most of these classic algorithms were written for people like you and me, sitting at a desk or walking around or doing some common activities, whereas in observational videos of babies, most of them are not sitting, and certainly most are not standing; they're crawling around. And there are not enough examples of humans crawling around for us to work the classic computer vision way, or later, when deep learning exploded and people were able to do some previously undoable things, recognizing animals and people and where people's joints are, et cetera. So the main technical problem we ran into was we don't have the data. We don't have infant data, and infant data is hard to come by; for obvious privacy reasons, you don't find a whole lot of it on YouTube. So what we decided to do was our own study, and we recruited and imaged 68 infants in a one-year period, with about 30 to 40 minutes of data on each one of those infants. We used Microsoft Kinect at the time, so we had not just the color images but also information about the depth of each pixel in the image, which gave us additional information. We went through this extensive period of labeling these images in terms of where the different joints are. That's pretty painstaking, but you can use human resources without that much training; everybody knows what a baby looks like and where the baby's joints are, so you could get a high school or middle school student to do that. But then comes the second part, which is labeling specific parts of the video where the baby is doing something interesting. Like, for instance, is the baby rolling over? And there are like three different kinds of rolling over; there are nuances there that only a developmental expert will be able to tease out. So we used expert time to annotate parts of the video where specific things are happening. Once you have all that, then we have these deep learning systems that we train on these data, a) to figure out where the body joints are, and b) to see where specific motor activities are taking place.
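The episode doesn't specify Early Markers' model architecture, but a common recipe for this situation is to fine-tune a keypoint detector pretrained on adults using the newly labeled infant frames. A hedged PyTorch/torchvision sketch; the data loader, targets, and the 10-joint layout are hypothetical placeholders.

```python
from torchvision.models.detection import keypointrcnn_resnet50_fpn
from torchvision.models.detection.keypoint_rcnn import KeypointRCNNPredictor

NUM_KEYPOINTS = 10  # "roughly about 10" labeled body joints per frame, per the episode

def build_infant_pose_model():
    # Start from a model pretrained on COCO adults, then swap the head so it
    # predicts 10 infant joints instead of the 17 adult keypoints.
    model = keypointrcnn_resnet50_fpn(weights="DEFAULT")
    in_channels = model.roi_heads.keypoint_predictor.kps_score_lowres.in_channels
    model.roi_heads.keypoint_predictor = KeypointRCNNPredictor(in_channels, NUM_KEYPOINTS)
    return model

def train_one_epoch(model, infant_loader, optimizer):
    # infant_loader is a hypothetical DataLoader yielding (images, targets),
    # where each target carries a bounding box plus hand-labeled joints.
    model.train()
    for images, targets in infant_loader:
        loss_dict = model(images, targets)  # detection + keypoint heatmap losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```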
Deep: So part of it is, you can't just go use some off-the-shelf body pose packages, apply them to babies, and get too far, it sounds like. And then the second part: you went out, you started gathering your baby data, and then you trained up your own models for pose extraction on the babies, and now you've got the ability to track the skeleton. What is it you're trying to do next once you've got the skeleton? Walk us through the actual poses you're trying to get to, and some of the challenges in getting from the skeleton over time in a video to the postures that you're after.
Dr. Modayur: Right. If I can jump ahead, let me paint a higher-level picture of how AI is employed in this particular solution we're developing. The solution is for a clinician to be able to do a standardized, clinically validated assessment, which is objective. It's not just looking at a baby and a checklist and saying, well, the baby is five months old or so, is the baby rolling over, yes or no? You ask the parent, and those questionnaire-based assessments of a child are not as accurate as something that is observation-based and uses a clinically validated tool. So what we are trying to do is see whether the parents themselves can do the conduct, the administration, of this test. Think of it as a clinical test the baby goes through, which is all passive observation by an expert, occasionally facilitating movement. You can't just ask the baby to do something, so you have to provide some cues, rattle some things, and give some enticements so the baby crawls or rolls over or reaches for stuff, right? Right now, that is done in a clinic under the clinician's supervision; they look at how the baby behaves and they score the baby. You get a score, then a percentile ranking, and then you figure out whether the baby needs further evaluation or specific intervention, or everything's going fine, let's see you again during the next well-child visit. That's how it happens. So our first idea was: can we have the parent do it at home over time? We split this assessment into administration and evaluation and made them asynchronous; they don't have to take place at the same time. And for administration, our key idea is: can we just have the parent do it? So the first part of the AI there is for the app, running on a tablet or a laptop or a phone, to recognize that the baby is, for instance, on its back. There's a set of cues you can now give the parent, typically through voice guidance, where it can tell the parent: hey, it looks like Layla's lying down on her back, so can you rattle something on her left and see if she reaches for it? Then can you rattle something on her right and see if she reaches for it? That's an example of figuring out in near real time what the baby's posture is, and then giving specific ideas and cues to the parents so they can elicit some movement that is part of this assessment.
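A toy sketch of that near-real-time cueing loop: classify the baby's canonical pose from keypoints, then pick a context-specific prompt for the parent. The heuristic, labels, and prompt wording are invented for illustration, not the app's actual logic.

```python
# Illustrative cue text keyed by canonical pose.
CUES = {
    "supine":  "Looks like she's on her back. Rattle a toy to her left, then her right.",
    "prone":   "Nice tummy time! Place a toy just out of reach to encourage reaching.",
    "sitting": "She's sitting. Hold a toy up high and see if she reaches with one hand.",
}

def classify_pose(kp):
    """Very rough heuristic comparing head height to hip height in image
    coordinates (y grows downward); a real system would use a trained
    classifier over the full skeleton."""
    head_y, hip_y = kp["head"][1], kp["hip"][1]
    if abs(head_y - hip_y) < 20:
        return "supine_or_prone"  # torso roughly horizontal
    return "sitting" if head_y < hip_y else "unknown"

def next_cue(kp, face_visible):
    pose = classify_pose(kp)
    if pose == "supine_or_prone":
        # Disambiguate lying poses by whether the face is visible to the camera.
        pose = "supine" if face_visible else "prone"
    return CUES.get(pose)

print(next_cue({"head": (100, 150), "hip": (200, 152)}, face_visible=True))
```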
Deep: Right. There are probably a number of things that go into deciding what exercises you provide the parent: maybe the age of the child, what exercises you already have data for and which ones you don't. Anything else?
Dr. Modayur: I mean, these are not something that we at Early Markers have come up with. These are standardized tools, if you will, observational tools that clinicians use. We're just facilitating the conduct of this at home, which hasn't been done. And then the second part is that once you do that, you give the parent about a week, so they don't have to make time and sit down and do something with the baby for one hour and be constrained again. Part of the constraint you have in the clinic is that it's time-limited, so we expand the time and you observe the baby longitudinally, a little longer. So that is the first part of the AI, right: to be able to look at a video at home and tell whether the baby is in one of these canonical poses, if you will. Is the baby on its tummy, on its back, sitting, or standing? And based on that, you give very context-specific cues to the parent to elicit movement from the baby. So that's one part. The second part, which we did using a lot of data: this particular clinical assessment tool is called the Alberta Infant Motor Scale, AIMS. That's got 58 specific motor items. Crawling, for instance, will be one item, and there are different kinds of crawling, actually. Crawling to sitting will be another item, and rolling from back to tummy will be another item. Those are examples of what constitutes those 58 items, right? So already, you can see that if you give a video of this assessment to an expert, you can assume the expert knows how to recognize any one of these 58 items. If you want to automate this, you need a system that can classify or recognize any one of these items from video. And that is a Herculean task at this stage, to develop an automation, machine learning, where you can give it one of these arcane items out of the 58 and the machine says, oh yeah, that's item number 17. We cannot do that. So what we have done is reduce the 58 to 15 items, and we show that using just 15 items picked up by the machine, you can still do really well. You don't need 58 items. And even within the 15 items, we mashed together items that are hard to distinguish even for an expert. We'll have expert number one looking at an item and saying, I think it's 18, and another expert will say, no, I think it's 20. It's things like: is the baby leading with the hip, or is the baby leading with the shoulder? I have looked at this for several years and I can't tell, with good reason, I'm not a developmental expert, but even they differ on whether it's leading with the hip or leading with the shoulder. So we combined those items. For instance, we just have one kind of rolling over; we don't try to distinguish between the various kinds of rolling over. The idea being that it becomes a little easier for the machine to chronologically see: the baby's supine, the baby's supine, hey, now the baby is prone. Very likely the baby rolled over, and it's a supine-to-prone rollover. Was it leading with the hip or the shoulder? The machine doesn't know, but it becomes a little easier to train a system that just says: I found a rollover, I don't know what kind of rollover it was, right?
So we use machine learning to reduce the number of items we have to recognize to arrive at a developmental score for the infant. And then the final part of using the AI: even now, we're not thinking full automation, where you pump in the video and out comes the score. We are not there yet. That is the goal.
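A minimal sketch of that coarse rollover logic: smooth the per-frame pose labels, then look for a supine-to-prone transition, deliberately ignoring whether the roll was hip-led or shoulder-led. The labels and smoothing window are illustrative assumptions.

```python
from collections import Counter

def smooth(labels, window=15):
    """Majority-vote smoothing so single-frame classifier flickers
    don't register as posture changes."""
    out = []
    for i in range(len(labels)):
        chunk = labels[max(0, i - window): i + window + 1]
        out.append(Counter(chunk).most_common(1)[0][0])
    return out

def find_rollovers(frame_poses):
    """Return frame indices where a supine-to-prone rollover likely occurred."""
    poses = smooth(frame_poses)
    return [i for i in range(1, len(poses))
            if poses[i - 1] == "supine" and poses[i] == "prone"]

poses = ["supine"] * 100 + ["prone"] * 100
print(find_rollovers(poses))  # -> [100]
```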
Deep: Remind us again, what exactly is the score, and what's the connection between the score and some of the conditions you might have, like autism or cerebral palsy, et cetera?
Dr. Modayur: Yeah, so there is no cutoff per se. A motor score is just one way of objectively measuring the developmental trajectory, as it pertains to motor development, of an infant, derived from a standardized tool that has been norm-referenced on like twelve hundred infants or something. So you get a raw score, and based on the age of the baby, essentially you'll land
Deep: Somewhere on a Gaussian distribution.
Dr. Modayur: Yeah, you get a percentile score, and clinicians typically have a threshold that kind of minimizes the false positive and false negative rates. So, for instance, they could use the 10th percentile as a cutoff and say, if you're below the 10th percentile, then we need additional evaluation of the baby, maybe a full battery of tests, not just motor but speech, communication, behavior, et cetera. That takes more resources and more time commitment to do, but it becomes a trigger for triaging further care. So if you're below the 10th percentile, it doesn't automatically mean that you
Deep: Have a condition.
Dr. Modayur: Have a condition, but it is something that warrants further evaluation and further diversion of resources to evaluating the child.
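A toy illustration of the scoring step just described: raw score, to age-based percentile, to a referral cutoff. The norm values below are invented placeholders rather than real AIMS norms, and the 10th-percentile cutoff is just the example from the conversation.

```python
import bisect

# Hypothetical norm reference: sorted raw scores at one age band from a
# normative sample (the real tool was normed on roughly 1,200 infants).
NORMS_6_MONTHS = [18, 21, 23, 24, 26, 27, 28, 29, 31, 34]

def percentile(raw_score, norms):
    """Percent of the normative sample scoring at or below raw_score."""
    rank = bisect.bisect_right(norms, raw_score)
    return 100.0 * rank / len(norms)

def needs_followup(raw_score, norms, cutoff_pct=10.0):
    """Below the cutoff is not a diagnosis, just a trigger for fuller evaluation."""
    return percentile(raw_score, norms) < cutoff_pct

print(percentile(23, NORMS_6_MONTHS))      # 30.0
print(needs_followup(17, NORMS_6_MONTHS))  # True: below the 10th percentile
```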
Deep: I'm kind of curious about the flip side. Let's say you're in the, I don't know, 18th percentile, but you're a concerned mom and you want to intervene anyway. Are there benefits to getting infants more mobile earlier, sort of regardless of these more serious conditions?
Dr. Modayur: I don't know; it could be controversial to suggest that you need to do that early enough, or that there is just one pathway or trajectory of development that's acceptable. I think it depends on the concern level of the parent, right? At least in Washington state, parents can self-refer. You still have to get in the queue, and there are clinics, like Kindering in Bellevue, that will take you and put you through the full battery of tests. But I think that is one of the benefits of making this convenient, doable at home, and low cost: you can have longitudinal observation of the child. Just because you are at the 10th percentile one month doesn't mean you have that as your trajectory, right? You need a few data points to see whether your concern is validated, and you don't have to wait for the next clinic visit to get additional objective data about your child. So that is the benefit of offering a tool where, depending on the level of concern or how proactive the parent is, they can go through this and figure more out based on the data, right? The final point I want to make is that instead of full automation, which is what we were just talking about with a motor score and the percentile and the cutoffs and all that, the final part of the AI is allowing a clinician or a developmental expert to look at these videos rapidly and be able to arrive at an evaluation report, if you will. Currently, if, for instance, a 30-minute video takes 45 minutes to evaluate, our job is to iteratively reduce the time it takes to evaluate the video. So the AI can run through the video in advance, queue up the interesting or salient parts of the video, and then kind of
Deep: Surface guesses, I imagine. Yes, make the developmental experts more efficient, absolutely, because in this kind of passive observation environment, you could have hours and hours of time where the baby's asleep, or maybe they did 10 of this particular type of activity and you're really after, let's call it, the minority activity, something a little bit different. So even if the machine is wrong a few times, it's still, in theory, going to save the developmental expert time. Right?
Dr. Modayur: Right. And sometimes you may miss something. For instance, we had a practical case like a week ago, when a baby's video was scored, and looking at the breakdown of the component scores, the clinician thought, well, did the baby spend any time at all standing? It doesn't mean the baby has to be independently standing; the parent or the clinician may be assisting the baby and seeing whether the baby can bear weight on its legs. They saw that there was no standing subcomponent score, and that is where the machine can say: hey, here is where the standing parts of the video are, and you don't have to look through a 30-minute video to find where the baby potentially could have been in the standing pose. That becomes a way for the machine to reduce the video you have to look at to get a comprehensive evaluation.
Deep: And I'm also just guessing that, due to time limitations, totally outside the context of the machine-learning-based system, the developmental expert might not get through all of this; they may not get an opportunity to view some baby poses in particular scenarios. Maybe the baby just refused to do something. Is that also one of the benefits of having this kind of long-term, passive, asynchronous monitoring?
Dr. Modayur: Right. I mean, it's clear that the longer you monitor the baby, the better. But that also makes it impossible for a human, even with time on their hands, to go through it all. You've just increased the complexity and the requirements on the clinician's part: instead of looking at a video that's 15 minutes long, now you're saying, well, I have three hours of video gathered over one week, and it's simply impossible for a human to look at all of that. That's where AI comes in: to eliminate the non-salient parts so you look at just the salient parts. And if you find that the baby has not spent any time in the sitting pose, for instance, now you have an opportunity to send a notification via the app to the parents saying: hey, we got almost all the information we need; we just need a little bit more of the baby spending time in the sitting pose. So it becomes a very targeted request to the parent to complete the task, right, so that you can get all the information you need to finish your evaluation.
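A sketch of that triage step: keep only the segments the activity model tagged with enough confidence, surface them highest-confidence first, and flag any required posture with no coverage so the app can send the parent a targeted request. The segment format, labels, and threshold are illustrative assumptions.

```python
REQUIRED_POSTURES = {"supine", "prone", "sitting", "standing"}

def build_review_queue(segments, min_confidence=0.6):
    """segments: list of (start_sec, end_sec, label, confidence) tuples
    produced by an activity recognizer run over the week's footage."""
    salient = [s for s in segments if s[3] >= min_confidence]
    missing = REQUIRED_POSTURES - {s[2] for s in salient}
    # Highest-confidence clips first, so a therapist with 30 minutes
    # sees the most informative footage.
    salient.sort(key=lambda s: s[3], reverse=True)
    return salient, missing

segments = [
    (12.0, 45.0, "prone", 0.91),
    (300.0, 330.0, "supine", 0.88),
    (1000.0, 1015.0, "sitting", 0.72),
    (2000.0, 2004.0, "standing", 0.41),  # too low-confidence to trust
]
queue, missing = build_review_queue(segments)
print(missing)  # {'standing'} -> prompt the parent to capture supported standing
```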
Deep: Perhaps you're not sure whether AI can really transform your business. Maybe you don't know what it means to inject AI into your business. Maybe you need some help actually building models. Check us out at xyonix.com. That's x-y-o-n-i-x dot com. Maybe we can help. What's been your biggest challenge? Maybe there's a technical one and a non-technical one, but what's been the thing that surprised you? Because, as I'm sure you know more than many, it's hard working with video, especially when you have very long time horizons, just the sheer compute power required to train up these models. And you're not even talking about frame-level-only insights; you're talking about activity analysis, which is very much at the forefront of machine learning these days. Maybe let's just start with the technical part: what's been the hardest technical problem? And then the same question on the non-technical front.
Dr. Modayur: I think what we're doing is at the intersection of technology like AI and the behavioral aspects of this, what you would call a behavioral tool that is used to assess infant development. The instruments themselves are quite complicated, and it seemed like a daunting task, like I referred to earlier. When we started this project, it seemed like, do we even have a product if we don't automate extraction of these nuanced, complicated activities from videos of persons that kind of look like a blob, right? It just seemed like an impossible task. Our solution to those challenges has been to pare down the tool itself that is used to assess the infant's development, and that took a lot of time. And then the generation of data itself has been a long process. As you can imagine, we have more data for the pose estimation, because we have a million-plus frames of data, each of them with roughly about 10 keypoints, body joints. So we have a lot of data on that, and we have a system that can detect the baby's pose and the body joints with a great degree of accuracy. What we don't have is that many samples for activities, because you can't get that many samples; even when you look at public open-source databases of human activities, they are not as numerous as samples for image classification. So building this activity database has been challenging, but I think we are in a good place: we have several thousands of samples of these motor activities. But then when you look at a specific activity, you can ask, well, how many samples do we have for activity number 24 out of these 58? And for some activities, we may see just like 50 samples, because infants don't really do some of these activities, or you're not lucky enough to capture them often enough on camera. So that is the challenging part. And generating these samples requires expert time, and that is expensive, right? So our innovation has been to not rely on complete soup-to-nuts automation, but what we call augmented AI, where you have an expert in the loop and you're just trying to make the utilization of that expert's time efficient so their throughput can go up. And then we also innovate in terms of making things asynchronous, so that if you have a pediatric occupational therapist who's got an hour on a Saturday afternoon, they may be able to just look at the queue of videos that have already been processed by the AI, and they can probably whip through like three infant assessments in a 30-minute time slot, right? So we can bring a lot of efficiencies there and build something that is immediately usable in 2021, as opposed to waiting for complete automation in 2025.
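One standard mitigation for the rare-activity imbalance Bharath describes (some items with only ~50 samples) is to weight the training loss inversely to class frequency, so rare activities still contribute to learning. A brief PyTorch sketch with invented counts; this is a generic technique, not necessarily what Early Markers uses.

```python
import torch

# Hypothetical per-activity sample counts; the last class is the rare one.
sample_counts = torch.tensor([5000.0, 3200.0, 800.0, 50.0])
weights = sample_counts.sum() / (len(sample_counts) * sample_counts)
criterion = torch.nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 4)            # batch of 8 clips, 4 activity classes
labels = torch.randint(0, 4, (8,))
loss = criterion(logits, labels)
print(weights)  # the 50-sample class gets by far the largest weight
```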
Deep: So, Bharath, let's fast forward five years, maybe even 10 years out, and talk to that new mother who's got her first child and is nervous about her child's development. What is your system going to give that mother, and why is her life better off using your technology?
Dr. Modayur: That's a brilliant question, right? To be honest, we never really thought of this when we started out the project. Then we went through this NIH program called Innovation Corps, I-Corps. It's like a boot camp for entrepreneurs, and you go and talk to like 100-plus stakeholders, and many of them are people that have pain points you're trying to relieve through your product, right? And it's not always the clinician; mostly it's the parent. Our vision is to have passive observation of the infant at home, kind of blending into the hectic lifestyle of a parent with a newborn in the household, and just be able to give them frequent information about the developmental trajectory of their child. But more importantly, give them something to do. There are specific activities that parents can do to engage with a child, even when they are less than three months old, in terms of verbal communication, social interaction, and motor development. A lot of simple activities that many parents know instinctively, and a lot of parents, like me, for instance, didn't know instinctively. Many of these, what you'd call exercises or activities, are something parents may be doing already. But our vision is to be able to observe the baby and, based on the activities and milestones achieved, or the emerging skills, of that specific infant, have targeted activity plans that the parent can participate in. We started developing this idea, and early versions of it are on our website at earlymarkers.com/minutes. It's called Motor Minutes: real short, bite-sized videos of like 30 seconds or a minute that tell you, OK, your baby is six months old, these are the skills you should already be seeing, these are potentially emerging skills that are happening at this time, and these are activities you could do as a parent to kind of propel the baby along those trajectories, or to augment the baby's development. And if milestones are delayed or being missed, or certain things are not happening: a concrete example would be the amount of prone time, tummy time, that the baby spends on a per-day, per-week basis. This is highly correlated with future development, and tummy time is a hard thing to do for a lot of babies, and a hard thing for a parent to encourage, because it's stressful if the baby's crying. There are some concrete steps the parents can take, and we're developing these occupational therapy modules, or play activities, what we call Motor Minutes, that target the parent and allow them to do these specific activities. And then our system can observe whether the baby's prone time has gone up, right? So those are tangible measures: you help the parent interact with the baby, augment the baby's development, and give them feedback, right?
Deep: So to the mom, you're basically saying, look, in five to 10 years, in your nursery you're going to have a developmental expert; clinicians are going to be in there monitoring your baby and making sure you do just the right thing to kind of optimize their near-perfect health trajectory, or
Dr. Modayur: Something, right? And we, of course, rightfully focus on the video signals, because that's our domain. But there are also audio methodologies for capturing and analyzing the baby's vocalizations and the conversational turn-taking between the infant and the parent, and there are tools that can be used to predict the risk for conditions like autism just based on vocalization analysis. So you don't have to be restricted to just visual analysis. If you have a home nursery solution, it can look at various signal modalities to give feedback to the parent and the clinician.
Deep: Well, it's been fantastic having you on, talking to us about Early Markers and some of the amazing work you're doing to make sure kids grow up healthy. So thanks again for coming on.
Dr. Modayur: Thank you. It was awesome and exciting.
Deep: Thanks so much for tuning into Your AI Injection. If you're interested in reading more about the enabling technology behind Early Markers, you can check out our website, xyonix.com. That's x-y-o-n-i-x dot com slash solutions slash body dash pose dash analysis. And if you like what you're hearing, we're a relatively new podcast and we'd really appreciate you leaving a review on the podcast platform you're listening on. Thanks so much.