Your AI Injection

Why AI Fails According to AI Consultants

Season 1, Episode 2

Artificial Intelligence is transforming industries around you every day, but all too often great opportunities to apply AI in business fail. 

In this episode, data scientists at Xyonix explain some of the most common reasons why AI fails and walk through ways to prevent them. Ranging from a lack of support from the broader company to a poorly structured team, these issues can stifle AI innovation and, if left unchecked, completely prevent eager businesses from successfully applying AI.

To read more about this topic, visit the article on our website:


Automated Transcript

BILL: Welcome to Your AI Injection, the podcast where we discuss state-of-the-art techniques in artificial intelligence, with a focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful.


DEEP: Okay, welcome back to your AI Injection podcast. We've got our regulars, Bill Constantine and Carsten Tusk, and the three of us are going to discuss reasons why AI innovation fails according to AI consultants, which includes the three of us. Bill, Carsten, and I are all data scientists here at Xyonix. We already have an article up on our blog about this topic, which we'll point you to later on; it goes into a lot more detail. But let's go ahead and kick it off right now. So, in your experience, let's just start really broad: what do you guys think are some reasons why AI fails to take off, fails to get features into a product, fails to get some traction? Carsten, you want to start first?


CARSTEN: Okay. I can think of many reasons why AI will fail.


DEEP: All right, well, let's take one of them.


CARSTEN: All right, so one of them: lack of support. You have one part of your organization that is interested in solving an AI problem. They have some data scientists and they're making some progress towards it. But at the end of the day, in order to feed their system, or to get the data they need, they actually need support from other pieces and parts of the organization. And that is sometimes hard to come by. And so, while there might be a chance to build AI, you will never get the data you need, because collecting the data takes effort. Collecting the data might require changes to existing systems, so parts of the organization might be resistant to it; you might even have to change your customer-facing interfaces in order to get that data, and you will get pushback. And so sometimes what I've seen is people say, well, let's try it anyway, and they start on the project. But ultimately it just fails because of lack of data, lack of support from other pieces of the organization.


DEEP: Yeah. I mean, sometimes, as we all know, companies have a certain type of DNA, if you will. Some companies — the rare exception — are tech companies that have this kind of cultural accommodation for data-backed features and data-powered features, and a tolerance for some of the uncertainty within machine learning; other ones are brand-new to it. And I think what you're describing here at least resonates with me: some companies at large maybe don't yet have the kind of collective motivation, DNA, or imperative to integrate AI-powered features into their products. But maybe a team or two or three does, and then there can be some institutional lessons or learnings that need to take place. I don't know. Bill, do you have any thoughts on that?

BILL: Yeah, I think what Carsten mentioned is probably my number one issue in terms of stifling innovation with AI. I think if a company grows large enough and there's not this sort of incentive for other parts of the company — sort of the trust in what the data scientists are churning out — you do get this pushback on really cool stuff, and it may never see the light of day as a result. I've been in the situation in the past where that's actually happened: we're producing this really cool, innovative stuff with AI, but it just doesn't take with the company, because basically they don't want to take the risk that they were willing to take when they were a startup, you know, that they were willing to take when they were a younger company. So, how does one overcome that? How does one overcome that, Deep?


CARSTEN: I have an even better example. So, for example, if—


DEEP: Well, we'll be the judge of whether it's a better example. Yeah, yeah.


CARSTEN: So we had one situation where the company actually wanted to improve customer happiness with their product, right? But the problem was the way that they measured it. Or let's say, the problem was that some of the departments' performance was measured against that customer happiness. However, the whole system was slightly tweaked so that the way they measured it, it defaulted to the highest happiness rating — like, you know, five out of five stars — if you didn't enter your happiness rating. But then our request to change that setting, and either, you know, make it an average of three by default or even force the customers to enter some rating, was hit with a lot of backlash, right? Because those departments were measured on that customer happiness, and they didn't want to change it. So you run into simple issues like that politically.
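To make the default-rating problem concrete, here is a minimal sketch, with entirely hypothetical data and numbers, of how silently recording skipped surveys as five stars inflates the measured satisfaction average:

```python
import random

random.seed(0)

def simulate_ratings(n=10_000, response_rate=0.3):
    """Simulate true 1-5 star ratings; only some customers actually respond."""
    records = []
    for _ in range(n):
        true_rating = random.choices([1, 2, 3, 4, 5], weights=[5, 10, 30, 35, 20])[0]
        responded = random.random() < response_rate
        records.append((true_rating, responded))
    return records

records = simulate_ratings()

# What customers actually felt, on average.
true_avg = sum(r for r, _ in records) / len(records)

# Policy A: non-responses silently default to 5 stars (the setup described above).
default_five = [r if responded else 5 for r, responded in records]
avg_default_five = sum(default_five) / len(default_five)

# Policy B: only count surveys that were actually filled in.
responses_only = [r for r, responded in records if responded]
avg_responses_only = sum(responses_only) / len(responses_only)

print(f"true average:           {true_avg:.2f}")
print(f"default-to-5 average:   {avg_default_five:.2f}")  # inflated
print(f"responses-only average: {avg_responses_only:.2f}")
```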


DEEP: I mean, kind of related to that — I would just like to take that up a notch. It's this idea of dark data, which we run into a lot: oftentimes the data scientists are the very first people actually looking at the data. Data is sometimes being collected, and maybe it's being looked at — like in this example you had — for the sake of performance bonuses, but for no other reason. The data scientists might be the first ones to start looking at it from a very different angle, and inevitably we find all kinds of anomalies and problems in the data and the way it's being gathered, and so we need some changes there sometimes. We had a customer, a project that I'm thinking of right now — this was an insurance company project — where there was a third-party collector of the data, and there were minimal eyes on the data in some senses. And then we started finding a lot of things that needed to be addressed in order to get a prediction out — whether it was predicting whether somebody's going to lapse or be a great customer or what have you — and we needed to get those things resolved. So it's not necessarily just within the company that you see the resistance you're talking about; sometimes it's, oh, now our project has to go to the company, and the company has to go to the third-party company that gathers data for them. And the further away a company is from the ability to change that data flow, the more problematic things can get.

BILL: Amen, brother. I have experienced that big time. And is it the case with you guys? A lot of times we are the first people to look at the data in a very serious way, but oftentimes we don't know the origin of the data, whether there's quality control on that data, whether it's being collected from a variety of sources. We sometimes are just handed this stuff and we've got to go through it, and then we start figuring stuff out and start asking questions. And at the point when we've asked enough questions and found enough gotchas, that's when you start thinking, maybe we should be the ones taking this data in.

DEEP: Yeah, exactly.


CARSTEN: You know, that's like an estate sale, where you get led into somebody's dark basement. You're like, do you guys have data? Yeah, we do have data, it's down the stairs. And it's like this dark basement with all kinds of stuff in there, and you're like, well, I kind of need a garden hose, and I'm not sure there is one here.


DEEP: There's some mishandled rubber tubes over here, and then there's, yeah, some copper or whatever over there.


CARSTEN: The mismatch between the data that they have and the problem they want to solve is actually a huge issue, and then, like we just alluded to, there's the challenge of getting the right data if it doesn't already exist.
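A minimal sketch of the kind of first-pass audit being described here — the sort of thing data scientists run when they are the first to look seriously at a dataset. The file name and column names are hypothetical.

```python
import pandas as pd

# Hypothetical customer dataset; the file and column names are made up.
df = pd.read_csv("customer_ratings.csv")

# 1. How much of each column is actually populated?
missing = df.isna().mean().sort_values(ascending=False)
print("fraction missing per column:\n", missing, "\n")

# 2. Suspicious defaults: a rating column dominated by a single value
#    (e.g., 5 stars) often means the value was auto-filled, not entered.
print("rating distribution:\n", df["happiness_rating"].value_counts(normalize=True), "\n")

# 3. Duplicate records, which inflate apparent volume and can bias any model.
print("duplicate rows:", df.duplicated().sum())

# 4. Basic sanity ranges on the numeric columns.
print(df.describe())
```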


BILL: You're listening to Your AI Injection, brought to you by xyonix.com. That's x-y-o-n-i-x.com. Check out our website for more content, or if you need help injecting AI into your organization.


BILL: You guys ever been in this situation — Deep, I'm asking you in particular — where a client was beholden to a particular third party to get their data, but then that third party went away? So literally the source of their data was going to go away, and then they had to scramble to try to replace it.

DEEP: I personally have seen that before. It feels like control of your data pipeline, and your ability to affect it, is really important for success — and the lack of that is really important for failure. Would you guys agree with that?

BILL: Totally, absolutely. I've seen it rear its ugly head quite a few times now, actually. So right, I totally agree.


CARSTEN: I feel it's okay to go with third parties in order to save resources if you don't have them in house, but you better have a backup plan, and you better have a backup plan to the backup plan. In other words, if you had to, you should know how to get that data yourself, right? For example, you go to a company that does web scraping or data acquisition for you — great, don't build a team in house to do that, outsource it. But you know that if you had to, you could build that team in house, and that's your backup plan.


DEEP: Or you can swap it out, you know? Yeah.


CARSTEN: That's a very important thing.


DEEP: Yeah. And maybe, since we're honing in on some cultural issues, pivoting a little bit: we've been looking at the culture of a company, but are there cultural issues within data scientists themselves that maybe lead to AI failures?

BILL: I think I have a great one.

DEEP: Okay, you have a great one? Well, we'll be the judge of whether it's great.


CARSTEN: So I'm telling you—


DEEP: Okay, let's hear it.

BILL: Okay, here we go — I'm putting myself out there. I think I've been in situations where we have these incredibly smart data scientists who know nothing about production, and they're very, very resistant to it. I think one of the things I appreciate about us, if I'm going to pat ourselves on the back, is that we've been around long enough to know that if you're going to get something out, it's not just a matter of being smart and being able to come up with great, innovative scientific ideas and analytics — you have to be able to get it into production. And actually, one of the things I can say I very much appreciate at Xyonix — and Deep, maybe you've pressed this — is getting the training wheels and production going early, and focusing on continual improvement of the product as you go along.

DEEP: That's right. Those are lessons we've kind of learned the hard way. Carsten, you were going to say?


CARSTEN: I was going to say we have seen the opposite too, though. So that's, you know, a double-edged sword, because we have seen companies that were so focused on production that the AI discussion kind of fell under the table, right? Where operational security was their main thing, and for every little thing, even before validating that there was a nice solution to the problem, they were worried about operational support and structure, etc. And my advice for that is: look at your problem first. See if it can be solved. Because if it can be solved, you will find a way to put it into production; the other way around is just, initially, really not important.


DEEP: Yeah, that's a good point. And I think I've seen that not just from the vantage of security — you see folks worrying about all kinds of stuff except whether there's signal. It's almost like companies have a natural core competency, and if you take any meeting full of people, there's going to be a natural set of core competencies there. Some companies are just naturally great at, like, devops, for example. This made me think of a project — a healthcare company that was really awesome at getting things into a SaaS production environment with very sensitive patient healthcare data, which, as you can imagine, requires you to be pretty good at devops-y stuff, especially at scale. And what we've seen is that kind of natural core competency, or DNA in the room if you will, can hijack the conversation. It can make it so that you're just talking about devops issues, or just talking about security issues, or just talking about architectural issues — none of which matter if there's no signal. Like, if you're trying to predict the stock market and get, say, 70% signal on a 50/50 bet on whether a stock is going to go up or down, and all you do is talk about security and stuff, you could spend half your lifetime trying to plan your company out, only to find out when you finally check the signal that the golden goose you were after doesn't exist.

BILL: And there's a naivete about how AI, in some people's minds, can solve any problem, no matter what. There's an expectation with those companies that are, as you said, maybe devops focused — they generally get a lot of stuff done, because there are a lot of protocols in place for them to do so, and they don't necessarily have compassion for the data scientists when they're doing sort of research-based AI approaches. Sometimes we face incredibly challenging problems where you just don't make progress.
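A minimal sketch of the "check the signal first" idea: see whether even a simple baseline beats chance on held-out data before investing in production infrastructure. The data, model choice, and acceptance bar here are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical setup: X is whatever cheap features you can assemble quickly,
# y is the roughly 50/50 outcome you care about (e.g., metric up or down).
# Random data here stands in for "no signal"; swap in your own extract.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = rng.integers(0, 2, size=2000)

# Quick-and-dirty check: if even a simple model can't beat chance on
# held-out folds, debating devops and architecture is premature.
baseline = LogisticRegression(max_iter=1000)
scores = cross_val_score(baseline, X, y, cv=5, scoring="accuracy")

print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
print("worth building infrastructure yet?", bool(scores.mean() > 0.55))  # hypothetical bar
```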


CARSTEN: Sometimes I think it's due to a lack of understanding of how AI actually works, right? Where people say, well, hey, if it can do this, then it should be able to do that. And what they don't realize is that one is totally trivial and simple, and the other one is almost impossible and has never been done. If you don't have experience in the field, and if you don't really know how the algorithms work, you might, as a layperson, not really realize that. And so people have unrealistic expectations sometimes, and that's also a reason why AI fails: they go into the project with the idea, hey, I've seen it detect kittens, it should do this, my project here, with high accuracy as well — and then it just doesn't, and they say, well, if it can't do that, then I'm not interested. Whereas really what they should be asking is: how can AI improve my existing process? Even if it cannot really solve the problem that I envision, it can help you along the way, right? Can I put a human in the middle and, you know, double their work efficiency? Because I feel like AI is about efficiency improvement, not necessarily solutions to problems. We have this typical learning curve where we can get to, like, 70% accuracy on many problems fairly easily, then going from 70 to 80 will take us a year, and going from 80 to 90 is ten years of research. Many people don't realize that. They think it's a linear progression, and that, just like engineering, if I just put enough manpower at it and just work on it, it will make progress. No, it does not. It's exponential, and your progress might stop, because at the end of the day, AI is still research, unless you're trying to do something that has been done before.


BILL: You're listening to Your AI Injection, brought to you by xyonix.com. That's x-y-o-n-i-x.com. Check out our website for more content, or if you need help injecting AI into your organization.


DEEP: Going back to the idea of cultural issues within the data science world — Bill, you kind of pointed out one. I'm going to add another one that I see a lot, which is: in data science, most of us come from an academic background, most of us have spent time slogging away in grad school, and very little to no credit in grad school generally comes from finding better data. Almost all the credit and medallions get handed out to those who come up with new algorithms, new deep learning architectures — those who basically push on the models themselves. And I see the same thing in the practical data science world, where it feels to me like almost an inordinate amount of energy goes into coming up with new models — model types, model structures, model architectural tweaks — and relatively little goes into the data, when in reality the opposite is what can really push the problem forward and make progress. What do you guys think about that?


CARSTEN: I just want to chime in there—


BILL: Sorry to interrupt, Carsten, before you do — I want to chime in there too. I think that is very well put. Just from my own experience coming from academia: I studied cardiac dynamics, and there was so much research going on around taking basic ECG-level signals and doing something good with them. Machine learning was starting to really take a foothold, and people were just going crazy thinking, well, I'm going to take this regular, basic data and feed it through this random forest and these neural nets, and they're going to come up with magic — when in fact that really wasn't the case in many, many circumstances. When I had the opportunity to work with some cardiologists, we got this implantable cardioverter-defibrillator data, ICD data, where you're literally measuring somebody while they were alive and then, at the end of it, while their heart was failing and they were, in the course of it, dying. That kind of data was just incredibly rare, and we were able to milk it for all it was worth. But that storyline has come back many, many times: it comes down to the data first, and then maybe you can apply some fancy algorithms on top of it. The attention to data should be the first thing.

DEEP: Cool. I'm going to switch gears here. So one of the problems that I see in getting AI-backed features adopted into products — and this kind of goes back, Carsten, to what you were saying about this almost-belief that machine learning can do more than it can, and you were talking about getting from 70% efficacy to 80, to 90, into the mid and high 90s; it's an evolution, it's a timeline. A lot of times, as a business, what I see is folks will put machine learning, and the inevitable errors that the models will have, up front, and directly expose users to them. And one of the paradigms that I see being missed when you do that is you start obsessing over the inevitable errors that the models have. You quickly wind up in a scenario where maybe a salesperson or an exec is out pitching, demoing, and then the thing fails — it comes back with bad predictions, bad recommendations, whatever — and then the conversation, the energy, can easily be distracted down to one or two or a handful of examples. When in reality, I think businesses would be better off if they stuck a human in the middle and started looking at some of these problems with a human in the loop, so that you think about things differently. Like, if you're trying to build something that, I don't know, let's say is trying to have conversations with psychiatric patients, or with somebody with some psychological issues, kind of in the vein of a therapist, for example — if you rethink your business around what service is being offered today and how you can put some of those humans in that loop, then you take the machine learning and, instead of putting it front and center — where right off the bat it's talking to people in the form of a bot, in which case it goes off the rails quickly — you instead leverage it to bring efficiencies to those humans. What do you guys think about the concept of humans in the loop, and AI errors just being too exposed?


CARSTEN: I think it's very important, because I feel like the people that manage AI projects are used to managing software projects, where there's a bug, so please fix it, right? And that just really does not happen. People spot errors from your AI and they say, okay, this is wrong, this is wrong, this is wrong — the next version should fix those things. And that's just not really how it works at all. You can try to mitigate these errors if they're very significant, you can try to build better models that have fewer of those errors, but you will never be able to address one specific example, or get rid of something that somebody — some executive — noticed. I think the message is: learn how to live with the errors, and figure out what error rate you can accept, because you should not expect a flawless model. And I feel like that's where the human in the middle comes into play. I like to call it search reduction, right? Let's say you have a problem where typically there's an error rate of, let's say, 80%, and there's a person who sits there twelve hours a day and has to look through 10,000 samples of errors, right? If I have a machine learning model with an accuracy of 50% or 60%, I can reduce the time that person has to spend manually correcting those errors by 50%. I feel like that's a huge gain, right? We should not actually expect AI to replace humans. We should expect AI to augment them and make their lives easier and increase their efficiency — not necessarily solve the problem a hundred percent.
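A minimal sketch of the "search reduction" idea Carsten describes: a model scores items, confident items are handled automatically, and only uncertain ones are routed to the human reviewer. The thresholds and score distribution here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def route_for_review(probabilities, low=0.2, high=0.8):
    """Return indices of predictions the model is unsure about (human must review)."""
    probabilities = np.asarray(probabilities)
    uncertain = (probabilities > low) & (probabilities < high)
    return np.where(uncertain)[0]

# Pretend model confidence scores for 10,000 items a reviewer used to check by hand.
# A U-shaped beta gives many confident scores near 0 or 1 and some in the middle.
scores = rng.beta(0.5, 0.5, size=10_000)

needs_human = route_for_review(scores)
saved = 1 - len(needs_human) / len(scores)
print(f"items routed to the human: {len(needs_human)} of {len(scores)}")
print(f"manual review workload reduced by about {100 * saved:.0f}%")
```

The gain, of course, depends on trusting the model on the confident slice — which is exactly the "learn to live with some error rate" trade-off discussed above.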


DEEP: I'm going to switch gears for one last topic. Have either of you seen scenarios where AI fails to take off, or fails to make it, based on, let's say, a suboptimally structured team — where the teams are just not structured right? So this is kind of an organizational question. Or maybe there's insufficient team strength in a particular area, like where maybe there's not enough core ML expertise, or maybe programming or distributed computing expertise, or something like that.

BILL: I'll say a lack of programming expertise has certainly stunted the growth potential of certain products that I've worked on in the past, and that's been a bit of a frustration, because I think it very much cut short the longevity of a really cool idea or project — folks didn't know how to get things done from an engineering perspective. I've never personally been involved in a team where there was a lack of technical expertise with analytics, so I've been blessed in that sense, I guess you could say, but I've definitely fallen short on the engineering side, which is important.

DEEP: Great. I think we're going to have to wrap up. We've covered maybe not all of the reasons machine learning or AI systems fail to get out the door, but I feel like we've touched on a lot of the issues here, whether cultural or procedural. So, with that, let's call it a wrap. Thanks, guys, for coming in and chatting — yeah, that was fun. All right, till next time.


DEEP: That's all for this episode. I'm Deep Dhillon, your host, saying check back soon for your next AI injection. In the meantime, if you need help injecting AI into your business, reach out to us at xyonix.com. That's x-y-o-n-i-x.com. Whether it's text, audio, video, or other business data, we help all kinds of organizations like yours automatically find and operationalize transformative insights.

