This week we interviewed Dr. Werner Koepf, a successful organization leader and AI practitioner. Currently the Senior Vice President of Engineering at Karat, Dr. Koepf has served as a tech executive and CTO at numerous companies, including Expedia, Ticketmaster, and Conversica.
We chatted with Dr. Koepf about his background in theoretical physics, and how he transitioned to working in software and product development. Dr. Koepf shared some of his experiences that have shaped his approach to AI and what successful companies should do—and not do—to integrate AI into their business models.
DEEP: Welcome to this week's episode of Your AI Injection. This week we've got Dr. Werner Koepf. Werner worked for a number of years at Amazon and has served as a tech exec and CTO at a number of companies, including Expedia, Ticketmaster, Conversica, and now Karat, the startup bringing some tech savvy to technical interviews. Today we're going to talk about leveraging AI, but from the perspective of a seasoned CTO. Werner, thanks for being our guest on the podcast. Excited to chat with you about AI from a business leadership perspective, and ideally dig in on some of the experiences you've had in the industry and the things that have shaped your thinking around AI. So with that, I want to start by going way back in time. You got your PhD in theoretical nuclear physics from the Technical University of Munich. What's your elevator pitch for your work back then, for us non-physicists? Tell me, how did you get into physics? What was the passion, and what were you actually working on?
WERNER: Sure. Hey Deep, thanks for having me on your show. So I guess I was always really good at math, all the way from primary school through middle school to high school. When I was maybe nine years old, I was walking around the classroom helping the other students, so doing something in math and physics was kind of a given; I wasn't really that good at languages or the liberal arts or anything like that. So math was it. Actually, I started studying to become a math and physics high school teacher in Germany; there's a special master's for that. Once I got into it and had the opportunity to do a couple of practica, or internships, at schools, it was clear that working with kids has its own challenges. I really liked the math and the physics part, so I got a master's in physics and then stayed on for my PhD. During my master's thesis, and then also for my PhD, I basically did model calculations around electron scattering and the structure of the nucleus. It was always a good combo of doing some stuff with pencil and paper and then putting it on a computer. Like my master's thesis: the brilliant idea was that we were going to determine some of the discrete levels that a nucleus has when it rotates, and we were going to do that by applying the theory of general relativity, basically transforming into the system of a rotating nucleus and from that getting to those discrete levels. In practice that meant literally hundreds of pages of equations, where we tried to transform the Lagrangian of a static nucleus into a rotating coordinate system, and there were hundreds of terms.
They came up, and at the end almost all of them cancelled out, and you were left with a partial differential equation that I put on a computer and solved, and then generated some graphs and published a paper. What I realized out of that whole process was how much I enjoyed the part about writing the code: having to QA myself and verify that whatever I was doing actually made sense, that I didn't forget a two somewhere in the denominator or put a minus sign where a plus sign should go. So obviously I had to come up with different ways of getting to the same result, to verify that what I was doing was actually correct. I had a lot of fun doing it. So then it was time to get a real job. I got my PhD in, I think, 1991, and then I did like six years of postdocs, in Seattle, in Tel Aviv, and at Ohio State. After those postdocs, I was looking at both getting a professorship and getting a real job, and then I guess the real world won out.
DEEP: Interesting. So what attracted you to the real world? And I don't know if the alternative is the fake world, I mean, we are talking about the elements of the universe here. But what was it about, I don't know, business, the much more applied, practical world, that attracted you?
WERNER: That's a good question. I guess I've always, maybe, been a victim of opportunity, or been able to embrace opportunity. So I was a postdoc at Ohio State, and I had applied for a couple of professorships. At that time, in the field I was in, there were a lot of folks I was competing with who had come over from the Eastern Bloc, because the wall had come down a while before. I had maybe 30 papers and they had like 300 papers, and we were both going for the same jobs. I was on the short list at the University of Saskatchewan, and then right before I was going to fly there and give my colloquium, something else came up. One of the jobs I had applied for was in San Diego, and one of our postdocs from Ohio State worked at Fair Isaac, which was the credit card fraud detection company in San Diego. I wanted to visit him, say hello, and hear a little bit about what his day-to-day looked like, because he came out of the same background I did and he seemed to enjoy his job. When I got there, they actually had a candidate scheduled for that particular day, and the candidate didn't show up, so they basically ran an interview on me without really telling me. All of a sudden I was interviewing for a job at Fair Isaac without even having applied, and at the end they made me an offer. At that time I had one kid and one on the way, we had just moved back from Tel Aviv to the States, and my then wife didn't want to move to San Diego, which to this day I think was a very poor move, because it was like 10 below in Columbus and about 75 in San Diego, and they were like, wow, it's a cold day.
It was also a very interesting job, a lot of, you know, data and model management. I think they had models that they were working on around the globe, and they were going into different countries, so the job was going to be tweaking those models for the particulars of a particular country. But still, you know, it was better than Ohio. But I had also applied for a job at a company in Pittsburgh, where my then wife was from: a neural network company called NeuralWare, where they had basically built their own proprietary neural network tool that they applied to all kinds of different use cases, and actually sold as downloadable software. It hooked into Excel, so you could load your data into Excel and then run neural network models on top of it. And then I pinged the guy, the CEO, and said, hey, I got a job offer from Fair Isaac in San Diego, why don't you look at my resume now? Because he seemingly hadn't before. In the end he did, and he invited me in, and then he offered me a job on the spot. So, you know, like they say...
DEEP: Everyone wants to dance with the boy who's already dancing with another girl.
WERNER: Yeah, absolutely. It is way easier to get a job if you already have one. Actually, it was pretty hard coming out of physics. I wrote, sort of pre-blog, a twenty-page guide that I published on my page at Ohio State on how to get a job, how to write your resume, how to network, and blah blah blah. I actually even gave a talk on that, about networking in the real world, at one of the American Physical Society conferences. I think in the meantime it's probably a little more streamlined. At that time, many of my peers ended up at, you know, quant hedge funds.
DEEP: Yeah, the classic destination for physicists.
WERNER: Wall Street. Looking back at my career, it probably would have significantly increased my earnings, but I probably would have had a little less fun. Many of those guys that I knew back then went to hedge funds, a number of them became traders, and, you know, they really got into the industry, but they got out of the actual technical matter of things. I still, I mean, I don't solve partial differential equations using Fortran libraries anymore, but I'm still doing tech. I'm still breaking things apart and putting them back together and thinking about concepts. So even though I'm in a very different field now, I still somewhat feel I didn't stray too far from what I was originally studying.
DEEP: Yeah, I can see that. I've worked with a lot of technologists, engineers, and machine learning folks over the years, and for some reason I find the physicists always stand out in their thinking, coming at problems from unorthodox directions in ways that can be really insightful. I feel like I don't really grok their thinking patterns, so I think I'm not alone in identifying that that background has something really special to bring to the field. So tell me a little bit about when you started seeing these neural networks. Tell me about the first time you saw some machine learning or AI algorithms, maybe observed some supervised or even unsupervised learning with these models. What went through your head? Was it exciting, did a light go off, or was it like, this is just a slightly different algorithm? What happened?
WERNER: So I think it definitely was a combo. To some degree, it's not all that different from some of the Monte Carlo simulations or the big partial differential equation stuff we'd been doing. And actually at NeuralWare, we ultimately built an inferential sensing tool, where people who work in refineries would take all kinds of data on their plant, and then, instead of putting a sensor in the chimney, they would run my software to predict what the NOx emission or whatever was, and then they would get certified by the EPA. They would move the plant around a little bit and make sure that whatever my algo was predicting was the same as what the actual sensor said. I think the big difference was that data played a much bigger role. In physics, the equations you have are all, you know, derived by research, so there really isn't any doubt about them, and then you pretty much straightforwardly brute-force your way to a solution. With AI and machine learning, what really comes out of it is much more a function of how good your data is. And really understanding that took me a while; I'm sure I didn't have those thoughts way back then, but now, having seen this for quite a while, it's definitely become more and more apparent to me. It's that combo: what math are you throwing at it, what computing power are you throwing at it, what's your algo, what's your hyperparameter optimization, all those things. But ultimately it really comes down to what data you have. And I think many of the mistakes made in this space are related to trying to brute-force things where you really don't have the data for it.
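The inferential-sensing setup Werner describes, predicting a stack emission from routine plant measurements instead of installing a physical sensor, can be sketched in a few lines. Everything here is invented for illustration: the signal names, the coefficients, and the noise level. A real soft sensor at NeuralWare would have used their neural network tooling rather than a plain least-squares fit.

```python
import numpy as np

# Toy inferential ("soft") sensing: predict a stack emission from routine
# plant measurements instead of a physical chimney sensor.
rng = np.random.default_rng(0)

# Invented plant signals: fuel flow, burner temperature, O2 concentration.
n = 500
fuel = rng.uniform(1.0, 5.0, n)
temp = rng.uniform(900.0, 1200.0, n)
o2 = rng.uniform(2.0, 6.0, n)

# Pretend ground truth from a certified reference sensor (relationship invented).
nox = 3.0 * fuel + 0.05 * temp - 4.0 * o2 + rng.normal(0.0, 1.0, n)

# Fit a linear soft sensor by least squares.
X = np.column_stack([fuel, temp, o2, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, nox, rcond=None)

# Compare the soft sensor's predictions against the reference sensor.
pred = X @ coef
rmse = np.sqrt(np.mean((pred - nox) ** 2))
print(f"soft-sensor RMSE: {rmse:.2f}")  # roughly the injected noise level
```

The check at the end mirrors the EPA-style validation he mentions: vary the plant's operating point and confirm the soft sensor still tracks the reference sensor.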
DEEP: Yeah, I think that's a super important point. One thing before we really dig into that topic, because I think that's a good one to dig into: lately I've been attracted to this idea. You mentioned you don't always have the right data, right? So sometimes you've got the opportunity to really marry the physics-based world, those physics-based simulation approaches, with this machine-learning-driven, data-driven AI approach. For example, you're starting to see systems like this in self-driving car data. You've got cars out there driving all over the place, and let's say you want to teach them about stop signs. You've got all this information coming in on stop signs, but specifically you want to teach them how to read a stop sign even when there's graffiti all over it. You can imagine coming up with a straightforward physics-based simulation of graffiti and generating a ton of ground truth data. We did something similar once: we were analyzing a lot of in-body surgery data, and we were trying to classify when a cauterizer was being used, when you're actually cauterizing flesh. Actually, we were looking at some other signals too, like suturing inside the body, and we realized we didn't have enough training data, because a lot of times when cauterizing was going on you'd get all this smoke that would obfuscate the actual needle and thread. So we started putting fake smoke into the scenes to basically get a ton of training data.
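The fake-smoke trick Deep describes amounts to compositing a synthetic haze over clean frames to manufacture extra training examples. A minimal sketch, with invented frame sizes and blending parameters; a production pipeline would render far more realistic smoke:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_fake_smoke(frame: np.ndarray, opacity: float = 0.5) -> np.ndarray:
    """Alpha-blend a blurry white haze over a grayscale frame in [0, 1]."""
    h, w = frame.shape
    # Low-frequency noise as a crude smoke mask: random field, box-blurred
    # along rows and then columns.
    mask = rng.random((h, w))
    kernel = np.ones(9) / 9.0
    mask = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, mask)
    mask = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, mask)
    # Blend toward white (1.0) wherever the mask is dense.
    return np.clip((1 - opacity * mask) * frame + opacity * mask, 0.0, 1.0)

clean = rng.random((64, 64))  # stand-in for a surgical video frame
augmented = [add_fake_smoke(clean, o) for o in (0.2, 0.4, 0.6)]
```

Each clean frame yields several smoke-occluded variants, which is exactly how one real example gets "blown up" into many training examples.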
So I'm curious: it sometimes feels like things come full circle, and you're seeing a lot of physics-based approaches being introduced into the ground truth or training data generation process. I'm curious if you've given any of that any thought.
WERNER: Way back at NeuralWare, where we were basically software and then got bought by Aspen Technology, which is a big software company for the petrochemical industry, we definitely married some data-driven models, models based on data, with models based on first principles and partial differential equations. In machine learning, I think it's all about using whatever tools you have at your disposal to solve a particular problem. Why would I have a model do all the hard work to relearn what I already know? If I can input what I already know, like the fact that those two variables are related by an equation where the second one is one over the other times 450 plus 12, why would I spend all the energy to have a model rediscover that? I think the trick in designing some of those systems is how you actually put the two together, which is not necessarily always straightforward, right? But clearly, if you know what the physical relationships are, you could basically generate data that encapsulates that, right?
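Werner's made-up relationship, where the second variable equals 450 over the first plus 12, illustrates the point: if the physics is known, you can generate data that encapsulates it rather than hoping a model rediscovers it. A toy sketch, with an invented noise level standing in for sensor error:

```python
import numpy as np

rng = np.random.default_rng(1)

# Known physical relationship: y = 450 / x + 12 (Werner's example).
x = rng.uniform(1.0, 10.0, 1000)
y_true = 450.0 / x + 12.0
y_measured = y_true + rng.normal(0.0, 0.5, x.size)  # simulated sensor noise

# A model fit to this generated data recovers the known constants,
# confirming the data faithfully encodes the physics.
A = np.column_stack([1.0 / x, np.ones_like(x)])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, y_measured, rcond=None)
print(a_hat, b_hat)  # a_hat close to 450, b_hat close to 12
```

In a real system the generated pairs would be fed to the learner alongside whatever measured data exists, so the model spends its capacity on the unknown residual rather than the known curve.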
DEEP: Yeah. I've been in these scenarios where you're producing this generated data and you just can't help but feel kind of, I don't know, goofy about it, because it's data that could be represented way more compactly. But just due to our understanding of how to train these models, we end up blowing it up into a bajillion examples. The whole thing sort of feels odd, but in the end it achieves that goal. So tell me a little bit: I know you were at Amazon during, maybe not the early early days, but certainly earlier than today. What was your way in to Amazon like, and what kinds of data science and machine learning stuff were you exposed to, or thinking about, back in those days?
WERNER: So I came in as what's called a two-pizza team leader. That was a phrase coined by Rick Dalzell, who came to Amazon from Walmart, where he was the CIO, I think. Those teams were very independent of each other and had a very clear mission. We were all supposed to have a fitness function, which was like a one-dimensional chart that showed how well my team was doing, and you would spend a good bit of time iterating on what that fitness function was; you would meet with Jeff and his team to debate it. So I owned, basically, gifting, kind of like a regex: anything that had "gift" in the name on the Amazon retail side. I owned everything from the wedding registries and the wish list to gift wrap, and for a while I owned gift certificates, which is a pretty big business. We did the math at some point, and I think we were like the fourth or fifth largest on Amazon at that point. So we had a couple hundred million a year, even back then, flowing from my features, with a team of like 15 engineers. I think Amazon for me was a crash course in how to manage engineering at scale, how to think at scale, how to think about metrics, how to work backwards from the customer. We used machine learning a little bit. We did some campaigns where we would basically optimize things on the Amazon homepage, or on your wish list, or other things related to gifting. But I was much more a consumer of that; it wasn't so much about machine learning, it was more about retail and execution.
DEEP: Gotcha. So maybe switching gears a little bit: since you left Amazon, you've helped accelerate a number of organizations, both in development and in the kind of tech stack and exec cadence needed to write software quickly and achieve something. But for the last few years, at least, you've definitely been focused a lot more on machine-learning-based systems. What do you think the differences are, from a cultural standpoint, between getting product managers and developers to develop more traditional software, versus this machine learning world where you've got uncertainty, you've got data sets, you've got all that? What advice would you give to other technology leaders, CTOs, product leaders, etc., on what that difference is and how to navigate it?
WERNER: Okay, maybe I'll start with answering the opposite of your question. I think there are a lot of similarities between data science, machine learning, or AI-driven projects being successful and regular projects being successful. A lot of it is on the product side: how well do you really understand your customer, is there really a problem there, and how well are you equipped to solve that problem? You know, I spent some time at a startup where it was pretty unclear what the problem was, we clearly didn't have the data, and therefore we didn't get very far. I think Conversica is very much the opposite: they found a very well-defined problem, came up with a very well-defined, relatively narrow but very powerful solution for it, and they ran with it. In general, in software (I haven't switched over into product; I've always dabbled a little bit in between, and right now I am doing product as well, temporarily, back at my previous company), one of my most intense interactions was always with our head of product. For an engineering leader, what the team is working on is more important than anything else. If they're working on the right stuff, you can always make it work. I think that's where the similarities are: it's just about good product management, figuring out what the problem is, prioritizing right, narrowing it down, MVP, getting something out, getting feedback, working backwards from the customer, and good things are going to happen. The additional dimension with machine learning is this: with regular software, if you build, say, another payment system, there aren't really any unknowns about the mechanics that tie it all together.
People are going to put money in their wallet, they're going to use your thing to pay at a store, you're going to have to sign up the stores; it's all known. On the AI side, my experience has been that when you talk to the CEO, or people on the board, or the product people, they have no idea how to distinguish between what is totally trivial, what is doable with a good bit of work, and what is completely impossible. And that makes product, figuring out what to do, an order of magnitude harder. Because now, if you tell them, hey, I don't think we can do that, will they really hear it, or do they just think you're not working hard enough? Or when you tell them, well, we can't do that because we don't have that kind of data, okay, now it's about getting that data. It's almost like there are additional degrees of freedom that in regular software you don't have to worry about: do I have the right data, and can I actually solve the problem? In most predictable, straightforward, algorithmic regular software, that's known; here it typically is not.
DEEP: Yeah, I think that's very well said. Sometimes you even run into that with software itself, where folks will see feature A and feature B and feature C, and they'll think, okay, those seem like really hard things, from whatever hunches they have. But then for feature D, the developers say, oh, by the way, we're going to need a new platform for that, or we're going to have to completely re-architect to support that case, and you have a similar kind of thing where they're like, what? I don't understand, why would that be the case? So yeah, it's definitely one of the challenges. One question I wanted to ask you: do you see some kind of pitfalls for organizations that are trying to leverage AI, that they should avoid? Are there particular traps you've seen orgs fall into with respect to accelerating AI-backed features into products, where you think there's a simple set of things where, if you just follow these, you'll be much better off?
WERNER: Sure. So I'm a huge fan of doing prototypes and proofs of concept, and in AI-powered software I think it's even more important than in regular software. Part of the scrum process is what's called a spike, which is for when the team doesn't really know how much work something is going to be or how to do it, so they spend some time where the outcome really is knowing how hard it is and how to do it. In an AI-driven project, that margin is much, much bigger: you could be two years away from really having a product. So what you want to do to de-risk that is build prototypes; you want to work your way there. You want to have an idea of where you want to go, and based on that idea, decompose it into small steps that allow you to get there, where every one of those steps is proving to yourself that it's still possible. Now, ideally, while you do that, you get it out and get some feedback from customers. But you've got to look at it much more as, and I know it's funny to use that phrase, a learning journey. Ultimately, maybe you apply deep learning and you're training a neural network that's got a billion parameters, but you are also learning what you can do with the data that you have, and what your customers are going to do with the product you gave them; maybe you get better data and now your product gets better. I think it's that understanding that this is much more about figuring things out than shipping something you've already figured out. Once you've figured it out, it becomes a machine. Like at Conversica: we basically figured out how to train models to understand those conversations, and then it becomes, okay...
Now we have hundreds of models, and now we have to build software that tracks how well those models are doing. We have to build dashboards, we have to build monitoring. But now I'm back at, okay, I'm running five thousand servers at Amazon, and I have to figure out what the CPU is and what the memory is and how well my service is doing; it all kind of maps back to just good old engineering. But when conceiving a new product, I think you've got to understand that and have that modesty: a lot of the time, you really don't know what you're doing, and it's about figuring it out, which is part of the journey. And, not to throw in a plug for you guys, but I think as physicists we are good at recognizing that stuff is similar to other stuff, and then we apply partial differential equations or rules or math from domain one in domain two, and everybody's happy and somebody gets a Nobel Prize. I think a lot of machine learning is like that too: a lot of those problems, in a higher-dimensional space, map to each other. So the fact that you've already seen this over here allows you to apply it over there, while many companies doing this for the first time totally don't have that breadth of perspective. I think that's a very, very powerful tool.
DEEP: Yeah. That's something we see a lot, because we're working with a number of teams across many different projects: you find these things working really well in one domain, and then you see that pattern and you can lift it. Like, as I'm sure you're familiar with, I've been reading recently about the power of these generative models, particularly in text. They've gotten a lot of popularity, where we train these massive models, like GPT-3, to basically take all of the world's text information and predict the next word, the next sequence of words, the next sentence, things like that. And with that relatively straightforward objective, it turns out that these machines are marvels at just generating text. Then you can kind of flip it on its head: with something like GPT-3, you can input an example in English and its translation in French, another example in English and its translation in French, and all of a sudden it can translate the whole language. The same model, you can give it the opening sentences of Alice in Wonderland and it'll start writing in that style, and it's kind of wild. But to your point about seeing things in different domains, lately we've been looking at music, for example, and some of the systems coming out of OpenAI, where folks are able to take that same approach that's worked so well in text, but they're doing it in music. And now you're able to, unfortunately not at a fidelity level that's quite there yet, get something to finish a song.
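The "predict the next word" objective Deep describes can be illustrated at toy scale with a bigram model. GPT-3 pursues the same objective with a transformer and billions of parameters, but the core idea, counting what tends to follow what, looks like this (corpus invented for the example):

```python
from collections import Counter, defaultdict

# A tiny invented corpus, tokenized by whitespace.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count, for each word, which words follow it in training.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen twice, vs once for the others)
print(predict_next("sat"))  # "on"
```

Sampling repeatedly from such a table already generates plausible-looking text for its tiny world; scale the context window and the parameter count up enormously, and the flipped-on-its-head tricks (few-shot translation, style continuation) start to emerge.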
So you start with, say, Nirvana lyrics, or the actual tune of something, and it can author an entire piece, which is kind of mind-blowing if you think about it. One question I had for you: we've had this gigantic demarcation in the machine learning world, kind of pre and post deep learning, if you will. And I'm curious, from a CTO slash business practitioner vantage, did you see a difference? Pre deep learning, I feel like we were sitting around the mid-80s in terms of percentage accuracy for things like language translation, etc., and then, almost overnight it felt like, we were overwhelmed with how well these things started working. From your vantage, did you notice a big shift around that time? And if so, how did it affect your thinking with respect to, say, helping out a new startup or a new project or a new organization?
WERNER: It's funny, I sometimes reminisce with some of my physics colleagues: hey, we literally built neural network models back in the 90s, but clearly it wasn't deep learning. There was a DARPA project when I worked for the company in Pittsburgh where one of the efforts was natural language understanding, like voice, basically to run a command post, and it just wasn't working well enough to make any impact. And here, five years later, or maybe a little more, 10 or 15 years later, we've got Alexa and it's totally become mainstream. The reason is that if that thing is 80% or 85% accurate, it's worthless, and if it's 95% accurate, it's perfect. It seems like a relatively small change, but it makes all the difference. So, number one, what has happened is that it has crossed the threshold where it's good enough to really do its job, while before it was just below that threshold and it wasn't good enough. On the other hand, there's a Google commercial where they say, hey, only Google can do that, and sometimes I think it's worrisome for startups whether we'll be able to compete with the big FAANG companies on some of what they're developing, or on the models. Like GPT-3, which you mentioned: to train that model is, what, hundreds of millions of dollars? Either they make it available to the rest of us, or it's something we'll never be able to get to, that level.
DEEP: Yeah. I mean, apparently it pales in comparison to this new WuDao 2.0 that just came out, which is trillions of parameters, and we have yet to see that publicly. But it's just mind-boggling, the scale at which we're competing on some of these projects. And of course the big tech companies that have these vast data repositories are the ones that can really push the envelope on some of these basic, kind of standard tasks, if you will, like language translation and speech transcription, et cetera, right?
WERNER: And I think that the real enabler for startups is to have access to that. I mean, you mentioned advice to companies doing this. If you are building your own model, or building your own algorithms or proprietary software, rather than using some of the libraries or some of the open-source stuff, then unless you're one of the really big high-tech companies that have a reason to do that, you're definitely doing something wrong. Whatever your problem is, there is most likely an open-source algorithm out there that has solved it already, and there is most likely already training data out there that could help you a lot. So please, do as little as possible on your own, and reuse as much of what's already out there as you can.
DEEP: Yeah, I think that's fantastic advice. There's just such an insane amount of building blocks to stand on now: building blocks at the code level, at the machine learning library level, building blocks on data with transfer learning, and there's even building blocks at the service level. If you're a startup trying to do something with speech transcription or speaker diarization, the big tech companies have their versions, AWS has theirs, Google's got theirs, and you're looking for where you can sit, maybe niche down, like focus on a particular niche so that you can get out of their crosshairs. One thing you mentioned that I want to go back to: during that deep learning transition, back when we used to talk about machine learning and couldn't even say "AI" with a straight face, there was this shift where suddenly, as you mentioned, we went from 85 to 95 percent, and things became feasible that weren't before. A lot of times, though, even 95% can be too little. And going back to the organizational question, how do you think about communicating with, say, product managers when you start saying, hey, this thing works 92 or 93 percent of the time, but it fails the rest? Is there a conversation you find yourself having regularly around, for example, how to hide the errors, or around expectations being out of whack, like, we're never going to get to 99%, that sort of thing?
WERNER: Oh, definitely. I remember conversations with my head of customer service at Conversica, and he would quote a response where a customer had emailed in, and how we responded, and how the client had witnessed that, and how wrong it was. And I said, well, you know, that's one out of 10,000, so we're at 99.9%. And he was like, well, but I don't care, and they're upset. And in the end I actually didn't have a good answer, because the customer had the right to be upset. The customer didn't really understand that we're looking at a probabilistic model here, and that once in a while it's going to make a mistake. They didn't want to hear that, because it was actually their customers that we were interacting with. I think the best way out of that is something like this: in software there's strong consistency and eventual consistency. If you have to build something with strong consistency, it's way harder, way slower, way more expensive. If you can get away with eventual consistency, you're in much better shape: if you've got a stream of posts on your home page and you don't know exactly when a post will show up, eventually it shows up and you still think it's a great product. I think it's the same in this space: if you can design it so that the user will forgive the mistakes, you're in a much better position. But that's again a design and product innovation problem. If it really has to work a hundred percent of the time, it just will not. It's just like humans: humans don't work a hundred percent of the time either. So it's about either building that in, or understanding it and designing with it in mind from the ground
DEEP: up. Yeah, I think that's a really good point. One of the things that I talk to folks about a lot is: there are error rates, and then there are perceived error rates. And to the extent that you can decrease the perceived error rate, the better. So for example, if you've got a thing that humans look at, you don't have to lead with a random sample of whatever your machine learning thing is doing. You can lead with the very high confidence cases and then tuck the other ones underneath, kind of hide them, and you see that happening all the time out there. But sometimes it's a challenge to navigate the inevitable errors of machine learning, and I find that's what differentiates the organizations that are great at machine learning from the ones that are new to it, or aspiring. The great ones are able to do two things: one, they're able to mitigate the perceived efficacy problems, and at the same time they're able to keep pushing and move that statistical floor of efficacy, so the probability that a customer sees a bad case and calls up a salesperson starts to decrease over time. So, cool. One question I had for you: when have you seen this all working in harmony? What would you describe as the perfect team makeup and project or organizational composition, where machine learning is happening, data is being gathered, the product is improving, customers are being made happier? What does that whole worldview look like to you? Just kind of characterize that.
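Deep's "lead with the high-confidence cases" idea can be sketched in a few lines (a hypothetical illustration; the names, threshold, and data are invented, not from the conversation): rank model outputs by confidence and fold the shaky ones behind a "more results" cutoff, so the errors a user actually sees are rarer than the model's raw error rate.

```python
# Hypothetical sketch (names and data invented for illustration): lower the
# *perceived* error rate by surfacing high-confidence predictions first and
# tucking low-confidence ones behind a fold, instead of showing a random sample.

from typing import List, Tuple

def rank_for_display(predictions: List[Tuple[str, float]],
                     threshold: float = 0.9):
    """Split (label, confidence) pairs into a lead section (high confidence)
    and a folded section (low confidence), each sorted best-first."""
    ordered = sorted(predictions, key=lambda p: p[1], reverse=True)
    lead = [p for p in ordered if p[1] >= threshold]
    folded = [p for p in ordered if p[1] < threshold]
    return lead, folded

preds = [("match A", 0.97), ("match B", 0.62), ("match C", 0.91)]
lead, folded = rank_for_display(preds)
# lead carries the confident results; folded hides the likely errors
```

The raw error rate is unchanged, but the cases most likely to be wrong are no longer the first thing the customer sees, which is exactly the perceived-versus-actual distinction Deep is drawing.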
WERNER: In my actual background, or in my imagined
DEEP: Maybe both, maybe a mix.
WERNER: perfect world. So, I think at Karat right now we have a strong, good product, we have great product-market fit, and we're in the fortunate position that it works even without machine learning, and we're using machine learning to improve the product, to help with alignment, basically how we set the levels on when somebody passes or fails. We have a separate data science team, they have their own roadmap, and they're focused on more R&D-type activities. So I think that is a good model, where you distinguish between, hey, I'm doing R&D where the payoff is a little further out, versus building it into the product. Another example: way back at Expedia we built a new hotel sort. It took a while to get that out the door, and originally we thought it was just about sorting hotels the right way. It turned out that once we got it to work, Expedia used it to have hotels bid on their sort position. We would basically factor in how much margin they were giving us, and based on that, the hotel would move up or down, without ignoring customers' preferences, because if you move it up and nobody clicks on it, nobody is happy anyway. But you could only launch that product if you had an algorithm that would sort the right way in the first place, such that you could actually build on it. I love those multi-plays, where you build something, and once you have it working, you all of a sudden have a new play you can make on top of that first play. That was a good example. And then, one of my catchphrases from way back is "we get that for free." Frequently I'm not in a position where I get stuff for free, but sometimes I am.
So if you build a platform, where you build something that does one thing, and then when you want to extend it, the extension is kind of already there. I think we all have examples where that has worked out. Whether it's just regular software or the domain we're in, if you build something that you can extend to something the business wants to do without a lot of extra work, that is a super powerful paradigm. And I think that's always something we should strive for.
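Werner's Expedia hotel sort is a nice concrete case of a "multi-play": first a ranking that sorts by customer preference, then a second play that lets margin nudge the order without overriding relevance. A hypothetical sketch of that kind of blended score (the field names, weights, and numbers are invented for illustration; the real algorithm wasn't described in detail):

```python
# Hypothetical sketch (fields and weights invented for illustration): blend
# predicted customer relevance with supplier margin, so a hotel can "bid" its
# way up the sort only as far as relevance allows -- if nobody clicks on a
# boosted hotel, nobody wins.

from typing import Dict, List

def blended_sort(hotels: List[Dict], margin_weight: float = 0.3) -> List[Dict]:
    """Sort hotels by a score that is mostly predicted relevance
    (e.g., click/booking likelihood) plus a smaller margin component."""
    def score(h: Dict) -> float:
        return (1 - margin_weight) * h["relevance"] + margin_weight * h["margin"]
    return sorted(hotels, key=score, reverse=True)

hotels = [
    {"name": "Hotel A", "relevance": 0.90, "margin": 0.10},
    {"name": "Hotel B", "relevance": 0.70, "margin": 0.35},
    {"name": "Hotel C", "relevance": 0.85, "margin": 0.30},
]
ranked = blended_sort(hotels)
```

In this toy data, Hotel C's higher margin lifts it above the more relevant Hotel A, but Hotel B's big margin cannot overcome its weak relevance, which mirrors the "without ignoring customers' preferences" constraint Werner describes.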
DEEP: Fantastic. I'm going to ask you one final question; no AI interview is complete without it. The singularity, the Terminator prophecies, your general AI vision for the future: where is all this stuff going in the long arc of the timeline? Where are we headed?
WERNER: You know, I think it was in the early 20th century when physicists were under the impression that we had discovered everything, and it was going to be a real bore going forward. And then there was quantum mechanics, which opened up all kinds of new things. So number one, humans are really, really, really bad at those predictions. And number two, we're so far away from what would actually be required for something like that to happen. And another way of being really wrong at predicting things is ignoring the changes that are going to happen based on the things that just happened. If AI is going to get better and better, we're also going to get much better at working with AI, understanding AI, and incorporating AI into our daily lives. The predictions we're making today about the singularity ignore all those changes we're going to make to our own lives, to the way we look at it and interact with it, and that's why it looks really scary. So, long story short, I don't think anybody has to worry about that.
DEEP: All right, fantastic. That's all for this episode. As always, to our audience: thanks so much for sticking around and learning about what a great CTO like Werner thinks about bringing AI into your organization. And Werner, thanks a ton for chatting with me. That's all for Your AI Injection.