Your AI Injection
Is AI an ally or adversary? Get Your AI Injection and learn how to transform your business by responsibly injecting artificial intelligence into your projects. Our host Deep Dhillon, long term AI practitioner and founder of Xyonix.com, interviews successful AI practitioners and domain experts to better understand how AI is affecting the world. AI has been described as a morally agnostic tool that can be used to make the world better, or harm it irrevocably. Join us as we discuss the ethics of AI, including both its astounding promise and sizable societal challenges. We dig in deep and discuss state of the art techniques with a particular focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful. Need help injecting AI into your business? Reach out to us @ www.xyonix.com.
Product Management and Generative AI Powered Innovation with Adrian Klein
Join Deep Dhillon and Adrian Klein, Senior Product and Program Manager at Amazon (formerly PM @ Microsoft and multiple startups), as they both explore the massive product management and development ramifications of recent generative AI model breakthroughs like GPT-4. They delve into the challenges of achieving true differentiation in a world where fine-tuning on top of AI models is highly prevalent and straightforward. Deep and Adrian also explore what it means to build an AI powered intellectual property moat around your business and whether this is even possible in a world with a handful of deep pocketed huge players building increasingly powerful models that are challenging for startups and others to compete with. Finally, the two contemplate the evolving role of product managers in a world where automation and streamlined execution are paramount; they explore how in the near future, product managers may transition to focusing on thoughtful value judgments and creativity rather than mundane tasks that can be handled by AI.
Learn more about Adrian here: https://www.linkedin.com/in/adrianklein/
[Automated Transcript]
Deep: Hi there, I'm Deep Dhillon. Welcome to Your AI Injection, the podcast where we discuss state of the art techniques in artificial intelligence with a focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful.
Maybe before we get started, Adrian, you can give us a little bit about your background, maybe for the audience's benefit, you can say how you know, how you know me and a little bit about your background as a product person, as somebody who's interested in AI, and then we can just kind of dig in from there.
Adrian: Yeah, sure. Um, so my AI interest goes back, like, I don't know, forever, right? Like, that's when I was selecting colleges to go to, I'd read, uh, you know, Gödel, Escher, Bach, um, and gotten really into, like, hey, what can it mean for computational processing systems? And, you know, I was a big fan of Dan Dennett and thinking about emergence of consciousness from symbolic systems.
So, um, so naturally it was basically Carnegie Mellon and Stanford. Um, they had these great kind of cognitive-science-meets-computer-science programs. I was all up in the symbolic systems, designed my major in intelligent decision systems, which, so this is the early nineties, right? And so it was a lot of, you know, applied Bayes' theorem and kind of, you know, belief nets, influence diagrams, but also, you know, perceptrons and early versions of feed-forward neural nets and genetic programming, genetic algorithms and that kind of stuff.
Um, and then I came out of college into Microsoft, where, uh, probably for the first, I don't know, six or seven years, a lot of the product stuff I worked on was kind of like being the geek that sits in between the research, because a lot of the guys that I worked with in Palo Alto got headhunted by, uh, Nathan Myhrvold to build MSR.
And, and I kind of was not like, I had no PhD, I was into the stuff, but I was more like, I want to sit on the product side of the fence. And so I ended up kind of doing a lot of liaising and, you know, kind of like trying to translate and massage customer requirements and, you know, figure out how can we, how can we do this stuff?
And, um, so I did, and then that spiraled. So most of my stuff was in Office in the early days. Like, I did the Answer Wizard, which I think most people probably think of Clippy when they think of that.
Deep: Yeah. It's the first thing that went in my head, like a big clip. Right, right.
Adrian: Well, because we actually did the Answer Wizard in Office 95, and then we slapped the social UI on top of it in Office 97.
Yeah, but all that stuff in the Answer Wizard was, you know, sort of simple natural language, bag-of-words, Bayesian inference model kind of stuff. And so that was one of the... most of my early and mid career at Microsoft was about incorporating AI and machine learning into the product offerings.
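[Aside: the "bag of words, Bayesian inference" approach Adrian describes maps onto something like the following minimal Python sketch using scikit-learn. The help topics and queries here are invented for illustration; this shows the general technique, not the actual Answer Wizard code.]

```python
# Minimal bag-of-words + naive Bayes sketch of help-topic routing.
# Training examples below are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus: user queries labeled with a help topic.
queries = [
    "how do I print an envelope",
    "printing is cut off at the margins",
    "make a table of contents",
    "update the table of contents page numbers",
    "insert a chart from my spreadsheet",
    "chart axis labels are missing",
]
topics = ["printing", "printing", "toc", "toc", "charts", "charts"]

# Bag-of-words features feeding a multinomial naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(queries, topics)

print(model.predict(["why won't my envelope print"]))  # -> ['printing']
```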
AI and machine learning, like, when you use these phrases today, you know, it's this moving target, right? Like, the stuff that I worked on, if we talked about what did you work on in, like, '98 or 2004, I think most folks would go, oh, well, that's not AI, that's just a particular kind of directed search, or, you know, that's not AI.
That's just, you know, a statistical, you know, propagation. And it's just funny because where we are now with what we think of as AI and, you know, classification systems and, uh, you know, ANNs and all that kind of stuff, like, for the longest time, AI was basically just smart directed search, right? Like, it was, it was basically just searching through a hyperspace.
Anyway, so...
Deep: Well, I mean, I think for those that have been around for a while, you just couldn't even say AI without being kind of embarrassed and, like, looking at your shoes. Like, I sound like a putz, like I can't use these letters. But now, you know, when you talk about GPT-4, it feels unfair to only think about machine learning as a term, because... I don't know about you, but I didn't think in my lifetime I'd be looking at a system like GPT-4 and interacting with it and having this insane level of reasoning ability. I just didn't think...
Adrian: I do not want to, like... look, I don't want to recapitulate the whole thing with Carson, but I'll tell you, when I started playing with 4, my first assumption was that there were some really interesting controlling layers over it, because, you know, the LLM is basically just a Markov process on steroids, right?
Like, and that sounds... I don't mean to be dismissive in saying that, but it's a very specific kind of production system, right? So when I started seeing how rich and nuanced the language outputs were, it was kind of mind-blowing. Yeah. I was like, oh, so there must be some kind of ontological or cognitive layer on top of this stuff.

And to an extent the attention models are some of that, using the document history as kind of, you know, a storage system. But when I started going back and reading the papers, going from, you know, 2008 and coming forward, I'm like, no, that is it. It is just...
Deep: Yeah, I mean, it turns out if you train a system to predict future sequences of text, you also happen to train it to read and to understand language, and to understand languages, human or otherwise, you know, and all this other stuff just shakes out. I don't know, it's just not an intuition I had before I started seeing these things.
Adrian: I wouldn't have guessed it, you know, but it's an interesting wrinkle, right? Because, like, you know, you and I are sort of in a similar, you know, epoch, right? Like, so most of our careers, we were, for lack of a better phrase, over-promising and under-delivering, right?
Deep: I would totally agree with that.
Adrian: Or, like, working on, you know, I can't do that problem, but I can do this toy, well-bounded version of that problem.
Deep: Yeah, and I think this really itty bitty thing will give you the ability to, like, promise a lot. Right, right, right. But now, now it's just nuts. Now I'm literally, like, you know, Minsky's Society of Mind, taking this idea of many minds, and I'm literally coding it in a day across 40 different minds and getting it to work.
I can't believe this is happening. But anyway, we could talk GPT-4 forever. I want to switch the conversation over to product specifically, because we've got a lot of folks that build products that listen to Your AI Injection. And, you know, it's not every day that we get somebody on that's really deep in just how to build products.
So maybe jump up a few layers and, like, think back a bit. Walk us through: how do you go from, like, you know, maybe you have a product in some arena, you generate some data, you've got some data, you've got some intuition about this data, um, about something high value that you can extract or do with it. How does that go from that intuition and idea and data to a crisp idea of what a product, um, should be that leverages, you know, some machine learning or something?
Adrian: So you can stop and redirect me if you want, but I'm going to start with a couple of "here's what it's not" kinds of things. Um, and it might sound incredibly trivial or boring, but in general, you're trying to identify an unmet need and meet that need, right?
Like, so you're looking for a pain point that, um, a particular, you know, demographic or particular market experiences, and then you're trying to figure out, hey, can I solve that pain point? And this sounds super basic, and it is super basic, but I want to say it again, because I think whenever a new wave of technology comes along, whether it's, um, you know, voice UX with, you know, Echo speakers, or whether it's facial recognition or whatever it is, there's this inversion that happens for a while, where the question for, you know, product managers or C-level folks, of "what should we build," becomes "oh, how do I use X in my industry," right?

Like, and now you've got the hammer looking for the nail. I can totally empathize with that sort of top-down mindset, but it is not, in my experience, likely to produce a fruitful outcome. Like, if you approach it as, oh, ChatGPT is all the rage, well, my industry is, you know, used car sales, what is ChatGPT going to do for used car sales?
You might get some inspiration coming from that, but you've got to keep backing off and rooting in: okay, well, let me look at my list, because I probably have a list of what some pain points are, right? And then evaluate those as to how could this technology help me solve those?
Deep: I think that's a super important point, because the hammer looking for the nail thing is what most technologists wind up doing for a chunk of their careers, right? And I think almost every academic does this, at least in engineering departments or computer science departments. Are you almost saying, like, if you want to sort of leverage new kinds of capabilities that can drive some innovation, it's almost like you're saying, well, maintain a list or something of those capabilities, but at the top you're anchored in, you know, your core user, whose life you're trying to make better in some way. And, um, now you're looking for intersections or something.
Adrian: Exactly. I don't know when the right point to sort of branch into this is. I do think that the scale and speed of synthesis of information that is becoming possible, um, with these well-trained LLMs opens up a new avenue of product exploration, right? Um, so if you think of the traditional model, or maybe traditional is the wrong word, but, like, let's talk about, you know, being lean, right?
Like, the product startup lean model. It's: okay, I've got my pain point. I'm going to shop the idea through my network, validate, you know, that this is a customer pain point. Now I'm going to sort of assess, you know, uh, what's my potential addressable market, right? So I'm going to throw up a landing page.

I'm going to throw up some surveys that hit the landing page. I'm going to, you know, SEO this, SEO that, see what clicks. If I have enough sticking on the wall, now I'm going to build the MVP, see what adoption looks like, and scale from there, right? That's kind of the lean playbook.
Yeah, and it's great. Like, it's a good playbook. I love it. What you can imagine being possible with the scalability of something like, you know, GPT-4 or its successors is that you can fail faster on some of these ideas or some of these investigations you're doing, right? And so I'm going to get a little bit sci-fi here, but I don't think too much.
And you can say something like, hey (very Picard talking to the ship), based on available demographic data, what's the approximate addressable market for, you know, my food idea, and what's the median income of consumers who would be interested in a food solution, right?
Like, data that might have taken you weeks or a month to kind of corral and synthesize, you can get for the price of a question now, assuming that you can trust your... go ahead.

Deep: So this is interesting. I want to make sure I understand you here, but you're basically saying: use GPT to help you think through your product and your product planning process, all the way down to the questions that you would have run off and, like, four months ago, started looking for actual data on.
Just talk to it and start to get to it quicker.
Adrian: The short answer is yes, but I'm going to sort of hedge that by saying, look, this will get to, I think, what we'll get to down the road when we start talking about, you know, who benefits from this stuff, who owns the models, who does LoRAs to, you know, update checkpoints for the models and all that kind of stuff. But you've got to have the right data in there, right?
Like, it's not magic. The systems are great, but they're trained on what they're trained on. I think the scenario that I'm lining up here with you, and you follow up with, like, okay, as long as I'm fleshing this out and I'm thinking of doing a Y Combinator, you know, kind of, you know, an incubator thing on it:

like, what Fortune 1000 companies would be interested in acquiring a food solution, and how would that fit in with their current product or services portfolio? It's the kind of stuff that you can think of as a combination of the market research and analyst report work that you used to maybe, you know, slog through for weeks or a month.
It's totally reasonable given the right, you know, whether it's plugins or whatever your data model is, that you should be able to get that in minutes through using a large language model.
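[Aside: a minimal sketch of what Adrian is describing, using the OpenAI Python SDK. The model name, prompt, and the idea of grounding the answer in your own notes are illustrative assumptions, not a specific product recipe; in practice you would verify any numbers the model returns against primary sources.]

```python
# Sketch: asking an LLM to synthesize rough market-sizing answers,
# grounded in notes you supply. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical pain point being explored; swap in your own.
idea = "a parental-controls app with better screen-time insights"

# Supplying your own notes/data keeps the model from inventing numbers.
notes = "Survey of 200 parents: 62% unhappy with current tools; ..."

response = client.chat.completions.create(
    model="gpt-4",  # or a successor model
    messages=[
        {"role": "system",
         "content": "You are a market research analyst. Say which claims "
                    "come from the provided notes and which are estimates."},
        {"role": "user",
         "content": f"Idea: {idea}\nNotes: {notes}\n"
                    "Estimate the addressable market, likely buyer "
                    "demographics, and which Fortune 1000 companies might "
                    "acquire such a product. Flag uncertain estimates."},
    ],
)
print(response.choices[0].message.content)
```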
Deep: What does that mean for what you do as a product manager, if you're getting such meaningful strategic feedback so fast? Like, does that mean you're able to spend more time, like, fine-tuning between what product you go and focus on, or what feature area you go and focus on?
Adrian: One is I throw that out there as kind of like, look, in my experience, most founders or most, you know, um, senior product folks, like we don't have a shortage of pain point ideas, right? Like, we don't need any help generating like, oh, I'd like to start a business. What might I do? You know what I mean?
What we have a shortage of is understanding, of the pain points we perceive, how big those are for the rest of the world and how monetizable they are, right? And so that's what the whole MVP process is meant to fast-track. Yeah, you've got this idea: like, you know, you're a parent, you've been watching how bad, you know, Bark and Circle are for doing, you know, parental control, and so you have this feeling that there's space in the marketplace for better parental controls, for, you know, net-nannying kind of stuff.
Great, that's an intuition you have, but now you need data to prove or disprove that intuition, right? I will make the assertion that a system like that, whatever kind of oracular system you are trusting, is never going to be able to tell you how to do the right thing, or, like, how excellently to do it, but it will help you spot a bunch of negative paths quickly. And so you can fail fast on things that would be non-starters. Does that make sense?
Deep: Yes, I think so.

Have data? Have a hypothesis on some high value insights that, if extracted automatically, could transform your business? Not sure how to proceed? Bounce your ideas off one of our data scientists with a free consult.
Reach out at xyonix.com. You'll talk to an expert, not a salesperson.
Everything that you've said so far is general to any kind of product feature, right? So where does the AI angle happen? I mean, one part of it is just literally using the AI to help you think about product, right, and the AI system for that. But there's another part that's, like... I mean, you could, I suppose, just directly go to ChatGPT and ask it, hey... I mean, I actually do this a fair amount.
Like, what's the AI innovation angle on historic product X? Those of us who've been, like, building these systems forever, we just have all this intuition about particular places to look. So, for example, the second I saw ChatGPT-4, I immediately thought, oh my gosh, there's going to be a new generation of customer experience things, you know, because it just kind of immediately clicks.
But, like, other times, other algorithms, other ways to use AI/ML are a little more nuanced. Like, hey, what can I forecast here? What can I predict here that would be of value? Um, and is that just a small part of a solution, or is there a way to, like, turn this into... Like, let's say you're working in a startup, uh, you know, or an early-stage project, to turn it into like a factory of AI, where there's a lot of machine learning, AI-powered innovation that's going to go into this thing continuously. Like, what's your process for kind of getting there, or do you not? And you're like, look, I don't care, I'm solving a product for a customer; if I have an efficiency that I need to improve, then I look, otherwise I don't. Like, how do you think about that?
Adrian: I'm having trouble wrapping my head around what it means to have a deeper AI factory pipeline, because I think... maybe it's just my own biases or my own perspective. I am definitely, like, strapped in to the model of, well, I know my customer segment, I know what I want my product to do, and I'm trying to figure out how to do it better.
The main capability that I'm looking at from the language models, other than impersonating humans, which is also great, is that rapid synthesis of data. Whether you're just an end-level consumer and you used to do, like, 13 Google searches and then you click on a link and read it for six minutes and then click on another link and read it for six minutes, like, the streamlining of that into: oh, could you build me a table of, you know, all the different mosquito repellents and summarize available data on, you know, how effective they are against West Nile virus?

I don't have a vision of, like, oh yeah, this is the, you know, sort of Nissan production line of, you know, AI soup to nuts. I think I more have a thought process around: well, I know what I want to do, or I have an idea of what I might want to do.
And how can I move faster in that direction?

Deep: So it's an efficiency.

Adrian: Yeah, in a way, I think it is. Um, well, and then, being a geek, right? Like, I have all these... even though I just said, like, don't be top-down, there are all these things that sort of spark for you. Like, when you see image prompting in the latent diffusion models, you know, alongside the text prompting.
You just, and, and you're, you're a parent. You see your kids' drawings and you just sort of play with 'em, and you take their drawing, but then you sort of like uplevel it into kind of a slightly Pixar Disneyfied version of that. Oh, and then by the way, you auto bone it and auto rig it in Maya and then turn it around like, and in 15 minutes you've made them a video avatar where they can play with their drawing.
None of that would've occurred to me if those tools hadn't been in front of me. But that's a very...

Deep: No, I feel like you're sort of touching on something that maybe is just so natural for you, you don't think about it. You're playing, in essence. Like, you're grabbing these tools and you're playing. And when you play, you're just trusting you're going to have ideas from that.
Adrian: Yeah, I think that's fair. Yeah. Yeah. That particular example I gave was just something you're just like, Hey, wouldn't this be neat? Like I, my kids are really loving this thing. Like, I wonder if I could, and then you just sort of, like you said, like you sort of play, you mess around with it and you're like, Oh, they love it.
This is cool.
Deep: I mean, I feel like that's worth digging into a little bit more, because I find I go into a few different modes, right? Like, if I see a new tech capability, the first thing is I have to think, do I care enough to even read about it? That has to kind of pass a bar. And usually, you know, we all have our techniques.
I've got a bunch of feeds and I've got a bunch of people in my feeds. And there are some people, if they say I should look at something, I look at it right away. And other ones, I look for more. Once I start reading about it, I have some intellectual ideas of what can be done, but I don't feel like I get to that kind of spark and obsession, like you're almost describing in your example here, until I actually download, play, write code, put APIs around stuff, actually use it, try it, dink around.
But for me to do that, I have to pass yet another bar of motivation and interest. Like, it has to have gotten to a certain point. If I think back, you know, in the LLM case, to the first time I saw an LLM versus the first time I grabbed an LLM and actually had it, like, write some poetry for me, and then had some kind of interactive thing going back and forth, my takeaways were pretty different from just reading about it: oh yeah, that's interesting, I can see how that would help out with classification, I can see why we got an extra four points on that task. Like, it's different, right? And I feel like a lot of people just don't take the time to do that hands-on play.
It's, like, almost... it's okay for them.

Adrian: I don't know those people. You know, I don't know many of them either, but, like...
Deep: I see that. I see that happening though. I see where there's like different levels of play and obsession, right? There's people who wind up, um, building things or being involved in the process that maybe have some engineers around them and like, you know, and I feel like it's just somehow different, but I, I want you to like, yeah.
Tell me why you don't think you have those around you and like, what happens to the ones who play and what happens to those who don't?
Adrian: I think, I mean, you're sort of talking about like, you know, so social network selection bias and stuff now, I think, um, like that's just kind of like how my mind works and how my brain runs.
Right. And either that is, like, a shared kind of experience, and, you know, we start riffing off each other and we're both enjoying that conversation and that riff, and then, you know, we sort of clink glasses and go home. Um, or, if you're not thinking that way, you're sitting there going, this guy fucking talks a lot, and, you know... like, it's all, like, super, um...

Deep: It's like the guy that's always talking about snowboarding or something.
Yeah, yeah, yeah. And you're like, oh cool, like where do you ride at usually? Oh, you know, like four years ago I went to such and such. And you're like, um, why are you talking about it so much? Like it makes no sense.
Adrian: But it's even independent of domain. You're right, it's that same thing: you're either engaged by that kind of flow and riffing, or it's off-putting or possibly intimidating to you.
And if it's the latter, then you're going to self-select not to spend more time with me. And I'm going to... you know, your face was dead the whole time I was talking about stuff that I was super passionate about, so I'm probably going to self-select and spend less time with you. And that's just the way it goes, I guess.
Deep: I feel like a lot of these generative, um, capabilities that are coming out are creating a world where people who maybe were too intimidated to, like, fire up Python and learn some basic code and dig in there and try to get some stuff to work and read a little bit and hang out on Stack Overflow and put some APIs together...
Like, I've personally run across multiple folks that might've been like that a few months ago that are curious enough to just say, like, hey, just build the code for me for this thing. I feel like the bar's lowering and people are building capabilities. I dunno if you saw this, but I think it was, like, last week, can't remember the tool, but somebody's built this environment, you know, on top of ChatGPT-4, where you can just basically design a video game.

So think, like, um, Galaga, but you're designing it from scratch via just interacting, you know, back and forth.
Adrian: And it's writing right into, like, Unreal or Unity or something on the backend?

Deep: Yeah, yeah. It's putting together the whole thing. And so now, all of a sudden, I feel like that's part of what's going on here with all this stuff: we've lowered the bar for creation, and creation is so important for getting that product obsession that gets this stuff out there.
So, like, if we rewind five years ago, ten years ago, we could have had an API that did something, like, let's say a great speech transcription API comes out. People like you and I are going to have no problem finding our way to it and integrating it into products. But now I feel like it's a different level: that obsession that we get because we grabbed it and played with it, that bar is going lower, and folks who never would have written code are going to increasingly see an entire generation of new capabilities and creations. So, you know, going back to your Star Trek thing, it's like, you didn't have to know squat on Star Trek, you just wiggle your hands around and talk to the computer and it keeps doing stuff, right? That's the world we're entering.
Adrian: And I'm pretty much in violent agreement with you. Um, and first, I think that's 80% a good thing. The 20% where I feel like we've kind of got to watch it is... I mean, having more people, lowering the bar and having more people generating content, is an unalloyed good thing. But I don't know, I'm sort of thinking of a bunch of specific examples in the back of my head.
Most of them are actually around, like, the sort of media spectrum, like voice, um, cloning stuff. There are a limited number of service providers that are wrapping this stuff up in a certain way, with certain goals and means and ends of their own in mind. There are certain companies that tend to assume, you know, we don't want people to hurt themselves on the sharp edges of technology, right?
And so and so we are going to make the product decisions behind the scenes to wall these things off or hide these things and play up and expose the flexibility of these other things, right? And the, the bonus is that that makes it more accessible to a wider range of people, which is awesome. And the price is that it's, they're inherently passing along their own bias for the direction and the use of the technology.
But so that's, I guess, my sort of slightly cynical caveat to the mostly optimistic message, which is that, yes, more power is being put in the hands of people who didn't have to, you know, learn what, like, pointers or, you know, type-safe programming were.
Deep: It makes me think, do you remember, I think it was maybe 20 years ago, all of these fancy visual IDEs were, like, all the rage? This is, like, NetBeans and all of this. And people started talking about, like, oh, you know, everyone's gonna be able to write code, and you're just going to have these visual things, you're gonna drag and drop. And you're right, like, HP was...
Adrian: Hugely into that.
Deep: Oh yeah. Like, it was a big area. Sun put tons of money into this. Microsoft put tons of money into this. And, you know, I think almost universally at some point, people realized, like, well, yes, but it doesn't really take away the complexity. Like, it just makes it harder, almost.
Adrian: Although, I think... so, I'm, you know, a game designer and game developer, and so I think there are certain... that's a great genre where it allowed... So, for example, if you look at what Unreal lets you do, both with, like, the shader graph and materials graph, and with, you know, the sort of visual programming language for the actions in Unreal, I think that stuff's awesome, because it takes what used to have to go over the wall from a level designer to a software engineer and get baked into a release and pushed back, and makes it something the level designer can do, because the interactions you're detailing there are more like timing things: this button needs to be pressed, and this button also needs to be pressed, for this door to be open, and then that triggers this other event.
And then that triggered this other event. And so I think there was some payoff in certain, you know, sub, I don't know what you call them, like sub genres or, you know, subsets of the market that was really great. And, and the shader graph is another great example, like who wants to learn, you know, a bunch of matrix transformations in order to, like, you develop an outline shader, you know, for your, um, for your models, but when you can literally kind of stack them on top of each other, almost like filters in Photoshop and see what's happening, like, that's awesome.
Deep: Yeah, I mean, I appreciate you pushing back on that, because you're absolutely right. It's almost like you get this promising new big thing, tons of people get obsessed with it, the vision gets over-promised, then the pockets where it didn't deliver get slightly disillusioned, but then there are pockets where it truly did deliver.

But the reason I brought it up was not that so much as: we've long had this vision of not having to learn low-level coding capabilities, but to be able to create, right? This vision has been there for decades and decades, and we've had, um, limited successes here and there. And today, you know, it's all about the text-to-code kind of thing; a lot of that vision is also there, right? And I think there are going to be pockets, just like you're saying, where it's really going to pan out, and there are going to be others probably where it really doesn't, you know, like it's going to get oversold. The question I have is, what's the real value, for people who build product, in taking that bar and lowering it and getting it so that folks with more prevalent skills can, like, create?
Adrian: There's a bunch of stuff sort of swimming in my head, and I don't want to get sidetracked. One thing that I'm kind of putting a pin in and setting aside is: every time this happens for a community, there's a bunch of people who are like, great, more stuff, and eventually the cream will rise to the top, and so, you know, a rising tide lifts all boats. And there's also a kind of backlash of "most stuff is crap," right? Like, it was better when it was only the people that really had passion and care. And you see this, again, I'm going back to games, with Unity: Unity did a really great job of creating the asset marketplace, right?
Where it wasn't just models and meshes. It was like, oh, you're trying to do pathfinding? Well, here's a little... essentially the asset store was like Stack Exchange for people that don't know Stack Exchange, right? And suddenly you started seeing what they call asset flips pop up all over the Steam store, which is like: I just made a match-3. I don't know anything about programming, but I bought this logic piece and I bought these assets and I bought these tiles, and I published this thing on Steam and hoped that someone would pay 99 cents for it. And now suddenly the market's flooded with this crap. And, um... where does that link back to your question? If I'm a product manager, how does lowering the bar help?
If it has a product manager, how does this, how does lowering the bar help? I guess the most direct way is, is if it lets you be the person that does that playing rather than having to delegate that play to somebody else and hope that they come back to you on it. I'm,
Deep: I'm, I'm taking what you're saying, and I'm, I'm translating it for it for a minute, which is like.
You're lowering the cost at which you can get to an MVP in an early stage startup or something. And, and you're, and you're,
Adrian: Yes, you are. And by doing that... again, I don't know how much everyone shares my own psychological mindset, but I'm aware of my own fear of failure as being sometimes a gate, right? Like, it prevents you from trying something new, because, well, you know, what if I spend six days working on this and it ends up really just kind of blah, right? Like, what's my initial prior on that? And, you know, like you said, it sort of informs my passion level. But if you lower that cost from, like, you know, six days to five hours, it gives me more at-bats, right?
Deep: There you go. I think that's what it is. Yes. As people who build products, we are in a numbers game. I mean, as great as certain product people can be, and as much as they can hit it out of the park, you're usually playing a probability game. Like, you will fail, you're going to fail a certain number of times, because it's just a complicated process, building great product.
Adrian: You're right. Otherwise you're, you're constantly doing this resource analysis, you know, trade off of like, you know, how much wood do I put behind this arrow? How much, because I have other arrows I could fire. Right. Um, and this lets you do more.
Deep: You're listening to Your AI Injection, brought to you by Xyonix.com. That's x-y-o-n-i-x dot com. Check out our website for more content, or if you need help injecting AI into your organization.
This is, this is kind of an area in general that we see in tech, which, um, every once in a while I'll have a conversation with somebody that's like not involved in the tech industry and they're like, I don't understand. So this question, I'll, I'll, I'll throw it in the context of Twitter, but I think you could put it in the context of just about anything.
I don't understand how somebody could lay off, like, whatever percentage, 80, 85% of the people in a company, and then have the company keep, uh, working. And I don't want to over-index on Twitter, but the answer I usually give is something like this. It's like, well, what a lot of people don't understand is that the high-tech industry is, largely speaking, an ecosystem of bets being placed.

And tons of us are in this machine placing bets and placing bets and placing bets, or being in the bet. And not all of these things are going to win. And in fact, the things that win can come from an inordinately few number of people. So, like, I think Instagram was, what, eight folks, you know, that created, at that time, 3 billion in value in, like, I think it was 16 months or something.

And later on, you can fast forward 12 years later, and there are, like, tons of people working there, but generally speaking, that's largely what's going on, right?
Adrian: Agreed. I think, I mean, I don't want to like bring out all the old chestnuts, but there's also the, you know, the pivot, right? Like, the thing you thought you were building actually turns out, oh, it's really this other thing, right?
Like, I think Slack is probably the most overused example for that, but it's a good one of: yeah, we were just doing this as a means to an end, but it turns out this is the product, right? I'm kind of noodling, and I won't suck up too much time, but I want to go back to what we were saying about more at-bats.

And I'm thinking about a couple of things in my own personal experience. Like, one was when we were helping with the automated channel creation and sort of, you know, tuning it and teaching it, and I created this really crappy, like, um, Macromedia Flash tool, right, to be able to go in and edit some of the channel definitions and play with the hero images and stuff.
And just by doing that, reducing the task and basically allowing the editors to directly access that stuff instead of having to, you know, file tickets and push it through into a release, the quality of the channels that came out started being so much better, because they could do it themselves and not have this high overhead cost.
And then the other example that occurs to me is, in orgs that I've been in, where people have done a lot of work of taking what's, you know, some huge, massive collection of, you know, SQL tables, and putting a nice front end on it so that, like, marketing and product development people can easily pivot and filter through that data.

Like, every time, that's a repeating pattern: when some internal tools person does that, there's always this insight that comes out, like, oh, because it was in there in the instrumentation data, or it was in there in the, you know, signup path data, but, like, oh my God, we have, like, 80% conversion from people coming from this thing.
Maybe this is where our market is, right? So I'm maybe perhaps needlessly elaborating on that point, but it's really just the value that comes from reducing the number of layers of indirection between the data, or the code, and the person who can be inspired by it.
Deep: That's interesting. And that goes back to our theme here, which seems to be emerging: pushing down the... I guess maybe the better way to say it would be, like, making creation more accessible or something. I mean, that does feel inherently valuable. Like, we can probably rattle off... when you were telling that story, I was thinking of the rise of Tableau, you know, where you had Excel and that generation of spreadsheets kind of on one path, and then you had the emergence of Tableau, where it was marrying the easy kind of construction of the analysis with the distribution side, and piping it in with, like, real-time data feeds, and marrying those two together, also pushing down the creation abilities that it took to do that. And now you have this kind of huge emergent capability. I love that theme. Um, making creation more accessible, that seems valuable. So I have a question for you. Talking to all the people that build product: when you think about all these generative abilities that are emerging, what do you think is maybe the most important question that people aren't asking right now that they should be?
Adrian: I don't know. I don't know if there's any question that people aren't asking, by the way. Like I said, almost twice a week since February I've had someone from my network reaching out and, like, grabbing coffee or, you know, getting a drink, to sort of run some idea past me, um, or ask a question.

So I feel like, wow, a lot of the right questions are being asked in a lot of ways. Can I... let me toss out two kind of related questions, and you tell me if there's interest in drilling down on either one. One of them would be, to turn that around a little bit: what are some of the either wrong questions or wrong solutions that people are over-focusing on, in my humble opinion, or not-so-humble opinion?
And then number two would be: if you zoom out as far as you can, what are sort of the biggest-picture things that you wonder or worry about, right, with this wave of stuff?
Deep: I'll try to take your first one. I mean, one of the things that I think people are over-indexing on, in a way, is this idea of... let's just take the provider, like the generative engine providers. Let's take ChatGPT, like OpenAI. They're over-indexing on the interface that OpenAI has given you, and, like, actually going in there and using it. Um, we're just now barely starting to get products that are built on the API behind the scenes, that take away all of that, you know?
So I feel like... From the very first moment I saw that I was like, this is not how we're going to be using this forever. But for now I can, it's really like everyone under the sun's thinking, how do I get product out of this that makes it so that you don't even know what's going on.
Adrian: And it's so, I mean, I'm just going to riff on that a little bit because I think it's super interesting.
For example, there's going to very quickly... I guess it's probably safe to say it's already emerged. Um, I bet there are a number of companies, and I don't say this based on any data or looking at LinkedIn, I just think that, so feel free to fact-check me and prove me false: I would not be surprised if already people were writing job descriptions for, and hiring, prompt engineers.
Deep: Oh no, this is a thing. This is totally a thing. Big bucks too, which blows my mind.

Adrian: My two pennies, for what it's worth, on that: what an interesting job, but also, what an amazingly short-lived window of opportunity there is for that particular profession.
For some double-digit number of months, or if you're lucky, some medium single-digit number of years, that will be a need. But literally every day you go to work, you are literally training your own automated replacement, right? And so maybe that's... you know, if you're okay with that and you get it, and you're like, yeah, but in the meantime, you know, they offered me 135,000 and remote work, like, I'll take it.
That's totally cool. Like, as long as everyone's clear on what's going on, but you know,
Deep: I think you'll accidentally learn some other things too, though. I mean, like you have to hope so,
Adrian: You hope so. But, like, I guess I look at that and... whatever, maybe I'm sounding super cynical, but I look at that and I think, yes, number one, that's a great, like, interesting opportunity for a very short period of time, right?
Deep: Yeah, but that's always the case, isn't it? Like, when the new thing comes... when computer science came along, I guarantee you there were a lot of mathematicians sitting around like, whatever, it's just a branch of math. Yeah, well, I think it will just grow out, right? Like, because prompt engineer will just be the... I mean, it might change its name, but, because, once you've got your prompt, then you're immediately confronted with the bottleneck of the constraints around few-shot. You're like, okay, I've got one, two, three, four, five, six, maybe seven, maybe eight examples. And then next thing you know, you're fine-tuning. And then, once you're fine-tuning, you're like, okay, I'm sick of paying, so I'm going to pull this stuff down onto my own hosted models. And, like, there's that whole trajectory, right?
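[Aside: a rough sketch of the first stop on that trajectory, packing a handful of few-shot examples into a chat prompt before any fine-tuning. The task and examples are invented; the point is that the context window caps how many examples you can include, which is what eventually pushes you toward fine-tuning or hosting your own model.]

```python
# Few-shot prompting sketch: the examples live in the prompt itself,
# so the context window limits how many you can show the model.
few_shot_examples = [  # invented examples for illustration
    ("Battery drains in an hour", "hardware"),
    ("App crashes when I upload a photo", "software"),
    ("I was double charged this month", "billing"),
]

def build_messages(new_ticket: str) -> list[dict]:
    """Assemble a chat prompt: labeled examples followed by the new case."""
    messages = [{"role": "system",
                 "content": "Classify support tickets as hardware, software, or billing."}]
    for text, label in few_shot_examples:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": new_ticket})
    return messages

# These messages would be passed to any chat-completion style API;
# once you need hundreds of examples, fine-tuning becomes the better fit.
print(build_messages("The screen flickers when the charger is plugged in"))
```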
Adrian: So let's... I don't know, maybe you did a whole separate podcast on this, but I definitely want to put a pin in it and come back to touching on the case of: for people who don't pay to train, from soup to nuts, their own, like, you know, 150 million model, how do you add competitive advantage or value?
Deep: I totally wanna talk about this, and literally, like, I've got it right here: AI moat, question mark. This is, like, the biggest question I've been getting asked by our clients. Okay, and I can think of no better person to ask it to than you. So I'm gonna ask this to you, because everyone's VCs say the same stuff constantly. Like, hey, you know, what's your competitive advantage over time? When you're interacting with this huge model, you can't compete; you can't build your GPT-5 yourself, that's not what you're about. Um, and those models are getting so good. And it's not even about text; like, you know, we can talk about text, we can talk about generative imagery, but all of them have the same phenomenon going on, which is that the big centralized model is so powerful that your localized version is not giving you the competitive moat that it used to, right? Like, if you rewind even just six months, my answer was always basically the same: you're a startup, control your data, the data that you generate, gather your data. Don't be, like, kind of exclusively an aggregator; grab this stuff, that's going to be your ultimate competitive moat. Get training data, turk, hire annotators, whatever you've got to do, um, label your data and then build your models. The actual algorithms themselves aren't going to be the competitive moat. It's that whole thing. But now it's harder and harder to say that. I can probably still say it for time series data. I can't really say it for text. I can't say, like, you know, even if you were doing...
Adrian: There's not a better answer. Like, you just summarized kind of... I mean, the best answer I can give. Because here's how I imagine the landscape will play out, right?
There's going to be some small, single-digit number of, like, let's say it's Microsoft, Alphabet, Meta, who knows, maybe Oracle, and, like, Amazon, you know, kind of coming from behind and, you know, generating their own things. But there's going to be some very limited number of real providers for the kind of pre-trained models, right?

And then there's going to be the question of... yeah, so there's no competitive advantage in licensing, right? It's almost like you license to not fall behind; you don't license to get ahead, right? So the whole curve just shifts, and now you have to license just so you're not behind.
And, you know, swimming to keep up never feels good, right? But you kind of have to do it. But then, when you ask the next question of, you know, how do you get ahead and not just fall behind, I think the answer really does come back down to: well, you've got to have some unique data to offer, whether you're doing a, you know, a LoRA or whether you're doing, you know, a checkpoint. Like, let's assume that you don't have the money or the resources to build the whole thing, but hopefully more of these, you know, um, low-rank adaptation, you know, post-training approaches will become effective, and you can be really good at leveraging the data that you do have access to, to be better in this niche than someone who just takes this thing off the shelf and deploys it.
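[Aside: a minimal sketch of the low-rank adaptation idea Adrian mentions, using Hugging Face's peft library on a small open model. The model name and hyperparameters are placeholders, and only the adapter setup is shown; the training loop over your own differentiated data is omitted.]

```python
# LoRA sketch: freeze a pretrained model and train small low-rank adapter
# matrices on your own niche data. Assumes `pip install transformers peft`.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "gpt2"  # placeholder; any causal LM you can host works
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # attention projection in GPT-2; varies by model
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train

# From here you'd run a normal fine-tuning loop (e.g. transformers.Trainer)
# over the domain-specific text that you, and not your competitors, control.
```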
Deep: Um, I don't know, that story has gotten a little lame lately. I feel like that's the story I've been using for 15 years, but it feels... it feels like not a slam dunk anymore.
Adrian: But is the reason it feels like not a slam dunk... is it perhaps not because it's not true, but because many people don't have any critical mass of differentiating data?
Deep: Oh, isn't that a great question?
Adrian: Yeah,
Deep: I mean, okay, maybe...

Maybe you need help with computer vision, natural language processing, automated content creation, conversational understanding, time series forecasting, or customer behavior analytics? Reach out to us at Xyonix.com. That's x-y-o-n-i-x dot com. Maybe we can help.
Maybe. That covers... certainly that covers a few cases that were at the top of my mind. But maybe let me ask you this way: does there even exist... Like, let's take text, because I think text is so much further ahead as far as these big models. In the case of text, does there even exist a corpus of data that you can really, really train on where you can outperform GPT-4 enough that it actually matters? Like, doesn't GPT-4 have to actually be bad at something for you to excel enough for it to matter? And I would argue it's not that bad. Like, I can't find that many holes. I mean, obviously with, like, math and certain things, but as far as reading well and reasoning off the reading...
I mean, it's, it's generally quite reasonable.
Adrian: I disagree.
Deep: Okay, let's hear it.
Adrian: So, First of all, like, I'm just going to do the, the caveat, like, and yes, let's, let's set aside all those things, you know, that actually require like an explicit computation model or the building of an explicit computation model.
Although, sorry, I can't help but mention it, I won't drill on it, I promise: when you think of the massive number of neural nets that you're training, in all those different channels, with all those different attention models, and they're all semantically opaque... to be honest, it's not likely, but it is possible that some more structured representations are emerging in there.
But I'll put a pin in that and set it aside. Um, as a user of this, for my own, like, hobbies and passion projects, right, I can tell you that if you're writing, like, some shit blog post on, you know, what are the top weapons to use in Counter-Strike, ChatGPT has nailed the bar for what you need to publish, okay?
If what you're trying to do is something that requires a little more than that... I don't want to say "problem," but that kind of blah level of, yeah, I strung some words together and it's kind of coherent and makes sense. For example, uh, I am a D&D GM, right? And the number of hours you spend prepping for, you know, the kind of shit your players are going to encounter in a given week for a session is a lot.

And I would love it if I could go to, you know, ChatGPT and say, hey, give me a really nice, evocative description of an underground cavern that has a big, uh, phosphorescent lake and a monster at the end. Like, give me three paragraphs on that, right, so that I don't have to spend, you know, probably 40 minutes, like, building it myself.
And that is something that... I don't know. First of all, I don't know if there's a market there, but that's an example of what ChatGPT does not do well. All the shit sounds the same. It all ends with something like, "and it was a place where evil always dwelled." And you're like, I don't know...
Deep: But isn't that just a matter of, like, a few months? Like, as soon as we can fine-tune on top of GPT-4, you know, you're going to give it a few thousand examples that are rich and colorful, like you want. And then at that point... like, whatever you come up with, with a few thousand examples on top of GPT-4, it's not that much differentiation at the end of the day. It's not the way it was a year ago, where you've got, you know, your whole corpus and everything custom-trained and all that, right? Like, you're basically still GPT-4 dependent, you know?
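[Aside: fine-tuning on top of GPT-4 itself wasn't generally available when this conversation happened, so treat this as a sketch of the general mechanics Deep is describing: OpenAI-style fine-tuning endpoints take chat-formatted examples as JSON Lines. The D&D-flavored examples are invented for illustration.]

```python
# Sketch: writing a small fine-tuning dataset in the chat JSONL format
# used by OpenAI-style fine-tuning endpoints. Examples are invented.
import json

examples = [
    {"prompt": "Describe an underground cavern with a phosphorescent lake.",
     "completion": "The tunnel opens onto black water lit from below by a "
                   "cold green glow, as if the lake itself were dreaming..."},
    {"prompt": "Describe a ruined watchtower at dusk.",
     "completion": "Swallows pour from the broken crown of the tower, "
                   "scattering like sparks against a bruised sky..."},
]

with open("style_finetune.jsonl", "w") as f:
    for ex in examples:
        record = {"messages": [
            {"role": "system", "content": "Write evocative fantasy scene descriptions."},
            {"role": "user", "content": ex["prompt"]},
            {"role": "assistant", "content": ex["completion"]},
        ]}
        f.write(json.dumps(record) + "\n")

# The resulting file is what you'd upload to a fine-tuning job; the open
# question Deep raises is whether ~2,000 such examples are really a moat.
```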
Adrian: Okay, I don't disagree with you. And maybe the thing that we're just sort of arguing about now is, you know, how much... like, is the reason it feels like a lame answer that the amount of data that you're talking about feels like nothing?
Deep: Yes, I think that's a huge part of it. When you needed millions of examples for your deep learning model to do X, Y, Z, it felt like a great answer to your investors. Like, okay, that's a moat. But when you're talking about 2,000 examples, maybe 3,000, maybe 10,000, you know... and then, okay, sure, maybe you go to a hundred thousand and you get a half a percent more efficacy. That doesn't feel like as strong of an argument.
Adrian: I feel like we're in empirical waters here, and I don't know what the empirical answer is. But let me posit two different possible worlds for the next, you know, N months.
Let's say my goal, for example, in this kind of realm that we're talking about, is: boy, the way Stephen King writes, or the way Guy Gavriel Kay writes, or the way, you know, pick your favorite author here writes, I want my stuff to be in the style of X, with the same level of fidelity that I could go to Midjourney and say, "in the style of N. C. Wyeth," right, and get those kinds of brushstroke things. One of two things is true. Either I'm going to be able to basically provide, like, you know, four books' worth of data points, and then, boom, my LoRA, you know, my checkpoint, is going to be able to produce that output; or it's actually not, because the priors are so heavy in the huge corpus that it trained on that shifting that momentum to have that kind of plausible structure is something it's never going to do. I mean, it will spit a phrase out, but it would never have come up with it, you know? I mean, like the David Foster Wallace example. So either, A, it will actually work, and we'll all be like, holy fuck, the bar just lowered again.
Amazing. Or, B, you're going to have to step back and say, okay, in order to get the output I want, I'm going to need to train on not just the raw data; I'm going to need to mark my data up in a structured way. Like, I'm going to need to go back and do some NLP and say, what makes this style this style: it's this number of noun phrases, you know, this is how often we shift character points of view. And you're going to have to build some, for lack of a better word, some IP around the structure and the form of it.
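[Aside: a rough sketch of the kind of structural markup Adrian describes in that second scenario, using off-the-shelf NLP to turn raw prose into style features you could attach to training data. It assumes spaCy with the small English model; the feature set is illustrative, not a real stylometry pipeline.]

```python
# Sketch: extracting simple stylistic features from a passage with spaCy.
# Assumes `pip install spacy` and `python -m spacy download en_core_web_sm`.
import spacy

nlp = spacy.load("en_core_web_sm")

def style_features(text: str) -> dict:
    """Very rough stylometric summary of a passage."""
    doc = nlp(text)
    sentences = list(doc.sents)
    noun_phrases = list(doc.noun_chunks)
    adjectives = [t for t in doc if t.pos_ == "ADJ"]
    return {
        "sentences": len(sentences),
        "avg_sentence_len": sum(len(s) for s in sentences) / max(len(sentences), 1),
        "noun_phrases_per_sentence": len(noun_phrases) / max(len(sentences), 1),
        "adjective_ratio": len(adjectives) / max(len(doc), 1),
    }

passage = "The lake glowed a cold, patient green. Nothing moved on the water."
print(style_features(passage))
```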
Deep: It feels like... if I jump up a level from what you're saying, you're making the argument that people's tastes will get much more refined, and they're going to want... they're going to read closer and want more.
Adrian: I don't know that people's tastes are going to get much more refined. I think this is what you pay advertising agencies a huge buttload of money for right now: to, like, take my product and make it sound hip, and not like some idiot who tried to make it sound hip.
I don't know. Yes.
Deep: No, I actually... that example is really good, because, you know, "Just Do It" is in our brains. It's so simple. Pretty sure ChatGPT, however good, would not have come up with that if we rewind back in time. Yeah. Okay. This has been a super awesome conversation. Uh, I feel like we covered a lot of terrain.
Um, we're almost out of time, so I'm going to leave us with one final question. Fast forward five years, everything evolves the way that you're thinking it does. What does the world of a product manager look like, specifically in terms of how it's different from today?
Adrian: You know, I think we've both been in sort of content processing and, you know, massaging and republishing kinds of areas for a long time, and I think we're both very familiar with this idea of curation as being a role, right? Um, my short answer... there's a very true and very common saying, you know, in the startup world: I don't give a shit about your idea.
Ideas are a dime a dozen, and it's 99% execution, right? Um, I'm not saying that's going to completely change overnight, but if the execution is the thing that's being automated and, uh, radically simplified, then the judgment and the idea actually are going to start to take up some of that space in terms of importance.
So I'm totally, like, way out over my skis here, right? Especially because the pace of innovation is so insane right now. But I would imagine that being a product manager, not, you know, VP of product, but at a sort of senior, mid-managing level right now, is way more about cat herding and blocking and tackling, and way less about, like, you know, thoughtful value judgments on, you know, different experiences. Um, I would like to believe that the composition of that job gets to change, because a lot of the annoying blocking-and-tackling stuff gets reduced and simplified.
There's this whole world of answers that I think we definitely don't want to dive into as we kind of wrap up, but there's a whole world of answers around legislation and IP, and how you're going to have to focus less on the tools and more on the source data. That is, I think, very deep, but not something to head into in, like, 60 seconds.
I'll tell you, the stuff that's rattling around in my head right now is very much around IP rights and legislation, like what people are going to do to try to preserve the kinds of freedoms they think they want. The question that I think people over-index on way too much, and it's not the right question, is: how do I exclude my output from the training set of whatever the tool is?
That's the wrong question, because someone can do something not exactly like you, but reasonably similar to you, and that will go into the training set, and your output can still be commoditized, even if your literal source data didn't get ingested.
Deep: Awesome. Well, thanks a ton for coming on, Adrian.
That's all for this episode. I'm Deep Dhillon, your host, saying check back soon for your next AI Injection. In the meantime, if you need help injecting AI into your business, reach out to us at Xyonix.com, that's X Y O N I X dot com. Whether it's text, audio, video, or other business data, we help all kinds of organizations like yours automatically find and operationalize transformative insights.