
Your AI Injection
Is AI an ally or adversary? Get Your AI Injection and learn how to transform your business by responsibly injecting artificial intelligence into your projects. Our host Deep Dhillon, long-term AI practitioner and founder of Xyonix.com, interviews successful AI practitioners and domain experts to better understand how AI is affecting the world. AI has been described as a morally agnostic tool that can be used to make the world better, or harm it irrevocably. Join us as we discuss the ethics of AI, including both its astounding promise and sizable societal challenges. We dig in deep and discuss state-of-the-art techniques with a particular focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful. Need help injecting AI into your business? Reach out to us @ www.xyonix.com.
Is This the End of Traditional Coding? How AI Orchestration Might Render Developers Obsolete with Laly Bar-Ilan of Bit
Is traditional coding already a thing of the past?
In this episode of Your AI Injection, host Deep Dhillon sits down with Laly Bar-Ilan, Chief Scientist at Bit, to explore a near-future where software developers evolve from code writers into code orchestrators. Laly discusses how Bit's composable software development and AI-powered componentization are set to overhaul the industry, even possibly eliminating the need for writing code altogether. The discussion leads into how a graph of reusable components could tame sprawling, out-of-control code bases, the ethical nuances of automated development, and what it really means to “orchestrate” code. Tune in now to uncover whether tomorrow’s developers will simply curate AI outputs, or continue the tradition of writing lines of code themselves.
Check out more about Laly here: https://www.linkedin.com/in/laly-bar-ilan/
and Bit here: https://www.linkedin.com/company/bit-dev/
Check out some more of our AI & software development podcasts here:
[Automated Transcript]
Laly: I don't think that even in five years software development is going to look anything like it looks today. I don't think we'll be writing code, hardly writing code, to tell you the truth. We'll be more like orchestrators, and that's going to go away at some point as well.
More like orchestrators: overseeing outputs, evaluating outputs, taking care of the pipeline. But I don't really see us writing code anymore in the traditional sense.
Deep: Hello, I'm Deep Dhillon, your host, and today on Your AI Injection we're exploring AI-driven composable software development with Laly Bar-Ilan, Chief Scientist at Bit. Laly has 18 years of experience in software engineering and data science, specializing in NLP. She holds a master's degree in linguistics and cognitive science from Tel Aviv University.
Bit provides tools designed to accelerate innovation through composable software development. Laly, thanks so much for coming on the [00:01:00] show.
Laly: Thanks for having me. I'm really excited to be here.
Deep: Awesome. Me too. So maybe let's get started. Walk us through like what did people do without your solution? What is different with your solution?
And maybe take us through a particular scenario.
Laly: Right. So basically, what our solution allows companies to do is represent their entire code base as a set of reusable components. And we're talking about both front-end and back-end components. This is the essence of composable software, really.
Deep: how is that different from like the normal software where everybody encapsulates and has objects and classes and functions and modules and all the stuff that makes it reusable?
Laly: Right, but they're written inside repositories. And there's a lot of duplication, right? Which creates a lot of tech debt and maintainability issues and scalability issues.
When you represent your entire code [00:02:00] base as a graph of reusable components that depend on each other, ideally each business or product functionality is represented only once in the entire code base. So let's say you want to build a shopping cart. We'll go for front end because it's easier, but we do both back end and front end, as I said.
You wanna build a shopping cart. What you do with Bit is use our semantic search engine to see if there already is a shopping cart somewhere in your organization. And if there isn't, you look for existing components that you can use as dependencies, right?
So you look for menu items and selectors for shipping methods and tax calculators and all of that. And then you wouldn't have to write everything from scratch like a lot of people do, and basically like AI does today.
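To make the shopping-cart example concrete, here is a minimal sketch of composition over generation in React/TypeScript. The component names and import paths are hypothetical placeholders, not Bit's actual packages; the point is that only the glue code is new, while the pieces are reused.

```tsx
// Hypothetical sketch: composing a shopping cart from components that already
// exist in the organization instead of generating them from scratch.
// The import paths below are illustrative placeholders, not real packages.
import React from 'react';
import { CartItemRow } from '@acme/commerce.ui.cart-item-row';
import { ShippingMethodSelect } from '@acme/commerce.ui.shipping-method-select';
import { calculateTax } from '@acme/commerce.pricing.tax-calculator';

export interface CartItem {
  id: string;
  name: string;
  unitPrice: number;
  quantity: number;
}

export interface ShoppingCartProps {
  items: CartItem[];
  region: string; // passed to the shared (reused) tax calculator
}

// The only "new" code here is the glue; rows, shipping selection, and tax
// calculation all come from existing components found via search.
export function ShoppingCart({ items, region }: ShoppingCartProps) {
  const subtotal = items.reduce((sum, i) => sum + i.unitPrice * i.quantity, 0);
  const tax = calculateTax(subtotal, region);

  return (
    <section>
      {items.map((item) => (
        <CartItemRow key={item.id} item={item} />
      ))}
      <ShippingMethodSelect region={region} />
      <p>Total: {(subtotal + tax).toFixed(2)}</p>
    </section>
  );
}
```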
Deep: I think what you're [00:03:00] getting at is these are higher-level components than what you might have in an encapsulated class in a normal software development process.
So this is like: you're writing code, and if you think anyone else in the organization would ever need to use this component, then you go and modularize it to Bit's advantage. Is that right?
Laly: That's right, that's the exact principle we follow. If we think someone's going to use it, then we turn it into a component.
Deep: So it's kind of like you're in an org where there are maybe enough developers that it's not obvious who's built what. And then you, as an individual developer, go through the effort of componentizing it the Bit way, and that makes it available in this kind of company marketplace, if you will.
Is that fair to say?
Laly: Yeah, you don't really need to componentize your entire code base. But a lot of our clients actually came to us because their code bases became [00:04:00] very big, really fast. They felt like they were starting to lose control over the code base, there were starting to be a lot of duplications, and they didn't know what was really going on.
So they wanted a way to bring some order into their code base. So every new functionality that they're writing, they're writing in components. Also, the idea is that our AI, which I haven't talked about yet, will very soon be able to understand the APIs that your code base is already using,
and be able to reuse these APIs when it generates new components.
Deep: Help me understand: in an alternative world, kind of without Bit, I might just tell my developers, you know, imagine we're in an open source ecosystem and you're contributing a package if it's Python, or a library file in whatever language.
And you might have a landing page website, your GitHub repo, and people just kind of know, like, I'll go to an Elasticsearch [00:05:00] repo if I need search capability, for example.
Or if I need a relational database, maybe I go to a Postgres repo or something, and each of those repos has its own libraries and interfaces for interacting with them. Map me from that world to your world.
Laly: Right. So today, with a lot of the code, we don't really know what's going on in many other repos.
And the bigger the organization, the less we know about other repos, and searching within repos isn't very easy. Searching on a graph of components is much, much easier, especially if you have a semantic search engine, which searches by functionality and not by string matching. So for example, if you looked for a nav bar, it would also give you a header menu or a sidebar or whatever component already implements this functionality. So it's much easier to find and then reuse existing functionality this [00:06:00] way.
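As an illustration of searching by functionality rather than by string matching, here is a minimal sketch of embedding-based lookup over a component catalog. The `embed` function and the catalog shape are assumptions for the example, not Bit's actual semantic search engine.

```typescript
// Minimal sketch: functionality-based search over a component catalog.
// `embed` is a stand-in for any text-embedding service; catalog entries
// describe components by name and docs rather than implementation.

interface CatalogEntry {
  id: string;          // e.g. "ui/header-menu"
  description: string; // docs / functional description
  vector: number[];    // precomputed embedding of the description
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Returns the entries whose described functionality is closest to the query,
// so a search for "nav bar" can surface "header-menu" or "side-bar".
async function searchByFunctionality(
  query: string,
  catalog: CatalogEntry[],
  embed: (text: string) => Promise<number[]>,
  topK = 5,
): Promise<CatalogEntry[]> {
  const queryVector = await embed(query);
  return [...catalog]
    .map((entry) => ({ entry, score: cosine(queryVector, entry.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map(({ entry }) => entry);
}
```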
Deep: Yeah. So it's sort of like, you go into your org, if you're the, I don't know, CTO or something of a company, and you decide we're gonna institute Bit company-wide.
what instructions do you give to each developer now? what guidance do you give them in terms of what to componentize and how does that maybe conflict with whatever their product manager or project manager wants them to be working on at any point in time?
Laly: Mm, okay. So let's start with what to componentize.
The idea is basically that everything that can be reused and has an API, you can componentize. It can be microservices; they can be very, very easily wrapped as components. Any SDK you have in your organization can be componentized, and design systems can also be componentized.
So basically a lot of your code base can be used both by human developers and by the AI.
Deep: Got it. So maybe it would help to take a particular example. What does it actually [00:07:00] mean to componentize something? What exactly does the developer have to do?
Let's say they've implemented their own logging capability or something, just for the sake of argument. What would they have to do to componentize it?
If I have to guess, based on what you're telling me, it sounds like you want some kind of metadata, 'cause this thing's gonna wind up in a catalog and, at a minimum, somebody else needs to be able to search it. So you want to know what it is. There are probably some instructions for it, probably the Javadocs or the documentation equivalent of whatever language it's implemented in.
And you probably wanna know the language and all that stuff. Or are you just slurping that up and figuring it out on your own with your machine learning or AI system?
Laly: So let's start with, what is a component?
Deep: Yeah.
Laly: What is a Bit component, and then how do we componentize it? A Bit component, think about it like a package or a mini repo. First of all, it's an individual unit of business or product meaning.
It has distinct business or product functionality. It has its own implementation, its own [00:08:00] API, its own docs, its own version control, its own tests, everything it needs to be an independent unit that can run and be used in different contexts, right? So this is a Bit component. Today it's pretty easy to do manually with existing code, but within the next couple of weeks you can basically just give the reference to our AI and it will build a component around it.
Deep: And the reference you would give it would be like a repo or something?
Laly: Sure, the files. Usually it's not an entire GitHub repo but a certain module within the repo. Yeah, you could actually do that to an entire repo as well.
You'll just have to specify the API: what sort of methods it has, what sort of functions, you know?
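As a rough mental model of what such an independent unit carries with it, here is an illustrative sketch in TypeScript. The field names are assumptions for the example, not Bit's actual component or manifest format; the logging case echoes the example above.

```typescript
// Illustrative shape of a "component" as an independent unit: one piece of
// product functionality with its own API surface, docs, tests, and version.
// This is a sketch of the concept, not Bit's real metadata format.

interface ApiSignature {
  name: string;                    // e.g. "logEvent"
  params: Record<string, string>;  // param name -> type, e.g. { level: "LogLevel" }
  returns: string;                 // return type, e.g. "void"
}

interface ComponentManifest {
  id: string;             // e.g. "acme.logging/structured-logger"
  version: string;        // semver, versioned independently of any repo
  description: string;    // what the component does, in product terms
  api: ApiSignature[];    // the public surface other components depend on
  docs: string;           // usage documentation / examples
  testCommand: string;    // how to run the component's own tests
  dependencies: string[]; // ids of other components it depends on
}

// Example: wrapping an in-house logging module as a component.
const structuredLogger: ComponentManifest = {
  id: 'acme.logging/structured-logger',
  version: '1.2.0',
  description: 'JSON structured logging with levels and request correlation.',
  api: [
    { name: 'logEvent', params: { level: 'LogLevel', message: 'string' }, returns: 'void' },
  ],
  docs: 'Call logEvent(level, message); configure sinks via init().',
  testCommand: 'npm test',
  dependencies: [],
};
```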
Deep: And so then maybe walk us through like, what is the AI part?
Laly: Right. So our [00:09:00] AI is the first that we know of to be optimized for reuse.
You know the DRY principle, right? Don't repeat yourself. AI today is the opposite: it's WET, write everything twice. I mean, some developer needs a button and the AI will gladly generate it, right? Any AI tool that you're using today will gladly generate it.
And then another developer in another team, same organization, says, okay, generate a button, and the AI will gladly do that as well. And before you know it, you get a code base that's inflated and full of duplications and unmaintainable. Not only is it unmaintainable, but at the speed of AI we'll lose human control over code bases very soon, because it's moving so fast, it's generating code so fast.
AI code generation today is kind of acting like [00:10:00] code monkeys more than the senior architects it could be used as.
Deep: I mean, I think that is how people are using these systems, largely. It's well-encapsulated problems, either a well-defined function or a well-defined class, and then you'll go into Claude or GPT or whatever, have it author that component, and pull it in.
But increasingly, with Copilot there are more sophisticated IDEs that are kind of swimming all over the place and starting to come up with really powerful and slightly frightening refactoring capabilities too, which I think speaks to your potential bloat situation or unmanageability problem.
Laly: Right. But it is harder for them to, first of all, spot the code that they can actually reuse. They don't always have it as context. A lot of the time, what Copilot or Cursor or other tools have as context is what you have in your local repo, but they're not aware of what's going on in other repos.
Deep: Oh, that's interesting. [00:11:00] So maybe it just has your project repo, but it doesn't have an organization-wide one?
Laly: Exactly. It doesn't have a holistic view of your entire code base. And even if it did, think about the amount of context it would get and how much noise that would produce.
Just finding the code that it can actually reuse is very, very hard when it's spread out across repos, and it's also sometimes coupled to other things. When you represent your code as a graph of components, it's much, much easier to find the components you need and to reuse them. And they're all well documented.
They all have these APIs that allow them to be readily consumable.
Deep: Tell me what you mean about representing the components as a graph. What exactly do you mean by that?
Laly: We have a graph in which each node is a component. Let's say it's a button, it's a header, it's a shopping cart, or maybe it's a microservice, or [00:12:00] maybe it's an entity like user or item, maybe it's a database handler.
These are the nodes in the graph. Now, the edges between them are dependency relations. So for example, you'll have the shopping cart depending on item and on tax calculator. You'll have the header depending on logo, avatar, and search box or menu. This way you have a map that's a live representation of this organism that's the code base.
It's a live representation of all the business and product functionality of the organization.
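To make the graph idea concrete, here is a minimal sketch of components as nodes and "depends on" relations as edges. The node names mirror the examples above; the data structure itself is an assumption for illustration, not Bit's internals.

```typescript
// Sketch: a component graph where nodes are components (a button, a header,
// a shopping cart, a microservice, an entity like "user") and edges are
// "depends on" relations.

type ComponentId = string;

// Adjacency list: component -> the components it depends on.
type ComponentGraph = Map<ComponentId, Set<ComponentId>>;

function addDependency(graph: ComponentGraph, from: ComponentId, to: ComponentId): void {
  if (!graph.has(from)) graph.set(from, new Set());
  graph.get(from)!.add(to);
}

// The examples from the conversation:
const graph: ComponentGraph = new Map();
addDependency(graph, 'shopping-cart', 'item');
addDependency(graph, 'shopping-cart', 'tax-calculator');
addDependency(graph, 'header', 'logo');
addDependency(graph, 'header', 'avatar');
addDependency(graph, 'header', 'search-box');

// "What does the header depend on?" -> logo, avatar, search-box
console.log([...(graph.get('header') ?? [])]);
```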
Deep: do you have users that are non-developers that somehow tap into this?
Laly: we're just starting to have users who are, who are non-developers.
Deep: but what's your vision there?
Like what are you imagining them being able to do?
Laly: Vibe code, basically.
Deep: I keep hearing this term. Can you tell me what it is? This is like the fourth time I've heard it, and I've been so busy I haven't bothered Googling it.
Laly: [00:13:00] I gotta admit, I just heard it a few days ago.
Yeah.
Deep: Well, who threw it at you? Is this like where you don't know how to code, but you found some AI thing and you just twiddle something and then out comes some code?
Laly: Yeah, that's exactly it. You just wrote it without coding. Someone wrote on Twitter that vibe coding is really fun until you have to vibe debug, vibe refactor, and vibe deploy.
Deep: No, it's an interesting phenomenon, right? I've been working with so many people who were maybe non-technical product managers or marketing people, and all of a sudden we've got a marketing person here who I had no idea knew how to code.
She's just showing me all this pretty sophisticated HTML stuff that's been deployed on the website. I'm like, huh, I didn't know you knew how to code. Nope, don't have any idea what any of this means, but I got it to work. Wow.
Laly: It's just kind of like that.
Deep: I think we're gonna get a lot more of this vibe coding in the future.
Right.
Laly: I agree. It's the democratization of software development that's going on here.
Deep: [00:14:00] It sounds great for everyone except those who have to debug, deploy, and all the other stuff you mentioned. So tell me, what does that mean in your worldview?
Maybe that means a salesperson is talking to a client, and they know there's a bunch of apps that some other consulting project in the company did. They start talking to the LLM and then, boom, they get something they can vibe-code out and show to a customer, or something like that.
Is that the scenario?
Laly: That's the vision. That's where we're headed. But today we're definitely a developer tool, because we're more organizational. We're not like, you know, Lovable and Bolt and these low-code, no-code platforms.
We're more for maintaining organizational code, where things really have to work together in large, complex systems.
Deep: I mean, developers are a prickly bunch, as I'm sure you know quite well. What are they attracted to in the platform, and what do they whine about?
And I don't mean to put you on the spot there, but [00:15:00] I don't know any developer who doesn't dislike a long list of things, and they can all be utterly reasonable things that something else is totally dependent on. So what are some of the common attractants and the common critiques?
Laly: That's a very interesting question. What attracts people to our platform is the fact that it allows them to bring a lot of order and visibility to their code base. I have to say that since I'm not customer facing, I'm not sure about their specific complaints.
I don't have the stats on what they usually complain about.
Deep: Well, let me guess at some, and you tell me if the product already deals with them or not. Do you let them use whatever development environment they want?
Laly: Great question. First of all, yes, they can use whatever development environment they want. But in terms of languages, we only support Node.js and TypeScript, Vue, React, and Angular. That is something we get asked about a lot.
Deep: [00:16:00] So the bulk of your historic client base is maybe more on front-end development then, with the exception of the Node folks?
Laly: Well, those were the early adopters. We'll be adding more languages, but people use us to componentize their entire code base, back end to front end.
Deep: Got it. And then take us into the world of how you see this evolving over time. I think being able to find components that could be used in a particular context seems like
probably a no-brainer. I mean, it's an extension of search: you've got all these components, you've got a common catalog or repository, whatever you want to call it, where these components live. Any individual developer in a particular context can now know what's a reasonable candidate for insertion somewhere.
Walk us through how you see that evolving.
Laly: So I actually have this crazy vision. Basically, so much of the functionality that companies are building today [00:17:00] already exists. If you think about all the code in the world as one huge code base, it has so many duplications.
Deep: Oh, sure. Absolutely.
Laly: And think about a world in which all open source is componentized. We have NPM or PyPI, these are registries of packages, right? But they're not really managed. What if the entire open source world was componentized and represented as a component graph?
So that AI, which builds so much of the code today and will continue to do so, has access to it and doesn't have to write more code.
Deep: It sounds like what you're saying is that if I go to GPT today and I say, hey, write me some code that opens up a file and does some stuff to it, it will generally go down to the most common way that people [00:18:00] access that code.
Let's say there's a slightly higher-level interpretation, maybe even a standardized spreadsheet or something, for example. And let's say there's a higher-level component that loads the spreadsheet, loads all the columns, and maybe does some kind of statistical analysis of it, like giving me cardinality or something like that, you know?
Laly: Mm-hmm.
Deep: If I ask o3-mini or o1 or whatever, hey, I want you to do this for me, I think your point is that it will use the most common but generally lower-level approaches. It will not find this higher-level component, and I'm sure there are probably hundreds of them that already do this, that open up a file, recognize that it's tabular, and go through and characterize the columns.
It probably won't do that. It will try to do it from scratch, I think is your point.
Laly: Exactly. It will do it from scratch, and let's say it succeeds, then it gets copy-pasted into some repository. That's [00:19:00] that. And then another developer somewhere else asks for the same thing, and it gets duplicated and duplicated. Think about how many resources, like power and storage, we could save if that code wasn't duplicated, if you think about it as a global ecosystem of software.
Deep: I mean, I think that's an interesting observation, right? And I think to a large extent all developers know this sort of thing, which is probably guiding how people use the LLMs in the first place. That's why they're sort of going with these more encapsulated, tiny little functions or surgical interactions,
as opposed to bigger-chunk interactions.
Ideally you're going back to the user and saying, well, do you want a higher-level library?
Do you want a low-level library? I've got five options. This one's super popular. This one's maintained by a developer who disappeared off the scene 12 years ago, and nobody knows where the heck he is, and nobody maintains the code anymore. I feel like, in general, the LLMs don't [00:20:00] really have conversations with you.
They just take your prompt and run with it. I think if we did that with humans, if we hired a developer who, every time you sent something, just ran off and implemented the code, you'd be kind of annoyed with that. You kind of want them to say, well, what do you mean?
I mean, this is like a standard interviewing question: give them partial requirements and look for whether they try to clarify the requirements or just run off. And we know the LLMs will just run off and go. It's seen as a big flaw in a junior developer, but it's a reality with an LLM.
Laly: Yeah, that's true.
I'm guessing it will be something similar to what we have today, with how maintained a component is and how popular it is. Basically, a component is something that has its own tests and its own build process, so you can actually run it and see how well it works. So before the AI suggests it to you, it can simulate how it will run in your context, if it knows your context, and bring you the most relevant component that you [00:21:00] can use, and simulate its run.
This is actually something we have today. We have graph-based CI that allows you to simulate how the changes you're making to a specific component will ripple across that component's dependents, if that makes sense.
Deep: Yeah. But I think we should dig into it 'cause it's an interesting concept.
So you're a developer, you have a component, and other people are using your component in their applications. Your system has a graph of that entire universe. You want to change something; for the sake of simplicity, let's say you're gonna delete an attribute in an output JSON structure. What is your system gonna do when I try to make that change?
It's gonna say, these guys are all impacted by this, you should version this, or something?
Laly: Exactly that. It will run your build, then it'll run your dependents' builds. It'll start with your direct dependents and show you who [00:22:00] breaks and why.
Deep: Interesting. So you're sort of presuming that, for the dependents, (a) you have their code, (b) you have their tests, and (c) you have the ability to execute their tests and run their code.
Is that true? That's how your system works?
Laly: Yes.
Deep: It works today,
Laly: like this, already.
Deep: That's intriguing. So then at least if you're gonna piss off the world going from Python 2.x to 3.x, you'll know exactly who's impacted. I think they knew, you know, but they had no choice, 'cause they wanted to move the language forward.
So there are lots of scenarios where, and I think you're probably right in this hypothesis, people maybe naively make some change and don't understand its impact, and now you can bring a lot of transparency there. That could be really powerful.
Laly: Exactly. They don't know until it's deployed and it's too late.
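A minimal sketch of that impact-analysis idea, under the assumption that each component exposes a way to run its own build and tests. The `runChecks` callback and the reverse-edge map are placeholders for illustration, not Bit's CI API.

```typescript
// Sketch of graph-based impact analysis: when a component changes, walk the
// components that depend on it and run their builds/tests to see who breaks.

type ComponentId = string;

// Reverse edges: component -> the components that depend on it directly.
type Dependents = Map<ComponentId, Set<ComponentId>>;

interface CheckResult {
  component: ComponentId;
  passed: boolean;
  reason?: string; // e.g. "type error: attribute removed from output structure"
}

// `runChecks` stands in for "run this component's own build and test suite".
async function impactOfChange(
  changed: ComponentId,
  dependents: Dependents,
  runChecks: (id: ComponentId) => Promise<CheckResult>,
): Promise<CheckResult[]> {
  const results: CheckResult[] = [];
  const queue = [...(dependents.get(changed) ?? [])]; // direct dependents first
  const seen = new Set<ComponentId>(queue);

  while (queue.length > 0) {
    const id = queue.shift()!;
    results.push(await runChecks(id)); // report who breaks and why
    // Continue outward through this dependent's own dependents.
    for (const next of dependents.get(id) ?? []) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  return results;
}
```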
Deep: Right. Now this one's a fun one. So you know, in a normal universe you've versioned your output, right? You've versioned your API output, and the consumers are locked in either to a [00:23:00] version or to the latest. Now you would be able to distinguish between the latest and those who are pinned to a version.
And this is actually kind of a common point of discussion amongst developers, different schools of thought: I always wanna be on the latest so that if something changes, I deal with it right away; I don't ever want to get blindsided by it two days before a release, or two minutes before a release.
Both camps have their opinions, and the solution usually winds up being that the problematic libraries get locked to a version and the ones that are respectful tend not to. In your worldview, I guess you would get a whole presentation of what that landscape looks like right before you commit your change or something.
Laly: First of all, yes. Second, organizations can actually define their strategy, the way they want to deal with new versions. Some can decide they want to update to the latest version every time. Some might say, okay, I only want minor versions but not major versions.
Each organization can decide what it wants to do, and this is connected [00:24:00] to the AI: it's part of the system instructions.
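A minimal sketch of what such a per-organization update policy might look like, expressed with semver levels. The policy names and the configuration shape are illustrative assumptions, not Bit's actual schema.

```typescript
// Sketch: an organization-level policy for how aggressively to adopt new
// versions of the components you depend on.

type UpdatePolicy = 'always-latest' | 'minor-and-patch' | 'patch-only' | 'pinned';

interface SemVer { major: number; minor: number; patch: number; }

function parse(v: string): SemVer {
  const [major, minor, patch] = v.split('.').map(Number);
  return { major, minor, patch };
}

// Should a dependent automatically move from `current` to `candidate`
// under the organization's chosen policy?
function shouldAutoUpdate(policy: UpdatePolicy, current: string, candidate: string): boolean {
  const a = parse(current);
  const b = parse(candidate);
  switch (policy) {
    case 'always-latest':
      return true;
    case 'minor-and-patch':
      return b.major === a.major;
    case 'patch-only':
      return b.major === a.major && b.minor === a.minor;
    case 'pinned':
    default:
      return false;
  }
}

// Example: a team that accepts minor versions but not breaking majors.
console.log(shouldAutoUpdate('minor-and-patch', '1.4.2', '1.5.0')); // true
console.log(shouldAutoUpdate('minor-and-patch', '1.4.2', '2.0.0')); // false
```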
Deep: So what are you envisioning for the organization: just tweaks on an LLM that can generically deal reasonably well with code?
Or are you envisioning a very fine-tuned rendering of that model for this component world you're describing?
Laly: From our experience, it doesn't require a lot of fine-tuning. It requires really good RAG. You mentioned context earlier; one of the principles we live by is that accurate context is better than big context. When we tell our model to generate a new component, we use the RAG to fetch only the components that are likely candidates for the component that's going to be generated. So it doesn't have a large context; it has the accurate context and the most relevant context that it needs.
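A minimal sketch of the "accurate context beats big context" idea: retrieve only the most relevant components and hand just those to the model. The `retrieve` and `generate` callbacks and the prompt shape are assumptions for illustration, not the actual pipeline.

```typescript
// Sketch: build a compact, relevant context for code generation by retrieving
// only the candidate components that match the task, then prompting the model
// to reuse them. The retriever and generate() call are placeholders.

interface IndexedComponent {
  id: string;
  api: string;   // public API signature(s)
  docs: string;  // usage documentation
}

async function generateComponentWithReuse(
  task: string, // e.g. "a checkout summary panel with a tax breakdown"
  retrieve: (query: string, topK: number) => Promise<IndexedComponent[]>,
  generate: (prompt: string) => Promise<string>,
): Promise<string> {
  // 1. Fetch only the likely candidates for reuse: accurate context, not big context.
  const candidates = await retrieve(task, 5);

  // 2. Expose just their APIs and docs to the model.
  const context = candidates
    .map((c) => `Component ${c.id}\nAPI: ${c.api}\nDocs: ${c.docs}`)
    .join('\n\n');

  // 3. Instruct the model to compose from the existing components where possible.
  const prompt = [
    'Reuse the following existing components instead of rewriting their functionality.',
    context,
    `Task: ${task}`,
  ].join('\n\n');

  return generate(prompt);
}
```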
Deep: Well, let's [00:25:00] talk about that part of the system. So you've got this RAG system, you have an index built, I think, of your graph, right? Your component graph with the textual descriptions plus the code itself, it's all in there. And maybe you create embeddings at the component level so you can fetch the most appropriate component.
Something like that.
Laly: Something like that, except we don't have the implementation in there at all. We only have the API and the metadata, like the documentation.
Deep: Oh, you don't go into the code in there?
Laly: We don't go into the code for two reasons. First, we don't want the actual implementation to be exposed
to the model.
Deep: Wait, explain to me why that is. Like, you don't want the model to... well, I guess you probably already dealt with it by exposing it to the model when generating the metadata descriptions about the code in the first place.
Laly: Well, if it's something that the AI generated, then yes, it generated the component and its description and tests.
But then we're not [00:26:00] using it to train the model further or anything like that, so it's not available to it anymore. It was just available to it while it was creating it or using it, and then that's that. And if it's a component that you wrote, then it doesn't ever see its implementation.
Only its API and docs. This is what we index. That's it.
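A minimal sketch of what goes into such an index under that constraint: only the API surface and docs are embedded, never the source. The shapes and the `embed` helper are illustrative assumptions.

```typescript
// Sketch: turning components into index records for retrieval. Only the API
// and documentation are included; the implementation never leaves the
// customer's code base.

interface ComponentForIndexing {
  id: string;
  apiSignatures: string[]; // e.g. ["calculateTax(subtotal: number, region: string): number"]
  docs: string;
  sourceFiles: string[];   // exists locally, but is deliberately NOT indexed
}

interface IndexRecord {
  id: string;
  text: string;      // what the retriever searches over
  vector: number[];  // embedding of `text`
}

async function toIndexRecord(
  component: ComponentForIndexing,
  embed: (text: string) => Promise<number[]>,
): Promise<IndexRecord> {
  // Note: component.sourceFiles is intentionally excluded from `text`.
  const text = [
    `Component: ${component.id}`,
    `API: ${component.apiSignatures.join('; ')}`,
    `Docs: ${component.docs}`,
  ].join('\n');

  return { id: component.id, text, vector: await embed(text) };
}
```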
Deep: Yeah, that kinda makes sense, right? You don't necessarily need to know how something was implemented. You need to know its API and maybe properties of its execution, performance numbers, something around the implementation.
But I could imagine that, at the time of assisting the developer with componentization preparation, you might dig into the code itself to help them characterize it, for example.
Laly: Yeah, that is true. But again, it's very important for us to maintain our customers' IP, you know, so that's one of the reasons.
Deep: [00:27:00] Oh, I see.
Yeah, because you might be sharing this universe of components, like your graph is not company-specific, it might be cross-company?
Laly: No, no. Our graph is company-specific, a hundred percent. We're not exposing anything; if you have a certain component, then no one else will know about it.
Unless you choose to make it public.
Deep: Okay. But you do have a public representation, or the possibility of one.
Laly: We do have this possibility; a lot of our components, our company's components, are open source. But yeah, in any case, as you said, the AI doesn't need the implementation details. In fact, it's just noise.
All it needs in order to compose components is the APIs, maybe usage examples, maybe docs. That's that.
Deep: Yeah, I mean, that jives with intuition, right? Like when I look, you know, for the best open source library that does X, Y, and Z, Google's not typically making that decision based on the innards of the [00:28:00] code.
It's based on the documentation, maybe popularity indicators, a bunch of other stuff.
Laly: Yeah. And how much are we really looking into the implementation of the libraries we're using? I don't really, I have to say.
Deep: I mean, once they pass a certain bar of usage, right?
Laly: Exactly.
Deep: Interesting, that's cool. So do you guys go through, then, and componentize all the popular open source libraries so those are also available to your clients and they can know how to use them?
Laly: We've done that with several libraries, but it's not like a huge project we've taken on.
Deep: But it seems like it should be.
Laly: I agree.
Deep: Either you take it on or your customers take it on, right? Because part of your value proposition is helping your individual developers become more efficient, so those options should be there and available to them. Otherwise, right now they have to leave your product to go figure out which open source projects to use and integrate.
Laly: We have [00:29:00] developed a lot of open source components ourselves, so we do try to make a lot of open source components available to our customers and, you know, basically to everyone.
Deep: It seems to me like, you know, I'd want all the Apache projects, all the high-quality Apache 2 projects,
Laly: all
Deep: the high-quality projects.
I would want them in your library, in your graph representation, because somebody's sitting there trying to do something. Sure, they can use their own company-specific weirdo way of logging something, but why, when there's something better that the world uses? So, on the show we have three main vectors of stuff that we ask about. We usually talk about what it is that you do; I think we've covered that pretty well. We usually talk about how it is you do what you do;
I think we've covered that to some extent. But one of the things that's a little bit harder, and I don't know if there's an obvious answer here, is the should: if you guys are tremendously successful, should you have been? And that touches on the ethics of whatever it is the folks who come on the [00:30:00] show do.
The reason we do this is that there's no shortage of AI fluffery out there, but there's a relative dearth of people trying to figure out, should X be built? So what do you think are the core ethical questions in your world? It could be anything, from, you know, we're making developers really dumb because the AI just does everything, to what happens to the new generation of developers, because they never get a chance to do any kind of junior dev task and they're expected to be a dev manager on day one.
It doesn't have to be unique to Bit, it can be broader, but what do you think are some of the ethical questions out there right now?
Laly: Well, one thing that comes to mind is keeping humans in the loop. In the long run, components are probably the easiest way to keep humans in the loop, because think about AI bloating code bases all over the place, writing code. It's not really nice for anyone to go into [00:31:00] someone else's code, right?
Deep: A controversial, but utterly agreed-with, opinion.
Laly: You know, so when AI wrote the code, and especially when it's not wrapped up nicely as a component, documented and tested, with a lot of things that can help you understand its functionality, I see it as: if we didn't exist, then code bases would just run wild and get out of human control.
And I think that's a very scary prospect. When you keep them as component graphs, they're easily scalable and they will continue to be understandable and explainable to humans.
Deep: Let's dig into that a little bit. Tell me what you think the actual fear is, maybe not even with respect to your solution, but in general. What do you think is the problem with runaway code?
Because I get that everyone's using it, but humans are still the ones going off and using the LLMs, whether [00:32:00] it's in Copilot or whatever. They still exist on a team, and the team's responsible for the code. If something breaks in their system, they're still getting their sev-one alerts and stuff.
Alarms are going off and they still have to fix it. So unless they screw up in the context of their own particular code, the code their team is involved in, does it matter?
Laly: Hmm, okay. I think it does matter, because we're a lazy species; it's very easy for us to just outsource whatever we can.
And especially with coding, which isn't an easy task, it's very easy for us to just say, okay, let it handle it. And with the development of agents and the fact that we can outsource or delegate so many of the tasks today, we might find ourselves giving more and more authority to agents.
It's not authority [00:33:00] exactly; there's another word for that.
Deep: No, I think "agency" would probably be overloading the term. But if you think about how stuff gets done in a company or an organization, there's a deadline, someone promised something to someone, things gotta get done.
There's a natural pressure that gets applied to developers. Developers are sitting there trying to build the thing. In the last year or two we've shifted the world from one where they could just really muscle through and build stuff to one where it's really tempting to just cut-paste whatever o3 says, stick it in your code base, run your unit tests, and it works, and, like, whatever.
I mean, I think that's probably happening millions of times a day right now.
Laly: Yeah. But that was Stack Overflow, by the way, right? Copy-pasting.
Deep: Oh yes, but the problem's bigger now. And I think you're right, like when you start thinking about the stuff you mentioned earlier.
Is it Lovable? The ones where you vibe code: you just put in some [00:34:00] blah and then it just sits there and churns, and it tries to write some code, tries to generate, tries to fire it up, run some tests, and eventually you get a little mobile app or whatever.
Laly: Yeah.
Deep: It does feel like, once you start chaining these reasoning steps together, we're creating an awful lot of code that requires other code to go in and even make sense of it. And sometimes it's just not worth the human time to even look at the stuff. I do this all the time.
I'm like, I'm just trying to get this to work. It's not worth my time to go in and look at the code it just wrote. I'd rather just take the exception, throw it in there, and round-trip it, and if it fails after the second time, I'll go to o1 and it works.
And I think with the next models coming out, the failures just won't even be there.
Laly: Yeah.
Deep: There's something I call the Homer Simpson problem.
Homer Simpson is, yes, a human in the loop; he's just a complete idiot in the loop. And I fear a world... like, if I was studying computer science right now... actually, it doesn't even have to do with computer science. It has to do with anything where machine learning and AI systems are taking over significant human faculties and parts of people's jobs.
We're currently in this world where the humans whose jobs are being parceled up, with tasks taken away by machines, still know how to do things. But if you fast-forward a generation or two, I don't think that will be the case. Because if you're coming straight out of school today, it's not like the old days where you were given really junior dev tasks.
You're sort of expected to be an engineering manager on day one, 'cause you're orchestrating across all these agents doing little specific things for you.
Laly: Exactly.
Deep: So I don't know if modularization and encapsulation is necessarily the solution to that problem.
Laly: I agree it may not be the whole solution, but I think it may help us mitigate the risks of agents running amok, in terms of, okay, you can, for example, modularize their permissions or access. You can say, okay, you [00:36:00] can only access these components, or you only have permission to perform actions on these components, right?
And the fact that it's modularized makes it less risky, if that makes sense.
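A minimal sketch of that component-scoped permission idea for agents. The permission model, names, and component ids are hypothetical, purely to illustrate scoping an agent's authority to a set of components.

```typescript
// Sketch: constraining an agent's authority to specific components and
// actions, denying everything else by default. Illustration only.

type Action = 'read' | 'modify' | 'publish';

interface AgentPermissions {
  agentId: string;
  allowed: Map<string, Set<Action>>; // componentId -> actions the agent may take
}

function isAllowed(perms: AgentPermissions, componentId: string, action: Action): boolean {
  return perms.allowed.get(componentId)?.has(action) ?? false;
}

// Example: an agent that may modify the shopping cart but only read the
// tax calculator; anything outside its list is denied.
const agent: AgentPermissions = {
  agentId: 'refactor-bot',
  allowed: new Map([
    ['commerce/shopping-cart', new Set<Action>(['read', 'modify'])],
    ['commerce/tax-calculator', new Set<Action>(['read'])],
  ]),
};

console.log(isAllowed(agent, 'commerce/shopping-cart', 'modify')); // true
console.log(isAllowed(agent, 'auth/user-service', 'read'));        // false
```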
Deep: Yeah, I think I buy that argument to some extent. I just think the human will to get a problem solved ASAP will trump whatever constraints your componentized world has put on them; I think they'll just go with whatever.
Laly: Yeah. We want the answer now, we want things easy and comfy.
Deep: There's an analogy here, where there's no shortage of development teams that operate in one of two modes. One mode is: I get to a campsite, I have to clean up the code.
I have to refactor it, I have to make it look good, because it stinks and I need to get Project X done. So I get my thing done, but only by making the campsite cleaner. That's a good team. A bad team is: I go in, I hold my nose, I go ahead and get my feature out, I tack it onto the 300th [00:37:00] branch of the conditional statement, and then I leave.
That's a bad team. And I feel like in the AI world there will be comparable good-team, bad-team scenarios, and maybe what you're saying is that the good teams will be embracing more of this modularization and componentization, and for the bad teams this is just the new version of cut, paste, and move on.
Laly: Yeah. The ones that are more risk-aware might embrace the componentized version.
Deep: Yeah, it's an interesting idea. So, thanks so much for coming on the show. It's been really awesome having you; it's been a really fun conversation. I'm gonna end with a question I kind of always ask. I know it's really hard in your world, because I'm in your world largely too, but move out five, ten years into the future and give me the scary scenario of what it looks like and the happy scenario of what it looks like.
Laly: Mm.
Deep: Nothing to do with your company necessarily, but software and development at large. [00:38:00]
Laly: Software and development. Wow, that's a really interesting question. First of all, there are a few trends here that come together. I think there's the democratization of software development: the better the models become, the more people will be able to build software, right?
With no...
Deep: We're already seeing that.
Laly: Build and deploy and everything, and it'll work. So that's one trend. Another trend is the solopreneur trend. I think more and more people who had this technical limit that they had to cross will now be able to do that.
So I think we will see more and more solopreneurs building their own little apps and websites.
Deep: Oh, I love that recognition. I think that's very true, because if I look at myself, all the weak points I had three years ago in my development or data [00:39:00] analysis abilities have pretty much all been mitigated.
I'm still not an expert in certain areas. I've never been good on the DevOps side, for example, but it doesn't really matter anymore, right? I have my new best buddy, and me and GPT go figure it out. Similarly, maybe you have developers who don't have good product ideas or didn't have a product person to riff on; now you can get a lot of that
from the strong models.
Laly: Yeah, exactly.
Deep: So what do you think is the nihilistic view, the worst case, the dark doom-and-gloom view of what can happen in this world?
Laly: Oh wow. For software development, first of all, we're seeing today that AI can do so many of our tasks, and of course it's going to get better and better at doing them.
I don't think that even in five years software development is going to look anything like it looks today. I don't think we'll be writing code, hardly writing code, to tell you the truth. We'll [00:40:00] be more like orchestrators, and that's going to go away at some point as well.
More like orchestrators: overseeing outputs, evaluating outputs, taking care of the pipeline. But I don't really see us writing code anymore in the traditional sense. What do you think?
Deep: I think it's such a hard question to answer, 'cause I have so many people I do AI consulting for, and people are always trying to glean what the future looks like. And it sort of depends on what kind of day I'm having. On a good day I wake up and think we're gonna be able to refocus humanity around some big problems that really need a lot of attention.
Like health, the environment, stuff where we could use an infinite number of R&D resources to do all kinds of things. And being able to take maybe, you [00:41:00] know, a C-grade scientist and make them an A-plus-grade scientist, and take an A-plus one and make them even better, is all gonna help and make the world so much better.
On a bad day, I wake up and think 90% of all good white-collar jobs could be gone within five years unless they're in a heavily regulated industry; government overreacts and starts regulating the heck out of stuff for no reason just to create fake jobs; people can't find a better way to be; they get laid off; the economy goes to the toilet; we all end up at each other's throats.
But part of me thinks humans have an untapped ability to value the most esoteric things.
There's been a massive commoditization of cheap stuff. Let's take shoes; shoes are a good example.
Laly: Okay?
Deep: Forty years ago, maybe 50, 60 years ago, getting a pair of shoes was a thing that was functional, that worked for you. Nowadays, in the Western world, you can always go get a pair of Payless shoes or something relatively cheap.
Humans will still pay, what, $800, $900, a thousand dollars for a [00:42:00] pair of shoes that was handmade in Italy by some artisan. So people's desires are a function of accessibility. And to the extent that AI starts making things that today are inaccessible and harder to get, we'll come up with other things that maybe have a marginal benefit.
I would argue a thousand-dollar pair of shoes has a marginal benefit over a $10 or $20 pair of shoes at Payless, but nonetheless the economic disparity in what you pay for it is still there. So I think there'll probably be some equivalent. Maybe there'll be restaurants where, yeah, sure, there's a bunch of robots that make your drink, but it'll be dumb.
Nobody wants to go there. People want to go to the place where there's a cool bartender that chats. So maybe some of those human-heavy, emotionally heavy tasks are more valued. You know, I dunno if you're a Star Trek nerd like me, but there was a character, I think it was in the Voyager series, Whoopi Goldberg played this role, and [00:43:00]
she was just extremely high on the emotional quotient, the EQ, and didn't really do anything, just kind of hung out and talked to the crew, but was the most valued crew member. I feel like that kind of stuff, we'll figure out a way to, you know, monetize that.
Laly: Assign value to human-made stuff.
Deep: Yeah, right.
Laly: Yeah. Especially maybe in the art world, you know, because AI is taking over the art world as well, right? It's creating art, it's creating music, it's creating all sorts of things. But maybe there will be more value assigned to things that are handmade.
Deep: Oh, huge.
I think you're already seeing it, right? There's a huge resurgence of artisanal movements in everything. Nobody wants to go drink a Bud Light anymore, at least not in Seattle; nobody would be caught dead drinking a Bud Light. They're all going for some craft beer that has a story, they wanna go to something really small, they'll go to Etsy to buy their [00:44:00] presents.
Nobody wants to order something generic. So I think, in a world where, let's say, the robots are amazing and our food production costs plummet, where food is a negligible percentage of your monthly income, will people just go out to fancy restaurants?
Maybe. I would probably bet so, you know?
Laly: Yeah, it's an interesting question. But you know what, if I go back to your bad-day vision, I'm sure we'll need to start thinking really seriously about UBI, about universal basic income, really, really soon.
Deep: The problem I have with UBI is, I feel like the most unhappy people I know are the ones who wake up in the morning and know that they have all the money they need, and it creates an existential quandary. Like my dad: I remember I was in my mid-twenties, [00:45:00] I had been working for a few years, and I decided to go back and get my PhD or something.
And I had this super annoying PI, and I remember chatting with my dad. He listened to me whine about my woes, and he said, look, your problem has nothing to do with finishing the PhD or not finishing the PhD or grad school or any of that. And I said, well, okay, what's my problem?
He's like, your problem is very simply that you have figured out a way to be happy and to exist on so little money that you've over-inflated the worth and value of your career and your intellectual pursuits, to the point where you've lost the simple ability to wake up and know why you function that day.
And I said, what are you suggesting? He's like, what am I suggesting? Like every good Indian dad: get married, buy a house, and wake up in the morning so you know why you go to work; you go to work to pay the mortgage and provide for family. That simplicity is kind of amazing. And UBI takes that away to some extent.
I mean, it depends; it's a function of how much UBI you give out.
Laly: Exactly.
Deep: Right. I mean, [00:46:00] I'm not arguing with the basics. But I think if you look at a place like Kuwait, where there's a massive UBI, for men at least, I don't think it breeds a super healthy culture on some level. And that's not to pick on the Kuwaitis necessarily, but I think if you compared it to a place like Norway, which has a comparably sized oil wealth fund and didn't just give it to everybody,
I think you'd find that people are happier there. So money's kind of weird that way, you know?
Laly: Yeah, I agree. And I agree that it can breed an existential crisis; people need to have their why. And as you said, it will depend on the sum. If it really just covers the bare necessities, then people will still have the drive to go out there and earn some more and create some more,
I think.
Deep: I think that's right. The question I would ask there is: what happens when, not even just basic necessities, but what we define as work today is completely done by bots? [00:47:00]
What's left after that?
Laly: Yeah. The whole concept of a job,
Deep: it's gonna change.
Laly: It changes, yeah. And I think,
Deep: Maybe the utopian... I probably overuse these Star Trek analogies, but Star Trek kind of paints this utopian picture on some of the planets they visit.
There's always the planet where everybody's happy, everyone's got nice bright clothes, and they all just sit around and make pottery all day, right? Because it's such an advanced society and the computer does everything, they just make pottery and paint. And, I don't know, maybe that'll be enough, maybe intellectual pursuits will be enough, but I fear that they won't. I fear that people need something more directly associated with making them feel alive, and putting food on the table makes you feel like that.
Friction, right? Fighting and killing each other makes people feel like that, which obviously I'm not advocating, but it does make people feel alive when they're in these horrible situations. So I fear there's a positive worldview and a negative one.
It [00:48:00] feels like a weird world we're entering, you know?
Laly: Yeah, for sure. I mean, do you remember, I think it was in the first Matrix movie, where they said that the first version of the Matrix they created, or the first versions, were like paradise? Everything was fine, everything was frictionless.
And then it didn't work.
Deep: Did they say why it didn't work? 'Cause everyone got too bored?
Laly: Because everyone got too bored. Or people didn't buy it. People didn't buy the whole...
Deep: Oh, they didn't think it was real.
Laly: Yeah, something like that. I need to look it up.
Deep: There's an old saying,
not to throw the Swiss under the bus, but 500 years of peace and democracy in Switzerland led to the cuckoo clock, and 500 years of death and mayhem and murder in Italy led to Leonardo da Vinci and all this amazing innovation.
So, we'll see. Anyway, thanks so much for coming on the show. This was really good.
Laly: Thanks so much for having me on.