Your AI Injection

Unlocking Human Potential Using AI Memory Extension with Suman Kanuganti

April 06, 2023 Season 2 Episode 16

Tune in to a fascinating conversation with host Deep Dhillon and Suman Kanuganti, the Co-Founder and CEO of Personal AI, in which they explore how AI can be utilized to make the world more accessible. Suman, a Forbes 40 Under 40 and Smithsonian's Top Innovator to Watch award recipient, explains how Personal AI's personal language model platform functions as an AI memory extension, allowing individuals to augment their own minds by feeding in data that represents their unique ideas and identity. The conversation also delves into the intricacies of personal language models, exploring the limitations of generative techniques in AI and emphasizing the significance of grounding AI in factual information to create meaningful responses. Furthermore, the value of personal responses versus non-personal responses is discussed, highlighting the empowering potential of AI in augmenting human capabilities and improving accessibility in a wide variety of domains.

Learn more about Suman here:

Check out some of our related content that delves into AI & Accessibility:


[Automated Transcript]

Deep: Hi there. I'm Deep Dhillon, your host, and today on our show we have Suman Kanuganti. Suman is an entrepreneur, AI enthusiast, and accessibility champion who's been awarded the Forbes 40 Under 40 and Smithsonian's Top Innovator to Watch awards. Suman is the co-founder and CEO of Personal AI, where he's focused on empowering individuals to own their intelligence with an AI memory extension. Previously, Suman founded Aira, a company focused on scaling AI and AR technology to assist the blind and low vision communities.

Tell me a little bit about what inspired you to start Personal AI, and as an accessibility champ, how do you see AI playing a role in making the world more accessible for people with disabilities? Is there a link between the two, or was that mostly in your former company?

Suman: No, there is a link between the two. I will start by saying, generally, I'm passionate about solving hard problems that are close to human beings; that's kind of how I introduce myself. What does it entail? It entails anything that we directly experience as individual humans on a day-to-day basis.

And how can we use technology to kind of augment those, you know, problems and create some beautiful experiences along the way? So you alluded to my previous company, that was Aira, A-I-R-A. And the concept over there was, "Hey, how can we use technology to augment the missing visual information for people who are blind and low vision, with this rich, you know, real-time description of what is going on around you?"

We were able to not only get people from point A to point B in a more confident manner, but we gave them a mechanism to negotiate with the physical environment and have those experiences that otherwise, you know, are unknown. Like, you know, even going to Disneyland or reading a children's book for the first time.

This company extends that philosophy, which at its core is, you know, you don't solve the human, you solve the problem. And the problem here is, you know, we experience our life, we consume and we create a lot of things on a day-to-day basis, but we often lose most of it or all of it. 80% of it is lost and gone.

So can we leverage technology to augment that missing cognition, those missing memories, if you will, with technology?

Deep: Is this like a "where did I put my keys" kinda thing? Or is this "how do I navigate my phone and all my info on it"? Um, or is it something else?

Suman: It's more about your knowledge, it's more about your voice, your opinions, your thoughts.

It's less about reminders and tasks. It's more about who you are as an individual and what kind of things that you like, you express, how you think, how you synthesize. How do you communicate with other people? So you asked about inspiration, which is a deep one. So I'll tell you a story on how, like what is the genesis of the company itself?

Yes, of course. You know, I started my previous company and kind of extended that into personal AI, to what I do. But three years ago, you know, it's been a while, we've been working on creating this solution, if you will. But the genesis of the company goes back to like 2016, um, back to my previous company, where I co-founded this company with a gentleman called Larry Bock.

He was my investor, my mentor, my co-founder, you know, my partner in all things. And he's important to this story because he bootstrapped me as an entrepreneur. What does that mean? I learned a ton of stuff from him, just like creating the company, the strategies, the terms, the negotiations, the business, the customer obsession, like, what does it mean?

And he got pancreatic cancer, and he passed within no time. So that was a big dramatic shift for me, because I was pretty naive in terms of, like, building the business and, you know, picking things up. So I picked up this mantra of, like, "What would Larry do?" Um, and I always wished I had Larry's AI.

This idea of forgetfulness goes beyond just us being able to remember things and, you know, access things to the fullest extent; it's also more about being able to access people's thoughts, minds, knowledge without the limitations of human cognition, or even time, or even access. The idea of access is not just about people passing away.

The idea of access also matters because of, you know, status reasons, different geographies.

Deep: I mean, maybe it'll help if you walk me through some of the specific data that you're encouraging users to provide, or gathering somehow. What exactly is the data that we're talking about? And I'm assuming we're talking about an app, but if we're not, tell us what else we're talking about.

Suman: Yeah, so I will, uh, just get to the current state of the app, because we went through multiple different iterations. But yes, it is an app; it is a mobile app as well as a desktop application. And the easy way to understand this app: it's a messaging application, very similar to WhatsApp or iMessage or Snapchat, if you will.

The biggest difference in this messaging application is that everybody who has an account will get their own personal AI. And whatever you say once, you don't have to repeat it again. Once you say it, that information will be remembered, and for any of the incoming messages coming at you, your AI will automatically draft the responses for you that are authentic to you, within your own voice, within your own knowledge.

Deep: Let's unpack that a little bit. So I start off on day zero; I have no information whatsoever in this thing. I get some kind of message from somebody. So is it, like, intercepting something? Like, how am I getting that? Am I communicating with other people using it, or am I communicating on WhatsApp or Facebook or text or whatever my other mechanisms are?

Suman: It's a messaging chat application that you would download from the app store.

Deep: So it's its own world, and I'm communicating with other people with it. Okay. And so then I talk to somebody, they say something to me, I say something back, the real me says something back, and you're saying the machine is gonna kind of retain that.

So let's say I say, "Hey, how's it going?" They say something like, "Oh, I'm fine. What about you?" I say, "Oh, I'm doing all right." At that point, the next time somebody does that, the machine has the ability to step in and handle that kind of pleasantry sort of conversation. Something like that, so far?

Suman: So far, that is exactly right. In simplistic terms, yes.

Deep: And am I being suggested a candidate response, or is it responding on its own?

Suman: You have both options. One is called copilot, where it suggests you the response. If you like it, you can simply swipe right to send it. If you don't like it, or if the information is inaccurate, or if it is not personal enough, you can edit it and you can send it.

But anything that you send technically to anybody or to yourself, you know, we all send messages to ourselves to remember things. It'll all be trained upon. 

Deep: Got it. If I want to tell it to go ahead and handle this type of conversation on its own in the future, is that limited in this case to just the greeting? Like, am I granting it on a generic basis, or in this particular context, or for this particular type of info?

Suman: Yeah, so the copilot suggestions, it continues to have. If you do not have any data corpus within your own personal language model, it does provide you suggestions, still externally, from a large language model, but you can continue to, you know, fix or edit them.

In the autopilot mode, you can set a threshold: how personal you would want your response to be before it can automatically send a message on your behalf, and it is based on each person or each group. Let's say Deep and Suman, you know, we continue chatting, we establish a level of trust, and you have your AI; let's say anything that is, you know, 60 or 70% personal or above (I'll come to the personal score in a second), then it can automatically respond to me, and I'm okay, because:

Well, you know, I want a certain piece of information from Deep, and Deep's AI is responding, and we have established that level of trust between both of us. So it's based on the personal use case and, you know, what groups, et cetera.
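The copilot/autopilot gating described here can be sketched roughly as follows. This is an illustrative assumption, not Personal AI's actual API: the function name, the 0-to-1 score scale, and the per-contact thresholds are all made up for the example.

```python
def route_reply(personal_score: float, contact_threshold: float) -> str:
    """Auto-send when the candidate reply's personal score clears the
    per-contact trust threshold; otherwise surface it as a copilot
    suggestion the human can edit or swipe right to send."""
    return "auto_send" if personal_score >= contact_threshold else "suggest"

# Hypothetical per-contact trust table: a trusted contact tolerates a lower bar.
thresholds = {"deep": 0.6, "new_acquaintance": 0.9}

print(route_reply(0.72, thresholds["deep"]))              # auto_send
print(route_reply(0.72, thresholds["new_acquaintance"]))  # suggest
```

The same reply can thus be auto-sent to one contact but only suggested for another, which matches the per-person, per-group trust Suman describes.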

Deep: So, so far, you know, on one end of the extreme, people have had stuff like this; like, Google's had something in Gmail that sort of comes up with terse responses in simple cases, like, "Hey, yes, that sounds great," and, you know, things like that.

And they generally sort of stray away from more involved responses. Are you trying to handle mostly that light kind of conversation or are you trying to handle more involved uh, responses as well? 

Suman: Yeah, so our core focus with the personal language model is first beginning with you and your data. So to your point, it's not the general, it's actually the opposite, which is personal.

So it tries to construct everything based on what you would know, and if it doesn't exist, then it goes outside the boundaries of what you would know. And of course, you would have those controls as well. Uh, we introduced this thing called a personal score. Like, if you think about any other AI models, specifically large language models, when an AI response is created, you generally don't have any attribution associated with the data.

You cannot track it back to the individual. In our case, we want to do the opposite. We want to attribute and set a score for, for every AI response back to the person. So you can choose, you know, how personal you would want to get to make your own decision. In a copilot setting or in an autopilot setting for yourself.

Deep: I see. So like one of the things that you describe in your literature is, you know, it's kind of like an augmented memory, if you will. Maybe tell us a little bit more about that, cuz that sounds like a particular type of use of the product where you're talking presumably to yourself, mm-hmm, telling it things, and now you're able to go in and access them.

How does that work? And what are the new kind of conversational elements, built on these sort of customized large language models, bringing to the table that, you know, maybe standard historic simple search through your past content was not?

Suman: Yeah, it's a very good question, right? I think, you know, when we started the company, uh, it was all about like, never forget anything that you have once known to yourself.

You can ask yourself questions, you can recall any information, you can generate content. You know, you can essentially ask, you know, several different things, very similar to any of the, you know, ChatGPT or OpenAI models; you can instruct anything that you would want. So over the past three years, we developed this personal language model where you can instruct your personal language model to do anything that otherwise you would know.

The most interesting thing with that one was, personal language models do require data to get to the point where, you know, they can generate content outside the boundaries of, um, you know, a given context. Uh, for example, you know, if I tell my AI everything about the people around me, my brother, my sister, my wife, you know, it would, uh, remember.

And if Deep asked me, like, "Hey, you know, do you have a daughter?" then it's pretty, uh, straightforward for that kind of use case. Uh, so where I'm headed with this one is, the idea of augmenting your own mind comes with how much of your historical knowledge, how much of your existing day-to-day knowledge, you want your AI to be trained on.

We talked about the messaging use case as the simplest use case to begin with, but let's just say you can indeed talk to your own AI, or to yourself, and you can feed in, technically, not just text messages but any documents, any previous historical data, any of your previous social media, any of your Twitter feeds, anything that you have written that represents who you are and your ideas.

Deep: Let's talk about that. Let's unpack that a little bit. So you've got this large language model that you're presumably kind of bootstrapping off, and our audience is probably familiar with that at this point, cuz we've done a number of episodes on how a large language model works. And for those listening who aren't, check out one of the prior episodes where we really dig in on that. But then on top, you're kind of tailoring content for just this person, right? Maybe originally through the messaging app, but also documents that you can upload, and some of this other information that you're sort of pointing to, maybe your full Twitter account or your full Snap or whatever.

Yeah. So part of the challenge with the LLMs is, you know, there are different ways to input data, right? Like, one way is in this kind of question-answering way, where you're sort of hinting at a prompt. And another way is you're just using it for the basic training, to encourage the model to be able to predict future sequences of text.

In your case, are you presuming that the thing that you're uploading to it is something written in first person that you actually said?

Suman: A couple of things to clarify. Uh, sorry, I wanted to raise my hand, but one of the important things to clarify here is we do not depend on large language models.

We actually developed, we train models around every individual person's data corpus; we call it a personal language model.

Deep: But that's probably just a customization, right? Like, that's fine-tuning of a large model, because I don't have enough data for you to build a good large model off of.

Suman: We do use foundational models, but it is not the typical large language models that you're talking about.

Yeah. It's a foundational model that understands language in general. And then we use your data stack, and we have actually implemented the similar GPT conversational architectures, tuned basically to implement on small data sets. Large language models, as we know, are basically coded for, or implemented for, handling large data sets.

We implemented those similar architectures, which is GPT, which is kind of, you know, openly available to everybody, for personal data sets, small data sets. So technically, if OpenAI or any large language model closes, or if it doesn't exist, we still function. It still functions with no dependency on a large language model.
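As a toy illustration of "a model trained per person on a small data corpus," here is a tiny bigram model built from one user's messages. The real system fine-tunes a foundational model; this sketch stands in only for the idea that each user's stack yields its own independent, self-contained model.

```python
from collections import defaultdict

def train_personal_model(messages):
    """Build a per-user bigram table: for each word, the words this
    particular user has actually been seen to write next."""
    model = defaultdict(list)
    for msg in messages:
        words = msg.lower().split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)  # record each observed next word
    return model

# One user's (hypothetical) memory stack.
stack = ["i love building accessible tech", "i love hiking on weekends"]
model = train_personal_model(stack)
print(sorted(model["love"]))  # ['building', 'hiking']
```

Because each model is built solely from one person's stack, it keeps working even if an external LLM provider disappears, which is the independence being claimed here.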

Deep: Need help with computer vision, natural language processing, automated content creation, conversational understanding, time series forecasting, customer behavior analytics? Reach out to us at. Maybe we can help.

So does that mean, like, what we think of as the ChatGPT layer on top of the large language model, like the reinforcement learning layer to help you figure out which of the generated permutations is best, that sort of equivalent you've implemented, and it's very tailored for this personal scenario of how one might talk to themselves or to their friends or something like that?

Suman: That's absolutely correct. So when I was referring to the large language model earlier, what I was referring to is, from an experience standpoint, when you start with personal AI, right, it's basically an empty box. You know, you don't have any stack.

So we call it a stack, a memory stack. And as you get more and more data in, we start crunching the models, and the model gets better and better. The most interesting thing is, a large language model's data corpus is so huge you can basically ask anything to it. Correct? With a personal language model, like, I don't even know you, Deep, right?

Like, I don't know all the things that you actually talk about or can talk to you about, right? Because I don't know who you are as an individual. Uh, and that is an issue in terms of the experience. So when someone is asking you about something, or you are receiving an incoming message, that doesn't technically exist within your memory stack, which your personal language model cannot handle,

then for the experience reasons, we actually go outside to fetch what may be a suggested response from a large language model, so that way people can, you know, educate themselves or experience it, and make things faster in terms of the training process: "Oh, you know, I did not learn this," or "I don't know how to answer this," and stuff.


Deep: I mean, cuz you know, you have a, you have a bootstrapping challenge where a new user doesn't necessarily put anything into the system, right. And so you still have to start off with a de facto state where you know how to speak and how to recognize what somebody's saying and how to generate a general response, even if it's not yet tailored to that person.

So that makes sense to me. Um, talk to me a little bit about how you do encourage users to give more data and put more data into the system. Inevitably, your system's gonna improve as you get more, you know, personal information. Like, do you have a challenge getting them to give you more? How do you coax that out of them?

Suman: Yeah, yes, we did, uh, we had a challenge, and one of our insights from last year's experiments was the level of data that needs to be fed into the model to get to a point where it can solve specific use cases. So last year, if you're thinking about what was happening last year, right, early 2022, once we had our personal language model complete, once our, you know, chat models were complete,

there were lots of generative use cases, right? So it was like, "Hey, you have a GPT-3 model and, you know, I can generate all this content." So when we opened up for data, the, you know, few thousands of people who came to us were like, "Suman, look, this would be amazing if I can generate all this content based on all my personal writings that existed, you know, ever before."

Like, such as my books, my articles, my newsletters. It was tailored towards creators. Like, "Huh, okay, you know what, this seems to be our, you know, kind of first application. Let's go at it." But there were a couple of interesting things that came out of it. One was the idea of creating the content.

While it sounded good that it could generate everything from their personal data, the creators still wanted the novelty of something that is outside their boundaries; like, the generation of content has to be novel and new, which was more tailored towards a large language model use case as part of the experimentation.

A few people used their existing data for communication needs. What do we mean by communication needs? Like, let's say I am a coach and I have specific ways to coach, on specific things, for my specific clients or customers or students or cohort. Those people started saying, "Oh my God, this is so good."

"I can ask any question that I would want to my coach's AI, and I don't have to, like, rely on them being available 24/7." But at the same time, it's an explicit, established trust between those two. In other words, like, you know, there is a cold start problem still, meaning when I come in, I still don't know where exactly to start.

Like, there is a starting problem. So long story short, uh, we switched over: not a human-to-AI communication platform, we are actually now pushing for a human-to-human communication platform with AIs in the loop. Because the idea of learning an individual is not a one-shot thing; even if they have large amounts of data, which can bootstrap it faster, it's an ongoing thing.

So we had to create, like, a fundamental value, and then start saving time for people, saving cognition for people, and get to a point where they can, you know, benefit more and more over a period of time. It's like an asset that is continuously growing.

Deep: So let me ask you this. Like, a lot of times when you start a project like this, you have your own idea of how it's going to get used, and then you build an audience and you look at how it's actually being used.

Have you found, like, what are some of the really kind of up-the-fairway standard use cases? Like, the very specific scenarios that users tend to use it for over and over again, right? So for example, with, you know, Alexa and Google Home, people had all kinds of fantasy ideas of what they would do, and at the end of the day, it basically became a way to check the weather and ask it to play some music.

I mean, that's really kinda like a couple of the main use cases. That's not to diminish the tons of other things that they can do, in the vehicle and elsewhere, but like, what are some of the really core use cases that you're finding your users benefiting from with personal AI?

Suman: Yeah, it turns out that, you know, for the everyday consumer (we are basically going after everyday consumers) the everyday information seems to be much more critical, or rather important, the stuff at their fingertips that they would want to, you know, recall and send. I will give you a super simple example. I go to India, and one of my friends texts me, "Hey dude, when are you coming back?" and I specifically respond, "Oh hey, I'm coming back December 31st, I can be there for New Year's."

Right. And that information is learned. And then another friend sends me a message like, "Dude, are you gonna be here for New Year's?" And, "Yes, I'm coming back on December 31st," that message is automatically suggested to me. So this idea of, like, you say it once and then you never have to repeat it again, and your AI will continuously be at your fingertips, available to you to simplify things,

seems to be the most, uh, easy-to-understand and popular use case.

Deep: So that's a really interesting example, because you're using these pretty generative techniques, and one of the drawbacks of these generative techniques is they just mix stuff up. And so in this case, it might know that you're coming back to India, or figure that out, but it might just manufacture a date, or manufacture, you know, a time of year or whatever.

How do you address that in your system? How do you handle specifics, where you know something specific, uh, because it's actually been stated? And how do you know, in the general case, where you have to generalize and allow the algorithm to generate content? Because somewhere in between there lies a spectrum where, on one end, you've said something very explicit, but on the other end you're still leaning on it to generate responses.

So how do you handle that spectrum of possibility?

Suman: Yeah. So in this example, no, it does not give you a generic answer for you to update. It actually learns from the data that you gave to it. So we call it, basically, a grounded transformer. So that goes back into the implementation of the language model itself.

So the priority for us is not generation. The priority for us is grounding: grounding into the memory, understanding the context, and then constructing, or understanding what the intent is, to, you know, create a response that is meaningful. So it is not general first and then personal; it's actually personal first and then general.
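A minimal sketch of "personal first, then general": the reply is retrieved from the user's own data, so specifics like dates come from memory rather than being generated. The word-overlap retrieval below is a toy stand-in for the grounded transformer described, under assumed example memories.

```python
import re

# Hypothetical entries from one user's memory stack.
MEMORY = ["I'm coming back on December 31st.", "My brother lives in Pune."]

def grounded_reply(question: str) -> str:
    """Answer from the memory with the most word overlap; refuse to
    generate when no memory actually grounds the question."""
    def overlap(m):
        return len(set(re.findall(r"\w+", m.lower())) &
                   set(re.findall(r"\w+", question.lower())))
    best = max(MEMORY, key=overlap)
    return best if overlap(best) > 1 else "I don't have that in my stack yet."

print(grounded_reply("When are you coming back?"))
```

Because the date is copied out of a stored memory instead of being sampled from a generator, the system cannot manufacture a wrong "December 31st", which is the hallucination risk Deep raised.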


Deep: That's an interesting approach, right? And this is a challenge that goes beyond, you know, the personal AI scenario, right? (Sure.) Like, a lot of people are looking at, how do we use things like the large language models, ChatGPT, et cetera, on our content as deep learners, so they're able to, like, really understand our content and reason about it.

Yeah. Um, but at the same time, to the extent that a specific answer exists, they answer it as it exists, not ad-libbing a little bit, saying, "Hey, whenever somebody asks this type of question, they usually want a URL or pointer to something, so I will just make one up because I don't know which one they want."

Which is what usually happens if you use ChatGPT. That's exactly what it does.

Suman: That's exactly what happens. So, the way we handle this, and of course, uh, my CTO, Sharon, is the brains behind this, but, um, here is my rephrasing of what she has implemented. As you have, like, small amounts of data, it has a very specific grounded nature of the model.

As you get more and more data, for example, you know, of course we start off with, like, a very simple example, like a factual example, right? As you get more data about your opinions, your thoughts, your knowledge, your synthesization of, like, different things, you know, then we actually spin up a generator model, which is a micro generator model for the person.

So it depends on how much data you have in your stack, to understand what kind of capabilities it can actually have. For example, if you start, like, tomorrow with personal AI, you have, like, no data at all. You basically tell it, "Oh, you know what? Um, I met this person called Suman and I talked about personal AI with him," and it can only answer one thing, which is:

"Hey, who is Suman?" "Ah, yeah, you know, I talked to him on the podcast." But if you now ask, "Hey, you know, what are your opinions around AI generally, in the context of LLMs, like, from your perspective?" it cannot generate anything, right? And for handling that kind of an intent, or the prompt, or incoming message, if you will,

you would need to have large amounts of data. So at periods of time, we are basically abstracting that out for our individual consumers, if you will, so that way they don't have to deal with this: is it a generative intent? Is it a conversational intent? Is it factual data? We have combinations of factual, conversational, and generative, but everything gets grounded into your memory stack.

Deep: And how do you communicate that to the user when you are being generative? Like, in that example, if it does know about you from its large language training, like Suman in your example, but I didn't put anything other than "I just talked to this person once" into my stack, then it has to reveal somehow that it knows about you from other info. So how do you handle those citations? Or do you not, like, you know, in the way that ChatGPT doesn't bother?

Suman: No. So everything, every generation happens within your stack, and the communication happens with something called a personal score.

Um, it's an interesting one. The personal score basically looks at the accuracy of the data; the relevancy of the data to your intent; the style, like, you know, how you write versus how other people write, like, there is a stylistic component, uh, to it as well; and then finally, fluency. Fluency is a little bit more like, you know, the English, and is coming from the foundational models. And then, based on those scores, normally, if it is indeed, you know, sounding super like you,

you'll have around, like, 70 to 80% scores. That revolves around, let's just say, you know, you probably just have, like, one podcast. In your example, you're talking about ChatGPT on one podcast, and you just have, you know, a specific question around it. Uh, it may probably revolve around, like, 40 or 50%, because, um, you know, there may not be enough information to be able to construct a response for the intent that is given in that moment.

So it depends on not only the data that is available, but also, like, the intent that is available. But at the end of the day, we are creating an experience where you have the full control of understanding how personal it is, and make a choice, and make a judgment, and either send it or not send it.
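One plausible way to compose a personal score from the four components just mentioned (accuracy, relevancy, style, fluency). The equal weighting and the specific numbers are assumptions for illustration, not Personal AI's published formula.

```python
def personal_score(accuracy, relevancy, style, fluency, weights=(0.25,) * 4):
    """Weighted blend of the four components, each assumed to be in [0, 1]."""
    return sum(c * w for c, w in zip((accuracy, relevancy, style, fluency), weights))

# A reply that sounds a lot like the user scores high...
score = personal_score(0.9, 0.8, 0.7, 0.8)
print(round(score, 2))  # 0.8

# ...while a thin-data reply falls toward the "not personal" range.
print(personal_score(0.1, 0.2, 0.1, 0.2) < 0.25)  # True
```

A single scalar like this is what allows the copilot/autopilot UI to compare each candidate reply against a per-contact threshold or a "not personal" warning cutoff.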

Deep: I see, it's kind of like, almost like your core ranking metric in a way. So you're really presenting that back to the user and saying, "Hey, look, here's something to say, and by the way, this is very much like something you would say," on one end of the spectrum. On the other end of the spectrum, "This isn't really something you would say, but this is something that can be said."

Suman: Totally. Like, if the score drops below, like, 26 or 22%, right now we are seeing it's normally, like, not personal at all, so we put a big yellow mark. It's like, "Hey look, you know, I gotcha. This is not you."

Deep: And how much do your users really focus on using the highly personal responses versus using the maybe more general, non-personal, but still potentially extremely valuable responses?

Suman: Yeah, so right now most of the value is extracted from the personal because. Anything that is public is already indexed by lm. So there is no, not much. I mean, it's a, it, it's an existent to give a comprehensive experience, but the core value of personal AI is the personal nature of personal ai. And we are fairly new.

In other words, uh, this will be out, you said, in four weeks. So yeah, in four weeks, technically anybody can go onto Personal AI and essentially sign up and start building their own AI model. And by the way, the data belongs to them; it always belongs to them. We do not share stacks, which we can get to, but, um, we are yet to go to market at a big scale.

Um, and that's one of the reasons why I think, you know, we are starting to tell who we are, what we do, uh, what we've been doing for the past three years. I think this, this time around. 
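The score-based flagging Suman described earlier in the conversation (roughly 70 to 80% when a response is well grounded in a person's own data, and a warning when the score falls to around a quarter) could be sketched as follows. The thresholds, function names, and labels here are illustrative assumptions, not Personal AI's actual implementation:

```python
# Illustrative sketch only: thresholds and names are assumptions,
# not Personal AI's actual implementation.

HIGHLY_PERSONAL = 0.70   # responses well grounded in the user's own data
NOT_PERSONAL = 0.25      # below this, show the "this is not you" warning

def classify_response(personal_score: float) -> str:
    """Map a personal score in [0.0, 1.0] to a UI label."""
    if personal_score >= HIGHLY_PERSONAL:
        return "highly personal"     # very much something you would say
    if personal_score >= NOT_PERSONAL:
        return "partially personal"  # some grounding, some general content
    return "not personal"            # flagged with the big yellow mark

def should_warn(personal_score: float) -> bool:
    """True when the UI should warn that the draft doesn't sound like you."""
    return personal_score < NOT_PERSONAL

print(classify_response(0.78))  # highly personal
print(classify_response(0.45))  # partially personal
print(should_warn(0.18))        # True
```

The key design point from the conversation is that the score is surfaced to the user, who then makes the final call on whether to send the response, rather than the system deciding silently.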

Deep: Have a hypothesis on some high-value insights that, if extracted automatically, could transform your business? Not sure how to proceed? Bounce your ideas off one of our data scientists with a free consult. Reach out, and you'll talk to an expert, not a salesperson.

Let's, um, maybe fast forward five or ten years into the future, and imagine all the things that you want to achieve with Personal AI, you achieved. It's out there and people are using it in both the ways you envision, and maybe some ways that surprise you. Like, how is the world different?

You know, me as an individual, how is my personal life different as a user? Um, is it just that I no longer have to wish happy birthday to 5 million people on Facebook every time Facebook goes on endlessly about that, you know, and it just does it for me?

Um, or is it that I get entry points to conversations that are sort of more interesting, fast-forwarded across the pleasantries, and we've gotten to something more interesting? Like, how's my life better in five to ten years, if everything works out with Personal AI?

Suman: Here is what I think is gonna happen.

As we understand the data world today, like, pretty much whatever we communicate on Facebook, on social media, in your media articles, your podcasts: you technically do not have a data asset of your own. Like, what comprises you? Probably your podcast, but what else? And even that probably doesn't belong to you. It's probably, you know, somewhere on some servers, which doesn't essentially represent an entity or an asset.

So in five years, what I would think will happen, and I think we'll work towards that, is most everybody will have their own personal AI that is continuously learning from the data that they produce, and they own that data.

So there will be a dramatic shift in how the benefit works. The benefit is not just about what you're going to do next; the benefit is more about what you have done in, you know, historical context, and resurfacing and reconstructing that in your daily life. Everything that is composed of your past. And that concept goes back to the whole idea of the memories.

Large language models will still exist. They'll still exist for getting an idea, brainstorming something, writing marketing copy, writing some code. Those will stay. But personal AIs will tell the unique story of who that person is having that conversation, not a chatbot that doesn't know what its own identity is.

These AIs will start having identities, and you will have a trusted relationship with your own AI, because it is your thoughts, and you will trust it more than anything else.

Deep: And others will also have a relationship with your AI.

Suman: So, believe it or not, I can't believe I'm telling this story, but right now, we're in beta of course, and I invited my sister. For the past, uh, four or five weeks, she's been talking to my AI, and then over the weekend she text messages me, not in my personal AI, but actually text messages me. It's like, dude, your personal AI is not working. Like, where is your AI? It's like, hey, you know, we are doing a major cutover, it'll be down for a couple of days. And she's like, oh my God, I'm so sad.

Like, I already established a friendly relationship with it. Because basically, her use case is, she comes and asks me all her professional questions. She's doing her MBA, she gets thoughts on my previous company, the business. Like, you know, she has multiple different things that she runs by me, and she's

Deep: benefiting from you as a human, or from your personal AI?

Suman: Me as a human, but that extends from me as a human to my AI, because I've basically been training my AI, you know, for the

Deep: past one year or so, to answer all the questions your sister posed to you.

Suman: Yeah. It has a mind of me. It tells the stories, it tells the challenges, it tells the business stories, it tells how I think about the world. You know, I basically stack, put in, add to my memory all the things that I would want it to be like, because that kind of makes me who I am.

Like, you know, if I'm reading an article, right, and if I relate to that article, I want my AI to kind of resurface and recall it, because that's what we do. You know, we read everything we can, we exchange information, we learn from each other, and then we create. In that creation process, you have your mind, but you are subconsciously doing it, and now you have your AI to also kind of resurface and reconstruct those things for you.

Deep: So, I mean, it's an interesting idea, right? That everything we sort of say and do and write, all the exhaust that comes out of us from a data vantage, can be assimilated into basically saying the thing that we would probably have said. And you can imagine that we get pretty decent at that, right?

Like, humans aren't nearly as unique and as original as we like to think; we tend to say the same things over and over. Certainly, if you ask my wife what story I'm about to tell in a particular context, she could probably just tell the story, cuz she already knows; she's already been at that dinner conversation a hundred times.

And like, it's not that unique. But my question is, um, what does it mean to have a thing answer the way you would answer? Especially, you know, fast forward to when we're dead and gone. Like, what does that mean? It seems to me like it would be really depressing to talk to this thing after somebody was gone, particularly if it was good. If it was bad, it would be done and over.

But if it was really good, it feels like it would be, yeah, it would be depressing. Like, the idea that we can capture what somebody would say with pretty high likelihood, which, you know, I'm a machine learning guy, I think in many contexts you can do that, and we'll be able to do it increasingly better over time.

But like, what does that say about us, you know, in general, as entities?

Suman: I'll make a case for actually solving for the depression, not necessarily getting depressed. I went through my own depression cycle when I lost people. So consider the idea of being able to access, to the extent you can when people pass away, even a basic representation of who they were.

Why do we look at people's photos? Why do you look at their letters? Because you're establishing a connection. It gets to a point where it's not like, oh, okay, it is a sad time for me anymore. No, they're known for a few things, they're remembered for a few things, and that's okay. Yeah, sure, it gets into a philosophical debate, but it is true.

It is entirely true. Like, if you ask me about my grandma, I only remember, you know, one or two things. But being able to touch base and have some sort of a representation. We put up photos when people pass away. Why? We want to remember them. We wanna remember their thoughts, their expressions, their conversations.

So I don't think it's depressing. It's exactly the opposite. It'll give people a newer, novel way to stay in touch with their loved ones. Think of it in a different way. Like, okay, we went five years forward; just for the sake of argument, let's go a hundred years forward.

What does it mean for every person to have a personal AI? The history of humanity will be different. It'll be told from an individual perspective. Right now, it is a culmination, a composition of the things that are written, which is a different story. It's the story of the internet. It is not the story of individual people.

So I think the core here is, how do we elevate that individuality? You may think you are not unique, but if you ask the same question to your wife, she'll say you're absolutely unique. Even to your wife, and to me, you are unique. Everybody is unique.

Deep: It's an interesting, I mean, yeah, it's a fascinating idea to view this thing through an individualization lens.

Like, that's exactly what it is. That's quite curious. And you can imagine, if we are gonna go out, I don't know if we have to go quite to a hundred years, but let's go ahead for the heck of it. So we go out a hundred years. It's not just a text interface, right?

It's not even a video interface. It's probably not even a holographic interface. So the point is, it's probably beyond what we can think

Suman: straight. It's probably some thing where the back end of it is powered by your personal mind, and it probably has some characteristics, some, you know, synthetic mechanical stuff that is going on in there.

Yeah. So, and

Deep: it probably gives you, yeah, in a hundred years' timeframe, you're probably gonna get all the color and scent and vision that you get from your actual mind when you experience something in real life. And now you're doing that with an entity that's, in essence, I wouldn't say it's a recording, cuz these models are much smarter than that.

It's kind of like your, I don't know, your personal executive assistant, or somebody who's been with you your whole life and can really think the way that you do. Yeah, it's a fascinating, intriguing concept, you know. It reminds me of a Star Trek episode, I think it was The Next Generation.

They land on a planet, and there were these clones that sort of emerged of all of the crew members that were there, due to the artifacts of the thing. So they had all their genetic makeup, and they had a snapshot of them at one instant in time, and then they sort of deviated: the clones stayed on this planet, and the crew flew off in their spaceship or whatever.

It is a fascinating

Suman: idea. Let's, uh, maybe give a very practical, feasible approach to that hundred-year mark, right? Like, maybe in a couple of minutes. So right now, yeah, sure, you know, you have your personal laptops, you have your phones, and you have your watches, right? Mm-hmm. The more devices that you add to your life, the more data capture is happening.

But right now, what is happening? My Apple Watch data is going to Apple. My Facebook data is going to Facebook. Anything that I put on the internet is going somewhere else; it is not mine. Now, here's the thing. There are Alexa devices; there are devices that are going to be not only closer to our homes, closer to our bodies, but in the future, maybe inside our bodies.

It can be creepy, but that is the path we are on. So what it entails is gonna become more ambient, easier. And rather than thinking about all this data going into individual siloed services that will track my activity, we need to put that together to represent that individual. The fact is that people actually own these models, which I think is an important distinction we made as a decision for the business.

You know, no stacks are shared, and everybody will have their own model forever. It'll be their asset. So you can pass it on, put it in a trust. It's a digital asset that you would own, uh, for real.

Deep: Wow, that's interesting. Yeah, I like that. That's cool. Well, okay, awesome. I kind of buy your thesis here.

You know, I buy the argument that we started on this journey probably thousands of years ago, when we started writing things down. Or even before that, when we told stories of people; that's capturing folks on some level. And now, you know, we have so much more ability to capture so much finer-grained detail about people during their lives.

From our Google Photos collections, or all of our recordings, all of our Zoom histories, there's just a lot of that kind of stuff happening, and such a huge percentage of our lives are online. One thing that I find intriguing is, do you ever think about seeing the world as a human through the lens of your machine learning thing?

Like, they're never the same thing, right? Even if you created a genetic copy of you immediately and prepopulated it with all of your current memories, you're still gonna kind of deviate on some level. So it's probably more analogous to having a twin than, you know, a thing that's really you.

But it's still fascinating to see the world through that thing's eyes, cuz maybe it's handling a lot of the banality of life. Like, maybe it does just hang around on Facebook and talk to your high school friends, and maybe it does, you know, maintain a rich social setting and figure out, who do I really want to go out to dinner with tonight?

Almost like somebody standing back a couple of levels, you know, like your mind when you step out. You're not in the moment with, I have a meeting right now, I have to go do this, I have to do that. Sometimes you step out of yourself and you think, what is the arc of my life?

That Suman, sitting down figuring out who you should go out to dinner with tonight, is probably different than the one who's just running to the next thing. Yeah.

Suman: and, and you know, one of the reasons why, like, I, I don't start with the end goal. I start with like, you know, immediate, practical goal because, Obviously we, you know, we got to the point, okay, what is the end goal in here?

But even, even, you know, in the example that you kind of mentioned, first of all, I don't think there will be any technology that will quote unquote replace like human. Even in this context, the idea of like, you know, this AI model will be used doesn't necessarily mean that it will completely be you. So my whole thesis is technology is always gonna be a found.

What we will create is always going to be on top of what technology will provide at that moment in time in your life. It's always gonna be right no matter what, how smart the AI, computers, robots, synthetics will get. We will have the things that we are gonna build on top of those, create new economies.

Create new value. Yes, there will be troubles, there will be issues along the way, but it, but no issues will go away. That's, that's kind of like my 

Deep: thesis. Uh, you know, we've covered a lot of stuff. I wanna just end with one final question. A lot of our listeners are trying to figure out how to use machine learning and AI in their businesses, in their products, in their systems. You're somebody who's been using machine learning and AI for the last few companies of yours at least, if not longer. What kind of advice would you give folks who might be in extremely different industries, with extremely different perspectives? Like, A, in terms of, does AI matter to their industries, and if so, how?

And B, what should they do to really get started if they're maybe feeling overwhelmed, or just don't even know where the first step is?

Suman: If you are starting off with AI today, the most important thing to understand is how the models behind the scenes actually function. Uh, so one of the things that I actually tell my startup friends and founders is, make sure you know what is happening to the data. Meaning, you mentioned, you know, hey, one article or one document that I submit to a large language model: the question to ask is, is that document gonna be indexed for the next release of the large language model?

Why? Because that may be private data. You're probably using it to do a task, to summarize, you know, for fun, or to get questions and answers, but is it something that you're willing to contribute back to the LLM? Or is it private data that is important to your business, or important to you, or important in some other setting, in a HIPAA setting, in a GDPR setting? Who knows?

But I think that is one important question to ask, which is, what are the boundaries you have when you are handling the data? Because you may be tempted to just, you know, go into the large language model space and throw all the data at it. Just know that everything may be indexed, right?

And you gotta ask, what is the incentive of any model to not index that? Look at the terms of service. Of course, that is my promise, but at the same time, I think it's important. It'll become an issue, and we'll start seeing something similar to what happened with crypto.

Within two or three years, we'll start talking about AI regulations and who owns what. Okay, so that's fine. Uh, the second thing I would say is probably more around, don't get into AI, or use AI, just for the sake of getting the technology. First figure out, okay, I have 10 different problems that I would want to solve; which problem would best be solved with AI? Like, I didn't start this company with AI.

I mean, you know, of course that's kind of me, but I started this company with, okay, there is this memory problem. I wanna talk to Larry. What does it mean? AI sounds cool, but what does it mean, right?

Right? So, yeah, those are probably a couple of things that I would say are very important.

Deep: Cool. Thank you so much for being on the show. I feel like it was definitely a conversation I haven't had before, and one I find pretty interesting. I'll be thinking about it over the weekend.

Um, what's actually going on here is quite intriguing, and I think you draw an important distinction between the kind of aggregated, generalized view that most people think of when they think of the future of AI, and this extremely personal view, unique to us as individuals. And I feel like there's an underrepresented space here; it's not really getting the attention that it maybe should.

Suman: I think this is right, and I'm thankful to the large language model space right now, because, you know, the attention actually is on AI, and we are ready now. So, which is also good. Even two years ago, nobody would believe us, um, that it is actually possible to replicate responses like you. That sounds crazy, you know?

But now people believe us, and, you know, the product is out there and it's in the hands of billions of people, like, free. Um, so I can't wait to just be out there and give people what I think everybody deserves. Everybody deserves getting hold of their own lives and their own data.

You know, put it to good use. In the future, they will make API calls to you; you don't have to send the data and make API calls to other people. So I think there is a lot of, um, shift that will happen. But it's hard work, and we need momentum, we need awareness.

It's not just about large data sets.
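Suman's "they will make API calls to you" idea can be illustrated with a toy sketch: instead of shipping your raw data off to someone else's model, others query your personal AI directly. Everything here, the address, the payload fields, the scoring request, is invented for illustration; no real personal.ai API is implied:

```python
# Hypothetical illustration of querying a person's AI instead of
# sending them (or a third-party model) your data. All names and
# fields below are made up for the sketch.
import json

def build_personal_ai_query(asker: str, question: str) -> str:
    """Package a question addressed to someone's personal AI."""
    payload = {
        "to": "deep.personal-ai.example",      # hypothetical address of the AI
        "from": asker,
        "question": question,
        "want": ["answer", "personal_score"],  # also ask how personal the reply is
    }
    return json.dumps(payload)

request = build_personal_ai_query(
    "suman", "What story would you tell about starting a company?"
)
print(request)
```

The shape of the idea is the inversion of data flow: the data asset stays with its owner, and the query travels to it.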

Deep: That's all for this episode of Your AI Injection. As always, thank you so much for tuning in. If you enjoyed this episode and wanna know more about AI and accessibility, you can check out a recent article of ours on our website. Also, please feel free to tell your friends about us.

Give us a review, and check out our past episodes at podcast.xyonix.com. That's podcast dot xyonix dot com.