Your AI Injection

Streamlining the Film Production Process with AI featuring Ruslan Khamidullin

January 12, 2023 · Season 2, Episode 13

In this podcast, host Deep Dhillon interviews Ruslan Khamidullin, co-founder and CTO at Filmustage. Founded in 2018, Filmustage is an AI-driven web service that streamlines the film production process using neural nets. Ruslan and Deep begin by discussing the old-school methodologies of pre-production, the entry point of AI into this process, and the future of the film industry in the context of Filmustage. They also discuss cost breakdowns, prospective features of platforms like Filmustage, and the potential of text-to-image generation in the film space. Ruslan and Deep explore how text-to-image generation can be used to create visuals that communicate ideas to set designers, as well as how the platform can help filmmakers generate cost breakdowns and forecasts to save time and money. Tune in to learn more about the power of AI in the film industry and how Filmustage is revolutionizing the filmmaking process!

Learn more about Ruslan here: https://www.linkedin.com/in/ruslan138/
and Filmustage here: https://filmustage.com/

[Automated Transcript]

Deep: Hi there. I'm Deep Dhillon. Welcome to Your AI Injection, the podcast where we discuss state-of-the-art techniques in artificial intelligence, with a focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful.

Hi there. I'm Deep Dhillon, your host, and today on our show is the co-founder and CTO of Filmustage, Ruslan Khamidullin. Ruslan graduated from the Belarusian State University of Informatics and Radioelectronics in 2012 and founded Filmustage in 2018, an AI-driven web service that streamlines the film production process using neural nets.

Maybe let's just start off by, you know, telling us a little bit about what inspired you to co-found Filmustage. What gap were you identifying in the film industry that you thought AI could help solve?

Ruslan: My pleasure. So actually, I've been working in technology, in IT, for quite long, like six or seven years. I became kind of tired of, you know, working for companies, and started looking for opportunities to actually do something. I'd been thinking about founding my own business, and, well, the opportunity kind of appeared. I met my old friend from university, Igor. He's actually a filmmaker, so he'd been shooting different stuff for years by then, and he'd just returned from the US, where he worked for various productions in San Francisco and in LA. So he got an opportunity to dive deeper into the processes and see and understand the pain of real filmmakers on set. We ended up thinking about the idea: how can we help those people?

And it actually became clear that the processes in that industry are kind of old school, let's call it, and there's a lot we can actually do better. For me, technology is a good tool; artificial intelligence especially is a great replacement for manual labor, for manpower.

Deep: Actually, walk us through: what's the old-school way of doing things that you're referring to? Maybe be really specific.

Ruslan: Easily, easily. So basically, one day you decide to shoot a movie. At the very beginning, all you have is your screenplay. It's a quite long text document, something around 100 pages for a full-length movie. Now you start to plan your production, a step called pre-production. Basically, you take your text, divide it into parts, into scenes, and then start to find all the mentions: actors, various objects like props, maybe some location descriptions, makeup, visual and sound effects, et cetera, et cetera.

There are lots of categories of stuff, and people tend to still do it pretty much manually. Basically, they just take pen and paper and mark everything on sheets, or maybe copy-paste into spreadsheets, or maybe into some software. There's a thing called Movie Magic, which is the de facto industry standard and has barely changed since 1997. So it's lots of manual work.

Not very fun.

Deep: And when you go through that process, who does this? Is it the director that's doing this, or...

Ruslan: Basically, it depends on the size of the production. It's someone responsible for planning, usually a line producer or an assistant director.

Deep: Okay. So they start, they look at the script that maybe doesn't yet have this level of annotation in it, and then how are they thinking about it? Like, what defines a scene? Is a scene a physical location? You know, maybe there's a scene in the rain where there are two actors interacting, and so they'll describe the set, describe...

Ruslan: You know, exactly. Basically, they start to plan and understand everything. So, okay, this scene is happening at the bar. Okay, there's a dark, crowded room. Okay, multiple bottles of whiskey or whatever, or something special. Maybe there is some special object. For instance, it's crowded with, I don't know, gangsters

or something else. And all these details later turn into tasks for particular departments of your production. Yeah.

Deep: Because now somebody has to go and build that scene, with the dark room and the dark bottles and whatever, and somebody else has to map that into the cast. Because they just characterized what we need: a full bar. So something cues up somewhere that says, find 50 people willing to sit in a bar who look like this. Okay, so it sort of feeds into different roles that are then going to go off and support and help build

Ruslan: That scene, exactly. Yeah. For instance, you have a bar; you have multiple options for how you might shoot it. You might rent a real bar, you might build your very own set, you might do something with visual effects today. And then you need your cast, obviously, and then you probably need to hire extras. Maybe you need some special type of costumes, like from the fifties or forties eras, something like that.

Maybe you need some specific type of objects from somewhere else, like that kind of whiskey bottle or something. Maybe you need lots of, I don't know, fake blood, or a weapon. And you start to plan everything really early, so you basically need to be super careful, super precise, and walk through your whole script to find literally everything needed for production.

Then, using this information, you start to build your shooting schedule, you start to build your budget, you calculate everything. And only then do you enter production.

Deep: I find it shocking to believe that, you know, a multi-million-dollar film budget would do this manually, even if it's in Google Docs or something. It still sounds crazy to do this manually, because some of those films are so elaborate, you know, there's so much going on. And you can imagine just coordinating who's going to go get whom, and fill the bar with whoever, and somebody has to do all the interviewing for that.

And then you want to be able to look at the macro view across the whole life cycle of the production of the film, and everything has to be linked back to budget. And then you want to figure out where the bottlenecks are, and then there are responsibility allocations happening. This isn't probably how, you know, if you think about somebody building a skyscraper or something, there's very sophisticated software involved in doing that. I wouldn't have guessed that in 2023 there would be anything remotely manual happening.

Ruslan: yeah. Believe it or not, it's still more or less manual because there's a person responsible for that.

He does pretty much everything manually, doesn't matter. Copy past into somewhere like spreadsheets or some outdated software, maybe even manually writing down everything with pen on paper. And then you add here another layer like emails and uh, documents. Both base, like you send your call sheet, you send your script updates to somewhere.

So it's, it's a really a mess. And quickly becomes a, becomes hell actually. Huh. And we actually, we actually think that, It's the right time to use technology to make it a little bit better at least, and solve some problems. 

Deep: And so, at Filmustage, is your entry point the screenplay? Or does it go even before that, where maybe you leverage some, you know, large language models to help the writers too? Or does it start with a presumed completed screenplay?

Ruslan: At the moment, yes, we start with the completed screenplay. So basically, you go to our platform and upload your PDF or Final Draft document (Final Draft is one of the standard pieces of software used to write screenplays), and we analyze it with our neural engine. We break it down into scenes, we quickly show the summary, and we extract all the meaningful stuff from your text, so you understand how many actors are mentioned in which scenes, and how many objects of what types are mentioned here and there. You can really quickly get an overview of your project, basically in a couple of minutes.

Deep: Walk us through what that's doing. Let's talk about, I don't know, an example screenplay: what would have been in the screenplay to ultimately map to the scene we were characterizing in the bar? I assume they might have just written the story the way a novelist almost does, but a little bit more screenplay-like. So, you know, lead enters the room, maybe they have a few high-level descriptions of the bar, but then boom, they go into the dialogue probably right away. And then maybe they characterize the fight scene or something. But then you have to take that and decide, first of all, why is that a scene, based on the text?

Walk us through how that modeling works.

Ruslan: Well, good news: movie screenplays are usually very well formatted, so it's easy to understand where the actual scene starts. There's a de facto industry standard for how you annotate the beginning of a scene: basically, it says interior or exterior, the name of the location, and the time of day. And starting from there, you analyze the chunk of text right after that. That chunk of text is a combination of everything: descriptions of what's happening, descriptions of how the location looks, dialogue, and everything that needs to be in that particular scene.
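A minimal sketch of this scene-heading convention in Python. The regex and the time-of-day list here are simplified assumptions for illustration; real sluglines, and especially real PDFs, are messier than this:

```python
import re

# De facto slugline format: INT./EXT., location, dash, time of day,
# e.g. "INT. DIVE BAR - NIGHT". Pattern and field names are illustrative.
SLUGLINE = re.compile(
    r"^(?P<setting>INT|EXT|INT/EXT)\.?\s+"
    r"(?P<location>.+?)"
    r"(?:\s*-\s*(?P<time>DAY|NIGHT|DAWN|DUSK|CONTINUOUS))?$"
)

def split_scenes(script_lines):
    """Group a screenplay's lines into scenes, one per slugline."""
    scenes = []
    for line in script_lines:
        m = SLUGLINE.match(line.strip())
        if m:
            # A new slugline opens a new scene.
            scenes.append({"heading": m.groupdict(), "body": []})
        elif scenes:
            # Everything after a slugline belongs to the current scene.
            scenes[-1]["body"].append(line)
    return scenes

script = [
    "INT. DIVE BAR - NIGHT",
    "A dark, crowded room. Bottles of whiskey line the wall.",
    "EXT. ALLEY - CONTINUOUS",
    "Rain hammers the pavement.",
]
scenes = split_scenes(script)
```

As Ruslan notes below, a regex like this is only the raw first pass; extracting clean text out of a PDF in the first place is the harder problem.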

Deep: Okay, so there's already a de facto standard that you can kind of model. At this point, are you using some machine learning, or is it some basic regexes and rules to parse the scenes out?

Ruslan: And we started with regexes to do a very, very raw extraction of the text, but we ended up combining two approaches, which performs the best. Because most of the time you work with a PDF, you have to involve a little bit of machine vision in order to get the most accurate results, because PDF is a mess also.

Deep: Oh yeah, I've wasted many hours on that in my life. Like probably just about any software engineer in the history of the modern world, you cannot rely on...

Ruslan: Yes, yes. You just cannot rely on what you see in a PDF. So we spent some time finding the proper combination of machine learning and regexes in order to understand and extract the text in the best way. Once we've extracted the text, we launch another neural network, an NLP model actually, to label different types of objects in the text, to categorize them, basically.

Okay, the character enters the bar: he's part of the cast, obviously. The bar is the location. He sits down on a chair or by a table: the table is a prop, or maybe set dressing; it depends on how the character interacts with the object in the scene. Maybe there's a bottle of whiskey: the whiskey, again, is a prop; he takes a sip. Some other character comes by: here's another cast member. Maybe in the background there's the chatting of, I don't know, a group of people, as a sound effect; a group of people usually means extras. And so on and so on. You basically mark down everything, and that's it.
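As a toy illustration of this labeling step (Filmustage uses a trained NLP model; the category keywords below are invented stand-ins, not their taxonomy), a rule-based breakdown might look like:

```python
# Invented keyword lists standing in for a trained entity-labeling model.
CATEGORY_KEYWORDS = {
    "cast": {"john", "mara", "bartender"},
    "props": {"whiskey", "bottle", "table", "chair"},
    "extras": {"crowd", "gangsters", "patrons"},
    "sound": {"chatter", "music"},
}

def breakdown(scene_text):
    """Return a category -> unique-mentions summary for one scene."""
    found = {cat: [] for cat in CATEGORY_KEYWORDS}
    # Crude tokenization: lowercase and strip basic punctuation.
    words = scene_text.lower().replace(".", " ").replace(",", " ").split()
    for word in words:
        for cat, vocab in CATEGORY_KEYWORDS.items():
            if word in vocab:
                found[cat].append(word)
    return {cat: sorted(set(ws)) for cat, ws in found.items() if ws}

summary = breakdown(
    "John enters the bar. A bottle of whiskey sits on the table. "
    "Gangsters crowd the room, chatter in the background."
)
```

A real model disambiguates by context (the interaction question Ruslan raises, prop versus set dressing) rather than by keyword matching, but the output shape, a per-scene category summary, is the same idea.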

Deep: Yeah. And so then you probably have some methodology for labeling these scripts, and then maybe leveraging the cleanup that your users do if you get something wrong as additional training data, which you can use to refine the models over time. Something like that?

Ruslan: Yeah, sure. We had to develop a set of rules for our markup team, because of course, you know, with machine learning you spend like 95% of the time basically building your dataset and then 5% just training and experimenting. So the dataset is the most difficult part. We actually went through a few stages of development of our own dataset in order to start with something. Later we would just fine-tune, using user input and our own insights about how to better treat different types of texts.

Deep: So your ultimate goal here is, like, you take the raw script and then you're going to map it into your object model that describes the full movie production process. And so that object model has scenes, and in a scene you have props and characters, and a given character then has a human and contact information associated with them, and a given prop maybe has attributes characterized and somebody responsible for obtaining it, or something like that.

Ruslan: It's a bit simpler at the moment. We just recognize all the occurrences of different types of objects in a particular scene and show it as a summary, and you can do something else, like attach, I don't know, visual references to all the objects. Then you build your schedule based on the information you have in particular scenes for particular days.

Our next step, actually, our goal, is to build some kind of graph of the whole screenplay, in order to understand automatically, with machine learning, in which scene which cast member interacts with whom and with what kind of object. It will be a huge help to understand the value of each object and cast member in your film. Basically, it'll help you really quickly understand, to get some visual representation of your screenplay.

Deep: What's the point of that? Is it like, hey, you know, there's nine minutes spent on this table in like four different scenes, so we need to get the right table, or something like that? Is that the point?

Ruslan: Exactly, exactly. Okay, so we spend like 30 minutes on this particular actor; he must be super important, let's focus on him. Or if we need to spend like 30 minutes of screen time on this particular cast member, and his salary is like that, let's optimize the whole shooting schedule around that. It's going to help us save money in production.
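The screenplay graph Ruslan describes can be sketched as a simple scene co-occurrence count: nodes are cast members and objects, and an edge's weight is how many scenes two entities share. The scene data below is invented purely for illustration:

```python
from collections import Counter
from itertools import combinations

# Invented per-scene breakdown data, as a labeling step might produce it.
scenes = [
    {"cast": ["BARTENDER", "LEAD"], "props": ["whiskey bottle"]},
    {"cast": ["LEAD"], "props": ["revolver", "whiskey bottle"]},
    {"cast": ["BARTENDER", "LEAD"], "props": []},
]

# Count, for every pair of entities, how many scenes they appear in together.
edges = Counter()
for scene in scenes:
    entities = sorted(scene["cast"] + scene["props"])
    for a, b in combinations(entities, 2):
        edges[(a, b)] += 1

# Entities that co-occur most are candidates to schedule (and budget) together.
top_weight = edges.most_common(1)[0][1]
```

Heavily weighted edges are exactly the "this actor and this prop matter" signal he mentions: they tell you which shoot days to pack together to minimize expensive rentals and cast days.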

Deep: Yeah. So you have like a high-value actor or actress. And I imagine other interesting questions come in too. Like maybe you have a high-value but very short-duration character, where you think, okay, I only need a minute of Scarlett Johansson's time, but that cameo lets me put her on the poster, and all of a sudden it might be worth it because of that particular type of role. It lets you look at things quite differently, I imagine.

Ruslan: Exactly. Or maybe we even have a really expensive actor; today we have options. For instance, for this particular person, we might replace him with visual effects, we can easily generate his voice. For instance, we don't need to hire Samuel L. Jackson to narrate the whole text; we might just pay him for his voice, his digital voice model. Yeah, yeah, yeah.

Deep: Have data? Have a hypothesis on some high-value insights that, if extracted automatically, could transform the business? Not sure how to proceed? Bounce your ideas off one of our data scientists with a free consult. Reach out at Xyonix.com. You'll talk to an expert, not a salesperson.

Let's talk a little bit about the future of filmmaking in the context of your product. If we rewind 30 years, the bulk of films had, you know, people in sets, in rooms or scenes, and those actors or actresses did things and you paid them, and everything you've described so far makes sense. But in the modern era, so much is animated, and the animation itself is getting so much smarter. It's no longer, you know, somebody necessarily animating every arm or leg or eye blink. It's on a different trajectory. What's the role of Filmustage in this kind of future arena? How do you think about it?

Ruslan: We actually want to be a central point of modern production. We want to be like a hub. So, okay, you upload your screenplay, we quickly analyze it, show you all possible views of your screenplay, maybe some visual representations at the same time. Okay, you need a certain type of location to make your project, to shoot your movie: we have location vendors or location agencies we can connect you with. Also, we want to connect casting agencies, in order to, you know, provide actors, et cetera.

Deep: And also, the virtual and the physical can commingle here, right? Like, going back to our bar scene: now you're looking at your object view, and you realize, well, scene three has this bar in it. You know, if Filmustage can prepopulate it with a generative model of three or four dingy bar contexts, they can pick and choose. Maybe they're still going to go get a physical bar or build it on set, but they now have a visual in their head that they can start to use to communicate to the, you know, set designer or whatever. That makes so much sense.

Ruslan: Yeah, and we've actually already started to work on that kind of tool. We partnered with a visual effects studio called Crafty Apes, one of the best studios in North America. They do a bunch of stuff for Netflix and other studios; they did visual effects for Stranger Things and Spider-Man and so many other titles. So the idea looks something like what you just described. The idea is to show the modern filmmaker the options, what he's able to do. Our goal is basically to lower the threshold and kind of democratize the industry.

Deep: I mean, it makes sense, right? They're probably right now relying on independent Google searches and a whole process where somebody has to go and say, okay, I need to find a bar. And so then they have to talk: well, what do you want? Well, I want the dingy thing, and blah, blah, blah. Okay. And then they go off and start searching, and they find a physical location: is this what you mean? No, I want more like... what about this? Yeah, like that. All of that stuff could be brought into this tool and made way more efficient, all the way to the point where even the person who writes the screenplay is contributing this sort of information. I can see that role moving upstream, you know, a lot further, so that it takes a lot of the work and cost out of producing the film.

Ruslan: Yeah, sure. Just go to Reddit. You'll be surprised how many people are starting threads just looking for something: that particular bar, that particular, I don't know, helicopter, or something else.

Deep: Oh no, I know. This is a bit of an aside, but my wife's actually an artist, a lithographer, for years. She was located up in Vancouver, where a lot of films were made. And one day the set designer for the TV show The X-Files came in and said, I heard there was some art on the walls here, I need to see it. And somebody's like, oh, you must mean her stuff. And she went and looked at it, and she's like, I have to have this art in my show. So then she asked, can I rent this? And, you know, my wife's like, no, but you can buy it. I don't do rentals; I'm just some starting artist making like a dollar a month. So they ended up taking it, sticking it in the set, and then, fast forward 25, 30 years later, people are still trying to collect that stuff.

So I could see even residual revenue opportunities, like set artifacts or something, being built in. I mean, it seems like you have so much potential here to expand this product over time.

Ruslan: Yes, yes. It's all about expansion. We just found the proper entry point: once you're ready to plan, we're here to help you. Also, as you mentioned before, yes, we have plans to actually help screenwriters: okay, you just finished your draft version 0.1; let's calculate what the potential budget of your project might be. Okay, it turns out maybe too expensive; let's cut something, let's divide it into parts. We also have ideas about how to help scriptwriters better prepare for their pitch sessions, in order to make their product real.

Deep: That seems so potentially valuable. Just as a writer, I can't help but think that at some point in the not-too-distant future, you're going to want to actually build an authoring tool, so that you don't have to start with the PDF entry point. Because I can see a ton of value there in and of itself. Like, you're authoring, and you're getting a dynamic prediction of cost. You know, maybe based on certain factors, like you can put in big budget, small budget, film, whatever, but as soon as you introduce, oh, and then there are 45,000 animated elves running around, it's like, okay, the budget just jacked up by 12 million or something. And then you're like, okay, take that out, forget about the elves, let's put it in the cave. Oh, okay, a cave, a lot cheaper. Yeah, exactly.

And in the past, I imagine those were really painful sessions, at least for screenwriters in the early stages of their careers. Maybe they, you know, come out of school and don't know how important that stuff is when it's being interpreted by the backer of the film or whatever.

Ruslan: Yes. It was always painful to actually find money and make people with money believe in you and invest in you.

Deep: So let's talk a little bit about how you would build this. This kind of price forecasting is intriguing to me. Would that be tied at the scene level? Is that where you're predicting cost, based on the scene? And I imagine there are so many attributes in there that you have to figure out, like whether you have an A-list actor or a no-name actor, and all that kind of stuff would have to go into some kind of forecasting model. Not to mention just the complexities of the scene. It seems challenging to build such a thing.

Ruslan: It's hard. It's hard to predict how it's going to turn out. What can I tell for sure? We will definitely be aiming to show the whole project cost, maybe per-scene cost also, and some cost-breakdown tips and tricks based on your schedule. Because usually cost is very dependent on your shooting schedule: location rental usually costs a certain amount of money, a certain day is that expensive, that particular cast member's days are that expensive. So it's hard to predict.

Deep: Also location: if you're filming in Switzerland, I imagine the cost is pretty high compared to, you know, somewhere else where it's maybe really low.

Ruslan: Yes. But you may go to Serbia, for instance, or Greece. Yeah.
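The schedule-driven cost roll-up Ruslan describes can be sketched in a few lines. The day rates and per-scene requirements below are invented for illustration; a real forecast would be fit to historical production data:

```python
# Invented day rates for the resources a scene might need.
DAY_RATES = {"location:bar": 2500, "cast:lead": 8000, "cast:extras": 150}

# Invented shooting schedule: each entry is a scene, how many days it
# shoots for, and which day-rated resources it requires.
schedule = [
    {"scene": 1, "days": 1, "needs": ["location:bar", "cast:lead"]},
    {"scene": 2, "days": 2, "needs": ["location:bar", "cast:lead", "cast:extras"]},
]

def scene_cost(entry):
    """Cost of one scheduled scene: days times the sum of its day rates."""
    return entry["days"] * sum(DAY_RATES[need] for need in entry["needs"])

costs = {entry["scene"]: scene_cost(entry) for entry in schedule}
total = sum(costs.values())
```

Even this toy version shows why the schedule dominates the estimate: shooting both bar scenes back to back halves the location-rental days, which is exactly the kind of optimization the platform could suggest.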

Deep: Yeah. Well, that can also be part of your suggestions here: like, hey, you know, you're on set here and you just need a snowy Alps scene; we can do that in these four places. So it feels like once you hit some kind of critical mass, let's say you have a certain number of films running on the platform, if you can also start gathering the true costs and true values from your users, then you can start to build more refined forecasts over time, and you can start to benchmark, you know, filmmakers' ambitions relative to comparable projects. Like, it doesn't make any sense to compare a Steven Spielberg film to an indie film; I imagine you have to get really narrow. But if you have a whole population of folks using a tool like this, you start to be in a really special position as a company to learn a lot about how to make suggestions.

So my question is, where are you now? How many films are using the tool? What stage of development are you at? And how do you see that growing over time?

Ruslan: Well, you know, every big director starts as an indie filmmaker. So our strategy is very simple: we started with a really low pricing level, and we started to sell to individuals. The majority of our users are just indie filmmakers and smaller productions. So we started to sell basically individual licenses, or subscriptions, on our website: you pay per month and use our platform for whatever you need, for the amount of time you've paid for. We entered the market like a year, a year and a half ago, I believe, not so long ago actually, and we started to grow our user base. We have something around three or four thousand registrations ever, and around a hundred active paying customers at the moment. At the same time, it was a great way to gather enough feedback.

Deep: Yeah, I mean, especially if they're engaged. You know, if you get that loyal, engaged group, they're telling you what's working and what's not working.

Ruslan: Yes. And they told us so much we never thought about, actually. And now we're slowly starting to drift toward, let's call it, the business sector: studios and productions. We're actively pitching to companies, and we actually have some success here. As I mentioned before, we have a collaboration with Crafty Apes. We have a potential collaboration with a really big animation studio. We just showed a demo to Amazon Studios; hopefully we're going to build or make something for them. And, well, it actually looks like we've gained enough momentum, and gathered enough trust and credibility, to continue our growth. I cannot call it explosive growth, though.

Deep: No, I mean, that's what everybody thinks happens when you start a company, but that's almost never what actually happens. It's just hard work. And I think the strategy of starting with really low-budget but dedicated indie filmmakers makes a lot of sense, particularly because if you land one massive client, that can rat-hole the whole company for, of course, you know, a year or two, where you're spending too much time and energy dealing with their idiosyncratic issues. Whereas going small, if you start to have traction there, maybe you broaden from, you know, a hundred filmmakers, and you eventually get to a thousand, ten thousand. And some subset of those careers are going to progress, and as they progress, if they carry you with them, then you'll naturally be able to organically grow up with them. That feels like a healthy growth pattern to me.

Ruslan: Exactly, because, you know, of course we want some bigger clients, but at the same time, we want to keep the platform accessible for everyone.

Deep: Yeah. We do a lot of, and I do a lot of, advising of startups along the way. You know, we build a lot of machine learning models for folks, but one of the things we do is help them grow their business. One of the things I say all the time is that it's way better to have multiple low-paying but highly engaged clients than one very highly paying but not engaged one. Honestly, if you're addicted to a large pot of money that's not diversified, it will typically affect your evolution in a negative way. You'll sort of wind up being a one-client shop or something.

Ruslan: It's all about freedom, basically. When you have a lot of clients, even if they're smaller clients, you have the freedom to do basically whatever you want. Of course, you want to care for their needs and listen to your customers, but still, all the power is in your hands.

Deep: Yeah, having a nice perch is invaluable, right? Being able to look at what a hundred different filmmakers are doing day to day, minute to minute, and where they're struggling in the product and on their set, where they really need help: that's a very valuable position to be in. As opposed to being on a perch looking at one company with one problem, where now you have all the quirks of their internal setup, and maybe you've been compartmentalized and you don't even really see the whole picture.

Where do you see AI having the biggest impact in the film industry right now?

Ruslan: I think visual generation has just entered some new level of complexity, and actually, I believe we're at the threshold of a real visual revolution, with, you know, the latest developments: Midjourney, DALL·E 2, other image generation models, you name it. What you can just generate became so real; text-to-image generation has become so robust. I believe it's going to be a great tool, actually, not only for planning, not only for prototyping your visuals, like storyboarding, or prototyping the aesthetics of your film or animation or whatever. One day it's going to be one of the tools for actually visualizing the stuff itself: you're able to create a sequence of images today. And I think it's going to be one of the most powerful tools for filmmakers, especially for people who create something surreal, which is usually super hard to actually animate, or, I don't know, shoot. Or even to...

Or even, even to 

Deep: imagine sometimes it's really hard to even to imagine. Imagine, yeah. Yeah. You know, like some of these surrealists, like their. Were very special in a way. Like they could tap into their dreamscape and like manifest it on a canvas. Rumor on the street is that like, you know, open AI's next drop is gonna have, you know, it's, there's a lot of multimodal input inputs and uh, one of them is like tech straight to video scenes.

Mm-hmm. . So, you know, like what we were talking about earlier in the, you know, in the discussion, you. Woman walks into bar, there's three people at the dingy bar, uh, one's very angry looking, and there's a table with somebody pounding their fist. I mean, if you look at where we're, where we're at, where we were at six months ago, that trajectory seems quite feasible.

We know how to train these models now; someone just has to step up and start doing it. And if you look at DALL-E and our ability to manifest it in, you know, a two-dimensional frame. Mm-hmm. And then, you know, people have been doing this, I can't remember the name off the top of my head, but we have a whole episode here where we were talking about music generation and, like, mm-hmm, using these generative algorithms, you know, to generate audio into the future. And I don't know if you've seen it, but somebody had taken, like, a bunch of Nirvana songs, and the model basically does not only the lyrics but the full production of the actual sound.

I actually took it and played it for my wife, and I'm like, oh, check it out, I found this new song by Nirvana that was just, like, undiscovered and nobody'd heard it. And she's like, oh, lemme hear it. It's like, yeah, that's pretty good actually, I don't know why that never made it big. And I was like, yeah, that's 'cause it just got produced, like, two days ago by a bot. But if you take that time-series ability to, you know, generate audio, and you've got the 2D, we can imagine the next step is just around the corner. It's just a matter of it costing a lot more to do.

Ruslan: Probably yes, to some level. Well, to me personally, all these, uh, computer-generated Nirvana-like songs still sound a little bit fake, or any other kind of generated music.

Deep: It's still a little lo-fi, right? Like, it doesn't quite have the soul. But if you look at the trajectory, it's kind of mind-boggling in a way too, you know?

Ruslan: It's debatable. Well, if we're talking about these things for Filmustage, there's another level of complexity, because when you want to have a particular scene generated by the machine, you have a really tricky part here. When you build your frame, you want to be super precise about multiple parameters, like, I dunno, focal length, maybe camera distance, camera angle, and so many details. You usually build your frame and you think about it all the time. I don't think machine learning and the OpenAI models are able to take these kinds of parameters into account.

Deep: Yeah, I mean, I think we're probably a ways away from text to something interesting enough and visually impactful enough to be put on the big screen. But as a sketching tool? Perfect. In your, you know, product, for example, to just help paint the picture. I mean, it kind of feels mind-boggling, like, yes. And I think it's important not to underestimate that. Like, if you look at the level of energy that, you know, Pixar, for example, expends on their sketches, I mean, it's not a small amount of the budget, you know. So it takes you to a different headspace quickly.

I mean, it's, it's not like a small amount of the budget, you know, so it takes you in a different head. Quickly, you know, like you mentioned surrealism, but like having a light surrealist feel in a bar with the character. Like just telling it like what does it mean to have a light surrealistic feel? But like these models can start to like have one clock in the background dripping or something, you know?

Yeah, yeah. But the rest is all okay, and then all of a sudden that sparks an idea in the eyes of whoever's looking at the scene and is in charge of set construction, and now they're like, oh, okay, they're flipping around getting ideas. I mean, one of the things that I see as the biggest challenge, kind of in general with, like, UX design, is what I call the blank text prompt.

Whenever you have anything that's just blank and somebody has to do something with it, it's really hard. But if you seed it with some knobs and dials to generate some ideas, suddenly people become insanely creative and you can get so much further, so much faster.

Ruslan: Yeah, yeah, yeah. As a sketching tool it's just amazing. As a source of inspiration, it gives us so much. So yes, why not use it?

Deep: Why don't you fast-forward for me 10 years out. Models get more and more powerful, we're looking at, you know, this trajectory. Your tool, you know, you have a lot of success; your hundred filmmakers become a thousand, become maybe ten thousand, and maybe you have a hundred of the really big-name filmmakers on your platform.

What does that director walking in in 10 years, what's their experience like? Paint the imaginative picture where AI is used to the hilt and your product and all your wildest successes have happened. Like, what happens? Is film even a thing in 10 years? Like, are they doing something else?

Ruslan: I hope so, you know. I hope so. I hope so.

I guess we won't have that kind of screenplay in 10 years. I guess we'll have something more raw, like some kind of textual description. Hopefully the whole experience might look like: okay, you have a story, you upload it to the platform, or it'll be something else. You upload it and it gets translated into a screenplay.

And, uh, it shows you, like, the strong parts and the weak parts, and shows you tips on how to actually make your story better, like, compared to an archetypal kind of story. It shows you possible variations. Maybe, uh, at the same time it builds some kind of visual representation of your story: like, this character is mentioned here, something happens with him here.

Here's, like, your, I dunno, most intense part; he interacts with that kind of object. Okay, we have a kind of breakdown, and it can generate the visual aesthetics of your project for you, of your video or film or whatever, or animation, and then you start to experiment with all sorts of knobs and dials.

I don't know, maybe you're playing with the time period: you want this shot in the aesthetics of the 1950s. Maybe you want it to be super surreal or even trippy. Maybe you want this to be a neo-noir movie. And, uh, the next step, you get your storyboard ready with all the frames accurately generated for you, and now you decide how you are actually going to create this.

Maybe you are a super old-school guy; you want to shoot everything on film. Reels might still be a thing, I believe. Mm-hmm. And, uh, okay, we are going to suggest all the possible locations where this might be done. Maybe we can offer you someone who could build the whole set, or the studios that have almost this exact set for your particular needs.

At the same time, we're going to connect you with prop warehouses, casting agencies, et cetera, et cetera, show you the costs. Of course, you need to understand how much money you basically need to make this project real. Or if you want to do everything with VFX, okay, we're going to show you the opportunities.

I don't know how far video generation will be at the time, but I believe it'll be super, super advanced, so maybe we'll even be able to generate the whole movie for you.

Deep: I don't know about 10 years, but maybe a little bit after. But you never know. I mean, the trajectories are mind-boggling right now.

So one last question for you. You know, a lot of our listeners, maybe they're not in the film industry, but they're trying to understand how to leverage AI in their projects and their business. Do you have any advice for them, as somebody, you know, taking a technology like machine learning and AI and incorporating it in different ways into a very distinct product? Like, what kinds of things do you think those folks should be doing?

Ruslan: Uh, well, it depends, actually. You definitely need to first have some kind of idea, understand your idea or your intention and what type of value you might, you know, get out of it. Maybe you want to generate images based on a textual description. In that case,

okay, go experiment with OpenAI or Midjourney. It's super accessible to everyone. Go get Discord and you can play for the whole evening, you know, just combining two images, creating something that never existed before. If you, you know, work with textual information, you need to dive deeper into the thing called, uh, natural language processing.

Mm-hmm. There are lots of cool libraries and frameworks for that. I don't want to be super, super specific; just go read about natural language processing, or NLP for short. The thing is, the most advanced, uh, AI engines are open-sourced, so basically accessible to everyone. Go to Reddit, go to Google, go to GitHub, find something for you, and just play with it.

Usually it's fun.
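[Editor's note: for listeners who want a concrete starting point, the kind of "just play with it" experiment Ruslan describes can begin with nothing but Python's standard library. The sketch below is a toy screenplay breakdown that tags scene headings and speaking characters with regular expressions. It is our illustration, not Filmustage's actual approach, which relies on trained neural nets, and the patterns assume only the simplest Fountain-style formatting.]

```python
import re

# Simplified Fountain-style patterns (toy assumptions, not a full parser).
SCENE_HEADING = re.compile(r"^(INT\.|EXT\.)\s+(.+?)\s+-\s+(DAY|NIGHT)$")
CHARACTER_CUE = re.compile(r"^[A-Z][A-Z ]+$")  # all-caps line before dialogue

def breakdown(script: str) -> dict:
    """Collect scene headings and speaking characters from a plain-text screenplay."""
    scenes, characters = [], set()
    for raw in script.splitlines():
        line = raw.strip()
        m = SCENE_HEADING.match(line)
        if m:
            scenes.append({"int_ext": m.group(1),
                           "location": m.group(2),
                           "time": m.group(3)})
        elif CHARACTER_CUE.match(line) and len(line.split()) <= 3:
            characters.add(line)
    return {"scenes": scenes, "characters": sorted(characters)}

sample = """\
INT. DINGY BAR - NIGHT
A woman walks in. Three people sit at the bar.

BARTENDER
What'll it be?

WOMAN
Whatever's cheapest.
"""

print(breakdown(sample))
```

On this snippet it finds one scene (INT., DINGY BAR, NIGHT) and two speaking characters. The immediate failure cases, like character names with periods or scene times beyond DAY/NIGHT, are exactly where a trained NLP model starts to earn its keep.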

Deep: Yeah, no, I think that's great advice. I mean, I think people often kind of suffer from a little bit of analysis paralysis. And one of the most valuable things that I've found, as you point out, is just experimenting. Get out of your head, just jump out, grab the thing, download it, get the tokens, whatever you gotta do, and start playing with it in your domain.

And that process will, like, stimulate a lot of ideas. And I think that's good advice.

Ruslan: Absolutely. Um, it's a rabbit hole. It's a rabbit hole. You know, you never know where you'll end up.

Deep: I'm notorious for saying one last question and then having another last question, but it's not every day I get to talk to somebody who really understands the film industry.

Well, this isn't even sequential, but are you familiar with, like, the world of deepfakes? And, you know, you probably know a lot more than I do about character animation and stuff. But when we look at deepfakes and, um, the ability to take somebody and render a model, like a puppet-style model, where we can get them to say what we need in their own voice and, like, move and animate and whatever.

And we can do so at low cost, like, literally with just one video and an image as input. You know, like a video of Barack Obama giving a speech, but my picture, and suddenly it's me giving the speech. What do you think is the role of that deepfake technology in the film industry?

You know, is it driving, like, low-cost animation in some way? Or do you think it's always gonna be, like, motion sensors on actors' and actresses' bodies that are doing the motion capture and large-scale animation? Or do you think there's some role for this kind of high-quality rendering, but maybe not ultra-high-quality rendering?

Ruslan: Speaking about deepfakes, I believe they're going to get very good very soon. So nowadays it's kind of easy to recognize fake images or videos, but it's getting better and better. It actually all depends on, uh, your data set, again, and your computational power. So it's going to be one of the tools for the filmmaker.

And it's starting to become a business, actually, because recently Bruce Willis just, uh, patented, let's call it, his visual model, or, okay, his image, or whatever you want to call it. So now he's able to sell his, uh, image for use in someone else's project. So you just basically pay him and you can use his model, or deepfake it, whatever you wish him to be doing. It's super, super interesting, actually, how it changes the industry and the meaning of identity.

Deep: In theory. I mean, I don't know in Bruce Willis's case, but in theory it drives the cost down too, right? Even in Bruce Willis's case, the cost of flying Bruce Willis out to stand in your studio and do an announcement is gonna...

Ruslan: I'm not sure about that. I'm not sure about the cost,

Deep: but in theory, you know, it's a lot cheaper to use this digital manifestation of somebody than it is to fly them out and pay them, you know? In theory, they can make up for the cost in bulk use or whatever.

Ruslan: Probably yes. But don't forget about your computational power. You actually need to render the scene again and again, and that costs. So I don't know which is cheaper, actually, at the moment; I don't want to, you know... okay. Yeah, we've been talking a lot about visual generation, or deepfakes for visual representation. We can do almost the same for audio. There is a project called Respeecher, actually, from our friends from Ukraine, and they do, let's call it, deepfakes for voiceover.

So basically you don't need to hire the original actor; you can just use his voice as a reference and generate whatever audio you need. Yeah, it works super well.

Deep: Awesome. Um, well, thanks so much for coming on the show. It's been really awesome having you, and I feel like I learned a ton about the film industry. It just seems like an absolute no-brainer, what you and Filmustage are up to.

Ruslan: Well, it's just the right time to do it.

Deep: That's all for this episode of Your AI Injection. As always, thank you so much for tuning in. If you've enjoyed this episode on AI in the film industry, please feel free to tell your friends about us, give us a review, and check out our past episodes at podcast.xyonix.com.

That's podcast dot X-Y-O-N-I-X dot com.