Your AI Injection

Fraud Detection with Conor Burke

Season 2 Episode 7

The financial industry has changed dramatically over the last few decades thanks to developments in technology, providing us with modern conveniences like online banking. Now, AI provides the potential for even more innovation and security through services such as Inscribe, a FinTech company where this week’s guest, Conor Burke, serves as CTO. 

Conor starts by teaching us about types of fraud and traditional methods of catching fraudsters, as well as how AI is enabling identification of fraudulent customers despite the growing complexity of fraud schemes. Then, Deep and Conor dive into the sharing of data used to perpetuate or curtail fraud and why bad actors might actually be collaborating more effectively than good actors such as Inscribe.

Learn more about Conor here: https://www.linkedin.com/in/conorbrk/ 

Automated Transcript

Deep: Hi there. I'm Deep Dhillon. Welcome to Your AI Injection, the podcast where we discuss state of the art techniques in artificial intelligence with a focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful.

Welcome back to Your AI Injection. This week, we'll be learning about AI powered fraud detection and risk management with Conor Burke, CTO of a fraud detection company called Inscribe. Conor completed a bachelor's degree in electronic engineering from University College Dublin and went on to co-found Inscribe with his twin brother in 2017. His achievements have earned him a spot on the Forbes 30 Under 30 list.

Conor, get us started by telling us a little bit about your background in AI and what problem inspired you to start Inscribe?

Conor: Sure. Yeah. It's great to be here, thanks for having me on. So I'm a builder at heart. You know, growing up I was always tinkering with electronics, and through university I had my first real exposure to machine learning. That really sparked my whole career in AI since, and more recently at Inscribe, where we're using AI, in particular in financial services, for the purpose of fraud detection. And for context for the audience listening, I think it's really important to understand just what's going on in the financial services industry at the moment, just how much change is happening there. We've all probably interacted with the financial services industry over the last few years, whether it's opening up a bank account or a credit card, but there are three or four big trends going on that I think are really changing what financial services means. The first one is that everything is going online. This sounds like a very obvious trend for those of us in the tech industry, but over the last 10 years there's been a massive shift from in-person branches to online services. So you may be using, or have heard of, companies like Chime or Cash App from Square; all these products are technology companies at heart. And we're even seeing some of the older, more traditional companies that were in-branch, like JPMorgan, investing massively in technology. So this shift online and this focus on technology has really been a massive change.

Deep: Yeah. I mean, imagine in the older days, you would walk into a bank, you might even know a banker. You might actually even have a community bank where somebody knows your family or something. It feels like all of that's gone. I myself switched to a bank that was in a different state maybe 30 years ago, but I think for most folks that's kind of a new thing, where they don't necessarily even have branches where they are, et cetera. So yeah, that makes a lot of sense. And then of course we're accessing ATMs, we're putting money in and out electronically, directly. My kids asked me what a check was the other day. So it makes sense that that would really affect this. So it sounds like there's some kind of fraud that happens. What kinds of fraud scenarios are we talking about?

Conor: Yeah. So if you take that context we were just talking about, of everything going online, you really have two very big problems coming together. You have this very old problem of fraud, which has existed since medieval times, or thousands of years ago: the idea of people trying to get access to money that they shouldn't. And then you also have this other big phenomenon of cybersecurity and cybercrime, people accessing systems and committing crimes through the internet. You have these two trends coming together, which, as you hinted at there, is leading to all new types of fraud. So when I think about fraud, there are two main axes. One is first-party fraud versus third-party fraud. To explain the lingo a bit: first-party fraud is when, let's say, you yourself are committing fraud under your own identity. For example, you may be trying to get a mortgage, but you're lying on your application to get more money. In the case of third-party fraud, you're stealing someone else's identity, or making up a completely fake identity. Those are two big, important distinctions to note. And then the other axis is understanding who's actually committing the fraud: opportunistic fraudsters or professional fraudsters. That's something we've seen arise over the last number of years too, an increase in both opportunistic fraudsters and also more professional fraud rings.

Deep: Yeah. I mean, a lot of the things we're describing aren't even unique to financial industry scenarios, right? A company like Google, or anyone that has an online service, has to make sure, if it's not an anonymized service, that you are who you say you are. And then there's always someone trying to steal your account at some point; that's a pretty common thing, and so are all the vectors you're talking about. So what problems are maybe more unique to the banking arena, special to these scenarios of pulling money out?

Conor: Yeah, so I think, first of all, what makes these problems unique is just the financial reward that's at the end here. For example, when a cybercriminal is trying to attack Google, there's a certain financial reward, but in financial services there's often a very immediate gain. So for the type that's unique to financial services: let's say you're an SMB lender, you give out loans between, you know, a hundred thousand dollars and maybe two million to businesses. The type of fraud that's quite unique to financial services, that I assume Google isn't experiencing, is someone coming to this small business lender representing themselves as a real business when in fact it's actually nonexistent. So what we often see is people essentially fabricating entirely fake businesses. What they'll do is create fake identities for the people who run the business. They'll have a website, they'll have people on LinkedIn who work at this business. They'll create a whole set of fake documentation: these are my bank statements that show my real operating business. They'll create some articles of incorporation to show that they're registered in this state. And from the bank's perspective, a lot of this stuff will check out. Their process for making sure that ID verification passes may actually miss these fake ID documents.

Deep: So how sophisticated is the dark web, if you will, at providing tools for fake documentation? Like, can I go somewhere and take advantage of generative adversarial networks that have been trained up to produce bank statements and all these types of documentation, and even trained up to just provide this stuff for me, fake company names, et cetera? Or are people kind of on their own?

Conor: Yeah, that's a good point. So there is definitely a range, definitely a scale. We do see a lot of this at Inscribe. We see some fraud that, as you mentioned, is quite primitive. If you're familiar with the film Catch Me If You Can, with Frank Abagnale, you know, that's very basic masking of information, very primitive edits to documentation. But what we have seen lately is, I guess, the use of technology to create better fakes. It has almost gone to the point where the technology to create fake documents has surpassed a human's ability to detect those fakes. And that's where we really try to focus a lot of our fraud detection: how can we detect these fakes in ways that don't necessarily rely on what human fraud detectors were looking at previously?

Deep: And so are you using a lot of GAN networks to detect this? Because it feels really close to the deepfake world, but it's not human faces, it's documents. You've got one model generating better and better fakes, and you've got another model trying to distinguish fakes from real. And so it just becomes a war between these two networks battling it out until eventually you have really, really good deep fakes of whatever.

Conor: Yeah, it's a good point. So while we haven't been able to confirm that the fraudsters are using GANs, at an abstract level that cat-and-mouse game is still happening. So while they're using some technology, we don't know exactly what technology they're using to get better. They're shooting documents over to our service, and they're using our response to, I guess, train their algorithm. So we're the reward function of their algorithm, essentially. And we do watch what fraudsters are out there who may actually be abusing our platform, to figure out where our platform is actually weak.
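The dynamic Conor describes here — fraudsters probing a detector and treating its responses as a reward signal — can be sketched as a toy hill-climbing loop. Everything below (the scoring rule, the probing strategy, the numbers) is a hypothetical illustration, not Inscribe's actual system.

```python
import random

def detector_score(doc_quality: float) -> float:
    """Toy detector: score a fake between 0 and 100, rising with its quality."""
    return max(0.0, min(100.0, doc_quality * 100.0))

def fraudster_probe(rounds: int = 200) -> float:
    """Hill-climb a fake's quality using only the detector's score as reward."""
    random.seed(0)  # reproducible illustration
    quality = 0.1   # crude initial forgery
    for _ in range(rounds):
        candidate = min(1.0, max(0.0, quality + random.uniform(-0.02, 0.05)))
        # Keep the tweak only if the detector rewards it with a higher score.
        if detector_score(candidate) >= detector_score(quality):
            quality = candidate
    return detector_score(quality)

final = fraudster_probe()  # the probe climbs well above its starting score
```

With these toy settings the probe climbs to a near-perfect score after a couple of hundred submissions, which is one reason defenders also watch for accounts that hammer the scoring endpoint.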

Deep: So what kind of data are you... oh, sorry, go ahead.

Conor: Oh, no, go ahead. I think that's actually a great question: what data are we actually using?

Deep: Yeah, I was gonna ask: are you basing your assessment of whether a piece of documentation is fake on the document itself, like the PDF or the documentation itself? Or are you figuring out what business name they have, then going out and looking for an EIN number, and then verifying that the address they gave you aligns with that EIN number? You know, are you cross-checking with known databases of legit companies and legit people, or...

Conor: Yeah, it's a bit of both. And I will say, what's just so interesting about documents themselves is how rich an information source they are. So as you correctly assumed, the document itself actually has a lot of interesting artifacts. This digital file, when it's transferred to a financial institution, actually has some evidence on it. A lot of your audience may be familiar with this: a lot of files have metadata, which itself has information such as when the document was created.
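As a rough illustration of the kind of metadata evidence Conor is describing, here is a minimal sketch that pulls a few common keys straight out of raw PDF bytes with a regex. A real pipeline would use a proper PDF parser; the sample bytes and the red-flag heuristics are invented for illustration, not Inscribe's actual checks.

```python
import re

def extract_pdf_metadata(raw: bytes) -> dict:
    """Grab a few common metadata entries (e.g. /Producer, /CreationDate)
    from raw PDF bytes. Only handles the simple literal-string form."""
    meta = {}
    for key in (b"CreationDate", b"ModDate", b"Producer", b"Creator"):
        m = re.search(rb"/" + key + rb"\s*\((.*?)\)", raw)
        if m:
            meta[key.decode()] = m.group(1).decode("latin-1")
    return meta

def metadata_red_flags(meta: dict) -> list:
    """Invented heuristics: note signs the file was edited after creation."""
    flags = []
    if "CreationDate" not in meta:
        flags.append("missing creation date")
    elif meta.get("ModDate") and meta["ModDate"] != meta["CreationDate"]:
        flags.append("modified after creation")
    return flags

# Fabricated fragment of a PDF that was edited months after it was created.
sample = (b"%PDF-1.4 ... /Producer (ImageEditor 9.0) "
          b"/CreationDate (D:20220101120000) /ModDate (D:20220315093000) ...")
meta = extract_pdf_metadata(sample)
flags = metadata_red_flags(meta)
```

A bank statement "produced" by an image editor, with a modification date long after its creation date, is exactly the sort of artifact a human reviewer would never see but a machine can flag.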

Deep: Maybe even where, and by who, and what tool created it, exactly. Cause I imagine if they're faking it, they might miss some of that metadata sometimes, maybe the ones who aren't as sophisticated, so you can catch them a little bit.

Conor: Exactly. Then you also have the construction of the file itself. So in the case of a PDF, how was that file actually created, looking at the byte-level information. And then you also have the information within the file itself, such as how everything is formatted: is everything in the right place? Using computer vision, you have the ability to look at large quantities of data and know if something is anomalous or not. And then lastly, if you keep going up a level, you have the actual details themselves, information from the document, as you mentioned, the business name and so on. These can then be used for further checks against external databases and so on. So, for example, a model might be honing in on certain email domains that are more suspect than others, and even the actual email address itself. A lot of times fraud rings will auto-generate fake emails, and there's a certain pattern to those emails versus a real email.
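The auto-generated email pattern Conor mentions can be approximated with simple lexical heuristics. The thresholds below are invented for illustration; a production system would learn them from labeled fraud data.

```python
import math
from collections import Counter

def looks_autogenerated(email: str) -> bool:
    """Hypothetical heuristic: flag local parts that look machine-generated.
    Auto-generated addresses tend to have long local parts that are heavy
    on digits or close to random (high character entropy)."""
    local = email.split("@", 1)[0].lower()
    if not local:
        return True
    digit_ratio = sum(c.isdigit() for c in local) / len(local)
    probs = [n / len(local) for n in Counter(local).values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    return len(local) >= 12 and (digit_ratio > 0.3 or entropy > 3.8)
```

An address like `xk83hd20qj915a@mail.test` trips the digit-ratio check, while short human-style addresses pass; in practice this would be one weak feature among many, not a verdict on its own.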

Deep: Gotcha.

Conor: You can also look at the email records. So, like, the DNS records for a domain name, WHOIS records lookups; you can see how long this email domain has been active for. And you can really discover a lot about an applicant based on all this metadata that comes with it.
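A minimal sketch of the domain-age signal Conor describes, parsing a "Creation Date" line out of WHOIS output. The sample response is fabricated, and real WHOIS formats vary widely by registrar, so treat this as an illustration only.

```python
import re
from datetime import datetime, timezone
from typing import Optional

def domain_age_days(whois_text: str, now: datetime) -> Optional[int]:
    """Return the domain's age in days, or None if no creation date is found.
    Handles only the common ISO-style 'Creation Date: YYYY-MM-DD...' line."""
    m = re.search(r"Creation Date:\s*(\d{4}-\d{2}-\d{2})", whois_text)
    if not m:
        return None
    created = datetime.strptime(m.group(1), "%Y-%m-%d").replace(tzinfo=timezone.utc)
    return (now - created).days

# Fabricated WHOIS output for a hypothetical, very young domain.
sample_whois = """\
Domain Name: SUSPICIOUS-LENDING.TEST
Creation Date: 2023-11-02T09:15:00Z
Registrar: Example Registrar, Inc.
"""
age = domain_age_days(sample_whois, now=datetime(2023, 12, 1, tzinfo=timezone.utc))
```

A domain registered only weeks before a six-figure loan application is exactly the kind of anomaly a fraud team would want surfaced.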

Deep: Yeah, that makes sense. And then how are you training your models? Is it more of a combination of models, or do you have kind of a unified approach? And if so, how do you think about training data and approach that?

Conor: Yeah, that's a great question. So I'll answer this from the lens of one of Inscribe's primary outputs, which is our trust score. The trust score is a score between zero and 100 which indicates, for a particular document, how trustworthy it is: zero representing not trustworthy at all, and a hundred representing completely trustworthy. To determine such a score, we've taken two things into account, primarily. First, just how familiar we are with this particular document. This is trying to replicate the human intuition: when you see a document set from a customer, how familiar are you with that document set? For example, if you've seen a Bank of America bank statement a thousand times, you kind of know what good looks like, and we try to replicate that intuition with machine learning. And then the other primary input, as you were kind of suggesting, is a list of features with a logistic regression on top of it to determine how fraudulent a document may be. So we look at everything from the metadata itself all the way up to the information within the document, and we've essentially put this series of features into a logistic regression, which we then use to come up with a score, which we combine with how familiar we are with the document, and that gives us the ultimate trust score. What we found is that not only does this help replicate a lot of the intuition that a human fraud analyst would previously have applied manually, it has also enabled a superhuman level of performance. And to answer your question specifically about training data, what's the actual training data here: there's really the supervised training aspect, and then there's the unsupervised.
On the supervised side, we get feedback from our customers themselves. They tell us whether something is fraudulent or not, but we also have our own internal fraud analysts and fraud experts who are able to determine and essentially label data that are good examples of fraudulent activity, and we can then use this to develop features on top of that. On the unsupervised learning side, as we get more and more documents and we have these features that we look for, we can start identifying anomalies. So if you give us a hundred bank statements from a particular bank, and we notice maybe two or three have unique characteristics, whether it's the file itself or information in the file, we can then look into that and investigate whether that's an issue. And that develops this nice flywheel where, over time, the algorithm is automatically getting more intelligent and better at detecting fraud.
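Conor's description — a logistic regression over document features, blended with a familiarity signal, producing a 0-100 trust score — can be sketched as below. The feature names, weights, and blending rule are all invented for illustration; Inscribe's real model and features are not public.

```python
import math

def fraud_probability(features: dict, weights: dict, bias: float) -> float:
    """Logistic regression: map weighted document features to P(fraudulent)."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def trust_score(features: dict, familiarity: float) -> float:
    """Blend P(fraud) with template familiarity (0-1) into a 0-100 score.
    Low familiarity pulls the score toward an uncertain middle of 50."""
    weights = {"missing_metadata": 2.0,
               "modified_after_creation": 1.5,
               "font_inconsistency": 2.5}  # invented illustrative weights
    p_fraud = fraud_probability(features, weights, bias=-3.0)
    raw = (1.0 - p_fraud) * 100.0
    return familiarity * raw + (1.0 - familiarity) * 50.0

clean = {"missing_metadata": 0, "modified_after_creation": 0, "font_inconsistency": 0}
tampered = {"missing_metadata": 1, "modified_after_creation": 1, "font_inconsistency": 1}
```

With a well-known template (familiarity near 1), a clean document scores high and a tampered one scores low; with an unfamiliar template, both are pulled toward the middle, matching the "we'd recommend double-checking" band Conor describes later.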

Deep: Yeah. I mean, the supervised case seems pretty straightforward. I imagine you have tons of positive examples of legitimate bank statements and other statements. And then on the unsupervised side, maybe you get a new bank that you haven't seen before. To what extent do you leverage the credibility of institutions in and of themselves? Because I imagine there's the analysis of the thing itself, the utility bill or what have you, and the visual component where you've seen this thing before, but there's also the possibility that somebody did a really great job of faking that particular company's document. And then there's another side where the model might learn what it looks like to have a good, legitimate document, and then it inserts a maybe illegitimate organization. So it feels like both of those pieces of information and approaches might matter.

Conor: Yeah, no, you're spot on there, and we've had to develop some safeguards to avoid these kinds of edge cases. So, as you mentioned: a new institution, or a fraudulent document that is a very good imitation of a good institution. What we've had to consider here is having thresholds, like at what point do we have enough confidence about what an institution, or documents from an institution, should look like? We constantly look at that as a variable at our disposal, to make sure that we're taking advantage of all the data we're getting in while avoiding the case where we're letting through an institution that is completely fake. For example, if a fraud ring sent thousands of documents from a new institution we've never heard of before, how do we ensure that it's actually a real institution? So going back to this idea that fraudsters are going to much greater lengths to commit fraud, we've come up with a few heuristics on our side to be able to catch that.
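One shape the safeguard Conor alludes to could take: an institution only graduates to "known" once enough documents have arrived from enough distinct customers, so a single fraud ring flooding the system cannot establish a fake bank as trusted. The class design and thresholds here are hypothetical, not Inscribe's.

```python
from collections import defaultdict

class InstitutionRegistry:
    """Hypothetical safeguard: trust an institution's template only after
    enough documents arrive from enough *distinct* customers, so one fraud
    ring flooding us with a fake bank's statements can't establish it."""

    def __init__(self, min_docs: int = 50, min_customers: int = 10):
        self.min_docs = min_docs
        self.min_customers = min_customers
        self.doc_counts = defaultdict(int)
        self.customer_ids = defaultdict(set)

    def record(self, institution: str, customer_id: str) -> None:
        self.doc_counts[institution] += 1
        self.customer_ids[institution].add(customer_id)

    def is_known(self, institution: str) -> bool:
        return (self.doc_counts[institution] >= self.min_docs
                and len(self.customer_ids[institution]) >= self.min_customers)

registry = InstitutionRegistry()
for _ in range(1000):               # a fraud ring hammers one fake bank
    registry.record("FakeBank", "ring_member_1")
for i in range(60):                 # organic traffic from 12 real customers
    registry.record("RealBank", f"cust_{i % 12}")
```

The key design choice is counting distinct customers, not raw document volume: volume alone is exactly what an attacker can manufacture.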

Deep: Have data? Have a hypothesis on some high value insights that, if extracted automatically, could transform your business? Not sure how to proceed? Bounce your ideas off one of our data scientists with a free consult. Reach out at xyonix.com. You'll talk to an expert, not a salesperson.

And kind of switching gears slightly: what kind of assurances do you give your customers, and what sort of visibility into your fraud detection methodology do you give them, so that they feel good about your process and methodology?

Conor: Yeah, it's a really interesting topic, and something we've prioritized from the very early days at Inscribe: making sure our models are explainable and auditable. What this means in practice is, for every feature that we look at, having a reason behind why we look at that feature. And secondly, having the constraint not to just feed everything into our models. For example, I was talking earlier about all the data that is available. It's important to have the restraint not to just throw all the data at a model, but to be quite disciplined about which features, at an intuitive level and also a policy level, make sense as indicators of fraud. For our customers, that kind of approach has been really helpful. And then lastly, also from the data perspective: explaining to our customers just where and what data we have trained on, making sure that it matches and closely represents their own data, and making sure that that's well understood too.

Deep: So at the end of the day, do your customers rely on you exclusively for some aspect of their security, or are they always kind of spot-checking your results and maybe doing some sampling?

Conor: Yeah. So the ideal way of using Inscribe, and I guess eventually the goal, is that we can reduce a lot of the manual interventions. So today, with a trust score, what we usually recommend is that above a certain threshold you can auto-accept customers. In this case it's worth noting that 90 to 95 percent plus of your customers are actually good customers and you want to accept them, and it's maybe only three to five percent of your customers that are actually fraudulent and that you want to reject. What we tend to do with our trust score is, for the documents that we have low confidence or low conviction in, we give a score somewhere in the middle, which indicates that we don't have strong conviction and we'd recommend double-checking. So that's one way our customers are using us while still having confidence in our results. And another way is, as you mentioned, just doing some sampling: understanding if there's any model drift. There are very easy ways to do this, like, is the fraud rate increasing without an explainable reason why? So if we see we're flagging more applicants for fraud, and we go in and do a sampling of those flagged as fraud, and we find a lot of false positives or we don't find an underlying reason, that allows us to be proactive about updating our thresholds and updating our models.
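The threshold bands and drift check Conor describes might look something like the sketch below. The band cutoffs and the drift tolerance are invented placeholders, not Inscribe's recommendations.

```python
def route_application(trust: float, accept_at: float = 85.0,
                      reject_at: float = 30.0) -> str:
    """Route on the trust score: auto-accept the clearly good majority,
    auto-reject clear fraud, and send the uncertain middle to a human."""
    if trust >= accept_at:
        return "auto_accept"
    if trust <= reject_at:
        return "auto_reject"
    return "manual_review"

def drift_alert(weekly_flag_rates: list, baseline: float,
                tolerance: float = 0.5) -> bool:
    """Fire when the recent fraud-flag rate runs well above the historical
    baseline without an explainable reason (here: more than 50% above it)."""
    window = weekly_flag_rates[-4:]
    recent = sum(window) / len(window)
    return recent > baseline * (1.0 + tolerance)
```

Since the vast majority of applicants are legitimate, the interesting tuning question is how wide to make the manual-review band: every point of width trades analyst time for fewer false accepts and rejects.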

Deep: Got it. One question I have is: you're clearly not the only ones trying to identify fraud. In the world of the good guys, how are you all collectively going about sharing and helping each other to win? And on the other side, how are the dark actors going about sharing and ensuring that they win? Is there coordination? For example, there are known lists of bad credit cards, known lists of bad emails. How sophisticated is the sharing going out from your company into some collective repository and coming back to you, and how does the good guy sharing world differ from the bad guy sharing world?

Conor: Yeah, that's a really good point. And I'd almost say the bad guys are doing a better job with this than the good guys. So there is some work going on today to improve this idea of a fraud consortium: how can all of these entities within the financial services space share more data about who's committing fraud and what new fraud trends are coming up? One avenue we've seen this come through is, first of all, vendors themselves. Having companies like Inscribe who work with multiple institutions has been a great leap forward, I think, for the industry. If you think back maybe 10 or 15 years, a lot of banks didn't really use vendors; they built everything internally, and there was no shared data and no shared knowledge. But the explosion of vendors in this space has enabled more sharing of insights across the industry. Also, another thing that's really helping is that the community of fraud leaders and risk leaders is definitely growing, and there's more openness to share what fraud strategies are working well and what new fraud trends are coming up. And then lastly, a more technological approach is the actual technical implementation of a fraud consortium. As you mentioned, how can you have this centralized database of up-to-date fraudsters? There are some projects going on trying to get there, but there continue to be challenges around coverage and around data access.

Deep: To what extent are there conflicting incentives? So, for example, let's say your company is doing a terrific job: your models are working fantastically, your customers are happy, and a huge component of your efficacy is your ability, let's just say, to assess new emergent bad organizations. If you were a purely good-hearted player, you would publish that in almost real time as you determine these things, but your incentives may not be there. So to what extent are the incentives sort of disincentivizing productive sharing?

Conor: Yeah, no, I think you're right to say that there really are these two schools of thought. One is, you know, why should I share data that I've worked so hard to gather about these fraudsters with my competitors? And then there's this other school of thought of, we're all in the same boat here, a rising tide raises all boats, which I think is becoming more popular. And I think one of the reasons why that shift in thinking is happening now is because, in financial services, companies used to compete primarily on risk and the ability to determine risk. But now the direction is really turning towards customer experience and features. What banks are competing on now is actually not interest rates and the ability to determine risk, whether credit or fraud; it's more about how good your customer experience is, and what new features you're developing that your competitors aren't. And I think if that keeps continuing, we'll find more collaboration between organizations, and I'm quite optimistic about that. In that kind of scenario, the incentives are more aligned.

Deep: So let me just make sure I understood that right. If you are an entity that specializes in detecting fraud, you're saying those entities compete more on features and less on the efficacy of their fraud detection. Did I get that right? Or were you saying that about the banks and the actual financial institutions?

Conor: Yeah, it was actually more on the bank side. So if you take, let's say, Cash App, for example, a very modern FinTech: they aren't competing as much on risk and the ability to detect fraud as they are on new features and new capabilities. Whereas if you look back at a more traditional bank, the financial products and services they develop are quite common across all the banks, but today it's all about how good your customer experience is, your digital experience on mobile, or just new ideas for, I mean, exchanging money.

Deep: Yeah, I think that makes sense from the bank's vantage. But are you saying that there is some kind of disincentive for the fraud-detecting entities to share openly? Or are you saying, hey, we don't necessarily do the sharing, but we provide the data back to the banks, who then in turn do the sharing of the detections? Is it something like that?

Conor: Yeah, it's more the latter. I think where vendors and fraud detection companies in the space come in is enabling the sharing, but a lot of the permission around sharing data does come back to the institutions.

Deep: Okay, I got it. But isn't there a risk there that the only data being shared is maybe high level data, like bad email addresses or bad organizations, but the interim, lower level data that could make a difference is not shared?

Conor: Yeah, no, you're right. And I think you touch on this dynamic too, of lots of fraud vendors in the space and lots of banks. With any system that's complex, it's harder to connect the dots between the high level data, as you mentioned, like an email, and maybe some of the lower level data. For example, did the loan that this person took out default within three months, six months, twelve months? The fraud vendor with that email may not have that granular data; they may know this person just defaulted, but not anything else about them. And that does lead to some inefficiencies in this fraud network.

Deep: Yeah. I'm almost thinking that the incentives on the bad actors' side are quite positive with respect to sharing, because they get street cred by sharing a new exploit that they suddenly have, or a new type of fake something-or-other, and they're typically quite eager to share information. I suppose it depends on the type of hackers or bad actors they are; if they're really good, maybe they act privately, but if they're novices, they might really want to share their exploits. I'm wondering if there's some role maybe for government or some nonprofit entities to step in and just flat out provide the incentive on the good guy side. So if there's an entity that just pays for certain types of data, maybe that gives the fraud detection companies the incentives they need to share more openly.

Conor: Yeah, no, that's a good point. I would say there are definitely some regulations out there that do push for fraud detection, advocating for the sharing of data and enabling the sharing of data too. I will say there is this massive dichotomy between wanting to protect everyone's data, even the fraudsters' in some cases, and the fact that when you are sharing data you have to be quite respectful and conservative with it. But at the same time, the more sharing of data you can enable, the less fraud you'll have in society. And I'm hopeful that the capitalistic nature of the financial services industry does encourage this sharing of data. As I mentioned earlier, this trend towards competing on more than just risk will hopefully be a big driver behind getting to that state where all these teams at these institutions can ultimately work with vendors like Inscribe to share that data. And even at Inscribe, we ultimately do want to play a bigger role here and almost be that common risk team across all the different institutions, so that we can get that deeper insight, not only into the high level but also the deep level data points.

Deep: Need help with computer vision, natural language processing, automated content creation, conversational understanding, time series forecasting, customer behavior analytics? Reach out to us at xyonix.com. That's xyonix.com. Maybe we can help.

Conor: Just to perhaps touch on your previous point around the bad actors and understanding what they're doing. It sounds like you may have already had a chance to explore this, but there is a lot of information online available to people who want to commit fraud. If you look at Reddit or Discord or any of these new social media platforms, there are communities out there telling people how to commit fraud against all these big name financial institutions. They'll give you the document set, they'll tell you where to log in, what to do, what not to do, what answers to give for each step of the process. And there is very much open sharing of information on how to get a hundred thousand dollars for free off this institution.

Deep: Yeah, it's a real problem, right? A lot of these message boards, places like Reddit, et cetera, kind of pride themselves on being free speech advocates, and everybody understands the need to take down certain types of content, like blatantly violent content, but it feels like their incentive to take down stuff like this is just not there, and it's problematic. I don't know what the right answer is. This conversation has been really fascinating, and I want to thank you for coming on the show. I've definitely learned a lot, but I want to end by asking you to jump out in your mind five to ten years into the future and describe for me the world of the dark actors and how it's evolved, the world of the positive actors on the fraud detection side and how they've evolved, and who wins.

Conor: Yeah, this is a great question. I'll start on the bad side first and end on the good. I think what's fair to assume over the next few years is that fintech is going to continue its revolution. There's going to be immense innovation in financial products; even over the last few years we've seen the rise of buy now, pay later, I'm sure the metaverse is going to play a big role, and then crypto on top of it all. All these dynamics and new ways of doing finance are going to open up a lot of opportunities, and a lot of holes in our current systems, for fraud. And I think the bad actors are going to be on top of this. What tends to be the case, a lot of the time, is that the financial institutions are behind the innovation of the fraudsters, behind the games the bad actors are playing. So yes, I definitely anticipate that fraud attempts will increase and will adapt to the adapting fintech ecosystem; if you look at the stats even over the last few years, fraud will continue on this rise. But to switch to the other side, to what the good actors are doing: I think what we're really going to see is just more data. Today we have more data than we had even a year ago, and definitely more than five years ago, and I think the same will be true in five and ten years' time. It's not going to be the maybe 20 or 50 data points you have today; you could have 500 or beyond, and these are all going to be touchpoints and data points that fraud teams and vendors can use. And this is really where AI and machine learning are going to come into their own. They're going to enable us to actually do something with this data.
And as we were touching on, hopefully at that point we'll also have this sharing of data, so each institution will have these immense data lakes of information. I anticipate we will figure out this collaboration, or consortium, question. So it will come down to how well these fraud teams and AI practitioners can come together and actually develop the adaptive machine learning models to be able to catch this new fraud.
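[Editor's note: as a rough illustration of what "doing something" with hundreds of data points can look like, here is a minimal, hypothetical sketch of a fraud risk scorer that combines many weighted signals into one logistic score. The signal names, weights, and threshold are invented for illustration; they are not Inscribe's actual model, where weights would be learned from data.]

```python
# Hypothetical sketch: scoring an application by combining many weighted
# fraud signals with a logistic function. Names and weights are invented;
# a production system would learn them (e.g. logistic regression or
# gradient-boosted trees) from labeled historical applications.
import math

# Positive weights push the score toward "fraud".
WEIGHTS = {
    "document_metadata_mismatch": 2.0,   # e.g. PDF-editor software tag on a "scan"
    "font_inconsistency": 1.5,           # mixed fonts inside one bank statement
    "email_age_days": -0.005,            # older email accounts look safer
    "ip_geo_mismatch": 1.0,              # IP country differs from stated address
    "prior_applications_24h": 0.8,       # velocity signal
}
BIAS = -2.0

def fraud_score(features: dict) -> float:
    """Return a logistic score in [0, 1]; higher means riskier."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

legit = {"document_metadata_mismatch": 0, "font_inconsistency": 0,
         "email_age_days": 2000, "ip_geo_mismatch": 0,
         "prior_applications_24h": 1}
risky = {"document_metadata_mismatch": 1, "font_inconsistency": 1,
         "email_age_days": 3, "ip_geo_mismatch": 1,
         "prior_applications_24h": 5}

print(round(fraud_score(legit), 3), round(fraud_score(risky), 3))
```

The point of the sketch is the shape of the problem: as the feature count grows from tens to hundreds, hand-set weights like these stop being maintainable, which is exactly where learned models take over.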

Deep: Yeah, I think there's just a lot of stuff evolving in a way that is, if nothing else, going to make the battlefield much more sophisticated. Because if I had to jump out ten years: on the bad actor side, they're going to get much better at leveraging deepfake capabilities. The deepfake algorithms themselves are going to get much more capable; fake documents, fake statements, all of that, it feels to me, will be really quite good. The obvious mistakes won't be there. Whatever we're doing for a CAPTCHA today will probably get more sophisticated; maybe we'll introduce a lot more biometrics. And it feels to me like certain technologies will come and be safe for a while and then fade off as they get less safe. For example, my financial trading institution does a lot of voice recognition now, and I've asked them not to let anyone onto my account based solely on a voice match with mine, because I know that it's really quite simple to make phenomenal voice mimics at this point. There was a period of time when nobody had yet come up with a deepfake-based voice generator that could go break into Fidelity or Morgan Stanley or whatever. And then on the good guy side, it feels like they have no choice but to increase the sophistication of how they audit for the real human somehow. But this whole arena is evolving so fast, and in the cat and mouse game the low-hanging fruit is going to be gone, and we'll be in a more sophisticated arena where maybe it's still fifteen-year-old kids who are able to play, but with very, very powerful tools. As far as the good actor side goes, hopefully we get better, but honestly, I'm kind of concerned about it.
We need to be pushing much more on the provenance side, so that we can track the provenance of things.
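[Editor's note: one concrete, hypothetical version of that provenance idea: the issuing institution attaches a keyed digest to each document it emits, so a downstream fraud team can verify the file is bit-for-bit what the issuer produced. The sketch below uses an HMAC shared secret to stay self-contained; real provenance schemes, such as the C2PA standard, use public-key signatures so that verifiers need no secret.]

```python
# Hypothetical sketch: an issuer (e.g. a bank) signs a digest of each
# statement it emits, and a verifier recomputes the digest to detect
# tampering. HMAC with a shared key keeps the demo self-contained;
# production systems would use public-key signatures instead.
import hmac
import hashlib

ISSUER_KEY = b"demo-issuer-key"  # invented; would be securely managed

def sign_document(doc: bytes) -> str:
    """Return a hex HMAC-SHA256 tag over the document bytes."""
    return hmac.new(ISSUER_KEY, doc, hashlib.sha256).hexdigest()

def verify_document(doc: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the document."""
    return hmac.compare_digest(sign_document(doc), tag)

statement = b"ACME Bank statement: closing balance $1,234.56"
tag = sign_document(statement)

edited = statement.replace(b"1,234.56", b"9,999.99")
print(verify_document(statement, tag))  # untouched file verifies
print(verify_document(edited, tag))     # edited balance fails
```

The design choice worth noting: verification only proves the bytes are unchanged since signing, which is why it pairs with, rather than replaces, the document-content fraud checks discussed earlier.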

Conor: Yeah, there's definitely this whole concept of using deepfakes and AI to recreate or create identities. You have all this information available online now, along with the ability to recreate someone's voice and recreate someone's face. So it redefines what it means to obtain someone's identity. In the past, having access to a passport or a driving license was enough to verify someone's identity, but now you have technology that questions the validity of that.

Deep: Yeah. And we can even just make fake humans now, right? We've gotten so much better. We can make a fake face, we can make fake videos of the face, we can make fake voices to go along with the face. We can basically create little puppets. Right now we can still do things like look at the biomarkers of a human face, like whether or not the redness in the face is following a biological pattern, but that's going to get faked too on the other side. So I don't know what's going to happen, but I do think the battlefield will get way more sophisticated.
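[Editor's note: the "redness following a biological pattern" check Deep mentions is the idea behind remote photoplethysmography (rPPG) liveness tests: a live face shows a faint periodic color change from blood flow, roughly 0.7-3 Hz (42-180 bpm), while a static fake does not. Here is a simplified, hypothetical sketch that tests whether the dominant frequency of a face-region color signal falls in that band; the signals are synthetic stand-ins for per-frame pixel means.]

```python
# Hypothetical rPPG-style liveness sketch: check whether the dominant
# frequency of a green-channel signal lies in the human heart-rate band.
# Signals are synthetic; a real system would use per-frame means of the
# face region's green channel from video.
import math

FPS = 30.0  # assumed video frame rate

def dominant_frequency(signal):
    """Naive DFT: frequency (Hz) of the largest non-DC component."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_f, best_mag = 0.0, -1.0
    for k in range(1, n // 2):
        re = sum(c * math.cos(2 * math.pi * k * t / n) for t, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * t / n) for t, c in enumerate(centered))
        mag = re * re + im * im
        if mag > best_mag:
            best_f, best_mag = k * FPS / n, mag
    return best_f

def looks_live(signal, lo=0.7, hi=3.0):
    """True if the strongest periodicity sits in the heart-rate band."""
    return lo <= dominant_frequency(signal) <= hi

n = 300  # 10 seconds of video at 30 fps
# Live face: faint ~72 bpm (1.2 Hz) pulse riding on the mean green value.
live = [100 + 0.5 * math.sin(2 * math.pi * 1.2 * t / FPS) for t in range(n)]
# Static fake: only a slow 0.1 Hz lighting drift, no pulse.
fake = [100 + 0.5 * math.sin(2 * math.pi * 0.1 * t / FPS) for t in range(n)]

print(looks_live(live), looks_live(fake))
```

As Deep notes, this is exactly the kind of check deepfakes will eventually learn to spoof by synthesizing a plausible pulse, which is why it's one layer in the cat and mouse game rather than a final answer.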

That's all for this episode of Your AI Injection. As always, thank you so much for tuning in. If you enjoyed this episode on the role of AI in fraud detection and risk management, please feel free to tell your friends about us, give us a review, and check out our past episodes at podcast.xyonix.com. That's podcast.xyonix.com.

That's all for this episode. I'm Deep Dhillon, your host, saying check back soon for your next AI injection. In the meantime, if you need help injecting AI into your business, reach out to us at xyonix.com. That's xyonix.com. Whether it's text, audio, video, or other business data, we help all kinds of organizations like yours automatically find and operationalize transformative insights.
