Your AI Injection
Is AI an ally or adversary? Get Your AI Injection and learn how to transform your business by responsibly injecting artificial intelligence into your projects. Our host Deep Dhillon, a long-time AI practitioner and founder of Xyonix.com, interviews successful AI practitioners and domain experts to better understand how AI is affecting the world. AI has been described as a morally agnostic tool that can be used to make the world better, or to harm it irrevocably. Join us as we discuss the ethics of AI, including both its astounding promise and its sizable societal challenges. We dig in deep and discuss state-of-the-art techniques with a particular focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful. Need help injecting AI into your business? Reach out to us at www.xyonix.com.
Regulating AI with Dr. Rowena Rodrigues
As AI becomes increasingly powerful, concerns about abuses grow. Considering the potential of future AI innovations leaves us with a burning question: does AI require regulation, and if so, how should AI be regulated and who should make those decisions? In this episode, we speak with Dr. Rowena Rodrigues, Head of Innovation and Research at Trilateral Research, a UK-based organization that provides ethical AI software solutions.
Deep and Dr. Rodrigues discuss the importance of data transparency and how to balance innovation and regulation. They also speculate about what it might take for stricter regulatory legislation in AI to be enforced and how these regulations might evolve over the next decade.
Want to dive deeper? Explore some of the topics introduced by Dr. Rodrigues below:
- EU AI Act proposal: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206
- SIENNA project: https://www.sienna-project.eu
- SHERPA project: https://www.project-sherpa.eu
Automated Transcript
Deep: Hi there. I'm Deep Dhillon. Welcome to Your AI Injection, the podcast where we discuss state-of-the-art techniques in artificial intelligence with a focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful.
Welcome back to Your AI Injection. This week, we'll be digging into the question of whether policies should be enacted to regulate the use of AI. We're speaking today with Dr. Rowena Rodrigues. Dr. Rodrigues received her PhD in law from the University of Edinburgh and is a senior research manager at Trilateral Research. Her work focuses on the intersection of AI and human rights.
So, Rowena, to get us started today, tell us a little bit about your research. How did you start thinking about policy and regulation in the AI space?
Dr. Rodrigues: Okay, that's a good question, a very good start. At the outset, just to say that I work at Trilateral Research, co-leading the work of the innovation and research services there. And I've had a very good experience here, because I've been working for a long time in law and ethics. Going back a little bit: I did my degree in innovation, technology and law at the University of Edinburgh. And at that point, which was way back in 2004, I remember having a module on AI, and I don't think we gave it a second thought. We did it, as a matter of fact, as part of the course, but AI didn't have the policy thrust that it has nowadays.
Deep: Yeah, those were early days for sure for the machine learning community. I don't think a lot of people outside the AI/ML world would've been talking about it much. So that's quite advanced for your university.
Dr. Rodrigues: Absolutely. And it was interesting, because afterwards, when I was working at Trilateral, I had this opportunity with a project called the SIENNA project, which was on the ethics of new technologies with high socioeconomic impact. In that project, we had the opportunity to do legal research on issues related to AI. In particular, we carried out a study that looked at: what are the legal developments related to AI and robotics? What are the legal issues? What are the human rights challenges? And I think it was this work that really got me into it. The research was quite useful; it's out there, it's open access. It looked at developments at the international level: we looked at 12 countries, we looked at the EU level, and the international level as well. So we tried to summarize what the state of the art was at that time; this was around 2018, 2019. I then also worked on another project, on AI, big data and analytics, which looked at it from a different angle: what are the regulatory options for AI and big data?
Deep: So before we dive in, just for our listeners' benefit: what does Trilateral Research do? What kind of organization is it?
Dr. Rodrigues: Trilateral Research is a UK- and Ireland-based enterprise. We were founded in 2004, and we carry out research and provide ethical AI software solutions that can help tackle complex problems. We've got a variety of projects in different areas. For example, I work in the innovation and research services team, which executes a lot of research projects for the European Commission. We have a technology team that, as I said, works on ethical AI solutions. And at Trilateral we have projects running in different areas such as health, cybersecurity, law enforcement, crisis, and security. The organization has a good mix of technologists as well as social scientists who work on different aspects of law and philosophy. So it's quite a unique mix and a nice organization to be part of.

Deep: Yeah. And the consumers of your work are largely policy makers or regulators, is that right?

Dr. Rodrigues: I would say quite a good mix. We've done a lot of work for the European Commission. We've also done consultancy work for the private sector, and now we're also doing a lot of work for the public sector with ethical AI solutions.
Deep: Got it. So let's start with a basic question: what's the problem with AI here? Everything's fine; nothing could possibly ever go wrong. I'm being a bit facetious, but why does AI need to be regulated? Let's start there.
Dr. Rodrigues: So it's quite a complex answer, but I'll try and simplify it as much as I can. At the outset: these are my opinions, based on the research I've done and the views I've come to form, and they do not reflect my organization's position. What we see is that AI is being designed and developed in a way that has implications for human lives and society. A lot of the fears and concerns about AI are that what is happening cannot be seen, and sometimes things are done without adequate impact assessment, for example. So to me, AI should be regulated because, one, we need to prevent harm. We need to ensure that AI is beneficial and doesn't cause harm; take the case of AI that might be programmed to kill. Secondly, AI could potentially need to be regulated because we want to set and guide its development: there need to be certain standards, whether these are technical standards, that help AI develop in a way that is beneficial, or that prevents problems later down the line. This could mean saying, okay, you need to follow this standard, you need to be certified to this requirement, for example.

Deep: So are we talking about specific applications of machine learning? You mentioned the kill/no-kill scenario, but there are much more benign applications, where we're just recognizing a particular pattern in a particular context. How do you think about the application domains that have a potential need for regulation? And who determines whether something is even subject to regulation?

Dr. Rodrigues: Yeah, I think it's quite complex, because the applications of AI are so varied, right? They range across different sectors. An application in sector X would have different implications than an application in sector Y. If a certain solution is used in the healthcare context, it might have different implications than an application used, for example, in the military context, or the security context, or for random entertainment purposes. So I don't think there is a single answer where I can say, yes, we need to regulate all AI; that's not useful at all. Which is why, if you look at the proposal by the European Commission for the new AI Act, it follows a risk-based approach. I think policy makers also understand the difficulty: you cannot have one universal way of regulating something that is so diverse, that applies across sectors, where each context might be different. And then again, there is also sectoral legislation which might already apply.
Deep: So let's talk about the risk-based approach. What does that mean, exactly? Who's doing it? Is it being done reactively, where you're looking at where problems exist today, or is it being done preemptively?
Dr. Rodrigues: So the way the proposal outlines it, at least, it's got a tiered risk framework. At the lower level, any AI systems that pose minimal or no risk, for example spam filters, will be permitted, and providers of such services would be encouraged to adhere to voluntary codes of conduct, for example. Most AI systems, they expect, might fit into this minimal-or-no-risk category. Then there is another category called limited risk, which might cover, for example, something like chatbots, and these limited-risk systems would be subject to transparency obligations. The proposal then classifies what are called high-risk AI systems. In this case, high-risk systems are anything that could be part of safety components of products, or such products themselves, and some that the proposal itself specifies. If you look at the proposal, it says: any biometric ID systems, any critical infrastructure systems that use AI, any education and employment training systems, any public services, any law enforcement risk assessment systems, anything related to justice or migration; these are all classified as high risk. It's got quite a detailed annex that lists everything that might fall within this category. And because the objective of the proposal is to protect fundamental rights, certain applications would fall within the high-risk categories, and I think the commission will reclassify these over time. There has been some back and forth about the commission's proposal, but we'll see how this pans out. It's still undergoing assessment, it's in dialogue, and we'll see how the proposal turns out in its final shape when it gets to conclusion.
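To make the tiered structure easier to follow, here is a minimal, purely hypothetical Python sketch of the categories as Dr. Rodrigues describes them. The tier names, example systems, and obligations are paraphrases of the conversation above, not the Act's legal definitions, and the proposal also prohibits certain practices outright, which this toy omits.

```python
# Illustrative sketch only: paraphrased from the conversation above,
# not from the legal text of the EU AI Act proposal.

RISK_TIERS = {
    "high": {
        "examples": [
            "biometric identification",
            "critical infrastructure",
            "education and employment screening",
            "law enforcement risk assessment",
            "justice and migration",
        ],
        "obligation": "strict requirements before deployment",
    },
    "limited": {
        "examples": ["chatbot"],
        "obligation": "transparency obligations, e.g. disclose the AI to users",
    },
    "minimal": {
        "examples": ["spam filter"],
        "obligation": "voluntary codes of conduct",
    },
}

def classify(system: str) -> str:
    """Toy lookup: match a system description against each tier's examples."""
    for tier, info in RISK_TIERS.items():
        if any(example in system for example in info["examples"]):
            return tier
    return "minimal"  # the proposal expects most systems to land here

print(classify("customer support chatbot"))         # -> limited
print(classify("law enforcement risk assessment"))  # -> high
```

The string matching is deliberately naive; the real proposal classifies systems via detailed annexes and legal definitions, not keywords, which is exactly why the boundary questions Deep raises next are hard.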
Deep: I mean, let's take something as potentially obvious as a high-risk system, like biometric IDs. If it's a generic technology deployed in security scenarios, obviously. But if it's an ID system for something really benign, like a stuffed animal getting delivered to a kid in a particular toy context, it's hard to put a boundary around something and say this is inevitably high risk. Are you seeing concerns about preemptive regulation that de facto empowers status quo players at the cost of really small companies that simply can't afford to deal with whatever regulation comes out?
Dr. Rodrigues: Absolutely. I think there is a pro and a con to regulation and legislation, right? There is always a cost for somebody. And as you said, who is legislation empowering, and who is it putting the cost on? So I think you make a good point. Also, the bigger companies, the established companies, would have more potential to lobby, to either shape or, you know, tone down the legislation, so to speak. That power imbalance has always been kind of accepted.
Deep: Yeah, we see that all the time, right? Let's take social media regulation as an example. Clearly machine learning is playing a key role in mass disinformation campaigns around the world. But when you start looking at the regulations, even the ones coming out of the EU, which are obviously significantly more strict than what we're seeing coming out of the US: if you look at the risk regulation, these are almost entirely things that are already being done by Facebook and Twitter, at least the massive companies. It makes sense for them to play along and basically say, yeah, sure, let's have that regulation, because the real threat to Facebook is not a regulation covering things they're already doing; it's some small company that couldn't possibly afford to do even what Facebook is doing today. And we don't know if it matters. Do we wanna kill those entities at their inception? So are some accommodations for size and, you know, revenue being included? Or is that in itself a risk, because something could be free and still be a mass purveyor of disinformation?
Dr. Rodrigues: Yeah, absolutely. We saw this in the context of the General Data Protection Regulation: that one did take into account the fact that, you know, it did give some leeway to the smaller companies and so on. But the core point is who pays the price for it all. And I just see regulation, the legislative type of regulation, as being just one of the means that can get used. It's not perfect, because legislation has a lag time as well. Even the proposal will now take years to be dialogued and come into force, and technology will move on; AI will move on at a much faster pace, and it'll be a completely different scenario a couple of years down the line. So it's just one of those means, and that's what we need to understand: you can't regulate AI with legislation alone.

Deep: Yeah. I mean, clearly we need to understand what that landscape even looks like, right? And that landscape sort of starts to feel well formed, but it's in a continuous state of change.

Dr. Rodrigues: Yeah. And I think the problem is we don't want our society to stifle innovation, and we don't wanna stifle growth, right? That's also very important, because it's about innovation, growth, creativity, and that's how society progresses as well.
Deep: There are a lot of advantages that AI has brought about: efficiency, cost reductions, helping do stuff that's mundane. Part of what I'm wondering here, as you describe this, is: does it even make sense to think about it in a broad, AI-centric way? Because I can see that when you start talking broadly, it's very easy for one side to identify all kinds of things that just don't need regulation, in a reasonable way. Whereas if you talk about things that definitely need regulation: like, I would argue self-driving car algorithms need to be regulated, especially if I'm in a snowstorm, if it's dark. I'll have a million things going through my head and I'll be completely nerve-wracked, because I know the models are gonna make mistakes. We need regulation for sure there. But I don't know what regulation can be meaningful either. Transparency of the algorithm: does that really matter to me? I look at it and I think, okay, I'm way more concerned about the training data.
You're listening to Your AI Injection, brought to you by xyonix.com. That's xyonix.com. Check out our website for more content, or if you need help injecting AI into your organization.
The thing about self-driving cars that's a hard ethical dilemma is this: you have an inevitable difference in the type of deaths that are going to occur, and yet you inevitably will save way more lives by choosing the self-driving car. So we will save more lives, but we will die in ways that we don't today. And that trade-off is not an obvious ethical slam dunk; I don't even know how to think about it. So how do we process something like that through the ethical lens that you've developed?
Dr. Rodrigues: Yeah. With self-driving cars, when we looked at the legal landscape, we found that there's much more development on the legal front, because they obviously had lots of visible impact. It was something you could very visibly explain; you could show it, you could demonstrate it. I'm putting it simply, but it was one where it was easy to say: okay, if this happens, this will happen. But you raise a very important point about the training data. I know we talked about transparency, and there's a lot of work going on on transparency, but that's just one lens, one means of protection. As you said, I would also support the point that it's very important that the training data is all good and done well. With regard to the ethical lens, the problematic thing about self-driving cars is possibly that if something bad happens, there's going to be injury, there's gonna be physical harm. And that goes to the core of our existence, right? It's also about losing control. People talk about it in terms of human control, autonomy: I'm giving away what I would normally control to something else, or to someone else. What if someone hacks the car? What if I'm in the car, but the car causes an accident because, well, it wasn't really me driving?
Deep: I mean, my biggest concern with self-driving cars is that, due to the way we think about law, and this may be a slightly more American vantage, but here in the States we want fault to be assigned, right? And so the easiest way for a company to absolve themselves of fault is to basically say: this isn't a self-driving car; you are supposed to sit there and drive, we just make it a little bit easier. Now, that sounds reasonable on the surface. But if you look at the research, it's very obvious that people lack situational awareness when they're half asleep, even if they technically have their hands on the steering wheel. Where is that line? Do you have to track their eyes? Maybe they're just zoned out while their eyes are open and their hands are on the steering wheel. It feels to me like a cop-out, a cop-out on the regulatory side and a cop-out on the company side, to pass the burden on to the individual. But at the same time, I don't know how to think of an alternative either.
Dr. Rodrigues: Yeah, absolutely. And you're much better versed than I am on the technical side of things. You know, I was looking at one of the manufacturers' websites, and they clearly say, "we are not autonomous vehicles," and you're like: but you're giving everyone the impression that you are. I have a law background, so I sat there and read the terms and conditions, because I wanted to understand what they were telling their users to do. But does everybody understand that if something goes wrong, ultimately they were meant to be keeping their eyes open or their hands on the wheel, and they are technically responsible alongside the car, so to speak? Because fault needs to be assigned to someone at the end of the day, as you said. It was a bit of an eye-opener, and it's kind of like: okay, why are you using "autonomous" if they're not really autonomous?
Deep: Yeah. I mean, not to bring Elon Musk into every conversation, but when he did his flamethrower thing, he basically called them "Not a Flamethrower" flamethrowers, and then he sells them. It's just like, you can lawyer the regulatory language if you want to. And I know regulators have a lot more teeth in Europe, but even here we do have industries where there's a lot of teeth in the regulatory works, yet there's always another paragraph, or two, or fifty, that your lawyers will pack onto your absolve-us-of-guilt user agreement. At some point it just gets too convoluted and humans don't care. Who reads the user agreements for any of their apps? Everybody just says yes, yes, yes, move on. And that's what they're going to do with all of these other machine learning and AI systems that they want, even if there is potential harm.
Dr. Rodrigues: Yeah, which is why, you know, we've had this thing of people not reading their terms of service for a long time, over 20 years, since the Internet's been around. Even those of us in the know: I've been guilty of not reading the full terms. If I was doing research into it, yes, I'd read the full terms. But when I need my app to work properly, to work quickly rather, I'm not really going to spend an hour reading the terms. I'm going to accept it, and then, okay, tell myself I accepted it because I wanted it to work.
Deep: I think it was when GDPR got passed, everyone sort of had high hopes that somehow people would not hand over their rights, because there would be all of these questions. Companies got clever quickly: they gamed the default positions, they made it so the fonts were small and big where they needed to be, to get you to do the default thing they wanted. And in the end, it didn't really achieve much of anything. I mean, I don't know, maybe I'm wrong. I just accept everything, because in that frame of mind I simply wanna move on with whatever the heck the website wants to do to me.
Dr. Rodrigues: Absolutely. But I do think that the GDPR did have its value, so I will defend it.
Deep: Go for it, tell us. What did it do?
Dr. Rodrigues: I definitely think it did its thing. For one, it did freak everybody out at the time. But we already had the directive before the GDPR, so I think the GDPR just reinforced what the directive had said very softly; it set things out in much stronger language, in a different way, and with some new provisions, of course. Having done a lot of research on the GDPR and talked to a lot of colleagues and friends, it did have an impact in the sense that people did get conscientious about data protection and privacy, even though they ranted about it. So I've definitely seen some movement in people's feelings about privacy and data protection.
Deep: Going back to the question of transparency, because I think this is one of the pillars of the regulatory idea around AI: how do you actually encourage data transparency? And is that enough? Because there are so many privacy concerns around the data; for many of the models companies are building, I don't see the data being easy to just suddenly make public. So how do you think about that? Let's take something relatively prone to ethical abuse. You've got an app, like a Google image search kind of thing but a bit more constrained, and it's all about beautiful people. You search for beautiful people and it only shows people of one particular race or something. So then they have to go through and explain the training process, and inevitably there are natural biases in the training data that are maybe not so easy to correct for. How do you think about creating a world where the transparency of the data communicates something meaningful to the regulator and the parties on the other side of it?
Dr. Rodrigues: As I said, transparency can mean different things to different people. And as you said, what is opened up also has limitations; everything cannot be opened up, we know that. There are various considerations for that: one might be confidentiality, and there might be intellectual property considerations, for example. And in terms of transparency, you might want to think about what levels of transparency you can give to whom. What you might be willing to disclose to a regulator is way different from what you would disclose to the public. And then again, the public might say: I couldn't care less, because I can't understand what it is you're trying to tell me; I don't have the technical knowledge to read it. You might put a whole thing out there, but unless you show people how to interpret it, it won't make any sense to the random person.
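As a concrete, hedged illustration of Deep's point that training data can matter more than algorithmic transparency, here is a minimal hypothetical sketch of the sort of representation audit a developer or regulator might run over a labeled training set. The records, labels, and "group" attribute are all invented for illustration; real audits involve far more care than this.

```python
from collections import Counter

# Hypothetical labeled training rows for an image-ranking model like the
# "beautiful people" search example above. The "group" field is an invented
# sensitive attribute, used only to illustrate a simple representation audit.
training_data = [
    {"label": "positive", "group": "A"},
    {"label": "positive", "group": "A"},
    {"label": "positive", "group": "A"},
    {"label": "positive", "group": "B"},
    {"label": "negative", "group": "B"},
    {"label": "negative", "group": "B"},
]

# Tally how often each group appears under each label.
counts = Counter((row["label"], row["group"]) for row in training_data)
label_totals = Counter(row["label"] for row in training_data)

for (label, group), n in sorted(counts.items()):
    share = n / label_totals[label]
    print(f"{label:>8} / group {group}: {n}/{label_totals[label]} = {share:.0%}")

# negative / group B: 2/2 = 100%
# positive / group A: 3/4 = 75%   <- the skew that "open the algorithm"
# positive / group B: 1/4 = 25%      transparency alone would not surface
```

Even a toy tally like this surfaces the lopsided "positive" label distribution Deep describes, and a summary of this kind is information a regulator could act on without the raw images ever being made public.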
Deep: Yeah. I mean, this is a problem we had here in the US when we had Mark Zuckerberg in the Senate hearings, and the questions of the senators were just completely inane. They have no idea. How are you gonna trust these hundred people to regulate an industry when they don't even know that it makes its money on ads? They didn't know the most basic aspects of what they needed to understand. That's part of my concern here: how do you get to the point where experts who actually understand things are more heavily involved, rather than just politicians, a lot of whose skillset is putting on a smiling face and getting a bunch of demographics to weigh in behind them?
Dr. Rodrigues: Yeah, absolutely. And this is one of the things we looked at when we were examining the regulatory options for AI and big data. There were many proposals for creating new regulators, you know, to regulate data. In one of our reports, we proposed an EU agency, and for that agency we said: come on, you've got to have on board experts in the topics at hand. Because, as you said, politicians have their limitations; they don't understand it all. Even I have my limitations. I have a law and ethics background, yes, but I don't understand it all until somebody explains it to me, and I might ask them questions, and they might reply, and we would talk to each other.
Deep: Well, it's just not the easiest stuff to understand. I build machine learning models all day long, and I can tell you for a fact: I don't know. I would really have to think for a long time to figure out what would be a reasonable way to regulate this stuff. It's not that obvious. It's so nuanced; it's just not that straightforward.
Dr. Rodrigues: Yeah, which is why I was also saying: the role of regulation can be to prevent harm, as I said, and to guide development. And then you might want the last bit of regulation, which is to remedy harms. We can have good AI, and good AI might go bad, even AI developed with the best of intentions. At some point you can say: okay, if it is misused, and if something bad does happen, what is it we can do to remedy and redress? So that is also important. But the key thing is understanding at what point regulation should make its clout felt, or its power felt.
Deep: So are we at that point in Europe right now, in Britain and in the EU, where we actually have the AI equivalent of the Nuclear Regulatory Commission? You know, everybody regulates nuclear stuff; we don't rely on the politicians to run around and define exactly the specifications of this, that, and the other in the plants. Do we have those kinds of entities being formed today, or are we still in the talking-about-it phase?
Dr. Rodrigues: We have sectoral regulators, and some of the aspects might fit within them. But with the EU AI Act, they proposed an EU AI board, and I don't think you could compare that board to the equivalent you mentioned. It's a softer board; we think it should have been made a bit stronger. The way they proposed it, it's an entity to determine what other AI systems might fit into high risk and to ensure, for example, that the Act was being implemented in a way that fit with cross-sectoral laws. It was a softer body than we anticipated and hoped for. So no, we are not there yet in terms of comparing it with the bodies that regulate nuclear and the like.
Deep: Perhaps you're not sure whether AI can really transform your business. Maybe you don't know what it means to inject AI into your business. Maybe you need some help actually building models. Check us out at xyonix.com. That's xyonix.com. Maybe we can help.

Deep: Is this one of those things where we simply need a Three Mile Island, you know, a massive incident like a Chernobyl, before we realize, okay, now we have to go and regulate this stuff? On some level it feels like we just haven't had a breach that people understand as a breach of AI. But I would argue we have. I think the 2016 election in the US was a significant one. I think the Rohingya massacres in Burma were another. I don't know what more signal you need than that; it's a pretty loud signal that Zuckerberg can't regulate things on his own. Why do you think we haven't reacted in a more significant way, when we know there's such potential for harm? And not even unrealized potential: we've had huge breaches. All of Western democracy, I would argue, is potentially at risk.
Dr. Rodrigues: Absolutely. And it's not that things haven't happened, right? If you look at some of the repositories that collect AI incidents, for example, there's a huge mass of incidents that you can find reported there. And if you go looking for examples of AI that's gone wrong, plenty of them come up. There's lots.
Deep: Yeah, we could talk for hours on any one of them, honestly. We could easily talk about social media for months.
Dr. Rodrigues: And I think the last couple of years have been the sticking point, because lots of things have happened, particularly on the political front, and, as you said, with social media and disinformation and its potential. It is also in the public eye, which is why I think the commission has pushed forward with the EU AI Act: they did the impact assessment, and this was imminent because there was this definitive gap. And the commission has not just stopped at that. It's also going to look at changes to liability rules, for example, and it has two other new proposals being considered: the Digital Markets Act and the Digital Services Act. These will be used to regulate online platforms, intermediaries, and online gatekeepers. So quite a lot is happening in Europe. And having looked at a couple of things in the US as well, I think we are at a turning point right now. There will be significant developments in the years to come, and, as you said, there might even be a few more major breaches that tip the boat.
Deep: Yeah, it's a strange thing. I thought in the wake of the 2016 election scandal something might actually come out, and there was a lot of conversation, to be sure. We now have a couple of senators in the US who are fairly well educated on the topic; we have no shortage of those who know nothing at all about it as well. I feel like there's conversation, there's curiosity, but I haven't felt anything approaching actual regulation. So I don't know how this actually happens. Part of me thinks it might happen piecemeal for a while, and not in an AI-broad fashion. I can see the sectoral bodies, like the FAA, regulating AI systems within airplanes; I can see the comparable thing happening within cars, where whoever's regulating them now just starts introducing AI competencies into their thinking. That feels maybe even appropriate. But then it feels to me like there are gaps. We have the FCC that regulates communications; is it appropriate for them to regulate Twitter and Facebook and the machine learning systems inside them? Possibly, because a lot of our stuff goes down to public airwaves, though it's a little different in the internet world.
Dr. Rodrigues: Yeah, it depends on their remit, I guess, and whether they would see that within their powers. Even on this side, there were discussions being held: oh, if it's an AI-and-data kind of thing, is this something that falls within the remit of the Information Commissioner's Office? Perhaps not, right? If it's a communications thing, does it fall within another regulator's remit? Which is why it needed to be looked at: are the sectoral regulators enough? Whose remit does it fall under? Is this within this domain, is this within that domain? It also needs putting in context. And then again, if you have something blanket at the top level, that doesn't work either, because of what already exists. So I think one of the key things is to ensure that regulatory impact assessments are done, to ensure that it's duly considered: is there a real need for legislation? Is there a need to change legislation? Where does existing legislation fall short? Regulatory impact assessments are done in a very systematic way: you look at the primary legislation, you look at the secondary legislation, and then you see what the effect might be on different stakeholders, different economic sectors, the environment. And we also said that that's not enough: you also need some legal foresight, because AI is about new, emerging technologies, and you need to think it through in context, not just with legislation as it stands, but with where it's going as well.
Deep: So to some extent, part of me thinks, you know, if you even just look at the internet: I remember maybe 1993, when I first saw the web, I started thinking, oh my gosh, there's going to be so much social transformation; there's such a need for the legal world to understand what's coming. And if I look at the last 25, 30 years, I feel like they didn't really do anything other than react, try to duct-tape old laws, and, through very painstaking evolutions, start to evolve towards something less completely clumsy. I'm wondering if that's probably what will happen with machine learning and AI systems, because, I don't know how it is in Europe, but at least here, the legal and regulatory world is not the place you go to look for innovation. How do you think about that? Do you run across an attitude in your work of, ah, let's just see if this ever becomes something people start clamoring for?
Dr. Rodrigues: Yeah, I think it's interesting how the whole regulation-of-the-internet debate spanned out. We had those debates of: what regulation? Does the market regulate? Does industry regulate? Who regulates? Does the policy maker come after you? And we saw the push for self-regulation, because self-regulation is good in that then the other regulators don't come after you, for example.

Deep: Self-regulation meaning, like, companies doing things to regulate themselves?

Dr. Rodrigues: Yeah. You have your codes of conduct; you put in place measures where you say, okay, I'm going to do impact assessments, I'm going to do this. And it's good as well, right? Self-regulation is a good way to ensure you don't have the more expensive legislation coming about and so on, because at the end of the day that has a cost, not just to society but to the companies that have to pay the price for meeting the added burdens of the regulation. And sometimes, if the regulation doesn't distinguish, for example, between the big and the small, we have a bit of a problem there, or a big problem, so to speak, because it'll stifle innovation and growth. But the challenge, I guess, is also, for example, the very definition of AI and its conceptualization. Coming at it from a non-technical perspective, I've seen so many different definitions of AI, and even the EU AI Act has been criticized, for example, for having too broad a definition of AI. Then you're thinking: okay, where are we going with this? In some senses, when you look at other pieces of legislation, like the GDPR, for example, they try to be technology neutral. But here you have to address the technology you're seeking to regulate: what is it, and in what context?
Deep: Yeah, going back to your comments on self-regulation: it makes sense to me that you need to look first to self-regulation, absent an environment of completely bad actors. I've been doing this for a while, and I've never once seen anyone in the tech industry intentionally, James Bond villain style, trying to do something overtly evil. There are just other priorities: hey, we're optimizing for engagement, we're not paying attention to taking people down the Alice in Wonderland rabbit hole of disinformation. So companies have to first see problems emerging and arising, and then they try things. And it makes sense to me that formal regulation from the government would look first and foremost at what companies are doing, then try to look at the gaps across companies: who's doing it well, or what part of a company is doing it well. Then you start with encouragement, because you're not actually sure yet if that's the right thing to do. And then you start finding the real slam-dunk things that have to be done, and those you can start putting into laws. Would you agree with that?
Dr. Rodrigues: Yeah, it kind of mirrors the layered risk approach of the commission's proposal, because that's why they wanted to say: okay, for minimal-risk systems, we don't want to crack down on you, because you're not the same as something that is high risk, which, when implemented, can have severe consequences, either to life or injury to someone else. We'll see the final form, but I think we have to recognize that there are different levels. And as you said, I don't think anyone sets out to say, let me go destroy the world today. That's usually not what happens.
Deep: Yeah, absolutely. There's so much gray; the world is definitely not black and white. This has been a super fascinating conversation, and I wanna thank you for coming on; I've really enjoyed it. But I wanna end with one question, where I'd like to project into the future. So take us out 10 years. What regulations do you think will definitely have happened by then? What would have necessitated them? And are those regulations working? Is the world better? Are the AI systems a little bit more tamed?
Dr. Rodrigues: So I think one thing that might definitely happen is...
Deep: I gotta love the hedging there: "might definitely."
Dr. Rodrigues: So, something on facial recognition technology, I think. Yes, I know things are already moving there, but I think some movement on the facial recognition technology front, because that's the one that is raising some concerns. I know there's all the stuff about automated weapons and all of that, but that's already on the radar. Something with facial recognition technology would be my guess.

Deep: That's where you would guess we'll have sort of national-level legislation in multiple countries?

Dr. Rodrigues: I don't know if it'll be national level. It could be, or there could be some kind of agreement at a treaty level or regional level; we'll see. It's so hard with the regional-level things, because getting agreement is so hard on some of these issues, and I think you might see more national-level developments than regional. So it's a little hard to predict at the moment, because each country has different strategies, different pushes, different priorities with regard to AI. So we'll see. Maybe one thing I would like to say is: being in the industry, as we work on AI products and systems, it's important that we not just think about legal compliance. Having done a lot of work in ethics, I think we need to ask: even if this product or system is legal, is legally compliant, is it ethical? If there are any harms in it, how can I address them? And we should do this throughout the product life cycle. So that's just something I'd add.
Deep: Yeah, I think that's actually a huge topic, and we had a whole episode on this where we talked much more on the ethical side, because the law is different from ethics. Here, I often see companies where folks are just pursuing whatever technological solutions seem exciting, or maybe meet some business goal. There isn't a lot of contemplation of these issues, and it feels to me like there's a long way to go in getting people to even see potential ethical breaches, think about them, and have teams sit down and talk and understand what might happen.
Dr. Rodrigues: Yeah, absolutely. I think the conversations are very important. It was really nice to be able to talk to you, because for me, we do the research, but it's also nice to have these engaged conversations. Even in our team the other day, we had a presentation by the technical team members, and for the rest of us, the social scientists sitting there, it was just useful to get the conversation going: they can present something to us, we can ask questions, and we can get there. Those conversations are difficult, because we're all coming at it from different points of view, but we have a lot to learn from each other, and it's good that we can keep those conversations open and alive.
Deep: That's all for this episode of Your AI Injection. As always, thank you so much for tuning in. If you enjoyed this episode, please check out a similar podcast episode of ours that dives into the ethics of AI; it's called "Building Ethical AI with Dr. Brian Green." Also, feel free to tell your friends about us, give us a review, and check out our past episodes at podcast.xyonix.com. That's podcast.xyonix.com.
That's all for this episode. I'm Deep Dhillon, your host, saying check back soon for your next AI injection. In the meantime, if you need help injecting AI into your business, reach out to us at xyonix.com. That's xyonix.com. Whether it's text, audio, video, or other business data, we help all kinds of organizations like yours automatically find and operationalize transformative insights.