Your AI Injection
Is AI an ally or adversary? Get Your AI Injection and learn how to transform your business by responsibly injecting artificial intelligence into your projects. Our host Deep Dhillon, long-time AI practitioner and founder of Xyonix.com, interviews successful AI practitioners and domain experts to better understand how AI is affecting the world. AI has been described as a morally agnostic tool that can be used to make the world better, or harm it irrevocably. Join us as we discuss the ethics of AI, including both its astounding promise and sizable societal challenges. We dig in deep and discuss state of the art techniques with a particular focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful. Need help injecting AI into your business? Reach out to us @ www.xyonix.com.
Building Ethical AI with Dr. Brian Green
This week on Your AI Injection, Deep speaks with Dr. Brian Green. Dr. Green is the director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University. His work focuses mainly on the relationship of ethics and AI.
In this episode, we find out how social media algorithms can encourage negative thinking. We also talk about how tech companies can create ethical AI during the product design process and what it means to expand the ethical circle. We also touch on the ethical dilemmas in self-driving cars, autonomous weapons, and more.
Find out more about Dr. Green below:
https://www.linkedin.com/in/brian-green-896a3b39/
If you are interested in learning more about this topic, check out our in-depth article titled "How to Detect and Mitigate Harmful Societal Bias in Your Organization's AI":
https://www.xyonix.com/blog/how-to-detect-and-mitigate-harmful-societal-bias-in-your-organizations-ai
Automated Transcript
Deep: Hi there, I'm Deep Dhillon. Welcome to Your AI Injection, the podcast where we discuss state-of-the-art techniques in artificial intelligence with a focus on how these capabilities are used to transform organizations, making them more efficient, impactful, and successful.
Welcome back to Your AI Injection. This week, we'll be discussing the ethics of artificial intelligence. We're speaking today with Dr. Brian Green. Dr. Green is the director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University. His work focuses on the relationship of ethics and AI. Dr. Green received his doctoral degree in ethics and social theory from the Graduate Theological Union in Berkeley.
Thanks, Brian, for coming on the show. How did you wind up spending your time on AI and ethics?
Dr. Green: Sure. That's a great question, because it was not at all apparent when I got into the field. What happened was I applied for about 50 various jobs in ethics, and none of them hired me. Nobody was interested in technology and ethics at the time, but there was one place that was interested in the ethics of technology, and that was the engineering school at Santa Clara University, where I work now. The engineers there had been thinking about these technological ethical issues that they had seen appearing for years and years, and they basically wanted to develop more classes on this. So I started teaching one class, which was on climate change and ethics. Then I started teaching another class, which was on bioengineering and ethics. And then I picked up a computer science and ethics course. It just kind of snowballed from there, and pretty soon I was working at the engineering school and also at the Markkula Center for Applied Ethics, which is where I am now as the director of technology ethics.
Deep: In what ways are you seeing that AI can be unethical, maybe intentionally or unintentionally?
Dr. Green: One of the things I would say about artificial intelligence is that it can be just as unethical as anything that people apply it to. Right? There are lots of bad things that AI can be used for. I mean, you know, lethal autonomous weapon systems are definitely one.
That's getting a little bit of play in the press right now. There's also all sorts of information that AI can really vacuum up and go through. I mean, one of the main things that we use machine learning for is just sorting through huge amounts of data. And when you sort through that data, you can discover what kind of things we're vulnerable to on, you know, kind of a population level. You can classify people into groups, and you can say, this person is scared of terrorism, let's market to them with a whole bunch of stuff that'll make them scared so that they will want to buy our stuff. Or let's direct, you know, certain sorts of videos that we think are really interesting to them, and they may not be truthful in any way; they might be conspiracy theories. So there are plenty of ways that AI can go wrong. But I think one of the important things to remember is that every way AI can go wrong also indicates a way the AI could go right. So it's a challenge, but it's also an opportunity to turn it around and actually use it for a beneficial purpose.
Deep: Let's maybe take an example. Most of our listeners are probably familiar with this older YouTube scenario, which I think comes from a paper that came out a few years ago, where if you started out with a Hillary Clinton campaign video on the political left, or if you started out with a Donald Trump campaign video on the political right, and you just followed the number one recommendations going through the system, YouTube would just sort of take you down the rabbit hole of bizarro world on either side. On the left, you would eventually wind up watching videos on all kinds of 9/11 conspiracies, and if you wind up on the right, you know, you wind up in all kinds of other crazy places. So how do you think something like this happens at a company that used to at least have a motto of "don't be evil"?
Dr. Green: Yeah. I mean, it's maximizing for a certain type of value, right? They're optimizing for showing people the thing they're most likely to click on, the thing that they want to see but don't even realize that they want to see. And the way to really keep people watching is to give them something more extreme than the last thing that they saw. And you're right, it probably wasn't intentional in the first place, but that's ended up being what happens, because they were just trying to maximize for eyeballs watching the system. And so the system said, this is a video that gets a lot of people watching it, and then we're going to follow it with another one that also is very attractive, and it just goes from there.
Deep: I think your theory on how this happens feels right to me. I also just know a lot of people that work at Google and Facebook, and I have yet to meet somebody who intentionally sets out to do nonsense. You know, people are generally well-meaning. But I can also see there's a desire within management circles to give your teams clear objectives to optimize against, and engagement is easy to define. It's very clear, it's easy to give bonuses against, and all these sorts of incentivization strategies become really crystallized in that scenario. So how should they think about that from an ethical vantage?
Dr. Green: That's a huge question, right? These are the kinds of things that are easy goals for people to set. But the more difficult goal is the one that we really need to be aiming for, which is: how do we actually have people watch YouTube and have something good come from it? Not just good in terms of the company making money, but having the person come away perhaps more educated, or perhaps more entertained, or happy with themselves. There are so many different things that could be optimized for, and that then becomes really difficult to measure against, like you were just saying. It's one thing to measure how long a person's eyeballs are watching YouTube videos. It's another thing completely to measure whether they come away more educated or less depressed, you know, in a better psychological state, or whatever it is that you might be aiming for.
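To make that contrast concrete, here is a minimal sketch, in Python, of how a recommender's ranking objective could be broadened beyond raw watch time. This is not anything YouTube actually does; the signal names, the 0-to-1 scales, and the weights are hypothetical assumptions chosen purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class CandidateVideo:
    video_id: str
    predicted_watch_minutes: float     # engagement proxy a recommender already estimates
    predicted_informativeness: float   # hypothetical 0-1 signal, e.g. from human ratings
    predicted_wellbeing_impact: float  # hypothetical -1..1 signal, e.g. from user surveys

def engagement_score(v: CandidateVideo) -> float:
    # The "easy to measure" objective discussed above: watch time alone.
    return v.predicted_watch_minutes

def blended_score(v: CandidateVideo,
                  w_engage: float = 0.5,
                  w_inform: float = 0.3,
                  w_wellbeing: float = 0.2) -> float:
    # Engagement still counts, but harder-to-measure goods get explicit weight.
    # The 10x factor just puts the 0-1 signals on a scale comparable to minutes;
    # the weights themselves are value judgments a team would have to own.
    return (w_engage * v.predicted_watch_minutes
            + w_inform * 10 * v.predicted_informativeness
            + w_wellbeing * 10 * v.predicted_wellbeing_impact)

candidates = [
    CandidateVideo("conspiracy_clip", 22.0, 0.05, -0.6),
    CandidateVideo("science_explainer", 14.0, 0.90, 0.4),
]

print(max(candidates, key=engagement_score).video_id)  # conspiracy_clip wins on watch time
print(max(candidates, key=blended_score).video_id)     # science_explainer wins on the blend
```

The hard part, as Dr. Green notes, is not the arithmetic: it is producing trustworthy estimates for the harder-to-measure signals at all, and choosing the weights, which is itself a value judgment.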
Deep: Maybe I'm reading between the lines a little bit, but it feels like engagement is sort of a morally neutral thing on some level, and then some of the other things you mentioned, like happiness or becoming more educated, are not morally neutral, they're morally positive. Are you saying it's not enough to just be neutral and let the algorithms fall where they may, but that you have to actively pursue something positive?
Dr. Green: I think you're really hitting the nail on the head, which is that if you don't actually pursue something positive, then you end up not doing something positive. That's the really deep question that has to be examined there, because any way that we spend our time is making an implicit value judgment, right? It says this is a good way for you to be spending your time. The way we spend our time is how we literally spend our life, because we have a limited amount of time. And so the question is, how are we going to make sure there's actually something good that happens with that? If we don't pursue something good, then good things don't happen, and in fact bad things start to happen, as we've seen with the spread of misinformation and the radicalization of people in society. The social fabric of the United States, for example, has just been kind of torn apart for years now, based on the fact that these recommendation algorithms have been pulling us in directions that are literally designed to do that in a lot of ways. Not intentionally designed, but designed that way with the side effect of pulling us off into these different groups where we find people who all agree with us. And then when we stop looking at that and start looking at everybody else who's actually around us, physically in society, we have to figure out how we're supposed to get along with each other when we don't believe the same things anymore.
Deep: It feels like the idea of optimizing for engagement wasn't only a function of the ease of measurement. It's also just directly tied into the business model here, and the same thing with Facebook and other tech companies that are incentivized ultimately by advertising. You can sort of teach an ethics class at some company, but if there's a diametrically opposed reality that these folks are operating in, which is that we just have to maximize revenue, it feels to me like you have to go and address those root issues to even sit in a place where you have the luxury of picking an objective function that's morally positive.
Dr. Green: You're exactly right. The way I would look at it is that if you want to have a sustainable business model, the first rule is that you don't destroy the preconditions for your own existence. If YouTube or any social media site is creating this algorithm, it's literally pushing the way people view things, you know, changing the way we have a perspective on the world. The first thing they need to do is not destroy the society that they operate within. Right? Yes, it's great to have eyeballs on our website, but only if the people don't get radicalized and then have a civil war that gets our headquarters burned down or something like that. And so there are these preconditions that we have to think about, and it's too easy to forget them. It really is easy to forget them, because things have gone right for a long time, so we forget to think about what can go wrong. We really need to remember that things can go wrong and make sure that doesn't happen as, you know, these recommendation engines are working. Just remember the preconditions for the existence of the company that's making that material: you need to have a country with laws, you need to have a country where the people can get along with each other, those sorts of things. It might be really hard to figure out how to make an algorithm that reflects those kinds of preconditions that you need to have in place, but if we don't do it, then we end up facing potentially really bad situations.
Deep: So how do these kinds of conversations typically go inside some of these tech companies, from your vantage? Because everyone nods and says, yes, obviously we don't want to destroy society. I mean, that seems way too high level for them to do anything with, and at the same time they go back and optimize for engagement, because that's what Facebook does.
Dr. Green: Yeah. So that's a great question. It's fundamentally the operationalization question. Tech companies are getting these principles. They've figured out, okay, ethics is important, we need to be fair, we need to protect privacy, and so on. But the real question is, how do you make those principles actually mean something in the product design process? This is actually a place where we at the Markkula Center worked on this, four years ago now. We worked with X, you know, part of Alphabet, their kind of moonshot lab. They contacted us; somebody from X said, hey, how do we actually operationalize this stuff? You're saying everybody needs to talk about it, but how do we make that happen? And we said, hey, if you let us work with you, we'll figure out how to make that happen. So it was Shannon Vallor, who's a philosopher of technology, she's at the University of Edinburgh now, and then my colleague Irina Raicu and I. We worked on these materials and came up with a toolbox of principles and lenses for how to look at these issues, and we have those resources available for people to look at. Oh, so this is how you operationalize ethics: you think about the risks. You think about what a post-mortem would look like if something went wrong. You think about similar cases in other technologies that looked like this one and how they operated in society. You remember the ethical benefits of the creative work, and so you really focus on the benefit that you derive from it. You think about what bad people could use your technology for if they decided to abuse it and use it to perhaps harm other people. And, you know, the toolkit has several other tools in it also. All of those help during the product design process to think about how we actually talk about these things. In addition, I have a couple of reports with the World Economic Forum on Microsoft and IBM that came out as case studies on the responsible use of technology: the Microsoft case study and the IBM case study. Each one of those goes through what Microsoft and IBM are doing right now to try to develop an ethical culture and enhance their ethical design process and all that sort of thinking that's going on at their companies.
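For a feel of what "operationalizing" might look like day to day, here is a toy sketch of an ethics-review checklist that gates a design review. It is not the Markkula Center toolkit itself, just an illustration loosely echoing the tools Dr. Green lists (risks, post-mortems, similar cases, benefits, abuse cases); the question wording and the gating rule are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class EthicsCheck:
    question: str
    answered: bool = False
    notes: str = ""

@dataclass
class DesignReview:
    product: str
    checks: list = field(default_factory=lambda: [
        EthicsCheck("What are the ethical risks, and who bears them?"),
        EthicsCheck("Pre-mortem: if this failed badly, what went wrong?"),
        EthicsCheck("What similar technologies exist, and how did they play out in society?"),
        EthicsCheck("What benefit does this create, and for whom?"),
        EthicsCheck("How could a bad actor abuse this, and what deters that abuse?"),
    ])

    def ready_to_ship(self) -> bool:
        # A simple gate: every question must have been discussed and documented.
        return all(c.answered and c.notes.strip() for c in self.checks)

review = DesignReview("video recommender v2")
review.checks[0].answered = True
review.checks[0].notes = "Risk of amplifying extreme content; affects users and the wider public."
print(review.ready_to_ship())  # False until the whole checklist is worked through
```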
Deep: So I'm curious, how well are these sorts of principles and methodologies that you're trying to bring into these companies received, given that most engineers really just want to build? We don't usually think about all these other things, because they just feel fanciful or far off.
Dr. Green: What I would say is that it completely depends on the company. The corporate culture is really important for figuring out how to integrate these sorts of things. Some corporate cultures are kind of naturally interested in ethics and some corporate cultures are not, and a lot of that comes from the leadership of the company. Is the leadership of the company actually on board with thinking about ethics, or do they not have time for ethics? Do they not want to put the resources into it? And you're right, engineers do want to build, but I think engineers also want to build a world that's better, not a world that's worse. Your product is having an effect on the world, and fundamentally that effect needs to be a good one, because if you just create a product that does bad stuff, people are going to call it a bad product.
Deep: Well, I guess I would push on that a little bit. That's not what I see. I don't see most engineers wanting to build bad stuff; that's kind of an easy case. Most engineers don't want to build something that overtly goes out and kills people. But most engineers, I would argue the vast majority that I've ever met, and the vast majority of cultures they're operating within, and the vast majority of the tech industry, with the exception of maybe health care and a few other places, they're building morally agnostic hammers. You know, you can take a hammer and go build a house for a poor person that needs it, or you can take a hammer and bludgeon somebody to death. Engineering by nature is so compartmentalized that there is no morality associated with it. If you're building a compiler, where's the morality there? There isn't any. So I would beg to differ that engineers are actively going out of their way to pick things that positively impact the world. I think we might tell ourselves that story, but even that's in a minority of cases.
Dr. Green: You're right, there's a certain number of engineers who just want to build a product that works, and that's fine. But somebody runs that team, somebody is paying for it, and ultimately, you know, the value is in there. You can say that it's just one little piece of technology or one little algorithm or something like that, but ultimately somebody is going to buy it because it does something that they want to have. If that's a good thing, that's great, and if it's not, then that's a bad thing. I mean, it's not like anybody forces anyone to get radicalized on YouTube, right? Everybody does that completely voluntarily; it just happens that the algorithm does that. And I'm sure that there were some people at YouTube who were very, very pleased with themselves when they designed that algorithm that was, you know, getting all the metrics that they wanted. But once again, engineers have to live in the society that they create. There are also organizations like the Partnership on AI; at the very first Partnership on AI meeting, one of the tech CEOs got up there and said, you know, we have to live in this world that we're creating. And that's ultimately what it comes down to. We are creating the world around us. We're automating human decision-making processes through these algorithms, and by doing that, we are generating the world that we live in. So if we want to live in a better world, we need to make better algorithms. And if we want to live in a worse world, which hopefully we don't, but if we just happen to, you know, mess up those algorithms, then we're going to end up living in a worse one.
Deep: You're listening to Your AI Injection, brought to you by xyonix.com. That's x-y-o-n-i-x.com. Check out our website for more content, or if you need help injecting AI into your organization.
It feels like just talking about ethics isn't sufficient. It feels like there's much deeper soul searching that has to happen. And I think what I'm hearing from you is that you have to really actively go towards the light in a very obvious way.
Dr. Green: Absolutely, that's exactly the point. As you're saying, maybe the way to think of it is that a lot of engineering is agnostic, or it's unconsciously involved with ethics. So what we're really trying to do with the work that we do at the Markkula Center, whether it's in the Ethics in Technology Practice toolkit or whether it's with the papers that we wrote with Microsoft and IBM, is to say: all this unconscious stuff has been going on for so long, let's make it conscious. Let's talk about it. Let's bring it out into the world. Because actually thinking about something generally will get you a better answer than just assuming unconsciously that things are going to work out on their own. It's moving from this kind of amorphous idea that, you know, ethically things will work out, into a "no, let's actually talk about it, let's figure it out." So for example, some of the things that they do at Microsoft and IBM is they'll just sit down and talk about a new product. They'll have a workshop that goes through the product and asks: what exactly is this product doing? How does it operate here? How is this going to affect people? Microsoft also has a couple of tools. They have one that's called the Envision AI workshop. They have another that's called Community Jury. Community Jury is basically: you take your product, after you've been thinking about it for a long time, and then you take people from the community and ask them questions. It's kind of like a focus group, but you try to get their response: how do you feel about this technology? There are going to be difficulties there with conveying information back and forth between folks who are heavily on the technical side and people who might not have that technical background, but they've found this to be worthwhile, because it ends up indicating to them how the public is likely to react to a technology that they're creating. It also really goes into expanding the ethical circle, which is one of the tools in the Ethics in Technology Practice toolkit. Expanding the ethical circle means that you just think bigger. What's this going to do to the users of the technology? What is this going to do to the people who are indirectly affected by the technology? So is the user going to get angry from watching all these angry videos and then go and be angry with their family or something like that? If you keep a bigger perspective, once again, you're right, this kind of goes against the very focused mindset that comes along with engineering a lot of the time. You need to maintain that focus, obviously, but you also need to think bigger picture. You need to think: what is this technology being used for? How is it going to operate in society? What is it going to do on a social level? What's it going to do to relationships? What's it going to do over long periods of time? Because it takes a long time to turn somebody from, you know, your average person into someone who's enjoying watching these radicalized videos on YouTube, for example.
Deep: I really like this idea of expanding the ethical circle, the higher up and the further out you go. If I apply that to some scenarios where it seems like things have really gone off the rails, you know, there are a number of pretty famous projects where things went off the rails, like the Twitter bot that Microsoft put out that ended up going on these racist, Nazi-spewing rants.
Dr. Green: That was one of the things that inspired Microsoft to realize they actually needed to do this.
Deep: Yeah, I think the folks at Microsoft really tried to learn a lot from that. And I like the idea of jumping out a notch, because those are things that you might very well have caught with a pretty simple conversation, right? Oh, you trained this on everything on the internet? Yeah. And now it's just going to say something in response to somebody else, because it's a GPT-3-like response generator? Yeah. Okay, well, did you exclude all of the horrible things people say on the internet? Uh, no. How would we do that? Okay, so do you think it's going to say something horrible if I ask it about this? Yeah, it probably would. Okay, well, what does that mean? Nobody went through that exercise, obviously, otherwise they would have been like, let's maybe hold this back a little bit and think about this. A lot of our listeners, you know, are either running machine learning teams inside of organizations or are trying to acquire or need some machine learning teams, but they're pretty down low in a particular product that maybe does a particular thing. What would you recommend they do from a systematic standpoint? Is there software they should be using, or is it really about starting these kinds of conversations that you're talking about?
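As a concrete version of the "did you exclude the horrible things?" conversation, here is a minimal sketch of screening both training data and model outputs before they reach users. The toxicity_score function is a crude hypothetical stand-in; a real system would use a trained toxicity classifier, and the threshold here is an assumption.

```python
BLOCKLIST = {"slur_1", "slur_2"}  # placeholder tokens; a real system would use a trained classifier

def toxicity_score(text: str) -> float:
    """Hypothetical scorer returning a 0-1 toxicity estimate.
    Shown as a crude keyword check purely for illustration."""
    return 1.0 if set(text.lower().split()) & BLOCKLIST else 0.0

def filter_training_examples(examples, threshold=0.5):
    # Screen scraped text *before* it ever reaches the training set.
    return [ex for ex in examples if toxicity_score(ex) < threshold]

def safe_reply(generate_fn, prompt, threshold=0.5):
    # Screen the model's output *before* it reaches the user.
    reply = generate_fn(prompt)
    if toxicity_score(reply) >= threshold:
        return "Sorry, I can't respond to that."
    return reply

print(filter_training_examples(["a perfectly fine sentence", "something with slur_1 in it"]))
print(safe_reply(lambda p: "here is a friendly answer", "hello"))
```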
Dr. Green: That's a great question. The Ethics in Technology Practice toolkit is aimed at engineers, and those two World Economic Forum papers that I mentioned are more structurally oriented, kind of trying to get the big picture or the governance picture, the business-side picture of it. It really depends on who you are, what your place is in the organization, and how you can do something. And the first thing to do is to care about it, right? You notice that what you're doing has some kind of impact on society. It's either going to help people or hurt people, or your product maybe enables somebody else to do something good or bad. So you want to think to yourself, what is this likely to be used for? Or if you already know how the product's being used, maybe you see that there's a certain usage for it that you didn't expect. Is there some way that you can enhance the good side of that and put some deterrent on using it negatively? You know, a lot of companies do that with an end user license agreement: you're not allowed to use this for XYZ purposes. So there are legal solutions to it. There are engineering solutions: in other words, you look at the product and say, let's think harder about how we're designing this product so that we can really enhance the beneficial side of it and suppress the negative uses. One thing to recognize there is that if you're trying to prevent people from using your product in a certain way, you might actually be losing money, because you're potentially losing a sale. And that's when the leadership of the company really either needs to get on board or is going to reveal that they are not on board. This is something we've seen at companies that we've worked with: they'll say, wait a minute, we're going to lose sales based on preventing people from using our product for, you know, a bad purpose. And if they say that hurts our sales and we don't want it, then they have revealed themselves to be a company with a bad corporate culture and bad leadership, which is going to end up having scandals in the future and things like that. And we've seen this in the technology industry right now: people will leave a company if they feel like the mission of the company is not aligned with actually doing something good.
Deep: Well, that's a really good point, that elevate-the-truth-and-let-people-self-select approach, where, you know, if you ask these questions, then inevitably once it gets to the business-level trade-offs of actually turning some of that down, you can kind of see what's in front of you, I suppose.
Dr. Green: I'll just share a story about a technologist I was talking to once. He discovered he was being assigned to design a chip in a certain way, and he said, oh, this is a really weird design, why would anyone want to do this? He recognized later on that this chip was being used in mass surveillance. Basically it was enabling, you know, collection of all sorts of information off the internet, which he didn't realize he was doing. But when this happened, it really became a wake-up call for him, saying, oh, I can't work here anymore. I didn't realize that I was doing something fundamentally against my values, and I don't want to be a part of this system.
Deep: I feel like we should take one of these scenarios where most people would be kind of obviously on one side and talk it through, because I believe that even the most obvious moral issues in AI are not so obvious. So most people would agree that you don't want machines to make a targeting decision, like who should be killed, and that that sort of serious decision should be made by a human. But now you have to ask yourself some questions. Well, how many more people will die because we don't let the machines make the decision, depending on the scenario? Let's say you've got some intelligence about a person, you've got some facial identification, and you've got a drone that's flying around, and it's got lower latency to make a decision. If you wait the extra half a second or second for information to go all the way across the world, have a human look at it, plan, debate, argue, and then come back, now the opportunity's gone. It feels to me like you are making the choice, even in that extreme scenario, to allow more people to die in some cases.
Dr. Green: What I would say about that is that you're kind of making the assumption that the computer is going to be better than a person.
Deep: Right. Because that's the case.
Dr. Green: Right. But I mean, the whole reason we assume that the human should be there is that we're assuming, hoping at least, that they'll be able to make a better decision than the machine. Maybe in the future that won't be the case anymore, but certainly right now we know facial recognition, for example, has problems. It's getting better, of course, but, you know, there's a lot of imperfection out there. We also know that humans make mistakes too, so we're not saying that humans are going to do the perfect thing either. But I think the idea is that if you have at least the human and the machine working together, then hopefully they'll come up with a better answer than either one of them working alone. And so that kind of partnership between human and machine, I think, is helpful. That's one of the reasons why I think people want to reserve that kill decision for humans to make, because it's not just the human making the decision, right? It's the human actually working in cooperation with whatever their electronic systems are, their surveillance, all their other data. Because fundamentally the drone doesn't know what it's doing. It's a machine, it's a robot, it's just executing orders. So it's very easy for it to make a mistake in terms of understanding or knowing what's the right thing that's going on. On the other hand, at least humans can understand that much, right? We can at least understand that it's a significant decision and we have to be careful about it, even if it ends up still being a mistake.
Deep: Yeah, I think that's a really good point. The man-machine synergy is really important, because the humans can always be armed with the info that the machine has. Maybe they can't make the decision fast enough, but at least they've got a history, they've got the statistical analysis on the efficacy of the machine, and they can interpret it in a sober-minded way. But there exists a scenario where latency really matters, and going back to the humans genuinely jeopardizes somebody's life. It's kind of like the trolley car example, you know, the ethical example where there's a human-perceived difference between actively saving one person on one track and intentionally killing another person. People in that context hesitate, even though there might still be a clear moral decision. I think we're saying it's sort of a similar problem, is that right?
Dr. Green: There are certainly situations where the human is not going to be fast enough. We've already seen this right now with cybersecurity: cyber attacking can be automated, and cyber defense can be automated. So we have these automated systems just attacking and defending against each other all the time, and humans sit back and kind of observe and make sure the systems are still operating. But they're not necessarily, certainly on the defense side of things, sitting there typing keys and actively defending all the time, because it's simply not possible; it's too fast. So yes, at some point we have to delegate these powers to automated systems, and we have to start asking ourselves, do we trust the system? This is one of the big things with Microsoft and IBM: when they talk about doing ethics in technology, they're really talking about trust. They want to have a product that people trust. They want to have a company that people trust. And of course, ultimately that's a business decision; they want to be trustworthy so that people will want to use their products. One of the parts of that, of course, is that you can't just look like you're trustworthy, right? Everyone's already going to be skeptical of you, and once they see through it, then they'll say, oh yeah, we always knew they weren't really trustworthy. What these particular companies that we worked with have done is they've said to themselves, in a very deep way, the only way to have people's trust is to actually be trustworthy. Not every company is like that, right? And certainly people are going to probably disagree and say, you know what, I don't trust Microsoft or IBM anyway, but that's the reasoning process that they went through. They said, if we want people to trust us, we actually have to be trustworthy.
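The automated-defense example suggests a general pattern: let the machine act inside a latency budget humans cannot meet, but log every action for human audit afterwards. Below is a purely illustrative sketch of that act-then-audit loop; the event fields, the block/allow policy, and the budget are hypothetical.

```python
import time
from collections import deque

REVIEW_QUEUE = deque()           # humans work through this asynchronously, after the fact
LATENCY_BUDGET_SECONDS = 0.05    # assumed budget; far too short for a human in the loop

def automated_response(event: dict) -> str:
    # Stand-in policy: block traffic that matches a known-bad signature, else allow.
    return "block" if event.get("signature_match") else "allow"

def handle_event(event: dict) -> str:
    start = time.monotonic()
    decision = automated_response(event)
    elapsed = time.monotonic() - start
    # Every automated decision is queued for human audit, preserving oversight even
    # though the action itself happened faster than a person could have reacted.
    REVIEW_QUEUE.append({"event": event, "decision": decision, "elapsed_s": elapsed})
    if elapsed > LATENCY_BUDGET_SECONDS:
        # In a real system this would alert an operator: the automation itself is too slow.
        REVIEW_QUEUE.append({"alert": "latency budget exceeded", "elapsed_s": elapsed})
    return decision

print(handle_event({"src": "10.0.0.7", "signature_match": True}))  # block
print(len(REVIEW_QUEUE))                                           # 1 item awaiting human review
```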
Deep: Sort of switching tacks a little bit, what do you think the role of transparency is in how these decisions are made? Companies have a natural need to be sneaky or private about how they do things. So let's take Tesla as an example. Tesla has a fairly sophisticated self-driving car setup, and you've got this scenario; I think it happened a few years ago. There was a semi-truck driver who lost their brakes going down a steep hill, and there was a family having a picnic or something, right, in one of the runaway ramps. So the truck driver had to decide whether to kill the family or keep going and kill themselves. To what extent does Tesla owe the public transparency around the training process, around any heuristics that might be involved, in those kinds of scenarios?
Dr. Green: That's a really good question. There's a certain amount of secrecy that, yes, is completely understandable. You know, if you're creating a new product, you don't want to go around advertising it to everybody until it's ready; then you want to advertise it. You don't want people to steal your intellectual property, and all these other sorts of things that are really kind of important. At the same time, if you're being too secretive, nobody knows what you're doing, nobody knows whether they should trust you, and so this becomes a real problem. There has to be a balance where people can say, I trust this company because they're sharing enough information about themselves, but at the same time it's not harming the company by giving out so much information about their products that everyone's either copying it or, you know, improving upon it and making a more competitive product, things like that. This is a balance that's being struck right now. Even just four years ago, nobody wanted to talk about ethics openly in Silicon Valley, or in the tech industry more broadly. People didn't want to talk about it; it was secret. If they were talking to us, we had to sign a nondisclosure agreement that meant that we could tell nobody that we were working with them. So it was a major accomplishment when some of them started admitting it, saying, yeah, we're talking about ethics.
Deep: What do you think changed there? Why are they talking about it?
Dr. Green: I mean, that is kind of a fundamentally interesting question, because it's social pressure, right? One of the big things that changed, I think, was a recognition after the election, when everybody had seen how much misinformation and disinformation had been floating around on social media. That was a big wake-up call, and it also tarnished the industry's image. It used to be that the tech industry could do nothing wrong, everybody loved them. And then all of a sudden it's like, oh, people don't love you anymore because you messed up really badly. When something like that happens, it hits you hard. So they started doing some soul searching and recognizing, okay, we've got issues here. We've got issues about social impact, social benefits, social health. And they recognized they needed to think about it and that they needed help, fundamentally, because it's not something that the engineering discipline has expertise in. So they called in sociologists or psychologists or historians or ethicists; they called us in to talk about these things. And pretty soon everybody talked to everybody else and realized, oh, we're all doing this, let's just be public about it. And making it public lets everyone breathe a sigh of relief and go, oh, thank goodness, everyone's having the same problem. Let's talk about this, and maybe we can all work together to make this better. Let's pool our resources, let's think together about how to solve this problem.
Deep: Do you think there's a role for legislation here? Is it enough to just rely on companies' goodwill and a sort of inner desire to do something, or is there a role for legislation, and if so, what does that look like?
Dr. Green: That's the next question, right? What is the role of law in this? Because it's one thing to have ethics; it's another thing to have law. Law is ultimately backed up with violent force behind it, right? They can throw you in jail, or they can fine you money, or those sorts of things. And so I believe law absolutely has a role here. It would be nice, however, if everybody just did the right thing voluntarily. So I think that kind of voluntarily choosing ethical behavior is the ideal, right? Everybody just chooses to do the right thing. Now, we already know that doesn't work. We already know there are bad people in society, and you need to have laws in order to control them. One role of the government is to say, we are delegating control of your profession to your professional organization. So if the American Medical Association says you're not a doctor anymore, or if the American Bar Association says you're not a lawyer anymore, then you're thrown out of the profession; you're not part of it anymore. Engineers are not like that. There are engineering professional organizations, but in the United States they don't generally throw people out. Not every country is the same: in Canada, for example, I was talking to a Canadian engineer and he said the engineering professional associations in Canada can throw you out if you do the wrong thing. And not every professional organization is like that, right? If you're a structural engineer, for example, you have to pass certain tests, you have to have certain licenses. But when it comes to something like AI or computer science, the ACM, the Association for Computing Machinery, can't necessarily disbar you, you know, or whatever the analogous thing would be.
Deep: I think you nailed it. There are some engineering associations where it matters to be a member; structural engineers and civil engineers seem to be the ones that come to mind. But generally speaking, organizations don't really matter to an engineer's career. So it sounds like you're suggesting maybe there is a broader role for elevating people working with AI to a level where, since societal harm is a distinct possibility, there's a professional organization that you must be a member of to be working, or something.
Dr. Green: That's just one solution, right? If you wanted to have a legislative solution that was still pretty light-handed, it could be something like: every company has to have an ethics committee, and that ethics committee has to have XYZ people on it, whether it's inside the company or outside the company or whatever, and they need to look at your product and approve it before it gets produced. Or maybe you have a third-party agency that evaluates algorithms or something like that. There are lots of ways to do this that aren't necessarily really heavy-handed, where the government comes in and, you know, stomps on your algorithm or whatever and tells you exactly what to do. That's not really the way that people want things to be, right? Engineers don't want to have the government telling them what to do. So the question is, how do we find the right approach that's not heavy-handed but still achieves the good that we're seeking, which is to have our society actually do well and flourish? Where's the balancing point? I think what's happening right now in the United States is really a paralysis of analysis. We've got so many options, but nobody knows which is the right approach. So as we are right now, it's just kind of self-governance.
Deep: Final question. First, I want to thank you a bunch, Brian, for coming in and talking to us about AI ethical challenges. Fast-forward 10 years: what does the world look like, from a worst-case scenario and from a best-case scenario, from your vantage?
Dr. Green: I actually love this question. I ask this question of people all the time. It's the utopia and dystopia question, right? Which is, what kind of future are we building? I think the dystopia of the future is very much that we don't know what's true anymore. There's so much misinformation, there's so much disinformation, nobody knows what's right or wrong in terms of accuracy. And so we start breaking into these groups where we say, well, I don't know what's right or wrong, but I know my group is right. And so this group loyalty kind of takes over. It ends up fracturing us, and we end up in this broken society where everybody is off in their own little world, and they're either fighting people online or maybe even literally fighting people.
Deep: I think you're just describing today, or maybe it hasn't gotten that bad yet, but it sounds very, very familiar.
Dr. Green: It's a trajectory that we might well be on, yes. I think the utopian version is something that we could actually start moving towards if we put enough effort into it, and that is that we learn how to clean up our information system. We figure out better ways of preventing disinformation from spreading, and instead of pushing people outwards, you know, by personalizing their information... there's a difference between individualizing information and personalizing information. Personalizing information fundamentally should be something that helps make a person more of a person. It makes you more personable, it makes you want to talk to other people and have relationships, things like that. We should really think of the quote-unquote personalization that's happening with AI and content right now as an individualization, or a kind of radicalization, in that it's taking you as a person and shoving you out to the side and making you different from everyone else, or at least putting you into a group with other people who think like you. Another word for it would be balkanization, right? Like the Balkans in Eastern Europe, all broken up into little tiny pieces. We can avoid that if we can figure out a way to structure the algorithms instead to bring people together. How do we share things? How do we have a common conception of the world? There's a big value problem that makes that difficult, because all of a sudden you have media that's going to be pushing a certain perspective, and that goes against ideas of free speech, it goes against ideas of freedom of expression, you know, very prized and cherished values in the United States. But how do we actually bring people together? The only way to bring us together is to have a shared world; we have to believe similar things. And so that values question, I mean, this is the reason the problem fundamentally hasn't been solved yet: the values question is there and it can't be ignored.
Deep: That's all for this episode of Your AI Injection. As always, thanks so much for tuning in. If you enjoyed this episode and want to know more about ethics and AI, you can check out a recent article of ours by Googling "How to Detect and Mitigate Harmful Societal Bias in Your Organization's AI," or by going to xyonix.com/articles. Please feel free to tell your friends about us, give us a review, and check out our past episodes at podcast.xyonix.com. That's podcast.xyonix.com. That's all for this episode. I'm Deep Dhillon, your host, saying check back soon for your next AI injection. In the meantime, if you need help injecting AI into your business, reach out to us at xyonix.com. That's x-y-o-n-i-x.com. Whether it's text, audio, video, or other business data, we help all kinds of organizations like yours automatically find and operationalize transformative insights.