Aug 2 / Alessandro Mauro, Caro Robson, Punit Bhatia and Saurabh Gupta

The EU AI Act - Risk Management Approach


In today's fast-paced digital landscape, the introduction of new AI regulations can be both a challenge and an opportunity for businesses. In this episode of the FIT4PRIVACY Podcast, join us for an in-depth exploration of the EU AI Act with experts Alessandro Mauro, Caro Robson, Punit Bhatia, and Saurabh Gupta. Our panel dives deep into the implications of this landmark legislation for businesses, discussing the risks and real-world challenges associated with AI adoption. You'll gain comprehensive insights into what the EU AI Act means for your business. From understanding risk classifications to managing these risks effectively, our experts provide practical advice to help you navigate this complex regulatory landscape. We'll also explore compelling case studies that highlight the challenges businesses face and discuss enforcement and governance mechanisms under the Act.

Transcript of the Conversation

Punit 00:00

The EU AI Act is here. And we all know that the EU AI Act is all about a risk-based approach to regulation. That means that, based on the risk an application carries, the number of applicable requirements also changes. So how do you manage risk in the context of the EU AI Act? We had a wonderful panel recently: me, along with Caro Robson, who is actively involved with governments in advocating on AI, privacy, and security concerns, and Alessandro Mauro, who is a risk professional. We had an interesting chat moderated by my friend Saurabh Gupta, who kindly agreed to do this. He is also an entrepreneur and an active technologist in the fields of privacy, security, and even AI. So we had a wonderful conversation around managing risk in the context of the EU AI Act, and in this episode, we go through that conversation and understand managing risk.

 

FIT4Privacy  01:04

Hello, and welcome to the Fit4Privacy Podcast with Punit Bhatia. This is the podcast for those who care about their privacy. Here, your host Punit Bhatia has conversations with industry leaders about their perspectives, ideas and opinions relating to privacy, data protection and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.

 

Punit 01:32

And now we get into the second part. And I welcome to the stage, first, our moderator, Saurabh Gupta, who is a dear friend and also a serial entrepreneur. He has been active in the space of creating technology, or Salesforce-based technology, for compliance with privacy, security, and AI legislation, and he is very, very active. So welcome, Saurabh.

 

Saurabh  01:54

Thank you so much for having me Punit. It's a pleasure.

 

Punit 01:57

It's a pleasure to have you. And we are also going to have somebody who has been working with governments, and also in government, including in an Information Commissioner's Office: Caro Robson. So welcome, Caro, very happy to have you.

 

Caro  02:13

Thank you, lovely to be here.

 

Punit 02:14

And then, as we were talking about, how can we miss somebody with vast expertise on risk, who has been a risk professional for, I think, decades now, and has advised and worked in companies to set up risk frameworks? Because if we talk about the EU AI Act, we also want to understand how the Act flows and how our current risk management practices allow us to leverage them. We have Alessandro Mauro. Welcome, Alessandro.

 

Alessandro  02:42

Thank you for having me. It is a pleasure and an honor to take part in this interesting conversation today.

 

Punit 02:49

Thank you so much. And now I hand over to my dear friend Saurabh, so that I can switch to the panel and also talk a bit.

 

Saurabh  02:58

Thank you so much, Punit, and nice to see you again, Alessandro and Caro. Thanks, everyone who is joining us today. The thing that is really interesting is that you have a panel with extremely diverse experience, people all of whom are needed for these things to become real. You have folks who represent the governance point of view, the ICO officer's perspective, and some of those areas from a policymaking perspective. You have folks who do risk management for a living and who look at how large and midsize businesses can address some of these things and make sure that they are compliant, while also balancing business imperatives. And then, of course, you have Punit, who is one of the facilitators, authors, and thinkers in this space and has spent a lot of time in data privacy, and now AI. And finally, I come from the implementation part of the world; I am one of those people who gets handed these things with a "go make this happen." So you will see a juxtaposition of how these new laws come into place, and what kind of churn happens around them, from a variety of different perspectives. So I'm looking forward to how we move forward. Before we start, Punit, do you want to give us a really quick recap of your session? I attended part of it, but I think it may be useful to set the stage for our folks. And for anybody who is interested, I'm sure it was recorded and you can get a copy.
Punit 04:32

Sure. In Session 1, we didn't go deep into the AI Act, but we got into why we needed the EU AI Act. One of the things we talked about is that AI is not like other systems, because we often say systems and automation have been around for long, so AI is just an extension. No, it is not just an extension when we talk about generative or what we call discriminative AI. The challenge is enormous, and it creates challenges at many levels: social, moral, ethical, political, and for jobs. To address that, a law was needed, and there we have this law, the EU AI Act, which does not impose all the requirements on all systems but says: let's take a risk-based approach, that is, grade systems by risk. So we will prohibit certain systems, and for most systems there will be no requirements. For the remaining systems, there will be transparency obligations if they are in the limited-risk category. And if they are in the high-risk category, then all the requirements around explainability, data quality management, data governance, and the whole lot of requirements being talked about in the EU AI Act apply. So it's not a holistic, take-it-or-leave-it application; it's a risk-based approach. And that's where our panel comes in. Another thing we talked about is that it's not the be-all and end-all solution. It is going to allow you to manage the AI-related risks and requirements on your systems, but in terms of copyright infringement, liability, or even IP infringement, the other laws will also need to be upgraded. Because we all know that if, for example, Unilever builds a product, there are a lot of checks and a lot of laws that apply to that product. Similarly, we need to think about how we manage that in the software world. And last but not least, we also discussed that if you're thinking of EU AI Act compliance, the time is now. Don't think there are 2 years or 6 months before you need to start; we all have the lessons from GDPR. We thought there were 2 years, and then in the last mile we learned it was not enough, and we are still working on it. So that's it in brief. Of course, if people are interested in understanding the EU AI Act, there is a lot of information available on our website, and they can take advantage of that.

Saurabh 06:56

That's a great recap, thank you Punit. So with that, I'll ask a question of Caro. Caro, I know you come from an incredibly diverse background and have a very seasoned point of view on some of these laws and regulations. So when you think about the EU AI Act, can you talk a little bit about what it means for businesses, and also maybe get into a little more detail on what Punit was mentioning around the risk-based approach?

 

Caro  07:28

Certainly, yes. So at the moment, it's a 459-page tome that businesses are sort of staring at and wondering what on earth this thing is. So let's get into some of the context and take a step back, and then do a little bit of a deep dive, I think, into some of the definitions, because that's where the real meat of this is, and it's not always obvious from reading hundreds of pages of this legislation. At a high level, it is EU legislation that is both product safety legislation, which is a different approach to the other thing it's doing, which is fundamental rights legislation. And it's free market legislation. So you have this dual approach of protecting rights but also taking a product safety approach. And there will be a lot of technical regulations coming under the regulation itself, the Act itself, that go into more detail, because it's product safety. And it's protecting people in the EU from harms, and the harms being protected against are really vast: economic, social, physical, psychological, environmental, and also harms to fundamental rights. That's important, because it informs the understanding of risk that the Act takes, and it can be quite different from some other definitions of risk in, say, financial services legislation, or from the business risks around AI that we're going to talk about in a moment. So the risk-based approach, at a high level, is that the number of regulatory obligations you have to comply with depends on how you use a system, and to a lesser extent on elements of how a system itself is built; what risks it poses, across that broad range of harms to people and the environment, determines the level of obligations you need to comply with. So that's a high level on the Act. The crux of all of this, I think, is: what is AI? We're all sat here talking vastly about AI, but what actually is it in the Act? Because if you can get your head around that, for me, that's the key to understanding not just the Act itself, but the business risks involved with AI and how that's different from traditional software and products. So I have an extract in front of me, Article 3: an AI system is a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. So I hope that's clear. No? It's a really difficult definition to get your head around. But there is a word buried in that meaty, dense definition that's doing all the work and is really important. And the word I want to give a huge shout-out to and to champion is the word "infers", because that's the difference. That's the key. It's a system that infers outputs. So what does "infers" mean, I hear you ask? Well, it's not defined in the regulation; Recital 12 gives us a little bit of a hint. We hear that a key characteristic is the ability to infer: the capability refers to the process of obtaining the outputs and deriving models from inputs or data, and the capacity of an AI system to infer transcends basic data processing and enables learning, reasoning, and modeling. Which is lovely, and sounds fantastic, but doesn't help me as a non-technical person. So I turned to business and industry.

And a wonderful engineer at Oracle called Geoffrey Erickson has a fantastic definition, which is: AI inference is when an AI model that has been trained to see patterns in curated datasets begins to recognize those patterns in data it has never seen before. That, for me, is a lightbulb moment, because that's the difference. These systems have been built and trained on data that has been gathered, scraped, collected, and curated at vast environmental and human cost; thousands, millions of people label and tag this data. But that's the data that is then informing the outputs for the end user. So it's different from a normal system, where you would input all your data into a CRM system and the machine would apply some rules. Now the outputs are coming from other data that you've not seen, that isn't from you, and that you've not got oversight of. And I think once you appreciate that about the AI supply chain, that's where applying the Act to your systems, but also applying basic risk management in a business context, starts to take hold. That's a very long description, so I'm going to stop talking and let other people come in. But that, for me, is the key: AI is about getting an output from data that isn't just what you put into it.
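
To make that "infers" distinction concrete, here is a minimal sketch in Python. It is purely illustrative: the toy dataset, the churn framing, and the model choice are our own assumptions, not anything drawn from the Act or from the speakers.

```python
# A minimal sketch of the train-then-infer distinction described above:
# outputs on new inputs are shaped by patterns learned from training data
# the end user never sees. Dataset and model choice are arbitrary.
from sklearn.linear_model import LogisticRegression

# "Curated dataset": toy training set (hours of weekly usage -> churned?)
X_train = [[1.0], [2.0], [8.0], [9.0]]
y_train = [1, 1, 0, 0]  # 1 = churned, 0 = retained

model = LogisticRegression()
model.fit(X_train, y_train)      # training: patterns are baked in here

# Inference: the model classifies an input it has never seen before,
# based entirely on data the end user had no oversight of.
print(model.predict([[1.5]]))    # -> [1], inferred, not looked up
```

The point of the sketch is the last line: the output is not looked up from the user's own input but inferred from patterns baked in during training, which is exactly why the supply chain behind the training data matters.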

 

Saurabh  12:42

Yeah, I think this is great. One of the takeaways for me is that there are really two aspects to it. One is product safety, and the second is the fundamental rights that we want to protect, at least for EU residents, right? And this is great to know. I know you've talked about risk; I'm curious, Punit, to hear from you, if you want to dive a little bit deeper into what the risk is, and what the classification of risk is, when we think about the EU AI Act.

 

Punit  13:13

Okay, at a broad level, risk is a very, very vast topic. But from a product perspective, or a system perspective, combining it with the fundamental rights risk, what we look at is four categories of risk. The first is when an AI system poses no risk, or a very limited risk, to individuals and the freedoms of individuals. Let's say a transcription service, for example; then we would say that's a low risk, and it does not carry obligations. Now, I'm not saying a transcription service is necessarily going to be rated as that, I'm just giving an example; then there are no obligations. But then, as soon as you say there is an Alexa, and she can interpret something on your behalf, and as you see, I already said "she" while it's a technology, I'm already starting to refer to AI as a human being, that's where it can create some risk. And there we say there are transparency obligations. So, limited risk, and there are transparency obligations, meaning explaining to people what it is and what it does, and keeping a responsible check. But then we get into high-risk systems. Let's say we are going to do some scanning of your voice and make a judgment: are you having a cold or not? And then make a recommendation to you, saying: hey, you're not sounding okay, is everything okay, do you want me to order medicines for you? Now that's a risk, because if it's doing that based on my voice, I don't know in what context it will interpret it. In those cases, you have major requirements: for example, data governance, testing, penetration testing, quality checks, the ability to log everything, maintaining the logs, technical documentation, having a risk management practice, and also conformity assessments and conformity checks, and even registration with authorities. That's a long list of requirements, to say the least. But then we also get to risks which can impact society at large, or systems that make decisions for a country at that level. Those are completely forbidden systems; they are not allowed. If you look at it in terms of the 80/20 rule, I think 80% of your current systems would fall into the category of limited risk or no obligations, and maybe 20% would have the full application. That's my judgment of current systems, not of the future, and it's just a judgment. And then maybe 1% or 2% of systems would fall into the prohibited category. So that's how we are looking at risk. And it's still being evaluated; people are still working out which systems map into which category. For example, I was recently talking to a healthcare company, and they do AI. If you apply Caro's definition of AI, their system fits, but they say: oh no, our system is offline, so there is no AI. And it has nothing to do with offline or online. The simple test is: based on the learnings the system has, does it apply itself to new data and make a judgment? That is the inference aspect of it. And then it falls into a risk category.
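
As a minimal sketch of this four-tier logic, the snippet below maps a few hypothetical use cases to tentative tiers. The assignments are illustrative assumptions only, not legal classifications under the Act.

```python
# A simplified, illustrative model of the four-tier logic outlined above.
# Tier assignments for the example use cases are hypothetical.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited: may not be placed on the market"
    HIGH = ("full obligations: data governance, logging, documentation, "
            "conformity assessment, registration")
    LIMITED = "transparency obligations: tell people what it is and does"
    MINIMAL = "no specific obligations under the Act"

# Hypothetical inventory mapping use cases to a tentative tier.
inventory = {
    "meeting transcription": RiskTier.MINIMAL,
    "customer-facing chatbot": RiskTier.LIMITED,
    "voice-based health screening": RiskTier.HIGH,
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
}

for use_case, tier in inventory.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```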

 

Saurabh  16:21

True. I think that's an excellent overview. So my takeaway, again, is that he basically talked about four categories of risk, with the prohibited ones at the very top, probably things like certain military systems and the like. But typically, to add a comment from the business side of this: if you're crunching data, and you're not driving autonomous cars or launching rocket ships, and you're just maybe summarizing call transcripts for business information, summarizing emails, doing analysis like sentiment detection, intent detection, things like that, what I've heard is that there is a possibility they may fall into the lower to moderate risk categories, depending on the potential of these things to do harm. That's kind of what you're trying to balance as a business person in the room. So, speaking of risk: Alessandro, I know you've spent a significant amount of your career in risk management across a wide diversity of industries, particularly with an emphasis on commodities and other markets. So I'm very curious to hear, from your perspective: how do you manage risk in general, and particularly in the context of the EU AI Act?

 

Alessandro  17:39

Yes, yeah. So, talking about risk management, we are waving in front of us something big and huge that often scares us: AI. It is so big that the European Union came up with this law, and the United Nations also decided to create a body on AI, which came out at the end of last year with its first draft report, which also brings risk management into the picture. My point of view is that this is a big challenge, something that can really be a revolution, but we shouldn't forget where we're coming from. In the end, what is risk management? Human beings have always been doing risk management; it's about the instinct of survival. We do risk management because we want to survive, and this has gone on for millennia; that is evolution. But coming closer to us, 30 or 40 years ago, we started putting in place rules, best practices, and guidelines, and risk management became a discipline. Around the year 2000, the Institute of Risk Management came out with its risk management standard; then in 2009, ISO, the International Organization for Standardization, came out with a standard framework for implementing risk management, known as ISO 31000; and COSO is a leader in enterprise risk management, and so on. So what is the message here? We have frameworks, we have guidelines, and we shouldn't forget them. Many people have worked on these frameworks, and a number of organizations are using these approaches to address risk. My view is that we should use these approaches in front of AI as well. To give more content: ISO 31000 lays out the whole process for assessing and addressing risk. You start with the risk assessment: you do risk identification; after you identify, you do the risk analysis; then you do the risk evaluation. And when your framework is clear, you can go to risk treatment: what is my risk response? Then you have all the bases covered. I feel we have to do all these steps, and the Act we're talking about goes in this direction; it is a risk-based approach. But it's still theory; it gives examples, but then you have to adapt it, to apply it to your reality, which can be an NGO, a business, and so on. Especially on the risk identification part, you have to be very, very attentive, because that is really where you lay the foundations; it is the first step. And identification is adapted to the kind of risk. I have done a lot of market risk management in the past; there, you have to study the contracts, come up with the exposures, and understand the impact on your bottom line. Another case, which is probably even more interesting and closer to AI, is cyber risk. In cyber risk, one of the main approaches used for identification is not just to understand your systems and how your IT systems are connected; it is actually to do white-hat hacking, to do penetration testing. You pay someone to try to enter your systems and see if they can be broken. For me it's fascinating, and it's really about a black box. This is why I'm making this example: because AI is a black box too. We put something in the prompt and we get the results.

And then we don't know if there are hallucinations and so on. I don't want to go too much into detail, but this is my message: before we even go and say what the risk treatment is, whether the risk should be avoided or whether we should transfer it, we should first of all do a proper risk identification of these new sorts of risks.
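
For readers who want the ISO 31000 flow Alessandro sketches (identification, analysis, evaluation, treatment) in a more tangible form, here is a small illustrative sketch in Python. The scoring scale, the threshold, and the example risks are invented for illustration; they are not part of the standard.

```python
# A sketch of the ISO 31000-style flow: identify -> analyse -> evaluate
# -> treat. Field names and scoring are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str            # identification: what could happen?
    likelihood: int      # analysis: 1 (rare) .. 5 (almost certain)
    impact: int          # analysis: 1 (minor) .. 5 (severe)
    treatment: str = ""  # treatment: avoid / reduce / transfer / accept

    def score(self) -> int:
        return self.likelihood * self.impact  # evaluation

register = [
    Risk("LLM hallucination in client reports", likelihood=4, impact=3),
    Risk("Confidential data pasted into public chatbot", likelihood=3, impact=5),
]

for risk in sorted(register, key=Risk.score, reverse=True):
    # evaluation: compare against a (made-up) appetite, choose a response
    risk.treatment = "reduce" if risk.score() >= 12 else "accept"
    print(f"{risk.name}: score={risk.score()}, treatment={risk.treatment}")
```

The point is the ordering: a treatment is only chosen after identification, analysis, and evaluation have produced a comparable score.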

 

Saurabh  22:56

I think that's a very interesting perspective you bring, particularly that AI is a black box. I think you said out loud something that a lot of the organizations I speak to every week are thinking, because I'm on the AI solution side of the story. And I think a lot of people on the other side of the Zoom call probably have the same question in their mind, that what we are dealing with...

 

Alessandro  23:21

Sorry for interrupting you; I don't know if you agree or not that it is a black box. But then you start reading articles from the gurus, and I won't name names, big companies, and they try to understand the behavior of AI by prompting: they found out that if you prompt in this way, or if you prompt more aggressively, the answer from the system is more prudent. That means that nobody is going into the machine and checking the neural network; they try to go by input and output, so all the middle is a black box. In this way, it is a black box.

 

Saurabh  23:58

True, true, very well said. So that brings up another interesting question, and Caro, I'm going to ask it of you and see what your thoughts are, in line with what Alessandro is saying. There's definitely a lot of discomfort or unease with AI: its black-box nature, its potential to hallucinate, to introduce bias, or to flat out just really, really convincingly lie. So there are a lot of things that we have heard. What are the real-world challenges, from your perspective, for businesses who are thinking about using AI?

 

Caro  24:36

I think, first of all, on the idea of it being a black box, what I call the black-box mentality: I agree, and I've read a lot of scientific and data science papers where even the people developing these models are struggling to understand quite how some of the parameters and the links are being made. But I think a black-box mentality is one of our biggest risks, because you can lift the lid on that box to an extent. A lot of companies are publishing safety documents; in the case of OpenAI, they publish the safety paper and then release the product anyway, but still, a lot of companies are publishing information on how they work. And this is where I think the AI Act could actually be really helpful for end users. Because whilst you might be subject to obligations, and it's a very product-safety-based act, very much looking at the risk of a product or use harming the public, it does place obligations on providers, the people who provide foundation models, and on distributors and deployers, who might be your B2B customers, around transparency and explainability. So the Act itself might be really helpful in helping you unpick that supply chain. Certifications, if they come in (and regulations are being drafted to look at working with certification bodies, maybe a CE mark or an ISO standard), are going to be super helpful for identifying and looking at certified risks and products, where metrologists have actually gone in and looked at how these systems work. So I think the practical challenge is understanding that supply chain as best you can: understanding that at the start of the supply chain is the collection of a huge amount of data, unlawfully I'd suggest, from a data protection perspective and from a copyright perspective. It's unclear where a lot of this has come from, and there are labor rights issues: the millions of people around the world who are employed to tag, label, sift, and check this data. The International Labour Organization is doing a lot of interesting work on this, actually. So there are supply chain issues there around labor rights and legality, and of course issues with the product itself: if AI hallucinates, it's very convincing in telling you it's right. So understanding some of the risks, and the safety briefing notes of the products themselves, I think is very important. And understanding all of these things in the workplace and the environment: how you're using an AI system, and when you're using an AI system. So map your products; make sure that you have a policy, or at least awareness, for when your employees are hammering the annual report into ChatGPT because it's the night before it's due, and all your financials have just gone off to San Francisco and OpenAI, just because it was too late to draft the report. Being on top of those things, I think, is important. And really, overall, my advice would be: treat these as you would any other risks. These are supply chain risks; they are product risks, legal risks, compliance risks, communication risks. So break down the chain; look at where you're using this tech; look at the controls that you have; look at governance, making sure your board is aware (I could talk about some very interesting class action lawsuits where boards are being held personally liable for this); look at contracts with vendors. It's hard to get clarity, and the Act might help with that, but try to talk to your vendors.

Try to make sure you've got policies rolled out that your employees are aware of. Make sure that staff understand when AI is being used and how it's being used. And treat these things as any other business risk: escalate it, put it on the risk register, make sure that it's been assessed by your risk management framework, make sure people are aware, mitigations are in place, and actions are carried out. And I think those kinds of basic risk management tools, if you're brave enough to try to prise the lid off the black box and have a look under the hood, as Alessandro was just saying, are the way to approach this from a business risk perspective. I realize that's a different way of looking at risk than the risk-based approach in the Act as to which requirements apply. But from a business side, just be brave, ask questions, and, as Alessandro was saying, use your existing risk management tools.
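
As a minimal sketch of the mapping exercise Caro recommends, the snippet below inventories where AI is used and flags missing controls. The control names and example entries are hypothetical illustrations, not requirements taken from the Act.

```python
# A sketch of an AI-use inventory with basic governance controls.
# REQUIRED_CONTROLS and the example systems are invented for illustration.
REQUIRED_CONTROLS = {"usage_policy", "staff_awareness", "risk_register_entry",
                     "vendor_contract_reviewed", "board_informed"}

ai_systems = [
    {"name": "ChatGPT (ad-hoc drafting)",
     "controls": {"usage_policy", "staff_awareness"}},
    {"name": "CV screening tool",
     "controls": {"usage_policy", "staff_awareness", "risk_register_entry",
                  "vendor_contract_reviewed", "board_informed"}},
]

for system in ai_systems:
    missing = REQUIRED_CONTROLS - system["controls"]
    status = "OK" if not missing else f"gaps: {', '.join(sorted(missing))}"
    print(f"{system['name']}: {status}")
```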

 

Saurabh  29:04

That's a great overview. I was thinking, while you were talking, that later today I'll be driving for about six hours, out of which the car is probably going to drive itself for four and a half. The reason I mention that specifically is because I'm not sleeping at the wheel, right? In the United States, at least with Tesla, you have Autopilot; you can run on it, and it's got billions of miles logged against it. But the risk is 100% there: I'm just not going to be playing Wordle on my phone while it's driving itself, and I am definitely keeping my hands on the steering wheel. I think it's a good metaphor for where businesses need to go. These are good capabilities to look at, but you absolutely want a human in the loop. You definitely want to make sure that you are looking at the source of the training, and the supply chain, as you put it. I had never thought of software from a supply chain perspective; I would have thought of something like coffee instead. But I totally see your point, very legitimate, about how this data has been aggregated, curated, humanly reviewed, and tested; there are probably a lot of concerns in a variety of different areas. Having said that, I am aware that there are organizations like Anthropic, which has Claude, that have actually gone far beyond just publishing papers, and they at least proclaim that they're doing a lot of work in this space to make sure that it is a fair-trade AI, if you will. But I'm curious if you have any follow-up comments.

 

Caro  30:50

I was just going to say there's fascinating research on this. In terms of the human labor costs, the data for your self-driving car is probably going to people in Madagascar, or Brazil, or the Philippines, to hand-label and look at your images. So there is information out there. What I would say, though, on all these points, bringing us back to the EU AI Act: the Act is clear that it is without prejudice to the Corporate Social Responsibility Directive, the upcoming Platform Workers Directive which we're expecting, the GDPR, and the E-Commerce Directive. How these things will interplay isn't clear, because these are big and complex questions. This is very much about the fundamental rights of people in the EU, and it's about product safety in how these technologies are used. However, the harms they're talking about are environmental and social; there are many people in the supply chains, and many mines producing these precious minerals, but how these things will interact with the Corporate Social Responsibility Directive and the environmental program the EU is pursuing isn't clear. So what I would say to organizations is: as you're looking across your supply chain, bear in mind all of your legal and compliance risks, your reputational risks, and your ESG risks, and you're reporting on ESG at the board, so bear these in mind. This one act is horizontal, but it's not a silver bullet that's going to solve everything. It's one element of your broad risks, because all those other laws still apply.

 

Saurabh  32:35

Totally agree with you, very, very important. So that brings us to another interesting question, and I'm going to ask Alessandro this, because, again, risk management. Alessandro, can you maybe talk about any case studies, any incidents, that exemplify some of the challenges that Caro mentioned? And what would have been an alternative way to address them?

 

Alessandro  32:58

Yeah, so these cases are kind of not public, you know; people keep these things confidential. But again, as we said, the Act clarifies the risk management approach: there are high risks, lower risks, and so on, and this is already a guide for companies. High risk, for example, includes technology applied to employment management, like CV-sorting software and so on. The Act has already classified that as a risk where you have to be very, very careful. So a company can assess this risk and, in the end, find the risk response, like: okay, I'm not convinced, I will not do any CV sorting with artificial intelligence. So the Act is complex, but it also gives direction on what we should do in terms of applications. But what I'm seeing, talking with friends and people in my network, especially in the commodity trading business, is that the applications they're trying to put in place are about business, about creating value, and by value I mean financial value, money. It's a business where, like in many other businesses, we have more and more data; these movements of commodities are more and more tracked, everything is digital, and systems are collecting a huge amount of data. So there is a big incentive, a big interest, in applying AI. I mean data like weather, shipping routes, commodities moving from one place to another, and so on; it's huge. There is also a limitation of humans: you simply cannot digest all this data. So a lot of effort is going into machines, into artificial intelligence, to digest this data and, as we said, infer patterns, find a way to create financial value, see something that other people didn't see. And my point is that this is a big opportunity versus a big risk, because it can be a risk bigger than going against one of the regulations in the AI Act. Say, okay, I want to avoid the CV-sorting risk with AI, so I don't do it; I use HR humans to do CV sorting, fine; I have the categories and I will try to follow them in depth. But when I instead apply AI to business intelligence, if you do something wrong it can be really costly, and you can take wrong decisions: I go into this market, I buy this commodity, and then I lose a lot of money. And there, you know, you don't have an EU act behind you; it's just your company investing time and money in artificial intelligence. So you should be concerned, and as we said, you need checkpoints. When you are getting data out of the AI system, you need to check it. Maybe we say, okay, we need fewer humans and so on; probably we need humans who are even more trained, because challenging what the AI gets out after you prompt it is difficult: you put data in, you get answers out, and we all know the way they come out, it looks very authoritative.

And I'll give you an example. I tested something on some of these AI platforms: I wanted to see if I could automate the calculation of market risk exposure. It's a crucial risk for commodities, and normally you use specialized systems to do this; you have to configure them, and it's all about the coding and getting the data out: where you are, whether you are long or short, in which commodities, so you can take your risk and make your decisions. I tried to see if I could get this from the AI. And, you know, the first approach was interesting; I was getting some answers. But then you look into the numbers, you do your own math, and it was wrong. To challenge these systems, you need real knowledge; you need to know all these things. That is true for every kind of risk you are approaching, and it's also true for AI risk, and for trying to manage a source of risk through AI, which is what I was doing: trying to manage market risk, which is crucial in the commodity business, through AI. And after a while, after challenging it, I was not satisfied. So, to close: what I'm saying is that business applications of AI, including those that are not under the radar of the European Union AI Act, can be even more risky and can have an even bigger impact on the company.
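
The checkpoint Alessandro describes (never accept an AI-produced risk number without recomputing it) can be sketched in a few lines. The positions, prices, and tolerance below are invented for illustration; real exposure calculations are considerably more involved.

```python
# A sketch of the "do your own math" checkpoint: recompute the exposure
# deterministically and compare against the AI-reported figure.
# All numbers here are made up for illustration.

def net_exposure(positions: dict[str, float], prices: dict[str, float]) -> float:
    """Deterministic ground truth: signed quantity x price, summed."""
    return sum(qty * prices[commodity] for commodity, qty in positions.items())

positions = {"WTI crude": 120.0, "natural gas": -300.0}  # long / short
prices = {"WTI crude": 78.40, "natural gas": 2.95}

ai_reported = 8_350.0          # stand-in for a figure produced by an AI tool
truth = net_exposure(positions, prices)

tolerance = 0.01 * abs(truth)  # e.g. flag anything off by more than 1%
if abs(ai_reported - truth) > tolerance:
    print(f"REJECT: AI said {ai_reported:,.2f}, recomputed {truth:,.2f}")
else:
    print("AI figure within tolerance")
```

The design point is that the deterministic calculation, not the AI output, is treated as ground truth; the AI figure is only ever accepted after passing the check.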

 

Saurabh  39:00

It's interesting you say that, particularly your comment on recruitment, I think, is the term you used. I was reading somewhere that New York now has an act which restricts the use of AI for shortlisting candidates, because they found that AI had a bias against women and against people of color. And those are the kinds of risks that will come out: the more this becomes mainstream, and it is definitely becoming mainstream, the more fuel is poured into this fire. Partly they may be teething troubles, which happen with every new technology: cars crashed and planes crashed before they became fairly well regulated and safe. But part of it is also that it is just based on the data it was fed, and that data unfortunately represents the state of our society, right? The Internet represents maybe a sorry state of our society, but it does represent it, and it unfortunately has those biases baked in, those kinds of proclivities that we are not proud of and would not voice unless we were an anonymous person lurking on the internet. There is an interesting question here that I'm going to pose and see if one of you wants to take it. The question someone is asking is: what do you think about the very long period before enforcement of the requirements for high-risk systems, considering the speed of AI development? As a technologist, I totally appreciate this question, because the tech is moving so fast: literally every two to four weeks there is a new AI model, there is new development, there are retrieval-augmented generation systems, or RAG systems, and trillions of dollars are now chasing these innovations. So this is happening at a very, very fast pace. How do you see enforcement and the governance and regulatory bodies catching up to it? Anyone who wants to take it?

 

Punit  41:10

I'm sure Caro will definitely add something. When we were young, there used to be placards on the road: speed thrills, but it kills. So it's about mitigating the risks, and about deciding if you want to take that risk. It's the same thing with AI or any technology: technology does thrill, and we say it can bring a tremendous advantage of some kind. But you also have to weigh the risk it can cause on the other end, so you have to look at both the benefit and the risk, and balance out the two. And in balancing out the two, yes, you may feel that the speed is being slowed down, or that you're being bogged down. And I also see the fear of missing out, the fear that my technology will be outdated because I'm serving a regulated region. But even in an unregulated region, if you dish out technology which is going to create risk, I think it's a short-term game you're playing. If you look at the long term, then I think you want to mitigate the risk and go for quality rather than speed.

 

Saurabh 42:20

I do want to clarify, though, before Caro responds. The question really is that technology is moving at a much faster pace than regulations. In the last 20 to 30 years, this has become far more commonplace, with the advent of the internet, cloud computing, mobile technologies, AI, all of that stuff, right? And the question is: would these regulations become outdated? Because what you're coming up with is not keeping parity with the rate of acceleration of technology. It's a very fascinating question, and I'm curious, Caro, if you have a perspective on it.

 

Caro  43:01

Of course I do; I have perspectives on most things, I'm always talking. So, as I said at the start, as legislation this is fundamental rights and internal market, but it's also product safety. And we've shifted our language now from talking about fair AI, to ethical AI, to safe AI. So it's gone from "be fair to everyone", to "do the right thing", to "just don't kill us all". That shift in language is really important. I know I'm obsessing about definitions, but it means that product safety is the legal approach that's been taken. And so ISO and IEC and their joint technical committee, and national standards bodies around the world, are coming together to try to develop standards for AI systems. I was at the Alan Turing Institute just this week, where several standards bodies are trying to work out how you can test, manage, and examine how these systems actually work: are they working correctly, how are they functioning? And so I think those people who are metrologists, who do market surveillance, who are good at conformity assessments, which is one of the requirements for high-risk AI, those people are working on this. Our technologists are working on ways to assess and manage this, and those standards will emerge from international standards bodies. They'll also emerge, in part, under regulations underneath the Act: so you have the Act, and then you've got regulations, technical enabling and implementing measures underneath, and those regulations will have more technical detail. They are easier to change, easier to update, a bit more agile. And the Act itself says it's without prejudice to international standards and international certification standards. So my guess would be that this will shift from a fundamental rights battleground, if you like, talking about "is this right, is this fair", to a more technically regulated space of product safety, market surveillance, and metrology specialists. And I've met a lot of the people working on the regulations under the EU AI Act, coming from the UN and the resolution work; they're brilliant people, they are technical people, and they have a different perspective on compliance and law than, say, fundamental rights lawyers do, or even technology lawyers do. But I think they're the ones who are going to be setting new standards, new benchmarks, and these will be continually updated. And they will be adopted, I think, either under enabling regulations, or they will become the standard a bit like, with the GDPR, "technical and organizational measures taking account of the state of the art and resources", that lovely blanket term that says you do what is best practice and appropriate right now. I think that's the approach we're going to get, but with AI it might be slightly more technical in terms of the standards for how you measure these models. That's beyond my expertise, but I met some of the people building them, and they really are very brilliant. So I think that's how it's going to progress: the standards will be behind the actual tech itself, but hopefully only a tiny bit behind.

 

Saurabh  46:31

That's really insightful; I really like how you phrased it. And I do see the economic imperative, because as much as we want to talk about fairness and rights and ethics and all of that, when you're looking at quarter-by-quarter profitability, businesses tend to have different perspectives. And I think the fact that it becomes some sort of a product guideline, or a product-safety-related measure, brings a whole different degree of scrutiny and enforcement around it. I can see how it may be the way they wield the EU AI stick, if you will, and make sure that there is enough alignment across the board and organizations are held accountable for safe and appropriate usage of these technologies. So, very well put. I know we are almost coming to the close, so I wanted to give all of us an opportunity to make one specific recommendation for companies that are starting their AI initiatives. And I'm going to start with Punit, because I believe you've been through two sessions today, so I want to make sure I give you the first opportunity to summarize: what would be your one recommendation for companies that are just getting ready to start their AI initiatives?

 

Punit  47:59

Thanks, Saurabh. So I think one recommendation would be challenging, but if I were to bundle everything in: start by knowing where the end is. You typically say, this is the EU AI Act, this is what I want to do; or maybe you want to say, I want to follow the AI for humanity standard from the UN, or whatever. Start with the end in mind. Do not start like: okay, I have to comply with the EU AI Act. Start with: what do I want to achieve in the long term with my AI vision? What kind of a company do I want to be? What are my goals? What kind of systems am I going to build? And then work backwards on where the EU AI Act, or any other act, fits in, and take a holistic approach. Because, as Caro said, the EU AI Act is not a silver bullet; you will need to look at a broader solution. This is one element, which looks at certain dimensions, but there are other things. So look at the end game and build a game plan for that. And one of the things to do is start to understand what this EU AI Act is about, and then you will be better able to infer what's in it for you.

 

Saurabh  49:08

That's a great recommendation: begin with the end in mind. Perfect. Alessandro, I'm going to ask you second, and then we'll come to Caro.

 

Alessandro  49:17

Yeah. I mean, I have to be consistent with what I said before. When we talk about laws, people always involve the lawyers, the IT people, compliance, HR. My advice is not to leave risk management behind, meaning the people who measure risk. As I said, there is a discipline behind it, there is knowledge there, and companies often do things in a rush, late, and so on. As we said before: involve risk management from the beginning, because the strategy should be risk-based. In other cases, you know, new things came along and risk management was left aside. We saw this with climate risk and sustainability, with ESG for example: many people think the risk management people were left behind. They thought there was no risk, or that they were not the right people because they were not technical, because they couldn't calculate Scope 1, 2, and 3 greenhouse gas emissions, and so on; it's technical. And so the risk management people were left behind. So I would say they should be involved from the beginning in the strategy of each company, and organizations should approach AI with a very strategic and structured approach to the matter.

 

Saurabh  51:03

Great, very well put: don't leave risk management behind, right? Excellent. All right, Caro, last but not the least, very curious to hear what you would say.

 

Caro  51:15

I think my overall take would be, for companies looking at using AI in particular: don't be scared, but don't be stupid. So don't be scared of AI; it's incredible technology. The things that I have seen: I saw a human heart replica being made the other week. It's incredible. Don't be scared of AI. But also don't be stupid: involve, as Alessandro said, the risk management team; as Punit said, look at what the Act says. Where do you fall? Are you a deployer, a provider? Where are you, and what are your obligations? And try to lift the lid on the black box: try to understand what this thing is, how it works, and what the supply chain is, even at a high level; there's usually a lot of information out there. If you can understand where your risks are, then do exactly what Alessandro says: get your risk management tools in place; identify the risks, map them, understand them, manage them. Think about it in terms of risk, but do think about it. Don't be scared, but don't be stupid either. That's my advice.

 

Saurabh  52:18

Well put, well put. I'm going to add a little bit of my spin to everything I've heard. Like I mentioned, I come from the other side of the story: I am working day in, day out with companies that are actually implementing AI. And the thing that I've found, very much in line with what all of you are saying, is that most decisions in life that we make are not perfect. They're not based on 100% knowledge, right? It's always a balance. So I think it really boils down to that idea of: don't be scared, but don't be stupid. You have to find that balance: where can you apply AI with minimal risk exposure, and how do you balance this and find a reasonable degree of value for your business? And when you find that low-risk, high-return kind of quadrant, then you have some use cases which are good to do. Like everything else, risk is always about balance, right? It's in the eye of the beholder, like beauty. So the idea is: when you think about deploying AI as a business, find the balance between risk and business advantage, competitive advantage, technology-led innovation, things like that. It has been a very fascinating discussion, and I know we are right at time, so I'm going to turn it over to Punit, our excellent host. Before I do: thank you so much, Caro and Alessandro, I learned a significant amount from both of you today, and I definitely have a much broader perspective than I had about an hour ago. I appreciate all your insights. And likewise, Punit, thank you so much for having me on this; it has been a privilege.

 

Punit  54:06

Thank you so much, Caro, Alessandro, and Saurabh. I think what Saurabh said applies to me, and I think to every one of us and every one of our listeners: we have gained a tremendous amount of wisdom and insight compared to what we knew about an hour and a half ago. And that is what the purpose of these kinds of sessions is. I'm very happy that we had you all, and also the previous panel, to enlighten us. So with that, I would say thank you so much, and have a good day; for some of you it is morning, so I won't say good evening, but have a very good day and see you soon.

 

Alessandro  54:44

Thank you, it was very interesting. Thank you, thank you.

 

FIT4Privacy  54:49

Thanks for listening. If you liked the show, feel free to share it with a friend and write a review. If you have already done so, thank you so much. And if you did not like the show, don't bother, and forget about it. Take care and stay safe. FIT4PRIVACY helps you to create a culture of privacy and manage risks by creating, defining, and implementing a privacy strategy that includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPP/E, CIPM, and CIPT through on-demand courses that help you prepare and practice for the certification exam. Want to know more? Visit www.fit4privacy.com. That's www.fit4privacy.com. If you have questions or suggestions, drop an email at hello(at)fit4privacy.com.

Conclusion

This episode emphasizes the importance of a balanced approach to AI development and deployment. The EU AI Act's risk-based framework provides a starting point, but businesses must also consider ethical implications, supply chain risks, and potential biases in AI systems. By leveraging existing risk management practices and incorporating them throughout the AI development lifecycle, businesses can ensure responsible AI use that delivers value while mitigating potential harms.

ABOUT THE GUEST 

Alessandro Mauro is a risk management professional with 25+ years of experience, focused on excellence and responsibility: a team builder with a can-do attitude paired with a lot of prudence. Throughout his career, he has built successful risk management departments from the foundations up, in both start-ups and established companies. He has assumed managerial roles of increasing responsibility, built around the selection, training, and management of multinational teams, being part of top management and reporting to company CEOs. His professional skills include commodity markets, CTRM/ETRM software, climate risk, financial derivatives, project management, and program coding.

Caro Robson is a seasoned leader and expert in technology and data regulation. She is a passionate advocate for ethical AI and data governance, with over 15 years' global experience across sectors designing and embedding practical solutions to these challenges. She has worked with governments, international organizations, and multinational businesses on data and technology regulation, including as a strategy executive for a regulator and as leader of a growing practice area for a prominent public policy consultancy in Brussels. She was recently appointed UK Ambassador for the Global AI Association and expert observer to the UNECE Working Party on Regulatory Cooperation and Standardization Policies (WP.6), Group of Experts on Risk Management in Regulatory Systems. She holds an Executive MBA with distinction from Oxford, an LLM with distinction in Computer & Communications Law from Queen Mary, University of London, and is a Fellow of Information Privacy with the International Association of Privacy Professionals.

Punit Bhatia is one of the leading privacy experts, who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organizational culture with high AI & privacy awareness and compliance as a business priority, by creating and implementing an AI & privacy strategy and policy.

Punit is the author of books “Be Ready for GDPR” which was rated as the best GDPR Book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts.

As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named 'ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe.

Saurabh Gupta is the Founder and CEO of PlumCloud Labs, a company that specializes in GDPR compliance in the Salesforce ecosystem.

RESOURCES 

Listen to the top ranked EU GDPR based privacy podcast...

Stay connected with the views of leading data privacy professionals and business leaders in today's world on a broad range of topics like setting global privacy programs for private sector companies, role of Data Protection Officer (DPO), EU Representative role, Data Protection Impact Assessments (DPIA), Records of Processing Activity (ROPA), security of personal information, data security, personal security, privacy and security overlaps, prevention of personal data breaches, reporting a data breach, securing data transfers, privacy shield invalidation, new Standard Contractual Clauses (SCCs), guidelines from European Commission and other bodies like European Data Protection Board (EDPB), implementing regulations and laws (like EU General Data Protection Regulation or GDPR, California's Consumer Privacy Act or CCPA, Canada's Personal Information Protection and Electronic Documents Act or PIPEDA, China's Personal Information Protection Law or PIPL, India's Personal Data Protection Bill or PDPB), different types of solutions, even new laws and legal framework(s) to comply with a privacy law and much more.