The EU AI Act - Why, What & How
Let us demystify the EU AI Act with a distinguished panel of experts. Let's explore what the EU AI Act entails and how it benefits society. Join Dr. Ann Cavoukian, Nicola Fabiano, Punit Bhatia and Raghu Bala as they explain the significance of compliance with AI regulations, not just limited to the EU AI Act but within a broader global context. This episode also tackles broader challenges in AI development, mitigating bias in data, and fostering ethical and responsible AI practices.
Transcript of the Conversation
Punit 00:00
The EU AI Act is here, and there are a lot of questions about what needs to be done. In fact, questions like: why do we need this regulation? How does it help humankind, society and so on? I recently had a panel discussion with Dr. Ann Cavoukian, who is the creator of Privacy by Design; Nicola Fabiano, the former president of the Data Protection Authority of San Marino; and Raghu Bala, who leads AI courses at MIT and many other institutes, and also runs his own entrepreneurial ventures. I had them talking about why we needed the EU AI Act, what it brings to the table, and how it helps society. In this episode, I'd like to share that conversation with you.
FIT4Privacy 01:14
Hello, and welcome to the Fit4Privacy podcast with Punit Bhatia. This is the podcast for those who care about their privacy. Your host, Punit Bhatia, has conversations with industry leaders about their perspectives, ideas and opinions relating to privacy, data protection and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.
Punit 01:38
So hello, and welcome, everyone. My name is Punit Bhatia. I'm a privacy and AI professional. Thank you so much for being here; we have a very interesting lineup. What we are going to do is demystify the EU AI Act. The EU AI Act is here, it is with us, and we are going to demystify it. What do I mean by demystify? We will talk about what the EU AI Act is, or in fact, more importantly, why we needed it, and how it makes an impact for society and humankind. To make it interesting and interactive, ask your questions, and we will make sure your questions come in here. Welcome, colleagues. Who do I have with me? If you read in the field of privacy and security, you will have heard of something called Privacy by Design or Security by Design. And if you ever wondered who created it, you'd think it must be a very intelligent man or woman. It's a woman, and she's called Dr. Ann Cavoukian, and we have the privilege of having her with us. So welcome, ma'am.
Dr. Ann 02:49
You're very kind. Thank you. It's a pleasure to be here with you.
Punit 02:52
Thank you so much. And then, if you've been in Italy and seen all the provinces, you may have wondered: how does it work? Do they have a data protection authority everywhere, or just one? I can tell you there are many data protection authorities in that region, and one of them is in San Marino. Of course, I won't go into the republic and its structure here. With us is Nicola Fabiano, who is the former president of the Data Protection Authority of San Marino. So welcome, Nicola.
Nicola 03:25
Hello, everyone. Thank you, Punit. Thank you, everyone. Thank you for the invitation; my pleasure.
Punit 03:33
And then, if you are in the field of education and ever wanted to do a degree, an MS or an engineering program, you will have heard about the Massachusetts Institute of Technology. And you would have said, oh wow, that's a great place, and there are great people teaching there. One of them happens to be Raghu Bala, and he's with us. He teaches the course on AI implications for business, and I did it with him. I was really impressed with his skills and caliber, and I'm happy to have him as well.
Raghu 04:05
Thank you for inviting me to this panel. I'm happy to share the stage with such accomplished individuals, and I'm looking forward to participating in this event.
Punit 04:18
So let's get started. Thank you so much, each one of you, for being here, and each one of you who will come in the next panel. Let's start with the first basic question. We've been talking about AI for long, but recently there has been a lot of buzz saying AI will kill us, it will do this, it will do that, and we need a regulation. There was no regulation, and then the EU AI Act came. I want to understand from each one of you, maybe starting in alphabetical order: why did we need this EU AI Act, over and above what the media has been telling us? What was the need for this Act? What has changed? Some say AI has been there for years and nothing is new, but I see Ann is already shaking her head. So Ann, enlighten us.
Dr. Ann 05:09
AI has been around for years, but not the generative AI that was advanced last year or the year before by ChatGPT. This just grew it dramatically, and its use has expanded so widely. That is the concern that people in privacy and security have. Because what a lot of this AI does is go to hundreds of thousands of websites, databases, etc., and just extract all the information contained, including personally identifiable data. So it doesn't just extract information about what I'm doing or who I'm with or things of that nature; it identifies that as Ann Cavoukian engaged in all these activities. And that is just such an egregious transgression of privacy. Privacy is all about control, personal control, relating to the use and disclosure of your personally identifiable data. And in this context, generative AI, the way it's taking place now, has just left privacy far behind. So I think what we're trying to do is play catch-up. That's what you're seeing with the EU AI Act, and now the US is mirroring that, working with the EU, and it's beginning to grow dramatically. And thank God, because we're already behind the gate; we're lagging behind. AI of this nature is manifesting itself everywhere, and people love it because it facilitates so many activities and actions; it makes life easier for a lot of people. But you will end up relinquishing all control in terms of your personal information, your identity, and that is completely unacceptable. So, as I've always said, you have to look under the hood. We have to look at how we can apply Privacy by Design, which is all about being proactive in an effort to prevent the privacy and security harms from arising. So that's what I'm looking at.
Punit 07:10
Okay. And Nicola, your view on the same thing: why did we need this EU AI Act?
Nicola 07:17
Well, it's challenging to discuss relevant topics in a short time, but I will try to describe my point of view. I agree with Ann's comments, and I want to explain my point of view in six short points: three pros and three cons. The first pro is that AI represents a highly complex phenomenon, with exponential growth and expansion, transversally involving several areas with significant impacts, and, importantly, it is not a recent innovation; I will explain later what I mean. The second pro is that AI needs a multidisciplinary approach, with professionals from different sectors. The third pro is that regulating the use of AI systems, not AI itself, is a good thing, to guarantee each individual's fundamental rights. Moving to the three main cons and criticalities: the first one, I think, is over-legislation. The AI Act stems from the European digital strategy, which includes the artificial intelligence strategy. Europe has published over 100 acts in the digital sphere, and the risk of over-legislation is to generate disorientation. The second con is wrong competitiveness. With the AI Act, Europe wanted to demonstrate its digital sovereignty, but this is a short-sighted vision because it does not take into account what is happening in the rest of the world; I am referring to the United States, China, India, etc., for researchers and business. With the AI Act, Europe wanted to show the world that it was a leader in adopting AI legislation. The third con is warnings. Eminent scientists who have worked on AI for years, and I am referring to Stuart Russell, Peter Norvig, Max Tegmark and others, have highlighted warnings about the considerable risks arising from the distorted use of AI systems, even described as autonomous weapons systems. On that point, the EU oversight model does not consider other, equally interesting models described in some research.
Punit 10:09
Okay. And Raghu, what would be your view on why we need this EU AI regulation, or regulation in general?
Raghu 10:19
Sure. So basically, I'll just go quickly, building upon what Ann said. AI itself you should split into two parts: discriminative AI and generative AI. In discriminative AI, the answers are deterministic. In generative AI, the answers are indeterminate, so you don't know what the next token is going to give you. Now, once you have indeterminate answers, social, moral and ethical implications come into play. So you need to have some guardrails in order to control what AI is going to do next, because you cannot predict what it's going to do. For example, if you go to ChatGPT and give the same prompt twice, you will get two different answers, because it's not deterministic; it can give you a different answer each time. So now that you don't know what the outcome is going to be, you need to figure out how to put some constraints around this problem. That's why governments all over the world are starting to feel like what could once be controlled is now sort of not controllable, so we've got to put some regulations around it so that it doesn't get out of hand. That's what's prompting the EU to act. One other point: data is the fuel for AI, that's known, but the data itself can be very biased. For example, I was in Africa a couple of months ago at a conference as a keynote speaker, and the problem was that the data and the LLM models are trained on Western data; it's got nothing to do with Africa, no relevance to the people there, or their culture, or anything like that. So data can get very biased. Again, the need for regulation is to protect the people who are going to be subjected to AI from any sort of prejudice and bias and those types of things as well.
So that's a number of reasons why the EU acted, and I think it's quite good that someone is beginning to put some framework around this; I think it's a positive move. But, like Nicola said, you cannot get too far into it such that you prevent technology from progressing by putting so many rigid rules that it cannot breathe. So you've got to have a kind of balance in this.
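Raghu's contrast between deterministic and indeterminate answers comes down to sampling: a generative model draws each next token from a probability distribution, so the same prompt can yield different outputs, while pushing the temperature toward zero collapses the choice to the single most likely token. The sketch below is only a toy illustration of that mechanism, not how ChatGPT or any particular model is implemented; the logits and function name are invented for the example.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token index from logits using a temperature-scaled softmax."""
    rng = rng or random
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the cumulative distribution
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy "next-token" scores for the same prompt, sampled repeatedly.
logits = [2.0, 1.5, 0.5]
samples = {sample_next_token(logits, temperature=1.0) for _ in range(200)}
# At temperature 1.0, repeated calls on identical input return different
# tokens; as temperature -> 0, sampling collapses toward the argmax.
greedy = sample_next_token(logits, temperature=1e-6)
```

With temperature 1.0 the 200 draws hit several different token indices for the very same input, which is Raghu's point: the output of a generative model is a draw, not a lookup.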
Punit 12:45
I agree with you, a balance is needed. Because typically we say technology leads and laws lag: technology gives us new things, which we need to control and regulate, and then we create laws. And by the time we create a law, the technology has already moved forward, so it's always a catch-up game that we are playing. But again, AI needs to be good, AI needs to be good for society and humankind, and that's why we have to give it some direction, give it some shape. So, talking about that, what is this EU AI Act about? We all know it's a risk-based approach, which we will delve into in the second panel, but what is this EU AI Act all about? Give us a perspective, a short one, and then we will see.
Dr. Ann 13:32
Sure, you want me to start? If you look at the way the EU acts generally speaking, look at the GDPR that was passed and things of that nature, personal control, retaining a sense of control over the use and disclosure of your personal data, is critical. With the AI Act they're trying to preserve that: we need to preserve some sense of control over who gains access to our personally identifiable data without having sought our consent or anything of that nature. So it's trying to prevent the automatic extraction of all information in databases around the world without any notion of personal control or consent. It's trying to regain some kind of balance, because that balance has been totally lost with AI, and now we're trying to get it back. I think that's one of the major goals of the AI Act: making it understood that you can't just go and extract whatever information you want from whatever databases for purposes of AI. That's not acceptable.
Punit 14:42
Makes sense, makes sense. And I think it's the same thing GDPR did, trying to bring control to the user. In the same way, with AI, we want to put the user in control and make them aware of what's happening, because you don't want a Mission Impossible scenario where a robot takes over the world. Of course, that could in theory happen, but we want to avoid it insofar as possible. So maybe this time I ask Raghu first. Raghu, what's your view on what this EU AI Act is all about?
Raghu 15:11
So what happens is, there are two major prongs, and I won't take the thunder away from the second panel. The first prong is a risk-based approach, right? The second panel can talk about the details, but they're talking about risk, and they classify it into unacceptable risk, high risk, limited risk, and minimal risk. That's the framework. The second thing they're worried about is transparency, so let me talk about that a little bit more. One of the problems is that once machines, automatons, are controlling actions, you want to know why they arrived at a particular action. That's AI observability, or this transparency we're talking about. Now, the thing that Ann is talking about, which is very important: if you're worried about the machines getting out of control, how do you control them? Control means that if you know why the system arrived at a decision, that transparency is the choke point you can hold on to. Once it's not transparent, you don't know how it arrived at its actions. This is what Elon Musk is always worried about; he keeps saying that machines might take over because you cannot figure out what they did, and once you cannot figure out what they did, then you're in trouble. So I think regulation has to control this part, because once it's transparent and observable, then it's within a framework in which we as humans can figure out what happened. And that's very important for AI.
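The first prong Raghu describes, the risk-based approach, is essentially a classification exercise: a use case is assigned to one of four tiers, and obligations follow from the tier. The snippet below is only a toy mapping to show the shape of that framework; the use-case tags and their tier assignments are simplified illustrations, not a legal classification of any real system.

```python
# Toy illustration of the EU AI Act's four-tier, risk-based structure.
# The use-case tags and tier assignments are simplified examples chosen
# for illustration, not a legal classification.
EXAMPLE_TIERS = {
    "social_scoring_by_public_authorities": "unacceptable",  # banned outright
    "cv_screening_for_hiring": "high",        # strict obligations before use
    "customer_service_chatbot": "limited",    # mainly transparency duties
    "spam_filter": "minimal",                 # largely unregulated
}

def risk_tier(use_case: str) -> str:
    """Look up the illustrative tier; default to 'minimal' for untagged cases."""
    return EXAMPLE_TIERS.get(use_case, "minimal")
```

The design point is simply that obligations scale with the tier: the higher the tier a use case lands in, the heavier the duties attached to it.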
Punit 16:52
That's very true. I think explainability means being able to explain what is done, in transparent and simple language which the user can understand, and prompting at the right moment, because you don't want to install Alexa and have it spend a whole day asking you prompt after prompt to say agree, agree, agree; that doesn't work. You need the right prompt at the right time and to be able to explain: this is what it means for me. But Nicola, what is your view? Because you're also working with the lawmakers in the EU. What is this EU regulation all about? What's the objective behind it?
Nicola 17:28
Ann and Raghu have already said something about it. I don't want to sound very critical today, but I would like to stress some points related to when the AI Act doesn't apply, because there are some concerns related to the exemptions, particularly the military one: AI systems developed for operational activities in the military. The concerns are the risks of the development and use of dangerous AI systems, and their possible availability to civil society, where they may be misused. What about government competence? Regarding the military sphere, regulation falls within the competence of governments. This raises a great question: how much can governments maintain accountable behavior and pay attention to those concerns, without guarantees for individuals? This is a billion-dollar question. The third point is police and judicial cooperation. On this point, one cannot overlook the doubts about the impact of AI, or of high-risk AI systems, on individuals. Two main questions: what about cases of urgency? And what guarantees are there for individuals' fundamental rights and freedoms in case, for example, of biases, with no rules, especially on human oversight, and consequent possible errors? The fourth point is scientific research and development purposes, and research, testing or development activities. That exemption may be a balanced solution that aims to provide some guarantees, but it might reduce and limit research activities. Some questions: will researchers and scientists be willing to work only to reach scientific results and acquire data on projects that will never be put into production? Does that make sense? And who will control them, I mean the projects and researchers, in that case? So, a few points on the cases where the AI Act doesn't apply.
Dr. Ann 20:06
Of course not entirely. I mean, lawmakers and regulators have a broad understanding of what is important and what must be preserved in terms of rights and not relinquished. But then, of course, you have to work with the brilliant tech community, to engage them to apply this in any kind of real sense. I always say you have to look under the hood. Whatever the technology involved, you have to ensure that your technical staff have an understanding of what you're trying to achieve in terms of your goals: preserving privacy, preserving security, but not standing in the way of advancing the interests of AI in non-personally identifiable data. You know, Privacy by Design is all about positive-sum, not zero-sum: it's not privacy versus whatever; it's privacy and whatever the business interest is, privacy and security. It's "and", and you need to have that. So when I work with technical people to convey this, I start there, and I say: I'm not trying to get rid of that; I want to make sure it works in a way that respects privacy and protects personally identifiable data. And when you can explain it to them, they can figure out how to do it. I'm not saying it's an easy task with AI, but it is doable. But you definitely need brilliant technical staff.
Punit 20:06
Makes sense. And I think this is the right moment. Somebody has asked a very good question in the chat, saying: do lawmakers and regulators have sufficient technical expertise, and even an understanding of the craft of effective and comprehensive lawmaking, especially as AI is a rapidly changing and complex field? Well, that is not only a challenge for lawmakers, but also for us. Ann, I think that was a very well-rounded answer, but I'd like to give it a completely different spin. In a company we have a CEO; he does not know each and every department. Maybe he's a specialist from HR, from risk, from compliance, from sales, from operations, from business, but he does not know every function, and you cannot expect him to be an expert in privacy and AI. In the same way, lawmakers or governments are not expected to be experts on each and everything. And as Ann emphasized, it's the role of the government to make laws while reaching out to the community of technical experts; that's how laws are framed. And yes, at certain times, in some countries, there may be some things where we will say this is not okay and that is not okay, but that's part and parcel of the game, because that's what we sometimes say about our CEOs too. So let's not get too far ahead. I don't see any more questions there, so we can maybe get to the other part, which is how the EU AI Act helps society. Because what we are talking about is managing the social, moral and ethical aspects of AI for society. So how does a regulation like the EU AI Act help us achieve that? And maybe this time, if it's okay, I start with you, Nicola.
Nicola 23:28
Okay. Well, the AI Act helps the internal market, and probably our society, according to the subject matter in Article 1. However, we must be positive and consider that there will be greater guarantees for individuals due to the human-centric approach. The AI Act explains that it is a general framework and does not provide management system solutions, so we should refer to some international standards, like ISO/IEC 42001 and others. But I want to stress a relevant point, which is liability. The AI Act doesn't regulate liability deriving from AI systems. Liability depends, of course, on the legal systems: for Europe, on a regulatory source governing liability arising from artificial intelligence, and for each member state, on national legislation. The European Commission published two proposals on September 28, 2022: notably, a proposal for a directive on liability for defective products, and a second proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence. On this point, the European Data Protection Supervisor expressed its view in Opinion 42/2023, which included some suggestions. There might be some criticalities related to the transposition stage, because the national legislator will have to consider the domestic legal framework, and so the introduction of AI system liability at the national level could conflict with the existing domestic legal framework. This is my personal perspective.
Punit 25:21
That's interesting. And I'd like to move to Raghu, if that's okay?
Raghu 25:27
Sure! So, one of the things that I teach in the course, and I can share the video that I post in that course, is about a robot. The robot is given a problem, like a self-driving car scenario, and it has two choices to make: it can get into an accident where it hurts the passengers in the car, or it can hit a pedestrian and injure or kill the pedestrian. This is the dilemma for the robot: one way it hurts the passengers, the other way it hurts someone outside the car. There's no real correct answer to this, but the reason I mention the example is that AI is being introduced into the public domain; there are self-driving cars and self-driving trucks already running in North America, the public is involved, and they might get into accidents or claims or insurance or litigation and whatnot. People need to understand how these systems operate, and I think the more knowledge is spread among society about how these systems engage, the better; that's very, very important. Another thing, on the how part, which I am a proponent of, I don't know if many people are: a lot of AI systems are still very centralized, in the sense that big companies are controlling these AI engines and the data that governs them. One of the things I propose is that we use blockchain a little bit more, because blockchain is transparent and decentralized, and this would improve the transparency quotient of the system, so that anyone can inspect the data, especially if it's for the public good. The public needs to be able to see that data if they so desire. But right now, a lot of it is behind the curtains of the large companies which control these systems, and no one knows what has gone into the data to make them act a particular way.
So on that part of it, I think soon there will be some sort of understanding of how these systems work. Legislation should try to at least bring about that transparency, and blockchain is a very good mechanism to make it happen.
Punit 27:58
Sure. And Ann, how do you think this is helping society at large?
Dr. Ann 28:04
I think it's providing some comfort, in the sense that certainly the people I've spoken to all around the world, including in the EU, of course, are getting very nervous about AI. They're getting nervous about what's happening to their personal information, where it resides; you know, there are data brokers everywhere. There's all kinds of information out there that is available to all this AI in terms of being extracted. People are very concerned these days about their information, and the EU AI Act will give them some level of comfort that the government is trying to do something to protect their data while enabling AI to take place, because AI will bring multiple gains as well. So I think it's very positive in that sense. Transparency about what's taking place is important, and trust, which has been lacking for quite a while, will hopefully be restored somewhat. These are just baby steps at this point, but I think it will certainly give the public a greater level of comfort than they had before.
Punit 29:17
I can agree with you; it's a step that will create comfort. And sometimes people ask, and someone has already asked, in the context of bias: the data being fed to the models is usually biased, or not 100% okay. Even if we say we will clean the data, it will still be biased, because it depends on the person cleaning the data, and that person also has inherent or subconscious biases. So how can we reach, in AI systems, what the person is asking for, 100%, but I would say reasonable, freedom from bias, and fairness? Anyone want to take that?
Raghu 30:03
I certainly can. Just repeat the question one more time quickly?
Punit 30:07
Basically, the person is asking: how do we reach a state where the data is not biased, and how do we remove the dependence on people?
Raghu 30:14
Yeah. So first of all, you need to know which demographic you're using the data for, okay? It has to reflect the demographic and the context in which it's used. If I'm going to put AI in front of some audience in North America, then the data has to reflect that particular populace; if I'm going to put it in some part of Asia or whatnot, it has to reflect that. That's number one, in my opinion; that would remove the bias of the underlying data. Now, in terms of transparency itself, as I mentioned, data is currently being collected, sometimes not cleansed in any way, and also hidden behind the closed doors of companies; if you put it out on blockchain and so on, for public inspectability, that's one thing. One thing about what I mentioned earlier: PII, personally identifiable information, needs to be somewhat, what do they call it, there's a word for it that escapes me right now, but basically you hide that part of it. You don't need to show it; you can put pseudonyms or block that part, you don't need to know who it came from. That would at least protect privacy while still making the data valuable for learning, so that future generations or whatnot can leverage the data to make decisions, which is what we all want to get to. I think there are mechanisms in place; it's just that all the players who are collecting this data, which happen to be the big name-brand companies, you all know which ones run the platforms for e-commerce, for search, for other things, need to come to the table to play. Right now they're using data as their way of making money, and that needs to maybe stop. And that relates to one of the questions on the panel that I see, about AI and Web3: Web3 is all about sharing, it's not about holding, and right now we're in the holding mode among the large companies.
So if we can move the needle towards sharing, then you can get to the transparency and the removal of bias and all those types of things.
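The word escaping Raghu here is pseudonymization: replacing direct identifiers with stable stand-ins so records remain linkable for analysis but no longer name a person. One common way to sketch the idea is a keyed hash; the field names and key handling below are illustrative assumptions, not a prescribed method, and note that under GDPR pseudonymized data still counts as personal data.

```python
import hmac
import hashlib

def pseudonymize(record, pii_fields, secret_key):
    """Replace PII fields with keyed hashes so records stay linkable
    for analysis but no longer directly reveal who they came from."""
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hmac.new(secret_key, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # stable pseudonym
    return out

key = b"keep-this-secret"  # held by the data controller, never published
record = {"name": "Ann Cavoukian", "city": "Toronto", "purchase": "book"}
safe = pseudonymize(record, ["name"], key)
# The same input always maps to the same pseudonym, so datasets remain
# joinable; without the key, the pseudonym cannot be linked back by
# simply hashing guessed names.
```

This is the trade Raghu describes: the `purchase` and `city` fields keep their analytical value, while the direct identifier is hidden behind a token only the key-holder can reproduce.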
Punit 32:29
Thanks, Raghu. Ann, it seems you want to add something.
Dr. Ann 32:36
My only concern with the notion of sharing, which I am very fond of, is that, again, you have to share in the context of non-personally identifiable data; then you are free to do whatever you want with the data. That's why I always say privacy and data utility go hand in hand, as long as you remove the personal identifiers. Then I'm all for it, and I work with many companies to enable that to happen, to make sure they can benefit from their data and AI in a variety of ways without risking the privacy of their customers or subjects or whoever's involved.
Punit 33:14
That makes sense. And there's another question, in the context of how: how can we have consistent and harmonized laws applied across jurisdictions, to avoid a patchwork of regulations or a vacuum of legal frameworks, when it comes to technologies operating at global scale? That is similar to what we had with the GDPR: the GDPR came in, and now we have 100-plus countries replicating it. In a way it's a consistent standard being replicated, but in another way everyone wants to be different, so they change 20% of it, name it differently, and all that, and create that spaghetti of laws, as we call it. Anyone want to take that?
Nicola 33:59
Yes. Everybody knows that the AI Act will be a regulation in Europe, so it will apply in all 27 EU member states. But if we discuss a global legislation, I want to highlight that the UN group of experts published an interim report about AI governance at a global level. They consider that the AI Act and President Biden's executive order on AI are both governance models, and they hope for the fulfillment of a worldwide one. From my perspective, I remain deeply skeptical of the feasibility of a worldwide AI governance model, primarily due to the formidable challenge presented by each country's steadfast desire for its own AI sovereignty, and the potentially disruptive effects of divergent regulations. From my perspective, the equation representing global governance as impossible is: many laws and fewer opportunities.
Raghu 35:38
Actually, I'll just add one more thing; it's kind of funny. Larry Ellison from Oracle brought up something a couple of weeks ago in a news release. He talked about how, in GDPR, there's something called data residency, right? Every country needs to maintain its own data, and the EU is considered one big blob, so as long as your data centers are within the EU, it's okay. Facebook got fined, I think, quite a lot of money, a few billion dollars, for shipping data from the EU back to the US; the data residency wasn't maintained within the EU. Now, the point Larry raised was that in the future, because AI is so important, like what Nicola mentioned about the military and the sovereignty and security of a country, each country will try to maintain its data centers within its own borders, so that the AI doesn't bleed outside. Even within the EU, it might be within separate countries, so it's not going to bleed outside your region. That's going to be very important, and I think it poses a lot of opportunity in the business context as well, but you have to maintain your data within your own boundaries, and that's going to be very big. Because once it starts bleeding, you lose control of it. So that's going to be a big theme going forward.
Punit 37:06
I think data sovereignty is an interesting topic, and maybe you would have a view on that, because you're talking to a lot of regulators and governments.
Dr. Ann 37:15
Data sovereignty is very important, there's no question. But regardless of where you are, whatever jurisdiction you're in, the notion of privacy and data protection is very simple. And to me, it doesn't relate to sovereignty at all; it relates to data subjects and their personally identifiable information, and whether they can retain personal control. Privacy is all about control relating to the use and disclosure of your personal information. So that's the way I view it, as opposed to sovereignty. And, you know, when the EU comes out with legislation like the GDPR, and now the EU AI Act, other countries try to, if not mirror it, certainly work with it and match it. I know the US is working on an AI act and all that. So I think that's very important. But the notion of sovereignty, to me, is far more related to the individual's personal control than anything else.
Punit 38:14
I think sovereignty, for me, can be interpreted in many ways. One is data sovereignty in itself: that the data's confidentiality, integrity and availability are maintained. Second is sovereignty in terms of the individual: what do they want to do, and do they have control over it or not? Then there's the state, the country, or the jurisdiction: what control do they have? And then, of course, there's where the data physically resides. So it's a very complex and complicated topic, and AI doesn't make it any simpler.
Dr. Ann 38:48
But Punit, if I can add, it doesn't have to be that complicated. I don't disagree with you, with all the layers you identified. But whenever I talk to governments, and you know, I was Privacy Commissioner here for three terms, what I say to them is: it's all about the individual's personal information and their ability to retain control over its use and disclosure. That's at the heart of it. Then you can add all kinds of layers of different things, and laws, and various countries, and how they work together. But in my view, that has to be at the heart of it.
Punit 39:24
Okay, that is completely agreeable, because if you get that right, with the individual in control, the rest of it can be taken care of. Now I understand why you think that: simply because you are putting the person in charge, and if the person is in control, then wherever the data is, the country is taken care of. Yes, thank you. It's always exciting talking to you and seeing how you simplify things. And then there's another question we've always been talking about: responsible AI and trustworthy AI, taking ethics also into consideration. I know you're working on ethics by design nowadays, or trust by design, as you call it sometimes. So what's your take on responsible AI, trustworthy AI, and ethics in AI?
Dr. Ann 40:16
I don't want to take over; I want to make sure the gentlemen also have an opportunity to speak. So I'll keep my answer really short. Once again, it relates to ethics by design, privacy by design, security by design: it's all about the individual retaining control and having an understanding of how their data is going to be used. When you think of ethics by design, it's all about how my data is being used, by whom, and for what purposes. Transparency is very, very important for people to have an understanding of how their data will be used. Then they can make decisions relating to their desire to consent to that or not.
Punit 41:00
Makes sense. Raghu or Nicola, do you want to add something to that?
Raghu 41:07
Yeah. So, in terms of AI itself, these regulations have to be somewhat malleable. I'll tell you why: AI is not static, right? AI is, by definition, machine learning. That means it learns, it morphs, and it changes. So whatever laws are passed have to account for that; you have to understand that you're not talking about other technologies, which are pretty static, where once it's carved in stone, it stays like that. Here, it's going to change. So if we talk about responsible AI and trustworthy AI, let's say the first iteration of some system, say self-driving cars, just as an example, might perform a particular way. There might be some accidents, there might be some issues. The laws have to allow for the system to learn from its mistakes and improve itself. That way, over time, the trust between AI and the human population, society and so on, will start to grow, because there will be a feedback loop that is continuously happening. And that's very, very important for AI, because unlike a lot of other technologies, which don't change too often, this changes with the data, almost in real time. So I'll say that we cannot be so critical of AI from the get-go; we have to give it a little bit of rope in order for it to learn and improve itself as well.
Punit 42:41
That makes sense. And Nicola, you wanted to add something also?
Nicola 42:44
Yes, briefly. I agree with Raghu and Ann. I want to add only that AI is an umbrella term, and we should talk about AI systems, because the AI Act regulates AI systems; we should not confuse AI with AI systems. That is the first point. The second point is that ethics is a very critical point. In a recent paper I published, I highlighted the need to consider, apart from human oversight, also ethical oversight, because if we consider the processes of an AI system, we cannot ignore the ethical oversight approach alongside the human one. And on this point, I think it could be opportune to carry out evaluations to ensure ethical oversight and mitigation regarding protected categories, intersectionalities, and vulnerable populations.
Punit 44:18
That makes sense. Now, in the essence of time, I would normally ask you two things: how companies can comply with it, one thing for them to do, and also your one final message. But if I ask both, we will be running over time. So what I will ask you is to combine the two: what would be your one final message for organizations who want to comply, and how to start? Of course, it's a long process, but in your view, what would be the start of the journey towards compliance with AI regulation, not only the EU AI Act, but maybe a bit broader?
Dr. Ann 44:56
The one message I would like to get across is to law enforcement, because people think that the police can just go in and extract whatever information they want, whether they have a case or not. What I always say to the police and to others is: if you have solid grounds for gaining access to someone's personal information, you go to court and get a warrant. If you have a warrant, that is your legal means by which to gain access to personally identifiable data. Short of a warrant, you don't have any greater right than anyone else to gain access to people's personal information. So I think that's critical. The other issue is transparency. Companies and organizations have to be very clear about what they're doing with your personal information, especially in the context of AI, and who else may be gaining access to that information. Without your knowledge or consent, that should not be permitted, and I would seek ways to have that imposed.
Punit 45:58
So, putting people in control of things. And now let me go to Nicola and ask: what would be your message?
Nicola 46:06
Briefly, I suggest considering training courses for everyone involved with the company's AI processes. I also suggest evaluating and assessing high-risk AI systems. Another point is to consider the data used by an AI system: whether they meet the quality criteria and whether they are accurate. Last, I think it is necessary to act soon and not waste time waiting for the concrete application of the AI Act, because we have the experience of the GDPR. For the AI Act, the first application comes in six months. So we must be ready. These are my suggestions.
Punit 47:15
Thank you. So definitely start now, don't wait. And Raghu?
Raghu 47:19
Yeah, so I'll just briefly say that the way the Act has been constructed, it requires one to assess applications according to its criteria. So I believe the same companies who are doing GDPR and SOC 2 compliance and so on, these assessment companies, will start to add EU AI Act compliance to the list of compliance activities they do. For any enterprise, when they want to launch an application, they might want to put an extra step in the launch process: okay, we finished coding, we finished testing, but now let's go for compliance testing, to make sure we know which category our app fits within. And they might do it not only at the tail end but also at the beginning: at the beginning, they want to make sure they're not going to build an application that falls in the unacceptable-risk category, because it won't pass EU AI Act compliance anyway. So you want to figure out where you sit within those four categories, plan the project accordingly, and then launch it, so that you launch something that is going to meet the standards imposed by the EU.
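The gating step Raghu describes, classifying an application into one of the Act's four risk tiers before building and again before launch, could be sketched roughly like this. The tier names follow the Act's risk-based structure, but the use-case lookup table and the `assess_risk_tier` and `launch_gate` helpers are purely hypothetical illustrations of the workflow, not an official checklist or legal guidance:

```python
# Rough sketch of a pre-build / pre-launch compliance gate based on the
# EU AI Act's four risk tiers. The use-case-to-tier mapping below is a
# simplified illustration only, not an authoritative classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # e.g. hiring, credit scoring
    LIMITED = "limited"            # transparency duties, e.g. chatbots
    MINIMAL = "minimal"            # e.g. spam filters

# Hypothetical lookup table for illustration.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def assess_risk_tier(use_case: str) -> RiskTier:
    """Return the (illustrative) risk tier for a planned AI use case."""
    # Default conservatively to HIGH when the use case is not recognized.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

def launch_gate(use_case: str) -> bool:
    """Gate applied both at project kickoff and before launch."""
    tier = assess_risk_tier(use_case)
    if tier is RiskTier.UNACCEPTABLE:
        return False  # do not build: it cannot pass compliance
    # HIGH / LIMITED / MINIMAL may proceed, with obligations scaling by tier.
    return True

print(launch_gate("spam_filter"))     # True
print(launch_gate("social_scoring"))  # False
```

Running the same gate at kickoff and again pre-launch mirrors Raghu's point that the check belongs at both ends of the project, not just the tail end.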
Punit 48:32
Very well said. I think you're saying compliance by design, or risk by design. Okay, so if we were to wrap up in a few words: you have said that, in view of the generative AI and the discriminative AI that is coming up, and the challenges it poses to society, morals and ethics, we do need the EU AI Act, or a regulation like it. And it does not solve all challenges; the challenges around liability, copyright and IP will remain and will be addressed under other law, so everything works in conjunction. And in view of the privacy of individuals, it's very important to put people in control of their data. So we are, in a way, extending what was the fundamental basis of the privacy laws and the GDPR into the AI Act, and making it more realistic in the context of systems, because now some of these systems are going to be autonomous; they are going to generate data and create more challenges for us. That is what we are saying. And you are also saying: do not wait to act, start now. Start as you started with your privacy or GDPR journey: start reading, understand the law, convert it into a policy, then governance, and then, step by step, maintain a log of your systems. And that's where all of us are here to help you. With that, I would say thank you so much, Ann, Nicola and Raghu, for your wonderful insights. Thank you so much. Have a wonderful day.
Dr. Ann 50:08
My pleasure.
Raghu 50:09
Thank you.
Nicola 50:09
Bye bye.
FIT4Privacy 50:10
Thanks for listening. If you liked the show, feel free to share it with a friend and write a review. If you have already done so, thank you so much. And if you did not like the show, don't bother and forget about it. Take care and stay safe. Fit4Privacy helps you to create a culture of privacy and manage risks by creating, defining and implementing a privacy strategy that includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPP/E, CIPM, and CIPT through on-demand courses that help you prepare and practice for the certification exam. Want to know more? Visit www.FIT4Privacy.com. That's www.fit4privacy.com. If you have questions or suggestions, drop me an email at hello(@)fit4privacy.com.
Conclusion
The EU AI Act aims to help individuals regain control over the use of their personal data, ensuring transparency and accountability in AI systems. By categorizing AI applications based on risk and emphasizing the need for a multidisciplinary approach, the Act seeks to protect fundamental rights while fostering innovation responsibly. It also aims to mitigate biases in AI data and promote trust through stringent regulatory frameworks.
As AI continues to advance, the EU AI Act serves as a foundational step towards balancing technological progress with the protection of individual privacy and societal values, ensuring a secure and ethical integration of AI into various aspects of life.
ABOUT THE GUEST
Dr. Ann Cavoukian is a globally recognized privacy expert, distinguished academic, and passionate advocate for privacy by design. With a career spanning several decades, Dr. Cavoukian has left an indelible mark on the field of privacy and data protection. As the former Information and Privacy Commissioner of Ontario, Canada, she pioneered the concept of Privacy by Design, which emphasizes embedding privacy protections into the design and operation of systems, processes, and technologies. Dr. Cavoukian's groundbreaking work has earned her numerous accolades, including being named as one of the Top 25 Women of Influence in Canada and receiving the Meritorious Service Medal from the Governor General of Canada. Her expertise is sought after globally, and she has served as a consultant, advisor, and speaker for governments, corporations, and academic institutions worldwide. Dr. Cavoukian's commitment to advancing privacy and data protection through innovative solutions continues to shape policies and practices around the world, making her a true luminary in the field.
Nicola Fabiano is a distinguished Italian lawyer with a rich background in data protection, privacy, and artificial intelligence (AI) regulation. As an adjunct professor at Ostrava University in Rome and a former President of the San Marino Data Protection Authority, he brings a wealth of expertise to the table. Nicola has served as a national expert for the Republic of San Marino on key committees of the Council of Europe, including those focused on Convention No. 108 and the Ad hoc Committee on Artificial Intelligence. With his extensive experience as a government advisor for drafting legislation on personal data protection and his innovative contributions such as the Data Protection and Privacy Relationships Model (DAPPREMO), Nicola is at the forefront of shaping AI policy and ethics. He is a certified professional in various domains including security management, data protection, and privacy assessment. Nicola's memberships in prestigious organizations like the European AI Alliance and his role as a technical expert for the European Data Protection Board further highlight his influence in the field. With numerous publications to his name, Nicola Fabiano continues to be a leading voice in the intersection of law, technology, and ethics.
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high AI & privacy awareness and compliance as a business priority by creating and implementing an AI & privacy strategy and policy.
Punit is the author of books “Be Ready for GDPR” which was rated as the best GDPR Book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts.
As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one’s values to have joy in life. He has developed a philosophy named ‘ABC for joy of life’ which he passionately shares. Punit is based out of Belgium, the heart of Europe. For more information, please click here.
RESOURCES
Listen to the top ranked EU GDPR based privacy podcast...
EK Advisory BV
VAT BE0736566431
Proudly based in EU
Contact
Dinant, Belgium
hello(at)fit4privacy.com