Nov 7 / Punit Bhatia and Kai Zenner

How the EU AI Act Helps Build Digital Trust


Can the EU AI Act Navigate the Complex Terrain of Artificial Intelligence? As AI continues to reshape industries and societies, concerns about its ethical implications and potential misuse have grown. The European Union's AI Act is a bold attempt to address these concerns by establishing a robust regulatory framework. This groundbreaking legislation aims to ensure that AI systems are developed and deployed responsibly, prioritizing human rights, safety, and transparency. By setting high standards for AI development, the EU hopes to foster trust in AI technologies and drive innovation while mitigating risks. However, the effectiveness of the AI Act will depend on its implementation, enforcement, and adaptability to the rapidly evolving AI landscape.

As we delve deeper into the intricacies of this legislation, we explore the challenges and opportunities it presents for businesses, policymakers, and society as a whole.



Transcript of the Conversation

Punit  00:00
The EU AI Act is here, and how does it help build trust? Well, yes, the idea behind the digital market, digital society, digital Europe, is that it's safe, it respects fundamental rights, and it protects the privacy of individuals. That's why we have the EU GDPR, then the EU AI Act, and so on, and we will have more. But how does the EU AI Act specifically help build trust? And how do you define trust in a digital society? These are interesting questions, and to answer them we have none other than Kai Zenner, a digital enthusiast who has a blog, is part of the OECD AI Policy Observatory, and is active in many other forums relating to AI and digital society. We are going to ask him how the EU AI Act builds digital trust. So let's go and have a conversation with him.  

FIT4Privacy  01:08

Hello and welcome to the Fit4Privacy Podcast with Punit Bhatia. This is the podcast for those who care about their privacy. Here, your host Punit Bhatia has conversations with industry leaders about their perspectives, ideas and opinions relating to privacy, data protection and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.  

Punit  01:37

So here we are with Kai Zenner. Kai, welcome to the FIT4Privacy Podcast. It's a pleasure to have you. 

Kai  01:43

Many thanks for the invitation.  

Punit  01:45

Okay, let's start with the premise you have been working on, given your education and all the years you have spent on this. We are moving towards what we call a digital society. It's about data, it's about digital, and society is becoming more and more digital. One of the parameters we talk about is that there needs to be trust, and that's why we have the cybersecurity laws, the GDPR, and now even the EU AI Act. Could you define or elaborate on how digital society and trust go hand in hand?  

Kai  02:18

Yes. Especially here in Europe, there has been a realization for quite some years now that when it comes to technical advances and big investments, we are lagging behind. But where we are still strong is in having digital products and digital services that people can trust, because they have, for example, higher privacy levels, higher safety levels, and so on. The idea that the European Commission and the European Union have developed in recent years, as a possibility to create a competitive edge over other regions of the world, is to create legal frameworks that make sure that, for example, artificial intelligence technologies have a higher level of safety, security, privacy and so on than in other regions, meaning that people can have more trust in them. The same applies, beyond the competitive edge, when it comes to using trust to accelerate the digitalization of society. For example, many German citizens are highly skeptical about using a digital identity to log in to their bank account or to do their taxes. Especially for those member states, the Commission is attempting to build a digital environment where trust and safety are very important, making things so safe and so clear that even in countries where there are a lot of concerns, like Germany, people are encouraged to use more digital tools and digital services. So again, it's about building a competitive edge, but also using trust to accelerate digitalization and the deployment of digital tools within society.  

Punit  04:57

Okay, that's interesting. And when we say digital society, there are many actors in it, and especially actors from other regions who probably don't have a legal presence here. The legislative framework then has to extend itself beyond being legally applicable or legally relevant here, to actors or companies who are not based here, like the EU GDPR did. Is that the intention of the EU or the Parliament, to go in that direction, maybe even further?  

Kai  05:32

Yeah, well, this point is a little bit disputed. For example, I'm working for the EPP Group, which is a conservative political group in the European Parliament, and our group is often very hesitant about pushing our rules across the world and forcing other regions to copy-paste everything that is now happening in Europe. Others are, let's say, more open-minded, and they would indeed like to see, for example, the GDPR rules being 100% applicable wherever you go. In the final laws, it's always a little bit tricky and vaguely written. First of all, the European Union hopes for what you could call a soft-power approach: that those European legal regimes are so efficient and of such good quality that other countries take them over without any pressure. A second avenue is, of course, that if you as a non-European company want to do business and join the European market, you need to play by European rules. So basically, by establishing these rules, non-European companies are forced, at least when they are active here with the products they are selling here, to fulfill them. Then we come to a very gray or unclear area: what happens if, for example, a Canadian company develops a product only for the North American market and never wants to bring it into the European Union, but then some customers use the system's output in the European Union? The AI Act has Article 2, paragraph 1, point (c), which says that even the output from this Canadian AI system would need to comply with the rules of the AI Act. But how to enforce that in Canada? There we go back to trade agreements and international private law, and even the lawyers within the European institutions, Commission, Parliament and Council, were not really sure how this plays out in practice. 
So, to summarize my answer to your question: there is definitely an attempt to have an extraterritorial effect with the AI Act, and also with our other digital laws. Mostly it's a soft-power or soft-law approach, but there is a potential legal obligation for non-European companies to fulfill EU digital laws. It's just unclear whether it's really enforceable.  

Punit  08:50

So that's clear. As we move into this digital society, we put the highest priority on fundamental rights, where trust, privacy, and all these elements are essential values for us as Europe. And then we are saying we will use that power as far as we can, and experiment and explore how far we can go. We did that with the GDPR; now it's the EU AI Act. Do you see this EU AI Act helping us build or create that digital trust in the long term? Because, of course, it's complex, as you said, but does it help? Does it go in that direction for EU citizens?  

Kai  09:38

It's very difficult to say right now. In principle, the original proposal from the Commission was going in a good direction in this regard, because it was mainly principle-driven, in comparison to other EU digital laws. It did not go into so much detail, but it was very clear that the OECD AI principles, which were adopted by all G20 countries in 2019, were playing a big role, which is good for international alignment. Those principles were basically translated into the European legal system and would have created an ecosystem in which AI technology is built and deployed in a way that European citizens can probably trust. In the long run, I think the AI Act will lead to a situation where our AI technologies reach high levels of quality, safety, privacy and so on, meaning that citizens can be sure, to a certain extent, that those systems are well-intentioned, and can trust their operations. Where I'm worried is the short-term or mid-term effect, because, as you said, the AI Act is very complex, and therefore I see many companies, especially smaller companies, struggling in the first months and maybe even years to fulfill the obligations in the AI Act, and maybe even to understand what their obligations as developer or deployer of an AI system are. As a result, maybe we will see three to five years of high legal uncertainty, which would exist not only for developers and deployers of AI systems but also for end users, for citizens. If there is, for example, an accident and they are harmed, or their fundamental rights are violated, and the rules are not clear, they will probably also have problems obtaining liability or redress, or winning a court case in the end. 
So yeah, let's see how it plays out, and hopefully I'm wrong with my more pessimistic assessment for the near future. But with a long-term perspective, I do think that the AI Act, and also the other digital laws in the European Union, are probably heading in a good direction, making everything better and creating a situation in which we can trust our digital technologies within Europe.  

Punit  13:03

Indeed, I think it's one of the first initiatives in which we have gone to such lengths in controlling or managing AI. It is a step in the right direction, but we don't know how successful or unsuccessful it will be, or which directional changes we will need to make. But there's a fundamental debate, and I want to hear it from you, because you've been instrumental in drafting this and have followed it for nine years very closely. Is this product safety legislation, or is it a rights assurance mechanism, or a rights redress mechanism, like in the US, where you can sue a party and get some claims, which is not a direction the EU usually prefers? So which is the vision behind the EU AI Act?  

Kai  13:52

Yeah. To start with us, the European Parliament had a different idea, and we wanted to go a little bit in the direction of the US approach or UK approach: having, let's say, non-binding ethical standards for all AI systems, then sectoral legislation that really goes into the details of the use cases and addresses the specific threats, dangers, or uncertainties of development and deployment in those specific cases, and then complementing everything with a strong liability regime. Now, in the AI Act, we indeed have a piece of product safety legislation, but not, let's say, a clean one, if that is the right word, because it's a kind of hybrid legal framework. It's not only product safety; it was complemented by fundamental rights protection, which makes it really difficult to evaluate or assess right now. With product safety, the European Union has a lot of experience: it's a well-functioning ecosystem where people know each other and the proceedings or processes are very clear. Now that fundamental rights protection is included, it's not so clear anymore, and a lot of actors that work very differently, and have no idea how the others work, are now forced to work together. To give an example: in the risk assessment under product safety, so far you only needed to look at risks for health and safety. Now you also need to look at risks for fundamental rights, meaning that market surveillance authorities will in the future need to work together with data protection authorities, which have completely different perspectives. 
So this is one of the reasons why I was talking before about legal uncertainty: I'm not so sure how the next two, three, four years will work out, and we will need at least some months, or maybe even some years, to make adjustments and to connect people, like I said, data protection people and market surveillance authorities, for example from the health sector, that so far have not worked together. Then, going back to your liability question: we now have in the European Union an updated Product Liability Directive, which covers health and safety violations under the AI Act. So in those cases, because AI is now part of this Product Liability Directive, you could potentially file a redress claim if, for example, damage or harm happened. But on fundamental rights, it's not so clear what happens. We have national liability regimes that sometimes could work, sometimes maybe not. We have some sectoral liability regimes, maybe even strict liability regimes, for example for automated vehicles. But again, especially with fundamental rights, with discrimination and so on, there is maybe a liability gap, and this is something the EU institutions will look into in the next years. There is already an AI Liability Directive presented by the Commission which could potentially close this gap. But the Parliament and the Council will only start to look into this particular legislative proposal after the summer break, because we were overwhelmed with all the other digital laws that were being discussed until the European election.  

Punit  18:18

So that's fair enough. If I were to put it in my own words, I would say it's a hybrid approach: you take liability, market competition, and controlling products as a safety element, and then combine all the fundamental rights elements, hence the complexity and the uncertainty it will bring in the early years, but then things will settle. Another thing I take from this is that it's not the end state. It will be complemented with other legislation and other directives to make sure the gaps are plugged, because it cannot be the end-all solution.  

Kai  18:52

Exactly, exactly, and not only in the AI sector. I think the European Union did well here, even though right now there is a problem with contradictions and overlaps. If we are able to fix those overlaps and contradictions, we now really have a kind of digital legal framework: one that deals with data, both non-personal and personal, through the Data Act and the GDPR. We have the AI Act. We have the Digital Services Act for platforms. We have the Digital Markets Act for gatekeepers, companies that are very powerful in market terms and need to fulfill stricter rules. We have a lot of specific laws that look into platform workers, for example those working for Uber Eats or Amazon. And all those pieces in the end, again, if we manage to fix those overlaps and contradictions, should indeed create a framework that, going back to our first topic, creates a high level of trust that companies, but especially citizens, can have in the digital market in the European Union.  

Punit  20:17

That makes perfect sense. And I think some of the laws have been written for the non-digital world, like IP and copyright, and they never envisaged the kinds of scenarios we are encountering, or are going to encounter, through AI and all the emerging technology. So over time, not in one or two years but in the coming years, they will need a bit of fine-tuning or a refresh. And as we do that, one of the key elements you already talked about is that the EU AI Act takes a risk-based approach. It's not that there is a compliance obligation for everybody; there are two dimensions. The EU AI Act does specify how you classify, and of course there will be more guidance coming. But is it a choice companies have, where I can play around and position myself as medium risk or high risk or low risk? Or is it going to be an objective approach that determines which category you end up in, leaving you no choice? How would it work?  

Kai  21:28

Yes, and it's tricky to answer, because there are several layers. First of all, you are right with the second option you presented: in general, the AI Act is very clear. If you are, for example, in the category of critical infrastructure, using an AI system for the authorization or management of critical infrastructure, then you are high risk. The company cannot say, no, I'm not high risk, so I'm not fulfilling any rules. However, it is now the case that companies, when they produce an AI system and place it on the market, do not need to do a third-party conformity assessment in most cases. The exception is biometrics, where they do need a third-party conformity assessment, meaning, for example in Germany, they would need to go to TÜV or DEKRA, the notified bodies, a third party that basically checks your whole product: whether it's high risk, whether it fulfills the obligations of the AI Act, and so on. Since in most cases they don't need to go to those third parties, they will do an internal conformity assessment. And they could, for example, say: even though we maybe fall into the high-risk category, we don't think we are high risk, and just place their AI system on the market. There are two problems with this approach from the company's perspective. Beforehand, there is no power to force them to do otherwise, but they would have a huge problem once they place the product on the market and the market surveillance authorities say: you misclassified, because we think you are high risk, and you didn't even notify us that you don't consider yourself high risk, which would have required a dialogue between you and us. So this is the first reason they could potentially face a huge fine. 
And secondly, if they are misclassifying, or even if they are not misclassifying but their product violates AI Act standards or maybe even other laws, market surveillance authorities have a long list of measures they could use against the company. In the end, they could even ask the company to completely withdraw the AI system within a matter of days, which means a lot of costs for the company, and again it could be combined with huge fines. So, to make it short: even though there is a lot of flexibility and freedom for companies until they place an AI system on the market, and in most cases no obligation to involve other players to check your product, once they put it on the market there is such a big risk of a fine that most companies will probably not take that risk, but will want to make sure the system is completely compliant with the AI Act even before they put it on the market.  

Punit  25:28

No, that's fair enough. I think it's always wiser to take the safer approach, especially with regulatory compliance, and to have a defensible position. But here we are talking about a bit of complexity, or an anomaly: on one hand there's the software, which we call the AI system, and on the other, the product which is using that AI system. Let me give you an example: Alexa is a device, but behind it there's a system. The phone is a device, but now they are saying they will build in the AI system, the software. In the same way, an ultrasound machine is a medical device, but behind it there can be artificial intelligence, so that it does more than just scanning certain parts of the body. So to which one would the EU AI Act apply? Because for the other one, the product, there is product safety legislation already.  

Kai  26:21

Yes, well, in the end, since the AI Act is product safety legislation, AI systems are products. So it would really be the final AI system, or downstream AI system, as we always called it in our negotiations. For example, SAP, a German company that produces a lot of B2B products, might create an AI system that can be used to scan applications for a new position in a company, and this system is then used by, let's say, a German city for its city hall. Then we would have SAP as a provider that needs to fulfill the whole AI Act, because it wants to bring this HR AI system onto the market, and the city hall, let's say of Munich in Bavaria, would also need to fulfill certain obligations as a deployer under the AI Act. So basically, there is not really a differentiation between what in the end is a device or an end product and what is, let's say, sheer AI technology, sheer numbers and so on. The AI Act only looks at the end product. One thing that needs to be taken into account, and this touches a little bit on what you were saying at the beginning, is that we in the EU institutions were very aware that, similar to a car, an AI system, when we are talking about the end product, is often really an end product consisting of a lot of different pieces: a lot of different data sets, a lot of external companies or contractors that maybe just join the project for a few days to do some coding or contribute some services, and so on. We wanted to make sure that all those different actors along the AI value chain play their part, so that in my example it's not only SAP that is basically fulfilling the whole set of obligations of the AI Act while lacking, in many situations, information from all those other players. 
This is described in Article 25, which is called responsibilities along the AI value chain. This article makes sure that, going back to what I just said, all players that played a part in designing or creating the final AI product need to share at least information, or in other situations maybe even need to provide technical support and so on, in order to allow the final downstream company that is placing the AI product on the market to become fully compliant with the AI Act. How this works in practice, we don't know, because it was a very theoretical discussion that we had. We were trying to solve the situation with standard contractual clauses: the Commission will create templates that companies can use to make sure the necessary information flow happens. But again, it's something really new, and here too, I would say a side effect, at least at the beginning, is legal uncertainty, because companies need to get used to it if they are not already doing it.  

Punit  30:45

So, in a sense, it's not a straightforward binary yes/no question. It's a question that builds up from: are you using an AI system, which risk category is it classified in, how is it deployed, and what role do you play in the whole value chain?  

Kai 31:02

Exactly.  

Punit  31:03

All of that then determines what obligations apply and what you need to do. And maybe more than what you need to do, you need to collaborate with the party on your left and the party on your right, so that everyone fulfills their obligations in their entirety and is responsible. So it will be quite an ambitious challenge, I would say.  

Kai  31:25

Exactly. The regulatory focus is, as we said, always on the downstream actor. It's the typical product safety legislation approach: the company, or even a university, or whoever is commercializing the product or service and placing it on the market or putting it into service, this actor is the focus of the AI Act. But indeed, as you just said, all the other players in this ecosystem, or in the AI value chain of this specific product or service, will play a part.  
Punit  32:07

So that will be very interesting, and only time will tell how it shapes up and develops. But I think it's needed, because if you look at your example of the car, it has many AI systems in it: a suspension system that gives a warning, a braking system, an alert system, a sensing system, and also checks on various parts of the technical equipment. All of them combine and feed data into a system that makes decisions, but each one of them is feeding a decision. So it's going to be, I think, very interesting times. But as we move into this digital society, or digital market, or digital Europe, as we call it, over the next 20 to 30 years, what steps, at a macro level, do you see in addition to the realization of this EU AI Act and the realization of the dream, or the reality, of this digital Europe?  

Kai  33:06

Yeah, I think right now the most important point, and I have already made a statement about this once: I really feel we did a bit too many digital laws at once. We are now at 116 digital laws in Europe; I think 88 have already entered into force. Many of them are already applicable, or will become applicable in 2025 or, at the latest, 2026. As I said, a lot of very good ideas and very good intentions, but also many, many overlaps and contradictions, because of how the European Union, and many democracies, work: there are a lot of work silos, and people are just not talking enough and coordinating enough with each other. So I think the number one priority is to make sure that our good intentions and ideas really work in practice, and, to the point I have made several times already, that they do not create such high compliance costs for all the companies that want to play by the rules that in the end they are no longer competitive because they need to spend so much money. What I hear, for example, from leading digital companies in Germany is that right now 30 percent, three-zero, of all their staff is focused on regulation and compliance. That is, of course, way too much. So hopefully, besides our attempts here in Brussels to make the legal framework easier and more streamlined, we are also focusing more on easy compliance tools: maybe some public-private partnerships, maybe the regulatory sandboxes that we now also find in the AI Act can play a role, digital innovation hubs, and so on, to deal with this compliance topic. A second point is an ongoing issue that we have had for a long time: digital skills, or AI literacy in the case of artificial intelligence, where unfortunately the European Union doesn't have a lot of competences. It's a competence of the member states, and we are really lagging behind. 
If you compare a German school to a South Korean or Singaporean school, you see major differences, and I think we really need to make strong improvements there over the next years to really have, to quote you, a digital Europe in 10 years, a Europe where technologies are being used. When it comes to the mindset of people, I'm a little bit more positive. Especially in Germany, we had huge problems with people not wanting to digitalize their lives. But I think it was good that our country became more international, and that product teams and startups nowadays have a lot of international people working in them, because they kind of force the Germans, who are always looking at the risks, to open up and try things out. And this is the final point: if we use this more positive mindset and take a risk, and really, finally, deploy technical tools in one area, and there we are going very much in your trust direction, for example European electronic identities to access your tax declaration, banks, and so on, and roll out those applications or mechanisms across the European Union in different sectors and are able to centralize everything, then I think we could manage a real, genuine digital Europe in the next 10 years. But our problem so far, besides the lack of digital skills, was often the hesitation and this national mindset, so that we were only thinking about how this works in Germany, or even within Germany, only how it works in Bavaria or in the Berlin area. Let's hope that this changes now, that we think truly cross-European and finally have those big projects rolled out.  

Punit  38:37

I think time will tell. But when we talk about implementing in an organization, we talk about policy, process, technology, and then people. And when we talk about making these changes at the continent or country level, it's about legislation, society, and skills. On legislation, we are doing our bit; you're doing your bit. When it comes to societal adoption, I think people have seen what they've seen, and they can't foresee what is coming to them, and that's where adoption takes time. And then skills are what you need to make that legislative framework a reality on the ground. We all know it's a huge effort and a huge challenge, but I think the world will change faster, and this kind of legislation will help us, and maybe we will have more trust and less fear, and that's the whole intention, indeed. So with that, I would say thank you so much for your time. It was wonderful to have you, and a very insightful conversation.  

Kai  39:42

Thank you very much for the invitation. It's a pleasure.  

FIT4Privacy  39:47

Thanks for listening. If you liked the show, feel free to share it with a friend and write a review. If you have already done so, thank you so much. And if you did not like the show, don't bother, and forget about it. Take care and stay safe. FIT4Privacy helps you create a culture of privacy and manage risks by creating, defining, and implementing a privacy strategy that includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPP/E, CIPM, and CIPT through on-demand courses that help you prepare and practice for the certification exam. Want to know more? Visit www.fit4privacy.com, that's www dot fit, the number 4, privacy dot com. If you have questions or suggestions, drop an email at hello@fit4privacy.com. Until next time, goodbye.

Conclusion

As we conclude our discussion on the EU AI Act, it's clear that this groundbreaking legislation marks a pivotal moment in the global landscape of AI regulation. By prioritizing ethical considerations, transparency, and accountability, the EU aims to create a future where AI is used for the benefit of society.


While challenges and uncertainties remain, the EU AI Act offers a promising framework for responsible AI development and deployment. By fostering collaboration between policymakers, industry leaders, and researchers, we can work towards a future where AI is a force for good, driving innovation while safeguarding human rights and values.

ABOUT THE GUEST 

Kai Zenner is a digital enthusiast focusing on AI, data and the EU's digital transition. He has a soft spot for interinstitutional reforms and the 'Better Regulation Agenda', takes a cooperative and pragmatic approach, always trying to strike a balance, and is annoyed by stagnation, ideological mindsets and political power plays in the EU institutions and elsewhere.

He graduated in politics and law after specializing in Security Studies, Foreign Policy Analysis, and Constitutional and European Law. He studied Political Science (B.A.) at the University of Bremen, Law (First German state examination / Dipl.-Jur.) at the Universities of Freiburg, York and Münster, and International Relations (M.Sc.) at the University of Edinburgh.

He started his professional life as a Research Associate at the European Office of the Konrad-Adenauer-Foundation in Brussels, before moving to the European Parliament as Head of Office and Digital Policy Adviser for MEP Axel Voss (EPP Group) in mid-2017. He is a member of the OECD's Network of Experts (One AI) and of the AI Governance Alliance of the World Economic Forum. He was also part of the temporary United Nations Expert Group that supported the Secretary-General's 'High-Level Advisory Body on AI' in 2024. He was awarded best MEP Assistant in 2023 ("APA who has gone above and beyond in his duties") and ranked #13 in Politico's Power 40, class of 2023 ("top influencers who are most effectively setting the agenda in politics, public policy and advocacy in Brussels"). 

Punit Bhatia is one of the leading privacy experts. He works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organizational culture with high AI and privacy awareness and compliance as a business priority, by creating and implementing an AI & privacy strategy and policy.

Punit is the author of the books “Be Ready for GDPR”, which was rated as the best GDPR book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4Privacy Podcast, which has been featured among the top GDPR and privacy podcasts.

As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one’s values to have joy in life. He has developed a philosophy named ‘ABC for joy of life’, which he passionately shares. Punit is based out of Belgium, the heart of Europe.

For more information, please click here.

RESOURCES 

Listen to the top ranked EU GDPR based privacy podcast...

Stay connected with the views of leading data privacy professionals and business leaders in today's world on a broad range of topics: setting up global privacy programs for private-sector companies; the roles of the Data Protection Officer (DPO) and the EU Representative; Data Protection Impact Assessments (DPIA); Records of Processing Activities (ROPA); security of personal information, data security, personal security, and the overlaps between privacy and security; prevention and reporting of personal data breaches; securing data transfers, the Privacy Shield invalidation, and the new Standard Contractual Clauses (SCCs); guidelines from the European Commission and other bodies like the European Data Protection Board (EDPB); implementing regulations and laws such as the EU General Data Protection Regulation (GDPR), California's Consumer Privacy Act (CCPA), Canada's Personal Information Protection and Electronic Documents Act (PIPEDA), China's Personal Information Protection Law (PIPL), and India's Personal Data Protection Bill (PDPB); different types of solutions; even new laws and legal frameworks to comply with; and much more.