Sep 12 / Punit Bhatia and Mark Thomas

Govern and Manage AI to Create Trust

How can organizations really keep AI safe and make smart decisions with it? As AI moves faster and becomes a bigger part of everything we do, managing the risks and building digital trust is more important than ever. There are frameworks like ISO 42001 that help guide us, but just having rules isn’t enough. What really matters is how companies set up their teams, share responsibility, and create a culture where everyone supports doing things the right way.

In a recent episode of the FIT4PRIVACY Podcast, Punit Bhatia sat down with digital trust expert Mark Thomas to talk about how organizations can use governance frameworks effectively—not just as checklists, but as tools backed by leadership and real action.

Transcript of the Conversation

Punit 00:00 
We often say that the digital trust concept is being disrupted by AI. Is it really true? Isn't managing AI, governing AI, managing the risks around AI, the real thing? Do you really believe in it? And if so, what is the role of the different frameworks that exist: ISO 42001, the EU AI Act, the World Economic Forum's Digital Trust Framework, and so on? Are these to be used as is, or are these to be used as a means to review your governance and risk management processes? These are all relevant questions. And who better to talk about all this than Mark Thomas, an ISACA Hall of Fame inductee who provides advisory and consulting to companies on digital trust matters. So, let's go and talk to Mark.  

FIT4Privacy Introduction 00:54 
Hello and welcome to the FIT4Privacy Podcast with Punit Bhatia. This is the podcast for those who care about their privacy. Here, your host, Punit Bhatia has conversations with industry leaders about their perspectives, ideas, and opinions relating to privacy, data protection, and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.  

Punit 01:23 
So hello and welcome to another episode of Fit4Privacy podcast and today we have with us Mark Thomas. Mark, welcome to Fit4Privacy podcast.   

Mark 01:32 
Thank you very much for having me. It's always a pleasure chatting with you.  

Punit 01:35 
Absolutely. It is the same here. I enjoy talking to you and listening to your speeches. So, let's start with the fundamental question these days: trust, or digital trust, in the digital environment. How would you define this digital trust?  

Mark 01:51 
This has been around for a couple of years now, and when people go out on the internet these days, you always see this thing that says digital trust, digital trust, digital trust. When I started getting into digital trust with an association called ISACA, I was part of building the Digital Trust Ecosystem Framework. One of the things we wanted to do was define what we meant by digital trust, because there are a lot of different perspectives out there, and in a lot of the research I did, they were focusing specifically on financial transactions and security-based types of things. But you could lose trust in an organization for a lot more than just a bad financial transaction or a cybersecurity event. There are a couple of key things in how I like to define digital trust, and one key word here is confidence. It's confidence in the integrity of the interactions and the transactions in a relationship. You and I may have trust just in our interactions; we may have trust in transactions. They're between two roles, a provider of a service or product and a consumer of a product or service, and they're in this thing called a digital ecosystem. Today, everybody's digitally transforming. We've been digitally transforming for decades now; we're just using different technologies to do it. We're trying to reduce the amount of friction between us as providers and our consumers. It's the way an organization puts together its people, organizational structures, processes, and information to maintain that digital, trustworthy world. So, it's about those seamless interactions and transactions.  

Punit 03:34 
Okay. Okay. And in this digital world, there's something we talk about these days: artificial intelligence. Now many people say that AI is disrupting digital trust. Do you agree with that?   

Mark 03:49  
Yes and no. A lot of that depends on how well an organization governs its AI, because there are two pieces here. There's one piece of AI where I may go out to ChatGPT to find some ideas for a presentation. But as an organization builds its AI platforms, that information now has to be trustworthy, because you've seen a lot of stories out there where somebody asks a question on ChatGPT and it gives them an answer that is wildly insane, but it's something that was written somewhere on the internet. AI can behave unpredictably if you don't govern your machine learning and your generative AI platforms. It can lack transparency, or it can be used unethically, like making biased decisions about certain buying decisions, or misusing personal data, or enabling surveillance when somebody didn't allow that from a privacy perspective. So there are some risks that challenge users' confidence and raise concerns. But what's interesting is that technology, in this case AI, can do a lot of great things, and sometimes it only takes that one incident to make the entire industry say, oh, it's bad. I think we're getting to a point where governance and management of AI structures are at the top of decisions right now, and the boards that I'm talking with want it, need it, and are adding governance over this. But I do want to make a case here: don't freak out over AI. I remember years ago, when I first got into the IT world, client-server technology blew my mind. It took me a long time to understand client-server. Now that's old news, right? Cloud blew my mind; we all freaked out. We were freaking out over the Internet of Things, and now it's AI. But guess what it's going to be five years from now, or maybe longer? Quantum computing. 
So the way we govern these things, I think, is the way to really look at it. There is a negative example with ChatGPT: a data leak incident a couple of years ago. OpenAI's ChatGPT experienced a bug that exposed portions of user conversation histories and payment-related information to other users. It was for a brief period, but some users could actually see snippets of other users' chats and billing information. What I tell people is, when you use a generative AI tool, somewhere, somehow, somebody typed that information in, and now we're collectively using it. So that case disrupted some trust, because there was a privacy violation, there was a transparency gap, and there were concerns over ethical governance. Even a small bug in an AI system that's widely adopted and seen as cutting edge can create a large impact on public trust. And I think we miss the fact that there are so many positive things being done from an AI perspective, from public protection, from safety, all those pieces. We love attacking the negative side of it, but we don't realize that we've developed a lot of trust in a lot of organizations through AI that we didn't actually know was AI. When it goes wrong, though, we certainly know it was AI; it's the first thing to blame. Would you agree with that, or do you think there are other areas that I might be missing?  


Punit 07:37 
No, I would fully agree with you. I think a lot of the press, a lot of the media, tends to focus on the negative, and any technology, be it cloud, be it AI, be it quantum, gets a lot of press when something goes wrong. But 90 percent plus, 98, 99 percent goes right, and that is never talked about. And you bring in another important point: it's not AI that's disrupting, it's the newness. It's the unfamiliarity, it's the speed, it's the impact that it creates that is scary and that is disrupting the trust. And if you govern it well, if you manage it well, if you identify your risks and manage them, and try to go a bit slow, if I may call it that, by slow I mean responsible, not slow meaning don't do it. And as you mentioned, do it trustworthy, do it responsible.  

Mark 08:27 
Yeah. And you mentioned governance and risk. The concepts of governance and the concepts of risk haven't changed, right? Some people overdo them a little bit, but it's the same kind of thing, positively speaking. I've worked with several banks that are using AI to help find money laundering. Mastercard has invested heavily in AI-powered fraud detection systems, and they have this Decision Intelligence, which uses machine learning to analyze real-time transaction data. So if you've ever received a message from your credit card handler that says, hey, we've spotted this, that's a positive thing. They're actually helping you, because they're using these systems to catch it. And they don't say, FYI, we caught this with our AI tool. They don't say that; it's part of business. Yeah.  

Punit 09:16 
That's always the risky part, but I agree. If you manage it well, if you manage the risk, if you understand the risk and do it in a responsible manner rather than being in a hurry to let it out: manage it, identify it, and then slowly bring it to the market with low risk. And if we want to govern it and identify the risks, I think that's where frameworks or standards help a lot.  

Mark 09:45 
You got it. You got it. They truly do. And that's one of the recommendations I always make to people: look at the various frameworks and standards out there to help you. Though with some of them, we get overloaded with frameworks. I always talk about this phenomenon called framework fatigue; there's a framework that's going to save every company, and there are so many of them out there. And I'm sure we're going to hit some frameworks here in a little bit, if you'd like. So, you talked frameworks. You want me to hit some frameworks for you?  

Punit 10:17 
I was thinking, now that we've talked responsible AI, AI governance, and AI risk management, we also say that this is new. And when something is new, it needs to be managed, it needs to be governed. Of course we know how risks are managed, how governance is done; it's all the same stuff we've been doing in IT. But how do you apply that in an AI setup? Because when you applied it to a normal, offline system 30 years ago, it was different. Now, with online, cloud-based, AI-based, self-learning systems, it's slightly different. So which frameworks would you talk about or recommend? I know there's the ISACA framework, and there are risk management frameworks, but which ones are your favorites?  

Mark 11:01 
Yeah, there are a couple I'll turn to. But first, you were talking about governance and management, and the first thing I want to do is make sure we talk about the distinction between governance and management. Governance is direction and control. These are the policies, these are the delegations of authority; from a risk perspective, it is defining, say, our risk appetites and our tolerance levels, and those types of things. And that's usually done through boards, committees, and sanctioned or chartered governing bodies, or multiple governing bodies in an organization. Usually that starts at the board. And then management actually executes. So when we're talking about AI, and about trust specifically from a governance perspective, I want to make sure that, hey, we have transparency inside our policies, we embed ethical considerations into these pieces, and we validate that our models are accurate. And every one of these requires human oversight. We're not at the day where human eyeballs are not required; everything has to have some human eyeballs to make sure it's legitimate, and we have to communicate these things. Now, on the different frameworks and standards: when I really got into this several years ago, the first real framework I looked at was by the World Economic Forum. The World Economic Forum has a digital trust framework, a global, multi-stakeholder approach to building digital trust, as you'd expect from the World Economic Forum. It talks about things like responsible technology use, cybersecurity, privacy, and so on. I did like it, but as a practitioner I needed more detail. It gave me a very strategic view; I needed more operational, more tactical information. 
Still, there are key things in it, because I do recommend people look at it from a global perspective, and it has the ingredients you would expect: ethics by design, cross-border collaboration, trust as a driver of innovation and inclusion. Okay, so that's one. I do want to tell you a little bit about the Digital Trust Ecosystem Framework today, and I'm not saying this just because I'm one of the core architects of the framework, but I will tell you that it is very practical and very outcomes-based. It not only hits that governance level of trust, it also hits the tactical and operational areas. It has several of what we call trust factors, and the key themes are trust, relationships, and stakeholder value. It's always looking at that from an organizational perspective, because why does an organization exist? To succeed at meeting its goals and completing its strategy, right? But we may also be looking at risk, compliance, governance, continual improvement, and so on. So I do like the DTEF, as we call it. It's a good one out there that I turn to quite often. NIST also has some great stuff; I love NIST, and I would suggest it to anybody looking at governance or overall risk. You've got, of course, the Risk Management Framework, but you also have the AI version of that, the AI Risk Management Framework. A lot of times, globally, people say, yeah, that's US government stuff, but it is basically a free version of an ISO standard. And I love ISO standards, because we've got another one called ISO 42001; ISO 42001 is the ISO standard for AI management systems. So if you're looking for standards out there, the NIST AI RMF and ISO 42001 are pretty good. The OECD, Oscar Echo Charlie Delta, has its AI Principles, a pretty large global policy on responsible AI. And of course there's the EU AI Act. 
We've seen significant amounts of standards and requirements coming out of the EU, especially when we look back at GDPR. Now we've got the EU AI Act, and that is a landmark legislative framework for regulating high-risk AI systems. There are several others, but I'll also say this: as we talked about governance, one of my favorite go-to frameworks, and it doesn't say anything about digital trust in it, is the COBIT framework. I believe, and this sounds very redundant, that the COBIT framework is a framework to manage frameworks. It is my governance framework, because it says, hey, you can use all these frameworks, but you need to have this central nucleus of understanding how you govern and how you manage, and then bring these other frameworks into it. It brings in other things like TOGAF for architecture, ITIL for, say, service management, a lot of the project management methodologies, and so on. We could go on for a long time on that, but I think I've hit the big ones. When I have a question or need to validate something, I'll go to one of those we just talked about. I really like them a lot.   

Punit 16:23 
No, I use the same approach, and I like the fact that COBIT is strong because it taught us, or guided us, to manage a system. AI is also a system, so let's not forget that.   

Mark 16:40 
You got it. And I agree. That's why I said don't freak out because it's AI. It's a system. It's just one positive, disruptive technology, and there's going to be another one after that, and another one after that. If you have a good approach to how you govern, you shouldn't have to freak out. And that goes to another role of governance, if you don't mind; I'm going to expand the conversation. When it comes to digital trust and AI, specifically from the digital trust view, I'm not saying go out and create a digital trust committee. I'm not saying go out and create a digital trust officer, because quite frankly, and many of your listeners know this, we are not only overwhelmed with frameworks, we're overwhelmed with the number of committees we have. Now we have a separate committee called AI, or a separate committee called Digital Trust. That's a subject that needs to be, or should be, integrated into your current committee structure, right? Then there are the chiefs. Now, all of a sudden, we have a Chief AI Officer, we have a Chief Digital Trust Officer. We're getting overwhelmed with chiefs. If we have so many chiefs, who's actually doing the work? It's a very sensitive topic for me, because some organizations, based on their bureaucracy and their organization, do need a chief and a committee for it. I wouldn't suggest that in all cases, though, so I thought I'd throw that in there. I just don't want people to think I'm saying, oh, go create a committee and go create a chief role for this. I think these can be integrated into your current governance structure, and again, it depends on the organization. Yeah. What do you think about that?  

Punit 18:19 
The organization is the core part. If you are a GE or a Ford, spread all over the world, and you are very committee-oriented, you need an AI committee, you need a digital trust committee, you need an ethics committee, a risk committee, like the banks that separate non-financial risk and financial risk; all those committees are distinct, and that makes sense. But if you are a 500-person, or a thousand-, or even a 5,000-person company, you don't need the bureaucracy of a hundred committees. You have one committee, say, in my opinion, call it a data committee, that takes care of your privacy, security, risk, and audit, and you put those agenda items in: the first 15 minutes we're going to talk about risk, then we're going to talk about privacy issues, then we're going to talk about the other issues, and you run your data committee like that. But that's where most companies get it wrong, I would say. And I agree: don't overdo it.  

Mark 19:16 
Yeah, because, as we were mentioning before, for very large multinational companies, one incident may not diminish trust enough to put them out of business. But for small to medium-sized businesses, or anything smaller than the globals, an incident could put a real hurt on revenues, on customer retention, and those kinds of things. I think that's a big deal. So it has a lot to do with your culture and so on. And besides, digital trust and AI are not just about a relationship between a for-profit company and a customer of that company. Think about all of these relationships. As an organization, I may be for-profit or not-for-profit. I could be a church group, I could be an association, I could be a small business. But I have relationships not only with my customers; I have relationships with my vendors and my suppliers. And by the way, those vendors and suppliers are third parties, and I'm not just worried about them; I'm worried about their fourth party, fifth party, sixth party. I'm worried about that small entity in the digital supply chain that's so small it isn't compliant with any laws or regulations because it's off the radar. It can create one vulnerability that could propagate itself into my company. Then you've got business to employee: my employees have to trust me, and I have to trust them; there's digital trust there. There's trust that I have with my government entities: I may have to do reports, I have to file my taxes, I have to apply for permits. Do I trust that all those things are taking place? The one thing that I think will scare a lot of people is what I call proxy technology; that's the technology between you and me, and that's where sometimes people get nervous, because you and I are peers, right? You and I may communicate with each other; we talk through email, we do all these things. 
But we may not trust some of the technology between us. My son lived in China for a little while; he was doing a university professor exchange. Before he left for China, he said, dad, the only way that you and I can literally talk to each other while I'm in China is through a messaging app called WeChat. He said, I just need you to know that I trust you, I trust everything between us, but don't trust WeChat, because it's monitored; it is known as a Chinese-influenced organization. He said, so in this case, I trust you, but I don't trust the technology. And that's what happens to a lot of people: I may trust your organization, but I may not trust the technology. Another part of this is that AI is being used today to help organizations respond to events. Let's take another example. I'm a Marriott fan; I love the hotel chain Marriott. Marriott has had a couple of unfortunate cybersecurity and privacy-related incidents; in fact, they were even fined under the GDPR. I've received messages from Marriott saying, hey, you've been compromised, here's what we're doing, and so on. But I still trust Marriott. I've had friends who, at the first sign of distress, left Marriott and said, I'm never going to that hotel chain again. But for some reason, the way Marriott treated me and the way they handled it, I continue to trust Marriott. So why is it that I trust Marriott, and have a lot of digital trust in Marriott, where somebody else does not? Those are the things that are really interesting, because it's not just that you're secure, that you protect privacy, and that you're compliant; it's how your users and your customers feel. Remember, that's the confidence in the relationships of our interactions and transactions. I probably took that a little bit too far, but it's always an interesting thing. 
And AI is just one of many technologies that are going to do the same thing for us over the next 10 to 15 years, or beyond.   

Punit 23:35 
Yeah. I think confidence, perception, decisions, these are usually personal. Everybody perceives, decides, and confides in their own way. You and I may be trustworthy for a few people, and not for some others. That's how it goes. Exactly. But coming back to the frameworks angle that we are exploring: yes, AI needs to be governed and managed, risks need to be managed, and yes, there are frameworks. I know the answer, but I still like to ask you, for the benefit of our audience: if there's an organization that wants to govern and manage the risks around AI in a more effective way, how do they go about using and leveraging these frameworks? Because typically I see the reflex of, let's become ISO 42001 certified, which is not necessarily, in my opinion, the same as embedding it into their practices. And we talked about not setting up a committee for everything.   

Mark 24:33 
Yeah. Yeah. And so it goes right back again to the foundations and the core ingredients of governance. When we're looking at governing, say, AI, and the digital trust around it, the first one is structure: organizational structures. We are moving so fast right now with high-velocity IT. We started with this thing called a waterfall approach, then we moved to Agile, and now, through things like AWS and Azure, we've got DevOps, where I can literally do thousands of changes a day. Structure says: delegate levels of authority as much as you can, and those levels of authority can only be exercised under certain conditions, because we can no longer say, oh, I can't make that decision, that has to get escalated to a committee. Now I know I have clear decision-making authority, and there are some decisions that exceed my authority and need to be escalated. Get it? So the first one is organizational structure, absolutely key. Accountability is the ownership piece of it. I see too many organizations that will delegate levels of authority, but when it comes time for that delegated authority to make a call, they're afraid to make the decision, because they may not have the knowledge, those kinds of things. And that's the oversight part of it. I truly think data and IT architecture is absolutely huge. There's a framework called TOGAF, if you or your listeners have heard of it; it doesn't talk specifically about AI, but it says, here are the building blocks. You've got data architecture, you've got the DMBOK, and some other bodies of knowledge out there. So structure, accountability, and architecture are obviously big pieces. And I truly think it comes down to your basic processes: incident management, continuity and disaster recovery, problem management, relationship management. 
I think those are key to have there. But I will tell you, the biggest killer of any governance system or program is culture, organizational change. I've seen organizations that say, oh, we want to change. But then, when it comes to leadership support, when it comes to funding, when it comes to training, there's no support. Yet they keep saying change. So the culture at the leadership level says, we want to do this, but they're not putting their money where their mouth is, and it's not necessarily money, it's the overall support. Somehow everybody wants to put this word governance in front of everything, and they believe that governance is somehow going to magically occur. But you can't govern yourself. I can't tell you to govern yourself, because if I tell you to govern yourself, whether we're talking about AI initiatives or good old-fashioned IT governance, financial governance, and so on, what I've just asked you to do is, number one, make up your own rules; number two, determine which of your own rules you choose to follow; and number three, determine the consequences for the rules you choose to break. You can't govern yourself, wherever you are in an organization. I don't care if we're talking AI, I don't care if we're talking quantum computing in a few years. There's always a governing body setting the guardrails for you. That is the key thing, I believe. Do you have any comments or feedback on that?  

Punit 28:21 
I would agree with that, because you put a nice spin on it. Typically, people look at the framework or the process as the solution, but the process or the framework is only as good as the implementation and execution you do. So your risk management, your governance, your processes have to be strong enough, and you use these frameworks as tools, as a means to review, to enhance, to enrich, to make them more effective and efficient. Maybe you say, oh, ISO 42001 says you should do this and we are not doing that, so let's incorporate it into our process. The Digital Trust Ecosystem Framework of ISACA is saying this; are we doing it? Maybe let's check with our privacy colleague whether it's being done on the privacy side. That's the way to use it. But don't expect: I have implemented ISO 42001 as a policy, and we have a digital trust framework or committee, and it'll take care of itself. It won't. It won't. Not at all.  

Mark 29:20 
I agree. I agree. You made me think of a pet peeve I have when I do courses and webinars and things like that. People know I'm a framework fanatic; just like you, we love our frameworks. But the framework doesn't make decisions. I have people always saying, okay, Mark, here's my scenario: I have A, B, C, D, E, and F, these are my situations, what does COBIT say? I'm like, nothing. COBIT is not making your decision. It's not a punch list where you enter all these parameters and COBIT says, here's your decision. Now, AI is the same way. We can ask our generative model multiple questions: I've got A through F, what would you suggest? And people say AI made the decision. AI did not make the decision. I think a lot of people are hiding behind the claim that it was an AI decision. AI doesn't make decisions; it gives you the information to make a decision. Human leadership will never be taken away. So when you were saying that, I was like, ah, I'm glad he brought that up.  

Punit 30:32  
No, you're 100% right. AI will help you make better decisions. The decisions, better or worse, will still be made by you.   

Mark 30:40 
I agree with you a hundred percent on that. You got it.  

Punit 30:43 
And that's, I think, a very good moment to wrap up. It has been a good conversation, but if somebody wants to contact you, how do they do that?   

Mark 30:52 
Oh, so I'm an independent guy out in Phoenix, Arizona. You can contact me at my email address, mark@escoute.com; that's E-S-C-O-U-T-E. Or you can find me on LinkedIn; I'd be happy to connect with many of your listeners as well. I also have information on this subject out on YouTube, and you can find me at markthomasonline.com.  

Punit 31:20 
Perfect. Mark, as always, a very interesting conversation, very energizing. Thank you so much for your time.  

Mark 31:27 
It's been a pleasure and an honor, and we'll talk to you again very soon. Thank you so much for your time. Have a good week.   

Punit 31:31 
Thank you.  

About FIT4Privacy 31:32
Thanks for listening. If you liked the show, feel free to share it with a friend and write a review. If you have already done so, thank you so much. And if you did not like the show, don't bother; just forget about it. Take care and stay safe. FIT4Privacy helps you create a culture of privacy and manage risks by creating, defining, and implementing a privacy strategy that includes delivering scenario-based training for your staff. We also help those looking to get certified in CIPP/E, CIPM, and CIPT through on-demand courses that help you prepare and practice for the certification exams. If you want to know more, visit www.fit4privacy.com. If you have questions or suggestions, drop an email at hello@fit4privacy.com

Conclusion

Mark Thomas and Punit Bhatia agree that building digital trust with AI isn't just about following frameworks or earning certifications. It starts with a clear organizational structure, where people know who can make which decisions and when to escalate. Accountability matters, but people are sometimes afraid to decide when they don't feel supported or informed.


They also point out that strong IT and data architecture is key to good governance. Even if some frameworks don't address AI directly, they can still guide how to organize and protect data. Basic processes, such as incident and problem management, are also needed.


Most of all, they say the company's culture matters most. Without real support from leaders, in the form of training, resources, and time, frameworks won't work well. Governance can't just be a buzzword or something people ignore. AI doesn't make decisions on its own; it helps humans make better choices, so leaders must stay in charge.


Punit sums it up by saying frameworks are tools to improve governance, not magic fixes. Both experts agree that organizations need to focus on putting these ideas into practice and building a supportive culture to manage AI risks well in today's fast-changing world.

ABOUT THE GUEST 

Mark Thomas is an internationally known governance, risk, and compliance expert specializing in information assurance, IT risk, IT strategy, service management, cybersecurity, and digital trust. Mark has a wide array of industry experience, including government, health care, finance and banking, manufacturing, and technology services. He has held roles spanning from CIO to IT consultant and is considered a thought leader in frameworks such as COBIT, DTEF, NIST, ITIL, and multiple ISO standards. Mark is a two-time recipient of the ISACA John Kuyers Award for best conference contributor/speaker, as well as a 2024 ISACA Hall of Fame inductee. He is also an APMG product knowledge assessor for the CGEIT, CRISC, and CDPSE certifications.


Punit Bhatia is one of the leading privacy experts. He works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organizational culture with high AI and privacy awareness, and with compliance as a business priority, by creating and implementing an AI and privacy strategy and policy.

Punit is the author of books “Be Ready for GDPR” which was rated as the best GDPR Book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts.

As a person, Punit is an avid thinker who believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named 'ABC for joy of life', which he passionately shares. Punit is based in Belgium, the heart of Europe.

For more information, please click here.

RESOURCES 

Listen to the top ranked EU GDPR based privacy podcast...

Stay connected with the views of leading data privacy professionals and business leaders in today's world on a broad range of topics like setting global privacy programs for private sector companies, role of Data Protection Officer (DPO), EU Representative role, Data Protection Impact Assessments (DPIA), Records of Processing Activity (ROPA), security of personal information, data security, personal security, privacy and security overlaps, prevention of personal data breaches, reporting a data breach, securing data transfers, privacy shield invalidation, new Standard Contractual Clauses (SCCs), guidelines from European Commission and other bodies like European Data Protection Board (EDPB), implementing regulations and laws (like EU General Data Protection Regulation or GDPR, California's Consumer Privacy Act or CCPA, Canada's Personal Information Protection and Electronic Documents Act or PIPEDA, China's Personal Information Protection Law or PIPL, India's Personal Data Protection Bill or PDPB), different types of solutions, even new laws and legal framework(s) to comply with a privacy law and much more.