Jul 5 / Punit Bhatia and Prabh Nair

What is the Role of Cybersecurity in AI Governance


Artificial Intelligence (AI) is rapidly transforming cybersecurity. While core security principles like confidentiality, integrity, and availability remain essential, the evolving landscape demands a new approach. This episode explores AI governance, a set of policies and practices designed to ensure the ethical, transparent, and accountable use of AI in cybersecurity.

Transcript of the Conversation

Punit  00:00
What is the role of cybersecurity in AI governance? Yes, even cybersecurity has evolved over the years, and now AI is coming up. What role does a cybersecurity professional have to play in AI governance? And if you're an organization looking to set up AI governance, how do you go about it? These are challenging questions, but they are relevant questions. And how do we answer them? How about answering them with somebody who has been at the forefront of cybersecurity for a long time, has seen cybersecurity evolve from information security to cyber, and is now working on how to integrate AI into it? I'm talking about none other than Prabh Nair, who is one of my dear friends. You may also have seen him in Coffee with Prabh on YouTube, and if you haven't, do take a listen to him there too, of course after this episode. For now, let's go and catch up with Prabh on what the role of cybersecurity is in AI governance. Hello, and welcome to the Fit4Privacy podcast with Punit Bhatia. This is the podcast for those who care about their privacy. Here your host Punit Bhatia has conversations with industry leaders about their perspectives, ideas and opinions relating to privacy, data protection and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.

Punit 01:47
So here we are with Prabh. Prabh Nair. Welcome to Fit4Privacy podcast.  

Prabh  01:52
Thank you, sir. Thank you for giving me this opportunity and for sharing the stage with someone whose content I have followed and who has inspired me on data privacy. Coming to this podcast is like a dream come true.

Punit 02:05
The same goes for cybersecurity. I've been following your content for so many years, and I thought it would be a privilege to grab some of Prabh's time. So you've been on this journey of what I call cybersecurity. But if we look back, maybe 25 years, it used to be physical security, facilities and everything. Then by the time you started your career, around 2005-2006, it became information security. And now we are using the term cybersecurity. So can you start by elaborating: how do you see the shift over the last 15-20 years, from the era when we were saying "head of information security" to now saying cybersecurity? What changed? How did it move? What's going on?

Prabh  02:55
Thank you, and thank you for this question. It's really an amazing question you ask, and it's a great way to start the session. I want to give you a practical example, sir. This is a smartphone; I think you can see that. Right? But if you had asked me 15 or 18 years back, not everyone had a smartphone; no one had a smartphone at that time. We used to have a landline, if you remember. And I don't know whether you remember or not, but in India, to make sure no one could call, we used to lock the keypad.

Punit  03:27  
I remember.  

Prabh  03:28
I'm going with a very basic example as a thought process. So information security was there in every house; physical security was there in every house, right? To make sure the bill did not go high, we used to lock that keypad so no one could dial. That key was like a password: you unlock the pad and then call the person, right? That is physical security, making sure no unauthorized person is able to access that phone and its keypad to call someone. Now, from there we moved to the pager, from the pager we moved to the mobile, and now we have the smartphone. And if I go to a temple, mosque or church, they used to have physical lockers; if I keep this phone in that physical locker, that is physical security, we get that. But now that physical security is part of one umbrella called information security, because this mobile is carrying information. And within this information, I have digital information, and that is protected with a four-digit PIN code, which is cybersecurity. So the summary is very simple: information security protects all kinds of assets, while cybersecurity protects digital assets. Physical security is also part of information security. So information security has now become an umbrella under which we have physical security, cybersecurity and process security.

Punit 04:55
Okay, that's interesting. So moving from physical to digital, from information to cyber: it's a nice evolution. But what has changed? Have the basic principles of security changed? Have the basic tenets of security remained the same? Because a few years ago, I was leading a program in online banking, and we were talking about validating what the person has and what they know, those kinds of questions. So we were saying: what do they have? They have a card. What do they know? They have a password. And then, okay, what else do they know? Let's ask for the customer ID. Those were the principles we were applying. Has that changed in this evolution, or are fundamentally the same principles being applied?

Prabh  05:41
No, actually, the principles always remain the same. We revolve around three principles when it comes to information security; as I said, when I say information security, it includes your physical security, cyber and everything. They are confidentiality, integrity and availability. Okay, let's take a live example, sir. When you're talking about trust, what are the criteria based on which we trust a person or a bank or anything? For example, I trust Punit because I'm sharing some information with him, and based on that trust I expect he will not share the information with others.

Punit  06:15
Exactly. It's a convenience factor.  

Prabh  06:17
Exactly. So here, what happened is I got to know from a third party that he has shared that information with others. So what do you think? Shall I share information with you again in the future?

Punit  06:25
No, no.  

Prabh  06:26
Right. Second: I asked Punit something. I said, I'm struggling with this, help me with this problem, do it like this, do it like that. And the information was wrong; I took a decision based on it, and it went wrong. Will I trust you again?

Prabh  06:38
Punit told me, "Prabh, I'm always available whenever required," but when I called upon him, he was not available. What do you think? Should I come back to you again? The same thing applies in information security, cybersecurity and physical security. Take the same telephone analogy: we locked the keypad with a key. That is confidentiality: make sure only those who have the key are able to call. Is it clear? Then, when I dial a number, it should connect to the right number; that is integrity. And when I'm trying to call, the line must be available, because we are paying the bill based on that; if it's not available, I will discontinue the phone. That's availability. So the CIA is always there. Today, the banking sector focuses on integrity, because for them accuracy of data is important. Healthcare focuses on confidentiality; your privacy vertical is focused on confidentiality; e-commerce focuses on availability. And yes, there is another important term which came out of these three, called authenticity.


Prabh  07:40
Which is basically a subsection of integrity, and today, you know, authenticity is the base of AI. So as things go through different stages, these three always remain the same. Under them you can now see multiple subsections: under confidentiality we introduce secrecy and privacy; under integrity we have authenticity and accuracy; and under availability we have redundancy and so on. As more and more things have evolved in this industry and in business, the CIA has been further divided into multiple things, but always remember we have the CIA only, no matter whether it is cybersecurity, physical security or information security; there is no change in that. Now, coming to the part about verification, that is access control, which is identification, authentication and authorization. These three are the elements through which confidentiality is achieved. Take Gmail, for example: when you open Gmail, it first asks for the username and then asks for the password, to make sure only that person can access the email records. Here confidentiality is the outcome, but the control used is access control, which is achieved with the help of identification, authentication and authorization. So we have two things here: one is the principle, which is called the CIA triad, and one is access control, which is called IAA. And if you notice, both are common across all three security verticals. I missed one important thing. You asked, "Prabh, is there any change from physical security to information security?" Definitely: as I said, initially we used to have security guards, police, cops, personal guards to protect the servers, to protect the house, the assets and all that. As things go digital, now we need to protect the process also: change management, patch management and so on.
That comes under information security, because information security overall is people, process and technology.
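The Gmail walk-through above, identification, then authentication, then authorization, with confidentiality as the outcome, can be sketched as a tiny access-control check. This is a minimal illustration in Python; the account, password and permission names are invented, and a real system would of course store hashed passwords, not plaintext:

```python
# Minimal sketch of the IAA chain (identification, authentication,
# authorization) that delivers the confidentiality leg of the CIA triad.
# All accounts, passwords and permissions below are hypothetical.

USERS = {
    "punit@example.com": {"password": "s3cret", "roles": {"inbox:read"}},
}

def access(identity: str, password: str, action: str) -> str:
    # 1. Identification: who are you claiming to be?
    account = USERS.get(identity)
    if account is None:
        return "denied: unknown identity"
    # 2. Authentication: prove the claim (here, a naive password check).
    if account["password"] != password:
        return "denied: authentication failed"
    # 3. Authorization: is this proven identity allowed this action?
    if action not in account["roles"]:
        return "denied: not authorized"
    return "granted"

print(access("punit@example.com", "s3cret", "inbox:read"))  # granted
print(access("punit@example.com", "wrong", "inbox:read"))   # denied: authentication failed
```

The point of the sketch is the ordering: each step only makes sense after the previous one succeeds, which is exactly why the two controls (CIA as the principle, IAA as the mechanism) show up together in every security vertical.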

Punit  07:46  
Yeah. Correct.  

Prabh  08:34
And then from there we have digital data, and for that we have cybersecurity. So the triad and the controls are always the same, no matter whether it's physical, logical or technical. Only the perspective changes.

Punit  09:39
Okay. I'd like to build on this. When you say people, process and technology, there is this new, say, rumor or myth or media buzz, let's put it like that, which says technology is going to evolve significantly in the next 10-20 years; what we are going to see is astronomical growth in that part. Now we are using terms like artificial intelligence, machine learning, robotics, whatever we call it. And they are saying that while all this used to be people, process and technology based, it will become more process and technology based and less people based. We can debate the people part, because we will know that in 10-20 years, but I'm more curious to explore the technology part. So if this AI happens as it's being predicted, or as it's being announced, what role does it have to play? And how do you see that happening, especially in the context of security? Because you explained what happened over the last 15-20 years, so let's go forward to the next 20 years.

Prabh  10:47
Very good question, actually. And I want to give another example; I believe in examples, as you have seen from my memes. Artificial intelligence is divided into two parts: one is called generative AI and one is called predictive AI. Correct? When you're talking about generative AI, generative AI is basically about creating code, content, materials and all that, and the best example of generative AI is ChatGPT.


Prabh  11:15
And if we take the example of predictive AI, predictive AI is all about giving recommendations and predictions by using AI and machine learning. One thing to notice here is that generative AI takes input from the human, and in return that input automatically becomes data for predictive AI as well, because today if I enter data into ChatGPT, it becomes part of their database. So next time someone searches the same query, they get that same information, because the system believes that what Punit entered here is what the next person looking for this information needs. So, coming back to the question: why was I talking about generative AI and predictive AI? Because in these two aspects, one common concern has been discovered in the last five or ten years, since AI has been there: it created toxic behavior, breaches of privacy, copyright and infringement issues. Since ChatGPT came, you saw a lot of people become authors on LinkedIn. You know, it would really take at least a week to build 3,000-4,000 words of such content, and now you can see they can build new content every day. Correct me if I'm wrong. So suppose Punit has spent five days on research and he builds some content, and he has shared this content on a website. Please understand this, okay?


Prabh  12:50
ChatGPT, okay, though I'm not singling out any one example.

Punit  12:53
ChatGPT or CoPilot or?  

Prabh  12:56
Anything. They have plugins, they have integrations, and they have your data.


Prabh  13:01
Next time, Prabh enters the query "importance of AI in privacy." The keywords match, the trigger matches, and therefore it fetches the content which is already there from Punit Bhatia. But now it has become my content.

Punit  13:14
Yup generated for you.  

Prabh  13:15  
So the one who spent the effort and spent the time is Punit, and the one who enjoys the content is Prabh. So here, what happened? Why is cybersecurity important in this? Now, two things are there: cybersecurity in AI, and AI in cybersecurity; both are different things. If I say cybersecurity in AI, it means how to protect the data and everything in AI: make sure it is able to generate the right data and does not disclose any sensitive information. If you remember, we saw the example where initially ChatGPT was generating Microsoft product keys. So what happened here? If you think from an AI expert's perspective, their goal is to produce more and more content, more and more accurately from the input, but in that process they make mistakes; they reveal some unnecessary information, which is unauthorized information for other parties. So the role of security here is to fine-tune the data: identify the source of data, verify the source of data, validate the source of data. That is the role of security in AI. And the goal is the same as ever: data has to be confidential, only regenerate what is relevant, make sure whatever is generated is accurate, and it must be available.


Prabh  14:39
So again we are back to the same thing; there is no change in security. You know why? Because security is not a function, it is a process; a process which we amend and adapt according to the needs of the business. So yes, recently a video was circulated on YouTube, I don't know whether you saw it, where a ChatGPT architect gave an interview on how the AI model behind ChatGPT worked. You can see there are a lot of data points being talked about in that area, and we learn from our mistakes. Right now people have treated AI as a race. AI is not a race; it is a journey, a beautiful journey, and we should respect that journey. It is not a one-time solution: on Monday we build something and by Friday the product will be ready? No. This is the mistake people make: they are in a hurry, they build a product, but they ignore the security controls, validations, privacy and everything. That is why security will play a very important role in AI. Now, coming to the second part of your question, which is all about how AI can be used in security.


Prabh  15:46
The opposite direction. So let's take an example, sir. Suppose Mr. Punit is browsing naukri.com, or any popular job portal.

Punit  15:59
No problem, same thing, okay.

Prabh  16:02
And he is using a company laptop. So this is a pattern we have discovered. Initially, a human would collect the data and create a rule book based on it. But now AI is there, which can process the data, fetch the information, and predict based on that. The thing that improves by using AI in cybersecurity is the fastest way to respond to attacks, the fastest way to detect attacks, the fastest way to correct attacks; that is the gain, but again, 100% accuracy is not possible, and people make this mistake. So that is the role of AI in cybersecurity. Two things, then: cybersecurity in AI and the role of AI in cybersecurity; both are different, not the same. And a recent example, Punit: there is VAPT, vulnerability assessment and penetration testing. If you hire a consultant penetration tester, they basically charge around $600 per day. Now we have automated pen testing nowadays; that concept is called BAS, breach and attack simulation. It's a new concept, actually. In that case, I don't need to hire a pen tester: I deploy the agent on the laptops in my enterprise, I enter the IPs, they have attack profiles, they use those attack profiles, and based on that we do the test. So when Punit thinks from this perspective as a business owner: instead of paying $600 per day to a pen tester, are you okay to invest $1,000 in something which gives you the detail every day, every hour? That's a good call, right?
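The cost argument can be put into numbers. This is only a back-of-the-envelope sketch using the figures from the conversation ($600/day pen tester, $1,000 BAS tool); the five-day engagement length is an assumption for illustration:

```python
# Rough comparison from the example above: a human pen tester billed per
# day for a one-off engagement vs. a breach-and-attack-simulation (BAS)
# agent at a flat cost that keeps testing continuously.
# All figures are the hypothetical numbers from the conversation.

pentester_day_rate = 600   # USD per day, as quoted in the example
engagement_days = 5        # assumed length of a one-off VAPT engagement
bas_cost = 1_000           # flat BAS tool cost from the example

pentest_total = pentester_day_rate * engagement_days
print(f"one-off pentest: ${pentest_total}, BAS: ${bas_cost}")
print("BAS is cheaper:", bas_cost < pentest_total)
```

Even under these toy numbers, the BAS option costs less than a single engagement while running every day, which is the trade-off Prabh is pointing at; the caveat that its output still needs human validation stands.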


Prabh  17:36  
So this is how it is changing things. But again, we cannot blindly trust the AI; we need some kind of process to validate its predictions and all that. So that is how it works.

Punit 17:45  
So essentially, what you're saying is that using AI in cybersecurity will make things much faster and much more reliable, but not without human oversight; with human oversight, as long as the human is in control and able to make those judgments. Because the word I'd like to introduce now is bias. AI by itself reads the data that you give it, and it doesn't have intelligence, even though we call it artificial intelligence. It has logical and rational thinking, let's put it like that, but it doesn't have the psychological and emotional interpretation capabilities to judge what it is seeing. A few years ago, you would have noticed, Amazon put in an AI tool to filter recruitment profiles. The tool looked at the past data, and the past data had fewer women and more men. So what it taught itself was: I need to hire men, not women, and only select men. A few months later, they realized it, and then they had to backtrack. The same thing happened with the UK government when the benefits system turned out to be biased. So what you're saying is that we need to put a human in control so that this bias can be addressed or managed proactively.
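The recruitment story is the classic pattern: a model trained naively on skewed history simply reproduces the skew. Here is a toy sketch with fabricated numbers (80 male and 20 female historical applicants, purely assumed) showing how a naive "learn the past hire rate" model replicates the bias:

```python
# Fabricated historical records: (group, hired_flag). The skew is built
# into the data: men were mostly hired, women mostly rejected.
history = ([("man", 1)] * 70 + [("man", 0)] * 10 +
           [("woman", 1)] * 5 + [("woman", 0)] * 15)

def learned_rate(records, group):
    # "Training" here is just memorizing the historical hire rate per group.
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

def naive_model(group):
    # Predict "hire" whenever the historical rate for the group exceeds 50%.
    return learned_rate(history, group) > 0.5

print(naive_model("man"))    # True  -> the historical skew is replicated
print(naive_model("woman"))  # False -> qualified candidates filtered out
```

Nothing in the model is malicious; it faithfully optimizes against biased data, which is exactly why the human-in-control review Punit describes has to happen outside the model.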

Prabh  19:00
And one more important thing we have on this topic. Sorry to interrupt.

Punit  19:03
No no.  

Prabh  19:04
There is one more important thing which people are missing. You know what? Transparency.


Prabh  19:11
If we are entering any kind of data into ChatGPT, we don't know the source of the data. At least, you know, Gemini gives you the source, Yahoo gives it, Microsoft Copilot gives the source. But when ChatGPT was introduced, they didn't talk about the source: what is the source of the data they are providing? We put our hard work into ChatGPT. Now there's a new profile called prompt engineer; in India you can become a prompt engineer for 3,000 rupees. The role of a prompt engineer is to enter the prompt; that is a new job. Now, I don't want to overstate this in our session, but a lot of new authors came after ChatGPT; you can see a lot of books also. But nowadays people are recognizing it: if you go to Amazon, in the book reviews you can see negative comments. They simply say, okay, it's the language of ChatGPT.


Prabh  19:59
So here, the other important thing after bias is transparency. Suppose Punit goes to a restaurant for some Indian food. Sir, tell me something: some restaurants have a transparent glass through which you can see how they are cooking the food, how they are making the food. Does that build the trust factor for you? It does, and it clears any doubt as well. That glass shows the transparency.


Prabh  20:27
Now, if they are making something in the back, you know, we don't know. So it is always a matter of doubt: is it safe or not?


Prabh  20:35
ChatGPT introduced that concern initially. What happened is that Google Bard, now Gemini, understood this as a lesson learned, and Copilot understood the lesson too, so they started giving source URL links. This is how we gain trust in AI. Similarly, if I'm using a cybersecurity solution which is AI-based and it is predicting some threat, I should know the source. So the second most important principle after bias is transparency: what is the transparency of the solution? How are they collecting data? What data are they collecting? How are they verifying the data? What are the data points? That is another important thing. And when it comes to privacy, it definitely becomes a threat as well, because if you're going to use privacy data with these applications, you should know the source of the data; otherwise, the government can sue you easily if you are using a solution which predicts health benefits and things like that.

Punit  21:27
So now, when you introduce the words transparency and privacy, aren't we getting into the question of how you introduce this human oversight or control over AI? In the corporate world, we call it: how do you govern this technology called AI? So how do you see it? For me, it's a matter of governance, just like any governance: having some principles, having a policy, and having some steering committee which oversees it. But how do you see it? And if you want to bring in any of the principles, what do you see as the principles?

Prabh  22:07
So let me first tell you the definition of AI governance, because different people have different dilemmas about AI governance. First of all, there are two things: governance and AI. Governance is a set of operations which includes the strategy, policies, procedures, and the right people. When that same set of rules is created to govern the process in AI, that is called AI governance. So what is the process? Make sure whatever we are processing is ethical. So we introduce a set of rules for ethics, a set of rules for transparency, and then we talk about accountability. And another important part: when we deploy any AI solution, it should be deployed and developed according to the predefined function of the business. That is the first part of AI governance. The second part is that whatever AI application you're using, you need to be very clear about the why. Like in data privacy, we also say that as long as your "why" about data collection is not clear, it is a breach of compliance. Correct me if I'm wrong, sir. So here, the goal is to harness the AI so it is able to produce the right data, for the right user, in the right manner. I don't know whether you have seen the movie Robot; it's a great one. Or let me take it another way: the Terminator movies, right? That is an example of AI.


Prabh  23:47
That is not fine-tuned. Suppose I purchase a robot, and they feed the data into the robot with some bias, say to sort the toys. Now what happens, same example: they have 60 percent of the data from people with certain characteristics. So if a person with those characteristics speaks to the robot, it replies, and if a person from another category tries to speak, it doesn't reply. So here a bias issue came up. So when you're working for a product company, or for a company in the service industry, and they're introducing AI, make sure they create a set of rules for these kinds of things. And when you create a framework for all these metrics, that is called AI governance. Right? So when you're talking about AI governance, we have five principles: first is transparency, second is privacy, third is accountability, fourth is security, and fifth is fairness. These are the five important principles we have around AI governance. So we have to see how you balance them and how you ensure these principles are maintained in the company.
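The five principles just listed can double as a simple review checklist. A minimal sketch only; the `governance_gaps` helper and the sample review answers are illustrative, not part of any formal framework:

```python
# The five AI-governance principles named in the conversation, used as a
# simple gap checklist for a single AI deployment review.
PRINCIPLES = ["transparency", "privacy", "accountability", "security", "fairness"]

def governance_gaps(review: dict) -> list:
    """Return the principles a deployment review has not yet satisfied."""
    return [p for p in PRINCIPLES if not review.get(p, False)]

# Hypothetical review of one AI deployment: three principles addressed,
# two still open.
review = {"transparency": True, "privacy": True, "security": True}
print(governance_gaps(review))  # ['accountability', 'fairness']
```

The balance Prabh mentions is exactly this: a deployment is not "governed" until the gap list is empty for all five principles, not just the convenient ones.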

Punit  24:53
Absolutely, I think I'm fully with you. Whether it's privacy, security, AI, risk management or even compliance, it's about identifying the principles you want to comply with. Then you take those principles, deploy a principle-based approach to create a policy, and then set up a set of rules, call it operations: how you will manage operations, at which level, and what you expect people to do. That's essentially what you need to do, and it's the same thing in AI. And when you do that correctly and effectively, you have the right results, or the bias is less; let's say less, because we won't have zero bias. But then comes an important question, because you are representing the security professionals, the CISOs and everybody, and I'm representing the CPOs and the DPOs, let's say, and these people have a question. I had my privacy governance, I had my security governance, and now this new animal or new thing is popping up called AI governance. Is it going to be separate? Do I have a role to play as CPO, DPO, CISO, or VP of information security? Or do I let them do it and have them come to me for security and privacy? How do you see that?

Prabh  26:12
One thing is certain: AI and ML, whatever is there, will be part of the CTO's remit; if the company has a CTO profile, he is the one who will be accountable. Let me tell you the hierarchy of a company. On paper, whether it's any institute certification or an ISACA certification, we say the CISO, like the CIO or CTO, should report as high as possible in the organization, just to maintain accountability and visibility. But in practice, if you look at these processes, the CEO doesn't want to take any stress; to be frank, they don't want to take the responsibility and accountability of any domain independently. That is why, if you notice, they make the CTO an independent role, the CIO an independent role, the CFO an independent role. The reason is very simple: each has a predefined process. I started with that statement because it also depends on the power you carry in the company; based on that power, your roles and structures are defined. Now coming back to your question: definitely the CIO and CTO will be the ones who handle this particular area, because it comes under the CTO's part; that is one thing. If there is politics, that's a different story, but on paper the CTO and CIO will take this. And as I can see it, in the future there will definitely be new profiles, called chief AI officer or chief ML officer and so on. Now, I can't reveal the details, but yes, I was recently part of a project where they asked me to build the AI governance framework for the organization. So here, it was divided into four parts, and I was working with the CTO, because he is the creator of that particular solution for the company. And what we did is quite interesting.
So the first thing we had is called a governance charter, in which we explained what kind of information and what kind of data we need to add to this process. The governance charter came first, so we were very clear. The company was in healthcare, and I cannot reveal details, but what they wanted was to build something where you feed in some basic information, it generates a result, and based on that they can do a preliminary basic screening. So the mission was to automate the task and do the initial level of screening. We got this visibility, we entered all this information, and we appointed one person who is called an AI ethics officer.

Punit  28:57
Okay.

Prabh  28:58
Okay. AI ethics officer, make sure AI should work in a thick manner. I think whenever within privacy biasness opposite of violence is your principal bias, we call fairness. Yeah, we call fairness. So we have a point the AI ethics officer, and he was very easy was one of the person who report to the CTO because on the paper, we have to make sure there should be one guy then we have organized all the information okay, this kind of thing, the user going to feed, so AI will be built around this area. Then the second thing what we did we basically reach out to stakeholders, okay, to understand their expectation. And that is basically called as a stakeholder analysis. We did and based on that what we did we also based on the defense stakeholder, we check the legal regulatory. Okay, so we check the PIP, we check the GDPR and all that because we're expecting some customer from a different location. And based on that, we complete the first stage which is called develop the strategy. Then we build a governance structure document for them, in which we talk about how the AI is going to work and all that this is a set of pointers and alum language and all that. Because what is the mistake people does is they just follow some LLM code and all that and intelligible their own API, it is very important, you need to have AI by design, privacy by design, security by design, which people missed. And the reason blender happened with Ola. In India, they have introduced this new chat, new AI. And it was mentioned powered by open AI. So you can imagine what kind of blunder is happen. So this is what we need to know. So here, we then we introduce an AI ethics policy, it's very important to introduce AI ethics policy, okay, how AI is going to work, then we have introduced also Data Governance Policy. Okay, and then we introduce our AI risk management framework. So this is how we introduce a governance structure for that. 
And one important thing is that when we introduced this AI risk management framework, we fine-tuned the criteria for likelihood and impact. Then we had an implementation plan, then training and awareness, and then finally we created a guideline. So this is how we built the entire AI governance framework for the company. Definitely, each and every step has a different role. The second part of your question is how we can integrate. Definitely, when I said phase one, the data input, what kind of data, during that time we took the CISO's advice: okay, we are going to process this kind of data with this set of pointers, make sure you know what kind of information we need to mask. Here we involved the DPO also. So, according to the GDPR and all that, are we processing the data in the right manner? Because we have to make sure we are processing the data in the right manner and not revealing too much information to any unauthorized person or processor, and we also introduced anonymization and masking. So here we involved the DPO, instead of having the DPO as a separate silo. So during the stakeholder analysis, we involved the DPO and the CISO. When we were creating the ethics policy, we involved the DPO and the CISO. And when we were creating the deployment guideline, we involved the CISO and all that. So here you can see that security and privacy run in parallel. Yeah, so that is how we drive things. Yeah.
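The likelihood-and-impact tuning Prabh describes can be sketched as a small risk-scoring matrix. The 1-to-5 scales, the band thresholds, and the treatment wording below are illustrative assumptions, not the framework the team actually used.

```python
# Illustrative sketch of an AI risk register scored by likelihood x impact,
# as described in the conversation. Scales and thresholds are assumptions.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_score(likelihood: str, impact: str) -> int:
    """Combine likelihood and impact ratings into a single score."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_band(score: int) -> str:
    """Map a score to a treatment band (thresholds are illustrative)."""
    if score >= 15:
        return "high: escalate to the steering committee"
    if score >= 8:
        return "medium: mitigate before deployment"
    return "low: accept and monitor"

# Example: biased screening output rated "possible" with "major" impact.
score = risk_score("possible", "major")  # 3 * 4 = 12
print(score, "->", risk_band(score))
```

In practice, each risk in the register would carry an owner and a review date alongside the score; the point here is only that likelihood and impact are tuned per organization rather than copied from a generic table.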

Punit  32:05
I fully agree with you in terms of the steps on how you move from not having AI governance to having AI governance; these are the steps to be followed. And sometimes what happens is people tend to confuse it with who will do it. Is it my role as a CISO? Is it my role as a DPO? Or is it the AI officer's role? And what do I have to do if all three are there? And that, I think, doesn't matter. It has to be done, and what you described very eloquently is the process. So don't get into the roles, CISO, DPO, AI officer; this is the work to be done, and who does it doesn't matter. In a very small company, it will be one person doing all three, probably. In a very large organization, there will definitely be three people, each with their own departments and own teams, and maybe some politics and fun also. And in a mid-size organization, maybe the DPO or CISO is asked to take up the responsibility, or AI sits with the technology officer and roles are combined. So that's basically how you design the organization, alongside how you design the process. People need to keep in mind that the design of the organization is a function of the budget, size, and scale they have. But the process would remain the same, and who does it is completely immaterial.

Prabh
But one more important thing, and I think this is the part that gets missed: the third part, which is how do you design the technology. We design the process, we design the organization, and third is how to design the technology. So when I talk about the stakeholder analysis, we create a committee there, and that committee is basically called a steering committee.
In that committee, we involve operations team representatives: for example, a data privacy engineer (or the DPO, if you don't have a data privacy engineer), a process developer, a security architect, a software designer. Then from senior management, if the CEO is not available, they appoint a CIO or CMO and all that. And we also have one profile, which is called the product lead. The product lead is the custodian of this entire solution that we build. And try to understand here: if you notice, AI is used everywhere as a product, not as a process. For example, recently there was an app where you just stand next to the phone, it captures your image and video, and based on AI technology it gives you an idea about your posture. So there's a company building that as a product. If I'm running a training company, I don't want to build AI into my process; I would rather buy AI as a solution. So whatever AI trend is coming, it basically reaches us as a product, not as a process. What we introduce is a process for how to use and fine-tune the AI: okay, we have this AI in my app, we are purchasing AI in my organization; before deployment, how is this going to work? What is my expectation from the vendor who provides me the API? This is what AI governance is all about. This is basically my expectation; you tell me, will you meet it or not? So this becomes a preventive control: does your AI meet my transparency principle, fairness principle, security principle, privacy principle? If you are purchasing an AI which asks you to enter your health report and all that, it ultimately breaches my first principle, which is the privacy principle.
So AI governance was not introduced as a product; it was introduced as a process that can be used to build a product for the customer, or, when we buy any AI solution, to decide how to use it. It was introduced from that perspective. That's why a lot of governments have taken the initiative: the Singapore government has introduced an AI framework, and it's a very good framework; the Brazilian government has introduced a framework. So we now have many countries which have introduced these frameworks.

Punit
Indeed. I have read through India's guidelines on AI also, and the EU AI Act, of course, but all of them are talking about the same thing: make sure the product or the service you build or deploy, it doesn't matter which, conforms to certain principles and, in the end, will not create any harm to society or humankind. And in doing so, you as a company demonstrate accountability. And how do you do it? By incorporating the principles of transparency, fairness, ethics, and responsibility. We can use different words, and there'll be multiple...

Prabh  36:56
No, no, we can have just these five principles, and based on these five principles, you can prepare a checklist.

Prabh  37:02
What is the expectation, and based on that, you can assess the vendor.

Punit  37:06
Yeah. And then the next step, while you have the principles, is building in and bringing in diverse perspectives. And that is where the CISO, DPO, CRO, compliance, and legal have a role to play, because everyone comes with their own mindset. If I put in the CTO or the CIO, they are always coming from a technology mindset. If I put in the CFO, they are coming from a financial mindset. But if I put in these people also, then there are diverse perspectives, and then you have what we call creative tension and balanced decision-making.  True, true.

Prabh  37:37
And actually, this is what is missing; this is not there in companies. Now, Punit, when someone told us, "you guys should use ChatGPT," we started using ChatGPT without considering the business requirement. So that is why we need governance. Yeah.

Punit
Even worse, some techie guy gets to know there's a fancy technology, ChatGPT or Copilot, and he or she deploys it, because that's his domain to decide what to deploy, and starts using it, and some of the other colleagues don't even know. And when introducing the governance aspect, I had an example: a company said it's not IT, it's not business, it's separate, it's data governance, and they put AI, privacy, data quality, all those governance topics in that. So, simply calling it data governance, there are different ways of reaching the same objective, and there's no simple answer, let's put it like that. But it is going to be the next set of questions and challenges people are going to face, especially as AI legislation increases.

Prabh
Yeah, so I can give an example here. Recently we were doing the assessment of one of the products, because we had built the governance there, and we assess any product based on that framework. So let's take a very basic example: transparency. Transparency is one of the principles in our AI governance. And this was an app, an AI app, in which you feed an email ID and name and it automatically creates a report for you. We asked a very simple question: in order to meet the transparency criteria, can you share the system design, your development process, and your operating methodology? That was the simple question we asked, and they did not have an answer, because, as we later found out, they themselves used an integrated API, some chat base from another source.
The second question we asked them was to explain the decision-making process of this AI. If you remember, in a past conversation you also told me about system one and system two in AI: system one basically pulls the data, and system two basically triggers on the data. So we were not only looking at the pulling but also at the triggering: what kind of output is the AI going to produce? We asked that very simple question, and thanks to you, because of you I asked that question; I did not have that visibility before, so kudos and credit to you. So our second question was basically: tell me about the decision support system. These two questions give me the idea of how it is going to work. So if you have the transparency, you get the visibility of privacy. That is why I always say transparency is the base for privacy, fairness, security, and functionality.

Prabh  40:24
So, that also gives you the idea of everything. Then, second thing, I also asked them: what are the recent audit reports? Give me the audit report of your AI decision system and all that. By that, we get visibility into their fairness requirement: how fair the decisions are going to be, and what kind of corrective actions they will take. These things are missing in companies; sir, we are not following this kind of checklist. We just have some shallow checklist, and based on that we do the assessment. But we should have this kind of checklist, by which we are able to assess the vendor.
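The principle-based vendor checklist Prabh argues for could be sketched as a simple structure. The five principles follow the conversation; the specific questions and the all-or-nothing pass rule are illustrative assumptions, not the checklist his team used.

```python
# Sketch of a principle-based AI vendor assessment checklist.
# Principles follow the conversation; questions and the pass rule
# are illustrative assumptions.

CHECKLIST = {
    "transparency": [
        "Can the vendor share the system design and development process?",
        "Can the vendor explain the AI's decision-making process?",
    ],
    "privacy": ["Is personal data masked or anonymized before processing?"],
    "fairness": ["Are recent audit reports on bias and decisions available?"],
    "security": ["Is the API and integration assessed for security by design?"],
    "accountability": ["Is there a named owner, such as an AI ethics officer?"],
}

def assess(answers: dict) -> dict:
    """Return per-principle results; a principle passes only if every
    question under it is answered 'yes' (True)."""
    return {
        principle: all(answers.get(q, False) for q in questions)
        for principle, questions in CHECKLIST.items()
    }

# Example: a vendor that cannot explain its decision-making process
# fails the transparency principle, as in the app Prabh assessed.
answers = {q: True for qs in CHECKLIST.values() for q in qs}
answers["Can the vendor explain the AI's decision-making process?"] = False
result = assess(answers)
print(result["transparency"])  # False
```

The design choice here mirrors the point made above: the checklist acts as a preventive control, so an unanswered question defaults to a failure rather than a pass.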

Punit  40:55
Absolutely. So now, in the interest of time, if I rewind back: we started with the journey from information security to cybersecurity, how AI will play a part, and how AI needs to be governed in a principle-based manner and through process. And I would say it has been a fascinating conversation across different spectrums. But in conclusion, I think we can say: whatever your role, CISO, DPO, compliance officer, legal officer, AI is here, and you need to embrace it. And whether it's part of your role or a different role, it will still be part of your role, because you'll have a role to play in it, as we have been saying all along.

Prabh  41:35
I agree. And one important thing is to follow the principles of privacy by design and security by design, and do your assessment at the design stage instead of waiting for the development. And transparency should be your base to verify that it meets all the criteria.

Punit  41:52
Absolutely. And with that, I would say thank you so much, Prabh, for your time. It was wonderful to have you. And for the listeners, in case they don't know, there's also something called Coffee with Prabh, which offers very useful insights on YouTube, and they might like to subscribe to that.

Prabh  42:10
Thank you, sir. Thank you for recommending my YouTube channel. You're doing great, and we will have one more podcast.

Punit  42:17
Absolutely many more.  

Prabh  42:19  
Thank you, sir. Thank you so much.  

Punit  42:20
Thank you.  

FIT4Privacy  42:22
Thanks for listening. If you liked the show, feel free to share it with a friend and write a review. If you have already done so, thank you so much. And if you did not like the show, don't bother and forget about it. Take care and stay safe. Fit4Privacy helps you to create a culture of privacy and manage risks by creating, defining, and implementing a privacy strategy that includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPP/E, CIPM, and CIPT through on-demand courses that help you prepare and practice for the certification exam. Want to know more? Visit www.fit4privacy.com. That's www.fit, the number 4, privacy.com. If you have questions or suggestions, drop an email at hello(at)fit4privacy.com.


Punit Bhatia and Prabh Nair underscore the indispensable role of AI governance in the modern cybersecurity landscape. They emphasize that while AI can significantly enhance the speed and accuracy of threat detection and response, human oversight is essential to manage biases and ensure transparency. AI governance must be built on foundational principles of transparency, privacy, accountability, security, and fairness.

Punit and Prabh stress the importance of involving diverse stakeholders in the AI governance process to ensure balanced decision-making and accountability. They advocate for a process-focused approach rather than fixating on specific roles, highlighting the need for principles like privacy by design and security by design to be integrated at the earliest stages of AI system development. As AI becomes an increasingly integral part of operations, embracing AI governance with a focus on transparency is essential for responsible and secure use.


Prabh Nair embarked on his professional journey as a trainer and consultant in the Information Security field back in 2006, when cybersecurity was still in its infancy. Witnessing the transformative impact of the cloud and digital revolution, he recognized the growing importance of cybersecurity and dedicated himself to the sector. Over the past 17 years, Prabh has become a stalwart in the industry, founding InfosecTrain, a cybersecurity training organization, and imparting his expertise to countless learners. His passion for education remains unwavering, driving him to continually expand his knowledge and help others achieve their career goals.

With expertise spanning Information Security, Cybersecurity Vulnerability Assessment & Penetration Testing, Application Security, and more, Prabh has served over 250 organizations across 25+ countries. His commitment to knowledge sharing and dedication to excellence make him a respected educator and entrepreneur in the field.

Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organizational culture with high AI and privacy awareness, making compliance a business priority by creating and implementing an AI and privacy strategy and policy.

Punit is the author of books “Be Ready for GDPR” which was rated as the best GDPR Book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts.

As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named 'ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe.

For more information, please click here.


Listen to the top ranked EU GDPR based privacy podcast...

Stay connected with the views of leading data privacy professionals and business leaders in today's world on a broad range of topics like setting global privacy programs for private sector companies, role of Data Protection Officer (DPO), EU Representative role, Data Protection Impact Assessments (DPIA), Records of Processing Activity (ROPA), security of personal information, data security, personal security, privacy and security overlaps, prevention of personal data breaches, reporting a data breach, securing data transfers, privacy shield invalidation, new Standard Contractual Clauses (SCCs), guidelines from European Commission and other bodies like European Data Protection Board (EDPB), implementing regulations and laws (like EU General Data Protection Regulation or GDPR, California's Consumer Privacy Act or CCPA, Canada's Personal Information Protection and Electronic Documents Act or PIPEDA, China's Personal Information Protection Law or PIPL, India's Personal Data Protection Bill or PDPB), different types of solutions, even new laws and legal framework(s) to comply with a privacy law and much more.