Building Secure AI Systems
Just like any technology, AI can be misused. Hackers might try to fool AI systems by feeding them bad data, or cause them to make harmful decisions. In some cases, people might use AI in ways that weren’t intended, creating problems for privacy, safety, or fairness. These aren’t just science fiction scenarios—they’re happening now. And if we don’t take steps to protect AI systems, the risks could grow along with the technology. Building secure AI systems means making sure they work the way they’re supposed to, even when someone tries to break or trick them. It also means protecting the data they use and making sure they treat people fairly. This isn’t always easy, but it’s incredibly important. AI should help people—not hurt them. And that starts with making sure these systems are safe, secure, and trustworthy from the ground up.
Transcript of the Conversation
Punit: 00:00
Building secure AI systems. That's a challenge all of us have, and that's something we all aspire to do. Building secure AI systems. But how to go about it? Do we start at the time of strategy? Do we start at the time of design? or maybe the implementation team will take care. Definitely not that the implementation team will take care, and you have to start from the scratch and think of responsible AI. Look at what are your vision, mission objectives, and how to set up the right foundations in terms of privacy, security, and data. That's usually how it's done, but how is it being done in different companies and what are the different steps? So, let's go and talk about these with Santosh Kaveti who's the CEO of Pro Arch and as we go and talk to Santosh, I would also like you to go to Grow Skills Store and take advantage of the ISO certification and trainings, which are available for you.
Fit4Privacy 01:04
Hello and welcome to the Fit4Privacy Podcast with Punit Bhatia. This is the podcast for those who care about their privacy. Here your host Punit Bhatia has conversations with industry leaders about their perspectives, ideas, and opinions relating to privacy, data protection, and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.
Punit 01:32
Hello and welcome to another episode of the Fit4Privacy podcast, and we have Santosh Kaveti with us. So welcome Santosh.
Santosh 01:39
Thank you, Punit. Thanks for having me.
Punit 01:41
It's a pleasure to have you. And these days when we talk about AI, people talk of digital trust. People talk of ethical AI. Responsible AI. Amongst all these things, the common theme is you got to be responsible and you got to build AI responsibly. But what does this responsible AI mean for you?
Santosh 02:01
A great question Punit. The responsible AI is actually now developed as a framework. I think it's getting more formalized. It's evolving as well. As more criteria is being introduced into the framework of what's called a responsible AI. But an easy way to explain that is, look, at the end of the day, we have to understand that the criteria to trust AI is very different than or evaluate AI. QA AI is very different than your traditional apps or products. There are about maybe 12 to 15 criteria and evolving as part of that framework. For example, bias. How do you assess bias? How do you prevent bias? Fairness. There are several ethical considerations. There are tech technology considerations as well. The models performance itself, reliability models, ability to drift models, ability to contextualize accuracy because there is no good or bad answer when it comes to AI. It then how do you quantify that? How do you actually say, Hey, this is a good answer in this context. So accuracy there is highly contextualized to that particular use case. So, these are several examples and there are security considerations as well. And together they form responsible AI framework.
Punit 3:34
Okay.
Santosh 3:35
That's an easy way for me to explain, and it's evolving, as I said.
Punit 03:36
It is indeed evolving and it's about the as you said, demonstrating a responsibility at each and every step, owning it up and doing all the right things. And we will talk about some of these right things along in this conversation as well. So, let's maybe start with, so it's somebody, some organization wants to start the journey on AI. Let's take that use case and expand further. So let's say somebody is looking to start AI on their journey. They of course have to make a choice being build or use whatever they want to do. But the whole thing is in that when we talk responsible AI there's the setup, there's something they have to do. Let's talk about something which is both the approach as well as the strategy perspective. So, walk us through an organization when you and your team join and helping them start the AI journey. What is the strategy phase or approach phase?
Santosh 04:36
Excellent question Punit. So, there is actually, now, let's start with the use and then we'll go to the bid. Okay. I'll give you specific examples that we're actually actively working on right now. We're helping several utilities with their AI journey. Okay. It starts with, Hey, I would to roll out an AI policy. I would like to roll out my copilot, or I would like to roll out maybe something like Enterprise ChatGPT. How do I go about doing that? That's where the conversation typically starts.
Punit 05:09
Yeah.
Santosh 05:10
And what's interesting is when we go through the journey with our customers, they will quickly learn and we will learn that data and safeguarding data security, and privacy. Is a prerequisite to AI. You cannot go on AI journey in any shape or fashion without ensuring that you have your basic data security and privacy risks understood and mitigated. What do I mean by that? I'll give you one specific example. It was very easy for us to roll out a co-pilot just as an experiment. And showcase to many of our customers that if you ask a clever prompt, it would actually disclose sensitive information.
Punit 05:54
Yep.
Santosh 05:55
To someone who's not, is not authorized to have that information. So access controls all of a sudden became super important. What is interesting to us is data privacy. Data security was an IT conversation. Before AI, and it's now a business conversation with business owners. And that's a huge shift that we've seen in the conversation shifting from becoming IT centric to business centric. Cause business now wants to leverage AI either for efficiencies or for opportunities. But then for them to do that, they first need to understand and really mitigate their data risks. Okay? So that's how it starts. Now once we look at the data, look at establish the ownership, go through the basic hygiene of data governance and have some controls in place as basic as maybe access controls, then we take them through the AI journey. It starts with really education. AI literacy is important without organization, understanding what AI means to them, where they really wanna apply, how they want to apply, it's very difficult for them to go on this journey. So AI literacy is an a number one prerequisite for us there. And it goes from top down, buyin, and all the way to someone who's on the floor, for example. In a utility. I'll give you one example. They think that, for example, one of our customers thought, Hey, I'm rolling out copilot and therefore this is my starting point of AI. Well no, because what happens is inevitably the utility uses 10 to 15 different applications.
Punit 07:41
Yep.
Santosh 07:42
And they're technology driven these days. All of these applications are now AI enabled and they all are using AI in some shape or fashion, so they don't realize that they already have AI risks, whether they know it or not. And the starting point is not Copilot starting point is not AI policy. By virtue of you using your apps today. Most of them are now AI powered. You already have risks. Do you understand those risks? Do you understand what questions you need to ask to your technology partners and vendors to make sure that you know you are protected through the AI exposure that's coming in from all of your third-party apps. That was a revelation to, many of our customers, “hey! Okay so we are already exposed” our starting point is not just copilot.. Then we help them figure out and address those concerns. Now they start on, with the Copilot journey that becomes their corporate AI or productivity AI, we try to first go with a, how can we enable individual role. Every role is now AI powered. I am now A CEO AI power. I'm sure you are too. Then we look at the processes and workflows to see how we can apply AI to data meaningful ROI, we have to measure the result and outcome. And then we look at the data that they're accumulating across all of their systems to see, what kind of data governance and quality and security that data has. Now we are talking about modeling on top of that. Analytics you know everything that we typically do. If you really wanna build now on top of your data, you're thinking about new product and new service, we help them build the applications as well. And we also have a framework called AI examine, which goes through the responsible AI frameworks with guardrails to actually say, okay, this is where you are in terms of how your AI is sharing. Against the responsible AI framework. Yeah. So we can try to quantify all of that. That's typically the journey.
Punit 09:57
Sure. So you're saying strategy is not just about getting into the boardroom and writing a policy or an approach note or governance note and say that's your strategy starts with literacy saying, understanding what is AI, what are your risks? Because you are already exposed to risk. Because that's what typically I say, people say AI will come. I say, no, AI is here. AI is now. And like you said, I help them understand how AI is being used by them without knowing, without even understanding, and then listing out those risks. Some you are already exposed to, some you will be exposed, understanding what, where you want to go, what will be your approach, and then making a plan for literacy for rest of the organization. Creating a policy, creating the setup. And then that's the strategy phase, that's the initial phase. And it's not just saying, making a nice presentation or nice policy. So, if that's done, and you also emphasize the aspect of in this incorporated, privacy by design, security by design, and these, the responsible AI by design or sometimes we call trust by design. So if we do that, all these things, and you mentioned you have a framework for AI, just like we have or and most consultancy firms have. The question comes, you have also some standards here, which are available, and some laws here. So laws, when we talk about the EU AI Act usually standards is the ISO 42001 or there are others? And of course, each one of us bring in a framework. So it becomes challenging from a company because now they have the strategy and they want to move into implementation. It's in typical life cycle. It's more of the design phase or more of the. A phase in which you say, how do we implement this policy? Or how do we get everything going? So what is the role of standards and what should be the approach an organization should take so that, because again, the same thing. They want to be responsible, they want to be compliant, they want to be secure, and all that.
Santosh 11:53
Excellent question. Compliance I know is a big deal for many of our customers, especially customers who are in critical infrastructure healthcare. They're compliance driven, heavily regulated. The fines are huge. If I take utilities, for example, generation transmission distribution, they're subject to, sometimes they're subject to CMMC regulations, and of course, AI regulations are evolving and changing. There are several executive orders now already in place in US. In addition to what you mentioned, the approach that we recommend is, let's not start from compliance and then work backwards. Okay? Let's first make sure that you have good hygiene, good practice, good execution plan, okay? That takes care of, about 60, 70% of the work. So compliance should be an outcome of really good security privacy hygiene. That's the approach that we typically recommend and we advise and we take, and of course then, you know, once you have that in place, it is about using, you know help from professionals such as you know, and to come in and map you know, where we are, map all the controls, all the processes, all documentation to any standards. And let's remember that this is not a one and done. And we've seen that time and again too, that someone comes in and does a compliance check, okay, you are, you're now certified and then nothing happens for two years until it is time to get re-certified. And then you open that conversation out. I think that's not the right way to do this. If you evolve your execution plan and your security, privacy and risk become your day in, day out operational procedures and part of your operational SOP is of working, it's so much easier to turn this into an opportunity. I always say this to our customers. Risk or mitigating the risk is not an obstacle, not a cost. In fact, we could become better because of risk and turn that into an opportunity. Risk could be a huge advantage, and the way we mitigate could turn that into a huge opportunity. So that's the approach that we've taken. And yes, this is not easyn not gonna go simplify and say you know privacy by design, security by design, or even, zero trust by design concepts. They have a training. You touched on this earlier as well. Training is super important, and having that level of education awareness in the organization at all levels requires constant ongoing effort. To make sure you're on top of things.
Punit 14:51
I'm fully with you. I think you have a strategy. You set the right foundations, and I like that term compliance is an outcome because if you set the right hygiene and you do a compliance check, you would be compliant not other way around. And compliance is an opportunity or risk is an opportunity to identify what can go wrong and prepare for it. That's what we want to do. But there's another element which I like to zoom in or emphasize more on the data management or data foundation or data governance, whichever way we call it. 'cause end of the day, the foundation of your AI is going to be based on data. And if. The data foundation is not that strong, then you're going to face the heat at a certain point. So, let's share something of the good practices that you're helping your customers with data foundations as they start the AI journey.
Santosh 15:43
Absolutely. One of our customers is in the healthcare space and the deal with a lot of Medicaid Medicare processes. They wanted to go on AI journey. And again, it started with, Hey let's look at your data. And we really helped them through the entire data governance practice, what we liked about this journey. In this particular cases, the awareness and the literacy that the team had. On. Okay, this is important. This is, we cannot take this granted, okay, data governance is the prerequisite. Even on the data side. That level of literacy is super important and you'll be amazed. I know we talked about AI literacy will be amazed how poor data literacy is. The good news is there are really good frameworks available, right? Simple frameworks available. To actually establish operationalize data governance is what I would call it.
Punit 16:49
Yeah.
Santosh 16:50
And again, I would start with literacy because once we introduced the concepts to them and said, look, in order for us to really have the data governance established and we walk them a new. There were a lot of new terms right. What you're talking about, data owners. Okay. You're talking about data lineage, data classification, data labeling, and metadata and actually being auditability and explainability. Explainability comes in the AI phase at auditability. The literacy of these things is super important. If they don't understand the importance of establishing these roles, establishing the ownership and accountability and being able to track this through their systems it's not easy to go through this. But that's exactly what I've done. We, what we've done is end of the day we want improve your data quality, and data usability, and security. We can later on map this to your compliance, understanding your risks, whether your business risks are with your compliance risks. We can map them. That's the objective, but it starts with, really good. Again, as I said, there are lots of frameworks available. They're not complicated. If you, if your literacy is good, it depends on your literacy maturity. If your literacy is poor and there is a lot of resistance, then you take a different approach. In this particular case, this customer's literacy was, I would say literacy was not great, but their willingness to do this, the intent was strong to make this happen. It really helped us to educate them on what it takes to establish a good data governance. What's interesting is there's a myth that when we use the terms data governance, oh my God, it's a lot of overhead. It's a lot of cost. It's not all, this is really whatever you are supposed to do to start with. It's just operationalizing it the right way, yeah. Changing your perspective, taking a step back and knowing that, oh, I own that data, which means I need to ask these questions. I need to be aware of these things. And that's what it takes. That's the approach that we've taken with this and we had a huge success, and now they're ready to go on their AI journey.
Punit 19:18
Yep. I think we touched upon defining the right strategy, doing the necessary on literacy, identifying the risks, and eventually setting up the foundation in terms of data governance, risk governance, and overall AI governance. And then. As it's about execution and then it becomes technical, and then it's more like any other software development or system management thing. But one of the things we talk about is talent management or talent scarcity or talent issues. Because what happens is an organization, while they do this because of the literacy issues and otherwise, also it's hard to find the talent. So how do you address the talent challenge?
Santosh 19:17
Wow. That is such a ongoing, big, ongoing issue even for us, internally pro. Yeah. I'll actually use as an example. Okay. We go through, we have lots of security controls. We have a good security team. Our CISO and our IT department, they do a great job in terms of trying to protect us in technology perspectives. Still there are lots of loopholes, so we go through constant education, okay. That is the foundation. There, there is a level of education that every role requires these days to be digitally secure to earn that digital trust. Okay? Any simple example is. One of our experts was AI experts, was recently talking about a, an attack new type of attack vector called cross prompting injection attack. Yeah. Where, an attacker would inject a prompt into a document and tricking LLM to think that, it received a human prompt through a document that they. Okay. But you, if you don't know, you know that risk and you are not educated and you don't know how to prevent that is a problem. I'll take simple example too. Phishing scams. Yeah. Or, everyone now is using AI to become super smart. They're highly contextualized, to the extent we feel like, okay, just an hour ago I had this conversation with my colleague and therefore I got this email. So I know this is email. Just, this is not some spam. I know we just had this conversation, this particular topic and here that's highly contextualized phishing scams. So there is a rigor, there is a repetition, and there is improvement of this training modules. And there are really good tools available, open source or paid that you have to go through to keep your teams up to date. Now, as a security professional, a whole different story on how you really, keep yourself updated, but as a business user, there is a level of literacy that you have to keep up with day in, day out. That's the only way that we'll win this game is we outsmart the hackers. And remember there is also a myth that the hackers are technically super smart. And yeah, there are definitely some, I'm not gonna dispute that, but most of them are not. They're simply outsmarting you in leveraging technology and tools in a smart way. They're just like you on the other side, they're highly motivated to do to get in through tools and technology and tricking you. You, we just need to outsmart them through education, learning and the constant awareness the flags will have always have to be up and it's getting worse. Quite frankly
Punit 22:52
No, it is definitely getting worse because AI is a superpower. And that superpower they also have now. Absolutely. That's where it creates the issues or create the challenges. So if that's the journey of AI and the, you help them bridge the AI talent gap also, so what does Pro Arch and you do for a customer? Is that what we talked about? Is there something else? So how can help from this?
Santosh 23:18
We touched on a lot aspects. We do traditional, IT services and transformational services, but in the last two years. I would say a lot of what we do has been about how to operationalize data and AI securely. And we are fortunate to have both great security teams, data teams, and AI teams, and they're working together constantly to figure out how to actually do operationalization effectively. And that's what we do across the board. We're committed to operationalizing your data and AI investment.
Punit 23:55
fSure, it's all about building secure AI as we talked about all through right? From this time that you're not clear on what to do, what should be a approach, what should be a strategy?
Santosh 24:06
AI magnifies, good AI magnifies bad and exposure is worse. In terms of what could happen if something goes wrong.
Punit 24:16
Indeed, it's absolutely true, and it's the responsibility of the management and the board. To start on the right level and right way. And that's where keeping compliance, keeping responsible AI, keeping ethics all in place, and also what are your business goals and aligning with those business goals. So that's essentially the cycle of AI. And with that I think we covered everything. No? We didn't touch on probably the alignment with business goals, but when you talked about the policy. That's when it comes in. So it's been a wonderful conversation. I think we touched upon all aspects of building AI, securing AI systems. So thank you so much for your time and if people want to get in touch with you, what would be the best way?
Santosh 25:07
I would say LinkedIn. I'm pretty active on, on LinkedIn. Our website, www.proarch.com. Or even through our LinkedIn page. LinkedIn would be the best source, I would say.
Punit 25:18
Sure. So, thank you so much, Santos. It was wonderful to have you and wish you all the success in helping build secure AI.
Santosh 25:26
Thank you so much, Punit. Thanks for having me. I enjoyed the conversation.
Punit 25:28
Same here.
Fit4Privacy 25:29
Thanks for listening. If you liked the show, feel free to share it with a friend and write a review if you have already done so. Thank you so much. And if you did not like the show, don't bother and forget about it. Take care and stay safe. Fit4privacy helps you to create a culture of privacy and manage risks by creating, defining and implementing a privacy strategy that includes delivering scenario based training for your staff. We also help those who are looking to get certified in CIPPE, CIPM and CIPT through on demand courses that help you prepare and practice for certification exam. If you want to know more, visit www.fit4privacy.com. If you have questions or suggestions, drop an email at hello@fit4privacy.com.
Conclusion
That’s why building secure AI systems matters so much. It’s about more than just fixing bugs or adding passwords. It’s about designing AI with safety in mind from the very beginning. It means testing systems carefully, keeping them updated, and thinking ahead to what could go wrong. It also means working together—researchers, companies, governments, and everyday users—to build AI that we can all trust.
In the end, a secure AI system is a strong AI system. It’s one that works the way we want it to, even in the face of challenges. By making security a top priority, we’re not just protecting machines—we’re protecting people. And that’s the kind of future we should all want to build.
ABOUT THE GUEST

Santosh Kaveti is CEO & Founder at Proarch. With over 18 years of experience as a technologist, entrepreneur, investor, and advisor, Santosh Kaveti is the CEO and Founder of ProArch, a purpose-driven enterprise that accelerates value and increases resilience for its clients with consulting and technology services, enabled by cloud, guided by data, fueled by apps, and secured by design. Santosh’s vision and leadership have propelled ProArch to become a dominant force in key industry verticals, such as Energy, Healthcare & Lifesciences, and Manufacturing, where he leverages his expertise in manufacturing process improvement, mentoring, and consulting.
- Operationalizing AI: From Strategy to Execution
- Navigating AI Risks: Ensuring Security and Compliance
- Prioritizing AI Initiatives: Aligning with Business Goals
- Attracting and Retaining Top AI Talent
- Integrating AI into Core Business Functions
- The Data Foundation: Governance, Quality, and Culture in AI
Santosh's journey is marked by resilience, ambition, and self-awareness, as he has learned from his successes and failures, and continuously evolved his skills and perspective. He has traveled across 23 countries, gaining insights into the global diversity and interconnectedness of human experiences. He is passionate about blending technology with a human-centric approach and making a meaningful societal impact through his support for initiatives that uplift underprivileged children, assist disadvantaged families, and promote social awareness.
Santosh's ethos extends to his investments in and mentorship of promising startups, as well as his role as the Chairman of the Board at Enhops and iV4, two ProArch companies.

Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high AI & privacy awareness and compliance as a business priority by creating and implementing a AI & privacy strategy and policy.
Punit is the author of books “Be Ready for GDPR” which was rated as the best GDPR Book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts.
As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one’s value to have joy in life. He has developed the philosophy named ‘ABC for joy of life’ which passionately shares. Punit is based out of Belgium, the heart of Europe.
For more information, please click here.
RESOURCES
Listen to the top ranked EU GDPR based privacy podcast...
EK Advisory BV
VAT BE0736566431
Proudly based in EU
Contact
-
Dinant, Belgium
-
hello(at)fit4privacy.com