Punit 00:00
AI agents and digital trust do this go hand in hand, seems challenging, right? Because when we talk about AI agents, it's all about working in a context and almost a trained bot, which can do a lot of things for you. Then we are talking about digital trust in a bot or in an agent. Not an easy stuff, right? But what are these AI agents? What kind of challenges do these AI agents create? And what can companies do when they are using, utilizing building, creating these AI agents? And what do we as individuals do when we interact with ai? Well, all these are very interesting, very fascinating, and very challenging questions. And to answer this, we need somebody who has done technology transformation, somebody who has been in the big four, somebody who is independent and has a view on AI technology and transformation. And we have exactly that kind of person Rani Kumar Rajah, who has worked in worked on transformation projects and is going to have a conversation with Ross. On AI agents and digital trust. So, let's go and have a conversation with Rani.
Fit4Privacy 01:20
Hello and welcome to the Fit4Privacy Podcast with Punit Batia. This is the podcast for those who care about their privacy. Here, your host, Punit Batia has conversations with industry leaders about their perspectives, ideas, and opinions relating to privacy, data protection, and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.
Punit 01:48
So here we are with Rani Kumar Rajah. Rani, Welcome to Fit4Privacy podcast.
Rani 01:53
Thanks. Thanks for having me, Punit. Glad to be here.
Punit 01:56
It's a pleasure to have you. And, uh, let me start with the very basic question. We are in the digital world, and we are talking about security, privacy, so many things. And then these days we are talking about something called digital trust.
Punit 02:11
How would you define this concept of, uh, digital trust?
Rani 02:17
Well, trust is confidence, right? So digital trust is trust in technology. So, when it comes to ai, specifically, we are in the AI world. Now, trust in AI is, whether AI models are being accurate, whether they're ethical, are they explainable,are they being transparent with their decisions? So digital trust and AI has got a new meaning and a very deeper meaning because, it is important to making sure that, if you want to drive. Real adoption of AI, then it has to be trustworthy, right? It has to be trustworthy, and it has to be explainable. It needs to be secure. It needs to be privacy managing. It has to be accountable, like who takes accountability when AI does a wrong decision? So digital trust a super important topic, and AI has just given even more prominence to this topic.
Punit 03:08
That's very well said. And when we talk about AI, there's a new term that we hear. Something called AI agents. Can you touch upon and explain what are these AI agents?
Rani 03:20
So, you have to give it autonomy, you have to give it decision-making rights. So agents, in many ways, uh, the previous risks that I mentioned, agents have taken the risk level to a whole different level. That there's a lot that needs to be done in order to protect these agents from making sure it's aligned to what its original intended actions.
Punit 04:25
That's interesting. I would like to ask you about risks, but before that mm-hmm. Is it possible that you can share with us an example of an AI agent? One or two maybe?
Rani 04:36
Absolutely. So, say for an example, it's an hospital that's using an MRI scan diagnosis assistant. Right. So, all that it needs to do is that it scans the MRA, uh, the, IT needs to understand the images and it needs to make a prediction and determine whether. This person needs a surgery or not. Right? So this can be automated. It takes a lot of human effort, but if you don't do it with proper human review at the right places, right, your whole objective automating it, it's, it's, it's no use because. That's where the trust gets broken. You need the proper human in the loop to do it. So that's one example. Another very simple example is your ticket reservation. Like if you need to make a reservation to your, uh, vacation spot and you want a travel agent, an automated agent, that actually goes through your preferences, it goes through your, the availability, it accesses some external tools to understand where the, uh, where there are, um. Tickets available, and then what dates would work? It checks the weather and then it comes back. It accesses your personal schedule to know when is the right time. So you get, you, you see that it is making a lot of, uh, what typically humans would do in a real scenario, right? If there's a travel agent in that place, instead of the AI agent, the QM agent need to do all of these. Manually. So, you are now giving the right and the decision-making power to a software to do all these connections. So, it needs access, it needs autonomous, uh, behavior, it needs decision making rights, and it needs proper privileges. Right. So, this travel agent would do all of that and also make your booking, which includes your, of course, your financial access, right, your account access, and it pays for your rooms, your hotel rooms, and then tells you here is your itinerary, right? This is where you're traveling, there's your flight ticket, and this is where your hotel reservations are. But what I trust, uh, travel agent to make my booking like an automated AI agent. Probably, but will I trust an AI agent to make a higher or a fire decision in an organization without human review? I wouldn't. Right? Would I trust an AI agent to make a prediction or a conclusion about my M MRI scans without a physician's review? I wouldn't. So that's where agents, um, you need to design it responsibly. I mean, AI itself, you need to design your AI systems responsibly. But agents, it's even more important because of the amount of rights, amount of access, amount of privileges that you're giving to this new software.
Punit 07:12
Okay, that's very interesting. But. It seems to me that the risks or challenges, especially in context of security from these kinds of agents, would be slightly different or maybe significantly different from what we usually talk about AI or machine learning. And do you think if they are different or how do you see that?
Rani 07:33
So, as an example, right? So, uh, in the past when, say the same example with hospital, let me take that example. If a hacker wants to get access to a sensitive data, they need to actually hack. The database is get the access, break the access, and get to it. Right? But right now with AI model, if you have, if you have trained the AI model with sensitive patient data, it has been proven that the model can actually remember that data, the training data. And you can, if you manipulate just by probing, you don't need to break any access just by pure probing, you can actually extract sensitive, you can make it leak sensitive information about patients because it remembers. The second thing is model poisoning, right? We've never heard about this poisoning attack with traditional applications. If you want to screw up with some kind of the predictions and AI model makes all that you need to go poison the training data that this model's going to be trained on. It's going to have a massive impact because all the decisions that it takes if you, um, uh, introduce a bias right, to making to for your own, um, uh, personal, um, benefit. If you introduce bias into this model, it's going to make decisions that are going to be biased for massive set of use cases, because that's one model, uh, covering multiple use cases. So. The other is, um, uh, model theft. You don't need to break the access, to get access to, to get the steal the model. You could, you could actually extract the model information by pure prompt injection or prompt, um, extraction, right? So, a lot of things are very different between traditional cyber and ai. With agents, it's just even more difficult because now with the same model, you also have access to memory. You also have access to tools. Where sensitive data might be stored. You don't know what's happening inside because the agent goes through its own, um, determined path to complete the goal. You just feed it knowledge; you give it access. You say you give the power to make the right cause of action and what it needs to do, what steps it needs to do. It determines by itself. So that introduces a whole set of risks. There's a lot of literature that's out right now, but it's evolving. But I'll just mention a few. So there's something called agent control hijacking. So, so the hackers can actually get control of the agents because it's like if you are the weak, if it was humans, right? In traditional cyber world, if you are a weak link in the whole system, it's enough that you, you are getting compromised. That you are being compromised, then your access is compromised. That's it. Then the hackers were able to access everything that you add access to. Same with agents. If you give your agents. Higher privileges and you are not monitoring. If you don't follow the least privileged principle, then if it is enough for the hackers to just take control of that one agent that has access to all these APIs that has access to all these databases, data, and so forth. So, um, we are with, with agents, we are actually protecting, we are not fighting against vulnerabilities like traditional. We are fighting against decisions, right? AI decisions, AI decisions, integrity. We need to protect those AI decisions, which can have massive impact, if not protected.
Punit 12:11
Okay. That sounds interesting and also scary, but, uh, if we say that these agents can act autonomously. And somebody can hijack them. I would think we can also use them into our benefit by saying if we train them, guide them, coach them in terms of what do we expect from compliance and security. Me measures like we train our employees saying, do this, don't do that. Be aware of phishing emails. And all those things. If we do that, I think these agents can also learn and be more relevant. No?
Rani 12:46
Absolutely. Yeah. Yeah. So, uh, when it comes to medication, right? Adversarial testing and adversarial training is super important. You need to train the agents on. These are the kind of attacks that it can encounter. Right? How do you, it's almost like, I mean, I keep, uh, comparing with humans, right? You would teach humans the same thing, right? We have phishing training. If you get an email like this, this is a simulated attack. You should know to identify the exact same thing with agents and software. It's no different, but you need a lot of advers because these are powerful machines, powerful software. You need a lot of AI Azdvers real testing and training that's done on these models and agents. Honestly with the speed of evolution of these agents, there's a lot of catch up with, uh, with the market in terms of solutions to properly protect these agents, for example. Observability solutions. There's a very handful of organizations right now, um, startups that are actually working on true platforms where agent ops, it's called agent ops, uh, technology, where you give complete visibility for the user to how the agent, what is the chain of actions of this agent? What kind of path did it take? How did it complete logging so that they can actually audit it, right? So that is not there. Everybody wants to jump into the agent wave, and they want to build a lot of solutions. But, uh, the truth is that the, the security solution, the solutions for all these risks are actually in a very nascent stage, is catching up, right? Which is, which is going to be the case for some time because that's how quickly technology's evolving. From 2024 to 2025, we have moved from rag-based applications to agents so quickly, and then there are companies that need to constantly keep pace with all of these evolutions because they want to. I mean, it's a fear of being left behind. So, need to, they need to adopt the next thing, right? That pressure is there. But what about, uh, guidelines, right? Even the communities, like oas, MIT, Atlas, NIST and lot of other, uh, open-source communities. They are trying to also catch up with this evolution because they released something today, in two months. It's already outdated because the technology has gone into multi-agent and it's very different now. They said let's use LLM to evaluate an M'S output to avoid hallucination. Right? And then they said, oh, uh, how, how would we trust, uh, LLM as a judge? Okay, now how would LLM as a jury, right? A jury of LLMs that are going to evaluate and save whether. This particular element. So, I mean, every day there's new evolution. Every day there's something new that's coming up. I, I think, um, what we need, I mean, there is ways to do it. There is, it's important to go back to your, uh, fundamentals and getting your security, privacy, trust, and governance in place. Or else it's very difficult for organizations to keep in pace with this evolution.
Punit 15:42
That does make sense. I think there's always a fear of missing out when it comes to technology. Mm-hmm. With the speed at which technology is evolving, PE people, in fact, companies are taking a lot of actions. Mm-hmm. Mm-hmm. And a lot of actions, sometimes not knowing if the technology's right for them, technology's mature enough and so on. But that will happen, and the risks are there. In that case, how can companies protect themselves against these risks, these security challenges that we see?
Rani 16:12
Absolutely. See, um, I, um, this is what I believe in that, uh, companies need to do, get the fundamentals right, is what I told you earlier by, because AI is, is different and you need a holistic solution to this problem. Not a, a fixed patch of one, one and done approach is not going to work. You need a well thought through holistic solution to manage the AI risk. So what I, that's why AI governance. So the way I would, I always draw this picture by AI governance is kind of the cornerstone. It's kind of the fundamental building block that you need to put in place. So once you have, under the AI governance pillar, you have AI risk management, are you managing the AI risk properly? And then you have AI asset management, do you have visibility into all your AI assets in your organization? There's a whole problem of shadow ai. So without visibility, if you don't know what exists, then you, you can't protect what you don't know exists, right? So you need to first understand what exists. Then you need to know what is the risk involved? Are these systems, what kind of risk is involved? Then you need to have a good security posture, understanding, right? Today, what controls do I have in place? Do I have guardrails? Do I have LLM firewalls? Do I have my, uh, privacy training for these models? Do I have proper red teaming being done? Do I have, um. Agent, uh, continuous monitoring capabilities, so many capabilities are there from the, throughout the life cycle. If you're building an LLM app across the life cycle, every stage of it, you need some control in place. Because there is risk at every stage. There is risk that these AI systems can, um, are exposed to at the training stage. If you don't ethically source data, if you don't have a good training, um, visibility into your training pipeline, um, you, you run into risk of like bias, right? You run into the risk of like a manipulated training data that's going to affect your decisions. Then at, in your development stage and uh, design stage, you need to ethically design it with explainability. Uh, my AI is going to be explainable. Am I going to show how these decisions were arrived? Am I going to make sure that there is. Proper transparency into the AI's operations, right? Then when it comes to development and deployment, am I, am I, um, putting the right guardrails in place to avoid prompt injection attacks? Right? Because these models, the simplest attack that they're, uh, subjected to is prompt injection gel breaks. Uh, people can just probe and prompt, uh, do prompting. Prompting through prompting, they can induce, uh, include a lot of attacks that can be done on this. They can extract sensitive information; they can extract the actual model decisioning logic. If they want to steal the model, they can do this model inversion and so forth. So, uh, the approach is to making sure that you have asset visibility. You have proper risk awareness about what exists, and you have a good security posture, you have the right controls in place to secure it. You are re-teaming your applications constantly, and then you have the right controls in place to protect the entire attack surface. It's not just the model, it's not just the training pipeline, but the entire attack surface needs to be, uh, kind of, um, protected. Right? And then when you do all of these, you will naturally be in a good posture, compliance posture. Right. So are you able to. Um, align, I mean, comply to the, um, regulatory guidelines, right? You are governing your models so that, so all of these, this is what I call a lifecycle approach. You discover, you assess risk, you map your threats. Then you scan for vulnerabilities, then you implement the gaps, the controls in your risk, and then you are compliant, right? You are. You are ready for your compliance audits. How far are you ready for compliance? So if you take a life, if organizations take a life cycle based approach, have a solid foundation of AI governance structure, I think they'll be much better off in this whole AI adoption journey.
Punit 20:13
I fully agree with you. I think governance is key. Identifying and managing and governing those risks is where it starts and where it also ends. It's a continuous journey. Now while organizations take care of things, I think there are also some individual responsibilities when it comes to ai. And it's both in context of those using ai. Mm-hmm. And also those building ai. So what role do you see for individuals when using or building AI.
Rani 20:42
So, the more you use, the more you're comfortable, the more you know what it is, but use it with the understanding of the risk. That's what I would say.
Punit 22:27
No, very well said. I think AI literacy is key, both for people using AI and also building ai. Mm-hmm. Because the level of literacy and what you get yourself knowledgeable about is different, but it's very important. And then of course, it's about managing risk and making risk-based choices, whether you're a user or a builder. Now, you seem to be doing a lot of work in this area. Would you also be open to share something about you and your company and what you are doing?
Rani 22:55
Absolutely. So, I'm the founder and CEO of Secura AI. It's a startup that's focused on, uh, securing and governing ai. So, I have a unified platform. So, the lifecycle approach that I told you that I explained on this podcast, the platform actually brings it to life, right? So, you need. Platform. Um, I didn't want to just address one problem from this overall problem domain, just one piece of it, but I wanted to approach it holistically. So the platform brings that whole, um, lifecycle approach to life by having a risk management component and asset management component, a compliance management and a security management component all in one platform. And these, uh, there are also like, uh, smaller products. Like there's a, um, attack simulation platform for ai, red teaming that I mentioned. Which is basically, uh, simulating attacks, real life attacks on the models. If you have an LLM app, the AI retaining tool can actually do some static testing as well as some dynamic AI power testing on your AI apps to kind of, uh, replicate and kind of simulate what, uh, hackers would be doing using, again, they use AI as well in order to, um, that's a double-edged sword of cyber right, like AI powered cyber, and then you also need AI to combat that. So, um, so that is what my company is focused on. Yeah. And it's based in New Jersey, US.
Punit 24:20
That's good to know. You're doing some great stuff and if, uh, based on this conversation people are interested to, let's sit, talk to you or find you or contact you for more details. What's the best way?
Rani 24:32
LinkedIn, feel free to connect with me on LinkedIn and you should see my you can also visit secure ai.com and leave a message from there as well. So, LinkedIn would be my preferred way to get in touch with people. And yeah, I'm looking for, uh, partners and we have some active partnerships, uh, already in place. I'm looking for more, more partnerships and because it needs to be a collective effort, and I believe in collaborative partnerships to make bigger things possible.
Punit 25:04
Absolutely partnerships help, and I hope you find lots of them. And for now, I would say thank you so much for sharing your thoughts, insights, and bringing this knowledge to us.
Rani 25:16
Thanks a lot, Punit. It was amazing and I loved conversing on this topic with you. Thanks for having me.
Punit 25:22
Thank you, Rani.
Rani 25:23
Thank you.
Fit4Privacy 25:24
Thanks for listening. If you liked the show, feel free to share it with a friend and write a review if you have already done so. Thank you so much. And if you did not like the show. Don't bother and forget about it. Take care and stay safe. Fit for privacy helps you to create a culture of privacy and manage risks by creating, defining, and implementing a privacy strategy. That includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPPE, CIPM, and CIPT through on demand courses that help you prepare and practice for certification exam. Want to know more, visit www.fit4privacy.com. That's www. FIT thenumber 4 privacy.com. If you have questions or suggestions, drop an email at hello(@)fit4privacy.com. Until next time.