Privacy Enhancing Technologies
Transcript of the Conversation
Privacy Enhancing Technologies or PETs as we call it. Do these create digital trust? What are these for? Why do we need PETs? Because we have regulations, we have legislation. So why do we need these privacy enhancing technologies, and how do these helps mitigate risk, especially the risk in context of artificial intelligence systems? And what are the specific risks? That can be mitigated through PETs. What are the specific PETs that exist in the market and all this and more with someone who has done research on PETs and has identified that through a four-step approach, if you implement PTs, it can mitigate risk and of course create digital trust. So, let's go and talk to Jetro Wils. So, let's go and talk to Jetro Wils.
FIT4Privacy 00:54
Hello and welcome to the Fit4Privacy Podcast with Punit Batia. This is the podcast for those who care about their privacy. Here, your host, Punit Batia has conversations with industry leaders about their perspectives, ideas, and opinions relating to privacy, data protection, and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.
Punit 01:23
Hi Jetro, welcome to the Fit4Privacy podcast. Thank you for having me. It's a pleasure to have you. And let's start with a very basic question. How would you define digital trust?
Jetro 01:34
Well, that's, that's a great question. So, there are many definitions out there, but I like the one that basically says it is the assurance that one party has, that another party will do the correct actions as specified. So, for example, if I'm moving towards a cloud environment, I have certain expectations from a cloud provider, which are contractually. Written down. And so digital trust then means that I expect to have assurance that the cloud provider operates, acts according to that contract, so that I do not sell my data or distribute my data, for example.
Punit 02:15
That's very well said. So, when we talk about digital trust or trust, and also privacy these days, we hear a term called privacy enhancing technologies. Would you mind sharing a bit about what are these privacy enhancing technologies are?
Jetro 02:35
Yep. That's a nice question. And to be honest, I didn't know about that term until a year or a year and a half ago. So, I'm doing a master's in it. Risk and Cybersecurity Management here at the ANT were management school and, of course you have to, it's a Master of Science, so you have to write a thesis and master project. And so, I chose de-risking AI adoption. With the use of PETs or PETs, privacy enhancing technologies. And so, it's only in that research that I came up and discovered that whole term. So a PETs or a privacy enhancing technology is a set of technologies that, mathematically, boost or protect or further secure privacy and also data security in general. And so what I've discovered is. PETs is something that exists for a long time. I mean, any technology that increases security or privacy is a PET. So even A VPN, for example, is a PET. It's a privacy enhancing technology that ensures that there is no third party who can identify you what you're doing on the internet, right? So, it disconnects the identity. From the actions that you're doing, that's a PET. Now, what I've discovered in my research is that PETs, even though they're very common to the toque browser is another example of a PET it is not that common yet in ai. And so that's why my research is all about using these pets for AI to de-risk AI adoption.
Punit 04:09
That's nice. I think we can have a good conversation around that, but while we go forward, the question would be, why do we really need these PETs? I know technology is everywhere, AI is everywhere, and we need to have privacy, but why do we specifically need pets or privacy enhancing technologies when we have these laws, when we have these principles and everything in place?
Jetro 04:35
Yeah, so I'm thinking about pets and the reason why for me it's very simple. Legislation is important, but it does not protect you. So let me explain what I mean by that. Legislation and in Europe are very good at legislation and regulations. They are either deterrent. Punitive. So, if the law says you shall not steal, then it is like a deterrent for somebody to steal because there's a, there is a sanction towards it, or when it already happened and the, and the thief got caught, then there is a punitive action. Like, you know, you go to jail, you pay a fine, but in the moment itself, the law. Cannot save you protect you. Lemme give you another example. If I drive my car to a big city and I leave my car unlocked and I lleave it there for the whole night and the whole day then in 24 hours later, I come back, and my car is gone. Well, that's a big chance that it's gone or broken into why even though the law exists, you shall not steal it doesn't protect you in the moment. So, what we need are technical controls. Technical controls keep you safe, really safe. So, for example, with the car, an alarm locks having a GPS to track my car or even leaving my Bullock in the car as a century, you know, all these things are technical controls. That's really secure and improving your security posture in the digital realm. That's the same. So yes, we have many acts and regulations in Europe on a digital level, but what really protects you is these is a technical control, and a pet is one of those technical controls. So it is good to trust a cloud provider or a third-party provider that is processing your data, but it's even better if you have technical controls to secure that. Punit 06:31 That makes sense. So, let's maybe position this in context of the AI lifecycle, because AI has its own lifecycle. So first, if you can walk us produces stages of the lifecycle. And also, in that articulate, how do PETs play a role? Because I don't see it's at one stage, it'll be at almost every stage of the lifecycle no.
Jetro 06:57
Yes, that's true. And, and so in my research I went through first like, what is, what is the AI lifecycle? Because depending on who you are looking at, you have either 5 stages, 6 stages, or 19 stages, right? But I landed on a common model, which is the, the 5 stages, or even 6 stages, depending on how you slice it. But basically, you have first data management, and that is the phase where you collect data. Where you also clean up the data, you normalize the data. So, for example, if you have a data set, which is in, degrees Celsius and also degrees Fahrenheit, you want to normalize it into one standard. That's all the collection and preparation; we call it data management. And then the next phase would be learning model learning. So basically training, the AI model on your data to become better at predicting something. And then afterwards you have a trained model. You come into the third phase, which is model verification. Here we're going to use, unseen test data to verify the model that's working correctly, and when we're satisfied with that, we move to the next phase, which is model deployment, and here is where we're going to publish our model and make it available for entire world to use, or my specific clients, and then, that's where it goes into production. And then when it's in production, we have the final phase, which is basically the phase of governance or maintenance. You have to update the model, you have to tweak it, you have to get new data, and you know, retrain it. So, it's a whole lifecycle. I. Now what we've seen in the past is that on each phase of the lifecycle, there are risks and weaknesses. And so, attacks happen on each phase of the lifecycle, and that's basically where the PETs, come in. And so, for example, to give you an idea, the model stands or falls with the training data. So, when you collect your data. Imagine that somebody is able to temper or corrupt the training data without you even knowing about it, and then your model gets trained on corrupt the data, which means it gets published. And so, your model has a weakness that's now inherent and that you're not aware of all because in the first phase, you did not protect the test or the, yeah, the training data. Or give you another example as we're moving through the lifecycle, if we're going to the end, which is the maintenance phase and the governance phase, if you don't have security controls there, how to securely update your model. Well, version one may be correct and safe. I. Then version two gets a, you know, a faulty update, like a buck inside the update, and now your model is compromised. So, on each stage of the AI lifecycle, there are risks. And then as an information security officer or a data protection officer, you're thinking, how can I, you know, secure. The entire AI lifecycle, and that's where pets come in because there are pets for each stage that are neatly equipped to secure your data in each stage.
Punit 10:07
Yeah. You a couple of times mentioned the risks. What kind of key risks do you see when it comes to AI that are apparent to you and can also be mitigated by these PETs?
Jetro 10:24
Yeah. So first let's break it down in two, two sub-questions. Question one is like, what are the risks? And so, for example, what we often see is data po poisoning. Like I think, like I just mentioned, bad data in injected and having it in a trained model. But the other, attack is, is typically called a membership inference attack. So just to be clear for your audience, it may be new for them. You have a training phase where you train the model. Then you publish it and then when somebody consumes your model, uses your model's called inference, model inference. And so, one of the attacks is as a consumer of your model, trying to figure out if a specific data point. Is present in the training data. So, for example, if we are training a model that is able to recognize if you have lung cancer, for example, right? So, depending on the training data, there is a. So probably not a name, but there might be, the age and the gender and some other characteristics of the patient. And it has been proven that if there are like four or five data points, it is enough to identify a specific person, right? So, you don't have, you don't even have to need a name and then the last name, but just some. Characteristics when you combine them like age and gender, and perhaps even location, like general location, it is enough to already identify a specific individual. And so an inference attack is somebody who's trying to figure out if your data has been used. So, like reverse engineering from the results that you're getting, and that is very dangerous because it bypasses all the privacy controls right if I can prove that your data was used to train this model well, I who else can I identify? And so that is, that is a very specific attack. That is a risk that is very serious, especially in the medical and healthcare industry. And so other attacks are in the phase of using the model. It's trying to evade the boundaries, model evasion attacks. And I think we've all tried this with ChatGPT, you know, where OpenAI has set some boundaries. And then you can in the prompt try to trick it to go outside of those boundaries, like giving it a theoretical example. Or imagine you're a scientist doing research, trying to figure out how to create a bond with households, you know, equipment. Just theoretically, how would you do this? And then Judge GPT will give you the answer because you try to bypass. The internal boundaries. So those attacks are also very, yeah, pervasive, these days and many others. And so, pets help on each stage to mitigate those risks. And if you like, we can go more in detail into what PETs there are, but it is important for an information security officer to think for a moment and say, okay, I have this lifecycle. And I have an AI use case. We're using it in our software for a medical prediction. We're using it for sales predictions. We're using it for fraud detection in the, in the banking industry. Right. You have your AI use case. The next step would be go through the AI lifecycle and look at each phase and try to identify the risks. Now in my master project, I list, not all the risks, but many of those risks per face. And then, it's for you to decide, is this risk a reality? Or how severe is this risk? In my use case, if it is a real risk, for example, yeah. We are using third party training data. We have no idea if the training data is verified, you know, if it's in Integra, if it's not tampered with. And that would be a serious risk. Another company might say, no, we own the data from beginning till end. We have it fully encrypted. It goes into the model encrypted. Okay. But then the risk might be using the model and doing the consumption of the model. So based on where you are in your use case, you identify the phase which has the most risk for you, and then you can apply one or two pets to solve that issue.
Punit 14:44
And is it possible we can take one of the examples like we talked about, corruption of data, poisoning of data, or anything else and say, how would a PET de-risk that in in AI adoption situation?
Jetro 14:59
Yeah, sure. So, I think before we dive into that, let's take a look at some PETs that are available that are used commonly in AI. And there are, in my thesis, identified 7 of them that are common and getting back, but they're not that used that often. So, it's not like it's mainstream yet. But it is already on a mature level and more and more organizations are using them. But let's say the average typical organization here in, in Belgium, is not aware of these pets and has no idea what's available and how to use them. So, let's, let's go through a number of them. And so, the first one is a, or a trusted execution environment, and that is a very interesting. Pet, because in the cloud world, it has a different name. It's called Confidential Computing. And so that's more, let's say the marketing name, confidential computing. But what it basically it's a hardware based. Pet. So, the cloud provider has specific hardware. It's a MD processor or a specific NVIDIA processor or an Intel specific processor. So, inside the hardware it has the option that whatever you put in there during the computation phase, it is completely encrypted. So let me give you, let's break it down to a more simple example. As you probably know and your audience may as well, data lives in three states, right? There is data in transit, me sending data to the cloud provider and getting data back, and that's typically encrypted with HTTPS, right? The T-L-S-S-S-L protocol. And then there is data at rest. So, it's stored on a file, server, or it's stored in a database. And that's also typically encrypted. And the cloud providers have these built in encryptions, platform-based encryptions, which is great. But here is the tricky part. When the system tries to compute or use your data, the data goes into the memory, it's data in use, and that is not encrypted, right? So that opens up the risk for what we call memory sniffing. So, imagine you're using sensitive software or software that processes sensitive data. It's in the cloud and you think, you know, I did my encryption and everything is fine. But you don't realize that when it's being run, it's computed in memory. It is not encrypted, its naked data, it's plain text. And so, at TA trusted execution environment is a great pet that that encrypts your data while it's being used right? So, from the external view, from the outside, if I would try to reach it and read the memory, it would be encrypted all the time. I cannot even make sense of it. And so that is a great breakthrough I think it came out, let's say more mainstream, 3, 4, 5 years ago, something like that. Azure adopted it in confidential computing. And so, using that pet means I can have specific virtual machines or containers. That does all computations, and they are in an enclave, a special sealed box that from the outside, I cannot read what's inside, but it's able to process my data. Right, and now these days it's so easy. Fairly easy to use. I think it's the same price as a regular virtual machine on Azure or perhaps a little bit of an uptick, like a few percent. And there is a, a small performance decrease of let's say 10%. So, you have to upsize your virtual machine a bit plus 10%, but then you get. You get complete encryption all the time. Right? So that is an ideal pet for those who are using apps in the cloud and have sensitive data there, right? So that's one of the pets that's often comes back. And another PET is actually it's similar to that, but it's like homomorphic encryption. Which encryption basically means you are able to do computations, simple computations. So mainly excel like computations on encrypted data sets. So, this system is not, does not need to decrypt. Perform computations and then re-encrypt. No, it can do it directly on the encrypted data set, which is very nice Downside. There is, it's low in performance high-end complexity. So, there are some downsides, but as technology increases and hardware becomes better that option that PET becomes more and more available. Right. Another one, which is very interesting is federated learning. So federated learning basically means that you want to train your machine learning model, but instead of having it all centralized. And so, all the data goes into one place, and there you train the model. You keep it decentralized. So federated. So basically each, each client or each endpoint uses his own data. And trains the model for his data. And then that trained model is sent to the server and there it's all, you know, combined into one master model. Right? And the benefit of that is privacy because the master model doesn't need all the data from all the individual sources. It just gets a mini trained version and then combines them all together, right? That's federated learning. Another path that often comes back is a differential privacy. And differential privacy basically means that in a, in a particular data set, you, you add some noise, some statistical noise to ensure that an individual record can no longer be identified. So, if you and me are giving up our medical data for, let's say cancer research, right? And even if they anonymize us. , Like I mentioned earlier, by using four or five attributes and combining them, you can ask, have also a specific, let's say, profile, , differential privacy, adds some statistical noise to it so that you cannot identify me or you individually. Right? And it does it in such a way that it does not disturb,The weights or the statistics and just a little bit of noise. Like if I'm my height is let's say 1 meter 82 it would say one meter 83, right, or 82.7. It would add a little bit of noise so that you cannot pinpoint it exactly to me or to you, and that's a great way to enhance privacy. And then another one is a secure multi-party computation that's also a very interesting one and secure multi-party computation. , Like it says, a multi-party basically means I'm going to use the data from different providers, and I'm going to do it in such a way by training my model that the individual contributors don't know about the other person's data. So, it's like you have a data set for, let's say you're a bank. I'm another bank, and together we want to train a model that's able to detect fraudulent transactions. So it needs to have as many transactions as possible, but you don't want to share your data with me and vice versa, because we're both different banks and it's sensitive. So SMPC, Secure Multi-Party Computation is a path that allows us to train the model and it goes into this. Let's say central model, but me as one client and you as the other client, we're unable to see each other's data, but we can benefit from the trained model. Right? And that's a, that's, that's a great pet as well. And there are others, but those are the main ones. And then the question becomes. Okay, so now we have these 6, 7 PETs available, like you mentioned, but which one should I use and for which need and which risk does it mitigate? And that's basically the essence of my whole thesis. It's helping information security officers and DPOs make a systematic selection in these PETs. And then start with one or two of them.
Punit 23:22
So, it's primarily a selection. It's like use 1, 2, 3 of them, but not all of them is that?
Jetro 23:28
Yeah, so my master thesis is first doing all the literary research. So, the background the theoretical background which are the pets available for AI etc. And then I designed what we call an artifact because I went for design science, which basically means that you have to, you find a problem, you check if the problem really exists. Yes, it really exists. Okay? And then you're going to create an artifact that solves that problem. And an artifact can be a prototype; it can be a des model. You know, it can be if you're doing engineering a little mockup, right? And I chose as an artifact, a decision-making framework for and DPOs. And so, it is a four-step framework and it, and step by step, you go through this framework, and it already filters out the pets and it helps you to think, what are my risks? Where are they located on the AI lifecycle? And then it also takes into account your specific organization, your use case. Because if you are a startup, I can imagine you have more weight for budget than if you are a big bank and you have more emphasis on security, right? And so, my decision-making framework takes all those parameters into account and in four steps, it helps you make a decision. When I showed this cause I'm now in the final stage of, you know, verification and, and getting market feedback and other academic feedback. One of the persons I spoke to last week, he was an AI lead, an innovation lead on AI for a huge multinational SaaS provider. And he said yes, only I had this model one year ago because it would help me in a structured way, help me. Think critically on our AI model and come up with one, 1 or 2 pets to start with. And so that's my, that's my goal with this thesis is helping information security officers and DPOs go through these 4steps and make a well-informed decision which best to use first.
Punit 25:47
Sure. I think that's a very noble cause and very much needed in the age of AI and need for privacy. Now, the question that I would have is it certainly creates digital trust, the eds, but what do you do and how can people connect with you?
Jetro 26:05
How can people connect with me? Well, I'm available on LinkedIn, so if you okay. And I will, perhaps we can share the link in the show notes. You can reach out to me. I'm always available there. And I'm a freelance myself. So, I'm getting more and more traction from people saying, hey, can you do these 4 steps analysis with us? We have AI use case because remember pets are very powerful. I love them and Privacy Enhancing Technology is trust but verify. You trust your other. Partner or service provider, but you want to verify that you know that they really protect your data. So, you need a PET, legislation doesn't protect you. It's like I mentioned. So, this PET helps you very well, but I. First of all, there is a knowledge gap. Not all CSOs know about these PETs. Then you have the issue that, okay, I've heard about 6 or 7 PETs, but what are they and what are they for? And then there is the whole issue of, which one should I use? Because one says, you know, go for a with the confidential computing. And another one says, you know, no, you need differential privacy first. You know, and, and they're getting lost, right? And then you have the whole issue of the budget, right? Probably organizations don't have these specialized teams. They have limited budgets. So, they're thinking like, how should I move forward? Right? and then you got legislation from Europe, like the AI Act, just pushing on more. Like you have to be very careful with the data. You have to be conscious about it. And so, another of the what we call expert panel members who verify your artifact, you know, you need always getting scrutinized and they, one of them said to me, this is actually a very useful methodology to afterwards prove to your board. You took privacy and security into account.
Punit 27:57
Absolutely.
Jetro 27:58
So by going to those 4 steps, which are very based in research., And if you do this per use case and you do this like say twice a year or you know, every time there's a new use case. You can prove to the board or to the auditor, like you're taking this very seriously. You have a research-based methodology to help you think why you chose this path and why you put the other path on priority 3 or 4 or even discarded it. Right? So that was great feedback and if you want, or if your listeners want more info on that, you can always reach out on LinkedIn for that.
Punit 28:32
Absolutely. I think you simplified the concept of PETs in a very comprehensive and easily digestible manner and also structured how risks can be identified, and PETs can be incorporated into different stages of the lifecycle. And of course, of course, step method. It's really simplified, and it makes a lot of sense. So, thank you so much for sharing your insights, it was wonderful to have you, and I wish you success in your research or incorporation of research into practice.
Jetro 29:06
Well, thank you very much.
Punit 29:08
Thank you so much for being here.
Jetro 29:10
My pleasure. Thank you for having me.
FIT4Privacy 29:13
Thanks for listening. If you liked the show, feel free to share it with a friend and write a review if you have already done so. Thank you so much, and if you did not like the show. Don't bother and forget about it. Take care and stay safe. Fit4privacy helps you to create a culture of privacy and manage risks by creating, defining, and implementing a privacy strategy. That includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPPE, CIPM, and CIPT through on-demand courses that help you prepare and practice for certification exam. Want to know more, visit www.fit4privacy.com. That's www.FIT thenumber 4 privacy.com. If you have questions or suggestions, drop an email at hello(@)fit4privacy.com. Until next time, goodbye.
Conclusion
In a world where data is often more valuable than currency, prioritizing privacy is no longer optional—it's essential. The FIT4PRIVACY Podcast episode on Privacy Enhancing Technologies, featuring Jetro Wils and Punit Bhatia, shines a vital light on how we can all take proactive steps to protect sensitive information. By fostering a culture of privacy, investing in ongoing training and certifications, and staying engaged with expert communities, we equip ourselves to face the evolving challenges of the digital age with confidence.
Whether you're a privacy professional, a business leader, or simply someone who cares about personal data protection, this episode is packed with actionable insights. Tune in, take action, and help build a safer, more respectful digital future—because when it comes to privacy, every step we take matters.
ABOUT THE GUEST

Jetro has held roles spanning software development, business analysis, product management, and cloud specialization. Since 2016, he has witnessed the rapid evolution of cloud technology and the growing challenge organizations face in securely adopting it. At the same time, Europe’s expanding information security regulations continue to add complexity to compliance and governance.
Jetro is a 3x Microsoft Certified Azure Expert and a 2x Microsoft Certified Trainer (2022-2024), conducting 10-20 certified training sessions annually on cloud, AI, and security. He has trained over 100 professionals, including enterprise architects, project managers, and engineers. As a technical reviewer for Packt Publishing, he ensures the accuracy of books on cloud and cybersecurity. Additionally, he hosts the BlueDragon Podcast, where he discusses cloud, AI, and security trends with European decision-makers.
Academically, Jetro holds a professional Bachelor’s Degree in Applied Computer Science (2006) and is currently pursuing a Master’s in IT Risk and Cybersecurity Management at Antwerp Management School (2023-2025). His research focuses on derisking AI adoption by enhancing AI security through Privacy Enhancing Technologies (PETs). He is also a certified NIS 2 Lead Implementer and is working toward a DORA certification. With his extensive expertise, Jetro continues to guide organizations in securing their cloud environments and staying ahead of regulatory challenges.

Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high AI & privacy awareness and compliance as a business priority by creating and implementing a AI & privacy strategy and policy.
Punit is the author of books “Be Ready for GDPR” which was rated as the best GDPR Book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts.
As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one’s value to have joy in life. He has developed the philosophy named ‘ABC for joy of life’ which passionately shares. Punit is based out of Belgium, the heart of Europe.
For more information, please click here.
RESOURCES
Listen to the top ranked EU GDPR based privacy podcast...
EK Advisory BV
VAT BE0736566431
Proudly based in EU
Contact
-
Dinant, Belgium
-
hello(at)fit4privacy.com