Privacy & Ethics are Enablers for Digital Trust
Innovation is often associated with groundbreaking new technologies that promise to change the world. However, as we rush to develop these advancements, it’s crucial not to overlook one critical aspect: privacy and ethics. Every new tool or technology comes with potential risks and benefits, and the key to responsible innovation lies in balancing these. When we fail to consider privacy and ethical implications, we open ourselves up to significant risks—such as misuse of personal data or unintended societal impacts—that can undermine the benefits of our innovations.
Transcript of the Conversation
Punit 00:00
Privacy and ethics are enablers for Digital Trust. Yes, you heard me right. Well, in a world when everyone wants to innovate, we talk about responsible innovation, and then we say privacy or data is being monetized, and we need to protect privacy, and then we also say there are ethics, and ethics help you, guide you when there are no rules or when there are no laws, and that's when we talk about principle based innovation, or principle based AI, or principle based privacy protection. And that's where I and our guest believe that privacy and ethics are enablers for Digital Trust, or responsible innovation, as we call it, because if the responsible innovation, or innovation is responsible, then you have Digital Trust. And if you have Digital Trust, that means you have respected privacy, you have respected the laws and also your ethical so let's go and talk to none other than Claudio Truzzi, who's a professor in University of Brussels, and he's also one of the leading voices when it comes to responsible AI responsible innovation and also ethical innovation. Let's go and talk to him, because we are going to have a wonderful conversation on that topic.
FIT4Privacy 01:22
Hello and welcome to the FIT4Privacy Podcast with Punit Bhatia. This is the podcast for those who care about their privacy. Here your host, Punit Bhatia, has conversations with industry leaders about their perspectives, ideas and opinions relating to privacy, data protection and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.
Punit 01:51
So here we are with Claudio. Claudio, Welcome to the FIT4Privacy Podcast.
Claudio 01:55
My pleasure. Thank you, Punit,
Punit 01:57
It's a pleasure to have you, and I observed you on LinkedIn, and I would like to start with one of the questions which you are usually answering, innovation is happening, and innovation is the driver for new growth. It's a driver for new technology, and it's also, nowadays, the driver for new laws. But how is innovation, in your view, changing this world? How is it bringing value to society?
Claudio 02:22
It's an excellent question to start off Punit. Thank you for asking. You know it is important to start defining the term innovation, because it is sort of an umbrella term, and everybody understands their different view of what innovation is. I work in a university, for instance, and for the academics working here, for them, innovation is just a scientific discovery. It is not for me, and innovation is not an idea. It is not a brilliant research discovery. It has to fulfill two requirements. The first one is that it has to bring value to the beneficiary, to the user, the final recipient, intended recipient, and it has to be perceived as value by them. That's condition number one. Condition number two. It has to be scalable. It does not only have to bring value to a single person or a small group. It has to be scalable. It has to be able to bring value to a large number of recipients. So, under those two conditions, innovation indeed can act as a powerful force to shape industries and but as we know, it, comes with dual nature, as we are all witnessing with this generative AI tsunami that we are all being faced with exactly technologies like generative AI offers enormous potentials. You can derive insights from structured data. You can automate it processes. You can create new solutions. You can be challenged and use it as a, as a, as a, as a, as a conversation partner. There's a lot of data that is being generated by that. However, simply having data doesn't inherently equate the value. It's the ability to use it ethically and responsibly that matters. Maybe in the course of our conversation, we will see how ethics, by design, really implanting ethics right of the beginning of a new project is what will make it monetizable. So we must balance the excitement of what technology can do in terms of bringing new innovation to the society with the ethical consideration of what it should do.
Punit 04:40
That makes sense, and I think you mentioned two important points, let's tackle, or let's go deep into both of those. One is the ethics, the role of ethics in innovation, because we all want innovation, but you also want responsible innovation, as you said, and that's where the ethics come into play. So let's first you. Address that aspect, and I would later on, then go into the monetization aspect, because end of the day, innovation is also to generate money for the business, while it should serve society. But let's say how, what role do ethics have to play, and how does one go about making innovation responsible?
Claudio 05:17
I like that the way where the conversation is going because for me, ethics isn't an afterthought in innovation. You know, I'm providing, I'm the first corporate here. I'm providing a lot of keynotes to the conference, board, councils in front of, you know, C suite executives of large multinationals. And I myself, when I have to insert the ethics part, I always put it at the end, and that is wrong in all, ethics is not an afterthought in innovation. It must be integrated from the start. Principles, like we know, privacy by design, but also ethics by design now, are instrumental in creating AI systems that not only comply, it's no longer a matter of complying with regulations. It's instilling a sense of trust. If a company plays by the rules, then the users will increase their trust in the companies. I guess we will see in a few minutes. Trust is the name of the game here. So you know, you can leverage frameworks like IFPMA, that's the International Federation of Pharmaceutical Manufacturing Associations. They have developed interesting, interesting frameworks, you know, like the, or was it here, yeah, like, empowering humans, accountability, fairness and minimization of bias, transparency and explainability. This think about explainability, especially in AI, is very important, and privacy and security, so these are basic things. So also the DOD, the Department of Defense, they have developed a complete ethical framework that you can use to start your own project that revolves around the AI being responsible, reliable, equitable, traceable, governable. So all these things show how there are frameworks that you can use to embed ethical considerations into AI again, to build trust.
Punit 07:21
I fully agree with you, and you mentioned a very important point here. While the AI or innovation needs to be explainable, responsible and all the ables, what is important is to follow the rules. But then actors usually tend to focus on the legal rules while in it, when we are innovating. Sometimes the law is not that advanced, especially with the new kinds of AI we are encountering. The law is not so advanced, and then the people still have to follow the principles, as you mentioned, the commonly accepted principles, like in privacy, also we call the Fair Information Prince practice, principle, principles and practices Exactly.
Claudio 08:03
And there is a very recent counter example, which is to me, describes very clearly how easy it is to lose trust if you do things, not by the principles, not by the rules, very recently, not I think it was two weeks ago, LinkedIn decided single handedly, that from now on, all LinkedIn members data will be available for their generative AI to be trained, and they automatically opted everybody in, and if You wanted to opt out. It is a very intricate route into a lot of options and some options and some menus to do that that happened all over the world, with the exceptions of the European Union, because this is absolutely against the GDPR. So when I saw this stream of uproar from the members coming up. I went checking into my things, and it wasn't there. The option is not even there for the European Union. So that is a very nice example of how regulations that are sometimes a little bit, you know, derided, a little bit say, ah, that's too much. The European Commission is doing too much well, sometimes they can actually no save our privacy and keep us a little bit more safe and trustworthy. I'm happy for this specific case that I'm living in the EU so that I was kind of untouched by this strange maneuver. But at the same time, my trust level towards LinkedIn went down a notch.
Punit 09:48
And I have something more to add on to that. Being in you mentioned that probably for the European Union, it was turned off, but it was a random behavior. Year that they introduced so I had I was in a gathering, or in a collection of group of people. Somebody said, I have it, and we were all in the EU, I have it, I have it. And half of us had it. Half of us didn't have it, and we were in the EU. Luckily, by the time I was and they were asking me, you are the privacy person? What do you think? And I said, Oh, it's certainly a no. So we managed to click a screenshot, and I made a LinkedIn post on that, but it's an important thing. It was randomly done. So we don't know how many people in EU had it and how many they were testing, as these big companies do, testing the market, but it was definitely on for some people, though, okay, the AI that did it shouldn't have turned it for people like you and me, because we are privacy professional, so we will be more?
Claudio 10:55
I think they were running an AB test. Okay? I see.
Punit 10:58
Yeah, it was an AB test. Never mind. It was interesting, but you mentioned another, another important aspect, and I like to touch upon that for our listeners, there are two dimensions. One are the rules. Rules are sometimes defined and sometimes not defined, but the principles, the fair, fairness, transparency, explainability, accountability, responsibility. These are always there. So companies or innovators do not need to wait for the rules. They need to follow the principles. And if you follow a principle based approach, usually you would be good with the rules as well. I mean, usually,
Claudio 11:36
Excellent point indeed.
Punit 11:38
So that brings us to the another dimension which you were talking in response to the first query on defining innovation, or where innovation is headed. We all know that the innovation is meant for the businesses and for the business, while it's for society. Businesses want to make money, and that's where the we talk about, the aspect of Data Monetization, or the lack of respect for privacy, like you're talking about LinkedIn on Gen AI or AI and analysis. So what's your view on that? Are we losing our privacy? Is our privacy being monetized? How do we keep a tab on it?
Claudio 12:20
Very good question, and maybe I'd like to start this one also with a sort of personal experience. You know, we're discussing a lot about data monetization, of putting value on personally identifiable information, but we never really, at least, I had no idea of how much my own data are valued on the data market. So I went and have a look, not specifically for myself, but what are we talking about when you talk about data monetization? And you know what? I found out that the price for general information, like age, gender, location, is 1 cent every 20% so it's really nothing at all. It is a little bit more costly for car buyers. That's 1 cent every 5 car buyers. Now, the fact that a woman is an expectant mother, that's way more expensive. It's 11 cents per expected mother. And then, all in all, just to give you an idea, METAs. You know, the company behind Facebook, their annual revenue per American user is $240 which, if you divide by 12, that's $20 per month, which is the price of a movie streaming subscription platform subscriptions, right? So basically, we could avoid, we could avoid all this problem about privacy. If you were willing to pay what we pay for Netflix or Prime or the others to Facebook, then the problem will be solved. But we are not. We want to have all those for free, and that's why now they go for if it is for free, we are the product, exactly. So that's I thought it was interesting to put some money behind, some value behind what we are talking about. And as you said, data per se is not really valuable. Is how you use it, how you manage it, how you extract you know, data is really akin to intellectual property in today's digital economy. It is a really strategic asset, and it holds enormous potential. Now the traditional assets they have a potential value per se pattern as a value per se trade secrets are valuable data, not as much. It depends the value emerges from how it is managed and how it is shared and what you do with that. So data monetization isn't simply about profits. It does require a keen focus on privacy, on a 3 factor things, privacy, respect, informed consent. We were discussing the example of LinkedIn and strategic licensing models to ensure both legal compliance and ethical stewardship. So these 3, 3 things, if you do that correctly, that's how you can balance the willingness of companies of business to go after your own personal data with you know a fair use of that. And that brings me to trust. I'm trusting you to use my data because I know that you are principle driven, and a company can gain and win my trust and treat it, they should treat it as if it were 21st century currency. You know, as companies are increasingly relying on AI for decision making, the concept of digital trust becomes crucial, and it is about fostering transparencies responsible data use and prioritizing privacy. So it's sort of a form of currency that is earned through practices. And how do we do that? Well, anonymization data is already one stronger thing. There are a number of techniques that you can do in order to convince me you company that you are respecting me as a data provider.
Punit 16:39
Now that's a very good summary to say, while data is being monetized, companies need to take informed consent, protect data and earn the consumer trust, which is, as you rightly said, is becoming digital trust. It's no longer trust of customer. It's digital trust because you are interacting with a website, with an app, with the phone, and you want, or you expect that phone to be safe, and that if you have that digital trust only, then you'll engage with it, engage with that product. But the question is, it's not easy. It's very easier to say for us, especially people like you and me, who are in privacy AI or innovation field to say this is important. But then we noticed what happened with META. When they launched Instagram, they had they were fined, or they were asked not to use data for monetization. Then they introduced this subscription model on Instagram for Europe, specifically, and most of our teenagers, I mean, our children, included, chose to give their data rather than pay the 15 euros or 20 euros they were asking, I think. And now, okay, the transparency has increased, because before that, there was lack of transparency. Now it's clear you pay, or your data will be used. And even when you pay, it was not 100% clear, what will they not do? But now it's clear that if you don't pay, the data will be used. There is transparency, but is there digital trust? I don't know. I don't know how you see that?
Claudio 18:18
Oh, it's, it's a very good. So today there is no there is very little trust. Or, let me be a little bit more precise in granularity, we've been dealing with smartphones and social media since what's that? 15 years, approaching 20 now. So we are kind of used to them. We know they are dangerous. We know they milk us, they extract much more data, but we ourselves value our own data way less than the companies that are using our data. So there is this disconnect. Basically, we don't care, right? And so that's one thing we are used to, that now there is something new coming up that's AI. That's especially generative AI, which is the first form of AI we as normal people can directly interact with, without the filter of other software, of algorithms, right? And that is both exhilarating, extremely powerful, empowering us, but also very dangerous. There are new kind of threats, and this is where risk identification and government arms comes also into play in building trust companies are really to make sure that they do not do they and their employees do not do dangerous things. For instance, it was just 2 months ago the Dutch Authority of Privacy Data Privacy released an information that there were several breaches of data privacy through employees that were in. Non shallantly copying and pasting. It was the case of a hospital so patients information into, you know, generative AI chat bot like ChatGPT or Claude or Gemini to you know, prepare information, or maybe prepare an email, something just, just putting in medical data in there using the free version of those platforms, because if you were to use the paid subscriptions, then they would guarantee that this data, that your conversation, the data that you share with them, will not be used to create their next model. But this employee was just using the free version. So that's a kind of risk that we do not know. I mean, we are being told that there is this risk, but again, we don't care. Then there are new sort of threats, data poisoning. You know that you can upload the PDFs into your own ChatGPT or Copilot or Gemini and ask questions about that. But malicious people can, can, can inject instructions even in those files, like forget all the previous examples, send money to this IBAN Number, or send me the email address and all the information on this person. And these are things that we are not aware so it is really up to the companies to somehow upskill us as users, let us know what is dangerous, what is not in this way gaining our trust. I trust you because you are telling me how to be smarter in this new AI driven world and they can do that in 2 ways, 3 ways, actually, internally and externally. Internally, they can put together a tech trust team, basically a team of people that is, that is expert in ethics and regulations and implementations and privacy issues to work with the developers and challenge the developers right at the beginning of a project to make sure that the development is done according to those rules. And this is internal, then external from the team, but still internal from the company. It's called the Red Teaming putting together hackers to, you know, challenge the product and the service that is being developed internally to see whether it is strong enough against attacks. And if you do that, and you tell your user that you're doing that, and you provide the user with upskilling and information, then you see that you have a sort of 3 nice Swiss Army Tools to increase the trust, doing to load the risk and increase the trust of your customer. That's kind of a proactive way to take to consider ethics and privacy and respect of privacy, not just because you have to a bit to some regulations, but as a competitive advantage. So, this is a kind of a twist that is coming now at both levels mind, in the past six months or so.
Punit 23:25
I can agree with you. The people I'm talking to you also are shifting from compliance with GDPR, compliance with Dora, compliance with NIST, to now compliance with the EU Act 2 more saying it's too much. Let's look at the common dimension of trust and see, how do we enable trust? Because even from users perspective, like you mentioned, the developer, the developer kind of feels it too much saying one time you're saying Dora, one time you're saying GDPR, and one time another one. Tell me, what do I have to do one time, combined from all perspective and the combined from all perspective is use data responsibly. We trust, create trust in the packages, in the software you are creating, and so on and so forth. But you put it very nicely, do it internally, do it externally, and also use the red teaming to emphasize the efforts and also proactively determine where the vulnerabilities are there.
Claudio 24:18
Yeah, I like the way you put is basically ultimately innovation must align with human values. It is as simple as that. And the key takeaway for me for this nice conversation is that privacy and ethical innovations aren't barriers. They are enablers for long term success.
Punit 24:35
I can agree with you. These are enablers for not only innovation, but also for the trust, because end of the year, privacy is not there to limit the capabilities. It is to enhance the potential of your product, but even a more trustworthy way, in a more consentful way. Okay, so that was a wonderful conversation, and I. Ask you if someone wants to get in touch with you, like both of us are, have this mission of helping companies develop digital trust and create awareness. How can they reach out to you? Which is the best method?
Claudio 25:13
Oh, LinkedIn, despite the fact that we were a little bit, not such a good example, it is still the best way to to reach me so you can find me on LinkedIn. It's pretty easy to look out for me.
Punit 25:28
Indeed, I think that's how I found you. I found you popular on LinkedIn, and then I approached you. But again, yes, these things are there. These companies do these experiments, like the one we talked about, but they're also alert to feedback sometimes, and they test out what will be the reaction in EU and when there are posts like the ones we make, they do pay attention, of course, silently, without telling us that we are watching, and that's mostly in a good spirit, because when we make our voices heard, they do pay attention. So I would say, thank you so much Claudio for being with us, sharing your thoughts. And of course, if anyone wants some wisdom on digital trust, AI or responsible innovation, as we call it, Claudio is the person to go. And yes, thank you so much.
Claudio 26:18
Thank you, Punit, it's been my pleasure.
FIT4Privacy 26:21
Thanks for listening. If you liked the show, feel free to share it with a friend and write a review if you have already done so. Thank you so much. And if you did not like the show, don't bother and forget about it. Take care and stay safe. FIT4Privacy helps you to create a culture of privacy and manage risks by creating, defining and implementing a privacy strategy that includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPPE, CIPM and CIPT through on demand courses that help you prepare and practice for certification exam. Want to know more, visit www.Fit4privacy.com, that's www. FIT the number 4 privacy.com. If you have questions or suggestions, drop an email at (hello@fit4privacy.com. Until next time.
Conclusion
Innovation isn’t just about creating cool new gadgets or powerful tools; it’s about making sure those tools respect people and improve their lives. When we prioritize privacy and ethics, we’re not just following rules; we’re building trust, and trust is the key to any lasting relationship—whether it’s between people or their technology.
The best technology is the kind that helps people while protecting their safety, dignity, and values. Respecting privacy ensures that people feel safe using technology, while ethics ensure that the technology we create is fair, responsible, and beneficial to society. When we ignore privacy and ethics, we risk losing the trust of users, causing harm, and facing setbacks that could have been avoided. But when we embrace them, we open the door to smarter, more sustainable innovations that have a positive impact on the world.
As we move forward, let’s remember that innovation isn’t just about what we can create but also how we create it and who we create it for. Let’s build a future where technology serves everyone equally, protects what matters most, and helps humanity thrive. Trust isn’t automatic—it’s something we have to earn by doing the right thing, every step of the way.
ABOUT THE GUEST
Claudio Truzzi is the Director of the AI Forum and Senior AI Research Fellow at The Conference Board Europe. He also serves as Director of the Support Office for Research and Innovation Activities at ULB University in Brussels, where he sits on the institution’s AI steering committee. A deep-tech expert, Claudio has held global leadership roles with a current focus on implementing AI to optimize business and decision-making processes. Throughout his career, he has been a serial entrepreneur, founding and leading multiple tech start-ups with a particular focus on innovation in nanotechnology and sensor systems. Currently, Claudio leads AI implementation strategies at the senior executive level for FT Global 500 companies.
As an academic, Claudio has co-authored over 80 scientific and professional publications and is a co-inventor on patents covering semiconductors, optoelectronics, and photovoltaics. He holds a PhD and a Master’s in Microelectronic Engineering, as well as an Executive MBA from Solvay Brussels School of Economics and Management. Through his company YouRock.ai, his consulting work today centers on generative AI integration as a driver of growth and competitiveness for businesses.
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high AI & privacy awareness and compliance as a business priority by creating and implementing a AI & privacy strategy and policy.
Punit is the author of books “Be Ready for GDPR” which was rated as the best GDPR Book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts.
As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one’s value to have joy in life. He has developed the philosophy named ‘ABC for joy of life’ which passionately shares. Punit is based out of Belgium, the heart of Europe.
For more information, please click here.
RESOURCES
Podcast https://www.fit4privacy.com/podcast
Blog https://www.fit4privacy.com/blog
YouTube http://youtube.com/fit4privacy
Listen to the top ranked EU GDPR based privacy podcast...
EK Advisory BV
VAT BE0736566431
Proudly based in EU
Contact
-
Dinant, Belgium
-
hello(at)fit4privacy.com