Jun 20 / Punit Bhatia and Nick Shevelyov

The CISO Role in the Age of AI

In an era where artificial intelligence is transforming every facet of our lives, how can organizations and regulators ensure that AI systems remain secure, trustworthy, and respectful of privacy? As AI technologies evolve rapidly, the risks associated with data misuse, cyber threats, and ethical dilemmas increase exponentially. This raises a critical question: What role should standards organizations like the ISO play in governing AI’s impact and safeguarding society?

California, often seen as a bellwether for tech innovation and data privacy, is leading the charge with new legislation such as SB 468, the Artificial Intelligence Security and Protection Act. This bill aims to impose stringent security requirements on high-risk AI systems that process personal data. Meanwhile, experts in cybersecurity and AI are drawing from lessons across history and global frameworks, such as the EU's AI Act, to chart a path forward.

In this discussion, Nick, a seasoned cybersecurity strategist, shares his insights on how building digital trust through standards, governance, and strategic security measures is essential. He also references his book, “Cyber War and Peace,” which uses historical analogies to help leaders understand the urgency and complexity of managing cyber risks in the age of AI.

Transcript of the Conversation

Punit 00:00
 The CISO role is challenging in itself, and in the age of AI, it becomes even more challenging. It would be a fascinating conversation to understand how the CISO role is changing in the age of AI. What specific risks are coming up? What challenges are coming up? How is the CISO managing communication with the board? How are they managing these new risks, like deep fakes and hyper-focused attacks? To learn about this, how about talking to someone who has been a CISO, who has been a Chief Privacy Officer, has written a book, and is now serving as a virtual CISO and advisor to CISOs? I'm talking about none other than Nick Shevelyov. I hope I pronounce his name correctly. Let's go and talk to him, and also learn about his book, Cyber War and Peace.
FIT4Privacy Introduction 00:59
 Hello and welcome to the FIT4Privacy Podcast with Punit Bhatia. This is the podcast for those who care about their privacy. Here, your host, Punit Bhatia, has conversations with industry leaders about their perspectives, ideas, and opinions relating to privacy, data protection, and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.

Punit 01:27
 So we are with Nick. Nick, welcome to FIT4Privacy podcast.

Nick 01:32
 Thanks, Punit. Glad to be here.

Punit 01:34
 It's a pleasure to have you. And let's start with a simple, basic question. We are in the age of AI. Everything is digital now; even this podcast, we are recording digitally. So, in this world of digital, tech, and AI, how would you define the concept of trust, especially these days when we are calling it digital trust?

Nick 01:56
 Yeah, it's a question that comes up on a regular basis. Last night I attended a dinner with risk leaders and this topic came up, because the definition is changing. Historically, we want transparency and traceability for validation of identity, of the people we're talking to. Classic identity and access management, as defined by Gartner, is the right people having the right access to the right resources at the right time, and you can now extend that to machine identities as well. Managing principles of least privilege is the application of what we call zero trust. So you have these principles about managing trust, and then how do you define when someone becomes trustworthy? Especially in this digital age, where deep fakes have gone from a theory to a reality and are being involved in the breaking of trust, with examples including lots of fraud created through deep fakes. So, defining trust in the digital age has become more complex and more nuanced. One of the people I sat with yesterday at dinner said, you know, I'm getting to the point where I only trust the conversations I have with people in person, because so much is being manipulated in the digital age. So my opinion at this point is: where are the known knowns that I have validated? And I'll tie this to the concept of swan risk. White swans are known known risks that we need to deal with, and gray swans are known unknown risks. Black swans, as popularized by Nassim Taleb in his book approximately 15 years ago, are unknown unknown risks. We don't know what we don't know about them; when they happen, we can only recover from them. And then finally, I wrote about this in my book, Cyber War and Peace: there are red swan risks.
Known knowns that just aren't so, and you need to validate and have assurances, because those are the trust areas that leave you bloodied and bruised. In cybersecurity and data privacy, it's the known known controls that fail and don't act as intended that result in the biggest risks to your organization. So, that's my view of digital trust in the era of AI, Punit.
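The swan taxonomy Nick describes can be sketched in code. The mapping below is an illustrative reading of his definitions, not anything from his book; the function name and the three boolean axes are my own framing.

```python
from enum import Enum

class Swan(Enum):
    WHITE = "known known"               # identified risk, behaves as expected
    GRAY = "known unknown"              # identified risk, impact uncertain
    BLACK = "unknown unknown"           # unforeseeable; can only recover
    RED = "known known that isn't so"   # assumed control that silently fails

def classify(identified: bool, understood: bool, validated: bool = True) -> Swan:
    """Map a risk onto the swan taxonomy described in the conversation."""
    if not identified:
        return Swan.BLACK
    if not understood:
        return Swan.GRAY
    # Identified and understood -- but has the control been validated?
    return Swan.WHITE if validated else Swan.RED

# A control everyone believes works but no one has tested is a red swan:
assert classify(identified=True, understood=True, validated=False) is Swan.RED
```

The point of the red swan case is that it looks like a white swan on paper; only validation (testing the control) distinguishes them.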

Punit 04:59
 Well, that's very well said. I like the aspect of transparency, traceability, and access, the basics, and then the definition with the white, black, gray, and red swans. The red swan is a new addition to my vocabulary, and so are the deep fakes. And if I can ask a follow-up on that one: now, in the age of AI, especially with these LLMs and AI agents that are coming up, where do you place them? How are they shifting trust, and are they red, black, or gray? What kind of swans? They're definitely not white, in my opinion.

Nick 05:39
 You know, I think it's an excellent way to frame the problem, and I think they span the gray, black, and red swans, quite frankly.

Punit 05:46
 Mm-hmm.

Nick 05:47
 And I think it's complex. We're all talking about AI and LLMs, and how they work and what's underneath the hood becomes more opaque. We're using these on a regular basis, in some cases to great efficiency, but what are the unintended consequences of that usage? Where are we creating risks, today or tomorrow, for ourselves? I think about the broad buckets. When you interact with software, you're giving information for some value to come out. It's interesting that in the technology world we've had secure software development for many years. We've had what's known as DevOps, developers working with operations to build and release code, and more recently we interjected DevSecOps: developers, cybersecurity, and privacy professionals working with operations to build code securely. I think there are analogies we can learn from the legacy DevSecOps world of building software for how we think about AI, because in a lot of ways AI takes data that you input and outputs something you can use in many ways; you can even turn it into software yourself. So some analogies apply. First, understanding the intended and unintended data collection that's occurring, right? Being clear on that; there's a lot of misunderstanding there. Second, understanding that once you input enough data into an AI, it can raise inferential privacy concerns, meaning you did not input the specific PII, but based on the information you've given it, it can infer certain data elements in a way that violates your privacy rights. And then there are others I can talk about: model inversion, data extraction. These are the types of new risks we have to think about, as you put it, in the age of AI. We're just learning to cope with the age of cloud, which came after the age of mobile, and now we're dealing with new complexities in the age of AI. And complexity is not just the enemy of security.
It's not just the enemy of data privacy. It's really the enemy of everything. So where can we make our lives a little simpler and have a better understanding of the risk imperatives involved in using these new technologies? I'll pause there and see if that makes sense.
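The inferential privacy risk Nick raises, where no single input is sensitive but the combination reveals something that is, can be made concrete with a toy example. The rule below is a hypothetical stand-in for a trained model, and all the fields and signals are invented for illustration.

```python
# Toy illustration of inferential privacy: no single field below is the
# sensitive attribute, yet combined they let a "model" infer it.
profile = {
    "purchases": ["prenatal vitamins", "unscented lotion"],
    "search_terms": ["morning fatigue"],
}

def infer_pregnancy(p: dict) -> bool:
    """Hypothetical rule-based stand-in for a trained inference model."""
    signals = set(p["purchases"]) | set(p["search_terms"])
    markers = {"prenatal vitamins", "unscented lotion", "morning fatigue"}
    # Two or more weak signals together yield a strong inference.
    return len(signals & markers) >= 2

assert infer_pregnancy(profile)  # a sensitive fact inferred, never provided
```

The design point: privacy controls that only filter explicit PII at input time miss this class of risk entirely, because the sensitive element never appears in the input.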

Punit 08:40
 It absolutely makes sense. And I think you touched upon these risks, the new technologies, and life being complex, there being complexity all around us. So, in this age of AI where there's so much complexity and new risks are coming in every day, ones we hadn't heard of, new ways of creating phishing, new ways of doing this and that, how are CISOs coping with this change in the AI era?

Nick 09:12
 Yeah, it's a fast-moving space. I'll typically ask this question at a CISO dinner of operators: what are the top new risks that you're seeing from AI? And what's interesting is that the very technology that empowers us may also imperil us. The paradox of AI is that the very architecture that empowers it, having access to lots of data, also imperils us, because you've siloed really sensitive data and given the AI access to it. So now that repository of data becomes the target of an attacker. The two biggest risks that come up are these. First, hyper-contextualized, hyper-focused phishing campaigns in the form of spear phishing. It used to be that a hacker would take time to write an email to a targeted person; there might be misspellings, and they had to do it manually. Now you can do it at scale with AI models. You can source information about your targets at scale, and so hyper-focused spear phishing at scale is something that practitioners are seeing. I sit on the board of directors of Cofense PhishMe, a phishing detection and protection company, and we're seeing a spike in this in our statistics as well. So that's one: phishing combined with vishing and other forms of exploitation, people calling you, people texting you, using the voice of someone you know and trust. The other category is deep fakes: someone recording just a few seconds of your voice, or recording you on a Zoom call, and then creating a deep fake to commit some form of social engineering, typically aimed at some sort of money transfer. A fraudster will create a Zoom call posing as a person of authority, they'll call into a money-transfer role, and they'll ask that person to move money. That seems to happen on a regular basis.
At yesterday's discussion we were talking about a case where the CEO of a company was spoofed through a deep fake on a Zoom call into the finance department to compel them to move millions of dollars. Right? And this stuff is happening all the time. So you have to start thinking about compensating controls. Are there safe words we need to start having between employees for money transfers? Are there other out-of-band authentication measures we need to put in place? Those are the new threats that CISO practitioners are living and breathing on a regular basis, and the two that come up most often.
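The compensating controls Nick mentions, safe words plus out-of-band confirmation for money transfers, can be sketched as a simple policy check. Everything here (the threshold, channel names, and data shape) is hypothetical, shown only to make the idea concrete.

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount: float
    requested_via: str        # channel the request arrived on, e.g. "video_call"
    verified_via: set         # channels used to independently confirm it
    passphrase_ok: bool       # did the requester know the shared safe word?

def approve(req: TransferRequest, threshold: float = 10_000.0) -> bool:
    """Toy policy: large transfers need the safe word plus confirmation on a
    channel *different* from the one the request came in on (out-of-band)."""
    if req.amount < threshold:
        return True
    out_of_band = bool(req.verified_via - {req.requested_via})
    return req.passphrase_ok and out_of_band

# A deep-faked Zoom call asking finance to wire $2M, "confirmed" only on
# that same call, fails the check:
assert not approve(TransferRequest(2_000_000, "video_call", {"video_call"}, True))
```

The key design choice is that verification on the same channel as the request counts for nothing, since a deep fake controls that channel end to end.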

Punit 12:18
No, it does make sense, but life is getting complex. When you say these attacks are becoming hyper-contextualized and hyper-focused, does it mean that what we were seeing as an advantage, like data classification and data segregation, now works against us? Because attackers know how your data is classified, segregated, and secured, they're not trying to get at all the data. They know very specifically which data, which element, which kind of person they want, and then they zoom in rather than going mass. Is that what you're saying?

Nick 12:55
 Yeah. And so, with the advent of people sharing more about themselves

Punit 13:02
 mm-hmm.

Nick 13:02
The Facebook phenomenon, the LinkedIn phenomenon: you can gather information on individuals more easily, and then you can craft targeted campaigns against those individuals that look a lot more authentic. So yes, you might have classified your data, you might have all the right hygiene in place, but the attack will now target someone with privileged access to that data and leverage their access, or socially engineer them to make some sort of a money transfer.

Punit 13:41
 mm-hmm.

Nick 13:41
They move money out of an organization, and those are the types of attacks we see. You know, I was a bank CISO for many years at a global, publicly traded bank, and we dealt with threat actors that would try to break into networks to authenticate. Remember, attackers don't break in; they log in. So they try to get credentials and log in, and then eventually they want to move money. They also used business email compromise: the email comes in, you think it's from your CFO or your CEO, and you act on it. They've now taken that sophistication much higher with voice recordings and video recordings that are spoofed and deep-faked. In fact, this was used in an attack approximately 18 months ago against a large casino in Las Vegas. It led to a ransomware attack, which led to a payout, right? So there are lots of different angles these attacks can manifest in, but ultimately they pose risk to the organization being targeted.

Punit 14:41
Okay, that makes sense. But then on one hand, the CISO has this challenge of increased threats and increasingly targeted attacks. On the other hand, the CISO also has the role of making sure the board is well informed and aware, and sometimes we see that boards are not so tech-savvy. How are CISOs managing this side of the story? On one side it's the technical security aspect, which is your domain, and it's new and constantly evolving; on the other side, you have stakeholders, especially at board level. How are you, or the people you are advising, managing this part of the challenge?

Nick 15:23
You know, I'll share with you that I've been a Chief Security Officer, a Chief Privacy Officer, and a Chief Information Officer and Security Officer. Now I run my own practice, VCSO AI, where I'm a virtual CISO and an advisor to multiple companies. The tactic I've used over the course of my career is that no matter what C-level role I held, I was always a Chief Translation Officer, translating technical risk into business risk. If you think about the typical board of directors, they are highly accomplished, seasoned individuals, but in a lot of cases with more of a background in finance. A typical board member will have financial acumen, but when you present technical risk to them in the form of cybersecurity risk, that can be lost in translation. So I think it's a requirement for the leader to translate the risk and help the board better understand what is being communicated. There are questions boards can ask themselves. Do we have an expert on our board of directors, someone in cybersecurity and data privacy who understands what's being presented? Can the board have meaningful discourse about these topics? Do we understand our blind spots? When we get behind the wheel of a car, we want to know where our blind spots are. Do we understand the very reports being delivered to us? A lot of these reports can be very technical; some CISOs will talk about vulnerabilities at the critical, high, medium, and low level. What does that actually mean? How does that translate into business risk? How is that going to impact top-line growth? Do we understand our risks? Do we understand our regulatory obligations if something bad happens? What is our incident response process? What is our authority and accountability to make decisions?
There's an element where you can quantify your value at risk for the organization, then you put controls in place, and then you try to measure each control's capability: Is it capable? Is it configured correctly? Do I have the right scope of coverage? If I have all that, I'm going to reduce my annual loss expectancy and my loss exceedance curve, but there's always going to be a degree of irreducible risk. What is that irreducible risk number, and have I underwritten that risk with insurance? That's what cyber insurance is for. Do the directors themselves have liability insurance through directors and officers insurance? And finally, here in the United States, the Securities and Exchange Commission compels publicly traded companies to define the materiality of a breach, and it's a good precedent for companies to think through what is material: what is the dollar amount, and how would a cyber risk translate into a dollar loss event, whether a hard-dollar or soft-dollar loss? Those are good hygienic questions to ask. And going back to my original point: translate this so that everyone can be aware and be part of the meaningful discourse, and not abdicate the decisions to some expert in a silo, but really have a collective interaction at the board level.
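The quantification Nick walks through (value at risk, control effectiveness, annual loss expectancy, and the irreducible remainder to underwrite with insurance) can be sketched numerically. The dollar figures and the simple multiplicative model below are illustrative assumptions, not a recommended methodology; real programs use richer models such as loss exceedance curves.

```python
def residual_risk(value_at_risk: float,
                  annual_event_probability: float,
                  control_effectiveness: float) -> dict:
    """Toy annual loss expectancy (ALE) model.

    ALE = value at risk x annual probability of a loss event.
    Controls reduce ALE; what remains is the irreducible risk a
    company might underwrite with cyber insurance.
    """
    ale_inherent = value_at_risk * annual_event_probability
    ale_residual = ale_inherent * (1.0 - control_effectiveness)
    return {
        "inherent_ale": ale_inherent,
        "residual_ale": ale_residual,   # candidate for insurance underwriting
    }

# Hypothetical numbers: $50M at risk, 10% annual event likelihood,
# controls that are 80% effective when capable, configured, and in scope.
r = residual_risk(50_000_000, 0.10, 0.80)
print(r)  # inherent ALE of 5,000,000.0; residual ALE of 1,000,000.0
```

Note how the model makes Nick's three control questions operational: a control that is not capable, not configured, or out of scope drives `control_effectiveness` down and the residual number up.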

Punit 18:51
 That's very well said, and you've given me a very good term to use in the future. I was with a client a few hours ago, and the person was asking, "We are legal; you are privacy. What is your role, and how do you help?" I told her that I'm the glue between the business and the legal counsel, and I help translate legal messages into business-understandable language. But I never thought of this term, Chief Translation Officer. Next time, I'm going to borrow it and use it, right?

Nick 19:23
 Excellent. Good. So, all of us are in this profession on one side or the other of the risk coin, and I think security and privacy are two sides of the risk coin. You can't have privacy without some degree of security, but the paradox is that too much security can infringe on privacy.

Punit 19:39
 Yep.

Nick 19:40
 And so how do you create the right balance, and how do you translate that? I like the idea you used, Punit, about being the glue. I've used the analogy of connective tissue through the organization. But yes, please be the Chief Translation Officer for your clients.

Punit 19:58
For sure. And now from a CISO perspective, because from a DPO perspective we have the laws that give us some leverage when we have to do certain things. From a CISO perspective, the legal space is buzzing: the EU has a few things, and I hear the US, or California, is also coming up with some. In that context, do you have any insights into what is going to happen? What direction is being taken in legal terms?

Nick 20:28
Yeah, a great point. California, where I reside, has typically prided itself on being at the vanguard of data privacy laws and regulations for the United States. Silicon Valley is here in California, and a lot of innovation, including a lot of these AI solutions, is coming out of Silicon Valley as well as other areas of the world, but Silicon Valley is certainly a hotbed for it. Right now there is a new law being proposed called SB 468, the Artificial Intelligence Security and Protection Act. The bill requires deployers of high-risk AI systems that process personal information to maintain a comprehensive information security program. In fact, I've been asked to testify before the Senate on the bill. The bill mandates written information security programs that include security managers, risk assessments, employee training, physical access restrictions for personal data, third-party oversight, regular monitoring and reviews, incident response procedures, and post-breach evaluation. So it really adds diligence and rigor to AI systems that are high risk. Now, there's always interpretation of laws, rules, and regulations, but I think the good news is that there is a direction we want to go in: we want to start implementing safety standards, which I think is healthy and ultimately good for our citizens. That's the latest one being proposed, and it's supported by the Transparency Coalition and the Electronic Privacy Information Center here in California. That's SB 468.

Punit 22:30
 That's interesting, SB 468. So, it's going to be basically a version of the EU AI Act. I'm based in the EU, so I see things from that dimension. The EU AI Act gives you the obligation to classify your systems as high, medium, or low risk, with consequent obligations; high-risk systems are, of course, subject to the highest number of obligations. You're saying it's similar: if your system is classified as high risk, you have the obligation to put an information security program and other security measures in place. That's interesting. But you also wrote a book in this field of CISOs and cybersecurity: Cyber War and Peace. What is this book about, and where can we find it?

Nick 27:56
Sure. Thank you. Cyber War and Peace: Building Digital Trust Today, with History as Our Guide was published in August of 2021, when I was wrapping up a 15-year career at Silicon Valley Bank. It was the bank of the global innovation economy; it has since been acquired by First Citizens Bank. At the time, we banked approximately 80% of all top-tier venture capital and private equity firms around the world. I had defended the bank for 15 years. My mission was to protect the bank of the innovation economy; my secondary mission was to help improve our clients' probability of success. So, in the early days of Palo Alto Networks, Zscaler, and CrowdStrike, I was there helping those companies as a design partner, helping them go to market and grow, and we used their products and services. BigID, a privacy company, we were an early adopter of. I decided I would leave the bank on my 15-year anniversary and help hire my successors. I published the book, which is something I had talked about at various conferences for ten years before that: let's take lessons from history and behavioral science and translate them into ways our leaders can comprehend cyber risk, security and data privacy risk. And I march my way through history. I start with a Latin phrase the Romans had: si vis pacem, para bellum. Those who wish for peace, prepare for war. If we want peace, we need to arm ourselves to some degree; we need to invest in the right controls. Then we walk through history and various stories and analogies, like the Code of Hammurabi in ancient Babylon. Hammurabi was the ruler of the Babylonian Empire, but he had a problem in architecture: buildings were collapsing and killing people. So he created a code, the Code of Hammurabi, one law of which said that if you build a building and it collapses and kills people, that would be your fate; they would collapse a building on you. All of a sudden, people had skin in the game.
They had incentives, right? Show me the incentive, and I'll show you the outcome. They started to build safer, more secure buildings, and this eventually contributed to what became known as the Hanging Gardens of Babylon, one of the seven wonders of the ancient world. The lesson here is that having security at the onset, within your architecture, means sound architecture will lead to better risk outcomes. The SANS Institute here in the United States publishes the sliding scale of cyber security, and in a nutshell, the more security you build in by design, the fewer security controls you have to bolt on later. Each chapter goes through a lesson from history that the reader can understand: how Napoleon fought the Battle of Austerlitz, how he used a weak flank, and how defenders can use weak flanks against adversaries. And then we wrap up with the Trojan War: how the Greek city-states spent ten years trying to break into Troy and couldn't. Finally, they created the Trojan horse and put soldiers inside it. Troy let it into the city, and at night the soldiers slipped out, opened the gates, and the Greeks ran into Troy and leveled the city. Every day, companies are letting in their own Trojan horses. So that's the book. It's available on Amazon. It was fun to write, fun to talk about, and hopefully a fun read.
Punit 26:56
 Absolutely. From what you're sharing, it captures the essence of history. I was watching a video last week that talked about how there's no peace without war and no war without peace, because these things have always coexisted. The moment you say "I want peace," you are saying you don't want war, and when you say you don't want something, by the law of the universe you are inadvertently invoking it. That's a very interesting conversation, and I agree with you that we need to treat this as a war. There is a cyber war going on at the moment; our assets, our services, and our data are getting attacked, and we need to find peace in that. Now, if someone wants to get in touch with you and says, "I want to connect with Nick," what's the best way to connect with you?

Nick 27:56
Sure, the website is www.vcso.AI, and you can also email me at vcso.AI. I operate a consulting and advisory business. We help companies think through big strategic initiatives in cybersecurity and data privacy, and we also help cybersecurity and data privacy product companies build better products, as design partners and go-to-market partners, building rev ops and building calculators that quantify the value to customers to sell into the market. It's been fun to do, it adds value to the community, and we always love to meet new people and have discussions on how we can help.

Punit 28:48
 For sure. I think it has definitely been fun talking to you and definitely a value add for our listeners. So, with that, I would say, Nick, thank you so much for your insights, your time, and your wisdom.
Nick 29:01
 Punit, thank you for having me. Great to talk. Really appreciate it.

Punit 29:05
 Thank you.

About FIT4Privacy 29:08
 Thanks for listening. If you liked the show, feel free to share it with a friend and write a review if you have not already done so. Thank you so much. And if you did not like the show, don't bother and forget about it. Take care and stay safe. FIT4Privacy helps you create a culture of privacy and manage risks by creating, defining, and implementing a privacy strategy that includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPP/E, CIPM, and CIPT through on-demand courses that help you prepare and practice for the certification exam. If you want to know more, visit www.fit4privacy.com. If you have questions or suggestions, drop an email at hello@fit4privacy.com.

Conclusion

As AI continues to reshape industries and societies, the role of the ISO and similar standards bodies becomes increasingly vital. They help create a unified framework for assessing risk, ensuring security, and enforcing compliance, much like the emerging California laws and the EU AI Act. However, technology alone cannot guarantee safety; the lessons from history remind us that building security from the ground up—embedding it into design and culture—is key to long-term resilience.

Nick’s perspective underscores that we are in a new kind of “cyber war,” where data and digital infrastructure are frontline assets. By combining strong regulatory frameworks, rigorous security programs, and historical wisdom, organizations can better prepare for the challenges ahead and foster a safer AI-powered future.

For those interested in exploring this critical topic further, Nick’s consulting work and his book provide valuable resources for navigating the evolving landscape of cybersecurity and AI governance.

ABOUT THE GUEST 

Nick Shevelyov has been called on at the earliest ideation stage to develop concepts for next-generation technology companies, ranging from Kubernetes security (StackRox / acquired by Red Hat for $400M) to cloud real-time software composition analysis solutions (Kodem / Greylock Series A), data-loss container technology (Bedrock Security / Greylock Series A), and shadow-data discovery (Laminar / Insight Ventures Series A). 

With a wealth of experience, Nick Shevelyov advises founders and CEOs on product development and go-to-market strategies. His guidance has been instrumental in increasing time-to-value propositions for companies like Pixee.ai, Quokka.io, Boostsecurity.io, ETZ, and more. He works with companies in a variety of industries at every stage, from seed to IPO and beyond. 

He consults with Private Equity firms, including Insight Ventures (also an LP) and FTV Capital, and sits on the Advisory boards of Forge Point Capital, Mayfield Fund, Evolution Equity Partners, Night Dragon, YL Ventures, and Glynn Capital. 

Mr. Shevelyov is on the Board of Directors of Cofense PhishMe and the Bay Area CSO Council (BACC), an invitation-only group of Chief Information Security Officers representing the largest companies headquartered in the San Francisco Bay Area. Following his time as CIO, Nick is an honorary member of the Blumberg Technology Council, where he participates in technology industry thought leadership collaborations. 

He is the author of “Cyber War…and Peace: Building Digital Trust Today, with History as our Guide” and is passionate about leveraging insights from history and behavioral science into technology development and risk management practices. 

Mr. Shevelyov holds an Executive MBA from the University of San Francisco and industry certifications, including Stanford's Strategic Decision & Risk Management, Harvard's Corporate Risk for Executives, and the CISSP, CISM, and CIPP/E. 

Punit Bhatia is one of the leading privacy experts. He works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organizational culture with high AI & privacy awareness and compliance as a business priority, by creating and implementing an AI & privacy strategy and policy.

Punit is the author of the books “Be Ready for GDPR”, which was rated the best GDPR book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. He is the creator and host of the FIT4PRIVACY Podcast, which has been featured among the top GDPR and privacy podcasts.

As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named 'ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe.


RESOURCES 

Listen to the top ranked EU GDPR based privacy podcast...

Stay connected with the views of leading data privacy professionals and business leaders in today's world on a broad range of topics like setting global privacy programs for private sector companies, role of Data Protection Officer (DPO), EU Representative role, Data Protection Impact Assessments (DPIA), Records of Processing Activity (ROPA), security of personal information, data security, personal security, privacy and security overlaps, prevention of personal data breaches, reporting a data breach, securing data transfers, privacy shield invalidation, new Standard Contractual Clauses (SCCs), guidelines from European Commission and other bodies like European Data Protection Board (EDPB), implementing regulations and laws (like EU General Data Protection Regulation or GDPR, California's Consumer Privacy Act or CCPA, Canada's Personal Information Protection and Electronic Documents Act or PIPEDA, China's Personal Information Protection Law or PIPL, India's Personal Data Protection Bill or PDPB), different types of solutions, even new laws and legal framework(s) to comply with a privacy law and much more.