Jan 5 / Punit Bhatia

How Can Organizations Seize the AI Opportunity?


In the ever-evolving landscape of artificial intelligence, businesses face unprecedented opportunities. How can organizations navigate the complexities and potential pitfalls to truly seize the benefits AI offers? Episode 103 of the FIT4Privacy Podcast, an insightful dialogue between Punit Bhatia, a renowned privacy expert, and Balaji Ganesan, CEO of Privacera, delves into this question. They explore the essentials of understanding AI, integrating data governance, and developing a responsible AI framework. The discussion is not just about leveraging technology, but about doing so in a way that aligns with legal standards, ethical considerations, and strategic business objectives.

Transcript of the Conversation

Punit 00:00

Can organizations seize the AI opportunity? Yes, AI is an opportunity. And AI also has risks. But to leverage the opportunity, you need to manage the risks. For that, you need to start in the right way: set up the right policy, create the right framework, and include the right stakeholders. When you do that, you're all set for AI, for leveraging it as an opportunity while also managing its risks. How do you go about it? What steps should you take? How do you start thinking about leveraging data, and about AI governance? For all this and more, we have a very interesting guest. His name is Balaji Ganesan, and he's the CEO of a company called Privacera. So let's go and talk to him.

FIT4Privacy 00:54

Hello and welcome to the FIT4Privacy podcast with Punit Bhatia. This is the podcast for those who care about their privacy. Here, your host Punit Bhatia has conversations with industry leaders about their perspectives, ideas and opinions relating to privacy, data protection, and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.

Punit 01:23

So here we are with Balaji. Welcome Balaji to the FIT4Privacy podcast.

Balaji 01:27

Thanks Punit. Thanks for the opportunity to be here and have a conversation. So I'm looking forward to it.

Punit 01:35

It's a pleasure. So let's start with a simple question. AI, or artificial intelligence, can be described in many words, in many ways, and with long, long descriptions. But when you think of AI, what's the one word that comes to your mind?

Balaji 01:51

I see opportunity. I think the one word that comes to my mind is opportunity: opportunity to advance humankind, opportunity to drive results and solutions that previously seemed impossible. AI has become much more prominent with the advent of generative AI, but AI has existed for a long while. If you remember the supercomputer IBM built called Deep Blue, it was playing chess with grandmasters back then, and even so it was not invincible. Obviously, there have been a lot more movies made about AI than articles. But AI has always been about harnessing human intelligence and doing it at scale: if you could combine all the strengths that humans have and build them into a machine, that could drive tremendous opportunities for us. We have been leveraging AI in many, many spheres without knowing it, through the advance of the internet and other technologies. But with the advent of ChatGPT and similar tools, things that seemed impossible a few years back have come to the forefront; people can now have a conversation that feels almost like talking to a human being. So I see AI making humankind better. AI is a huge set of opportunities. It's not something we should be fearful of. What we should be mindful of are the risks it brings, and we should be thinking about how to mitigate those risks, because we should not clamp down on AI out of fear. It's a huge advancement of technology, a huge amount of innovation for the next generation. I have a 15-year-old and a 12-year-old, and I'm excited to see that in their generation AI is going to be mainstream and is going to do tremendous things to aid people. So for me, AI is opportunity above everything else. But you've been in this space for a long time as well, so I'd love to hear your thoughts too.

Punit 04:19

Well, certainly. I think you said the two parts very clearly. AI is a huge opportunity for businesses, and that's what we need to leverage. But at the same time, like any other technology, like any other opportunity, it does create some risks, and we need to identify and mitigate those risks. If we do that in a balanced way, we are in a good space: businesses can leverage it a lot, and it can support humankind in a very big way. So if we say it's an opportunity, which we both agree on, why do businesses need to leverage it, and how can they? Because at the end of the day, AI feeds on data. One of the key themes of the last decade has been how we leverage data, how we manage data, how we govern data. And all of a sudden, for the last two or three years, we are talking about AI governance and AI data governance. So how can companies start thinking about AI and data governance combined?

Balaji 05:20

Yeah, if you look at it from the enterprise point of view: we are all excited about ChatGPT and its advanced features, but at the enterprise level they have to think about data, and they also have to think about compliance, security, and privacy. These are driven by external mandates, of course, depending on the industry you are in. We were talking to the chief data officer of a large bank; in financial services, compliance is unavoidable at the end of the day. You are bound by a lot of regulations around how you leverage data, and you have to do a lot of external reporting to regulators. It's similar in healthcare and other sectors. But there are also internal mandates: this data is not a free-for-all. We have always looked at data and AI through the same lens: it comes with opportunity, but you have to balance the risk. If organizations focus only on the innovation aspect, then ultimately it doesn't scale. In the enterprise world, "it doesn't scale" means it doesn't drive adoption; it quickly gets shut down if it is seen as a project that can introduce security or other risks. So when you think about innovation, and AI is a big sphere of innovation in the enterprise world, you really have to think about what we call enterprise readiness in the tech industry: governance, security, observability, operating at scale. Without that, any technology will struggle with adoption. If you're not harnessing the power of the technology, and it's only being used in one small pocket of the enterprise, the organization is not driving value out of it. This has always been the challenge with technology adoption in the enterprise world: which comes first, do you build security first or innovation first? In some cases innovation is a little ahead of the other aspects; in some cases it's balanced. But we are in a day and age where we cannot ignore that these are two parts of one equation. You cannot ignore one side and just assume that building an innovative culture and embracing an innovative technology will drive value; if you ignore the other part, it quickly gets shut down and doesn't get adoption. So there are huge opportunities for enterprises to drive value out of AI. Every industry, every company now has a mandate around some form of AI, especially with generative AI coming in; people have run hackathons and come up with hundreds of use cases. If you're the CEO of a large bank or a large financial services company, you feel you have lots of data, which is one of your moats and differentiators, and you are thinking about how to leapfrog the competition by leveraging technology. AI can certainly do that. We are on the cusp of an age where AI has become much easier than it was a decade back. You don't have to have an army of hundreds or thousands of data scientists and engineers to build every part of the infrastructure. Technology has advanced to a stage where you can get up and running fairly quickly; you can buy applications that embed AI. So the opportunity is certainly there.

But going beyond that, if you don't balance the risk part, and we talked about balance, this is an equation that has to stay balanced, then ultimately the technology doesn't get adopted in the company, and if it's not adopted, you're defeating the business purpose. So for many CDOs and CIOs, the puzzle they are always struggling to solve is how to balance risk mitigation with innovation, and many organizations today are thinking about it in exactly that way.

Punit 09:33

I think I would agree with that. The CIOs and CTOs are certainly struggling with it. But it's a business question first, because technology is an enabler; technology will solve the challenge once you know what you want to achieve for the business. The CEO and the executive team need to set their sights on what they want to achieve as a business, and then the CIO, the CTO, and maybe the chief AI officer, or whatever you call that role, can together get you the right technologies and help you achieve that business objective. Now, if we say that it's relevant and it's to be done, what are some of the first steps you would suggest companies take to start this journey towards AI?

Balaji 10:15

Yeah, I think the good part is that many companies already have an AI program in place. Like I said, most enterprises have been dabbling with machine learning or some level of AI. What we are seeing today is what they call a paradigm shift from traditional machine learning to generative AI, and the shift is twofold. One is that the technology has reached a level where it is much easier to start with models that have already been built; you can get going much faster than with traditional ML, where you had to set up a team and build everything yourself. So organizations today don't have to invest in a huge number of people to get started; the technology available today can be harnessed to deliver value very quickly. What they really need to think about is responsible AI, because generative AI, as much as it is a paradigm shift in the technology world, is also a paradigm shift for governance and security. You now have a technology that can generate its own data. Think about it: traditional ML will only do what it has been trained to do. If you tell the model to build a risk profile, it will only do that and nothing else, because that's what it has been trained on, and it will only tell you what is in the data it has been fed. Now you're moving to new models that have been trained on trillions of data points from the internet, and you're leveraging the power of that; it's almost like leveraging the power of a brain. You don't really know what it's been trained on, right? It can be trained on anything on the internet, and you're asking questions and it's coming back with answers. And not only that: you're also training the model on your organization's data, and it can come back with its own outputs. So it's a paradigm shift for governance and security, because now you have something that is not completely deterministic or static; it can be fairly open. You're dealing with a complexity that is getting closer to a human brain; it's almost like dealing with a human being, maybe, as somebody called it, a very smart human being. So you have to think about governance, security, and responsible AI in a slightly different way, and not out of fear. What we have seen in many organizations is fear of the unknown. Organizations need to take a step back and realize that you cannot shut down this incredible interest; you're going to fall behind your competition, you're going to fall behind in your business if you just hold back. So yes, AI is the way forward, but how do you do it responsibly? What many organizations have done is put together a steering committee and start building a framework for what responsible AI means for them, with the business, the technology side, and security all coming together at a higher level. You quickly build a framework for your organization: what do we want to achieve (to your point, it ultimately needs to tie to a business goal), how do we achieve it fairly quickly, and how do we balance the risk element? And that can be done in a variety of ways.

You can train your teams, put together policies and standards, and then adopt certain tools to support them. But without that overarching framework for how you want to do AI inside the company, and do it responsibly, organizations will struggle; if you do it in a piecemeal way, you may start with certain business units or certain business applications, but having that responsible AI framework is going to be crucial for the longevity of the effort. So what we always recommend is that organizations start by building that responsible AI framework early. In it, you outline the business goals, but also how you're going to meet your internal and external mandates, and you communicate that very clearly to the entire organization and set the standards. That sets up a foundation. As technology evolves, you have to evolve with it, but organizations need to take that step back and do this. Again: don't operate by fear, embrace AI, but establish a clear set of guidelines and principles early on. That's the first step, and then obviously there are a lot of other things you need to do to get it up and running. But I'd love to hear your thoughts as well, since you talk to a lot of CTOs and CXOs. What are they saying about this?
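To make the idea of "policies and standards plus tooling" concrete, here is a minimal, hypothetical Python sketch of a responsible AI framework expressed as a set of gates that every proposed use case must clear before it proceeds. The gate names, the AIUseCase structure, and the readiness_report helper are assumptions made purely for illustration; they are not Privacera's product and not anything prescribed in the episode.

```python
from dataclasses import dataclass, field

# Illustrative gates only: the specific criteria are assumptions for this
# sketch, not a standard and not something described by the guest.
REQUIRED_GATES = [
    "business_goal_defined",
    "data_sources_classified",
    "legal_and_privacy_review",
    "security_review",
    "human_oversight_plan",
]

@dataclass
class AIUseCase:
    name: str
    owner: str
    completed_gates: set = field(default_factory=set)

def readiness_report(use_case: AIUseCase) -> dict:
    """Report which framework gates are still open for a proposed use case."""
    missing = [gate for gate in REQUIRED_GATES if gate not in use_case.completed_gates]
    return {"use_case": use_case.name, "approved": not missing, "missing": missing}

if __name__ == "__main__":
    pilot = AIUseCase(
        name="customer-support-chat-assistant",
        owner="retail-banking",
        completed_gates={"business_goal_defined", "data_sources_classified"},
    )
    print(readiness_report(pilot))
    # -> approved: False, missing: legal_and_privacy_review, security_review, human_oversight_plan
```

The value of writing the framework down this explicitly is that every team can test a use case against the same gates, rather than interpreting a policy document differently in each department.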

Punit 15:32

I think it's more or less the same everywhere: everyone wants to leverage AI, but sometimes, or most times, they have this fear, this risk factor, what do we do about this, and they don't know how to go about it. So the best way is, as you said: set up a policy, set up a framework for how you want to operate, set some business objectives, then set up governance around it and give it some funding. Of course, if you're a very large corporate, you might set up a separate initiative or a separate entity so that the risk is contained; otherwise, you do it within a department. And when you do that, then you innovate. You try out a few models, try out a few ideas, and whatever works, you implement; you put it through the risk framework, and after the risk framework and due diligence, you have a good business case to go forward. But when you do the funding, be prepared that the results will not always be what you expected. Be ready for surprises, be ready for failures. That's another thing we're seeing in the market.

Balaji 16:34

So one interesting question, and maybe I'll run it by you, because it comes up a lot: how should organizations set this up? Should they do it command-and-control style and say everybody follows that? Driving innovation through command and control is very hard, right? And at the same time, you can't be completely decentralized and say everybody is on their own, figure it out. So I'd love to hear your thoughts, and I'm happy to share my thoughts on that as well.

Punit 17:09

Sure. I think there are two parts to it. Again, it's about balance. From a guideline perspective, or a command-and-control perspective if you want to call it that, there needs to be a policy and a framework saying: what do we want to do, what are the principles, how do we want to do certain things, and what funding is available? Then you set up a centralized AI department or team, if not a separate entity. That sets the ground rules. Then that team sets out what it expects each department, each business unit, to do, and there is freedom of choice about what ideas they have and what innovation they can do. There can also be ideation across the company, and then you select some ideas. That's one side of it. But you cannot only control it; how do you democratize it, and how do you make sure everyone feels they have participated and been consulted? You need to set up a governance structure in which everyone's views are incorporated. And when we say everyone, let's talk about a few key actors. Of course, the AI head has to be there as the orchestrator or chair, but they don't have to make the decisions; they facilitate decisions. Then you need legal counsel and privacy counsel at the table, because those are the two people who will usually say: you cannot do this, this is not done, I have a privacy concern, how do you manage sensitive data? You don't avoid those concerns; you put them on the table and address them collectively. Then the CFO, of course, because there will be financial implications, and you don't want to leave that until the end; do your business case together with these people. So you have a risk view, a privacy view, your CFO and CEO, and then HR, because there are also employee-facing issues. You put all of them together at the table, and of course, based on the organizational dynamics, there may be other people to include. You set up a committee wherein, if you're starting an initiative, you first put it up for discussion, get everyone's views heard, adapt, maybe add some constraints, validate and mitigate the risks, and then initiate it. Then there will be far more buy-in, because if you're launching an AI program with the blessing of legal counsel, risk, the CRO, the CPO, and so on, there is far more acceptability. And let everyone bring their ideas and have them evaluated against objective criteria. So you balance it: there is a command-and-control element steering which projects are done, and bottom-up you are engaging all the staff to bring in ideas and views. But I'm curious to hear your views as well.

Balaji 19:52

Yeah, I think so. We have spent a lot of time in this space, and my view is that data governance and data security cannot be completely centralized unless your organization is very, very small. If we're talking about a Global 2000 company, it cannot all be command and control. That said, being completely decentralized is not the way to do governance either. So there is a balance, which we call a federated model, or, the way I put it, federal laws and state or local laws. You need a set of federal laws that everybody adheres to, which are your organization-wide mandates, while states and local jurisdictions still have the independence to make their own decisions about their own data and their own business. You're going to have local sheriffs who can make decisions; you cannot always go back to one command-and-control function, because it doesn't scale. What organizations sometimes fail to appreciate is that governance cannot be designed in isolation; it has to be designed in the context of the business, and the business needs agility, speed, and value. In this case of responsible AI, organizations don't have the wherewithal to spend two years thinking about it; they just don't have the time. Yes, people would love to take a step back and have long discussions, but how do you deal with the situation where, at the CEO level, leaders want to move fast because the fear is being left behind the competition, and where customers are also starting to expect these experiences? If you're a retail company and your competitors are offering AI-powered experiences, you cannot be left behind.

So governance and the other pieces have to be designed around how you deliver value fairly quickly. You cannot do that with command and control alone; you can get started that way, but command and control is not scalable. So, as you outlined very articulately, and my philosophy is the same, you set the overarching principles and framework, do it fairly quickly, set up those federal laws fairly quickly, but then give ownership to the departments and business lines to make decisions within that overall framework. You cannot dictate every small bit of it, but the guidelines and framework give people flexibility to make decisions about their own data. We believe in that model. But it has to be done in a way where you can get value quickly and respond to the business, because business lines are not willing to wait two years, or even one year, for governance to be sorted out; they will go and implement things on their own. So the balance is always: how do you align with the business fairly quickly and show value? That's a puzzle we all need to keep improving on in the enterprise world, because otherwise, going back to it, adoption is key. If your organization's business lines are not embracing governance, if they don't see value, people don't adopt it; it just remains on paper, and you don't achieve the true level of governance that we know is needed.
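The federated model described above can be pictured as two layers of rules: organization-wide "federal" policies that always apply, and department-level "local" policies that decide the rest. The short Python sketch below is a hypothetical illustration of that precedence; the roles, data categories, and rules are invented for the example, and this is not an actual policy engine or any vendor's product.

```python
# Hypothetical two-layer ("federated") policy check. Organization-wide denies
# always win; each department owns the allow-list for its own data.
ORG_WIDE_DENIES = {
    ("intern", "customer_pii"),
    ("contractor", "payment_data"),
}

DEPARTMENT_ALLOWS = {
    "marketing": {("analyst", "campaign_metrics")},
    "risk": {("analyst", "customer_pii"), ("model_developer", "loan_history")},
}

def is_access_allowed(department: str, role: str, data_category: str) -> bool:
    """Apply the 'federal' rule first, then defer to the department's own policy."""
    if (role, data_category) in ORG_WIDE_DENIES:
        return False
    return (role, data_category) in DEPARTMENT_ALLOWS.get(department, set())

print(is_access_allowed("risk", "analyst", "customer_pii"))      # True: allowed by the risk department
print(is_access_allowed("marketing", "intern", "customer_pii"))  # False: organization-wide deny wins
```

The point of the structure is the one made above: the central team only defines what must hold everywhere, while each business line can still move quickly within those boundaries.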

Punit 23:43

I agree with you. And I think agility and speed are two of the constraints, especially for large organizations, so you need to do it in an agile manner. And if it's an initiative that is going to cut across the entire organization, or put your organization in a completely different light, a completely different branding, then there is no harm in setting up a small subsidiary. Let it flourish, let it do its own independent decision-making; you give it funding, give it seeding, and let it work independently while you keep an eye on it, oversight rather than full insight into everything it is doing. That's a way to gain some speed. But in all of this, when we talk about AI governance, security, and privacy, where does Privacera fit in? How does it solve these challenges?

Balaji 24:29

Yeah, so just a brief background on Privacera: we founded it in late 2016 around the mission of leveraging data responsibly. This is my second startup with my current co-founder. My previous startup was in the data security and data governance space, but in the context of big data and big data analytics. At the time, this was before cloud had really taken over, we were solving this in the context of a technology called Hadoop, where lots of people were trying to leverage data but security and governance were a challenge. We built a technology to help enterprises understand their data and also put in controls so they could manage, transparently, who can do what within the organization, what data can be analyzed by whom. That technology was acquired by another big data company called Hortonworks and open-sourced into a project called Apache Ranger, and Privacera was built on the foundation of those open source projects, Apache Ranger and Apache Atlas. What we have really seen, in terms of trends, is the explosion of data with the advent of cloud, and now with generative AI it continues. We are in, and continue to be in, a golden age of data, where data is going to drive a lot of business value, and what we have helped organizations do is balance the risk side of that. As I said at the start, most organizations are dealing with some level of mandate, internal or external. It used to be mainly what we call regulated industries, but with the advent of privacy laws, GDPR, the California privacy law, anybody dealing with consumer data has to think about it and make sure data is used for the right purpose, or only for certain things. Doing that across the organization has been the constant challenge, and most companies ended up putting in controls in every place where data is touched, which leads to a lot of band-aids and friction. That is where Privacera comes in: we typically help balance both sides of the equation, really fast use of data, but with those guardrails in place. Our platform was architected and built for large-scale data environments, and we thought hard about how to put in controls without impeding value and speed, because security and governance cannot exist in isolation; they have to be thought through in the context of the business. The controls could be understanding the data itself, what is in it and what is sensitive, then building a single place where you can define rules and policies around who can access what data, and then continuously monitoring who is doing what. But we have enforced these controls in a way that is very transparent to the end user, whether that user is a data scientist, an analyst, or a third party. The end result is that it is transparent to end users. I like to say the best security and governance tools are the ones no one talks about in the organization, because they are that transparent.

We have put in a lot of effort, but there is still a lot of work organizations need to do to keep up with the innovation that is happening, because data has gone beyond traditional data and AI to generative AI. When we look at generative AI, I call it a paradigm shift, but the principles are the same. Where Privacera can help is in understanding the data, then, based on your role, based on who you are, applying a certain level of controls, and then giving visibility into who is doing what. We apply those things in a transparent way for generative AI as well, so that any end user can use any model, any application, while we provide the controls and the visibility for the organization. For a CIO or a security officer, we can shine a light on what the risks are and give them the ability to put controls in place very seamlessly across the data estate, but in a way that doesn't impede the business use of data. Because at the end of the day, that's the balance: you have to align with the business use of the data, but you have to put guardrails in place that are almost transparent to the end user, and do it in a way that runs at scale. Privacera, again, is a technology platform layer built to work across data, whether it's on premises or in the cloud, and no matter which models or applications you use. Organizations get consistent governance, but they also get that transparent enforcement, so end users can do whatever they need to do, and the governance is taken care of automatically behind the scenes.
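As a rough, hypothetical illustration of the pattern described here (discover and tag sensitive data, decide access centrally by role, enforce transparently, and keep an audit trail), the Python sketch below masks unauthorized columns and records each access. The tags, roles, and masking rule are invented for the example; this is not Privacera's or Apache Ranger's API.

```python
from datetime import datetime, timezone

# Hypothetical column tags produced by a sensitive-data discovery step.
COLUMN_TAGS = {"name": "pii", "email": "pii", "balance": "financial", "segment": "none"}

# Central policy: which tags each role may see in the clear.
ROLE_CLEARANCE = {"data_scientist": {"none", "financial"}, "support_agent": {"none", "pii"}}

AUDIT_LOG = []  # in a real system this would feed a monitoring pipeline

def masked_view(role: str, row: dict) -> dict:
    """Return the row with unauthorized columns masked, and record the access."""
    allowed = ROLE_CLEARANCE.get(role, {"none"})
    view = {
        column: (value if COLUMN_TAGS.get(column, "none") in allowed else "***")
        for column, value in row.items()
    }
    AUDIT_LOG.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "columns": list(row),
    })
    return view

row = {"name": "Jane Doe", "email": "jane@example.com", "balance": 1250.0, "segment": "gold"}
print(masked_view("data_scientist", row))
# {'name': '***', 'email': '***', 'balance': 1250.0, 'segment': 'gold'}
```

The user simply queries the data and gets back whatever they are entitled to see, while the masking and the audit trail happen behind the scenes, which is the "transparent to the end user" property emphasized above.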

Punit 30:09

That's very interesting. And how can people learn more about Privacera if they want to?

Balaji 30:14

Absolutely. You can reach out to us at privacera.com; there are a lot of resources on the website, including material on how to govern everything from traditional data environments to generative AI, plus a lot of blogs and podcasts people can take advantage of. Governance is something we believe is still at the beginning of its journey in many organizations. No organization will say it doesn't need governance, but getting from where you are to where you need to be is where most organizations need help, and we certainly have resources for that. If you're a technologist trying to solve this problem for your own data estate, there are product tools and downloads you can take a look at, or you can ask questions through our Contact Us form and we will get back to you. I'm also available on LinkedIn; there is a lot of material there for Privacera, and you can always reach out to me on LinkedIn as well.

Punit 31:25

That's perfect. So with that, I would say it's time to say thank you so much for your insights. It was a wonderful conversation, and I hope people will learn a lot from it and will contact you.

Balaji 31:38

Thank you so much. Thanks for the opportunity to be here. It was very exciting, I enjoyed the conversation, and I look forward to further feedback and questions from all of you.

Punit 31:49

Sure. Thanks, Balaji.

FIT4Privacy 31:51

Thanks for listening. If you liked the show, feel free to share it with a friend and write a review. If you have already done so, thank you so much. And if you did not like the show, don't bother and forget about it. Take care and stay safe. FIT4Privacy helps you to create a culture of privacy and manage risks by creating, defining and implementing a privacy strategy that includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPP/E, CIPM, and CIPT through on-demand courses that help you prepare and practice for the certification exam. Want to know more? Visit www.FIT4Privacy.com. That's www.FIT4Privacy.com. If you have questions or suggestions, drop an email at hello@FIT4Privacy.com. Until next time, goodbye.

Conclusion

Embracing AI in today’s business environment requires more than just technological readiness; it demands a holistic approach encompassing data governance, security, and ethical use. As Balaji Ganesan and Punit Bhatia articulate, the journey toward AI adoption is fraught with challenges but also ripe with opportunities for innovation and growth. By establishing a responsible AI framework, organizations can not only mitigate risks but also enhance their competitive edge. The insights from this episode underscore the importance of a balanced approach, where technological advancement and governance go hand in hand, paving the way for a future where AI and human ingenuity create unparalleled value together.

ABOUT THE GUEST 

Balaji Ganesan is CEO and co-founder of Privacera. Before Privacera, Balaji and Privacera co-founder Don Bosco Durai also founded XA Secure. XA Secure was acquired by Hortonworks, which contributed the product to the Apache Software Foundation, where it was rebranded as Apache Ranger. Apache Ranger is now deployed in thousands of companies around the world, managing petabytes of data in Hadoop environments. Privacera's product is built on the foundation of Apache Ranger and provides a single pane of glass for securing sensitive data across on-prem and multiple cloud services such as AWS, Azure, Databricks, GCP, Snowflake, and Starburst, among others.

RESOURCES 

About Punit Bhatia

Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organizational culture with high AI & privacy awareness and compliance as a business priority, by creating and implementing an AI & privacy strategy and policy.

Punit is the author of the books "Be Ready for GDPR" (rated as the best GDPR book), "AI & Privacy – How to Find Balance", "Intro To GDPR", and "Be an Effective DPO". Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured amongst the top GDPR and privacy podcasts.

As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named 'ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe.

For more information, please click here.

Listen to the top-ranked, EU GDPR-based privacy podcast...

Stay connected with the views of leading data privacy professionals and business leaders in today's world on a broad range of topics like setting up global privacy programs for private sector companies, the role of the Data Protection Officer (DPO), the EU Representative role, Data Protection Impact Assessments (DPIA), Records of Processing Activity (ROPA), security of personal information, data security, personal security, privacy and security overlaps, prevention of personal data breaches, reporting a data breach, securing data transfers, Privacy Shield invalidation, the new Standard Contractual Clauses (SCCs), guidelines from the European Commission and other bodies like the European Data Protection Board (EDPB), implementing regulations and laws (like the EU General Data Protection Regulation or GDPR, California's Consumer Privacy Act or CCPA, Canada's Personal Information Protection and Electronic Documents Act or PIPEDA, China's Personal Information Protection Law or PIPL, India's Personal Data Protection Bill or PDPB), different types of solutions, even new laws and legal frameworks to comply with privacy law, and much more.