Oct 11 / Punit Bhatia and James O'Brien

How to Leverage AI with a Human Touch


Imagine a world where customer service agents can effortlessly access the perfect response to any query, without spending hours searching through databases. Is it possible? In today's fast-paced digital age, customer service has become a critical factor in business success. However, the traditional approach of manually searching through knowledge bases and crafting responses can be time-consuming and inefficient. Enter AI-powered solutions like Ducky.ai, which are revolutionizing the way customer support teams operate. How can businesses leverage AI to enhance customer satisfaction, improve agent efficiency, and unlock hidden insights from their customer data?

Transcript of the Conversation

Punit 00:00
How do you leverage AI with the human touch? Yes, we talk about AI replacing humans, or we talk about AI being a threat to human existence, or things like that. But in reality, if you collaborate and use AI for assistance (that is, AI gives you an input, the human looks into it, and then it's sent or used for whatever purpose the human intended), it improves the efficiency and efficacy of the process, and of the human as well. And that's what we are going to talk about in this episode. And we have none other than somebody who's actually practicing what we are talking about, in the form of a product towards consumers, helping them have better efficiency and better reliability of customer care, and also providing them with better responses and better context on what elements to look at. I'm talking about none other than James O'Brien, who's the founder and COO at Ducky.ai. So let's go and talk to James.

FIT4Privacy 01:12
Hello and welcome to the FIT4Privacy podcast with Punit Bhatia. This is the podcast for those who care about their privacy. Here your host, Punit Bhatia, has conversations with industry leaders about their perspectives, ideas and opinions relating to privacy, data protection and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.

Punit 01:40
So welcome James. Welcome to Fit4Privacy podcast.

James 01:43
Thank you, Punit, great to be here. I really appreciate you having me.

Punit 01:47
It's a pleasure to have you. And as we talk about AI these days, let's start with something on which everyone has an opinion, and everyone sees it differently. How do you see AI, or define AI? Because there are many definitions.

James 02:02
I love, I love this question. I'll give you two very quick answers. The first is that AI is a really broad term, right? People have been using the term AI for a long time, and even if we think of things like, you know, algorithmic logic trees, that's AI, even though they're not cognizant by any means. Not to say that standard AI is these days anyway, but all I'm trying to say is that AI is a very blanket term. So typically, when we think about the way that people perceive AI today, it's really more an application of machine learning technology where, you know, compute is compute, and programs, these transformers, are actually, quote unquote, making decisions or predicting the next thing. And that's really the core of the technology: prediction.

Punit 02:49
Indeed, I think you said it right. There are many usages of AI, and many systems can be classified as AI, but people tend to overuse the word AI and confuse it with a lot of, I would say, traditional systems, which were doing the stuff, making some predictions, but not using the data to learn and so on, and still being classified as AI. So there's a lot of confusion in the market around it, but there's also a confusion, or a hype, around the kind of impact it will have on mankind or society. Because if you see the media, it's almost creating a hysteria or a challenge, but that's the job of the media. They love a sensation. They love to say NASDAQ is down and a recession is coming, so that people get scared, and when it's up, they like to say, oh, there's a boom and you should buy, and then next week there's a bust. It's the same thing with AI. So how do you see this impacting society, I would say?

James 03:47
Yeah. I mean, I could talk about this for days on end, but I'll keep it short. The first is that I do expect this to follow similar trends to previous industrial revolutions or technological revolutions, right? People talk a lot about the fear of AI taking people's jobs. We like to say over at Ducky: AI is coming for your tasks, AI is not coming for your job. But that does imply that AI will be doing some of the work that you do, and in a lot of people's cases, it's already doing some of the work that we do. So then the question becomes, what is left for people, and how do we make sure that we leverage our inherent skills as human beings, like reasoning, correlation versus causation, empathy, strategy, such that we are an augmentation to AI, and we're not just, you know, doing data entry, because that's something that AI can do very well. To wrap up my answer here, I do expect, over the longer term, for AI to create a, I would say, universal increase in gross domestic product. I think we can be more efficient, more proficient, more productive as a society with AI. But again, I do think that there will probably be some bumps along the way. The last thing is just to say that AI in its current form, and I think even AI in the form that might take shape in 10 years or so, will not be able to replace some core aspects of humanity, and those are important, and those are the things that I think a lot of us should be doubling down on, both as businesses in the roles that we construct, and as individual workers.

Punit 05:32
I would agree. I think a lot of the fear mongering is around people losing jobs, people being made redundant, or people no longer being relevant, and that's not going to happen. Since we made the technology, we will find better ways of using the technology to assist us and get things done in a better way. But while that will happen, I think there's also a fair element of risk that comes along if it's not appropriately used, and that's true of every technology. I mean, with electricity, you can burn yourself, or you can run your whole house with it. It's the same with AI: with the power that we have, there are some risks, and you need to use it responsibly. Would you have an opinion on the risks as well?

James 06:20
Yeah, for sure. I mean, again, there are so many. I think one, and this is a pretty hot-button topic in the world of customer service and support, is that some people will message us frequently saying, hey, we've got a bot that's good, but I don't want my customers to know that it's a bot, right? Like, I want them to think they're talking to a person. I think that, categorically, is a big risk, and not just because it's disingenuous and, I think, dishonest, but more importantly because, at least right now, with the way that AI functions, it's not as good as a human being, right? So you're gonna have your customers, to put it very simply, reaching out and communicating with something that they're just gonna think of as an incompetent person to a certain degree, right? Like, it can handle simple questions and even some more complicated workflows, but it can't handle everything, and it can't say, hold on, let me go research that or talk to my colleague and I'll get back to you. I mean, I guess it could if you programmed it, but that's not how most of them function. So I think the gap between expectation and reality is really dangerous, and it's something we like to talk about as honest AI.
And I think that those of us who work in the industry, specifically those who are, you know, incredibly well funded and creating some of these transformative, base-level language models, have a responsibility to ensure that things like privacy and intellectual property are protected, specifically on the side of, you know, creatives and artists. And I think it's also really important for us to educate people on whether they are interacting with, as I like to say flippantly, a robot. Leading people down an inappropriate path, thinking that they are talking to a human when they're not, is, I think, one of the more dangerous things, at least in the beginning of AI.

Punit  08:09
I think you're extending that aspect of responsible AI to honest AI: make it transparent that someone is interacting with AI, and let AI do its minimalistic job, rather than extending it to be a full chatbot which is trying to solve customer problems, because it won't. It can only do it, probably, in collaboration with or with assistance from a human. And that's where the real balance is. Is that right?

James  08:38
Yeah, I think that's right, and I know you're speaking back through a couple of things we talked about a moment ago, but I do want to make one caveat, which is that I'm not out here, you know, ragging on chatbots. Chatbots are good, and they can solve a very specific subset of problems incredibly well. But what I am saying is that there is an outer bound to the things that they're capable of solving. And we've all had this scenario, right? We're chatting with something, and it says, you know, here's my answer, was that helpful? And you're like, no. And then it tries again and says, here's my answer, was that helpful? And you're like, no, it wasn't helpful, right? So there needs to be that stopgap where all of a sudden the chatbot, or whatever the system is, says, hey, let me call in my human overlords here, because I think you need a little bit more help than I can provide. That's a part of the honesty that I'm talking about. And the other part of the honesty I'm talking about is also kind of a societal responsibility, as you were mentioning earlier, which is that it doesn't really do anybody any good to pretend that AI as a universal category is ready, you know, to take on something like the concept of AGI, right? We're just not there yet. Sure, maybe there are some secret government programs that I, as a peon, don't know about, but for the most part, the public perception of what AI can do can make people pretty scared, and it's just not categorically accurate, for most companies at the very least.

Punit 10:06
Well, that's fairly accurate, because AI by itself is not, at the moment, capable of doing things all by itself, but with some intelligence, some augmentation, some support, it can be a very good tool to enhance the productivity and improve the efficiency of what a customer support service, for example, can provide. Behind it, the technology, or the underlying component that we use, is what we call large language models, or LLMs. So, is that something that can be configured? Is that something that can be tweaked? Is that something that can be adjusted to make sure that it does the part which it is good at, and then refers to a human when it's necessary? Is that possible?

James  10:52 
Yeah, it's very possible. I mean, they're incredibly customizable. And I don't necessarily mean, like, fine-tuning billion-parameter models. The way to cobble them together to do very specific things, and kind of limit or make more specific their outputs, we like to refer to as harnesses, right? Just as an example: we can't go fine-tune OpenAI's primary LLM that we use on a day-to-day basis. There are different things we could do, like run a smaller fine-tuned open-source model on premise, but there are pros and cons to making those choices, as there are with anything. So, I'll just give you an example of how we structure some of it. On our side, we have a, you know, fundamental commitment to protecting user privacy and identity. And I'm not just talking about our clients' businesses, but our clients' clients, right? Very often, when a customer support email comes through, there will be names, there will be email addresses, there will be things of that nature that we don't actually need to feed to a large language model in order to do our job. Sometimes those names or email addresses are important identifiers for us, but they can be one-way hashed. They can be encrypted, such that if we ever do need to harken back to that piece of information, to say, oh, you know, this email came from James, then we can combine those two pieces of information to know where it came from. That's an important thing to be able to do, but those are things that we can do on our side before we ever send any information to a third-party LLM provider. And similarly, we use a structure called simplified state representation.
So, there are other pieces of a customer email, just as a very specific example for what we do, that we run on premise through a fine-tuned version of an open-source model that we use. We essentially append them, we summarize them, and then, once we need a larger, more capable large language model to do some of the work for us, that's when we engage with an outside third party. But at that point, we've already stripped out all the PII, we've already stripped out any sort of identifiable information, and then we're just using, you know, the technological prowess of a larger model than we can run in house to get that last, like, 10 or 20% of what we need out of it, whereas we've already done what we consider to be the bulk of the lifting on our side.
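The hashing-and-redaction step James describes might be sketched roughly like this. This is a minimal illustration under assumptions: the secret key, regex, and function names are invented for the example and are not Ducky's actual implementation.

```python
import hashlib
import hmac
import re

# Illustrative sketch: one-way hash identifiers so they stay correlatable
# in-house, and strip them from anything sent to a third-party LLM.
SECRET_KEY = b"per-tenant-secret"  # assumed; keeps hashes comparable but not reversible

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize(identifier: str) -> str:
    """One-way hash an identifier (keyed HMAC, so it can't be reversed via lookup tables)."""
    return hmac.new(SECRET_KEY, identifier.lower().encode(), hashlib.sha256).hexdigest()[:12]

def redact_email_body(body: str) -> tuple[str, dict]:
    """Replace email addresses with stable tokens before any third-party LLM call."""
    mapping = {}
    def _sub(match: re.Match) -> str:
        token = f"<user-{pseudonymize(match.group(0))}>"
        mapping[token] = match.group(0)  # kept in-house for later re-identification
        return token
    return EMAIL_RE.sub(_sub, body), mapping

redacted, mapping = redact_email_body("Hi, this is james@example.com about order 42.")
# `redacted` can now be summarized and sent onward; `mapping` never leaves
# the first-party boundary.
```

Because the hash is keyed and deterministic, the same address always maps to the same token, which is what allows "this email came from James" to be reconstructed in-house later.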

Punit 13:24
So, if I get it right, it works in stages, at a very high level. Stage one, you strip out all the unnecessary elements, and that's also a form of AI, or an LLM, that's used in doing so. And when that information, the PII, has been stripped off (let's take the example of an email), then the content of the email is analyzed, and there can be a use case around it, say answering some emails or helping customer support. Then a certain kind of LLM can get on top of that content and help create a response, a draft email.

James 14:03 
Yeah, very well stated. So essentially, one thing that we talk about a lot on our team is: proprietary information creates proprietary results. And obviously that's also where you can get into hot water, right? You don't want to send a bunch of business-proprietary information to a third party all the time. But feeding these LLMs, or harnessing them, with business-specific information lets you curtail what the output is. Just as an example, the type of information, or let's just say email response, that would be helpful for a manufacturing company is going to be very different from the type of email response that would be helpful for an e-commerce company selling, I don't know, mugs, right? But what you need from the third-party LLM, or really an LLM in general, is to generate that message, to generate the (in our case, English) verbiage that will go into the email response that you can send to a person. Before we send any of it to a third-party LLM, we've already done the hard work of finding the proprietary information, understanding what's true, what's accurate, what's relevant, then taking out the PII, and then saying, hey, we just need you to generate these couple of sentences in a way that is in line with the prompt that we're giving you. Then it comes back, and that's when we, you know, make use of these larger LLMs strictly for sentence and language generation.
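The two-stage flow James describes, resolving proprietary facts in-house and then asking a third-party model only for the wording, could be sketched as follows. All function names, parameters, and the `call_llm` interface are assumptions for illustration, not a real provider API.

```python
# Illustrative two-stage harness: vetted, PII-free facts are assembled
# in-house; a third-party LLM is used strictly for language generation.

def build_generation_prompt(company_tone: str, facts: list[str], question: str) -> str:
    """Assemble a prompt containing only verified, already-redacted information."""
    fact_lines = "\n".join(f"- {f}" for f in facts)
    return (
        f"You are drafting a customer support reply in this tone: {company_tone}.\n"
        f"Use ONLY these verified facts:\n{fact_lines}\n"
        f"Customer question (identifiers already removed):\n{question}\n"
        "Draft a short, polite reply. Do not invent details."
    )

prompt = build_generation_prompt(
    company_tone="friendly, concise",
    facts=[
        "Refunds are processed within 5 business days",
        "Order for <user-a1b2> shipped on May 2",  # pseudonymized upstream
    ],
    question="Where is my order?",
)
# draft = call_llm(prompt)  # hypothetical client; the provider never sees raw PII
```

The design point is that the model cannot leak what it was never given: the prompt carries only pre-cleared facts and pseudonymized tokens.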

Punit 15:30 
So basically, the LLM would give you a draft, or a first draft, for responding to a client email in a context. So if it's configured for a bank or a manufacturing company or an auto company, it depends, but it's very specifically configured. And once it's done that, it also gives you a prompt saying, do write an introduction or a salutation in this way. And that's where the human will come in and integrate the personal aspect or the cultural aspect, and then it looks like a more professional email, augmented by a human.

James  16:04 
Very well said, yeah. So anything that would be proprietary to an industry or an individual company, we handle that in house, and then we leverage third-party LLMs to actually draft the email. And when we get that back from OpenAI, in this case, I would say that 25 to 40% of the time the responses are really good with no editing, which is wonderful. Another 40% of the time they require, you know, a couple of words, maybe a sentence or two, of tweaking. And that also just comes down to personal preference, right? Some customer support representatives have their own flair or their own emojis that they want to use, and we always want to make sure they have space to do that. But more importantly, we think it's really important to make sure there's always a human being overseeing this type of work, at least in the very beginning, while we have time to fine-tune everything, so that you're not just saying, hey, cool, chatbot, email bot, respond to this customer, and then they get a response that makes no sense or is hallucinatory, and all of a sudden a tool that was supposed to bring productivity and greater customer happiness has just sent out a terrible response, and everybody's mad at you, right? And let's be clear, this will happen. AI is in its early days, and there's a lot of work that we all need to do around making sure that there's less hallucination and all this good stuff. But by keeping human beings in the loop, you can not only reduce the likelihood of things like that happening, you can also use those human beings to help train and refine the tooling along the way, such that it becomes better and better for individual companies.

Punit  17:40 
No, that makes perfect sense, but isn't that exactly the use case that you have implemented in Ducky.ai, your own company? You configure a customer service agent (an LLM, whatever we call it), it analyzes the incoming email traffic and prepares the email and makes it ready for a human to look at, so that the human effort of doing the standard work of analyzing the email, finding out which category it belongs to and what needs to be answered, is done. But of course, the human still has to review it and bring in the personalized touch and the responsibility around it.

James  18:18 
Yes, that's very well stated. That's one of the things that we do, and we do it for exactly the reason that you said, which is that very often in customer support, specifically for contracted support folks or for more junior support folks, there's what we refer to as the cold-start problem, where the customer email comes in and they don't necessarily know how to start. And typically, what happens in that case is they're looking through a library of macro responses that are stored in a tool like Zendesk, as an example. And very often those will be, you know, good, especially depending on the individual company; every company maintains its own macros. But very often they don't cover every scenario, and the macros still need to be tweaked for the specific answer to the specific problem at hand. So, by doing this, we allow a customer support person to open an email and immediately have a response right there that they could use to send to the customer: review it, approve it, change it, send it, whatever is within their workflow. But the other side of what we do, which is, I would say, more nuanced and also more time consuming than just responding to emails, or crushing tickets, as we lovingly call it, is the research process. As an industry average, every single customer support email that comes in requires about 20 minutes of research. And by that, I mean you're looking through your Google Drive, your Dropbox, your Confluence, Notion, Jira, Asana, right? Any of these connected tools where knowledge is kept within a company, and that information can be just about anywhere, because, as we like to say, customer support people live a very browser-based existence. You know, they have like nine to 14 tabs open at any given time, and looking between all of these disparate systems to find the right information is not only time consuming, but actually very taxing in terms of context switching, right?
You go from, oh, I'm ready to respond, to, oh no, I have to research, and then I have to change my mind around the way that this data schema works versus that data schema. So what Ducky does is it also reads the incoming email, and because it knows what types of problems your business sees, and it knows the way that you like to solve those problems in response via email, it will say, hey James, you know, here are two previous tickets where somebody else already solved this problem, or a very similar problem. We think if you read these emails back and forth, you'll know what's going on, and you'll be able to respond quicker with factual information. Or, said a little bit differently, if somebody is new, it will surface a standard operating procedure so they know how to, just as an example, handle a return or a refund. And then you can get even more granular, down to specific engineering bugs or features that have been requested, or customer-specific information like order volume or order history. So anyway, to wrap this response up, it really just gives you everything you need in one place to start on a response to a customer, and then hopefully do it a lot more quickly and a lot more efficiently, with better quality and fidelity of information.

Punit 21:09 
No, I can understand. So basically, it's two things being done simultaneously, assuming that's how it's working: take a few keywords from the email, identify some previous responses, and suggest them, saying this is what happened in these kinds of cases; but also propose a draft, saying this is how you can reply. But then it's up to the customer support agent to take all that into consideration, plus his or her knowledge, to actually frame a response from a human perspective.

James 21:37 
That's incredibly well said. Yeah, I think there are two things to expound on a tiny bit there. The first is that it's not just keyword searching; it's also doing something called semantic searching. So as an example: with keywords, you know, if I want to look for the word "utilize", and I misspell the word "utilize" when I'm searching, the references that get surfaced will not be accurate. But if I was to search for "make use of", it would still find me emails that use the term "utilize", because it's matching the intent or the context of what I'm curious about, not just the keywords that I'm searching for.
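The difference James is drawing can be shown with a toy comparison. Real semantic search gets its vectors from an embedding model; the hand-made vectors below are pure assumptions, chosen only to illustrate the cosine-similarity mechanics.

```python
import math

# Toy illustration of keyword vs. semantic matching. The vectors are
# invented for the example; production systems compute them with an
# embedding model.
EMBEDDINGS = {
    "utilize":     [0.90, 0.10, 0.00],
    "make use of": [0.85, 0.15, 0.05],  # similar meaning -> nearby vector
    "refund":      [0.00, 0.10, 0.95],  # unrelated -> distant vector
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction (same meaning, roughly)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def keyword_match(query: str, doc: str) -> bool:
    """Literal substring matching, the 'keyword' approach."""
    return query.lower() in doc.lower()

query, doc = "make use of", "utilize"
# Keyword search misses the connection entirely; semantic similarity finds it.
assert not keyword_match(query, doc)
assert cosine(EMBEDDINGS[query], EMBEDDINGS[doc]) > cosine(EMBEDDINGS[query], EMBEDDINGS["refund"])
```

The same mechanics also make the search robust to misspellings and paraphrases, since matching happens in meaning-space rather than on literal characters.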

Punit 22:25 
That's pretty much like human beings, because somebody who's smart, who understands communication, would not focus on the words. They would focus on the essence, or the intention, of what's being said, and then frame a response according to what the intention has been, rather than the words. Whereas, if you focus on the words, sometimes the person doesn't have the capability or the skill to articulate what exactly they want to say, so sometimes you have to go beyond the words, and that is what you mean by semantics rather than keywords.

James 22:55 
Again, you're a master summarizer, very well stated. And I think that's true, right? It's funny, I almost feel like when we get caught on keywords, a lot of the time we're just looking to disagree on something, whereas if you take a step back and you think about, all right, what is this person trying to say to me, you can usually find more empathy, and then you can usually also find a more logical way to understand and communicate about any given topic.

Punit 23:26 
And that's when the real communication happens. Before that, it's a tennis match of words.

James 23:32 
Yeah, which is a thing, right? I mean, we all know using different vocabularies is a thing. We have folks in the US, we have folks in the UK, we have folks who work in very different disciplines: engineers, designers, myself in operations and business development. And very often, the same acronym or the same word can be used by different people to mean different things, specifically culturally. It's one of the things that we've kind of had to learn the hard way, but also the fun way; we're better as a team because of it, because we do take the time to say, hey, this is what I'm talking about, anybody have any questions on these words? This is what I mean by these things. We were all together in person about a month ago, and we did a whiteboarding session specifically around this. The first day, we found we were debating minutiae a lot. The second day, we spent 10 minutes just defining our vocabulary, and the rest of the day was so much smoother. It was beautiful.

Punit 24:27 
Right. And when you talk about Ducky.ai, what kind of use case or problem does it solve from a customer standpoint? I know the use case, or the problem statement, as we said, in terms of being able to define a customer response, and that's where it's acting as an assistant. But how would a company say, I need Ducky.ai or something similar?

James 24:54 
Yeah, I mean, again, this is another topic I could talk at you about all day, but I'll keep it short. The thing that we see the most, and I'll take a step back to say this is also why we started the business: my co-founder Hong and I decided to build a company together before we really had a concrete idea. And as a result, we went out looking for an idea. So, we interviewed hundreds of people who work within various businesses, on various teams and in various roles. And like 9.5 people out of 10 said: knowledge is my biggest problem. I know the knowledge is there, but I don't know where to find it. I don't know who wrote it. I don't know if it's up to date. And looking for it is really, really time intensive. And very often, there's also a whole subset of knowledge that didn't used to get recorded, right? It was what happened on calls; it happened face to face. Now, most of this knowledge, which we refer to as tribal knowledge, lives on Slack or in other unstructured sources, but as soon as it gets Slacked to somebody, it's usually gone, right? It's gone to the ether. It's no longer usable, it's no longer findable, even though it's incredibly valuable. So, what we find our customers really enjoying more than anything about Ducky is the ability to find and make use of knowledge, regardless of whether it's conversational, whether it's tribal, whether it's in a very, very regimented standard operating procedure; it doesn't matter. It's for companies who have information all over the place, and who want to give their people the best access to the right information in live time, without having to go search for it.

Punit 26:28 
Okay, that's fairly clear and straightforward. Now, I think we've covered so much, so broadly, starting with what AI is and how to use it. If someone wants to get in touch with you, to talk about this concept of AI, or leveraging AI for customer support or in any other way, or using Ducky.ai, what would be the best way to reach out to you?

James 26:54 
Yeah, the best way to get in touch is via our website, which is www.ducky.ai (that's D-U-C-K-Y dot ai). We've got a contact form on there; there are plenty of ways to get in touch with us. Or, if you'd like to reach out to me directly: James@ducky.ai. Always in the inbox, and very happy to help however we can. And as Punit mentioned, not just on customer support and service things. Don't think you can only reach out if you are interested in working with us. We're always interested in meeting good people and hearing about fascinating problems, regardless of whether they're customer support related or not.

Punit 27:33 
That's wonderfully said. And with that, as time is of the essence, I would say thank you so much, James. It was wonderful to talk to you, and I look forward to the release of this episode.

James 27:44 
Thank you, Punit. I appreciate you.   

FIT4Privacy 27:47 
Thanks for listening. If you liked the show, feel free to share it with a friend and write a review if you have not already done so. Thank you so much. And if you did not like the show, don't bother, and forget about it. Take care and stay safe. FIT4Privacy helps you to create a culture of privacy and manage risks by creating, defining and implementing a privacy strategy that includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPP/E, CIPM and CIPT through on-demand courses that help you prepare and practice for the certification exam. Want to know more? Visit www.fit4privacy.com. That's www dot fit, the number 4, privacy dot com. If you have questions or suggestions, drop an email at hello(@)fit4privacy.com. Until next time.

Conclusion

As AI continues to evolve, the potential for transforming customer service is boundless. By embracing AI-powered solutions like Ducky.ai, businesses can empower their agents, deliver exceptional customer experiences, and gain a competitive edge in the digital marketplace. However, it's important to approach AI implementation thoughtfully, addressing potential challenges such as data privacy concerns and employee resistance.

By combining the power of AI with human expertise, businesses can create a more efficient, effective, and personalized customer service experience. The future of customer service is here, and it's powered by AI.

ABOUT THE GUEST 

James O'Brien has spent over a decade working in startups. He is the co-founder and COO of Ducky — customer support AI that works alongside humans to help them perform at their best.

Prior to Ducky, James was the COO of Nashville, TN-based crypto asset manager Valkyrie Investments. During his tenure, Valkyrie Investments launched the United States' second bitcoin futures ETF and grew to over $1B in assets. Before Valkyrie, James was the first hire at AltoIRA, a Nashville-based fintech specializing in self-directed IRAs for alternative asset investing. He helped scale the business from pre-seed through Series A — growing the team from 2 to over 100 individuals while focusing on customer support, partnerships and, later, crypto.

Apart from the land of startups, James is a singer and fledgling piano player — he moved down to Nashville, TN over a decade ago singing in a band. He loves cooking, reading (primarily fantasy novels), yoga and spending time with friends + family. 

Punit Bhatia is one of the leading privacy experts, who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high AI & privacy awareness and compliance as a business priority, by creating and implementing an AI & privacy strategy and policy.

Punit is the author of books “Be Ready for GDPR” which was rated as the best GDPR Book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts.

As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named 'ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe.


RESOURCES 

Listen to the top ranked EU GDPR based privacy podcast...

Stay connected with the views of leading data privacy professionals and business leaders in today's world on a broad range of topics like setting global privacy programs for private sector companies, role of Data Protection Officer (DPO), EU Representative role, Data Protection Impact Assessments (DPIA), Records of Processing Activity (ROPA), security of personal information, data security, personal security, privacy and security overlaps, prevention of personal data breaches, reporting a data breach, securing data transfers, privacy shield invalidation, new Standard Contractual Clauses (SCCs), guidelines from European Commission and other bodies like European Data Protection Board (EDPB), implementing regulations and laws (like EU General Data Protection Regulation or GDPR, California's Consumer Privacy Act or CCPA, Canada's Personal Information Protection and Electronic Documents Act or PIPEDA, China's Personal Information Protection Law or PIPL, India's Personal Data Protection Bill or PDPB), different types of solutions, even new laws and legal framework(s) to comply with a privacy law and much more.