The AI World
This episode puts AI to the test, comparing and evaluating US and EU regulatory strategies, investigating how AI systems might reinforce bias in anything from employment practices to social media algorithms, and navigating the murky waters of accountability and transparency. Could blockchain technology hold the key to a future in which ethical and responsible use of AI is possible? Can we use AI to our advantage, or is it destined to serve as our future's judge, jury, and executioner?
Transcript of the Conversation
Punit 00:00
The AI world is full of challenges. On one hand, regulators are
challenged on what position to take: do they embrace AI, or do they take a
passive stance, let it develop, and then regulate? Then we have the EU AI
Act, which is creating some concerns among companies. Then we have the
transparency challenge as the technology comes up. How do you ensure
privacy? How will AI impact it? What will blockchain do? And then there's also
the question of how you can use technology to create transparency. And how do
companies embrace AI? Do they adopt it because of the fear of missing out? Do
they take a slow start, saying, don't use AI if we don't have to? What should
they do? Well, all these are fascinating questions. And to talk about them, we
have none other than Elena Gurevich. She's a lawyer who is fascinated by this
technology world: the IP world, the AI world, and also blockchain. We're going
to talk to her about all that. Let's go and talk to her.
FIT4Privacy 01:19
Hello, and welcome to the Fit4Privacy podcast with Punit Bhatia.
This is the podcast for those who care about their privacy. Here your host
Punit Bhatia has conversations with industry leaders about their perspectives,
ideas and opinions relating to privacy, data protection and related matters. Be
aware that the views and opinions expressed in this podcast are not legal
advice. Let us get started.
Punit 01:48
So here we are with Elena. Elena, welcome to the Fit4Privacy
podcast.
Elena 01:53
Hi, Punit. Thanks for having me.
Punit 01:56
It's a pleasure to have you. And in your work as a lawyer, what
fascinates you, or what attracts you towards this IP and blockchain world?
Elena 02:06
You've got to mention one more magic word. I mean, two letters,
essentially: AI, right? Because that's pretty much what everyone has been
buzzing about for the past year and a half. I mean, I've been into it a little
longer than that. What's not fascinating about it? The blockchain and AI space,
how they intersect with intellectual property, and what that means for digital
art, intellectual property rights, and digital art's provenance, for that
matter. How it all interplays, and how it might benefit digital artists to use
blockchain technology when it comes to art provenance. I've known a lot of
artists, and there are a lot of initiatives out there to foster this culture of
digital authenticity and provenance, using blockchain technology as a means to
do that. And I think it's fascinating: some of the artists are trying to take
matters into their own hands, considering there's no real comprehensive
legislation on the topic. There are a lot of interesting projects out there
building this structure for how you can trace the provenance and authenticity
of a particular image, video, or recording online. That's fascinating, and as
an attorney, obviously, that's a very interesting topic to me.
Punit 04:04
For sure. So let's maybe go block by block. When we talk about
artificial intelligence, you mentioned there's no comprehensive legislation.
I know the adoption, or embracing, of AI regulation has not been great.
Elena 04:19
Yeah.
Punit 04:19
Some countries are being very enthusiastic and proactive, some are
being reactive. What's your view on that? What's happening in terms of adoption
or evolution of AI, especially in terms of regulation? Of course, there's the
EU AI Act,
Elena 04:32
Yeah.
Punit 04:32
but the rest of it. Let's talk about that first.
Elena 04:36
Yeah. So, obviously, when we're talking about comprehensive
AI regulation, the EU comes to mind, with the Act that was just recently
unanimously voted on by all the member states, and we're very, very near to it
being in force, right when it gets published in the Official Journal. As for
the rest of the world, and I'm not talking about the EU right now: in the US,
there's no such piece of comprehensive AI regulation. We pretty much have this
patchwork of different bills and proposed bills that are very targeted, very
specific. Each and every one of those proposed bills targets a particular issue
that the authorities want to tackle when it comes to AI, be it elections or
deepfakes and things like that. There's nothing, I would say, really
fundamental. We recently had, I think just last week, the Colorado AI bill,
which so far I would say is the most comprehensive of them all. It's
specifically targeting algorithmic bias, and it's a very interesting piece of
legislation. It's very comprehensive, it addresses a lot of points, and reading
it, you can tell it took a lot of very useful ideas from the AI Act. It's going
to be very interesting how it comes along; I believe it's supposed to come into
force in February 2026. So yeah, definitely going to be interesting. But again,
it's state by state, right? It's going to be the Colorado law; in New York, by
that time, they may be doing something else. So in the absence of comprehensive
federal regulation, the states are coming up with regulations of their own.
And, as usual, it goes sideways at this point, and there's a lot of lobbying as
well. A lot of people in the industry are not happy with this, and the lobbying
is sometimes very successful. A few of the bills were taken down recently...
Punit 05:58
Lobbying to get the law, or lobbying to avoid a law?
Elena 07:22
Lobbying to make these proposals as lax as possible.
Punit 07:27
Okay.
Elena 07:27
So the main idea is: we don't want to impede our research and our
innovation, the US needs to be the first one in the world, and all that, so
let's just not play with overregulating anything. The main theme being: let's
just leave this the way it is, let's not interfere, and let the innovators
innovate, I would say.
Punit 08:01
Interesting, because if we come to the EU side, there the story is:
we want responsible innovation, that's the term they use. We want to foster
innovation, but in a responsible and human-centric way. And that's why they
have come up with what we call the EU AI Act. I don't know how you would go
about explaining it in two or three minutes, because it's comprehensive, as
most people say.
Elena 08:29
Yeah. Yeah. It's a very
comprehensive piece of legislation. If you want to get a sense of how
comprehensive it is, you can read it; it's a lot of pages. Warning: a lot of
pages. But it's a risk-based framework, basically, that addresses AI systems
and puts them into different categories of risk: unacceptable risk, high risk,
limited risk, and minimal risk, depending on their use. For high-risk
applications, companies have strict obligations, which include rigorous
testing, documentation, and oversight. And there are, of course, transparency
and accountability requirements as well, for AI operators, AI developers, and
deployers. The EU AI Act also bans certain AI practices altogether, such as
real-time biometric identification and social scoring; there are a few
exceptions to that as well, and we can talk about them later. But overall, it's
a very, very comprehensive piece of legislation that a lot of people are saying
might have the Brussels effect, as they say, on other countries and the way
they tackle the problem of regulating the AI space. But I guess we'll see,
because, as we already mentioned, it's just been voted on, and it's going to be
coming into force in stages; by 2026, it will be in force completely. So again,
we'll see how it's going to go. And the AI Office will be the one in charge of
enforcing those rules and obligations. So it's going to be very interesting how
things unfold, definitely exciting times.
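The risk-based structure Elena describes can be sketched in code. This is a purely illustrative model: the four tier names follow the Act, but the obligation summaries and the use-case-to-tier mapping below are simplified assumptions for illustration, not a legal mapping (the real Act assigns categories via detailed lists in its annexes).

```python
# Illustrative sketch of the EU AI Act's four risk tiers.
# Tier names follow the Act; obligation summaries are simplified.

RISK_TIERS = {
    "unacceptable": "banned outright (e.g. social scoring; narrow exceptions exist)",
    "high": "rigorous testing, documentation, and human oversight required",
    "limited": "transparency duties (e.g. disclose that users face an AI system)",
    "minimal": "no AI-specific obligations beyond existing law",
}

# Hypothetical example mapping, for illustration only.
EXAMPLE_USE_CASES = {
    "social_scoring": "unacceptable",
    "cv_screening": "high",        # employment uses are treated as high-risk
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def obligations_for(use_case: str) -> str:
    """Return the tier and obligation summary for a known example use case."""
    tier = EXAMPLE_USE_CASES.get(use_case, "minimal")
    return f"{tier}: {RISK_TIERS[tier]}"

print(obligations_for("cv_screening"))
```

The point of the tiered design is exactly what the conversation turns to next: most systems fall into the lighter tiers, so obligations concentrate on the high-risk minority.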
Punit 10:54
Absolutely. I think the risk-based approach, not asking each and
every system to comply with each and every requirement, that's a unique and
interesting one. And let's see how other countries look at that. But if you
broadly categorize, I think about 50 to 60%, maybe 70%, of systems can be in
the low-risk or no-risk category, and hence get away with light obligations.
Elena 11:20
Or at least they'll try to get into that category, you know.
Punit 11:24
Yeah, that's also true, they will try to get into that category.
Elena 11:28
Since it can be essential for them
Punit 11:31
Yeah
Elena 11:31
To do that. Or some of them might just choose not to do business in
the EU altogether, depending on the company and on its size, considering what
compliance with the Act entails, not even in terms of staff, but just in terms
of money. Complying with the Act is going to cost companies a lot of money, and
depending on the size of the company, some might just be considering, if it's a
small company: do we even need to roll out our services in the EU market, so
that we don't need this extra trouble? I've already heard a few conversations
like that from small startup founders. So people are considering this as well.
Punit 11:31
There are always companies like that. I know
some websites are not available in Europe because of the GDPR, because they
don't want to take the extra effort of complying with it. And the same thing
can happen with the EU AI Act. If you look at it broadly speaking, AI creates a
lot of challenges, and to address those challenges, you need some set of rules
or guidelines to handle it. And that's where the EU AI Act, or even the
Colorado AI Act, comes in, and maybe many more, because New York is already
talking about AI. We don't know what form or shape it will take; they have some
legislation...
Elena 13:05
In New York, yeah, in New York we have this Local Law 144, that's
just this odd name they have for it. It came into force last July, actually,
and it concerns using AI in the workplace in terms of hiring people: hiring and
firing, using AI in HR, screening through resumes and things like that. The
companies are obligated under this law to do yearly audits and to publish the
results of those audits. But, as always, companies will find their loopholes,
anything and everything under the sun, to be able to get away with as much as
they can. And there have already been a few reports out about what they've been
doing, a fascinating read, by the way. There are companies who actually take
the time, do the research, and go over the audits and the results of those
audits for the companies in New York. It turns out a lot of companies do the
audits, but they never publish the reports, or they publish them but put them
somewhere on the website in such an inconspicuous place that an ordinary
consumer, the person who uses the website, will never find them. And again,
there's nothing in the law, no instructions, nothing specific, about where you
should put that report for all the people to read. So it's interesting how the
companies are going about that. Again, fun times.
Punit 14:58
It is fun times, because even though the GDPR is clear, the
privacy laws are clear, you need to put up a privacy notice. Yesterday I was
browsing a website, and it had a privacy notice, which is how it should be, in
the footer. You go there, and there are three: a privacy policy, a digital
privacy notice, and a California privacy notice. Now, for a normal user, that
one step of clicking on a privacy notice was enough, and now you've given them
three options. Somebody's like, okay, which one do I read: the digital privacy
policy or the privacy notice? Maybe somebody from California would choose the
California one, but you've still given the others two options, and the content
of the two was very different. So under the EU AI Act, or in AI audits, that
option to hide always remains for those who want to hide or create confusion.
But talking about the challenges, because that's why we need the law: one of
the things about AI, at least the new AI we're talking about these days, is
that it can generate content and aggregate content, with LLMs and everything.
And I'm curious to ask you, from your IP background: who would have the rights
to the content that's generated, or created, or even aggregated? Because
usually it's expected to be aggregated from across the web, but sometimes it's
one source. Who has that copyright, and how does that copyright work?
Elena 16:24
So, to answer your question in very simple terms: it depends
on the jurisdiction at the moment, because different countries are approaching
this very differently. Take China, for example: the recent court decision that
an AI-generated image deserves to be registered. The court awarded copyright
registration to an AI-generated image, which is very interesting. And the
reasoning was pretty much that the person who created the image using
generative AI was working with different prompts, through different iterations,
so they were actually doing some work, and there was actually some creative
input from that person; it was not a simple push-the-button type of thing.
So that's an interesting way to approach it. But when it comes to the US, and
I'm based in the US, the stance is still the same: if you're not human, the
work cannot be copyrighted in any way, shape, or form. If you're a monkey and
you took a selfie, you cannot later copyright that work; or if you're a
celestial being and you wrote a book or something like that, you cannot
copyright that work either. There has to be a human, there has to be some
modicum of creativity, and the work has to be fixed in a tangible medium. I
wrote a song, or I wrote a poem and printed it out, or I drew something on a
napkin: any type of fixation counts, and the bar is pretty low, actually. A
modicum of creativity, how creative something has to be, that's subjective too,
but the fact still remains that there has to be a human author. And we've had
several recent decisions by the US Copyright Office rejecting copyright
registrations by artists who claimed that AI was the author and creator of the
work; their filings were in effect rejected. So that's where we stand in the
US. In the UK, things are different: as far as I know, you can in fact
copyright work generated by a computer. And it's very interesting, but again,
the UK has been positioning itself as this industry-favorable country: we are
favoring AI, we have those few companies that the UK has, we do not want to
impede progress, and all that. I wouldn't say they've been on the fence, but
they've been very cautious to jump into this regulation fray the way the EU
has, because that's exactly what some of the people who are opposed to AI
regulation have been talking about: the EU is just shooting itself in the foot
with this legislation, they won't have any progress, blah, blah, blah. People
sounding the alarm that the industry will die, very dramatic, which in fact
isn't true at all. I think it's a very good thing that this piece of
legislation has passed. So again, we'll see how it evolves later on. But
obviously, a lot of lawsuits and a lot of interesting stuff will be coming out
in the future, so it's definitely going to be interesting to see how it works.
But coming back... I'm sorry?
Punit 21:13
I would say I would agree. It depends on how it evolves, and we
will have to see what the future brings. Because at the moment, it's a bit
confusing for people, and also from the legislation perspective, because there
was a copyright law which was very clear, and now there's AI-generated content,
which was never foreseen in that copyright law; hence the confusion. But that
will exist even for product laws, quality laws, safety laws, everything,
because none of those laws or legislations was meant for self-generated or
auto-generated content.
Elena 21:15
Yeah.
Punit 21:46
And it doesn't cater to the software aspect of it. It was always
the physical product for which you would have safety aspects.
Elena 21:46
Yeah
Punit 21:46
Now, how do you ensure the safety of a software product, or of the
LLM that somebody uses? How do you ensure the safety of that? That's going to
be a fascinating world in the next, say, 10 to 20 years.
Elena 22:06
Yeah. Yeah, definitely. And precisely for this reason, it's
very interesting, when we're talking about the risks and harms of AI and risk
mitigation: how can you mitigate the risk when we're talking about open-source
AI systems, or open-weight, as they're called? How can you trace, how can you
possibly be aware of, how the downstream deployers, the downstream users, how
everyone is using that technology? You really can't. But what I've seen, and
what the latest AI report says, is that open-weight AI systems have been on the
rise. And obviously, this technology is available to everyone, available for
developers to build on top of. And considering that we're on this trend where
AI models are starting to be deployed on the edge, on people's personal
devices: how do you control that without being invasive? We're talking about
people's privacy and all the implications of that. How do you control what
people do on their own devices with that AI system, given that they might use
it for various purposes? But how do you control that without being invasive?
That's the question.
Punit 23:42
That's the push, because whether it was the GDPR, or the AI Act,
or any of the digital acts coming up, the intention of the regulators is to
give humans a sense of control over their data, and also over the actions
around their behavior, or the actions impacting them. Some of us call it
responsible, some call it human-centric, some call it transparency. How do you
see that transparency developing in the coming years? Because AI, as we say,
will come up more and more.
Elena 24:21
Yeah.
Punit 24:21
Then privacy will become more and more of an issue. And then we
have technologies coming up, for instance blockchain. So how can these
technologies help create a culture of transparency or responsibility?
Elena 24:37
Well, considering that AI systems, and AI in general, cannot
exist without data: the more data the model ingests, the better the AI model
is, the better the output. And it's not even just more data; how good the data
is depends on quality, again, and on the diversity of that data. So data
privacy and data protection issues will be at the forefront, 100%, and a lot of
different issues are already coming up. In terms of transparency, which I've
spoken about recently, there's been another report published, the Foundation
Model Transparency Index, by Stanford University researchers, and the
transparency trend is on the rise: compared to back in October 2023, a lot more
companies are willing to come out and be upfront about their transparency
practices. So they've been publishing a lot more resources and a lot more
reports on their transparency practices. And that's very good, and we welcome
it. But again, a lot of things didn't change and stayed the same: companies are
still not willing to share any details of the data they've been training their
models on, and they are not willing to share anything about copyrighted data,
even the fact that they used some copyrighted data, which obviously they did,
considering the amounts of data these models are being trained on. So a lot of
issues are still the same, but there's this slight uptick in the willingness to
be transparent, at least on that front, so that's good. When it comes
to...
Punit 26:49
Can blockchain help create more transparency? Is that possible?
Elena 26:54
Ah, well, that's how blockchain works, right? The
blockchain is just this ledger: whenever you put something on the blockchain,
it creates this hash that you cannot really forge, change, or do anything with.
So it's a great resource for tracking when it comes to the provenance of a
work, when we're talking about art. If it's on the blockchain, you can trace
any action or any step in the creation and existence of that work on chain,
from the moment of its creation. And that's very good for artists who work with
digital art and NFTs, and for the owners, for sure. I'm just dropping some NFT
words out there; everybody has forgotten at this point what that is, right? But
I was at NFT.NYC recently, in April in New York, and a lot of people were
talking about this. AI was obviously on the agenda, but people were more
concerned with how these two spaces can coexist. What can they bring to each
other? How can we use AI when we work on the blockchain, and vice versa: how
can blockchain help with some aspects of the AI risk mitigation we're talking
about? There are a lot of people out there working on that, and it's an
interesting space as well. So I would say these two were made for each other,
in the sense that they complement each other when we're talking about tracing
and tracking, and about how you can make sure a work is the original work and
wasn't AI-generated. Because, obviously, a lot of companies have rolled out
their own watermarking techniques for detecting AI-generated work and detecting
synthetic content. But those are not foolproof; they don't work 100% of the
time, and a lot of the time these systems don't work at all. I've heard and
read a lot of stories about people who were accused of using, for example,
ChatGPT to write their essays, entrance essays to college and that sort of
thing, which was not true. They were labeled as people who did that, and the
outcome was that some people didn't get into college, some people lost their
position as a student in college. There were real-life consequences, and we
need to think of ways to ensure there are no more cases like that. It is
happening, it's been happening in real time, in real life, and it's serious, it
has serious consequences. It affects people's lives.
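The tamper-evidence property Elena describes, where each record carries a hash you cannot quietly alter, can be sketched with a minimal hash chain. This is a toy illustration of the general idea, not any real blockchain's format; all names here are made up for the example.

```python
import hashlib
import json

def chain(records):
    """Link records so each entry's hash covers the previous entry's hash.

    Altering any earlier record changes every hash after it, which is
    what makes provenance on a ledger tamper-evident.
    """
    prev = "0" * 64  # genesis value
    out = []
    for rec in records:
        payload = json.dumps({"rec": rec, "prev": prev}, sort_keys=True)
        prev = hashlib.sha256(payload.encode()).hexdigest()
        out.append({"rec": rec, "hash": prev})
    return out

def verify(entries):
    """Recompute the chain and check every stored hash still matches."""
    prev = "0" * 64
    for e in entries:
        payload = json.dumps({"rec": e["rec"], "prev": prev}, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

provenance = chain(["minted by artist", "sold to collector A", "resold to B"])
assert verify(provenance)

provenance[0]["rec"] = "minted by someone else"  # tamper with history
assert not verify(provenance)
```

The second assertion is the whole point: rewriting any step of the provenance breaks verification, so the history cannot be edited unnoticed.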
Punit 30:14
So normally we talk about plagiarism, that is,
somebody has used AI to produce content which is not their own, to get some
advantage, like admission into a university or submitting an assignment.
However, here somebody wrote their own content and did not use AI.
Elena 30:30
Yeah.
Punit 30:30
And they were accused of plagiarism.
Elena 30:32
Yes.
Punit 30:34
Which is not the fact.
Elena 30:37
Exactly, yeah.
Punit 30:38
We call it...
Elena 30:39
Yeah, because apparently a lot, maybe not a lot, but some of the
professors, or whoever it was, were using these AI tools that supposedly help
you detect AI-generated content with 100% accuracy. And these tools were in
fact not correct at that point. And here we come up against the issue of
liability, right? Who's at fault? If I'm that student who just got rejected,
and I didn't get into the college that I wanted to get into, by the mere fact
that the AI detection tool stated that I was using AI, which in fact I wasn't:
who am I going to complain to? The AI company? The developer of that tool? The
professor who used that tool? All of them? The university? The state? It's
interesting. And those issues need to be addressed as well, because obviously
the person's life has been changed drastically, and people might need some
means of resolution.
Punit 32:01
I think that's where the EU rights come in, like the right not to
be subjected to automated decision-making. And also understanding that these
tools which allow you to detect plagiarism or AI use are good, but not good
enough. It's like the weather forecast: you say it will rain; sometimes it
does, sometimes it doesn't, and most of the time it's right. The same way, the
tool is 70, 80, maybe 90% right, but it's not 100% right.
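Punit's "right most of the time, but not 100%" point is worth making concrete with a quick base-rate calculation. All numbers below are hypothetical: with a detector that is right 90% of the time and only a small fraction of essays actually AI-written, most of the students it flags are innocent.

```python
# Hypothetical numbers: 1,000 essays, 5% actually AI-written.
# The detector catches 90% of AI essays but also wrongly flags
# 10% of human-written ones.
essays = 1000
ai_rate = 0.05
true_positive_rate = 0.90
false_positive_rate = 0.10

ai_essays = essays * ai_rate                         # 50 AI-written
human_essays = essays - ai_essays                    # 950 human-written
flagged_ai = ai_essays * true_positive_rate          # 45 rightly flagged
flagged_human = human_essays * false_positive_rate   # 95 wrongly accused

share_innocent = flagged_human / (flagged_ai + flagged_human)
print(f"{share_innocent:.0%} of flagged students are innocent")  # ~68%
```

Because honest essays vastly outnumber AI-written ones, even a small false-positive rate produces more wrong accusations than right ones, which is exactly why a tool's verdict alone should never decide.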
Elena 32:30
Yeah.
Punit 32:30
So if somebody is being accused, let's say, that this is
plagiarism, or this is false, or this is using some means, then it should not
be an automated decision just because the tool says so. It's the responsibility
of a person to intervene, and maybe have two or three people check, and then
make a decision, rather than completely relying on the tool, like you were
saying the New York law asks.
Elena 32:55
Yeah.
Punit 32:56
Hiring and firing should not rely only on AI. Same thing in
admitting students, or not admitting them: you need to apply that judgment. And
that will remain, I think, in the AI world more and more, because the
technology can do a lot of things, but only with a certain amount of accuracy.
And that's where the human factor comes in.
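The safeguard Punit describes, where a person intervenes rather than the tool deciding alone, can be sketched as a simple review gate. The function name, threshold, and band are hypothetical, purely to illustrate the pattern:

```python
def decide(detector_score: float, threshold: float = 0.5,
           review_band: float = 0.2) -> str:
    """Route a detector score to an outcome, forcing human review
    whenever the score lands anywhere near the decision threshold.

    Only scores far from the cut-off are acted on automatically;
    everything in the uncertain band goes to human reviewers.
    """
    if abs(detector_score - threshold) < review_band:
        return "human review required"
    return "flag" if detector_score >= threshold else "clear"

# Confident scores resolve automatically; borderline ones go to a human.
assert decide(0.95) == "flag"
assert decide(0.05) == "clear"
assert decide(0.55) == "human review required"
```

In practice, a high-stakes setting like admissions would route even confident flags through human sign-off; the band here just shows the minimal shape of a human-in-the-loop rule.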
Elena 33:19
Exactly. But, I don't like this word, but as a species, we humans,
for some reason, tend to over-rely on technology. I would rely more on the
calculator on my phone to tell me the sum of some numbers than on you telling
me, or on a certain amount of information. We are predisposed to just believe
the technology. I've read some research papers on that as well: people trust
videos they see on a computer more than something they saw with their own eyes
at some point. It's these mind games their own mind is basically playing on
them, and it's very interesting. So there's this tendency of people to
over-rely on these signals: if AI told me, if ChatGPT wrote it, then it's in
fact true. And that's where critical thinking comes into play. It's never been
more important to question pretty much everything you see and hear when you're
on the internet, when you're online. You need to check and triple-check every
single source at this point. Yes, it's time-consuming, and a lot of people
don't have time and don't want to waste time. But talking about attorneys, for
example: everybody heard about the Avianca case, where the attorneys cited a
bunch of cases in the court documents that were just made up by ChatGPT. And
the best part of this case, the part that I love, is when the attorney asked
ChatGPT if those cases existed, if this was correct, and ChatGPT, of course,
responded: yes, they are correct, everything is true. It's just mind-boggling
to me.
Punit 35:24
Yeah. But that's where we need to be cognizant of the fact that AI,
or any other tool, is there to assist us, not to be trusted blindly. And that
takes me to my final question. If somebody, or some company, is grappling with
this challenge: on one hand, they want innovation, being business-centric, more
customers, more revenue; on the other hand, they hear about these new
legislations in Colorado, in the EU, and many others, and they're thinking,
maybe we need to be responsible, or maybe we need to take cognizance of these
laws, or maybe we need to look at some framework. What would you advise them?
Elena 36:08
I would first tell them: ask yourself a question. Can you get to
where you want to be without using AI in the first place? Is there any other
way for you to get where you want to be? Just think about it, think it through,
because this FOMO is huge: everybody's scared to be second, everybody's scared
to miss out on something. Even I'm scared: you don't read your emails for two
days straight, you don't read any news, as if, for some reason, you've been in
a cave somewhere, and two days later you're still trying to catch up,
constantly trying to stay on top of things, which is impossible. So at some
point, I just told myself: just let it go. So that would be my first advice.
Just let it go and think objectively. Don't think, I need this and that and
that, because this FOMO might cost your business money, actual money, when
you're trying to integrate technology you don't really need.
Punit 37:16
Yep
Elena 37:16
Yeah, and a lot of businesses
have faced that, and losses, you know, so you really need to be cognizant of
that. And when it comes to customer service, especially when you're dealing
with your customers' data, you need to be very cognizant of data privacy
issues. Because a lot of people are just blatantly saying, well, for example:
I have this tool, I'm planning to roll it out, it will just process people's
blood results, blood work, medical tests, something like that. And that's it,
that's not private information, right? We're safe, we don't have to do
anything. Of course you're not! What do you mean? That's just the way some
people are thinking, and it's really not even funny, it's just scary at this
point, the way companies are very, very quick to jump into all this without
really consulting anyone, without talking to people who know about that stuff.
People need to be very, very clever about how they approach it. If you figure
out that you need to integrate that technology, that it's really going to help
you take your business to the next level, be it operating a chatbot, doing
logistics, or predicting consumer behavior online, again, it comes with a lot
of legal issues. And if you're not willing to tackle that right now, you might
be very sorry later on, when another piece of legislation, maybe next year a
federal piece of legislation, a very comprehensive one, will just befall you,
and you will be too late to address it. Because a lot of businesses out there,
in my previous discussions with them, even last year, were very, I would say,
blasé when it came to compliance with any type of AI regulation. Everybody was
like, well, the EU AI Act doesn't apply to us, or it's not going to be here
until 2026, so we have all the time in the world. This mindset that we have all
the time in the world needs to stop. You don't. You should have started doing
your homework yesterday. That's where we are right now.
Punit 37:16
I think that's the
common philosophy or approach people have. One, it doesn't apply to us. Even
before, with the GDPR, "we don't process personal data" was the standard
answer. And nowadays it's, oh, AI doesn't apply to us. They will try to look at
one of the categories and say, oh, we don't do that. But when you dig deep, for
instance, when you look at the definition of AI, even in the EU AI Act, it's
very broad. And then it says, go and refer to Annex I, or Annex A, I don't know
which, and when you read that one, then almost everything is AI. It's that
broad. And that's what companies and people have to understand. But I get your
message: don't be afraid of the FOMO, you're not going to miss out on much. But
be aware of the issues that you will be taking on by not looking at
legislation, or rather by adopting or bringing in AI when you can do without
it. And with that, if someone is interested in talking to you, or getting into
a conversation from a business perspective, what's the best way to contact you?
Elena 41:04
That would be, as of right now, my LinkedIn page. That's the best
way. I'm constantly present on there. Just DM me and, you know, you'll get in
touch with me. And I'm almost done with my website, finally; it will probably
be up by the time this episode comes out. You know, I'm very particular about
what I like. I could have used AI, obviously, right? I could have just done
this website in like 30 seconds. The truth is, I tried, and I didn't like the
results. So yeah, I want what I want, and hopefully soon it's gonna get there.
Punit 41:45
And also apply the medicine you give to others to yourself.
Elena 41:50
Exactly! I mean, you have to preach what you teach, right?
Punit 41:54
Yeah. Okay. So with that, I would say, Elena, it was wonderful to
have you and have this conversation. Thank you so much for your time.
Elena 42:01
Thanks for having me Punit.
FIT4Privacy 42:03
Thanks
for listening. If you liked the show, feel free to share it with a friend and
write a review. If you have already done so, thank you so much. And if you did
not like the show, don't bother and forget about it. Take care and stay safe.
FIT4Privacy helps you to create a culture of privacy and manage risks by
creating, defining and implementing a privacy strategy that includes delivering
scenario-based training for your staff. We also help those who are looking to
get certified in CIPP/E, CIPM, and CIPT through on-demand courses that help you
prepare and practice for the certification exams. Want to know more? Visit
www.fit4privacy.com. That's www, FIT, the number 4, privacy.com. If you have
questions or suggestions, drop an email at hello(@)fit4privacy.com.
Conclusion
Artificial Intelligence has two sides to its story. It raises important questions about ownership, accountability, and ethics while simultaneously offering enormous possibilities for innovation and advancement. This episode clarified the current discussion surrounding AI regulation and emphasised the importance of taking a balanced stance. While the US and EU grapple with differing approaches, one thing is certain: cooperation, foresight, and a thorough grasp of the human element are necessary to enable responsible AI development. Transparency and the moral use of AI will be crucial as we forge ahead on this new frontier. So, the next time you engage with an AI system, stop and think about the intricate issues it raises. We are all collaborating to shape the AI future.
However, the conversation doesn't end here. This is just the beginning of an ongoing dialogue. As AI continues to evolve, we must remain vigilant and proactive. We need to foster collaboration between policymakers, developers, and the public to ensure AI serves humanity, not the other way around. By striking a balance between innovation and responsible use, we can paint a future where AI empowers us to create a better world, a world where the human touch remains the guiding light.
ABOUT THE GUEST
Elena Gurevich is a multilingual lawyer with a background in intellectual property law, and a keen interest in data protection, AI, and blockchain technology. She provides legal advice and representation to clients in the fields of intellectual property, digital art, and emerging technologies. She is passionate about learning and shares her knowledge on how these technologies can transform creative industries.
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high AI & privacy awareness and compliance as a business priority by creating and implementing an AI & privacy strategy and policy.
Punit is the author of books “Be Ready for GDPR” which was rated as the best GDPR Book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts.
As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named 'ABC for joy of life' which he passionately shares. Punit is based out of Belgium, the heart of Europe.