EU AI Act is about Product Safety
Artificial intelligence (AI) is driving a technological revolution, and that revolution brings safety concerns with it. As AI systems become more complex and more deeply embedded in our daily lives, the potential for harm increases dramatically. How do we ensure that AI is used ethically and safely? What if the key to a safer, more ethical AI future lies in how we regulate it now? This is where the EU AI Act, as product safety legislation, comes in.
Transcript of the Conversation
Punit 00:00
The EU AI Act is about product safety. Yes, you heard me right. The EU AI Act is not about compliance; it's about product safety. Well, yes, some of us talk about the AI Act protecting fundamental rights, taking an approach to responsible AI, and all that. Yes, it's true, but at the end of the day, it makes sure that the AI system that you put on the market is safe. That's what it is about, because it asks you about conformity. So if that is the case, why did we need it?
Because we had the GDPR, which was protecting fundamental rights. We had standards like ISO 42001, the NIST framework, and so on. We also had other conformity assessments. So what is different from the GDPR, and how does it become what we
call a product safety legislation? Or how will it serve as a product safety
legislation? Well, for that, we are going to go and talk to none other than
Caro Robson, who's a renowned expert and digital regulation leader. She's been an advocate of ethical AI and data governance for over 15 years, has worked with governments, international organizations and multiple businesses, and is also sometimes referred to as an AI ethicist. So let's go and talk to her about why we needed the EU AI Act when the GDPR was there, why the standards couldn't be sufficient, how it will be different from the GDPR, and much, much more. So let's go and talk to her.
FIT4Privacy 01:53
Hello and welcome to the FIT4Privacy Podcast with Punit Bhatia.
This is the podcast for those who care about their privacy. Here's your host, Punit Bhatia, who has conversations with industry leaders about their perspectives, ideas and opinions relating to privacy, data protection and related matters. Be
aware that the views and opinions expressed in this podcast are not legal
advice. Let us get started.
Punit 02:21
So here we are with Caro Robson, Caro, welcome to fit4privacy
podcast.
Caro 02:26
Hi, Punit. Thanks very much for having me. Great to be here.
Punit 02:29
It's a pleasure to have you. So tell us something about your journey, and how you got into this privacy and AI space.
Caro 02:40
Well, in terms of technical law, I suppose, slightly by accident.
So I was a criminal barrister by background, and then I got a job with the Serious Organised Crime Agency, which is now the National Crime Agency, and I was given the technology portfolio: can you look after data? Can you look after intelligence? Can you look after cyber? And so, me being me, I wanted to learn
a bit more, so I did a Master's in Computer and Communications law part time,
and just fell in love with it. I fell in love with telecoms law. I fell in love
with data and technology. Getting to sit down and have people explain to you
how stuff works is a joy. So I went from there. I was overseas with the Foreign
Office in Afghanistan for a while, and then I worked for two multinationals in
the Middle East for eight years, building privacy cyber security compliance
programs for those two organizations. Then I was in Brussels consulting for the
EU institutions on everything to do with digital and data. And of course,
during that time, the AI Act was coming along. So looking at some of the
interesting work happening there. I've been the strategy executive for one of the Channel Islands' information commissions. And again, AI was really taking off; ChatGPT was just being released. And so since that time, I've been working a lot with the UNECE. I've been on a number of working parties and committees with other bodies. And really, I'm always being asked about AI, so I thought, well, I'd better read and learn a little bit more about that. And so now I've become kind of an AI ethicist. Someone described me as that the other day, and I quite like it. I might keep that as my title, because that's
kind of what I'm doing. And it's so interesting to have an opportunity to talk
to everyone at different stages of the supply chain and all the different
angles that are being impacted by AI, including the one that I think we're
going to talk about today, which is product safety, and talking to people who
look at it from that perspective. So yeah, here I am, AI ethicist? I suppose
you could say.
Punit 04:43
Okay, so let's get to the AI ethicist aspects. But let's go back a bit. In 2018, or 2016, depending on how you look at it, we had the EU GDPR. Now, the GDPR gave citizens protection of their fundamental rights, that is, the right to privacy, especially in the modern world when we talk about data. It was the General Data Protection Regulation, as we call it, so it provided protection for their data in the modern world, with privacy by design and other principle-based setups. Now we have that, and there is a set of proponents amongst us who
say, well, that by itself is capable of managing, handling and protecting anything to do with data, and that includes AI systems. Of course, we need to define what an AI system is, but they believe anything, including AI or futuristic systems, can be taken care of. And then, all of a sudden, or not all of a sudden, it took about three years, we have this EU AI Act, which is being
interpreted in very different ways. So help me understand, and help our audiences understand: why was there a need for this EU AI Act when a GDPR-like law could already act upon the general data of citizens, and we are still saying the EU AI Act will also help protect fundamental rights? So let's start with that premise and take the conversation forward.
Caro 06:19
Perfect. Well, I'm not sure I can help people with that point, actually, because I, like a lot of people, saw the flurry of legislation around AI, and there's been a lot, and there continues to be a lot. California is just looking at passing a law at the moment. And I thought, right, well, we've got automated decision making, we've got profiling, we've got protection of privacy, we've got management of bulk data sets. All of this is there in the GDPR. Why are you suddenly rushing to legislate? Is there something we did wrong? You know, did you not like what we were doing here? And I think part of it, honestly, is perhaps that those rights to review of automated decisions weren't being exercised as often, or it wasn't as public. I suppose the correct answer would be that the GDPR is technology neutral, and it's data neutral. The only question the GDPR asks about data, other than whether it's sensitive, is: is it personal data or not? If it's personal data, you're covered. If it's not personal data, you aren't, and there have followed a lot of arguments about what is and isn't personal data. With AI, I think the legislation in the EU started out very much with that in mind, protecting fundamental rights, a slightly broader set of rights than the GDPR. So if you look at the harms that it's protecting against in the opening section, it's
protecting economic harms and social harms. It's protecting innovation. It's
protecting the free market. The GDPR has a bit of that, but it's a much broader
set of rights and responsibilities. And it started, I think, with that
approach. And it looked at the end use of AI systems. So how will this thing be
used? What will the impact be on people? And hence, you have that lovely
pyramid we keep seeing of the different levels of what level of risk it is. How
is it being used? How does this impact people? And I think during the debates, and this was done in record time for the EU, but during the cycle of this, it became clear that generative AI, and particularly general purpose AI systems or foundation models, were changing this, because now you have a supply chain. Now you don't just have someone who develops, builds and uses something. Now you have a company, possibly in California, which builds a system that could be used for anything. That goes to a product manufacturer, who puts it in their product, which then goes to someone else, who might then sell it on to a business, which then uses it for a particular purpose. And so this, I think,
shifted the debate, and at the same time, we had all of the I don't want to say
hysteria, but let's say hysteria. Let's use a good word. The hysteria around
humanity is going to be wiped out, this is going to be the end, it's Skynet, it's the singularity, it's whatever 1980s movie reference you want to put in. And so the dialogue shifted from 'this is about fairness, this is about ethics' to 'this is about safety'. And I think there the approach shifted from being fundamental rights legislation, have you thought about the people who will be affected by what you're doing, to we need to make sure this technology is safe and the products using this technology are safe. And I think what that's done is mean that the Act has progressed into much more product safety legislation than fundamental rights. It's kind of trying to do both.
trying to do a lot of things, and it's leaning on its kind of fellow
regulations and EU directives. You know, it's without prejudice to many things,
so it leans on others a lot. But it's trying to do both, but it is becoming
much more product safety legislation, and that's very, very different. And I
think that's something that maybe people don't appreciate. Having a product safety approach is very, very different from having a fundamental rights
Punit 10:34
That makes sense, because if you talk about fundamental rights,
there's also an element of ethics and leaving it to the principles and the
framework and accountability per se and transparency per se. And if you look at
the product safety, it's very much this, this, this, have you done it? Have you
done it? Is this conformant? And so on. We do have what we call a conformity assessment as well. But before we go to the conformity assessment, staying with the point that we had the GDPR and now we have this EU AI Act: there were also a lot of standards. There was the NIST framework, there's ISO 42001, then there are the OECD guidelines, and there is the UN framework for AI for humanity, and so on and so forth. Why a law? Why not a standard? That's also something worth delving into now.
Caro 11:25
I think so. So I think the EU felt pressure to legislate. And from a practical perspective, we do have lots and lots of standards.
We have lots of standards with privacy. There are standards on privacy by
design, but they don't have the force of law behind them. So what you can have
is, oh, I'm ISO 27001 certified. That is a good indicator that I am compliant with technical and organizational measures. But it's not mandated that you have 27001. It's a good indicator that you've got a risk
management system that is in compliance with standards on risk management,
whether they are from NIST or IEEE or ISO or whichever standards body, but it's
not mandated that this is the specific one that you need to use. And I think as the Act kind of evolved, within the EU I think there was a push for the Brussels effect. There was a push for: we want to get that legislation out there first. But I think as it evolved, it became about, okay, we want to
specify what the standards are given how rapidly this technology is developing.
We want to write clear technical standards. We want to have the right to update
those standards. We want to make sure they're right for Europe, and we want
them to have the force of law behind them. And I think that's the difference: these standards will not be cited in the Act itself, but compliance with them will become almost mandatory. Some form of conformity assessment is mandatory.
Marks will be mandatory. Conformity marking will be mandatory. But the
standards themselves will therefore, I think, take on a much more prescriptive
legislative force than the existing standards and frameworks that we have for
AI.
Punit 13:22
So that makes sense. The standard by itself is there, and there are many standards, so you can pick and choose, but nobody can mandatorily ask you, are you compliant with ISO 42001, for example. Whereas with a law, the EU AI Act can say, with these standards and these guidelines, how do you conform? And you might then say, I'm using ISO 42001, not the NIST framework, to manage the AI risk, and this is how I'm managing that risk. But then, because of the law, you need to demonstrate what we call conformity to the law or its requirements. But then there's another aspect, still staying with the GDPR and not leaving it, because it will continue to apply.
Caro 14:01
It continues. The GDPR lives on. It lives on.
Punit 14:06
Yeah, it's an add-on. But there has also been this question: how will regulatory bodies manage it? We are going to have an AI Office at the EU level, but will there be different authorities? Even in organizations, will the privacy office manage AI, or will there be an AI office? How will the collaboration be? How will the segregation be? And we also talk about enforcement; there are opinions about enforcement of the GDPR, but we also had the Brussels effect, or the GDPR effect, as we call it, because about 140 countries adopted a GDPR-like law, calling it by their own name or framing it in their own way, sometimes creating complexity because of that rather than adopting it as is. So there was this Brussels effect. Do you see a similar thing happening because of the EU AI Act, or will it go differently?
Caro 15:02
I think, in terms of regulation, so at present in the UK, for example, and the Act won't apply here directly, because we've somewhat left the EU, but in the UK we've got the Digital Regulation Cooperation Forum. And
the government policy here is there are four regulators who will share
responsibility, and that will be the Information Commissioner's Office. It'll
be the Competition and Markets Authority, the FCA, the Financial Conduct
Authority, and it will be Ofcom, which is our kind of telecommunications and
media standards body. What I think will be different under the AI Act is, and it talks about it, I'm going to glance down at my notes to make sure I get the correct section, but it talks around it in Articles 28, 31 and 32, that section where it talks about national bodies, the presumption, I think, is that
these will be conformity assessment bodies, market surveillance bodies, bodies
that can go out and have metrologists test technology against technical
standards, because this will be a process of conformity assessment. So rather
than almost ex post regulation saying, right, if there is an infringement or a
breach, you can complain to this person, and I realized that privacy regulators
are slightly broader. They have an ex ante function in terms of giving advice
and guidance and sandboxes. But I think here, there'll be much more of a sense of: right, I have to have a sticker, I've got to have a CE mark, I've got to
have this paperwork done. I've got to have this stamp of approval. I've got to
make sure that I am in conformity. I've got to have it assessed so someone, a
metrologist, has to come to my, my office, my lab, whatever, and test it
against the standard, and they've got to certify it. And then we can talk later
on about the challenges, if you like, but then keep testing it, because AI, as
we know, learns and changes. And so I think we're going to see a very different
set of actors responsible for enforcing and having power under the AI act. So,
for example, the standards that are being developed are being developed by the EU standards bodies, unsurprisingly, which are CEN and CENELEC. CEN is the European Committee for Standardization, and CENELEC is the European Committee for Electrotechnical Standardization. I glance down at my notes again to make sure, because I always confuse them. Their joint technical committee, JTC 21, is responsible for drafting AI standards right now. So these are bodies of technical standards developers who are having to draft against a standardization request that was put in by the Commission prior to the Act. Actually, it was passed under the decision on the EU policy on AI, but that request for standardization asks for 10 standards. And so this body of 140 different experts, and then
experts in mirror committees in national authorities are having to develop
technical standards in various areas. I do have the standardization request next to me, if we want to go through them, but they're having to develop these
technical standards that then gets voted on and passed, and once they're in
place, it will fall to the national notified authorities to check for
conformity against them. And I'm going to just prove that I have checked the standardization request. Out of the 10 standards in the standardization request, you have standards that are similar to 42001, which is a risk management framework. You know, have you got your risk policies in place? Have you escalated things? Have you got risk management? But there are some very technical ones. So request six is standards on accuracy specifications for AI systems. Seven is robustness specifications for AI systems. Eight is cybersecurity specifications, nine talks about post-market monitoring processes, and 10 talks about conformity assessment. We also have quality of data sets in standard two. So we have ones on risk, but
we also have actual specifications that your technology must do, X, Y and Z, in
cyber security, in accuracy, in robustness, and the language of post market
monitoring, market surveillance, conformity checking is really, I think, aimed
at product safety institutes. So the British Standards Institution here passes standards, and then things are checked by the National Physical Laboratory. Countries have their own national standards institutes under that new legislative framework that I talked about, which is about product safety. So they're the bodies, I think, that will be tasked with market surveillance, seeing what products are coming onto the market, conformity assessments, metrologists actually going out and checking against
these standards. You know, what is the standard deviation against this
robustness framework? What is the technical standard against this particular
cyber security requirement? Does your system have this? Does it do this? And checking the systems, issuing conformity marks and decisions, following up, and then doing that post-market surveillance and assessment as well. So it's a very different way of approaching regulation than we've seen with the
GDPR. The GDPR had privacy by design. It had the potential for certification
systems, but they weren't widely adopted, and they've not been the main
regulatory tool. This is much more in line with the new legislative framework,
which is product safety and placing products on the market and protecting
people from a safety perspective, and it's everything from medical devices to
toys to seat belts to cars to electricity. That whole area of EU law is quite
different. And the approach of monitoring the market and the product life cycle
is where I think the AI Act is going.
Punit 21:18
Okay. So in that case, I take two things away. One, it's about product safety, and it's a lot more technical than the GDPR is. But the second is that this is basically a base, or a fundamental basis, for additional standards and additional conformity requirements that are going to come in and be enforced on top of, or in addition to, the EU AI Act. So this is not the be-all and end-all solution. This is the beginning of a journey in which we will have
regulations. If we look at a product, or say a house, it's not that you just get permission on the architecture and it's done. No, you've got to demonstrate conformity to the architecture when you've laid out the basement, when you've laid out the roof, and then eventually an inspector comes in and checks physically: have you adhered to the architecture? Have the energy standards been met? Have the safety standards been followed? Have the fire standards been followed? And if it's a commercial building, then even more. And here we are talking about a virtual product, which can't be touched, can't be felt.
And a similar approach is going to come, because essentially, experts have been talking of, and I've always been a proponent of, the idea that if you have a physical product, let's say you buy a moisturizer, a soap, or a toothpaste, it goes through a series of checks and product conformity, and it says, I'm certified to this standard, like the British Standards Institution you mentioned, and so on and so forth. Now, if that physical product, which goes into the hands of somebody and can only cause damage to that specific person or that family, is checked so thoroughly, how can we let this AI, which we don't know what it can do, what it can create, and what damage or harms it can cause, be without any standards? So essentially, that concern has been heard, and now the EU AI Act, plus those conformity requirements and standards for specific products or sectors, will be coming along to protect what we essentially call the EU Charter of Fundamental Rights, but in a much more technical, much more detailed way. Is that the right understanding?
Caro 23:43
I think so, yes. So the current standards being drafted are actually being drafted under the 2023 standardization request, C(2023) 3215, if anyone wants to look it up, from May 2023, under the EU policy on AI. So they're not specifically for the Act, but the Act does relate and refer to them. And the Act, in Article 40, gives the Commission the power to issue more standardization requests, so the Commission can request further technical standards. What's interesting, you
talk about, you know, if it's a toy, if it's a microwave, if it's something,
well, it could harm someone. Well, a lot of those products will develop AI
features, and many already do. My washing machine has AI, you know, it checks the load, it manages the weight, and things like this. It adjusts and assesses; it calls itself AI. It's never talked to me or anything. I'm very disappointed, I thought it might be a friend for me. But it has AI in it, and so products that have AI embedded are also covered under the Act, and
specifically mentioned. And what the Act does is it says, right, we've got this
new legislative framework, we've got product safety legislation, we've got
product specific and sector specific legislation. All of that still applies.
But insofar as a component of it includes AI, then this act will apply as well.
That produces some issues with the scope of the Act and the supply chain, which we can get on to if you like, but that's the idea: it's an additional layer of product and consumer safety legislation. And so
what the Act is really trying to do is introduce, and it does talk about, physical stickers. It says you have to put a sticker on the box, and if it's not possible to put a sticker on the box, then you've got to make sure that the sticker is available digitally or elsewhere. So the AI Act is using the language of physical stickers and physical checks. And I think it does
envisage metrologists and conformity assessors going out and actually looking
at the tech and looking at the products. I think this is very much a physical
product safety approach. But you're right, I think one of the issues there is going to be that supply chain. And, you know, is OpenAI going to throw its doors open to allow a wave of EU conformity assessors in? Well, under the Act, yes, they should. If they're releasing that on the market, making it available and placing it on the market, then yes, the conformity assessors should be able to come and check that they comply with the standards. It's going to be interesting.
Punit 26:26
Interesting, certainly! But there's certainly uncertainty, and it remains to be seen how it develops. But there's also the other dimension, which is very, very clear in the case of the GDPR: it starts to apply as soon as you collect and start processing personal data. So from day one, when you are collecting data, you need to collect minimally, be lawful, and so on and so forth. It starts as soon as you start interacting. Here, in this case, we are saying that when you release your product, when you release your AI system, then it needs to be conformant. And of course, we have the sandboxes in between. But what about if your product is not yet rolled out?
Caro 27:10
This is, you've hit on the exact challenge, I think. So first of all, the GDPR will continue to apply. The GDPR lives on. So if you're
using personal data, and a lot of large language models are, of course, using
lots of personal data. Then GDPR applies throughout the life cycle. Existing
product safety legislation will apply throughout the life cycle to those
products in those areas. The Act itself, I think, poses a bit of a challenge,
and that's because it is product safety legislation, but it makes clear, in Article 28 I think it is, that it doesn't apply prior to a system or product
being made available in the EU or placed on the market. So it doesn't apply to
scientific research and development generally, and it doesn't apply to product
development before it goes on the market, except when it does. So the scope says that placing on the market is our trigger point, and that's when the Act applies.
But there are several sections that talk about, okay, if you're a high-risk system, your documentation about deciding whether you're high risk should be done prior to going on the market, and if you have to document various aspects of a high-risk system, that needs to be done prior to placing on the market. That's under Article 11 and Article 6, for anyone who's interested, she says, glancing at her notes. So it's quite confusing, I think, because the Act is clear that it does
not apply before something is placed on the market. And I think, from what I've heard from my spies in the drafting world, for the people coming up with these standards it's a big problem, because product safety is about the whole
life cycle. It's about the conception, the design, the development. If you do
Agile or sprints, however it is, it's about embedding product safety standards in that process, so that when you get to actual manufacturing, refining, checking, changing, when it finally goes in its shiny, shiny box onto the market, a ton of work has been done, checks have been done, and various stages have had to conform to different standards. And that doesn't seem
to be the scope of the act itself. And that's a big challenge, because if
you're drafting technical standards with a product safety mindset, and you're
told, yeah, but it only applies when it's on the market. Well, the bulk of your product safety work is when you're building something or you're developing something. And the Act has a little bit, I'm going to be really generous, okay, I'm going to be very, very generous to the drafters, I think what the Act does is a little bit of a sleight of hand. What the Act says, and I'm glancing at my notes to find the exact section, I think it's Article 11, where they talk about sandboxing and real world testing, it refers to Regulation 2023/988 on general product safety. And it refers to the product safety legislation. And I think
being as generous as I can be, what the Act is trying to say is: okay, we want to protect innovation, we want to protect development, we want to protect industry, we want to protect our scientific sector. We're not going to regulate, and these fines aren't going to bite, unless it goes on the market. But, thinking it through, we need to have some influence before it's on the market. So there's a few sections that say you
should and the justification for that is, well, this is complementary to that
new legislative framework for product safety. It's complementary to the GDPR,
it's complementary to the Corporate Social Responsibility Directive. It's
complementary to the platform workers directive. I'm not sure how that's going
to end up in front of the CJEU and quite how standards will develop around pre
market testing, and I'm not sure how easy that makes it to draft standards. I suspect extremely difficult. But I think that's the argument that they're making: the Act will only apply from when it's on the market, we have expectations
that things will have been done prior to that, but we'll only check when it's
on the market and this other body of the new legislative framework is a nice
safety net for all those earlier stages, which we absolutely expect people to
have complied with in full throughout the life cycle. So there's a little bit
of having cake and eating it, I think. And it's a little bit unclear, if I'm perfectly honest, as to how that line is going to be drawn and enforced. And again, my suspicion is that's because it's coming from an internal market, pro-competition, pro-industry approach and a fundamental rights approach, and has evolved into product safety. And product safety is about the full life cycle, and this has only kind of latterly evolved into that. And I think that's the kind of fudge and the compromise: there's a safety net and there are expectations prior to being on the market, but the Act is
just about being on the market. And I actually checked, a good friend of mine and I were chatting about this, and we counted: 'placing on the market' is used 73 times in the Act. That phrase appears 73 times throughout the Act. Sometimes it's 'this doesn't apply until you place it on the market', and sometimes it's 'before you place it on the market, you should'. So I haven't done a statistical breakdown of the percentages of that, but it's a key
phrase throughout the act, and I think it's going to be one of the trickier
aspects of enforcement and compliance, because it doesn't apply technically
until things are placed on the market. That's difficult when you're trying to
regulate how things are developed prior to being placed on the market.
Punit 33:27
That would be interesting indeed, because there are two ways of
looking at it. As you say, it's well thought through in that you cannot penalize the group of scientists or innovators or researchers who are
just looking at an idea and developing it, while, of course, that idea, that ideation or sandboxing, is certainly covered by the GDPR and other laws which fundamentally protect the processing of data. So it can be looked at in both ways. One, it's a positive thing, because no company would like to be penalized when the product is still undercooked, while they do need to comply with the GDPR and the others. So that's the positive side. And on the other hand, it's also a
challenge that if it's not applied in the product life cycle, how will you
ensure that in the end it's conformant? But then there can be a counter-argument, and that's my reaction when you mentioned that it would be a challenge: I'd say it could also be a bonus, because retrofitting would be nearly impossible. Because in the software world, we say if you have some requirements about safety, design or anything else,
you need to have them incorporated when you are listing the requirements,
because you cannot say, now I am testing it, and when I'm testing it, I want to
change the requirements, because you test against the requirements you defined
originally. And if you have to introduce new requirements, be it safety, be it
conformity, be it anything the EU puts in, the challenge would be that the cost of incorporating those requirements would be very, very high, because in the IT world at least, and that's where it will apply, the cost of incorporating a requirement later on, after testing, is many, many times that of having it embedded from the start. So probably they have thought it through well and leaned that way, without creating the burden of the other approach. So apparently it's well thought through, and there are a lot of things still to be seen: how it evolves in the next five or seven years, and which standards and conformity guidelines will come in. But let's leave that debate there, because it's debatable which side they've chosen and why. But there's one other aspect. The Act
categorizes systems into risk categories: unacceptable risk, high risk, and limited risk, and the obligations are significantly different. Of course, unacceptable-risk systems are prohibited, so no discussion there. But is there an opportunity or scope for companies to maneuver, saying, I think I'm building this tool, let me call it this one rather than that one, and then it's limited risk rather than high risk? Is there such an opportunity, or will that also get narrowed down and clarified?
Caro 36:24
I think again, it's a slight difficulty with the act in that when
we talk about risk, the risk categories are about the end use. So that's your
kind of fundamental rights approach of, okay, how is it going to be used? How
is it going to affect people? And I think the slight challenge there is if
you're developing a system and you're not clear how it will be used. Because with AI, systems are often developed, and I'll talk about general purpose or foundation models in a second, but even with more specific models, they're often developed with a vague idea of their final use, but not quite pinned down. And I think there, it's about making sure that you've
got a genuine assessment of how it's likely to be used. The Act does say this
is one of the few areas where, if you're saying it's not high risk, you have to document that before you put it on the market and make sure it's available when you do. So you do have to take these things seriously, and you do have to make that assessment and document that assessment if you're going to say it's not high risk and therefore I don't need to go through conformity assessment and
all the other checks. When it comes to the foundation models, the general
purpose AI systems, there's a whole section in the act that talks about unless
it's very, very narrow, it's essentially to be assumed that they are akin to a
high risk system, or could become a high risk system. And there are some
special requirements there, not least of which are obliging providers of those
systems. So that would include the likes of OpenAI, if they're targeting the EU market, obliging them to do the kind of transparency work, checking work, and accountability work that can then be used by importers and deployers of these systems, as they're termed under the Act, that is, companies who use that technology, to meet their later obligations. So, I mean, we've both worked in tech for a long time, right? And we know that there are always going to be companies that argue, and will argue with their in-house counsel until they're blue in the face, that this definitely isn't high risk, because it does this and this, and there'll be all of those arguments about which category it falls into. As far as the Act
is concerned, you have to document a decision that it's not high risk, and you
have to do that arguably before it's placed on the market. And arguably that
will be caught by that kind of catch all. Yes, all of the legislation applies,
but the expectation under the act when something is on the market is that
you've done those checks before you put it there, so you do have to check those
things. And I think you're right when you talk about it's so much more
expensive to retrofit requirements. I had to smile a little bit when you said
that, because how many times have we had those arguments where you know, data
protection or privacy or legal are the last people to be consulted on the
design of something, and it's normally the day before it goes to the market.
Can you just check that? And we always make this argument that if you had just
come to me when you first had this idea, this would have been cheaper and
easier, and we could have made this a competitive advantage for us by
incorporating these elements. So, will that change with the notion of conformity assessments and requirements? Possibly, yes. This is a slight drift in topic, but my suspicion is it will be about corporate culture. So I've worked in aviation, I've worked in payments and
fintech. I've worked for government, I've worked with military, I've worked
with law enforcement. Sectors have different cultures in terms of their
compliance and conformity mindset. Aviation is absolute risk management, safety first; the entire organization is built around 'if we get this wrong, planes will fall out of the sky', and everything is about meeting the standard, doing the checks, managing the risk, evaluating it. In other sectors, and I think the tech sector could be caught in this category, there isn't necessarily a culture of complying with safety requirements and putting your tech forward for testing. You might
have to pitch to a VC if you're a startup, you might have to pitch to investors, you might have to go through testing if you're in pharma tech or fintech. But there isn't a culture, I don't think, of: okay, this is how my industry works, this is how I have to develop things in this industry. I have
to prove everything I do. I have to be transparent, I've got to document. I've
got to be certified. It's got to be explainable, testable, repeatable. I think
different industries have different cultures on that, and I suspect some
industries therefore will cope a little better with the requirements under the
AI act, because they'll already have the culture, the personnel, the processes,
the mentality and the organizational focus on complying with these kind of
requirements and opening the doors and letting the assessors come in and check
things. So I think it will be interesting. I want to be there when the conformity assessors with their briefcases and their boxes rock up to OpenAI; I think it's going to be interesting. I don't know if you've watched The Dropout, which is a dramatization of Theranos, Elizabeth Holmes' startup, which grew very quickly and was a bit of a scandal.
Blood testing from just a drop and nothing quite worked. And she constantly, at
the end, laments the fact and gets angry about the fact that, because she's
working in pharma, all these people are allowed to come in and check her labs
and tell her what to do, and tech companies do not have this kind of
regulation. This is not fair. And I say that because it's just an example of
the different mindsets that, yeah, you're in pharma. Pharma is used to working
like this. Labs are inspected, things are calibrated, things are checked. Your
tech is conformity assessed, and tech companies aren't. And I wonder how well that's going to play out with some of the bigger tech companies when this product safety approach now starts to be introduced. I wonder if it is going to be a bit similar, to circle back to the GDPR, to that kind of shift of okay, right, we've got to get ready, we need systems, we need processes,
and we need a mindset shift. It's going to be interesting to see.
Punit 43:19
It would certainly be interesting to see. And I think one of the
things that I take away from this conversation is certainly there's a shift
happening: a shift from fundamental rights to product safety, a shift from leaving it open to making it very precise, a shift in terms of you can do what you want until you put it on the market, but when you put it on the market, it must be aligned, it must conform. And of course, there's a lot of uncertainty around how it will shape up and where it will go. But I think this is in the right direction. If we have to manage the hysteria that you talked about, that AI will come, it will take away all the jobs, Mission Impossible will happen, and this will happen, and that will happen, if we have to protect against that, there is no other way but to regulate. And this is a step in that direction, I would say. Any final message you want to
add?
Caro 44:23
No, I think you're quite right. This is a step in the right direction. I think it remains to be seen how well the standardization
process will play out. I hear, and people have talked about this publicly, that there's a lot of debate about, no, you can't reopen the AI Act.
It is what it is. Just let's get this done. And I think there are healthy
debates going on in terms of the standards as to the content of them, and
they'll be interesting to see. April next year is the final deadline, but they have a lot of hurdles to pass: they have to be drafted and checked, then voted on, and then published in the Official Journal. So they're going to be very interesting to see. And I think it is going to be interesting whether they can come up with really technical standards that really check and manage, and that can be used across various AI systems, because the technology is changing very
rapidly. And I think whether we end up with a really strict product safety
approach, or whether we have slightly higher level standards and we fall back
into fundamental rights, I don't know, but certainly the act itself is set up
to be a very different beast to the GDPR. Fundamental rights are, forgive the pun, fundamentally different to product safety in terms of legislative approach. And I think that's my one takeaway: this is different, this is product safety, it's a different mindset. Just bear that in mind when you read the Act and you look at your AI compliance.
Punit 46:00
Okay, that's perfectly fine, and I like it that way. So the question: if someone wants to contact you based on this conversation, what's the best way?
Caro 46:12
You can reach out to me on LinkedIn, or you can email me at caro@carorobson.com, and I would be very happy to hear from you. I consult on these issues. I talk about these issues a lot. I'm really passionate about AI
and all the various aspects of it. This is just one of them, but you can
contact me via LinkedIn or caro@carorobson.com. I'd love to talk more, because I'm very passionate about this, and I spend a lot of time speaking about it, as you can probably tell. And to anyone who's sat through me talking just now, I'm very grateful, because I talk a lot.
Punit 46:47
Not at all. You do talk, but you talk sense, and you provide a lot of insight. And with that, I would say thank you so much, Caro, for sharing your insight, your wisdom, your knowledge and your predictions.
Caro 47:02
Thank you so much, Punit! It's been an absolute pleasure, and I hope, with your Mission Impossible reference, this podcast won't self-destruct in two minutes. So thank you. It's been wonderful. Really enjoyed it. Thank
you.
Punit 47:14
Thank you.
FIT4Privacy 47:16
Thanks for listening. If you liked the show, feel free to share it with a friend and write a review. If you have already done so, thank you so much. And if you did not like the show, don't bother, just forget about it. Take care and stay safe. FIT4Privacy helps you to create a culture of privacy and manage risks by creating, defining and implementing a privacy strategy that includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPP/E, CIPM and CIPT through on-demand courses that help you prepare and practice for the certification exam. Want to know more? Visit www.fit4privacy.com, that's www, FIT, the number 4, privacy.com. If you have questions or suggestions, drop an email at hello(@)fit4privacy.com.
Conclusion
The discussion highlighted the complexities of AI regulation and the need to strike a balance between product safety and rights protection. While the EU AI Act aims to bridge regulatory gaps and enforce product safety, its implementation poses challenges, especially around compliance and defining market entry points. As we move forward, it is clear that fostering collaboration between policymakers, developers, and the public is crucial.
The conversation about AI's future is far from over. As technology evolves, we must stay vigilant and proactive, ensuring AI serves humanity rather than the other way around. By balancing innovation with responsible use, we can create a future where AI empowers us to build a better world, guided by ethical principles and a human touch.
ABOUT THE GUEST
Caro Robson is a renowned expert and leader in digital regulation. She is a passionate advocate for ethical AI and data governance, with over 15 years’ global experience across regions and sectors, designing and embedding practical solutions to these challenges.
Caro has worked with governments, international organisations and multinational businesses on data and technology regulation, including as strategy executive for a regulator and leader of a growing practice area for a prominent public policy consultancy in Brussels.
Caro was recently appointed UK Ambassador for the Global AI Association and is an expert observer to the UNECE Working Party on Regulatory Cooperation and Standardization Policies (WP.6), Group of Experts on Risk in Regulatory Systems.
Caro holds an Executive MBA with distinction from Oxford, an LLM with distinction in Computer & Communications Law from Queen Mary, University of London, and is a Fellow of Information Privacy with the International Association of Privacy Professionals. She has contributed to legal textbooks, publications, and research on privacy and data governance, including for the EU, ITU and IEEE.
Punit Bhatia is one of the leading privacy experts. He works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organizational culture with high AI & privacy awareness, and with compliance as a business priority, by creating and implementing an AI & privacy strategy and policy.
Punit is the author of the books "Be Ready for GDPR", which was rated the best GDPR book, "AI & Privacy – How to Find Balance", "Intro To GDPR", and "Be an Effective DPO". Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured amongst the top GDPR and privacy podcasts.
As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one's values to have joy in life. He has developed a philosophy named 'ABC for joy of life', which he passionately shares. Punit is based out of Belgium, the heart of Europe.