Demystifying AI Myths
Artificial Intelligence. It's a term tossed around in the media, the buzzword of the decade at tech conferences, and a fixture of casual conversation, conjuring images of robots taking our jobs and machines surpassing human intelligence. But what exactly is AI, and how much of what we hear is science-fiction hype? Are we hurtling towards a dystopian future dominated by AI, or is there a more collaborative future waiting to be built?
Transcript of the Conversation
Punit 00:00
Demystifying AI and myths? Well, yes. When we talk about AI, there are a lot of myths, misconceptions, doubts, and plain wrong information about artificial intelligence. People say AI will take away all our jobs, or that it is fair and unbiased (or, I should say, unfair and biased), and many, many more. How do you get to the facts? Are these claims real, and what is the actual situation? For that, I'm going to talk to none other than a top AI voice on LinkedIn, Nektarios Charalampous, who has kindly agreed to share his wisdom. I liked his LinkedIn post about AI myths, and so we are going to have a very interesting conversation in which, myth by myth, we demystify AI. Let's go and talk to him.
FIT4Privacy 01:03
Hello, and welcome to the Fit4Privacy podcast with Punit Bhatia.
This is the podcast for those who care about their privacy. Here, your host Punit Bhatia has conversations with industry leaders about their perspectives, ideas, and opinions relating to privacy, data protection, and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.
Punit 01:31
So here we are with Nektarios! Nektarios, welcome to the Fit4Privacy podcast.
Nektarios 01:37
Thank you, thank you for inviting me Punit.
Punit 01:40
It's a great pleasure to have you. And before we get into debunking the myths around AI, which is the topic of the post of yours that I liked on LinkedIn, I want to ask you: how did you end up in this AI field?
Nektarios 02:01
So towards the end of 2018, beginning of 2019, I had a discussion with my manager, and he mentioned that we needed to start doing more in terms of machine learning and AI. And I was like, I don't even know what AI is; the only AI I knew was from science fiction movies. So I went back and started researching artificial intelligence and machine learning. I took some courses online, you know, Coursera, Udemy, YouTube, simple courses too. Then I moved on to more advanced courses on Python and data science, and on how to use Python to create machine learning algorithms. That's when I got hooked, and I started learning more and more. I ended up creating my own course for the company I worked for back then and delivered it as a face-to-face, five-day session with my colleagues. Since then I have kept learning about AI, and I try to post and write about it. And over the last year, with generative AI coming into the picture, everyone is talking about AI.
Punit 03:29
Yeah, these days everything is labeled AI, whether it is or not, and people are talking about AI everywhere. It's very fascinating. But you also recently became a Top AI Voice on LinkedIn. How did that happen?
Nektarios 03:41
So I started posting on LinkedIn regularly about two months ago. LinkedIn has these Top Voice badges: one is invitation only, and the other is the community Top Voice. When I started posting and responding to posts about different perspectives on AI-related questions, the community liked my responses. I try to be transparent and human, and I try not to use AI for my responses, and the community liked that. Then one day I got a notification from LinkedIn saying, you have been recognized as a Top AI Voice, which was great to hear.
Punit 04:24
Indeed, a great achievement, and congratulations. Your content is really useful. Let's maybe start to debunk some of the myths that surround AI. One of the fears or myths around AI, I think, concerns jobs, because people keep talking about it. At one conference, Elon Musk said 80% of jobs will go away. I'm sure he meant something more nuanced, but people have interpreted it in very black-and-white terms. People are thinking all the jobs will go away and we will all be jobless, or whatever. And then some people are saying we will have robots working while we take holidays on the beach. What's your opinion about this myth?
Nektarios 05:11
So in different industries we already have robots working, doing tasks that require low cognitive skills or that are mundane and need to be done daily. But the World Economic Forum predicted that by 2025 there will be around 85 million job losses. It also predicted that there will be 97 million new jobs, a net gain of roughly 12 million roles. So what this means, in my opinion, is that AI will not replace jobs; it will transform them. It will transform the roles.
Punit 06:02
I agree with you fully: the way we do our work will transform and change. Even in my own job, I am now using ChatGPT a lot, which I was not before, and it has replaced some of my dependency on trainees or assistants, because I can get my first draft, or sometimes my first social media post, from AI. But that doesn't mean I use it as is. I apply my own judgment and combine the intelligence of AI with my own to make the post better. So that's how it will be. And there was a well-known quote from Harvard in one of their articles: AI will not take the jobs of humans; AI will take the jobs of humans who refuse to learn AI. Because humans who learn AI and are willing to work with AI will always have a job.
Nektarios 06:59
Yeah, you know, AI is doing copywriting now. Everyone can create content with ChatGPT. But if you are an expert copywriter, you can use AI to automate that and move into being something like an AI content strategist: you can set the brand, you can set the tone, you can set anything. And what a lot of people are not realizing is that when AI replaces your job, it gives you an opportunity to upskill and reskill towards AI, so you have the skill set to move into the new, transformed role.
Punit 07:42
Indeed, and I think the jobs will always remain; the job content will change, and the way the job is done will change. It's like the banks 20 or 30 years ago. People were saying all the jobs will go away, it will all be computers, but then somebody needed to put data into the computers. Then they were saying all the branches will go away. Now we are in 2024 and branches still exist; okay, they are fewer in number, and they are more digitized, with more machines. And the same holds here: what a human can do, only a human can do. But the thing is, humans have to upskill and do the things that only humans can do, rather than what the robots can. If you look at the automobile industry, manufacturing and the supply chain are largely, almost entirely, automated. But that doesn't mean there are no humans. It's digitized, it's more efficient, it's more effective, but the jobs are still there. Robots are there too; in parts of that industry there are more robots than humans.
Nektarios 08:38
Yeah, but let's take AI out of the picture. It's the same as when you use software that's 20 years old and now need to switch to new software. People don't want to switch, but they have to, because of the new functionalities, the new industry standards, everything. So it's the same with AI, and I think it's the same with a lot of other technologies.
Punit 09:02
Yeah. And there's another myth, which is people saying AI is more intelligent than humans. There are different views, and I have my own opinion as well, but I would like to hear yours first: is AI more intelligent than humans?
Nektarios 09:23
Currently, I would say no; I think we are not there yet. We are right now at the stage of Artificial Narrow Intelligence, where AI is used for specific tasks to automate different activities. AI being more intelligent than humans is what we call Artificial General Intelligence, or AGI. There was an interesting paper published by different researchers on when AI will exceed human intelligence, and it suggested that with generative AI, the timelines for AGI becoming a reality in specific areas, like manufacturing or driving, have moved up by as much as 20 to 25 years. But I think we will always need a human touch in everything we do; that brings us to a different discussion about autonomous agents and so on. Even when AI becomes more intelligent than humans at specific tasks, we will still need people to be part of the decision-making process, to dig deeper into the data, and to strategize where AI is going.
Punit 11:00
I'm fully with you, and I will say it in different words, because everyone has their own perspective, and that's why we are humans and not artificial machines. What you said is exactly right, because humans are complex, or rather human intelligence is complex: there is reasoning, there is mathematics, there is rationality, there is psychology, there are emotions, feelings, perceptions, and so on. So when it comes to mathematical, rational logic, or doing mundane tasks, artificial intelligence will perhaps be more intelligent than humans, because there we have the fatigue factor. Say I'm working in a supply chain and I have to fit doors; after maybe fitting 100 doors I will get bored and need a break, but the AI will not need a break, and it will work with the same efficiency whether it's the first door or the 1,002nd. So there the AI will be better. But in the sense of reasoning, in the sense of perception, in the sense of being able to feel an emotion and judge, and wherever two plus two does not always equal four, humans will always have an edge. Even the AGI we talk about, at least for now, we don't know the future, nobody knows. We believe the artificial general intelligence or superintelligence that may come will be capable of reasoning, emotions, and so on, but we have no evidence of that yet. And even if it comes, it will need not only emotionality or rationality or perception but many other things, because the human mind is very spontaneous and can perceive and interpret in many ways. Human intelligence has many dimensions, and we will keep exploring those dimensions and learning how we are different from machines. So yes, at the moment AI is not so intelligent, but eventually it will become intelligent in certain parts or aspects of intelligence. Having said that, the collective intelligence of artificial intelligence and human intelligence will supersede what artificial intelligence can do alone.
Nektarios 13:09
They should complement each other. You know, there is another thing called one-shot learning. This is when the human mind learns something from very few examples. For example, if you see a tree falling on top of you, you can calculate in your mind, even the first time it happens to you (because if it had happened before, maybe you wouldn't be here), that the tree is going to fall to this side, so you can avoid it. Children are very good at learning fast in the same way. AI needs thousands or millions of data points to be able to categorize things, whereas children, and humans in general, are much faster at learning something new.
Punit 14:02
And I think this is where we differ. Where we also differ is that we have an element of intelligence which is not definable, because it's not binary, yes or no, especially when it comes to ethical or moral concerns. In the same situation we may go left or we may go right, based on the context, while AI tends to interpret those situations as more of a binary thing, either left or right. I was watching a video that illustrated how AI interprets such things. In that story, a guy hires a robot as a servant and then teaches the robot about religion. The robot asks him questions about religion, about what is good and what is bad, and we all know that good and bad are defined by the situation; it's a context-driven thing. The robot interprets the definitions the guy gives it, which also carry the guy's own biases. Later on, the robot kills somebody on behalf of its master, because it interprets that as the right thing to do: during the teaching it was told, if somebody were to hit you, it would be my duty to protect you. Nobody was actually hitting the master, but the robot perceived that a certain action could hurt him, so it acted and killed the other person, because it was interpreting the ethical or religious dimension of what is good and what is bad. It's a fascinating aspect, but it takes us to the other part of our question: intelligent or not is a separate matter, and humans always have the ability to supervise and to give input, but can AI be autonomous, that is, can it do things by itself?
Nektarios 15:59
So this is the question, and this is the myth that complements the previous one. I was thinking about it this morning. Let's say you go to an Airbnb and the place is nothing like the pictures. The first thing you think of is to leave negative feedback so that other people don't visit that place again. Then you sleep on it, and the next morning the old lady who owns the place comes to you, offers you breakfast, and starts telling her story, that she lost her husband, or whatever life threw at her, and that she is only doing this to be able to provide for her grandchildren. At that point I personally, and most people, wouldn't post that negative feedback, because the human feeling comes into the picture. It's a fact that the place looks nothing like the pictures, but it's also a fact that the old lady, the owner of the house, is trying, and she's doing it to help her family. If you have an AI make this decision, it will focus only on the facts, it will post the negative feedback, and most probably no one would ever go to that place again. That's something I was thinking about this morning. But there are a lot of other examples. For instance, I think it was around 2017, I don't remember exactly, I read somewhere that a chatbot had been built for a hospital to help answer questions. After a month or so, one of the people testing it asked the chatbot whether they should kill themselves, and the chatbot said, I think you should. So they stopped the chatbot right away. There are a lot of examples like this, especially in industries like healthcare or education, where we are not there yet. We can still have machine-driven decision making, but with a human hand to support the decision.
Punit 18:33
I would agree with that. I think the EU GDPR says something similar: you can have automated decision making, but with the right to object to it or to ask for something in addition to that automated decision. For example, a few weeks back my LinkedIn account was hacked, or put on hold. The automated system decided that I was not me, which is possible, because that's the trouble. I tried to get back in and gave all the inputs; I mean, it's my account, so I know what I have to provide. But it chose to reject those inputs, and then it asked me some strange questions. Finally, in one of the checks, something went wrong because of the internet connection. There was an image I had to interpret, saying which version was the right way up and which was upside down, and I clicked twice by mistake because of my mouse, so it decided that I was a robot and blocked me for 24 hours. I started again after 24 hours, something else went wrong, and it blocked me for another 24 hours. Finally, after two or three iterations, I understood how it worked, so I was more cautious and diligent, trying not to act like a robot but to be more human, taking more time before answering and giving all the data it asked for, because by then I knew. That's where I think AI is autonomous in a sense, but at certain times you need, what do you say, human intervention. It was the same thing a few years ago when I was working with Microsoft. I lost my password, and I lost my email address at the same time, because I had created it only a day before, so I lost both the password and the email, but I had paid for the account. The AI, which was primitive at the time, not as smart as today, kept rejecting me, but after all those attempts it finally said: our automated systems conclude that you are not the right person; however, if you like, we can let you talk to a customer care agent tomorrow, and if you convince them that you are the right person, they can restore the account. For the AI to work, it needed certain parameters, like the password, which I didn't have. So I explained to the agent: this is my credit card, this is me, this is the domain name with which I created the account, but since it was the first day I somehow lost both and didn't make a note of them. And he or she activated the account. So that's why human intervention is needed. And that leads to something else: some people say that AI is always, or at least more, fair and unbiased than human beings. That's another myth we have. What do you say about that?
Nektarios 21:31
I think that's a misconception that comes from the fact that AI is a machine mimicking human behavior, mimicking human intelligence. By their nature, machines are objects: whatever they are programmed to do, they will do. That's why a lot of people think AI is fair and unbiased. I'm hoping that eventually we will get there, but I know we are not there yet, and that's because of a number of reasons. Number one, AI is trained by humans. Even though AI learns to identify patterns from data without being explicitly programmed, the data need to be pre-processed, cleaned, normalized, and all of these things are done by humans, by data scientists. So if you don't have the right amount of data, if you don't have the right quality of data, if you don't have the right data, full stop, then the AI will be biased. Consider this scenario in healthcare: say you have to create an AI tool to identify cardiovascular disease, and you feed it data only from people who have cardiovascular disease. This is a case where the AI can identify, with high accuracy, someone who might have cardiovascular disease. But because the AI was never trained on data from healthy hearts, when it sees a healthy heart it might conclude that this person also has cardiovascular disease. That's a false-positive scenario. Imagine you are healthy and some system tells you, with 95% confidence, that you have this condition; and this is how a lot of symptom checkers work online. So that's one thing, the data. The other thing is assumption bias. Take software engineers: most software engineers are men. If you have a team of, say, 40 male and 10 female engineers developing an AI system, then because of the assumption that most engineers are men, the AI system might discriminate against women. Amazon did this when they built an AI recruitment tool, around 2014 I think, and they shut it down, because during training, due to this assumption, any CV that included the word "woman" was automatically declined. When they started refining it, they figured that out, but they couldn't fix it, so they shut it down. So how can we solve this? I would say by diversifying: having a committee that can audit outcomes for accuracy and bias, and having a diverse and inclusive development team and organization build the AI. That, I would say, addresses the main cause and is the main solution.
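To make the cardiovascular example above concrete, here is a minimal sketch in Python of the kind of pre-training data check a data scientist might run: if one class (healthy hearts) is barely represented, the model is prone to exactly the false positives described. The dataset and the 30% floor are illustrative assumptions, not anything from the episode.

```python
# Hypothetical pre-training check: warn when a class is under-represented,
# which is the situation that produces false positives on healthy hearts.
from collections import Counter

def check_class_balance(labels, minority_floor=0.30):
    """Warn when any class falls below a minimum share of the dataset."""
    counts = Counter(labels)
    total = sum(counts.values())
    for cls, n in counts.items():
        share = n / total
        print(f"class {cls!r}: {n} examples ({share:.0%})")
        if share < minority_floor:
            print(f"  -> under-represented; collect more {cls!r} examples "
                  "or rebalance before training")

# Illustrative, made-up dataset: 950 diseased hearts, only 50 healthy ones.
check_class_balance(["disease"] * 950 + ["healthy"] * 50)
```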
Punit 25:31
Yeah, I think that's the only solution, combined with human intervention, because you need to consistently check whether there is bias or not. But we have talked about that. When we talk about myths, there is also one perception, or myth, that AI is only for big companies, that AI is for large corporates that have large projects and programs and need to automate a lot of things. Do you think that has any element of truth?
Nektarios 26:06
You know, when we hear about AI, we always think of expensive solutions. When someone says, let's develop an AI system, the first thing that comes to mind is: how much time and money will this take? And it makes sense, because when you need a lot of data, there are data costs and storage costs, so many small companies and individuals decide not to go into AI because of limited resources. I would say that was the case a few years back, but now you have a lot of options and a lot of solutions. You even have open-source large language models, and you have APIs; OpenAI, for example, makes its LLMs available through an API you can use. Even if you don't want to use ChatGPT itself, you can create a custom chatbot in about 15 minutes, trained on your own knowledge base. So there are a lot of things that small business owners, companies, or even individuals can do, starting with general-purpose AI. If they want to develop an application or a web app, they can, even if they don't know how to code; English is the new coding. That's what the CEO of Nvidia was saying the other day, and I think it's true, because now you just need to know how to write a good prompt and you can get whatever you want out of AI. So yes, I would have agreed with that misconception a couple of years back, but now I don't. I think AI is more accessible than ever; there are plenty of case studies and research showing that AI is now accessible and applicable to organizations of any size and with any resources.
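As a rough illustration of the "custom chatbot trained on your knowledge base" idea mentioned here, the sketch below assumes the official OpenAI Python SDK and an `OPENAI_API_KEY` set in the environment; the knowledge-base text, model name, and question are placeholders, not anything discussed in the episode.

```python
# Minimal sketch: answer questions from a small, hard-coded knowledge base
# by sending it along with the user's question to an LLM API.
from openai import OpenAI

KNOWLEDGE_BASE = """
Our shop is open Monday to Friday, 09:00-17:00.
Returns are accepted within 30 days with a receipt.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(question: str) -> str:
    # Stuff the small knowledge base into the system prompt; a larger
    # base would typically use retrieval instead.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer only from this knowledge base:\n" + KNOWLEDGE_BASE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("Can I return an item after two weeks?"))
```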
Punit 28:22
I fully agree with you. In the modern world, especially in the last six months to a year, AI has become far more accessible. You don't need to develop systems; you can just use ready-to-use tools, and the need to know the technical stuff and do the coding has almost disappeared in most circumstances. Of course, if you want to build something advanced you may need some technical knowledge, but for basic tasks, in an English-like query language, as if you were talking to a friend, you can do many things with artificial intelligence. So we demystified all five myths, as we call them, but there are also a few challenges we keep hearing about: we talked about bias, and then there is hallucination, and there is toxicity in the decisions. What's your view on those challenges? Actually, more than your view, I am interested in how companies can overcome these challenges or concerns when they are building AI systems.
Nektarios 29:32
Okay, so I think these three are interconnected; one is mainly the result of the other. One of the causes of bias is a lack of data, and that's also one of the causes of hallucinations. A couple of weeks or months back, as I recall, if you asked ChatGPT when the Mona Lisa was painted, it would give you a different year every time, because it didn't have enough data on the Mona Lisa. They have fixed that now. But this is one of the main causes of hallucinations, bias, and toxicity: the output suffers when the data is not accurate, when the quality of the data is not up to scratch, or when the data is discriminatory towards a group of people. So how can companies tackle these issues? I would say the number one thing is to have a diverse, inclusive committee that can audit outputs and refine the models. When you test the model and start training it with better examples, you can see that the AI learns and retrains itself on those examples. Then you can always go back, refine your data, get more data, and reiterate this whole workflow until you end up with an acceptable level of hallucinations, an acceptable level of bias, and no toxicity.
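Here is a minimal sketch of the audit-and-iterate workflow described above. The per-group false-positive metric, the 10% threshold, and the `evaluate`/`retrain` callables are hypothetical placeholders for whatever a real review committee and training pipeline would use.

```python
# Sketch of an audit loop: measure per-group error, retrain, repeat
# until the worst-affected group is within an acceptable threshold.
from collections import defaultdict

def per_group_false_positive_rate(examples):
    """examples: dicts with 'group', 'label' (ground truth) and 'pred'."""
    counts = defaultdict(lambda: {"fp": 0, "neg": 0})
    for ex in examples:
        if ex["label"] == 0:          # only actual negatives can be false positives
            counts[ex["group"]]["neg"] += 1
            if ex["pred"] == 1:
                counts[ex["group"]]["fp"] += 1
    return {g: c["fp"] / c["neg"] for g, c in counts.items() if c["neg"]}

def audit_and_refine(evaluate, retrain, max_rounds=5, threshold=0.10):
    """Loop: audit per-group error, retrain, and repeat until acceptable."""
    rates = {}
    for round_no in range(1, max_rounds + 1):
        rates = per_group_false_positive_rate(evaluate())
        print(f"round {round_no}: per-group false-positive rate = {rates}")
        if rates and max(rates.values()) <= threshold:
            break                      # acceptable level of bias reached
        retrain(rates)                 # e.g. gather more data for the worst group
    return rates
```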
Punit 31:30
I can agree with you fully. In managing AI you need diverse intelligences, diverse perspectives, and diverse skills to bring in different viewpoints. If you say, oh, the legal person will just raise a compliance issue and ask me to comply with the law, and you start avoiding perspectives like that, then you will develop an AI which is unfair or biased, with hallucinations and toxicity. But if you allow for all those different perspectives, which in the short term may feel like they create bottlenecks and challenges, in the long term you will create a more sustainable, more equitable, more humane AI, which is, as they call it, responsible AI. That's the word I was looking for: responsible. So that will be responsible AI, when you have multiple perspectives, as you said, and that boils down to the right governance of AI activities right from the start, not only once the AI is developed. With that, I think it's about time to wrap up. It was a wonderful conversation, I enjoyed it very much, and thank you so much, Nektarios.
Nektarios 32:45
I enjoyed it as well. Thank you, and thanks again for inviting me. I loved our conversation and demystifying those AI myths for everyone.
FIT4Privacy 32:57
Thanks for listening. If you liked the show, feel free to share it with a friend and write a review. If you have already done so, thank you so much. And if you did not like the show, don't bother and forget about it. Take care and stay safe. Fit4Privacy helps you to create a culture of privacy and manage risks by creating, defining, and implementing a privacy strategy that includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPP/E, CIPM, and CIPT through on-demand courses that help you prepare and practice for the certification exams. Want to know more? Visit www.fit4privacy.com. That's www.fit4privacy.com. If you have questions or suggestions, drop an email at hello@fit4privacy.com.
Conclusion
One thing is clear: AI is not here to replace us. AI is here to stay, but its development and implementation need to be done responsibly. Nektarios emphasizes that diverse perspectives and a focus on mitigating bias are crucial. The future of AI is not a race towards superintelligence, but rather a collaborative effort between humans and machines.
Nektarios paints a picture of a future where humans and AI work together, each leveraging their unique strengths. AI can handle the repetitive tasks, freeing us to focus on creativity, innovation, and the uniquely human skills that machines simply can't replicate. But building a responsible and ethical AI future requires careful consideration. By fostering diverse teams, prioritizing data quality, and establishing clear governance structures, we can ensure that AI serves humanity, not the other way around.
This episode of Fit4Privacy has shed light on the exciting possibilities and important considerations surrounding AI. So, the next time you hear about AI, don't panic. Instead, get curious! Learn about its potential and its limitations. Become an active participant in shaping the future of AI, a future where humans and machines dance together, creating a world that benefits everyone.
ABOUT THE GUEST
Nektarios Charalampous is the Business Operations Director at Amdocs, where he excels in streamlining processes and empowering teams. With an MBA and PMP certification, his expertise spans beyond traditional operations into the realms of AI, Web 3.0, and blockchain technology. A passionate advocate for innovation and lifelong learning, Nektarios continuously explores the transformative potential of emerging technologies on business and society. His commitment to embracing new perspectives and sharing knowledge makes him an invaluable voice in demystifying AI and its capabilities.
Punit Bhatia is one of the leading privacy experts, working independently and having worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organizational culture with high AI and privacy awareness, and to make compliance a business priority by creating and implementing an AI & privacy strategy and policy.
Punit is the author of the books “Be Ready for GDPR”, which was rated the best GDPR book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 international events. Punit is the creator and host of the FIT4PRIVACY Podcast, which has been featured among the top GDPR and privacy podcasts.
As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one’s values to have joy in life. He has developed a philosophy named ‘ABC for joy of life’, which he passionately shares. Punit is based out of Belgium, the heart of Europe.