How Would AI Impact Us?
In a world where AI is rapidly growing, how can we ensure that its benefits are distributed equitably while mitigating its potential risks to privacy and social justice? The rapid evolution of artificial intelligence (AI) is reshaping our world at an unprecedented pace. From self-driving cars to personalized healthcare, AI is permeating every aspect of our lives.
However, this technological revolution also raises significant questions about its societal implications. As AI becomes more sophisticated and integrated into our daily routines, it is crucial to consider the ethical and practical challenges it poses. In this conversation, Raghu Bala and Punit Bhatia explore the potential benefits and drawbacks of AI and its impact on various aspects of society, examine the ethical considerations surrounding AI development and deployment, including issues of privacy, bias, and accountability, and discuss strategies for ensuring that AI is developed and used responsibly, benefiting humanity as a whole.
Transcript of the Conversation
Punit 00:00
Would AI impact us? Would there be robots all around, or would it quietly penetrate our lives? Would every one of us be using it, or will only some of us be using it? Will we lose our jobs? Or will it complement and help us co-drive, co-pilot things? Well, these are fascinating questions. And of course, these are interesting questions, sometimes scary, sometimes opportunistic. And how about asking someone who is a speaker, a mentor, a coach, a guide, a professor on AI, and an entrepreneur? I'm talking about none other than the CEO of Synergetics, a learning facilitator on AI at MIT, also at Wharton and many other universities, an author and speaker, and even a mentor at IIT. I'm talking about none other than Raghu Bala, whom I had the privilege to meet when I was doing MIT's course on the business implications of artificial intelligence, and he's a very learned man. Let's go and talk to him and tap into his wisdom.
FIT4Privacy 01:12
Hello and welcome to the Fit4Privacy podcast with Punit Bhatia. This is the podcast for those who care about their privacy. Here, your host, Punit Bhatia, has conversations with industry leaders about their perspectives, ideas and opinions relating to privacy, data protection and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.
Punit 01:41
So here we are with Raghu Bala. Welcome, Raghu, to the FIT4Privacy podcast. It's a pleasure to have you. You spend your energy and your time in many dimensions: university teaching, coaching, guiding, mentoring, and even entrepreneurship. What are the AI trends or things going on in the market that fascinate you and that you expect to remain relevant in the coming, say, three to five years?
Raghu 02:06
Yeah, so here is the conclusion I've come to. I think some of it is quite obvious, and some of it is still sort of unfolding. I think AI is a technology similar to the telephone, for example, or, after that, the internet. And why I mention that is because when the telephone came, almost every business needed to have a telephone, otherwise you were not in business. It's just a common, cross-cutting technology. And when the internet came, every business, whether you were a legal firm, or in retail, or in banking or insurance or whatever, needed to have an internet presence: a website, email, and all of your employees on the net. So it was a technology that was cross-cutting. Similarly mobile: almost everyone uses mobile. So AI is one of those things where I think we have gone past the early stage, because when I was at Yahoo about 15 or 20 years ago, we were using AI then, support vector machines and those types of techniques, to do ad placements that were relevant to the story on the page, so contextual advertising. AI was then the domain of some of the elite companies in the Bay Area, and the technology companies have used it for a number of years. But what has now really entered public awareness, both in B2C and B2B contexts, is obviously Gen AI, and tools like ChatGPT making it very easy for anyone and everyone to use and attain AI. Now, having said that, I think all businesses will start to use it. It'll be the equivalent of the internet or the telephone: you have to use it, or it will just be part of everything that you do. And if you don't use it, you'll be a laggard and probably lose market share, lose your competitive edge. So the first sort of insight is that everyone will start to implement it. There are no ifs, ands, or buts. Whether you're a nonprofit, a government, or a for-profit, it will be so commonplace that everyone will use it. Now, the extent to which people use it will differ. Some people might use it a little bit, to automate some functions that are manual today; some people might augment their human capital with it; and some people will use it a lot, in the sense that they might change the way they do business and automate more than others. So it might differ by company, differ by industry, and so on. That's my initial feeling; there's a lot of other things we can talk about, but it will be very, very commonplace. The whole industry is going to go through a sort of retooling and retraining phase where everyone will have to get up to speed. And just to give some quick anecdotal evidence: for the MIT Sloan courses on AI that I teach, enrollment used to be about 300 to 400 per class online, roughly speaking, about a year ago. But since last August, it's between 1,000 and 1,200 per class. It has gone up three to four fold. And the reason why is that suddenly a light bulb went off for everyone, across the world actually. You have taken the class yourself, and you know students come from across the world, and everyone feels like, whoa, if I don't retool and retrain myself, I might be left behind. So companies feel it, employees feel it. Everyone feels it.
So there's a huge deluge of people really wanting to catch on here, because if you don't, you might be left behind. That feeling is there. So, anecdotally, there's a great push everywhere to implement AI.
Punit 06:07
That's for sure, and it's very well put. And in fact, the class we are talking about is MIT's executive education, or distance education, course called Business Implications of Artificial Intelligence. It runs over six to eight weeks, and it's done very well. It explains how a business can leverage AI and what the implications are, and it does so in a very non-technical, business-oriented way. I can say it's one of the best courses, one of the best training programs, I have attended, and not because you are here. Everyone who asks me, I say you can do anything, but if you do that one, it gives you a real toolkit for how to implement AI in a business context. But since you used a very good analogy of phones, computers and the internet: those technologies evolved over 25 to 30 years. If you look at the phone, first it was a device on which you needed to book trunk calls. Then calls became instantly available. Then it became a handphone, a big one, the Ericsson ones, then the pager, and eventually the mobile phone, which is now a smartphone. So that's about 50-plus years of journey. If you look at the internet, we have about 25 to 35 years of journey, which started with email communication, then networks, and so on, and now everything is on the cloud, fast-tracking the journey. And the same thing with computers: it was on the desktop, now it's in the hand, on the wrist, everywhere, even in rings. So how do you see this journey unfolding? Because this is an important shift in the industry, or, what do we say, like we say industrial age and information age, this is going to be an age of AI; I don't know what name we are going to use. So, let's classify it into three parts: the short term, say three to five years, maximum up to five years; the medium term, say five to 15 years; and the long term, 20 years and above. How do you see those three evolutions happening? What comes to mind, that this will happen, that will happen? Because right now there's a lot of hysteria, there's a lot of fear of missing out, and people are trying to understand this technology more than use it effectively, I would say.
Raghu 08:18
Yeah. So for the short term, I think, first of all, there are two sorts of things going on in the AI space. First of all, just for the listeners, I want them to understand that AI itself is a very broad category of tools. It's not just one thing called AI; it's many, many things that make up AI, so it's got many tools in it. The way I like to explain it is that AI is very similar to your brain. There's the left brain and the right brain. The left brain is very logic oriented; the right brain is very creative. The left-brain type of AI has been around for a long time. It's called discriminative AI: that is your supervised learning, unsupervised learning, reinforcement learning, and so on. Just as an example, for those who use Netflix: when you watch Netflix, it'll say, based on the movies you watched, you might like this. That's because it's able to use your old data to figure out a recommendation for you, to say, hey, you know what, you might be interested in this movie. If you go to Amazon, you'll see "those who bought this also bought this". It's based on the purchasing habits of all the previous people who bought the particular item you're looking at. Those types of AI, recommendation systems, have been around for a while. Now, generative AI is the right-brain type of AI, which is almost creative in its way. The experiment people can try at home is to put a prompt into ChatGPT and run it twice in a row. The first time you run it, it will give you one response. The second time you run it, it'll give you a second response. The reason it does that is, it's like me telling Leonardo da Vinci, go and paint the Mona Lisa once, and then the next day, on a clean slate, paint it again. I'm almost 100% sure the second one and the first one won't look alike. It will look slightly different, because the situation is completely different. Because it's a very creative endeavor, it's not a repeatable process. So generative AI is non-deterministic; discriminative AI is very deterministic. If you go today and ask for a recommendation on Netflix, tomorrow it will probably be the same. Of course, if time passes, like six months later, you have watched more movies, movie titles have changed, your habits have changed, whatever it might be, then it might recommend different movies over a period of time, but not immediately. It's somewhat deterministic. So those are two broad branches, really taking AI and splitting it into two types. Now, coming back to your question about three to five years: I think a lot of different professions are going to see some change, and the first thrust of AI is to use it for augmenting some of the vocations and human functions. We are already starting to see that: the word co-pilot has been used quite a lot, where AI is your co-pilot. And it's kind of interesting marketing from all the people who sell AI or are involved in AI, because they don't want to inject fear into the community by saying that AI is the pilot and you are now co-pilot to it. So you are still the pilot, but it's your helper. That's how it's being used, and that's okay. I think that's good, that's very healthy also. So AI is helping you do your job, and maybe do it faster, better, more efficiently. And so that is the first use of AI, and I think that is unfolding in many professions over the next three to five years.
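To make that non-determinism concrete, here is a minimal sketch in Python that runs the same prompt twice. It assumes the OpenAI Python client with an API key in the environment, and the model name is just an illustrative choice; any chat-style model would show the same behaviour.

```python
# Minimal sketch: run the same prompt twice and compare the outputs.
# Assumes `pip install openai`, OPENAI_API_KEY set in the environment,
# and an illustrative model name (swap in whatever model you have access to).
from openai import OpenAI

client = OpenAI()
prompt = "Describe the Mona Lisa in one sentence."

for attempt in (1, 2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name (assumption)
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,      # non-zero temperature samples tokens, so runs differ
    )
    print(f"Run {attempt}: {response.choices[0].message.content}")
```

With the temperature set to 0, sampling becomes close to greedy and the two runs will usually match, which is about as deterministic as a generative model gets; a recommendation system, by contrast, returns the same ranking for the same inputs by construction.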
That will be the primary thrust. Now, having said that, for five years and beyond, there are some things I can already foresee. We are beginning to see some new innovations, and I'll mention a couple of things from Tesla. Tesla, in the US, announced a robotic cab, a robotaxi, that they want to release in October. I was talking to a friend of mine who owns a Tesla and actually uses their full self-driving system. He lives in Toronto, and he said, I drove 50 miles with my daughter and I didn't even touch the steering wheel. It drove even in inclement weather, and it drove us all the way, even in heavy traffic, no problem. It dropped off my daughter, I came back, and nothing happened. It did everything; I didn't have to touch the steering wheel at all. Now they're trying to roll this out such that there are no humans in the car except for the passengers. And this is a threat to companies like Uber and Lyft and others, because the car can do things on its own. Now, if you take that a step further, you could be at home, give your car a list, and order some groceries from your nearby grocery store, or send it for a food pickup, and so on. Then the car goes and picks up the groceries or the food, so you don't need to go. This might even be a threat to things like DoorDash, because your car can do the delivery; the drivers are no longer needed. So what might happen is that you have these fleets of cars operating autonomously. You've gone from this co-pilot, which is an AI bot, towards something called an autonomous AI bot, not just an AI bot but an autonomous AI bot. That, I think, will unfold, and it may unfold even sooner; like I said, the robotaxi is coming up soon. Another thing that Tesla has, I forget the name right now, starts with S, is their bot, actually a humanoid bot that they're going to release. I think they're projecting the price to be $20,000 per bot, and I think that price point is a bit high. I actually have a kind of internal family bet with my wife that every household in America will have a bot, a domestic bot, in their home by 2030, by the end of this decade, and that you'll be able to buy one at a store like Costco. I don't know whether in Europe, where you are, you have that, but it's like a big warehouse store, as we call it, where you can buy things at wholesale prices. So it'll be at Costco for, I was thinking, like 299 to 599, different models and so on, and these bots can do domestic work. They can clean the house, fold your clothes, put on the wash; they can even do some basic cooking, microwaving, things like that. I think those types of things will come about. Vacuum-cleaner bots have been around for a while, and there's also a gardening bot around that does gardening and things like that. I don't think this is a huge stretch. In fact, there's a project you can check on YouTube called ALOHA from Stanford, which shows that the dexterity of these bots is amazing. One of the problems with bots up until now has been dexterity. For example, I'm just going to show you a piece of paper. Our human hands are so dexterous: I can hold a thick stack like this.
But I can also hold something so thin and turn it like this, one page at a time. The thinness of this, the thickness of this: I have the dexterity in my fingertips to do that, and that was quite difficult for bots to do. But with ALOHA, you will be amazed that it can. It even holds a pill to give to a human for ingestion, down to folding clothes, opening drawers, and doing all kinds of very, very dexterous things. I think those types of bots will become widely available, at least on the B2C side, within three to five years; they'll be sort of everywhere. And that also makes you think about what kind of business models will come about. Like we talked about, some disruption to Uber and Lyft and DoorDash and things like that could be very possible. In fact, people could be owning fleets of vehicles and using them to derive income, passive income, from a fleet of cars, because these cars are driving around for you. They are like robot cars; you become a robotaxi fleet if you own, like, 10 Teslas, and they drive around and come back and park themselves in your parking lot every night after driving passengers all over the place. So new businesses, new business models, will come about once you start to have these types of autonomous agents, whether in the physical world or, as with some of the things I'm working on with Synergetics, my startup, in the virtual world: having digital twins and autonomous bots in the digital world, able to do various things. I'll give you a quick example. Over the weekend, I was catching up on my books, meaning accounting books and bookkeeping: putting in bank entries, reconciliation, credit card entries, all the usual stuff. It's painful work. It takes time, trying to get your profit and loss and balance sheet and other things, and it had piled up for a while, so I had to do it in one shot. After spending a couple of days catching up, I said, this is very arduous work, let me do something else. Let me give this whole PDF to an LLM, a large language model, and tell it to strip out all the credits and debits. And it did it. Whoa, it could do it. After that, I said, okay, I use QuickBooks for my accounting, so now put these debits and credits into QuickBooks format. It did that too. Okay, now write me a piece of code that would use the QuickBooks API to insert these debits and credits into the books. It even generated the code. Then I said, wow. Then I got one of my developers involved, because it requires some security things to be worked out to enter entries into your actual books, not just generic books. So we're working on a small agent; the technology we are building at Synergetics is agentic workflows in AI, and we're building a small accounting agent that can do this kind of work. Why I mention this example is that it is very practical. Now I don't need to do the drudgery of entering entries one by one; I can automate it and use my time more beneficially to build my business instead of working on this stuff, right? So I wouldn't want to characterize AI as something that necessarily takes away or replaces jobs; it makes you a lot more efficient with your existing time, so you can pursue more creative endeavors instead of doing that kind of rote work.
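As a rough illustration of the workflow Raghu describes, here is a minimal sketch in Python: it asks an LLM to pull debits and credits out of statement text and writes them to a CSV that an accounting tool could import. The model name, prompt wording, helper names, and CSV layout are all assumptions for illustration; a real agent would add authentication, validation, and the actual QuickBooks API integration.

```python
# Minimal sketch of the bookkeeping workflow described above:
# 1) extract transactions from statement text with an LLM,
# 2) write them out in a simple CSV an accounting tool could import.
# Model name, prompt, and CSV layout are illustrative assumptions.
import csv
import json
from openai import OpenAI

client = OpenAI()

def extract_transactions(statement_text: str) -> list[dict]:
    """Ask the LLM to return the transactions as structured JSON."""
    prompt = (
        "From the bank statement below, return a JSON object with a "
        '"transactions" key: an array of objects with keys '
        '"date", "description", "debit", "credit".\n\n' + statement_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # illustrative model (assumption)
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # request well-formed JSON back
    )
    return json.loads(response.choices[0].message.content)["transactions"]

def write_import_csv(transactions: list[dict], path: str) -> None:
    """Write a CSV whose columns can be mapped to journal-entry fields on import."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["date", "description", "debit", "credit"])
        writer.writeheader()
        writer.writerows(transactions)

if __name__ == "__main__":
    with open("statement.txt") as f:          # text already extracted from the PDF
        write_import_csv(extract_transactions(f.read()), "journal_entries.csv")
```

Posting the entries into the books themselves would go through the accounting system's own API with proper credentials, which is exactly the security step Raghu says needed a developer.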
So anyway, in three to five years, I think these autonomous agents, whether software agents like what I described or physical agents like robots and cars, should become very commonplace, before the end of this decade for sure. Now going beyond that, five years and beyond, or even after 20 years: longer term, it's harder to imagine, but a few thoughts come to mind. I think longer term, humankind, the workforce, will look very different. I think every company will be a combination of humans and digital humans. And I think a lot of social change will happen, in a lot of ways, some of it now and more over time. Most countries follow a sort of capitalistic model, and we are all on a five-day week and things like that. Because AI is doing a lot of the drudgery, the repeatable work, I think governments will move towards a four-day week. That's already happening in some places, but I think it'll become very commonplace, like a three-day weekend. Now, with a three-day weekend, that frees the human to do other things. So there will be more emphasis on the arts rather than the sciences: more emphasis on arts, whether it be theater, music, movies, content, things like that. If you look at that as a kind of broad directional change: the amount of streaming content, even the channels we can get on OTT and so on, has increased, and the amount of content I think we all consume has increased a lot. A lot of people binge-watch a lot of content: series, movies, thrillers, romance, whatever it is. You consume a lot more content than you consumed before. I think that's a trend overall. So now that you have more free time on your hands, you're going to consume more content. I think the media industry is going to take a big swing upwards, because there can be a lot of content, and that content creation is human content. I think no one would want purely AI-generated content, although that is also there, with images and movies and deepfakes and all of that going on at one side. But I think content creation will become very big, bigger than it is right now, and we are already seeing that consumption shoot up. So that's going to be a big trend: a shorter work week, more leisure time, more sports, more movies, more music, more concerts; all of these media types will grow. That's the second trend. I also think the retirement age in most countries will change. Actually, I've got two schools of thought on whether it will become lower or higher. It could go higher because people need to work longer, because things are getting more expensive, an almost inflationary scenario, so they have to work longer. That could be one way it goes. The other way is if governments go towards, like, a universal monthly payment, I don't know what they call it, a universal monthly check that they send you, like a stipend, to help you live. Some countries have experimented with it; it's more of a socialist model. If they go towards that, then I think the retirement age will come down, not go up. So I think these types of structural changes in society will happen over time.
I really believe that, but it's going to be a political process, a social process: adjustment, dislocation, retraining, retooling, a lot of things. So I don't think those changes will happen very quickly. It'll happen bit by bit, but in about 10 years you'll see that everything has shifted. And we'd even be surprised, like, wow, we used to be doing these types of things 10 years ago; nowadays I don't do that anymore, AI is doing it, and I am spending my time doing this other thing. So I think those structural changes will happen in society, and they'll take some time to play out. Those are my thoughts at this point in time.
Punit 20:59
No, I can imagine. It's difficult to predict the future, but what we see is that there will be a huge structural shift, and you highlighted in detail what kinds of shifts are possible. Nobody knows which direction we'll go, or maybe we'll go in both directions: some countries explore left, some countries explore right, and then we find the center. But there will be interesting times, interesting opportunities, interesting possibilities, and many, many ways of dealing with or taking those opportunities. You mentioned the social benefits: those exist in some countries, but some people take advantage of them, some people make the right use of them, and some people also misuse them. And that's always the risk with bigger opportunities, bigger possibilities, and AI is certainly one of the bigger ones we've ever seen; or rather, the speed at which it will bring change will be unforeseen or unfathomable. So, if we look at the consequences, especially the negative consequences, and zoom into the human aspect of it, the privacy aspect of it, what kind of risks do you see? Because there's also this fear, especially in Europe, because people are talking about fundamental rights, human rights, and we also have the EU AI Act, which is a product safety legislation, aiming more to certify and ratify the products or AI systems as safe for usage, and also to alert people that it is an AI system. What kind of privacy risks do you see happening and materializing?
Raghu 24:17
Yeah, so that's a good question. The problem with AI is a question of observability. The thing is, it's like a black box. A lot of people don't know what's going on inside the black box. In fact, in a way, I think even engineers don't know what's going on inside, because if you look at a neural network, it's got thousands upon thousands of layers and so many variables going in. I don't think any human, even the brightest minds in Silicon Valley, can say, okay, this layer is doing exactly this computation, because it's just going on at such a fast pace with so much data. The ability to observe what's really going on is quite limited; it's a difficult task. Now, once something is a black box, and something goes in and something comes out, the question is whether this thing is biased, whether this thing is protecting your personal information; all of that comes into question. So, rolling back to LLMs and things like that: when we deal with enterprise customers, almost the very first thing they tell us is, I don't want my data to be out there in the public domain. I don't want my data to be with the big companies who might misuse it for their own benefit. So, with large language models, there are two schools of thought. There are large language models like OpenAI's, Claude, and Gemini, coming from big companies like OpenAI, Amazon, Google, and so on. I think those types of models will compete against Google's search engine or the Bing search engine, or these search engines will incorporate these LLMs. These are things people can ask about everything: science, art, math, geography, history, everything they can ask on an interface like ChatGPT, and even on the open-source models. Yesterday, I was playing with Le Chat from Mistral; it's a French one, from France. It's an open-source model which you can use just like ChatGPT to do various things. Those are large language models which cover anything and everything. Now, when it comes to the enterprise space, they are a bit more guarded. They want verticalized large language models that only have their information, where their information doesn't get leaked to the public domain and is used for their own purposes and nothing else. I think when companies start to implement AI, they're going to want this kind of thing, and the hosting is going to be in their own cloud or on premises or whatnot. So that hosting model will be very privacy oriented, so that their information doesn't leak. Now, if you delve deeper into certain domains or certain jurisdictions: let's look at a domain like healthcare and the data in the US. We have the HIPAA laws. The HIPAA laws require the data to be encrypted, during transmission and in storage, so that personally identifiable information is not leaked. You don't want it known that John Doe had this particular illness or whatever health problems; that's his private information. So you want to block that out; you can anonymize it, and so on, but you cannot show his information to people without permission.
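A minimal sketch of the kind of masking step this implies, applied before any text leaves your environment for an external model. The regular expressions and placeholder tags are illustrative assumptions; real deployments use dedicated PII-detection tooling, and bare names, for instance, need entity recognition rather than regexes.

```python
# Minimal sketch: mask obvious identifiers before text is sent to an external LLM.
# The patterns below are illustrative assumptions, not a complete PII detector;
# a bare name like "John Doe" would need NER-based detection on top of this.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact: john.doe@example.com, 555-867-5309, diagnosis on file."
print(mask_pii(record))   # -> "Contact: [EMAIL], [PHONE], diagnosis on file."
```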
Now, in the jurisdiction space, as you very well know, GDPR and so on have data residency requirements, such that the data for people in Europe needs to live in Europe, and in some jurisdictions it has to live within the boundaries of that particular country itself; it cannot leave that country, because then you're breaking the law. So those types of things become another way of governing where the data resides, and those constraints will be put on AI as well. So you've got jurisdictional requirements like data residency, you have regulation, and you have the requirement of companies not wanting their data to leak outside; these immediately come to mind. And then you have the EU AI Act, which has, I think, four tiers for classifying applications: it goes from mild, to more serious security requirements, and then the fourth one where you really have to safeguard things. There are four layers, going from doesn't-need-to-be-very-secure to needs-to-be-very-secure, correct? So all of these things are playing out, but I think it begins with having tools for observability. If you don't know what's going on inside, that's a problem. Another thing, before I forget to mention it: when we counsel companies on data collection, we have to counsel them on bias. I was invited to be a keynote speaker at an event in Africa, in Nigeria in fact, called AI in Nigeria, and another called AI in Africa, coming up in September. Now, if you look at the African continent, it's a good example. When you use LLMs from other countries, the data in that LLM does not represent the population in Africa; it's got nothing to do with the population in Africa. It's about Caucasian people, not African people. Or if you look at a data set in a country like China, they obviously want to use their local data set, not some other country's data set, which has got nothing to do with the people in their country. So this is another problem. It may not be malicious; it is just the wrong data set. You have to collect the right data set. In some cases it's biased, but in some cases you're collecting from the wrong place, so you're not going to get what's representative of the population you're trying to serve. That's the problem. So, when you collect data for these language models, it has to be targeted at or collected from your people. Otherwise you can get the wrong output, output that does not reflect the interests or tastes of the people in your country. So those are the types of things we also have to think about as we use these models: not to take everything as if AI is all the same. It's not the same; it's very different, because tastes differ. Even if you take a large language model for clothing sizes, let's say for retail, different people in different countries have different statures and sizes, so it's not going to be the same measurements. There are a lot of nuances that people have to care about in the data: where it's collected, how it's collected, how it's cleaned, how it's stitched together, and so on, before they can start using AI. So that's another area to think about; I think it's very important.
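One way to make the "right data set" point concrete is a simple representativeness check: compare group proportions in a training set against a population baseline and flag the gaps. The column name, groups, baseline figures, and thresholds below are made-up assumptions for illustration.

```python
# Minimal sketch: flag groups that are under- or over-represented in a
# training dataset relative to a population baseline. All column names,
# groups, numbers, and thresholds are illustrative assumptions.
import pandas as pd

# Hypothetical training data with one demographic attribute per record.
dataset = pd.DataFrame({"region": ["north"] * 700 + ["south"] * 200 + ["east"] * 100})

# Hypothetical population shares the data should roughly reflect.
population_share = {"north": 0.40, "south": 0.35, "east": 0.25}

observed_share = dataset["region"].value_counts(normalize=True)

for group, expected in population_share.items():
    observed = float(observed_share.get(group, 0.0))
    ratio = observed / expected
    status = "UNDER-represented" if ratio < 0.8 else "over-represented" if ratio > 1.2 else "ok"
    print(f"{group}: dataset {observed:.0%} vs population {expected:.0%} -> {status}")
```

The same comparison works for any attribute that matters to the model's users, and it is the kind of check that catches "collected from the wrong place" before training rather than after deployment.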
Punit 30:56
Of course. I think it goes back to the healthcare model: if clinical research or trials have happened in the United States and you want to apply the drug in Europe or Africa or Asia, that drug needs to be tested on the relevant population. Exactly. And it's the same thing with AI: if your model was built in the US, or built in Europe, it may not be fit for Asia; it may not be fit for Europe or Australia. So there needs to be retraining, and that's nothing unusual. But the thing is, we don't have such laws or such instruments in government. If you have a drug, there's always a medicines agency or the FDA, which approves the drug in each country and looks into these parameters. We don't have such infrastructure for AI systems; that will come over time, say in 20 or 30 years. And the second dimension, when you talk about privacy: yes, there are data residency constraints in some countries, and then, like the EU, they say you need to have data protection equivalence. So, if you're moving data to another country, it should have equivalent protection. Maybe when the AI systems come, they would expect similar standards to be applied. The EU AI Act is one, the UN's AI for humanity effort is another, the OECD's is another. So we will see that evolution, in which hopefully we will get a NIST-like or ISO-like standard, like ISO 42001, which would become more or less the norm for everyone, rather than everyone having their own standard and certification or conformity assessment. But again, the message I gather from your input is that these are very interesting times; it will penetrate our lives in one way or the other, just like the internet did, just like the phones did, just like the computers did, or like electricity did 100 years ago. Initially there were horror stories about that also. So there are horror stories about this, but it is going to come in one way or the other.
Raghu 31:46
And everything. Like someone said yesterday, it's a very, very cliché example, but it'll be in your toaster oven. You'll talk to your toaster oven; you may not even press buttons in the future. And one other thing I'll mention, a kind of interesting fact: imagine you had an old-fashioned VCR or a DVD player. The vocabulary of the LLM you need to talk to the VCR is very simple: start, stop, rewind, forward. That vocabulary is small, so that LLM will be small. It doesn't need to know much; it doesn't need to know the history of China or something like that. It only needs to know start, stop, and be able to understand, comprehend, and trigger processes. But when you talk to some other device, like your car, its vocabulary is very different. You'll say, okay, take me to the nearest Starbucks, or take me to my daughter's school. It will need to understand spatial knowledge. But again, you would not talk to your car about something irrelevant, about software or whatever; you won't be talking to the car about that. You'll be talking to it about going places, or how the traffic is. So what we'll see is that AI will permeate every facet of life, but with different vocabularies and different brains for each of these elements, because each is only relevant to what it is supposed to do. It'll be constrained to that vocabulary. That's an interesting thing to think about, but that's where we are going with a lot of things. It won't all be large language models; there will be a lot of small language models, and that's what will happen.
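As a toy illustration of such a constrained vocabulary, here is a minimal sketch: a tiny command map for a player-style device, with anything outside the vocabulary refused. The commands and handler names are illustrative assumptions; a real device would put a small language model in front of this map to normalise free phrasing rather than match strings.

```python
# Minimal sketch: a device agent whose "vocabulary" is deliberately tiny.
# Commands and handlers are illustrative assumptions; a real device would
# use a small on-device language model to map free phrasing onto them.
from typing import Callable

def start() -> str:   return "playback started"
def stop() -> str:    return "playback stopped"
def rewind() -> str:  return "rewinding"
def forward() -> str: return "fast-forwarding"

VOCABULARY: dict[str, Callable[[], str]] = {
    "start": start,
    "stop": stop,
    "rewind": rewind,
    "forward": forward,
}

def handle(utterance: str) -> str:
    """Map an utterance onto the device's small vocabulary, or refuse it."""
    for command, action in VOCABULARY.items():
        if command in utterance.lower():
            return action()
    return "Sorry, that's outside this device's vocabulary."

print(handle("Please start the movie"))        # -> playback started
print(handle("What's the history of China?"))  # -> outside this device's vocabulary
```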
Punit 32:42
Absolutely. And since you gave the example of the car, it reminds me that Mercedes and BMW now have artificial intelligence, or say a language assistant, built in. Somebody told me that you can ask your Mercedes, what do you think about BMW? So I asked it, and it said: it's exactly what you think, otherwise we wouldn't be having this conversation. Smart answer. But of course, in about two months they changed that answer, probably because people complained or were putting it on the internet, so they're also adapting. So it is the ability to have that intelligence to give the right and also the appropriate answer. It is not saying Mercedes is better, or BMW is better, or Audi is better; it gives an appropriate answer in context, and that's even smarter than a human being, because there is no emotion attached. With that, I would say, while it has been a fascinating conversation, if somebody wants to tap into your knowledge, maybe get mentorship, coaching, or speaking from you, what is the best way to contact you?
Raghu 35:14
Yeah, LinkedIn is the best. My name shows up on the screen, Raghu Bala; I show up fairly easily, I'm there, so it's the easiest way to reach me. And from there, we can discuss or take it offline to email or other places, for sure.
Punit 35:29
So, with that, in the interest of time, I have to say thank you so much, Raghu. It has been an enlightening conversation. You highlighted the use cases in a very simple and elegant manner and showed the depth and breadth of AI and what the possibilities are. I'm sure people will enjoy it. And I would say, thank you so much.
Raghu 35:46
Thank you, Punit. Thank you for having me.
FIT4Privacy 35:48
Thanks for listening. If you liked the show, feel free to share it with a friend and write a review. If you have already done so, thank you so much. And if you did not like the show, don't bother and forget about it. Take care and stay safe. FIT4Privacy helps you to create a culture of privacy and manage risks by creating, defining and implementing a privacy strategy that includes delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPP/E, CIPM and CIPT through on-demand courses that help you prepare and practice for the certification exam. Want to know more? Visit www.fit4privacy.com, that's www.fit, the number 4, privacy.com. If you have questions or suggestions, drop an email at hello(at)fit4privacy.com.
Conclusion
The rise of AI presents both immense opportunities and significant challenges. While AI has the potential to improve our lives in countless ways, it is essential to approach its development and deployment with caution and foresight. By addressing ethical concerns, mitigating risks, and ensuring equitable distribution of benefits, we can harness the power of AI to create a more just and prosperous future for all.
As we move forward, it is imperative to engage in ongoing dialogue and collaboration among policymakers, technologists, and society at large to shape the future of AI in a responsible and ethical manner. Only by working together can we ensure that AI serves as a force for good, benefiting humanity and promoting a more equitable and sustainable world.
ABOUT THE GUEST
Raghu Bala is a distinguished technology thought leader, entrepreneur, and author whose expertise spans a broad spectrum of cutting-edge domains, including IoT, AI, blockchain, mobile technologies, cloud computing, and Big Data. With a unique blend of deep technical knowledge and robust business acumen, Raghu has established himself as a visionary in Internet-related ventures. As the CEO of UnifyGPT Agentic Platform and an instructor for MIT Sloan's AI, De-Fi, and Blockchain courses, he is at the forefront of shaping the future of technology. Raghu's impressive resume includes co-authorship of the Handbook on Blockchain and contributions as a Contributing Editor to Step into the Metaverse. With four successful exits under his belt and previous experience at industry giants like Yahoo and PwC, Raghu brings invaluable insights to the table. His affiliations with prestigious institutions like Wharton, MIT, Columbia, Stanford, and Princeton, coupled with his role as an author, speaker, and mentor at institutions like IIT Madras and VIT, further solidify his reputation as a leader in the field of technology and innovation.
Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high AI & privacy awareness and compliance as a business priority, by creating and implementing an AI & privacy strategy and policy.
Punit is the author of books “Be Ready for GDPR” which was rated as the best GDPR Book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts.
As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one’s values to have joy in life. He has developed a philosophy named ‘ABC for joy of life’ which he passionately shares. Punit is based out of Belgium, the heart of Europe.