Dec 13 / Punit Bhatia and Dr Cari Miller

Responsible Acquisition in an AI World


In a world where Artificial Intelligence is seamlessly blending into the fabric of everyday life, how often do we consider how we acquire it responsibly? Beyond the exciting discourse on Responsible AI lies the often-overlooked yet critical domain of Responsible Acquisition of AI. When government agencies and private organizations procure AI systems, are they ensuring these acquisitions align with ethical, legal, and operational best practices?

This question becomes even more pressing in light of a recent memorandum issued by the U.S. Office of Management and Budget (OMB), setting expectations for federal agencies to adopt a structured and principled approach to AI procurement. Why now? And how does this compare to the EU’s broader AI Act? In this episode, we delve into these pivotal questions with none other than Dr. Cari Miller, an expert at the intersection of Responsible AI and procurement strategy. Join us as we explore the implications of these guidelines for fostering trust in AI systems, ensuring compliance, and avoiding vendor lock-in.

Transcript of the Conversation

Punit 00:00
Responsible Acquisition in an AI world. Yes, we often talk about responsible AI. But what recently caught my attention, thanks to a post from one of my fellow colleagues, is that there is an aspect called Responsible Acquisition. That is, when you are doing procurement, when you are doing sourcing, when you are working with other agencies, other entities, you need to do that in a responsible manner, because you may be sourcing or procuring something which is an AI system, or something which has elements of AI incorporated into it. And as I came into touch with this concept, I also learned that something has been issued by the US Government, the White House as we call it, the Executive Office of the President, through the Office of Management and Budget, wherein they pushed a memorandum to the heads of executive departments and agencies on advancing the responsible acquisition of AI in government. So as I came across that, I thought it is relevant, because it helps us create or enhance digital trust, as we usually talk about, and I thought we should talk about it with someone. And I found none other than Dr. Cari Miller, who always talks about responsible procurement and responsible AI, and combines the two: responsible procurement in the AI world. So, who is better than her to go and talk about it? Let's go and talk to Dr. Cari Miller about this important topic.

FIT4Privacy 01:53
Hello and welcome to the Fit4Privacy Podcast with Punit Bhatia. This is the podcast for those who care about their privacy. Here, your host, Punit Bhatia, has conversations with industry leaders about their perspectives, ideas and opinions relating to privacy, data protection and related matters. Be aware that the views and opinions expressed in this podcast are not legal advice. Let us get started.

Punit 02:21
So here we are with Dr. Cari Miller. Welcome, Dr. Cari, to the Fit4Privacy Podcast.

Dr. Cari 02:27
Thanks for having me.

Punit 02:28
It's a pleasure to have you. And it was your post a few weeks back, about this directive that was issued to the federal agencies, that made me curious to understand: what is this responsible acquisition in the AI world? Can you talk about it?

Dr. Cari 02:48

Yeah, we expected this memo to come. OMB, the Office of Management and Budget, which is a section of the White House, had released a memo in March, M-24-10, and this is an extension of that. So M-24-10 did some things with organizational structure and expectations on that front, with reporting and things like that. This is the extension that really gets to operationalizing AI. And so, they focused on acquisition, and the areas they focused on were how they expect these agencies to collaborate, internally and across agencies. So that was really fascinating. They really focus on rights and safety, which is a little different from the EU, where you would call that high risk. And they had a couple of call-outs with specific AI technologies that were interesting, and they were very focused on competition in acquisition.

Punit 03:55
So essentially, we usually talk about AI being responsible. Here we are talking about, shall we say, a world that has a lot of AI elements, and in that world, everyone who is acquiring or procuring something needs to be responsible in those acquisitions, like you mentioned, the collaboration between government entities or collaboration with external entities. But why did they need to talk about it now? What was the hurry? What was the reason at this moment?

Dr. Cari 04:31

Yeah, I think with AI governance, you know, once you set up your organization and you have people in place, so we have a lot of Chief AI Officers in place now, and we have policies in place, we know what the right kind of AI is that we should buy. The next step is process: how exactly do we do that? So, this was a logical next step. It's very timely. A lot of states actually are way ahead of the federal government in this perspective. So, all in all, it was time.

Punit 05:02
So, it was time, but right now it is in the context of government agencies only, so it's mandated or meant for them. Are there any exclusions?

Dr. Cari 05:11
Federal government, yes.

Punit 05:14
Not state?

Dr. Cari 05:15
Correct.

Punit 05:16
Okay, that's interesting. And then I think you mentioned earlier, when we were having the chat, that some agencies are included or excluded. Sorry, which ones are excluded?

Dr. Cari 05:28

Yeah, all of these memos and executive orders, and even, actually, some of the legislation that has come through, which we have very little of here in the US, they tend to step around military or intelligence uses, anything that's going to protect the homeland, or, you know, for security of the homeland. My understanding of why they do that is that those are very different measurements that you use whenever you're evaluating AI. So, evaluating AI to understand your national security versus life and safety risks, you tend to do different tradeoffs for that type of AI. So, this is your average, run-of-the-mill AI that can impact everyday human citizens who need welfare benefits or unemployment benefits, just your average stuff.

Punit 06:25
Yeah, and you mentioned that the EU does its own stuff. So how is it different from the EU? Because the EU has its own EU AI Act, which encompasses a lot more and talks about AI being responsible. Here we are picking one element that is acquisition that in the AI world and then focusing on the federal agencies. So, what's so special that this is going to help, maybe, say, create trust, or create digital trust, or how is it going to be impactful?

Dr. Cari 06:59

Yeah, I mean the end game absolutely is digital trust. In my opinion, the main difference, what stood out to me, was that the EU AI Act calls out high-risk AI. Everyone is talking about high-risk AI; sometimes we don't understand exactly, in the weeds, what that means. But here, they specifically called out rights and safety, and they were really clear about it. There are existing laws on the books that say what people have rights to, and there are expectations about people's safety, their physical safety and their mental health safety. So, we have laws around that stuff, so they kind of grounded it in what we have existing today. Unfortunately, that kind of makes it a narrow memo, because it doesn't cover some of the things that the EU AI Act does, but you know, it still holds up pretty well. So that was an important distinction, I thought.

Punit 08:03
I agree with you. That's an important distinction, though it always makes me wonder: why do they go specific, and why don't they go with full-fledged legislation? But that's always up for debate.

Dr. Cari 08:15
That's politics. Yes, exactly. How much time do you have?

Punit 08:19
And then you also talked about how it applies to applications or models or Gen AI or biometric applications. So, what's happening in that space?

Dr. Cari 08:29

That part was really interesting. So, they specifically called out biometrics, which, thank goodness they did, because we have rights to non-discrimination. But to finally specify biometric issues and data collection around that, and, you know, safety in the data sets and robustness in those data sets. So, it does a great job covering those aspects of expectations when you are procuring these types of systems. But they did surprise me with the generative AI approach. They require a couple of things. One is a very hardened approach to focusing on attempts to misuse and abuse these types of systems, and that's mainly internal. So those were words directed at internal agencies: be careful with these things, they can be dangerous, make sure your people are trained. That was great. The one thing that really surprised me was their requirement to get multiple bids when you're going to select one of these things. So don't just naturally say, oh, we like ChatGPT, so that's what we're going to use. No sir. It has to be a bid from multiple of these, and in the memo they specifically say, look for fit for purpose. Identify your mission. Understand what these models are doing underneath. How are they trained? Does this make sense for what you're trying to do? And so, I was surprised that they did that, and I thought it was great. A great nugget.

Punit 10:02
Indeed. I mean, when you read it, there are elements which are good, but those elements are just touched, dropped, and left there, because it doesn't go deep into what you should do. Like, it says you should protect the rights of individuals when biometrics are used. Now, what do you have to do? Of course, they talk a bit about privacy and security, but at a very high level. So, you can go as deep as the EU AI Act, or you can leave it there, saying, I've done my bit, because they don't enforce anything like a rights impact assessment, or any conformity checks or quality checks; they leave it up to you. It's more high-level stuff, isn't it?

Dr. Cari 10:44

Yeah, I would say it's just a tick underneath high level. I think they expect, you know, they have the structure in place for every agency to have a Chief AI Officer, and they've anointed that person with certain duties and expectations about, you know, the values and principles of the United States and what we see and expect of AI. And so, I think they are just expecting that each agency will be responsible. And so: we're not going to say too much; we expect you, in your context, in your use cases, to understand how to make this stuff work, so that rights and safety are respected. And so they don't really go too aggressively on prescribing things.

Punit 11:32
Yeah, I see that they don't prescribe, but in terms of touching upon everything, they have been quite comprehensive. They touched on almost all aspects, saying you are expected to do this. I mean, even saying you should have metrics, you should have testing, you should do risk management, data management. So they haven't, in a way, left anything out. And when we say it's a memo, how does a memo stand in the US landscape? Like, here we have local laws, EU laws, and EU laws can be directives or regulations. Where does it fit in?

Dr. Cari 12:09

Yeah, insert giant sigh here. I just wish it was a law. I think there are some laws coming behind this. Senator Peters has something in the works that I hope gets through. You know, a memo, if we have a change at the top, a memo can just be dismissed.

Punit 12:29
Oh, it can be dismissed? Yeah. So, it's more of a note, a communication to agencies, saying, we expect you to do this. But since it comes from the Office of the President, it carries weight?

Dr. Cari 12:42
Correct. These are our operating rules, so says the big guy, yeah, exactly.

Punit 12:47
But they do not need to be aligned, harmonized, synchronized, fine-tuned the way a law would need to be, because then you have to be more precise, because then it's the rule of law and it can be interpreted in the courts. So, it doesn't have that legal weight. But from a government perspective, agencies can be questioned: why didn't you do this? We told you so.

Dr. Cari 13:06

Exactly. That is exactly right. Plus, a law is much harder to undo with each, you know, regime change.

Punit 13:14
That's also true, yeah. And then, okay, it's for agencies. It's for acquisition. But there's some good stuff in there, and there are many private corporations, and public ones too, saying we don't have a law, we don't have guidance. Can they, or should they, take this as guidance and maybe incorporate it into their day-to-day purchasing, at least, because that's where this fits in?

Dr. Cari 13:40

I would if I were them. There are some good guiding practices in here on how to conduct acquisition. You can adopt some of the processes that they're recommending, but also, if you're selling to the government, you need to understand this and be ready for how they're going to approach you with requests for bids. For example, they have kind of opened up and given permission to use different contract types, or different approaches to contracts. So, when you deal with AI, you might not have all the answers when you go into that contract. If you buy pencils, you might say, I need an eight-inch pencil with an eraser on the end that's going to be two centimeters, and these pencils need to be sharpened, and whatever; you're familiar with that territory. With AI, you're not familiar with the territory. And so, they've offered up the idea to say, look, use a statement of objectives. Use a performance work statement. Use these broader, slightly more ambiguous ways to approach this stuff, and then on the back side, use a quality assurance surveillance plan, so that you're monitoring everything as you go through. By the way, also get some good training, so you're not, you know, too far off when you're doing that stuff. So that's great practice, no matter what kind of business you're in, government or private sector. But if you are selling to the government, you need to understand that they're going to have a lot of questions for you. The kind of bid they're going to ask you to do is going to feel a little more ambiguous. They're going to expect that you're going to explain some training data stuff and some privacy information, and models and how they work and how you're monitoring them. And the expectation, I think, is raised now.

Punit 15:36
I agree with you, the expectation is raised. And as you rightly pointed out, if you're going to be selling to the government agencies, this is what they are going to be asking you, so you had better incorporate it; and if you have incorporated it, then it also helps you sell to the normal corporate world. But if there's an organization that is not selling to government agencies, they can still take the learning, because it is broad but still useful in terms of pointing out what the factors are, what the things to take care of are. And as with the EU AI Act, companies are already, what do you say, grappling to figure it out. Because right now the push is, we are not a high-risk system. But it's not about being a high-risk system. It is about knowing that you have a system which is an AI system, classifying it, and then incorporating the best practices. And of course, if it's high risk, you go further in terms of making sure that you have enough documentation and enough conformity requirements.

Dr. Cari 16:39

That is the interesting trick, yeah.

Punit 16:43
But on that part, I think the EU AI Act, when it talks about acquisition, is a bit light, don't you think?

Dr. Cari 16:52

Yes! I was so surprised. And there's even a new bill for acquisition in the EU, and it does not really cover AI. It sort of revamps acquisition in general, but just for the purposes of better competition. It does not touch AI in the way that it really needs to.

Punit 17:11
Seems like they missed that part because they were relying on building AI and not on acquiring AI or acquiring services in which AI can be incorporated.

Dr. Cari 17:21

Right, yeah, it is a little bit different. You don't throw away all of your acquisition activity or processes, but you have to augment them to address these unique needs.

Punit 17:32
Yeah, like in this one, they also talk about the IP and other aspects, which is still a gray zone.

Dr. Cari 17:42
Right?

Punit 17:42
It's easy to say, like they say, make sure the IP is respected. But what does it mean?

Dr. Cari 17:49

That's exactly right. And in the OMB memo, they also talk a lot about lock-in, vendor lock-in, they call it. They also call it promoting competition, which is two sides of the same coin, depending on whether you want to smile or frown, I guess. But they really get into, you know, disclosing licensing agreements and making sure that there's an understanding of the entire value chain underneath these systems, so that when you go to switch to a different vendor, you are not held hostage because, oh, they didn't tell you you were on Mistral versus Llama versus OpenAI. So it's a pretty invasive investigation whenever they do look into these systems, for the purposes of promoting competition and not getting locked into something.

Punit 18:40
Yeah, and I think in a way it's good, but in a way, as we said, it's not the law. So we will have the coming months and years to determine which shape, which direction it takes. Because, to me, it starts with what an AI system is. The memo does define it, but since it's not a law, who defines what an AI system is? That's where it will get tricky. But it's good to have this, because it was long overdue, some guidance, some direction from the government.

Dr. Cari 19:14
Agreed.

Punit 19:16
So, with that, I think for this topic, that would be it, unless you have some final message or a few words to add.

Dr. Cari 19:25

No, I can't wait to see how it plays out. They have a couple of deadlines coming up. I think by November 1 they have to identify the existing contracts that are subject to this new rule, and by December 1 they have to have everything incorporated into those contracts. So that's fast.

Punit 19:44
Yes, that is indeed fast. I think, as our episodes go live with a delay, by the time this episode is live, people will have identified the contracts and everything. But what we want to share with our audience is that there is this dimension of responsible acquisition in the AI world, while we usually talk about AI being responsible in itself. So, if anyone wants to get in touch with you, what would be the best way?

Dr. Cari 20:16

I would say probably my LinkedIn. It would be Dr. Cari Miller on LinkedIn; you should be able to find me. Or look up AI procurement; I usually write a lot about that.

Punit 20:26
Sure. So, with that, I would say thank you so much, Dr. Cari Miller. It was wonderful to have you.

Dr. Cari 20:31
Thank you. Thank you for having me.

FIT4Privacy 20:34
Thanks for listening. If you liked the show, feel free to share it with a friend and write a review. If you have already done so, thank you so much. And if you did not like the show, don't bother, and forget about it. Take care and stay safe. FIT4Privacy helps you to create a culture of privacy and manage risks by creating, defining and implementing a privacy strategy, including delivering scenario-based training for your staff. We also help those who are looking to get certified in CIPP/E, CIPM and CIPT through on-demand courses that help you prepare and practice for the certification exam. Want to know more? Visit www.fit4privacy.com, that's www, FIT, the number 4, privacy.com. If you have questions or suggestions, drop an email at hello@fit4privacy.com.

Conclusion

As AI systems continue to revolutionize industries and public services, the need for responsible acquisition has never been more critical. The OMB memorandum may not carry the weight of law, but it sets a compelling precedent for organizations—governmental and private alike—to rethink how they procure AI technologies. By emphasizing principles such as transparency, safety, competition, and fairness, this initiative nudges us closer to a future where AI-driven systems are not only innovative but also ethical and trustworthy.

While the world waits for more robust laws and clearer global standards, the responsibility to act falls on individual organizations. The steps we take today in adopting these principles could shape the trajectory of AI governance for years to come. Whether you are part of a government agency or a private enterprise, the message is clear: Responsible acquisition isn’t just a compliance checkbox; it’s the foundation for digital trust in an AI-powered future.

Let’s stay informed, prepared, and proactive—because the future of AI doesn’t just depend on how we build it, but also on how we acquire and deploy it.

ABOUT THE GUEST 

Dr. Cari Miller is the Principal and Lead Researcher for the Center for Inclusive Change.  She is a subject matter expert in AI risk management and governance practices, an experienced corporate strategist, and a certified change manager. Dr. Miller creates and delivers AI literacy training, AI procurement guidance, AI policy coaching, and AI audit and assessment advisory services.  She has worked with some of the largest brands in the world to successfully plan and implement complex business model shifts, system implementations, data science projects, process overhauls, product launches, organizational design and compensation alignments, and cultural improvement initiatives, positively impacting thousands of employees, clients, supply-chain partners, and shareholders.

Dr. Miller serves on the Board of ForHumanity, a nonprofit developing AI audit criteria for high-risk systems, and is the Vice Chair of the IEEE working group developing international AI procurement standards. She holds a bachelor’s degree in international business, an MBA in Marketing, and a Doctorate in Business Administration, and she has a deep research background in AI related to HR tech and EdTech.

Punit Bhatia is one of the leading privacy experts who works independently and has worked with professionals in over 30 countries. Punit works with business and privacy leaders to create an organization culture with high AI & privacy awareness and compliance as a business priority, by creating and implementing an AI & privacy strategy and policy.

Punit is the author of books “Be Ready for GDPR” which was rated as the best GDPR Book, “AI & Privacy – How to Find Balance”, “Intro To GDPR”, and “Be an Effective DPO”. Punit is a global speaker who has spoken at over 50 global events. Punit is the creator and host of the FIT4PRIVACY Podcast. This podcast has been featured amongst top GDPR and privacy podcasts.

As a person, Punit is an avid thinker and believes in thinking, believing, and acting in line with one’s values to have joy in life. He has developed a philosophy named ‘ABC for joy of life’, which he passionately shares. Punit is based out of Belgium, the heart of Europe.

For more information, please click here.

RESOURCES 

Listen to the top ranked EU GDPR based privacy podcast...

Stay connected with the views of leading data privacy professionals and business leaders in today's world on a broad range of topics like setting global privacy programs for private sector companies, role of Data Protection Officer (DPO), EU Representative role, Data Protection Impact Assessments (DPIA), Records of Processing Activity (ROPA), security of personal information, data security, personal security, privacy and security overlaps, prevention of personal data breaches, reporting a data breach, securing data transfers, privacy shield invalidation, new Standard Contractual Clauses (SCCs), guidelines from European Commission and other bodies like European Data Protection Board (EDPB), implementing regulations and laws (like EU General Data Protection Regulation or GDPR, California's Consumer Privacy Act or CCPA, Canada's Personal Information Protection and Electronic Documents Act or PIPEDA, China's Personal Information Protection Law or PIPL, India's Personal Data Protection Bill or PDPB), different types of solutions, even new laws and legal framework(s) to comply with a privacy law and much more.