As AI becomes ubiquitous, organizations of all sizes need to understand exactly how they use AI and to ensure the technology is developed and deployed in a safe, responsible and ethical way.
Q - We are witnessing unprecedented progress in artificial intelligence (AI), but the pace of development and deployment raises new risks. Do you think people are aware of the risks?
Boy, that feels like a trick question at this moment in time. I mean, you can’t turn on the news
without seeing something about AI invading our lives, whether it’s in a new movie or in a new product offering. So, I think a lot of people would say, “yes, I’ve heard of AI.” However, I don’t believe we have achieved widespread AI literacy or fluency.
I’m not saying we need people to learn how to code or build supercomputers, but rather, we just need them to understand some basic concepts about the different types of AI. That is important because it helps you understand when to be on the lookout for things that could be risky.
You might think of it the same way we think of common appliances. We don’t know exactly how some appliances are engineered, but thanks to awareness campaigns, labels, and other
efforts, we know what gas ovens can do to our food (#benefit) and that we shouldn’t insert our head and take a deep breath (#risk).
We also know not to take a bath with a hairdryer. We need to learn the same kinds of things about different kinds of AI. For example, AI shouldn’t be used to tell us if kids are
bored in class or employees are not paying attention in a meeting.
There is literally NO viable scientific proof to back that up. Another example is that facial recognition might help us secure our iPhone, but it’s not ok to use it in a manufacturing plant for employees to clock in and out. Facial recognition notoriously doesn’t work well on all individuals – especially on those with more melanin in their skin.
Q – What are the big questions that organizations should be asking themselves?
I really think the first question is pretty basic right now. Most organizations need to ask
themselves, “what AI do we have here?” It’s the first step in getting your AI house in order.
I realize some organizations have an AI strategy, but I find, more often than not, that a lot of organizations haven’t taken the time to do an audit – or take an inventory of ALL the AI tools their people are using. The trick with this exercise is finding the AI that was authorized by the company AND the “shadow AI” that is being used by workers outside the purview of authorized IT rules.
So, that is the first question that has to be asked and answered, because AI governance splinters off from this “inventory.” The next few things you need to do are:
- Make sure that none of the systems/tools are being misused
- Read all the privacy policies and terms and conditions
- Set some realistic AI policies based on what you find
- Decide which AI is just too risky
- Implement sustainable AI practices (e.g., AI literacy training, AI policy announcements, AI incident management processes, AI procurement practices, etc.).
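The inventory step above can be sketched in code. This is a minimal, hypothetical illustration (all tool names are made up, and real audits involve much more than list comparison): it flags “shadow AI” by diffing the tools actually discovered in use against the list authorized by IT.

```python
# Hypothetical sketch: find "shadow AI" by comparing the tools discovered
# during an audit against the list of tools authorized by IT.
# All tool names below are illustrative, not real products in use anywhere.

def find_shadow_ai(in_use, authorized):
    """Return the tools being used outside the authorized IT list, sorted."""
    return sorted(set(in_use) - set(authorized))

authorized = ["CopilotX", "TranscribeBot"]           # vetted by IT (made up)
in_use = ["CopilotX", "FreeChatAI", "ResumeRanker"]  # found in the audit (made up)

print(find_shadow_ai(in_use, authorized))  # ['FreeChatAI', 'ResumeRanker']
```

The point isn’t the code itself but the discipline: you can’t govern, policy, or risk-rate tools that never appear in the inventory.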
Q - Artificial Intelligence is already reshaping industries and transforming the way we live and work. What sectors do you think will gain the most positive impact?
You know the 2024 Consumer Electronics Show just wrapped up in Las Vegas. It’s a huge show, and it was full of AI tech! There were two categories that blew me away this year.
Now, I’m sure there are probably more sectors exploding than these, but I’m just saying what I saw related to AgTech and Disability Tech was simply amazing. I’m sure drug discovery and medical technology are also going to see significant advancements. However, what we are seeing in agriculture technology is just unreal.
I think about the statistics on global food shortages due to global climate change, a lack of workers, and even a lack of arable land in some places, and then you see what AI can do to solve ALL of those problems. It’s stunning, and these advancements are so critically important to humanity.
Likewise, the innovation coming to the table to support disabled persons is also off the charts. These innovations are being thoughtfully woven into mainstream products, and there are developers creating whole new categories of products. I saw a belt and glasses that could be worn to support visually impaired individuals through sound and haptic feedback. This means they might not need to use a cane, which (and I didn’t realize this) can sometimes create a safety risk because it draws attention and signals that you might be a vulnerable individual.
What I love about the procurement function is that it can be the ultimate gate-keeper element for letting the “good” AI in and keeping the “bad” AI out. There are a lot of AI vendors knocking on our doors right now, making all sorts of promises.
The problem is that the vendors don’t always tell you the full story. That’s where good AI procurement practices become essential to the risk management equation. AI has the potential to disrupt your reputation, your operations, your culture, your legal standing, your shareholder value – all sorts of important things that companies like to keep protected.
So what I’ve been working on is making sure we understand how to insert some AI governance practices into the procurement process so organizations can leverage the benefits of AI and also mitigate the risks. It starts with making sure you even need AI to begin with, and if you do, then you have to know what questions to ask the vendors. Beyond that, your team needs to also understand what good vs. bad answers look like (so that’s a little AI literacy requirement), and then you have to know what new contract clauses you should be adding to your contracts for these types of systems.
The thing is, you need all the old contract clauses, but because it’s AI, there are a few new ones that you really need to plug in there now too. It’s all in the name of risk management. If you skip these steps, you really expose your organization to some pretty significant risks. AI is no joke.
Q – Of course, it’s impossible to predict the end-game – what do you see in the near future, both positive and negative?
Well, I hesitate to predict anything with AI. I think we all watched the near implosion of Sam Altman and OpenAI with our eyes popping out of our heads and our popcorn on our laps at the same time. I mean, we’re in some crazy times here.
It’s the wild wild west without the necessary laws and regulations for a lot of these things. The worst part is that as those laws and regs are developed and implemented, there’s not enough talent to enforce them. So, we are literally years away from good compliance. Again, that’s why I like procurement so much. AI procurement doesn't require a law or a regulation to make sure you properly vet your AI solutions before deploying them!
I am optimistic about the learning curve we’re all on. I’m seeing more and more companies finally pivot to education and training services targeting the workforce and whole industries to help facilitate AI literacy and fluency. That is so important. Our lawmakers need it. Our workers, managers, and leaders need it. Our kids and teachers need it. So I love that that is happening more and more.
But, I am worried about open systems and large language models. Not in a general sense. I think they are important to our future. They are amazing technologies. What worries me about them is the rush to incorporate them into important and critical systems without the scale of governance over those systems that is truly needed.
We need a lot more governance, which is to say we need a lot more transparency, explainability, disclosure, and a ton of other things that ensure awareness and trustworthiness are front and centre when these systems are present (e.g., don’t take a bath with a hair dryer; don’t use an LLM chatbot to give psychological advice to a suicidal caller).
Without these guardrails, we’re walking on the highwire with no net. That’s a lot of risk for organizations - and humanity - to accept.