It’s crazy to think how quickly artificial intelligence has become a staple of our society, shifting from fascination with ChatGPT in 2022 to widespread adoption less than two years later. McKinsey & Company found that 65% of respondents were regularly using generative AI, nearly double the share from the previous year.
This is more than idle curiosity; it’s a serious investment. EY found that 88% of senior leaders say their organizations are spending 5% or more of their budgets on AI this year, and 50% anticipate spending 25% or more next year. Moreover, McKinsey & Company found that 50% of respondents to its State of AI survey were using the technology in two or more business functions.
As organizations spend more on artificial intelligence, their AI implementations need to mature so they can build on current successes, avoid expensive pitfalls, and maximize ROI. Here are two lessons to keep in mind as you plan the next stage of your AI strategy.
Start By Adopting Proven AI Use Cases
Artificial intelligence is appealing for its broad spectrum of potential solutions, but not every promise will pay off in ways that justify the investment. Some will absolutely save millions, streamline workflows, and boost productivity. Others might be overpromising their capabilities or outright AI-washing their business.
Kyle Chayka, journalist and cultural critic, had this to say in a recent article about the big social media giants’ investment in artificial intelligence:
“As of yet, all these A.I. experiences are still nascent features in search of fans, and the investment in A.I. is vastly greater than the organic demand for it appears to be […] Tech companies are building the cart without knowing whether the horse exists.”
With the billions that are going into this technology, organizations need to have a sense of whether there’s a demand or worthwhile use case for their proposed project. Each industry will have its own record of repeatable wins, debatable trials, and outright flops.
Let’s look at artificial intelligence in healthcare. Kaiser Permanente has invested in AI scribe technology, and this savvy decision has enabled them to reduce their documentation burden. In fact, they’re saving an average of one hour per clinician at the keyboard, freeing them up to engage with patients and build trust.
In the fast-food industry, AI food-ordering technology has produced mixed results. Earlier this year, McDonald’s put a moratorium on its AI drive-thru experiment and removed its Automated Order Taker (AOT) from trial locations. Past reports showed the AOT’s voice-ordering accuracy hovered in the low 80s percent, well short of the 95% target. Though the fast-food giant plans to explore this technology again in the future, it likely spent hundreds of millions on an abandoned implementation.
Both examples offer a reminder that being the first across the finish line with AI use cases doesn’t always result in the most cost-effective IT investments. Early (not first) adopters can build off the success stories of innovators without assuming the same degree of financial risk and failed implementation. Trailblazers might reach a destination first, but they don’t always reach it unscathed.
McKinsey’s survey results point to safer AI investments. Here are the top three functions where generative AI is being used most:
- Marketing and sales
- Product and service development
- IT
Many organizations are applying artificial intelligence to some of these functions, but there’s an opportunity to expand AI use cases with a reliable ROI. So, if you’re not using AI to personalize sales emails, iterate product design, monitor threats, or any other application under the above three functions, you’ll want to explore these first.
Require Explainability in AI Tools
Even if there is an exciting, widely embraced AI use case, your organization needs to audit the effectiveness and authenticity of the potential tool.
The challenge is that, outside of building your own homegrown AI tools, you might struggle to get a transparent view of the underlying models guiding different platforms. Many vendors treat their AI models as intellectual property that they don’t want to expose to theft, or even AI poisoning, by making them accessible.
In the wake of its Strawberry model release, OpenAI warned that it would ban any users who attempt to probe how the model works. And when California Governor Gavin Newsom signed a bill into law requiring generative AI companies to disclose a “high-level summary of the datasets used in the development of the system or service,” many of the big AI players refused to comment on whether they would comply. Their protectiveness is understandable, but there needs to be a balance among trade secrets, performance, and transparency.
When the stakes are low (think AI writing suggestions or movie recommendations), you don’t need to overload your stakeholders with the logistics of why an algorithm came to its conclusion. Yet when it comes to complex systems and decisions that can have an extreme negative impact if made incorrectly or under a biased influence (medical diagnoses, loan authorization, fraud detection, and even hiring decisions), there needs to be some explainability.
What should your organization do? Ask the right questions while evaluating AI platforms.
- What type of data is used in the AI training model? When you’re outsourcing tasks to AI tools, be sure to evaluate the types of data they use to make decisions. For example, you might want to reconsider an AI interviewing tool that relies heavily on recordings of body language (a non-universal metric) as a primary evaluation factor.
- What are the sources of the training data? Garbage in, garbage out applies to AI and ML processes as well. If the source data is biased, incomplete, or inaccurate, the recommendations will be too.
- What data points influence the AI model’s decision-making? Though you might not want to double-check every AI recommendation, it’s useful to know which data points the model uses to reach its conclusions. If you’re using AI to detect melanomas, it helps to know which qualities of a skin lesion are flagged as suspicious. Or if AI is denying loans, you want to verify that the decision rests on credit history and debt load (unfortunately, one Lehigh University study found AI loan-application tools were using race to deny applicants).
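To make that last question concrete, here is a minimal sketch of one common auditing technique: permutation importance, which measures how much a model’s accuracy drops when a given feature is scrambled. The dataset, feature names, and loan-approval model below are entirely synthetic and hypothetical, built only to illustrate how a proxy feature (here, a made-up `zip_code_bucket`) can be checked for undue influence; this is not a description of any vendor’s actual tool.

```python
# Hypothetical loan-approval audit using permutation importance (scikit-learn).
# All features and data here are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000

# Synthetic applicant features
credit_score = rng.normal(680, 50, n)
debt_to_income = rng.uniform(0.05, 0.6, n)
zip_code_bucket = rng.integers(0, 10, n)  # a proxy feature worth auditing

# By construction, approval depends only on credit score and debt load
approved = ((credit_score > 660) & (debt_to_income < 0.4)).astype(int)

X = np.column_stack([credit_score, debt_to_income, zip_code_bucket])
feature_names = ["credit_score", "debt_to_income", "zip_code_bucket"]

model = RandomForestClassifier(random_state=0).fit(X, approved)

# Shuffle each feature in turn and measure the drop in accuracy
result = permutation_importance(model, X, approved,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

In this synthetic setup, `credit_score` and `debt_to_income` should dominate while `zip_code_bucket` scores near zero. In a real audit, a proxy feature that ranks highly would be the red flag worth investigating before trusting the tool’s decisions.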
Working with the Right Partner to Mature Your AI Strategy
Though you can implement artificial intelligence platforms without these fundamentals in place, doing so puts your organization at risk of poor decision-making and increased threat exposure. If you intend for AI to be part of your business going forward, take precautions up front to make this advanced computational power sustainable.
Choosing the right data and digital solutions partner can simplify your journey to scalable and trustworthy AI. Data supports, influences, and shapes the ability of AI models to make predictions and autonomous decisions. When a partner knows how to align AI models with your business processes and data integrity, your AI strategy will be more sustainable and successful.
Are you looking to evaluate the foundation for your AI strategy? Reach out to w3r Consulting to verify that your data and digital processes are fueling AI-driven business success.