Blog

Want to Unlock Artificial Intelligence? See If You Have the Right Foundation First

Aug 23, 2023

Innovation needs the proper groundwork to thrive. When building skyscrapers, you dig to the bedrock and build upwards with stabilizing beams. When implementing artificial intelligence, you dig into your data and infrastructure to create a firm basis for enhanced analysis, automation, chatbots, and other features.

In truth, your AI goals can fail to yield the projected cost savings, efficiency, and accuracy without the proper framework of digital and data management solutions. That’s why your artificial intelligence strategy needs to be integrated with these elements of a robust IT and data strategy if you are going to achieve your desired results.

Training Data Shapes Your Outcomes

Any artificial intelligence program or machine learning model will only be as good as the data it is trained on. Only data that is task-relevant, consistent, systematic, and comprehensive will enable the models to make accurate predictions or perform essential functions.

However, studies also show training data needs to be authentic, generated by humans rather than by other AI models. AI/ML programs trained on ersatz data sets will overestimate probable events and underestimate improbable (but still possible) events. Over time, the value and accuracy of AI projections can gradually collapse.

Because of this threat, organizations need to ensure their machine learning algorithms can pull data and learn from real-world, centralized data hubs. A Zendesk survey found only 22% of business leaders felt their organization shared data well, which indicates there are still pervasive shortcomings in data management and data warehouse creation (both of which are foundational to a good AI strategy).

These shortcomings can result in calamity across sectors. In the healthcare payor space, they may lead to incorrect conclusions about medical claims or chatbots that give members incorrect answers. For financial services, they could cause automation issues ranging from confusing or contradictory loan denial explanations to late or invalid payments.

Either your own data stewards or trusted data governance consulting services need to be proactive about breaking down data silos and unifying disparate sources. Yet that's only part of the equation.

Organizations also need to verify that data quality is impeccable before feeding it to AI models, especially with the supervised learning approach. Why? Because human error can pollute the results as much as generic AI-generated data points can.

MIT researchers found that errors can occur in the process of labeling images for better categorization and metadata management. End users might misinterpret data points or lack standardized labels, which can skew the results. Unfortunately, these programs are not astute enough to identify errors or omissions in the benchmark datasets, so they carry those misperceptions through to their output.

Garbage data results in flawed or corrupted results, so your artificial intelligence strategy should focus on gathering quality data and verifying accuracy in a streamlined way.
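To illustrate what verifying accuracy in a streamlined way can look like, here is a minimal sketch of automated pre-training checks. It assumes a hypothetical pandas DataFrame of labeled claims records; the column names and values are invented for illustration, not a real schema.

```python
import pandas as pd

# Hypothetical labeled training data; the columns below are illustrative only.
records = pd.DataFrame({
    "claim_id": [101, 102, 102, 103, 104],
    "claim_text": ["MRI scan", "office visit", "office visit", "ER visit", "lab work"],
    "label": ["Imaging", "Office Visit", "office visit", None, "Lab"],
})

def audit_training_data(df: pd.DataFrame, label_col: str = "label") -> dict:
    """Run basic quality checks before the data is used for supervised training."""
    issues = {}

    # 1. Missing labels: rows a supervised model cannot learn from.
    issues["missing_labels"] = int(df[label_col].isna().sum())

    # 2. Inconsistent label spellings: "Office Visit" vs. "office visit"
    #    would be treated as two different classes.
    normalized = df[label_col].dropna().str.strip().str.lower()
    issues["inconsistent_labels"] = int(
        df[label_col].dropna().nunique() - normalized.nunique()
    )

    # 3. Duplicate records that could silently overweight some examples.
    issues["duplicate_rows"] = int(df.duplicated(subset=["claim_id"]).sum())

    return issues

print(audit_training_data(records))
# {'missing_labels': 1, 'inconsistent_labels': 1, 'duplicate_rows': 1}
```

Checks like these are cheap to run on every refresh of a training set, which is what keeps quality verification streamlined rather than a one-time cleanup project.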

User Know-How Determines Results

Even with the right data foundation, not every AI program is intuitive. Plus, generative AI is so new that few people fully understand the fundamentals, let alone advanced practices. We see this problem arise with tools like ChatGPT or even more rudimentary chatbots. Users will ask simple or unspecific queries and receive elementary, unsatisfying answers. Or worse, they'll treat the wrong tool (one never trained on the data they rely upon) as a panacea, reaching wrong conclusions in the process.

One example that has caught media attention is the story of two New York attorneys who used ChatGPT in the preparation of a legal brief for a personal injury case. The presiding judge found six case citations in the brief were completely fabricated and chose to sanction the law firm for bad faith actions and “acts of conscious avoidance and false and misleading statements to the court.”

The problem is that the version of ChatGPT available to the public (GPT-3.5) isn't designed for eDiscovery or legal research. It's a general-purpose language model meant to provide responses to questions, which has sometimes resulted in the algorithm concocting facts out of whole cloth. A tool like Casetext's CoCounsel, which was specifically trained for legal work, would have been a better alternative, pulling from litigation data and case law precedent.

In all cases, organizations need to prepare their employees to use the right tool for the right job (basically, not using a crescent wrench when a jigsaw is required). Effective artificial intelligence strategy accounts for the technicalities of a task and supplies people with the appropriate platform from the ever-expanding AI toolkit when they need it.

People also need proper training on the AI tools themselves. There's an emerging field called AI prompt engineering for a reason: asking the right questions and getting useful responses is not instinctual. Your artificial intelligence strategy needs to account for three key questions employees have about using these augmented capabilities and offer informative resources:
 

  • Which processes are ripe for AI to transform? Do your employees have a sense of the current capabilities of artificial intelligence and machine learning? If your culture cultivates AI preparedness (much like the data-centric drive over the last decade), your people might spot opportunities before executives do. If you keep the lines of communication open, your organization can more easily fast-track that unexplored potential.
  • How much information do you need to provide AI/ML tools? Artificial intelligence can produce responses to vague queries, but you'll get the best results when your people are specific in their phrasing. Generative AI is a great example: clarifying the purpose of a request, giving clear instructions, offering contextual information, and even asking open-ended questions can elevate the value of AI responses (see the sketch after this list).
  • Are there biases in the tools themselves? Unfortunately, yes. Like any type of human-created analysis, there's a potential for confirming biases or perpetuating systemic prejudices if users are not careful. Again, the problem starts with the data: if what's fed into the training set is a narrow fraction of possible data points, then the results will always be skewed. Maintaining a diverse workforce and encouraging your people to reflect on their own biases while inputting or reviewing data can mitigate some of these risks. If you work at a healthcare payor, you want to verify that the information contained within EHRs and other insurer data sources is accurate and complete. Or if you are an insurance company training an ML tool to spot fraud, you want to make sure the training model and rejection criteria are accurate.
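To make the specificity point concrete, here is a minimal sketch in Python. The send_to_model helper is hypothetical, a stand-in for whichever approved generative AI platform your organization uses; only the structure of the prompt matters here.

```python
# Hypothetical helper: in practice this would call your organization's
# chosen generative AI API. The function name and signature are assumptions.
def send_to_model(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your approved AI platform.")

# A vague query tends to produce an elementary, unsatisfying answer.
vague_prompt = "Tell me about our claims data."

# A structured prompt states the purpose, gives clear instructions,
# and supplies context, which is what elevates the value of the response.
structured_prompt = "\n".join([
    "Purpose: Summarize denial trends for our claims review team.",
    "Instructions: List the top three denial reasons from the data below,",
    "each with a one-sentence explanation a non-analyst can follow.",
    "Context: Q2 2023 claims extract, commercial plans only.",
    "Data: <paste the approved, de-identified claims summary here>",
])

# response = send_to_model(structured_prompt)
```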
Cybersecurity Is Still Paramount

Artificial intelligence, like every digital asset, is being targeted by cybercriminals. Rather than stealing data to sell on the dark web, these attackers have a more devious intent. A recent paper explored hypothetical situations where cybercriminals insert bad data into training sets as a way of poisoning outcomes and wreaking havoc with everything from analysis to automation.

    The threat isn’t limited by industry. With tainted training data, self-driving cars might abruptly stop or careen off the road if they scan specific license plates, signs, or conditions. Financial services companies must worry about hackers subtly altering training data to reduce the effectiveness of fraud detection. The healthcare payor space needs to worry about how cybercriminals can disrupt risk assessment and risk prediction.

The way to protect your artificial intelligence capabilities is to maintain a proactive security posture across every facet of your organization. Applying a zero trust framework, in which all users must authenticate and validate their identity for every attempt to access applications and data, can help mitigate risks to your overall system. Additionally, training your people on the potential threats to watch for can help them spot the warning signs of phishing scams, the precursors to ransomware attacks, and other attack methods.
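For a sense of what authenticating every access attempt means in practice, here is a simplified sketch in plain Python. The token store, role names, and function names are assumptions for illustration rather than any specific product's API.

```python
from functools import wraps

# Stand-in for an identity provider; real systems would verify tokens externally.
VALID_TOKENS = {"token-abc": "data-analyst"}

def verify_identity(token: str) -> str:
    """Validate the caller's token and return their role, or raise."""
    role = VALID_TOKENS.get(token)
    if role is None:
        raise PermissionError("Identity could not be verified.")
    return role

def zero_trust(func):
    """Require authentication on every single access attempt, with no implicit trust."""
    @wraps(func)
    def wrapper(token: str, *args, **kwargs):
        role = verify_identity(token)  # re-checked on every call
        print(f"Access granted to role: {role}")
        return func(*args, **kwargs)
    return wrapper

@zero_trust
def read_training_data():
    return ["record 1", "record 2"]  # placeholder for governed data access

print(read_training_data("token-abc"))   # succeeds after verification
# read_training_data("stolen-token")     # raises PermissionError
```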

Creating an AI Foundation

Training your people, preparing the proper data, and building up cybersecurity fortifications are only part of a good artificial intelligence strategy. You also need to identify experts who can help your organization make the transition, whether they are internal leaders or a trusted partner. Whoever you trust should have comprehensive knowledge of IT management solutions, your industry's challenges, and the potential of AI/ML platforms. Building the right foundation now creates an infrastructure that allows your business to thrive today and in the future.

Want to amplify your artificial intelligence strategy? Start by creating the right digital foundation with w3r Consulting.

     

Learn about our digital solutions

     


