
Want to Unlock Artificial Intelligence? See If You Have the Right Foundation First

Aug 23, 2023

Innovation needs the proper groundwork to thrive. When building skyscrapers, you dig to the bedrock and build upwards with stabilizing beams. When implementing artificial intelligence, you dig into your data and infrastructure to create a firm basis for enhanced analysis, automation, chatbots, and other features.

In truth, your AI initiatives can fail to yield the projected cost savings, efficiency, and accuracy without the proper framework of digital and data management solutions. That’s why your artificial intelligence strategy needs to be woven into a robust IT and data strategy if you are going to achieve your desired results.

Training Data Shapes Your Outcomes

Any artificial intelligence program or machine learning model will only be as good as the data it is trained on. Only data that is task-relevant, consistent, systematic, and comprehensive will enable the models to make accurate predictions or perform essential functions.

However, studies show training data needs to be authentic: generated by humans rather than synthesized by other AI models. AI/ML programs trained on ersatz data sets will overestimate probable events and underestimate improbable (but still possible) ones. Over time, the value and accuracy of AI projections can gradually collapse.
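
To see why, consider a toy simulation (not drawn from the studies themselves, just an illustration): a model repeatedly retrained on its own output loses track of the tails of the real distribution.

```python
# Illustrative sketch of model collapse: fit a Gaussian to samples drawn from
# the previous generation's fitted model, then refit, over and over. The spread
# (sigma) trends toward zero, so rare-but-possible events get increasingly
# underestimated. Parameters are arbitrary, chosen only for demonstration.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.0, 1.0  # generation 0 is "trained" on real-world data
for gen in range(1, 201):
    samples = rng.normal(mu, sigma, size=50)   # training set from the prior model
    mu, sigma = samples.mean(), samples.std()  # refit on purely synthetic data
    if gen % 50 == 0:
        print(f"generation {gen}: sigma = {sigma:.3f}")
# sigma shrinks generation over generation: the model grows ever more
# confident about a narrowing slice of reality.
```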

Because of this threat, organizations need to ensure their machine learning algorithms can pull data and learn from real-world, centralized data hubs. A Zendesk survey found only 22% of business leaders felt their organization shared data well, which indicates there are still pervasive shortcomings in data management and data warehouse creation (both of which are foundational to a good AI strategy).

Flawed data can spell trouble in any sector. In the healthcare payor space, it may result in incorrect conclusions about medical claims or chatbots that give members incorrect answers. In financial services, it could cause automation issues ranging from confusing or contradictory loan denial explanations to late or invalid payments.

Either your own data stewards or trusted data governance consulting services need to be proactive about breaking down data silos and unifying disparate sources. But that’s only part of the equation.

Organizations also need to verify that data quality is impeccable before feeding it to AI, especially with the supervised learning approach. The reason? Human error can pollute the results as much as generic AI-generated data points can.

MIT researchers found that errors can creep in while labeling images for better categorization and metadata management. End users might misinterpret data points or lack standardized labels, which can skew the results. Unfortunately, these programs are not astute enough to identify errors or omissions in the benchmark datasets, so they carry those misperceptions through to the end.

Garbage data produces flawed or corrupted results, so your artificial intelligence strategy should focus on gathering quality data and verifying its accuracy in a streamlined way.
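
As a concrete illustration, a lightweight audit script can catch some of these issues before training ever begins. This is a minimal sketch assuming pandas, with hypothetical column names and a hypothetical helper, not a full data governance program:

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame, label_col: str = "label") -> list[str]:
    """Return human-readable data-quality findings for a labeled training set."""
    findings = []
    # 1. Missing values silently shrink or skew what the model can learn.
    missing = df.isna().mean()
    for col, frac in missing[missing > 0].items():
        findings.append(f"{col}: {frac:.0%} missing values")
    # 2. Exact duplicate rows overweight certain examples.
    dupes = int(df.duplicated().sum())
    if dupes:
        findings.append(f"{dupes} exact duplicate rows")
    # 3. Identical features with conflicting labels often indicate the kind of
    #    human labeling errors the MIT researchers describe.
    feature_cols = [c for c in df.columns if c != label_col]
    conflicts = df.groupby(feature_cols)[label_col].nunique()
    if (conflicts > 1).any():
        findings.append(f"{int((conflicts > 1).sum())} feature combinations with conflicting labels")
    return findings

# Hypothetical example: a tiny payor-style claims sample.
claims = pd.DataFrame({
    "member_age": [34, 34, 51, None],
    "claim_code": ["A12", "A12", "B07", "B07"],
    "label":      ["deny", "approve", "approve", "approve"],
})
print(audit_training_data(claims))
# ['member_age: 25% missing values', '1 feature combinations with conflicting labels']
```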

User Know-How Determines Results

Even with the right data foundation, not every AI program is intuitive. Plus, generative AI is so new that few people fully understand the fundamentals, let alone advanced practices. We see this problem arise with tools like ChatGPT or even more rudimentary chatbots. Users ask simple or unspecific queries and receive elementary, unsatisfying answers. Or worse, they treat the wrong tool (one never trained on the data they rely upon) as a cure-all, coming to wrong conclusions in the process.

One example that has caught media attention is the story of two New York attorneys who used ChatGPT in the preparation of a legal brief for a personal injury case. The presiding judge found six case citations in the brief were completely fabricated and chose to sanction the law firm for bad faith actions and “acts of conscious avoidance and false and misleading statements to the court.”

The problem is that the version of ChatGPT available to the public (GPT-3.5) isn’t designed for eDiscovery or legal support. It’s a language model meant to provide responses to questions (which has sometimes resulted in the algorithm concocting facts out of whole cloth). A tool like Casetext’s CoCounsel, which was trained specifically for legal work, would have been a better alternative, pulling from litigation data and case law precedent.

In all cases, organizations need to prepare their employees to use the right tool for the right job (basically, not reaching for a crescent wrench when a jigsaw is required). An effective artificial intelligence strategy accounts for the technicalities of a task and supplies people with the appropriate platform from the ever-expanding AI tool kit when it’s needed.

People also need proper training on the AI tools themselves. Prompt engineering is an emerging field precisely because asking the right questions and getting useful responses is not instinctual. Your artificial intelligence strategy needs to account for three key questions employees have about using these augmented capabilities and offer informative resources:

  • Which processes are ripe for AI to transform? Do your employees have a sense of the current capabilities of artificial intelligence and machine learning? If your culture cultivates AI preparedness (much like the data-centric drive of the last decade), your people might spot opportunities before executives do. If you keep the lines of communication open, your organization can more easily fast-track that unexplored potential.
  • How much information do you need to provide AI/ML tools? Artificial intelligence can respond to vague queries, but you’ll get the best results when your people are specific in their phrasing. Generative AI is a great example: clarifying the purpose of a request, giving clear instructions, offering contextual information, and even asking open-ended questions can elevate the value of AI responses (see the sketch after this list).
  • Are there biases in the tools themselves? Unfortunately, yes. Like any type of human-created analysis, there’s a potential for confirming biases or perpetuating systemic prejudices if users are not careful. Again, the problem starts with the data. If what’s fed into the training set is a narrow fraction of possible data points, then the result will always be skewed. Maintaining a diverse workforce and encouraging your people to reflect on their own biases while inputting or reviewing data can mitigate some of these risks. If you work at a healthcare payor, you want to verify that information contained within EHRs and other insurer data points is accurate and complete. Or if you are an insurance company training an ML tool to spot fraud, you want to make sure the training model and rejection criteria are accurate.
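
To make the specificity point concrete, here is a minimal sketch contrasting a vague query with a structured one. The helper and field names are hypothetical; the point is bundling purpose, context, instructions, and output format into a single request:

```python
# A vague prompt invites a vague answer.
vague_prompt = "Tell me about our claim denials."

def build_prompt(purpose: str, context: str, instructions: str, output_format: str) -> str:
    """Assemble a structured prompt from labeled components (hypothetical helper)."""
    return (
        f"Purpose: {purpose}\n"
        f"Context: {context}\n"
        f"Instructions: {instructions}\n"
        f"Output format: {output_format}"
    )

# A specific prompt tells the model what, why, from what data, and in what shape.
specific_prompt = build_prompt(
    purpose="Summarize drivers of medical claim denials for a payor analyst.",
    context="Q3 denial records, grouped by denial reason code and plan type.",
    instructions="List the top 3 denial reasons with one plain-language sentence each.",
    output_format="Numbered list, under 100 words total.",
)
print(specific_prompt)
```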

Cybersecurity Is Still Paramount

Artificial intelligence, like every digital asset, is being targeted by cybercriminals. Rather than stealing data to sell on the dark web, these attackers have a more devious intent. A recent paper explored hypothetical situations where cybercriminals insert bad data into training sets as a way of poisoning outcomes and wreaking havoc with everything from analysis to automation.

The threat isn’t limited to any one industry. With tainted training data, self-driving cars might abruptly stop or careen off the road when they scan specific license plates, signs, or conditions. Financial services companies must worry about hackers subtly altering training data to reduce the effectiveness of fraud detection. The healthcare payor space needs to worry about cybercriminals disrupting risk assessment and risk prediction.

The way to protect your artificial intelligence capabilities is to maintain a proactive security posture across every facet of your organization. Applying a zero trust framework, in which every user must authenticate and validate their identity on every attempt to access applications and data, can help mitigate risks to your overall system. Additionally, training your people on potential threats can help them spot the warning signs of phishing scams, the precursors to ransomware attacks, and other attack methods.
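
For illustration, here is a minimal sketch of the zero trust idea: no request is implicitly trusted, so identity and authorization are re-checked on every access. All names, tokens, and policies here are hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    token: str
    resource: str

def verify_token(token: str, user_id: str) -> bool:
    """Placeholder for real identity verification (e.g., a call to your IdP)."""
    return token.startswith("valid:") and token.endswith(user_id)

ACCESS_POLICY = {"claims_db": {"analyst", "auditor"}}  # resource -> allowed roles
USER_ROLES = {"jdoe": "analyst"}

def handle(request: Request) -> str:
    # Authenticate on every request, never once per session.
    if not verify_token(request.token, request.user_id):
        return "401 Unauthorized"
    # Authorize against the specific resource; never grant blanket access.
    role = USER_ROLES.get(request.user_id)
    if role not in ACCESS_POLICY.get(request.resource, set()):
        return "403 Forbidden"
    return "200 OK"

print(handle(Request("jdoe", "valid:jdoe", "claims_db")))  # 200 OK
print(handle(Request("jdoe", "stale-token", "claims_db")))  # 401 Unauthorized
```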

Creating an AI Foundation

Training your people, preparing the proper data, and building up cybersecurity fortifications are only part of a good artificial intelligence strategy. You also need to identify experts who can help your organization make the transition, whether they are internal leaders or a trusted partner. Whoever you choose should have comprehensive knowledge of IT management solutions, your industry’s challenges, and the potential of AI/ML platforms. Building the right foundation now creates an infrastructure that allows your business to thrive today and in the future.

Want to amplify your artificial intelligence strategy? Start by creating the right digital foundation with w3r Consulting.

 

Learn about our digital solutions

 
