Like much of the world, I have watched with fascination as new Artificial Intelligence (AI) technologies rolled out over the last year. Each new release is better than the last, with seemingly endless applications to improve our lives and businesses. But I have also watched with slight trepidation – not only because of AI’s potential impact on jobs but also because I have witnessed new technologies fail to live up to their promises. With proper guidance, training, and governance, these new AI technologies will bring massive, positive change to the world. Without those best practices, AI can, at best, disappoint its users and, at worst, harm the businesses that adopt it.
A recent experience crystallized this belief for me. I received an unsolicited recruiter email (my boss may be reading this) trumpeting that its recommended positions were “powered by ChatGPT!” In short, every recommendation was wildly unsuitable, and the fact that they were so bad got me thinking about the limitations of this AI craze. Why were the recommendations so poor, and how can companies avoid these failures when adopting AI? It was an eye-opening experience for me, and a comforting one too, considering all the headlines that say AI is coming for my job. These shortcomings present opportunities for humans to make AI more effective, and not just the reverse.
But first, why were the recommendations so bad?
The jobs recommended to me were so far off the mark that they were laughable. Almost all were outside my industry, most were in a discipline where I have absolutely no experience or credibility, and many were for positions out of sync with my experience.
Three Reasons AI-Powered Recommendations Missed the Mark
- Lack of Relevant Data. Their model had limited visibility into my experience – that is, only a small piece of the data needed to generate meaningful recommendations. The recruiters did not speak to me, and they did not have my resume. My public LinkedIn profile gave them a few clues, but the lack of broader context was obvious from the poor recommendations.
- Lack of Nuance. The AI tool (ironically) missed my fundamental expertise, which is data and analytics. Instead, it made recommendations based on the industries or other contexts in which I help clients. Examples included patent attorney and internal audit manager positions. A scary thought: me in those roles.
- Underpowered or Generic Model. Finally, it is highly likely their recommendations were generated by an underpowered and/or generic model and passed along without review. These models often require many iterations to learn over time and refine their recommendations. “Wrong tool for the job” is not a problem unique to AI, but given AI’s immense speed and power, it is a particularly concerning one.
An experienced recruiter focused on my industry could provide a list of spot-on positions with relative ease and use AI to hunt down as many of those options as are available. This would harness the speed and efficiency offered by AI, coupled with the critical experience possessed by humans. AI cannot make recommendations based on experience it does not have, so human interaction in the process remains critical. It is not just recruiting; many real-life processes and problems will benefit from AI-assisted experts.
Can AI Answer the Key Questions?
Although AI can quickly synthesize large amounts of data and report back insights, it is not a great decision tool by itself. One big reason is that it often lacks critical background and context.
For example:
- It can synthesize data quickly, but what data should it consider in the first place?
- What human-centric processes generated that data, and what nuances exist in it that can only be understood through discussion with its owners?
- Can AI access that data?
- What interpersonal dynamics, company goals, or other important issues need to factor into a decision, but are not represented in the data?
Without this context, models can spew incomplete, misleading, or flat-out wrong responses. Fortunately, these are areas where humans are perfectly suited to step in. Consider a couple of recent client engagements in which we overcame similar obstacles to use advanced analytics effectively.
Two real-world examples of how humans can guide the use of analytics to overcome these shortcomings:
Example 1: Analytics to support a litigation matter
In a recent litigation matter, the Ankura Analytics team designed and executed a series of analytics to support our client’s claim for legal fees. We received a variety of data sets to rely on, including extracts from the client’s legal invoicing system as well as policy and procedural documents. Additionally, we conducted interviews with multiple employees and stakeholders.
Three top takeaways regarding the limitations of data and analytics in isolation:
- Even “clean” data can yield misleading results if not properly prepared. The data, although mostly “clean,” required significant transformation and preparation before it could be fed into our models. This step is common and necessary – although the data was stored neatly in a system, it had been collected and stored for a different purpose. Without this preparation, which took weeks, any analysis would have generated flawed results.
- Even “clean” data can be wrong. We learned that the system extracts, by default, listed each attorney with their most recent title, even on bills submitted 10 years earlier – whereas the historical bills should reflect each attorney’s level at the time the work was performed. It took significant digging, system knowledge, and familiarity with the lawyers in question to identify and account for this nuance (a sketch of this kind of correction follows this list).
- Even great models cannot replace real-life brainstorming and strategy sessions. Some of our most compelling findings came from combining multiple information sources to get a more complete view of the truth – and in some cases, our data strategies were inspired by cultural and process-related learnings identified through interviews.
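To make the title nuance concrete, here is a minimal sketch of how a correction like this might be applied once a title history has been reconstructed from other records and interviews. It is written in Python with pandas, and every table, column, and value in it is hypothetical – an illustration of the idea, not the actual analysis we performed.

```python
import pandas as pd

# Hypothetical invoicing extract: each line carries the attorney's *current* title,
# not the title held when the work was performed.
invoices = pd.DataFrame({
    "attorney_id": [101, 101, 202],
    "work_date": pd.to_datetime(["2014-03-10", "2021-06-02", "2019-11-20"]),
    "hours": [6.5, 3.0, 8.0],
    "title_in_extract": ["Partner", "Partner", "Senior Associate"],
})

# Hypothetical title history reconstructed from HR records and interviews.
title_history = pd.DataFrame({
    "attorney_id": [101, 101, 202],
    "effective_date": pd.to_datetime(["2010-01-01", "2018-01-01", "2016-01-01"]),
    "title_at_time": ["Associate", "Partner", "Senior Associate"],
})

# merge_asof attaches the most recent title in effect on or before each work date,
# replacing the misleading "current title" carried by the system extract.
invoices = invoices.sort_values("work_date")
title_history = title_history.sort_values("effective_date")
corrected = pd.merge_asof(
    invoices,
    title_history,
    left_on="work_date",
    right_on="effective_date",
    by="attorney_id",
    direction="backward",
)
print(corrected[["attorney_id", "work_date", "title_in_extract", "title_at_time"]])
```

The technical fix itself is a few lines; the weeks of effort lay in discovering that the correction was needed and in assembling a trustworthy title history – work that only people close to the data could do.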
Example 2: Analytics and dashboarding to create a “better” staffing model
On another engagement, our Labor Strategy team partnered with our client, a Fortune 100 Consumer Packaged Goods (CPG) company, to develop a better staffing model at its regional manufacturing facilities. What does “better” mean? That question cannot be answered solely through data available to AI; it must be probed through employee interviews and analysis of historic timesheets and production data, among other sources.
Three areas where our experts guided the process toward clear solutions:
- Humans often disagree! Employees, managers, and leadership can (and likely will) express conflicting preferences and opinions regarding staffing models, and even regarding past events. These varying perspectives can drive multiple iterations of modeling approaches.
- Source data may not capture the “why.” Historic staffing and production data may contain spikes and dips caused by illness, local events, atypical swings in demand, and other factors. These explanations often do not exist in the data.
- Data needs an arbiter of the “truth.” Plant-level staffing and production data might not reconcile to higher-level corporate reporting (sadly, this is all too common), and no single source can simply be declared correct. Sometimes both are accurate but differ because of methodology and intended use. Experts can determine which “truth” is most relevant for modeling (a sketch of this kind of reconciliation check follows this list).
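As an illustration of that reconciliation point, here is a minimal sketch of how one might surface gaps between plant-level data and corporate reporting before an expert decides which figure (or blend of figures) a model should treat as truth. Again, the names, numbers, and tolerance are hypothetical.

```python
import pandas as pd

# Hypothetical plant-level timesheet data and a corporate summary report.
plant_hours = pd.DataFrame({
    "plant": ["East", "East", "West", "West"],
    "month": ["2023-01", "2023-02", "2023-01", "2023-02"],
    "labor_hours": [12400, 11800, 9600, 9900],
})
corporate_report = pd.DataFrame({
    "month": ["2023-01", "2023-02"],
    "labor_hours": [22300, 21700],
})

# Roll the plant data up to the corporate level and compare the two "truths".
rollup = plant_hours.groupby("month", as_index=False)["labor_hours"].sum()
check = rollup.merge(corporate_report, on="month", suffixes=("_plants", "_corporate"))
check["gap"] = check["labor_hours_plants"] - check["labor_hours_corporate"]
check["gap_pct"] = check["gap"] / check["labor_hours_corporate"]

# Flag months where the sources diverge by more than an agreed tolerance;
# a person familiar with both systems then decides which figure to model against.
tolerance = 0.01
print(check[check["gap_pct"].abs() > tolerance])
```

The code only finds the discrepancies; deciding what they mean, and which source the staffing model should rely on, remains a human judgment call.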
Our solution resolved these issues through a series of staffing models that were presented to and refined with the client. Eventually, our work benefited from sophisticated analytics, but those analytics rested on a foundation only humans could lay.
Final Thoughts
AI is an incredibly powerful technology with many applications, and it is improving every day. To get the most out of it, and to avoid embarrassing mistakes, it should be approached thoughtfully and with the proper expertise. When configured and trained appropriately, AI can help uncover new insights with the guidance of qualified humans. That is not only good news for the future of human employment, but for humanity overall.
© Copyright 2024. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.