AI project failures stem from data issues, misalignment with business needs, and false promises
After discussing Proof of Concept (POC) projects for GRAVITY AI-Assist with numerous customers, I can corroborate the findings of the RAND study: failure rates for AI projects are alarmingly high, estimated at 70-85% across multiple analyses.
One of the most significant reasons AI projects fail is the quality and quantity of their data. AI models are entirely dependent on the data they are trained on; if that data is incomplete, biased, or inaccurate, the resulting model will reflect those flaws. As noted by AI Multiple Research, "Data quality is crucial in artificial intelligence because it directly impacts AI models' performance, accuracy, and reliability. High-quality data leads to well-performing AI systems, while low-quality data can result in inaccurate outputs, biased decisions, and unreliable performance."
Collecting and preparing high-quality datasets is a major challenge for many organizations. Data may be siloed across different systems, inconsistently formatted, or missing crucial information. There can also be inherent biases present in the data, reflecting historical inequities or human prejudices. If these issues are not addressed, the AI model will simply learn and amplify those biases, leading to unfair and unethical outcomes.
Furthermore, many AI applications require massive amounts of data to be effective. Techniques like deep learning are data-hungry, and models often struggle with insufficient data quantities. Ensuring there is enough representative data to properly train AI systems is an ongoing obstacle, especially in specialized domains or for rare events.
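To make this concrete, here is a minimal sketch of the kind of pre-training data-quality checks a team might run. The column names, example data, and the pandas-based approach are my assumptions for illustration, not a prescription:

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, label_col: str) -> dict:
    """Collect a few simple signals of data quality before training."""
    return {
        # Completeness: share of missing values per column.
        "missing_ratio": df.isna().mean().to_dict(),
        # Duplicates inflate apparent data quantity without adding signal.
        "duplicate_rows": int(df.duplicated().sum()),
        # Class balance: a heavily skewed label hints at bias or rare events.
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
        # Raw quantity: deep learning is data-hungry.
        "row_count": len(df),
    }

# Hypothetical example data; real checks would run on the actual dataset.
df = pd.DataFrame({
    "feature_a": [1.0, 2.0, None, 4.0],
    "label": ["ok", "ok", "ok", "fail"],
})
print(basic_quality_report(df, label_col="label"))
```

Checks like these don't fix siloed or biased data, but they at least surface the problems before a model silently learns them.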
During our Proof of Concept (POC) stages, we encounter significant issues ranging from a lack of integration with authentication systems to AI projects that are not customized to fit the unique needs and workflows of an organization. Often there is a sense of doing something for its own sake. When our direct customers present actual use cases, these AI teams frequently find it challenging to provide meaningful value.
The CIO article "10 Famous AI Disasters" doesn't fully capture what we have observed. The projects we have seen don't come close to those cases; even in failure, those projects had defined use cases. In contrast, we have encountered projects that lacked even a basic use case.
But what I think it doesn't explore enough is the "why", because these facts have been known for years from IT projects in general: data handling and gathering is complex, independent of the project type.
As I referenced in a recent blog post, Michael Seemann's assertion provocatively frames the discourse: "I am ready to say: AI is a scam. Not as obvious a scam as the crypto pyramid schemes, but a scam on the level of expectation management."
I have encountered misplaced expectations across the professional spectrum, from IT engineers to C-level management. The assumption that AI equates to automatic solutions is widespread. Questions even arose after we branded our GRAVITY Content Type as "AI-Assist": following demonstrations, people asked why any project effort was required at all, since AI was involved and was expected to handle the tasks on its own.
So why do these things happen? According to tante's re:publica 24 talk "Empty Innovation", in recent years we've observed numerous instances of what can be termed "empty innovations": Blockchain/NFTs, the Metaverse, and the "Uber but for X" trend. These supposed innovations, championed by our "innovation leaders," often result in minimal meaningful change, focusing instead on increased financialization, rental models for previously owned goods, and practices that undermine labor rights.
Artificial Intelligence appears to be following the same trend: it is highly effective for certain applications and clearly unsuitable for others, yet it is often marketed as a universal solution.
To clarify, despite my years of experience working as an IT consultant, I do not claim expertise in corporate enterprise AI integration projects. My insights should not be taken as authoritative; they are simply what I have gleaned over the past month through demonstrating a product with "AI" in its title and engaging with AI project teams across Europe.
Deep understanding of operations: Successful AI deployments require an in-depth understanding of current operations and meticulous planning to ensure the AI augments rather than disrupts critical processes. Failing to map out integration points and adapt the AI to mesh with legacy systems is a recipe for inefficiencies, errors, and user rejection. AI cannot operate in a silo; it must be seamlessly woven into the fabric of the business to realize its full potential.
Cross-functional team: Assemble a cross-functional team with expertise spanning data science, software engineering, domain knowledge, and business operations. Diverse perspectives are critical for properly integrating AI into existing processes.
Data infrastructure and pipelines: Invest in data infrastructure and pipelines to ensure AI models have access to high-quality, properly formatted data; a minimal validation sketch follows after this list.
Continuous monitoring of your AI models: Implement processes for continuous monitoring and improvement of AI models. AI is an iterative process, so mechanisms for ongoing learning and evolution are essential; a drift-monitoring sketch follows below as well.
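As a sketch of the pipeline point above: a hypothetical validation step that rejects malformed records before they reach a model. The Record schema, field names, and currency whitelist are assumptions chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Record:
    """Hypothetical schema for records flowing into a model."""
    customer_id: str
    amount: float
    currency: str

def validate(raw: dict) -> Record:
    """Fail fast on malformed input instead of letting it reach training."""
    missing = {"customer_id", "amount", "currency"} - raw.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if raw["currency"] not in {"EUR", "USD"}:
        raise ValueError(f"unknown currency: {raw['currency']}")
    return Record(str(raw["customer_id"]), float(raw["amount"]), raw["currency"])

valid, rejected = [], []
for raw in [{"customer_id": "42", "amount": "19.9", "currency": "EUR"},
            {"customer_id": "43", "amount": "oops", "currency": "EUR"}]:
    try:
        valid.append(validate(raw))
    except ValueError as err:
        rejected.append((raw, str(err)))  # in practice: route to a dead-letter queue
print(len(valid), "valid,", len(rejected), "rejected")
```

In practice, rejected records would be reviewed rather than silently dropped, so data problems surface early instead of degrading the model.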
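And for the monitoring point: a minimal sketch of drift detection that compares live feature statistics against a baseline captured at training time. The feature names, tolerance, and simple mean comparison are assumptions; production setups would use proper statistical tests and alerting infrastructure:

```python
import statistics

def drift_alerts(baseline: dict[str, float],
                 live_values: dict[str, list[float]],
                 tolerance: float = 0.2) -> list[str]:
    """Flag features whose live mean drifts beyond tolerance from the training baseline."""
    alerts = []
    for feature, train_mean in baseline.items():
        live_mean = statistics.fmean(live_values[feature])
        # Relative drift: how far the live mean has moved from training.
        if abs(live_mean - train_mean) > tolerance * abs(train_mean):
            alerts.append(f"{feature}: train={train_mean:.2f} live={live_mean:.2f}")
    return alerts

# Hypothetical baseline from training time vs. recent production inputs.
baseline = {"ticket_length": 120.0, "attachments": 0.8}
live = {"ticket_length": [180.0, 200.0, 175.0], "attachments": [0.7, 0.9, 0.8]}
for alert in drift_alerts(baseline, live):
    print("DRIFT:", alert)
```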
Copyright notice: the "Failarmy" logo featured in the title image of this blog is the property of Failarmy.