
Why AI Projects Fail

AI · AI Strategy · Transformation · Management · Organisation

Extended web version of the article published in Red Stack Magazin 03/2025 (in German), based on the talk at the KI-Navigator Conference 2024.

AI projects rarely fail on the technology. They fail on people and organisations — on conflicting expectations, missing strategy, and technology sprawl without governance. This piece walks through the patterns that most often stall programmes in regulated industries, and the levers that successful programmes pull.

At a Glance

Why it matters: AI projects fail systematically, but not on the technology. Knowing the actual root causes (strategy gaps, organisational sprawl, missing governance) lets you address them deliberately and protect your investment. Key takeaways:

- Failures cluster in three patterns: the prototype dilemma, silo solutions, and vendor lock-in.
- Successful programmes align people, technology and organisation: realistic expectations, an open-source-first architecture, and a data strategy before tool shopping.
- Build internal expertise and start small: a working pilot is worth more than ten failed visions.

The Three Acts of AI Failure

Act 1: The Diagnosis

Anyone who has supported AI projects for years sees the same picture: down in the engine room, the technology works; up on the bridge, the compass is missing. The causes almost never lie in the models. They lie with the actors, who arrive at the same project with different expectations and different agendas.

Media and tech evangelists announce daily breakthroughs; even good journalists often lack the technical depth, because sensation earns the click and nuance does not. The result is a constant hype noise that leaves boards feeling they urgently have to do something, without it being clear what.

Leadership reacts to that pressure with the fear of falling behind, and expects quick wins that aren't realistic. There is a structural side to this in Germany: in software-native companies, the founders' engineering background still shapes today's executive layer, while in long-established German corporations the typical career paths (sales, finance, legal) rarely route IT experience all the way into the boardroom. That is not an individual reproach; it is a structural issue. The consequence is uncertainty in handling technical depth, and with it susceptibility to tools that look impressive but don't hold up.

AI consulting, finally, knows the latest technologies and has to sell them. It sees the gap between reality and customer expectations, and at the same time it is under enormous pressure to keep up: the field of language models alone has been completely overhauled three times in the past twelve years, from Word2Vec and GloVe through the transformer architecture and BERT to today's large language models such as GPT-4 and Llama. Anyone who has not personally ridden those waves ends up selling either outdated architectures or the next vendor promise.

Act 2: The Patterns

In advisory practice, three patterns show up over and over — independent of industry and company size.

The first is the prototype dilemma: six months of development, three years of discussion. The prototype works in the lab. No one thought about the production environment, real-world data quality looks completely different, organisational processes don't fit the technology, and change management was never part of the project from the start. What is left is a demo video and a roadmap nobody believes in any more.

The second is the silo solution: marketing builds on Tool A for content generation, IT develops Tool B for data analysis, HR pilots Tool C for recruiting. Every department has its own vendor, its own data definition, its own contract. What emerges is technology sprawl without shared infrastructure — fragmented data flows, no scalability, and a compliance landscape no one can survey any more. In regulated industries this pattern is particularly expensive, because every island generates its own audit trail.

The third is vendor lock-in. Many companies bet on proprietary cloud services without asking the uncomfortable questions: what happens at the next price increase? How do we get out if the model degrades or the terms change? Where does our data ultimately sit — and what is lost in the worst case?
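One common answer to the exit question is to keep application code behind a thin, provider-agnostic interface, so that switching providers is a contained change rather than a rewrite. A minimal sketch of the idea (all class and function names here are illustrative, not from the article):

```python
from abc import ABC, abstractmethod

class TextModel(ABC):
    """Provider-agnostic model interface (hypothetical names)."""
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class OpenSourceModel(TextModel):
    """Stand-in for a self-hosted open model behind an inference server."""
    def __init__(self, model_name: str):
        self.model_name = model_name

    def complete(self, prompt: str) -> str:
        # A real implementation would call the hosted model here;
        # stubbed out so the sketch stays self-contained.
        return f"[{self.model_name}] response to: {prompt}"

def answer(model: TextModel, question: str) -> str:
    # Application code depends only on the interface, so replacing the
    # provider is confined to one constructor call, not the whole codebase.
    return model.complete(question)

print(answer(OpenSourceModel("llama-3"), "Where does our data sit?"))
```

The point is not the stub itself but the dependency direction: the uncomfortable questions above become answerable when no business logic is welded to one vendor's API.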

Act 3: The Success Factors

Successful AI implementations look surprisingly similar. Three axes run through almost every transformation that has worked in my experience.

On the people axis, sequencing is decisive. Stakeholders are involved early, from the tech teams to the C-level. Expectations are set realistically: what AI can do today, and what it cannot. Change management is an integral part of the programme, not an afterthought activated three weeks before go-live.

On the technical axis, an open-source-first architecture carries the load. It creates flexibility and control, lets new models be integrated modularly, and provides the auditability that is non-negotiable in regulated environments. The longer argument for why open models are, in enterprise contexts, the economically and regulatorily more mature choice is made in the companion post Stop Waiting, Start Shipping. Add to this quality assurance from day one: evaluation and monitoring are set up with the first model, not after the first incident.
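"Evaluation with the first model" can start as something very small: a fixed regression set and a pass rate that runs on every model change. A minimal sketch, with a toy metric and a toy model standing in for real ones (all names are illustrative):

```python
def exact_match(prediction: str, expected: str) -> bool:
    """Simplest possible metric; real programmes would use richer ones."""
    return prediction.strip().lower() == expected.strip().lower()

def evaluate(model_fn, test_cases):
    """Run a fixed regression set and return the pass rate."""
    results = [exact_match(model_fn(q), a) for q, a in test_cases]
    return sum(results) / len(results)

# Toy stand-in for a model call, so the sketch runs without a model.
def toy_model(question: str) -> str:
    return {"capital of France?": "Paris"}.get(question, "unknown")

cases = [
    ("capital of France?", "Paris"),
    ("capital of Spain?", "Madrid"),
]
print(f"pass rate: {evaluate(toy_model, cases):.0%}")  # prints "pass rate: 50%"
```

Even a harness this crude makes model swaps and prompt changes comparable over time, which is exactly what is missing when evaluation is bolted on after the first incident.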

On the organisational axis, data strategy comes before tool shopping: what do we have, what do we need, at what quality. Governance is clear — who decides what, when and how. And the approach is iterative: small steps, fast learnings, honest corrections.

What Distinguishes Successful AI Transformations

After more than ten years in the data science community and hundreds of project contacts, I see one factor that does not come out of methodology textbooks: the most successful AI implementations emerge where knowledge flows across domain boundaries.

In the Python community I have watched since 2014 how people from completely different fields work on problems together — from the European Space Agency to fintech start-ups, from climate research to industrial computer vision. At a PyData conference an engineer from a banking team picks up a pipeline idea from astronomy, because the same class of time-series problem has been solved there for ten years longer. This cross-pollination produces better solutions, because many supposedly new problems have long since been worked through in other fields. That is exactly the job of an advisor who doesn't just know the latest tool generation, but knows which patterns transfer.

Concrete Recommendations

1. Before you invest in tools: analyse what your people actually need, and what the regulatory landscape allows.
2. Develop a data strategy before you train AI models.
3. Bet on open standards: they secure optionality, auditability and negotiating position. In regulated environments, open source is the strategically more mature choice, not the budget version.
4. Build internal expertise: external advisors can show the way; you have to walk it yourselves.
5. Start small: a working pilot is worth more than ten failed visions.

Conclusion

AI projects do not fail on the technology — they fail on strategy gaps, organisational sprawl and missing governance. Anyone who builds a stable foundation of clear prioritisation, data-centric planning and open-source-first architecture creates the precondition for sustainable success.

The technology is there. The question is: are you organisationally and strategically ready for it?

AI projects fail at predictable points — and that can be avoided.


Selected Slides

Slide from the KI-Navigator talk, Act 1 (The Diagnosis): the actor diagnosis.
Slide from the KI-Navigator talk, Act 2 (The Patterns: Tool Thinking): recurring patterns from advisory practice.
Slide from the KI-Navigator talk, Act 2 (The Patterns: Tool Thinking, Consequences): recurring patterns from advisory practice.
Slide from the KI-Navigator talk, Act 3 (The Success Factors: Focus): the three axes of successful implementations.