
AI for Decision-Makers: Why You Can't Buy AI, You Have to Build It

Tags: ai-for-managers, data-literacy, enterprise-ai, management, misconceptions, team-culture
Artificial Intelligence for Managers — guest lecture at WHU

"I need to buy AI." That sentence from a senior executive at a railway station captures the biggest misconception in management: artificial intelligence is not a product you simply buy off the shelf. After years of advising executives and data scientists, the fundamental challenges are clear — and they have less to do with technology than most people think.

A look back from 2026: This piece was written in 2020 and is left deliberately unchanged. The core theses — AI is not a product to buy, data quality decides, the human factor is the actual challenge — have carried through six years and one LLM surge. Some numbers and examples are visibly pre-LLM-era; I am not updating them, because the value is in the logic, not in the comparison figures. For anyone asking what long-term experience in this topic looks like: this is a trace back to the time when AI in the German Mittelstand still had to be spelled out letter by letter.


The AI Paradox in Management

When you Google "AI for managers", you always see the same white robot. Google "replacing managers" and you find articles about how AI will replace executives. But Google "replacing data scientists" — there the discussion is still open. For data engineers the discussion does not even exist. This shows that the biggest threat from AI is not the technology itself but the misunderstanding around it.

The reality of successful managers

Harvard Business Review studies show: managers spend 70% of their time on administration and problem-solving. Only 10% goes to strategy and innovation, 7% to people development. At the same time they believe digital technologies and data analysis are among the most important future competencies — but massively underestimate the importance of people skills.

The decisive point: the biggest challenge in AI projects is human communication, not technology.

AI Isn't New — Neither Are the Problems

Artificial intelligence is older than relational databases. AI was invented in the 1940s, relational databases only in the 1970s. MIT Technology Review, the magazine of one of the world's leading technology universities, was already debating the same topics in the 1980s and 1990s: "Will artificial intelligence ever fulfill its promise?" (1986, after 25 years!), "Automation" (1985), "How to keep mature industries innovative" (1987) and "Can computers create literature?" (1998).

The insight: what you experience today as revolutionary is often a wave of developments that have been running for decades. Understanding that helps you separate signal from noise.

Why AI works now

Three factors pulled AI out of the "winter": compute (Moore's Law, GPU computing, specialised AI chips), data (the internet age, cheap storage, global data collection) and software (open source, global knowledge sharing, the Python ecosystem). This combination simply did not exist two decades ago.

Data Literacy for Decision-Makers: Understanding from the Boardroom Down to the Code

As a leader you do not need to be able to code, but you do need data literacy — the ability to evaluate data and make decisions on it. That is the core of your job as a decision-maker.

Working with data has four levels. For the first two — data collection and data management — all you need to know is that professional systems handle these tasks. The third level, data evaluation, is where it becomes relevant for you: you understand what the data means, how findings are presented, and above all whether data-driven decisions are sensible. That is your core competence.

The fourth level — data application — is your main domain. Decisive here: data ethics (non-negotiable for leaders), critical thinking (which you delegate to experts) and decision evaluation (your core competence).

My rule of thumb after years of advisory work: of everything your data scientists do, you may need to understand maybe 20 percent technically. The other 80 percent is change management, business understanding and people leadership.

The 5 Biggest AI Misconceptions in Management

1. "Bigger is better" — the Hadoop mistake

Classic example: companies buy Hadoop clusters before they hire their first data scientist. Result: "we don't actually need the Hadoop cluster, we only have a few gigabytes of data." The rule: don't buy resources pre-emptively, understand the need first.
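"Understand the need first" can literally be a ten-line script rather than a procurement process. A minimal sketch of that sanity check — the 500 GB threshold is an illustrative assumption, not a hard rule, and the path is whatever directory holds your data:

```python
import os

def dir_size_gb(path):
    """Sum the sizes of all files under `path`, in gigabytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return total / 1e9

# Illustrative threshold (an assumption, not a benchmark): below a few
# hundred gigabytes, one well-specced machine usually beats a cluster.
SINGLE_MACHINE_LIMIT_GB = 500

if __name__ == "__main__":
    size = dir_size_gb(".")
    verdict = ("a single machine is fine" if size < SINGLE_MACHINE_LIMIT_GB
               else "distributed tooling may pay off")
    print(f"{size:.2f} GB -> {verdict}")
```

If that script prints a number in the low gigabytes, the Hadoop conversation is over before it starts.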

2. "Data lakes are clean lakes"

Data lakes are not clear mountain lakes that you scoop clean data out of. They are complex systems made of many components — more concept than technology. Data quality comes from company culture and governance, not from technology.

3. "It's an IT project"

Wrong. Data science and AI are research and development. They need an experimental culture instead of rigid processes, an open budget for experiments that can fail, interdisciplinary teams, and a willingness along the way to solve different problems than the ones planned.

4. "Data is objective and unbiased"

Data is always biased. Example: a US police AI system systematically discriminates against certain groups because it was trained on historical, biased data. The solution: diverse teams, ethical guidelines, transparent processes.

5. "Deep learning solves everything"

Often you can solve problems better with classical statistical methods. They are more stable and provable, explainable (often not possible with deep learning), less data-hungry, and faster to implement.
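To make the explainability point concrete: a logistic regression has exactly one weight per feature plus a bias, and you can read the model's reasoning straight off those numbers. A plain-Python sketch on invented toy data (the data, learning rate and epoch count are all assumptions for illustration):

```python
import math

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Logistic regression via stochastic gradient descent.
    The learned weight and bias are directly interpretable --
    a transparency deep networks rarely offer."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid: probability of class 1
            err = p - yi                    # gradient of log-loss w.r.t. z
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    return 1 if b + sum(wj * xj for wj, xj in zip(w, xi)) > 0 else 0

# Invented toy data: one feature, classes separable around 0.5.
X = [[0.0], [0.2], [0.4], [0.6], [0.8], [1.0]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logreg(X, y)
print([predict(w, b, xi) for xi in X])
```

A positive weight here means "larger feature value pushes towards class 1" — a sentence you can say in a board meeting, which is the whole point of the classical method.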

The Garry Kasparov Principle

In 1997 IBM's Deep Blue beat the world chess champion Kasparov. Many theorists had predicted: "when a machine beats humans at chess, they have overtaken us."

What actually happened? Kasparov did not become unemployed. He says today: "computers are great tools." He developed new concepts for how humans and computers can work together. Chess is more popular today than ever. Kasparov likes to quote Pablo Picasso: "Computers are useless. They can only give us answers." The lesson: AI does not solve problems, it answers questions. Asking the right questions remains a human task.


Practical Implementation: How AI Really Works

The Titanic example

The famous Titanic dataset is boring for data scientists, but for executives it is perfectly instructive.

Insight: simple solutions are often better than complex AI.
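Before commissioning any model, ask your team what a one-line baseline scores. A sketch over a hypothetical mini-sample shaped like the Titanic data (the records are invented for illustration; on the real dataset the famous "women survive" rule already lands close to many far more elaborate models):

```python
# Invented mini-sample in the shape of the Titanic dataset -- not real records.
passengers = [
    {"sex": "female", "pclass": 1, "survived": 1},
    {"sex": "female", "pclass": 3, "survived": 1},
    {"sex": "male",   "pclass": 1, "survived": 0},
    {"sex": "male",   "pclass": 3, "survived": 0},
    {"sex": "female", "pclass": 2, "survived": 0},
    {"sex": "male",   "pclass": 2, "survived": 1},
]

def gender_rule(p):
    """One-line baseline: predict survival for women, death for men."""
    return 1 if p["sex"] == "female" else 0

hits = sum(gender_rule(p) == p["survived"] for p in passengers)
print(f"baseline accuracy: {hits}/{len(passengers)}")  # 4/6 on this invented sample
```

Any expensive model that cannot clearly beat a rule like this is not worth its budget — that is the executive takeaway.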

Style transfer as a teaching tool

Take a Van Gogh painting, learn the style, apply it to a photo — there is your artwork in Van Gogh style.

But careful: it doesn't work on backlit photos. Why? Probably no backlit images in the training data. The lesson: AI only works as well as the data it was trained on.

Speech synthesis — a realistic example

With a MacBook and 9 days of training you can build a system that reads any English text aloud. Cost: under €1,000.

But: 95% of research results never make it into production. Be realistic about expectations.

Company Culture as a Success Factor

The cooling-house experiment

The psychologist Dietrich Dörner had people steer a complex system (temperature regulation). Result: under stress, people fall into erratic behaviour and maximum reactions.

In a company that means: without the right culture, even the best AI projects fail. Under pressure, people fall back into old hierarchies and silos — and that is exactly what happens when an AI programme is overtaken by expectation pressure.

Six recurring postures in AI teams

In every AI team I encounter recurring patterns that lend themselves nicely to personification. Six archetypes from advisory practice — none of them wrong per se, none of them sufficient on their own.

The leader's job is not to judge these postures but to channel the energies. Show-off Sarah's enthusiasm is an asset when it lands on a problem that genuinely exists. Anodyne Andy needs clarity on the business problem before tools become a topic at all. Prepared Pam is the rare gold standard — and the profile that carries a programme, if you can hire one.

The communication principle

In the Python community, astronomers can work productively with web developers — held together by an open, respectful communication culture. That experience is one of the reasons why open source, for me, is not one option among many but a structural advantage.

Translated to your company: create spaces where everyone involved can honestly say "I didn't understand that", without fearing a loss of status. That is the precondition for interdisciplinary teams to work.

What Decision-Makers Have to Do Differently

Think across industries: innovation happens at unexpected intersections. Which problems do other industries solve in similar ways to yours? Where do you find unconventional partners? What can you learn from completely different domains?

Rethink team composition: you don't need "AI superheroes". You need diverse teams with data engineers (making data accessible), data scientists (models and insights), domain experts (context and business understanding) and change managers (taking people with you).

Establish an experimentation culture: Google brings only 5% of its AI models into production. Prepare yourself for that success rate. That means: budget for experiments that can fail, a learning culture instead of guaranteed outcomes, iterative development instead of big-bang projects.

Set realistic timelines: AI is a marathon, not a sprint. From experience you need 3–6 months for a proof of concept, 1–2 years for production maturity, and 2–5 years for scaling. ROI does not show up in quarters but in years.

AI in 10 Years: The Realistic Vision

Forget science fiction. AI will become part of everyday life without us perceiving it as "AI".

Practical applications

Business models

When you have an AI model that works, you can scale it horizontally. That is Google's business model: build once, use millions of times.

The decision: do you only want to buy AI services, or do you want to develop your own scalable AI systems?

Concrete Recommendations

Now: develop a data strategy (where is your data? Who has access? What quality?). Map domain expertise — your best AI applications emerge where you have the deepest business understanding. Create regular formats in which technical and business stakeholders come together.

Medium term (6–12 months): define an experimentation budget (5–10% of IT spend for AI experiments without guaranteed outcomes). Start with a pilot project: small scope, clear business value, measurable results. Implement AI literacy programmes for executives — basic understanding, not coding skills.

Long term (1–3 years): make AI strategy part of your corporate strategy, not an isolated IT topic. Build partnerships with universities, open-source communities and unconventional industries. Establish ethics and governance before you scale.

Conclusion: The Human Factor Decides

The most important insight after years of AI advisory work: technology is only half the battle. Success depends on whether you solve the human challenges.

You cannot buy AI — you have to build AI. And that is a deeply human task. The decisive difference: companies that understand this use AI as a strategic advantage. The others remain customers of Google, Microsoft and Amazon.

AI for decision-makers — without the buzzword bingo.

Let's talk

Related links

  1. Explaining AI to Managers (2019-09)
  2. Machine Learning, Artificial Intelligence (AI) and Big Data (2019-12)
  3. Data Literacy for Managers (2019-12)
  4. Artificial Intelligence for Managers (2020-03)
This piece consolidates several talks and my guest lecture at the WHU – Otto Beisheim School of Management from 2019 and 2020. Six years on I find the core line more stable, not more obsolete — what in 2020 still required explaining is now common sense in the boardroom. The hard work, unchanged, sits in the execution.