The AI Productivity Lie: Why Automation Without Strategy Is Destroying Value

By Gary McRae on 25 Feb, 2026 2:55:49 PM


If I asked you to show me where your organisation decided which decisions should stay human, what would you show me?

Not your AI governance policy. Not your responsibility framework. The actual decision log that shows “This work requires human context and judgment. This work doesn’t. Here’s why.”

This question is rarely asked. Businesses are in a rush to demonstrate savings and efficiencies.

Daron Acemoglu, a Nobel Prize winner in 2024, recently explained why: current incentives default to automation over augmentation. Leaders are not consciously choosing that path. They are following it.

The pattern beneath the automation

In a recent MIT Sloan podcast, Acemoglu put it plainly: AI is not improving productivity.

Not because AI cannot improve productivity, but because most organisations are deploying it under incentives that favour automation over augmentation. As a result, it replaces workers rather than empowering them.

He argues that technology does not have a fixed destiny. Today’s choices determine whether AI boosts human capability or simply accelerates automation and inequality.

Most leaders I work with in Singapore are not actively making that choice. They are following the default.

They ask:

  • Which AI tool should we use?
  • How do we automate this process?
  • What is the ROI on this pilot?

These are tactical questions. The strategic question goes unasked: what work requires human judgment, context, and responsibility for the outcome, and what work does not?

So the boundary gets drawn by IT teams prioritising efficiency metrics, vendors selling automation platforms, and consultants optimising for cost reduction.

Then, usually around the six-month mark, something goes wrong.

  • The automated credit system declines a customer who should have been approved because the algorithm cannot read the cultural context the way a human underwriter can.
  • The fraud detection bot flags legitimate transactions from markets it was not trained on.
  • The customer service automation routes escalations incorrectly because it does not understand what actually matters to your clients.

By then, the human capability that once caught those problems has been lost. The team that knew how to make those judgment calls is gone, retrained, redeployed, or replaced.

You are managing consequences you could have prevented.

The question leaders are not asking

Count the AI projects in your roadmap right now.

How many of them started with the question: “Should this work be automated, or does it require someone with context, judgment, and responsibility for the outcome?”

If the answer is none, you are not choosing automation. You are accepting it as the default.

Think of it as the escalation problem reversed. Instead of decisions landing on your desk that should not, decisions are leaving your organisation to algorithms that should not.

Acemoglu is right. Current incentives push toward automation by default. Vendors sell efficiency. Consultants optimise for cost. Pilots are measured by time saved, not by value preserved.

But that is not inevitable. It is a design choice you are not making consciously.

What actually works

I use the CAS system with clients to map this boundary before they deploy AI, not after.

Clarity

Which work requires human judgment, lived context, cultural fluency, and responsibility for the outcome to do well? Which work is genuinely mechanical and can be systematised without loss?

Accountability

Who owns the decision about where that boundary sits? This cannot be delegated to IT, procurement, or a vendor. It is a strategic leadership call.

Support

How do you hold that boundary as pressure mounts to automate more? What structures keep the decision explicit instead of letting it drift?
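For teams that want to make that boundary explicit rather than implied, the decision log mentioned earlier can be as simple as one structured record per workflow, naming the boundary, the rationale, and the accountable owner. A minimal sketch (all field names and examples are hypothetical, not a tool from The Clarity Practice):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BoundaryDecision:
    # One row in the decision log: which work stays human, and why.
    workflow: str    # e.g. "credit underwriting for new markets"
    boundary: str    # "human" or "automate"
    rationale: str   # the judgment this work requires, or why it is mechanical
    owner: str       # the leader accountable for the call, not IT or a vendor
    review_on: date  # when the boundary is re-examined, so it cannot drift silently

log = [
    BoundaryDecision(
        workflow="credit underwriting for new markets",
        boundary="human",
        rationale="requires cultural context the algorithm cannot see",
        owner="Head of Credit",
        review_on=date(2026, 8, 25),
    ),
    BoundaryDecision(
        workflow="invoice data entry",
        boundary="automate",
        rationale="mechanical; errors are cheap to catch and fix",
        owner="COO",
        review_on=date(2026, 8, 25),
    ),
]

# The log answers the opening question directly: what stays human, and why.
human_work = [d.workflow for d in log if d.boundary == "human"]
```

The point is not the tooling; a spreadsheet works just as well. What matters is that each entry has a named owner and a review date, which is what keeps the Accountability and Support legs of the framework honest.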

Look at the AI pilots running in your organisation right now.

Some are genuinely mechanical, where speed and scale matter, errors are cheap to catch and fix, and no cultural context is required.

Others appear mechanical on a process map but require judgment to do well. Cultural context. Relationship history. Understanding what matters to the business beyond the metrics that an algorithm can see.

If you cannot draw that line clearly, you are not piloting AI. You are delegating by default, not by design.

The gap between “can be automated” and “should be automated” is where organisations lose capability they did not mean to give away, and that loss rarely becomes visible until the work goes wrong.

The boundary you need to draw

If you are deploying AI and have not explicitly decided what work should stay human, you are making the choice by omission.

Acemoglu’s argument is that this matters beyond your company. The cumulative effect of leaders defaulting to automation is that AI accelerates inequality instead of creating broadly shared prosperity. But it starts with individual choices.

The question is not “Can AI do this?” It is “Should it?”

If the work requires someone to have lived through your industry, understood your culture, or carried responsibility for the outcome, it is human work. Build AI around it, not through it.

If you are not sure where that boundary sits, that is a leadership question, not a technical one. And the cost of guessing wrong is higher than the cost of mapping it properly.

 

Topics: AI Leadership
