Information Management

The Productivity Trap

8 min read
Illustration: from industry to information thinking

Why current responses fail — and why optimising output won’t save you.

A familiar management reflex still dominates organisational thinking: if we optimise output per unit of time, everything else will follow.

Productivity is assumed to be unambiguously good. More productivity, better outcomes. Organisational success is framed as the aggregation of individual efficiency: more features shipped, more reports generated, more emails processed, more meetings attended.

For a long time, this logic worked. In an industrial economy, productivity thinking was the correct response. Materials were stable. Processes were linear. Environments were predictable. Scientific management, time-and-motion studies, and standardisation delivered extraordinary gains. They helped build the modern industrial world.

But we no longer operate in that context. Applied to information work, productivity thinking produces pathological outcomes.


The illusion of velocity

The dynamic becomes most visible when organisations apply productivity logic to AI adoption.

The promise is compelling: AI tools generate code, content, or analysis at machine speed. Costs drop. Delivery accelerates. Bottlenecks disappear. Yet empirical evidence tells a different story.

A 2025 randomised controlled trial by METR examined experienced developers working on real-world codebases — not isolated tasks, but complex, legacy systems typical of large organisations. The result was striking: developers using AI tools took, on average, 19% longer to complete their tasks than those working without them. Despite this, participants consistently felt faster. Even after completing tasks more slowly, many still believed the AI had helped.

The explanation reveals a core limitation of productivity thinking. AI removes the friction of the blank page. Output appears instantly. The sensation of velocity is real. But the developer’s role shifts from creating to reviewing — and review is where productivity assumptions fail.

AI is proficient at producing work that is almost correct. Code compiles. Tests pass. Nothing breaks immediately. But subtle errors, omissions, and flawed assumptions remain. Identifying these in large volumes of generated output takes longer than producing the work deliberately in the first place. The same applies to emails, meeting reports, articles — any type of knowledge work.

The productivity metric improves. The actual outcome degrades.

A recent large-scale study by Anthropic reinforces this from a different angle. When 1,250 professionals were interviewed about their AI use, a telling discrepancy emerged. Participants described their AI use as predominantly collaborative — 65% said they were augmenting their own work with AI assistance. But observed behaviour told a different story: actual Claude conversations showed nearly the inverse, with 49% of interactions classified as automation, where AI was simply performing tasks rather than collaborating on them.

What people said                 What was observed
65% augmentation                 49% automation
“I’m collaborating with AI”      AI was performing the task

Source: Anthropic, Introducing Anthropic Interviewer, 2025

People do not just feel faster than they are. They also believe they are collaborating when they are mostly delegating. The sensation of partnership is real. The measurement of it is not. This is precisely the trap productivity thinking sets: it captures motion and mistakes it for direction.


Activity without progress

This pattern is not unique to AI. It characterises information work more broadly.

When productivity becomes the dominant objective, organisations appear busy while becoming less capable. Speed is rewarded over understanding. Activity substitutes for progress. Throughput is measured — but there is no way of measuring meaning.

The number of meetings rises to compensate for missing clarity. Individuals optimise their own output rather than shared understanding. Work is processed continuously, but insight does not accumulate. The organisation moves faster, but coherence erodes.

The Anthropic research surfaces exactly this tension in lived experience. Across 1,250 professionals, high satisfaction with AI tools was consistently paired with elevated frustration. In the words of one information security researcher: “If I have to double-check and confirm every single detail the agent is giving me to make sure there are no mistakes, that kind of defeats the purpose of having the agent do this work in the first place.” A mathematician echoed the same problem: “After I have to spend the time verifying the AI output, it basically ends up being the same amount of time.”

The to-do list shrinks. The organisation’s ability to act effectively declines. Productivity culture creates the performance of capability without the substance of it.


Assumptions that no longer hold

Productivity thinking rests on assumptions that were valid in industrial contexts and fail in informational ones.

Work is repeatable. On an assembly line, yes. In information work, rarely. Even in case management, where organisations attempt to define strict categories, context changes continuously. Standardising processes often removes precisely the nuance that creates value.

Output is measurable. Physical goods lend themselves to measurement. Understanding, judgement, and insight — what is required in information work — do not. What is easiest to measure becomes what is optimised, regardless of relevance.

Variation is waste. In manufacturing, reducing variation improves quality. In thinking, variation is the source of learning and discovery. Eliminating it eliminates adaptation — and ultimately, value.

Information emerges naturally. This is the most persistent assumption: that if people work efficiently, the information required for decisions will appear. As shown in earlier articles in this series, the inverse is true. Information must be designed and produced deliberately.

The Anthropic data provides an instructive example of this failure. When asked about AI’s role in their work, 91% of scientists expressed a desire for more AI assistance — yet in practice, they confined their use to peripheral tasks like literature review and writing. The core of their research — hypothesis generation, experimental design, sense-making — remained untouched. Not because they lacked tools, but because the information architecture required to trust AI with those tasks had not been designed. Availability of a tool does not produce the conditions for its meaningful use.

The material has changed; the management logic has not.


The performance of busyness

Productivity culture also reshapes individual behaviour — and not in ways that serve the organisation.

Research on workplace status perception has consistently shown that busyness functions as a social signal. People perceived as overloaded are assumed to be important, competent, and in demand. Overloaded calendars become symbols of value. This creates a reinforcing loop: productivity is rewarded with more work; capability is mistaken for capacity. Over time, work expands to fill all available space.

The Anthropic research surfaces a version of this at the individual level. Across professions, 69% of workers mentioned the social stigma that surrounds AI use at work. One fact-checker described simply staying silent when a colleague expressed hostility toward AI — not sharing their own process to avoid judgment. This is the performance of busyness in a new register: workers managing the appearance of how they work, rather than the substance of what they produce.

Everyone runs faster while the organisation does not move forward.

Most importantly, productivity culture individualises a systemic problem. When people struggle to keep up, the solution is framed as personal optimisation — better tools, better habits, more discipline. Structural design questions disappear.


What productivity obscures

When productivity becomes the dominant lens, critical questions fade from view.

Are we working on the right problems? Do people share the same understanding of what the information means? Are decisions improving over time? Is the organisation learning?

Productivity metrics do not answer these questions. They measure motion, not direction.

The Anthropic research makes this visible in microcosm. Workers across sectors reported high satisfaction with AI — but when asked to look forward, 55% expressed anxiety about AI’s impact on their future. They could articulate what AI was doing for them today. They could not articulate what the organisation was becoming. The measurement of output was available; the measurement of orientation was not.


A different frame

The alternative is not lower productivity. It is a different organising principle.

Productivity asks: How much output can we generate per unit of time? Effectiveness asks: Are we producing what matters, and does it work?

In information work, effectiveness takes precedence. And effectiveness requires different measures.

  • Clarity over speed. Shared understanding enables better decisions than rapid execution on misunderstood information.
  • Understanding over throughput. Fewer insights that change behaviour outperform volumes of unused reports.
  • Learning over output. Organisations that learn faster than their environment changes remain viable.
  • Coherence over activity. Coordinated action outperforms isolated efficiency.

This does not require abandoning measurement. It requires measuring outcomes rather than activity — understanding rather than throughput, decisions improved rather than artefacts produced.

The distinction matters practically. The Anthropic study found that 48% of professionals are already anticipating a shift in their roles toward managing and overseeing AI systems rather than performing direct work themselves. They are sensing, intuitively, that the value question is changing. What they lack is an organisational framework that makes that shift legible — and designable.


Productivity as a symptom

The fixation on productivity is best understood as a response to unmanaged complexity.

When environments become difficult to interpret, productivity offers the comfort of control. Even if meaning is missing, activity remains measurable. Busyness provides reassurance when understanding is absent.

But information work does not obey industrial rules. The raw material is information. The process is sense-making. The output is decisions and actions that enable value.

Organisations optimised for productivity will continue to accelerate while losing orientation. They will deploy AI to feel faster while becoming slower. They will celebrate busyness while coherence declines. The Anthropic data is a mirror of this dynamic at scale: high satisfaction, high frustration, low trust, and a double-digit gap between how people believe they are working and how they actually are.

The alternative is not working less.

It is working differently.

That begins by asking the questions productivity makes invisible:

  • What must this organisation understand?
  • How does information flow, degrade, or accumulate?
  • Is the organisation becoming more capable — or merely more active?

The next organisational revolution will not be driven by productivity.

It will be driven by design.


Next in the series: Why AI Initiatives Fail