Part 1 of 2: When “Efficiency” isn’t Efficient
There is an epidemic and I think we can all feel it. AI is accelerating the pressure to prove efficiency, and for some orgs, that pressure is getting close to a breaking point. While the use of AI in personal and professional lives is increasing, many organizations have invested in apps and programs that claim to improve productivity, efficiency, and quality…and now they need something to show for it.
Quality gains have been uneven, and without oversight, quality can drop fast. We see the receipts in what a large portion of the population not-so-lovingly calls “AI slop”. This is happening on social media, in education, and in corporate settings, and it is usually a byproduct of people offloading far too much to the AI and not retaining enough human oversight. Most AI productivity is treated simply as “more tasks completed”. And while this can be true in output-driven work, in modern commercial teams and knowledge work it usually means that you’ve simply created more noise to filter through in order to generate outcomes.
For the purposes of this post, I want to talk about “efficiency”. There are obvious benefits to efficiency gains that we have achieved to this point, and that we will achieve in the future. But a big problem we have is agreeing on the definition of efficiency.
The cleanest definition I have found is that if our output stays flat but our input increases, we lose efficiency. Put another way, we should be able to increase or sustain output with the same or less input.
Unfortunately, that definition doesn’t always translate to the corporate world, and I’ve seen it countless times over the last 20 years. Leadership will push for efficiency (like they push for productivity), but what they’re really pushing is utilization and proof-of-work. This, unfortunately, ends up as overwork and is unsustainable.
What efficiency should legitimately mean:
Lower cost per unit (same outcome)
Faster end-to-end cycle time (same quality)
Less rework / fewer defects
What gets dangerously mislabeled as corporate efficiency:
Higher utilization and less slack where everyone is “booked solid”
Fewer people on a team in an attempt to prove work
There is a point where perceived efficiency gains become detrimental to the overall outcome. Most of the “danger zone” isn’t really about efficiency but is more about mistaking utilization and austerity for efficiency.
This happens because we are making efficiency the goal, and, as I’ve talked about before, corporate will optimize for the goal, and not the outcome. For instance, if you set the goal as more tasks, fewer wasted minutes, or more reports, then you optimize for activity and simply get busy. The key is to focus on outcomes by optimizing for things like flow, quality, resilience, learning, and capacity.
The most common way that efficiency becomes harmful is when you’ve spent an excess of time and energy optimizing one part of the system so much that it causes stress on the rest of the system. There are many ways that this creates problems, but we’ll cover three of them here.
1. When utilization gets too high, your waiting time increases…a lot
Let’s assume the system you are building or running has variability. Don’t want to assume that? Too bad, because nearly every system will have variability. In commercial teams, variability is seen in things like demand spikes, escalations, approvals, and context switching.
In these systems, queues of some kind will inevitably form, and as utilization increases, delays can spike quickly. You might look efficient and even productive on paper because everyone is busy, but customers and employees feel the opposite because everything starts to take longer.
I’ve seen many leaders aim for “maximum efficiency” by keeping everyone at 95% capacity, and they end up building a massive queue. Response times get worse, cycle times stretch, handoffs get delayed, and small spikes turn into major disruptions because the system has become brittle.
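The queueing math behind this is well established. As a rough sketch (my illustration, not from the post, assuming a simple M/M/1 queue with a service rate of one request per hour), the standard waiting-time formula shows how delay explodes as utilization approaches 100%:

```python
# Illustrative sketch: average queue wait in an M/M/1 system.
# Wq = rho / (mu * (1 - rho)), where rho is utilization and mu is
# the service rate. The service rate of 1/hour is an assumption.

def avg_wait_hours(utilization: float, service_rate_per_hour: float = 1.0) -> float:
    """Average time a request spends waiting in queue (M/M/1 model)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (service_rate_per_hour * (1 - utilization))

for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{u:.0%} utilization -> {avg_wait_hours(u):6.1f} hours of waiting")
```

At 50% utilization a request waits about one hour; at 95% it waits roughly nineteen. The last few points of utilization buy almost nothing and cost nearly everything.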
Let’s think of an always-on enablement desk as an example.
Your team supports the sales org with content tweaks, deck reviews, onboarding asks, and certification coaching. Leadership wants faster turnaround, all-hands-on-deck, and high utilization, so every enablement partner is booked with team calls, updates, housekeeping, program building and maintenance, and other tasks. It all looks very efficient and productive on paper.
However, in real life, sellers are waiting longer for responses, managers are waiting longer for coaching input and data, “quick questions” are now escalations, and deadlines end up causing rushed work. This increases rework, which increases load, which slows down response.
You see the problem?
The paper efficiency of the super-productive team created a self-reinforcing loop of issues down the line. And most of the time you don’t know you have an issue until it is suddenly a massive issue.
2. When you remove slack, you remove resilience
A lot of leaders think that removing slack will remove waste. They argue that the only reason there is slack is because of waste through inefficient use of time and work. “In order to be optimized, you need to be near capacity”, is the cry heard from every productivity guru trying to sell you a fantasy.
Sure, there can be unused time that is wasted, but they fail to realize that there is a necessary amount of slack in order to have a buffer that absorbs the inevitable issues that will come up.
Modern research on organizational resilience frames slack resources as a contributor to resilience and as important in absorbing shocks and adapting to them (1). Another study in International Journal of Production Economics describes how “unabsorbed slack” can be mobilized into redundancies that cushion operations against disruptions (2). Of course, this depends a lot on how well the organization pays attention to what’s going on, and how it chooses to react.
Slack, or “unused time,” isn’t automatically wasted or inefficient; it’s often what keeps you from breaking when things go sideways.
This trap is very visible in healthcare because the costs are immediate. A 2025 editorial from JAMA Network Open cites projections that national hospital occupancy could exceed 85% and calls it a critical threshold where basic hospital operations can become “dysfunctional and even unsafe” (3).
But why would 85% occupancy lead to dysfunction? At this higher occupancy, the system has no buffer, so even “normal variability” can tip it into a crisis quickly.
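To make that threshold effect concrete, here is a toy simulation of my own (not from the cited editorial): a service with fixed daily capacity and normally distributed demand, compared at different average loads. The capacity, variability, and time horizon are all assumptions:

```python
# Hedged sketch: fixed-capacity service under variable daily demand.
# Unserved demand rolls over as backlog. All parameters are assumptions.

import random

def simulate_backlog(avg_load: float, capacity: float = 100.0,
                     days: int = 1000, seed: int = 42) -> float:
    """Return the mean end-of-day backlog over the simulated horizon."""
    rng = random.Random(seed)
    backlog = 0.0
    total = 0.0
    for _ in range(days):
        # Demand varies day to day around the target load.
        demand = rng.gauss(mu=avg_load * capacity, sigma=0.15 * capacity)
        backlog = max(0.0, backlog + demand - capacity)
        total += backlog
    return total / days

for load in (0.70, 0.85, 0.95):
    print(f"avg load {load:.0%} -> mean backlog {simulate_backlog(load):.1f} units")
```

With identical day-to-day variability, the 70%-loaded system drains spikes almost immediately, while the heavily loaded system accumulates a backlog that drains slowly, if at all. The average load never exceeds capacity in any scenario; only the buffer changes.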
We can translate this “hospital occupancy” into a number of commercial systems:
· support tickets
· legal review
· onboarding ramp and bandwidth
· enablement requests
· deal desk approvals
· customer escalations
The problem isn’t system specific. Any time you optimize away a buffer, you create fragility.
3. When “efficiency” drives quality down through measurement, documentation, and defensive work
This is the one that most knowledge teams don’t see coming until it is already happening. In most customer-facing and commercial roles, this “efficiency” gets operationalized as something to the effect of “every minute is documented and accounted for”. Time tracking, activity logging, internal updates, and status checks all to prove what you did, show utilization, and justify your schedule. This is also a common trap in productivity, but we will focus this section on how it is weaponized in the name of efficiency.
These systems often count tasks completed but fail to consider overall task quality until it has dropped below healthy thresholds.
But, most commercial and knowledge work is not just about completing a task. It’s about making the right judgments, catching nuance, preventing rework, aligning stakeholders, and communicating clearly enough that the downstream teams don’t mis-execute or damage the customer experience. But the more you push “efficiency” (and activity as productivity), the more you create a predictable, ugly shift. People start optimizing for what’s measurable, instead of what’s valuable, and begin to perform defensive work like documenting, over-explaining, and pre-justifying. Work starts getting broken into smaller fragments to be “trackable” and creates more need for context switching, which causes quality to suffer. Trying to squeeze productivity and efficiency by forcing constant proof-of-work increases overhead, decreases quality, and creates rework issues down the line.
A lot of your teams feel this in their bones, and it’s one reason they push back. That resistance is backed up by research on electronic and performance monitoring, which shows consistent downsides. Meta-analytic evidence finds electronic monitoring is associated with higher stress and lower job satisfaction and doesn’t reliably improve performance, especially when paired with targets and feedback pressure (4).
And if you want to talk about what’s happening in modern knowledge work before we even add the “document every minute” layer, Microsoft’s analysis of digital work patterns describes employees being interrupted frequently during core hours, making deep work and quality thinking harder to sustain (5).
We are keeping our teams from producing the very outcomes we demand by requiring them to become over-efficient and over-productive. Of course, they end up being neither, because the mechanisms are completely wrong when leadership implements and rewards performative solutions instead of real results. The already fragmented day is nearly shot.
The Takeaway
Efficiency and productivity are very good, but there is a line where organizations stop pursuing true efficiency (less waste, less rework, faster flow), and are instead pursuing efficiency proxies like utilization, elimination of slack, and proof-of-work…which hurt outcomes.
In Part 2, I’ll tie this directly to the productivity lens I use in From Busy to Better: why “more output” is not the same as “better outcomes,” how AI makes that distinction unavoidable, and how commercial teams can build a system that’s actually sustainable.
Resources:

