The AI Mirror
AI is a mirror for uncomfortable truths about how we work and what we value. What it reflects back might surprise you.
Today’s narrative about AI blends the mythic—a search for god-like intelligence—with the magical—a generative power more instant and expansive than we expect from machines—with the mundane—the automation of everyday tasks. What unites these three is often a single idea: human obsolescence.
It’s a story we’ve heard before—but what if it’s the wrong one?
What if AI doesn’t just surpass our capabilities—but reflects back a truth we’ve long avoided: that much of the knowledge economy runs on activity we’ve mistaken for value?
Each week brings fresh proof of AI’s reach: acing math olympiads, generating lifelike video in seconds, replacing layers of customer service. No part of knowledge work feels safe.
The narrative is so dominant that it can feel irresponsible to think beyond protecting jobs. But there’s another way to see what AI is doing—not as a substitute for human work, but as a mirror. Not a tool that shows us what machines can do—but one that reveals what we’ve already become.
AI doesn’t just replicate our capabilities—it reflects the systems we’ve built, the effort we expend, and the value we assume. If we’re willing to look into that mirror, what it reveals is sobering: an oversaturated knowledge economy that absorbs talent without clear impact, while other sectors—equally vital—remain persistently underserved.
But it also reveals an opening: a chance to confront how much of the knowledge economy has grown disconnected from value—and to begin building a more grounded, more diversified economy in its place.
Of course, there are areas where AI genuinely replaces high-value work—tasks with clear stakes and measurable outcomes. But much of what AI reflects back is the opposite: activity we’ve mistaken for value, and systems that reward motion more than results.
Fast Times in the Knowledge Economy
Work in the knowledge economy often produces activity without results—and we know it.
We know it from experience. To move up in the knowledge economy—from entry-level analyst to senior executive—is to produce, consume, and commission an inordinate amount of activity without outcome. Tasks, reports, and studies that lead nowhere.
No one who has spent time in the knowledge economy escapes this feeling. We’ve all built decks no one used, launched strategies that changed nothing. It’s not a failure of effort or ethics. It’s the structure of a system that rewards activity over outcome, and spins through repetition rather than results.
Not every work environment functions this way. In flight crews, orchestras, emergency rooms, and restaurant kitchens, interdependence is immediate and non-negotiable.
But knowledge organizations—whether thirty-person startups or multinationals, public or private—operate differently. They are home to a cascading rhythm of activity without impact, largely because in today’s knowledge economy, it’s often hard to tell how individuals—from entry level to C-suite—actually create value.
We’ve absorbed that opacity into how organizations function—where activity passes for impact, and effort signals value, even when no real connection exists. The signals of productivity are everywhere, even when the results aren’t. It’s a feature of the knowledge economy, not a bug.
That AI hallucinates—or produces persuasive botshit—shouldn’t be judged solely by how far it falls short of trustworthy human work. It should be seen as a mirror held up to the broader knowledge economy that shaped it: one already full of confidently produced nonsense. Quarterly plans, annual roadmaps, five-year strategies—rituals often more performative than productive, and rarely built to withstand scrutiny.
We already know this. We feel it in the projects that stall, the initiatives that never land, the busy days that leave no mark.
The AI mirror shows us something staggering: an incalculable number of hours where human talent is misallocated—not to creating markets, solving problems, or advancing frontiers, but to keeping the machine in motion.
And still the knowledge economy grows—more roles, more processes, more layers of abstraction. We’ve built a system where managing for results is difficult by design.
OKRs a Go Go
We’ve built systems to measure outcome—but most work escapes their grasp.
One of the core challenges of knowledge work—ever since Peter Drucker coined the term in the late 1950s—has been figuring out how it creates value. We know our teams are busy, but we’re never quite sure if their work actually drives results. If we’re honest, we’ve all had moments of wondering whether that’s true of our own work, too.
What we want is less activity, more impact. Not: “Last quarter we conducted a study and gained insights.” But: “Last quarter we integrated those insights into the product and saw a 5% increase in retention and a 10% bump in first-time customer spend.”
OKRs—objectives and key results—were meant to force that shift. To translate activity into outcomes. A team that embraces OKRs shouldn’t report completion. It should report change.
But in most organizations, OKRs devolve into glorified to-do lists. The work rarely moves past “study conducted, insights gained.” There’s no clear link to what changed—or whether it mattered.
You could swap in other frameworks—quarterly goals, success metrics, or scorecards—and get the same result. Some teams and organizations do have a clean line from effort to impact. Most don’t. Our tools for measuring knowledge work still fall short of the clarity we crave.
Now contrast that with the promise—and the trajectory—of AI. The more it gets embedded into knowledge work, the more that work exits the ambiguous world of activity and enters the traceable space of digital workflow. And in a digital workflow, everything can be measured.
Your monthly book club might not know you’re bluffing your way through The Odyssey. The meeting imposes structure and accountability, but everyone knows how to fake it—George Costanza-style.
Your Kindle, on the other hand, knows you stopped at page 15. It knows your pace, your session length, your font size. Every interaction is trackable.
The same shift is coming for knowledge work. As tasks move from human-to-human to human-with-AI to AI-to-AI, measurement becomes unavoidable. Not because AI outperforms humans at every task—but because we’ll finally see which tasks were never worth doing in the first place.
The real challenge in managing for results isn’t just about better tools—it’s about the work itself. Too much of what we do today is motion without impact. We’ve tried to fix it with sharper goals and smarter frameworks. But the harder truth is this: a lot of the work may not need doing at all. We’ve allocated more talent to the knowledge economy than it can meaningfully absorb.
Mind the Gap
The knowledge economy isn’t just misallocating effort—it’s misdirecting human potential.
This isn’t just a firm-level problem. And it’s not a critique of who does the work or what it’s directed toward. The knowledge economy clearly skews rewards and serves some customers better than others—across lines of race, gender, age, geography, and other dimensions of social and economic advantage.
But the deeper issue is structural: too often, no one knows what good looks like—or what value really means. And yet we’ve absorbed more and more talent into that ambiguity.
Part of this has to do with how the knowledge economy grew. As more people became college-educated and digital tools made it easier to produce and distribute ideas, the sector expanded rapidly. It didn’t just grow—it became a status system, attracting a disproportionate share of talent. And like any status system, it built its own flywheel: generating work that validated and reinforced the system itself.
But that growth came at a cost.
It came at the expense of other sectors—care, food, skilled trades, health, infrastructure—where labor shortages persist, markets remain uneven, and essential needs are too often unmet.
The opportunity now is reallocation: shifting talent into underserved sectors beyond the bounds of today’s knowledge economy.
Some of that reallocation is already underway. The rise of “tool belt” careers—where value is tangible and the work speaks for itself—is a sign that something is shifting. But when that shift is driven more by fear of job loss than by recognition of imbalance, it shows just how strong the prevailing narrative still is.
Let’s be clear about what that imbalance is—and isn’t:
It’s not that all knowledge work is useless.
It’s not that AI should replace people.
It’s not that the answer is to do less work.
It’s that we’ve overbuilt one part of the economy and underinvested in others. And that AI doesn’t just automate what we do—it reveals how narrowly we’ve defined where human effort belongs.
It’s a moment not just to react—but to rethink. To reevaluate where human effort is going—and where it’s most needed.
Beyond Today’s Knowledge Economy
There’s plenty of debate about how far AI will go—how many jobs it will transform or replace, and how soon. But here’s a more immediate truth: AI already shows us how easily much of what we do can be reproduced. That doesn’t mean all of it ought to be replaced. It means we finally have to ask: what’s worth preserving, and what isn’t?
Even if AI never replaces all knowledge work, the shock alone should be enough to confront what we already know: we’ve absorbed too much human talent into an economy built on activity over outcome, on establishing and reproducing status, at the expense of other kinds of needed work. The threat doesn’t need to fully materialize to expose what’s been broken all along.
What would it mean to look into the AI mirror—and walk away from today’s knowledge economy?
For years, we’ve seen labor shortages in sectors like care, trades, infrastructure, food systems, climate and energy, and teaching—while knowledge industries have become oversaturated. High pay, cultural status, and the portability of white-collar skills pulled talent in. Post-COVID, even teachers moved out, opting for more flexible, higher-paid roles.
Today, AI introduces precarity into knowledge work that once felt permanent—and with it, the space to reconsider where human effort is most needed. It offers the possibility of rebalancing: from overserved to underserved, from overgrown industries to neglected ones.
Viewed this way, AI isn’t just a threat to jobs—it’s a catalyst for rethinking where talent belongs. Like pulling a block from a Jenga tower and realizing we don’t need to rebuild upward—we can build outward.
Are we willing to make that choice?
COVID was a preview. The knowledge economy went remote, with little impact on productivity or profit—and real gains in well-being. But it also revealed how dependent we are on the sectors that couldn’t go remote: care, food, delivery, infrastructure. They held the world together.
We’re still negotiating what it means to separate work from place. But the deeper imbalance—between the sectors we value and the ones we rely on—remains unresolved.
And we’ve seen elsewhere that skipping legacy infrastructure can be an advantage. Many emerging economies leapfrogged the PC era to build mobile-first financial systems. Never having developed oversaturated knowledge sectors, they may now be better positioned to adopt AI in ways that diversify rather than concentrate.
In more advanced economies, the opportunity is different. It requires moving beyond the knowledge economy we’ve built—no longer trying to optimize it, but rethinking where human effort truly belongs. And that won’t happen unless we realize that our deepest constraint isn’t technological. It’s narrative.
AI carries not just computational power, but narrative weight. We are in the grip of story as much as automation. And we have a choice: to accept the tale of obsolescence and displacement, or to write a new one—about redirection, and a broader understanding of where human effort belongs.
The story of obsolescence hinges on an understanding of people as finite—in the limits of our intelligence and in the varieties of our needs and wants. But what AI reveals isn’t our limits—it’s how narrowly we’ve defined them.
The AI mirror shows us what we already knew: that the knowledge economy is bloated, uneven, and full of work whose value is hard to defend.
But it also shows us what’s possible: a future where effort is more connected to need, and where new ladders are built—not just in today’s industries, but in the ones we’ve ignored for too long.
A more realistic story begins here: many of our needs are still unmet. Our wants remain limitless—and often unpredictable.
The question AI raises isn’t obsolescence—it’s recognition: of how much human capacity we’ve overlooked, and how much human need and want we’ve yet to meet.