6 Comments
Steve Pereira

What if the constraint was never execution but coordination? Large firms don’t slow down because they get worse at execution. They slow down because they get worse at coordination.

This seems backwards to me:

"As AI systems make execution and coordination programmable, the firm’s role evolves. It no longer exists primarily to perform work. It exists to orchestrate the doing of work, routing objectives across a mesh of agents, humans, and hybrid systems."

The firm has to exist primarily to perform work - work is what creates value, and novelty and human strengths are what create that value for the foreseeable future. That may be augmented by AI, but this 'orchestration' is the low-hanging fruit: coordination the firm used to do manually and painfully, and which is about to be disrupted.

Work will follow eventually, sure, but coordination is the starting point. Coordination is just waste, and it's far easier to outsource to agents.

Consider the internet, the original large-scale mechanism for coordinating work. It works because coordination is automated and contract-driven. The work is still done by humans, but the internet takes care of the messy coordination. I see organizations becoming more like the internet over time, allowing contributors to focus more on what they want to execute rather than outsourcing execution.

Steve Pereira

More detail on the ‘network for flow’ concepts here: https://www.linkedin.com/pulse/wiring-flow-steve-pereira-5ntnc/

One really interesting concept to borrow, which I think you allude to, is the control plane. In networking, that piece is essentially the backbone of supervision and the enabler of the management plane. At Tesla, that would be 'Digital Self Management' and their swarming techniques; at Amazon, it's basically the Team APIs and 6-Pagers. They collapse all the variation on both sides of a network so it passes through a clearly defined interface, which means you can effectively manage all the complexity and coordination.
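To make the interface idea concrete, here is a minimal sketch of a control plane that routes objectives through narrow, Team-API-style contracts. The names, types, and routing logic are illustrative assumptions of mine, not anything from the article or from Amazon's or Tesla's actual tooling.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# A request and response shape shared by every team: callers only ever see
# this narrow contract, never a team's internal tools or processes.
@dataclass
class Request:
    objective: str
    constraints: Dict[str, str]

@dataclass
class Response:
    status: str
    artifact: str

# The "Team API": each team registers one handler behind the same interface.
TeamHandler = Callable[[Request], Response]

class ControlPlane:
    """Routes objectives to teams through their published interfaces,
    so coordination logic lives in one place instead of ad hoc handoffs."""
    def __init__(self) -> None:
        self._teams: Dict[str, TeamHandler] = {}

    def register(self, team: str, handler: TeamHandler) -> None:
        self._teams[team] = handler

    def route(self, team: str, request: Request) -> Response:
        if team not in self._teams:
            return Response(status="rejected", artifact="no such interface")
        return self._teams[team](request)

# Usage: the payments team's internals can change freely; callers only
# depend on the contract above.
plane = ControlPlane()
plane.register("payments", lambda r: Response("done", f"handled: {r.objective}"))
print(plane.route("payments", Request("reconcile Q3 invoices", {"due": "Friday"})))
```

The detail that matters is that callers depend only on the published contract, so a team's internal variation never leaks into coordination.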

Mark Settle

There's a colloquial term for the orchestration work that you're describing, Matan - it's 'agent wrangling'. The humans who survive in the agent-enabled workplace of the future will come to be known as agent wranglers.

Living Yield Editorial Team

You're dead on, Matan. Lots of great points to discuss in the next year.

John Zell

Matan-Paul, this is a great conversation about the deep shift we're seeing in how work is defined and organized. It feels like one of those rare turning points—kind of like when electricity transformed work from something done mostly on farms to something that powered entire industries in cities.

I agree that “As AI makes execution and coordination programmable,” the firm's role is fundamentally shifting. It no longer exists to do the work—it exists to orchestrate it, routing objectives across a dynamic mesh of human, artificial, and hybrid agents.

To expand the conversation, I’d add that in this new paradigm, the line between activation and execution isn't just technical—it's strategic, even ethical. Redefining that boundary becomes urgent, especially as personal data flows through increasingly opaque systems, making privacy harder to protect and control more difficult to trace.

In other words, as AI systems increasingly take on autonomous roles in executing tasks and coordinating processes, a critical question emerges: What does “activation” mean in a world where action and organization are automated?

This question matters because activation—once a simple human command like pressing "start"—now stands at the threshold between intention and automation. As we shift more control to intelligent systems, understanding the evolving role of activation becomes essential to ensuring that AI operates within human-aligned goals, adapts to changing priorities, and remains responsive to strategic intent.

Activation now involves initiating systems with specific goals, constraints, and values. It sets the strategic intent behind AI-driven operations, ensuring that execution aligns with larger objectives. This includes determining when to act, why, and under what conditions, making activation a kind of programmable, dynamic decision-making layer that gives a person, working as part of a team, the chance to make course corrections and test hypotheses at much shorter intervals.

Additionally, activation takes on the role of meta-control, capable of interrupting or redirecting AI behaviors in real time. As AI becomes capable of continuous operation, activation functions like a supervisory system that monitors for drift, prioritizes competing goals, and embeds ethical or operational boundaries. But that meta-control must be operated, monitored, and controlled by humans. In this model, activation becomes the point where purpose, alignment, and control are embedded—essentially turning humans (or higher-level systems) into designers and governors of intelligent orchestration rather than operators of discrete tasks.
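One rough way to picture that supervisory role in code is an activation object that carries the objective, constraints, and human-supplied guards able to interrupt execution when the observed state drifts. This is only a sketch under my own assumptions; the class names, guard mechanism, and example strings are hypothetical, not something the article specifies.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# An "activation" bundles intent with constraints, so execution is always
# launched against explicit goals and boundaries rather than an open-ended run.
@dataclass
class Activation:
    objective: str
    constraints: List[str]
    # Human-supplied checks that can interrupt or redirect the system mid-run.
    guards: List[Callable[[str], bool]] = field(default_factory=list)
    active: bool = True

    def check(self, observed_state: str) -> None:
        """Meta-control: if any guard flags drift from the stated intent,
        deactivate and hand control back to the human operator."""
        if any(not guard(observed_state) for guard in self.guards):
            self.active = False

def run_agent_step(activation: Activation, state: str) -> str:
    if not activation.active:
        return "halted: returned to human review"
    activation.check(state)
    return "continue" if activation.active else "halted: guard tripped"

# Usage: a person activates the system with a goal, a constraint,
# and a guard that trips if the observed state mentions out-of-scope writes.
act = Activation(
    objective="migrate customer records",
    constraints=["no writes to production before sign-off"],
    guards=[lambda state: "production write" not in state],
)
print(run_agent_step(act, "dry run completed"))           # continue
print(run_agent_step(act, "attempted production write"))  # halted: guard tripped
```

The design choice worth noting is that every halt path hands control back to a person, which is the "operated, monitored, and controlled by humans" requirement above.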

Samer Solh

Great to read this. Thanks!
