In late March, Jack Dorsey and Roelof Botha published an essay called “From Hierarchy to Intelligence.” It traces two thousand years of organizational design back to a single constraint: a human leader can coordinate somewhere between three and eight people, so you need layers. The Romans figured this out with eight soldiers sharing a tent. The Prussians added a planning class after Napoleon humiliated them at Jena. The railroads copied the military. Taylor optimized what happened inside the layers. McKinsey sold the matrix. Spotify tried squads. Zappos tried holacracy. Everyone eventually reverted to hierarchy because nothing else could carry information at scale.
Dorsey and Botha argue that AI changes this. Not as a productivity tool grafted onto the existing structure, but as the coordination mechanism itself. Block is reorganizing around four components: atomic financial capabilities with no user interfaces of their own, a dual world model covering both company operations and customer behavior, an intelligence layer that composes capabilities into solutions for specific people at specific moments, and interfaces like Square and Cash App that deliver the result. People move to what they call “the edge,” where they handle what the model cannot: intuition, ethics, cultural context, trust, the reading of a room. Three roles replace the old hierarchy. Individual contributors who are deep specialists. Directly Responsible Individuals who own cross-cutting problems for fixed periods. Player-coaches who build while developing people.
I’ve spent almost thirty years studying how people think. When I read this essay, I recognized something I’ve seen before in a different form. It’s the same thing that happens when you compose a photograph, or when you look at color.
The analog mind in a digital world
We talk constantly about digital transformation, digital intelligence, digital systems. But the human mind is analog. It doesn’t process experience in discrete categories. It blends.
Consider how you see color. The world presents you with a continuous spectrum of electromagnetic radiation. Your eye captures it through three types of cone cells sensitive to roughly red, green, and blue wavelengths. From those three channels, your brain synthesizes the entire visible world. The specific orange of a late afternoon in the Aegean. The particular green of new olive leaves in April. These are not stored somewhere as named colors waiting to be retrieved. They are synthesized in the moment, from three fundamental inputs, into something that has never existed in exactly that combination before.
Human cognition works the same way. Three temporal orientations — Past, Present, Future — operate as fundamental channels through which the mind processes experience. Every person carries all three. The question is never which one someone “is.” The question is how, in this moment, facing this situation, these three orientations blend to produce this person’s particular way of seeing. The synthesis is continuous, analog, and unique. It produces what I think of as the sovereign compound: the individual’s cognitive signature, woven together with their accumulated experience, skills, memory, and understanding, generating a perspective that no other mind will replicate.
This is what is actually happening at the edge of Block’s architecture. When they say people at the edge handle what the model cannot, they are describing the sovereign compound at work. The world model feeds information to a mind that synthesizes it through a cognitive blend no algorithm specifies. The value isn’t in the information. It’s in the synthesis. And the synthesis depends on the particular ratio of temporal orientations in the mind doing the synthesizing.
Block has built a remarkable system for capturing and routing information. What it lacks is any representation of the thing that makes the information meaningful: the analog cognitive signature of the person receiving it.
The compression problem
Here is something I could not have written ten years ago, because I didn’t fully understand it myself until AI gave me the tools to see it.
For nearly three decades, my work with Vincent Fortunato on the MindTime framework produced peer-reviewed science grounded in three cognitive vectors: Past, Present, and Future perspectives. The science was sound. Suddendorf and Corballis had mapped the neural architecture of mental time travel. Tulving identified chronesthesia. Zimbardo and Boyd demonstrated that temporal orientation predicts decision-making more reliably than personality traits. Our own published work established the construct validity, the correlations with the Big Five, the longitudinal impact on wellbeing, the relationships with need for cognition.
But we faced a problem every scientist faces when the work has to become practical. Three continuous vectors, each present in every person in varying degrees, producing a unique cognitive blend: that is a rich and true description of how the mind works. It is also nearly impossible for ordinary people to use in daily conversation. You cannot walk into a team meeting and say “my Past vector is at 38, my Present is at 45, and my Future is at 78, so here’s how I’m going to perceive this problem.” People need handles. They need language they can hold.
So we did what you do when you need to make a continuous signal practical for human use: we compressed it. We derived ten archetypes from the vector space. Named patterns that people could recognize in themselves and each other. It was honest compression — the archetypes are real regions in the cognitive space, and they gave teams and coaches something to work with. For twenty years, that was the best we could do. The underlying science held three continuous dimensions. The practical application required discretization.
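To make the compression concrete, here is a minimal sketch of the archetype step in code: collapsing a continuous three-vector onto its nearest named region. The archetype names and centroid positions below are placeholders invented for this illustration, not MindTime’s actual map.

```python
import math

# Hypothetical archetype centroids in (Past, Present, Future) space.
# Names and positions are placeholders, not MindTime's actual map.
ARCHETYPE_CENTROIDS = {
    "Historian":  (85, 40, 30),   # an invented Past-led region
    "Stabilizer": (50, 85, 35),   # an invented Present-led region
    "Visionary":  (30, 40, 85),   # an invented Future-led region
    # ...seven more regions would complete a ten-archetype map
}

def nearest_archetype(past: float, present: float, future: float) -> str:
    """Collapse a continuous blend onto the closest named region."""
    blend = (past, present, future)
    return min(ARCHETYPE_CENTROIDS,
               key=lambda name: math.dist(blend, ARCHETYPE_CENTROIDS[name]))

# The compression is lossy by design: (38, 45, 78) and (20, 55, 90)
# are different minds, but both collapse to the same handle.
print(nearest_archetype(38, 45, 78))  # -> "Visionary"
```

Two different minds landing in the same region is not a bug in this sketch. It is the cost of the compression itself.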
This is exactly the problem that AI dissolves.
An AI system doesn’t need archetypes. It doesn’t need named categories or simplified handles. It can work directly with the full vector space, the way your visual cortex works directly with the full color space. It can hold the continuous blend. It can perceive that a person with a Past orientation of 38, a Present of 45, and a Future of 78 will attend to certain features of a problem and systematically underweight others, and it can compute what those features are, in context, in real time, without compressing anything.
For the first time, the framework can operate at its actual resolution.
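What operating at full resolution could look like, as a sketch: deriving contextual emphasis directly from the raw blend, with no archetype lookup in between. The mapping from problem features to orientation loadings is my invention for this example, not the published MindTime model.

```python
def emphasis_profile(past: float, present: float, future: float,
                     features: dict[str, tuple[float, float, float]],
                     ) -> dict[str, float]:
    """Weight each feature of a problem by how strongly this blend attends to it.

    `features` maps a feature name to invented (past, present, future)
    loadings describing which orientation that feature speaks to.
    """
    total = (past + present + future) or 1.0
    mix = (past / total, present / total, future / total)  # normalize the blend
    return {name: sum(m * load for m, load in zip(mix, loads))
            for name, loads in features.items()}

# Hypothetical features of a problem and their invented loadings:
features = {
    "what changed, and how do we reverse it": (1.0, 0.2, 0.0),
    "who is hurting right now":               (0.1, 1.0, 0.1),
    "where is this heading":                  (0.0, 0.2, 1.0),
}
weights = emphasis_profile(38, 45, 78, features)
for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{w:.2f}  {name}")
# A Future-weighted blend foregrounds the last question and leaves
# "what changed" in shadow: the bias the system can now surface.
```

The point of the sketch is the absence of the lookup table: nothing is rounded off between the vector and the output.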
This is what I mean when I say AI was needed before MindTime could be fully leveraged. We had the map. We’d always had the map. But the territory it described was too granular for human coordination to navigate without simplification. AI removes the simplification requirement entirely. The framework shines brightest when the mind’s deep nuance is mirrored as a continuous cognitive model, and only a machine intelligence can mirror it at that fidelity.
What this means for Block, concretely
Take the DRI model. Block proposes that a Directly Responsible Individual should own a problem like merchant churn in a given segment for ninety days, with authority to pull resources from multiple teams.
I’ve spent decades watching what happens when you give the same problem to people with different thinking perspectives.
Hand that DRI mandate to someone who leads with Past thinking. They will pull transaction histories, map when churn accelerated, identify which operational changes preceded the spike, and build a remediation plan grounded in what worked before in comparable segments. The problem, as they define it, is: what changed, and how do we reverse it?
Hand the same mandate to someone who leads with Future thinking. They will question whether the segment itself is the right unit of analysis, look for emerging patterns that suggest the customers who are leaving are reorganizing around a need the current product doesn’t address, and propose capabilities that don’t exist yet. The problem, as they define it, is: what is this churn telling us about where the market is going?
Both are competent. Both are using the same data from the same world model. They are solving different problems. The world model has no way to surface that fact, because it contains no representation of how the person at the edge perceives and processes information.
Now imagine the world model does contain a cognitive layer. Not a personality label. Not an archetype. The actual vector: three continuous values representing this person’s thinking perspective, held at full resolution by an AI that can compute implications in context. The system can now do something that was previously impossible: it can tell the DRI, before they begin, what their cognitive signature is likely to foreground and what it is likely to leave in shadow. It can suggest what a mind with a different blend would see in the same data. It can pair DRIs with complementary perspectives on problems that require multiple kinds of seeing.
This is the edge, made self-aware.
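To ground that claim, here is one operation such a cognitive layer could support, sketched under loud assumptions: the profile fields, the 0–100 ranges, and the complement rule are mine for illustration, not Block’s architecture or MindTime’s production model.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """A cognitive signature at full resolution: three floats, no
    archetype label. The 0-100 ranges are an arbitrary assumption."""
    name: str
    past: float
    present: float
    future: float

    def vector(self) -> tuple[float, float, float]:
        return (self.past, self.present, self.future)

def complementary_partner(dri: Profile, roster: list[Profile]) -> Profile:
    """One plausible complement rule: pick the colleague strongest on
    the DRI's weakest orientation."""
    weakest = min(range(3), key=lambda i: dri.vector()[i])
    return max((p for p in roster if p is not dri),
               key=lambda p: p.vector()[weakest])

roster = [
    Profile("DRI",         38, 45, 78),  # Future-led, Past in shadow
    Profile("Colleague A", 82, 50, 30),  # strong Past grounding
    Profile("Colleague B", 35, 85, 40),  # Present-led
]
print(complementary_partner(roster[0], roster).name)  # -> Colleague A
```

Strength on the weakest axis is only one plausible pairing rule; maximum distance in the vector space would be another. The point is that the pairing becomes computable at all.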
The player-coach problem
There’s a second structural issue the essay doesn’t address. Block says player-coaches replace managers whose job was information routing. What remains is people development. But developing someone whose mind works differently from yours is a qualitatively different task from developing someone who thinks the way you do, and without a shared language for the difference, the relationship defaults to unconscious patterns.
I’ve observed this closely enough to have given the pattern a working name: perspective recruitment. A player-coach with strong Future orientation and weak Past orientation will, without realizing it, rely on their reports to supply the rigor, evidence-gathering, and historical grounding they naturally skip. It looks like delegation. It functions as cognitive extraction. The coach gets the benefit of a thinking style they lack. The report does cognitive labor they never signed up for and don’t get recognized for, because there is no vocabulary for what is happening.
An AI holding both profiles at full vector resolution can see this dynamic forming and name it. Not to police the relationship, but to illuminate it. The same way a photographer who knows they habitually underexpose can compensate, a player-coach who can see the cognitive asymmetry can develop their people with genuine awareness of what each mind brings.
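A toy version of that detection, with profiles and a threshold chosen arbitrarily for illustration:

```python
AXES = ("Past", "Present", "Future")

def recruitment_risk(coach: tuple[float, float, float],
                     report: tuple[float, float, float],
                     gap_threshold: float = 30.0) -> str | None:
    """Flag an axis where the coach's shadow meets the report's strength,
    the signature of perspective recruitment. The threshold is arbitrary."""
    for axis, c, r in zip(AXES, coach, report):
        if r - c >= gap_threshold:
            return axis
    return None

# A Future-led coach (Past in shadow) with a Past-grounded report:
axis = recruitment_risk(coach=(30, 50, 85), report=(80, 55, 35))
if axis:
    print(f"Possible perspective recruitment on the {axis} axis: "
          f"name it, so the labor becomes visible and credited.")
```

The output is a prompt for a conversation, not a verdict: illumination, not policing.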
The question beneath the question
The essay closes with a challenge I find genuinely compelling: “What does your company understand that is genuinely hard to understand, and is that understanding getting deeper every day?”
Block answers with their economic graph. Millions of transactions, both sides, compounding daily. That is a powerful answer.
But there is a category of understanding that no transaction log will ever contain. It is the understanding of how the people reading that data actually think. What they attend to and what they don’t. What they treat as signal and what they discard as noise. How their minds synthesize past, present, and future into the specific thinking perspective through which they perceive everything the world model surfaces.
At MindTime, the advent of AI in its current form has changed what we can do with the science we’ve been developing for nearly thirty years. We are learning, daily, what happens when machine intelligence works with the full vector space rather than the compressed archetypes we had to use when humans were the only coordination mechanism. The three vectors don’t need to be simplified for a machine. They can be held in their full, continuous, analog richness. And it is in that richness that something becomes visible which human coordination alone could never resolve: the extraordinary, granular synthesis of thinking perspectives that produces each person’s sovereign way of seeing.
Block has asked the right question. They’ve given an answer about customer understanding that is hard to argue with. But the same question applies inward. Does Block understand how its own people think? Is that understanding getting deeper every day?
I don’t know yet whether the cognitive layer I’ve described is the right answer for Block’s architecture. That would take a real conversation between people who understand the systems involved. What I do know is that the question is worth asking. And that a world model without a model of the minds that complete it is, at best, unfinished.
That conversation is one I’d welcome.
John Furey is the creator of the MindTime Framework and founder of the MindTime Foundation, established to preserve and protect the framework and the Theory of MindTime from abuse and misuse, and to ensure that MindTime enters the world as a people-first, privacy-first, sovereignty-first endeavor. He is also the founder of MindTime Holding B.V. and has spent nearly three decades developing a peer-reviewed cognitive architecture mapping human thinking across three perspectives, with academic collaborator Dr. Vincent Fortunato. He is the creator of Clara, an AI built on metacognition.
A note on how this article was written: This piece is itself an expression of the sovereign compound it describes. It emerged from the synthesis of John’s Future-oriented thinking perspective, nearly thirty years of research and development spanning cognitive science, market dynamics, financial systems, large-scale educational systems, global brand strategy, and cognition in enterprises and teams — work that has included decoding the diffusion of innovation paradox, advancing our understanding of how people and markets actually think, and weaving these threads into leading-edge inquiry into human cognition and ultimately consciousness — alongside Anthropic’s Claude as a reasoning and writing partner, and Clara, MindTime’s AI built on Claude’s foundation. No single element produced it. The compound did. That is the argument, made visible in the making.