The Last Economy

By Suvro Ghosh

The most useful way to read The Last Economy is not as a prophecy, not as a forecast model, and certainly not as revealed truth, but as a bundle of hypotheses about what happens when intelligence stops behaving like a scarce human service and starts behaving like infrastructure.

That distinction matters because the book’s central move is large and rather impudent in the old grand-theory way. It argues that the economy we inherited was built around scarcity of labor, expertise, coordination, and information; that generative artificial intelligence now attacks those scarcities at the source; and that once intelligence becomes abundant, the moral language, price signals, labor markets, and institutional dashboards of industrial capitalism stop describing reality and begin hallucinating it.

This is the intelligence inversion.

In Mostaque’s telling, history has already seen several foundational inversions. Land once dominated. Then labor. Then capital. Now, he argues, intelligence itself inverts from scarce to abundant, and that is not merely another productivity wave but a civilizational phase transition. The practical claim is blunt enough to rattle the windows: if human cognition can be copied, distributed, and recombined at machine scale, then the old bond between human effort and economic value snaps. A mind ceases to be a premium input and becomes, in many domains, a near-zero-marginal-cost utility.

It is a strong hypothesis. Strong hypotheses are useful precisely because they are dangerous. They force you to ask what would follow if they were even half right.

From there the book builds outward like an engineering schematic masquerading as political economy. Its second major hypothesis is that most orthodox economics is now running on expired assumptions. The “seven fatal lies” chapter is really a demolition list: scarcity is fundamental, human labor has durable value, growth requires ever more material throughput, markets reliably find equilibrium, money measures value, humans are rational optimizers, and distribution more or less tracks contribution. One need not accept the author’s rhetoric to see the target. He is saying the old model mistakes temporary industrial conditions for permanent laws of nature.

There is a real argument hiding beneath the theatrical phrasing. Digital goods already made nonsense of several industrial intuitions. A software artifact can be replicated at negligible marginal cost. A search engine, map, translation system, or large language model can serve millions at once. Value creation migrates from making another unit to controlling the network, the interface, the training loop, the data exhaust, the distribution channel, the defaults. The book’s wager is that generative AI extends this pattern from media and software into cognition itself.

Then comes the dashboard problem, one of the book’s better instincts. If Gross Domestic Product, or GDP, records paid transaction volume rather than actual human flourishing, then a world in which intelligence services become cheap or free will look, to legacy metrics, like partial collapse. Encyclopedia sales vanish. Translation costs crater. Design iterations that once employed small armies become trivial. The user becomes more capable while the accountant sees less billable activity. That is not a small measurement error. It is a category mistake. More data does not mean more truth. Sometimes it means the instrument is measuring the wrong animal.

Another hypothesis sits underneath this one: civilization should be measured not only by money flows but by broader stocks of capability and resilience. The book packages that under a civilizational dashboard, a claim that financial throughput is an impoverished proxy for social health. On that point the book is less a prophecy than a complaint, and a fair one. Modern systems are full of quantities that are easy to count and easy to mistake for meaning.

Then the argument becomes more architectural. The economy, Mostaque says, is not best understood as a pile of transactions but as a generative system that produces order from disorder. Here he leans heavily on analogies to machine learning, diffusion models, reinforcement learning, and network design. Some readers will find this exhilarating. Others will find it a bit too pleased with its own metaphors. Both reactions are justified. The analogy is fruitful when it clarifies structure. It becomes slippery when analogy starts dressing up as proof.

Still, several hypotheses here are worth isolating.

One is that network topology is destiny more than price theory admits. A platform economy does not behave like a village bazaar. It behaves like a gravity well. If the value of a network rises as more users, data, tools, and developers pile into it, equilibrium becomes a quaint Victorian dream. Winner-take-most is not an accident. It is an architectural property. Another is that firms and markets split into two engines: execution and exploration, inference and training, routine exploitation and adaptive search. That is a clever way of describing modern organizational life. The stable machine does one thing. The outer ecosystem probes for what the next machine should be.

The book’s darker hypothesis is the one it calls the alignment economy. Once machine agents can plan, negotiate, design, optimize, and coordinate at superhuman scale, the key question is no longer whether machines can produce value. They plainly can. The question becomes who sets objectives, who owns the agents, who audits the feedback loops, who captures the rents, and who absorbs the errors. In plainer language: not “Can the machine think?” but “Who commands it, and for whose benefit?”

This is where the book stops being techno-poetry and becomes politically serious.

Because if intelligence becomes abundant but access to that abundance is gated by a few firms or states, then abundance does not emancipate. It stratifies. The book’s three futures follow naturally from that premise. Digital feudalism is the default path: a handful of platforms own the models, the interfaces, the identity layers, the data, the payment rails, and the customer relationship. Great fragmentation is the geopolitical path: the internet splinters into rival sovereign stacks and AI becomes a cold-war asset. Human symbiosis is the hopeful path: universal access to powerful agents, new institutions for distribution, and governance that treats machine capability as a public substrate rather than a private castle wall.

Notice what kind of statements these are. They are not predictions in the strict sense. They are attractor states. Scenarios. Stability basins. They are useful because they tell you where the system may settle, not because they tell you what Thursday in October will look like.

And that is where skepticism must enter.

Only the future can really tell what the future will be. This sounds obvious, almost childish, until one notices how often intelligent adults forget it. What futurists usually do is extend one visible line, ignore two hidden ones, and smuggle in a human psychology that belongs to their own decade. They are less like prophets than like cartographers drawing coastlines from a ship in fog.

History is full of these magnificent misfires.

In 1929 Irving Fisher declared that stock prices had reached what looked like a “permanently high plateau.” That phrase now sits in the museum of famous last words, not because Fisher was stupid, but because he mistook a local regime for a durable law. He saw momentum and named it destiny.

In 1954 Lewis Strauss spoke of electricity becoming “too cheap to meter.” It was not a foolish dream in a laboratory age drunk on atomic promise. It was simply a reminder that technical possibility, industrial rollout, regulation, capital cost, public fear, infrastructure complexity, and political economy do not move at the same speed. Physics may open a door. Permitting, insurance, and institutions often nail it half shut.

In 1975 the paperless office was imagined as near at hand. Instead we got the more irritating hybrid that any office worker could have predicted if anyone had asked them: endless screens plus endless printouts, because legal systems, signatures, habits, audits, and human comfort lagged the device brochure by decades.

In 1995 Clifford Stoll mocked the notion that the web would transform commerce, newspapers, and public life. He was spectacularly wrong on the broad direction. Yet even there the mockery aged in a crooked way. He missed online trade, online reservations, and networked transactions, but he was not entirely wrong that the internet would become a swamp of unedited data, manipulation, distraction, and commercial intrusion. A forecast can fail on the headline and still accidentally catch the pathology.

Then there are the futures that arrived on time in magazines and nowhere else. Mid-century space optimism had humans progressing from the Moon landing to lunar bases and colonies in a few brisk steps. Plans in the 1960s imagined scientific stations by the mid-1970s and colonies not long after. The Moon, as ever, remained where it was, while budgets, politics, risk tolerance, and public attention behaved like the true launch vehicle. Technology did not fail on paper. Society failed to remain the society the forecast required.

That is the dirty little secret of prediction. Most forecasts do not fail because the gadget was impossible. They fail because they silently assumed stable institutions, stable incentives, stable geopolitics, stable energy prices, stable public tolerance, stable law, stable culture, and stable human desire. In short, they assumed away history.

The same caution applies to The Last Economy.

Its strongest hypothesis is not that artificial intelligence will certainly erase the meaning of labor by a date certain. That is too crisp, too cinematic, too dependent on deployment rates, regulation, energy costs, compute concentration, model plateaus, security failures, public backlash, and the stubborn fact that institutions are slower than demos. Its strongest hypothesis is narrower and more persuasive: the price of many forms of cognition is falling so fast that the old relationship between skill scarcity and compensation is already under strain; the institutions built around work, merit, and distribution are badly adapted to that change; and whoever controls AI mediation layers will likely control an alarming share of future value capture.

That is not prophecy. That is present-tense diagnosis with a future attached.

Its weaker hypotheses are the ones that arrive dressed as inevitabilities. Human labor will not disappear in one grand clean sweep because labor is not one thing. It is compliance, accountability, trust, embodiment, licensing, politics, ritual, coercion, care, presence, risk absorption, and sometimes performance for other humans. A machine may outperform a radiologist on pattern extraction, yet hospitals will still need someone to sign, defend, explain, insure, escalate, and absorb blame. Economics is not merely production. It is adjudication under uncertainty.

Likewise, abundance in bits does not magically solve scarcity in atoms. Compute sits on data centers, grids, minerals, cooling, land, network backbones, export controls, and capital. Even synthetic abundance has a supply chain. The mind may become cheaper. The world that houses the machine does not.

And yet the book is right to insist that old language is failing. “Reskilling” is not a serious answer if the question is whether human cognition remains the premium bottleneck. “Productivity” is not a sufficient lens if the gains accrue to a narrow ownership class. “Innovation” is not a moral good when it arrives as a distributional wrecking ball with a cheerful logo.

What makes The Last Economy interesting, then, is not that it sees the future clearly. Nobody does. It is that it names the fault lines with enough force that one can no longer pretend they are hairline cracks.

Read it, then, as one reads a dangerous map. The intelligence inversion is a hypothesis that the cheapest copy of many cognitive acts is no longer human. The seven fatal lies are hypotheses that industrial economics has mistaken contingent rules for permanent truths. The alignment economy is a hypothesis that governance of machine agency will matter more than sheer model capability. The three futures are hypotheses about institutional settlement: platform feudalism, geopolitical fracture, or some deliberately built symbiosis. The “nucleation” idea is a hypothesis that new orders spread not by universal persuasion but by local proof, by working prototypes that become too attractive to ignore.

All of that may turn out partly true, badly timed, overdrawn, or wrong in sequence. That is perfectly normal. The future has a long record of arriving sideways.

What history does show, with almost insulting consistency, is that utopians underestimate friction, doomsters underestimate adaptation, and both sides underestimate combination. We did not get the paperless office. We got PDF bureaucracy. We did not get atom-powered abundance. We got a complicated energy mix and fifty years of argument. We did not get an internet that simply liberated knowledge or destroyed civilization. We got both the public library and the carnival barker rolled into one glowing rectangle.

So the adult position is neither to sneer at futurism nor to kneel before it. It is to separate structural insight from theatrical certainty.

The Last Economy deserves to be taken seriously where it says this: intelligence is becoming weirdly abundant; market concentration around that abundance may become the central political economy question of the age; GDP and wages are poor instruments for tracking what is happening; and a society that ties dignity entirely to market labor is about to discover how brittle that bargain always was.

It deserves pushback where it implies inevitability, speed, or finality. History rarely grants clean endings. It prefers messy layerings. New regimes do not replace old ones in a single clean cut. They pile up, jam, interlock, and spend decades making life confusing.

Only the future can tell what the future will be because the future is not one variable. It is technology colliding with law, money, war, habit, boredom, fashion, energy, bureaucracy, fear, and ordinary human stubbornness. Crystal balls fail not because tomorrow is unknowable in principle, but because tomorrow is produced by too many systems touching at once.

That is why books like this matter, and why they must never be mistaken for scripture.

© 2026 Suvro Ghosh