The Coming Unemployment

By Suvro Ghosh

The first thing AI breaks is not the job. It breaks the path into the job, which is worse, because a civilization can survive some unemployment, but it cannot easily survive the disappearance of apprenticeship without turning its young into decorative livestock with degrees.

A great deal of soothing commentary about AI and work still begins with the same little parlor trick. Do not worry, it says, only the junior jobs are exposed. Only the entry-level coder, the first-year analyst, the call-center trainee, the content drudge, the spreadsheet monk, the poor young fellow told to “just get industry experience” by people who obtained industry experience during the reign of Queen Victoria. This is meant to reassure us. It does the opposite. It is like telling a man not to worry because only the ground floor of his house is on fire.

There is no senior engineer without a junior engineer first. There is no expert analyst who was not once a tolerated nuisance with a login, a manager, a ticket queue, and the sacred right to misunderstand production under supervision. The old workplace was inefficient, yes, but its inefficiency was also a school. People learned by doing the small, ugly, boring, reversible work. They learned by breaking builds, misreading requirements, annoying senior people, copying patterns badly, fixing them later, and slowly growing a second nervous system made of judgment. AI threatens that layer first because that layer looks, to a spreadsheet, like waste. It is not waste. It is how competence reproduces.

This is why the new labor-market optimism smells faintly of mothballs and consultant cologne. The World Economic Forum [WEF, a global policy and business organization that surveys employers about future work trends] can project tens of millions of new jobs by 2030, and the arithmetic may even be sincerely assembled, but a projected job is not a wage, not an offer letter, not rent, not a parent’s medicine, not rice in a kitchen in Dum Dum, not a career ladder with a first rung. It is a number wearing a tie. Some of the newly announced positions will be real. Some will be aspirational. Some will be budgetary incense waved before investors. Some will be requisitions kept open because a company wants to look vigorous, benchmark salaries, harvest resumes, frighten current employees, or maintain the fiction that help is coming. These are ghost jobs, and the modern job board increasingly resembles a cemetery with filters.

The ghost-job problem matters because it contaminates every cheerful statistic about opportunity. A job posting used to be a crude but useful signal: somewhere, a budget, a manager, and a need had briefly achieved alignment, which in corporate life is about as rare as a sober mongoose. Now postings can be branding, surveillance, salary research, compliance theater, or hope with no purchase order behind it. Greenhouse reported that a meaningful share of postings on its platform fit the ghost-job pattern, and surveys of hiring managers have found companies admitting to posting roles they did not intend to fill. That does not mean every job is fake. It means the applicant cannot easily tell which doors are doors and which are painted on the wall.

That is the correct frame for AI unemployment in 2026: not a clean machine-versus-human duel, but a foggy market in which real demand, fake demand, automated screening, capital discipline, outsourcing, remote competition, and corporate theater all blur into one demoralizing gray chutney. The person looking for work experiences it as silence. The economist experiences it as mixed signals. The executive experiences it as optionality. The unemployed graduate experiences it as a small private apocalypse refreshed every morning by email.

The old defense of the middle-class professional was complexity. We told ourselves that coding was hard, legal work was subtle, finance required judgment, medical documentation required context, customer support required empathy, analytics required interpretation, and management required the mysterious ability to attend meetings without visible bleeding. All true, up to a point. But AI did not have to become a complete replacement for each profession. It only had to become good enough to remove chunks of work, compress teams, and shift the burden from production to supervision. One senior engineer with AI tools may now do what previously required two seniors, three mid-level engineers, and one junior who was mostly there to learn and occasionally delete the wrong branch. The organization calls this productivity. The junior calls it nonexistence.

The mid-level worker is not safe either. The middle is where much of modern office life lives: interpreting tickets, writing glue code, creating reports, preparing decks, summarizing meetings, cleaning data, drafting emails, checking logs, reconciling systems, updating documentation, translating human confusion into machine-shaped artifacts. This is not glamorous work, but it is the connective tissue of organizations. AI is unusually good at producing first drafts of this connective tissue. Not always correct. Not always safe. Not always explainable. But often cheap enough and fast enough that a manager, under pressure, will decide the remaining errors can be handled by fewer humans downstream.

At the high end, the story becomes stranger. Senior people do not simply write better code or better memos. They decide what matters. They know what not to automate. They know which requirement is a trap, which metric is lying, which stakeholder is performing certainty, which legacy system is held together by fear and a cron job. Good senior judgment remains valuable. But AI also moves upward. It generates architecture options, compares frameworks, writes migration plans, produces design documents, reviews code, drafts strategy, and simulates expertise with the silky confidence of a hotel pianist. Much of this is shallow. Some of it is useful. The danger is not that AI becomes a wise architect. The danger is that organizations accept synthetic plausibility as a cheaper substitute for hard-earned judgment.

This is where the matter turns from technological to political. A labor market can absorb disruption when the new path is visible, affordable, and broadly accessible. But the current AI economy is not evenly distributed. The frontier models are expensive to train. The chips are scarce. The data centers are capital furnaces. The leading firms are backed by states, hyperscalers, and financial structures ordinary workers cannot even see from the pavement. A curious person with a laptop can use AI, yes. That is not the same as participating in the ownership, infrastructure, or direction of AI. Using the black box is not the same as building the cathedral. Most people will be tenants in someone else’s cognition factory.

This matters especially in India, where the educated labor market was already a crowded train with one functioning door. The engineering-college-to-service-job pipeline, however imperfect, gave millions of families a plausible story: study, code, join, migrate upward, buy appliances, tolerate life. That story is now under pressure. Indian information technology [IT, the industry of software services, outsourcing, systems integration, and digital operations] firms are not immune to automation, client budget tightening, platform consolidation, or the brutal fact that AI can eat precisely the kind of repeatable knowledge work that outsourcing industrialized. The first casualty may not be the famous senior consultant. It may be the fresher waiting outside the gate with a resume, a certificate, and a family that has already mentally spent the salary.

And then comes the social unpleasantness, the part polite conferences avoid because the sandwiches have arrived. Educated unemployment is not merely an economic variable. It is combustible material. A large population of credentialed, disappointed young people is historically not a decorative asset. Add religious polarization, political opportunism, regional resentment, algorithmic propaganda, family debt, urban isolation, and the daily insult of being told to “upskill” by men whose main skill is owning things, and you have a society placing oily rags beside a furnace. AI need not cause the fire by itself. It only has to raise the temperature.

The phrase “upskill” deserves a small trial at The Hague. Of course people should learn. Of course technical adaptation matters. But upskilling is often used as a moral solvent, dissolving structural failure into individual inadequacy. If a company eliminates entry-level work, posts ghost jobs, filters applicants through broken automated systems, requires three years of experience for a tool released eighteen months ago, and then tells rejected candidates to upskill, it is not offering advice. It is laundering blame. The applicant becomes the defective component. The system strolls away whistling.

There will be new jobs. Let us not become cartoon doomers with tin pots on our heads. AI will create work in model evaluation, data governance, safety testing, integration, security, workflow redesign, synthetic data management, compliance, AI operations, domain adaptation, human-machine interface design, robotics maintenance, and many fields we have not named yet. But the existence of new categories does not solve the transition. New jobs may require fewer people, higher credentials, geographic mobility, elite networks, better English, stronger mathematics, domain depth, or access to expensive tools. A displaced support worker cannot eat the abstract future occupation called “AI workflow orchestration specialist” unless someone hires him into it, trains him, and pays him before his landlord becomes metaphysical.

There is also a timing problem. Labor markets do not reconfigure at the speed of software releases. People have children, loans, aging parents, medical conditions, rent agreements, immigration constraints, caste and class barriers, language limitations, and the ordinary human inability to become a different person every fiscal quarter. A model can be updated overnight. A workforce cannot. A society that forgets this will discover that humans have bodies, and bodies become angry when fed only dashboards.

The most defensible outlook is therefore neither “AI will take all jobs” nor “AI will create more jobs than it destroys.” Both are slogans pretending to be analysis. The better claim is narrower and nastier: AI will increase the productivity of smaller groups, weaken the apprenticeship layer, make hiring signals less trustworthy, shift income toward owners and highly leveraged experts, and leave many educated workers competing for fewer genuine on-ramps. The result may not be universal unemployment. It may be something more administratively elegant and morally disgusting: permanent underemployment, intermittent contract work, fake openings, unpaid tests, AI-screened rejection, and a growing class of people who are technically skilled but economically unnecessary at the offered price.

The architecture of the working world is being refactored. In the old design, firms needed people because processes were labor-intensive. In the new design, firms need fewer people because processes can be partially automated, monitored, and scaled through software. The human role moves from doing to supervising, from drafting to reviewing, from searching to judging, from producing to correcting. This sounds elevated until one notices that a system needs far fewer reviewers than producers. A factory that replaces ninety workers with ten inspectors has not upgraded the ninety into philosophers. It has removed them.

The rich will adapt first because they always do. They will buy the tools, own the platforms, automate the firms, personalize the medicine, educate their children with AI tutors, and describe the resulting inequality as innovation. The upper professional class will attach itself to AI like remoras to a shark. The rest will be told that opportunity has never been greater, which is what powerful people say when opportunity has become harder to measure and easier to fake. There will be glowing reports. There will be panels. There will be founders in black T-shirts explaining that human creativity is more important than ever while quietly replacing the humans who used to create invoices, code, copy, support responses, test plans, and first drafts of everything.

The coming unemployment may therefore arrive wearing the mask of efficiency. It will not always look like breadlines. It will look like adults living with parents longer. It will look like marriages delayed, clinics avoided, cities full of delivery riders with engineering degrees, freelancers refreshing platforms at 2 a.m., middle-aged professionals discovering that their experience is both too expensive and not fashionable enough, and graduates training themselves for jobs that are already being quietly hollowed out. It will look like people being busy but not secure. Tired but not employed. Credentialed but not trusted. Available but not chosen.

For someone like me, the alienation has a technical flavor. I can use the tools, read the papers, follow the architecture, understand the direction of travel, and still feel the ground moving away. The cost of serious participation rises. The models become larger. The chips become geopolitical objects. The cloud bills become comic obscenities. The frontier moves from cleverness to capital. Once, being an engineer meant that a stubborn person with a machine, a compiler, a database, and enough tea could still feel some fellowship with the future. Now the future arrives as an API with usage limits, a terms-of-service document, and a monthly invoice that looks like it was written by a loan shark with a PhD.

There is still work worth doing. Governments should stop pretending that job creation projections are the same as labor absorption. They should measure real hiring, not postings. They should regulate deceptive job advertisements. They should support apprenticeships, wage subsidies, public digital infrastructure, and portable benefits. Universities should stop selling degrees as lottery tickets and start building supervised work pathways with real employers and real tasks. Firms using AI to compress teams should be made to explain where entry-level training will occur if they remove the entry level. And workers, poor devils, should learn AI not because it guarantees salvation, but because refusing to learn it is like refusing to learn email in 1998: emotionally satisfying for six minutes, then professionally fatal.

But no clean solution is coming. The ownership structure is too concentrated, the incentives too sharp, the technology too useful, the politics too slow, and the suffering too easy to individualize. We will muddle through, which is what societies call failure when it happens gradually enough. Some will prosper. Some will adapt. Some will be discarded and then blamed for the shape of the ditch.

The machine will not need to hate us. That is childish cinema. It only needs to be useful to people who find other people expensive. That is enough. The rest is not science fiction. It is procurement.


© 2026 Suvro Ghosh