The AI Business Model Makes a Broke Bengali Nervous

By Suvro Ghosh

Acronyms: AI means Artificial Intelligence, software that appears to reason, write, code, predict, classify, summarize, and sometimes confidently invent nonsense with the face of a bank manager. GPU means Graphics Processing Unit, the specialized chip now doing much of the heavy lifting in modern AI. GW means gigawatt, one billion watts of electrical power, which is not a poetic unit but a serious appetite. MW means megawatt, one million watts. Capex means Capital Expenditure, the money spent on long-lived assets like land, buildings, servers, chips, cooling systems, and other expensive objects that do not care about your feelings. PR means Public Relations, the art of making half a bridge sound like a completed highway.


AI has stopped looking like a software business and started looking like a construction company with a chatbot in the lobby.

That is the part that makes a broke Bengali nervous.

Not the cleverness. The cleverness is real. You can ask these systems to write code, explain Kant, make a picture of a shark under black water, or convert your bad English into respectable English without once laughing at your schoolteacher. That is not nothing. A machine that can do even half of that deserves a chair, tea, and at least moderate suspicion.

The problem is the bill.

Old software was a beautiful racket. You wrote the thing once, copied it endlessly, and sold it to the world. Microsoft did not manufacture a fresh Word every time some office fellow opened a document titled final_final_REAL_final.docx. The copy cost almost nothing. The margin was fat enough to require its own cardiologist.

Frontier AI is different. It eats GPUs, electricity, cooling, land, debt, transformers, fiber, backup power, construction labor, and patience. It is less like selling a song and more like running a steel plant where every furnace wants bottled water.

This is where the first trap opens.

People are asking, “Is AI fake?”

That is the wrong question.

The sharper question is: what if AI is real, useful, and still financially mistimed?

A thing can be real and overpriced. Tulips were real. Railways were real. The internet was real. Many dot-com companies still drove into history like a scooter with one brake and too much confidence.

The AI boom has a language problem before it has a money problem. Or rather, the language problem is part of the money problem.

“Announced” is not “built.”

“Planned” is not “powered.”

“Under construction” is not “running.”

“Operational” is not “fully operational.”

“Capacity secured” is not “customers paying enough to make the thing worthwhile.”

These words are being herded together in AI infrastructure PR like goats at a village fair, and then investors are invited to admire the herd as if it were a disciplined cavalry regiment.

Take Stargate and Abilene. OpenAI talked about nearly 7 GW of planned capacity and more than $400 billion of investment over three years. That sounds like a machine the size of mythology. But the Abilene material also described one building already running and seven more still under construction.

Both things may be true.

That is the trick.

If someone says “Abilene is operational,” the sentence may be technically correct. But the listener may imagine the whole grand machine humming away, when the more honest picture is one tea stall open beside a highway dhaba still covered in bamboo, cement dust, and men shouting at each other about wiring.

One stall is not nothing.

One stall is not the food court.

This difference matters because AI valuation is now leaning heavily on future physical capacity. Not vibes. Not pure software. Physical capacity. Real substations. Real land. Real permits. Real power. Real machines growing old in real time.

And machines do grow old. A GPU bought today is not a family deity. It does not sit forever collecting flowers. It depreciates. It becomes slower relative to the next generation. It consumes power. It needs paying customers. If it sits idle, it is not strategic capacity. It is a very hot cupboard full of investor optimism.
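The hot-cupboard problem fits in a few lines of arithmetic. Here is a minimal back-of-envelope sketch; every number in it (the GPU price, the four-year life, the power draw, the electricity rate) is an assumed round figure for illustration, not a vendor quote:

```python
# Back-of-envelope economics of an idle GPU.
# All numbers below are illustrative assumptions, not vendor figures.

GPU_PRICE_USD = 30_000          # assumed purchase price of one data-center GPU
USEFUL_LIFE_YEARS = 4           # assumed depreciation horizon before obsolescence
POWER_KW = 1.0                  # assumed draw per GPU, cooling overhead included
ELECTRICITY_USD_PER_KWH = 0.08  # assumed industrial power price
HOURS_PER_YEAR = 8760

# Straight-line depreciation: the silent cost the GPU pays every hour,
# whether or not a customer is renting it.
depreciation_per_hour = GPU_PRICE_USD / (USEFUL_LIFE_YEARS * HOURS_PER_YEAR)

# Power cost only accrues while the GPU is actually running.
power_per_hour = POWER_KW * ELECTRICITY_USD_PER_KWH

def breakeven_rate(utilization: float) -> float:
    """Hourly rental price needed to cover depreciation plus power,
    if only `utilization` fraction of hours are billable."""
    return (depreciation_per_hour / utilization) + power_per_hour

for u in (1.0, 0.5, 0.25):
    print(f"utilization {u:4.0%}: break-even ~ ${breakeven_rate(u):.2f}/hour")
```

The point of the sketch is not the exact dollar figure. It is the shape: depreciation runs around the clock, so halving utilization roughly doubles the price each billable hour must earn. The parked taxi still pays off its loan.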

Here is the little mystery in the room: how much of this promised AI capacity is actually productive compute today?

Not announced.

Not contracted.

Not in a slide deck.

Productive.

That word is the mosquito in the net.

Public companies disclose capex, but not always in the tidy format a Bengali accountant would prefer after lunch: how much went to land, how much to buildings, how much to power gear, how much to prepayments, how much to GPUs, how much to live GPUs, and how much to revenue-producing GPUs. Money can be committed without becoming useful capacity. A wedding caterer can be booked before a single luchi is fried.

This is why Ed Zitron’s loudest claims should be handled with tongs, but his central worry should not be dismissed. When he says the story is overcooked, he is not wrong to look at the oven. The absolutist version—nothing is being built, nobody is making money, it is all fraud—is too hot. NVIDIA is plainly making money. The big cloud companies are still profitable monsters. Amazon’s Project Rainier has been described as already supporting Anthropic workloads. There is real buildout.

But real buildout is not the same as good economics.

This is the part we tend to skip because it is dull, and dull things often kill faster than dramatic things. Debt is dull. Depreciation is dull. Grid interconnection is dull. Cooling is dull. Transformer lead times are dull. Then, one day, they form a small committee and ruin your heroic narrative.

AI used to be sold as magic in a browser.

Now it increasingly looks like project finance.

Borrow money. Pour concrete. Buy chips. Fight for power. Install cooling. Lease capacity. Chase utilization. Hope demand arrives. Hope pricing holds. Hope the next hardware generation does not make today’s miracle look like a Nokia phone in a museum drawer.

That is a lot of hope wearing a hard hat.

A few years ago, Silicon Valley talked as if capital were air. Now AI infrastructure is pulling in debt, private credit, bond issuance, and large long-term financing structures. That does not prove doom. Ports, bridges, airports, telecom towers, and power plants also use serious finance. But they come with a label attached: risk.

Construction risk.

Power risk.

Customer risk.

Utilization risk.

Obsolescence risk.

Refinancing risk.

A whole thali of risk, with extra lime.

Sitting in the southern fringe of Kolkata, where the ceiling fan has opinions and the lane floods if the clouds merely clear their throat, I find power demand a wonderfully clarifying subject. People may lie. Electrons do not. They are not impressed by valuation. They do not attend launch events. They do not say “agentic workflow” and then multiply themselves.

A GW is a giant appetite. Data centers do not merely need power in theory. They need it at a place, through a grid, at the right time, with enough cooling, backup, and stability. When US data-center demand begins pushing the grid hard enough that operators are asked to reduce load at peak periods, the story has left the TED Talk and entered the electricity bill.
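To put that appetite on a meter: using nothing but the definition of a gigawatt from the top of this piece, a flat 1 GW load can be converted to an annual energy figure. The campus size and the power price below are assumed round numbers for illustration:

```python
# What a gigawatt means on the electricity bill.
# The load and the price per kWh are assumed round numbers for illustration.

GW = 1_000_000_000       # watts, per the definition above
HOURS_PER_YEAR = 8760

campus_load_gw = 1.0     # assumed data-center campus drawing a flat 1 GW
price_usd_per_kwh = 0.08 # assumed industrial electricity price

kwh_per_year = campus_load_gw * GW / 1000 * HOURS_PER_YEAR  # watts -> kWh
twh_per_year = kwh_per_year / 1e9
annual_power_bill = kwh_per_year * price_usd_per_kwh

print(f"{twh_per_year:.2f} TWh/year")                       # 8.76 TWh per flat GW
print(f"${annual_power_bill / 1e9:.2f} billion/year for electricity alone")
```

Roughly 8.76 terawatt-hours a year per gigawatt, before a single chip depreciates. The arithmetic is indifferent to the press release.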

And everyone understands the electricity bill.

The Anthropic-Colossus story adds another puff of smoke. xAI and SpaceX described Colossus 1 as a vast AI compute cluster with more than 220,000 NVIDIA GPUs, while reporting said Anthropic would get access to more than 300 MW of capacity and possibly all of Colossus 1’s compute capacity.

Does this prove xAI is failing?

No.

That would be too neat.

But it tells us something. In this business, unused compute is perishable. If you build a dragon, the dragon must eat. If your own workload cannot keep it fed, you rent the dragon to someone else. This may be sensible. It may even be clever. But it also reminds us that the real game is not owning impressive hardware. The real game is keeping it busy at prices that justify its existence.

A parked taxi is still a taxi.

It is also not earning.

This is the central difference between the AI dream and the AI business. The dream says intelligence will become cheap and universal. The business says someone must pay for the chips, the buildings, the cooling, the network, the power, the loans, and the next round of chips after this round gets old and starts looking slightly embarrassed.

The dream is light.

The business is heavy.

That contrast is why I am skeptical without being dismissive. AI may become deeply useful. It may change software development, customer support, research, education, design, law, medicine, and all the clerical corners of life where human beings currently spend their days moving text from one box to another like bored priests of a small paper religion.

But usefulness is not revenue.

Revenue is not profit.

Profit is not return on capital.

And return on capital is where the floor may give way.

You may love a thing and still refuse to pay enough for it. Ask any writer, teacher, musician, open-source maintainer, or consultant waiting for a client who has suddenly discovered silence as a financial strategy.

The danger is not that AI has no value. The danger is that the infrastructure being built assumes a speed and scale of paid demand that may not arrive on schedule.

That phrase—on schedule—is the trapdoor.

If enough customers pay enough, soon enough, the story works. If not, the whole structure begins to look less like software and more like an overbuilt railway line waiting for passengers who are still at home comparing ticket prices.

Bubbles do not require fake things. This is important. The laziest bubble analysis says, “It is all imaginary.” History is ruder than that. Bubbles often grow around real breakthroughs. The new thing works. The future does change. But the financing gets drunk, removes its shirt, and starts dancing on the table.

That may be where AI is.

Not fake.

Not safe.

The serious bear case is not “AI is nonsense.” The serious bear case is “AI is real but too capital-hungry, too power-hungry, too debt-dependent, and too dependent on future revenue arriving exactly when promised.”

That is a more frightening sentence because it does not require the technology to fail. It only requires the spreadsheet to be too cheerful.

A middle-class man understands cheerful spreadsheets. They are cousins of New Year resolutions, gym memberships, and promises to eat less fried food after Durga Puja. They look excellent at the moment of creation. Then Tuesday arrives.

So the question I keep coming back to is small enough to fit in a teacup and large enough to worry an empire:

Can real paid AI usage grow fast enough to justify hundreds of billions in chips, power, buildings, debt, and depreciation before the hardware ages and the lenders become less romantic?

That is the question.

Not whether there is smoke. There is smoke.

Not whether AI is useful. It is useful.

Not whether big companies are building real things. They are.

The question is whether the real things are being built at the right price, at the right speed, for demand that is real enough, durable enough, and profitable enough.

This is not a technology fire yet.

It is an infrastructure-finance fire.

Maybe it becomes the kitchen stove that cooks the next century.

Maybe it becomes a transformer fire at the corner, with everyone standing outside in rubber slippers saying they always knew the wiring was suspicious.

I do not know.

But when a business needs chips from one kingdom, power from another, debt from a third, construction miracles from a fourth, and customers from everywhere, a broke Bengali sitting under a fan that sometimes coughs before starting is allowed to feel nervous.

P.S. References: OpenAI Stargate expansion announcement; Oracle/OpenAI Stargate data-center fact sheet; Microsoft FY2026 earnings call transcripts; Reuters reporting on Big Tech AI infrastructure spending, debt-market funding, private credit, and data-center power demand; Data Center Dynamics reporting on Anthropic and Colossus 1; xAI and SpaceX statements on Colossus; Amazon Web Services material on Project Rainier; NVIDIA fiscal 2026 financial results; Morgan Stanley analysis of data-center financing needs.


Topics Discussed

  • AI
  • Artificial Intelligence
  • AI Business Model
  • AI Bubble
  • AI Infrastructure
  • AI Data Centers
  • AI Capex
  • AI Debt Financing
  • OpenAI Stargate
  • Abilene Data Center
  • Microsoft AI Capacity
  • Amazon Project Rainier
  • Anthropic
  • xAI Colossus
  • Nvidia GPUs
  • GPU Shortage
  • Data Center Power Demand
  • AI Energy Use
  • AI Economics
  • AI Valuation
  • Big Tech Capex
  • Private Credit
  • Cloud Infrastructure
  • Technology Bubble
  • AI Finance
  • Engineering Blog
  • SuvroGhosh

© 2026 Suvro Ghosh