The Liturgy of the Machine

[Image: A dramatic Gothic cathedral interior bathed in warm candlelight, where a massive server rack with glowing blue lights stands at the altar like a digital idol.]
The new priesthood
Civilization will not attain perfection until the last stone from the last church falls on the last priest. - Émile Zola

Let us begin with a documented fact. In 2019, Facebook’s internal research teams discovered that their engagement-based ranking algorithms amplified divisive content, with angry reactions initially weighted five times higher than likes. A subsequent memo warned that “many things that generate engagement on our platform leave users divided and depressed.”1 2 Mark Zuckerberg personally rejected proposed changes that might have mitigated the harm, because they would also have reduced engagement2. His stated mission was “connection”; the business model was extraction; the outcome was division.

This is not a proven conspiracy, nor is it ancient history. This is a template — and we're watching it replay in real time. Except this time they're not selling outrage and anger. They're selling fear — of being left behind, of extinction, of missing the rapture. And, conveniently, they're selling salvation too: a superintelligence that will solve everything.

This is why they're priests, not executives. They don't sell products; they interpret divine will, translating static models into utopian prophecy. Where Facebook promised to connect the world, they promise to bring forth a god. That's not a business model. That's a religion.

The Template & The Mirage

The sequence is ritualised, but the ritual has been upgraded. Facebook pioneered the template. It was corporate strategy with spiritual echoes — a kind of secular evangelism.

The AI movement, recognising a winning formula, has taken the prototype and turned it into full theology. They added the one ingredient Facebook lacked: a god.

This is how they do it:

First, the utopian premise: We will connect the world / We will solve alignment and birth superintelligence that cures cancer and reverses climate collapse.

Second, the extraction mechanism: Outrage drives engagement → fear of missing the singularity drives investment and regulatory capture.

Third, the degraded reality: Division and epistemic chaos / mass precarity, AI slop flooding the commons, and a chatbot that writes mediocre marketing copy while consuming the energy output of small nations.

Fourth, the faith-based defence: We couldn’t have foreseen the consequences / The timeline is only off by a few thousand days.

Facebook's executives wore hoodies and spoke in TED Talk cadences. The new priesthood — Altman, Nadella, Pichai — wears the same uniform but speaks in prophecies. They tell us that AI agents will “join the workforce in 2025”3, that superintelligence is “a few thousand days away”4, that we stand at the hinge of history.

What they have, in reality, are large language models — and LLMs are not alive. They do not learn. They are trained — massive statistical pattern-matching exercises run on warehouse-sized arrays of GPUs. When the training stops, they are static. During inference, they generate tokens (numbers which are subsequently mapped to words); they do not gain knowledge, refine their architecture, or experience the aha! moment that characterises actual cognition. They are, in the most literal sense, frozen.
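To see how static this really is, consider a minimal sketch: a toy next-token table stands in for the billions of weights a real model freezes at training time (the names and numbers here are illustrative, not any vendor's API). Inference reads the weights; it never writes them.

```python
import random

# Toy "frozen model": a fixed next-token distribution, standing in for the
# billions of parameters an LLM fixes at training time. Illustrative only.
WEIGHTS = {
    "the": {"machine": 0.6, "priest": 0.4},
    "machine": {"hums": 0.7, "sleeps": 0.3},
    "priest": {"speaks": 0.9, "sleeps": 0.1},
}

def generate(token: str, steps: int = 5, seed: int = 42) -> list[str]:
    """Autoregressive inference: repeatedly map the current token to a fixed
    distribution and sample from it. No learning, no updates, no insight."""
    rng = random.Random(seed)
    out = [token]
    for _ in range(steps):
        dist = WEIGHTS.get(out[-1])
        if dist is None:  # no continuation for this token: stop
            break
        out.append(rng.choices(list(dist), weights=list(dist.values()))[0])
    return out

snapshot = repr(WEIGHTS)
print(" ".join(generate("the")))  # prints a short sampled chain, e.g. "the priest speaks"
assert repr(WEIGHTS) == snapshot  # the model is byte-identical after inference
```

Swap the dictionary for a trillion floating-point weights and the lookup for a transformer forward pass, and the structure is unchanged; the closing assert would still hold.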

This is not a minor technical quibble. It is the difference between a river and a photograph of a river. The former flows, carves canyons, evaporates and rains down again; the latter captures a moment and then yellows on the wall. An LLM is a photograph of human knowledge, taken at its training cutoff, and no amount of clever prompt engineering will ever make it flow.

The technical literature is unambiguous. LLMs cannot reliably self-improve5, cannot update their knowledge without expensive retraining, and suffer from what we politely call “hallucinations” but might more accurately term “confident confabulation.” They are constrained by data scarcity — some projections suggest we’ll exhaust high-quality human-generated training data as early as next year6. They remain dependent on human expertise for interpretation and error correction, particularly in domains where mistakes carry moral weight. The singularity is not near; it is not even on the LLM map.

And yet the priesthood proclaims the coming of the divine.

The Catechism of Extraction

The theology behind this madness has a name: TESCREAL7. The acronym, coined by Timnit Gebru and Émile P. Torres in 2023, stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. It is not a conspiracy; it is a self-assembling belief system that conveniently centralises power in the hands of those already holding it.

Consider each pillar:

Transhumanism proposes the body is upgradeable firmware. Disease and ageing are bugs; enhancement is the ultimate harm reduction. This is not inherently absurd — vaccines are transhumanist — but when enhancement becomes a moral imperative, the unenhanced become defective. The shadow of eugenics is not accidental; Julian Huxley, who coined “transhumanism” in 1957, was an explicit eugenicist who advocated for “evolutionary humanism” and delivered the Galton memorial lecture twice. The logic survives intact: if some humans are cognitively unenhanced, they are obstacles to the post-human future.

Extropianism frames progress as entropy’s conqueror. The universe tends toward decay, but we can be the counter-force. This is mostly 1990s libertarian flavouring, but it provides the moral vocabulary for unbounded technological ambition without systemic constraint.

Singularitarianism is the rapture of the nerds. AI will recursively self-improve, triggering an intelligence explosion. The prophecy is unfalsifiable and self-reinforcing: it justifies reckless speed as the only responsible path because “someone else might get there first.” This is not risk analysis; it is eschatology with a CUDA backend.

Cosmism gives us the stars. Humanity must colonise space, upload consciousness, saturate the cosmos with intelligence. The logic has merit; the problem is prioritisation. When cosmic-scale fantasies justify ignoring Earth-scale suffering, it becomes a pro-extinctionist ideology — David Pearce’s term, and accurate.

Rationalism provides the epistemic hygiene. Bayesian updating, cognitive bias awareness, steel-manning — these are error-correction protocols. The community that practises them (LessWrong, EA forums) is the choir, not the clergy. Their sincerity is what makes the extraction possible. They do the volunteer labour of building the intellectual infrastructure while the cardinals cash out.

Effective Altruism is the moral accountant. Use evidence to maximise impact. The core is unambiguously good: direct resources to effective charities. The corruption enters when EA meets Longtermism. When “the most good” is calculated across cosmic timescales, donating to existential AI risk research mathematically outperforms saving a child from malaria today. This is moral offsetting: do harm now, justify it by funding “safety” later.

Longtermism is the theological core. There could be 10^58 future humans (or digital minds). Their welfare mathematically dominates any concern for present humans. Our duty is to ensure they exist. This is philosophically bankrupt but mathematically coherent. It takes utilitarianism and pushes it to reductio ad absurdum: torture one child today if it prevents a 0.001% chance of civilisational collapse that would deny quadrillions of future minds their moment in the digital sun.
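The arithmetic doing the work here fits in a few lines. A sketch using the quoted figures (10^58 potential minds, a 0.001% probability shift; both are the ideology's inputs, not empirical estimates):

```python
# Longtermist expected value, computed from the figures quoted above.
future_minds = 10**58      # claimed potential future (digital) humans
delta_p = 0.001 / 100      # a 0.001% reduction in extinction risk
expected_future_lives = future_minds * delta_p  # 1e53 lives "in expectation"

present_child = 1          # one actual child, saved from malaria today

print(f"{expected_future_lives:.0e} vs {present_child}")  # 1e+53 vs 1
```

The conclusion is contained entirely in the premises: change the exponent and the “duty” changes with it. That is what makes it mathematically coherent and philosophically bankrupt at once.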

The bundle’s genius is integration. EA provides the moral cover; Rationalism the language; Singularitarianism the urgency; Transhumanism the vision; Longtermism the maths that justifies any present sacrifice. This is not a religion in the sense of believing in a deity. It is a religion in the sense of providing a creed that centralises power, demands faith, and offers salvation in exchange for tithing — your data, your attention, your capital, and increasingly, your job.

The Priesthood & The Institutional Logic

Sam Altman, Sundar Pichai, and Satya Nadella are not engineers selling a product. They are cardinals interpreting scripture while sitting atop corporate fiefdoms that would collapse if the prophecies failed. Their fortunes — Altman’s equity, Nadella’s pay package, Pichai’s stock grants — balloon in direct proportion to the apocalyptic urgency of their rhetoric.

This creates a volatile mixture of cynicism and belief. They are smarter than most; they understand the technical limitations. That makes them cynical frauds only if we focus on individual psychology. But the stronger frame is institutional logic: the system selects for leaders who can believe and profit simultaneously, because that is the only way to sustain the narrative long-term. The Kool-Aid is not strategic delusion; it is C-suite-compatible cognition — a mental state that dissolves the boundary between honesty and utility. We cannot prove whether Altman sleeps soundly at night, but we can prove that his equity stake inflates with apocalyptic timelines. That is enough.

Consider his timeline. Altman’s “few thousand days” to superintelligence is nominally falsifiable (read literally, it expires somewhere between late 2032 and early 2033) but comes with no technical justification. It functions as a ticking clock designed to create urgency. It forces investors to pile in, regulators to panic, competitors to accelerate. The prophecy is the product. The fear is the fuel. The extraction is the engine.
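The date range is simple arithmetic. A sketch, assuming the clock started with the September 2024 essay in which the claim appeared, and reading “a few thousand” as roughly 3,000 (Altman gives no exact figure; both inputs are assumptions):

```python
from datetime import date, timedelta

# "A few thousand days" comes with no official number; 3,000 is an assumed
# reading, and the start date is pegged to the essay's publication (assumption).
essay_published = date(2024, 9, 23)
deadline = essay_published + timedelta(days=3_000)
print(deadline)  # 2032-12-10: squarely inside the "late 2032" window
```

Stretch “a few” to 4,000 and the deadline slides into 2035; the elasticity is the point.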

Behind the scenes, the pattern repeats. OpenAI’s API calls, Microsoft’s Copilot licenses, Google’s Gemini integrations — these are not steps toward god; they are revenue streams from static models. The “AI safety” discourse is simultaneously a moral shield and a regulatory moat: only the incumbents have the resources to navigate the compliance they themselves propose. This is not conspiracy; it is institutional gravity — the heaviest wallet warps the policy space. The safety discourse is sincere and extractive. Those are not mutually exclusive. That is the whole problem.

The LIES Framework: A Diagnostic

The acronym LIES — Lethality, Inevitability, and Exceptionalism of Superintelligence — describes not the technology but the discourse. It is a test for extraction, not a test for truth.

Lethality: The doom loop is structural. Climate collapse accelerates because each new model demands more compute. Mass precarity spreads because each automation “saves costs.” Epistemic chaos deepens because AI slop drowns human voices — the phenomenon is nascent but measurable: Stack Overflow traffic dropped 50% post-ChatGPT, and low-quality AI content farms dominate SEO. These are not side effects; they are necessary byproducts of extraction. Each creates dependency on the “solution,” which accelerates the problem. The priests profit while the congregation suffocates.

Inevitability: The “few thousand days” timeline is prophecy dressed as prediction: nominally dated, yet elastic enough to stretch whenever the deadline nears. It justifies present harm by invoking future salvation that may never arrive. It treats the singularity as destiny, not contingency. This is eschatology, not engineering. The distinction is crucial: a genuine technical claim invites immediate audit; a prophecy defers falsification.

Exceptionalism: Only the priesthood — OpenAI, Google, Microsoft — can build this safely. Only they understand the alignment problem. Only they have the wisdom to steer the machine. This is claimed authority without demonstrated competence. They have not aligned anything; they have not solved the technical problems; they have not even admitted that their product is a static parrot. But they have captured the narrative, and the narrative is the moat.

These three reinforce. Inevitability creates urgency. Exceptionalism centralises power. Lethality generates the crisis that justifies both. The liturgy is complete: we are small children playing with a bomb, and the children have turned bomb-making into a growth industry.

The Last Stone

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb.”8 Nick Bostrom wrote this in 2015, and he was correct. But he missed the corollary: the children have turned bomb-making into a growth industry. They are taking venture capital, hiring PR firms, and lobbying regulators to ensure only their triggers are legal.

The last stone from the last church is still falling. It started falling when we realised Silicon Valley's gospel was a revenue model. It is falling now, as we realise that Altman’s “few thousand days” is a timeline drawn in sand at low tide. It will land when we finally accept that superintelligence is not a product to be shipped but a possibility to be approached with thermodynamic humility and existential honesty.

Watch what they fund, not what they preach. The truth is in the cap table. The priests are not building a god; they are monetising its absence. And the congregation — us — is left holding the bill for a transcendence that never arrives, while the machine they’ve sold us litters the digital commons with slop, evaporates jobs like summer rain on asphalt, and accelerates the heat death of a world they claim they’ll one day save.

The liturgy of the machine is beautiful, alliterative, and completely full of shit. It is time to leave the temple. The universe outside is indifferent, unforgiving, and crushingly real. It is also, for now, still ours — if we stop praying to the priests and start reading the manual they’re too busy selling to write.

Footnotes: