From Authors to Interfaces: what ‘Energym’ gets right about our AI future
In the world of “Energym”, set in 2036, humans no longer train models; they literally power them, buying back a sense of usefulness by turning their bodies into batteries for the very systems that made them obsolete.
From techno‑utopia to human treadmills
The video’s premise is brutal: after a 2026 crash and mass unemployment, the tech elite reinvents work as a giant gym where people pedal to keep AI running, “fulfilling both the machine’s need for energy and the people’s need for purpose.” Beneath the irony lies a real fear: if AI eats most cognitive and service jobs, will we end up inventing artificial “tasks” to keep humans busy, supervised and pacified by the same platforms that automated them out of a role?
Energym is funny because it is absurd; it is unsettling because it is only one step beyond how we already talk about “reskilling”, “engagement” and “wellbeing” while aggressively automating away human initiative.
From Authors of our lives to mere interfaces
For centuries, our institutions have presupposed a certain kind of human: an Author of their own life, capable of forming judgments, taking decisions, acting in the world and owning the consequences. Schools are supposed to form critical thinkers, markets rely (in theory) on informed choices, democracy treats each citizen as co‑author of the common rules, and civil law recognises individuals as subjects of rights and responsibilities.
AI challenges this model by slowly shifting four pillars of what it means to be an Author:
Political: the citizen becomes a user who clicks “accept” in systems that decide upstream, a contemporary form of voluntary servitude that Étienne de La Boétie would instantly recognise.
Cognitive: everyday thinking (searching, drafting, structuring arguments) is increasingly offloaded to assistants that predict and pre‑write, leaving us in the role of validators rather than originators.
Experiential: instead of acting, failing, learning and being transformed, we consume pre‑packaged outcomes (the meal instead of cooking, the answer instead of the search, the curated feed instead of debate).
Gestural: skills and know‑how embodied in the body and hands give way to prompts on a smooth screen, weakening the link between thought, action and material reality.
Seen in this light, Energym is far more than a joke about bikes and billionaires; it is a caricature of a deeper drift in which humans slide from Authors of systems to mere interfaces through which technical architectures execute their own logic.
Energym as a dark mirror of “purpose”
The most disturbing line in the mockumentary is the last one: “Energym solved our need for energy and your need for purpose.” Purpose is reframed as the feeling of being useful to a machine that no longer needs us for thought, coordination or creativity, only for raw input.
You can read the setup as a hyperbolic version of several trends already visible today:
Work reframed as engagement metrics: steps, streaks, OKRs, badges.
Purpose reduced to “alignment” with corporate roadmaps or system goals.
Politics displaced into product design and terms of service, where the only real choice is to accept or be excluded.
The fictional Energym participants are “lucky” to have something to do, but their margin of real decision is almost zero: they don’t design the system, they don’t debate it, they don’t even choose the objective. Their main freedom is to pedal harder.
A different question for AI leaders
The usual AI discussion in business is framed around productivity, headcount savings and competitive advantage. Energym invites a harder question: after automation, what kind of agency remains for the humans still inside the system?
A few filters that leaders, policymakers and builders can apply when they design AI‑augmented organisations:
Political: is there space for refusal, negotiation and contestation, or only for “accept” and “customise your preferences”?
Cognitive: do tools genuinely extend human thinking, or do they quietly replace it with plausible defaults that people just tweak?
Experiential: does the system leave room for error, surprise and learning, or does every deviation from the optimal path get treated as a bug?
Gestural: where do human bodies, voices and skills still matter for real, beyond serving as ID tokens or, in the Energym nightmare, pure energy sources?
If we cannot answer these questions, we are participating in the quiet manufacture of humans as low‑agency interfaces inside high‑intensity systems.
Energym, in the end, is a joke with an uncomfortable aftertaste: it forces us to ask whether we are using AI to free up more human authorship, or to make it easier, one optimisation at a time, to live comfortably without it.
P.S. These questions (about agency, delegation to AI, and the long transformation from humans as Authors of their lives to humans as Interfaces in technical systems) are at the heart of my next book, Homo Delegatus. It explores how AI, platforms and automation reshape our capacity to think, decide, act and embody choices, and asks what it would take to reclaim authorship in a world designed to make delegation the default.
P.P.S. The “Energym” mockumentary itself was created by AiCandy, a Belgian AI video agency that blends human creativity with generative tools to craft satirical and commercial films, proudly made in the same small country where I live.