H. G. Wells’s World Brain and the Plight of the Invisible Worker
What a century-old vision of the global network tells us about "ghost" labor and the risks of intellectual totalitarianism in the age of AI
“Graham’s Escape,” illustration by Henri Lanos for “When the Sleeper Wakes” by H.G. Wells (1899)
In 1938 H.G. Wells famously predicted that “the whole human memory can be, and probably in a short time will be, made accessible to every individual.” But his vision of a so-called World Brain involved more than just organizing information; he was convinced that it would eventually incubate a “widespread world intelligence.”
Wells is best remembered today as a sci-fi writer (and enthusiastic proponent of free love), but he was also a prolific polemicist and social reformer, who thought deeply about the transformative possibilities of new technologies. Today, his vision of the World Brain seems eerily prescient, as we start to encounter tools capable not just of organizing human knowledge but of generating it.
I was reminded of Wells’s work while reading Henry Shevlin’s recent essay Behaviourism’s Revenge, where he ponders the question of machine consciousness. Putting aside the endless debates about AGI, he argues that ultimately, intelligence is in the eye of the beholder. “At a certain point, whether or not a system is ‘really’ conscious becomes less important than how people respond to it in practice.”
Wells arrived at something like this intuition from roughly the opposite direction. His putative machines are not convincing because of their inner workings, but because of the roles they come to occupy in the social imagination.
But what Wells underestimated—though he came close—is the extent to which such a system might tend toward extractive labor practices, and how easily it can shade into a kind of intellectual totalitarianism. In order to work effectively, any such system requires continual care and feeding.
Wells was not blind to the level of work that might be involved in constructing and maintaining the World Brain. He imagined an elaborate machinery of classification, curation, and coordination, carried out by trained experts and supported by new institutional forms. But in his telling, this work manifests as a kind of enlightened bureaucracy—and he declines to ask how such efforts might be compensated in practice, or whether social power imbalances might emerge as a result.
Today, companies pay workers to train the next generation of “intelligent” machines by folding laundry, setting tables, assembling furniture on factory production lines, or evaluating the quality of AI outputs—what researchers Mary Gray and Siddharth Suri call “ghost work.” These systems do not learn in the abstract; they are often built from painstaking, repetitive acts performed by a human labor force. Such systems do not simply spring into being; they must be continuously maintained.
Wells’s vision of the World Brain thus carries with it a darker undercurrent. He imagined that the World Brain would “hold men’s minds together in something like a common interpretation of reality,” ultimately “pull[ing] the mind of the world together.” But that vision raises an obvious question: whether a single, shared interpretation of reality is inherently desirable—and what kinds of biases might be embedded within it. Such universalist ambitions are never neutral.
More troubling still, Wells suggested that the World Brain would give rise to “a common ideology.” As appealing as such a thing might sound in principle—as long as the ideology happens to align with one’s own—it also implies a narrowing of intellectual life, with limited space for dissent or competing perspectives. In Wells’s World Brain, difference is not eliminated so much as absorbed into a single, overarching framework.
Implicit in Wells’s vision was a particular theory of authority: that knowledge could be centralized, curated, and stabilized by a relatively small group of trained experts.
Beginning with his 1905 work, A Modern Utopia, Wells developed a fascination with the problem of information retrieval—the need for better methods for organizing the world’s recorded knowledge. This led him to reject old values and institutional strictures and embrace a mechanistic approach, one founded on Taylorist ideals of scientific management and a belief in the power of science to solve humanity’s problems, the coming war in particular. Only by improving the flow of information, he reasoned, could humanity restore its collective moral, political and intellectual health.
In 1938, Wells published World Brain, a collection of essays and lectures drawn from his decades of thinking about the possibilities of new technologies for strengthening humanity’s collective intellect. He saw universal access to knowledge not just as an intellectual boon but as a crucial step toward an uplifted society, one that “foreshadows a real intellectual unification of our race.”
But who would do the heavy lifting of building this utopian world? His answer to the problem of coordination arrived in the figure of the “Samurai”: a self-selecting class of highly trained individuals who would guide and maintain the system. In Wells’s hands, the messy realities of knowledge work—its tedium, its conflicts, its power dynamics—are transmuted into a kind of ethical calling.
Wells’s Samurai bear a striking similarity to today’s AI artisans: highly skilled, accomplished people wielding an outsized influence over the lives of others.
“The social theorists of Utopia,” Wells writes, “did not base their schemes upon the classification of men into labour and capital. They esteemed these as accidental categories, indefinitely amenable to statesmanship, and they looked for some practical and real classification upon which to base organisation.” The Samurai were also, Wells writes, the custodians of the future: “Except for processes of decay, the forms of the human future must come also through men of this same type, and it is a primary essential to our modern idea of an abundant secular progress that these activities should be unhampered and stimulated.”
The systems we are now building present themselves as autonomous, self-improving, even self-explanatory. Yet beneath that surface lies a vast, distributed, and often invisible workforce: people who label data, train models, evaluate outputs, and moderate content. If Wells’s World Brain imagined a visible class of “Samurai” charged with maintaining the system, our own equivalents tend to remain out of sight—even as they perform many of the same functions. In this sense, Wells’s vision retains a distinctly premodern cast: for all its technocratic aspirations, the Samurai system rests on something like a feudal social order, with a small class of stewards presiding over a much larger—and largely unacknowledged—labor force.
When coherence presents itself as consensus, and consensus as truth, the line between coordination and control begins to blur. The machines may “think,” but what they produce is not a unified world brain so much as the appearance of one—assembled from the efforts of a workforce whose contributions are easy to overlook. That appearance carries the weight of authority while obscuring the work required to sustain it. Our present-day World Brain does not just generate answers; it demands our belief.
Notes:
Portions of this essay appeared in my books Informatica and Cataloging the World, and in my dissertation.
For an excellent overview of Wells’s work and the dark side of his imagination, see Boyd Rayward’s critical reassessment.
Other sources cited:
Gopnik, Adam. “The War Inside H.G. Wells.” The New Yorker, November 15, 2021.
Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Houghton Mifflin Harcourt.
Shevlin, Henry. Behaviourism’s Revenge (2026).
Wells, H.G. A Modern Utopia (1905).
Wells, H.G. World Brain (1938).


