Neo-Luddism in the Age of Artificial Intelligence
Power, Labour, Authenticity, and the Fight to Humanise Technological Progress
One of the cornerstones of the technological age is rapidity. Just as quickly as we have advanced towards a dystopian, dehumanised state of technology, we are seeing a rapid rise in protest, one that has taken shape in micro-trends and in what people consider ‘cool’ or ‘in’. If it hasn’t already been abundantly clear, we dislike AI and the dehumanisation of the creative world, and every other world, to be perfectly clear, but it feels slightly more disturbing in creative spaces, given art’s necessary relationship with humanity and passion. You can sense this shift in the new ‘dopamine detox’ rituals people say they are incorporating into their self-care routines. You can certainly see it in the way people hesitate before admitting they use generative AI for their writing, their code, their dating profiles; it is embarrassing now to depend on ChatGPT or Gemini rather than simply thinking for yourself. The old techno-optimism is fraying, and in its place is a sensibility that feels at once ancient and freshly minted, or, in a word: Neo-Luddism.
To invoke the original Luddites is to summon a caricature. Popular memory paints them as anti-technology zealots smashing looms in a blind rage against progress. The reality was more nuanced. They were skilled textile workers in early nineteenth-century England protesting the ways new industrial machines were being deployed to undercut their wages and destabilise their communities. They were not against technology in the abstract; they were against a particular political economy of technology. The frames they broke were symbols of dispossession. They were asking who benefits and who pays.
That question feels painfully current in the age of AI. When a large language model drafts a legal memo in seconds, or generates a magazine cover without an illustrator ever being hired, it opens a myriad of necessary conversations, and the primary one should be: ‘whose labour has now become invisible or been taken away entirely?’ The original Luddites feared that the new technology would remove the need for people, and with it their jobs and income; yet as great a threat as it posed, the machinery still required a significant number of working people to run it. With AI, where teams were once needed, a single tech-savvy person who understands the systems can now run them.
The companies building these systems frame them as tools of augmentation. At OpenAI, executives speak about empowering humanity and democratising intelligence. Their CEO, Sam Altman, has oscillated between visionary optimism and sober warnings about existential risk. This dual posture is part of the aesthetic of contemporary AI: we are promised both convenience and catastrophe. It keeps some enthralled whilst also keeping all of us slightly off balance.
To understand why this mood has traction, it helps to zoom out. The last two decades were dominated by a specific strain of technological faith. The internet would flatten hierarchies, or at least let us peer into every class system more closely than ever before, so that we might all feel like one; the social mobility it afforded would connect the world in an unprecedented way. There was always scepticism, but it was marginal compared to the prevailing narrative of liberation. Even as the cracks widened (misinformation, surveillance capitalism, gig-work precarity), the story held. Tech was progress, and progress was good.
Then came a series of reckonings. Whistleblowers exposed the psychological experiments embedded in social platforms. Researchers mapped how recommendation algorithms radicalised users. The term “surveillance capitalism,” popularised by Shoshana Zuboff, entered mainstream discourse, giving language to the sense that our clicks and scrolls were being harvested and monetised at scale. The romance cooled and tech was no longer just a shiny gadget; it was infrastructure with power.
AI arrived in this atmosphere of ambivalence. Its capabilities felt like science fiction made banal. In 1999, ‘The Matrix’ imagined a world where machines harvested humans for energy, while in 1984 ‘The Terminator’ gave us an unstoppable killer robot sent from the future. These were fantasies about domination. Today’s AI is more prosaic and, in some ways, more intimate. It does not hunt us in alleyways; it finishes our sentences, recommends our playlists, builds us a Spotify DJ, and drafts our emails in a tone eerily calibrated to our own.
That intimacy is precisely what unsettles people. When a machine can mimic your voice, your style, your humour, what remains distinctly yours? The anxiety is not just about jobs, though that is real enough. It is about authorship and authenticity. Entire industries feel threatened when a single system can master each of them in a matter of seconds, simply by scanning the internet and absorbing centuries of knowledge that people have spent years acquiring, investing both their money and their energy.
There is a temptation to frame this as a simple battle between progress and fear but that is lazy. The more interesting tension is between different visions of progress. One vision, inherited from industrial modernity, equates progress with efficiency and scale. If a task can be automated, it should be. If a process can be optimised, it also should be. This logic is relentless. It is the logic that turned artisanal workshops into factories and local shops into global platforms. It is also the logic that now sees cognition itself as a domain ripe for extraction.
Another vision of progress asks about flourishing. It is less impressed by raw productivity and more concerned with meaning, dignity, and autonomy. This vision has intellectual roots that stretch back at least to Karl Marx, who worried about alienation under industrial capitalism, and to the Romantic critics of mechanisation who feared a world stripped of texture and craft. It is echoed in the cautionary imagination of Mary Shelley, whose novel Frankenstein remains a parable about creation without responsibility.
Neo-Luddism taps into this second vision. It is not a wholesale rejection of technology but a demand that it serve human ends rather than redefine them. It asks whether a world saturated with AI might erode certain forms of skill we value precisely because they are hard won. It wonders whether friction, slowness, and even boredom have a role in a life well lived. It resists the framing of every inefficiency as a problem to be solved by software.
Part of what makes the current moment so charged is that AI development is concentrated in specific geographies and cultures. Silicon Valley is not just a location; it’s an ideology. It valorises speed, disruption, and the myth of the founder as a world-historical hero. When AI systems built in this milieu are exported globally, they carry embedded assumptions about language, norms, and values. A chatbot trained predominantly on English-language data reflects particular cultural biases. A content moderation algorithm encodes decisions about what counts as acceptable speech.
Communities outside these centres of power experience AI less as magic and more as imposition. Facial recognition systems misidentify darker-skinned faces at higher rates. Automated welfare systems flag vulnerable families for investigation based on opaque criteria. Predictive policing tools disproportionately target already marginalised neighbourhoods. In these contexts, neo-Luddism is not an aesthetic choice but a survival strategy: a refusal to accept bigoted algorithmic authority without accountability.
Yet there is also a quieter, more personal dimension to the backlash. It surfaces in the desire to log off, to write by hand and of course to read physical books. It also appears in the renewed interest in crafts, analog photography, vinyl records. These are not inherently anti-AI gestures, but they express a hunger for tactility in a world increasingly mediated by screens and models. When everything can be simulated, the real acquires a new aura.
The irony is that many of the same people who critique AI also use it. They experiment with prompts, marvel at its fluency, and rely on it for mundane tasks. This ambivalence is not hypocrisy; it is the condition of modernity. We are entangled with the systems we question. The original Luddites wore clothes woven on the very machines they protested. The challenge is not purity but agency; the difficulty is that the speed of AI’s advancement has outpaced both our protest and our ethical frameworks.
What would it mean to channel neo-Luddism into constructive politics rather than aesthetic posturing? It would require moving beyond personal vibes and edgy character choices to governance. Regulation is often dismissed in tech circles as innovation-killing bureaucracy, but history suggests otherwise. Labour laws, environmental protections, and safety standards did not end industrial progress; they shaped it. The question is whether democratic institutions can keep pace with AI’s velocity. So far, they evidently cannot, and the failure is producing catastrophes such as Elon Musk’s ‘Grok’ being allowed to create AI-generated child pornography and deepfakes of non-consenting people.
There are early signs of movement. Policymakers are debating transparency requirements for training data; courts are considering whether scraping copyrighted material constitutes fair use; unions are negotiating over the use of generative tools in creative industries. But is it too little, too late, or is that just a terribly pessimistic view? Many people, industries, and companies are now deeply dependent on AI tools; would taking them away, or even just safeguarding and regulating how and when they are used, spark protest and deep upset?
Culturally, we may also need to recalibrate our metrics of worth. If AI systems can perform many cognitive tasks faster and cheaper than humans, tying identity too tightly to productivity becomes dangerous. We risk a crisis of meaning if we equate value solely with market efficiency. A neo-Luddite sensibility nudges us to expand our definitions. Care work, community building, contemplation, and play may not scale well, but they are central to human life.
Education is another frontier. Instead of treating AI as either a cheat code or a cheating device, institutions could integrate it critically, teaching students not just how to use models but how to interrogate them. What data were they trained on? What biases do they exhibit? Where do they fail? This approach treats AI literacy as civic literacy. It acknowledges the technology’s presence without surrendering to it.
There is also space for design choices that embed restraint. Not every application needs to be maximally addictive or frictionless. Developers could prioritise user control, clear opt-outs, and slower defaults. This would require a shift in incentives away from pure engagement metrics. It would mean admitting that more usage is not always better.
The deeper philosophical question underlying neo-Luddism is about what kind of species we want to be. If intelligence can be simulated at scale, what distinguishes human thought? Some argue that consciousness, embodiment, and mortality remain uniquely ours. Others speculate that these boundaries will blur. The debate can quickly become abstract, but it has practical stakes. If we see ourselves as obsolete components in a machine-driven economy, despair follows. If we see AI as one tool among many, subject to collective choice, possibility opens.
In the end, the relevance of neo-Luddism is not that it predicts the end of AI. It insists on human judgment in the face of technological momentum. It reminds us that inevitability is often a story told by those who benefit from it. It invites us into a conversation about limits, about care, and about the texture of daily life in a world where machines can write, see, and speak.
You do not have to choose between worship and sabotage. You can demand better. You can ask awkward questions at product demos. You can support policies that align innovation with justice. You can cultivate spaces where human skill is valued not because it is efficient but because it is expressive. That posture may not trend on venture capital Twitter (we will forever refuse to say X), but it resonates elsewhere.
The future of AI will not be decided solely in research labs or boardrooms. It will be shaped by cultural attitudes, labour struggles, legal frameworks, and everyday habits. Neo-Luddism, in its contemporary form, is one thread in that tapestry. It is a reminder that technology is not destiny. It is design, and design can be contested.
We are early in this story. The models will get better, and the integration will deepen. The temptations of convenience will intensify. So will the critiques. The task is not to retreat into nostalgia for a pre-digital past that never really existed, nor is it to sprint blindly into a frictionless future. It is to stay awake and to keep the conversation alive.
Somewhere between those extremes is a politics of technology that feels grown up. It recognises that the tools we build, in turn, build us. If neo-Luddism helps us hold that tension a little longer, think a little harder, and demand a little more from the systems shaping our lives, then it is not a regression. It is a sign that we are paying attention.
References
Zuboff, Shoshana. The Age of Surveillance Capitalism.
Marx, Karl. Economic and Philosophic Manuscripts of 1844.
Shelley, Mary. Frankenstein; or, The Modern Prometheus.
OpenAI. Public statements and technical reports.
Altman, Sam. Interviews and public remarks on AI governance.
The Matrix. Directed by the Wachowskis, 1999.
The Terminator. Directed by James Cameron, 1984.