Why I Am a Luddite

The story most people know goes like this: in early 19th century England, textile workers who feared the future smashed the machines that threatened their jobs. Ignorance versus progress. The Luddites lost, technology won, end of story.

Brian Merchant, who spent years researching the movement, found something messier and more useful. The Luddites were not anti-technology; they were anti-poverty. They were skilled workers who understood the machines intimately and objected not to the machinery itself but to the conditions of its deployment. Their phrase for what they opposed was "machinery hurtful to commonality."

They were not asking whether the machines worked. They were asking who they worked for.

That reframing matters for AI in 2026.

I use AI every day. It is genuinely useful, occasionally magical, occasionally strange, and worth taking seriously. I am not arguing that it does not matter. What I have become more skeptical of is the claim that the current shape of AI adoption is inevitable, and that asking questions about it is just resistance.

That framing usually sounds like this: the future is already decided, the only question is whether you are keeping up.

Who decided this deployment model? What alternatives were considered? Who absorbs the cost when it fails? These are not anti-technology questions. They are basic governance questions.

I do not know the future of AI, and neither does anyone else, whatever confidence they project. What has helped is having a framework for the present: evaluate specific deployments, not the technology in the abstract.

The Luddite question is that framework: not is this impressive, but who does this serve, and on what terms?

AI Does Not Fix Systems, It Accelerates Them

One pattern keeps repeating: AI does not transform how an organization works. It accelerates whatever is already there.

A team with broken processes, heavy stage gates, stale documentation, and meaningless metrics introduces AI. Now it has governance bots enforcing the same gates, AI-generated versions of documents nobody reads, and automated summaries of reports that were already being ignored.

Everything gets faster. Nothing gets better.

I have seen this play out in data work repeatedly: dashboards nobody uses produced at higher volume, strategy documents polished into confident illegibility, status updates that are longer and cleaner but carry less signal than what they replaced.

Velocity increases and noise increases with it, for the same reason: the tool was applied to a broken model rather than to the question of whether the model was worth keeping.

The roughest implementations are often not caused by careless people. They happen because pressure to be seen adopting AI outpaces the harder work of deciding what adoption should mean.

The label changes. The thinking does not.

The IKEA Effect And Prompting

Dan Ariely and colleagues studied what they called the IKEA effect: people overvalue things they help construct. In one of the better-known experiments, participants made origami cranes and then stated how much they would pay to keep them. Neutral observers were asked to price the same cranes.

The builders were willing to pay roughly five times more.

The mechanism is effort, not quality. Putting work into something changes how we value it, independent of what it objectively is.
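A toy model makes the shape of that claim visible. Everything in it is an illustrative assumption (the linear form, the 0.5 attachment rate, the eight hours of folding); it is a sketch of the idea, not the study:

```python
def perceived_value(objective_quality: float, own_effort_hours: float,
                    attachment_rate: float = 0.5) -> float:
    """Toy model of the IKEA effect: perceived value grows with the
    effort YOU put in, independent of objective quality. The linear
    form and the 0.5 rate are invented for illustration, not taken
    from the study."""
    return objective_quality * (1 + attachment_rate * own_effort_hours)

# Same crane, same objective quality, different effort invested.
crane = 1.0
observer = perceived_value(crane, own_effort_hours=0)  # 1.0
builder = perceived_value(crane, own_effort_hours=8)   # 5.0
print(f"Builder values it {builder / observer:.0f}x more, on effort alone.")
```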

That is interesting in an AI context.

When you prompt something into existence, you frame the question, iterate on outputs, and shape the final result. That is real effort, even if it differs from producing every word or line yourself. And if even modest effort is enough to trigger feelings of ownership, then AI-assisted output may feel more trustworthy than it deserves simply because we invested effort in producing it.

[Figure: IKEA effect motif. Effort can increase attachment without increasing quality. Observer view: looks like generic output, low attachment, valuation 1x. How to read: same crane, different valuation signal; the output can look identical while attachment changes with effort.]

I do not think this means AI-assisted work is inherently bad. It means our confidence in that work may be less objective than we assume.

So I use a simple rule: if I have put significant effort into shaping AI-assisted output that I am about to act on, I get someone who was not in the room to review it.

Not because AI must be wrong. Because I may be the least reliable judge of something that feels like I made it.

At team scale, this matters even more. When everyone has invested effort in an AI-assisted deliverable, you can end up with a room full of ownership bias.
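The rule is simple enough to write down. What follows is a minimal sketch under invented assumptions; the field names and the one-hour effort threshold are placeholders, not anything from the research:

```python
from dataclasses import dataclass

@dataclass
class Deliverable:
    """An AI-assisted piece of work someone is about to act on."""
    ai_assisted: bool
    shaping_effort_hours: float   # prompting, iterating, editing
    drives_a_decision: bool
    shaped_by_whole_team: bool = False

def needs_outside_review(work: Deliverable, threshold_hours: float = 1.0) -> bool:
    """Significant effort shaping AI output that will drive a decision
    means someone who was not in the room should review it. Team-wide
    effort compounds the ownership bias, so the bar only gets lower."""
    if not (work.ai_assisted and work.drives_a_decision):
        return False
    if work.shaped_by_whole_team:
        return True  # a room full of ownership bias: always get a fresh reviewer
    return work.shaping_effort_hours >= threshold_hours

# A dashboard the whole team spent an afternoon prompting into shape.
print(needs_outside_review(Deliverable(True, 3.0, True, shaped_by_whole_team=True)))  # True
```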

Skills Are Shifting, Not Disappearing

This is not a case for denial. Skills have always split into those that depreciate and those that compound; AI is changing which is which.

The skills that are depreciating fastest are the ones AI can increasingly perform: fast SQL drafting, standard dashboard production, report formatting, basic summarization.

The skills that compound are judgment-intensive: knowing what to ask, reading context, seeing when an analysis is technically correct but organizationally wrong, and helping stakeholders clarify what they actually need.

As the cost of producing mediocre output approaches zero, judgment becomes more scarce and more valuable.

If anyone can generate a passable dashboard in five minutes, the value shifts to deciding which dashboards should exist at all.

Whether workers are compensated for that shift, or whether gains are extracted elsewhere, is the Luddite question applied to a career.

A Useful Question, Even Without Guarantees

The Luddites lost in the most literal sense. The state crushed the movement and the machines continued.

That is not an argument against asking the question. It is a reminder that asking it does not guarantee the answer you want.

And yet, versions of this question have won before: collective bargaining over automation, labor protections, organizations that deploy AI to augment skilled work instead of deskilling it.

The framework does not promise a good outcome. It gives you a way to see clearly enough to push for one.

Sometimes the answer is good. AI can remove tedious work and free people for higher-judgment work they find meaningful.

But on the surface, that can look very similar to AI deployed to deskill, justify layoffs, and extract more from fewer people under the language of progress.

The Luddite frame helps you distinguish between those two paths.

Not by refusing the technology, but by asking what it is for, who decided, and who benefits.

Nobody else is going to ask that on your behalf.

You should be a Luddite too.

Written by Mitchell Lisle, a Data / Privacy Engineer based in Sydney.