
Will AI replace bullshit jobs?

Sep 06, 2025


I just finished David Graeber's 2018 book "Bullshit Jobs", based on his 2013 essay "On the Phenomenon of Bullshit Jobs". The essay apparently caused a bit of a sensation at the time, including an attempted rebuttal from The Economist, and several polls were conducted to find out the extent to which Graeber was correct. The key thesis is:

While corporations may engage in ruthless downsizing, the layoffs and speed-ups invariably fall on that class of people who are actually making, moving, fixing and maintaining things; through some strange alchemy no one can quite explain, the number of salaried paper-pushers ultimately seems to expand, and more and more employees find themselves, not unlike Soviet workers actually, working 40 or even 50 hour weeks on paper, but effectively working 15 hours just as Keynes predicted, since the rest of their time is spent organizing or attending motivational seminars, updating their facebook profiles or downloading TV box-sets.

While Graeber is careful to point out that he cannot decide which jobs are bullshit and which are not, a startling number of people think their own jobs are bullshit. His argument rests on believing them, and he makes a convincing case that we should. The limitation of the essay at the time was that he didn't have real data: he had to rely on anecdotal accounts from bullshit-job holders. By the time the book was published, surveys had been conducted showing that roughly 40%-50% of the population of both the US and the UK believe that their own jobs have no social value of any kind and should not exist. I don't want to discuss here whether Graeber is right. That argument seems to have been going on since 2013, and I doubt anyone is going to change their mind based on what I say. What I would like to do is think about two big narratives of work in the 2020s, and how the "bullshit jobs" thesis might inform what people are writing and saying about them.

Remote work, hybrid work, and return to office

"Bullshit Jobs" (the book) was published in 2018. Two years later, nearly everyone who had a bullshit job had to leave the office, and work via email, Slack, and Zoom. As the pandemic receded, most white-collar workers did not return to the office full-time; instead, most of them can now do their jobs in some kind of hybrid work arrangement, and (especially in tech) lots and lots of workers are fully remote. Why workers partly won this particular battle against their managers, the first they'd won in decades, remains a bit of a mystery. Cal Newport has argued that the pandemic simply reversed the ordinary labour collective action problem: since workers were already home, employers who demanded their return-to-office were faced with the loss of enough of their workforce that they blinked. In addition, so many people either moved or took new fully-remote jobs during the pandemic meant that there were plenty of workers who could not return, even if they had wanted to (which they mostly didn't). What do we make of bullshit jobs that are done over the internet? I imagine the experience is somewhat similar to what Anne Helen Peterson calls "LARPing your job", a constant need to perform the ritual of "presence". Estimates vary, but most research shows that workers check Slack and email excessively, and it's difficult to imagine that much deep work gets done when checking Slack once every few minutes. Thus, remote work does not seem to have ushered in a revolution where deep work has replaced presence, and bullshit remains bullshit. The fact that workers feel a need to perform in this way suggests that the bullshit was always the point.

Will AI replace bullshit jobs?

Just as the pandemic seemed well and truly behind us, the release of ChatGPT in November 2022 caused the world to suddenly take large language models seriously, with attendant predictions that AI was capable of doing a large amount of knowledge work. Opinions are greatly divided on this, with some (mostly the leaders of AI companies) predicting that job losses will be so severe that we will have to "reorganize society" to allow it to function, while others are skeptical that AI can do much that is valuable enough and reliable enough to replace a human worker. Both predictions are ultimately pessimistic: either mass unemployment, or, once the AI bubble bursts, a recession (causing mass unemployment).

However, if most of the jobs to be replaced are bullshit, the actual future we realize might be even worse. It suggests that most existing bullshit cannot be replaced: since bullshit jobs don't exist for any real reason anyway, keeping people employed is the point, not the work that they do. But the constant threat of replacing workers with AI provides a perpetual sense of precariousness that companies can use to keep their workers further in line. Depressingly, this might mean that the AI bubble does not actually burst, or at least not for a long time. The irritating hype we've seen from managers and tech journalists over the last three years can continue indefinitely, no matter how many catastrophic failures the attempt to pivot to AI creates. The point is not to replace real work with AI, because the real work never existed in the first place.

We may even see a whole new category of bullshit job: fixing the problems introduced by the AI that the company unwisely adopted in the first place. Graeber talks quite a bit about the phenomenon of "duct tapers": workers whose job it is to deal with systemic problems that the organization is unwilling to fix. AI creates huge opportunities in this space: already there are legions of coders whose day-to-day work seems to consist mostly of fixing problems in AI-generated code. Perhaps you shouldn't fear losing your job to an AI so much as you should fear AI making your job suck (more than it already does).

Conclusion

David Graeber passed away in 2020, which is really too bad, because there are few people I'd rather talk to about the dumpster fire of the current decade than him. Maybe someone can make an AI that will answer questions on his behalf.

