
Many people are concerned that AI will take away their jobs. This is not unreasonable, and one underlying reason for the fear is that current AI technology has been built, in part, to automate. The corporate world has been trying to automate and mechanize human work for a long time, and in recent decades in particular, we have seen the steady routinization of white collar and thinking work. Recent AI technology has developed in this sociocultural context. I think that if the desire for automation weren’t so widespread in our society, recent AI technologies like large language models and other forms of generative AI either would have developed very differently or would not have developed in the first place.
This raises a question: to what extent must AI technology reflect this automative impetus, or can we create other forms of AI that work very differently? I will reflect on that in this article, though I do not yet have a definitive answer.
From the Industrial Revolution to Ford’s assembly line, we have seen wave after wave of technologies designed to automate blue collar work. That was the real idea behind factories: a shoe factory can produce far more shoes far more quickly than a family of shoemakers. The technology and machinery are built to increase scale. Since the second half of the nineteenth century, we have seen a similar push in the white collar world. These jobs too became incorporated into a corporate, semi-mechanistic machine of reports and meetings that allowed corporations to churn out thinking content much as a factory churns out goods. In a certain sense, computer algorithms themselves are an extreme form of this process: code is a set of detailed instructions that a machine follows literally, and algorithms are detailed instructions for completing tasks efficiently, an ultimate form of mechanization.
Out of this context, AI technology emerges as just the next attempt to automate thinking. It seems like a qualitative jump that will increase the degree to which this is possible, more than simply a continuation of the trend, but it is still part of a longer historical arc. Nothing is really new under the sun. Many of the people and corporations who developed AI technology did so with the automation of certain kinds of thinking work in mind.
Consider creative tasks like writing and drawing, which long seemed impossible to fully automate: sure, employers could shape the conditions under which these creative processes happened, but on some level, a human had to sit down and actually create the art. Now generating artistic products has become just another part of the automative process, where a program produces the output and the human creators may shift to a more editorial role, refining that output.
Historically, after the advent of pretty much any new technology, utopian optimists have declared the end of work: no longer will humans have to labor for the majority of the day; the new technology will do it for them. In the 1950s, for example, new housekeeping technology like vacuums and dishwashers was supposed to let housewives (housework being seen as “women’s work” at the time) finish all the housework in a few minutes and spend the rest of their time relaxing. Similarly, utopians in the tech world have promised four-hour workdays as the latest piece of tech automates most busywork.
This never seems to happen, though. Modern home appliances did make cleaning quicker, but social expectations for how clean a home should be rose to match, and suddenly the housewives of the era spent just as much time cleaning as before.
Similarly, when new technology substantially automates aspects of professional work, employers end up expecting the same amount of work, just with a bigger output. Our culture is obsessed with work, and without fixing that, new technology will not meaningfully decrease the amount of work; it will only shift the expected level of that work and shift what that work is.
All this relates to one of the biggest fears people have about the new AI technology: that it will steal our jobs. The fear has an element of truth. I don’t think AI will remove all jobs forever, because our society will always invent new ways to make people work, but many specific jobs in this day and age are likely to go by the wayside. Writing, for example, may shift to a kind of editing, where one refines what ChatGPT produces, something that may require fewer writers.
Within capitalism, there will always be more work to do, though: as the automative machine becomes more widespread (whether a factory, an ordinary computer program, or new AI technology), people will need to work in complex ways to maintain that machine. This may disenfranchise people as the skills they have cultivated stop being useful, and the new jobs could be more boring as the work becomes ever more routinized into a mechanized process.
But we won’t see an end to work unless we see an end to the capitalist mindset that there ought to be work. To slow down, we need to remove the drive for more, more, more at all costs. This new AI technology, or really any new technology for that matter, won’t do that.
Where does all this leave us? I don’t fully know. One question for those concerned they’d lose their jobs to AI would be, “Do you actually like your job?” Sure, you like your paycheck, but do you like your job? Some people with amazing jobs they are passionate about stand to lose them, but many whose jobs are most threatened are precisely those already doing mind-numbing drudgery in the first place. They work jobs they hate in fear that even those awful jobs will disappear on them.
What they really fear is an end to their livelihood. If they could have a livelihood without working their job, they’d take it in a heartbeat. For people in this situation, I’d suggest we rethink work itself. To do so, we may need to reimagine our relationship to work and profit, as difficult as that conversation is.
But all this brings us back to much older, longer conversations in our society. Why work in the first place? When people talk about AI, they see a new innovation coming out of nowhere, not the next step in a wider trend toward automation in our society. And maybe it doesn’t have to be the way it is. If we can grapple with this, we could reimagine forms of AI not implicitly built to feed an ever-spinning machine churning out more and more. Such AI could be far more interesting and beneficial to humanity.