Despite GenAI already being able to absorb many roles, there has, ironically, been little movement in roles actually being reduced or replaced. When the ChatGPT-pocalypse hit, many forecasts predicted that the prophesied wave of automation and its knock-on effects was near. Some independent data from around the world seemed to back this up, with startup CEOs in India and corporations like UPS trimming the bloat from their workforces. All indicators pointed to a mass shock to the global job market. But it never came.
My prediction isn’t about the futility of GenAI replacing jobs. While I believe that to be true, I find it boring to constantly repeat. Instead, I want to take a gander at the potential social impact of automation: how citizens would treat the companies and employers who choose non-humans over the pack.
Automation is simply the term for losing a job to, or being replaced by, a machine. It goes back to the Industrial Revolution of the 18th and 19th Centuries, when many humans lost their physical roles but ended up reskilling into some facet of the workforce that machines couldn’t touch. This made sense at the time. Machines can do things humans cannot (lift heavier loads, be more productive), so naturally we should remove humans from these tedious and possibly dangerous physical tasks and use robots or other machines instead. That logic doesn’t hold anymore. In the 21st Century, cognitive as well as physical labour is being superseded, relying more and more on silicon and less and less on carbon.
The social impact of this, again looking at the Industrial Revolution, was spotty but ultimately calm. People were up in arms over being replaced but cooled off once they were able to find other work. Jobs were destroyed, then created, and even augmented, by machines.
In 2024 and beyond, jobs are being destroyed but not created. This is a stark departure from what was the status quo: technology destroys a little but creates a lot. The PC, for example, took away three million jobs but created 19 million. AI was supposed to follow a similar if not better ratio, destroying a lot more but creating a massive surplus; a garden of plenty in the job market.
Unfortunately, AI and GenAI are not equivalent terms. To take a bit of a tangent: AI, going back to 1956, was meant to be human intelligence in a machine. Let’s leave that definition as plain as possible for now. GenAI refers to mimicking human intelligence, an imitation game. Essentially, GenAI is judged by its outputs, replete with all the problems that come with “stochastic parrots”: bias, misinformation, degraded focus, and so on. If it sounds like a parrot, it’s probably a parrot. But there’s no real ghost in the machine, no deus ex machina. GenAI is just fancy computation and lots and lots of data. The two terms are straying farther and farther apart, and it doesn’t help when people use them interchangeably. If human intelligence were actually put in a machine, things would be different, especially when it comes to automation.
GenAI is great at some things and horrible at others. This means that only certain types of jobs can be replaced or reduced, mostly very low-skill cognitive labour. Customer support specialists, for example, are usually copying and pasting responses. That doesn’t take much to automate, so plenty of customer support roles currently done by humans should be going to GenAI chatbots. But this isn’t happening.
Coming back to automation, the dire question is: why? Why aren’t we seeing mass layoffs across the workforce? Surely, if padding the bottom line is still the purpose, and GenAI is clearly capable and far more efficient, then the sweeping force of automation should be imminent. In simpler words: if the technology is adequate for the masses, why aren’t the levers inside corporations being pulled to realize the wave of automation?
There seems to be a social aspect to automation. More probably, an uncomfortable social impact. Companies seem afraid to trim their employee counts and instead choose to keep the numbers high, or even growing. This is illogical, especially for Big Tech. Alternatively dubbed “Big AI” in this post, shouldn’t they be the guinea pigs for their own AI hopes and dreams? Shouldn’t they be the ones showing the world the capabilities of their technology, its self-proclaimed reasoning and communication abilities?
Again, the equation here doesn’t add up. According to Big AI, their chatbots are adequate for a variety of tasks. They are building up their capacities, pouring massive amounts of money, water, energy, and hype into the GenAI machine. The “experts” are warning of the nearing Singularity, each proclaiming that their specific company’s tech will be the one to achieve human-level artificial general intelligence (HLAGI) — the “new new” term for AGI, don’t get me started. And still, there’s merely a whisper of automation in the conversation about the new AI economy.
Conjecture ahead: I feel that Big Tech and the like are afraid of the impact that automating away a vast number of workers would have. The PR nightmare that would ensue is almost incomprehensible. No company wants to be the one with fingers pointed at it, the scapegoat, the centre of attention, when it comes to the genesis of rapid automation. Being the historic first mover in displacing masses of humans from work and altering the way the economy functions forever isn’t a good look. This assumed corporate psychology means that no company will want to be first, meaning none will take the first step, meaning no automation plague.
This could also mean that when one company eventually does bite the bullet, out of notoriety, confidence, or a lack of self-awareness about the terror behind wholesale AI, other companies will immediately follow suit with layoffs and the like, much as all of Big Tech reacted when ChatGPT was released: caught with their pants down, sprinting to play catchup with OpenAI. A comparable economic earthquake and sprint may occur with automation.
Automation is on everyone’s mind, especially at the companies that want to be lightweight, tech-savvy, and highly profitable. The social pressure that comes with AI automation is uncanny. Like a ghost, we know it’s there, but we can’t really do anything about it. We can take precautions, but until supernatural events occur, it’s a waiting game. Attempting to predict exactly when this dormant volcano will erupt is silly. It could happen tomorrow or in the next decade. But I feel it will take just one major company to start the trend. Maybe it’s already happened in secret and the public doesn’t know. Maybe there are plans in the pipeline. Maybe this, maybe that.
I just feel that there’s something fishy about the state of automation. Everyone is talking about the effects of AI, the “Singularity” that’s near, and all that jazz. But no one seems to be relating any of it back to automation in a meaningful sense. There’s an aspect of cosmic horror when talk of automation hits the dinner table, namely whether we’ll descend into a solarpunkish utopia or a cyberpunkish dystopia. All bets are off. As said before, it’s a waiting game, so let’s wait and see.