
How AI accelerates the bullshitisation of our nine-to-five jobs
In today’s Open Column submission, Sheilla Njoto pulls back the veil on the euphemised yet vague sheen of AI to reveal yet another increasingly insidious by-product of the technology in today’s world of human work.
Words by Whiteboard Journal
As an AI Sociologist, I’m often asked two questions, usually posed with equal urgency. The first is technical and tiresome: “How can I prompt ChatGPT more effectively?”—as though prompt syntax will shield us from structural change. The second is existential: “Will AI take my job?” And it’s a question worth lingering on, because behind it lies a quiet dread that the ground beneath our labour is eroding, and we have no vocabulary yet for the fall.
To preface my answer to that, let me get this straight: my deepest concern around AI isn’t about the shift per se. Change, after all, isn’t inherently bad. But the more urgent, more unsettling question emerges when we zoom out and examine what, and whom, AI ultimately serves. And despite all the glossy rhetoric of empowerment, AI isn’t being designed to serve people first and foremost. Above all, I often suggest that AI serves capitalism—and not the benign, Adam-Smith-esque version, but the extractive version. It’s a system optimised to expand productivity, minimise cost, and externalise harm, with near-total detachment from its human consequences.
This is why the question we should be asking is not simply whether AI will replace our jobs, but also whether the jobs left for us will still hold any meaning, dignity, or purpose; or whether they will become increasingly hollow, defined not by value or substance, but by visibility and mere simulation. And while some AI tools appear to help individuals, they do so by coercing us to match the pace of a machine-driven market. Said plainly, we used to joke that we were slaves of capitalism; the rise of AI has likely turned that irony into reality.
The paradox of being ‘too poor to replace’ in the Global South
The uncomfortable truth that we collectively agree on is that AI doesn’t replace labour equitably. And this isn’t new.
History has shown us that innovation left thousands of workers displaced by machines that could perform their work faster and more cheaply, like mechanised looms and printing presses. The brunt was borne by the working class, particularly blue-collar workers, while white-collar workers, whose roles demanded interpretation, coordination, or judgment, remained relatively insulated. In fact, the upheaval gave rise to new white-collar jobs that didn’t previously exist: administrative staff, software engineers, and compliance officers. They emerged because industrial systems needed human cognition more than ever to supervise and optimise what machines could not yet manage alone.
But now that industries are increasingly dominated by AI and, more recently, GenAI, the historical script is shifting. White-collar work is no longer exempt from automation. Cognitive tasks once considered uniquely human—writing, designing, coding—are increasingly being handled at scale by machines. The existential question has now breached the glass walls of offices: Will my job be next?
Despite this newfound anxiety, the structural inequality of impact remains intact. AI’s societal consequences are, once again, deeply classist. In countries with high labour costs and strong institutional protections, AI is deployed to streamline operations, displacing many workers in the process.
Meanwhile, in most of the Global South, where labour remains cheap and protections are limited, the calculus changes. Here, automation is often postponed, not due to ethical considerations or technological lag, but because hiring humans is still cheaper than hiring machines.
The cruel paradox is this: many workers are spared not because they’re valued, but because they are undervalued. The only reason some workers have not yet been made redundant by AI is that their continued exploitation is more profitable than innovation.
This isn’t a matter of technical feasibility, but of economic convenience. Where capital can extract value more efficiently through human labour, it will. This paradox of being too poor to replace is alarming, not only because it entrenches inequality but also because it reveals the underlying logic of AI adoption: it’s less about progress and more about arbitrage.
Bullsh*t jobs are now the infrastructure
For those who haven’t yet heard, the bullsh*t job is a thing. The late anthropologist David Graeber (2013) coined the term “bullsh*t jobs” to describe “a form of paid employment that is so completely pointless, unnecessary, or pernicious that even the employee cannot justify its existence, even though, as a part of the conditions of employment, the employee feels obliged to pretend that this is not the case.”
In the era of AI, I argue, this category isn’t only expanding, but is also becoming more necessary, especially to maintain the illusion of productivity in a system that values motion over meaning.
The jobs emerging in this AI-dominated era aren’t replacing human creativity or ethical labour. They’re proliferating in the margins of machine activity. Take, for instance, prompt engineers, AI content validators, synthetic brand strategists, and machine-learning auditors. Their primary function is to simulate value: to make machine outputs legible to other machines. And these jobs are epistemic shell games—they’re roles in which workers are hired not because their labour is needed, but because systems have become too complex to be left entirely unattended.
Then consider the growing layers of even vaguer, unjustifiable job titles, like digital management in brand marketing, future forecasters, or trend trackers whose job is to keep influencers from being outpaced by social media algorithms: jobs where teams of strategists and insight analysts organise around AI-generated content calendars, which are then A/B tested, engagement-optimised, and interpreted through sentiment dashboards. All of this is done to retrofit AI outputs into posts that feign creativity.
Imagine months of work culminating in a dashboard tracking the engagement rate of a post that was written by a model. Then imagine it were all deleted tomorrow: would society carry on unbothered, or perhaps even… relieved?
These roles are routinely hailed as “the jobs of the future,” yet a closer inspection reveals their disturbing vagueness and redundancy. I’m not saying that all AI-adjacent labour is pointless. But we have to admit that many of these roles don’t exist to truly serve people. They exist to serve systems. They emerge not from human need, but from the institutional obligation to appear busy, to justify budgets, or to demonstrate agility.
And ironically, the more sophisticated the technology becomes, the more performative the human layer around it grows. So, while we claim to build a future of liberation through automation, what we’re building is a future where human work becomes increasingly ornamental, where the value of labour is judged not by its social necessity, but by its proximity to machines. And in doing so, we risk hollowing out the very meaning of work, reducing it to a ritual of presence, a performance of oversight, or a pageant of productivity.
The accelerating rise of what I call ‘gold-collar workers’
One of the more insidious consequences of AI’s integration into the workplace is the way it distorts our sense of contribution. As the line between human and machine labour blurs, we begin to internalise a quiet panic: that our value must be constantly and increasingly defended, justified, and reasserted. The response, almost universally, is to overstate our so-called human “value-add”: creativity, emotional intelligence, contextual sensitivity, cultural nuance, among others. Even as the actual space for deploying these traits shrinks, we inflate their significance to prove that we’re still, somehow, irreplaceable.
The result isn’t a flourishing of human potential, but a culture of professional performativity. What matters isn’t simply the output, but the narrative around it: the carefully crafted story of why this particular work required a human touch. We convince ourselves and our clients that the insight we offered came from a kind of affective depth that AI can’t reach, or from an interpretive layer too subtle for machines to detect. “What I offer,” we say, “can’t be bought from an algorithm.” And increasingly, that is the product: not the work, but the claim that “it couldn’t have been done without us.” Suddenly, the labour of justification eclipses the labour itself; from there, we create demand simply to justify our own existence.
This logic, I argue, breeds a new category of labour: gold-collar work.
If blue-collar work is defined by physical production and white-collar work by administrative or cognitive tasks, gold-collar work involves roles that exist primarily to package outputs in ways that feel distinctively human and strategically irreplaceable. These jobs are shiny by design, aesthetic in nature, and often deliberately vague in function. They traffic in subjectivity and thrive on highly subjective measures of value, precisely because all that is measurable, optimisable, and objective has already been handed over to machines. And subjectivity, framed as taste, vision, or intuition, becomes the last bastion of human uniqueness, the final frontier for economic legitimacy.
Gold-collar work, I argue, isn’t about delivering something irrefutably useful. It’s about signalling exclusivity. Because when companies outsource tasks to AI, they may solve problems efficiently, but they rarely stand out. And to differentiate themselves in a market saturated by machine-made solutions, they purchase what AI cannot convincingly simulate: the illusion of freshness and the humanised veneer of strategy.
And this becomes the new currency of class: not competence or impact, but the ability to brand human labour as luxury, as something bespoke, boutique, and irreproducible. This is why gold-collar work is, arguably, the most bullsh*t of all bullsh*t jobs: not because it does nothing, but because its primary function is to not be AI, and in doing so, it turns deception into its feature product.
And it’s exhausting. It demands emotional labour without reciprocity, creativity without autonomy, and oversight without authorship. Yet, we cling to it because the alternative is worse. To admit that our work is becoming adjacent rather than central, reactive rather than generative, is to risk confronting the existential void in this economic shift.
But perhaps the more uncomfortable truth is this: gold-collar work doesn’t exist in spite of AI. It exists because of it. And in propping up the human brand, we may be keeping ourselves visible, but we are no longer building anything of real value.
The optimism-industrial complex and the urgent need to claim labour
In conferences and roundtables, we often hear a recurring refrain: AI will free us to pursue more meaningful work. The statement is often delivered with such confidence that to question it feels contrarian, even regressive. But meaningful work, in this context, is never clearly defined.
So, our assumption is that automation will handle the rote and the repetitive, leaving us with the enriching, the creative, the interpersonal. It sounds hopeful, but it also feels suspiciously like a deferral. Because that liberation has rarely, if ever, arrived.
In reality, the jobs that add value and require care, presence, or ethical accountability, like those of educators, nurses, or social workers, are still underpaid and undervalued. Meanwhile, AI-centric roles are expanding precisely where ambiguity is easiest to monetise.
If there’s a way forward, it will begin with a philosophical reckoning. We have to ask what kind of labour deserves protection and what kind deserves to disappear. The reality is that the jobs worth preserving are often the least scalable, the least profitable, and the most human. They involve contradiction, slowness, intimacy, and care. They bind communities, educate generations, challenge the status quo, and build ethical resilience. And for these reasons, they’re irreplaceable: not because AI can’t mimic them, but because the social value they produce cannot be reduced to output.
To resist the bullsh*tisation of work isn’t to romanticise labour either. It’s to demand that the future of work be measured by more than efficiency and productivity in a traditional sense. It’s to assert that value cannot be determined by price alone. And it’s to insist that dignity must be engineered into systems before they become too powerful to challenge.