Who fires whom? The day an AI agent says you’re redundant

Until now, artificial intelligence has been presented as a co-pilot designed to help people with their daily tasks: writing emails, summarising reports or analysing large volumes of data. But the discourse is beginning to change. More and more organisations are experimenting with AI agents capable of measuring productivity, generating performance metrics and recommending staff reorganisations. And with this comes an unsettling debate: what will happen the day when a digital system suggests that a worker is no longer needed?
From efficiency to ethical dilemma
The appeal of these systems is undeniable. In environments where efficiency is the priority, AI can identify bottlenecks, detect redundancies or pinpoint repetitive tasks that could be automated. In theory, the promise is to free up teams to focus on higher-value work.
However, the flip side poses a major ethical dilemma. When an AI agent recommends that a role or a person should be discontinued, who takes responsibility for that decision? The algorithm that did the calculation? The technology provider that trained it? Or the steering committee that blindly trusted its recommendations?
The risk is not just occupational, but reputational. A company that allows an AI to dictate layoffs may find itself caught in a social and legal storm of unforeseeable consequences.
The shadow of bias
Artificial intelligence systems are not neutral. They are trained on historical data that reflect human patterns – and therefore human biases as well. As Cathy O’Neil warned in her influential book Weapons of Math Destruction, algorithms can become “weapons of math destruction” when they amplify biases instead of correcting them.
An agent assessing productivity may inadvertently penalise those who work in less visible ways – such as support tasks or emotional management in a team – and favour those who produce measurable deliverables. What appears to be an objective decision may actually reinforce invisible inequalities.
Social fear as a driver of headlines
It is no coincidence that conversation on social media and in job forums about whether AI will “take our jobs” has intensified. And few phrases generate as much impact as the possibility of an algorithm deciding that someone is redundant.
Responsible frameworks: the business response
Given this scenario, at Pasiona we believe that the key is not to deny the potential of AI, but to establish clear limits.
We advocate that generative agents should be conceived as empowerment tools, never as labour judges.
Our proposal involves platforms such as AIgents Manager, which allow organisations to coordinate multiple AI agents in a controlled way, with traceability and auditing of each recommendation. The premise is simple: AI can suggest, but the final word must remain human.
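The premise above – every AI recommendation is logged and nothing takes effect without human sign-off – can be sketched in a few lines of Python. This is a minimal illustration of the human-in-the-loop pattern, not the actual AIgents Manager API; all class and field names here are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Recommendation:
    """A single AI suggestion, recorded with its rationale for auditing."""
    agent: str                       # which AI agent produced it
    subject: str                     # the team or task it concerns
    suggestion: str
    rationale: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    approved_by: Optional[str] = None  # stays None until a human signs off

class AuditTrail:
    """Every suggestion is logged; none is actionable without a named human reviewer."""
    def __init__(self) -> None:
        self._log: list[Recommendation] = []

    def record(self, rec: Recommendation) -> Recommendation:
        self._log.append(rec)
        return rec

    def approve(self, rec: Recommendation, reviewer: str) -> None:
        # The final word remains human: only a named reviewer can validate a suggestion.
        rec.approved_by = reviewer

    def actionable(self) -> list[Recommendation]:
        # Only human-approved recommendations can ever be acted on.
        return [r for r in self._log if r.approved_by is not None]

trail = AuditTrail()
rec = trail.record(Recommendation(
    agent="productivity-agent",
    subject="team-42",
    suggestion="automate the weekly reporting task",
    rationale="task is repetitive and fully rule-based",
))
assert trail.actionable() == []      # a bare AI suggestion carries no authority
trail.approve(rec, reviewer="hr-lead")
assert len(trail.actionable()) == 1  # actionable only after human sign-off
```

The design point is that the approval step is structural, not optional: the system has no code path from suggestion to action that bypasses a human reviewer.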
For those who want to explore this vision further, we have published a whitepaper on the subject.
Can AI recommend dismissals?
The answer, strictly speaking, is that it already does in some cases. Large HR platforms incorporate AI modules that suggest workforce restructuring based on performance and cost data. What changes is the degree of automation: while some companies limit the role of AI to preliminary analysis, others are flirting with systems that directly suggest who to replace or which tasks to eliminate.
The problem is that such suggestions can become, in practice, automatic decisions. If a committee receives an “objective” report with AI metrics, the temptation to follow it to the letter is enormous.
Reputation at stake
Beyond efficiency, companies must consider their most fragile asset: trust. A headline that says “an AI fired a worker at X company” can become a viral case with a devastating reputational cost. Social sensitivity to job casualisation is high, and the role of technology in replacing jobs generates immediate media attention.
Therefore, the most responsible approach is to draw clear red lines: AI can produce reports, generate metrics and point out areas for improvement, but never make final decisions about people.
An inevitable but governable future
Experts agree that automation will continue to advance and affect millions of jobs. The question is how we manage this transition. Just as the industrial revolution once displaced professions but created new ones, artificial intelligence will transform the labour market in directions that are still difficult to foresee.
The challenge for companies will be twofold: to harness the power of AI without being tempted to turn it into a labour arbiter, and to demonstrate with facts that their ethical values are above the cold logic of an algorithm.
Conclusion: rule AI before it rules employment
The image of a digital agent signing a letter of dismissal is still symbolic, but it points to a real debate that will mark the next decade. The day when an algorithm recommends that an employee is redundant, the response cannot be a simple “the machine said so”.
The future of work is not about choosing between humans and machines, but about building frameworks where artificial intelligence strengthens people without stripping them of their dignity. Companies that anticipate this dilemma by establishing transparency, auditing and clear boundaries will not only be safe from viral headlines, but also better positioned to lead in a market increasingly sensitive to ethics and trust.