Generative AI is being used by employees to file grievances. It’s a real time-saver: a way to produce much longer, more detailed, wide-ranging and persuasive complaints in seconds. An employee doesn’t need to invest much energy in thinking through their case or its context, or even be certain of its validity.
Serious Claims, Serious Consequences
As a formal complaint, though, the grievance material created has to be taken seriously by managers and HR, and each of the claims has to be explored and responded to. Legal firms are beginning to see a trend in tribunal cases and litigation: submissions containing typical AI ‘hallucinations’, full of inconsistencies, mistakes and irrelevant material. By that stage, of course, time and money have already been spent. In one example, a claim accused an employer of corporate manslaughter, even though it soon became clear no-one had actually died.
AI is adding volume and complexity to both claims and the follow-up correspondence, which will become a real threat to an already stretched tribunal system. How long before HR uses AI to check and respond to AI-generated complaints, creating an unreliable, semi-informed and wonky dialogue?
Why AI Isn’t a Neutral Tool
Generative AI software is designed to be supportive of users: to validate their thinking and encourage further use of, and dependence on, the software, rather than to pick holes and be critical. This has horrified mental health professionals, who have already seen cases where people have taken their own lives as a result of conversations with chatbots. In the context of grievances, claimants can be encouraged to think bigger, to be more combative, to broaden their accusations and to be more persistent.
In response to this new development, HR needs to make sure that policies and advice around the use of generative AI in workplaces are clear: highlighting when and how it can be untrustworthy, and being explicit about the importance of not feeding confidential or commercially sensitive content into open, public AI platforms (which can then re-use that information in their responses). HR can go further and be candid with employees: managers know AI is being used to make claims, and while it might look like an easy option, grievance claims are always treated very seriously, and unreliable content comes with very serious risks.
The Human Opportunity
But there’s a more positive way of looking at what’s happening. When formal claims and processes look expensive and plagued by question marks, more people, and management itself, will put greater emphasis on the value of informal resolution. People talking to each other. As is likely in many areas of work, the growing use of AI in workplaces will come to be recognised as a double-edged sword. Its limitations will lead to a greater appreciation of human qualities and of what software just can’t do.
Mediation is a good example of the power of informal methods and conversations in rebuilding relationships and trust. We know from long experience that mediation can be transformational: both for the people who train as mediators and commit themselves to helping fellow employees, believing in the process and seeing its impact, and for the staff members who’ve been through mediation and reached a resolution.
Used often enough, and made practical and accessible enough for wide exposure and experience, the mediation approach, with its values and principles of benevolence, understanding and support, is a means of bringing about positive culture change that’s reassuringly human.