The TUC warned that workers could be “hired and fired by algorithm”, and said new legal protections were needed.
Among the changes it is calling for is a legal right to have any “high-risk” decision reviewed by a human.
TUC general secretary Frances O’Grady said the use of AI at work stood at “a fork in the road”.
“AI at work could be used to improve productivity and working lives. But it is already being used to make life-changing decisions about people at work – like who gets hired and fired.
“Without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment – especially for those in insecure work and the gig economy,” she warned.
Many workplaces already use automated decision making for simple tasks. Uber, for example, assigns jobs to its drivers automatically, and Amazon is known to use AI systems to monitor staff in its warehouses.
And many firms already use an automated system with no human oversight in the first stage of the hiring process, to narrow the field.
But as AI becomes more sophisticated, the fear is that it will be entrusted with more serious, high-risk decisions, such as analysing performance metrics to decide who should be first in line for promotion – or to be let go.
A TUC report warns that, thanks to automated decision making, this can happen even when a human is nominally involved.
“A human might undertake some formal task, such as handling a document, but the human agency in the decision is minimal,” the authors write.
“Sometimes the human decision making is largely illusory, for instance where a human is ultimately involved only in some formal way in the decision what to do with the output from the machine.”
The TUC’s report, written with the aid of employment rights lawyers and the AI Law Consultancy, argues that the law has failed to stay abreast of quick progress in AI in recent years.
The union body is calling for:
* An obligation on employers to consult unions on the use of “high-risk” or “intrusive” AI at work
* The legal right to have a human review decisions
* A legal right to “switch off” from work and not be expected to answer calls or emails
* Changes to UK law to protect against discrimination by algorithm
Discrimination by algorithm has been well-documented in recent years, often as an unintentional side-effect of using systems that fail to account for racial bias.
One high-profile example is facial recognition technology, which in the past has recognised white faces more reliably than those of people from other ethnic backgrounds. Such problems led IBM to abandon some of its work on the technology last year, describing it as “biased”.
The TUC also pointed to recent reports of Uber Eats delivery drivers who alleged they had been fired because the company’s facial recognition software failed to recognise their faces.
That led to drivers with 100% ratings and thousands of deliveries under their belts being fired for failing to complete an ID check, the affected drivers claimed. Uber denies this, saying a human review is always involved before it drops drivers from its platform.
The authors of the report for the TUC, Robin Allen and Dee Masters of the Cloisters law firm, said that while AI could be beneficial, “used in the wrong way it can be exceptionally dangerous”.
“Already important decisions are being made by machines,” the pair said in a joint statement.
“Accountability, transparency and accuracy need to be guaranteed by the legal system through the carefully crafted legal reforms we propose. There are clear red lines, which must not be crossed if work is not to become dehumanised.”