Tuesday, April 23, 2024

AI at work: Staff ‘hired and fired by algorithm’



The Trades Union Congress (TUC) has warned about what it calls “huge gaps” in UK employment law over the use of artificial intelligence at work.

The TUC said workers could be “hired and fired by algorithm”, and new legal protections were needed.

Among the changes it is calling for is a legal right to have any “high-risk” decision reviewed by a human.

TUC general secretary Frances O’Grady said the use of AI at work stood at “a fork in the road”.

“AI at work could be used to improve productivity and working lives. But it is already being used to make life-changing decisions about people at work – like who gets hired and fired.

“Without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment – especially for those in insecure work and the gig economy,” she warned.

Many workplaces already use automated decision making for simple tasks. For example, Uber assigns driving jobs to its drivers automatically, by computer, and Amazon is known to use AI monitoring systems to watch its staff in its warehouses.


And many firms already use an automated system with no human oversight in the first stage of the hiring process, to narrow the field.
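A first-stage screening system of the kind described above can be surprisingly simple. The following is a hypothetical sketch, not any real employer's system: the keywords, threshold, and applicant data are all invented for illustration, to show how a purely automated filter narrows the field with no human judgement involved.

```python
# Hypothetical first-stage CV screener. The keywords and threshold
# below are invented for illustration only.
REQUIRED_KEYWORDS = {"python", "sql", "degree"}  # assumed criteria
THRESHOLD = 2  # minimum matches needed to reach a human reviewer

def screen_cv(cv_text: str) -> bool:
    """Return True if the CV passes the automated first stage."""
    words = set(cv_text.lower().split())
    return len(REQUIRED_KEYWORDS & words) >= THRESHOLD

# Invented applicants, for illustration.
applicants = {
    "alice": "BSc degree, 5 years Python and SQL experience",
    "bob": "Self-taught programmer, strong portfolio",
}
shortlist = [name for name, cv in applicants.items() if screen_cv(cv)]
print(shortlist)
```

Note that the second applicant is rejected outright, strong portfolio or not, because the filter only counts keywords – the kind of opaque, unreviewable outcome the TUC's proposed right to human review is aimed at.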

But as AI becomes more sophisticated, the fear is that it will be entrusted with more serious, high-risk decisions, such as analysing performance metrics to decide who should be first in line for promotion – or for dismissal.

Even when a human is involved, the TUC report warns, automated decision making can effectively determine the outcome.

Human agency

“A human might undertake some formal task, such as handling a document, but the human agency in the decision is minimal,” the authors write.

“Sometimes the human decision making is largely illusory, for instance where a human is ultimately involved only in some formal way in the decision what to do with the output from the machine.”

The TUC’s report, written with the aid of employment rights lawyers and the AI Law Consultancy, argues that the law has failed to keep pace with the rapid progress of AI in recent years.


The union body is calling for:

  • An obligation on employers to consult unions on the use of “high risk” or “intrusive” AI at work
  • The legal right to have a human review decisions
  • A legal right to “switch off” from work and not be expected to answer calls or emails
  • Changes to UK law to protect against discrimination by algorithm

Discrimination by algorithm has been well-documented in recent years, often as an unintentional side-effect of using systems that fail to account for racial bias.

One high-profile example is in facial recognition technology, which has in the past been trained to recognise white faces more easily than those from other backgrounds. Such problems led IBM to abandon some of its efforts with the technology last year, labelling it as “biased”.
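The mechanism behind this kind of disparity is well understood: a model fitted mostly to one group's data performs worse on groups it rarely saw in training. The sketch below uses entirely invented one-dimensional data and a toy nearest-centroid "verifier" – an assumption for illustration, not how any real facial recognition system works – to show the effect.

```python
# Toy illustration (invented data) of bias from unrepresentative
# training data: a nearest-centroid "verifier" fitted on a sample
# dominated by one group.
import random

random.seed(0)

def sample(group_mean, n):
    """Draw n 1-D feature values around a group-specific mean."""
    return [random.gauss(group_mean, 1.0) for _ in range(n)]

# Group A dominates the training data; group B is barely present.
train_a = sample(0.0, 200)  # well represented
train_b = sample(3.0, 5)    # under-represented

# The learned reference point reflects group A almost exclusively.
centroid = sum(train_a + train_b) / len(train_a + train_b)

def recognised(x, tolerance=1.5):
    # A face "verifies" only if it lies close to the learned centroid.
    return abs(x - centroid) <= tolerance

test_a = sample(0.0, 100)
test_b = sample(3.0, 100)
rate_a = sum(recognised(x) for x in test_a) / 100
rate_b = sum(recognised(x) for x in test_b) / 100
print(f"group A pass rate: {rate_a:.2f}, group B pass rate: {rate_b:.2f}")
```

Group B fails verification far more often than group A – not through any explicit rule about group membership, but simply because the system was trained on data that did not represent it. This is the unintentional side-effect the TUC's proposed anti-discrimination reforms target.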

The TUC also pointed to recent reports that Uber Eats delivery drivers had been fired because facial recognition software was unable to recognise their faces.

That led to drivers with 100% ratings and thousands of deliveries under their belts being fired for failing to complete an ID check, the affected drivers claimed. Uber denies this, saying a human review is always involved before it drops drivers from its platform.


‘Exceptionally dangerous’

The authors of the report for the TUC, Robin Allen and Dee Masters from Cloisters law firm, said while AI could be beneficial, “used in the wrong way it can be exceptionally dangerous”.

“Already important decisions are being made by machines,” the pair said in a joint statement.

“Accountability, transparency and accuracy need to be guaranteed by the legal system through the carefully crafted legal reforms we propose. There are clear red lines, which must not be crossed if work is not to become dehumanised.”
