According to a top official, the European Union is pressing tech companies such as Google and Meta to do more to combat false information, including by labeling text, images, and other content generated by AI.
Vera Jourova, vice president of the EU Commission, stated that the potential of a new generation of AI chatbots to produce sophisticated content and visuals in a matter of seconds presents “fresh challenges for the fight against disinformation.”
Jourova said she urged the signatories of the 27-nation bloc's voluntary agreement to combat disinformation, including Google, Meta, Microsoft, TikTok, and other tech firms, to focus their efforts on addressing the AI issue.
Jourova stated during a briefing in Brussels that online firms that have integrated generative AI into their services, including Google’s Bard chatbot and Microsoft’s Bing search engine, should implement measures to stop “malicious actors” from spreading misinformation.
Companies whose services could be used to spread AI-generated misinformation should deploy technology to “recognize such content and clearly label this to users,” she said.
EU laws, according to Jourova, are intended to safeguard free speech, but when it comes to AI, “I don’t see any right for the machines to have the freedom of speech.”
The rapid development of generative AI technology, which can produce text, images, and video that closely resemble human-made work, has astounded some observers and alarmed others with its potential to reshape many aspects of daily life.
With the AI Act, Europe has assumed a leading position in the worldwide effort to regulate artificial intelligence; however, the law still needs to receive final approval and won’t go into effect for a number of years.
EU officials are concerned that they need to act more quickly to keep pace with the rapid development of generative AI. The bloc is introducing a new set of rules this year to protect people from harmful online content.
The voluntary commitments in the disinformation code will soon become legal requirements under the EU’s Digital Services Act, which by the end of August will compel the largest digital companies to better police their platforms and protect users from hate speech, disinformation, and other harmful material.
However, Jourova said, those businesses need to start labeling AI-generated material right away.
The majority of those digital behemoths have already agreed to abide by the EU code, which obliges businesses to track their progress in battling misinformation and report on it regularly.
Twitter dropped out of the code last month, in what appeared to be Elon Musk’s latest effort to relax rules at the social media firm since he acquired it last year.
Jourova harshly criticized the exit, calling it a mistake.
“Twitter made a difficult decision. They opted for conflict,” she remarked. “Make no mistake, Twitter has attracted a lot of attention by breaking the code, and its actions and compliance with EU law will be vigorously and urgently scrutinized.”