TechDigits

Tech news
Thursday, Apr 25, 2024

EU is eyeing tough rules for ChatGPT. What would regulation look like?

ChatGPT has ushered in an explosion of interest in AI - and the EU is eyeing regulation with its new AI Act.

An EU official has said proposed rules regulating artificial intelligence (AI) will tackle concerns around the risks of products like ChatGPT.

Thierry Breton, the European Commissioner for the Internal Market, told Reuters the sudden rise in popularity of applications like ChatGPT and the associated risks underscore the urgent need for rules to be established.

"As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks. This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data," he told Reuters in written comments.

These were the first comments on ChatGPT from a senior EU official. Breton and his colleagues in the Commission are currently working with the European Council and Parliament on what will be the first legal framework on AI.

Launched just over two months ago, ChatGPT has ushered in an explosion of interest in AI and the uses it can now be put to.

Developed by OpenAI, ChatGPT lets users enter prompts from which it generates articles, essays, poetry - and even computer code.
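
In practical terms, that prompt-driven interaction typically happens through an API call: a text prompt goes in and generated text comes back. The minimal sketch below uses OpenAI's published Python client; the model name and prompt are placeholders chosen for illustration, not details drawn from this article.

```python
# Minimal sketch: sending a prompt to a generative model via the OpenAI Python client.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "user", "content": "Write a short poem about technology regulation."}
    ],
)

# The generated article, essay, poem or code comes back as plain text.
print(response.choices[0].message.content)
```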

With ChatGPT rated the fastest-growing consumer app in history, some experts have raised fears that systems used by such apps could be misused for plagiarism, fraud and spreading misinformation.

Microsoft, a major investor in OpenAI, declined to comment on Breton's statement. OpenAI - whose app uses a technology called generative AI - did not immediately respond to a request for comment.

OpenAI has said on its website it aims to produce artificial intelligence that "benefits all of humanity" as it attempts to build safe and beneficial AI.


The first AI regulatory framework


Under the EU's draft rules, ChatGPT is considered a general-purpose AI system that can be used for multiple purposes, including high-risk ones such as the selection of candidates for jobs and credit scoring.

Breton wants OpenAI to cooperate closely with downstream developers of high-risk AI systems to enable their compliance with the proposed AI Act.

The regulatory framework currently defines four levels of risk in AI - which is causing disquiet among some companies that fear their products will be labelled high risk.

The four levels, illustrated in the sketch after this list, are:

*  Unacceptable risk - Any system considered a clear threat to people "will be banned", according to the Commission - "from social scoring by governments to toys using voice assistance that encourages dangerous behaviour".

*  High risk - These are AI systems within critical infrastructures such as transport, or within educational or employment contexts where the outcome of exams or job applications could be determined by AI. Law enforcement contexts that put people’s fundamental rights at risk are also included as high risk.

*  Limited risk - These are systems with "specific transparency obligations," such as a chatbot identifying itself as an AI.

*  Minimal or no risk - The Commission says the "vast majority" of systems currently used in the EU are in this category, and they include AI-enabled video games and spam filters.
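
To make the tiered structure concrete, the sketch below models the four levels as a simple lookup table. The tier names follow the Commission's list above; the example systems and the obligations attached to each tier are illustrative assumptions, not wording from the draft Act.

```python
# Hypothetical sketch of the AI Act's four risk tiers as a simple lookup.
# Tier names follow the Commission's categories; example systems and obligations
# are illustrative assumptions, not text from the draft law.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict compliance requirements before the system can be placed on the market"
    LIMITED = "transparency obligations, e.g. disclosing that users are dealing with an AI"
    MINIMAL = "no additional obligations"


# Illustrative classification only - the final mapping depends on the adopted Act.
EXAMPLE_SYSTEMS = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "CV-screening tool for job applicants": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```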

"People would need to be informed that they are dealing with a chatbot and not with a human being," Breton said.

"Transparency is also important with regard to the risk of bias and false information".

Being in a high-risk category would lead to tougher compliance requirements and higher costs, according to executives of several companies involved in developing artificial intelligence.

A survey by the industry body appliedAI showed that 51 per cent of the respondents expect a slowdown of their AI development activities as a result of the AI Act.

Effective AI regulations should centre on the highest-risk applications, Microsoft president Brad Smith wrote in a blog post on Wednesday.

"There are days when I'm optimistic and moments when I'm pessimistic about how humanity will put AI to use," he said.

Generative AI models need to be trained on huge amounts of text or images to generate their responses - which can lead to allegations of copyright violations.

Breton said forthcoming discussions with lawmakers about AI rules would cover these aspects.
