TechDigits

Tech news
Friday, Apr 26, 2024

Are privacy concerns sparked by ChatGPT's woes really that bad?


As individuals, we need to take steps to protect our data and keep our personal information safe, Jose Blaya writes.

Since its launch, ChatGPT and the possibilities that AI brings have been hotly debated.

Whilst many have marvelled at this almost limitless tool that promises to magnify human intellect, concerns have also been raised about the largely unquantified risks that these intelligent learning platforms present.

Last month, Italy made a stand against ChatGPT, becoming the first Western nation to ban the platform “until ChatGPT respects privacy”.

So what are the privacy implications of these new tools, and how worried should we be about them?


Conversational AI's privacy black hole


Given the indiscriminate way in which ChatGPT gathers data, it can draw on a huge range of source material, including social media, blog posts, product reviews, chat forums and even email threads if publicly available.

This means that personal information and other data are being used without people’s knowledge or consent.



If you sign up to ChatGPT, you agree to a privacy policy that allows your IP address, browser type and settings to be stored, along with all of your interactions with ChatGPT and your wider internet browsing activity.

All of this can be shared with unspecified third parties “without further notice to you”.


Personal information could be exposed by the wrong person asking the right question


By analysing your conversations with it alongside your other online activity, ChatGPT develops a profile of each user’s interests, beliefs, and concerns.

This is also true of today’s search engines, but as an “intelligent” learning platform, ChatGPT has the potential to engage with both the user and the information it is given in a completely new way, creating a dialogue that might fool you into thinking you are speaking with another human, not an AI system.

ChatGPT draws all these inputs together and analyses them on a scale not previously possible in order to “answer anything”.


If asked the right question, it can easily expose the personal information of both its users and of anyone who has either posted or been mentioned on the internet.

Without an individual’s consent, it could disclose political beliefs or sexual orientation. In turn, this could amount to releasing embarrassing or even career-ruining information.

ChatGPT’s proven tendency to get things wrong and even make things up could lead to damaging and untrue allegations.

Some people will nonetheless believe them and spread false statements further in the belief that the chatbot has uncovered previously withheld and secret information.


Could basic safeguards tackle these issues?


Given the power of these machine learning systems, it’s difficult to build even basic safeguards into their programming.

Their entire premise is that they can analyse huge amounts of data, searching all corners of what is publicly available online and drawing conclusions from it very quickly.

There is no way to detect when the chatbot is collecting data without someone’s knowledge or consent, and without sources, there is no opportunity to check the reliability of the information you are fed.



We’ve already seen how easily people have managed to “jailbreak” current safeguards, which gives little hope that any further rules built into these platforms couldn’t be circumvented just as easily.


Privacy laws are not keeping pace


Privacy laws have a lot of catching up to do to address this new threat, the full extent of which we haven’t yet seen.

The way in which ChatGPT and others are using our data is already a clear violation of privacy, especially when it is sensitive and can be used to identify us.

Contextual integrity, a core principle of existing privacy laws, states that even when someone’s information or data is publicly available, it still shouldn't be revealed outside of its original context. This is another rule ignored by ChatGPT.



We have barely even touched on the data protection infringements inherent in the way AI chatbots learn.

There are currently no procedures for individuals to check what personal information about them is being stored, or to request that it be deleted, as you can with other companies.

Nor have we given consent for this data to be stored in the first place — just because it exists somewhere on the internet should not give ChatGPT the right to use it.


How can we protect our privacy in this new era of artificial intelligence?


Private Internet Access has been closely monitoring the privacy risks inherent in ChatGPT and other AI platforms.

With many competitors hotly chasing OpenAI’s lead, including Microsoft Bing, Google Bard and Chinese tech giant Baidu’s Ernie, and within a sector that is almost completely unregulated, the privacy and security implications are only growing.



Whilst embracing AI’s potential, we must stay vigilant about the privacy threat it presents. The laws and regulations protecting our privacy need to adapt.

As individuals, we need to take steps to protect our data and keep our personal information safe.

This includes thinking about exactly what we are happy to share online while knowing how easily a machine-learning platform can now find, extract, and share this information with anyone.

Ultimately, we need to be wary of how much trust we put in this new technology, questioning rather than blindly accepting the answers we are presented with.
