TechDigits

Tech news
Thursday, Mar 28, 2024

Will AI turn humans into 'waste product'?

A tech guru warns that robots that think for themselves may take over the world and use weapons of mass destruction to wipe out mankind. Is it time to worry?
This year, the BBC’s prestigious Reith Lectures will be delivered for the first time by a computer scientist. UK-born Stuart Russell, professor of computer science at the University of California, Berkeley, will look at ‘Living with Artificial Intelligence’ in a series of weekly broadcasts during December.

In a trailer for the lecture series, Russell was interviewed on BBC Radio 4’s Today on Monday. As confirmation of the old journalistic adage that “if it bleeds, it leads,” the conversation was dominated by gloomy prognostications about what AI might be doing to our society and even more nightmarish possibilities for the future. Never mind developing machines that can learn – we need to learn from the history of new technologies to treat both hype and horror with equal scepticism.

Artificial intelligence is already in use across society. Computers guess what we would like to watch next on YouTube and what products we might want to buy on Amazon, and they show us adverts based on our previous Google searches. More usefully, perhaps, machines can learn to identify cancerous growths on medical scans with great speed and accuracy, and to flag up potentially fraudulent financial transactions – no small thing when banks and other institutions process astonishing volumes of transactions around the clock.

Russell believes that AI is “not working necessarily to our benefit and the revelations we’ve seen recently from Facebook suggest media companies know it is ripping societies apart. These are very simple algorithms, so the question I’ll be asking in the lectures is what happens when the algorithms become much more intelligent than they are right now.”

This is an odd way of looking at things – that algorithms rather than human politics are the problem in society right now. Of course, dumb algorithms that push social-media posts to you on the basis that “if you liked that one, you might like this one” probably don't help get people out of their “echo chambers.” But people sticking with their own ‘tribe’ when it comes to politics is mostly about personal choice and an unwillingness to accept that people with a different view might have a point, not the work of evil computer algorithms.

Where Russell is really concerned is when AI goes beyond task-specific applications to the possibility of general-purpose AI. Instead of setting computers up to do particular things – like churning through vast amounts of data with a particular goal and learning how to do it better and faster than humans – general-purpose AI systems would be able to take on a wide variety of tasks and make decisions for themselves.

In particular, Russell worries about autonomous weapons that “can find targets, decide which targets to attack and then go ahead and attack them, all without any human being in the loop.” He fears that these AI-powered weapons of mass destruction could destroy whole cities or regions, or take out an entire ethnic group.

Russell collaborated on a startling and scary Black Mirror-style film, Slaughterbots, in 2017, showing one particularly gloomy vision of tiny, bee-like drones selecting and assassinating anyone who dares to disagree with the authorities.

But while some degree of learning and autonomy is in use already – for example, to take humans out of the dangerous business of clearing minefields – the combination of accurately recognising individuals or groups and deciding whom to attack, and how, is well beyond current capabilities. As a US drone strike in Afghanistan in August – which killed 10 people, including seven children – showed, it is possible for hi-tech, intelligence-led attacks to go horribly wrong. Moreover, if political and military leaders have few qualms about killing the innocent, why wait for fantasy AI-powered autonomous weapons when you can simply carpet-bomb whole areas, whether it is Dresden in the Second World War or Cambodia in the Seventies?

The all-conquering power of AI is, as things stand, just hype. Take driverless cars. Just a few years ago, they were the Next Big Thing. Google, Apple, Tesla and more poured billions into trying to develop them. Now they’re on the back burner because the difficulties are just too great. A year ago, Uber – once dreaming of fleets of robotaxis – sold off its autonomous vehicles division. As for robots and AI taking over our jobs, at best they will be a tool to improve the productivity of humans. Using computers to do bits of our jobs could be useful, but actually replacing teachers, lawyers or drivers is a whole different ball-game.

Silicon Valley seems to have a schizophrenic attitude to its own technology. On the one hand, the importance of artificial intelligence is exaggerated. On the other, we have doom-mongering speculation about AI systems gradually taking control of society, leaving human beings, in Russell’s words, as so much “waste product.” In truth, AI keeps confirming that it is both extremely useful for specific tasks and pretty dumb at anything beyond them.

According to Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute, we suffer from multiple misunderstandings about AI. First, task-specific AI and general AI involve completely different levels of difficulty. For example, getting computers to translate between languages has taken an enormous amount of work, but text- and voice-based systems are now getting pretty good. Getting two AI machines to hold a conversation, on the other hand, is much harder.

Second, many things that humans find easy are really difficult to automate. For example, we’ve evolved to scan the world quickly, pick out distinct things and figure out what is important right now. Computers find this extremely hard. Third, humans have a rich experience of the physical world through our senses which, researchers are finding, has a significant impact on how we think. Fourth, Mitchell argues, human beings develop common sense, built on experience and practice. AI systems can chuck ever-greater amounts of processing power at problems, but they struggle to replicate that. Elon Musk failed in his attempt to fully automate his Tesla factories – humans were simply irreplaceable for some tasks.

If we could cut out the boosterism about AI, we could see a useful group of technologies that can help us out in specific ways to make our lives easier. Equally, it would burst the bubble of all those catastrophists who think AI systems will take over the world. Ultimately, we’re still in control of the machines and they’re not about to replace us any time soon. With a bit of historical perspective, we can see that the fretting about AI is just the latest in a seemingly endless series of fearful spasms about new technology.