
AI poses 'extinction' risk to humanity if it grows too advanced, experts say

A statement signed by dozens of specialists said tackling the risks from AI should be ‘a global priority’ as important as preventing ‘pandemics and nuclear war’.

GLOBAL LEADERS SHOULD be working to reduce “the risk of extinction” from artificial intelligence technology, a group of industry chiefs and experts warned today.

A one-line statement signed by dozens of specialists, including Sam Altman whose firm OpenAI created the ChatGPT bot, said tackling the risks from AI should be “a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

ChatGPT burst into the spotlight late last year, demonstrating an ability to generate essays, poems and conversations from the briefest of prompts.

The program’s wild success sparked a gold rush with billions of dollars of investment into the field, but critics and insiders have raised the alarm.

Common worries include the possibility that chatbots could flood the web with disinformation, that biased algorithms will churn out racist material, or that AI-powered automation could lay waste to entire industries.

Superintelligent machines 

The latest statement, housed on the website of US-based non-profit Center for AI Safety, gave no detail of the potential existential threat posed by AI.

The center said the “succinct statement” was meant to open up a discussion on the dangers of the technology.

Several of the signatories, including Geoffrey Hinton, who created some of the technology underlying AI systems and is known as one of the godfathers of the industry, have made similar warnings in the past.

Their biggest worry has been the rise of so-called artificial general intelligence (AGI) – a loosely defined concept for a moment when machines become capable of performing wide-ranging functions and can develop their own programming.

The fear is that humans would no longer have control over superintelligent machines, which experts have warned could have disastrous consequences for the species and the planet.

Dozens of academics and specialists from companies including Google and Microsoft – both leaders in the AI field – signed the statement.

It comes two months after Tesla boss Elon Musk and hundreds of others issued an open letter calling for a pause in the development of such technology until it could be shown to be safe.

However, Musk’s letter sparked widespread criticism that dire warnings of societal collapse were hugely exaggerated and often reflected the talking points of AI boosters.

US academic Emily Bender, who co-wrote an influential paper criticising AI, said the March letter, signed by hundreds of notable figures, was “dripping with AI hype”.

Bender is among the most prominent critics in the debate around large language models like ChatGPT, often pointing out that such algorithms do not actually understand any of the prompts they are given nor the answers they provide.

“When you read the output of ChatGPT, it’s important to remember that despite its apparent fluency and despite its ability to create confident sounding strings that are on topic and seem like answers to your questions, it’s only manipulating linguistic form,” she wrote in a blogpost on Medium. 

“It’s not understanding what you asked nor what it’s answering, let alone “reasoning” from your question + its “knowledge” to come up with the answer. The only knowledge it has is knowledge of distribution of linguistic form.”

In another blogpost, Bender conceded that these systems pose very real threats to society, but argued that the fanfare, hype and doom-mongering have been over the top.

“Puff pieces that fawn over what Silicon Valley tech bros have done, with amassed capital and computing power, are not helping us get any closer to solutions to problems created by the deployment of so-called “AI”. On the contrary, they make it harder by refocusing attention on strawman problems,” she wrote.

‘Surprisingly non-biased’

Bender and other critics have slammed AI firms for refusing to publish the sources of their data or reveal how it is processed – the so-called “black box” problem.

Among the criticism is that the algorithms could be trained on racist, sexist or politically biased material.

Altman, who is currently touring the world in a bid to help shape the global conversation around AI, has hinted several times at the global threat posed by the technology his firm is developing.

“If something goes wrong with AI, no gas mask is going to help you,” he told a small group of journalists in Paris last Friday.

But he defended his firm’s refusal to publish the source data, saying that what critics really wanted to know was whether the models were biased.

“How it does on a racial bias test is what matters there,” he said, adding that the latest model was “surprisingly non-biased”.

 – © AFP 2023
