Opinion: AI is already altering reality, but it doesn’t have to end in doom

Ciarán O’Connor examines the good, bad, and ugly sides of GenAI.

YOU CAN NOW add “doomsayers” to our government’s lexicon. The term popped up recently in a document titled ‘Friend or Foe?’, a government review of how Artificial Intelligence might impact Ireland’s economy.

The authors describe two broad camps when it comes to AI adoption: optimists, who believe the technology will ultimately benefit society, and doomsayers, who expect it to bring overall negative impacts.

Many of us have concerns about AI, though we sometimes forget this technology has been a fixture in our lives for years. It subtly improves our iPhone photos, answers our questions on airline websites, and recommends our next Netflix binge. What has really focused minds in government is Generative AI (GenAI), an emerging technology that uses deep learning to analyse patterns and context in vast amounts of data and generate original outputs. In short, it can talk back to the user, giving the impression that it can think for itself.

This leads to the impressive conversational ability that we’ve become used to from contemporary GenAI chat tools like ChatGPT, but also has the potential to accelerate the creation of propaganda, hate and misinformation. It will definitely have widespread effects on the way we live and work. Whether that is for better or worse, we’ve yet to find out.

With 3.7 billion people voting in over 70 countries, it was predicted that 2024 might be the year AI-generated content caused chaos in elections worldwide. Those fears didn’t materialise on the scale anticipated, though that’s not to say AI won’t affect our own general election. And even if the worst predictions have not come to pass, AI is already altering reality for many.

Reality

The surge in AI-generated content has created a “fundamentally polluted information ecosystem” where voters now struggle to distinguish artificial from genuine material. This was the conclusion of my colleagues at the Institute for Strategic Dialogue, who analysed over a million social media posts about AI and the US presidential election on X, Reddit, and YouTube, collectively viewed billions of times.

From a sample of 300 posts, they found users misidentified genuine content as AI-generated 52% of the time.

A term you hear a lot in discussions of AI is the ‘liar’s dividend,’ a phenomenon which describes how awareness of AI fakes lets deceptive actors cast doubt on genuine material. In August, for example, many Trump supporters believed him when he falsely claimed photos of Kamala Harris greeting a crowd in Detroit were created with AI. Awareness of GenAI is already skewing our ability to separate fact from fiction, with many of us likely overestimating our own skill at spotting AI material. Platforms are failing too.

In October, Michael Healy-Rae TD became an AI canary in the coal mine when he posted the election’s first political deepfake: a poorly synced video of Taylor Swift endorsing him. Unconvincing and unmistakably fake. But despite pledges to label AI-generated content, Facebook, Instagram and X still haven’t flagged that video. If platforms fail to label a deepfake featuring one of the world’s most recognisable faces, shared by a prominent politician, can we trust them to catch subtler, more sophisticated AI content?

Thinking

Major shifts in communication technologies have always prompted us to rethink the ways we understand and interact with the world. From the invention of the printing press to the development of the internet, each new media technology has transformed not only society but also our thought processes. Now with AI entering more aspects of our lives, we find ourselves at another juncture. GenAI challenges us not only to reconsider how we assess the online world but to rethink how we think.

Artificial Intelligence will become more enmeshed in our lives with each passing year, so learning about its capabilities and its implications is fast becoming not just beneficial but essential for us all. That is a key motivation behind a podcast series that Dónal Mulligan, a media technology lecturer at Dublin City University, and I created called Enough About AI.

Through discussions about the history of computing (including a detour to Cork), the emerging models and companies involved in AI development, and the potential misuse of these tools, we hope to help people more confidently navigate what can be a complicated or confusing topic.

In addition to being better informed on the fundamental principles, it’s important that GenAI users also become aware of the privacy and energy concerns tied to these technologies. These models rely on vast datasets, which often include personal information, raising risks of data misuse and privacy violations. The energy required to train and run large models is also considerable, adding to their environmental footprint.

Future

Despite the challenges, it’s not all doom and gloom. There’s immense potential in this technology to advance society in ways we’re only beginning to understand. Dario Amodei, the co-founder of AI firm Anthropic, describes this vision in a recent essay on what he terms the “compressed 21st century.”

GenAI’s capabilities could allow us to identify patterns and solve complex problems in medicine, economic development or governance at an accelerated pace, compressing decades of progress into a few years.

That is a fascinating premise, but we must remain as measured and critical as we are hopeful and optimistic about the emerging and potential impacts of this technology. Regulation is essential, and while the EU’s AI Act is a positive first step, further initiatives encouraging the ethical use of GenAI will be paramount as we race into the future. Striking that balance will require input from optimists and doomsayers alike, whichever you may be.

Ciarán O’Connor is a Senior Analyst at the Institute for Strategic Dialogue, an NGO that researches online extremism, disinformation and hate, and co-host of the Enough About AI podcast, available on Spotify, Apple and all other platforms.
 
