
Opinion: Is it time for us to talk about creating AI-free spaces?

Law professor Antonio Pele looks at the rise of AI and asks if we’re doing enough to protect ourselves along the way.

IN DAN SIMMONS’ 1989 sci-fi classic Hyperion, the novel’s protagonists are permanently connected to an artificial intelligence network known as the “Datasphere”, which feeds information directly to their brains. Knowledge is available instantly, but the ability to think for oneself is lost.

More than 30 years after Simmons’ novel was published, the rising impact of AI on our intellectual abilities might be thought of in similar terms. To mitigate these risks, I offer a solution that can reconcile AI’s progress with the need to respect and preserve our cognitive capacities.

The benefits of AI for human well-being are wide-ranging and well publicised. Among them is the technology’s potential to advance social justice, combat systemic racism, improve cancer detection, mitigate the environmental crisis and boost productivity.

However, the darker aspects of AI are also coming into focus, including racial bias, its capacity to deepen socio-economic disparities and manipulate our emotions and behaviour.

The West’s first AI rulebook?

In spite of the growing risks, there are still no binding national or international rules regulating AI. That is why the European Commission’s proposal for a regulation on artificial intelligence is so relevant.

The EC’s proposed AI Act, the latest draft of which was recently approved by two European Parliament committees, assesses the potential risks inherent in the technology’s use and classifies them into three categories: “unacceptable”, “high” and “other”. In the first category, the AI practices that would be forbidden are those that:

  • Manipulate a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.
  • Exploit the vulnerabilities of a specific group of persons (e.g., age, disabilities) so that AI distorts the behaviour of these persons and is likely to produce harm.
  • Evaluate and classify people (e.g., social scoring).
  • Employ real-time facial recognition in public spaces for the purpose of law enforcement, except in specific cases (e.g., terrorist attacks).

In the AI Act, the notions of “unacceptable” risk and harm are closely related. These are important steps, and they reveal the need to protect specific activities and physical spaces from AI’s interference. My colleague Caitlin Mulholland and I have shown the need for stronger regulation of AI and facial recognition to protect basic human rights such as privacy.

This is particularly true of recent developments in AI that involve automated decision-making in the judicial field and in migration management. Debates around ChatGPT and OpenAI also raise concerns over the technology’s impact on our intellectual capacities.

AI-free sanctuaries

These cases show concern over deploying AI in sectors where human rights, privacy and cognitive abilities are at stake. They also point to the need for spaces where AI activities should be strongly regulated.

I argue these areas can be defined through the ancient concept of sanctuaries. In an article on “surveillance capitalism”, Shoshana Zuboff presciently refers to the right of sanctuary as an antidote to power, taking us on a tour of sacred sites, churches and monasteries where oppressed communities once found refuge. Against the pervasiveness of digital surveillance, Zuboff insists on the right of sanctuary through the creation of robust digital regulation so that we can enjoy a “space of inviolable refuge”.

The idea of “AI-free sanctuaries” does not imply the prohibition of AI systems, but a stronger regulation of the applications of these technologies.

In the case of the EU’s AI Act, this implies a more precise definition of the notion of harm. At present, however, there is no clear definition of harm in the proposed legislation, nor at the level of member states. As Suzanne Vergnolle argues, a possible solution would be to find shared criteria among European member states that better describe the types of harm resulting from manipulative AI practices. Collective harms based on race and socio-economic background should also be considered.

To implement AI-free sanctuaries, regulations that preserve our cognitive and mental integrity should be enforced. A starting point would be to enforce a new generation of rights – “neurorights” – that would protect our cognitive liberty amid the rapid progress of neurotechnologies. Roberto Andorno and Marcello Ienca hold that the right to mental integrity – already protected by the European Court of Human Rights – should go beyond cases of mental illness and address unauthorised intrusions, including by AI systems.

AI-free sanctuaries: a manifesto

To that end, I would like to suggest a right to “AI-free sanctuaries”, encapsulated in the following (provisional) articles:

  • The right to opt out. All individuals have the right to opt out of AI support in sensitive areas of their choosing, for a period of time they decide. This may entail either the complete non-interference of AI devices or only a moderate level of interference.
  • No sanctions. Opting out of AI support must never entail any economic or social disadvantage.
  • The right to human determination. All individuals have the right to a final determination made by a human person.
  • Sensitive areas and people. In collaboration with civil society and private actors, public authorities will define areas that are particularly sensitive (e.g., education, health), as well as groups of people, such as children, who should not be exposed, or only moderately exposed, to intrusive AI.

AI-free sanctuaries in the physical world

Until now, “AI-free spaces” have been applied unevenly, from a strictly spatial point of view. Some US and European schools have chosen to banish screens from classrooms – the so-called “low-tech/no-tech education” movement.

Many digital education programs rely on designs that can foster addiction, while public and underfunded schools increasingly rely on screens and digital tools, deepening the social divide.

Even outside of controlled settings such as classrooms, AI’s reach is expanding. To push back, between 2019 and 2021 a dozen US cities passed laws restricting or prohibiting the use of facial recognition for law-enforcement purposes. Since 2022, however, many cities have been backing off in response to a perception of rising crime. And despite the EC’s proposed legislation, AI video surveillance cameras will monitor the 2024 Olympic Games in Paris.

Despite its potential to reinforce inequalities, facial-analysis AI is being used in some job interviews. Fed with the data of candidates who were successful in the past, such AI tends to select candidates from privileged backgrounds and exclude those from more diverse ones. Such practices should be prohibited.

AI-powered Internet search engines should also be prohibited, as the technology is not ready to be used at this level. Indeed, as Melissa Heikkilä points out in a 2023 MIT Technology Review article, “AI-generated text looks authoritative and cites sources, [which] could ironically make users even less likely to double-check the information they’re seeing”. There’s also a measure of exploitation, as “the users are now doing the work of testing this technology for free.”

Permitting progress, preserving rights

The right to AI-free sanctuaries would allow the technical progress of AI while simultaneously protecting the cognitive and emotional capacities of all individuals. Being able to opt out of the use of AI is essential if we want to preserve our ability to acquire knowledge and experience in our own way, and to preserve our moral judgement.

In Dan Simmons’ novel, a reborn “cybrid” of the poet John Keats is disconnected from the Datasphere and is thus able to resist the AIs’ takeover.

This point is instructive, since it also reveals the relevance of debates on AI’s interference in art, music, literature and culture. Indeed, beyond the copyright issues they raise, these human activities are closely tied to our imagination and creativity, and those capacities are the cornerstone of our ability to resist and to think for ourselves.

Antonio Pele is Associate Professor at the Law School of PUC-Rio University, Marie Curie Fellow at IRIS/EHESS Paris, and MSCA Fellow at the Columbia Center for Contemporary Critical Thought (CCCCT) with the HuDig19 Project, Université Paris Nanterre – Université Paris Lumières. This article is republished from The Conversation under a Creative Commons license. Read the original article.
