
EU unveils 'landmark' rules on Artificial Intelligence to curb 'high-risk' technologies

“Generalised surveillance” would be off-limits, as well as any tech “used to manipulate the behaviour, opinions or decisions” of citizens.

THE EU HAS unveiled a plan to regulate the sprawling field of artificial intelligence, aimed at easing public fears of Big Brother-like abuses by imposing checks on technology deemed “high-risk”.

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” EU competition chief Margrethe Vestager said.

“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”

The European Commission, the bloc’s executive arm, has been preparing the proposal for more than a year and a debate involving the European Parliament and 27 member states is to go on for months more before a definitive text is in force.

Brussels is looking to set the terms with a first-ever legislative package on AI and to catch up with the US and China in a sector spanning everything from voice recognition to insurance and law enforcement.

It insists that by laying out a clear framework for companies across the bloc’s 27 member states it will help promote innovation.

The bloc is trying to learn the lessons after largely missing out on the internet revolution and failing to produce any major competitors to match the giants of Silicon Valley or their Chinese counterparts.

But the draft rules have sparked competing complaints from all sides of the debate, with big tech warning bureaucracy could suffocate development and civil liberties groups complaining the proposals have too many “loopholes”.

‘High-risk’

“Today’s proposals aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market,” EU internal market commissioner Thierry Breton said.

The draft regulation lays out a “risk-based approach” that would lead to bans on a limited number of uses that are deemed as presenting an “unacceptable risk” to EU fundamental rights.


This would make “generalised surveillance” of the population off-limits as well as any tech “used to manipulate the behaviour, opinions or decisions” of citizens.

Anything resembling a social rating of individuals based on their behaviour or personality would also be prohibited.

On the rung below, the regulation requires companies to get a special authorisation for applications deemed “high-risk” before they reach the market.

These systems would include “remote biometric identification of persons in public places” – including facial recognition – as well as “security elements in critical public infrastructure”.

Special exceptions are envisioned for allowing the use of mass facial recognition systems in cases such as searching for a missing child, averting a terror threat, or tracking down someone suspected of a serious crime.

Military applications of artificial intelligence will not be covered by the rules.

Other uses, not classified as “high risk”, will have no additional regulatory constraints beyond existing ones.

Spot, a robot with dog-like movements, walks past a dog in Cathedral Square, Erfurt. (DPA / PA Images)

Infringements, depending on their seriousness, may bring heavy fines for companies.

‘Loopholes’

Google and other tech giants are taking the EU’s AI strategy very seriously as Europe often sets a standard on how tech is regulated around the world.

Last year, Google warned that the EU’s definition of artificial intelligence was too broad and that Brussels must refrain from over-regulating a crucial technology.

Alexandre de Streel, co-director of the Centre on Regulation in Europe think tank, said there is a difficult balance to be struck between protection and innovation.

The text “sets a relatively open framework and everything will depend on how it is interpreted,” he told AFP.

Tech lobbyist Christian Borggreen, from the Computer and Communications Industry Association, welcomed the EU’s risk-based approach, but warned against stifling industry.

“We hope the proposal will be further clarified and targeted to avoid unnecessary red tape for developers and users,” he said in a statement.

Civil liberties activists warned that the rules do not go far enough in curbing potential abuses of the cutting-edge technology.

“Although the proposal technically bans the most problematic uses of AI, there are still loopholes for Member States to go through to get around the bans,” said Orsolya Reich of umbrella group Liberties.

“There are way too many problematic uses of the technology that are allowed, such as the use of algorithms to forecast crime or to have computers assess the emotional state of people at border control.”

AFP