
Opinion: When it comes to AI, we need to proceed with caution and build trust

Data Scientist Dr Oisín Boydell looks at the emerging trends in Artificial Intelligence and how they impact humans and society. 

“Move fast and break things” was Facebook’s original company motto. It sums up the attitude that prevailed, until just a few years ago, towards the development and advancement of disruptive technologies such as Artificial Intelligence (AI).

This new frontier attitude facilitated the breakneck speed at which AI has advanced, with global technology giants, innovative start-ups and academic research labs all racing to deliver the next big AI breakthrough.

But more recently, we are seeing a shift in this outlook as we consider the implications and effects of this transformational technology from a human and societal perspective.

We have all seen the results of this AI acceleration, not just with the science-fiction-esque examples that attract all the limelight, but also in the behind-the-scenes AI that is having an increasing influence on all our lives.

AI in everyday life

Many of us are equally familiar with conversational AI and chatbots, where we can no longer be sure if we are communicating with a human or not, with generative AI models that can write novels and poetry or create photo-realistic images of whatever we ask them to, and with self-driving cars and robot dogs.

But we also live in a world where AI can decide whether or not to give you a bank loan or what insurance premium you’ll be charged, which information to show you (or hide from you) in your news feeds and search results, and how you are diagnosed and treated if you fall ill.

As AI has become more capable, as well as more ubiquitous and influential in our lives, we are all becoming more aware of its potential negative or harmful effects. Move fast and break things… but do we want to risk breaking down trust, human autonomy, our shared ethics and values, and our ability to control what path the evolution of AI will follow?

Human-centric

What is emerging more recently is a human-centred focus on AI that puts human values first and foremost, which we are seeing both in technical developments and from a legal and regulatory perspective.

For example, in recent years there has been much focus on trustworthy AI: AI that is designed so that we can trust the decisions and outputs it produces, that protects our privacy, and that is fair and unbiased.

Explainable AI (XAI) is a field of AI research that explores how humans can interrogate and understand why an AI system made a particular decision. As AI has become more advanced, and subsequently more complex, AI models are often treated as a ‘black box’; we cannot look inside and interpret how and why a particular decision was arrived at. 

If an AI model decides to deny my loan application, I might rightly like to know why, but if that decision was based on many thousands of different factors and a complex decision path that even the developers of the model cannot grasp, how am I supposed to be able to understand and trust that outcome?

Explainable AI focusses on developing approaches and techniques that enable AI systems to explain their inner workings and decision processes in human-understandable ways. While this is welcome, it is worth bearing in mind that XAI will always have its limits as AI algorithms, and the data they are built on, become more and more complex.
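
To make the idea concrete, here is a minimal sketch (not from the article) of one of the simplest explainability techniques: for a linear model, multiplying each feature’s learned coefficient by its value gives a per-feature contribution to the decision. The loan features and data below are invented purely for illustration.

```python
# Minimal XAI sketch: per-feature contributions for a linear loan model.
# The feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]

# Tiny synthetic training set: 1 = loan approved, 0 = denied.
X = np.array([[60, 0.2, 5], [25, 0.7, 1], [45, 0.4, 3],
              [80, 0.1, 10], [30, 0.8, 2], [55, 0.3, 7]], dtype=float)
y = np.array([1, 0, 1, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Explain one applicant: contribution = coefficient * feature value.
applicant = np.array([28.0, 0.75, 1.0])
contributions = model.coef_[0] * applicant
decision = model.predict(applicant.reshape(1, -1))[0]

print("decision:", "approved" if decision == 1 else "denied")
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f}")
```

For the deep models the article describes as a ‘black box’, tools such as SHAP and LIME approximate this kind of per-feature attribution, but, as noted above, the explanations become harder to produce and to trust as the models grow more complex.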

Building trust

Another active area of research and development is Privacy Preserving Machine Learning (PPML) which addresses another important aspect of trust in AI systems. Models are trained on huge datasets and this data can often contain sensitive personal and private information.

In the case of an AI system supporting medical diagnoses, that AI may be trained on medical records containing thousands of examples of patients’ symptoms, treatments, and outcomes so the AI is able to learn patterns in the data to diagnose and recommend the best treatments.

Sounds great, but how do we trust that these AI systems do not also expose sensitive patient details that may identify individuals? Even if the data is first anonymised, unique combinations of symptoms may still identify people, and could be inadvertently exposed by the AI system. Privacy Preserving Machine Learning looks at techniques and solutions to ensure private data is protected, whilst enabling AI systems to remain accurate and useful.
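
One widely used privacy-preserving building block is differential privacy: adding carefully calibrated random noise to aggregate statistics so that no individual record can be confidently inferred from the output. The sketch below is illustrative only; the records and the privacy budget are invented.

```python
# Differential privacy sketch: release a noisy count so that no single
# patient's presence in the dataset can be confidently inferred.
# The records and epsilon value are invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic records: 1 = patient has the condition, 0 = does not.
records = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])

true_count = records.sum()   # the sensitive, exact statistic
epsilon = 1.0                # privacy budget: lower means more private
sensitivity = 1              # one record changes the count by at most 1

# Laplace mechanism: noise scaled to sensitivity / epsilon.
noisy_count = true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

print(f"true count: {true_count}, released count: {noisy_count:.1f}")
```

Other PPML techniques, such as federated learning, avoid centralising the raw data in the first place; the common thread is trading a small amount of accuracy for a guarantee about what the system can leak.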

These types of large datasets also often contain biases and prejudices that can be amplified by the AI trained on them, affecting the fairness of the AI’s decision making. Returning to my declined loan application, how can I, or even the owners of the AI system, trust that the decision wasn’t prejudiced by gender, ethnicity, or other factors that, as a society, we have agreed are not acceptable, but which may nevertheless be present in any large dataset the AI was trained on? Understanding how we can even identify and measure bias in complex AI systems is an open research problem.
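
Even while measuring bias in general remains open, some basic checks are straightforward to state. A common starting point is demographic parity: comparing approval rates across groups. The sketch below, with invented decisions for two hypothetical groups, shows the idea.

```python
# Bias check sketch: demographic parity difference on loan decisions.
# The groups and outcomes are invented for illustration.
import numpy as np

# 1 = approved, 0 = denied, for two hypothetical demographic groups.
group_a = np.array([1, 1, 0, 1, 1, 0, 1, 1])
group_b = np.array([1, 0, 0, 1, 0, 0, 1, 0])

rate_a = group_a.mean()
rate_b = group_b.mean()

print(f"approval rate, group A: {rate_a:.0%}")
print(f"approval rate, group B: {rate_b:.0%}")
print(f"demographic parity difference: {abs(rate_a - rate_b):.0%}")
```

A large gap does not by itself prove the model is prejudiced, and a small one does not prove it is fair; as the article notes, even defining and measuring bias in complex AI systems is still an open research question.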

Legal footing

So far, we have looked at some technical approaches for building trust in AI, as part of a more human-centred focus on AI. Putting human values first and supporting the responsible use of AI are not merely technical problems awaiting solutions, however. We can also work with regulations and laws to ensure AI is used ethically and fairly.

The EU AI Act is a proposed European law on AI, the first of its kind from any major regulator, and it puts human values and rights front and centre.

The act defines four graduated risk categories for AI, each with its own appropriate rules and restrictions. The highest level, unacceptable risk, prescribes a blanket ban on certain uses of AI such as social scoring, manipulation causing physical or psychological harm, and real-time biometric identification systems.

Will the AI Act hamper innovation and investment in Europe, as was initially feared when the General Data Protection Regulation (GDPR) was introduced, or will it establish a blueprint for trustworthy AI that other countries follow? As humans, we are all in this together.

Data scientists, researchers and developers of AI systems share a social responsibility to create AI that leaves the world in a better place. AI education and training has traditionally focussed on the technical and engineering aspects, but now we are seeing ethics and trustworthy AI topics being added to the curriculum. For example, the Human-Centered AI Masters (HCAIM) programme is being taught across Europe to equip practitioners with the awareness, knowledge and skills for this human-centred approach.

For a technology such as AI, with such profoundly disruptive and society-altering potential, perhaps instead of “move fast and break things” we should “move thoughtfully, and not break our trust in a technology that has the potential for so much good”?

Dr Oisín Boydell is Principal Data Scientist at CeADAR, which is Ireland’s national centre for Applied Artificial Intelligence.
