Opinion: As educators, we must step up our game in the face of ChatGPT

Three University of Galway academics explore how Artificial Intelligence is sweeping academia and ask what should be done.

WHO’D HAVE THOUGHT that news headlines this year would be filled with debate about assessment and examinations? Yet that’s exactly what we’ve seen over the last few months, as a new technology has caught everyone’s attention.

ChatGPT, an artificial intelligence chatbot developed by the AI research and development company OpenAI, was made publicly available last year and has led to much controversy, given its seeming ability to answer almost any question in fluent, informed, well-structured language and to refine its answers based on user feedback.


Given that in education we ask our students questions and look for fluent, informed, and well-structured essays in return, you can see the problem.

The technology

ChatGPT is built using a ‘large language model’ (LLM). Essentially, this is a program that estimates the probability of words following others in a sequence, which it “learns” by analysing example text provided to it. When you use auto-complete in a text message, you are using a language model.
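
To make that concrete, here is a deliberately tiny sketch in Python of the same idea: a “bigram” model that counts which word tends to follow which in a sample of text and then predicts the most frequent follower. This is our own toy illustration of the principle, not anything resembling ChatGPT’s actual scale or architecture, and the sample sentence and names are invented for the example.

    # A toy "language model": count which word follows which, then
    # predict the most common follower. Phone auto-complete relies on
    # broadly similar statistics, at a far larger scale.
    from collections import Counter, defaultdict

    sample_text = "the cat sat on the mat and the cat ate the fish"
    words = sample_text.split()

    # For every word, count how often each other word came next.
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word`, if any."""
        counts = followers[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # -> 'cat', because 'cat' followed 'the' most often

A model this simple only ever looks at one preceding word; systems like ChatGPT are trained on vastly more text and take far more context into account, which is where the qualitative difference described below comes from.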

ChatGPT was trained to predict the most likely next word, using 500 billion words of text: all of Wikipedia, all public-domain books, huge volumes of scientific articles, and other web text (from many sites, including discussion forums of varied quality) – more than humans could read in a lifetime. It has additional training to enable it to participate in a turn-taking conversation and to recognise requests.

Older language models base their next-word predictions on the previous 5-10 words, but GPT and other new ones use a summary of all previous text as a context. This enables them to generate long sequences of plausible text.

It is intuitive to use, and its structure and scale lead to extremely interesting “emergent properties”: ChatGPT can generate answers to questions on virtually any topic, retain context across multiple turns, and even generate rhymes, adopt any requested writing style, and produce computer code.

However, there is nothing fundamental in its design to ensure it generates truthful outputs, except that factual statements might be more common than non-factual ones in its training text. Even so, it does often write factual material: a model can predict what word is likely to follow “the capital of Bolivia is”, even if it has no real understanding of what a capital city is.

One way to think about ChatGPT is that it is like an improv performer, playing along with the prompts you give, rather than an oracle of truth. This creates risks if we take its outputs at face value.

Some people are proposing that ChatGPT could be a “co-author” on papers, or could replace the need to develop analytical writing skills. However, this needs real caution. Its suggestions might be good or might be totally fabricated.

Assessing course work

A typical view of assessment is one-dimensional. There is a focus on a final exam, testing the learner at one fixed date and time to see how good they are (at such tests), or on the submission of a final product (an essay, a project report) by a deadline, which is then graded out of the student’s sight. Assessment, in other words, is something that is done to the student.

The challenge to assessment integrity is large because many courses include a substantial continuous assessment component. Almost any assessment that is primarily text-based and completed out of sight of the assessor is susceptible to ChatGPT.

Educational institutions have a public responsibility to be able to stand over their awards and qualifications and need to be assured that any work a student submits was produced by them and demonstrates their level of knowledge or skill. We have to protect honest students from reputational damage by association.


In recent years we have seen the growth of companies that deliberately target students under pressure, offering them contract cheating services. One view is that using ChatGPT (or a similar tool) is not fundamentally different from using such a ‘service’ or having a friend do your assignment, but it may be more tempting because it is so accessible and easy to use.

Academia’s response

Considerable work is being done nationally and internationally, through groups such as the National Academic Integrity Network, Quality and Qualifications Ireland (QQI) and individual institutions, to counter this threat and to build resilience into assessment and qualifications.

Focusing on assessment integrity doesn’t mean that we can’t also think of assessment as being ‘for’ learning, and even serving ‘as’ learning.

Much of the conversation in educational circles has been around this more holistic perspective. In this view, we can see the artefacts we ask students to produce (essay, report, project) as something in which they can take pride, and as an opportunity to demonstrate that they’ve met the learning goals of the course.


We can recognise that learning is a developmental journey and so rather than evaluating a student only on a final assignment, we can place value on the process that led to it.

When we write, we think. We have to gather ideas, connect them, reshape and structure an argument. Writing is a thinking tool, not just an ‘output’.

Therein perhaps lies the contrast with tools such as ChatGPT. As LLMs inevitably become embedded in a range of products such as MS Word, they will no doubt have a role in aiding the technical aspects of writing, but they cannot substitute for thinking, insight, and creativity. We have to reward students for these abilities, not just for high-quality prose.

To achieve this, we educators may need to up our game. We shouldn’t aim to reward the “regurgitation” of material but rather aim to develop and assess abilities which build on each other – from memorisation and knowledge of basics to the ability to apply concepts in new situations, provide critiques, and generate new concepts.

Being clearer about how, what and why we assess, as well as ensuring our students have a critical digital literacy, will also help position tools such as ChatGPT appropriately.

Of course, in practice, there are limitations imposed by resources and time constraints. Perhaps as a society, we need to be honest about the cost of high-quality education and invest to ensure we have realistic staff/student ratios, appropriate teaching spaces, and the learning supports required for diverse student needs.

But, nonetheless, re-framing learning and assessment to focus more on the learning journey will allow us to redesign courses, finesse our choice of teaching methods and reinforce the importance of student engagement and responsibility for their own learning.

Dr James McDermott is a Lecturer in the School of Computer Science, University of Galway. Professor Michael Madden is Established Professor of Computer Science at University of Galway where he leads the Machine Learning Research Group. Dr Iain MacLaren is the Director of the Centre for Excellence in Learning & Teaching at the University of Galway.

