This was not written by ChatGPT

By Dr Wayne Holmes, Associate Professor for AI and Education, University College London

Over the last few months, the application of Artificial Intelligence in education (AIED) has burst into the spotlight. While researchers have been studying AIED for more than 40 years, and commercial organisations have been deploying AIED in schools for more than a decade, such innovations have largely gone unnoticed by the general public. This all changed with the launch, last November, of the AI application ChatGPT.[1]

Within days, ChatGPT, from the AI research lab OpenAI,[2] became the fastest-growing online sensation ever, thanks to its ability to automatically generate, in response to a prompt, impressively human-like text, all within seconds. Meanwhile, news outlets across the Commonwealth immediately seized on its potential use by students to cheat, especially at writing essays: “Cheating with ChatGPT? Controversial AI Tool Banned in These Schools in Australian First” (SBS News[3]), “ChatGPT: New AI Tool Raises Education Concerns Globally” (Punch Nigeria[4]) and “Why AI Tools Like ChatGPT Will Widen the Education Gap” (Global Indian Times[5]).

In contrast to this inevitable knee-jerk response, some educators have cautiously welcomed the arrival of ChatGPT and similar tools (new ones, such as Google’s Bard,[6] seem to be announced every day). For example, a school in Germany expects its students to use AI when they write their essays, before going on to critically examine the AI-generated text.[7] Meanwhile, my own institution (University College London) has released guidance which says, of tools like ChatGPT, “rather than seek to prohibit their use, students and staff need to be supported in using them effectively, ethically and transparently.”[8] This is not to suggest that educators should ignore students using AI to cheat, but rather to acknowledge that these tools are now widely available, are likely only to become more sophisticated over time, and have real potential, both negative and positive.

I have used ChatGPT to support my writing and teaching. For example, when tasked with writing a paragraph on a topic that was new to me, I used ChatGPT to produce a first draft; while I chose not to use any of its suggested sentences verbatim, it inspired what I went on to write and helped me overcome my writer’s block. Meanwhile, the Internet is awash with novel ideas for how to use ChatGPT to inform teaching and learning, such as using it to suggest lesson plans, generate ideas, summarise texts, or simplify difficult concepts. Similar tools are also being used to automatically generate new images, music and even computer code.

However, before we get too enamoured, we need to recognise some fundamental facts which derive from how these tools work. In essence, ChatGPT and others identify correlations between words (or images, or lines of code) in huge amounts of data scraped from the Internet, and then generate an output that is another example of those correlations. As a consequence, although the output might appear human-like, these tools, unlike humans, actually understand nothing. Moreover, they can and often do generate nonsense (garbage in, garbage out still holds), which is most obvious if you prompt them about something on which opinions are divided. As for the potential for cheating, industry is already launching tools[9] that, it is claimed, detect when a piece of text has been written by AI. However, such an approach is likely only to lead to an unwinnable arms race, with each generation of detector being leapfrogged by the next generation of generator, and so on.
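To make that underlying mechanism concrete, here is a minimal sketch in Python of the statistical idea at work: a toy bigram model that counts which words follow which in a training text, then samples new text from those co-occurrence statistics. This is a drastic simplification of my own devising, offered only as an illustration (real systems use large neural networks trained on vastly more data), so every name and detail in it is hypothetical rather than a description of how ChatGPT itself is built.

```python
import random
from collections import defaultdict

# Toy illustration only: real generative AI systems do not work at this scale,
# but the principle (predict a plausible next word from observed patterns) is similar.

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words follow it and how often."""
    follows = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(follows: dict, seed: str, length: int = 10) -> str:
    """Sample a continuation: each next word is drawn in proportion to how
    often it followed the current word in the training text."""
    out = [seed]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # the model has never seen this word lead anywhere
        nxt = random.choices(list(candidates), weights=candidates.values())[0]
        out.append(nxt)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the sofa"
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g. "the cat slept on the mat and the cat sat"
```

Large language models replace these simple counts with billions of learned parameters and condition on far longer contexts, but the basic move is the same: produce a plausible continuation from patterns in the training data, with no model of whether the result is true.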

So, what are the takeaways? First, that these technologies are only going to become more available and sophisticated over time. Second, that they can be used in multiple ways, including inspiring ideas as well as generating texts that might be indistinguishable from student-written essays. Third, to avoid an unwinnable arms race, we need to rethink how we assess students, perhaps beginning with setting tasks that require understanding and critical thinking, neither of which can (yet?) easily be replicated by AI. Finally, we need to think carefully about how these technologies can be used ethically and transparently, promoting and not undermining fundamental human rights, while effectively supporting student agency and learning.

Wayne Holmes
(with a little help from ChatGPT)


  1. https://chat.openai.com/chat
  2. https://openai.com
  3. https://www.sbs.com.au/news/article/cheating-with-chatgpt-controversial-ai-tool-banned-in-these-schools-in-australian-first/817odtv6e
  4. https://punchng.com/chatgpt-new-ai-tool-raises-education-concerns-globally
  5. https://www.globalindiantimes.com/globalindiantimes/chatgpt-education
  6. https://blog.google/technology/ai/bard-google-ai-search-updates
  7. https://the-decoder.com/a-teacher-allows-ai-tools-in-exams-heres-what-he-learned
  8. https://www.ucl.ac.uk/teaching-learning

Licence

Connections (VOL. 28, NO. 1) Copyright © 2023 by Commonwealth of Learning is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.
