Figure: Witch-burning in 16th century Germany
By R. Decker – File:Zeitung Derenburg 1555.JPG, Public Domain, https://commons.wikimedia.org/w/index.php?curid=33410964
In my previous post about ChatGPT, I used a photo of an actress clutching her pearls – a common metaphor for an over-reaction to something. According to dictionary.com, pearl-clutching is “outrage or dramatic protest, especially from a woman, caused by something the person perceives as vulgar, in bad taste, or morally wrong but that does not elicit a similarly strong reaction from most other people”. I want to revise my metaphor: I don’t think this is a particularly gendered situation, nor is it about bad taste. Many people are concerned that these tools will now be used by students to take shortcuts in their learning and cheat on assessments, because we (teachers) won’t be able to tell the difference. This looks like a threat to our current systems and practices. For this reason, I think the concept of moral panic fits the bill better. According to Wikipedia (another tool which generates moral panic, in my view), Stanley Cohen, who developed the term, states that moral panic happens when “a condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests”. While the issues identified may be real, the claims “exaggerate the seriousness, extent, typicality and/or inevitability of harm” (Cohen, 1972). We should be thinking about this as an opportunity to change our practices rather than fretting about yet more policing of student assessment.
How we use AI already
I have been thinking about how much we already use the same kinds of technology. For instance, I am trying very hard to learn a new language right now. It’s hard work and I often need to understand documents which are way beyond my basic skills. So I am very dependent on Google Translate to help me out. Am I cheating?
I also use DuoLingo to gamify my language learning – this is an adaptive app which builds up the difficulty for me. I like it because it is quite motivating. Am I cheating?
Neither of these products really helps me learn the language. I need the structure of a regular course for that, and I’m doing that too. But they help me with vocabulary.
I have also played around with using ChatGPT for some tasks. I have quite a few writing projects on the go right now. I always start writing a paper by writing the abstract, and then generate the structure from that. I am aware this isn’t what everyone does, because my writing partners often give me strange looks when I suggest it. So I gave ChatGPT a couple of my current abstracts and asked it how it would structure a paper around each one. It gave me perfectly sensible outlines – pretty similar to the ones I already had. So I might trust it in the future to help me a) see whether the abstract leads to a sensible structure and b) make sure I don’t miss elaborating anything I had put in the abstract. I still wrote the abstracts, and did the research which I’m going to write about. Am I cheating?
What should we do next?
I think there are two main concerns:
- Students will pretend that something generated by one of these tools is their own work, thereby potentially gaining unfair advantage over students who don’t do this.
- We (teachers) will suffer existential doubt about our role, if the students can ask ChatGPT anything about our discipline areas and come up with reasonable-sounding replies.
Let’s look at these two issues separately.
1. Gaining unfair advantage by pretending something is one’s own work
In the previous post, I said this was identical to contract cheating: buying or using work produced by someone else. The key here is to think about how these tools can help you and your students, rather than wondering how you can stop their use. Some universities are banning access to OpenAI. That seems pointless – students don’t only use the internet on campus, and a ban stops you experimenting with the tool yourself. Bans haven’t worked with other forms of cheating. Rather, think about setting assignments which can make use of these technologies. My previous post has some ideas about addressing this kind of cheating without resorting to tighter regulations and policing of student activity. There are also some better ideas in the links in the next section.
2. Rethinking our roles
It’s a long time since the main role of a teacher in higher education was to read aloud from scarce books. I suppose books may be getting scarce again, but now the computer can read their digital versions aloud. Our expectations of students have changed too: we now expect them to develop skills in analysis, synthesis, problem-solving, communication, creation, and so on. Teachers are still needed to structure this work, scaffold the learning by making tasks progressively more difficult, and create an environment where people can learn effectively. This last one is absolutely key. Why can’t I learn a language from the textbook and apps alone? Because I need someone to structure my learning, encourage me, give me meaningful feedback, and make me speak to others. I also want to talk to a fellow human when I’m having difficulties, have to miss a lesson, or lack confidence. We should believe that we are still important, think about how these tools can help us improve student learning, and spend more of our time on the human interactions essential for high-level learning.
You could also think about what these tools can do for you. In the links below, people suggest activities such as helping you to write learning outcomes, plan sessions, and suggest additional resources. If you use this last suggestion, I would be quite careful. Tony Bates asked ChatGPT to tell him something about his specialist subject, online learning. He said: “ChatGPT provided references that I was unaware of, but they are impeccable sources.” The only problem is that the references looked plausible but weren’t real: they were non-existent web pages and invented citations. This is pretty easy for a teacher to check, but time-consuming if you are marking/grading a lot of assignments. You need to talk to your students about this before they use these tools – they still have to assess the quality of the output.
The UK Guardian newspaper has reported that the company behind ChatGPT, OpenAI, is planning to add some kind of watermark to the text so that people will know that it has been generated by AI. I’m not sure how effective this would be – it should be easy for someone to make small edits to the text and get round this. To return to my insistence that this is similar to contract cheating, it’s like an essay mill saying that their products are only provided for study purposes.
What does ChatGPT think we should do?
I asked it: How do you think a university teacher should use your capacities to support their teaching?
The reply was:
As an AI, I am not capable of providing direct support to university teachers in their teaching. However, I can offer general advice on how a university teacher might be able to use my capabilities to support their teaching.
One way a university teacher could use my capabilities is by asking me questions about teaching-related topics. For example, a teacher could ask me about best practices for engaging students in online learning, effective ways to assess student learning, or strategies for supporting diverse learners. I can provide information and insights on these topics based on my knowledge of research and theory in the field of education.
Another way a university teacher could use my capabilities is by asking me to perform tasks or generate content that could be useful in the classroom. For example, a teacher could ask me to create a quiz or a set of discussion questions on a particular topic, or to summarize a research article or a book chapter. These materials could be used by the teacher as part of their lessons or assignments, or shared with students as additional resources for learning.
Overall, my aim is to provide information and support that can help university teachers to enhance their teaching and learning practices. However, it is important to note that AI is not a substitute for human expertise and judgment, and teachers should use their own discretion and professional judgment when incorporating AI-generated materials into their teaching. [my emphasis – even the data in the tool’s own system indicates that we are still needed]
*Updated 11/01/23 – I will continue to update this list as I find great resources. Thank you to everyone in the community for sharing these freely.
Lots of people are working on this and thinking through the implications. Here are some highlights, but there are a lot of good resources out there.
Torrey Trust provides a readable and usable summary of the issues and possible responses, which summarises a lot of the excellent discussion which has been happening on Twitter and elsewhere.
Christina Micelli has produced a large set of informative slides which explain the technology and its limitations. For instance, ChatGPT was trained on a vast corpus of material, but it can’t look up further information from search engines, so its knowledge currently stops in 2021.
A lot of people are looking at ChatGPT and thinking about the implications for education. I highly recommend following Philippa Hardman’s substack blog where she has systematically worked through some uses of the system in higher education.
The UK podcast Ways to Change the World ‘interviewed’ ChatGPT – I found this very helpful in understanding how it works without any technical language.
Somehow, Ethan Mollick and Lilach Mollick got out a discussion paper on this very quickly. This contains lots of useful ideas about working with the technology, rather than worrying about how it will encourage students to cheat.
The UK national body, Jisc, which comments on and provides resources related to digital teaching and learning, has put out an article “Does ChatGPT mean the end of the essay as an assessment tool?” (spoiler – the question isn’t directly answered, but they give an idea of their plans to develop this topic).
Cohen, S. (1972). Folk Devils and Moral Panics (1st ed.). Routledge. https://doi.org/10.4324/9780203828250
Mollick, E. R., & Mollick, L. (2022). New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments. SSRN. https://ssrn.com/abstract=4300783 or https://doi.org/10.2139/ssrn.4300783