*Disclaimer – this article was written with GPT, but not by GPT – explanation within.*
There has been a great deal of buzz surrounding the emergence of AI services, with ChatGPT being one of the most widely explored. However, other services are quickly emerging, and as we work to make sense of this technology, it’s clear that educators may feel both unsettled and excited. The use of AI raises immediate questions about how we, in higher education, assess our students, as well as more all-encompassing questions about what it means to learn and the role of universities. I find this technology fascinating – it feels like a step-change. Although we remain uncertain about what lies ahead, AI feels like a significant leap in possibility. I am adding a few thoughts as I work to make my own sense of the situation.
While social media is full of ways to make the most of these platforms, from using them to generate business insights to harnessing their power to polish writing skills, conversations around AI are sometimes fearful, angry, and tinged with exasperation as this is another thing to contend with in an already demanding sector. As we try to reconcile these different perspectives, it is essential that we take a thoughtful, informed, and critical approach to the use of AI in assessment.
In practice I am observing three types of responses from colleagues and students:
- go back to unseen exams or verbal assessment – ‘it’s the only way’ (REVERT).
- design questions or tasks which cannot be answered by AI technology (OUTRUN) e.g. ask questions which draw upon specific in-class resources or practical experiences.
- embrace the bot and ask students to use it as the basis for a higher-quality answer or the development of different skills (EMBRACE) e.g. generate a ‘first answer’ through AI and then undertake research to build up the piece; discuss how you developed the first draft; or use AI to learn about different ways of presenting your topic by asking it questions, then show how you used that advice to create a presentation.
While reverting to traditional assessment methods may solve some problems, it can send us back to an assessment landscape that flatters some students disproportionately and unjustly at the expense of others. Similarly, attempting to outrun AI technology by designing tasks that cannot be answered by AI is a gamble, as AI capabilities continue to evolve rapidly. Instead, embracing AI and asking students to use it as a tool to improve their work has the potential to magnify the benefits.
Authentic assessment, which, according to my own earlier definition (2022), is assessment relevant to future employment, the advancement of the discipline, our collective future, or individual aspiration, and which often mirrors real, complex challenges, is a widely advocated approach. AI could be a natural ally to authentic assessment, as it can be used to advance current understanding in the discipline, help students work with uncertainty, and trigger reflections on the process of creating understanding.
If we begin to use AI in authentic assessment we must be aware of our own assumptions about what is ‘relevant’ and therefore authentic in a world with AI. We may have previously focussed on how students can undertake a specific task such as writing a letter for a job application, diagnosing from a list of symptoms, describing the characteristics of a small island landscape, or writing basic computer code; AI may render some or all of this redundant. This doesn’t mean we should always stop doing these tasks, but we must address their relevance and be clear about why we still do them when AI could do them for us. For example, one explanation is that teaching a topic from first principles helps students master skills that can then be used to engage critically with AI and take our work to an even higher level.
To balance this piece and, ironically, in response to feedback from ChatGPT, it is important to acknowledge that embracing AI is not without risk, which includes “overreliance on technology and the potential for students to ‘game the system’ by exploiting the limitations of AI” (ChatGPT, 2023).
We need to approach this moment in an informed and critical manner with curiosity to understand the issues. We must avoid fearful and protective nostalgia that leads us nowhere productive. Our disciplines and professions are different, and so we may need to approach AI in different ways. Respect is needed between those with different views.
My call to action (again, as requested in my ChatGPT feedback) is that we get on and use this facility and gain some first-hand experience to inform our own positioning and our work with students.
Post Script notes on my experience of editing with ChatGPT:
- I used ChatGPT to develop this article in two ways.
- I asked for feedback (and I used this) – posted here below.
- I asked ChatGPT to rewrite my article – which it did (twice at my request).
- In all honesty, I found the feedback helpful.
- The rewrite gave me mixed emotions – it made me think I couldn’t write (that was the emotional reaction). Then I got selective and thought about which bits were better and which were worse (reaction 2). I used it to selectively edit. Sometimes it felt like my personality was being removed by the rewrite. I guess that’s not surprising. It made me notice my own original voice more.
- As an aside, I question whether students would always have the confidence to choose their voice over a ‘corrected’ or ‘improved’ version even when they liked their own work more – in embracing AI we have work to do to support students to make such choices.
- The step I didn’t go through, which ideally I should have, was to compare the versions and actively consider what could be learnt for other pieces of writing.
- On the upside – I may have fewer typos than usual!
- The published version is not a ChatGPT rewrite alone – it is a rewrite and then a fusion of versions with further editing.

Loved this blog Lydia. It stands out in a throng of contributions to the topic. I’m interested in how you both directly referenced ChatGPT and declared its other input to your work. How do we encourage our students to do that within an ‘embrace’ approach? What course or institutional guidance do we need to create?
Thanks, Jackie, it felt very natural to reference ChatGPT – old habits.
As we create guidance, I think the first step is for us to understand what is possible; how can we use this tool as academics, as professionals and in the workplace. We need to understand it before we can guide others.
The other thing which I didn’t mention is about involving students – how are they reacting? I think they should inform our approach. Some I have spoken to see AI as a really useful tool, some are using it to get started with their assignments, and others are horrified, seeing AI as cheating. We need to understand the diverse student reactions as we position our responses and guidance too.
More broadly, though, it may be time to rethink our conceptions of cheating for a world where collaboration is valued and technology can help. Analogue regulations need to move into a truly digital world. Forming that guidance is surely going to be a challenge.