*Disclaimer – this article was written with GPT, but not by GPT – explanation within.

There has been a great deal of buzz surrounding the emergence of AI services, with ChatGPT being one of the most widely explored. However, other services are quickly emerging, and as we work to make sense of this technology, it’s clear that educators may feel both unsettled and excited. The use of AI raises immediate questions about how we, in higher education, assess our students, as well as broader questions about what it means to learn and the role of universities. I find this technology fascinating – it feels like a step-change. Although we remain uncertain about what lies ahead, AI represents a significant leap in possibility. I am adding a few thoughts as I work to make my own sense of the situation.

While social media is full of ways to make the most of these platforms, from using them to generate business insights to harnessing their power to polish writing skills, conversations around AI are sometimes fearful, angry, and tinged with exasperation as this is another thing to contend with in an already demanding sector. As we try to reconcile these different perspectives, it is essential that we take a thoughtful, informed, and critical approach to the use of AI in assessment.

In practice I am observing three types of responses from colleagues and students:

  1. go back to unseen exams or verbal assessment – ‘it’s the only way’ (REVERT).
  2. design questions or tasks which cannot be answered by AI technology (OUTRUN), e.g. ask questions which draw upon specific in-class resources or practical experiences.
  3. embrace the bot and ask students to use it as the basis for an answer of higher quality or the development of different skills (EMBRACE), e.g. generate a ‘first answer’ through AI and then undertake research to build up the piece; discuss how you developed the first draft; or use AI to learn about different ways of presenting your topic by asking questions, then show how you use that advice to create a presentation.

While reverting to traditional assessment methods may solve some problems, it can send us back to an assessment landscape that flatters some students disproportionately and unjustly at the expense of others. Similarly, attempting to outrun AI technology by designing tasks that cannot be answered by AI is a gamble, as AI capabilities continue to evolve rapidly. Instead, embracing AI and asking students to use it as a tool to improve their work has the potential to magnify the benefits.

Authentic assessment – which, according to my own earlier definition (2022), is assessment relevant to future employment, the advancement of the discipline, our collective future, or individual aspiration, and which often mirrors real, complex challenges – is a widely advocated approach to assessment. AI could be a natural ally to authentic assessment, as it can be used to advance current understanding in the discipline, help students work with uncertainty, and trigger reflections on the process of creating understanding.

If we begin to use AI in authentic assessment, we must be aware of our own assumptions about what is ‘relevant’ and therefore authentic in a world with AI. We may previously have focussed on how students undertake a specific task, such as writing a letter for a job application, diagnosing from a list of symptoms, describing the characteristics of a small island landscape, or writing basic computer code; AI may render some or all of this redundant. This doesn’t mean we should always stop setting these tasks, but we must address their relevance and be clear about why we still do them when AI could do them for us. For example, one explanation is that teaching a topic from first principles helps students to master skills that can then be used to engage critically with AI and take our work to an even higher level.

To balance this piece and, ironically, in response to feedback from ChatGPT, it is important to acknowledge that embracing AI is not without risks, which include “overreliance on technology and the potential for students to ‘game the system’ by exploiting the limitations of AI” (ChatGPT, 2023).

We need to approach this moment in an informed and critical manner with curiosity to understand the issues. We must avoid fearful and protective nostalgia that leads us nowhere productive. Our disciplines and professions are different, and so we may need to approach AI in different ways. Respect is needed between those with different views.

My call to action (again as suggested by my ChatGPT feedback) is that we get on and use this facility, gaining first-hand experience to inform our own positioning and our work with students.

Post Script notes on my experience of editing with ChatGPT:

  • I used ChatGPT to develop this article in two ways.
  • I asked for feedback (and I used this) – posted here below.
  • I asked ChatGPT to rewrite my article – which it did (twice at my request).
  • In all honesty, I found the feedback helpful.
  • The rewrite gave me mixed emotions – it made me think I couldn’t write (that was the emotional reaction). Then I got selective and thought about which bits were better and which were worse (reaction 2). I used it to selectively edit. Sometimes it felt like my personality was being removed by the rewrite. I guess that’s not surprising. It made me notice my own original voice more.
  • As an aside, I question whether students would always have the confidence to choose their voice over a ‘corrected’ or ‘improved’ version, even when they liked their own work more – in embracing AI we have work to do to support students to make such choices.
  • The step I didn’t go through, which ideally I should have, was to compare the versions and actively consider what could be learnt for other pieces of writing.
  • On the upside – I may have fewer typos than usual!
  • The published version is not a ChatGPT rewrite alone – it is a rewrite and then a fusion of versions with further editing.
This is what I got when I asked ChatGPT for feedback on draft number 1:

The blog article "AI and Assessment – another voice" provides a thought-provoking perspective on the role of AI in education, particularly in the context of assessment. Here are some suggestions to further improve the article:

Provide more concrete examples: While the article mentions the three types of design responses to AI in assessment, it would be helpful to provide specific examples of how each approach could be applied in practice. This would help readers better understand the implications of each approach.

Elaborate on the potential benefits of AI in assessment: The article touches on the potential benefits of AI in assessment, but it could be strengthened by providing more specific examples of how AI could improve assessment outcomes, such as reducing bias, increasing efficiency, and providing personalized feedback to students.

Address potential drawbacks and concerns: The article acknowledges the potential challenges of reconciling different pedagogic values and priorities, but it could be strengthened by addressing some of the potential drawbacks and concerns associated with the use of AI in assessment, such as the risk of overreliance on technology and the potential for students to "game the system" by exploiting the limitations of AI.

Clarify key terms: The article uses several terms that may not be familiar to all readers, such as "assessment lockdown" and "authentic assessment." Providing more context and clarification for these terms would help readers better understand the arguments being made.

Provide a call to action: The article concludes by emphasizing the importance of approaching AI in assessment consciously, critically, and based on a full understanding of the issues. To further engage readers, it could be strengthened by providing a clear call to action for educators, such as exploring how AI could be used in their own teaching and assessment practices or engaging in discussions with colleagues and students about the implications of AI in education.