For some time, I have been working to exemplify different types of assessment to help make clear what options are available when colleagues make choices about assessment design. The aim of my Top Trump Cards was to inspire thought about what might be possible in different disciplinary settings. With the recent popularisation of large language model artificial intelligence tools, I have been asked multiple times to exemplify ways that AI can be used in assessment.  I attempt that here and add further comment.

An earlier post described how I believed there were broadly three possible assessment responses to this type of AI: to embrace it (that is, integrating AI into assessment), to outrun it through design choices (e.g. detection and other technology measures), or to avoid it and revert to secure methods of assessment. These need not be fixed positions that an individual adopts as if making a lifelong paradigm choice, although it may be that we all have our own feelings towards the use of AI. Instead, we need to make active choices about how AI is used in specific circumstances; context really does matter. The selection of an AI position may, for example, be linked to factors such as learning outcomes and the development of skills and attributes, including and especially criticality. But what choices are available to us when we consider how AI may be used appropriately in assessment? Currently, I can see two ways of conceiving a relationship between AI and assessment.

  1. Where appropriate, use AI in existing or adjusted assessment activities to support task completion: From research projects drawing on AI as a source of inspiration or challenge, to writing for specific audiences where AI is used to generate feedback, AI can support existing assessments as one tool amongst many. Other tools include library searches, web searches, collaboration, and social media searches. Giving students the choice to use AI arguably increases the ‘real-world-ness’ of assessments by allowing and encouraging them to use all the tools available to them. Students, in conjunction with faculty, may decide that AI is less beneficial than imagined, or they may find it hugely valuable. We should remain critical and vigilant about the limitations of the tools at our disposal, whether they are web searches, AI responses, or research papers. Positioning AI as ‘just’ a tool that can help, while supporting students to be critical, skilled, and discriminating in their use of the technology, may be a proportionate response to integrating AI into existing assessments.
  2. Formulate new variants of assessment activities enabled by AI: As we all become more proficient in working with AI, tasks that are only possible with AI will inevitably emerge. I have seen few examples of what this might look like. As an interim measure, and to try to bring to life what may be possible, as part of this post I have ‘sketched’ some assessment designs that are made possible by AI – they place AI at the heart of assessment not for its own sake, but as an enabler of other skills. These ideas are an initial attempt to visualise how AI may go beyond complementing existing assessment types and unlock new possibilities. I have deliberately not added them to the Top Trumps, as at this stage they are ideas to prompt further thought.

These two approaches are almost sub-categories of an ’embrace’ response.

Messaging our expectations around AI is something that many of us are grappling with. Helpfully, Flinders University has developed and shared its own guidance on AI, with simple visual indicators to describe levels of acceptable use of AI in any assessment.

  • Green – Use AI in specified ways
  • Orange – Limited use of AI
  • Red – No use of AI

I find this a simple and effective categorisation. However, based on the analysis above, I offer three adjusted categories (I have avoided red, orange, and green to prevent any confusion with the Flinders model):

  • Level 1 – AI is central to the tasks and must be used.
  • Level 2 – AI may be used critically within a balance of other source materials, e.g. library and web sources.
  • Level 3 – AI must not be used in this task.

This post is an attempt to conceptualise how we may begin to face the reality of AI. I still have an enormous amount of sympathy for those suggesting that the academy should resist AI (see this interesting blog post as an example), and I equally welcome visions of transformation for assessment. The debate is polarised, and we are in the process of navigating our way. Just as in business, education is balancing the need to engage with this topic against the risk of over-focus (see this piece cautioning about the risk of being stuck in an AI bubble). Gartner’s Hype Cycle may be a helpful way to conceptualise the journey we are on: the trough of disillusionment inevitably follows the hype of a new technology, before the technology settles and finds its place.