AI in Teaching, Learning and Assessment [Long-Read from Zayed University event]

Last week, I visited Zayed University for the university’s Scholarship of Teaching and Learning (SoTL) conference, where I offered a keynote talk … here’s the gist! The talk essentially walked through current issues around AI from an educator’s perspective, taking note of both the challenges and the opportunities.

Nuance and Criticality in Artificial Intelligence

My talk started by asking what AI is. For this, I drew on Narayanan and Kapoor’s (2024) recent book “AI Snake Oil”, which emphasises that AI is a collection of technologies rather than a single thing, falling broadly into generative, predictive, content-moderation and general categories – a useful distinction to know. The book also highlights the risks of using AI tools and outputs without human sense-making, encouraging us all to look beyond face value when claims are made. I would recommend the aptly titled book as a significant insight into AI that is accessible to ‘non-technical’ readers; it has given me more confidence in the basics of AI, and it informs the critical lens that I know I will need.

Hype

The almost daily advancements in AI have fuelled media hype, confusing many about what AI can actually achieve, and it can feel overwhelming to try to keep up. Turning to existing theoretical models may not help us navigate this moment, though – it feels like there is no handbook! If we look at a model like the widely cited Diffusion of Innovations model (Rogers, 2003), adopters are categorised as innovators, early adopters, early majority, late majority, and laggards. It’s tempting to say earlier adopters are playing with the tools and ‘other people’ are lagging. But I find this doesn’t work in my mind (and in fairness, the theory wasn’t necessarily written for this moment); choosing not to use the tools right now may be a conscious and fully informed choice after deep engagement with ethics, environmental concerns or rightful concerns about quality. Moreover, individual responses to AI are deeply linked to culture (see, for example, Mumtaz et al., 2024), so simplistic models of usage may lack context. Another well-known model, Gartner’s Hype Cycle (Gartner, 2023), doesn’t work for me either, because AI keeps changing constantly and we are not looking at the advent of a single technology. So, historical models aren’t helping my sense-making here; it’s not quite that simple.

Economic and development ambitions (mind the gap) 

In researching this talk, I was particularly interested in how AI fits with the economy and the future of work. I was drawn to the work of Russell Beck (2023), who talked about changing demographics in society, alongside five other factors, as an important force that will shape future work. Put simply, falling birthrates and ageing societies may mean we need AI to help with the work that must be done. Similarly, a recent article by Daron Acemoglu (2024) highlighted the demographic shifts happening in the USA and the potential role AI might take. Acemoglu notes in particular that younger members of society tend to do more of the entrepreneurial, risk-taking work as well as the physical – so these may be our gaps of the future.

I then continued my field trip by examining what governments say about AI in national AI strategies. Thumbing through these, I was struck by how much ambition there is in government for the promise of AI in the economy. The UAE strategy is certainly ambitious (Artificial Intelligence Office, UAE, 2021). The drivers, of course, may well be economic, but I was especially struck by a line in the Nigerian plan which referred to AI as “a developmental equaliser at a scale similar to the internet. It will also be the great differentiator and the nations that become the leaders in its application will rule the emerging world” (NITDA, 2024, p. 10). This line alone woke me up to the idea that, however many reservations academics have around AI – and I share some of them – there is a moral positive for us to examine, one that may be taken for granted through a lens of privilege. Added to this, from a UK perspective, Keir Starmer said at a recent event that we should ‘run towards’ AI (see Politico, 2024; we could debate that(!) – but go with it …). So, my takeaway is that some governments have BIG ambitions for AI, while faculty are unsure. This is not a criticism of higher education – it is our job to scrutinise, consider, and critique. Still, ultimately, there may be a gap between the national picture and the everyday educator’s reality. Our challenge is to navigate this.

[Image: two figures facing each other – one representing government, labelled ‘AI’; the other representing the universities, labelled ‘confusion, fear, bias, questions’.]

Let’s start with the thorny issue of assessment 

The unbounded use of GenAI in education inevitably raises ethical and integrity questions, particularly around assessment. So, how do we respond? With policy, support, and robust assessment and programme design, perhaps? There is a lot of work happening in this space.

Responses vary. I am very drawn to work in the ethics space – believing rules will always be broken, and that AI can render rules futile, with new possibilities coming online quicker than we can make rules to deal with them. In my talk, I offered a few examples of how some assessment types are very cheatable – showing screenshots of fully worked maths tests, complete with working-out, delivered in seconds, and of automated podcast and presentation creation. Understanding AI’s assessment capabilities is likely to be a running effort, and a good number of people are looking at this. But cheating isn’t a strictly technical act. Students cheated long before AI – whether your mum did a bit too much editing or you copied your mate’s weekly maths homework (which I recall from personal insight in 1996 – not me … obvs!).

Some responses to the assessment quandary suggest a “two-lane” approach as a way forward (Bridgeman and Liu, 2024, cited in Liu et al., 2024). Liu et al. (2024) further suggest assessments should be either secure (where AI is disallowed) or open (where AI is unrestricted). Taking a position with no middle ground between secure and open recognises that limiting AI use outside the classroom is likely impractical.

Another part of the way forward might be to think about programme-level design and how we plan assessment across the programme. We need to find ways to join up with colleagues and map the assessment journey – we can’t deal with this alone, as individuals, within a module or unit. Programme-level thinking is something I’ve championed for a long time and something we have just worked on in our institution within the curriculum context. However, it’s important to recognise that achieving this may well be beyond most of our control – it sits in the domain of institutional change and how we organise teams and curricula. I profess no answers here. We need a sector-wide effort to work this out.

It’s not all about assessment – what skills will our students need? 

To effectively navigate the future shaped by AI, students must develop a combination of technical, critical, and ethical skills. Sentiments from the day and insights from the literature indicate that higher education the world over might not be ready for the skills piece. For example, Suonpää et al. (2024) suggest “students’ knowledge and skills in using GenAI are still rather weak, and they wished for more support from their lecturers on how to use GenAI effectively in their studies.” But what skills do students need? I attempted a longer list of AI skills; among them, these stand out: (1) an appreciation of how AI works, to inform our scrutiny; (2) critical thinking skills, enabling students to question AI-generated information rather than accept it at face value and, particularly, to ask what opportunities, people, or ideas are being excluded; (3) ethical reasoning, to understand the implications of AI on society and make responsible decisions; (4) confident subject knowledge, to ensure junk or slop is noticed and rejected; (5) skills to use different tools in more sophisticated ways, so that AI can be a partnering tool strengthening our intellectual muscle rather than making us flabby; and (6) a digital fearlessness, to help manage personal development in the digital space and, with that, to manage oneself amongst the cacophony of tech hype. Many of the skills we may need for AI are not new; they are already important but must be applied in an AI context with urgency and with respect for individual ethics, beliefs and culture.

So, how do we develop some of these skills?

It’s tough … especially when curricula get crowded. This can’t just be layered on top; it needs to be embedded thoughtfully. Practically, though, breaking the skills down makes it easier to conceive a response – for example, on how AI works, how about an initial induction unit, perhaps with refreshers later in the journey?

The bigger and more ongoing effort to embed AI will likely involve educators examining how and what they teach. For ideas on this, I would highlight Jisc’s AI Cards (Jisc, 2023), which result from a collaborative look at how AI might be used in assessment activities; though framed around assessment, they work as learning activities too. Models can guide us as well – the Generative AI CHECKLIST developed by Sue Beckingham and Peter Hartley (2024) offers a practical framework that can be used instantly. Use it with students, and use it yourself, to look at AI use through a critical lens.

It feels like we are in an experimental era. We need to try things out and assess what works. I am also personally interested in how we might proactively use AI to develop feedback literacy – having AI as a non-human partner may go some way to de-escalating the emotion in learning to use feedback, to seek feedback, and to notice what we each need.

Co-learning with AI

One of the most promising ways to approach AI in education is through co-learning. By exploring AI tools alongside students and colleagues, we can foster a collaborative environment where AI’s potential and limitations become a shared inquiry. I looked at examples where students and staff had worked together to explore large language models (LLMs) and assess their usefulness. As a good example of this, I recommend my colleague Heather Campbell’s work, undertaken and published with her Master’s students (see Campbell et al., 2024). The paper shows that educators and students can work together to take a critical view and assess where AI is useful and where it fails. The student group articulated that they value deep learning and that AI, used badly, could get in the way. It reinforces that while AI might assist with tasks, intellectual engagement remains irreplaceable.

The potential for AI as a teaching assistant 

One possibility for AI is that it may take the strain off faculty by acting as an assistant. We explored, playfully and by example, how this might work. This was not a ‘telling’ but more an encouragement for others to consider how AI could assist them. I shared my adventures in turning our Professional Standards Scheme documentation into a chatbot for applicants using Google’s NotebookLM (the jury is out on how successful this will be). I also experimented with making mind maps from my lecture material, after being inspired by Winn Wing-Yiu Chow’s blog (Chow, 2024), only to find the process was a helpful planning step for lecture creation in itself. I have also been using AI to create help videos, which otherwise take me a while to produce. And I shared how I had used AI to help generate case studies for use in class – for example, case studies for inclusive practice – and how it helped me personalise content for a specific group of students I don’t usually teach by customising research methods resources for physiotherapists. However, as the conversation at the conference reminded us, there is no single answer to the question of how we should be working with AI. Every institution and every educator needs to determine how AI aligns with their values, pedagogy, policies, and environmental and ethical beliefs. Social media is a great place to find ideas on what may work (LinkedIn and BlueSky for me these days).
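If you like to script this sort of thing, here is roughly what that case-study generation could look like in code. This is a minimal sketch rather than a record of my actual workflow: it assumes the OpenAI Python client, and the model name, prompt wording and helper function are invented for illustration – any LLM chat interface would do the same job.

```python
# Minimal sketch: drafting a personalised teaching case study via an LLM API.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_case_study(topic: str, audience: str, focus: str) -> str:
    """Ask the model for a short, discussion-ready case study draft."""
    prompt = (
        f"Write a one-page teaching case study on {topic} "
        f"for {audience}. Emphasise {focus} throughout, "
        "and end with three discussion questions."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap in whichever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example: research methods content personalised for physiotherapy students.
print(draft_case_study(
    topic="sampling strategies in clinical research",
    audience="postgraduate physiotherapy students",
    focus="inclusive practice",
))
```

Whichever route you take, the output is only a draft: it is the educator’s confident subject knowledge (skill 4 above) that catches the junk before it reaches a class.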

Institutional responses

For completeness, it’s important to acknowledge the institutional context and the need for an institutional response. Institutional leaders should formulate responses and offer guide rails, but not seek to ‘fix AI’ – it is not a problem to be solved. My sense is that committees, governance groups, strategy groups, and working groups are being formed in different institutions at pace, but these need to be supported by action that cannot wait. The groups themselves should be learning communities, open to evolving as the tools and their uses grow. Responses must be multifaceted, reflecting staff training and support, an element of play, and appropriate policy. Sue Beckingham, Jenny Lawrence, Stephen Powell, and Peter Hartley (2024) offer a helpful model showing what an AI strategy may look like and include. Check out the diagram on the slides (or, better, the whole book) for more on this (see reference list). It is a useful starting point for university leaders pondering how to begin navigating AI’s opportunities and challenges in a meaningful way. Looking at the diagram reminds us that a response needs action as well as deliberation, particularly around up-skilling.

Finally, ChatGPT helped wrap up my talk (of course it did). I wanted to emphasise that while my talk had been about how we may use AI, that doesn’t mean I am a fan of displacing the important work of educators. An AI-generated poem reassured us that we will have work to do for some time yet! My inbox agrees; no one is sending their queries to AI instead of me just yet!

Here is the slide deck from the event at Zayed University, with notes so it makes sense! [Thanks to Sue Beckingham for permission to use images].

References

Acemoglu, D. (2024) ‘America is sleepwalking into an economic storm’, The New York Times, 17 October. Available at: https://www.nytimes.com/2024/10/17/opinion/economy-us-aging-work-force-ai.html.

Artificial Intelligence Office, UAE. (2021) UAE National Strategy for Artificial Intelligence 2031. Available at: https://ai.gov.ae/wp-content/uploads/2021/07/UAE-National-Strategy-for-Artificial-Intelligence-2031.pdf.

Beck, R. (2023) The Future of Work to 2030. London: Bloomsbury Press.

Beckingham, S., Lawrence, J., Powell, S. and Hartley, P. (2024) Using Generative AI Effectively in Higher Education: Sustainable and Ethical Practices for Learning, Teaching and Assessment. London: Routledge.

Beckingham, S. and Hartley, P. (2024) ‘The Generative AI CHECKLIST’. Available at: https://doi.org/10.25416/NTR.27022309.v1.

Campbell, H., Bluck, T., Curry, E., Harris, D., Pike, B. and Wright, B. (2024) ‘Should we still teach or learn coding? A postgraduate student perspective on the use of large language models for coding in ecology and evolution’, Methods in Ecology and Evolution, 15, pp. 1767–1770. doi: 10.1111/2041-210X.14396.

Chow, W.W-Y. (2024) ‘Lectures into Comprehensive Learning Resources’. American Association of International Education and Exchange Council. Available at: https://www.aaieec.org/post/ai-driven-examples-for-turning-lectures-into-comprehensive-learning-resources.

Gartner. (2023) Hype Cycle for Emerging Technologies. Gartner, Inc. Available at: https://www.gartner.com/en/research/methodologies/gartner-hype-cycle.

Jisc. (2023) Assessment ideas for an AI-enabled world. Available at: https://repository.jisc.ac.uk/9234/1/assessment-ideas-for-an-ai-enabled-world.pptx.

Liu, D., Bassett, M.A. and Iocono, C.L. (2024) ‘Engaging with AI in education: Four Separate Mindset Shifts’, Teacher Learning Network Journal, 31(2). Available at: https://tln.org.au/Web/Web/TLN-Journals/TLN%20Journal%20Public.aspx.

Mumtaz, S. et al. (2024) ‘Ethical use of artificial intelligence-based tools in higher education: Are future business leaders ready?’, Education and Information Technologies. doi: 10.1007/s10639-024-13099-8.

Narayanan, A. and Kapoor, S. (2024) AI Snake Oil. Princeton, NJ: Princeton University Press.

National Information Technology Development Agency (NITDA). (2024) National AI Strategy. Available at: https://ncair.nitda.gov.ng/wp-content/uploads/2024/08/National-AI-Strategy_01082024-copy.pdf.

Politico. (2024) ‘Britain must run towards AI opportunities, says Keir Starmer’. Available at: https://www.politico.eu/article/britain-must-run-towards-ai-opportunities-says-keir-starmer/.

Rogers, E.M. (2003) Diffusion of Innovations. 5th edn. New York: Free Press.
