Today I attended a community of practice meeting on artificial intelligence. Communities of practice (CoPs) are a great way of sharing and understanding different ways of working. I’ve been engaged in a number of these throughout my career and have always found them very rewarding. Our AI CoP feels particularly important given the pace of change and the fact that, together, we are figuring out the appropriate use of AI in learning and teaching. We are exploring what works, pondering ethical issues, wrestling with questions of attribution, and much else besides. This process of figuring out how to use AI, and how to respond as we scale up beyond our own experimentation, matters. It is also important to remember that we are in the early stages of such a roll-out: that framing casts our work as experimentation, trial, and play, and with it comes a tentativeness in which claims about what works remain fluid, open to change, and perhaps quickly superseded. I was struck by this quote from Olsen and Curtis Wynn at the Brookings Institution, writing about scaling up tech (and AI) in education. It isn’t an easy process: there are false hopes and messy roll-outs:
Scaling is “relational and occasionally improvisational. It requires moral decisions about trade-offs, understanding how an innovation operates differently in different places for different people, and continuous adaptation based on local evidence and changing circumstances”.
(Olsen & Curtis Wynn, 2025)
Based on work I was involved in with Sam Elkington, Edd Pitt, and Carmen Tomas on assessment change during the pandemic period (Elkington et al., 2023), I would add that we also need to work through the many legitimate concerns generated by change (often framed as resistance), as well as structural constraints, myths, and questions around QA, policy, and the permissibility of practices, before we can scale and embed.
My contribution at the CoP was really about sharing some of the things that I do. I have previously shared a reflection on my own use of AI (in a retrospective analysis of all my GPT threads), and today I shared the approaches I’m currently using around teaching, assessment, and wider administrative tasks, in case others might benefit, but also to give myself the opportunity for feedback. To share more widely, and to complement talks I am giving in the coming days, I have set up a TikTok channel: the short video format is a really useful way of offering a window on practice and seeing, practically, what people are doing and what’s possible. I’m not expecting millions of views, and it may not even be the right channel, but hopefully the videos will be of some use (and I get to polish my video-creation skills using VEED, Canva, and in-built phone tools).
As I share, though, I want to highlight a concern. It’s something that I feel, something I struggle to put my finger on, but I sense a real risk that our process of collectively figuring things out becomes divisive. I don’t feel this in my institution, but I do feel it online. Surfacing different perspectives about what’s okay and what’s not okay risks being perceived as a moral battle, and when statements about our use are posited as hard lines, I struggle with that. We’re working it out, right? Is it OK to use AI to assist in feedback production? Is using AI to support emails OK, or is that inauthentic? What if AI helps someone communicate better when they otherwise struggle (are the rules different then)? In an age of resource pressure, should we welcome AI co-creation as a great relief? Or is it a threat to thinking?

As we navigate the emerging tools, we should avoid an AI tribalism in which we silently judge each other’s use. At the moment, we’re in an experimental phase, a sense-making moment, together. There are many legitimate concerns in this space (environmental impact, deskilling, authentic interaction, and much more). Concerns should be surfaced, explored, and respected, but so too should experimentation. Tech change can be associated with moral panic (Orben, 2020), particularly in mainstream media. But this is our moment to work it out. Gilmore and colleagues (2025) suggest that universities are not panicking, but instead negotiating and integrating. In the throes of that process now, I sometimes feel internal contradictions about my own use; the negotiations are with ourselves and with each other, and my sharing is an honest account of my own figuring it out.
Elkington, S., Arnold, L., Pitt, E., & Tomas, C. (2023). Lessons learned from enabling large-scale assessment change: a collaborative autoethnographic study. Higher Education Research & Development, 43(4), 844–858. https://doi.org/10.1080/07294360.2023.2287730
Gilmore, J. N., Whims, T., Blair, B. W., Katarzynski, B., & Steffen, L. (2025). Technology acceptance, moral panic, and perceived ease of use: Negotiating ChatGPT at research one universities. Convergence: The International Journal of Research into New Media Technologies, 31(4), 1251–1266. https://doi.org/10.1177/13548565251337576
Olsen, B., & Curtis Wynn, M. (2025). How to avoid past edtech pitfalls as we begin using AI to scale impact in education. Brookings. https://www.brookings.edu/articles/how-to-avoid-past-edtech-pitfalls-as-we-begin-using-ai-to-scale-impact-in-education/
Orben, A. (2020). The Sisyphean cycle of technology panics. Perspectives on Psychological Science, 15(5), 1143–1157. https://doi.org/10.1177/1745691620919372
AI Statement: How did I use AI here? I voice-noted my text (writing via voice-to-text); I then edited and added; finally, I used GPT to check my final copy for errors. AI was not used to generate text, only to identify errors and improve readability.


