I’ve been using AI for a few years now. What started as playful experimentation has grown into something much more integrated into my professional and personal life. Over the summer, I took the time to step back and reflect on how my use has changed. This may be indulgent introspection, but it’s intended to spark conversations about how we are individually utilising these tools. I’ve used AI for everything from creating resources and analysing data to generating feedback and coaching myself through tricky tasks. I have analysed my use because the conversations I see and hear are often generic – touching on prompting, productivity, or writing assistance, but rarely in any detail.
My platform of choice has been ChatGPT (the paid version), and because I’ve rarely deleted my threads, I have a useful digital footprint to look back on. I asked GPT to describe my usage and it replied with:
“You’ve moved from curiosity and experimentation to professional integration, ethical leadership, and now towards high-level data-informed innovation.”
Accurate, maybe, but not very helpful. So instead, I looked at the threads myself. What follows is an account of how my use has evolved, what I’ve noticed along the way, and what it might mean for how we work with AI in future. These stages aren’t fully discrete (they overlap), but each reflects a dominant pattern of use at the time.
No judgement as you read – this is shared in the interests of being helpful 😂
Phase 1: Exploration and playful curiosity
I began experimenting with AI in playful, exploratory ways – for example, asking if it could write poetry about ‘xxxxxx’ or a story about ‘xxxxx’. I asked if it could help with basic tasks (e.g., editing text, simplifying content, tidying grammar, or repurposing information such as writing LinkedIn blurbs or emails) and tried these out. This phase involved testing AI’s capabilities with relatively closed and short tasks, leading to short threads. A memorable moment was getting ChatGPT to create a poem about a family garden shed as it was dismantled. I input the things to include, and it generated an ‘ode to the shed’. This moved my family WhatsApp group, and they lauded my poetic skills in capturing the joy of generations of children who had played in said shed… then I fessed up, with a minor lesson in integrity. At this time, I was also trying to replicate possible student cheating behaviours – understanding how students may use AI. I created some responses to essay questions and, like others, stood back with some amazement. This was a time of revelation at the power of the new tool on the block.
What do I notice?
- Use was mainly about ‘finding out’ what AI can do or helping with productivity and accuracy. It was also a time of exploration as I looked at capability as if I was a student.
- Playful tasks were about working out the potential of the tools.
- Looking back, I had two coexisting mindsets: ‘How can this tool aid cheating (and how might we respond)?’ and ‘How can it help me be more productive?’ These co-existing positions are loaded with unfair assumptions, which I can only attribute to being in the finding-out phase of use.
Phase 2: Early co-creation – building trust
What did I do? I moved into using AI for educational resource creation, such as training scripts, workshop outlines, and guidance documents. Here, then, there is a bit more trust in AI to do more important tasks. Examples include inputting a description of some research methods and asking GPT to create ideas for how people from different backgrounds would, or could, use those methods. This helped me prep for a class where I was teaching a familiar subject to an unfamiliar group. I also created interview transcripts that could be used for a class activity on data analysis for practising coding. Nothing was taken at face value, but this was the start of co-creation.
What do I notice?
- AI became a co-author in pedagogic or professional spaces.
- I started shaping outputs collaboratively with AI; I was clear about the inputs I wanted to make and asked for changes when things were ‘off’ or when adjustments were needed.
- Shaping outputs for different audiences or purposes was important here; I hadn’t outsourced my brain, but I was using AI to save time. Importantly, I was not using AI to ‘blag’ my way through teaching unknown material, I was using it to re-shape things I was already confident of.
As time went on, I felt more confident to produce higher stakes outputs … getting braver, but no less critical.
Phase 3: [More] Complex application
What did I do? Usage evolved to address a wider range of tasks, and the instructions given in prompts were clearer and more directed. For example, development of resources included requests for outputs with specific voice, tone, structure, and accessibility features. At this point I became more defensive of my own style, noticing that AI ‘vanilla-ised’ my writing, making it less like me. Here, I was clearer that I did not want to be erased, merely assisted.
What do I notice?
- I started layering multiple needs (tone, ethics, accessibility, workload) into AI prompts.
- I exercised a high level of editorial control, often asking for minimal change or iterative refinement. I needed to remember the good aspects of my own style and be continually confident in them.
- My use here was almost exclusively professional, focused on audience tailoring. AI was framed as an assistant.
Phase 4: Analytical power tool (mind the gap)
What did I do? At this point I was working on multiple projects, so I wanted to see if AI could help. It had become an integrated tool. One task was a systematic review (now completed and submitted). Second, I was working on a task to analyse an ‘inbox’ for a service within my university. Third, I was undertaking research into AI policy. I asked AI to support research for all of these, including coding qualitative data (from RIS files) and building tables and visual reports. It frequently made errors, which really alerted me to the risks of mindless use, even when uploading one’s own data. For the inbox analysis, I wouldn’t upload other people’s messages without permission, so I had AI coach me to get started with Python to the point where I could complete the task I needed to. I had tried a MOOC on this topic, but found GPT way more useful. AI was a pretty effective coach, but only if you are a responsive learner. Still, it was in this ‘era’ that I felt AI had really started to add value for me, beyond being an editing buddy.
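To give a flavour of the kind of thing I mean, a task like the inbox analysis needs surprisingly little code. This is purely an illustrative sketch – the categories, keywords, and messages below are invented for the example, not the actual (confidential) ones I worked with:

```python
from collections import Counter

# Invented categories and keywords for illustration only.
KEYWORDS = {
    "timetabling": ["timetable", "clash", "room"],
    "assessment": ["deadline", "extension", "resit"],
    "technical": ["login", "password", "vle"],
}

def categorise(message: str) -> str:
    """Return the first category whose keywords appear in the message."""
    text = message.lower()
    for category, words in KEYWORDS.items():
        if any(word in text for word in words):
            return category
    return "other"

def summarise(messages: list[str]) -> Counter:
    """Tally how many messages fall into each category."""
    return Counter(categorise(m) for m in messages)
```

Running `summarise` over a list of messages gives a simple count per category – exactly the sort of starting point an AI coach can talk you through, and exactly the sort of output you still need to sanity-check yourself.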
What do I notice?
- I now used AI as an analytical partner and teacher, though I lost count of the times I said, “I think that is wrong, and here is why.” As it politely replied “my mistake”, I was finding its limits and weak spots. It would have been foolish to offer full trust.
- It was in this space of immersive use that I imagine we can work with students to help them develop a full appreciation of the potential and the limits of AI. If they themselves can experience its limits and errors, I believe it will make very clear why subject knowledge, source provenance, and even strong analytical skills are needed. AI can do the heavy lifting, but it can be erroneous.
Phase 5: Getting personal
What did I do? Along the way I had used GPT for the occasional personal thing – making an optimised itinerary for a walking tour of a city, or testing out “here’s what’s in my fridge, inspire me to make something” (back in phase 1). More recently, I have used it as a personal assistant (help me make a packing list for a specialist trip), and I have used it more like a search engine. I appreciate this is controversial and arguably unnecessary. Perhaps the starkest ‘personal’ moment came when I was working through a diagnosis for neurodivergence and had a personal wobble, thinking I was crazy (apparently, this is normal in the process). I used all of my previous threads as text for analysis, looking at when I may have shown trait X or Y. I am not recommending this for others, but it was interesting – and, more generally, examining our own threads can be a useful reflective tool. If students are using AI platforms, then perhaps it is useful to gather threads up and see what they can learn about their writing, or confidence, or ways of working. More recently, I have found myself slipping into lazily using AI rather than Google as a search engine.
What do I notice?
- I am not sure why AI became a more normal go-to than a search engine; the phone app interface was a factor, but the need for quick, condensed answers mattered too (though not for everything).
- Letting AI replace more traditional web searches risks making us blind to its limitations and biases, as outputs become normalised and taken for granted. At least with a search engine, there is a middle step of choosing your source.
- Health-related advice, whatever the topic, will always be a tricky space with AI. I have seen news coverage of the terrible effects of AI coaching on vulnerable groups. We need to understand more about this.
- AI threads create a rich resource to reflect upon – what do they say about our confidence, our writing strengths, or wider uncertainties?
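To make the thread-reflection idea concrete, here is a minimal sketch of one thing you could do with exported thread text: count hedging phrases as a rough (and unvalidated) proxy for confidence. The phrases are invented examples – choose markers that matter for your own reflection:

```python
import re
from collections import Counter

# Invented hedging phrases for illustration - not a validated measure.
HEDGES = ["i think", "maybe", "not sure", "i wonder"]

def hedge_counts(thread_text: str) -> Counter:
    """Count occurrences of each hedging phrase in one exported thread,
    case-insensitively."""
    text = thread_text.lower()
    return Counter({h: len(re.findall(re.escape(h), text)) for h in HEDGES})
```

Run it over each exported thread and compare the tallies over time; the numbers prove nothing on their own, but they can prompt the kind of reflective questions above.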
Overall, my early use was playful and low stakes. Over time, I gradually allowed AI into higher-stakes professional work, but only after testing. I use AI to smooth work but try to retain my voice. Whether creating teaching materials, coding qualitative data, or visualising research outputs, AI added most value when used as a collaborator or coach, not a substitute. My repeated “I think that’s wrong – here’s why” moments highlighted the need for critical engagement. You cannot be off your game – you need to notice the errors. Experiencing AI’s mistakes firsthand reinforced the ongoing importance of subject knowledge. Health-related or self-reflective use is particularly complex and should be approached with caution. I noticed that each phase (or ‘era’) built on the one before. Playful experimentation laid the groundwork for co-creation; co-creation built the confidence and vision needed for complex analytical use; and this got me into a habit of use that spilled into the personal. I am aware of environmental considerations around this post, and these do bother me; that is something I want to look at more before habits become ingrained.
So, who cares? It has been quite interesting to look at my own use, but somewhat indulgent also. I just wonder how this compares to other people, and it raises a load of questions for me:
- How are we all using AI?
- Is there a path that we all follow, or are we on different journeys with this technology?
- How serious is the digital divide, where some people start later while others are already using it in quite sophisticated ways?
- How do we know when our AI use has become too habitual or uncritical?
- Are we working with students in a simplistic way when we just ask them to rewrite an AI output?
- Do we need to think more reflectively about our own use? Really exploring this has been quite confronting for me.