To kick things off, let’s look back to just before Christmas 2025. You mentioned that when Anthropic dropped Claude Opus 4.5, it felt like a turning point for you and the developer community. What changed?
It was significant, really. For the first time, you could set the AI a task and be reasonably confident it would consistently deliver real quality. Claude 3.5 Sonnet and GPT-4 were already being used for agentic coding tasks before this, of course, but for me this was the first model that didn’t need to be “kept on the rails” quite so much. It was the first moment I thought we needed to improve our processes to support it generating more code for review. So, like many devs, I ended up spending my Christmas holiday exploring it, pushing it to see what the picture would look like when we restarted in January.
We’re now two months down the line and 4.6 has already dropped. How has this actually changed your day-to-day workflow?
It’s important to preface this by saying there is no ‘perfect’ technology. However, we’ve been continually pushing and testing to understand what is achievable. We’ve moved beyond just informally ‘chatting’ with a Large Language Model (LLM) in an Integrated Development Environment (IDE). Depending on the context, different members of the team will use Neovim with OpenCode, or Claude Code, to work directly with the agents in the terminal (giving them access to read and write files) and manage that process through GitHub and Linear.
The biggest change, though, is the level of abstraction and the increasing shift from a coding perspective to a natural-language perspective (with code reviews). We can now build out a specification in dialogue with the LLM, go back and forth over the implications, and turn that into a ticket. That ticket then becomes the prompt for the agents to set up a new branch, build the feature, and submit it for review before it is merged into the main codebase.
So, does this mean you’re pivoting into a newer, slightly different role?
Yes and no. For the simple, mundane coding tasks that have been done a hundred times before, we’re more frequently using agents to support the development team. In late 2025, we would have been coding most things with light-touch ‘assistance’ from agents. Now we can direct, observe and manage Claude as it generates code for review. We act as the check and balance to ensure it meets the specification while proactively intervening to avoid any drift from expectations.
It’s not perfect (though I’ve yet to meet a developer who is), and there is a significant investment required in terms of time and ongoing budget, but the gains in speed and efficiency are substantial. It allows the team to spend that extra time on building out the aspects of a product that really make it stand out. Humans remain the essential step for identifying whether things create impact for the right reasons.

You’ve built such a wealth of experience as a digital lead over many years. What’s been the psychological impact for you of this accelerated pace?
It’s mixed. It oscillates between seeing incredible potential, offset by the overwhelming nature of keeping up with a sector that always moved pretty quickly but now seems to leap forward in ever-shorter iterations. We’ve moved from a relatively stable environment to what feels like a four-week cycle of “right, what can the next version do that the last one couldn’t, and what processes need to change?”.
It can’t all be about speed though, otherwise we run the real risk of quality suffering, and the reward for working faster is often just more work. The focus has to be on using the efficiencies to take a breath and bring the focus back to quality: how can we use any time ‘saved’ to build something really special?
You mentioned that it’s not just developers seeing the benefit. How is this impacting design teams?
It’s been fascinating. There’s often been a bit of friction where a designer pushes a concept and a developer says, “hmmm, that may be out of scope/budget to build”. Now, a designer can prototype in Figma and see it built almost simultaneously (save for some cleanup, responsiveness requirements and accessibility).
They get real-time feedback on practical limitations. It’s a “skill acceleration” and a “tool acceleration” at the same time. We’re removing that linear loop of designing, handing over, developing, and testing. We can now get a sophisticated, working prototype to a client much faster, which helps bridge the gap and provide us with invaluable feedback much earlier in the process. Working in new, more efficient ways, for certain phases, enables us to reinvest time and focus on the more complex and rewarding problem-solving.
And what does that mean for our clients?
We’re strategic communicators. Anything that helps us convey a message more coherently and articulately is a win. Instead of showing a client a static Figma file where they have to zoom in and out and ‘imagine’ how it feels on a device, we can send them a URL and encourage them to “play around with it!”. That is massive.

You’ve experimented with “personifying” these AI agents (giving them names and faces) – what are your thoughts?
It was a bit of fun (part of seeing what was possible over the Christmas period). For people who don’t work in a terminal, seeing lines of text flying by doesn’t really connect. Giving agents names and descriptions helps people understand their scoped skills and grounds what the agents can do… but for the life of me I can’t remember the names I gave them; they were modern takes on Victorian engineers. I think it was a late night.
However, it can go both ways. Those optimistic about AI could see what the experiment was trying to achieve and found it engaging. For those who are more apprehensive, and perhaps a little fearful, I imagine it was just super weird!
What does this shift mean for someone just starting their career in development?
It’s potentially going to be very hard for junior developers, and I know young people currently at university doing relevant courses are concerned. It’s a difficult choice for a business to invest in training a junior, but I think the most important skill won’t change: complex problem-solving. That means specification writing, understanding constraints and trade-offs, and knowing what to do when things fail.
But we do need to get away from the term “prompt engineer”. If you have someone who can solve complex problems, the requirements they give the LLM will naturally be better.
Final question: How would you feel if it all went away tomorrow? If we put the tech back in the box?
I’d say it depends. If they just turned it off and said, “right, everyone, we’re going back to how we did things before”, I think I’d be okay with that; the problem-solving isn’t going to change. That said, after a week we would inevitably be stuck with laborious tasks that cut into the time we spend on the quality features, and I might just find myself wishing it back again.