What AI-Assisted Coding Feels Like in Early 2026

A few weeks ago at work, during an engineering all-hands meeting, we were given a new mandate: no more handwritten code. Aside from quick one-liners, everything should be generated by AI, with the expectation that this would make us more productive.
Nothing about that was shocking. If anything, it felt like a natural continuation of where things have been heading. And yet, after the meeting wrapped up, I found myself circling around a vague sense of discomfort — not disagreement, but quiet unease I couldn't put into words, a sense that something important was being sped past.
Later that evening, during a conversation with my husband (who is not technical), I tried to explain this unease. It took me a while to realize that what felt "off" wasn't the technology itself, but how it was changing my relationship to understanding the systems I work on. In trying to put that into words, I came up with a metaphor that I thought was worth sharing.
It's Really Fast
Before I dive into the metaphor, let me back up a few steps. To understand why this mandate left me with that quiet unease, it helps to look at the usual rhythm of software development. The exact process can vary depending on the company, project, or team, but many long-running projects follow a recognizable pattern:
- Analyze the requirement
- Explore the codebase
- Make design decisions, potentially formalized as Architecture Decision Records
- Write the code and automated tests
- Iterate and refine
- Deploy to staging or a test environment and address feedback from business users (and/or QA, if the project uses human QA testers)
- Submit for review and address reviewers' feedback
- Deploy to production, monitor/observe, fix bugs
To be clear about the context: I'm not talking about side projects or early-stage prototypes. I'm talking about long-running, mature products with large user bases, real revenue, and real consequences when things break. The kind of systems where behavior accumulates over years, institutional knowledge is uneven, and changes ripple in ways that aren't always obvious upfront.
Before AI was in the picture, the act of taking time to think through each of those steps served a useful purpose: you marinated in the problem. That slower cadence gave you space to really absorb the requirements, grapple with edge cases, and deeply understand the business problem and code you were working on.
With AI-assisted tooling, many parts of that process happen in minutes, where you previously might have spent hours, days, or even weeks. Drafts of design options, trade-off analyses, even scaffolding for new features or an entire solution can be composed almost instantly. It's remarkable!
But what hasn't sped up is the human side of this work. This was the source of the discomfort I couldn't name earlier — the sense that speed has increased, but our ability to absorb and fully understand the work has not.
Output vs. Understanding
I've landed on one sentence that captures this core tension:
The volume of work AI can produce exceeds current human capacity to reason about it.
That's not a criticism of the technology; it's an observation about human cognition. There's only so much time and mental energy we have to understand what's been generated, evaluate whether it actually matches the business rules, find edge cases, and check that there's appropriate observability and logging for long-term support.
Sure, AI can generate documentation, and it can write code that adds logging and metrics, but the engineer responding to the pager in the middle of the night is still a human who has to decide whether those logs are actually helpful when something goes wrong. That's a kind of understanding that takes time, something the older, slower cycle of writing code naturally gave us.
In other words, AI accelerates production of documents and code, but it doesn't accelerate comprehension or accountability at the same rate.
A Metaphor
This was the metaphor I eventually landed on while trying to make sense of it out loud, one that felt intuitive even outside engineering circles.
Imagine your usual weekly grocery shopping. Let's say it takes a couple of hours between driving to the store, finding parking, picking up what you need, checking out, and driving home. You're doing it in your trusty old Honda Civic: reliable and predictable, even if the acceleration leaves something to be desired.
Now imagine that overnight, someone has swapped out your Honda Civic for a Formula 1 racing car, and you're expected to complete your usual errands 10× faster because you now have a car that is capable of insane speeds.
That sounds appealing at first, exhilarating even, until you consider a few things about F1 cars:
- They require a special track that includes run-off areas, barriers designed to absorb mistakes, and strategically placed medical facilities to ensure rapid response in case of accidents.
- They run on engineered fuel blends, optimized and tuned for each engine to extract maximum performance.
- They require a highly trained driver: an athlete conditioned through rigorous physical and mental training to endure extreme forces, heat, and sustained focus at extreme speeds.
- They are supported by a specially trained pit crew, telemetry systems, and safety infrastructure.
Now also imagine that while you're expected to run errands faster because you have a faster car, you're still fully liable for any property damage or accidents along the way. You can't blame the car; the responsibility still rests entirely on you. What once would have been minor mistakes now feel terrifying: every slip at F1 speed could have serious consequences. You need not just speed, but careful attention, foresight, and an awareness of every risk around you.
On a public road full of traffic lights, stop signs, pedestrians, buses, school zones, potholes, and lanes closed for construction, you'd be lucky to make it out of the driveway in one piece!
That's the feeling AI-assisted coding can evoke: exhilarating to see solutions appear almost instantly, yet terrifying in the sense that the responsibility for correctness and stability remains entirely on you. AI gives us an engine capable of ludicrous speed, but the roads we're actually on (mature codebases, complex requirements, regulatory or compliance constraints, legacy interactions, and real users) haven't magically turned into race tracks overnight.
We've gained horsepower, but we haven't gained the infrastructure, tooling, or organizational practices designed around moving at that speed without causing serious damage. Some of that may come with time, but for now, we're still figuring out how to handle this new power.
Not Anti-AI
To be clear: I'm not arguing against AI-assisted coding. I've been using it almost exclusively in my work for nearly a year now, and the benefits are obvious. This is not a nostalgia post for going back to the "old" days of handwritten code. The AI genie isn't going back in the bottle.
But there is a tension that needs to be acknowledged: engineers are expected to produce more in less time, while remaining fully responsible for the correctness and reliability of what ships. At the same time, the cognitive load of truly understanding what has been generated hasn't gotten any easier — if anything, it can feel heavier, because the pace leaves less space to absorb and reason through the work.
So What Now?
I don't think there's a single correct solution here, at least not yet. We're still learning what it means to work responsibly with tools that can move faster than our own capacity to reason. My current approach has been to add an explicit planning and reasoning phase as a counterweight to AI's speed, which I've written about in "Slowing Down AI on Purpose". While not a silver bullet, this practice has helped me stay grounded in understanding rather than just throughput.
As an industry, we're still in the early stages of figuring out what responsible speed looks like. It's likely that more best practices will emerge as we learn where the serious risks tend to hide. For further reading on responsible AI-assisted coding and human oversight, see "AI writes code faster. Your job is still to prove it works."