Why I Doubt Developers' Code Now (But I Probably Shouldn't)
16-04-2026
I used to be okay with code. Not great, but okay. A developer would build something, I'd test it, I'd scan their code to understand the implementation, we'd find bugs, they'd fix them.
It was simple.
Then AI agents were suddenly everywhere, and something changed.
Now, whenever I read someone's code, there's this little voice. "Did they really write it well? Or did they just... let the AI do it?"
It's subtle at first: a missing edge case here, a function that feels a bit too clean there, too many emojis (which were rare a few years back). But increasingly, I catch myself thinking, "This feels like it was written by a tool." And not in a good way.
I know how this sounds.
I know I'm supposed to be pragmatic about it.
I literally work in QA. I should be looking at unit tests, integration tests, test coverage reports to invalidate my doubts. I should be running through my mental checklist instead of indulging in... well, whatever this is. Bias, I guess. Skepticism masquerading as due diligence.
But this is what it is.
The Real Problem
I don't actually think AI-written code is worse. That's not what bothers me. I've seen absolutely terrible code written by humans. I've also seen clean, well-structured code generated by AI tools. The quality isn't the issue.
What gets under my skin is the confidence gap. Or maybe it's the confidence surplus? I'm not even sure anymore.
(Which is kind of why I'm still keeping my own project (Evaliphy) in beta, if I'm being honest. I've been building a tool using agents for a few months now, and it works. It actually works decently. But I keep finding myself hesitating before opening it up to everyone. "Is it really ready?" I ask myself. "Did I miss something? Did the agents miss something?" The irony isn't lost on me: I'm using the exact same tools I'm skeptical about, and I'm still not confident enough to ship it. Maybe that tells you something about how deep this doubt actually goes.)
When a developer used to hand me their code, there was this implicit message: "I've looked at this, I'm reasonably sure it works, I'm ready to defend it." Even if it had bugs, at least they'd thought about it. They'd owned it.
Now, I sometimes feel like I'm getting code that someone scrolled through once, maybe tested locally, and then shipped because the AI agent said it was good. And maybe the AI was right. But they'll never really know unless they dig deeper.
That's what bugs me. Not the code itself. The uncertainty about whether the person behind it is certain.
The Uncomfortable Part
A lot of this probably says more about me than it says about them.
I've been doing QA for more than a decade, long enough to know that something always gets missed. Always.
No amount of testing catches everything. But there's a difference between "we did due diligence and something still slipped" and "we didn't really check that thoroughly."
The first one doesn't bother me much. The second one does.
And maybe I'm reading too much into things.
Maybe that developer with the suspiciously clean code really did their due diligence. Maybe they tested it properly, reviewed their own work, and just... wrote good code. Maybe the AI just helped them write what they would've written anyway, only faster.
But I don't know that from looking at the code. And I've stopped trusting my intuition about when to assume the best versus the worst.
What I'm Actually Worried About
I think what bothers me is that the barrier to entry has changed. It used to be harder to ship code. You had to know more, or at least think you knew more. There was a gatekeeper function happening, even if it was just in your own head.
Now? Now anyone can generate something that looks right. Sounds right.
But without the work of actually understanding it, you're building on a foundation that feels solid but might not be.
I don't know if that's a real problem or if I'm being overly concerned about it.
Probably both.
The Part Where I Should Know Better
Here's where I'm supposed to pivot and talk about how I'm working on this. How I'm going to stop assuming and start testing properly. How I'm going to look at the metrics and ignore my gut.
And I will. I should. That's literally my job.
But I also think this feeling is telling me something worth listening to, even if I'm wrong about the conclusion. There's something real in the gap between assumed competence and demonstrated understanding. The fact that I'm picking up on it, even when I'm wrong about the details, means something.
I just don't know what yet.
In the Meantime
So what do I do? I do what I've always done. I test the code. I look for edge cases. I think creatively. I take help from AI, too.
I report my bugs. I do my job, and I try not to let my biases decide what I'm looking for.
But I'm also going to keep noticing that feeling. The doubt. The suspicion that something's not quite right.
Maybe it'll fade as AI agents become more normal (aren't they already?) and I get used to seeing code that's AI-assisted. Maybe I'll realize I was being ridiculous, and developers are doing exactly what they always did, just faster.
Or maybe I'll be picking up on something real, and eventually the industry will figure it out.
Either way, I'm stuck with this bias for now.