LLMs are helping us write code more and more. But they’re also changing how we think about responsibility, and not always for the better.
I’ve been reviewing more pull requests lately where the author generated the code with an LLM, then submitted it without properly reviewing it themselves. Simple mistakes that any human would catch with a quick read-through are appearing more often.
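To make that concrete, here’s a hypothetical snippet of the sort I mean (the function and its bug are invented for illustration, not taken from a real PR):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Order:
    created_at: datetime

def get_recent_orders(orders, days=30):
    """Return orders placed within the last `days` days."""
    cutoff = datetime.now() - timedelta(days=days)
    recent = [o for o in orders if o.created_at > cutoff]
    return orders  # bug: returns the unfiltered input instead of `recent`
```

A quick read-through catches that last line in seconds. A reviewer shouldn’t have to be the first person to read it.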
The problem isn’t the AI-generated code itself. The problem is the unspoken assumption that “the reviewer will catch it.”
When you submit a pull request for review, you’re not just sharing code. You’re saying “I believe this solves the problem correctly and conforms to coding standards.” Code review was not designed to catch basic errors that the author should have spotted. The responsibility doesn’t shift to the reviewer just because an LLM wrote it.
The author is still responsible for the quality of their submission, regardless of who or what wrote the initial version.
You are the LLM’s reviewer.
Right now this feels like a temporary growing pain as we learn to work alongside AI tools. But bad habits formed now will be hard to break later.
This connects back to something I wrote recently about fundamentals: AI democratises coding, but it makes understanding more valuable, not less.
Code ownership is still yours.