There’s a moment when an idea you’ve been carrying around for months starts taking shape in a matter of hours, and it feels genuinely new. Not just faster, different. I’ve had that feeling more than once over the past year working with AI coding tools, and I’d be dishonest if I said it wasn’t exciting.
But somewhere along the way, I started asking a different question.
How much of this is mine?
When AI helps you build something (suggesting the structure, filling in the implementation, catching the edge cases), how much of that project actually belongs to you? Not in a legal sense. In a competence sense.
I’m not being precious about writing code by hand. But here’s the thing about friction: it’s not just an obstacle. It’s actually the mechanism by which understanding forms.
When you get stuck on a problem and have to dig yourself out, something happens. You look at the internals. You read the error message carefully. You build a mental model of why something behaves the way it does. That model stays with you. It becomes pattern recognition you can apply the next time. The path from “I wrote it” to “I understand it” runs directly through that struggle.
When AI absorbs that friction, it can produce correct output while bypassing the process entirely. You get the answer without the understanding. And the next time you hit a similar problem, you prompt again, because you never built the map.
The question isn’t whether AI is useful. It clearly is. The question is what you’re trading when you let it do more and more of the thinking.
What Copilot taught me about myself
The most honest data point I have is this: I considered cancelling my Copilot subscription because I noticed I was becoming lazy and less focused.
Lazy is a broad word, so let me be specific. What I noticed was a change in how I started problems. Before: I’d read through the context, form a rough approach in my head, maybe sketch something out. After a few months with Copilot: I found myself opening a file and almost immediately waiting for a suggestion, before I’d fully thought about what I wanted to do. The autocomplete had become a first instinct, not a second one.
That’s a subtle but meaningful shift. It’s not that the suggestions were wrong. Often they were fine. It’s that the act of reaching for them had started to replace the act of thinking. My attention span was shorter. I was less likely to sit with a problem before moving on. I’d become more passive, in a way I hadn’t consciously chosen.
You’re still producing. The output looks fine. But the cognitive mode is different, and over time that adds up.
The risk scales with your foundation
This is where I think the conversation gets more serious: the effect isn’t the same for everyone.
If you’re a senior engineer with years of context, AI can be a genuine amplifier. You know what good looks like. You can evaluate suggestions critically. You catch the subtle mistakes. You ask better questions. The tool accelerates you because you already have the foundation to steer it.
If you’re still building that foundation, the dynamic is different. The productive struggle I mentioned earlier is not just useful, it’s almost irreplaceable at that stage. The process of debugging something from scratch, of understanding why a design decision matters, of recognizing a bad pattern before it becomes a problem: those capabilities come from having done the work, not from having seen the output.
I think AI used carelessly by junior developers doesn’t just slow their growth. It can actively shape them away from the kind of deep understanding that makes someone genuinely capable rather than just productive-looking.
The real risk isn’t replacement
There’s a lot of conversation about AI replacing developers. I think that framing misses something more immediate.
The risk, at least for me, isn’t that AI takes my job. It’s that I let it take my competence (the accumulated judgment, pattern recognition, and contextual understanding that makes a senior engineer different from someone who can prompt their way to a working prototype).
AI rewards people who already know what good looks like. It accelerates insight when you have the foundation to evaluate what it gives you. When you don’t, or when you stop maintaining that foundation, you become dependent on a system you can’t fully reason about.
What intentional use actually looks like
I didn’t cancel Copilot. But I did change how I use it.
The shift was simple: I try to form my own approach first, before I ask AI anything. Not just a vague direction, but an idea specific enough that I could start without it. Then I use AI to accelerate, validate, or challenge that approach. Sometimes it shows me a better path. Sometimes I find I’d already gotten there. Either way, I’ve done the thinking.
I also try to be deliberate about when I let AI drive and when I don’t. Boilerplate, repetitive patterns, documentation: fine. Logic I don’t fully understand yet, architectural decisions, anything I’d struggle to explain to someone else afterwards: I work through those myself first.
The distinction I keep coming back to: are you using AI to extend your capabilities, or to externalize them? One makes you better over time. The other quietly makes you less capable than you were.
What truly makes the difference as a software engineer is the ability to craft and package a tailored solution for each specific context, not just to follow the latest tool, trend, or suggestion. That judgment doesn’t come from prompts; it comes from deliberate practice and honest reflection about what you actually understand.
The question is worth asking regularly, especially when the output looks good and everything feels fast.
Are you extending yourself, or outsourcing yourself?