Are AI Assistants Making Us Worse Programmers?

Oh, many of you are going to let yourselves become output devices for your external AI brains. And most won’t even understand how it works or who controls it. Is it good or bad? Who knows, but it will happen, and humans will justify it just to keep their sanity. So, welcome to the future.

Can I go against the grain and say no? AI-generated code is still too limited in usefulness. I’m not sure many skilled programmers are actually relying on it.

You could argue that coding students who depend on AI too much may struggle with large projects, but if you’re a programmer who learned before AI tools came along, I doubt you’re affected much.

This conversation interests me because I absolutely know AI boosts my productivity and quality of work by two to three times. I don’t just trust the first response; I often work through tough problems with many follow-up questions.

Copilot’s autocomplete is underwhelming and useful mainly for trivial tasks. I give it detailed comments, and I only expect useful results when the completion is just a few lines long. For someone who sometimes forgets syntax, it’s a huge time-saver.
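To give a sense of what I mean (a made-up Go illustration, not an actual Copilot transcript): the comment is what I type, and the body is the kind of few-line completion the autocomplete handles well.

```go
package textutil

import "strings"

// splitCSVLine splits a comma-separated line into fields and trims
// surrounding whitespace from each field. (Hypothetical example: the
// comment is the "prompt", the body is the expected completion.)
func splitCSVLine(line string) []string {
	fields := strings.Split(line, ",")
	for i, f := range fields {
		fields[i] = strings.TrimSpace(f)
	}
	return fields
}
```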

For everything else, a chat tool like Copilot Chat or Claude is crucial. I spend lots of time moving code back and forth and forming questions, but that process alone makes my final product much better. Since I work at a startup, many problems aren’t clearly defined, so this has a big impact.

AI isn’t perfect, often not even great, but once you learn how to prompt it effectively and weed out inaccuracies, and leverage its strengths, it becomes an incredible force multiplier.

Never used it; I just don’t get how it could help me.

I bet most current programmers couldn’t code in assembly if their lives depended on it. Most web developers seem disconnected from semantic HTML, too. We move forward, and some things inevitably get left behind.

No more than Google or Stack Overflow do.

Overall, it’s making me a better programmer. I learn every time I use Codeium or Claude, and they make things quicker. But if you can’t understand what the generated code is doing, you’ll end up struggling anyway. I recently learned about tview, bubbletea, and lipgloss in Go, which I might never have stumbled upon otherwise.
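For instance, here’s the sort of minimal lipgloss snippet that got me started (my own toy example, not from any real AI session):

```go
package main

import (
	"fmt"

	"github.com/charmbracelet/lipgloss"
)

func main() {
	// Style a string with bold pink text and some horizontal padding.
	style := lipgloss.NewStyle().
		Bold(true).
		Foreground(lipgloss.Color("205")).
		Padding(0, 2)

	fmt.Println(style.Render("Hello from lipgloss"))
}
```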

When the various AIs can’t deliver a functional solution, I have to think deeply. This has improved my approach to algorithms, honestly. “Explain this code” has proven valuable, even for code I wrote years ago, leaving me thinking about my past thought processes.

It really depends on your intention behind using AI. If you rely on it without understanding the generated code, then yes, it could make you worse. But if you aim to build on your existing knowledge, it could make you better — provided you ensure what you’re shown is accurate.

To use these tools confidently, you need enough prior experience to make sense of the AI’s output. You might not need to understand the concepts fully at first, especially if you’re just experimenting with the results, but it’s still a tool like any other and can only take you so far.

Not if you couldn’t code before using AI.

It all begins with the debugger.

Do calculators make us worse at solving math problems?

If you’re using them for real work, yes. Today, I had to click 1000 buttons on a web page, and ChatGPT made that tedious task happen instantly.

I think so.

They seem valuable if used properly. They shift some focus from writing code to validating it.

I’m curious about the implications of AI-generated code for codebases and libraries. The author notes that instead of reaching for reusable components, one might just rely on AI output. What does that mean for a codebase or for open-source projects?

@Ari
From my experience, I can tell you it’s a mess. Now, any sort of migration or library update has to be done in multiple places. LLMs are not consistent, so you get subtle bugs in some copies of the logic but not in others.

I’ve noticed methods ballooning to hundreds of lines because everything gets lumped together so the context stays localized. It becomes a nightmare to maintain. Unit tests often end up non-existent or meaningless: they turn into a formal proof that the mocking library works, and since the tests themselves can be AI-generated, they may not assert anything useful about the business logic.
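To make that concrete, here’s a contrived Go sketch (invented names, nothing from a real codebase) of the kind of test I mean: it wires up a mock, calls the mock, and then asserts only that the mock was called, which proves nothing about the business logic.

```go
package billing

import (
	"testing"

	"github.com/stretchr/testify/mock"
)

type mockGateway struct{ mock.Mock }

func (m *mockGateway) Charge(amount int) error {
	args := m.Called(amount)
	return args.Error(0)
}

func TestCharge(t *testing.T) {
	gw := new(mockGateway)
	gw.On("Charge", 100).Return(nil)

	// The "system under test" here is really just the mock itself.
	_ = gw.Charge(100)

	// This only verifies that the mocking framework did its job.
	gw.AssertExpectations(t)
}
```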

@Oli
I’ve seen these issues even before LLMs.

Branley said:
@Oli
I’ve seen these issues even before LLMs.

I have too. To me, it feels like LLMs just make it easier to develop bad habits.

Oli said:

Branley said:
@Oli
I’ve seen these issues even before LLMs.

I have too. To me, it feels like LLMs just make it easier to develop bad habits.

That’s a fair point.

No, it’s just another tool. Sure, we may not understand everything it does (or care), but I don’t know exactly how assembly works either.

It has simplified my coding and given me solutions I would previously only have conceptualized and never attempted, because my old way was good enough.

@Flint
Many are relying on AI like it’s oxygen. I saw a comment about someone with a degree who was desperate for Xcode’s broader-scope AI suggestions. Developers today aren’t cut from the same cloth as before.