Claude Code: My Most Trusted Coworker and My Worst Enemy
July 24, 2025

Claude Code is my most trusted work companion. It helps me when I need it, saves me time, and improves the way I write code. But it's also my worst enemy.
I've been using LLMs daily since the end of 2022. Over time, I've learned to use them and they've become an indispensable tool for my work. My approach has always been more or less the same, following a single principle: an LLM is a tool that enhances my capabilities.
Initially, I had a very arrogant attitude toward LLMs. Given my limited experience, but above all the level of the models of that period (first GPT-3.5, then GPT-4), I often got responses that were ineffective or sometimes completely inadequate. Hence my arrogant reaction: better to write the code myself!
However, I could see their enormous potential, especially as a gateway to knowledge and an invaluable study aid. But I'll talk about that in another article.
As I was saying, the beginning wasn't easy when it came to code. Not so much because of my ability to write prompts, since I never had particular problems in that regard, but because of the quality of the code generated by the models of that period.
Everything changed around mid-2024, when GPT-4o and Claude 3.5 Sonnet came out almost simultaneously. That's when I truly felt a turning point, and within a few days I found myself a paying user of both platforms.
Using LLMs had officially become a daily habit for my work as a programmer too, not just for studying.
My approach was very simple: a new conversation for each task and extremely detailed prompts, on both of the LLMs I subscribed to. That way I had a comparison and could follow whichever solution seemed best.
Then came continuous supervision and review of the generated code, and finally manual integration of the code into my editor. So no agents and no editors with integrated agents. Just chat interactions.
As you can easily imagine, I didn't notice any noteworthy improvement in terms of time. In fact, working this way probably even slowed me down on many occasions, given my rigor in wanting to compare multiple LLMs at once, sometimes feeding one's responses to the other. Moreover, with the arrival of Gemini 2.5 Pro, I even added a third LLM to my arsenal. But all in all, by reserving them for the most complicated and difficult tasks (which would have required a lot of time anyway), I significantly improved my life as a programmer.
I found myself with a "mentor" to turn to in the most complicated moments. And for someone like me, who works in a small team or alone on personal projects, this was a real novelty. It gave me a certain comfort.
The real point of no return, however, came when Anthropic included Claude Code in its Pro plan, which I was already subscribed to. Initially, I approached it with superficiality and arrogance (again). That's because my pride has always made me "hate" coding agents: I'm a programmer, and I have a very DIY nature. On the other hand, I intend to stay in the AI loop and use contemporary tools. So Claude Code had to be tested.
Furthermore, this time there was an additional push to test it. A peculiarity that might seem trivial to some but for me made all the difference in the world: I wouldn't be forced to change editor! Having to switch editors was certainly the main reason Cursor and Windsurf didn't last more than 20 minutes on my computer.
By coincidence, I was just starting a new personal project. What better way to seriously test it then? So I gave Claude Code a chance.
The result was stunning at times, and certainly beyond my expectations.
Although I maintained the same critical and rigorous approach, this time my programming time dropped drastically. That's an undeniable advantage over using LLMs via chat: since the agent has direct access to the project files, it's no longer necessary to copy and paste them into the prompt. A precise reference to them is enough.
Additionally, and I know some purists will turn up their noses (a point I'll return to later), integrating the code into the editor is instantaneous. I just accept Claude Code's proposal and it appears in the editor. No more copy and paste.
I've become much more demanding of myself. This new "power" has made me more responsible. And if before, due to time constraints or a bit of laziness, I would dive straight into the code, putting architecture and cleanliness second, today I don't compromise. Most of the effort goes into planning, even more than into the initial prompt.
I always start the task in plan mode, and I interact with Claude Code until I've found a plan that I consider state of the art. Then I ask Claude Code to write the plan directly into the plans/ folder of my project. The instructions for creating the plan require a detailed description of the problem and the proposed solution. I've created a specific command for this, and the result is a document that is truly complete in every respect.
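To give an idea of what such a command looks like: Claude Code lets you define project slash commands as Markdown files under .claude/commands/, with $ARGUMENTS standing in for whatever you type after the command. The file below is only a simplified sketch of a planning command, not my exact one; the path and wording are illustrative.

```markdown
<!-- .claude/commands/plan.md — simplified, illustrative sketch -->
Create a plan document for the task described in: $ARGUMENTS

Write the plan to the plans/ folder and include:

1. A detailed description of the problem and its context.
2. The proposed solution and the reasoning behind it.
3. The files involved and the steps to follow, in order.
4. A checklist of sub-tasks, to be updated as work progresses.

Do not write any code yet: the only output of this command is the plan.
```

Invoked at the start of a task, a command like this produces the document that the rest of the workflow revolves around.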
This serves two functions. The first is to keep Claude Code aligned with the plan: I've noticed that it tends to lose context on longer tasks, but if you invite it to follow this file at every critical step, updating it as tasks are completed (see the plan sketch after the next paragraph), the results are better. Moreover, having the spec written in a file gives you a precise reference in future conversations, especially if you need to resume a very long task.
The second is that I can easily pass this document to another LLM (Gemini 2.5 Pro in my case) to correct it if necessary. Very often this second pass helps fix the last details.
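For reference, a plan document of this kind can be organized roughly as follows. The task, headings, and checklist here are purely illustrative; the point is having a single file that both describes the solution and tracks progress:

```markdown
# Plan: add rate limiting to the public API   <!-- illustrative task -->

## Problem
What is broken or missing today, and where in the codebase it lives.

## Proposed solution
The chosen approach, the alternatives considered, and why they were discarded.

## Steps
- [x] 1. Define the limiter interface and its configuration.
- [ ] 2. Implement the middleware and wire it into the router.
- [ ] 3. Add tests and update the documentation.
```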
I don't always use Claude Code passively. If the task requires it, I specify in the initial prompt that everything must be executed in small steps and that it must wait for my confirmation before proceeding. Usually it's quite capable of stopping and inviting me to test at the appropriate moments; when it doesn't, I check anyway and point it out so it follows this pattern.
Other times I let it go on its own. It pains me to admit it, but this happens more and more often. Let me be clear: I'm always talking about specific, well-planned tasks as described above (this step is essential), and I carefully verify the generated code. But when, after 5/10/20 times, you realize that Claude Code actually works, it's almost natural to trust it and let yourself slip into "auto-accept edits" mode. I think this is the real risk of coding agents.
And here I'd like to talk about the negative aspects, which essentially can be reduced to a single cause: vibe coding. I'll be very clear: I find this practice harmful for the programmer, counterproductive for young people, and unsatisfying for curious and/or DIY minds.
And yet I keep falling for it...
For me, the biggest problem with vibe coding isn't the quality of the result: done intelligently, one task at a time and in small steps as described above, the results are excellent. The problem is that knowledge retention is practically zero, both from a programming perspective and from a project-architecture perspective.
The first aspect might be secondary for an experienced programmer, especially when working in a simple or transitory domain. There's no exceptional value in knowing the latest React framework that will probably be obsolete in 5 years. It's a transitory skill, recoverable in a few weeks. And giving up doing by hand something you're already an expert at may not matter much.
The discussion is different for projects and tasks that involve learning unfamiliar skills and rules. There, even an experienced programmer misses a good opportunity to learn something new.
This point, however, becomes crucial for a young programmer. By relying on vibe coding, they don't get the opportunity to confront certain dynamics and to internalize the logical and practical patterns of programming. They give up a series of experiences that normally seep into every corner of one's practice, creating an interconnected fabric of rules, best practices, and (sometimes strong) opinions about programming.
Moreover, the risk of a certain level of skill atrophy is real. It's natural. And it applies to the most experienced too.
Another problem is the possible loss of that complete knowledge of every aspect of one's codebase. Right now, I don't remember the details of many functions in a project I'm actively working on. And guess which project that is? The one I started testing Claude Code with. Meanwhile, I remember very well all the projects from the last 4-5 years, pre-Claude Code, including those where I used LLMs extensively, because I wrote and/or integrated them myself, from the first line to the last.
I don't know how much of a problem this is in practice; it's quite common to come across code written years ago that you don't remember at all, just as it's natural to find yourself in this situation every time you dive into a pre-existing codebase written by others. But it's a fact, and it's worth noting.
The overall feeling is therefore one of a clear improvement in speed and virtually unlimited access to code. Now I'm no longer afraid to tackle the most demanding projects, because I have no more limits. And this is revolutionary and exciting.
On the other hand, I'm aware that every day I'm compromising my potential knowledge and part of my skills in exchange for time. Claude Code extends my capabilities but at the same time erodes those I already possess. Sure, I can use the time saved to study other things, and that's what I do. But a part of me feels very bad about this.
What should I do, stop using Claude Code passively? No more vibe coding?
I don't know, but I believe we still live in a time when it's possible to afford the luxury of deciding which path to follow.
So my strategy is not to use it in automatic mode, or not to use it at all, for projects where the software itself is the product. I consider the LLM + human pairing superior, with more benefits in the long term.
But I'll use it for all those transitory projects or those where the core business is something else and software is just a tool to achieve it.
In 2026, we'll see.