Bill Rafferty
  • Sep 13, 2025
  • 16 min read

Sea of Agentic Developers

Critical engagement with AI to maintain developer relevance

I’m skeptical about the imminent singularity and the rise of Super AGI. But I have to admit, I think the current developments in AI are pretty amazing. To borrow a line from Arthur C. Clarke, this magic is the kind of computer science that I still don’t fully understand.

Trigger warning: this post will be so filled with buzzwords that I’m including its own glossary. This will help some who need it and increase the SEO value for the search engines driving the web to a zero-click-through hell, which is a topic for another time.

There has always been a level of imposter syndrome inherent in being a developer. By the very nature of our work, we repeat learned development patterns. Through trial, error, and bug fixing, we naturally find a way forward that often stands on the shoulders of greater developers—those confident enough to answer Stack Overflow cries for help.

This feeling isn’t new to me. Growing up with exercise books made of papyrus, I had minor auditory conceptualization problems, something often associated with Auditory Processing Disorder (APD) or a difficulty with phonemic awareness. This is an overcomplicated way to tell you that I have terrible spelling. Luckily, this has never held me back, thanks to the advent of the computer spell checker (bonus points for anyone old enough to remember the Seiko Instruments Spell Checker WP-1200).

Did using this automated correction make my prose any less articulate, like an early form of AI slop? Or was I simply using a tool to bypass an unnecessary specialty to achieve a goal?

From Spell Check to Copilot: Is it cheating?

My GitHub profile tells me I signed up on October 27, 2011. Unless an employer paid for it, I never had any motivation to upgrade from the free tier. Then, like many developers, when GitHub announced Copilot, I was immediately on board and signed up for the pro version. There was a protracted (and possibly ongoing) onboarding stage, filled with some hilarious hallucinations and lost efficiency, but I could always see the value.

I’ll now admit to having the GitHub Copilot Pro+ plan. I need the extra Premium requests because Claude is my wingman and free GPT is still a sycophant creep. I also use many other agentic tools and services, for pure evaluation or to help deliver paid work. Without this sounding like a sponsored post, I’d shout out Warp, recraft.ai, and Motiff’s UI editor (which has already ceased service). Spoiler warning: I’m using Evernote and Gemini to assist in writing this article, and I doubt I’d be confident enough to put it together without them.

And herein lies the rub. I don’t think it does justice to compare these tools to a simple word-processing spell checker. They are far more sophisticated, able to work autonomously toward my goals. So, does this feed the imposter syndrome? Does it enable the downfall of my individual intelligence through the wholesale abandonment of agency to the machine overlords, further feeding the wealth, power, and entrenchment of the Tech Broligarchy?

Uncomfortably for me, it does.

The “Vibe Code” Fallacy and the 10x Developer

Does this then mean that anyone with access to these tools can “vibe code” their way into becoming a 10x developer? Am I adrift in a sea of Agentic Developers until we all become redundant to the machine we have been training and need to go clean toilets?

This is much less certain, so please take a big step back from the ledge with me.

AI has only become more useful as we humans have become more skilled in using it. Like any good tool, it has improved through a cycle: a human has a goal, uses the tool, thinks of how to improve it, uses the improved tool, and repeats. This is one of my core life philosophies.

There is nothing wrong with being perpetually under construction if you think of it as constantly working on iterative enhancements.

The way these tools have improved is through the adoption of new open standards, innovation in development practices, peer review, and competition. One of the early blockers was getting AI agents and LLMs to “talk” to other systems. So, human developers invented the Model Context Protocol. “Prompt engineering” has become a well-understood practice, and just like runic spellcasting or digital marketing, its value and outputs are highly debated. We now see tools like Spec Kit, which helps organizations focus on product scenarios rather than writing undifferentiated code with Spec-Driven Development.
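
A rough sketch of what that “talking” looks like: MCP is built on JSON-RPC 2.0, and a client asks a server to run a tool with a `tools/call` request. The method name comes from the MCP spec; the tool name and arguments below are made-up examples, not any real server’s API.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request asking an MCP server to run a tool.

    The "tools/call" method name is defined by the MCP spec; "search_docs"
    is a hypothetical example tool, not a real server's API.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

print(make_tool_call(1, "search_docs", {"query": "Model Context Protocol"}))
```

The real protocol adds an initialization handshake and a `tools/list` discovery step first, but the message shape stays this simple.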

This will always remain an iterative process of enhancement. Even if AI supports the process and iterates at the speed of light, I can’t see a logical end to human involvement. We will always be looking to improve our tools.

Sharpening Your AI Edge: My Current Best Practices

Here are some of my current strategies for working with agentic coding tools to help you sharpen your own AI edge.

1. Document As You Go

I know I should have always been doing this. Traditionally, I’d build something and then, if it was complex or I was handing it over, I might get around to producing documentation. Now, as part of my Conventional Commits process, I first document the thing I just built or updated, with AI doing the heavy lifting.
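
If you haven’t met Conventional Commits before, the commit header follows a `type(scope): description` shape that a small regex can check. This is a minimal sketch using a common subset of types, not the full spec:

```python
import re

# Conventional Commits header: type(optional scope)(optional !): description
# The types listed here are a common subset; the spec allows others.
HEADER_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore)"
    r"(\([a-z0-9\-]+\))?"  # optional scope, e.g. (parser)
    r"(!)?"                # optional "!" flagging a breaking change
    r": .+"                # a colon, a space, and the description
)

def is_conventional(header: str) -> bool:
    """Return True if a commit header matches the Conventional Commits shape."""
    return HEADER_RE.match(header) is not None

print(is_conventional("feat(parser): add array support"))  # True
print(is_conventional("update stuff"))                     # False
```

Wired into a commit hook, a check like this makes the documentation habit enforceable rather than aspirational.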

2. Supercharge with Custom Instructions

Many AI tools now support instructions files that give additional context on how to understand your project and how to build, test, and validate changes. Combine this with my previous tip: not only tell your coding tool to follow commit message standards but also to update the detailed documentation you have now started keeping.
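
As a concrete example, GitHub Copilot reads repository-wide guidance from a `.github/copilot-instructions.md` file. The content below is an illustrative sketch tying this tip to the previous one; the project details (TypeScript, `npm test`, `docs/architecture.md`) are invented placeholders, not a prescribed format.

```markdown
# Copilot instructions (illustrative example)

- This is a TypeScript monorepo; run `npm test` before proposing changes.
- Follow the Conventional Commits format for all commit messages.
- After changing any module, update its entry in `docs/architecture.md`.
```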

3. Plan and Scaffold Before You Prompt

Don’t be afraid to pre-plan an idea or design. You can then feed this scaffold into your AI instructions. Sometimes you can one-shot your requests, but I’ve found a much better return by first chatting with a separate LLM (with less context about a codebase) on how to approach a problem. Have a back-and-forth, share code snippets, and build out the plan in Zed before bringing it into GitHub Copilot with full project context.

4. Peer Review and Challenge Your AI

This might sound obvious, but blindly following an overconfident model will lead you to doom loops. This is often the downfall of anyone just there for the vibes. I feel I’m contributing most to the code’s success when I challenge the AI with relevant questions about its approach or the impact of its changes. Asking “why?” to the world’s most patient programming peer can result in improvements—to the codebase, the AI tool, or often, the human developer who has just learned a better way to handle errors in promises.

5. Be Lazy, But Don’t Get Lazy

The magic AI agent can often seduce a developer into just handing over the wheel and having micro-naps or doom-scrolling between prompts to “continue iterating” (tell me you have never done that). Fight this urge. Use the time you’ve clawed back from refactoring 1000 lines of code to discuss and review those changes. Focus on high-level objectives but also keep your fingers in the game. Often, a basic keyboard shortcut like selecting the next matching code pattern (Cmd+D) is quicker than the best AI model. More importantly, having your hands in the bowl stops the codebase from being abstracted into a black box of AI slop. You are using the AI to prevent repetitive manual work, leading to more reliable systems. If AI becomes the “developer” and you are just the project manager, you are one step away from calling yourself an “AI artist” and fully outsourcing your self-respect. Being “Lazy” means working smarter, not harder, and still doing your job well.

Perpetually Under Construction

In this new AI world, anyone can identify as a developer, but I’m not worried about being replaced by them or the tools we all have access to. I’ll be constantly working on iterative enhancements—to myself and to those same tools.

Glossary

10x Developer: A term referring to a software developer who is supposedly ten times more productive than an average developer. Often used in tech culture to describe highly skilled or efficient programmers, though the concept is widely debated.

Agentic: Describes something, typically an artificial intelligence system, that acts with autonomy, proactively making decisions and taking actions to achieve goals without constant human oversight.

AI Slop: Low-quality, generic, or repetitive content generated by AI systems, often characterized by a lack of originality, depth, or human insight. Used pejoratively to describe automated output that lacks meaningful value.

Conventional Commits: A specification for adding human and machine-readable meaning to commit messages in version control systems. It provides a standardized format for describing changes to code, making project history more readable and automated tools more effective.

Imposter Syndrome: A psychological phenomenon where individuals doubt their skills, talents, or accomplishments and have a persistent fear of being exposed as a “fraud,” despite evidence of their competence. Common among developers and other technical professionals.

LLMs (Large Language Models): Advanced AI systems trained on vast amounts of text data to understand and generate human-like language. Examples include GPT, Claude, and other conversational AI models used for various text-based tasks.

Model Context Protocol: A standardized way for AI agents and large language models (LLMs) to integrate and use external tools, data sources, and services.

One-shot (Prompting): In AI context, refers to providing a single, comprehensive prompt to an AI system with all necessary information to complete a task, as opposed to iterative or multi-turn conversations.

Prompt Engineering: The practice of crafting effective instructions or queries for AI language models to produce desired outputs. It involves understanding how to communicate with AI systems to get optimal results.

Tech Broligarchy: A portmanteau of “bro” and “oligarchy” that refers to the rule or influence of a powerful, wealthy, and predominantly male group of “tech bros.”

Vibe Code: Refers to “vibe coding,” a method of software development where a developer provides a high-level description or “vibe” to an AI, which then generates the functional code, shifting the focus from writing code to guiding the AI. Popularized by Andrej Karpathy in early 2025.
