
Slop Squared: The Article That Literally Writes Itself

Technologies used:
Claude
Content Strategy

In an era where “AI-generated content” often becomes synonymous with “slop”—that low-quality, regurgitated word salad that floods the internet—we’ve discovered something unexpected. At Code4, we’ve learned that large language models, when given proper context, domain expertise, and thoughtful prompts, don’t produce slop at all. Instead, they produce engaging, authentic content that genuinely communicates the value of the technologies we implement. Currently, we work primarily with Claude Haiku 4.5, though we’re continuously evaluating and comparing multiple LLMs to optimize our workflows. Yes, you could say it’s “slop squared”… except it’s not actually slop.

The Content Problem in Tech

Technical content creation has become increasingly challenging. Most tech content falls into one of three categories:

  • Marketing Drivel: Buzzword-heavy prose that says everything and nothing simultaneously.
  • Incomprehensible Jargon: Deep technical writing that only speaks to specialists and alienates everyone else.
  • Low-Effort Filler: AI-generated content that reads like it was processed through a blender of random tech blogs.

At Code4, we do real technical work—Astro.js migrations, cloud architecture, business intelligence solutions, and digital transformations. These aren’t marketing concepts; they’re tangible projects with measurable outcomes. How do you communicate this authentically without drowning in either oversimplification or impenetrable jargon?

Enter Claude.

How We Use Claude to Write About Our Work

The key insight we’ve discovered: large language models don’t write slop when you provide them with:

  1. Genuine expertise and context about what you actually do
  2. Real data and outcomes from your projects
  3. A clear understanding of your audience and their pain points
  4. Specific instructions on tone, structure, and depth
  5. Iterative refinement rather than expecting perfection from the first prompt

It’s not that an LLM automatically understands your business. It’s that when you give the model the raw materials—your actual experience, real project results, and clear direction—it excels at synthesizing that into engaging prose.
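As a concrete (if simplified) illustration, here is a minimal sketch in Python of how those raw materials can be packaged before anything reaches a model. The class and field names are hypothetical, not taken from our actual tooling; the structure, not the code, is the point:

```python
from dataclasses import dataclass


@dataclass
class ContentBrief:
    """The raw materials an expert supplies before any model is involved."""
    expertise_context: str   # what we actually did, in the expert's own words
    metrics: list[str]       # real, verifiable outcomes from the project
    audience: str            # who the piece is for
    pain_points: list[str]   # what that audience actually cares about
    tone: str = "conversational but credible"
    depth: str = "insider perspective, not marketing copy"

    def to_prompt(self) -> str:
        """Render the brief into one context-rich prompt string."""
        lines = [
            f"Context: {self.expertise_context}",
            "Verified metrics (cite only these; do not invent numbers):",
            *(f"  - {m}" for m in self.metrics),
            f"Audience: {self.audience}",
            "Audience pain points:",
            *(f"  - {p}" for p in self.pain_points),
            f"Tone: {self.tone}",
            f"Depth: {self.depth}",
        ]
        return "\n".join(lines)
```

Every field is something only a human expert can fill in. The model never sees an empty brief.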

The Right Prompt Changes Everything

Consider the difference between these prompts:

Bad Prompt: “Write about web development”

Good Prompt: “I’m a web developer at a Sydney-based agency that recently migrated from WordPress to Astro.js. Write an insider’s perspective piece aimed at business owners who are tired of WordPress maintenance costs, security concerns, and slow performance. Include specific metrics with citation links for me to review, explain the why behind the tech choices, and mention that this isn’t just marketing: we did this to our own website first. Use a conversational tone but maintain credibility.”

The second prompt doesn’t just produce better content—it produces authentic content because it’s grounded in reality. We’ve found that this principle holds true across multiple language models we evaluate. Whether we’re working with Claude, GPT, or other LLMs in our testing workflows, the quality of output is directly proportional to the quality and specificity of input.

Why This Matters: The Authenticity Advantage

Here’s where the “slop squared” headline becomes ironic. The internet is increasingly flooded with generic AI content. Meanwhile, authentic technical content—content that demonstrates real expertise and real results—becomes increasingly valuable. When Claude helps you articulate what you actually know, you’re not creating slop. You’re creating something rare: honest, knowledgeable content in a sea of mediocrity.

This approach has several advantages:

  • Speed Without Sacrifice: You can produce more content without sacrificing quality or authenticity.
  • Consistency: Claude maintains your voice and perspective across multiple pieces.
  • Focus on Expertise: You spend time on strategy and context, letting Claude handle the writing.
  • Accuracy: Drafts grounded in your real project data and reviewed by an expert contain fewer errors than rushed manual writing.
  • Persuasion: Readers can tell when content is authentic, and they respond accordingly.

The Process at Code4

When we use LLMs to create content about our work, here’s our actual workflow:

1. Expert Input

One of our technical leads provides:

  • The specific project or technology
  • Key metrics and outcomes
  • The business impact (speed, cost, reliability)
  • Unique insights they’ve learned

2. Strategic Framing

We identify:

  • The core audience (developers, business owners, CTOs)
  • What they care about (performance, cost, maintainability)
  • Where we have unique perspective or competitive advantage
  • The story we want to tell

3. Detailed Prompt

We write a comprehensive prompt that includes all of the above, plus specific instructions about structure, tone, and depth.

4. LLM Processing

Our chosen LLM (currently Claude, though we regularly evaluate alternatives such as GPT) produces a first draft that incorporates all of our context and requirements.
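The mechanics of this step are deliberately unremarkable. A minimal sketch using the Anthropic Python SDK might look like this; the model ID is a placeholder, and first_draft is a hypothetical helper rather than part of our published tooling:

```python
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()


def first_draft(prompt: str) -> str:
    """Send the context-rich prompt from step 3 and return a first draft."""
    message = client.messages.create(
        model="claude-haiku-4-5",  # placeholder ID; pin whichever model you actually use
        max_tokens=4096,
        system="Draft technical content grounded strictly in the supplied context.",
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```

The call itself is the easy part. Everything before it (the brief) and after it (the review) is where the work lives.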

5. Expert Refinement

Our technical person reviews, edits, and ensures accuracy. They fact-check every claim, verify every statistic, and maintain our standards.

This step is non-negotiable. Technical expertise and exceptional writing skills are rarely found in the same person. Our brilliant engineers and architects are experts in their domains, but that doesn’t automatically make them skilled writers—nor should it be expected to. That’s not a weakness; it’s a reality of specialization.

An LLM can produce well-written, eloquent prose from a technical outline. But only someone with genuine domain expertise can validate whether that prose is accurate. This is where the human element becomes irreplaceable:

  • Fact-checking: Verifying statistics, benchmarks, and claims against actual project data
  • Technical accuracy: Ensuring explanations are correct, not just plausible-sounding
  • Nuance preservation: Catching where an LLM oversimplified or missed important caveats
  • Industry context: Confirming claims align with current best practices and standards
  • Error detection: Spotting hallucinations or confident-sounding inaccuracies

This is the true collaboration: the LLM handles articulation and structure, and the expert ensures accuracy and authenticity. The LLM’s draft alone wouldn’t pass an expert’s accuracy review; the expert’s raw notes alone might not be well written. Together, they produce something better than either could alone.
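None of this review can be delegated to the model, but small tools can speed it up. As a hedged sketch (our actual checklist is a human process, not code), a helper like the one below surfaces every line of a draft that contains a figure, so the expert checks each number against real project data:

```python
import re

# Deliberately broad: flag any line containing a figure ("40%", "7.5x", "250ms", "12,000").
# Over-flagging is fine; missing an unverified number is not.
HAS_FIGURE = re.compile(r"\d")


def lines_to_verify(draft: str) -> list[tuple[int, str]]:
    """Return (line_number, text) for every draft line containing a figure."""
    return [
        (number, line.strip())
        for number, line in enumerate(draft.splitlines(), start=1)
        if HAS_FIGURE.search(line)
    ]


# Hypothetical draft line, purely for demonstration:
for number, text in lines_to_verify("Build times dropped from 90s to 12s on our own site."):
    print(f"line {number}: verify against project data -> {text}")
```

A tool can point at a claim; only the expert can say whether it is true.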

6. Polish and Publish

We review for voice, flow, and impact—then publish content that represents our expertise.

The Uncomfortable Truth About AI Content

Let’s be honest: most AI content is slop. It’s slop because:

  • The prompts are generic
  • There’s no subject matter expertise behind the writing
  • No one reviews it for accuracy
  • It’s optimized for search engines, not readers
  • It tries to sound authoritative while being shallow

But that’s not an AI problem. That’s a human problem. The tools are only as good as the expertise and direction you provide—and this principle applies across all language models we test.

The Value Proposition

When Claude helps Code4 write about Astro.js migrations, cloud architecture, or business intelligence, we’re not creating content despite having an AI. We’re creating content because we have expertise and a tool that can help us communicate it at scale.

This applies whether we’re using Claude (our current primary choice) or evaluating other leading language models—the value comes from pairing genuine expertise with capable AI assistance.

This is radically different from:

  • Outsourcing content to generalist writers
  • Hiring expensive technical writers
  • Manually writing everything ourselves
  • Using AI as a shortcut to avoid thinking

It’s augmentation, not replacement. Expertise plus AI equals better content faster.

Why It Actually Works

The reason this approach succeeds where others fail:

  1. Credibility is non-negotiable: Everything we say about Code4 is verified. We don’t exaggerate. We cite real metrics.
  2. We understand our audience: We’re writing for people who know enough to spot bullshit.
  3. Expertise shows: A prompt written by someone who actually understands the technology will produce content that reads like it came from someone who actually understands the technology.
  4. Iteration matters: We don’t publish the first draft. We refine until it meets our standards.
  5. Accuracy Review is Essential: Our technical experts meticulously review every claim because we recognize that deep technical knowledge doesn’t automatically translate to great writing—but it does translate to spotting errors that sound plausible but are wrong. This is where the magic happens: pairing someone who knows the technology deeply with someone (or something) who knows how to communicate it clearly.

The Expertise-to-Writing Gap: A Hidden Advantage

Here’s something nobody talks about: being an exceptional software engineer, cloud architect, or technical leader doesn’t automatically make you an exceptional writer. This isn’t a flaw—it’s a feature of specialization.

Our team includes brilliant people who can design complex distributed systems, optimize cloud infrastructure, and solve intricate technical problems. But ask them to write marketing copy? To structure a persuasive narrative? To make technical concepts accessible without losing accuracy? That’s a different skill set entirely, and there’s no shame in outsourcing it to a tool built specifically for language generation.

The traditional approach tried to solve this by hiring technical writers—usually generalists without deep domain knowledge who had to extract expertise from engineers through interviews and documentation. The result: mediocre writing about mediocre understanding. Or alternatively, brilliant engineers spending hours writing content they dislike, producing stilted prose that doesn’t accurately reflect their deep knowledge.

The LLM approach is different:

Our engineers provide raw expertise: real numbers, actual design decisions, genuine challenges they overcame, and insights they’ve learned. They don’t have to write it; they just have to validate it’s accurate. An LLM handles the articulation—structure, narrative flow, accessibility, persuasion—which is exactly what it’s good at.

The Critical Ingredient: Accuracy Review

Where this breaks down with generic AI content is the missing accuracy review. An LLM-generated article about “cloud architecture best practices” sounds authoritative even if it contains subtle errors that someone without domain expertise won’t catch. That’s how slop gets created.

At Code4, we flip this: we have subject matter experts review for accuracy. They don’t need to be great writers (though they can be). They just need to know their domain well enough to catch when an AI-written sentence sounds right but is actually misleading, oversimplified, or wrong.

This is the difference between:

  • Generic AI content: Good writing, unknown accuracy
  • Expert-written content: Accurate, possibly poorly written
  • Our approach: Good writing, verified accuracy

The last one requires both: an LLM that can write, and an expert who can verify. Skip either one, and you get slop. Include both, and you get something genuine.

The Future of Technical Content

The internet will be flooded with AI-generated slop. It already is. But there’s also an emerging category: expertly guided AI content that keeps the authenticity of human expertise while gaining the clarity and efficiency of AI assistance.

That’s what Code4 is building. Not “slop squared”—authentic content, squared. Content that educates, informs, and genuinely represents what we do and the value we deliver.

Evaluating and Comparing LLMs in Our Workflows

One key principle guides our AI strategy: we never rely on a single tool. At Code4, we continuously evaluate and compare multiple language models—Claude, GPT models, and emerging alternatives—to understand their strengths, weaknesses, and optimal use cases.

Why We Compare:

  • Different models excel at different tasks. One LLM might produce superior technical documentation while another excels at creative framing or storytelling.
  • The landscape evolves rapidly. New models launch frequently, and we stay current with capabilities and pricing.
  • Avoiding lock-in. By testing multiple tools, we ensure our processes aren’t dependent on a single vendor.
  • Quality assurance. Running content through multiple models helps identify blind spots or areas where one LLM consistently underperforms.
  • Cost optimization. Different models have different pricing and speed profiles that suit different workflows.

Currently, Claude is our primary choice for content creation due to its reasoning depth, nuance, and reliability. But that preference is based on continuous evaluation, and we’re always ready to adapt as the AI landscape evolves.

The lesson: be pragmatic about tools, but principled about processes. Good content creation workflows work with multiple LLMs because the workflow itself—not the tool—is what creates quality.
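To make that concrete: a comparison harness does not need to be elaborate. A minimal sketch, assuming the official anthropic and openai Python SDKs with placeholder model IDs, routes the same brief through each contender and collects the drafts for side-by-side expert review:

```python
import anthropic           # pip install anthropic
from openai import OpenAI  # pip install openai

anthropic_client = anthropic.Anthropic()
openai_client = OpenAI()


def claude_draft(prompt: str) -> str:
    # Placeholder model ID; substitute whichever Claude model is under evaluation.
    message = anthropic_client.messages.create(
        model="claude-haiku-4-5",
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text


def gpt_draft(prompt: str) -> str:
    # Placeholder model ID; substitute whichever GPT model is under evaluation.
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def compare_models(prompt: str) -> dict[str, str]:
    """Run one context-rich brief through every model under evaluation."""
    contenders = {"claude": claude_draft, "gpt": gpt_draft}
    return {name: draft(prompt) for name, draft in contenders.items()}
```

The harness is trivial by design; the expert review that follows it is where the quality comes from.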

The Bottom Line

Claude AI didn’t write this article by itself. An expert at Code4 provided direction, real data, and genuine perspective. Claude helped structure and articulate that expertise. Together, they produced something more authentic than either could alone.

Is that “slop squared”? Only if you define slop as “content that honestly represents real expertise and real results.” In that case, we’re guilty as charged—and the internet could use a lot more of this kind of “slop.”