
How I use Claude Code: Separation of planning and execution

Hacker News · Feb 22, 2026 · Collected from RSS

Summary

Article URL: https://boristane.com/blog/how-i-use-claude-code/
Comments URL: https://news.ycombinator.com/item?id=47106686
Points: 82 · Comments: 35

Full Article

I’ve been using Claude Code as my primary development tool for about nine months, and the workflow I’ve settled into is radically different from what most people do with AI coding tools. Most developers type a prompt, sometimes use plan mode, fix the errors, and repeat. The more terminally online are stitching together ralph loops, MCPs, gas towns (remember those?), and so on. In both cases the result is a mess that completely falls apart for anything non-trivial.

The workflow I’m going to describe has one core principle: never let Claude write code until you’ve reviewed and approved a written plan. This separation of planning and execution is the single most important thing I do. It prevents wasted effort, keeps me in control of architecture decisions, and produces significantly better results with far less token usage than jumping straight to code.

    flowchart LR
        R[Research] --> P[Plan]
        P --> A[Annotate]
        A -->|repeat 1-6x| A
        A --> T[Todo List]
        T --> I[Implement]
        I --> F[Feedback & Iterate]

Phase 1: Research

Every meaningful task starts with a deep-read directive. I ask Claude to thoroughly understand the relevant part of the codebase before doing anything else. And I always require the findings to be written into a persistent markdown file, never just a verbal summary in the chat.

    read this folder in depth, understand how it works deeply, what it does and all its specificities. when that’s done, write a detailed report of your learnings and findings in research.md

    study the notification system in great details, understand the intricacies of it and write a detailed research.md document with everything there is to know about how notifications work

    go through the task scheduling flow, understand it deeply and look for potential bugs. there definitely are bugs in the system as it sometimes runs tasks that should have been cancelled. keep researching the flow until you find all the bugs, don’t stop until all the bugs are found. when you’re done, write a detailed report of your findings in research.md

Notice the language: “deeply”, “in great details”, “intricacies”, “go through everything”. This isn’t fluff. Without these words, Claude will skim. It’ll read a file, see what a function does at the signature level, and move on. You need to signal that surface-level reading is not acceptable.

The written artifact (research.md) is critical. It’s not about making Claude do homework. It’s my review surface. I can read it, verify Claude actually understood the system, and correct misunderstandings before any planning happens. If the research is wrong, the plan will be wrong, and the implementation will be wrong. Garbage in, garbage out.

This matters because the most expensive failure mode in AI-assisted coding is not wrong syntax or bad logic. It’s implementations that work in isolation but break the surrounding system. A function that ignores an existing caching layer. A migration that doesn’t account for the ORM’s conventions. An API endpoint that duplicates logic that already exists elsewhere. The research phase prevents all of this.

Phase 2: Planning

Once I’ve reviewed the research, I ask for a detailed implementation plan in a separate markdown file.

    I want to build a new feature <name and description> that extends the system to perform <business outcome>. write a detailed plan.md document outlining how to implement this. include code snippets

    the list endpoint should support cursor-based pagination instead of offset. write a detailed plan.md for how to achieve this. read source files before suggesting changes, base the plan on the actual codebase
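To make that second example concrete for readers who haven’t met it: cursor-based pagination resumes from an opaque pointer to the last row served instead of a numeric offset, so concurrent inserts can’t shift the page window. A minimal TypeScript sketch of the idea; the names (Item, listItems, nextCursor) are mine for illustration, not from any actual plan.md:

    // Illustrative cursor pagination sketch - hypothetical names, not the
    // article's actual code.
    interface Item {
      id: string; // unique and sortable, so it can double as the cursor
    }

    interface Page {
      items: Item[];
      nextCursor: string | null; // opaque resume point for the next request
    }

    function listItems(rows: Item[], cursor: string | null, limit = 20): Page {
      // Resume strictly after the cursor id. An offset would skip a fixed
      // count instead, and drift whenever rows are inserted or deleted.
      const sorted = [...rows].sort((a, b) => (a.id < b.id ? -1 : a.id > b.id ? 1 : 0));
      const from = cursor === null ? 0 : sorted.findIndex((r) => r.id > cursor);
      const items = from === -1 ? [] : sorted.slice(from, from + limit);
      const last = items[items.length - 1];
      return { items, nextCursor: items.length === limit && last !== undefined ? last.id : null };
    }

The point isn’t the code itself. It’s that a plan containing a snippet of this shape is something I can veto or correct before it touches the codebase.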
The generated plan always includes a detailed explanation of the approach, code snippets showing the actual changes, the file paths that will be modified, and the considerations and trade-offs.

I use my own .md plan files rather than Claude Code’s built-in plan mode. The built-in plan mode sucks. My markdown file gives me full control: I can edit it in my editor, add inline notes, and it persists as a real artifact in the project.

One trick I use constantly: for well-contained features where I’ve seen a good implementation in an open source repo, I’ll share that code as a reference alongside the plan request. If I want to add sortable IDs, I paste the ID generation code from a project that does it well and say “this is how they do sortable IDs, write a plan.md explaining how we can adopt a similar approach.” Claude works dramatically better when it has a concrete reference implementation to work from than when it designs from scratch.
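For the unfamiliar: sortable IDs put a timestamp prefix in front of randomness, so lexicographic order matches creation order. A ULID-flavoured sketch of the kind of reference code I mean, my illustration rather than the snippet I actually paste:

    // ULID-style sortable id: a sketch of the technique, not the reference
    // implementation the article alludes to.
    const ALPHABET = "0123456789abcdefghijklmnopqrstuv"; // 32 symbols

    function encodeBase32(value: number, length: number): string {
      let out = "";
      for (let i = 0; i < length; i++) {
        out = ALPHABET[value % 32] + out;
        value = Math.floor(value / 32);
      }
      return out;
    }

    function sortableId(): string {
      // 10 base32 chars of millisecond timestamp: ids sort lexicographically
      // in creation order. 16 random chars avoid collisions within one ms.
      let rand = "";
      for (let i = 0; i < 16; i++) {
        rand += ALPHABET[Math.floor(Math.random() * 32)];
      }
      return encodeBase32(Date.now(), 10) + rand;
    }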
But the plan document itself isn’t the interesting part. The interesting part is what happens next.

The Annotation Cycle

This is the most distinctive part of my workflow, and the part where I add the most value.

    flowchart TD
        W[Claude writes plan.md] --> R[I review in my editor]
        R --> N[I add inline notes]
        N --> S[Send Claude back to the document]
        S --> U[Claude updates plan]
        U --> D{Satisfied?}
        D -->|No| R
        D -->|Yes| T[Request todo list]

After Claude writes the plan, I open it in my editor and add inline notes directly into the document. These notes correct assumptions, reject approaches, add constraints, or provide domain knowledge that Claude doesn’t have. The notes vary wildly in length. Sometimes a note is two words: “not optional” next to a parameter Claude marked as optional. Other times it’s a paragraph explaining a business constraint, or a pasted code snippet showing the data shape I expect.

Some real examples of notes I’d add:

“use drizzle:generate for migrations, not raw SQL” — domain knowledge Claude doesn’t have

“no — this should be a PATCH, not a PUT” — correcting a wrong assumption

“remove this section entirely, we don’t need caching here” — rejecting a proposed approach

“the queue consumer already handles retries, so this retry logic is redundant. remove it and just let it fail” — explaining why something should change

“this is wrong, the visibility field needs to be on the list itself, not on individual items. when a list is public, all items are public. restructure the schema section accordingly” — redirecting an entire section of the plan (see the sketch at the end of this section)

Then I send Claude back to the document:

    I added a few notes to the document, address all the notes and update the document accordingly. don’t implement yet

This cycle repeats one to six times. The explicit “don’t implement yet” guard is essential. Without it, Claude will jump to code the moment it thinks the plan is good enough. It’s not good enough until I say it is.
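To make that last note concrete, here is roughly the corrected shape it asks for: one visibility flag on the list, nothing per-item. A sketch only; the table and column names are hypothetical, and it assumes drizzle-orm’s pg-core helpers since the notes mention drizzle:

    // Hypothetical schema sketch for the visibility note above - not the
    // article's actual schema.
    import { pgTable, uuid, text } from "drizzle-orm/pg-core";

    export const lists = pgTable("lists", {
      id: uuid("id").primaryKey().defaultRandom(),
      // One flag for the whole list: when a list is public, all items are.
      visibility: text("visibility").notNull().default("private"),
    });

    export const listItems = pgTable("list_items", {
      id: uuid("id").primaryKey().defaultRandom(),
      listId: uuid("list_id").notNull().references(() => lists.id),
      content: text("content").notNull(),
      // Deliberately no per-item visibility column: it would contradict
      // the list-level flag the note calls for.
    });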
Why This Works So Well

The markdown file acts as shared mutable state between me and Claude. I can think at my own pace, annotate precisely where something is wrong, and re-engage without losing context. I’m not trying to explain everything in a chat message. I’m pointing at the exact spot in the document where the issue is and writing my correction right there.

This is fundamentally different from trying to steer implementation through chat messages. The plan is a structured, complete specification I can review holistically. A chat conversation is something I’d have to scroll through to reconstruct decisions. The plan wins every time. Three rounds of “I added notes, update the plan” can transform a generic implementation plan into one that fits perfectly into the existing system.

Claude is excellent at understanding code, proposing solutions, and writing implementations. But it doesn’t know my product priorities, my users’ pain points, or the engineering trade-offs I’m willing to make. The annotation cycle is how I inject that judgement.

The Todo List

Before implementation starts, I always request a granular task breakdown:

    add a detailed todo list to the plan, with all the phases and individual tasks necessary to complete the plan - don’t implement yet

This creates a checklist that serves as a progress tracker during implementation. Claude marks items as completed as it goes, so I can glance at the plan at any point and see exactly where things stand. That’s especially valuable in sessions that run for hours.

Phase 3: Implementation

When the plan is ready, I issue the implementation command. I’ve refined this into a standard prompt I reuse across sessions:

    implement it all. when you’re done with a task or phase, mark it as completed in the plan document. do not stop until all tasks and phases are completed. do not add unnecessary comments or jsdocs, do not use any or unknown types. continuously run typecheck to make sure you’re not introducing new issues.

This single prompt encodes everything that matters:

“implement it all”: do everything in the plan, don’t cherry-pick

“mark it as completed in the plan document”: the plan is the source of truth for progress

“do not stop until all tasks and phases are completed”: don’t pause for confirmation mid-flow

“do not add unnecessary comments or jsdocs”: keep the code clean

“do not use any or unknown types”: maintain strict typing (more on this below)

“continuously run typecheck”: catch problems early, not at the end

I use this exact phrasing (with minor variations) in virtually every implementation session. By the time I say “implement it all,” every decision has been made and validated. The implementation becomes mechanical, not creative. This is deliberate. I want implementation to be boring. The creative work happened in the annotation cycles. Once the plan is right, execution should be straightforward.

Without the planning phase, what typically happens is that Claude makes a reasonable-but-wrong assumption early on, builds on top of it for 15 minutes, and then I have to unwind a chain of changes. The “don’t implement yet” guard eliminates this entirely.
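The strict-typing rule deserves a word of explanation. In TypeScript, `any` silences the compiler entirely, which defeats the “continuously run typecheck” instruction (typically an npm script wrapping `tsc --noEmit`, though the article doesn’t show its tooling). A hypothetical before/after, mine rather than the article’s:

    // What "do not use any" rules out: `any` turns the checker off here.
    function parsePriceLoose(raw: any): number {
      return raw.amount * 100; // no error even if raw has no `amount` field
    }

    // The strictly-typed version the prompt is asking for.
    interface Price {
      amount: number;
    }
    function parsePrice(raw: Price): number {
      return raw.amount * 100; // now every caller is checked by tsc --noEmit
    }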
Feedback During Implementation

Once Claude is executing the plan, my role shifts from architect to supervisor, and my prompts become dramatically shorter.

    flowchart LR
        I[Claude implements] --> R[I review / test]
        R --> C{Correct?}
        C -->|No| F[Terse correction]
        F --> I
        C -->|Yes| N{More tasks?}
        N -->|Yes| I
        N -->|No| D[Done]

Where a planning note might be a paragraph, an implementation correction…