Key Takeaways
1. Persistent markdown planning creates a shared memory between you and AI agents that survives across sessions
2. Breaking complex tasks into smaller, reviewed steps prevents the 'go off and hope' problem common with AI coding
3. The Manus pattern treats your AI agent like a senior developer teammate, not a magic box
4. File-based planning enables better code review, debugging, and iteration than chat-based workflows
The $2B Workflow Hiding in Plain Sight
Last month, AI startup Anthropic acquired Manus AI for a reported $2 billion. While the press covered the acquisition as a breakthrough in “agentic AI,” the actual innovation was something far more practical: a workflow pattern called planning with files.
I spent the last two weeks implementing this pattern in my own workflow, and honestly? It’s the biggest productivity leap I’ve had since switching to Claude Code last year.
Here’s the thing — this pattern isn’t new. It’s been hiding in plain sight in repositories like Planning with Files (16,900+ stars), which implements the exact Manus workflow. But most developers either don’t know about it or dismiss it as “just writing todo lists.”
They’re missing something powerful.
The Problem with Chat-Based AI Development
Let me paint a familiar picture. You’re working on a feature that requires several changes across your codebase. You open Claude Code or Cursor, paste in your prompt, and watch as it flies through your files.
Five minutes later, you have three files modified, but you have no idea:
- Which files were touched and why
- What decisions were made about architecture
- What edge cases were considered
- What was rejected and why
This is the “go off and hope” problem: you either trust the agent completely, or you don’t use AI for anything complex. Neither is a good option.
The harder the task, the more you need a traceable trail of decisions — exactly what chat history doesn’t provide.
This is precisely why the Planning with Files pattern exists. Instead of treating AI interactions as ephemeral chat messages, you create persistent markdown documents that serve as the single source of truth for both you and the AI agent.
How Planning with Files Works
The concept is elegantly simple:
- Create a planning document (markdown file) for each complex task
- Structure it as you would a PRD — goals, constraints, approach, milestones
- Review each section before the agent proceeds
- Update the document as decisions are made — not after, but during
- Keep the document alongside your code — version controlled, auditable
Let me show you how this looks in practice.
The Planning Template
Here’s the basic structure I use:
    # Feature: User Authentication Flow

    ## Goals
    - Implement JWT-based auth for API endpoints
    - Support refresh tokens with secure storage
    - Maintain backward compatibility with existing users

    ## Constraints
    - Max 100ms overhead for auth checks
    - Must work with existing PostgreSQL schema
    - No breaking changes to public API contracts

    ## Technical Approach
    1. Create auth middleware in /lib/auth
    2. Add JWT validation to Express router
    3. Implement refresh token rotation in /lib/tokens

    ## Milestones
    - [ ] Basic token validation works
    - [ ] Refresh endpoint functional
    - [ ] Integration tests passing
    - [ ] Documentation updated
Notice what’s happening here: I’m not just describing what I want. I’m thinking through the entire task in a structured way before the AI touches any code.
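Because the plan is just a markdown file, you can even lint it before handing it to an agent. Here is a minimal sketch (a hypothetical helper of my own, not part of any published tooling) that checks a plan for the template's section headings:

```python
# Hypothetical helper: verify a planning doc contains the template's
# sections before sharing it with an agent.
REQUIRED_SECTIONS = ["Goals", "Constraints", "Technical Approach", "Milestones"]

def missing_sections(plan_text: str) -> list[str]:
    """Return the required sections absent from a markdown planning doc."""
    present = {
        line.lstrip("#").strip()          # "## Goals" -> "Goals"
        for line in plan_text.splitlines()
        if line.startswith("##")
    }
    return [s for s in REQUIRED_SECTIONS if s not in present]
```

Running this as a pre-commit check catches half-written plans before they ever reach the agent.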
Review at Each Step
Here’s the critical part most people miss. You don’t just dump this file to the AI and say “go.” Instead:
- Share the goals and constraints first — let the agent internalize the boundaries
- Wait for its technical approach — let it propose how to solve it
- Review and refine — discuss tradeoffs before writing code
- Only then proceed — with clear checkpoints
This transforms the relationship. Your AI agent isn’t a command executor; it’s a senior teammate that needs alignment on approach before coding.
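One nice side effect of keeping checkpoints in a file: milestone progress becomes machine-readable. A small sketch in Python (the helper name is mine, not from the Planning with Files repo) that counts checked and unchecked milestone boxes:

```python
import re

# Matches markdown task-list items: "- [ ] todo" or "- [x] done".
CHECKBOX = re.compile(r"^- \[([ xX])\] (.+)$")

def milestone_progress(plan_text: str):
    """Return (done, total) across all checkbox items in a plan file."""
    done = total = 0
    for line in plan_text.splitlines():
        m = CHECKBOX.match(line.strip())
        if m:
            total += 1
            if m.group(1) in "xX":
                done += 1
    return done, total
```

Pointed at a plan file, this gives you a progress readout at each review checkpoint without re-reading the whole document.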
My Two Weeks of Testing: Real Results
I used this pattern for three different project phases over the past two weeks:
Case 1: Database Migration
We needed to migrate from MongoDB to PostgreSQL while keeping the app running. I created a planning document with 47 specific migration scenarios. The key insight: by writing out each scenario first, Claude Code caught edge cases I would have missed — like what happens when a user has pending transactions during the migration window.
    ## Scenario: User has pending transactions during migration
    - Current: Cannot migrate accounts with active transactions
    - Solution: Create 'migrated = false' flag, queue for after settlement
    - What could go wrong: Transaction completes after check but before migration
    - Mitigation: Double-check active transactions right before migration
Notice the “what could go wrong” section. That’s the value-add of structured planning — it forces you to think about failure modes.
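The mitigation from that scenario boils down to a check-then-recheck guard. Here is an illustrative Python sketch; `has_pending`, `prepare`, and `migrate` are hypothetical hooks standing in for real data-access code:

```python
def safe_migrate(account_id, has_pending, prepare, migrate):
    """Migrate an account only if no transaction is pending at either checkpoint."""
    if has_pending(account_id):
        return "queued"                 # settle first, retry later
    prepare(account_id)                 # e.g. copy rows, build id mappings
    if has_pending(account_id):         # re-check: a txn may have started meanwhile
        return "queued"
    migrate(account_id)
    return "migrated"
```

The second check doesn't eliminate the race entirely, but it shrinks the window to the instant between the re-check and the migration call, which is exactly what the planning doc's mitigation line describes.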
Case 2: Building a New API Endpoint
The planning document became a working specification. I wrote out the endpoint contract, error handling approach, and test scenarios. When Claude Code generated the code, it matched my specification almost exactly — because we both had the same source of truth.
We diverged on two points:
- I suggested gzip compression for responses
- Claude Code pointed out our CDN already handles that, making it redundant
That’s exactly the collaboration this pattern enables.
Case 3: Debugging a Production Issue
This was where the pattern truly proved its worth. A payment webhook was failing intermittently. I created a planning document structured as:
    # Debug: Payment Webhook Failures

    ## Known Facts
    - Failure rate: ~2% of webhooks
    - Pattern: Mostly Stripe, rare PayPal
    - Timing: Clusters around 2-4 AM UTC
    - Error: "signature verification failed"

    ## Hypotheses to Test
    1. [ ] Clock skew on Stripe servers during off-hours
    2. [ ] Race condition in signature validation
    3. [ ] Malformed payloads not caught by validation

    ## Test Approach
    1. Add detailed logging to signature validation
    2. Capture raw request payloads for failed cases
    3. Compare timing across success/failure
By the time I brought this to Claude Code, I had structured my thinking enough that the debugging session took 12 minutes instead of the usual hour of back-and-forth.
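For the clock-skew hypothesis, the usual fix is a signature check with an explicit timestamp tolerance. Here is a generic sketch of that scheme in Python: it signs "{timestamp}.{payload}" with HMAC-SHA256, which mirrors Stripe's webhook scheme, but the helper itself is illustrative rather than Stripe's SDK:

```python
import hashlib
import hmac
import time

def verify_signature(payload: bytes, timestamp: int, signature: str,
                     secret: bytes, tolerance_s: int = 300, now=None) -> bool:
    """Reject stale timestamps first, then compare HMACs in constant time."""
    now = int(time.time()) if now is None else now
    if abs(now - timestamp) > tolerance_s:   # tolerate bounded clock skew
        return False
    signed_payload = f"{timestamp}.".encode() + payload
    expected = hmac.new(secret, signed_payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Widening `tolerance_s` (or logging the observed skew before rejecting) is a cheap way to confirm or rule out hypothesis 1 in production.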
But Isn’t This Just Extra Work?
I felt the same skepticism. Writing out a full markdown plan for things I could just explain in chat — wasn’t that friction?
The answer is yes, but it’s productive friction. Like TDD, the planning overhead pays off quickly:
| Complexity Level | Without Planning | With Planning |
|---|---|---|
| Simple script | Fine as-is | Too much overhead; skip it |
| Medium feature | Baseline | ~50% faster outcome |
| Complex system | Chaotic, unreliable | Steady progress |
For simple tasks, you skip the planning. The pattern isn’t universal — it shines exactly where chat-based AI falls short.
When This Pattern Shines
From my testing, here’s where persistent markdown planning wins:
Multi-file refactors: You need traceable decisions about what’s changing and why
Long-running tasks: Chat history buries context; markdown documents preserve it
Team collaboration: Reviewing an agent’s work requires structured documentation
Debugging: Structured hypothesis-test cycles beat exploratory chat sessions
Compliance requirements: When you need an audit trail of AI-assisted decisions
The one place it fails? Quick one-off tasks. If you’re asking “how do I center a div in CSS”, just ask. The overhead isn’t worth it.
Implementation Options
You don’t need to build this from scratch. Here are practical options:
Option 1: Claude Code Skill (Recommended)
The Planning with Files skill from OthmanAdi works out of the box:
    claude code
    # Install the skill from the marketplace
    # Then use /planning command
This gives you a proven template and workflow structure.
Option 2: Custom Template
Create your own planning.md in your project root:
    # Task: [Title]

    ## Context
    Why this matters, what's the background

    ## Goals
    - Primary goal
    - Secondary goals

    ## Non-Goals
    What we're explicitly NOT doing

    ## Constraints
    - Technical limits
    - Timeline limits

    ## Approach
    - Proposed implementation path
    - Alternatives considered

    ## Open Questions
    - Things we need to decide
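If you want to automate Option 2, a few lines of Python can stamp out that template into a fresh planning.md (the script and its names are mine; only the template content comes from above):

```python
from pathlib import Path

# The custom template from Option 2, with a fillable title.
TEMPLATE = """\
# Task: {title}

## Context
Why this matters, what's the background

## Goals
- Primary goal
- Secondary goals

## Non-Goals
What we're explicitly NOT doing

## Constraints
- Technical limits
- Timeline limits

## Approach
- Proposed implementation path
- Alternatives considered

## Open Questions
- Things we need to decide
"""

def scaffold_plan(root, title):
    """Write a fresh planning.md into `root` and return its path."""
    path = Path(root) / "planning.md"
    path.write_text(TEMPLATE.format(title=title), encoding="utf-8")
    return path
```

Wired into a shell alias or a project generator, this makes starting a plan as cheap as starting a branch.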
Option 3: Directory Structure
For complex projects, create a planning/ directory:
    planning/
    ├── README.md          # Project-level overview
    ├── active/
    │   ├── feature-xyz.md # Current in-flight work
    │   └── bug-login.md   # Active investigation
    └── archived/
        ├── completed/
        └── deprecated/
The Deeper Shift
After two weeks, what strikes me most isn’t the productivity gains — it’s the mental model shift.
I’m no longer thinking of AI coding tools as “smart autocomplete.” They’re teammates. And like any teammate, they need:
- Clear goals — not just what to build, but why it matters
- Boundaries — what not to change, what’s out of scope
- Structured context — not just current state, but background, hypotheses, decisions
What the Manus pattern recognized — and what the $2B acquisition validated — is that AI agents excel when given structured collaboration, not magic commands.
The future of AI-assisted development isn’t about smarter prompts. It’s about treating AI agents as teammates who deserve structured context.
Quick Start
Want to try this today? Here’s the minimal setup:
- Create a planning/ directory in your project
- Add a template file like the one above
- For your next complex task, write out the plan first
- Share the document with Claude Code before asking for code
- Keep the document updated as decisions are made
Start small. Use it for one task this week. Notice how your conversations with AI agents change.
I think you’ll find — as I did — that this simple pattern transforms AI-assisted development from “hoping for the best” to “collaborating with intention.”
What do you think? Are you already using a planning workflow, or is this completely new to you? I’d love to hear about your experience — always curious what’s working for other developers.