The original thread is on Boris’s X account. A good companion site is howborisusesclaudecode.com which compiles everything in one place.
The “Surprisingly Vanilla” Setup #
Boris opens with this: his setup is surprisingly vanilla. Claude Code works great out of the box, and he doesn’t customize it much.
This is worth sitting with for a second. The person who built the tool doesn’t have some secret 500-line CLAUDE.md or a custom plugin stack. He basically uses the defaults.
I think there’s a lesson here. The developer tooling community has a habit of over-engineering setups before actually using the thing. I’ve seen people spend more time configuring their AI coding assistant than actually coding with it. Boris’s approach is the opposite: start simple, add customization only when you hit a real friction point.
That said, “vanilla” for the creator of Claude Code and “vanilla” for the rest of us are probably different things. He knows the tool’s internals. He knows what it can do without being told. The rest of us might need a bit more scaffolding.
Opus for Everything #
Boris uses Opus 4.5 with extended thinking for every single task. His reasoning: even though Opus is bigger and slower than Sonnet, you steer it less. It’s better at tool use, better at following complex instructions, and that makes it faster end-to-end.
I’ve gone back and forth on this. For a while I was using Sonnet for quick tasks (rename this variable, write a test for this function) and Opus for anything architectural. Boris’s argument changed how I think about it.
The cost isn’t just “time to generate output.” It’s also the correction loops. When a smaller model misunderstands the task and you spend three turns fixing it, that Sonnet speed advantage evaporates.
5 Parallel Sessions #
This is the one that blew people’s minds. Boris runs 5 Claude Code instances in parallel across 5 separate git checkouts of the same repo, numbered tabs 1-5. On top of that, he runs 5-10 more sessions on claude.ai/code. He uses system notifications to know when Claude needs input.
The result: 20-30 PRs per day.
That number is impressive, but context matters. He’s working on the Claude Code codebase itself, which he knows intimately. He can scope tasks precisely because he wrote most of the code. Each parallel session gets a well-defined, independent task.
For my own work, I’ve tried running 3 sessions in parallel. It works best when:
- The tasks are genuinely independent (different features, different parts of the codebase)
- Each task is well-scoped enough that Claude can run for a while without needing input
- You have a good mental model of what each session is doing
Where it falls apart: when tasks overlap, when you need to context-switch constantly to answer Claude’s questions, or when you don’t have enough independent work to fill the sessions. Running 5 instances on a small project is just waste.
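Boris's thread doesn't say how he sets up the five checkouts; one low-cost way to get multiple independent working copies of the same repo is `git worktree`, sketched below (the paths and branch names are illustrative, not his actual layout):

```shell
# Create extra working copies of the current repo without re-cloning.
# Each worktree gets its own branch, so parallel Claude Code sessions
# never step on each other's uncommitted changes.
git worktree add ../repo-session-2 -b session-2
git worktree add ../repo-session-3 -b session-3

# Start an independent Claude Code session in each checkout, e.g.:
#   (cd ../repo-session-2 && claude)

# List active worktrees to keep track of your numbered sessions:
git worktree list
```

Worktrees share one object store, so they are cheaper than full clones and a branch pushed from any checkout shows up in all of them.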
CLAUDE.md as Team Knowledge #
The Claude Code team shares a single CLAUDE.md checked into git. The whole team contributes to it multiple times a week. Whenever Claude does something wrong, they add a note so it doesn’t happen again.
This is the tip I find most valuable.
I’ve been using CLAUDE.md as a personal config file — project structure, build commands, testing conventions. Boris treats it as a living document that captures institutional knowledge. Every mistake becomes a permanent fix.
Think about it: in a normal team, someone discovers a gotcha, mentions it in Slack, and it’s forgotten in two days. With CLAUDE.md, that gotcha becomes a rule that every team member’s AI assistant follows forever.
```markdown
# Example CLAUDE.md pattern (inspired by Boris's approach)

## Things Claude gets wrong in this codebase

- Don't use `console.log` for debugging, use the `logger` module
- The `user` table has soft deletes — always filter by `deleted_at IS NULL`
- Tests must not hit the network — use the `nock` fixtures in `test/fixtures/`
```

If I could only keep one tip from the entire thread, it would be this one. A team-maintained CLAUDE.md is more valuable than any individual configuration trick.
Plan Mode First #
Most of Boris’s sessions start in Plan mode (Shift+Tab twice). If the goal is a PR, he iterates on the plan until he likes it, then switches to auto-accept edits mode. Claude usually one-shots it from there.
I’ve adopted this pattern and it works well. The planning step catches misunderstandings before Claude writes 200 lines of code in the wrong direction. Without Plan mode, I’d often get a large diff that was 80% right but required manual cleanup. With Plan mode, the success rate on the first attempt is noticeably higher.
The workflow looks like:
- Enter Plan mode (Shift+Tab twice)
- Describe what you want
- Read Claude’s plan, push back on parts you disagree with
- Once the plan looks right, switch to auto-accept edits mode
- Let Claude execute
Step 3 is where most of the value is. It’s cheaper to fix a plan than to fix code.
Slash Commands for Inner Loops #
Boris uses custom slash commands for every workflow he repeats multiple times a day. These are markdown files in `.claude/commands/` that define reusable prompts.
I haven’t used these as much as I should. Most of my repetitive tasks are things like “run the tests and fix whatever fails” or “lint this file and fix the issues.” Those are good candidates for slash commands.
```markdown
# .claude/commands/fix-tests.md

Run the test suite. For any failing tests, analyze the failure,
fix the code (not the test unless the test itself is wrong),
and re-run until all tests pass.
```

The idea is simple: anything you type more than twice should be a slash command. Boris has these for his inner-loop workflows. I'm starting to build mine.
Permissions, Not --dangerously-skip #
Boris doesn’t use `--dangerously-skip-permissions`. Instead, he uses `/permissions` to pre-allow specific bash commands he knows are safe in his environment. Most of these are checked into `.claude/settings.json` and shared with the team.
This is the security-conscious approach. `--dangerously-skip-permissions` is a sledgehammer. `/permissions` is a scalpel.
In practice, I’ve found that a handful of pre-allowed commands covers 90% of the permission prompts:
- `npm test`, `npm run lint`
- `git` commands (status, diff, log)
- `hugo server`, `hugo build`
- `npx playwright test`
Pre-allowing these means Claude can run tests and lint without asking, while still prompting for anything destructive. Much better than either extreme (constant prompts or no safety at all).
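For reference, a shared `.claude/settings.json` along these lines might look like the sketch below. The `Bash(...)` allow-rule syntax matches Claude Code's documented permission rules as I understand them, and the command list is mine, not Boris's:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm test)",
      "Bash(npm run lint)",
      "Bash(git status)",
      "Bash(git diff:*)",
      "Bash(git log:*)",
      "Bash(hugo server:*)",
      "Bash(hugo build)",
      "Bash(npx playwright test:*)"
    ]
  }
}
```

Because this file lives in the repo, a permission one person grants benefits the whole team, the same dynamic as the shared CLAUDE.md.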
MCP Integrations #
Claude Code uses all of Boris’s tools for him. It searches and posts to Slack (via an MCP server), runs BigQuery queries, and grabs error logs from Sentry. The Slack MCP configuration is checked into `.mcp.json` and shared with the team.
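I don't know what Boris's actual Slack entry looks like, but a project-level `.mcp.json` generally follows this shape; the server name, package, and environment variable below are placeholders for illustration:

```json
{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "env": {
        "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}"
      }
    }
  }
}
```

Keeping credentials in environment variables rather than the file itself is what makes it safe to commit the config to the repo.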
MCP (Model Context Protocol) is where Claude Code goes from “AI code editor” to “AI coworker.” When Claude can pull context from Slack, query your database, and check your error logs, the quality of its suggestions improves dramatically because it has the same context you do.
I don’t have Slack or Sentry hooked up, but I use MCP for documentation lookups. The principle is the same: the more context Claude has about your environment, the less you have to explain.
Verification is Everything #
Boris saved the most important tip for last: give Claude a way to verify its work. If Claude has a feedback loop, it will 2-3x the quality of the final result.
His example: Claude tests every single change he lands to claude.ai/code using the Claude Chrome extension. It opens a browser, tests the UI, and iterates until the code works and the UX feels good.
For those of us without a Chrome extension testing setup, the same principle applies at a simpler level:
- Write tests before asking Claude to implement the feature
- Include “run the tests and verify” as the last step of every prompt
- Use CI as a verification step (Claude can read CI output and fix issues)
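A concrete version of the first bullet, in shell: write the failing test yourself, then hand Claude a prompt that ends with the verification step. The file names and the `slugify` function here are hypothetical, just to show the shape:

```shell
# 1. Write the failing test first -- it defines what "done" means:
mkdir -p test
cat > test/slugify.test.js <<'EOF'
const assert = require('node:assert');
const { slugify } = require('../src/slugify');
assert.strictEqual(slugify('Hello, World!'), 'hello-world');
EOF

# 2. Ask Claude to implement against it, with verification baked in:
#    "Implement src/slugify.js so that test/slugify.test.js passes.
#     Run `node test/slugify.test.js` and iterate until it exits 0."
```

The point is that the test, not your prose description, becomes the feedback loop Claude iterates against.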
What I’m Taking Away #
If I had to compress Boris’s thread into three principles:
- Start simple, customize when friction appears. Don’t over-engineer your setup.
- Make CLAUDE.md a team habit. Every mistake Claude makes should become a permanent rule.
- Give Claude a feedback loop. Tests, linters, browser automation. The tool that can verify its own work produces dramatically better results.
The parallel sessions and Opus-for-everything tips are worth experimenting with, but they’re optimizations on top of these fundamentals. Get the basics right first.
What’s your Claude Code setup? I’m always curious how other developers are using it.
Sources: Boris Cherny’s original thread, howborisusesclaudecode.com