Original article by Boris Cherny, developer of Claude Code.
Compiled & Organized by: Xiaohu AI
You may have heard of Claude Code, and even used it to write some code or edit some documentation. But have you ever thought about how AI would change the way you work if it weren't just a "temporary tool," but a formal member of your development process, or even an automated collaboration system?
Boris Cherny, the creator of Claude Code, wrote a very detailed tweet sharing how he uses the tool efficiently and how he and his team deeply integrate Claude into the entire engineering process in their actual work.
This article will provide a systematic overview and accessible interpretation of his experience.
How did Boris make AI an automation partner in his workflow?
He described his workflow, including:
Run multiple Claude instances simultaneously: open 5-10 sessions in the terminal and on the web to process tasks in parallel, and use Claude on mobile as well.
Avoid haphazardly changing the default settings: Claude is ready to use out of the box and doesn't require complex configuration.
Use the most powerful model (Opus 4.5): Although it's a bit slower, it's smarter and easier to use.
Plan before you write code (Plan mode): Let Claude help you think things through before you start writing, which increases the success rate.
After generating the code, use a tool to check the format to avoid errors.
The team maintains a "knowledge base": whenever Claude makes a mistake, they add the lesson to it so that it won't make the same mistake again.
Automatically train Claude when writing a PR: Let Claude see the PR and learn new usages or conventions.
Your frequently used commands become slash commands, which Claude can automatically call, saving you from repetitive work.
Use "sub-agents" to handle certain fixed tasks, such as code simplification and function verification.
Don't skip permissions wholesale: instead, set up safe commands that are allowed to pass automatically.
Synchronize Claude workflow across multiple devices (web, terminal, mobile).
The most important point:
Claude must be provided with a "verification mechanism" so that it can verify whether what it writes is correct.
For example, Claude can automatically run tests, open test web pages in the browser, and check if the function is working.
Boris first conveyed a core concept: Claude Code is not a static tool, but an intelligent partner that can work with you, continuously learn, and grow together.
It doesn't require much complex configuration and is powerful out of the box. But if you're willing to invest the time to build better usage patterns, the efficiency gains it can bring are exponential.
Boris uses Claude's flagship model, Opus 4.5, with extended thinking enabled for all development tasks.
Although this model is larger and slower than Sonnet, it is noticeably smarter and less likely to go astray.
When we open Claude, many people instinctively type "help me write an interface" or "refactor this code"... Claude will usually write something, but it often goes astray, misses logic, or even misunderstands the requirements.
Boris's first step is never to let Claude write any code. He uses Plan mode: first, he and Claude work together to develop an implementation plan, and only then do they move on to the execution phase.
When starting a pull request, Boris doesn't let Claude write code directly; instead, he uses Plan mode:
1. Describe the goal.
2. Develop a plan with Claude.
3. Confirm each step.
4. Only then does Claude start writing code.
Whenever a new feature needs to be implemented, such as "adding rate limiting to a certain API," he will confirm it step by step with Claude:
This "planning consultation" process is similar to two people drawing up "construction blueprints" together.
Once Claude understands the goal, Boris turns on "auto-accept edits" mode, allowing Claude to modify the code and submit PRs directly, sometimes without even requiring manual confirmation.
"Claude's code quality depends on whether you reached a consensus before writing the code." — Boris
Lesson learned: Instead of repeatedly fixing Claude's mistakes, it's better to draw a clear roadmap together from the beginning.
The Plan mode isn't a waste of time; it uses prior negotiation to ensure stable execution. No matter how powerful AI is, it still needs you to "explain yourself clearly."
Boris doesn't run just a single Claude. His daily routine looks like this:
Each Claude instance is like a "personal assistant": some are responsible for writing code, some for completing documentation, and some run test tasks in the background for extended periods.
He even set up system notifications so that he would be alerted immediately whenever a Claude instance was waiting for input.
Claude's context window is limited, making it unsuitable for "doing everything in one window." Boris splits work across multiple Claude roles for parallel processing, reducing both waiting time and "interference with memory."
He also reminded himself through system notifications: "Claude 4 is waiting for your reply" and "Claude 1 has completed the test," managing these AIs like managing a multi-threaded system.
Imagine you're surrounded by five bright interns, each responsible for a task. You don't need to see everything through to the end; you just need to "switch people over" at crucial moments to keep the tasks running smoothly.
Implication: Treating Claude as multiple "virtual assistants" to perform different tasks can significantly reduce waiting time and context switching costs.
Some workflows we do dozens of times a day:
He encapsulated these operations into Slash commands, such as:
/commit-push-pr
These commands are backed by Bash script logic, stored in the .claude/commands/ folder, and are managed by Git so that team members can use them.
When Claude encounters this command, it doesn't just "execute the instruction," but rather understands the workflow that the command represents and can automatically execute intermediate steps, pre-fill parameters, and avoid repeated communication.
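For illustration, a custom slash command in Claude Code is simply a Markdown file under `.claude/commands/` whose filename becomes the command name, with `$ARGUMENTS` as the placeholder for anything typed after the command. The sketch below is a hypothetical version of `/commit-push-pr`; the prompt wording is an assumption, not Boris's actual file:

```markdown
<!-- .claude/commands/commit-push-pr.md (hypothetical example) -->
Commit the current changes with a clear, conventional commit message.
Push the branch to origin.
Open a pull request with `gh pr create`, summarizing the changes.
Additional context from the user: $ARGUMENTS
```

Because the file lives in the repository, every teammate gets the same `/commit-push-pr` behavior for free.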
Understanding the key points
The Slash command is like an "auto button" you install on Claude. You train it to understand a task flow, and then it can execute it with a single click.
“Not only can I save time with commands, but Claude can too.” — Boris
Lesson learned: Avoid repeatedly entering prompts; abstract high-frequency tasks into commands to automate your collaboration with Claude.
Boris's team maintains a .claude directory and keeps it under Git version control.
It's like an "internal wiki" for Claude, recording things such as coding conventions, project context, and lessons from past mistakes.
Claude will automatically refer to this knowledge base to understand the context and determine the code style.
Whenever Claude misunderstands or makes a logical error, the lesson is added to the knowledge base.
Each team maintains its own version.
Everyone collaborates on the editing, and Claude refers to this knowledge base in real time to make judgments.
For example:
If Claude keeps writing the pagination logic incorrectly, the team only needs to write the correct pagination standard into the knowledge base, and every user will automatically benefit from it.
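Concretely, a knowledge-base entry like this is just a short rule in a shared Markdown file (for example, the project's CLAUDE.md). The pagination rule below is a hypothetical illustration of what such an entry might say:

```markdown
## Pagination (hypothetical knowledge-base entry)
- Use cursor-based pagination, not offset/limit.
- The cursor is the `id` of the last item on the previous page.
- Return `nextCursor: null` on the last page.
```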
Boris's approach: Instead of scolding it or turning it off, he "trained it once":
"We don't write this code this way; add it to the knowledge base."
Claude won't make the same mistake again next time.
More importantly, this mechanism is not maintained by Boris alone, but by the entire team contributing and modifying it every week.
Lesson: Using AI is not about each person working alone, but about building a system of "collective memory".
When Boris is reviewing code, he often @Claude on pull requests, for example:
@claude Please add this function syntax to the knowledge base.
When used with GitHub Actions, Claude will automatically learn the intent behind the changes and update its internal knowledge accordingly.
This is similar to "continuous training of Claude," where each review not only improves the code but also enhances the AI's capabilities.
This is no longer "post-maintenance," but rather integrating AI's learning mechanisms into daily collaboration.
The team used pull requests to improve code quality, while Claude simultaneously improved its knowledge.
Lesson learned: PR is not just a code review process, but also an opportunity for AI tools to evolve.
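As a sketch of how the @claude-on-PR flow can be wired up, the workflow below uses the anthropics/claude-code-action GitHub Action to react to comments that mention @claude. Treat the version tag and inputs as assumptions and check the action's README for the current interface:

```yaml
# Hypothetical workflow: let Claude respond to @claude mentions on PRs.
name: claude
on:
  issue_comment:
    types: [created]
jobs:
  claude:
    if: contains(github.event.comment.body, '@claude')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1  # verify version and inputs
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```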
In addition to the main task flow, Boris also defines some subagents to handle common auxiliary tasks.
Subagents are modules that run automatically to handle fixed tasks, such as code simplification and function verification.
These sub-agents, like plugins, automatically integrate into Claude's workflow and run collaboratively without requiring repeated prompts.
Lesson learned: Sub-agents are Claude's "team members," elevating Claude from an assistant to a "project commander."
Claude is no longer a lone assistant, but a mini-manager that can lead a team.
In a team, it's not easy to get everyone to write code in a consistent style. While Claude has strong generation capabilities, it's not immune to minor flaws like slightly poor indentation or too many blank lines.
Boris's approach is to set up a PostToolUse Hook.
Simply put, this is a "post-processing hook" that Claude automatically invokes after a task is completed.
Its functions include automatically formatting the generated code and cleaning up minor style issues such as inconsistent indentation and extra blank lines.
This step is usually simple, but crucial. It's like running Grammarly again after writing an article; this ensures that the submitted work is consistent and neat.
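In Claude Code, such a hook lives in `.claude/settings.json` under the `hooks` key. The fragment below is a minimal sketch that runs a formatter script after every Edit or Write tool call; the script path is a hypothetical placeholder for whatever formatter your project uses:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "sh .claude/hooks/format.sh" }
        ]
      }
    ]
  }
}
```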
For AI tools, the key to usability often lies not in their generative power, but in how reliably they finish the details.
Boris explicitly stated that he does not use `--dangerously-skip-permissions`—a parameter of Claude Code that skips all permission prompts when executing commands.
It sounds convenient, but it can also be dangerous, such as accidentally deleting files or running the wrong script.
His alternative is:
1. Use the /permissions command to explicitly declare which commands are trusted.
2. Write these permission configurations into .claude/settings.json
3. Share these security settings with the entire team.
This is like creating a "whitelist" of operations for Claude in advance, such as:
"permissions": {
  "allow": [
    "Bash(git commit:*)",
    "Bash(npm run build)",
    "Bash(pytest:*)"
  ]
}
Claude executes these operations directly without interruption each time.
This permission mechanism is designed more like a team operating system than a standalone tool's settings: frequently used, safe bash commands are pre-authorized via `/permissions`, stored in `.claude/settings.json`, and shared by the whole team.
Lesson learned: AI automation does not mean loss of control. True engineering lies in incorporating safety strategies into the automation process itself.
Boris didn't just let Claude write code locally. He configured Claude to access multiple external platforms through MCP (the Model Context Protocol).
MCP configuration is saved in .mcp.json
Claude reads the configuration at runtime and autonomously executes cross-platform tasks.
The entire team shares a single configuration.
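A `.mcp.json` file simply declares which MCP servers Claude may launch and how. The fragment below is a hedged sketch: the GitHub server package shown is one of the reference MCP servers, and your team's actual entries would differ:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```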
Claude is like a robotic assistant that can help you:
"Write the code → Submit the PR → Check the results → Notify QA → Report the log".
This is no longer an AI tool in the traditional sense, but the nerve center of an engineering system.
Lesson: Don't let AI only work in the "editor".
It can become the scheduler in your entire system ecosystem.
In real-world projects, Claude sometimes needs to handle long-running tasks.
Boris's approach is highly engineering-oriented:
1. After completion, Claude verifies the result using the background agent.
2. Use a Stop Hook to automatically trigger subsequent actions when the task ends.
3. Use the ralph-wiggum plugin (proposed by @GeoffreyHuntley) to manage long-running process states.
In these scenarios, Boris will use:
--permission-mode=dontAsk
Alternatively, run the task in a sandbox to avoid interrupting the entire process due to permission prompts.
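The Stop hook mentioned in step 2 is also configured in `.claude/settings.json`: when Claude finishes responding, the command runs and can kick off follow-up work. The script path below is a hypothetical placeholder:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "sh .claude/hooks/after-task.sh" }
        ]
      }
    ]
  }
}
```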
Claude doesn't require constant monitoring; it becomes a collaborator you can trust to manage your projects.
Lesson learned: AI tools are not only suitable for quick and easy operations, but also for long-term and complex processes—provided that you build a proper "managed mechanism" for them.
The most important lesson Boris learned is:
Any result output by Claude must have a "verification mechanism" to check its correctness.
He attaches a verification script or hook to Claude's workflow:
If verification fails, Claude automatically revises and re-executes until it succeeds.
It's as if Claude brought its own "closed-loop feedback system."
This not only improves quality but also reduces the cognitive burden on people.
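As a minimal sketch (not Boris's actual script), a verification gate can be as simple as a shell function that runs any check command and reports the result in a form the agent can read; a hook or CI step then propagates the failure back to Claude so it revises and retries:

```shell
# verify: run any check command, print a machine-readable verdict,
# and propagate the exit status so a hook or CI step can react to it.
verify() {
  if "$@" > /dev/null 2>&1; then
    echo "VERIFY: pass"
    return 0
  else
    echo "VERIFY: fail"
    return 1
  fi
}
```

For example, `verify npm test` prints `VERIFY: pass` only when the test suite succeeds; on failure, the nonzero exit status is what tells Claude to revise and re-run.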
Lesson learned: What truly determines the quality of AI results is not the number of model parameters, but whether you have designed a "results checking mechanism".
Boris's approach did not rely on any "hidden features" or black magic; he engineered the use of Claude, upgrading it from a "chat tool" into a component of a high-performing work system.
His use of Claude has several key characteristics: plan before coding, automate high-frequency tasks, verify every result, and let the whole team share what Claude learns.
In fact, Boris's method demonstrates a new way of using AI: not a smarter prompt, but a better system around the model.
This approach doesn't rely on black magic; it demonstrates engineering discipline. You can learn from it to use Claude or other AI tools more efficiently and intelligently.
If you often feel that "it knows a little, but is unreliable" or "I always have to fix the code it writes" when using Claude, the problem may not be with Claude, but with the fact that you haven't provided it with a mature collaboration mechanism.
Claude can be a competent intern or a reliable engineering partner; it all depends on how you use it.


