Author: Yanhua
Compiled from: Podwise

OpenClaw has recently become incredibly popular, with discussions everywhere. But frankly, most of the content focuses on theory, architecture, and vision. Few people clearly explain what it can actually do or how to implement it in daily work.
Matthew Berman recently released a video showcasing all the use cases he built using OpenClaw. It's all hands-on, no abstract concepts. CRM, knowledge base, business consulting team, security audit, social media tracking, video topic pipeline, daily briefings, food diary… One person, one MacBook, did the work of a small company's middleware team.
Let me break down his core use cases and discuss them one by one. Without exaggeration or bias, I'll explain what each use case is, how it works, and what its advantages are.
This is the first use case Berman demonstrated, and also the most intuitive one.
The setup process: He told OpenClaw in natural language, "Build me a CRM that extracts data from Gmail, Google Calendar, and Fathom, filters out marketing emails and cold sales pitches, and only keeps valuable conversations and contacts." He didn't write a single line of code. It was up and running in 30 minutes.
Data ingestion: The system scans email every 30 minutes and checks Fathom (an AI-powered meeting recording tool) every 5 minutes during work hours. Before anything is stored, an LLM (large language model) assesses it: Is this email worth storing? Is this contact important?
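The ingestion gate can be sketched in a few lines. The `llm_assess` stub below stands in for a real model call, and the keyword heuristic, field names, and threshold are all invented for illustration:

```python
# Hypothetical sketch of the ingestion gate: every incoming email is judged
# before storage. `llm_assess` is a placeholder for an LLM call; a real
# system would send the subject/body to a model for the "worth storing?"
# judgment. The heuristic here exists only to make the sketch runnable.

def llm_assess(email: dict) -> bool:
    """Crude stand-in for the LLM's 'is this worth storing?' judgment."""
    spammy = ("unsubscribe", "limited offer", "act now")
    body = email["body"].lower()
    return not any(term in body for term in spammy)

def ingest(emails: list[dict]) -> list[dict]:
    """Keep only conversations the assessment deems valuable."""
    return [e for e in emails if llm_assess(e)]

inbox = [
    {"sender": "john@example.com", "body": "Notes from our call yesterday."},
    {"sender": "promo@shop.com", "body": "Act now! Limited offer inside."},
]
kept = ingest(inbox)
```

The important design point is that the gate runs *before* storage, so the CRM never accumulates junk that later searches would have to wade through.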
Core competencies:
All 371 contacts can be searched using natural language. "What did I talk about with John last time?" "Who was the last person I spoke with at Company X?"
Relationship health scores, automatically flagging people you haven't contacted in a long time.
Duplicate contact detection with merge suggestions.
Vector embedding search that supports semantic-level fuzzy matching.
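The semantic-level fuzzy matching can be illustrated with a toy retrieval loop. Real systems use learned embeddings; this sketch substitutes bag-of-words cosine similarity purely to show the shape of the search, and all contact data is invented:

```python
# Toy sketch of semantic contact search over a local store. A real system
# embeds text with a learned model; bag-of-words cosine is used here only
# so the sketch runs without dependencies. All names are hypothetical.
from collections import Counter
import math

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

contacts = {
    "John":  "discussed video sponsorship and launch timeline",
    "Alice": "quarterly accounting and invoice questions",
}

def search(query: str) -> str:
    """Return the contact whose notes best match the query."""
    vecs = {name: embed(notes) for name, notes in contacts.items()}
    q = embed(query)
    return max(vecs, key=lambda n: cosine(q, vecs[n]))
```

Swapping `embed` for a real embedding model is the only change needed to turn keyword overlap into genuine semantic matching.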
The most impressive detail: When Berman is in other scenarios (such as thinking about video topics), the CRM will proactively interject: "You've talked to a sponsor about similar topics before, maybe they'd be willing to sponsor this episode." The system works across modules, not just passively storing data, but actively creating connections.
Berman's original words: "If I can build a fully customized CRM in 30 minutes, and then spend an hour or two iterating and optimizing it, why would I pay a CRM company?"
This use case works closely with CRM, but it deserves to be discussed separately.
Workflow: Meeting ends → Fathom transcribes the full text → OpenClaw matches CRM contacts → Extracts action items → Sends to Berman via Telegram for approval → Approved items automatically enter Todoist.
Several key design features:
Distinguish between "my" and "the other party's" actions. Items promised to you by the other party are marked "waiting on" by the system, and the system automatically tracks whether the other party fulfills their promise.
Self-learning filtering. If Berman rejects an action ("This isn't my task"), the system learns the reason and updates the extraction rules. It will not extract similar actions again.
The system automatically checks the completion status three times a day. For example, if you say in a meeting, "I will send the email today," the system will check whether you actually sent it, and if you did, it will automatically check the box.
Automatic archiving after 14 days. Overdue and incomplete items are automatically cleaned up to keep the list clean.
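The lifecycle rules above (owner tagging, the 14-day auto-archive) can be sketched like this; the field names are my assumptions, not Berman's actual schema:

```python
# Sketch of the action-item lifecycle: items carry an owner tag ("me" vs.
# "waiting_on" the other party), and incomplete items older than 14 days
# are archived automatically. Field names are illustrative.
from datetime import date

def triage(items: list[dict], today: date):
    """Split items into (active, archived) by the 14-day rule."""
    active, archived = [], []
    for item in items:
        if (today - item["created"]).days > 14 and not item["done"]:
            archived.append(item)
        else:
            active.append(item)
    return active, archived

items = [
    {"task": "send deck", "owner": "me", "created": date(2025, 1, 1), "done": False},
    {"task": "intro email", "owner": "waiting_on", "created": date(2025, 1, 20), "done": False},
]
active, archived = triage(items, today=date(2025, 1, 22))
```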
The value of this system lies not in any single function, but in its complete automation of the "post-meeting follow-up," the part most prone to failure.
Berman has long suffered from a pain point: he sees good content, saves it, and then can't find it again.
His solution was extremely simple: put all the links into Telegram and leave the rest to OpenClaw.
The system will automatically process these types of content:
Articles: scrapes the full text directly; for paywalled sites, the browser logs in automatically before extraction.
YouTube videos: captures the captions or transcribes the audio to text.
X posts: captures not just the single post but the entire thread, including backlinks.
PDFs: parses the text directly.
All content is vectorized into embeddings and stored locally in SQLite. After that, natural-language search just works: "Show me all articles about OpenAI."
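A minimal sketch of the local store, substituting a plain `LIKE` match where the real system searches over embeddings (the schema and URLs here are assumptions):

```python
# Sketch of the local content store: rows in SQLite, queried by text match.
# The real system searches embeddings; LIKE is a stand-in so the sketch is
# self-contained. Schema and URLs are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (url TEXT, kind TEXT, body TEXT)")
rows = [
    ("https://example.com/a", "article", "OpenAI releases a new model"),
    ("https://example.com/b", "pdf", "quarterly tax filing guide"),
]
conn.executemany("INSERT INTO items VALUES (?, ?, ?)", rows)

def search(term: str) -> list[str]:
    """Return URLs whose body mentions the term (case-insensitive)."""
    cur = conn.execute("SELECT url FROM items WHERE body LIKE ?", (f"%{term}%",))
    return [r[0] for r in cur.fetchall()]
```

Keeping everything in one local SQLite file is what makes the "just throw it in" workflow possible: there is no schema to maintain and nothing to tag.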
Enhanced team collaboration: Each piece of content added to the database is automatically synced to Slack with the message "Matt wants you guys to see this." The team knows that this is something the boss has personally read, not something randomly pushed by AI.
The key to this use case isn't its technical complexity, but rather its extremely low barrier to entry. No tagging, categorization, or organization is required. Just throw it in, and AI will handle the rest.
Personally, I think this is the craziest use case in the entire video.
Data input: 14 business data sources. YouTube analytics, Instagram post engagement, X Analytics, TikTok data, email campaigns, meeting minutes, Cron task health status, Slack messages... basically covering all dimensions of his business.
Analysis Process: Eight AI expert roles (finance, marketing, growth, operations, etc.) independently analyze all the data, running in parallel. After the analysis is complete, they discuss their findings, synthesize disagreements, and then merge them into a priority-based list of recommendations.
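The fan-out/fan-in shape of that process can be sketched as follows. The "experts" here are trivial stand-ins for LLM role prompts, and every name and metric is illustrative:

```python
# Sketch of the advisory-board shape: several "experts" analyze the same
# data in parallel, then their findings are collected for synthesis. The
# expert functions are stand-ins for LLM role prompts.
from concurrent.futures import ThreadPoolExecutor

def make_expert(role: str):
    def analyze(data: dict) -> str:
        # A real expert would be an LLM call with a role-specific prompt.
        return f"{role}: top metric is {max(data, key=data.get)}"
    return analyze

experts = [make_expert(r) for r in ("finance", "marketing", "growth")]
data = {"revenue": 10, "followers": 25, "churn": 3}

with ThreadPoolExecutor() as pool:
    findings = list(pool.map(lambda expert: expert(data), experts))
```

The synthesis step (debating disagreements and merging into a priority list) would be one more LLM call over `findings`; the parallel fan-out is the part this sketch shows.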
Delivery method: The process runs automatically at midnight every night, and the results are sent to Telegram as a numbered summary. Berman glances at it after waking up and can ask follow-up questions about any item: "Expand on result 3."
The insight behind this use case is the multi-agent collaboration model. It's not one AI giving you advice, but a group of AIs debating before presenting their conclusions. Like a real board of directors: finance argues for saving money, marketing argues for spending it, and the result is a pragmatic compromise.
It has a similar structure to a business consulting firm, but the direction is completely different.
Runtime: 3:30 AM every night (staggered from other task times to avoid Anthropic API quota conflicts).
Review team: Security experts covering four perspectives: offensive, defensive, data privacy, and operational integrity.
Scope of review: The entire codebase, Git commit history, runtime logs, error logs, and stored data. This is not static rule scanning; it allows AI to truly read the code and understand the logic.
Output: Opus 4.6 synthesizes all findings, numbers them, and posts them on Telegram. Critical issues are immediately alerted. Berman can directly reply "fix it," and the system will automatically fix them.
Self-evolution: Experiences from each fix are remembered, and review rules are continuously iterated. Some nights there are no new suggestions because the system confirms the current state is safe.
The most brilliant aspect of this use case is using AI to examine AI itself. Berman is quite frank: safeguards against injection vulnerabilities can never be perfect. But rather than pretending the risk doesn't exist, it's better to let the system perform a self-check every day.
Tracking scope: YouTube, Instagram, X, and TikTok. It automatically retrieves snapshots daily and stores them in an SQLite database.
Data dimensions: YouTube tracks video views, watch time, and engagement rate; other platforms track post-level performance data.
Dual use:
Daily briefing. Sent to Telegram every morning, informing him of yesterday's performance, highlighting what went well and what didn't.
Feed it to the Business Advisory Board . Social media data is one of 14 data sources directly involved in nightly business analytics.
This illustrates the flywheel effect of the entire system: the social media tracking module does not operate in isolation; the data it generates simultaneously serves two downstream use cases: briefings and advisory committees.
Triggering method: In a Slack discussion, anyone replies "@Claude, this is a video idea" under a post.
Automated processes:
Read the full context of the Slack discussion thread.
Run web search and X trend research.
Search the knowledge base for relevant existing materials.
Check whether the topic duplicates one already covered.
Generate a complete video outline: title suggestions, thumbnail suggestions, opening hook, and video flow framework.
Assess whether the topic is worth pursuing.
Create a project card in Asana, including all research materials and links.
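The steps above reduce to a linear pipeline. Here is a minimal sketch of just the duplicate-check and packaging stage; `existing_topics` stands in for the knowledge-base lookup, and every name is hypothetical:

```python
# Sketch of the duplicate-check + packaging step from the topic pipeline.
# `existing_topics` is a stand-in for the knowledge-base lookup; all field
# names are invented for illustration, not Berman's actual schema.

existing_topics = {"gpt-5 review", "agents roundup"}

def propose(topic: str, context: str):
    """Return a topic package, or None if the idea duplicates an existing one."""
    if topic.lower() in existing_topics:
        return None  # duplicate check failed; drop the idea
    return {
        "title_suggestions": [f"Why {topic} matters"],
        "hook": context[:60],
        "status": "ready_for_asana",
    }

card = propose("GPU pricing wars", "Slack thread: inference costs dropped 40%")
```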
In the video, Berman demonstrated a real-world example: the release of Qwen 3.5 was shared on Slack, someone tagged it as a video idea, and the system automatically generated a complete topic package, including discussions from different KOLs on Twitter, reactions from the open-source community, and suggested video angles.
The value of this use case is that it reduces the distance between "inspiration capture" and "executable solution" to near zero.
Most people's experience with ChatGPT is that every conversation feels like the first time they meet. Berman's OpenClaw is not like that.
Memory levels:
Conversation Memory: Daily conversations are automatically saved as Markdown files.
Preference Extraction: Extract writing preferences, tone and style, interests, stock tracking, email classification rules, etc. from the conversation and store them in memory.md
Identity Update: At the start of each new conversation, the system reads the memory files and updates identity.md and soul.md.
Vectorized retrieval: All memory files are vectorized, supporting RAG search.
Context-dependent personality switching: Berman assigned two personalities to the AI. In private Telegram chats, it acts like a friend, humorous and casual; in the Slack team channel, it automatically switches to a professional, colleague-like style. These are all defined in soul.md.
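The channel-dependent personality switch can be sketched as a lookup into soul.md. The file name comes from the article; the section format and parsing shown here are my assumptions:

```python
# Sketch of context-dependent personality switching: the agent loads a
# different persona section from soul.md depending on the channel. The
# [section] format and tone lines are assumptions for illustration.

SOUL = """
[telegram]
tone: casual, humorous, friend-like
[slack]
tone: professional, colleague-like
"""

def persona_for(channel: str) -> str:
    """Return the tone configured for a channel, or a neutral default."""
    section = None
    for line in SOUL.strip().splitlines():
        if line.startswith("["):
            section = line.strip("[]")
        elif section == channel and line.startswith("tone:"):
            return line.split(":", 1)[1].strip()
    return "neutral"
```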
This use case transforms AI from a "tool" into a "partner." It no longer just executes instructions, but truly understands who you are and what you want.
This is the most unexpected use case.
How to use: Take a photo of your food and send it to OpenClaw. It will automatically recognize and record the data. You will receive three reminders daily to report how your stomach feels. All data is stored in a food log.
Analytical capabilities: Triggers weekly analysis, cross-referencing food records and symptom reports to identify patterns.
Real Results: By analyzing the food components in the photos and Berman's symptom feedback, the system discovered that his stomach was sensitive to onions. This was something he was completely unaware of.
A chatbot helped people identify food allergens, something that previously required specialized testing at a hospital.
This part is less sexy, but it's probably the most important infrastructure.
Cron task list:
| Frequency | Task |
|------|------|
| Every 5 minutes | Check Fathom meeting minutes |
| Every 30 minutes | Scan email |
| 3 times a day | Action item completion check |
| Every night | Document synchronization, CRM scanning, security audit, log ingestion, video data refresh, morning briefing generation |
| Weekly | Memory Synthesis, Earnings Preview |
| Hourly | Git commits + database backups |
Backup strategy: All SQLite databases are automatically discovered, encrypted, packaged, and uploaded to Google Drive, retaining the most recent 7 days' worth of data. Code is pushed to GitHub hourly. Any backup failure triggers an immediate Telegram alert.
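The 7-day retention rule is the mechanical core of this strategy. Assuming date-stamped archive names (a naming convention invented here, since ISO dates sort lexicographically), pruning is a short function:

```python
# Sketch of the 7-day backup retention rule: given archives named with ISO
# dates, return the ones to delete, keeping the newest seven. The naming
# convention is an assumption; encryption and upload are out of scope here.

def prune(backups: list[str], keep: int = 7) -> list[str]:
    """Return the backups to delete, keeping the `keep` newest by name."""
    return sorted(backups, reverse=True)[keep:]

names = [f"backup-2025-01-{d:02d}.tar.gz.enc" for d in range(1, 11)]
to_delete = prune(names)
```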
Automatic updates: Check for new OpenClaw versions every night at 9 PM, display the changelog, and automatically upgrade and restart with a single "update" message.
API tracing: Records which model was used and how many tokens were consumed for each LLM call. It even downloaded the official prompting guidelines for each model, allowing the system to optimize its prompts based on the actual models used.
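Per-call tracing needs nothing more than an append-only log plus an aggregation step; a minimal sketch with invented model names and token counts:

```python
# Sketch of per-call API tracing: record which model each LLM call used and
# how many tokens it consumed, then aggregate totals per model. Model names
# and numbers are illustrative.
from collections import defaultdict

trace: list[dict] = []

def record(model: str, prompt_tokens: int, completion_tokens: int) -> None:
    trace.append({"model": model, "tokens": prompt_tokens + completion_tokens})

def tokens_by_model() -> dict:
    totals = defaultdict(int)
    for call in trace:
        totals[call["model"]] += call["tokens"]
    return dict(totals)

record("opus", 1200, 300)
record("haiku", 200, 50)
record("opus", 800, 100)
```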
The design philosophy of this infrastructure is simple: the system works while you sleep; and you know immediately when the system malfunctions.
Berman integrated Veo (video generation) and NanoBanana Pro (Gemini image generation) into OpenClaw.
It's very easy to use: Simply say "Video of a villa in Tuscany, Italy" in Telegram, and the system will use Veo to generate the video, automatically download it, and send it to Telegram, then delete the local file to save space. Images work similarly; just tell it what you want, and NanoBanana Pro will generate and push it directly.
This use case itself isn't particularly groundbreaking, but its value lies in its ability to be embedded into other workflows. For example, when generating thumbnail suggestions in a video topic selection pipeline, image generation can be directly invoked to produce the images.
If you only look at a single use case, you might think, "That's pretty cool, but it doesn't seem that special." ChatGPT can also help you look up contacts, and Notion AI can also help you organize your knowledge base.
But the real power of the Berman system lies in the data flow between use cases:
CRM data → business advisory board
Knowledge base content → video topic selection pipeline
Social media data → daily briefings + advisory board
Meeting minutes → CRM + action item system
All module runtime logs → security audit
Each module is not an island. They form a mutually reinforcing data flywheel. This is why one person plus a MacBook can produce the results of a small team.
There's a quote from Berman that I think is particularly apt: "You'll begin to see how all the different parts I've built interact and make each other stronger."
Berman's efforts in safety deserve special emphasis:
Prompt-injection defense: All external content is treated as potentially malicious; deterministic code pre-scans data before it enters the database.
Least privilege: Email and calendar access is read-only; write permissions are denied.
Output control: Summarize rather than repeat verbatim; keys and tokens are filtered automatically.
Release approval: Manual confirmation is required before sending emails or tweets.
Encrypted backups: Double password protection; .env files are never added to the database.
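The key-and-token filtering can be as simple as scanning outgoing text for credential-shaped strings before it leaves the system. A toy sketch; the two patterns are illustrative, and a real scanner would cover many more formats:

```python
# Sketch of the output-control step: redact credential-shaped strings from
# outgoing text. The patterns (OpenAI-style "sk-" keys, GitHub-style "ghp_"
# tokens) are illustrative; real secret scanners use far larger rule sets.
import re

SECRET_PATTERN = re.compile(r"\b(sk-[A-Za-z0-9]{8,}|ghp_[A-Za-z0-9]{8,})\b")

def redact(text: str) -> str:
    """Replace anything credential-shaped before the text is sent anywhere."""
    return SECRET_PATTERN.sub("[REDACTED]", text)
```

Running this deterministically on every outbound message means a prompt-injected model cannot leak a key even if it tries to quote one verbatim.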
He himself made it very clear: "There is no perfect security solution. Large language models are nondeterministic systems, and it is impossible to completely prevent prompt injection. But that doesn't mean you should do nothing."
After reviewing these use cases, my biggest takeaway is that in the AI era, "full-stack" no longer refers to being able to write front-end and back-end code, but rather to being able to build and manage an entire AI workflow. Berman doesn't write code, but he has an extremely clear understanding of his needs and knows how to translate those needs into a working system using natural language.
This may be the most worthwhile skill to learn in 2026.
Based on Matthew Berman's video "21 INSANE Use Cases For OpenClaw," compiled with Podwise. The original video includes a complete prompt for each use case and is well worth watching. If you are also using OpenClaw or a similar framework to build your own AI system, share in the comments which use case you built first.


