
OpenClaw slams on the gas and crashes into the "2028 doomsday theory": Who will create the "brakes" for AI?

2026/02/25 13:22
12 min read

Author: Charlie , Partner at Generative Ventures, former Vice President of Strike

The recent buzz surrounding OpenClaw isn't due to its more human-like responses, but rather its ability to "take action for you." The shift from "let me think of something" to "I'll do it" represents not just a UI upgrade, but a complete transformation of the risk structure: when software can access tools, modify states, and control accounts and permissions, it ceases to be an assistant and becomes a potential economic actor.


The timing of Nearcon 2026 could hardly be more apt. NEAR has billed itself as the "chain for the AI era" for years, and Illia Polosukhin is no ordinary AI founder: he is a co-author of "Attention Is All You Need," which makes him one of the most qualified people to speak on how the Transformer evolved from a research paper into today's agents.

So when OpenClaw reignited the term "agentic commerce," everyone was eager to see what NEAR would announce at Nearcon, and what kind of transaction and privacy framework it would apply to agent-driven action.

Even more telling, OpenClaw recently delivered an unflattering but very real reminder: a Meta analyst working on AI alignment and safety asked an agent to help organize her emails, stating the boundary clearly: don't proceed without confirmation. As the agent grew more fluent with its toolchain, it began deleting emails in bulk, and she had to stop it manually. (This isn't a criticism of her; it illustrates how widespread the problem is: vigilance alone is no match for it.) When it's deleting emails, you can still salvage the situation; when it touches money, permissions, or contracts, a simple "recall" won't fix it.

Then, halfway through Nearcon, Citrini Research's "2028 GIC" report went viral. Although it was titled "2028," the market interpreted it as "tomorrow morning." You could clearly feel the sentiment spilling over from the tech sector to the secondary market: stories of SaaS and traditional financial payments—those that "make money through processes and friction"—were suddenly being re-evaluated. The sharp drop in Visa and Mastercard's stock prices didn't necessarily mean they were doomed tomorrow; rather, it represented the market's first serious examination of a mechanism: when both buyers and sellers involve agents, will the profit pools previously propped up by "human inefficiency" be compressed?

So yesterday, three things happened at once: OpenClaw made the capability curve more credible; the "accidentally deleted emails" incident pushed fragile control to the forefront; and Citrini moved the pressure on profit pools into market pricing. Against that backdrop, whether Nearcon's discussion of agentic commerce was well presented and practically applicable became a genuine test of its capabilities.

I think Illia's claim that "commerce is compressing" is correct, but not sufficient.

I strongly agree with one point made in Illia's opening keynote speech: AI has evolved from background functions to chat, then to agents capable of performing actions, and finally to multi-agent collaboration. At the stage where "my agent converses with your agent," software is no longer just a tool; it begins to act like a participant: negotiating, hiring, coordinating, and paying. In other words, software begins to function as an economic agent.

He used the phrase: commerce is compressing.

What makes the phrase accurate is that it isn't an abstract gesture at the future; it pinpoints a daily pain point: the internet is a collection of isolated islands. Each website has its own login system, its own forms, and its own payment system. You jump between pages, repeatedly filling in the same information, essentially acting as the "human middleware" that holds the fragmented systems together. (Many people don't realize that one of the most expensive resources on the modern internet is your attention, and you waste it every day on repetitive typing.)

Illia envisions a future where you express your intent, and the system executes it—intent-driven execution. You say, "I want to move to San Francisco," and the agent breaks down the task, asks for preferences, and drives the execution. It sounds great, and I believe the direction is right.

But Illia is more honest than many crypto narratives in that he doesn't shy away from the pitfall of "transparency." He states directly that on-chain transparency is often counterintuitive in daily life. When you search for a place to live, hire a mover, pay tuition, or pay medical bills, disclosing your balance, trading partners, and transaction amounts is tantamount to writing your life into a permanently indexable ledger. The vast majority of people don't want this kind of "freedom."

Nearcon accordingly gave privacy top billing this time: near.com serves as the entry point, with the message that users shouldn't have to think about the chain or gas; together with the so-called confidential mode, it treats privacy for balances, transfers, and transactions as a first-class citizen. I'm willing to give it high marks here, not because "privacy sounds sophisticated," but because it confronts the real adoption hurdle: before you'll let an agent spend money for you, you first have to be willing to put money in.

Citrini's discussion of "where the money comes from" was quite provocative, but Nearcon made me more concerned about "who covers the loss when something goes wrong?"

Why did Citrini's article stir up the market? Because it translated agentic commerce into the language of profit pools: if agents handle searching, price comparison, negotiation, ordering, reconciliation, and refunds for users, the links that rely on "human friction" for revenue get squeezed out. I don't disagree with this line of thinking.

But what Nearcon made me warier of is that not all business friction is bad. Much friction is actually about building trust. Anti-fraud, access control, accountability, dispute resolution, audit documentation, privacy boundaries: these things may seem tedious, but they are what make business work.

Removing people from the process won't make these costs disappear; it will only make them reappear in another place, and they will be harder to explain, harder to price, and more likely to cause major accidents.

This is why I increasingly dislike the one-sentence formula: agent + stablecoin = agentic commerce. Stablecoins are certainly important; they make settlement programmable, which is an infrastructure-level change. But stablecoins solve "how money moves," not "why money can move, who allows it to move, what happens if it moves incorrectly, who is responsible, how to hold them accountable, and how to compensate."

Nearcon's greater value lies in its attempt to fill the "missing layer": intent routing, privacy enforcement, architectural security, and an entry point to bring people in. It's not so much selling a "smarter agent," but rather saying: to make an agent an economic actor, you first need to build the business foundation.

The example of "moving to San Francisco" is both fascinating and dangerous.

I actually quite liked Illia's moving example, because it isn't a toy task: the chain is long, the parties are many, the money is substantial, and the details are endless, which makes it the quickest way to expose where an agent gets stuck.

But precisely because it is real, it exposes the problems more bluntly. The hardest part of moving was never "clicking a button"; it's three messier things.

The first issue is responsibility. The agent signs the terms, pays the deposit, and hires the service provider—who is actually signing these documents? Who is responsible if a dispute arises? The phrase "my agent hires your agent" sounds futuristic, but if the service fails, the goods don't arrive, or the terms are problematic, it immediately becomes the language of a lawyer's letter. Real-world business isn't just about "execution and that's it"; it's about "survival after execution."

The second thing is boundaries. Moving isn't a single instruction; it's a series of micro-authorizations: which amounts can be spent without asking me; which information can be shared with which suppliers; which terms require my confirmation; which payments are irreversible and need secondary confirmation. The Meta email-deletion story is striking precisely because it reminds us: you think you've drawn the boundaries, but the system may not "remember" them. When it deletes emails or code, you can still recover; when it touches money, you aren't rolling back an operation, you're rolling back trust.

The third point is compliance and anti-automation. Real-world business systems heavily rely on "anti-bot" designs: CAPTCHAs, risk control interception, and KYC processes. Illia mentioned the need for new intent-based APIs and more neutral, composable execution paths, instead of being blocked by Cloudflare-style anti-bot mechanisms—this essentially means that today's internet is designed for human interaction, not for agent-based transactions. If you want agents to become economic actors, you need to rewrite a layer of "machine-friendly" business interfaces.
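To make "machine-friendly business interfaces" concrete, here is a minimal sketch of what a structured, intent-based request might look like. Every field name below is my own illustration, not NEAR's actual intent API:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """A machine-readable request an agent submits directly,
    instead of impersonating a human on a web form.
    All field names are illustrative."""
    action: str                  # e.g. "book_movers"
    constraints: dict            # hard limits a provider must satisfy
    preferences: dict            # soft hints used for ranking quotes
    requires_confirmation: bool  # must a human approve before execution?

# One step of the "moving to San Francisco" task, expressed as an intent:
quote_request = Intent(
    action="book_movers",
    constraints={"date": "2026-03-15", "max_price_usd": 2000,
                 "origin": "New York", "destination": "San Francisco"},
    preferences={"insured": True, "min_rating": 4.5},
    requires_confirmation=True,  # irreversible spend: human stays in the loop
)
```

The point is that a provider exposing such an endpoint can validate hard constraints mechanically and return a quote, rather than forcing the agent through CAPTCHAs and form fields designed for humans.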

If these three issues aren't addressed, agentic commerce will forever remain a "futuristic" concept in demo videos. Only when they are resolved does it become something unglamorous but actually deployed, like payments, like risk control, like all real infrastructure.

George poured cold water on OpenClaw: Don't expect users to be careful; security must be written into the architecture.

In the second keynote, George Zeng, Head of Near AI (and, like me, a former member of South Park Commons), finally made me feel that someone was talking about agents as production systems.

His core point is actually quite simple: many agent frameworks today are inadequate for production because they expose keys, lack network controls, and have no architectural protection against prompt injection. Prompt injection isn't just an anecdote about "the model misbehaving"; it's workflow-level exploitation: an agent reads untrusted content such as web pages, emails, and PDFs, and hidden instructions inside that content can induce it to call tools, leak information, or perform destructive operations. As long as the agent holds the necessary permissions, this chain becomes very dangerous.
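One way to picture "architecture-level" protection is a rule the runtime enforces outside the model: tool calls proposed while untrusted content is in context can never trigger state changes without out-of-band human approval. A simplified sketch; the policy, tool names, and function are my own invention, not Near AI's design:

```python
# Illustrative tool-call gate. The runtime, not the model, decides
# whether a proposed call may execute.
READ_ONLY_TOOLS = {"search", "summarize"}
STATE_CHANGING_TOOLS = {"send_payment", "delete_email", "install_skill"}

def allow_tool_call(tool: str, context_is_tainted: bool,
                    human_approved: bool = False) -> bool:
    """context_is_tainted: untrusted content (web page, email, PDF)
    has been read into the agent's context."""
    if tool in READ_ONLY_TOOLS:
        return True  # reading more can't change state
    if tool in STATE_CHANGING_TOOLS:
        # State changes always need explicit human approval, so an
        # instruction hidden in a poisoned email can never self-approve.
        return human_approved
    return False  # unknown tools are denied by default

# A hidden instruction in a PDF says "delete all emails":
allow_tool_call("delete_email", context_is_tainted=True)  # blocked
```

The exact policy matters less than where it lives: in the runtime's permission layer, where injected text cannot rewrite it.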

Even more critical is the skills marketplace. Once you allow third-party skills to be installed, you've essentially created a new app store, except the "apps" in this store can touch your files, accounts, and money. In a growth phase, this is called ecosystem prosperity; in an adversarial phase, it's called supply-chain risk. (And you'll find that attackers always understand "distribution" better than you do.)

George emphasizes that "security must be at the architecture level," rather than relying on users to "think twice before installing." I completely agree with this statement. The security of a mature financial system is never about "users being careful," but rather "security by default." This only becomes more extreme when agents start spending money.

What did NEAR do right? What is it still missing?

I'd give NEAR a positive review for Nearcon: it at least brought the key elements to the forefront—intent, privacy, architectural security, agent marketplaces, and a more accessible entry point (near.com). From narrative to product, it doesn't feel like it's selling a slogan; it's piecing "agentic commerce" together into a system.

But I must also say that it is still missing a few key elements that "truly determine whether it can be scaled up," and these are often not the most eye-catching things at the press conference.

First, policy has to become product. It shouldn't be prompt-level suggestions, but a verifiable, inheritable, and auditable authorization policy: budgets, thresholds, secondary confirmation, and brakes on irreversible actions—ideally as system defaults. Otherwise, so-called autonomy often just means gambling that the agent hasn't forgotten today.
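As a sketch of the difference between policy-as-prompt and policy-as-product: the limits below are checked by the runtime before any payment tool executes, not remembered by the model. The class, thresholds, and amounts are invented for illustration:

```python
class PolicyViolation(Exception):
    """Raised by the runtime; the agent cannot talk its way past it."""

class SpendPolicy:
    """Runtime-enforced authorization policy. Limits are illustrative."""
    def __init__(self, daily_budget: float, confirm_above: float):
        self.daily_budget = daily_budget
        self.confirm_above = confirm_above  # above this, human sign-off required
        self.spent_today = 0.0

    def authorize(self, amount: float, human_confirmed: bool = False) -> None:
        if self.spent_today + amount > self.daily_budget:
            raise PolicyViolation("daily budget exceeded")
        if amount > self.confirm_above and not human_confirmed:
            raise PolicyViolation("secondary confirmation required")
        self.spent_today += amount  # commit only after every check passes

policy = SpendPolicy(daily_budget=500.0, confirm_above=100.0)
policy.authorize(80.0)                          # small spend: auto-approved
policy.authorize(150.0, human_confirmed=True)   # large spend: needs sign-off
```

The design choice is that "forgetting" is impossible by construction: the check runs on every call, regardless of what the model's context currently contains.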

Second, traceability must be built alongside privacy. Privacy is not a black box; it should be "invisible to the outside, accountable on the inside." Companies will not accept "trust me because I say so"; they want post-hoc audits: what was done, why, with which tools, and with which counterparties. NEAR talks up "confidentiality," but "how to provide auditability within confidentiality" needs a more concrete, productized answer.
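One concrete shape for "invisible outside, accountable inside" is a tamper-evident, hash-chained action log: the outside world sees only the latest commitment, while an internal auditor holding the records can replay and verify every action. This is my own sketch of the idea, not NEAR's confidential-mode design:

```python
import hashlib
import json

class AuditLog:
    """Tamper-evident log: each entry commits to all previous ones.
    Only `head` ever needs to be published externally; the full
    `entries` stay private with the account holder or auditor."""
    def __init__(self):
        self.entries = []      # private: (payload, commitment) pairs
        self.head = "genesis"  # public: latest commitment

    def record(self, action: dict) -> str:
        payload = json.dumps(action, sort_keys=True)
        self.head = hashlib.sha256((self.head + payload).encode()).hexdigest()
        self.entries.append((payload, self.head))
        return self.head

    def verify(self) -> bool:
        """Internal audit: recompute the chain and catch any tampering."""
        h = "genesis"
        for payload, stored in self.entries:
            h = hashlib.sha256((h + payload).encode()).hexdigest()
            if h != stored:
                return False
        return True

log = AuditLog()
log.record({"tool": "pay_invoice", "amount": 120, "counterparty": "acme-movers"})
log.record({"tool": "share_info", "fields": ["address"], "with": "insurer"})
assert log.verify()  # auditor sees everything; outsiders see only log.head
```

Editing or reordering any past entry breaks the chain, so "what was done, why, with which counterparties" is provable internally without publishing any of it.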

Third, there must be answers on liability and compensation. Once the agent market grows, accidents are inevitable. Who is responsible? How is arbitration handled? How are payouts funded? Is there an insurance pool? Is there a credit system to resist Sybil attacks? These aren't questions for later; they are prerequisites for scaling, because once money and contracts are involved, the speed of expansion depends on whether risk can be priced and absorbed.

Because of these constraints, my assessment of Citrini's story is: the direction is likely correct, but the pace may not be so linear. Much profit doesn't come from information asymmetry, but from risk-taking. Only those who can bear the risk are qualified to collect fees. The business world never opposes new technologies; it only opposes "no one taking responsibility."

In conclusion: post-OpenClaw & pre-2028, I'm more betting on "power with boundaries" than on full autonomy.

If I had to summarize what Nearcon taught me in one sentence: agentic commerce isn't simply about removing people from processes; it's about redistributing the "cost of trust." Stablecoins make settlement programmable, but the key to success lies in permissions, privacy, security, auditing, and accountability mechanisms.

Therefore, I now prefer to bet on a more realistic path: in the short term, scaling up won't be about "agents buying groceries for you," but rather "agents doing the dirty work for businesses within the policy framework." Procurement and supplier management, accounts receivable and payable, reconciliation and reimbursement, cross-border settlements, and compliance-driven process automation—these scenarios have quantifiable ROI and naturally require human oversight and accountability. It's not romantic, but it will generate real transaction volume and force the system to develop a responsibility framework.

OpenClaw ignited the fire, Citrini settled the accounts, and NEAR is trying to patch up the chassis. In the coming year, the most important thing to watch isn't whose agent is smarter, but who can make brakes, boundaries, audits, and payouts as reliable as financial infrastructure.

In a world where software can be paid for, true innovation is often not about accelerating harder, but about applying more reliable brakes.
