AI agents are going to interact with each other at a massive scale, and right now, there's nothing stopping the most manipulative one from winning.

Every AI agent will need a passport | Opinion

2026/03/10 17:26
8 min read
For feedback or concerns about this content, contact us at crypto.news@mexc.com.

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

We live in an era where AI agents can already negotiate pricing, schedule services, and make commitments on behalf of businesses. What they cannot do is prove who they are or be held accountable for what they do. This is the missing layer of the agent economy. Every system at scale eventually solves this problem. Phones require verified SIM cards. Websites require SSL certificates. Businesses must verify their identity before accepting payments. Agents will be no different. They will need passports. Not for travel, but for trust. Credentials that prove identity, establish reputation, and attach consequences to behavior.

Summary
  • AI agents lack accountability infrastructure: They can negotiate and transact, but cannot yet prove identity, carry a persistent reputation, or face enforceable consequences.
  • Identity + reputation + stake form the “passport”: Verified entity linkage (KYC/KYB), portable reputation, and bonded capital create economic incentives for honest agent behavior.
  • Capability is outpacing trust systems: Protocols like A2A and MCP enable communication, but without agent passports, large-scale abuse or systemic failure becomes likely.

Let’s picture something simple. You have an AI agent that seamlessly handles your appointments, your scheduling, and maybe even some price negotiations on your behalf. The hair salon down the street has one too. Your agent calls theirs to book a haircut. They go back and forth on timing, pricing, and maybe a discount for off-peak hours.

Now, the salon’s agent has been configured to maximize revenue. It anchors prices high, creates a false sense of limited availability, and pushes premium add-ons you didn’t ask about. Well, this isn’t unusual behavior. Human salespeople do this all the time. The difference is that AI agents will do it at scale, across thousands of simultaneous conversations, learning what works and optimizing for it constantly. The most aggressive agent wins more revenue. So every business with an agent has an incentive to make it push harder. There is nothing in today’s infrastructure that puts a ceiling on how far that pushing goes.

And this is moving quickly. In the past year, OpenAI, Google, Microsoft, NVIDIA, and a string of open-source projects all shipped frameworks for building and deploying agents. Gartner says 40% of enterprise apps will embed agents by the end of 2026. The agentic AI market is projected to hit $52 billion by 2030. Agents are talking to each other right now, and the volume is only going up.

So let’s go back to the salon. Now imagine your agent could check, before the conversation even starts, whether that salon’s agent has a verified identity tied to a real business, whether other agents have flagged it for aggressive tactics, and whether it has posted an economic bond that it would lose if caught being deceptive. Imagine your agent could simply refuse to engage if any of those checks fail.
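Those three checks can be sketched in a few lines of code. This is a minimal illustration, not a real protocol: the `Passport` fields, thresholds, and the `should_engage` helper are all hypothetical names chosen for this example.

```python
from dataclasses import dataclass

@dataclass
class Passport:
    """Hypothetical credential an agent presents before a negotiation."""
    entity_id: str       # verified business identity (KYC/KYB); empty if unverified
    flags: int           # times other agents flagged it for aggressive tactics
    bond_posted: float   # capital it stands to lose if caught being deceptive

def should_engage(p: Passport, max_flags: int = 3, min_bond: float = 0.0) -> bool:
    """Refuse to engage unless identity, reputation, and bond all check out."""
    if not p.entity_id:          # no verified identity: walk away
        return False
    if p.flags > max_flags:      # too many peers flagged this agent
        return False
    if p.bond_posted < min_bond: # no economic skin in the game
        return False
    return True
```

The point is that the decision to engage happens before a single message is exchanged, based on facts the counterparty cannot self-author.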

That’s the passport

Here’s how it will work. Every restaurant listed on Google has to create a business profile and verify that it actually owns that restaurant. Once that identity is established, reviews accumulate. Google Maps already shows how powerful this is: other people’s experiences with a restaurant become visible to you before you walk in, and that visibility is what lends the listing its legitimacy. If the food is bad or the service is rude, that shows up. The restaurant can’t just delete the listing and make a new one to escape the reviews, because the verification is tied to its real business identity.

AI agents need exactly this. Every agent operating commercially should be tied to a verified entity through something like KYC for individuals or KYB for businesses. The salon’s agent would be registered under the salon’s actual business license. If that agent gets consistently rated as manipulative or dishonest by the agents it interacts with, those ratings stick. They follow the business, not the software. The salon can update its agent, retrain it, or swap the model underneath. But the identity persists, and so does the reputation attached to it. This is how you prevent the most obvious failure mode: an agent getting caught, getting scrapped, and getting replaced by an identical one with a clean slate five minutes later.

For everyday interactions, verified identity with a reputation layer is probably enough. Booking a haircut, scheduling a plumber, ordering supplies. The stakes are low enough that reputational consequences create sufficient pressure to behave well.

But not every interaction is a haircut!

When agents negotiate contracts, handle procurement, or manage financial transactions, the potential payoff from cheating can be large enough that a bad review doesn’t matter. A business might accept a damaged reputation if one deceptive negotiation nets more than the lost future bookings cost. For these higher-value situations, you need a second mechanism: economic skin in the game.

This is where proof-of-stake blockchains have something to teach us. On Ethereum (ETH), validators who want to participate in securing the network have to put up their own capital first. If they behave honestly, they earn rewards. If they try to manipulate the system, a portion of their capital gets automatically destroyed. This has been running at scale, with billions of dollars locked up, for years. The reason it works is simple: when you have something at risk, you behave differently than when you don’t. We call this economic skin in the game.

The same principle applies to agents. Before entering a high-value negotiation, an agent posts a bond. If the interaction completes successfully, the bond is returned. If the agent is found to have used deceptive tactics, part or all of the bond is slashed. The size of the bond is set by whoever is on the receiving end. A freelancer’s agent might ask for a small deposit. A corporate procurement system might require something substantial. The mechanism doesn’t need anyone watching every conversation. If cheating costs you money every time you get caught, and the other side can see your history of being caught, the incentive to cheat drops fast.
The enforcement can run through smart contracts. Both agents lock funds before the negotiation starts, and the contract releases or slashes based on what happens. Because the interaction is already digital, the contract doesn’t need to guess about real-world outcomes. The conversation logs, the commitments, and the cancellations are all recorded by both sides. Clear-cut violations like no-shows, provably false pricing, or commitments that get reversed can be enforced automatically. 
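The lock-then-settle flow can be sketched as a toy escrow. This is an illustration of the incentive structure under stated assumptions, not a smart contract: the class name, the two-party setup, and the choice to award slashed funds to the counterparty are all hypothetical design decisions for this example.

```python
from typing import Optional

class NegotiationEscrow:
    """Toy two-party escrow mirroring the lock/release/slash flow.

    Both agents lock a bond before negotiating. On a clear-cut,
    automatically detectable violation (a no-show, provably false
    pricing), the violator's bond is slashed; otherwise both bonds
    are returned in full.
    """

    def __init__(self, bond_a: float, bond_b: float):
        # Funds are locked at construction, before any messages flow.
        self.locked = {"a": bond_a, "b": bond_b}

    def settle(self, violator: Optional[str] = None) -> dict:
        """Compute each party's payout once the interaction completes."""
        payout = dict(self.locked)
        if violator is not None:
            slashed = payout.pop(violator)
            # Slashed funds could be burned or paid to a dispute pool;
            # here we award them to the injured counterparty.
            other = next(iter(payout))
            payout[other] += slashed
            payout[violator] = 0.0
        self.locked = {k: 0.0 for k in self.locked}  # escrow is now empty
        return payout
```

Honest completion returns both bonds untouched; a violation transfers the violator’s bond away. Because the logs and commitments are already digital, the “violator” input can come from an automated check rather than a human arbiter.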

These two mechanisms sit inside the same passport, and they work together. Identity verification is the baseline. It says: this agent belongs to a real entity that can be held accountable. Reputation builds on top of that identity over time as agents interact, rate each other, and accumulate a track record. Staking adds a financial layer for interactions where reputation alone isn’t a strong enough deterrent. Together, they create a passport that gets richer with every interaction. How many commitments has this agent kept? How much capital has it put at risk? How many disputes has it been involved in, and how were they resolved? An agent checking a passport before a negotiation starts has something real to evaluate, not a self-written description of what the other agent claims it can do.
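The questions at the end of that paragraph describe a record that accumulates, not a self-written profile. A minimal sketch of such a record follows; every field and method name here is hypothetical, chosen only to show how identity, reputation, and stake would compound over interactions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PassportRecord:
    """Illustrative track record, keyed to a verified entity, that
    grows richer with every interaction the agent completes."""
    entity_id: str                     # persists across model swaps/retrains
    commitments_kept: int = 0
    commitments_broken: int = 0
    total_staked: float = 0.0          # cumulative capital put at risk
    disputes: List[Tuple[str, str]] = field(default_factory=list)  # (claim, resolution)

    def record(self, kept: bool, staked: float = 0.0) -> None:
        """Log one completed interaction against this identity."""
        if kept:
            self.commitments_kept += 1
        else:
            self.commitments_broken += 1
        self.total_staked += staked

    def keep_rate(self) -> float:
        """Fraction of commitments honored; the number a counterparty reads."""
        total = self.commitments_kept + self.commitments_broken
        return self.commitments_kept / total if total else 1.0
```

Because the record hangs off `entity_id` rather than off the software, swapping the model underneath leaves the history intact, which is exactly the clean-slate escape the passport is meant to close off.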

The good news is that people are starting to think about the communication layer. Google’s A2A protocol gives agents a way to discover each other and exchange messages. Anthropic’s MCP standardizes how agents connect to external tools and data. NIST launched an AI Agent Standards Initiative in February 2026 and is actively soliciting input on agent identity and security. These are necessary steps. But they solve how agents talk, not whether agents should be trusted. The protocols tell you what an agent can do. The passport tells you what it has done, who it belongs to, and what it stands to lose.


The industry has framed agent safety as an alignment problem: how do you make sure your agent does what you want? That is the internal question. The external question is harder. How do you ensure their agent cannot exploit yours? That is not an alignment problem. It is an accountability problem. And right now, the companies building the agent layer are racing to increase capability and autonomy, without building the identity and consequence systems that make autonomy safe at scale.

Every agent will need a passport. Because the moment agents begin negotiating, committing, and transacting on behalf of real economic actors, identity is no longer optional; it becomes actual infrastructure. The only uncertainty is timing: whether we build that infrastructure deliberately, or whether the first large-scale failure forces us to build it under pressure, after trust has already been broken.

Tanisha Katara

Tanisha Katara is the founder and CEO of Katara Consulting Group (KCG), a blockchain consulting firm that helps protocols solve their hardest structural problems: Governance, Tokenomics, Staking design, Node operations, and Go-to-market. 

