A constitution for agentic AI is not just a safeguard; it’s the new gateway to participation in trusted markets and governance through verifiability

Agentic AI must learn to play by blockchain’s rules | Opinion

2025/10/22 17:39

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

Agentic AI — systems that can call tools on demand, set goals, spend money, and alter their own prompts — is already creeping out of sandboxes and into production.

Summary
  • Governance through verifiability: As AI agents gain autonomy to spend, publish, and act, systems must enforce cryptographic provenance and auditability — turning AI accountability from guesswork into verifiable evidence.
  • Identity over anonymity: Agentic AI needs verifiable identities, not usernames. Using W3C Verifiable Credentials and smart account policies, agents can prove who they are, what they’re allowed to do, and maintain traceable accountability across platforms.
  • Signed inputs and outputs: Cryptographically signing every input, output, and action creates a transparent audit trail — transforming AI from a “black box” into a “glass box” where decisions are explainable, reproducible, and regulator-ready.

This shift upends the bargain society struck with AI at its origins: outputs were suggestions, and humans were on the hook. Now agents act, flipping that onus and opening the door to a wide range of ethical complications. If an autonomous system can alter records, publish content, and move funds, it must learn to play by the rules, and, more vitally, it must leave a trail that stands the test of time so that its actions can be audited and disputed if necessary.

Governance by engineering is needed now more than ever in the era of agentic AI, and the market is beginning to see this. Without cryptographic provenance and rules to bind agentic AI, autonomy becomes less about optimizing processes and more about accumulating liabilities. When a trade goes wrong or a deepfake spreads, post-mortem forensics cannot rely on Slack messages or screenshots. Provenance is key, and it has to be machine-verifiable from the moment inputs are captured through to the moment actions are taken.

Identities, not usernames

Handles or usernames are not enough; agents need to be given identities that can be proven with verifiable credentials. W3C Verifiable Credentials (VCs) 2.0 provides a standards-based way to bind attributes (like roles, permissions, attestations, etc.) to entities in a way that other machines can verify. 

Pair this verification with key management and policy in smart accounts, and an agent can present exactly ‘who’ it is and ‘what’ it can do before it executes a single action. In such a model, credentials become a trackable permission surface that follows the agent across chains and services and holds it accountable to its own rules.
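The permission-surface idea can be sketched in a few lines of Python. This is a deliberately simplified stand-in, not a W3C VC 2.0 implementation: the credential format, the shared-secret HMAC signing (a real issuer would use public-key signatures), and the function names `issue_credential` and `verify_and_authorize` are all illustrative assumptions.

```python
# Minimal sketch (not a full W3C VC implementation): a credential binds
# an agent identity to a set of permitted actions, and a policy gate
# checks both the signature and the permission before any action runs.
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-demo-key"  # stand-in for the issuer's signing key

def issue_credential(agent_id: str, permissions: list) -> dict:
    """Issuer binds attributes (roles, permissions) to an agent identity."""
    claims = {"sub": agent_id, "permissions": permissions}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_and_authorize(credential: dict, action: str) -> bool:
    """Verifier checks the issuer's signature, then the permission surface."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["sig"]):
        return False  # forged or tampered credential
    return action in credential["claims"]["permissions"]

cred = issue_credential("agent:0xabc", ["publish", "trade:spot"])
print(verify_and_authorize(cred, "trade:spot"))  # True: within the surface
print(verify_and_authorize(cred, "withdraw"))    # False: outside it
```

The point of the sketch is the gate: the agent cannot act on a permission it was never issued, and any tampering with the claims invalidates the signature.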

With misattribution and license-omission rates above 70%, the messy provenance of widely used AI datasets shows how fast non-verifiable AI crumbles under inspection. If the community can’t keep data straight for static training corpora, it can’t expect regulators to accept unlabeled, unverified agent actions in live environments.

Signing inputs and outputs

Agents act on inputs, whether that be a quote, a file, or a photo, and when those inputs can be forged or stripped of context, safety collapses. The Coalition for Content Provenance and Authenticity (C2PA) standard moves media out of the realm of guesswork and into cryptographically signed content credentials. 

Once again, credentials win over usernames, as seen in Google’s integration of content credentials into search and Adobe’s launch of a public web app to embed and inspect them. The momentum here is toward artifacts that carry their own chain of custody, so agents that ingest data and emit only credentialed media will be easier to trust (and to govern).

This method should be extended to structured data and decisions. When an agent queries a service, the response should be signed, and the agent’s resulting decision should be recorded, sealed, and time-stamped for verification.
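The signed-input, signed-decision flow can be sketched as below. The envelope fields, the shared-secret HMAC keys, and the pair of `sign`/`verify` helpers are assumptions for illustration; a production system would use asymmetric signatures (e.g., Ed25519) and a trusted time-stamping source.

```python
# Sketch of the signed input -> sealed decision flow: the service signs
# its response, and the agent's decision commits to that exact signed
# input plus a timestamp, making the record verifiable after the fact.
import hashlib
import hmac
import json
import time

SERVICE_KEY = b"service-demo-key"  # hypothetical service signing key
AGENT_KEY = b"agent-demo-key"      # hypothetical agent signing key

def sign(key: bytes, payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(key, body, hashlib.sha256).hexdigest()}

def verify(key: bytes, envelope: dict) -> bool:
    body = json.dumps(envelope["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

# 1. The service signs its response so the agent can reject forgeries.
quote = sign(SERVICE_KEY, {"pair": "ETH/USD", "price": 3500})
assert verify(SERVICE_KEY, quote)

# 2. The agent seals its decision with a reference to the signed input
#    and a timestamp: a self-contained record of what it acted on, when.
decision = sign(AGENT_KEY, {
    "action": "buy",
    "input_sig": quote["sig"],   # binds the decision to the exact input
    "timestamp": int(time.time()),
})
print(verify(AGENT_KEY, decision))  # True
```

Because the decision envelope embeds the input’s signature, an investigator can later confirm not only that the agent acted, but precisely which attested input it acted on.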

Without signed statements, post-mortems dissolve into finger-pointing and conjecture. With them, accountability becomes computable — every decision, action, and transition cryptographically tied to a verifiable identity and policy context. For agentic AI, this transforms post-incident analysis from subjective interpretation into reproducible evidence, where investigators can trace intent, sequence, and consequence with mathematical precision.

Establishing on-chain or permission-chained logs gives autonomous systems an audit spine — a verifiable trail of causality. Investigators gain the ability to replay behavior, counterparties can verify authenticity and non-repudiation, and regulators can query compliance dynamically instead of reactively. The “black box” becomes a glass box, where explainability and accountability converge in real time. Transparency shifts from a marketing claim to a measurable property of the system.
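The “audit spine” can be illustrated with a minimal hash-chained log, where each entry commits to the hash of the previous one, so any edit to history breaks the chain. This is a local sketch under simplified assumptions; anchoring the head hash on-chain or in a permissioned ledger is out of scope here, and the helper names are hypothetical.

```python
# Minimal hash-chained audit log: tampering with any past entry is
# detectable because every later entry commits to its predecessor.
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry

def append_entry(log: list, event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    log.append({"prev": prev_hash, "event": event,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log: list) -> bool:
    prev_hash = GENESIS
    for entry in log:
        body = json.dumps({"prev": prev_hash, "event": entry["event"]},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"agent": "agent:0xabc", "action": "quote", "pair": "ETH/USD"})
append_entry(log, {"agent": "agent:0xabc", "action": "buy", "qty": 1})
print(verify_chain(log))            # True: intact chain, replayable history
log[0]["event"]["action"] = "sell"  # attempt to rewrite history
print(verify_chain(log))            # False: tampering detected
```

Investigators can replay such a log entry by entry, and counterparties only need the latest head hash to confirm that no part of the recorded history has been rewritten.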

Providers capable of demonstrating lawful data sourcing, verifiable process integrity, and compliant agentic behavior will operate with lower friction and higher trust. They won’t face endless rounds of due diligence or arbitrary shutdowns. When an AI system can prove what it did, why it did it, and on whose authority, risk management evolves from policing to permissioning — and adoption accelerates.

This marks a new divide in AI ecosystems: verifiable agents that can lawfully interoperate across regulated networks, and opaque agents that cannot. A constitution for agentic AI — anchored in identity, signed inputs and outputs, and immutable, queryable logs — is not just a safeguard; it’s the new gateway to participation in trusted markets.

Agentic AI will only go where it can prove itself. Those who design for provability and integrity now will set the standard for the next generation of interoperable intelligence. Those who ignore that bar will face progressive exclusion—from networks, users, and future innovation itself.

Chris Anderson

Chris Anderson is the CEO of ByteNova AI, an emerging innovator in edge AI technology.
