
Agentic AI must learn to play by blockchain’s rules


Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

Systems that can call up tools on demand, set goals, spend money, and alter their own prompts are already creeping out of sandboxes and into production. That is agentic AI.

Summary

  • Governance through verifiability: As AI agents gain autonomy to spend, publish, and act, systems must enforce cryptographic provenance and auditability — turning AI accountability from guesswork into verifiable evidence.
  • Identity over anonymity: Agentic AI needs verifiable identities, not usernames. Using W3C Verifiable Credentials and smart account policies, agents can prove who they are, what they’re allowed to do, and maintain traceable accountability across platforms.
  • Signed inputs and outputs: Cryptographically signing every input, output, and action creates a transparent audit trail — transforming AI from a “black box” into a “glass box” where decisions are explainable, reproducible, and regulator-ready.

This shift upends the bargain society originally struck with AI: outputs were suggestions, and humans remained on the hook. Now agents act, flipping that onus and opening the door to a host of ethical complications. If an autonomous system can alter records, publish content, and move funds, it must play by the rules, and it must (more vitally) leave a durable trail so its actions can be audited and disputed if necessary.

Governance by engineering is needed now more than ever in this era of agentic AI, and the market is beginning to see it. Without cryptographic provenance and rules to bind agents, autonomy accumulates liabilities faster than it optimizes processes. When a trade goes wrong or a deepfake spreads, post-mortem forensics cannot rely on Slack messages or screenshots. Provenance is key, and it has to be machine-verifiable from the moment inputs are captured through to the moment actions are taken.

Identities, not usernames

Handles and usernames are not enough; agents need identities that can be proven with verifiable credentials. W3C Verifiable Credentials (VCs) 2.0 provides a standards-based way to bind attributes (roles, permissions, attestations, and so on) to entities in a way other machines can verify.

Pair this verification with key management and policy in smart accounts, and an agent can present exactly who it is and what it can do before it executes a single action. In such a model, credentials become a trackable permission surface that follows the agent across chains and services and holds it accountable to its own rules.
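As a rough illustration of that permission surface, the sketch below models an agent presenting a credential before it acts. Everything here is a hypothetical reduction: the field names, the issuer key, and the HMAC "proof" (a stand-in for a real digital signature) are assumptions, not the W3C VC 2.0 wire format, which uses structured data models and asymmetric cryptography.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # hypothetical stand-in for the issuer's signing key


def issue_credential(agent_id: str, permissions: list[str]) -> dict:
    """Bind permissions to an agent identity and attach a proof (HMAC as a signature stand-in)."""
    claim = {"sub": agent_id, "permissions": sorted(permissions)}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["proof"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def authorize(credential: dict, action: str) -> bool:
    """Verify the proof, then check the requested action against the granted permissions."""
    claim = {k: v for k, v in credential.items() if k != "proof"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"]) and action in claim["permissions"]


cred = issue_credential("agent:treasury-bot", ["trade:spot", "report:publish"])
assert authorize(cred, "trade:spot")          # permitted and provable
assert not authorize(cred, "funds:withdraw")  # never granted, so refused
```

The point of the sketch is the ordering: the permission check happens against a signed claim, not a self-reported role, so a forged or edited credential fails verification before any action runs.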

The messy provenance of widely used AI datasets, where misattribution and license-omission rates run above 70%, shows how fast non-verifiable AI crumbles under inspection. If the community can't keep data straight for static training corpora, it can't expect regulators to accept unlabeled, unverified agent actions in live environments.

Signing inputs and outputs

Agents act on inputs, whether that be a quote, a file, or a photo, and when those inputs can be forged or stripped of context, safety collapses. The Coalition for Content Provenance and Authenticity (C2PA) standard moves media out of the realm of guesswork and into cryptographically signed content credentials. 

Once again, credentials win over usernames, as seen by the likes of Google integrating content credentials in search and Adobe launching a public web app to embed and inspect them. The momentum here is toward artifacts that carry their own chain of custody, so agents that ingest data and emit only credentialed media will be easier to trust (and to govern).
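The idea can be pictured in miniature: a manifest carrying the content hash and a signed provenance claim travels with the media, so any downstream agent can check both integrity and origin. This is an illustrative reduction, not the actual C2PA manifest format, and the HMAC signature is a stand-in for the standard's certificate-based signing.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-key"  # hypothetical stand-in for a C2PA signing certificate


def attach_manifest(media: bytes, creator: str) -> dict:
    """Produce a content-credential-style manifest for a piece of media."""
    claim = {"creator": creator, "sha256": hashlib.sha256(media).hexdigest()}
    body = json.dumps(claim, sort_keys=True).encode()
    return {**claim, "signature": hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()}


def verify_manifest(media: bytes, manifest: dict) -> bool:
    """Check that the media is unmodified and the manifest was signed by the key holder."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    body = json.dumps(claim, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest(),
        manifest["signature"],
    )
    return sig_ok and manifest["sha256"] == hashlib.sha256(media).hexdigest()


photo = b"\x89PNG...raw image bytes..."
m = attach_manifest(photo, "did:example:newsroom")
assert verify_manifest(photo, m)             # untouched media verifies
assert not verify_manifest(photo + b"x", m)  # any edit breaks the credential
```

Because the manifest commits to the bytes themselves, an agent that only ingests and emits credentialed media carries its chain of custody with the artifact rather than alongside it.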

This method should be extended to more structured data and decisions, such as when an agent queries a service. In that scenario, the response should be signed, and the agent's resulting decision should then be recorded, sealed, and time-stamped for verification.
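One way to picture such a record: seal the signed input, the decision, and a timestamp into a single digest, so the entire decision context can be re-verified later. The field names and sealing scheme below are illustrative assumptions, not a standard, and the quote's signature value is a placeholder.

```python
import hashlib
import json
import time


def seal_decision(signed_input: dict, decision: str, agent_id: str) -> dict:
    """Record an agent's decision, bound to the exact input it saw, and time-stamp it."""
    record = {
        "agent": agent_id,
        "input": signed_input,  # the service response, carrying its own signature
        "decision": decision,
        "timestamp": time.time(),
    }
    record["seal"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record


def verify_seal(record: dict) -> bool:
    """Recompute the digest over everything but the seal and compare."""
    body = {k: v for k, v in record.items() if k != "seal"}
    return record["seal"] == hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()


quote = {"pair": "ETH/USD", "price": 3100.0, "sig": "placeholder-signature"}
rec = seal_decision(quote, "buy 2 ETH", "agent:treasury-bot")
assert verify_seal(rec)        # untampered record re-verifies
rec["decision"] = "buy 200 ETH"
assert not verify_seal(rec)    # any rewrite of history is detectable
```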

Without signed statements, post-mortems dissolve into finger-pointing and conjecture. With them, accountability becomes computable — every decision, action, and transition cryptographically tied to a verifiable identity and policy context. For agentic AI, this transforms post-incident analysis from subjective interpretation into reproducible evidence, where investigators can trace intent, sequence, and consequence with mathematical precision.

Establishing on-chain or permission-chained logs gives autonomous systems an audit spine — a verifiable trail of causality. Investigators gain the ability to replay behavior, counterparties can verify authenticity and non-repudiation, and regulators can query compliance dynamically instead of reactively. The “black box” becomes a glass box, where explainability and accountability converge in real time. Transparency shifts from a marketing claim to a measurable property of the system.
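A minimal version of that audit spine is an append-only log in which each entry commits to the hash of the previous one. The sketch below is plain Python rather than an actual chain integration, but it shows how replay and tamper detection fall out of the structure for free.

```python
import hashlib
import json

GENESIS = "0" * 64  # conventional starting hash for an empty log


def append_entry(log: list[dict], event: dict) -> None:
    """Append an event, chaining it to the hash of the previous entry."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev, "event": event}, sort_keys=True).encode()
    log.append({"prev": prev, "event": event, "hash": hashlib.sha256(body).hexdigest()})


def verify_chain(log: list[dict]) -> bool:
    """Replay the log: every entry must hash correctly and point at its predecessor."""
    prev = GENESIS
    for entry in log:
        body = json.dumps({"prev": entry["prev"], "event": entry["event"]}, sort_keys=True).encode()
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body).hexdigest():
            return False
        prev = entry["hash"]
    return True


log: list[dict] = []
append_entry(log, {"agent": "agent:treasury-bot", "action": "quote", "pair": "ETH/USD"})
append_entry(log, {"agent": "agent:treasury-bot", "action": "buy", "size": 2})
assert verify_chain(log)             # honest history replays cleanly
log[0]["event"]["action"] = "sell"   # retroactive edit...
assert not verify_chain(log)         # ...breaks every subsequent link
```

An investigator replaying this chain doesn't need to trust the operator's word: a single altered entry invalidates the hashes from that point forward, which is exactly the non-repudiation property the paragraph above describes.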

Providers capable of demonstrating lawful data sourcing, verifiable process integrity, and compliant agentic behavior will operate with lower friction and higher trust. They won’t face endless rounds of due diligence or arbitrary shutdowns. When an AI system can prove what it did, why it did it, and on whose authority, risk management evolves from policing to permissioning — and adoption accelerates.

This marks a new divide in AI ecosystems: verifiable agents that can lawfully interoperate across regulated networks, and opaque agents that cannot. A constitution for agentic AI — anchored in identity, signed inputs and outputs, and immutable, queryable logs — is not just a safeguard; it’s the new gateway to participation in trusted markets.

Agentic AI will only go where it can prove itself. Those who design for provability and integrity now will set the standard for the next generation of interoperable intelligence. Those who ignore that bar will face progressive exclusion—from networks, users, and future innovation itself.

Chris Anderson

Chris Anderson is the CEO of ByteNova AI, an emerging innovator in edge AI technology.

Source: https://crypto.news/agentic-ai-must-learn-to-play-by-blockchains-rules/

