Multi-agent systems often fail because agents don't speak the same language. This article explores Google's A2A (Agent-to-Agent) Protocol as the "universal translator" solution. We build "StoryLab," a practical system with three agents (Orchestrator, Creator, Critic) using Python and Ollama, demonstrating how standardizing discovery (Agent Cards) and communication (Message Envelopes) solves the interoperability crisis.

Building Multi-Agent Systems That Communicate Reliably with the A2A Protocol

2025/12/08 17:45


The AI landscape is shifting beneath our feet. We've moved past the "God Model" era where one massive LLM tries to do everything into the age of Multi-Agent Systems. We have specialized agents for coding, reviewing, designing, and testing. It's a beautiful vision of digital collaboration.

But there's a problem.

They don't speak the same language. Your Coding Agent speaks JSON-RPC, your Review Agent expects gRPC, and your Design Agent just wants a REST API. It's the Tower of Babel all over again. Instead of a symphony, we get a cacophony of 400 Bad Request errors.

This is where Google's A2A (Agent-to-Agent) Protocol comes in: a universal translator for the AI age.

In this deep dive, we're not just reading documentation. We're going to build A2A StoryLab, a collaborative storytelling system where three distinct AI agents work together to create, critique, and refine stories. It's practical, it's standardized, and it's how you future-proof your AI architecture.


The Architecture: Three Agents, One Mission

To demonstrate the power of A2A, we need a team. A single agent is just a script; a team is a system.

Our StoryLab consists of three specialized roles:

  1. The Orchestrator ("The Director"): The boss. It manages the workflow, sets the goal ("Adapt 'The Tortoise and the Hare' for Gen Z"), and enforces quality gates.
  2. The Creator ("The Artist"): The generative talent. It takes a prompt and spins a yarn. It's creative but needs direction.
  3. The Critic ("The Editor"): The quality assurance. It reads the story, scores it on creativity and coherence, and provides specific feedback for the Creator.

The Workflow of a Request

It starts with a simple user request: "Adapt 'Bear Loses Roar' as a scientist who lost formulas."

The Orchestrator spins up a session and pings the Creator. The Creator drafts a version. The Orchestrator passes that draft to the Critic. The Critic hates it (score: 4/10) and explains why. The Orchestrator passes that feedback back to the Creator.

They iterate. Once the score hits 8/10, the Orchestrator ships the final story.
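The workflow above can be sketched with stubbed agents. This is purely illustrative: the stub functions and the rising score are our own placeholders, not StoryLab's real implementation, which runs each agent as a separate HTTP service.

```python
# Minimal sketch of the StoryLab refinement loop with stubbed agents.
# In the real system, creator() and critic() would be remote A2A calls.

def creator(prompt, feedback=None):
    # Stub: a real Creator would call an LLM with the prompt and feedback.
    suffix = f" (revised per: {feedback})" if feedback else ""
    return f"Draft of '{prompt}'{suffix}"

def critic(draft):
    # Stub: a real Critic scores the draft; here the score rises each pass
    # so the loop terminates deterministically.
    critic.calls += 1
    return {"score": 4 + 2 * critic.calls, "feedback": "tighten the pacing"}
critic.calls = 0

def adapt(prompt, threshold=8, max_iterations=5):
    draft = creator(prompt)
    review = critic(draft)
    for _ in range(max_iterations):
        if review["score"] >= threshold:
            break
        draft = creator(prompt, feedback=review["feedback"])
        review = critic(draft)
    return draft, review["score"]

story, score = adapt("Bear Loses Roar as a scientist who lost formulas")
print(score)  # 8: the stub Critic approves on the second pass
```

The shape is the important part: the Orchestrator owns the loop and the quality gate, while the Creator and Critic stay stateless between calls.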


1. Discovery: The "Business Card" of Agents

In a messy microservices world, finding the right service is half the battle. A2A solves this with Agent Cards. Think of them as a standardized business card that lives at /.well-known/agent.json.

When the Orchestrator needs a writer, it doesn't need to know the internal API schema of the Creator. It just checks the card.

```python
# src/creator_agent/main.py

@app.get("/.well-known/agent.json")
async def get_agent_card():
    return {
        "name": "Story Creator Agent",
        "description": "Creates and refines story adaptations",
        "url": "http://localhost:8001",
        "protocolVersion": "a2a/1.0",
        "capabilities": ["remix_story", "refine_story"],
        "skills": [
            {
                "id": "remix_story",
                "name": "Remix Story",
                "description": "Create a story variation from base story",
                "inputModes": ["text", "data"],
                "outputModes": ["text"]
            }
        ]
    }
```

This simple endpoint allows for dynamic discovery. You could swap out the Creator agent for a completely different model or service, and as long as it presents this card, the system keeps humming.


2. The Envelope: A Universal Standard

How do they actually talk? A2A enforces a strict Message Envelope. No more guessing whether the data lives in body, payload, or data.

Here is a real message captured from our StoryLab logs. This is the Orchestrator asking the Creator to get to work:

```json
{
  "protocol": "google.a2a.v1",
  "message_id": "msg_abc123xyz789",
  "conversation_id": "conv_def456uvw012",
  "timestamp": "2025-12-07T10:30:45.123456Z",
  "sender": {
    "agent_id": "orchestrator-001",
    "agent_type": "orchestrator",
    "instance": "http://localhost:8000"
  },
  "recipient": {
    "agent_id": "creator-agent-001",
    "agent_type": "creator"
  },
  "message_type": "request",
  "payload": {
    "action": "remix_story",
    "parameters": {
      "story_id": "bear_loses_roar",
      "variation": "scientist who lost formulas"
    }
  }
}
```

Why This Matters

Notice conversation_id. This ID persists across the entire back-and-forth between the Orchestrator, Creator, and Critic. In a distributed system, this is your lifeline. It allows you to trace a single user request across dozens of agent interactions.


3. The Code: Bringing It To Life

Talking about protocols is dry; let's look at the implementation. We use Python and FastAPI to build these agents, with Ollama providing local LLM inference for both story generation and evaluation.

The Orchestrator's Loop

This is the brain of the operation. It implements an iterative refinement loop. It doesn't just fire and forget; it mediates a conversation.

```python
# src/orchestrator/main.py

@app.post("/adapt-story")
async def adapt_story(request: AdaptStoryRequest):
    # ... setup session ...
    for iteration in range(1, MAX_ITERATIONS + 1):
        # Step 1: Ask Creator to generate (or refine)
        if iteration == 1:
            story_result, msg_id = await _call_creator_remix(
                conversation_id, story_id, variation, session_id
            )
        else:
            story_result, msg_id = await _call_creator_refine(
                conversation_id, session_id, current_version,
                current_story_text, feedback=evaluation
            )
        current_story_text = story_result["story_text"]

        # Step 2: Ask Critic to judge
        eval_result, msg_id = await _call_critic_evaluate(
            conversation_id, session_id, current_story_text,
            original_id, iteration
        )

        # Step 3: The Quality Gate
        if eval_result["approved"] and eval_result["score"] >= APPROVAL_THRESHOLD:
            logger.info(f"✓ Story approved at iteration {iteration}")
            break

    return {"story": current_story_text, "score": eval_result["score"]}
```

This pattern (Generate, Evaluate, Iterate) is a fundamental building block of agentic workflows. A2A makes it robust because every step is tracked and standardized.


4. The Critic: AI Keeping AI Honest

The Critic agent is interesting because it uses an LLM not to generate, but to analyze. It evaluates the story on four dimensions: Moral Preservation, Structure, Creativity, and Coherence.

```python
# src/critic_agent/main.py

EVALUATION_WEIGHTS = {
    "moral_preservation": 0.30,
    "structure_quality": 0.25,
    "creativity": 0.25,
    "coherence": 0.20
}

async def evaluate_story(message_data: dict):
    # ... unpack A2A message ...

    # LLM-powered evaluation
    eval_result = await ollama_client.evaluate_story(
        story_text=story_text,
        original_story=original_story.text,
        original_moral=original_moral
    )

    # Calculate weighted score
    overall_score = (
        eval_result["moral_preservation"] * EVALUATION_WEIGHTS["moral_preservation"] +
        eval_result["structure_quality"] * EVALUATION_WEIGHTS["structure_quality"] +
        eval_result["creativity"] * EVALUATION_WEIGHTS["creativity"] +
        eval_result["coherence"] * EVALUATION_WEIGHTS["coherence"]
    )
    scaled_score = overall_score * 10.0  # Scale to 0-10

    approved = scaled_score >= APPROVAL_THRESHOLD

    # Return A2A Response
    return create_response_message(..., payload={"score": scaled_score, "approved": approved})
```

By separating the Critic from the Creator, we avoid "hallucination myopia," where a model fails to see its own mistakes. It's pair programming, but for AI.


Conclusion: The Era of Interoperability

We are moving towards a world where you will buy a "Research Agent" from one vendor, a "Coding Agent" from another, and a "Security Agent" from a third. Without a standard like A2A, integrating them would be a nightmare of custom adapters.

With A2A, they just… talk.

A2A StoryLab is a proof of concept, but the pattern is production-ready:

  1. Standardize Identity (Agent Cards)
  2. Standardize Envelopes (A2A Protocol)
  3. Trace Everything (Conversation IDs)

The future of AI isn't a bigger model. It's a better team.

Resources

  • A2A StoryLab GitHub
  • Google A2A Protocol Spec


