Originally published in Coinmonks on Medium.

Understanding Long-Term Memory in LangGraph: A Hands-On Guide

2025/10/13 21:36
10 min read

AI applications need memory to share context across multiple interactions. In LangGraph, you can add two types of memory: (Read More: How to Add Memory to LLM Applications)

  • Add short-term memory as a part of your agent’s state to enable multi-turn conversations
  • Add long-term memory to store user-specific or application-level data across sessions

Long-term agentic memory in LangGraph transforms AI agents from forgetful bots into genuinely helpful assistants that learn, adapt, and personalize over time: a game-changer for building practical agentic workflows.

Why Long-Term Memory Matters

AI agents are stateless by default. Without memory, they are limited to short, isolated exchanges and miss the crucial ability to recall context, preferences, and past events. When an agent “remembers,” several capabilities become possible:

  • Adapting to user preferences and history
  • Providing contextual, relevant answers across conversations
  • Learning new skills or facts and evolving responses
  • Delivering highly personalized experiences

LangGraph makes these capabilities accessible through its modular approach to agent memory, supporting diverse storage and retrieval strategies.
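The long-term side of this pattern is typically a namespaced key-value store that lives outside any single conversation. As a rough sketch of the idea (the class and method names below are illustrative, not the LangGraph API), a minimal store might look like:

```python
from collections import defaultdict

class LongTermStore:
    """Minimal sketch of a namespaced long-term memory store.
    Illustrative only: the names here are hypothetical, not LangGraph's API."""

    def __init__(self):
        # namespace tuple -> {key: value}
        self._data = defaultdict(dict)

    def put(self, namespace: tuple, key: str, value: dict) -> None:
        self._data[namespace][key] = value

    def get(self, namespace: tuple, key: str):
        return self._data[namespace].get(key)

    def search(self, namespace: tuple) -> list:
        return list(self._data[namespace].values())

# Memories persist across sessions because they live outside any one conversation.
store = LongTermStore()
store.put(("users", "john"), "profile", {"role": "Senior Team Lead", "project": "Project Marvel"})
store.put(("users", "john"), "preference", {"standup_time": "09:30"})

print(store.get(("users", "john"), "profile")["project"])  # Project Marvel
print(len(store.search(("users", "john"))))                # 2
```

In LangGraph itself, short-term memory is handled by checkpointing the graph state per conversation thread, while long-term memory goes through a store keyed by namespaces such as the user ID, so any later session can read the same facts back.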

The Core Memory Types

Humans use memory to remember facts (semantic memory), experiences (episodic memory), and skills or rules (procedural memory). AI agents can use memory in the same ways: for example, remembering specific facts about a user in order to accomplish a task.
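Before diving into the example, the three types can be pictured as plain data structures (purely illustrative):

```python
# Semantic memory: standalone facts, keyed for lookup.
semantic_memory = {
    "user.role": "Senior Team Lead",
    "project.codename": "Project Marvel",
}

# Episodic memory: complete records of past events, kept in order.
episodic_memory = [
    {"event": "blocker_report", "from": "Sarah", "outcome": "standup scheduled"},
]

# Procedural memory: the standing rules the agent always follows.
procedural_memory = "When a message is marked BLOCKER, classify it as 'respond' and escalate."

print(semantic_memory["project.codename"])  # Project Marvel
```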

Let’s make this concrete with an example. As data scientists, we often build internal tools to streamline workflows. Imagine we’re tasked with creating a project management assistant for a software team lead, John. His team is working on a major Q3 feature launch, codenamed “Project Marvel.”


John sends a simple message to the assistant:

If our “agent” is just a simple pass-through to a raw LLM, we see the immediate failure of statelessness. The LLM, with its vast general knowledge but zero local context, would likely reply:

This response is practically useless. It forces the user to do all the work of providing context. Now, let’s see how an agent with a proper memory architecture would handle this, walking through each memory type and its real-world analogy.


1. Semantic Memory: The Agent’s Fact Book

Semantic Memory is the agent’s collection of hard facts. It’s a database of objective information about you, the world, and the people it interacts with. This isn’t about remembering the full chat history; it’s about pulling out the important truths from that chat and filing them away for later.

How it works for an agent:
This is usually a vector store working behind the scenes. The agent gets tools like send_slack_message, create_jira_ticket, schedule_standup, and check_sprint_capacity. When it learns something new and important during a conversation, it saves it with update_project_status. Later, when it needs to make a decision, it can instantly look up relevant facts to get the context it needs.

# ========================================
# SEMANTIC MEMORY: General knowledge about project management rules and procedures
# ========================================
# This contains factual knowledge about how to categorize and handle different types of messages
prompt_instructions = {
    "triage_rules": {
        "ignore": "General company announcements, unrelated project updates, promotional content",
        "notify": "Sprint retrospective notes, deployment status updates, team availability changes",
        "respond": "Blocker reports from team members, stakeholder requests, urgent bug escalations",
    },
    "agent_instructions": "Use these tools to help John manage Project Marvel tasks and team coordination efficiently.",
}

profile = {
    "name": "John",
    "full_name": "John Martinez",
    "user_profile_background": "Senior Team Lead managing Project Marvel - Q3 feature launch with 8 team members",
}

# SEMANTIC MEMORY: Knowledge about available tools and their mapping
tools_map = {
    "send_slack_message": send_slack_message,
    "create_jira_ticket": create_jira_ticket,
    "schedule_standup": schedule_standup,
    "check_sprint_capacity": check_sprint_capacity,
    "update_project_status": update_project_status,
}

In our “Project Marvel” example, this was the agent’s knowledge that “John Martinez is the Senior Team Lead managing Project Marvel — Q3 feature launch with 8 team members”. These factual details were the key that helped the agent understand the request’s context and technical specifics.
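To picture this save-and-retrieve flow, here is a minimal plain-Python sketch. The FactStore class and its method names are illustrative, not LangGraph's actual store API; a production agent would back this with a vector store and embedding search rather than keyword matching.

```python
# Illustrative sketch of a semantic memory store: facts are saved as
# namespaced key-value entries and looked up by simple keyword match.

class FactStore:
    def __init__(self):
        self._facts = {}  # (namespace, key) -> fact text

    def put(self, namespace: str, key: str, value: str) -> None:
        """Save a fact learned during a conversation."""
        self._facts[(namespace, key)] = value

    def search(self, namespace: str, query: str) -> list[str]:
        """Return facts in a namespace whose text mentions the query."""
        return [
            v for (ns, _), v in self._facts.items()
            if ns == namespace and query.lower() in v.lower()
        ]

# The agent files away important truths as it learns them...
store = FactStore()
store.put("user:john", "role", "John Martinez is the Senior Team Lead on Project Marvel")
store.put("user:john", "team_size", "Project Marvel has 8 team members")

# ...and pulls relevant ones back when it needs context for a decision.
print(store.search("user:john", "project marvel"))
```

The point of the sketch is the separation of concerns: extraction happens at write time (put), while retrieval at decision time (search) is cheap and targeted.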

2. Episodic Memory: The Agent’s Lived Experience

Episodic Memory is about recalling specific events, just as they happened. It’s not a single fact, but the entire replay of a past experience. For an agent, this is like a recording of a previous job it did: what it saw, what it did, and how it all turned out.

How it works for an agent:
This is a fantastic way to create few-shot examples. By giving the agent a “diary” of its past successes, we can guide it to make better choices in the future. It’s a powerful way to show, not just tell, the agent how to act, which makes it far more reliable.


# ========================================
# EPISODIC MEMORY: Specific test scenarios and experiences
# ========================================
# These represent specific examples of messages that the system has learned to handle

test_message_1 = {
    "author": "Sarah Chen <sarah.chen@company.com>",
    "to": "John Martinez <john.martinez@company.com>",
    "subject": "BLOCKER: API Integration Issue for Project Marvel",
    "message_thread": """Hi John,

We've hit a critical blocker on the authentication microservice integration for Project Marvel. The OAuth flow is failing in staging environment.

Error: Token validation failing with 401 on /marvel/auth/verify

This is blocking 3 developers from completing their sprint tasks. Can we schedule a quick sync to debug this today?

Impact: High - affects core functionality for Q3 launch

Thanks,
Sarah""",
}

def triage_router(state: State) -> Command[Literal["response_agent", "__end__"]]:
    """PROCEDURAL MEMORY: How to route messages based on triage classification."""
    # EPISODIC MEMORY: Extract specific message details from current context
    author = state["message_input"]["author"]
    to = state["message_input"]["to"]
    subject = state["message_input"]["subject"]
    message_thread = state["message_input"]["message_thread"]

    # SEMANTIC + EPISODIC MEMORY: Combine general knowledge with specific context
    system_prompt = triage_system_prompt.format(
        full_name=profile["full_name"],
        name=profile["name"],
        user_profile_background=profile["user_profile_background"],
        triage_no=prompt_instructions["triage_rules"]["ignore"],
        triage_notify=prompt_instructions["triage_rules"]["notify"],
        triage_email=prompt_instructions["triage_rules"]["respond"],
    )

In our example, this was the agent recalling the specific experience of Sarah Chen’s urgent message on Tuesday morning about the OAuth authentication failing with a 401 error on the /marvel/auth/verify endpoint, which was blocking 3 developers and threatening the Q3 launch deadline. This wasn’t a general rule, but the complete contextual memory of that specific incident requiring immediate escalation.
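The few-shot idea described above can be sketched like this. The format_few_shot helper is hypothetical (not part of LangGraph), and the episode fields mirror the test_message_1 dict shown earlier; the point is that stored episodes of past (message, decision) pairs are rendered directly into the triage prompt as examples.

```python
# Illustrative sketch: turn stored episodes (a past message plus the triage
# decision that worked for it) into few-shot examples for the triage prompt.

def format_few_shot(episodes: list[dict]) -> str:
    """Render past (message, decision) pairs as prompt-ready example blocks."""
    blocks = []
    for ep in episodes:
        blocks.append(
            f"Subject: {ep['subject']}\n"
            f"From: {ep['author']}\n"
            f"Decision: {ep['decision']}"
        )
    return "\n---\n".join(blocks)

past_episodes = [
    {
        "author": "Sarah Chen <sarah.chen@company.com>",
        "subject": "BLOCKER: API Integration Issue for Project Marvel",
        "decision": "respond",
    },
    {
        "author": "Company Wide <announcements@company.com>",
        "subject": "Reminder: Health & Wellness Week Next Month",
        "decision": "ignore",
    },
]

print(format_few_shot(past_episodes))
```

Appending this string to the system prompt is what "show, don't tell" means here: the model sees concrete past outcomes, not just abstract rules.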

3. Procedural Memory: The Agent’s Muscle Memory

Procedural Memory is the agent’s set of core principles and instincts. It’s the rulebook that defines its personality, its style, and how it operates. Think of it less as a stored memory and more as ingrained muscle memory.

How it works for an agent:
This is the agent’s system prompt. The system prompt acts as its constitution or prime directives: a set of instructions it follows at all times. This is what ensures the agent is consistent, safe, and acts the way you want it to.

# ========================================
# PROCEDURAL MEMORY: Skills and action-oriented tools
# ========================================
# These functions represent learned procedures for executing project management tasks

@tool
def create_jira_ticket(
    title: str,
    description: str,
    priority: str,
    assignee: str,
) -> str:
    """PROCEDURAL MEMORY: How to create and track project tasks."""
    return f"JIRA ticket created: '{title}' - Priority: {priority}, Assigned to: {assignee}"

@tool
def schedule_standup(
    attendees: list[str],
    topic: str,
    duration_minutes: int,
    preferred_time: str,
) -> str:
    """PROCEDURAL MEMORY: How to organize and schedule team meetings."""
    return f"Meeting '{topic}' scheduled for {preferred_time} with {len(attendees)} attendees ({duration_minutes} min)"

def call_gemini_for_triage(system_prompt: str, user_prompt: str) -> Router:
    """PROCEDURAL MEMORY: How to classify and triage incoming messages."""
    model = genai.GenerativeModel(GEMINI_MODEL_NAME)
    response = model.generate_content([system_prompt, user_prompt])
    response_text = response.text.strip()

    # PROCEDURAL MEMORY: Steps for stripping Markdown fences from JSON responses
    if response_text.startswith("```json"):
        response_text = response_text[7:]
    if response_text.endswith("```"):
        response_text = response_text[:-3]
    response_text = response_text.strip()

    try:
        result_dict = json.loads(response_text)
        return Router(**result_dict)
    except json.JSONDecodeError:
        # PROCEDURAL MEMORY: Error handling - default to notify for safety
        return Router(
            reasoning="Failed to parse response, defaulting to notify",
            classification="notify",
        )

# PROCEDURAL MEMORY: Decision tree for routing based on classification
if result.classification == "respond":
    print("🎯 Classification: RESPOND - This message requires action")
    print(f"Reasoning: {result.reasoning}")
    goto = "response_agent"
    update = {
        "messages": [
            {
                "role": "user",
                "content": f"Handle this Project Marvel message: {state['message_input']}",
            }
        ]
    }
elif result.classification == "ignore":
    print("⏭️ Classification: IGNORE - This message can be skipped")
    print(f"Reasoning: {result.reasoning}")
    update = None
    goto = END

For the project assistant, the procedural memory was its fundamental learned process: “When you receive a message with ‘BLOCKER’ in the subject line from a team member, immediately classify it as ‘respond’, then execute the sequence:

create_jira_ticket() → schedule_standup() → send_slack_message()

to coordinate the response.” This step-by-step procedure was what told the agent exactly how to handle Sarah’s authentication blocker systematically and efficiently.
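That learned sequence can be sketched as a simple pipeline. The handle_blocker function and the inline step strings are stand-ins for the @tool-decorated functions above, just to show the fixed order of execution; the real agent would dispatch each step through its tools_map.

```python
# Illustrative sketch: the blocker-handling procedure as a fixed sequence
# of tool calls whose status strings are collected in order.

def handle_blocker(title: str, reporter: str, assignee: str) -> list[str]:
    """Run the learned blocker sequence: ticket -> standup -> slack."""
    steps = [
        lambda: f"JIRA ticket created: '{title}' - Priority: High, Assigned to: {assignee}",
        lambda: f"Standup scheduled with {reporter} and {assignee}",
        lambda: f"Slack message sent to {reporter}: ticket filed and sync booked",
    ]
    return [step() for step in steps]

for line in handle_blocker(
    "OAuth 401 on /marvel/auth/verify", "Sarah Chen", "John Martinez"
):
    print(line)
```

Encoding the sequence as data (a list of steps) rather than prose makes the procedure auditable and repeatable, which is exactly what procedural memory buys you.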

Main System Orchestration

def project_assistant(message_input: Dict) -> Dict:
    """Main function that orchestrates the project management assistant."""
    print("\n" + "=" * 60)
    print("PROJECT MARVEL ASSISTANT - PROCESSING MESSAGE")
    print("=" * 60)

    # Step 1: Triage the message
    classification = triage_router(message_input)

    result = {
        "classification": classification,
        "message_input": message_input,
    }

    # Step 2: If it needs a response, generate one
    if classification == "respond":
        print("\n🤖 Generating response...")
        response = response_agent(message_input)
        result["response"] = response
        print(f"\n📝 Generated Response: {response}")

    return result

Now, let's test some cases.

CASE 1: Critical Blocker Report

test_message_1 = {
    "author": "Sarah Chen <sarah.chen@company.com>",
    "to": "John Martinez <john.martinez@company.com>",
    "subject": "BLOCKER: API Integration Issue for Project Marvel",
    "message_thread": """Hi John,

We've hit a critical blocker on the authentication microservice integration for Project Marvel. The OAuth flow is failing in staging environment.

Error: Token validation failing with 401 on /marvel/auth/verify

This is blocking 3 developers from completing their sprint tasks. Can we schedule a quick sync to debug this today?

Impact: High - affects core functionality for Q3 launch

Thanks,
Sarah""",
}

Classification: RESPOND — This message requires action

Reasoning: The message explicitly states a ‘critical blocker’ on Project Marvel, which is John’s project. It details the issue (OAuth flow failing), provides an error, and highlights a significant impact (blocking 3 developers, affecting core functionality for Q3 launch). The sender, Sarah, is likely a team member, and she directly requests ‘a quick sync to debug this today’……

# Executing tool: schedule_standup with parameters:
{'team_members': ['Sarah Chen', 'John Martinez'],
 'date': 'today', 'time': 'earliest_available',
 'duration': '30 minutes',
 'topic': 'Project Marvel: Debug API Integration Blocker (OAuth Flow)'}

Generated Response:
A sync meeting has been scheduled for today to debug the API Integration Blocker (OAuth Flow) with Sarah Chen and John Martinez. Please ensure both attend the scheduled meeting. I will also create a JIRA ticket to track this critical blocker.

CASE 2: Company Announcement

test_message_2 = {
    "author": "Company Wide <announcements@company.com>",
    "to": "All Employees <all@company.com>",
    "subject": "Reminder: Health & Wellness Week Next Month",
    "message_thread": """Dear Team,

Just a friendly reminder that our annual Health & Wellness Week is coming up next month!

Activities include:
- Yoga sessions
- Mental health workshops
- Free health screenings

Sign up on the company portal.

Best,
HR Team""",
}

Classification: IGNORE — This message can be skipped

Reasoning: The message is from ‘Company Wide’ to ‘All Employees’ and is a ‘Reminder: Health & Wellness Week Next Month’. This is a classic example of a general company announcement. According to the rules, ‘General company announcements’ are messages that are not worth responding to or tracking.

Conclusion

Long-term agentic memory in LangGraph transforms AI agents from forgetful bots into intelligent assistants that learn and adapt over time. By implementing semantic, episodic, and procedural memory systems, you can create agents that remember facts, recall past experiences, and follow learned procedures.

Remember: memory is not just about storage; it's about creating agents that become more valuable with every interaction, just like human assistants who learn your preferences and working style over time.

Thank you for reading! 🤗 I hope you found this article both informative and enjoyable.

For more content like this, follow me on LinkedIn


Understanding Long-Term Memory in LangGraph: A Hands-On Guide was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.
