
Support Agent

A runnable intermediate example script using mem0 and openai.

Key Facts

Level
intermediate
Runtime
Python • OpenAI API
Pattern
Memory-aware assistance with legible context
Interaction
Live sandbox • Script
Updated
14 March 2026

Navigate this example

High-level flow

How this example moves from input to execution and reviewable output
Support Agent -> Run the agent task -> User request -> System execution -> Reviewable output -> Ensure that background work…

Start: Support Agent
Checkpoint: Run the agent task
Outcome: User request

Why this page exists

This example is shown as both real source code and a product-facing interaction pattern so learners can connect implementation, UX, and doctrine without leaving the library.

How should this example be used in the platform?

Use the sandbox to understand the experience pattern first, then inspect the source to see how the product boundary, model boundary, and doctrine boundary are actually implemented.

UX pattern: Memory-aware assistance with legible context
Ensure that background work remains perceptible
Align feedback with the user’s level of attention
Establish trust through inspectability
Source references
Library entry
knowledge-mem0-oss-support-agent
Source path
content/example-library/sources/knowledge/mem0/oss/support_agent.py
Libraries
mem0, openai, python-dotenv
Runtime requirements
OPENAI_API_KEY
Related principles
Ensure that background work remains perceptible, Align feedback with the user’s level of attention, Establish trust through inspectability
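The runtime requirements above imply a small amount of local setup. A minimal sketch, assuming Docker for Qdrant (the script's config expects it on localhost:6333) and a .env file one directory above the script, matching `load_dotenv("../.env")`:

```shell
# Assumed setup: run Qdrant via Docker on the port the example's config expects
docker run -d -p 6333:6333 qdrant/qdrant

# Place the API key in a .env file one directory above support_agent.py
echo "OPENAI_API_KEY=sk-..." > ../.env
```

The example's own comment points at a docker-compose.yml instead; either route works as long as Qdrant is reachable on port 6333 before the agent is instantiated.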

support_agent.py

python
from openai import OpenAI
from mem0 import Memory
from dotenv import load_dotenv

load_dotenv("../.env")


class CustomerSupportAIAgent:
    def __init__(self):
        """
        Initialize the CustomerSupportAIAgent with memory configuration and OpenAI client.
        """
        # ! Make sure qdrant is running (see docker-compose.yml)
        config = {
            "vector_store": {
                "provider": "qdrant",
                "config": {
                    "host": "localhost",
                    "port": 6333,
                },
            },
        }
        self.memory = Memory.from_config(config)
        self.client = OpenAI()
        self.app_id = "customer-support"

    def handle_query(self, query, user_id=None):
        """
        Handle a customer query and store the relevant information in memory.

        :param query: The customer query to handle.
        :param user_id: Optional user ID to associate with the memory.
        """
        # Send a chat completion request to the model (non-streaming)
        response = self.client.chat.completions.create(
            model="gpt-4.1",
            messages=[
                {"role": "system", "content": "You are a customer support AI agent."},
                {"role": "user", "content": query},
            ],
        )
        # Store the query in memory
        self.memory.add(query, user_id=user_id, metadata={"app_id": self.app_id})
        print(response.choices[0].message.content)

    def get_memories(self, user_id=None):
        """
        Retrieve all memories associated with the given user ID.

        :param user_id: Optional user ID to filter memories.
        :return: List of memories.
        """
        return self.memory.get_all(user_id=user_id)


# Instantiate the CustomerSupportAIAgent
support_agent = CustomerSupportAIAgent()

# Define a customer ID
customer_id = "default_user"

# Handle a customer query
support_agent.handle_query(
    "I need help with my recent order. It hasn't arrived yet.", user_id=customer_id
)
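One way to see what the product could expose to the user is to list everything the store holds for that customer. A toy sketch of that inspectability boundary, using a plain in-memory stand-in (hypothetical class, not the mem0/Qdrant-backed store) so it runs without external services:

```python
class InspectableMemory:
    """Toy stand-in for mem0's Memory: records what was stored so the product can show it."""

    def __init__(self):
        self._items = []

    def add(self, text, user_id=None, metadata=None):
        # Keep the same shape as the real call site: text, user_id, metadata
        entry = {"memory": text, "user_id": user_id, "metadata": metadata or {}}
        self._items.append(entry)
        return entry

    def get_all(self, user_id=None):
        # The product boundary: everything recalled for a user is listable, not hidden
        return [m for m in self._items if user_id is None or m["user_id"] == user_id]


memory = InspectableMemory()
memory.add(
    "Order #123 has not arrived",
    user_id="default_user",
    metadata={"app_id": "customer-support"},
)

for m in memory.get_all(user_id="default_user"):
    print(m["memory"])
```

The shape mirrors the `add`/`get_all` calls in support_agent.py, so swapping the real `Memory.from_config(...)` back in changes only the constructor.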
What should the learner inspect in the code?

Look for the exact place where system scope is bounded: schema definitions, prompt framing, runtime configuration, and the call site that turns user intent into a concrete model or workflow action.

Look for output contracts and validation
Look for the exact execution call
Look for what the product could expose to the user
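The execution call in support_agent.py sends only the raw query; a memory-aware variant would frame recalled context explicitly in the prompt so a reviewer can see exactly what shaped the reply. A minimal sketch of that prompt framing (the helper name and wording are illustrative, not the shipped code):

```python
def build_messages(query, recalled):
    """Frame recalled memories explicitly so the model (and a reviewer) can see them."""
    context = "\n".join(f"- {m}" for m in recalled) or "- (none)"
    return [
        {
            "role": "system",
            "content": (
                "You are a customer support AI agent.\n"
                "Known context about this customer:\n" + context
            ),
        },
        {"role": "user", "content": query},
    ]


messages = build_messages("Where is my order?", ["Order #123 has not arrived"])
print(messages[0]["content"])
```

These `messages` would then be passed to the same `chat.completions.create` call site the example already uses; the difference is that the context boundary is now legible in the prompt itself.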
How does the sandbox relate to the source?

The sandbox should make the UX legible: what the user sees, what the system is deciding, and how the result becomes reviewable. The source then shows how that behavior is actually implemented.

Memory lab

This experience makes it clear what the system remembers, what it retrieves, and how that context changes the reply.

UX explanation

Memory examples are only trustworthy when the user can understand which information was recalled and why it changes the response.

AI design explanation

Memory is not just persistence. It is a product choice about what to store, what to recall, and how to make the effect of context inspectable.
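That product choice can be made explicit in code: a small, inspectable policy that decides which messages become memories at all. A heuristic sketch (the markers and helper name are assumptions for illustration, not part of the example source):

```python
def propose_memory_write(message):
    """Product choice made legible: store durable customer facts, not transient chatter."""
    durable_markers = ("order", "prefer", "my name is", "address")
    if any(marker in message.lower() for marker in durable_markers):
        # Returning a reason makes the write reviewable in the UI, not silent
        return {"memory": message, "reason": "contains a durable customer fact"}
    return None


print(propose_memory_write("I prefer email over phone"))
print(propose_memory_write("thanks, bye!"))
```

Surfacing the proposed write (and its reason) before committing it is one way to keep the background work perceptible.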

Interaction walkthrough

  1. Enter a message or load a memory scenario.
  2. Run the context recall step.
  3. Inspect both the recalled memory and the personalized reply.
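The three steps above can be sketched end to end without any external services. A toy recall-and-reply loop (naive keyword matching, illustrative only; a real store would use vector search as in the example's Qdrant config):

```python
# Step 1: a scenario with one stored memory for the user
store = {"default_user": ["Order #123 has not arrived"]}


def recall(user_id, query):
    """Step 2: context recall, here a naive keyword match over stored memories."""
    words = query.lower().split()
    return [m for m in store.get(user_id, []) if any(w in m.lower() for w in words)]


def reply(query, recalled):
    """Step 3: the reply names the recalled context so its effect is inspectable."""
    if recalled:
        return f"Based on what we know ({recalled[0]}), here is an update."
    return "Could you share your order number?"


recalled = recall("default_user", "my order has not arrived")
print(recalled)
print(reply("my order has not arrived", recalled))
```

Showing the recalled list next to the reply is the whole point of the memory lab: the user can check that the personalization traces back to something the system actually remembered.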

Sandbox panels:

Message
Recalled memory / New memory (toggle)
Recalled context: the recalled context appears here.
Memory to store: the suggested memory write appears here.
Visible reply: the personalized reply appears here.

Used in courses and paths

This example currently stands on its own in the library, but it still connects to the principle system and the broader example family.

Related principles

Runtime architecture

Use this example in your agents

This example is also available through the blueprint’s agent-ready layer. Use the For agents page for the public MCP, deterministic exports, and Claude/Cursor setup.

Define triggers, context, and boundaries before increasing autonomy
Make control, observability, and recovery explicit in the runtime
Choose the right operational patterns before delegating to workflows