Reminix turns any AI agent into a production-ready REST API. Here’s what you can build.

Customer Support Bot

A conversational agent with memory, user context, and access to your systems. What you need:
  • Chat agent with conversation persistence
  • Custom tools for your domain (order lookup, ticket creation)
  • Client tokens for browser embedding
from anthropic import AsyncAnthropic
from reminix_runtime import agent, tool, serve, Message

anthropic = AsyncAnthropic()

@tool
async def lookup_order(order_id: str) -> dict:
    """Look up an order by ID."""
    # Query your database
    return {"order_id": order_id, "status": "shipped", "eta": "Jan 30"}

@tool
async def create_ticket(subject: str, description: str) -> dict:
    """Create a support ticket."""
    # Call your ticketing system
    return {"ticket_id": "TKT-1234", "status": "created"}

@agent(template="chat")
async def support_bot(messages: list[Message]) -> str:
    """Customer support assistant with access to orders and tickets."""
    response = await anthropic.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system="You are a support agent.",
        messages=[{"role": m.role, "content": m.content or ""} for m in messages],
        tools=[
            {
                "name": "lookup_order",
                "description": "Look up order status",
                "input_schema": {
                    "type": "object",
                    "properties": {"order_id": {"type": "string"}},
                    "required": ["order_id"],
                },
            },
            {
                "name": "create_ticket",
                "description": "Create support ticket",
                "input_schema": {
                    "type": "object",
                    "properties": {
                        "subject": {"type": "string"},
                        "description": {"type": "string"},
                    },
                    "required": ["subject", "description"],
                },
            },
        ]
    )
    # For brevity, return the first text block; a full implementation
    # would execute any tool_use blocks and send the results back
    text_blocks = [block.text for block in response.content if block.type == "text"]
    return text_blocks[0] if text_blocks else ""

serve(agents=[support_bot], tools=[lookup_order, create_ticket], port=8080)
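Once served, any HTTP client can call the bot through its invoke endpoint. A minimal sketch using requests (agent names map to hyphenated URL slugs, as in the streaming example later on this page; treat the response shape as an assumption and see the docs for the exact envelope):
import requests

# support_bot is served as "support-bot" (underscores become hyphens,
# matching the streaming-assistant example later on this page)
resp = requests.post(
    "http://localhost:8080/agents/support-bot/invoke",
    json={"messages": [{"role": "user", "content": "Where is order 1042?"}]},
)
resp.raise_for_status()
print(resp.json())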
Relevant docs: Chat Agents, Custom Tools, Client Tokens, Conversations

Wrap Your Existing AI Stack

Already using LangChain, OpenAI, or Anthropic? Wrap it in 3 lines. What you need:
  • Your existing agent code
  • The appropriate adapter package
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from reminix_langchain import wrap_agent
from reminix_runtime import serve

# Your existing LangChain agent (tools and prompt are whatever
# your app already defines)
llm = ChatOpenAI(model="gpt-4o")
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

# Wrap and serve
serve(agents=[wrap_agent(executor, "my-agent")], port=8080)
Now your agent is accessible at POST /agents/my-agent/invoke with streaming, discovery, and SDK support.
Relevant docs: LangChain Integration, OpenAI Integration, Anthropic Integration

Internal Workflow Automation

Task agents that connect to your internal systems. What you need:
  • Task agent (@agent / agent())
  • Custom tools for your APIs
from reminix_runtime import agent, tool, serve

@tool
async def query_database(sql: str) -> list:
    """Execute a read-only SQL query."""
    # Your database connection
    return [{"id": 1, "name": "Example"}]

@tool
async def send_slack_message(channel: str, message: str) -> dict:
    """Send a message to a Slack channel."""
    # Your Slack API call
    return {"ok": True, "ts": "1234567890.123456"}

@tool
async def create_jira_ticket(project: str, summary: str, description: str) -> dict:
    """Create a Jira ticket."""
    # Your Jira API call
    return {"key": "PROJ-123", "url": "https://..."}

@agent
async def ops_agent(task: str, context: dict | None = None) -> str:
    """Internal operations agent with access to database, Slack, and Jira."""
    # Your LLM logic here with tool access
    return f"Completed: {task}"

serve(
    agents=[ops_agent],
    tools=[query_database, send_slack_message, create_jira_ticket],
    port=8080
)
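A sketch of calling the task agent, assuming the runtime maps the JSON body onto the agent's parameters (task and context); check the Task Agents docs for the exact request shape:
import requests

# Assumption: the request body maps onto ops_agent's parameters
resp = requests.post(
    "http://localhost:8080/agents/ops-agent/invoke",
    json={"task": "Summarize yesterday's deploys in #ops", "context": {"env": "prod"}},
)
resp.raise_for_status()
print(resp.json())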
Relevant docs: Task Agents, Custom Tools

Streaming Chat Interface

Real-time token streaming for responsive UIs. What you need:
  • Streaming agent (async generator)
  • SSE-compatible frontend
from anthropic import AsyncAnthropic
from reminix_runtime import agent, serve, Message

anthropic = AsyncAnthropic()

@agent(template="chat")
async def streaming_assistant(messages: list[Message]):
    """Stream responses token by token."""
    async with anthropic.messages.stream(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": m.role, "content": m.content or ""} for m in messages]
    ) as stream:
        async for text in stream.text_stream:
            yield text

serve(agents=[streaming_assistant], port=8080)
Call with stream: true:
curl -X POST http://localhost:8080/agents/streaming-assistant/invoke \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a poem"}], "stream": true}'
Relevant docs: Streaming

Serverless Deployment

Deploy to edge functions and serverless platforms. What you need:
  • .to_asgi() (Python) or .toHandler() (TypeScript)
  • Your serverless platform
from mangum import Mangum
from reminix_runtime import agent

@agent
async def my_agent(prompt: str) -> str:
    return f"Processed: {prompt}"

# Mangum adapts the agent's ASGI app into an AWS Lambda handler
handler = Mangum(my_agent.to_asgi())
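Because .to_asgi() returns a standard ASGI app (which is what Mangum adapts above), the same agent runs on any ASGI server. For local testing, a sketch with uvicorn:
import uvicorn

# Any ASGI server can host the app returned by .to_asgi()
app = my_agent.to_asgi()

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8080)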
Relevant docs: Deploying, Self-Hosting

Multi-Agent System

Serve multiple specialized agents from one deployment. What you need:
  • Multiple agents with different capabilities
  • Single serve() call
from reminix_runtime import agent, serve, Message

@agent
async def summarizer(text: str) -> str:
    """Summarize long text."""
    # Your summarization logic
    return text[:200] + "..."

@agent
async def translator(text: str, target_language: str = "es") -> str:
    """Translate text to another language."""
    # Your translation logic
    return f"[{target_language}] {text}"

@agent(template="chat")
async def assistant(messages: list[Message]) -> str:
    """General-purpose chat assistant."""
    return f"You said: {messages[-1].content}" if messages else "Hello!"

# All agents available at their respective endpoints
serve(agents=[summarizer, translator, assistant], port=8080)
Discover all agents via /info:
curl http://localhost:8080/info
{
  "agents": [
    { "name": "summarizer", "type": "agent", ... },
    { "name": "translator", "type": "agent", ... },
    { "name": "assistant", "type": "agent", "template": "chat", ... }
  ]
}
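The same discovery works programmatically. A sketch that lists every deployed agent and then invokes one, assuming the /info response shape shown above and that summarizer's JSON body maps onto its text parameter:
import requests

info = requests.get("http://localhost:8080/info").json()
print([a["name"] for a in info["agents"]])

# Assumption: the body maps onto summarizer's "text" parameter
resp = requests.post(
    "http://localhost:8080/agents/summarizer/invoke",
    json={"text": "A long report that needs a short summary..."},
)
print(resp.json())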
Relevant docs: Multiple Agents

Summary

| Use Case | Agent Type | Key Features |
| --- | --- | --- |
| Customer Support | Chat agent | Conversations, tools, client tokens |
| Wrap Existing Stack | Adapter | LangChain, OpenAI, Anthropic, Vercel AI |
| Internal Automation | Task agent | Custom tools, context |
| Streaming UI | Streaming agent | SSE, async generators |
| Serverless | Any | .to_asgi(), .toHandler() |
| Multi-Agent | Multiple | Single deployment, /info discovery |

What Reminix Handles

When you serve an agent with Reminix, you get:
  • REST API at /agents/{name}/invoke
  • Streaming via Server-Sent Events
  • Discovery via /info endpoint
  • SDKs for Python and TypeScript clients
  • Client Tokens for browser/mobile apps
  • Conversations for chat persistence
You focus on your agent logic. Reminix handles the infrastructure.