
Agents

Agents are the core building block in Reminix. Create an agent, wire up tools, and call it via the /invoke endpoint.

Two Paths to Production

Reminix gives you flexibility without complexity. Choose the path that fits your needs.

Configure via UI

For quick results: define agents through forms (Dashboard or CLI). No code required.
  • Configure system prompts, model, and settings
  • Wire up platform tools (web, memory, knowledge, storage)
  • Instant deployment
  • Type: managed

Deploy Custom Code

For full control: write Python or TypeScript. Discovered automatically on deploy.
  • Full control over implementation
  • Use any framework (LangChain, Vercel AI, etc.)
  • Build custom tools for domain-specific logic
  • Type: python, typescript, or adapter-specific
Either path gets you: instant APIs, real-time streaming, SDKs for embedding, and production infrastructure.
The rest of this page focuses on custom agents — agents you define in code. For managed agents configured via UI, see the Dashboard documentation.

Custom Agent Example

from reminix_runtime import agent, serve

@agent
async def my_agent(prompt: str) -> str:
    """Process a prompt and return a result."""
    return f"You said: {prompt}"

serve(agents=[my_agent], port=8080)

Calling Agents

The API uses input and output only. Request body: { input: { ... }, stream?: boolean }. Response: { output: ..., execution?: { id, url, type, status?, duration_ms? } }.
curl -X POST http://localhost:8080/agents/my-agent/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"prompt": "Hello!"}}'

# Response: {"output": "You said: Hello!", "execution": {...}}
Or via the SDK:
response = client.agents.invoke("my-agent", prompt="Hello!")
print(response["output"])  # "You said: Hello!"

Agent templates

Use a template to get standard input/output shapes without defining schemas yourself. Pass template when creating an agent:
| Template | Input | Output | Use case |
|---|---|---|---|
| prompt (default) | { prompt: string } | string | Single prompt in, text out |
| chat | { messages: Message[] } | string | Multi-turn chat; final reply as string |
| task | { task: string, ... } | JSON | Task name + params, structured result |
| rag | { query: string, messages?: Message[], collectionIds?: string[] } | string | RAG query with optional history/collections |
| thread | { messages: Message[] } | Message[] | Multi-turn with tool calls; returns updated thread |
Messages are OpenAI-style: role (e.g. system, developer, user, assistant, tool), content (string or array of parts), and optional tool_calls, tool_call_id, name. Use the runtime’s Message and ToolCall types for type-safe handlers.
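
For instance, a task handler returns structured JSON and a thread handler returns the updated message list. A minimal sketch, assuming keyword parameters map to input fields and that Message accepts role/content keyword arguments:

from reminix_runtime import agent, serve, Message

@agent(template="task")
async def summarize(task: str, text: str = "") -> dict:
    """Task template: task name plus params in, structured JSON out."""
    return {"task": task, "length": len(text)}

@agent(template="thread")
async def thread_agent(messages: list[Message]) -> list[Message]:
    """Thread template: return the updated thread, reply included."""
    # Assumption: Message can be constructed with role/content kwargs.
    reply = Message(role="assistant", content="Acknowledged.")
    return messages + [reply]

serve(agents=[summarize, thread_agent], port=8080)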

Chat agents

Chat agents use the chat template: they expect messages and return a single string (the assistant’s reply).
from reminix_runtime import agent, serve, Message

@agent(template="chat")
async def assistant(messages: list[Message]) -> str:
    """A conversational assistant."""
    last_msg = messages[-1].content if messages else ""
    return f"You said: {last_msg}"

serve(agents=[assistant], port=8080)

Calling chat agents

Request body uses input with messages. Response is { output: string }.
curl -X POST http://localhost:8080/agents/assistant/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"messages": [{"role": "user", "content": "Hello!"}]}}'

# Response: {"output": "You said: Hello!"}
Or via the SDK:
response = client.agents.invoke(
    "assistant",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response["output"])  # "You said: Hello!"

Streaming

Both agents and chat agents support streaming via async generators:
@agent
async def streamer(prompt: str):
    """Stream a response word by word."""
    for word in prompt.split():
        yield word + " "

@agent(template="chat")
async def streaming_assistant(messages: list[Message]):
    """Stream a conversational response."""
    response = f"You said: {messages[-1].content}" if messages else ""
    for char in response:
        yield char
Request with stream: true:
curl -X POST http://localhost:8080/agents/streamer/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"prompt": "Hello world"}, "stream": true}'
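
On the SDK side, a streamed invoke would be consumed chunk by chunk. A hypothetical sketch; the stream flag and iterator interface are assumptions, not confirmed SDK API:

# Hypothetical: assumes invoke(..., stream=True) returns an iterable of chunks.
for chunk in client.agents.invoke("streamer", prompt="Hello world", stream=True):
    print(chunk, end="", flush=True)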

Advanced: Custom Input & Output Schemas

For more control, define custom input and output shapes; the runtime infers the schemas from your type hints:
@agent(name="calculator")
async def calculator(a: float, b: float, operation: str = "add") -> float:
    """Add or subtract. Input schema is inferred from type hints."""
    if operation == "add":
        return a + b
    return a - b
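
An invoke request then supplies those parameters under input, following the request shape above (the response value shown is illustrative):

curl -X POST http://localhost:8080/agents/calculator/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"a": 2, "b": 3, "operation": "add"}}'

# Response: {"output": 5.0}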

API Request & Response

As above, the request body is { input: { ... }, stream?: boolean } and the response is { output: ... }. Discovery (/info) exposes each agent's input and output schemas plus optional metadata:
{
  "agents": [
    {
      "name": "calculator",
      "type": "agent",
      "input": { ... },
      "output": { ... }
    },
    {
      "name": "assistant",
      "type": "agent",
      "template": "chat",
      "input": { ... },
      "output": { ... }
    }
  ]
}
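
To inspect a running server's discovery document (assuming /info is served at the root, as the description above suggests):

curl http://localhost:8080/info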

Agent Type Reference

The agent type field indicates how the agent was created:
| Type | Description |
|---|---|
| managed | Created via the dashboard/CLI |
| python | Native Python agent (decorator-based) |
| typescript | Native TypeScript agent (factory-based) |
| python-langchain | Python agent using LangChain adapter |
| python-openai | Python agent using OpenAI adapter |
| typescript-vercel-ai | TypeScript agent using Vercel AI adapter |
| typescript-langchain | TypeScript agent using LangChain adapter |
The type is automatically set based on how the agent is created and which adapter (if any) is used.
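
As an illustration, wrapping a framework-specific agent might look like the sketch below. This is hypothetical: the adapter module path and the wrap_langchain helper are assumptions, not confirmed API.

from reminix_runtime import serve
# Hypothetical import path; the adapter module and helper names are assumptions.
from reminix_runtime.adapters.langchain import wrap_langchain
from langchain_openai import ChatOpenAI

# Wrapping a LangChain runnable would register the agent with type "python-langchain".
llm = ChatOpenAI(model="gpt-4o-mini")
docs_agent = wrap_langchain(llm, name="docs-qa")

serve(agents=[docs_agent], port=8080)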

Quick Reference

|  | Prompt / task | Chat / thread |
|---|---|---|
| Factory | agent() / @agent | agent(..., { template: 'chat' }) / @agent(template="chat") |
| Input | { prompt } or custom (default template: prompt) | { messages } |
| Output | { output } (string or JSON) | Chat: { output } (string). Thread: { output } (Message[]) |
| Use case | Task-oriented, single prompt | Conversations, tool-calling threads |
| Streaming | Yes | Yes |