Task-Oriented Agent
Use the agent() factory for task-oriented agents:
import { agent, serve } from '@reminix/runtime';
const calculator = agent('calculator', {
description: 'Add two numbers',
input: {
type: 'object',
properties: {
a: { type: 'number' },
b: { type: 'number' }
},
required: ['a', 'b']
},
handler: async ({ a, b }) => (a as number) + (b as number)
});
serve({ agents: [calculator], port: 8080 });
Invoke the Agent
The invoke API wraps requests and responses in input/output envelopes: the request body is { input: { ... } } and the response is { output: ... }.
curl -X POST http://localhost:8080/agents/calculator/invoke \
-H "Content-Type: application/json" \
-d '{"input": {"a": 5, "b": 3}}'
# Response: {"output": 8}
Define an input schema for structured input:
const textProcessor = agent('text-processor', {
description: 'Process text in various ways',
input: {
type: 'object',
properties: {
text: { type: 'string' },
operation: { type: 'string', enum: ['uppercase', 'lowercase'] }
},
required: ['text']
},
handler: async ({ text, operation }) => {
const t = text as string;
return operation === 'uppercase' ? t.toUpperCase() : t.toLowerCase();
}
});
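Assuming text-processor is registered with serve() on port 8080 like the calculator above, invoking it looks the same, with both fields inside input:
curl -X POST http://localhost:8080/agents/text-processor/invoke \
-H "Content-Type: application/json" \
-d '{"input": {"text": "Hello World", "operation": "uppercase"}}'
# Response: {"output": "HELLO WORLD"}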
Agent templates
Use a template for standard input/output shapes: prompt (default), chat, task, rag, or thread. Messages are OpenAI-style (role, content, and optionally tool_calls, tool_call_id, name). Use the Message and ToolCall types from @reminix/runtime for type-safe handlers.
| Template | Input | Output |
|---|---|---|
| prompt | { prompt: string } | string |
| chat | { messages: Message[] } | string |
| task | { task: string, ... } | JSON |
| rag | { query: string, messages?: Message[], collectionIds?: string[] } | string |
| thread | { messages: Message[] } | Message[] |
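As an illustration of a non-default template, the sketch below uses thread, whose handler returns the full message list rather than a single string. The agent name and reply are placeholders, and the exact runtime behavior is inferred from the table above:
import { agent, type Message } from '@reminix/runtime';

const memoBot = agent('memo-bot', {
  template: 'thread',
  description: 'Append a canned assistant reply to the conversation',
  handler: async ({ messages }) => {
    const history = messages as Message[];
    // thread agents return Message[], so include the new assistant turn
    return [...history, { role: 'assistant', content: 'Noted.' } as Message];
  }
});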
Chat agent
Use the chat template for conversational agents. The handler receives { messages } and returns a string (the assistant’s reply).
import { agent, serve, type Message } from '@reminix/runtime';
const assistant = agent('assistant', {
template: 'chat',
description: 'A helpful assistant',
handler: async ({ messages }) => {
const lastMsg = (messages as Message[]).at(-1)?.content ?? '';
return `You said: ${lastMsg}`;
}
});
serve({ agents: [assistant], port: 8080 });
Invoke the chat agent
Request body: { input: { messages: [...] } }. Response: { output: string }.
curl -X POST http://localhost:8080/agents/assistant/invoke \
-H "Content-Type: application/json" \
-d '{"input": {"messages": [{"role": "user", "content": "Hello!"}]}}'
# Response: {"output": "You said: Hello!"}
With context
Access request context in handlers:
const contextualBot = agent('contextual-bot', {
template: 'chat',
description: 'Bot with context awareness',
handler: async (input, context) => {
const userId = (context as Record<string, unknown>)?.user_id ?? 'unknown';
return `Hello user ${userId}!`;
}
});
Streaming
Agents support streaming when the handler is written as an async generator; this works for task and chat agents alike:
import { agent, serve } from '@reminix/runtime';
// Streaming task agent
const streamer = agent('streamer', {
description: 'Stream text word by word',
input: {
type: 'object',
properties: { text: { type: 'string' } },
required: ['text']
},
handler: async function* ({ text }) {
for (const word of (text as string).split(' ')) {
yield word + ' ';
}
}
});
// Streaming chat agent
const streamingAssistant = agent('streaming-assistant', {
template: 'chat',
description: 'Stream responses token by token',
handler: async function* ({ messages }) {
const last = (messages as Array<{ content?: string }>).at(-1)?.content ?? '';
const response = `You said: ${last}`;
for (const char of response) {
yield char;
}
}
});
serve({ agents: [streamer, streamingAssistant], port: 8080 });
For streaming agents:
- stream: true in the request → chunks are sent via SSE (see the request sketch below)
- stream: false in the request → chunks are collected and returned as a single response
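For example, a streaming request to the streamer agent defined above might look like this; it assumes stream is a top-level field alongside input in the request body:
curl -N -X POST http://localhost:8080/agents/streamer/invoke \
-H "Content-Type: application/json" \
-d '{"input": {"text": "hello streaming world"}, "stream": true}'
# Chunks arrive as server-sent events; with "stream": false the same
# request returns a single {"output": "..."} response instead.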
View agent metadata via the /info endpoint:
curl http://localhost:8080/info
{
"agents": [
{
"name": "calculator",
"type": "agent",
"description": "Add two numbers",
"input": {
"type": "object",
"properties": { "a": { "type": "number" }, "b": { "type": "number" } },
"required": ["a", "b"]
},
"output": {
"type": "object",
"properties": { "content": { "type": "number" } },
"required": ["content"]
},
"streaming": false
},
{
"name": "assistant",
"type": "agent",
"template": "chat",
"description": "A helpful assistant",
"input": { ... },
"output": { "type": "string" },
"streaming": false
}
]
}
Integrating with AI Models
Use any AI SDK inside your handlers:
With OpenAI
import OpenAI from 'openai';
import { agent, serve } from '@reminix/runtime';
const openai = new OpenAI();
const gptAgent = agent('gpt-agent', {
template: 'chat',
description: 'Chat with GPT-4',
handler: async ({ messages }) => {
const response = await openai.chat.completions.create({
model: 'gpt-4o',
messages: (messages as Array<{ role: string; content?: string }>).map(m => ({
role: m.role as 'user' | 'assistant' | 'system',
content: m.content || ''
}))
});
const content = response.choices[0].message.content || '';
return content;
}
});
serve({ agents: [gptAgent], port: 8080 });
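Streaming combines naturally with the SDK: write the handler as an async generator and forward the deltas. A minimal sketch (the agent name is illustrative):
import OpenAI from 'openai';
import { agent, serve, type Message } from '@reminix/runtime';

const openai = new OpenAI();

const streamingGpt = agent('streaming-gpt', {
  template: 'chat',
  description: 'Stream GPT responses token by token',
  handler: async function* ({ messages }) {
    const stream = await openai.chat.completions.create({
      model: 'gpt-4o',
      stream: true,
      messages: (messages as Message[]).map(m => ({
        role: m.role as 'user' | 'assistant' | 'system',
        content: m.content || ''
      }))
    });
    for await (const chunk of stream) {
      // Each chunk carries a partial delta; yield only the new text
      yield chunk.choices[0]?.delta?.content ?? '';
    }
  }
});

serve({ agents: [streamingGpt], port: 8080 });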
With Anthropic
import Anthropic from '@anthropic-ai/sdk';
import { agent, serve } from '@reminix/runtime';
const anthropic = new Anthropic();
const claudeAgent = agent('claude-agent', {
template: 'chat',
description: 'Chat with Claude',
handler: async ({ messages }) => {
const msgs = messages as Array<{ role: string; content?: string }>;
// Extract system message if present
let system: string | undefined;
const chatMessages: Array<{ role: 'user' | 'assistant'; content: string }> = [];
for (const m of msgs) {
if (m.role === 'system') {
system = m.content || undefined;
} else {
chatMessages.push({
role: m.role as 'user' | 'assistant',
content: m.content || ''
});
}
}
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-20250514',
max_tokens: 4096,
system: system || 'You are a helpful assistant.',
messages: chatMessages
});
const content = response.content[0].type === 'text'
? response.content[0].text
: '';
return content;
}
});
serve({ agents: [claudeAgent], port: 8080 });
For simpler integration with AI frameworks, use our pre-built adapters like @reminix/openai, @reminix/anthropic, or @reminix/langchain.
Multiple Agents
Serve multiple agents from one server:
import { agent, serve } from '@reminix/runtime';
const summarizer = agent('summarizer', {
description: 'Summarize text',
input: {
type: 'object',
properties: { text: { type: 'string' } },
required: ['text']
},
handler: async ({ text }) => (text as string).slice(0, 100) + '...'
});
const translator = agent('translator', {
description: 'Translate text',
input: {
type: 'object',
properties: {
text: { type: 'string' },
target: { type: 'string' }
},
required: ['text']
},
handler: async ({ text, target }) =>
`Translated to ${target || 'es'}: ${text}`
});
const assistant = agent('assistant', {
template: 'chat',
description: 'A helpful assistant',
handler: async ({ messages }) =>
`You said: ${(messages as Array<{ content?: string }>).at(-1)?.content ?? ''}`
});
// Serve all agents
serve({ agents: [summarizer, translator, assistant], port: 8080 });
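Each agent is exposed at its own invoke path on the same server:
curl -X POST http://localhost:8080/agents/summarizer/invoke \
-H "Content-Type: application/json" \
-d '{"input": {"text": "A long article about agent runtimes..."}}'

curl -X POST http://localhost:8080/agents/translator/invoke \
-H "Content-Type: application/json" \
-d '{"input": {"text": "Hello", "target": "es"}}'
# Response: {"output": "Translated to es: Hello"}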
Advanced: Agent Class
For more control, use the Agent class directly:
import { Agent, serve } from '@reminix/runtime';
const myAgent = new Agent('my-agent', { metadata: { version: '1.0' } });
myAgent.handler(async (request) => {
  const prompt = request.input.prompt as string;
  return { output: `Processed: ${prompt}` };
});
// Optional: streaming handler
myAgent.streamHandler(async function* (request) {
  const prompt = request.input.prompt as string;
  for (const word of prompt.split(' ')) {
    yield word + ' ';
  }
});
serve({ agents: [myAgent], port: 8080 });
Serverless Deployment
Use toHandler() for serverless deployments:
import { agent } from '@reminix/runtime';
const myAgent = agent('my-agent', {
input: {
type: 'object',
properties: { prompt: { type: 'string' } },
required: ['prompt']
},
handler: async ({ prompt }) => `Completed: ${prompt}`
});
// Vercel Edge Function
export const POST = myAgent.toHandler();
export const GET = myAgent.toHandler();
// Cloudflare Workers
export default { fetch: myAgent.toHandler() };
// Deno Deploy
Deno.serve(myAgent.toHandler());
// Bun
Bun.serve({ fetch: myAgent.toHandler() });
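The Deno and Bun examples suggest toHandler() returns a standard fetch-style handler (Request in, Response out), so it can also be called directly, for example in a test. The URL and routing below are illustrative assumptions; check how your platform passes requests through:
// Assumes a fetch-style signature; the URL/path shown is illustrative
const handler = myAgent.toHandler();
const res = await handler(
  new Request('http://localhost/agents/my-agent/invoke', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ input: { prompt: 'summarize this' } })
  })
);
console.log(await res.json()); // e.g. { output: "Completed: summarize this" }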
Next Steps