Basic Invoke
Invoke an agent and get a result:
from reminix import Reminix
client = Reminix()
response = client.agents.invoke(
    "my-agent",
    input={
        "prompt": "Analyze this data",
        "data": {"sales": [100, 200, 150]}
    }
)

print(response["output"])
The API uses input and output only. Request: input={ ... }. Response: { "output": ..., "execution": { id, url, type, status?, duration_ms? } }.
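For illustration, the response fields can be read like this (the literal values below are hypothetical, not real output):

```python
# Hypothetical response matching the documented shape
response = {
    "output": {"summary": "Sales trended upward"},
    "execution": {
        "id": "exec_123",
        "url": "https://app.example.com/executions/exec_123",
        "type": "invoke",
        "status": "completed",
        "duration_ms": 840,
    },
}

result = response["output"]
# status and duration_ms are marked optional, so read them with .get()
duration = response["execution"].get("duration_ms")
```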
# Task agent with custom input
response = client.agents.invoke(
    "data-analyzer",
    input={
        "task": "analyze_trends",
        "data": {
            "values": [100, 200, 150, 300],
            "labels": ["Q1", "Q2", "Q3", "Q4"]
        },
        "options": {"include_forecast": True}
    }
)
# Agent with prompt input
response = client.agents.invoke(
    "code-generator",
    input={"prompt": "A function to calculate fibonacci numbers"}
)
# For chat-style interactions with messages, use client.agents.chat() instead
# See the Chat guide: /python/chat
Response
The response is { "output": ..., "execution": { id, url, type, status?, duration_ms? } }. Use response["output"] for the result.
response = client.agents.invoke("my-agent", input={"prompt": "Hello"})
print(response["output"])
For chat-style interactions, use client.agents.chat() instead, which returns a standardized messages array. See the Chat guide for details.
With Context
Pass additional context to your agent:
response = client.agents.invoke(
    "my-agent",
    input={"prompt": "personalized analysis"},
    context={
        "identity": {
            "user_id": "user_456"
        },
        "tenant_id": "tenant_xyz",
        "user_preferences": {"language": "en"}
    }
)
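If many calls share the same tenant and preference context, a small application-side helper can assemble it. The helper name and default values below are illustrative, not part of the SDK:

```python
# Illustrative defaults; real values come from your application
DEFAULT_CONTEXT = {
    "tenant_id": "tenant_xyz",
    "user_preferences": {"language": "en"},
}

def build_context(user_id, **overrides):
    """Merge a per-user identity into the shared default context."""
    context = {**DEFAULT_CONTEXT, "identity": {"user_id": user_id}}
    context.update(overrides)
    return context

ctx = build_context("user_456")
```

Per-call overrides win over the defaults, so one-off requests can still customize any field.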
Streaming
Stream responses for real-time output:
# Stream the response
for chunk in client.agents.invoke(
    "my-agent",
    input={"prompt": "Write a story"},
    stream=True
):
    print(chunk, end="", flush=True)
print()  # Newline after streaming
Collecting Streamed Response
chunks = []
for chunk in client.agents.invoke(
    "my-agent",
    input={"prompt": "Generate content"},
    stream=True
):
    chunks.append(chunk)
    print(chunk, end="", flush=True)
full_response = "".join(chunks)
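The collect-while-printing pattern above can be wrapped in a small helper that works with any iterable of string chunks. The function name here is a suggestion, not an SDK API:

```python
def collect_stream(chunks, echo=False):
    """Accumulate string chunks into one response, optionally echoing them."""
    parts = []
    for chunk in chunks:
        if echo:
            print(chunk, end="", flush=True)
        parts.append(chunk)
    return "".join(parts)

# Works with any iterable, e.g. the stream returned by invoke(..., stream=True)
full_response = collect_stream(["Once ", "upon ", "a time"])
```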
Async Invoke
For async applications:
import asyncio

from reminix import AsyncReminix

async def main() -> None:
    client = AsyncReminix()

    # Async invoke
    response = await client.agents.invoke(
        "my-agent",
        input={"prompt": "Analyze this"}
    )
    print(response)

    # Async streaming
    async for chunk in await client.agents.invoke(
        "my-agent",
        input={"prompt": "Generate content"},
        stream=True
    ):
        print(chunk, end="", flush=True)
    print()

asyncio.run(main())
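One benefit of the async client is fanning out several invocations concurrently with asyncio.gather. The sketch below uses a stand-in coroutine in place of client.agents.invoke so it runs on its own:

```python
import asyncio

async def invoke_stub(agent, prompt):
    # Stand-in for: await client.agents.invoke(agent, input={"prompt": prompt})
    await asyncio.sleep(0)
    return {"output": f"{agent}: {prompt}"}

async def run_all():
    # Launch both invocations concurrently and wait for all results
    responses = await asyncio.gather(
        invoke_stub("summarizer", "Summarize Q1"),
        invoke_stub("summarizer", "Summarize Q2"),
    )
    return [r["output"] for r in responses]

results = asyncio.run(run_all())
```

gather preserves argument order, so results line up with the calls that produced them.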
Idempotency
Prevent duplicate processing with idempotency keys:
response = client.agents.invoke(
    "payment-processor",
    input={"action": "charge", "amount": 100},
    idempotency_key="charge_abc123"
)

# Same key returns the cached response (for 24 hours)
response2 = client.agents.invoke(
    "payment-processor",
    input={"action": "charge", "amount": 100},
    idempotency_key="charge_abc123"
)
# response2 is the same as response
Idempotency only works for non-streaming requests. Streaming responses are not cached.
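Keys just need to be stable for logically identical requests. One way to generate them (an application-side sketch, not an SDK feature) is to hash the agent name plus a canonical form of the payload:

```python
import hashlib
import json

def make_idempotency_key(agent, payload):
    """Derive a stable key from the agent name and request payload."""
    # sort_keys makes logically equal payloads serialize identically
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(f"{agent}:{canonical}".encode()).hexdigest()
    return f"{agent}_{digest[:16]}"

key = make_idempotency_key("payment-processor", {"action": "charge", "amount": 100})
```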
Handling the Response
The response structure depends on your agent’s configuration:
response = client.agents.invoke("my-agent", input={"prompt": "Hello"})

# Access the output (structure depends on agent)
output = response["output"]

# Handle different output types
if isinstance(output, dict):
    print(f"Result: {output.get('result')}")
elif isinstance(output, str):
    print(f"Response: {output}")
elif isinstance(output, list):
    for item in output:
        print(item)
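The branching above can be folded into one helper that renders any output shape as text. The function is illustrative, not part of the SDK:

```python
def format_output(output):
    """Render an agent's output as a string regardless of its shape."""
    if isinstance(output, str):
        return output
    if isinstance(output, dict):
        return "\n".join(f"{key}: {value}" for key, value in output.items())
    if isinstance(output, list):
        return "\n".join(str(item) for item in output)
    return str(output)  # Fallback for numbers, None, etc.
```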
Timeout Considerations
Invoke requests have a 60-second timeout. For longer tasks:
# Option 1: Use streaming (no timeout)
for chunk in client.agents.invoke(
    "my-agent",
    input={"prompt": "Long running analysis"},
    stream=True
):
    print(chunk, end="")

# Option 2: Increase the client timeout
client = Reminix(timeout=120)  # 2 minutes
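For transient timeouts it can also help to retry with exponential backoff. The sketch below catches Python's built-in TimeoutError as a stand-in; the SDK's actual timeout exception may differ:

```python
import time

def invoke_with_retry(invoke, attempts=3, backoff=1.0):
    """Call invoke(), retrying on timeouts with exponential backoff."""
    for attempt in range(attempts):
        try:
            return invoke()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # Out of retries; surface the error
            time.sleep(backoff * 2 ** attempt)

# Example: a callable that times out twice, then succeeds
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("simulated timeout")
    return "ok"

result = invoke_with_retry(flaky, attempts=3, backoff=0)
```

Only retry operations that are safe to repeat, or pair the retry with an idempotency key as shown above.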