# Research Assistant (Cloudflare DO)

This example shows how to build a research assistant on the Durable Objects runtime, which supports effectively unlimited streaming because it is not subject to the 1,000-subrequest limit that constrains the Workflows runtime.
> **Runnable Example:** The complete source code is available in the monorepo at
> `examples/research-assistant-cloudflare-do/`.
## Why Durable Objects?

The DO runtime is ideal when:
- Your agent produces many stream chunks (chat responses, thinking tokens)
- You need real-time WebSocket/SSE connections
- You want simpler architecture (one DO per run)
- You want hibernation for cost optimization
## v0.11 Features Demonstrated
This example showcases v0.11's AgentServer extensibility and AI SDK integration:
- **Lifecycle Hooks:** `beforeStart()`, `afterStart()`, `beforeResume()`, `afterResume()`
- **LLM Configuration:** `createLLMAdapter()` for custom model setup
- **Multi-Agent Support:** `resolveAgent()` for handling multiple agent types
- **Protected Utilities:** Access to `stateStore`, `streamManager`, `usageStore`
- **Custom Endpoints:** Add endpoints while preserving base functionality
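As a concrete illustration of the multi-agent idea, the sketch below shows the kind of name-based lookup that `resolveAgent()` performs. The `AgentDefinition` shape and helper functions here are local stand-ins for illustration, not the real `@helix-agents` API:

```typescript
// Illustrative stand-in types; the real definitions live in @helix-agents/core.
interface AgentDefinition {
  name: string;
  description: string;
}

const registry = new Map<string, AgentDefinition>();

function register(agent: AgentDefinition): void {
  registry.set(agent.name, agent);
}

// resolveAgent-style lookup: unknown agent types are a hard error,
// not a silent fallback to a default agent.
function resolveAgent(agentType: string): AgentDefinition {
  const agent = registry.get(agentType);
  if (!agent) throw new Error(`Unknown agent type: ${agentType}`);
  return agent;
}

register({ name: 'research-assistant', description: 'Searches and takes notes' });
console.log(resolveAgent('research-assistant').name); // → research-assistant
```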
## Project Structure

```text
research-assistant-cloudflare-do/
├── src/
│   ├── worker.ts             # Worker entry point with routing
│   ├── my-agent-server.ts    # Custom AgentServer with hooks
│   ├── agent.ts              # Agent definition
│   ├── types.ts              # Environment types and schemas
│   └── tools/
│       ├── index.ts
│       ├── web-search.ts
│       └── take-notes.ts
├── wrangler.toml             # Cloudflare configuration
├── package.json
├── tsconfig.json
└── README.md
```

## Prerequisites
- Cloudflare account with Workers Paid plan
- Node.js 18+
- Wrangler CLI
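The `Env` type imported throughout the example (`./types.js`) is not shown on this page; below is a minimal sketch, assuming the `AGENTS` Durable Object binding and `OPENAI_API_KEY` secret used elsewhere in this example. The `DurableObjectNamespaceLike` interface is a structural stand-in so the snippet stays self-contained; a real project would use `DurableObjectNamespace` from `@cloudflare/workers-types`:

```typescript
// Sketch of src/types.ts (binding names are assumptions based on this example).
// Stand-in for DurableObjectNamespace from @cloudflare/workers-types.
export interface DurableObjectNamespaceLike {
  idFromName(name: string): unknown;
}

export interface Env {
  AGENTS: DurableObjectNamespaceLike;
  OPENAI_API_KEY: string;
}

// Fail fast on missing bindings rather than erroring mid-run
export function assertEnv(env: Partial<Env>): asserts env is Env {
  if (!env.AGENTS) throw new Error('Missing AGENTS Durable Object binding');
  if (!env.OPENAI_API_KEY) throw new Error('Missing OPENAI_API_KEY secret');
}
```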
## Setup

### 1. Install Dependencies

```bash
npm install
```

### 2. Set Up Secrets

```bash
# Create .dev.vars for local development
echo "OPENAI_API_KEY=sk-..." > .dev.vars
```

### 3. Start Development Server

```bash
npm run dev
```

## Usage
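The REST endpoints in this section can also be driven from code. Since the start response (shown under Start Research) returns relative URLs, a small helper can resolve them against the worker origin. The types below mirror that response shape:

```typescript
// Shape of the JSON returned by POST /agent/research-assistant/start
interface StartResponse {
  sessionId: string;
  streamUrl: string;
  websocketUrl: string;
  statusUrl: string;
}

// Resolve the relative URLs in the response against the worker origin
function resolveUrls(origin: string, res: StartResponse) {
  return {
    stream: new URL(res.streamUrl, origin).toString(),
    status: new URL(res.statusUrl, origin).toString(),
  };
}

const urls = resolveUrls('http://localhost:8787', {
  sessionId: 'abc-123',
  streamUrl: '/agent/research-assistant/abc-123/stream',
  websocketUrl: '/agent/research-assistant/abc-123/ws',
  statusUrl: '/agent/research-assistant/abc-123/status',
});
console.log(urls.stream); // → http://localhost:8787/agent/research-assistant/abc-123/stream
```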
### Start Research

```bash
curl -X POST http://localhost:8787/agent/research-assistant/start \
  -H "Content-Type: application/json" \
  -d '{"message": "Research TypeScript best practices"}'
```

Response:

```json
{
  "sessionId": "abc-123-...",
  "streamUrl": "/agent/research-assistant/abc-123-.../stream",
  "websocketUrl": "/agent/research-assistant/abc-123-.../ws",
  "statusUrl": "/agent/research-assistant/abc-123-.../status"
}
```

### Stream Results (SSE)
```bash
curl http://localhost:8787/agent/research-assistant/{sessionId}/stream
```

### Stream Results (WebSocket)
```javascript
const ws = new WebSocket('ws://localhost:8787/agent/research-assistant/{sessionId}/ws');
ws.onmessage = (e) => console.log(JSON.parse(e.data));
```

### Check Status
```bash
curl http://localhost:8787/agent/research-assistant/{sessionId}/status
```

### Get Messages
```bash
# Get all messages
curl http://localhost:8787/agent/research-assistant/{sessionId}/messages

# Paginated (quote the URL so the shell does not interpret ? and &)
curl "http://localhost:8787/agent/research-assistant/{sessionId}/messages?limit=10&cursor=..."
```

### Interrupt & Resume
```bash
# Interrupt
curl -X POST http://localhost:8787/agent/research-assistant/{sessionId}/interrupt \
  -H "Content-Type: application/json" \
  -d '{"reason": "user_requested"}'

# Resume
curl -X POST http://localhost:8787/agent/research-assistant/{sessionId}/resume \
  -H "Content-Type: application/json" \
  -d '{"mode": "continue"}'

# Resume with message
curl -X POST http://localhost:8787/agent/research-assistant/{sessionId}/resume \
  -H "Content-Type: application/json" \
  -d '{"mode": "with_message", "message": "Focus on performance"}'
```

## Key Implementation Details
### AgentServer with Composition API

The `MyAgentServer` uses the composition API for extensibility:
```typescript
// src/my-agent-server.ts
import { createAgentServer, AgentRegistry } from '@helix-agents/runtime-cloudflare';
import { VercelAIAdapter } from '@helix-agents/llm-vercel';
import { ResearchAssistantAgent } from './agent.js';
import type { Env } from './types.js';

const registry = new AgentRegistry();
registry.register(ResearchAssistantAgent);

export const MyAgentServer = createAgentServer<Env>({
  // REQUIRED: LLM adapter factory
  llmAdapter: (env) => new VercelAIAdapter(),

  // REQUIRED: Agent registry
  agents: registry,

  // OPTIONAL: Lifecycle hooks
  hooks: {
    beforeStart: async ({ body, executionState }) => {
      console.log('[MyAgentServer] beforeStart', {
        agentType: body.agentType,
        sessionId: body.sessionId,
      });

      // "Last message wins" - interrupt any existing execution
      if (executionState.isExecuting) {
        await executionState.interrupt('superseded');
      }
    },

    afterStart: async ({ sessionId, agentType }) => {
      console.log('[MyAgentServer] afterStart', { sessionId, agentType });
    },

    onComplete: async ({ sessionId, status, error }) => {
      console.log('[MyAgentServer] onComplete', { sessionId, status, error });
    },
  },

  // OPTIONAL: Custom endpoints
  endpoints: {
    '/health': async (_request, { executionState }) => {
      return Response.json({
        status: 'healthy',
        isExecuting: executionState.isExecuting,
      });
    },
  },
});
```

### Agent Definition with State
```typescript
// src/agent.ts
import { defineAgent } from '@helix-agents/core';
import { z } from 'zod';
import { webSearchTool, takeNotesTool } from './tools/index.js';

export const ResearchStateSchema = z.object({
  topic: z.string(),
  notes: z.array(
    z.object({
      content: z.string(),
      source: z.string().optional(),
    })
  ),
  searchResults: z.array(
    z.object({
      title: z.string(),
      snippet: z.string(),
      url: z.string(),
    })
  ),
});

export const ResearchAssistantAgent = defineAgent({
  name: 'research-assistant',
  description: 'A research assistant that searches and takes notes',
  systemPrompt: `You are a helpful research assistant...`,
  tools: [webSearchTool, takeNotesTool],
  stateSchema: ResearchStateSchema,
  initialState: {
    topic: '',
    notes: [],
    searchResults: [],
  },
  llmConfig: {
    provider: 'openai',
    model: 'gpt-4o-mini',
  },
  maxSteps: 10,
});
```

### Tools with State Updates
```typescript
// src/tools/web-search.ts
import { defineTool } from '@helix-agents/core';
import { z } from 'zod';

export const webSearchTool = defineTool({
  name: 'web_search',
  description: 'Search the web for information',
  inputSchema: z.object({
    query: z.string(),
  }),
  execute: async ({ query }, ctx) => {
    const results = [/* mock results */];

    // Update state using Immer
    ctx.updateState((state) => {
      state.topic = query;
      state.searchResults.push(...results);
    });

    // Emit progress event
    await ctx.emit('search_completed', { query, resultCount: results.length });

    return { results };
  },
});
```

### Worker Routing with FrontendHandler
```typescript
// src/worker.ts
import { MyAgentServer } from './my-agent-server.js';
import { createFrontendHandler } from '@helix-agents/ai-sdk';
import type { Env } from './types.js';

export { MyAgentServer };

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Route: /chat/:sessionId - FrontendHandler handles all chat operations
    const chatMatch = url.pathname.match(/^\/chat\/([^/]+)$/);
    if (chatMatch) {
      const sessionId = chatMatch[1]!;

      // Create handler with DO namespace - no manual client setup needed
      const handler = createFrontendHandler({
        namespace: env.AGENTS,
        agentName: 'research-assistant',
      });

      return handler.handleRequest(request, { sessionId });
    }

    return new Response('Not found', { status: 404 });
  },
};
```

## Frontend Integration
### Using useChat with HelixChatTransport

The `FrontendHandler` returns the AI SDK Data Stream Protocol format, which is compatible with `useChat`:
```tsx
import { HelixChatTransport } from '@helix-agents/ai-sdk/react';
import { useChat } from '@ai-sdk/react';
import { useState, useEffect } from 'react';

function Chat({ sessionId }: { sessionId: string }) {
  const [snapshot, setSnapshot] = useState(null);

  // Load snapshot on mount
  useEffect(() => {
    fetch(`/chat/${sessionId}/snapshot`)
      .then((r) => r.json())
      .then(setSnapshot);
  }, [sessionId]);

  const { messages, input, handleInputChange, handleSubmit, status } = useChat({
    id: sessionId,
    initialMessages: snapshot?.messages ?? [],
    experimental_resume: snapshot?.status === 'active',
    transport: new HelixChatTransport({
      api: `/chat/${sessionId}`,
      resumeFromSequence: snapshot?.streamSequence,
    }),
  });

  return (
    <div>
      {messages.map((msg) => (
        <div key={msg.id}>
          <strong>{msg.role}:</strong> {msg.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit" disabled={status === 'streaming'}>Send</button>
      </form>
    </div>
  );
}
```

### Starting a New Session
```typescript
// Generate session ID on the client
const sessionId = crypto.randomUUID();

// Start by sending the first message - FrontendHandler handles the rest
const response = await fetch(`/chat/${sessionId}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Research AI agents' }),
});
```

### Stream Resumption
`HelixChatTransport` handles stream resumption automatically via `resumeFromSequence`:

```typescript
const transport = new HelixChatTransport({
  api: `/chat/${sessionId}`,
  resumeFromSequence: snapshot.streamSequence,
});
```

## Deployment
### 1. Deploy

```bash
npm run deploy
```

### 2. Set Production Secrets

```bash
wrangler secret put OPENAI_API_KEY
```

### 3. Verify

```bash
curl -X POST https://research-assistant-cloudflare-do.your-subdomain.workers.dev/agent/research-assistant/start \
  -H "Content-Type: application/json" \
  -d '{"message": "Research quantum computing"}'
```

## Comparison with Workflows Version
| Aspect | DO Runtime | Workflows |
|---|---|---|
| Streaming | Unlimited (no subrequest limit) | Limited (1,000 subrequests) |
| State | DO SQLite | D1 database |
| Setup | Simpler (one DO) | More complex (D1 + Workflow + DO) |
| Durability | Checkpoint-based | Automatic, step-level |
| Real-time | Native WebSocket/SSE | Via a separate DO |
See Research Assistant (Cloudflare Workflows) for the Workflows version.
## Related Documentation
- Durable Objects Runtime - Full runtime documentation
- FrontendHandler Guide - Frontend integration with DOs
- Migration Guide - Migrating from DOClient
- Interrupt & Resume - Pause and continue agents
- React Hooks - useAutoResync and other hooks