# Composable Architecture
Mix and match runtimes, state stores, LLM adapters, and workspace providers: use the pre-built components or build your own execution loop. Sessions, runs, durable human-in-the-loop (HITL), and stateless suspension work across every runtime, so you can swap runtimes, state stores, and LLM providers without changing your agent code.
The branch `omnara/stateless-suspension-redesign` is the v7 release train. v7 reshapes the framework around stateless suspension: the run loop exits at every HITL boundary instead of holding in-memory promises, and resume creates a fresh execution that reads suspension context from the state store. See the v6 → v7 upgrade guide for the full migration walkthrough, including per-package breaking changes, storage migrations (V8/V9 D1, V5 DO, V5 Postgres), and code migration examples.
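The resume semantics can be sketched with a toy, self-contained model. Everything below (the `RunState` shape, the `Map`-backed store, the `runLoop` signature) is an illustrative stand-in, not the real @helix-agents API; it only shows the shape of the idea: persist context and exit at the boundary, rehydrate from the store on resume.

```ts
// Stand-in for a durable state store (Redis, Postgres, D1, ...).
type RunState =
  | { status: 'suspended'; step: number; pendingApproval: string }
  | { status: 'completed'; step: number; output: string };

const stateStore = new Map<string, RunState>();

function runLoop(runId: string): RunState {
  const prior = stateStore.get(runId);
  // Fresh execution: suspension context comes from the store,
  // never from an in-memory promise held across the boundary.
  let step = prior?.status === 'suspended' ? prior.step : 0;
  while (step < 3) {
    step++;
    if (step === 2 && prior?.status !== 'suspended') {
      // HITL boundary: persist context and exit the loop entirely.
      const suspended: RunState = {
        status: 'suspended',
        step,
        pendingApproval: 'delete-files',
      };
      stateStore.set(runId, suspended);
      return suspended;
    }
  }
  const done: RunState = { status: 'completed', step, output: 'ok' };
  stateStore.set(runId, done);
  return done;
}

const first = runLoop('run-1');  // exits at the HITL boundary
const second = runLoop('run-1'); // fresh execution, resumes from stored context
```

Because the second call rebuilds everything it needs from the store, the process that suspended and the process that resumes can be entirely different instances.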
```ts
import { defineAgent, defineTool } from '@helix-agents/core';
import { JSAgentExecutor, InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/sdk';
import { VercelAIAdapter } from '@helix-agents/llm-vercel';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Define a tool
const searchTool = defineTool({
  name: 'search',
  description: 'Search the web for information',
  inputSchema: z.object({
    query: z.string().describe('Search query'),
  }),
  outputSchema: z.object({
    results: z.array(z.string()),
  }),
  execute: async ({ query }) => ({
    results: [`Results for: ${query}`],
  }),
});

// Define an agent
const ResearchAgent = defineAgent({
  name: 'researcher',
  systemPrompt: 'You are a helpful research assistant.',
  tools: [searchTool],
  outputSchema: z.object({
    summary: z.string(),
    sources: z.array(z.string()),
  }),
  llmConfig: {
    model: openai('gpt-4o'),
  },
});

// Create infrastructure and execute
const executor = new JSAgentExecutor(
  new InMemoryStateStore(),
  new InMemoryStreamManager(),
  new VercelAIAdapter()
);
const handle = await executor.execute(ResearchAgent, 'Research AI agents');

// Stream results
for await (const chunk of await handle.stream()) {
  if (chunk.type === 'text_delta') {
    process.stdout.write(chunk.delta);
  }
}

const result = await handle.result();
console.log(result.output);
```

## Core & SDK
| Package | Description |
|---|---|
| @helix-agents/core | Types, interfaces, pure orchestration functions |
| @helix-agents/sdk | Quick-start umbrella (core + memory + JS runtime) |
## Runtimes
| Package | Description |
|---|---|
| @helix-agents/runtime-js | In-process JavaScript execution |
| @helix-agents/runtime-temporal | Durable workflows via Temporal |
| @helix-agents/runtime-cloudflare | Cloudflare DO + Workflows (HITL on both paths in v7) |
| @helix-agents/runtime-dbos | Postgres-backed durable workflows via DBOS |
## State stores
| Package | Description |
|---|---|
| @helix-agents/store-memory | In-memory state (development) |
| @helix-agents/store-redis | Redis state + streams (production) |
| @helix-agents/store-postgres | PostgreSQL state (production, runtime-agnostic) |
| @helix-agents/store-cloudflare | D1 + Durable Objects (Cloudflare deployments) |
## LLM, memory, embeddings
| Package | Description |
|---|---|
| @helix-agents/llm-vercel | Vercel AI SDK adapter |
| @helix-agents/memory | In-process semantic memory (dev) |
| @helix-agents/memory-redis | Redis-backed semantic memory (production) |
| @helix-agents/memory-cloudflare | D1 + Vectorize + Queues memory store |
| @helix-agents/embedding-vercel | Vercel AI SDK embedding adapter |
| @helix-agents/embedding-cloudflare | Cloudflare Workers AI embeddings |
## Frontend, server, observability
| Package | Description |
|---|---|
| @helix-agents/ai-sdk | Frontend integration for the Vercel AI SDK |
| @helix-agents/agent-server | HTTP server for hosting agents remotely (11 v7 routes) |
| @helix-agents/tracing-langfuse | Langfuse tracing integration |
## Workspaces
| Package | Description |
|---|---|
| @helix-agents/workspace-memory | In-memory workspace provider (dev) |
| @helix-agents/workspace-local-bash | Local bash workspace provider |
Helix Agents is built around composability: every major component (runtime, state store, LLM adapter, workspace provider) is an interface. You can use the pre-built implementations, or implement the interfaces yourself. The core package provides pure functions like `runStepIteration()`, `buildMessagesForLLM()`, and `executeCompanionToolDispatch()` that you can compose into your own execution loop. Use the pre-built pieces or build your own: the choice is yours.
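A custom loop built from pure step functions might look roughly like this. The stand-ins below (`buildMessages`, `stepIteration`, and their types) are simplified illustrations of the pattern, not the real `runStepIteration()`/`buildMessagesForLLM()` signatures from @helix-agents/core:

```ts
type Message = { role: 'system' | 'user' | 'assistant'; content: string };
type StepResult = { messages: Message[]; done: boolean };

// Pure function: derive the LLM input from accumulated state.
function buildMessages(systemPrompt: string, history: Message[]): Message[] {
  return [{ role: 'system', content: systemPrompt }, ...history];
}

// Pure function: one step of the loop (a canned "LLM" reply stands in
// for a real model call).
function stepIteration(messages: Message[]): StepResult {
  const reply: Message = {
    role: 'assistant',
    content: `seen ${messages.length} messages`,
  };
  // Drop the system message; keep the conversation plus the new reply.
  return { messages: [...messages.slice(1), reply], done: true };
}

// Your own loop: wire the pure pieces together however you like,
// with your own step budget, persistence, or suspension policy.
function runAgent(systemPrompt: string, input: string): Message[] {
  let history: Message[] = [{ role: 'user', content: input }];
  for (let i = 0; i < 8; i++) {
    const result = stepIteration(buildMessages(systemPrompt, history));
    history = result.messages;
    if (result.done) break;
  }
  return history;
}

const transcript = runAgent('You are helpful.', 'hi');
```

Because each piece is pure, you can test it in isolation and swap in your own policies (retry, budgets, suspension) at the loop level without touching the step logic.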