JavaScript Runtime
The JavaScript runtime (@helix-agents/runtime-js) executes agents in-process within your Node.js application. It's the simplest runtime to set up and ideal for development, testing, and simple deployments.
When to Use
Good fit:
- Local development and testing
- Prototyping and experimentation
- Single-process deployments
- Short-lived agent executions (< 30 minutes)
- Serverless functions (Lambda, Cloud Functions)
Not ideal for:
- Long-running agents that may outlive the process
- Production workloads requiring crash recovery
- Multi-process distributed systems
Installation
npm install @helix-agents/runtime-js @helix-agents/store-memory
Or use the SDK, which bundles everything:
npm install @helix-agents/sdk
Basic Setup
import { JSAgentExecutor, InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/sdk';
import { VercelAIAdapter } from '@helix-agents/llm-vercel';
// Create stores
const stateStore = new InMemoryStateStore();
const streamManager = new InMemoryStreamManager();
const llmAdapter = new VercelAIAdapter();
// Create executor
const executor = new JSAgentExecutor(stateStore, streamManager, llmAdapter);
Constructor
new JSAgentExecutor(
stateStore: StateStore,
streamManager: StreamManager,
llmAdapter: LLMAdapter
)
Parameters:
- stateStore - Where agent state is persisted (Memory, Redis)
- streamManager - How events are streamed (Memory, Redis)
- llmAdapter - LLM provider adapter (Vercel, Custom)
Executing Agents
Basic Execution
const handle = await executor.execute(MyAgent, 'Research the benefits of TypeScript');
// Stream events
const stream = await handle.stream();
if (stream) {
for await (const chunk of stream) {
if (chunk.type === 'text_delta') {
process.stdout.write(chunk.delta);
}
}
}
// Get result
const result = await handle.result();
console.log(result.output);
With Initial State
const handle = await executor.execute(MyAgent, {
message: 'Continue the research',
state: {
previousFindings: ['Finding 1', 'Finding 2'],
phase: 'analyzing',
},
});
With Options
const handle = await executor.execute(MyAgent, 'Research topic', {
runId: 'custom-run-id', // Custom run ID
continueFrom: 'previous-run-id', // Continue from previous run (see Multi-Turn Conversations)
parentStreamId: 'parent-stream', // For sub-agent streaming
parentAgentId: 'parent-run-id', // Parent agent reference
});
With Conversation History
When you manage your own message history externally:
const handle = await executor.execute(MyAgent, {
message: 'Continue from here',
messages: [
{ role: 'user', content: 'Previous question' },
{ role: 'assistant', content: 'Previous answer' },
],
});
Execution Handle
The handle returned from execute() provides these methods:
stream()
Get an async iterable of stream chunks:
const stream = await handle.stream();
if (stream) {
for await (const chunk of stream) {
console.log(chunk.type, chunk);
}
}
Returns null if streaming is not available.
result()
Wait for and get the final result:
const result = await handle.result();
if (result.status === 'completed') {
console.log('Output:', result.output);
} else {
console.log('Failed:', result.error);
}
abort(reason?)
Cancel the agent execution:
await handle.abort('User requested cancellation');
The agent checks the abort signal between steps and during tool execution.
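For example, you can pair abort() with a caller-side timeout. This is a usage sketch; the five-minute limit is arbitrary and the handle variable comes from the examples above:

```ts
// Abort the run if it has not finished within five minutes (arbitrary limit for illustration)
const timer = setTimeout(() => void handle.abort('Timed out after 5 minutes'), 5 * 60_000);
try {
  const result = await handle.result();
  console.log('Finished with status:', result.status);
} finally {
  clearTimeout(timer); // Clear the timer once the run settles
}
```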
getState()
Get current agent state:
const state = await handle.getState();
console.log('Step count:', state.stepCount);
console.log('Messages:', state.messages.length);
console.log('Custom state:', state.customState);
canResume()
Check if the agent can be resumed:
const { canResume, reason } = await handle.canResume();
if (canResume) {
const newHandle = await handle.resume();
}
resume()
Resume a paused or interrupted agent:
const { canResume, reason } = await handle.canResume();
if (canResume) {
const newHandle = await handle.resume();
const result = await newHandle.result();
}
send()
Continue the conversation with another message. This is syntactic sugar for calling execute() with continueFrom:
// Simple string input (becomes user message)
const handle2 = await handle1.send('Tell me more about that');
const result = await handle2.result();
// Message array input (for advanced use cases)
const handle2 = await handle1.send([
{ role: 'user', content: 'Here is some context' },
{ role: 'user', content: 'Now my actual question' },
]);
// With state override
const handle2 = await handle1.send('Continue', { state: { mood: 'curious' } });
State inheritance: Both string and Message[] inputs inherit state from the source run when no explicit state is provided. Use the state option to override.
Important: send() waits for the current execution to complete before starting the new one. If you need parallel conversations, create separate handles via executor.execute().
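A minimal sketch of that behavior, reusing MyAgent and the executor from the setup above:

```ts
const handle1 = await executor.execute(MyAgent, 'Start a long analysis');

// There is no need to await handle1.result() first; send() waits for the run
// to finish internally, then starts the follow-up turn from its state and history.
const handle2 = await handle1.send('Summarize what you found');
console.log((await handle2.result()).output);
```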
Reconnecting to Runs
Use getHandle() to reconnect to an existing run:
// Get handle for existing run
const handle = await executor.getHandle(MyAgent, 'run-12345');
if (handle) {
// Check if we can resume
const { canResume, reason } = await handle.canResume();
if (canResume) {
// Resume execution
const resumedHandle = await handle.resume();
const result = await resumedHandle.result();
} else {
// Get completed result
const result = await handle.result();
}
}
Multi-Turn Conversations
Enable conversation continuation where each message builds on the previous exchange. There are three approaches:
Using continueFrom
Pass continueFrom to continue from a previous run's state and history:
// First message
const handle1 = await executor.execute(agent, 'Hello, my name is Alice');
await handle1.result();
// Continue the conversation - agent remembers the name
const handle2 = await executor.execute(agent, 'What is my name?', {
continueFrom: handle1.runId,
});
const result = await handle2.result();
// Agent responds: "Your name is Alice"
Using handle.send()
Syntactic sugar for continuation - equivalent to execute() with continueFrom:
const handle1 = await executor.execute(agent, 'Hello, my name is Alice');
await handle1.result();
// Equivalent to execute() with continueFrom
const handle2 = await handle1.send('What is my name?');
const result = await handle2.result();
Using Direct Messages
When you manage your own message history in an external database:
const handle = await executor.execute(agent, {
message: 'What is my name?',
messages: [
{ role: 'user', content: 'Hello, my name is Alice' },
{ role: 'assistant', content: 'Hello Alice! How can I help you today?' },
],
});
This is useful when:
- You store conversation history in your own database
- You want full control over what context the agent sees
- You're building chat features outside the framework's state store
Note: System messages in messages are filtered out and re-added dynamically by the agent.
Behavior Table
Both messages and state have override semantics when combined with continueFrom:
| Input | Messages Source | State Source |
|---|---|---|
| message only | Empty (fresh) | Empty (fresh) |
| message + continueFrom | From continueFrom | From continueFrom |
| message + messages | From messages | Empty (fresh) |
| message + state | Empty (fresh) | From state |
| message + continueFrom + messages | From messages (override) | From continueFrom |
| message + continueFrom + state | From continueFrom | From state (override) |
| All four | From messages (override) | From state (override) |
Key points:
- Each turn gets a new runId for clean separation (debugging, billing, tracing)
- messages overrides history from continueFrom when both are provided
- state overrides state from continueFrom when both are provided
- An invalid continueFrom returns an error if the source run is not found
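For instance, the message + continueFrom + state row might look like this. A sketch only: previousRunId is a placeholder, and it assumes the options object can be passed as the third argument alongside an object input, as in the earlier examples:

```ts
const handle = await executor.execute(MyAgent, {
  message: 'Re-evaluate with the new focus',
  state: { phase: 'analyzing' },   // overrides the state carried over by continueFrom
}, {
  continueFrom: previousRunId,     // conversation history still comes from the previous run
});
```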
Forking Conversations
Since each continuation creates a new run, you can "fork" a conversation:
// User asks two different follow-up questions from the same point
const handle1 = await executor.execute(agent, 'Tell me more', { continueFrom: runId });
const handle2 = await executor.execute(agent, 'Actually, tell me something else', { continueFrom: runId });
// handle1 and handle2 both share history from runId but diverge from there
Execution Flow
Here's how the JS runtime executes an agent:
execute() called
│
├── 1. Initialize state
│ ├── Create run ID and stream ID
│ ├── Parse initial state from schema defaults
│ ├── Add user message
│ └── Save state to store
│
├── 2. Start execution loop (async)
│ │
│ └── While status === 'running':
│ │
│ ├── 3. Build messages
│ │ ├── Add system prompt
│ │ └── Include conversation history
│ │
│ ├── 4. Call LLM
│ │ ├── Stream text deltas
│ │ └── Get tool calls
│ │
│ ├── 5. Process step result
│ │ ├── Check for __finish__ tool
│ │ ├── Extract output if complete
│ │ └── Plan tool executions
│ │
│ ├── 6. Execute tools (parallel)
│ │ ├── Regular tools: execute directly
│ │ └── Sub-agent tools: recursive execute()
│ │
│ ├── 7. Update state
│ │ ├── Add assistant message
│ │ ├── Add tool results
│ │ └── Save to store
│ │
│ └── 8. Check stop conditions
│ ├── maxSteps reached?
│ ├── stopWhen predicate?
│ └── Output produced?
│
└── 9. Return handle immediately
    └── Execution continues in background
Parallel Tool Execution
The JS runtime executes tool calls in parallel:
// If LLM returns multiple tool calls:
// [search('topic A'), search('topic B'), analyze('data')]
// All three execute concurrently
This includes sub-agent calls - multiple sub-agents can run simultaneously.
Parallel state updates:
When parallel tools update state, the runtime uses delta merging:
- Array pushes are accumulated (not overwritten)
- Object properties are merged
- Conflicts are resolved via last-write-wins
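Conceptually, the merge behaves like the sketch below. This is an illustration of the three rules above, not the runtime's actual implementation:

```ts
// Conceptual sketch only: not the framework's internal code.
// Deltas produced by two tools that ran in parallel:
const deltaA = { findings: ['result from tool A'], phase: 'searching' };
const deltaB = { findings: ['result from tool B'], phase: 'analyzing' };

function isPlainObject(v: unknown): v is Record<string, unknown> {
  return typeof v === 'object' && v !== null && !Array.isArray(v);
}

function mergeDeltas(base: Record<string, unknown>, ...deltas: Record<string, unknown>[]) {
  const merged = { ...base };
  for (const delta of deltas) {
    for (const [key, value] of Object.entries(delta)) {
      const current = merged[key];
      if (Array.isArray(current) && Array.isArray(value)) {
        merged[key] = [...current, ...value];   // array pushes accumulate
      } else if (isPlainObject(current) && isPlainObject(value)) {
        merged[key] = { ...current, ...value }; // object properties merge
      } else {
        merged[key] = value;                    // conflicting scalars: last write wins
      }
    }
  }
  return merged;
}

mergeDeltas({ findings: [], phase: 'idle' }, deltaA, deltaB);
// => { findings: ['result from tool A', 'result from tool B'], phase: 'analyzing' }
```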
Sub-Agent Handling
Sub-agents execute recursively within the same process:
// Parent agent calls sub-agent tool
// JS runtime:
// 1. Detects sub-agent tool call
// 2. Creates new state for sub-agent (same streamId)
// 3. Recursively calls runLoop()
// 4. Sub-agent events stream to same stream
// 5. Sub-agent output becomes tool result
Sub-agents share the stream but have isolated state.
Error Handling
Tool Errors
Tool errors are caught and returned to the LLM:
const searchTool = defineTool({
name: 'search',
execute: async (input) => {
throw new Error('API rate limited');
},
});
// LLM sees: "Tool 'search' failed: API rate limited"
// LLM can decide to retry, use different approach, etc.
Execution Errors
Fatal errors fail the agent:
try {
const result = await handle.result();
} catch (error) {
// LLM API failed, state store failed, etc.
}
Check result.status for graceful handling:
const result = await handle.result();
if (result.status === 'failed') {
console.error('Agent failed:', result.error);
}
Limitations
No Crash Recovery
If the process dies, in-flight executions are lost:
// Process starts
const handle = await executor.execute(agent, 'Long task');
// Process crashes here - execution is lost
// After restart, state exists but execution stopped
const reconnected = await executor.getHandle(agent, handle.runId);
// reconnected.canResume() returns true
// But original execution context is gone
Mitigation: Use Redis stores to preserve state, then resume:
// After crash/restart
const handle = await executor.getHandle(agent, savedRunId);
if (handle) {
const { canResume } = await handle.canResume();
if (canResume) {
const resumed = await handle.resume();
// Continues from last saved state
}
}
No Distributed Execution
Everything runs in one process. For distributed execution, use Temporal.
No Per-Tool Timeouts
Tools run without individual timeout enforcement. Add your own:
const toolWithTimeout = defineTool({
  name: 'slow_api',
  execute: async (input, context) => {
    let timer: NodeJS.Timeout | undefined;
    const timeoutPromise = new Promise<never>((_, reject) => {
      timer = setTimeout(() => reject(new Error('Tool timeout')), 30000);
    });
    try {
      // Whichever settles first wins the race
      return await Promise.race([callSlowApi(input), timeoutPromise]);
    } finally {
      clearTimeout(timer); // Always clear the timer so it doesn't keep the process alive
    }
  },
});
Best Practices
1. Use Redis for Production
In-memory stores lose data on restart:
import { RedisStateStore, RedisStreamManager } from '@helix-agents/store-redis';
const executor = new JSAgentExecutor(
new RedisStateStore(redis),
new RedisStreamManager(redis),
llmAdapter
);
2. Handle Abort Signals
Check abort signal in long-running tools:
execute: async (input, context) => {
for (const item of items) {
if (context.abortSignal.aborted) {
throw new Error('Aborted');
}
await processItem(item);
}
};
3. Set Appropriate maxSteps
Prevent runaway agents:
const agent = defineAgent({
maxSteps: 20, // Reasonable limit for your use case
});
4. Monitor Step Count
Track execution progress:
// From the caller, via the execution handle
const state = await handle.getState();
console.log(`Step ${state.stepCount} of ${agent.maxSteps}`);
Next Steps
- Temporal Runtime - For durable, production workloads
- Cloudflare Runtime - For edge deployment
- Storage: Memory - In-memory stores for development
- Storage: Redis - Production-ready stores