Step Processing
This document explains how Helix Agents processes LLM step results and plans subsequent actions.
Overview
After each LLM call, the framework must:
- Parse the response (text, tool calls, structured output)
- Determine what actions to take
- Decide whether to continue or stop
This is handled by pure functions in the orchestration module.
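Sketched with the functions documented in the sections below (the full per-runtime loops appear under Runtime Integration), one step of that cycle looks roughly like:
// One step of the cycle, in outline (argument lists elided as in the runtime examples below)
const stepResult = await llmAdapter.generateStep(...);         // call the LLM
const plan = planStepProcessing(stepResult, { outputSchema }); // parse the response + plan actions
// ...append the assistant message, run plan.pendingToolCalls, and stop when
// plan.isTerminal || shouldStopExecution(stepResult, stepCount, config)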
StepResult Types
The LLM adapter returns a StepResult discriminated union:
TextStepResult
Plain text response:
interface TextStepResult {
type: 'text';
content: string; // The text content
thinking?: ThinkingContent; // Reasoning trace
shouldStop: boolean; // LLM-indicated stop
stopReason?: StopReason; // Why it stopped
}

ToolCallsStepResult
One or more tool invocations:
interface ToolCallsStepResult {
type: 'tool_calls';
content?: string; // Optional text with tools
toolCalls: ParsedToolCall[];
subAgentCalls: ParsedSubAgentCall[];
thinking?: ThinkingContent;
stopReason?: StopReason;
}
interface ParsedToolCall {
id: string; // Tool call ID
name: string; // Tool name
arguments: unknown; // Parsed arguments
}
interface ParsedSubAgentCall {
id: string; // Call ID
agentType: string; // Sub-agent type
input: unknown; // Input for sub-agent
}

StructuredOutputStepResult
Direct structured output (no tool call):
interface StructuredOutputStepResult<TOutput> {
type: 'structured_output';
output: TOutput; // Validated output
stopReason?: StopReason;
}

ErrorStepResult
LLM error:
interface ErrorStepResult {
type: 'error';
error: Error;
shouldStop: boolean; // Whether to terminate
stopReason?: StopReason;
}
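Because type is the discriminant, a consumer can narrow a StepResult<TOutput> with an ordinary switch. A minimal sketch, assuming the four variants above form the complete union (the describeStep helper is illustrative, not a framework export):
function describeStep<TOutput>(result: StepResult<TOutput>): string {
  switch (result.type) {
    case 'text':
      return `text (${result.content.length} chars)`;
    case 'tool_calls':
      return `tool calls: ${result.toolCalls.map((c) => c.name).join(', ')}`;
    case 'structured_output':
      return 'structured output';
    case 'error':
      return `error: ${result.error.message}`;
    default:
      // Unreachable if the variants above are exhaustive
      throw new Error('Unknown step result type');
  }
}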
planStepProcessing

The main function for analyzing step results:
function planStepProcessing<TOutput>(
stepResult: StepResult<TOutput>,
options?: PlanStepProcessingOptions<TOutput>
): StepProcessingPlan<TOutput>;

Return Value
interface StepProcessingPlan<TOutput> {
// Data for creating assistant message (null for structured_output)
assistantMessagePlan: AssistantMessagePlan | null;
// Tools to execute (excludes __finish__)
pendingToolCalls: ParsedToolCall[];
// Sub-agents to invoke
pendingSubAgentCalls: ParsedSubAgentCall[];
// Status update to apply (null if no change)
statusUpdate: StatusUpdatePlan | null;
// Whether execution should stop
isTerminal: boolean;
// Parsed output if __finish__ was called
output?: TOutput;
// Stop reason for logging/debugging
stopReason?: StopReason;
}

Processing Flow
Text Response
const stepResult = { type: 'text', content: 'Hello!', shouldStop: false };
const plan = planStepProcessing(stepResult);
// Result:
// {
// assistantMessagePlan: { content: 'Hello!', toolCalls: [], ... },
// pendingToolCalls: [],
// pendingSubAgentCalls: [],
// statusUpdate: null,
// isTerminal: false,
// }

Tool Calls
const stepResult = {
type: 'tool_calls',
toolCalls: [
{ id: 'tc1', name: 'search', arguments: { query: 'test' } },
{ id: 'tc2', name: 'fetch', arguments: { url: 'https://...' } },
],
subAgentCalls: [],
};
const plan = planStepProcessing(stepResult);
// Result:
// {
// assistantMessagePlan: { toolCalls: [...], ... },
// pendingToolCalls: [
// { id: 'tc1', name: 'search', ... },
// { id: 'tc2', name: 'fetch', ... },
// ],
// pendingSubAgentCalls: [],
// statusUpdate: null,
// isTerminal: false,
// }

__finish__ Tool Call
The __finish__ tool is special—it signals completion:
const stepResult = {
type: 'tool_calls',
toolCalls: [{ id: 'tc1', name: '__finish__', arguments: { result: 'done' } }],
subAgentCalls: [],
};
const plan = planStepProcessing(stepResult, {
outputSchema: z.object({ result: z.string() }),
});
// Result:
// {
// assistantMessagePlan: { toolCalls: [...], ... }, // Includes __finish__ in history
// pendingToolCalls: [], // Empty! __finish__ is not executed
// pendingSubAgentCalls: [],
// statusUpdate: { status: 'completed', output: { result: 'done' } },
// isTerminal: true,
// output: { result: 'done' },
// }

Structured Output
Direct structured output (no __finish__ tool):
const stepResult = {
type: 'structured_output',
output: { result: 'done' },
};
const plan = planStepProcessing(stepResult);
// Result:
// {
// assistantMessagePlan: null, // No assistant message
// pendingToolCalls: [],
// pendingSubAgentCalls: [],
// statusUpdate: { status: 'completed', output: { result: 'done' } },
// isTerminal: true,
// output: { result: 'done' },
// }

Error
const stepResult = {
type: 'error',
error: new Error('Rate limited'),
shouldStop: true,
};
const plan = planStepProcessing(stepResult);
// Result:
// {
// assistantMessagePlan: null,
// pendingToolCalls: [],
// pendingSubAgentCalls: [],
// statusUpdate: { status: 'failed', error: 'Rate limited' },
// isTerminal: true,
// }

Stop Condition Checking
shouldStopExecution
Determines if the agent should stop:
function shouldStopExecution<TOutput>(
stepResult: StepResult<TOutput>,
stepCount: number,
config: StopConfig<TOutput>
): boolean;
interface StopConfig<TOutput> {
maxSteps?: number;
stopWhen?: (result: StepResult<TOutput>) => boolean;
}

Stop Conditions (Priority Order)
- Structured output - Always terminal
- Error with shouldStop - Terminal error
- Text with shouldStop - LLM indicated stop
- Max steps exceeded - Safety limit
- Custom stopWhen - Application-specific
const shouldStop = shouldStopExecution(stepResult, stepCount, {
maxSteps: 10,
stopWhen: (result) => result.type === 'text' && result.content.includes('DONE'),
});

determineFinalStatus

Maps a step result to a final status:
function determineFinalStatus<TOutput>(stepResult: StepResult<TOutput>): 'completed' | 'failed';

Error stop reasons cause failure:
- max_tokens → failed
- content_filter → failed
- refusal → failed
- error → failed
Normal completions succeed:
- end_turn → completed
- stop_sequence → completed
- tool_use → completed
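For example, following the mapping above:
determineFinalStatus({ type: 'text', content: 'All set.', shouldStop: true, stopReason: 'end_turn' });
// => 'completed'
determineFinalStatus({ type: 'error', error: new Error('truncated'), shouldStop: true, stopReason: 'max_tokens' });
// => 'failed'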
Message Building
createAssistantMessage
Creates the assistant message for history:
function createAssistantMessage(input: AssistantMessagePlan): AssistantMessage;
interface AssistantMessagePlan {
content?: string;
toolCalls: ParsedToolCall[];
subAgentCalls: ParsedSubAgentCall[];
thinking?: ThinkingContent;
}

Sub-agent calls are stored with the subagent__ prefix:
// Input
{
toolCalls: [{ id: 't1', name: 'search', arguments: {} }],
subAgentCalls: [{ id: 's1', agentType: 'summarizer', input: {} }],
}
// Output message.toolCalls
[
{ id: 't1', name: 'search', arguments: {} },
{ id: 's1', name: 'subagent__summarizer', arguments: {} },
]
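Calling the function directly looks like this (a sketch; the role field on the returned message is assumed to be 'assistant', by analogy with the role: 'tool' results shown below):
const msg = createAssistantMessage({
  content: 'Searching, then delegating a summary.',
  toolCalls: [{ id: 't1', name: 'search', arguments: { query: 'helix' } }],
  subAgentCalls: [{ id: 's1', agentType: 'summarizer', input: { topic: 'helix' } }],
});
// msg.toolCalls[1].name === 'subagent__summarizer'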
createToolResultMessage

Creates tool result messages:
function createToolResultMessage(input: ToolResultInput): ToolResultMessage;
interface ToolResultInput {
toolCallId: string;
toolName: string;
result?: unknown;
success: boolean;
error?: string;
}

Result is JSON-stringified:
createToolResultMessage({
toolCallId: 'tc1',
toolName: 'search',
result: { items: ['a', 'b'] },
success: true,
});
// Output
{
role: 'tool',
toolCallId: 'tc1',
toolName: 'search',
content: '{"items":["a","b"]}',
}

createSubAgentResultMessage

Same as createToolResultMessage, but the tool name carries the subagent__ prefix:
createSubAgentResultMessage({
toolCallId: 's1',
agentType: 'summarizer',
result: { summary: '...' },
success: true,
});
// Output
{
role: 'tool',
toolCallId: 's1',
toolName: 'subagent__summarizer',
content: '{"summary":"..."}',
}

buildMessagesForLLM
Prepares messages for LLM calls:
function buildMessagesForLLM<TState>(
messages: Message[],
systemPrompt: string | ((state: TState) => string),
customState: TState
): Message[];

Resolves dynamic prompts and prepends the system message:
const messages = buildMessagesForLLM(
state.messages,
(state) => `You have ${state.notes.length} notes.`,
state.customState
);
// Prepends:
// { role: 'system', content: 'You have 5 notes.' }
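The signature also accepts a plain string prompt; presumably it is prepended as-is (a sketch):
const messages = buildMessagesForLLM(state.messages, 'You are a research assistant.', state.customState);
// Prepends: { role: 'system', content: 'You are a research assistant.' }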
Runtime Integration

JS Runtime
// In JSAgentExecutor
while (state.status === 'running') {
const messages = buildMessagesForLLM(...);
const stepResult = await llmAdapter.generateStep(...);
const plan = planStepProcessing(stepResult, { outputSchema });
if (plan.assistantMessagePlan) {
state.messages.push(createAssistantMessage(plan.assistantMessagePlan));
}
for (const toolCall of plan.pendingToolCalls) {
// Execute tool, create result message
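// e.g. (a sketch; toolRegistry and execute are illustrative, not framework exports):
// const result = await toolRegistry[toolCall.name].execute(toolCall.arguments);
// state.messages.push(createToolResultMessage({
//   toolCallId: toolCall.id,
//   toolName: toolCall.name,
//   result,
//   success: true,
// }));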
}
if (plan.statusUpdate) {
state.status = plan.statusUpdate.status;
state.output = plan.statusUpdate.output;
}
if (plan.isTerminal || shouldStopExecution(stepResult, stepCount, config)) {
break;
}
}

Temporal Runtime

The same functions are used inside activities:
// In activity
export async function executeAgentStep(input) {
const stepResult = await llmAdapter.generateStep(...);
const plan = planStepProcessing(stepResult, { outputSchema });
// Return plan for workflow to process
return {
assistantMessage: plan.assistantMessagePlan
? createAssistantMessage(plan.assistantMessagePlan)
: null,
pendingToolCalls: plan.pendingToolCalls,
statusUpdate: plan.statusUpdate,
isTerminal: plan.isTerminal,
};
}

Testing
import { planStepProcessing, shouldStopExecution } from '@helix-agents/core';
describe('planStepProcessing', () => {
it('detects __finish__ tool', () => {
const plan = planStepProcessing({
type: 'tool_calls',
toolCalls: [{ id: 't1', name: '__finish__', arguments: { done: true } }],
subAgentCalls: [],
});
expect(plan.isTerminal).toBe(true);
expect(plan.pendingToolCalls).toHaveLength(0);
expect(plan.output).toEqual({ done: true });
});
it('excludes __finish__ from pending tools', () => {
const plan = planStepProcessing({
type: 'tool_calls',
toolCalls: [
{ id: 't1', name: 'search', arguments: {} },
{ id: 't2', name: '__finish__', arguments: {} },
],
subAgentCalls: [],
});
expect(plan.pendingToolCalls).toHaveLength(0);
});
});
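A companion test for shouldStopExecution, sketched from the stopWhen example earlier on this page:
describe('shouldStopExecution', () => {
  it('stops when the custom stopWhen predicate matches', () => {
    const shouldStop = shouldStopExecution(
      { type: 'text', content: 'Task DONE', shouldStop: false },
      1,
      { stopWhen: (result) => result.type === 'text' && result.content.includes('DONE') }
    );
    expect(shouldStop).toBe(true);
  });
});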