@helix-agents/llm-vercel

Vercel AI SDK adapter for the Helix Agents framework. Provides LLM integration using the Vercel AI SDK's streamText function.

Installation

bash
npm install @helix-agents/llm-vercel ai @ai-sdk/openai

VercelAIAdapter

Main adapter class implementing the LLMAdapter interface.

typescript
import { VercelAIAdapter } from '@helix-agents/llm-vercel';

const llmAdapter = new VercelAIAdapter();

generateStep

Generate a single agent step with the LLM.

typescript
const result = await llmAdapter.generateStep({
  messages,      // Conversation history
  tools,         // Available tools
  llmConfig,     // Model configuration
  outputSchema,  // Optional: for structured output
}, {
  // Optional callbacks for streaming
  onTextDelta: async (delta) => { ... },
  onThinking: async (content, isComplete) => { ... },
  onToolStart: async (id, name, args) => { ... },
  onToolEnd: async (id, result) => { ... },
});

Parameters:

typescript
interface LLMGenerateInput<TOutput> {
  messages: Message[];
  tools: LLMTool[];
  llmConfig: LLMConfig;
  outputSchema?: ZodType<TOutput>;
}

interface LLMStreamCallbacks {
  onTextDelta?: (delta: string) => Promise<void>;
  onThinking?: (content: string, isComplete: boolean) => Promise<void>;
  onToolStart?: (id: string, name: string, args: unknown) => Promise<void>;
  onToolEnd?: (id: string, result: unknown) => Promise<void>;
}

Returns: StepResult<TOutput>
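
For example, a rough sketch of requesting structured output by passing a Zod schema as outputSchema. The message literal and the result typing shown below are assumptions; the Message and StepResult types are defined in @helix-agents/core.

typescript
import { z } from 'zod';
import { openai } from '@ai-sdk/openai';
import { VercelAIAdapter } from '@helix-agents/llm-vercel';

const llmAdapter = new VercelAIAdapter();

// Hypothetical schema, purely for illustration.
const WeatherReport = z.object({
  city: z.string(),
  temperatureC: z.number(),
});

const result = await llmAdapter.generateStep({
  // Assumes Message follows the usual role/content shape from @helix-agents/core.
  messages: [{ role: 'user', content: 'Report the weather in Paris.' }],
  tools: [],
  llmConfig: { model: openai('gpt-4o') },
  outputSchema: WeatherReport,
});

// result is typed as StepResult<{ city: string; temperatureC: number }>;
// see @helix-agents/core for the exact StepResult fields.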

formatAssistantMessage

Format an assistant message plan into the internal Message format.

typescript
const message = llmAdapter.formatAssistantMessage({
  content: 'Hello!',
  toolCalls: [],
  subAgentCalls: [],
  thinking: { type: 'text', text: 'thinking...' },
});

LLMConfig

Model configuration passed to the adapter.

typescript
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

const config: LLMConfig = {
  // Required: Model from AI SDK
  model: openai('gpt-4o'),

  // Optional: Temperature (0-2)
  temperature: 0.7,

  // Optional: Max output tokens
  maxOutputTokens: 4096,

  // Optional: Prompt caching
  caching: 'auto', // Automatic provider-specific cache optimization

  // Optional: Provider-specific options
  providerOptions: {
    // For OpenAI o-series models
    openai: {
      reasoningEffort: 'medium',
    },
    // For Anthropic with extended thinking
    anthropic: {
      thinking: {
        type: 'enabled',
        budgetTokens: 10000,
      },
    },
  },
};

Supported Providers

Any provider supported by the Vercel AI SDK:

typescript
// OpenAI
import { openai } from '@ai-sdk/openai';
const model = openai('gpt-4o');

// Anthropic
import { anthropic } from '@ai-sdk/anthropic';
const model = anthropic('claude-sonnet-4-20250514');

// Google
import { google } from '@ai-sdk/google';
const model = google('gemini-1.5-pro');

// Azure OpenAI
import { azure } from '@ai-sdk/azure';
const model = azure('gpt-4');

// Cohere
import { cohere } from '@ai-sdk/cohere';
const model = cohere('command-r-plus');

Thinking/Reasoning Support

Anthropic Extended Thinking

typescript
const agent = defineAgent({
  llmConfig: {
    model: anthropic('claude-sonnet-4-20250514'),
    providerOptions: {
      anthropic: {
        thinking: {
          type: 'enabled',
          budgetTokens: 10000,
        },
      },
    },
  },
});

Thinking content is:

  • Streamed via the onThinking callback (see the sketch below)
  • Included in StepResult.thinking
  • Stored in AssistantMessage.thinking
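
A minimal sketch of wiring the onThinking callback, reusing the messages, tools, and llmConfig placeholders from the generateStep example above. Whether content arrives as deltas or accumulated text is left to the adapter, so the logging here is illustrative.

typescript
const result = await llmAdapter.generateStep(
  { messages, tools, llmConfig },
  {
    // Receives thinking content as it streams; isComplete marks the final piece.
    onThinking: async (content, isComplete) => {
      process.stdout.write(content);
      if (isComplete) {
        process.stdout.write('\n[thinking complete]\n');
      }
    },
    // Regular answer text streams separately.
    onTextDelta: async (delta) => {
      process.stdout.write(delta);
    },
  },
);

// The reasoning is also available afterwards on result.thinking
// and on the stored AssistantMessage.thinking.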

OpenAI o-series Reasoning

typescript
const agent = defineAgent({
  llmConfig: {
    model: openai('o1'),
    providerOptions: {
      openai: {
        reasoningEffort: 'high', // 'low' | 'medium' | 'high'
      },
    },
  },
});

Chunk Mapping Utilities

Convert Vercel AI SDK chunks to Helix stream chunks.

typescript
import {
  mapVercelChunkToStreamChunk,
  isTextContentChunk,
  isToolChunk,
  isCompletionChunk,
} from '@helix-agents/llm-vercel';

// Check chunk type
if (isTextContentChunk(vercelChunk)) {
  const helixChunk = mapVercelChunkToStreamChunk(vercelChunk, {
    agentId: 'run-123',
    timestamp: Date.now(),
  });
}
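
In practice, the Vercel chunks typically come from a streamText fullStream. A hedged sketch of the full loop: the streamText call and fullStream iteration are standard Vercel AI SDK usage, while the chunk guards and mapper come from this package.

typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import {
  mapVercelChunkToStreamChunk,
  isTextContentChunk,
  isToolChunk,
} from '@helix-agents/llm-vercel';

const stream = streamText({
  model: openai('gpt-4o'),
  prompt: 'Say hello.',
});

for await (const vercelChunk of stream.fullStream) {
  // Map only the chunk kinds Helix cares about, attaching run metadata.
  if (isTextContentChunk(vercelChunk) || isToolChunk(vercelChunk)) {
    const helixChunk = mapVercelChunkToStreamChunk(vercelChunk, {
      agentId: 'run-123',
      timestamp: Date.now(),
    });
    // ...forward helixChunk to a stream manager, logger, etc.
  }
}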

Usage Example

typescript
import { VercelAIAdapter } from '@helix-agents/llm-vercel';
import { JSAgentExecutor } from '@helix-agents/runtime-js';
import { InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/store-memory';
import { defineAgent, defineTool } from '@helix-agents/core';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Define agent
const MyAgent = defineAgent({
  name: 'my-agent',
  systemPrompt: 'You are a helpful assistant.',
  outputSchema: z.object({ response: z.string() }),
  tools: [
    defineTool({
      name: 'search',
      description: 'Search for information',
      inputSchema: z.object({ query: z.string() }),
      outputSchema: z.object({ results: z.array(z.string()) }),
      execute: async ({ query }) => ({ results: [`Result for: ${query}`] }),
    }),
  ],
  llmConfig: {
    model: openai('gpt-4o'),
    temperature: 0.7,
    maxOutputTokens: 4096,
  },
});

// Create executor with Vercel adapter
const executor = new JSAgentExecutor({
  stateStore: new InMemoryStateStore(),
  streamManager: new InMemoryStreamManager(),
  llmAdapter: new VercelAIAdapter(),
});

// Execute
const handle = await executor.execute(MyAgent, 'Search for TypeScript tutorials');
const result = await handle.result();

Error Handling

The adapter classifies Vercel AI SDK errors into typed HelixError instances.

mapVercelError

Convert Vercel AI SDK errors to typed HelixError:

typescript
import { mapVercelError, mapStatusCodeToErrorCode } from '@helix-agents/llm-vercel';

const helixError = mapVercelError(vercelError);
// Returns HelixError with code, category, retryable, statusCode

Handles APICallError (mapping its status code), RetryError (extracting the last underlying error), and generic errors.
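
For example, a sketch of classifying a failed call and branching on retryability; the retry policy itself is illustrative, and only the code and retryable fields listed above are relied on.

typescript
import { mapVercelError } from '@helix-agents/llm-vercel';

try {
  await llmAdapter.generateStep({ messages, tools, llmConfig });
} catch (err) {
  const helixError = mapVercelError(err);

  if (helixError.retryable) {
    // e.g. provider_rate_limited or provider_overloaded: back off and retry.
    console.warn(`Retryable LLM error (${helixError.code}); retrying...`);
  } else {
    // e.g. provider_auth_error or provider_invalid_request: fail fast.
    throw helixError;
  }
}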

mapStatusCodeToErrorCode

Map HTTP status codes to error codes:

typescript
import { mapStatusCodeToErrorCode } from '@helix-agents/llm-vercel';

mapStatusCodeToErrorCode(429); // 'provider_rate_limited'
mapStatusCodeToErrorCode(503); // 'provider_overloaded'
mapStatusCodeToErrorCode(401); // 'provider_auth_error'

Status Code    Error Code
401, 403       provider_auth_error
429            provider_rate_limited
408            provider_timeout
400, 422       provider_invalid_request
503, 529       provider_overloaded
Other 5xx      provider_error

Error Flow

When the LLM adapter encounters an error, the runtime's onError callback receives the classified HelixError and writes an ErrorChunk to the stream with code and recoverable fields. See the Error Handling Guide for the complete flow.
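
A sketch of what a stream consumer might do with such a chunk. Only the code and recoverable fields are documented here; the 'error' type tag and the overall chunk shape below are assumptions for illustration.

typescript
// Hypothetical consumer-side handling; the chunk shape is an assumption.
interface ErrorChunkLike {
  type: string;
  code: string;
  recoverable: boolean;
}

function handleErrorChunk(chunk: ErrorChunkLike): void {
  if (chunk.type !== 'error') return;

  if (chunk.recoverable) {
    // The run can continue; log and keep consuming the stream.
    console.warn(`Recoverable agent error: ${chunk.code}`);
  } else {
    // Fatal for this run; surface it to the caller.
    throw new Error(`Agent run failed: ${chunk.code}`);
  }
}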

Stop Reason Mapping

The adapter normalizes provider finish reasons to Helix StopReason values:

Provider Reason    Helix StopReason
stop               end_turn
end_turn           end_turn
tool-calls         tool_use
tool_use           tool_use
length             max_tokens
max_tokens         max_tokens
content-filter     content_filter
Other              unknown
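
A caller can branch on the normalized value; assuming the StepResult exposes it as stopReason (the exact field name is defined in @helix-agents/core):

typescript
const result = await llmAdapter.generateStep({ messages, tools, llmConfig });

// stopReason is assumed here to be the normalized StopReason on StepResult.
switch (result.stopReason) {
  case 'tool_use':
    // The model requested tools; execute them and run another step.
    break;
  case 'max_tokens':
    console.warn('Output was truncated; consider raising maxOutputTokens.');
    break;
  case 'content_filter':
    console.warn('Output was blocked by the provider content filter.');
    break;
  case 'end_turn':
  default:
    // Normal completion.
    break;
}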
