Remote Agents (Temporal + HTTP)

This example demonstrates a Temporal orchestrator agent delegating to remote specialist agents running on a separate HTTP service. It shows:

  • Cross-runtime orchestration (Temporal orchestrator + JS runtime service)
  • HttpRemoteAgentTransport for HTTP/SSE communication
  • AgentServer with Express for hosting remote agents
  • createRemoteSubAgentTool() for transparent remote delegation

Source Code

The full example is in examples/remote-agents-temporal/.

Architecture

mermaid
graph LR
    Client["Client<br/>(starts workflow)"]
    Worker["Temporal Worker<br/>Orchestrator Agent"]
    Service["Express Service<br/>AgentServer"]
    Researcher["Researcher Agent"]
    Summarizer["Summarizer Agent"]

    Client -->|"Temporal workflow"| Worker
    Worker -->|"HTTP + SSE"| Service
    Service --> Researcher
    Service --> Summarizer

The orchestrator runs on Temporal for durable execution and crash recovery. The researcher and summarizer run on a lightweight Express service using AgentServer. Communication uses HttpRemoteAgentTransport (HTTP for requests, SSE for streaming).
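To make the SSE side of this concrete, here is a minimal sketch of how an SSE stream is framed on the wire: events separated by blank lines, with `event:` and `data:` fields. This is illustrative only; `parseSseChunk` is a hypothetical helper, not the transport's actual parser.

```typescript
// Hypothetical SSE frame parser for exposition; the real
// HttpRemoteAgentTransport ships its own stream handling.
interface SseEvent {
  event: string;
  data: string;
}

function parseSseChunk(chunk: string): SseEvent[] {
  const events: SseEvent[] = [];
  // Events are separated by a blank line; fields are "name: value" pairs.
  for (const block of chunk.split("\n\n")) {
    if (!block.trim()) continue;
    let event = "message"; // SSE default event name
    const data: string[] = [];
    for (const line of block.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) data.push(line.slice(5).trim());
    }
    events.push({ event, data: data.join("\n") });
  }
  return events;
}
```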

Prerequisites

  • Node.js 18+
  • Docker (for Temporal and Redis)
  • OpenAI API key

Project Structure

examples/remote-agents-temporal/
├── src/
│   ├── agents/
│   │   ├── orchestrator.ts  # Parent agent with remote sub-agent tools
│   │   ├── researcher.ts    # Specialist agent (web search + notes)
│   │   └── summarizer.ts    # Specialist agent (pure LLM, no tools)
│   ├── types.ts             # Shared Zod schemas
│   ├── server.ts            # Express server hosting agents
│   ├── workflows.ts         # Temporal workflow
│   ├── activities.ts        # Temporal activities
│   ├── worker.ts            # Temporal worker entry point
│   └── client.ts            # Client that starts the workflow
├── docker-compose.yml
└── package.json

Running the Example

1. Install Dependencies

bash
cd examples/remote-agents-temporal
npm install

2. Set Up Environment

bash
cp .env.example .env
# Edit .env and add your OPENAI_API_KEY

3. Start Infrastructure

bash
npm run docker:up

This starts Temporal on localhost:7233.

4. Start the Remote Agent Service

bash
# Terminal 1
npm run server

The service starts on http://localhost:4000 with two agents:

  • researcher — Searches for information and takes notes
  • summarizer — Summarizes text into key points

5. Start the Temporal Worker

bash
# Terminal 2
npm run worker

6. Run the Client

bash
# Terminal 3
npm run client "benefits of TypeScript"

Key Components

Remote Agent Service

The server uses AgentServer to host specialist agents:

typescript
// src/server.ts
import express from 'express';
import { AgentServer, createHttpAdapter, createExpressAdapter } from '@helix-agents/agent-server';
import { JSAgentExecutor } from '@helix-agents/runtime-js';
import { VercelAIAdapter } from '@helix-agents/llm-vercel';
import { InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/store-memory';
import { ResearcherAgent } from './agents/researcher';
import { SummarizerAgent } from './agents/summarizer';

const stateStore = new InMemoryStateStore();
const streamManager = new InMemoryStreamManager();
const executor = new JSAgentExecutor(stateStore, streamManager, new VercelAIAdapter());

const agentServer = new AgentServer({
  agents: {
    researcher: ResearcherAgent,
    summarizer: SummarizerAgent,
  },
  stateStore,
  streamManager,
  executor,
});

const app = express();
app.use(express.json());
app.use('/', createExpressAdapter(createHttpAdapter(agentServer)));
app.listen(4000);

This exposes 6 endpoints: /start, /resume, /sse, /status, /interrupt, /abort.

Orchestrator Agent

The orchestrator uses createRemoteSubAgentTool to delegate to remote agents:

typescript
// src/agents/orchestrator.ts
import { z } from 'zod';
import { openai } from '@ai-sdk/openai';
import {
  defineAgent,
  createRemoteSubAgentTool,
  HttpRemoteAgentTransport,
} from '@helix-agents/core';
import {
  ResearcherOutputSchema,
  SummarizerOutputSchema,
  OrchestratorOutputSchema,
} from '../types';

const transport = new HttpRemoteAgentTransport({
  url: process.env.REMOTE_AGENT_URL || 'http://localhost:4000',
});

const researcherTool = createRemoteSubAgentTool('researcher', {
  description: 'Delegate research to a remote specialist agent',
  inputSchema: z.object({
    query: z.string().describe('The research query'),
  }),
  outputSchema: ResearcherOutputSchema,
  transport,
  remoteAgentType: 'researcher',
  timeoutMs: 120_000,
});

const summarizerTool = createRemoteSubAgentTool('summarizer', {
  description: 'Delegate summarization to a remote specialist agent',
  inputSchema: z.object({
    text: z.string().describe('The text to summarize'),
  }),
  outputSchema: SummarizerOutputSchema,
  transport,
  remoteAgentType: 'summarizer',
  timeoutMs: 60_000,
});

export const OrchestratorAgent = defineAgent({
  name: 'orchestrator',
  outputSchema: OrchestratorOutputSchema,
  tools: [researcherTool, summarizerTool],
  systemPrompt: `You are a research orchestrator.
1. Use the researcher to gather information
2. Use the summarizer to distill findings
3. Call __finish__ with your final output`,
  llmConfig: { model: openai('gpt-4o-mini') },
  maxSteps: 10,
});
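The `timeoutMs` values above bound how long the orchestrator waits for each remote call. A generic sketch of that kind of guard, racing a promise against a timer, might look like the following (a hypothetical helper for illustration; the transport's actual timeout handling may differ):

```typescript
// Race a remote call against a timer; reject with a descriptive error if the
// call does not settle within timeoutMs. Purely illustrative.
function withTimeout<T>(promise: Promise<T>, timeoutMs: number, label: string): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${timeoutMs}ms`)),
      timeoutMs
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); }
    );
  });
}
```

A longer timeout for the researcher (120s) than the summarizer (60s) reflects that tool-using agents typically take more LLM steps than pure-LLM ones.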

Specialist Agents

The researcher agent uses tools (web search, note-taking):

typescript
// src/agents/researcher.ts
export const ResearcherAgent = defineAgent({
  name: 'researcher',
  stateSchema: ResearcherStateSchema,
  outputSchema: ResearcherOutputSchema,
  tools: [webSearchTool, takeNotesTool],
  systemPrompt: (state) => `You are a research specialist...`,
  llmConfig: { model: openai('gpt-4o-mini') },
  maxSteps: 10,
});

The summarizer is a pure LLM agent (no tools):

typescript
// src/agents/summarizer.ts
export const SummarizerAgent = defineAgent({
  name: 'summarizer',
  outputSchema: SummarizerOutputSchema,
  tools: [],
  systemPrompt: `You are a summarization expert...`,
  llmConfig: { model: openai('gpt-4o-mini') },
  maxSteps: 5,
});

Shared Schemas

Output schemas are shared between the orchestrator and the remote service:

typescript
// src/types.ts
import { z } from 'zod';
export const ResearcherOutputSchema = z.object({
  findings: z.array(
    z.object({
      title: z.string(),
      snippet: z.string(),
      url: z.string(),
    })
  ),
  rawNotes: z.array(z.string()),
});

export const SummarizerOutputSchema = z.object({
  keyPoints: z.array(z.string()),
  summary: z.string(),
});

export const OrchestratorOutputSchema = z.object({
  topic: z.string(),
  researchFindings: z.array(z.string()),
  summary: z.string(),
  sources: z.array(z.string()),
});
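Because both sides import the same Zod schemas, the orchestrator can trust the shape of what comes back over the wire. As a plain-TypeScript sketch of what `SummarizerOutputSchema` enforces (the example project uses Zod; this hand-written guard is only for exposition):

```typescript
// Illustrative runtime guard equivalent to SummarizerOutputSchema's shape.
interface SummarizerOutput {
  keyPoints: string[];
  summary: string;
}

function isSummarizerOutput(value: unknown): value is SummarizerOutput {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    Array.isArray(v.keyPoints) &&
    v.keyPoints.every((p) => typeof p === "string") &&
    typeof v.summary === "string"
  );
}
```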

Execution Flow

  1. The client starts a Temporal workflow for the orchestrator
  2. The orchestrator LLM calls subagent__researcher with a query
3. The Temporal workflow routes the call to a dedicated executeRemoteSubAgentCall activity, which calls POST /start on the remote service and then consumes GET /sse, proxying the stream back with crash recovery
  4. The researcher runs independently (web search, note-taking), returns structured output
  5. The orchestrator LLM calls subagent__summarizer with the findings
  6. The summarizer returns key points and a summary
  7. The orchestrator calls __finish__ with the final structured output
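The steps above can be sketched as a sequential delegation chain with stubbed agents (every function here is a hypothetical stand-in, not the library's API):

```typescript
// Stubbed delegation chain mirroring the execution flow:
// research, then summarize, then assemble the final output.
async function runResearcher(query: string): Promise<string[]> {
  return [`finding about ${query}`]; // stands in for the remote researcher call
}

async function runSummarizer(findings: string[]): Promise<string> {
  return `summary of ${findings.length} finding(s)`; // stands in for the remote summarizer call
}

async function orchestrate(topic: string) {
  const findings = await runResearcher(topic);            // steps 2-4
  const summary = await runSummarizer(findings);          // steps 5-6
  return { topic, researchFindings: findings, summary };  // step 7 (__finish__)
}
```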

Production Considerations

  • Replace InMemoryStateStore with RedisStateStore on the agent service
  • Replace InMemoryStreamManager with RedisStreamManager on the agent service
  • Add authentication headers to the transport
  • Set appropriate timeoutMs values based on expected agent execution times
  • Use Temporal Cloud for production workflow execution

Released under the MIT License.