page.run() — Direct Tool Invocation
Invoke a Donobu tool by name with explicit parameters, bypassing AI orchestration entirely for deterministic, zero-cost tool calls.
`page.run` invokes a Donobu tool by name with explicit parameters, bypassing the AI orchestration layer entirely. Use it when you know exactly which action should execute and need to guarantee it runs without AI decision-making.
Signature
```ts
page.run(
  toolName: string,
  toolParams?: Record<string, any>,
  options?: {
    gptClient?: GptClient | LanguageModel;
  }
): Promise<ToolCallResult>
```
Parameters
| Parameter | Type | Description |
|---|---|---|
| `toolName` | `string` | Name of the tool to invoke. See the Available Tools Reference. |
| `toolParams` | `object` | Parameters to pass to the tool. The shape depends on the tool. |
| `options.gptClient` | `GptClient \| LanguageModel` | Overrides the AI provider. Relevant only for tools that themselves call the AI, such as `assert` and `analyzePageText`. |
Return value: ToolCallResult
```ts
type ToolCallResult = {
  isSuccessful: boolean;
  forLlm: string;     // Human/AI-readable result description
  metadata: unknown;  // Tool-specific structured result
};
```
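Because `metadata` is typed as `unknown`, it helps to narrow it to an expected shape before use. Below is a minimal sketch of such a narrowing helper; the helper and the `tenantId` shape are illustrative, not part of the Donobu API:

```typescript
type ToolCallResult = {
  isSuccessful: boolean;
  forLlm: string;
  metadata: unknown;
};

// Narrow the untyped metadata to an expected shape, failing loudly otherwise.
function metadataAs<T>(
  result: ToolCallResult,
  check: (m: unknown) => m is T,
): T {
  if (!check(result.metadata)) {
    throw new Error(`Unexpected metadata shape: ${result.forLlm}`);
  }
  return result.metadata;
}

// Example type guard for a hypothetical tool returning { tenantId: string }.
const hasTenantId = (m: unknown): m is { tenantId: string } =>
  typeof m === 'object' &&
  m !== null &&
  typeof (m as { tenantId?: unknown }).tenantId === 'string';
```

With a guard like this, `metadataAs(result, hasTenantId).tenantId` is fully typed instead of requiring a bare cast.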
When to prefer page.run over page.ai
| Use page.run when… | Use page.ai when… |
|---|---|
| You know exactly which tool should execute | You want the AI to figure out the right sequence of actions |
| You want deterministic execution with zero AI cost | You are describing a goal, not a specific action |
| You are invoking a custom tool that is not part of an AI flow | The flow involves multiple steps or decision points |
| You need the raw `ToolCallResult` metadata | You just need the flow to succeed |
Examples
Inspect an assertion result directly
page.run lets you capture the structured ToolCallResult from an assertion — useful when you want to branch on the result rather than throw on failure:
```ts
const result = await page.run('assert', {
  assertionToTestFor: 'The confirmation banner is visible and contains an order number',
});

if (!result.isSuccessful) {
  console.log('Assertion failed:', result.forLlm);
}
```
Extract page data with structured output
Use analyzePageText to collect information from a page and capture the AI's analysis as a string:
```ts
const result = await page.run('analyzePageText', {
  analysisToRun: 'Extract all product names and their prices',
  additionalRelevantContext: 'The page shows a product listing grid',
});

console.log(result.forLlm); // "Product A: $10.00, Product B: $25.00, ..."
```
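If you need structured values rather than the raw string, you can parse `forLlm` yourself. Here is a sketch assuming the comma-separated `Name: $price` format shown above; the exact format of `forLlm` is not guaranteed, so prefer `metadata` whenever a tool provides structured output:

```typescript
// Parse a string like "Product A: $10.00, Product B: $25.00" into pairs.
function parsePrices(forLlm: string): Array<{ name: string; price: number }> {
  return forLlm
    .split(',')
    .map((entry) => entry.trim().match(/^(.+):\s*\$(\d+(?:\.\d+)?)$/))
    .filter((m): m is RegExpMatchArray => m !== null)
    .map((m) => ({ name: m[1], price: Number(m[2]) }));
}
```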
Invoke a custom tool
```ts
const result = await page.run('syncSsoSession', { tenant: 'acme' });

// metadata is typed as unknown, so cast it to the tool's result shape.
expect((result.metadata as { tenantId: string }).tenantId).toBe('acme');
```
See Custom Tools & Persistence Plugins for how to create custom tools.
Flow log recording
page.run calls are recorded in the flow log exactly like AI-invoked tool calls. They are visible in the test report's test-flow-metadata.json attachment and in Donobu Studio's step timeline, making it easy to trace what happened during a test regardless of whether actions were AI-driven or manually invoked.
Error handling
page.run throws ToolCallFailedException if the tool reports isSuccessful: false:
```ts
import { ToolCallFailedException } from 'donobu';

try {
  await page.run('click', { element: '#non-existent-button' });
} catch (e) {
  if (e instanceof ToolCallFailedException) {
    console.log('Tool failed:', e.message);
  }
}
```
Tool parameter shapes
Tool parameters vary by tool. Use TypeScript autocompletion or refer to the Available Tools Reference to find the accepted parameters for each tool. All tools accept an optional rationale string that is recorded in the flow log for debugging purposes.
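For example, a `rationale` passed alongside a tool's own parameters is recorded with the call. The sketch below uses a minimal stub in place of a real Donobu `page` so the call shape is visible in isolation; the stub and the `#submit-order` selector are illustrative only:

```typescript
type ToolCallResult = { isSuccessful: boolean; forLlm: string; metadata: unknown };

// Stub standing in for a Donobu page: it just records calls.
const recorded: Array<{ toolName: string; toolParams?: Record<string, any> }> = [];
const page = {
  run: async (
    toolName: string,
    toolParams?: Record<string, any>,
  ): Promise<ToolCallResult> => {
    recorded.push({ toolName, toolParams });
    return { isSuccessful: true, forLlm: 'clicked', metadata: {} };
  },
};

async function main() {
  await page.run('click', {
    element: '#submit-order', // hypothetical selector
    rationale: 'Submit the order to reach the confirmation page',
  });
}
```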