Goal-oriented guides for accomplishing specific tasks. Assumes basic familiarity with Delfhos. Jump directly to the section that solves your problem.
Connect to a SQL Database
Connect an agent to PostgreSQL, MySQL, or MariaDB. The agent can inspect schemas, run SELECT queries, and execute writes.
schema — inspect table schemas and column definitions
query — execute SELECT queries
write — execute INSERT, UPDATE, DELETE statements
Restrict to read-only
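Independent of Delfhos, the three action categories map directly onto standard SQL operations. A minimal sketch using Python's built-in sqlite3 as a stand-in for a real PostgreSQL/MySQL connection (the table and data are invented for illustration):

```python
import sqlite3

# In-memory database standing in for a real PostgreSQL/MySQL connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# write — INSERT, UPDATE, DELETE statements
conn.execute("INSERT INTO users (name) VALUES ('ada')")

# schema — inspect table and column definitions
columns = [row[1] for row in conn.execute("PRAGMA table_info(users)")]

# query — read-only SELECT
names = [row[0] for row in conn.execute("SELECT name FROM users")]
```

Restricting to read-only then amounts to exposing only the schema and query categories to the agent.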
Connect to Google Sheets
Read and write spreadsheet data, create new sheets, apply formatting, and build charts.
Connect to Google Drive
Search, upload, share, and manage files. Use allow and confirm to scope permissions precisely.
Connect to Google Docs & Calendar
Create and edit documents, manage calendar events, and combine these with web search in a single agent.
Connect to any REST API
APITool turns any OpenAPI 3.x specification into a set of callable agent actions. The compiler reads the spec and registers every endpoint automatically.
From a public spec URL
From a local spec with auth headers
Fixed path parameters (multi-tenant APIs)
Use path_params to inject fixed values into URL path templates. The values are URL-encoded and substituted automatically — the LLM never sees or passes them.
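The actual APITool internals aren't shown here, but the substitution mechanism can be sketched in plain Python: each fixed value is URL-encoded and spliced into its `{placeholder}` before the model ever sees the URL (the function name `fill_path` is hypothetical):

```python
from urllib.parse import quote

def fill_path(template: str, path_params: dict) -> str:
    # Each fixed value is URL-encoded and substituted into its
    # {placeholder}; the LLM never sees or passes these values.
    for key, value in path_params.items():
        template = template.replace("{" + key + "}", quote(str(value), safe=""))
    return template

url = fill_path("/tenants/{tenant_id}/users/{user_id}", {"tenant_id": "acme corp"})
# {user_id} remains for the non-fixed, model-supplied parameter
```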
Discover available endpoints
Cache compiled specs for large APIs
LLM enrichment
Use Local or Custom OpenAI-compatible Models
Use LLMConfig to configure native providers and any OpenAI-compatible custom endpoint — local models, open-source providers, or enterprise servers.
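LLMConfig's exact fields aren't reproduced here, but every OpenAI-compatible endpoint boils down to the same three ingredients. A hypothetical stand-in (field names are assumptions, not the real LLMConfig signature):

```python
from dataclasses import dataclass

@dataclass
class EndpointConfig:
    """Hypothetical stand-in for LLMConfig: the fields any
    OpenAI-compatible endpoint needs."""
    base_url: str
    model: str
    api_key: str = "not-needed"  # local servers often ignore the key

# A local Ollama-style server and a cloud provider, side by side.
local = EndpointConfig(base_url="http://localhost:11434/v1", model="llama3")
cloud = EndpointConfig(base_url="https://api.openai.com/v1", model="gpt-4o",
                       api_key="sk-...")
```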
Mix local and cloud in a single agent
Use Multiple LLMs for Different Tasks
Save money by routing different tasks to different models. Use a fast, cheap model for tool selection and a powerful one for code generation.
Quick recipe — cost optimization
With specialized overrides
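The routing idea itself is small enough to sketch in plain Python, independent of how Delfhos wires it up (the task kinds and model names below are invented for illustration):

```python
def pick_model(task_kind: str) -> str:
    """Hypothetical routing table: a cheap model handles tool
    selection, an expensive one handles code generation."""
    routes = {
        "tool_selection": "small-fast-model",
        "code_generation": "large-capable-model",
    }
    # Default to the capable model for anything unrecognized.
    return routes.get(task_kind, "large-capable-model")
```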
Control Tool Permissions with allow and confirm
Two independent parameters that let you define what a tool can do and whether a human must approve it before it runs.
Defines which actions the agent is permitted to use at all. Actions not in the list are hidden from the LLM.
Enforced before code generation
Defines which actions must be approved by a human before they execute. The agent can plan them, but execution pauses until you approve or reject.
Enforced before execution
Common patterns
On @tool functions
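The two enforcement points described above can be sketched in plain Python (the helper names `visible_actions` and `needs_approval` are hypothetical, not Delfhos API):

```python
def visible_actions(all_actions, allow=None):
    """allow — enforced before code generation: actions outside the
    list are hidden from the LLM entirely."""
    if allow is None:
        return list(all_actions)
    return [a for a in all_actions if a in allow]

def needs_approval(action, confirm=None):
    """confirm — enforced before execution: the agent may plan the
    action, but running it pauses for a human decision."""
    return confirm is not None and action in confirm

actions = ["schema", "query", "write"]
```

The two lists are independent: an action can be allowed yet still require confirmation.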
Require Human Approval Before Actions
Three modes: interactive terminal prompt, custom callback, or programmatic API for background agents.
Custom approval handler
Programmatic approval (background agents)
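For background agents, the programmatic mode amounts to a pending-request queue that another process inspects and resolves. A hypothetical sketch of that shape (class and method names are assumptions, not the real API):

```python
import itertools

class ApprovalQueue:
    """Hypothetical sketch: a background agent enqueues actions that
    need approval; a separate process approves or rejects them."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.pending = {}    # request id -> action description
        self.decisions = {}  # request id -> "approved" | "rejected"

    def request(self, action):
        rid = next(self._ids)
        self.pending[rid] = action
        return rid

    def decide(self, rid, approved):
        self.pending.pop(rid)
        self.decisions[rid] = "approved" if approved else "rejected"
        return self.decisions[rid]

q = ApprovalQueue()
rid = q.request("drive.delete_file('report.pdf')")
```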
Run an Agent Asynchronously
Use arun() for async/await workflows, or run_async() to submit a task in the background without blocking.
async/await with arun()
Context manager for automatic cleanup
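The async shape can be shown with a stub in place of a real agent (StubAgent is an invented stand-in; real Delfhos agents are constructed with tools and an LLM):

```python
import asyncio

class StubAgent:
    """Stand-in exposing arun() and async-context cleanup."""
    async def arun(self, task):
        await asyncio.sleep(0)   # yield to the event loop
        return f"done: {task}"

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False             # cleanup (close sandbox, etc.) goes here

async def main():
    # The context manager guarantees cleanup even if the task raises.
    async with StubAgent() as agent:
        return await agent.arun("summarize report")

result = asyncio.run(main())
```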
Use Two Accounts of the Same Type
Instantiate any connection type multiple times by giving each instance a unique name.
Enable Tool Prefiltering to Reduce Costs
When you have many tools, a fast model pre-selects only the relevant subset before the expensive code generation step.
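A real prefilter uses a fast LLM, but the shape of the step is easy to sketch: rank tools by relevance to the task and keep only the top few. A crude keyword-overlap stand-in (the scoring method is an invented placeholder for the fast model):

```python
def prefilter(task, tools, k=2):
    """Rank tools by keyword overlap with the task; keep the top k.
    A real prefilter would ask a cheap LLM, but the pipeline shape
    (many tools in, a small relevant subset out) is the same."""
    words = set(task.lower().split())
    def score(name):
        return len(words & set(tools[name].lower().split()))
    return sorted(tools, key=score, reverse=True)[:k]

tools = {
    "sheets": "read write spreadsheet cells",
    "drive": "search upload share files",
    "sql": "run select queries against a database",
}
```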
Add Long-term Memory to an Agent
Persist facts across program restarts using semantic search. Relevant facts are injected automatically before each task.
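The persistence half of this is plain file I/O; only the semantic-search ranking needs a model. A minimal sketch of saving and restoring facts across restarts (function names are invented for illustration):

```python
import json
import os
import tempfile

def save_facts(path, facts):
    # Persist facts so they survive program restarts.
    with open(path, "w") as f:
        json.dump(facts, f)

def load_facts(path):
    # A missing file just means an empty memory.
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "memory.json")
save_facts(path, ["the billing database is read-only"])
restored = load_facts(path)
```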
Load from a file
Cost Tracking & Budgets
Delfhos tracks token usage and estimates costs automatically. Pricing lives in ~/delfhos/pricing.json.
Read cost after a run
Set a budget limit
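Token-based cost estimation and a budget check can be sketched without the library; the pricing numbers below are illustrative only, not real rates, and the function names are invented:

```python
def estimate_cost(usage, pricing):
    """Cost from token counts. Pricing is per million tokens,
    mirroring the shape a pricing.json file might use."""
    p = pricing[usage["model"]]
    return (usage["input_tokens"] * p["input"]
            + usage["output_tokens"] * p["output"]) / 1_000_000

def check_budget(spent, limit):
    # Raise once accumulated spend crosses the configured limit.
    if spent > limit:
        raise RuntimeError(f"budget exceeded: {spent:.4f} > {limit:.4f}")

pricing = {"gpt-4o": {"input": 2.50, "output": 10.00}}  # illustrative numbers
cost = estimate_cost(
    {"model": "gpt-4o", "input_tokens": 1000, "output_tokens": 500}, pricing
)
```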
Add a System Prompt
Inject a persistent persona, behavioral guardrails, or output format instructions into every LLM call.
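Mechanically, a system prompt is a system message prepended to every LLM call; a minimal sketch of that assembly (the helper name is hypothetical):

```python
def build_messages(system_prompt, task):
    # The persona / guardrails ride along as a system message on
    # every call, ahead of the actual task.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": task},
    ]

messages = build_messages(
    "You are a cautious data analyst. Never modify data.",
    "Summarize last month's sales.",
)
```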
Configure the Execution Sandbox
Delfhos executes LLM-generated code in an isolated sandbox. By default it auto-detects Docker and uses the strongest isolation available.
Resource limits (Docker mode only)
Pass input files to the agent workspace
Inject local files into the sandbox so the agent's generated code can read them directly.
Files passed via files= are read-only. To produce new files, use add_to_output_files().
Extract output files from a task result
When the agent needs to return a file, it calls add_to_output_files() inside generated code. After the task completes, the files are available on result.files.
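The round trip can be sketched as a workspace-side registry: generated code writes into the sandbox and registers what it produced, and those paths surface afterwards. The Workspace class below is an invented stand-in for this mechanism, not the real implementation:

```python
import os
import tempfile

class Workspace:
    """Hypothetical sketch: generated code writes into the sandbox
    workspace and registers outputs, which then surface on the
    result after the task completes."""
    def __init__(self, root):
        self.root = root
        self.files = []

    def add_to_output_files(self, name):
        self.files.append(os.path.join(self.root, name))

ws = Workspace(tempfile.mkdtemp())
# ...inside generated code: produce a file, then register it.
with open(os.path.join(ws.root, "report.csv"), "w") as f:
    f.write("region,total\nwest,42\n")
ws.add_to_output_files("report.csv")
```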
Retry on Failure
On each failure the error message is fed back to the LLM so it can generate corrected code.
The default is retry_count=1: a single attempt, with no retry.
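The loop is simple enough to sketch with stubs standing in for code generation and sandbox execution (function names and the counting convention below are assumptions consistent with retry_count meaning total attempts):

```python
def run_with_retry(generate, execute, task, retry_count=3):
    """retry_count counts attempts; on each failure the error message
    is fed back into the next generation pass."""
    error = None
    for _ in range(retry_count):
        code = generate(task, error)
        try:
            return execute(code)
        except Exception as exc:
            error = str(exc)
    raise RuntimeError(f"gave up after {retry_count} attempts: {error}")

# Stubs: the first generation is buggy; the second, seeing the error, works.
def generate(task, error):
    return "fixed" if error else "buggy"

def execute(code):
    if code == "buggy":
        raise ValueError("NameError: df is not defined")
    return "ok"

result = run_with_retry(generate, execute, "load csv")
```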
Use rerun() for Replanning
Stop midway to hand back what the agent learned at runtime, and request a fresh code-generation pass for the remaining work.
rerun() is built-in inside every generated script. Use it when the agent cannot write correct code for the next step without first inspecting an API's dynamic response.
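The control flow can be sketched with stubs: a first script probes the dynamic response and signals a rerun with its findings, and the second pass generates code that uses them. Everything below is an invented illustration of that loop, not the real rerun() machinery:

```python
def run_task(generate, execute, max_passes=2):
    """Sketch of the rerun() loop: the first script inspects a dynamic
    API response and hands findings back; the next pass generates
    code that builds on them."""
    findings = None
    for _ in range(max_passes):
        outcome = execute(generate(findings))
        if outcome.get("rerun"):
            findings = outcome["findings"]   # learned at runtime
            continue
        return outcome["result"]
    raise RuntimeError("still replanning after max_passes")

# Stubs standing in for LLM codegen and sandbox execution.
generate = lambda f: "probe" if f is None else f"process using {f}"

def execute(code):
    if code == "probe":
        return {"rerun": True, "findings": "fields: id, name"}
    return {"result": code}

result = run_task(generate, execute)
```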
