Technical descriptions of the Delfhos API. Accurate and complete. Use this to look up parameters, types, and return values — not to learn how to use Delfhos for the first time.
Agent
The central orchestrator. Manages LLM calls, tool execution, memory, approval gates, and error recovery.
Import
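The import statement is missing here. Assuming the package exposes the class at the top level (an assumption; check your installed version):

```python
from delfhos import Agent
```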
Constructor parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| tools | list | None | Service connections, @tool functions, or both |
| chat | Chat | None | Session memory; enables conversation context across run() calls |
| memory | Memory | None | Persistent semantic memory across runs |
| llm | str | None | Single model for all operations |
| light_llm | str | None | Fast model for prefiltering; must be paired with heavy_llm |
| heavy_llm | str | None | Strong model for code generation; must be paired with light_llm |
| code_llm | str | None | Override model for code generation specifically |
| vision_llm | str | None | Override model for image/multimodal tasks |
| system_prompt | str | None | Instructions injected into every LLM call |
| on_confirm | callable | None | Custom approval callback fn(request) → bool \| None |
| providers | dict | None | API key overrides {"google": "...", "openai": "..."} |
| verbose | bool | False | Print full execution traces to stdout |
| enable_prefilter | bool | False | Use light_llm to pre-select tools before code generation |
| retry_count | int | 1 | Max retries on non-fatal execution errors |
| files | list[str] | None | Absolute host paths injected as read-only workspace files |
| budget_usd | float | None | Hard spend limit. New run() calls are rejected once reached. |
| sandbox | str | "auto" | "auto" \| "docker" \| "local" |
| sandbox_config | dict | None | Docker resource limits (memory_limit, cpu_limit, timeout, network, pids_limit) |
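A construction sketch combining several of the parameters above. The import path and model names are assumptions; substitute your own tools for the placeholder list:

```python
# Sketch only: assumes `delfhos` is the import path.
from delfhos import Agent, Chat

agent = Agent(
    tools=[],                          # service connections and/or @tool functions
    chat=Chat(keep=10),                # multi-turn context across run() calls
    light_llm="gemini-3.1-flash",      # fast model for prefiltering
    heavy_llm="gemini-3.1-pro",        # strong model for code generation
    enable_prefilter=True,             # requires the light/heavy pair
    budget_usd=2.00,                   # hard spend limit; run() rejected once reached
    sandbox="docker",
    sandbox_config={"memory_limit": "512m", "timeout": 120},
)
```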
Methods
| Method | Signature | Description |
|---|---|---|
| start | () → self | Initialize and start the agent |
| stop | () | Shut down and free resources |
| run | (task: str, timeout: float = 60.0) → Response | Execute task (blocking) |
| run_async | (task: str) → None | Submit task (background, non-blocking) |
| arun | async (task: str, timeout: float = 60.0) → Response | Execute task (async/await) |
| run_chat | (timeout: float = 120.0) | Launch interactive terminal chat REPL |
| get_pending_approvals | () → list[dict] | List requests awaiting approval |
| approve | (request_id: str, response: str = "Approved") → bool | Approve a pending request |
| reject | (request_id: str, reason: str = "Rejected") → bool | Reject a pending request |
| reset_budget | (new_limit_usd: float = None) | Reset accumulated cost, optionally set new limit |
| info | () → dict | Current agent state |
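A sketch of the lifecycle and approval flow these methods imply. The import path and the `"request_id"` dict key are assumptions (the table only guarantees a list of dicts):

```python
from delfhos import Agent

agent = Agent(llm="gemini-3.1-flash")
agent.start()

# Non-blocking submission, then poll for approval gates.
agent.run_async("Send the weekly report")
for req in agent.get_pending_approvals():
    agent.approve(req["request_id"])   # key name is an assumption
    # or: agent.reject(req["request_id"], reason="Wrong recipient")

# Blocking call returns a Response.
resp = agent.run("Summarize yesterday's sales", timeout=60.0)
agent.stop()
```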
Response
Returned by agent.run() and agent.arun(). Contains the answer, execution status, cost, timing, and full trace.
| Field | Type | Description |
|---|---|---|
| text | str | Final answer text |
| status | bool | True = success, False = failure |
| error | str \| None | Error message if status is False |
| cost_usd | float \| None | Estimated USD cost |
| duration_ms | int | Wall-clock time in milliseconds |
| trace | Any | Full execution trace object |
| files | Dict[str, str] | Output files saved during execution. Keys are logical labels; values are absolute host paths. |
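Typical handling of the fields above (sketch; assumes a started agent named `agent`):

```python
resp = agent.run("Plot revenue by region")
if resp.status:
    print(resp.text)
    print(f"cost: ${resp.cost_usd}, took {resp.duration_ms} ms")
    for label, path in resp.files.items():  # logical label → absolute host path
        print(label, "→", path)
else:
    print("failed:", resp.error)
```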
@tool Decorator
Marks a Python function as a callable tool. Delfhos extracts the name, docstring, and type hints to build the LLM schema.
Import
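The import statement is missing here. Assuming a top-level export (an assumption; check your installed version):

```python
from delfhos import tool
```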
Decorator parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | function name | Override the tool name shown to the LLM |
| description | str | docstring | Override the description |
| handle_error | bool \| str \| callable | True | True returns the exception message; a string returns that string; a callable receives the exception |
| confirm | bool | True | Require human approval before execution |
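A decorated function illustrating the parameters above (sketch; the import path and stand-in data are assumptions). Delfhos reads the name, docstring, and type hints, so all three matter:

```python
from delfhos import tool

PRICES = {"ACME": 41.5}  # stand-in data for the sketch

@tool(confirm=False, handle_error="Unknown ticker; try another symbol.")
def stock_price(ticker: str) -> float:
    """Return the latest closing price for a stock ticker."""
    return PRICES[ticker]  # KeyError → the handle_error string is returned instead
```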
ToolException — recoverable errors
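The body of this section is missing. Assuming ToolException behaves as the heading suggests — an exception a tool raises to report a recoverable error back to the LLM instead of aborting the run — and that it is exported at the top level, usage would look like:

```python
from delfhos import tool, ToolException  # import path is an assumption

@tool
def divide(a: float, b: float) -> float:
    """Divide a by b."""
    if b == 0:
        # Recoverable: the message goes back to the LLM so it can retry.
        raise ToolException("b must be non-zero")
    return a / b
```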
Gmail
Read, send, and manage emails.
| Parameter | Type | Default | Description |
|---|---|---|---|
| oauth_credentials | str | None | Path to OAuth JSON file |
| service_account | str | None | Path to Service Account JSON |
| delegated_user | str | None | Email to impersonate (service account only) |
| allow | str \| list[str] | None | Permitted actions |
| confirm | bool \| list[str] | True | Actions requiring human approval |
| name | str | "gmail" | Unique name when using multiple instances |
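A construction sketch (the class import path and the action names in allow/confirm are assumptions; the table does not enumerate valid actions):

```python
from delfhos import Agent, Gmail

gmail = Gmail(
    oauth_credentials="credentials.json",
    allow=["read", "search", "send"],  # action names are illustrative
    confirm=["send"],                  # only sending requires approval
)
agent = Agent(tools=[gmail], llm="gemini-3.1-flash")
```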
SQL
Query and write to PostgreSQL, MySQL, and MariaDB databases.
| Parameter | Type | Default | Description |
|---|---|---|---|
| url | str | None | Full connection string (e.g. postgresql://user:pass@host/db) |
| host | str | None | Database host |
| port | int | None | Database port |
| database | str | None | Database name |
| user | str | None | Database user |
| password | str | None | Database password |
| db_type | str | "postgresql" | "postgresql" \| "mysql" \| "mariadb" |
| allow | str \| list[str] | None | Permitted actions |
| confirm | bool \| list[str] | True | Actions requiring human approval |
| name | str | "sql" | Unique name when using multiple instances |
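A construction sketch (import path and the "select" action name are assumptions):

```python
from delfhos import SQL

db = SQL(
    url="postgresql://user:pass@host/db",  # or host/port/database/user/password
    allow=["select"],                      # action name is illustrative
    confirm=False,                         # read-only, so skip approval gates
)
```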
Sheets
Read, write, format, and chart Google Sheets spreadsheets.
Accepts the same parameters as Gmail. Default name: "sheets".
Drive
Search, upload, share, and manage files in Google Drive.
Accepts the same parameters as Gmail. Default name: "drive".
Docs
Read, create, update, and format Google Docs documents.
Accepts the same parameters as Gmail. Default name: "docs".
Calendar
List, create, update, delete, and respond to Google Calendar events.
Accepts the same parameters as Gmail. Default name: "calendar".
WebSearch
Search the web and return summarized results. Requires a Gemini or OpenAI model — Claude is not supported.
| Parameter | Type | Default | Description |
|---|---|---|---|
| llm | str | REQUIRED | Gemini or OpenAI model. Claude/Anthropic not supported. |
| api_key | str | None | Falls back to GOOGLE_API_KEY or OPENAI_API_KEY, matching the model's provider |
| allow | str \| list[str] | None | Permitted actions |
| confirm | bool \| list[str] | True | Actions requiring human approval |
| name | str | "websearch" | Unique name when using multiple instances |
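A minimal sketch (import path assumed). The llm parameter is required and must be a Gemini or OpenAI model:

```python
from delfhos import Agent, WebSearch

search = WebSearch(llm="gemini-3.1-flash")  # Claude/Anthropic not supported
agent = Agent(tools=[search], llm="gemini-3.1-flash")
```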
APITool
Compiles any OpenAPI 3.x specification into callable agent actions. Every endpoint in the spec becomes a function the agent can plan, generate code for, and execute.
| Parameter | Type | Default | Description |
|---|---|---|---|
| spec | str | REQUIRED | URL or file path to an OpenAPI 3.x JSON or YAML spec |
| base_url | str | None | Override for the API base URL; auto-extracted from spec if absent |
| headers | dict | None | HTTP headers injected into every request |
| params | dict | None | Query params injected into every request |
| name | str | None | Custom label; auto-derived from spec title or hostname |
| allow | str \| list[str] | None | Restrict which endpoints the agent can use |
| confirm | bool \| list[str] | True | Require approval before listed endpoints execute |
| cache | bool | False | Reuse compiled manifest from disk; useful for large specs |
| enrich | bool | False | LLM rewrites endpoint descriptions and infers response schemas once; cached |
| llm | str | None | Model used for enrichment. Only used when enrich=True. |
| sample | bool | True | Capture real response schemas in a background thread. No LLM, no tokens. |
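A construction sketch against a public demo spec (import path assumed; the endpoint names in allow are illustrative, since valid names come from the spec's own operations):

```python
from delfhos import APITool

petstore = APITool(
    spec="https://petstore3.swagger.io/api/v3/openapi.json",
    allow=["getPetById", "findPetsByStatus"],  # endpoint names are illustrative
    confirm=True,
    cache=True,   # reuse the compiled manifest on later runs
)
```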
Class method — inspect()
LLMConfig
Configures native providers (Google/OpenAI/Anthropic) and any OpenAI-compatible endpoint. Pass a LLMConfig wherever a model string is accepted.
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | REQUIRED | Model identifier |
| base_url | str | None | API base URL; defaults to OPENAI_BASE_URL env var, then https://api.openai.com/v1 |
| api_key | str | None | Bearer token; defaults to OPENAI_API_KEY. Pass "local" for auth-free local servers |
| headers | dict[str, str] | None | Extra HTTP headers sent with every request. Can be combined with api_key. |
| settings | dict[str, Any] | None | Per-model generation settings: temperature, top_p, top_k, max_tokens, etc. |
| provider | str | "auto" | Provider routing: "auto" \| "google" \| "openai" \| "anthropic" |
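A sketch pointing at a local OpenAI-compatible server (import path and model tag are assumptions; the base URL shown is Ollama's default OpenAI-compatible endpoint):

```python
from delfhos import Agent, LLMConfig

local = LLMConfig(
    model="llama3.1:8b",
    base_url="http://localhost:11434/v1",   # Ollama's OpenAI-compatible endpoint
    api_key="local",                        # auth-free local server
    settings={"temperature": 0.2, "max_tokens": 2048},
)
agent = Agent(llm=local)
```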
Chat
Session-scoped conversation buffer. Passed in the Agent constructor to enable multi-turn context. Cleared when the Python process ends unless persist=True.
| Parameter | Type | Default | Description |
|---|---|---|---|
| keep | int | 10 | Max messages before auto-summarization |
| summarize | bool | True | Enable automatic message compression |
| persist | bool | False | Save to SQLite (True) or keep in RAM (False) |
| namespace | str | "default" | Isolates multiple chat histories |
| summarizer_llm | str | None | LLM for summarization; required when summarize=True |
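A construction sketch combining the parameters above (import path assumed):

```python
from delfhos import Agent, Chat

chat = Chat(
    keep=20,
    persist=True,                    # save to SQLite instead of RAM
    namespace="support-bot",         # isolate this history from others
    summarizer_llm="gemini-3.1-flash-lite-preview",
)
agent = Agent(chat=chat, llm="gemini-3.1-flash")
```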
Memory
Persistent semantic store backed by SQLite and sentence-transformer embeddings. Facts are retrieved by similarity before each task.
| Parameter | Type | Default | Description |
|---|---|---|---|
| guidelines | str | None | Preamble prepended to retrieved context |
| namespace | str | "default" | Isolates memory across agents or users |
| embedding_model | str | "all-MiniLM-L6-v2" | Any sentence-transformers or HuggingFace model name. Downloaded on first use. |
Methods
| Method | Signature | Description |
|---|---|---|
| save | (content: str) | Store facts (split by newline) |
| add | (content: str) | Store text or read from a .txt / .md file path |
| search | (query: str, top_k=5, threshold=0.3) → list | Semantic similarity search |
| retrieve | (query: str, top_k=5, threshold=0.3) → str | Same as search, returns a formatted string |
| context | () → str | All facts as a string |
| clear | () | Delete all facts in this namespace |
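A usage sketch of the methods above (import path assumed):

```python
from delfhos import Memory

mem = Memory(namespace="alice", guidelines="Known facts about the user:")
mem.save("Prefers metric units\nWorks in UTC+1")   # one fact per line
hits = mem.search("What timezone does she use?", top_k=3, threshold=0.3)
print(mem.context())                               # all stored facts as a string
```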
Error Classes
All errors extend DelfhosConfigError and display a structured message with an error code and resolution hint.
| Error class | Code prefix | When raised |
|---|---|---|
| ModelConfigurationError | ERR-MODEL-* | Invalid or missing LLM configuration |
| AgentConfirmationError | ERR-AGENT-* | Invalid confirm or on_confirm value |
| MemorySetupError | ERR-MEM-* | Memory database initialization failure |
| ToolExecutionError | ERR-TOOL-* | Unhandled error during tool execution |
| EnvironmentKeyError | ERR-ENV-* | Required environment variable missing |
| ConnectionConfigurationError | ERR-CONN-* | Invalid connection parameters |
| LLMExecutionError | ERR-LLM-* | LLM API call failed |
| ApprovalRejectedError | ERR-APPROVAL-* | Human rejected the approval request |
| ToolDefinitionError | ERR-TOOL-* | @tool function has an invalid schema |
Supported LLM Models
Pass a model name string for native providers, or use LLMConfig for custom endpoints.
| Family | Examples | Env var | Notes |
|---|---|---|---|
| Google Gemini | gemini-3.1-flash-lite-preview, gemini-3.1-flash, gemini-3.1-pro | GOOGLE_API_KEY | Recommended |
| OpenAI | gpt-5.4, gpt-4o-mini, o1, o3, o4-mini | OPENAI_API_KEY | |
| Anthropic Claude | claude-sonnet-4-6, claude-opus-4-7, claude-3-haiku | ANTHROPIC_API_KEY | Not supported for WebSearch |
| Any OpenAI-compatible | LLMConfig(model=..., base_url=...) | OPENAI_API_KEY or custom | Ollama, vLLM, Groq, Together AI, LM Studio, enterprise gateways |
Environment Variables
Delfhos loads .env files automatically via python-dotenv. You can also pass keys programmatically via the providers parameter.
| Variable | Used by | Description |
|---|---|---|
| GOOGLE_API_KEY | Agent, WebSearch | Google Gemini API key |
| OPENAI_API_KEY | Agent, WebSearch, LLMConfig | OpenAI API key; also used as the default bearer token for custom endpoints |
| ANTHROPIC_API_KEY | Agent | Anthropic Claude API key |
| OPENAI_BASE_URL | LLMConfig | Default base URL for OpenAI-compatible custom endpoints |
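A minimal .env covering the native providers (values are placeholders):

```
GOOGLE_API_KEY=...
OPENAI_API_KEY=...
ANTHROPIC_API_KEY=...
# Optional: default base URL for OpenAI-compatible custom endpoints
OPENAI_BASE_URL=http://localhost:11434/v1
```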
