# Delfhos Reference

Technical descriptions of the Delfhos API. Accurate and complete. Use this to look up parameters, types, and return values — not to learn how to use Delfhos for the first time.

## Agent

The central orchestrator. Manages LLM calls, tool execution, memory, approval gates, and error recovery.

### Import

```python
from delfhos import Agent
```

### Constructor parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `tools` | `list` | `None` | Service connections, `@tool` functions, or both |
| `chat` | `Chat` | `None` | Session memory; enables conversation context across `run()` calls |
| `memory` | `Memory` | `None` | Persistent semantic memory across runs |
| `llm` | `str` | `None` | Single model for all operations |
| `light_llm` | `str` | `None` | Fast model for prefiltering; must be paired with `heavy_llm` |
| `heavy_llm` | `str` | `None` | Strong model for code generation; must be paired with `light_llm` |
| `code_llm` | `str` | `None` | Override model for code generation specifically |
| `vision_llm` | `str` | `None` | Override model for image/multimodal tasks |
| `system_prompt` | `str` | `None` | Instructions injected into every LLM call |
| `on_confirm` | `callable` | `None` | Custom approval callback `fn(request) → bool \| None` |
| `providers` | `dict` | `None` | API key overrides, e.g. `{"google": "...", "openai": "..."}` |
| `verbose` | `bool` | `False` | Print full execution traces to stdout |
| `enable_prefilter` | `bool` | `False` | Use `light_llm` to pre-select tools before code generation |
| `retry_count` | `int` | `1` | Max retries on non-fatal execution errors |
| `files` | `list[str]` | `None` | Absolute host paths injected as read-only workspace files |
| `budget_usd` | `float` | `None` | Hard spend limit; new `run()` calls are rejected once reached |
| `sandbox` | `str` | `"auto"` | `"auto"` \| `"docker"` \| `"local"` |
| `sandbox_config` | `dict` | `None` | Docker resource limits (`memory_limit`, `cpu_limit`, `timeout`, `network`, `pids_limit`) |
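A minimal construction sketch using the parameters above. The model name, budget, and sandbox limits are illustrative values, not recommendations:

```python
from delfhos import Agent, Chat

# Illustrative configuration; adjust model and limits to your setup
agent = Agent(
    llm="gemini-3.1-flash",        # single model for all operations
    chat=Chat(),                   # enables multi-turn context across run() calls
    budget_usd=5.0,                # reject new run() calls once $5 is spent
    sandbox="docker",
    sandbox_config={"memory_limit": "512m", "timeout": 120},
    verbose=True,
)
```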

### Methods

| Method | Description |
| --- | --- |
| `start() → self` | Initialize and start the agent |
| `stop()` | Shut down and free resources |
| `run(task: str, timeout: float = 60.0) → Response` | Execute a task (blocking) |
| `run_async(task: str) → None` | Submit a task (background, non-blocking) |
| `async arun(task: str, timeout: float = 60.0) → Response` | Execute a task (async/await) |
| `run_chat(timeout: float = 120.0)` | Launch an interactive terminal chat REPL |
| `get_pending_approvals() → list[dict]` | List requests awaiting approval |
| `approve(request_id: str, response: str = "Approved") → bool` | Approve a pending request |
| `reject(request_id: str, reason: str = "Rejected") → bool` | Reject a pending request |
| `reset_budget(new_limit_usd: float = None)` | Reset accumulated cost, optionally setting a new limit |
| `info() → dict` | Current agent state |
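A lifecycle sketch combining a background run with the approval methods. The model name and task are made up, and the `"id"` key on approval-request dicts is an assumption about their shape:

```python
import time
from delfhos import Agent

agent = Agent(llm="gemini-3.1-flash")  # illustrative model
agent.start()
try:
    agent.run_async("Draft replies to yesterday's unread emails")
    time.sleep(5)  # give the background task time to hit an approval gate
    for request in agent.get_pending_approvals():
        agent.approve(request["id"], response="Looks good")  # "id" key assumed
finally:
    agent.stop()
```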

## Response

Returned by agent.run() and agent.arun(). Contains the answer, execution status, cost, timing, and full trace.

| Field | Type | Description |
| --- | --- | --- |
| `text` | `str` | Final answer text |
| `status` | `bool` | `True` = success, `False` = failure |
| `error` | `str \| None` | Error message if `status` is `False` |
| `cost_usd` | `float \| None` | Estimated USD cost |
| `duration_ms` | `int` | Wall-clock time in milliseconds |
| `trace` | `Any` | Full execution trace object |
| `files` | `Dict[str, str]` | Output files saved during execution; keys are logical labels, values are absolute host paths |
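A sketch of inspecting a `Response` using the fields above. The agent setup and task string are illustrative:

```python
from delfhos import Agent

agent = Agent(llm="gemini-3.1-flash")  # illustrative model
agent.start()
response = agent.run("Generate a sales chart from last quarter's data")
if response.status:
    print(response.text)
    for label, path in response.files.items():   # logical label -> host path
        print(f"{label}: {path}")
else:
    print(f"Failed after {response.duration_ms} ms: {response.error}")
agent.stop()
```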

## @tool Decorator

Marks a Python function as a callable tool. Delfhos extracts the name, docstring, and type hints to build the LLM schema.

### Import

```python
from delfhos import tool, ToolException
```

### Decorator parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `name` | `str` | function name | Override the tool name shown to the LLM |
| `description` | `str` | docstring | Override the description |
| `handle_error` | `bool \| str \| callable` | `True` | `True` returns the exception message; a string returns that string; a callable receives the exception |
| `confirm` | `bool` | `True` | Require human approval before execution |
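A sketch of the decorator parameters in use. The tool name and stub body are illustrative:

```python
from delfhos import tool

# Illustrative tool: overrides the LLM-facing name and skips approval
@tool(name="lookup_weather", confirm=False)
def weather(city: str) -> dict:
    """Get the current weather for a city."""
    # Stub return value for illustration only
    return {"city": city, "temp_c": 21}
```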

### ToolException — recoverable errors

```python
from delfhos import tool, ToolException

@tool
def find_order(order_id: str) -> dict:
    """Look up an order by ID."""
    if not order_id.startswith("ORD-"):
        raise ToolException("Order IDs must start with 'ORD-'. Please check the format.")
    ...
```

## Gmail

Read, send, and manage emails.

```python
from delfhos import Gmail
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `oauth_credentials` | `str` | `None` | Path to an OAuth JSON file |
| `service_account` | `str` | `None` | Path to a Service Account JSON file |
| `delegated_user` | `str` | `None` | Email to impersonate (service account only) |
| `allow` | `str \| list[str]` | `None` | Permitted actions |
| `confirm` | `bool \| list[str]` | `True` | Actions requiring human approval |
| `name` | `str` | `"gmail"` | Unique name when using multiple instances |

Actions: `read`, `send`
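A wiring sketch for Gmail with the `allow`/`confirm` pattern. The credentials path and model name are placeholders:

```python
from delfhos import Agent, Gmail

gmail = Gmail(
    oauth_credentials="credentials.json",  # placeholder path
    allow=["read", "send"],
    confirm=["send"],                      # only sending needs human approval
)
agent = Agent(tools=[gmail], llm="gemini-3.1-flash")
```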

## SQL

Query and write to PostgreSQL, MySQL, and MariaDB databases.

```python
from delfhos import SQL
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `url` | `str` | `None` | Full connection string (e.g. `postgresql://user:pass@host/db`) |
| `host` | `str` | `None` | Database host |
| `port` | `int` | `None` | Database port |
| `database` | `str` | `None` | Database name |
| `user` | `str` | `None` | Database user |
| `password` | `str` | `None` | Database password |
| `db_type` | `str` | `"postgresql"` | `"postgresql"` \| `"mysql"` \| `"mariadb"` |
| `allow` | `str \| list[str]` | `None` | Permitted actions |
| `confirm` | `bool \| list[str]` | `True` | Actions requiring human approval |
| `name` | `str` | `"sql"` | Unique name when using multiple instances |

Actions: `schema`, `query`, `write`
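A read-only connection sketch; the connection string is a placeholder:

```python
from delfhos import SQL

db = SQL(
    url="postgresql://analyst:secret@localhost/shop",  # placeholder
    allow=["schema", "query"],  # read-only: the "write" action is excluded
    confirm=False,              # reads proceed without approval
)
```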

## Sheets

Read, write, format, and chart Google Sheets spreadsheets.

```python
from delfhos import Sheets
```

Accepts the same parameters as Gmail. Default name: `"sheets"`.

Actions: `read`, `write`, `create`, `format`, `chart`, `batch`

## Drive

Search, upload, share, and manage files in Google Drive.

```python
from delfhos import Drive
```

Accepts the same parameters as Gmail. Default name: `"drive"`.

Actions: `search`, `get`, `create`, `update`, `delete`, `list_permissions`, `share`, `unshare`

## Docs

Read, create, update, and format Google Docs documents.

```python
from delfhos import Docs
```

Accepts the same parameters as Gmail. Default name: `"docs"`.

Actions: `read`, `create`, `update`, `format`, `delete`

## Calendar

List, create, update, delete, and respond to Google Calendar events.

```python
from delfhos import Calendar
```

Accepts the same parameters as Gmail. Default name: `"calendar"`.

Actions: `list`, `get`, `create`, `update`, `delete`, `respond`

## WebSearch

Search the web and return summarized results. Requires a Gemini or OpenAI model — Claude is not supported.

```python
from delfhos import WebSearch
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `llm` | `str` | required | Gemini or OpenAI model; Claude/Anthropic not supported |
| `api_key` | `str` | `None` | Falls back to the corresponding env var |
| `allow` | `str \| list[str]` | `None` | Permitted actions |
| `confirm` | `bool \| list[str]` | `True` | Actions requiring human approval |
| `name` | `str` | `"websearch"` | Unique name when using multiple instances |

Actions: `search`
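A wiring sketch; the model name is illustrative (any supported Gemini or OpenAI model works):

```python
from delfhos import Agent, WebSearch

search = WebSearch(llm="gemini-3.1-flash", confirm=False)
agent = Agent(tools=[search], llm="gemini-3.1-flash")
```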

## APITool

Compiles any OpenAPI 3.x specification into callable agent actions. Every endpoint in the spec becomes a function the agent can plan, generate code for, and execute.

```python
from delfhos import APITool
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `spec` | `str` | required | URL or file path to an OpenAPI 3.x JSON or YAML spec |
| `base_url` | `str` | `None` | Override for the API base URL; auto-extracted from the spec if absent |
| `headers` | `dict` | `None` | HTTP headers injected into every request |
| `params` | `dict` | `None` | Query params injected into every request |
| `name` | `str` | `None` | Custom label; auto-derived from the spec title or hostname |
| `allow` | `str \| list[str]` | `None` | Restrict which endpoints the agent can use |
| `confirm` | `bool \| list[str]` | `True` | Require approval before the listed endpoints execute |
| `cache` | `bool` | `False` | Reuse the compiled manifest from disk; useful for large specs |
| `enrich` | `bool` | `False` | LLM rewrites endpoint descriptions and infers response schemas once; cached |
| `llm` | `str` | `None` | Model used for enrichment; only used when `enrich=True` |
| `sample` | `bool` | `True` | Capture real response schemas in a background thread; no LLM, no tokens |
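A sketch compiling the public Swagger Petstore spec. The endpoint name in `confirm` comes from that spec's operation IDs, and the bearer token is a placeholder:

```python
from delfhos import APITool

petstore = APITool(
    spec="https://petstore3.swagger.io/api/v3/openapi.json",
    headers={"Authorization": "Bearer <token>"},  # placeholder token
    confirm=["deletePet"],  # gate only the destructive endpoint
    cache=True,             # reuse the compiled manifest across runs
)
```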

### Class method — inspect()

```python
APITool.inspect(spec: str, verbose: bool = False) → dict
# Returns: {"tool": "...", "methods": [...], "total": N}
```

## LLMConfig

Configures native providers (Google/OpenAI/Anthropic) and any OpenAI-compatible endpoint. Pass an LLMConfig wherever a model string is accepted.

```python
from delfhos import LLMConfig
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | `str` | required | Model identifier |
| `base_url` | `str` | `None` | API base URL; defaults to `OPENAI_BASE_URL` env var, then `https://api.openai.com/v1` |
| `api_key` | `str` | `None` | Bearer token; defaults to `OPENAI_API_KEY`. Pass `"local"` for auth-free local servers |
| `headers` | `dict[str, str]` | `None` | Extra HTTP headers sent with every request; can be combined with `api_key` |
| `settings` | `dict[str, Any]` | `None` | Per-model generation settings: `temperature`, `top_p`, `top_k`, `max_tokens`, etc. |
| `provider` | `str` | `"auto"` | Provider routing: `"auto"` \| `"google"` \| `"openai"` \| `"anthropic"` |
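A sketch pointing at a local OpenAI-compatible server (Ollama's default port); the model name is a placeholder:

```python
from delfhos import Agent, LLMConfig

local = LLMConfig(
    model="llama3.1",                       # placeholder model name
    base_url="http://localhost:11434/v1",   # Ollama's OpenAI-compatible endpoint
    api_key="local",                        # auth-free local server
    settings={"temperature": 0.2, "max_tokens": 2048},
)
agent = Agent(llm=local)  # accepted anywhere a model string is
```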

## Chat

Session-scoped conversation buffer. Passed in the Agent constructor to enable multi-turn context. Cleared when the Python process ends.

```python
from delfhos import Chat
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `keep` | `int` | `10` | Max messages before auto-summarization |
| `summarize` | `bool` | `True` | Enable automatic message compression |
| `persist` | `bool` | `False` | Save to SQLite (`True`) or keep in RAM (`False`) |
| `namespace` | `str` | `"default"` | Isolates multiple chat histories |
| `summarizer_llm` | `str` | `None` | LLM for summarization; required when `summarize=True` |
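A sketch of a persistent, summarizing chat session; model name and namespace are illustrative:

```python
from delfhos import Agent, Chat

chat = Chat(
    keep=20,
    summarize=True,
    summarizer_llm="gemini-3.1-flash-lite-preview",  # required when summarize=True
    persist=True,                                    # SQLite-backed, survives restarts
    namespace="support-bot",
)
agent = Agent(llm="gemini-3.1-flash", chat=chat)
```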

## Memory

Persistent semantic store backed by SQLite and sentence-transformer embeddings. Facts are retrieved by similarity before each task.

```python
from delfhos import Memory
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `guidelines` | `str` | `None` | Preamble prepended to retrieved context |
| `namespace` | `str` | `"default"` | Isolates memory across agents or users |
| `embedding_model` | `str` | `"all-MiniLM-L6-v2"` | Any sentence-transformers or HuggingFace model name; downloaded on first use |

### Methods

| Method | Description |
| --- | --- |
| `save(content: str)` | Store facts (split by newline) |
| `add(content: str)` | Store text, or read from a `.txt` / `.md` file path |
| `search(query: str, top_k=5, threshold=0.3) → list` | Semantic similarity search |
| `retrieve(query: str, top_k=5, threshold=0.3) → str` | Same as `search`, but returns a formatted string |
| `context() → str` | All facts as a single string |
| `clear()` | Delete all facts in this namespace |
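A sketch of storing and retrieving facts; the namespace and fact text are made up:

```python
from delfhos import Memory

memory = Memory(
    namespace="acme-support",
    guidelines="Prefer facts about the ACME account.",
)
# save() splits on newlines, so this stores two separate facts
memory.save("ACME's fiscal year ends in March.\nTheir account manager is Dana.")
hits = memory.search("When does ACME close its books?", top_k=3)
```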

## Error Classes

All errors extend DelfhosConfigError and display a structured message with an error code and resolution hint.

```python
from delfhos import ModelConfigurationError  # or from delfhos.errors import ...
```

| Error class | Code prefix | When raised |
| --- | --- | --- |
| `ModelConfigurationError` | `ERR-MODEL-*` | Invalid or missing LLM configuration |
| `AgentConfirmationError` | `ERR-AGENT-*` | Invalid `confirm` or `on_confirm` value |
| `MemorySetupError` | `ERR-MEM-*` | Memory database initialization failure |
| `ToolExecutionError` | `ERR-TOOL-*` | Unhandled error during tool execution |
| `EnvironmentKeyError` | `ERR-ENV-*` | Required environment variable missing |
| `ConnectionConfigurationError` | `ERR-CONN-*` | Invalid connection parameters |
| `LLMExecutionError` | `ERR-LLM-*` | LLM API call failed |
| `ApprovalRejectedError` | `ERR-APPROVAL-*` | Human rejected the approval request |
| `ToolDefinitionError` | `ERR-TOOL-*` | `@tool` function has an invalid schema |
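A sketch catching two of the documented error classes around a run; the model name and task are illustrative:

```python
from delfhos import Agent, ApprovalRejectedError, LLMExecutionError

agent = Agent(llm="gemini-3.1-flash")
agent.start()
try:
    response = agent.run("Send the weekly report")
except ApprovalRejectedError:
    print("A human rejected the approval request.")
except LLMExecutionError as exc:
    print(f"LLM call failed: {exc}")   # message includes an ERR-LLM-* code
finally:
    agent.stop()
```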

## Supported LLM Models

Pass a model name string for native providers, or use LLMConfig for custom endpoints.

| Family | Examples | Env var | Notes |
| --- | --- | --- | --- |
| Google Gemini | `gemini-3.1-flash-lite-preview`, `gemini-3.1-flash`, `gemini-3.1-pro` | `GOOGLE_API_KEY` | Recommended |
| OpenAI | `gpt-5.4`, `gpt-4o-mini`, `o1`, `o3`, `o4-mini` | `OPENAI_API_KEY` | |
| Anthropic Claude | `claude-sonnet-4-6`, `claude-opus-4-7`, `claude-3-haiku` | `ANTHROPIC_API_KEY` | Not supported for WebSearch |
| Any OpenAI-compatible | `LLMConfig(model=..., base_url=...)` | `OPENAI_API_KEY` or custom | Ollama, vLLM, Groq, Together AI, LM Studio, enterprise gateways |

## Environment Variables

Delfhos loads .env files automatically via python-dotenv. You can also pass keys programmatically via the providers parameter.

| Variable | Used by | Description |
| --- | --- | --- |
| `GOOGLE_API_KEY` | Agent, WebSearch | Google Gemini API key |
| `OPENAI_API_KEY` | Agent, WebSearch, LLMConfig | OpenAI API key; also the default bearer token for custom endpoints |
| `ANTHROPIC_API_KEY` | Agent | Anthropic Claude API key |
| `OPENAI_BASE_URL` | LLMConfig | Default base URL for OpenAI-compatible custom endpoints |
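An illustrative `.env` file using the variables above; every value is a placeholder:

```shell
# Loaded automatically via python-dotenv; values are placeholders
GOOGLE_API_KEY=your-gemini-key
OPENAI_API_KEY=your-openai-key
# Optional: route OpenAI-compatible calls to a local server
OPENAI_BASE_URL=http://localhost:11434/v1
```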