---
id: executor-chain
title: "Executor Chain"
description: How tools are resolved and executed through the multi-layer chain
category: internals
tags: [executor, chain, primitive, runtime, resolution]
version: "1.0.0"
---

The executor chain is how Rye routes a tool call from an AI agent down to an OS-level operation. Every tool declares an `__executor_id__` that points to the next element in the chain. The chain terminates at a primitive, whose `__executor_id__` is `None`.
```
Layer 3: Tool       __executor_id__ = "rye/core/runtimes/python/script"
                        │
Layer 2: Runtime    __executor_id__ = "rye/core/primitives/execute"
                        │
Layer 1: Primitive  __executor_id__ = None → direct Lillux execution
```
Primitives are the terminal nodes. They have `__executor_id__ = None` and map directly to Lillux classes via `PrimitiveExecutor.PRIMITIVE_MAP`:

```python
PRIMITIVE_MAP = {
    "rye/core/primitives/execute": ExecutePrimitive,
}
```

Runtimes are YAML configs in `.ai/tools/rye/core/runtimes/`. They point to a primitive and add configuration: interpreter resolution via `ENV_CONFIG`, command templates, timeout, anchor setup, dependency verification, and config resolution via `CONFIG_RESOLVE`. The `state-graph/runtime` is a special runtime that walks declarative graph YAML tools — it resolves state graph definitions and orchestrates node execution rather than running a single script.
Example — the `python/script` runtime:

```yaml
tool_type: runtime
executor_id: rye/core/primitives/execute
env_config:
  interpreter:
    type: local_binary
    binary: python
    candidates: [python3]
    search_paths: [".venv/bin", ".venv/Scripts"]
    var: RYE_PYTHON
    fallback: python3
config:
  command: "${RYE_PYTHON}"
  args:
    - "{tool_path}"
    - "--project-path"
    - "{project_path}"
  input_data: "{params_json}"
  timeout: 300
```

Tools are Python scripts, shell scripts, or other executables with metadata headers. They point to a runtime:
```python
__executor_id__ = "rye/core/runtimes/python/script"
```

`PrimitiveExecutor._build_chain(item_id)` resolves a chain by following `__executor_id__` recursively:
1. Cache check — If the chain is cached and all file hashes still match, return the cached chain.
2. Resolve path — Call `_resolve_tool_path(item_id)` to find the file using three-tier space precedence (project → user → system).
3. Load metadata — Parse the file using AST (Python) or YAML to extract `__executor_id__`, `ENV_CONFIG`, `CONFIG`, `CONFIG_RESOLVE`, `anchor`, `verify_deps`, etc. `CONFIG_RESOLVE` (Python) or `config_resolve:` (YAML) declares a config file the tool needs from `.ai/config/`. The executor resolves it across all three tiers before spawning the tool. Two modes: `deep_merge` (system → user → project, all layers merged) and `first_match` (project → user → system, first found wins). The resolved config is injected into the tool's params as `resolved_config`.
4. Create ChainElement — Store the item_id, path, space, and all extracted metadata.
5. Check termination — If `executor_id` is `None`, this is a primitive; stop.
6. Recurse — Set `current_id = executor_id` and repeat from step 2.
7. Cache result — Store the chain with a combined SHA256 hash of all chain files.
Safety guards:

- Max depth: `MAX_CHAIN_DEPTH = 10` — prevents runaway nesting
- Circular detection: a `visited` set catches `A → B → A` cycles
- Missing executor: if an intermediate executor is not found, raises `ValueError`
The resulting chain is ordered `[tool, runtime, ..., primitive]` — the tool is at index 0, the primitive at the last index.
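The recursive walk and its safety guards can be sketched as follows. This is a minimal illustration, not the actual Rye implementation: the `resolve_metadata` callable stands in for path resolution plus metadata parsing, and the dict shapes are assumptions.

```python
MAX_CHAIN_DEPTH = 10  # mirrors the guard described above

def build_chain(item_id, resolve_metadata):
    """Follow __executor_id__ links until a primitive (executor_id=None).

    `resolve_metadata` returns a dict with at least an "executor_id" key,
    or None if the item cannot be found.
    """
    chain, visited = [], set()
    current_id = item_id
    while current_id is not None:
        if len(chain) >= MAX_CHAIN_DEPTH:
            raise ValueError(f"chain exceeds max depth {MAX_CHAIN_DEPTH}")
        if current_id in visited:  # circular detection: A -> B -> A
            raise ValueError(f"circular executor chain at {current_id!r}")
        visited.add(current_id)
        meta = resolve_metadata(current_id)
        if meta is None:  # missing intermediate executor
            raise ValueError(f"executor not found: {current_id!r}")
        chain.append({"item_id": current_id, **meta})
        current_id = meta.get("executor_id")
    return chain  # ordered [tool, runtime, ..., primitive]
```

The loop form avoids actual recursion while preserving the "repeat from step 2" behavior described above.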
Executing the bash tool `rye/bash`:

```
Step 1: Resolve "rye/bash"
        → .ai/tools/rye/bash.py (system space)
        → __executor_id__ = "rye/core/runtimes/python/script"

Step 2: Resolve "rye/core/runtimes/python/script"
        → .ai/tools/rye/core/runtimes/python/script.yaml (system space)
        → executor_id = "rye/core/primitives/execute"

Step 3: Resolve "rye/core/primitives/execute"
        → Matches PRIMITIVE_MAP key (no file needed)
        → executor_id = None (terminal)

Chain: [bash.py, python/script.yaml, execute primitive]
```
Before execution, `ChainValidator.validate_chain()` checks every adjacent pair (child, parent) in the chain:

Each space has a precedence number: `project=3`, `user=2`, `system=1`.

Rule: a child can only depend on elements with equal or lower precedence.
| Child Space | Can Depend On |
|---|---|
| project (3) | project, user, system |
| user (2) | user, system |
| system (1) | system only |
A user-space tool cannot depend on a project-space runtime, since the user tool would break as soon as it was used in a different project.
Additionally, the validator checks for "system → mutable" transitions within the chain. A system tool delegating to a project or user tool is always invalid.
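The precedence rule can be sketched as a single pass over adjacent pairs. This is illustrative, with assumed dict shapes rather than Rye's actual `ChainElement`; note that the general rule already covers the "system → mutable" case, since a project or user parent always has higher precedence than a system child.

```python
PRECEDENCE = {"project": 3, "user": 2, "system": 1}

def validate_chain_spaces(chain):
    """Check every adjacent (child, parent) pair in [tool, ..., primitive].

    A child may only depend on parents with equal or lower precedence.
    """
    for child, parent in zip(chain, chain[1:]):
        if PRECEDENCE[parent["space"]] > PRECEDENCE[child["space"]]:
            raise ValueError(
                f"{child['item_id']} ({child['space']}) cannot depend on "
                f"{parent['item_id']} ({parent['space']})"
            )
    return True
```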
If both child and parent declare input/output types, the parent's required inputs must be a subset of the child's outputs. Missing declarations are allowed (treated as compatible).
A parent element can specify child_constraints with min_version and max_version. The child's __version__ is checked against these constraints using semver comparison via the packaging library.
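Both optional checks can be sketched briefly. The field shapes here are assumptions; only the subset rule, the "missing declarations are compatible" behavior, and the use of the `packaging` library come from the text above.

```python
from packaging.version import Version

def check_io_compat(child_outputs, parent_inputs):
    """Parent's required inputs must be a subset of the child's outputs.

    Missing declarations (None) are treated as compatible.
    """
    if child_outputs is None or parent_inputs is None:
        return True
    return set(parent_inputs) <= set(child_outputs)

def check_version(child_version, constraints):
    """Check the child's __version__ against parent child_constraints."""
    v = Version(child_version)
    lo = constraints.get("min_version")
    hi = constraints.get("max_version")
    if lo is not None and v < Version(lo):
        return False
    if hi is not None and v > Version(hi):
        return False
    return True
```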
After chain validation, `_resolve_chain_env()` merges environment variables from all chain elements:

1. Process the chain in reverse order (primitive → runtime → tool).
2. For each element with `env_config`, call `EnvResolver.resolve()`:
   - Load the `.env` file from the project root
   - Resolve the interpreter path (venv, node_modules, system binary, or version manager)
   - Apply static env vars with `${VAR:-default}` expansion
3. Merge into the accumulated environment (later elements override).
The result is a fully-resolved environment dict passed to the Lillux primitive.
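The reverse-order merge can be sketched as below; `resolve_element_env` is a hypothetical stand-in for `EnvResolver.resolve()`, not Rye's real signature.

```python
def resolve_chain_env(chain, resolve_element_env):
    """Merge env contributions from the primitive up to the tool.

    The chain is ordered [tool, runtime, ..., primitive]; walking it in
    reverse means later (closer-to-tool) elements override earlier ones.
    """
    env = {}
    for element in reversed(chain):
        if element.get("env_config"):
            env.update(resolve_element_env(element))
    return env
```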
Runtimes can declare an `anchor` config for module resolution:

```yaml
anchor:
  enabled: true
  mode: auto                # auto, always, or never
  markers_any: ["__init__.py", "pyproject.toml"]  # activation markers
  root: tool_dir            # anchor root: tool_dir, tool_parent, project_path
  lib: lib/python           # runtime library path
  env_paths:
    PYTHONPATH:
      prepend: ["{anchor_path}", "{runtime_lib}"]
```

When active, the anchor system:
- Checks if marker files exist in the tool's directory (`mode: auto`)
- Resolves the anchor root path based on the `root` config
- Prepends/appends paths to environment variables like `PYTHONPATH`
- Runs dependency verification (`verify_deps`) — walks the anchor directory and calls `verify_item()` on every matching file before subprocess spawn
This allows tools with multi-file dependencies (e.g., a tool with a lib/ directory) to have their entire dependency tree verified and their module paths set up correctly.
When `mode: always` is set, the anchor activates unconditionally — marker files are not checked. This is used by runtimes like `state-graph/runtime` where the tool being executed (a graph YAML) won't have marker files in its directory, but the runtime still needs its `lib/python` path on `PYTHONPATH`.
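The activation decision reduces to a small predicate. This is a sketch of the rule described above, with an assumed function name:

```python
from pathlib import Path

def anchor_active(mode, tool_dir, markers_any):
    """Decide whether the anchor activates for a tool directory."""
    if mode == "always":
        return True   # unconditional, e.g. for state-graph/runtime
    if mode == "never":
        return False
    # mode == "auto": activate only if any marker file is present
    return any((Path(tool_dir) / m).exists() for m in markers_any)
```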
After anchor resolution and before execution config building, the executor resolves tool config files:

5.7 Resolve tool config — If the tool declares `config_resolve`, walk `.ai/config/` across system → user → project, merge or match, and inject into params as `resolved_config`.
Tools declare their config dependency via `CONFIG_RESOLVE` (Python) or `config_resolve:` (YAML):

```python
# Python tool
CONFIG_RESOLVE = {"path": "web/websearch.yaml", "mode": "deep_merge"}
```

```yaml
# YAML runtime/tool
config_resolve:
  path: agent/agent.yaml
  mode: deep_merge
```

The path is relative to `.ai/config/` in each space. In `deep_merge` mode, the executor loads the file from all tiers (system → user → project) and deep-merges them so project-level values override. In `first_match` mode, it walks project → user → system and returns the first file found without merging.
`_build_execution_config()` merges configs from the entire chain:

1. Start from the primitive and merge configs upward (tool configs override runtime configs).
2. Inject execution context: `tool_path`, `project_path`, `user_space`, `system_space`.
3. Serialize runtime parameters as `params_json` (used in `input_data` to pipe via stdin).
4. Run two-pass templating:
   - Pass 1: `${VAR}` — environment variable substitution (with shell escaping via `shlex.quote`)
   - Pass 2: `{param}` — config value substitution (iterates up to 3 times until stable)
Anchor context variables (`tool_path`, `tool_dir`, `tool_parent`, `anchor_path`, `runtime_lib`) are injected into the execution parameters before `_execute_chain()`. This makes them available as `{param}` template variables in subprocess args. User-supplied parameters with names matching primitive config keys (`command`, `args`) are excluded from the config merge to prevent collisions — they remain accessible only through `{params_json}`.
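The two-pass templating can be sketched for a single string value. This is a simplified model under assumed names; the real executor's escaping and iteration details may differ, though `shlex.quote` and the "up to 3 iterations until stable" behavior come from the text above.

```python
import re
import shlex

def render(template, env, params, max_iters=3):
    """Apply ${VAR} env substitution, then {param} config substitution."""
    # Pass 1: ${VAR} environment substitution, shell-escaped
    out = re.sub(
        r"\$\{(\w+)\}",
        lambda m: shlex.quote(env.get(m.group(1), "")),
        template,
    )
    # Pass 2: {param} config substitution, repeated until stable
    for _ in range(max_iters):
        new = re.sub(
            r"\{(\w+)\}",
            lambda m: str(params.get(m.group(1), m.group(0))),
            out,
        )
        if new == out:
            break
        out = new
    return out
```

Unknown `{param}` placeholders are left intact rather than replaced with empty strings, which keeps templating errors visible.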
Two caches with hash-based invalidation:

| Cache | Key | Invalidation |
|---|---|---|
| `_chain_cache` | `item_id → CacheEntry(chain, combined_hash)` | Combined SHA256 of all chain files — recomputed on lookup |
| `_metadata_cache` | `file path → CacheEntry(metadata, file_hash)` | SHA256 of file content — recomputed on lookup |
Cache invalidation is automatic: if any file in the chain changes (content hash differs), the cached entry is discarded and the chain is rebuilt from the filesystem.
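A minimal sketch of the lookup path, assuming dict-shaped cache entries (the real `CacheEntry` and hashing scheme may differ in detail):

```python
import hashlib

def combined_hash(files):
    """Combined SHA256 over a chain's file contents, in chain order."""
    h = hashlib.sha256()
    for contents in files:  # each is the raw bytes of one chain file
        h.update(hashlib.sha256(contents).digest())
    return h.hexdigest()

def lookup_chain(cache, item_id, current_files):
    """Return the cached chain only if every chain file is unchanged."""
    entry = cache.get(item_id)
    if entry is None or entry["combined_hash"] != combined_hash(current_files):
        return None  # stale or missing: caller rebuilds from the filesystem
    return entry["chain"]
```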
Pass `trace=True` to `PrimitiveExecutor.execute()` to get a detailed event log of every decision point alongside the normal result. The trace is returned in `ExecutionResult.trace` as a list of event dicts.
Trace events are emitted at each stage:
| Step | Fields | Purpose |
|---|---|---|
| `resolve` | `item_id`, `path`, `space`, `shadowed` | Which file was resolved and what it shadows |
| `verify_integrity` | `item_id`, `verified`, `key_fp` | Signature verification result per chain element |
| `resolve_env` | `contributed_by`, `keys` | Which env vars each chain element contributes |
The `shadowed` field in `resolve` events lists items in lower-precedence spaces that would have matched but were overridden:

```json
{
  "step": "resolve",
  "item_id": "rye/bash",
  "path": "/project/.ai/tools/rye/bash.py",
  "space": "project",
  "shadowed": [
    {"path": "/system/.ai/tools/rye/bash.py", "space": "system:ryeos-core"}
  ]
}
```

Trace mode has no effect on execution — the same chain is built, verified, and run. It only adds observability.