This manual is for Operators running PhariaEngine for their business or department. If you are more interested in developing Skills, or in interfacing with PhariaEngine to invoke deployed Skills, consult the developer or user manual instead.
We deploy PhariaEngine as a container image to the GitHub Container Registry. You can pull it with:

```shell
podman login ghcr.io -u $GITHUB_USER -p $GITHUB_TOKEN
podman pull ghcr.io/aleph-alpha/pharia-kernel/pharia-kernel:latest
podman tag ghcr.io/aleph-alpha/pharia-kernel/pharia-kernel:latest pharia-kernel
```

You can start the container and expose it on port 8081 like this:

```shell
podman run -v ./config.toml:/app/config.toml -p 8081:8081 --env-file .env pharia-kernel
```

We configure the bind address and port via the environment variable KERNEL_ADDRESS.
If not configured, it defaults to "0.0.0.0:8081". Binding to all interfaces is necessary inside the container, but when running locally it may trigger firewall warnings. Note that the Engine can be configured both via environment variables and via a config.toml file; mounting the config file is optional.
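The two scenarios suggest different bind addresses. A sketch, assuming only the KERNEL_ADDRESS variable and default documented above:

```shell
# Running the Kernel directly on the host (not in a container), bind
# only to loopback so the host firewall does not prompt:
export KERNEL_ADDRESS=127.0.0.1:8081

# Inside the container, keep the default 0.0.0.0:8081 so the published
# port mapping (-p 8081:8081) can reach the server.
```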
PhariaEngine aims to enable teams outside of the PhariaEngine Operators to deploy Skills in self-service. To achieve this, Skills are organized into namespaces. The PhariaEngine Operators maintain the list of namespaces and associate each one with a configuration file, which in turn is owned by a team.
Each namespace configuration typically resides in a Git repository owned by the team which owns the namespace. Changes to this file are detected automatically by the Engine. A namespace is also associated with a registry from which to load Skills. These Skill registries can either be directories in the filesystem (mostly used for development setups) or OCI registries (recommended for production).
The Engine provides a directory option for namespaces that serves all Skills from a given directory, without the need to configure them in a file.
You can activate this by setting:

```toml
[namespaces.dev]
directory = "skills"
```

This will serve all Skills in the skills directory under the namespace dev.
Note that you can also set the environment variable NAMESPACES__DEV__DIRECTORY=skills to achieve the same effect.
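The NAMESPACES__DEV__DIRECTORY form hints at a general mapping from config keys to environment variables. The sketch below derives that mapping from the examples in this manual (uppercase, `-` becomes `_`, `__` separates sections); the convention is inferred, not taken from a specification:

```shell
# Map a namespace name and config key to the corresponding
# environment variable name, as inferred from this manual's examples.
toml_key_to_env() {
  printf 'NAMESPACES__%s__%s\n' \
    "$(printf '%s' "$1" | tr 'a-z-' 'A-Z_')" \
    "$(printf '%s' "$2" | tr 'a-z-' 'A-Z_')"
}

toml_key_to_env dev directory          # → NAMESPACES__DEV__DIRECTORY
toml_key_to_env my-team registry-user  # → NAMESPACES__MY_TEAM__REGISTRY_USER
```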
A namespace can also be configured explicitly in the operator config.toml, pointing to a local namespace configuration file and skill directory:

```toml
# `my-team` is the name of this namespace
[namespaces.my-team]
# Path to the local namespace configuration file
config-url = "file://namespace.toml"
# Path to the local skill directory
path = "skills"
```

With the local configuration above, PhariaEngine will serve any Skill deployed in the skills subdirectory of its working directory under the namespace my-team. This is mostly intended for local development of Skills without a remote instance of PhariaEngine. To deploy Skills in production, it is recommended to use a remote namespace.
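For reference, the local configuration above implies a working directory laid out as sketched below. The file names come from the configuration; the skill file name is a placeholder:

```shell
# Sketch of the working directory for the local namespace setup.
mkdir -p kernel-workdir/skills
touch kernel-workdir/config.toml        # operator config with [namespaces.my-team]
touch kernel-workdir/namespace.toml     # namespace config referenced by config-url
touch kernel-workdir/skills/greet.wasm  # a compiled Skill (placeholder name)
ls kernel-workdir kernel-workdir/skills
```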
```toml
# `my-team` is the name of this namespace
[namespaces.my-team]
# The URL to the configuration listing the Skills of this namespace.
# PhariaEngine will use the contents of the `NAMESPACES__MY_TEAM__CONFIG_ACCESS_TOKEN`
# environment variable to access (authorize) the config.
config-url = "https://github.com/Aleph-Alpha/my-team/blob/main/config.toml"
# OCI registry to load Skills from.
# PhariaEngine will use the contents of the `NAMESPACES__MY_TEAM__REGISTRY_USER` and
# `NAMESPACES__MY_TEAM__REGISTRY_PASSWORD` environment variables to access (authorize) the registry.
registry = "registry.acme.com"
# The common prefix added to the skill name when composing the OCI repository,
# e.g. ${base-repository}/${skill_name}
base-repository = "my-org/my-team/skills"
```

With the remote configuration above, PhariaEngine will serve any Skill deployed to the specified OCI registry under the namespace my-team.
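To make the composition rule concrete, the sketch below assembles the OCI reference for a Skill from the settings above. The ${base-repository}/${skill_name} pattern is taken from the config comment; the skill name is hypothetical:

```shell
# Compose an OCI repository reference for a Skill from the namespace
# settings (values are the examples from the configuration above).
registry="registry.acme.com"
base_repository="my-org/my-team/skills"
skill_name="greet"  # hypothetical skill name

echo "${registry}/${base_repository}/${skill_name}"
# → registry.acme.com/my-org/my-team/skills/greet
```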
You can provide each namespace in PhariaEngine with credentials to authenticate against the specified OCI registry. Set the environment variables expected by the operator config:
```shell
NAMESPACES__MY_TEAM__REGISTRY_USER=Joe.Plumber
NAMESPACES__MY_TEAM__REGISTRY_PASSWORD=****
```

The environment variable NAMESPACE_UPDATE_INTERVAL controls how long the Engine waits between checks for changes in namespace configurations. The default is 10 seconds.
```shell
NAMESPACE_UPDATE_INTERVAL=10s
```

By default, only logs of ERROR level are output. You can change this by setting the LOG_LEVEL environment variable to one of trace, debug, info, warn, or error (the default).
```shell
LOG_LEVEL=info
```

PhariaEngine can be configured to use an OpenTelemetry Collector endpoint by setting the OTEL_ENDPOINT environment variable.
```shell
OTEL_ENDPOINT=http://127.0.0.1:4317
```

The ratio of traces that are sampled can be configured by setting OTEL_SAMPLING_RATIO to a value between 0.0 and 1.0, where 0.0 means no traces are sampled and 1.0 means all traces are sampled.
```shell
OTEL_SAMPLING_RATIO=0.1
```

For local testing, a supported collector like the Jaeger All-in-One executable can be used:
```shell
podman run -p 4317:4317 -p 16686:16686 jaegertracing/all-in-one
```

```shell
podman login ghcr.io -u $GITHUB_USER -p $GITHUB_TOKEN
podman pull ghcr.io/aleph-alpha/pharia-kernel/pharia-kernel:latest
podman tag ghcr.io/aleph-alpha/pharia-kernel/pharia-kernel:latest pharia-kernel
```

In order to run PhariaEngine, you need to provide a namespace configuration:
- Create a `skills` folder

  For local skill development, you need a folder that serves as a skill registry to store all compiled Skills.

  ```shell
  mkdir skills
  ```

  All Skills in this folder are exposed in the namespace "dev" via the environment variable NAMESPACES__DEV__DIRECTORY. Any changes in this folder will be picked up by PhariaEngine automatically. The config.toml and namespace.toml should not be provided.

- Start the container:

  ```shell
  podman run \
    -v ./skills:/app/skills \
    -e NAMESPACES__DEV__DIRECTORY="skills" \
    -e NAMESPACE_UPDATE_INTERVAL=1s \
    -p 8081:8081 \
    pharia-kernel
  ```
You can monitor your Skill by connecting PhariaEngine to an OpenTelemetry collector, e.g. Jaeger:

```shell
podman run -p 4317:4317 -p 16686:16686 jaegertracing/all-in-one
```

Specify the collector endpoint via the environment variable OTEL_ENDPOINT:
```shell
podman run \
  -v ./skills:/app/skills \
  -e NAMESPACES__DEV__DIRECTORY="skills" \
  -e NAMESPACE_UPDATE_INTERVAL=1s \
  -e OTEL_ENDPOINT=http://host.containers.internal:4317 \
  -p 8081:8081 \
  pharia-kernel
```

You can view the traces in your local Jaeger instance at http://localhost:16686 (the UI port published above).
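The -e flags above can equally be collected in a .env file and passed with --env-file, as in the config-mounting example earlier in this manual. A sketch, using only variable names and values that appear in this manual:

```shell
# .env for the local development setup; start the container with:
#   podman run -v ./skills:/app/skills --env-file .env -p 8081:8081 pharia-kernel
NAMESPACES__DEV__DIRECTORY=skills
NAMESPACE_UPDATE_INTERVAL=1s
OTEL_ENDPOINT=http://host.containers.internal:4317
LOG_LEVEL=info
```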