Workflow
The `workflow.pkl` file contains configuration about the AI agent, namely:
- The AI agent `name`, `description`, `website`, `authors`, `documentation`, and `repository`.
- The semver `version` of this AI agent. Note on version: kdeps uses the version for mapping the graph-based dependency workflow execution order. For this reason, the version is required.
- The `targetActionID` resource to be executed when running the AI agent. This is the ID of the resource.
- Existing AI agent `workflows` to be reused in this AI agent. The agent needs to be installed first via the `kdeps install` command.
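Putting these fields together, a minimal `workflow.pkl` might look like the following sketch (all values are illustrative, and the exact field syntax should be checked against the kdeps schema):

```pkl
// Illustrative workflow.pkl sketch — field names taken from the list above,
// values are placeholders
name = "myAgent"
description = "An example AI agent"
website = "https://example.com"
authors {
  "Jane Doe"
}
documentation = "https://example.com/docs"
repository = "https://github.com/example/myAgent"
// Required: used to order the graph-based dependency workflow execution
version = "1.0.0"
// The resource ID executed when the agent runs
targetActionID = "responseResource"
// Previously installed agents to reuse (empty here)
workflows {}
```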
Settings
The `settings` block allows advanced configuration of the AI agent, covering API settings, routing, Ubuntu and Python packages, and default LLM models.
settings {
APIServerMode = true
APIServer {...}
agentSettings {...}
}
Overview
The `settings` block includes the following configurations:
- `APIServerMode`: A boolean flag that enables or disables API server mode for the project. When set to `false`, the default action is executed directly, and the program exits upon completion.
- `APIServer`: A configuration block that specifies API settings such as `hostIP`, `portNum`, and `routes`.
- `agentSettings`: A configuration block that includes settings for installing Anaconda, `condaPackages`, `pythonPackages`, custom or PPA Ubuntu `repositories`, Ubuntu `packages`, and Ollama LLM `models`.
API Server Settings
The `APIServer` block defines API routing configurations for the AI agent. These settings are only applied when `APIServerMode` is set to `true`.
- `hostIP` and `portNum`: Define the IP address and port for the Docker container. The default values are `"127.0.0.1"` for `hostIP` and `3000` for `portNum`.
TrustedProxies
The `trustedProxies` block sets the allowed `X-Forwarded-For` header addresses (IPv4, IPv6, or CIDR ranges), used to limit which requests are trusted by the service. You can obtain the client's IP address through `@(request.IP())`.
Example:
trustedProxies {
"127.0.0.1"
"192.168.1.2"
"10.0.0.0/8"
}
API Routes
API paths can be configured within the `routes` block. Each route is defined using a `new` block, specifying:
- `path`: The defined API endpoint, e.g. `"/api/v1/items"`.
- `methods`: HTTP methods allowed for the route. Supported HTTP methods include: `GET`, `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`, and `HEAD`.
Example:
routes {
new {
path = "/api/v1/user"
methods {
"GET"
}
}
new {
path = "/api/v1/items"
methods {
"POST"
}
}
}
Each route targets a single `targetActionID`, meaning every route points to the main action specified in the workflow configuration. If multiple routes are defined, you must use `skipCondition` logic to specify which route a resource should target. See the Workflow section for more details.
For instance, to run a resource only on the `"/api/v1/items"` route, you can define the following `skipCondition` logic:
local allowedPath = "/api/v1/items"
local requestPath = "@(request.path())"
skipCondition {
requestPath != allowedPath
}
In this example:
- The resource is skipped if the `skipCondition` evaluates to `true`.
- The resource runs only when the request path equals `"/api/v1/items"`.
For more details, refer to the Skip Conditions documentation.
Lambda Mode
When `APIServerMode` is set to `false` in the workflow configuration, the AI agent operates in a single-execution lambda mode. In this mode, the AI agent executes a specific task or serves a particular purpose, completing its function in a single, self-contained execution cycle.
For example, an AI agent in single-execution lambda mode might be used to analyze data from a form submission, generate a report, run as a scheduled `cron` job, or provide a response to a one-time query, without the need to maintain an ongoing state or connection.
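Enabling lambda mode is therefore just a matter of disabling the API server in the workflow settings:

```pkl
settings {
  // Run the default action once and exit instead of serving an API
  APIServerMode = false
}
```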
AI Agent Settings
This section contains the agent settings that will be used to build the agent's Docker image.
agentSettings {
installAnaconda = false
condaPackages { ... }
pythonPackages { ... }
repositories { ... }
packages { ... }
models { ... }
ollamaImageTag = "0.5.4"
env { ... }
args { ... }
}
Enabling Anaconda
- `installAnaconda`: Anaconda, "The Operating System for AI", will be installed when set to `true`. However, please note that if Anaconda is installed, the Docker image size will grow to more than 20 GB, and that does not include the additional `condaPackages`. Defaults to `false`.
Anaconda Packages
- `condaPackages`: Anaconda packages to be installed if `installAnaconda` is `true`. The environment, channel, and packages can be defined in a single entry.
condaPackages {
["base"] {
["main"] = "pip diffusers numpy"
["pytorch"] = "pytorch"
["conda-forge"] = "tensorflow pandas keras transformers"
}
}
This configuration will:
- Create the `base` isolated Anaconda environment.
- Use the `main` channel to install the `pip`, `diffusers`, and `numpy` Anaconda packages.
- Use the `pytorch` channel to install `pytorch`.
- Use the `conda-forge` channel to install `tensorflow`, `pandas`, `keras`, and `transformers`.
To use the isolated environment, the Python resource should specify the Anaconda environment via the `condaEnvironment` setting.
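For instance, a Python resource could select the `base` environment defined in the `condaPackages` example with a setting like this (a sketch; the surrounding resource fields are omitted and illustrative):

```pkl
// Inside a Python resource definition (other fields omitted)
python {
  // Must match an environment declared under condaPackages
  condaEnvironment = "base"
}
```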
Python Packages
Python packages can also be installed even without Anaconda.
pythonPackages {
"diffusers[torch]"
}
Ubuntu Repositories
Additional Ubuntu and Ubuntu PPA repositories can be defined in the `repositories` settings.
repositories {
"ppa:alex-p/tesseract-ocr-devel"
}
In this example, a PPA repository is added to install the latest `tesseract-ocr` package.
Ubuntu Packages
Specify the Ubuntu packages that should be pre-installed when building this image.
packages {
"tesseract-ocr"
"poppler-utils"
}
LLM Models
List the local Ollama LLM models that will be pre-installed. You can specify multiple models.
models {
"tinydolphin"
"llama3.3"
"llama3.2-vision"
"mistral"
"gemma"
}
Kdeps uses Ollama as its LLM backend. You can define as many Ollama-compatible models as needed to fit your use case.
For a comprehensive list of available Ollama compatible models, visit the Ollama model library.
Ollama Docker Image Tag
The `ollamaImageTag` configuration property allows you to dynamically specify the version of the Ollama base image tag used in your Docker image.
When used in conjunction with a GPU configuration in the `.kdeps.pkl` file, this setting can automatically adjust the image version to include hardware-specific extensions, such as `1.0.0-rocm` for AMD environments.
Arguments and Environment Variables
Kdeps allows you to define `ENV` (environment variables) that persist across both the Docker image and container runtime, and `ARG` (arguments) that are used for passing values during the build process.
To declare `ENV` or `ARG` parameters, use the `env` and `args` sections in your workflow configuration:
env {
["API_KEY"] = "example_value"
}
args {
["API_TOKEN"] = ""
}
In this example:
- `API_KEY` is declared as an environment variable with the value `"example_value"`. This variable will persist in both the Docker image and the container at runtime.
- `API_TOKEN` is an argument that does not have a default value and will accept a value at container runtime.
Environment File Support: Additionally, any `.env` file in your project will be automatically loaded via `kdeps run`, and the variables defined within it will populate the `env` or `args` sections accordingly.
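For example, a `.env` file matching the `API_KEY` and `API_TOKEN` declarations above could look like this (values are illustrative):

```shell
# Loaded automatically by `kdeps run`; these override matching env/args defaults
API_KEY=production_value
API_TOKEN=abc123
```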
Important Notes:
- `ENV` variables must always be assigned a value during declaration.
- `ARG` variables can be declared without a value (e.g., `""`). These will act as standalone runtime arguments.
- Values defined in the `.env` file will override default values for any matching `ENV` or `ARG` keys.