YAML-First Configuration
Define workflows with simple, readable YAML. No complex programming required.
Build stateful REST APIs with YAML configuration - handle auth, data flow, storage, and validation without writing boilerplate code
KDeps is a YAML-based workflow orchestration framework for building stateful REST APIs. Built on ~92,000 lines of Go code with 70% test coverage, it packages AI tasks, data processing, and API integrations into portable units, eliminating boilerplate code for common patterns like authentication, data flow, storage, and validation.
Architecture: Clean architecture with 5 distinct layers (CLI → Executor → Parser → Domain → Infrastructure)
Scale: 218 source files, 26 CLI commands, 5 resource executor types, 14 working examples
Testing: 13 integration tests + 35 e2e scripts ensuring production readiness
Multi-Target: Native CLI, Docker containers, and WebAssembly for browser execution
Build workflows using simple, self-contained YAML configuration blocks. No complex programming required - just define your resources and let KDeps handle the orchestration.
```yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: my-agent
  version: "1.0.0"
targetActionId: responseResource
settings:
  apiServerMode: true
  apiServer:
    portNum: 16395
    routes:
      - path: /api/v1/chat
        methods: [POST]
```

Run workflows instantly on your local machine with sub-second startup time. Docker is optional and only needed for deployment.
```shell
# Run locally (instant startup)
kdeps run workflow.yaml

# Hot reload for development
kdeps run workflow.yaml --dev
```

Access data from any source with just two functions: get() and set(). No more memorizing 15+ different function names.
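This page demonstrates get() below but never shows set(). The fragment that follows is a sketch only: it assumes set() mirrors get(), including the optional scope argument — treat the exact signature as an assumption, not documented API.

```yaml
# Assumed counterpart to get() - signature and 'session' scope are assumptions:
save: set('user_name', get('q'), 'session')   # write a value to session storage
later: get('user_name', 'session')            # read it back in a later resource
```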
```yaml
# All of these work with get():
query: get('q')                    # Query parameter
auth: get('Authorization')         # Header
data: get('llmResource')           # Resource output
user: get('user_name', 'session')  # Session storage
```

KDeps v2 supports both traditional expr-lang and simpler Mustache-style expressions. Choose what fits your needs!
```yaml
# Traditional expr-lang (full power)
prompt: "{{ get('q') }}"
time: "{{ info('current_time') }}"

# Mustache (simpler - 56% less typing!)
prompt: "{{q}}"
time: "{{current_time}}"

# Mix them naturally in the same workflow
message: "Hello {{name}}, your score is {{ get('points') * 2 }}"
```

Key Benefits:
- `{{var}}` and `{{ var }}` are equivalent - whitespace inside the braces does not matter.

When to use:

| Mustache | expr-lang |
|---|---|
| Simple lookups: `{{name}}`, `{{email}}` | Functions and expressions: `{{ get('x') }}`, `{{ a + b }}` |

Use Ollama for local model serving, or connect to any OpenAI-compatible API endpoint.
| Backend | Description |
|---|---|
| Ollama | Local model serving (default) |
| OpenAI-compatible | Any API endpoint with OpenAI-compatible interface |
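The table above lists an OpenAI-compatible backend, but this page does not show how to point at one. The fragment below is a sketch only: the `apiBase` and `apiKey` field names are assumptions extrapolated from the `agentSettings` block shown later, not confirmed KDeps settings.

```yaml
# Hypothetical backend override - field names (apiBase, apiKey) are assumed:
agentSettings:
  models:
    - gpt-4o-mini
  apiBase: https://api.openai.com/v1     # any OpenAI-compatible endpoint
  apiKey: "{{ get('OPENAI_API_KEY') }}"  # read the key at runtime, don't hardcode it
```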
```shell
# Install KDeps (Mac/Linux)
curl -LsSf https://raw.githubusercontent.com/kdeps/kdeps/main/install.sh | sh

# Or via Homebrew (Mac)
brew install kdeps/tap/kdeps

# Create a new agent interactively
kdeps new my-agent
```

`workflow.yaml`:
```yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: chatbot
  version: "1.0.0"
targetActionId: responseResource
settings:
  apiServerMode: true
  apiServer:
    portNum: 16395
    routes:
      - path: /api/v1/chat
        methods: [POST]
  agentSettings:
    models:
      - llama3.2:1b
```

`resources/llm.yaml`:
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: llmResource
  name: LLM Chat
run:
  chat:
    model: llama3.2:1b
    prompt: "{{ get('q') }}"
    jsonResponse: true
    jsonResponseKeys:
      - answer
```

`resources/response.yaml`:
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: responseResource
requires:
  - llmResource
run:
  apiResponse:
    success: true
    response:
      data: get('llmResource')
```

Test it:
```shell
kdeps run workflow.yaml
curl -X POST http://localhost:16395/api/v1/chat -d '{"q": "What is AI?"}'
```

KDeps implements clean architecture with five distinct layers:
```
┌────────────────────────────────────────────────────┐
│ CLI Layer (cmd/)                                   │
│ 26 commands: run, build, validate, package, new... │
└────────────────────────────────────────────────────┘
                          ↓
┌────────────────────────────────────────────────────┐
│ Execution Engine (pkg/executor/)                   │
│ Graph → Engine → Context → Resource Executors      │
└────────────────────────────────────────────────────┘
                          ↓
┌────────────────────────────────────────────────────┐
│ Parser & Validator (pkg/parser, validator)         │
│ YAML parsing, expression evaluation                │
└────────────────────────────────────────────────────┘
                          ↓
┌────────────────────────────────────────────────────┐
│ Domain Models (pkg/domain/)                        │
│ Workflow, Resource, RunConfig, Settings            │
└────────────────────────────────────────────────────┘
                          ↓
┌────────────────────────────────────────────────────┐
│ Infrastructure (pkg/infra/)                        │
│ Docker, HTTP, Storage, Python, Cloud, ISO, WASM    │
└────────────────────────────────────────────────────┘
```

Five built-in executor types handle different workloads:
| Executor | Implementation | Features |
|---|---|---|
| LLM | 8 files | Ollama, OpenAI-compatible, streaming, tools |
| HTTP | 2 files | REST APIs, auth, retries, caching |
| SQL | 4 files | 5 database drivers, connection pooling |
| Python | 3 files | uv integration (97% smaller images) |
| Exec | 3 files | Secure shell command execution |
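For comparison with the chat resource shown earlier, an HTTP resource might look like the sketch below. The `httpClient` key and its fields (`url`, `method`, `headers`) are assumptions extrapolated from the `run:`/`chat:` pattern above, not verified schema.

```yaml
# Hypothetical HTTP resource - executor key and field names are assumed:
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: weatherResource
run:
  httpClient:
    url: https://api.example.com/weather?q={{q}}   # query param reused via Mustache
    method: GET
    headers:
      Authorization: "Bearer {{ get('API_TOKEN') }}"
```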
| Feature | v1 (PKL) | v2 (YAML) |
|---|---|---|
| Configuration | PKL (Apple's language) | Standard YAML |
| Functions | 15+ to learn | 2 (get, set) |
| Startup time | ~30 seconds | < 1 second |
| Docker | Required | Optional |
| Python env | Anaconda (~20GB) | uv (97% smaller) |
| Learning curve | 2-3 days | ~1 hour |
Explore working examples: