# KDeps

Build, configure, and deploy AI agent workflows with simple YAML configuration.

KDeps is a framework for building, configuring, and deploying AI agent workflows through simple YAML configuration. It packages everything needed for RAG and AI agents, eliminating the complexity of building self-hosted APIs with LLMs.

## YAML-First Configuration

Define AI agents with simple, readable, self-contained YAML blocks. No complex programming required: just define your resources and let KDeps handle the orchestration.
```yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: my-agent
  version: "1.0.0"
targetActionId: responseResource
settings:
  apiServerMode: true
  apiServer:
    portNum: 3000
    routes:
      - path: /api/v1/chat
        methods: [POST]
```

## Run Locally

Run workflows instantly on your local machine with sub-second startup time. Docker is optional and only needed for deployment.

```sh
# Run locally (instant startup)
kdeps run workflow.yaml
# Hot reload for development
kdeps run workflow.yaml --dev
```

## Unified Data Access

Access data from any source with just two functions: get() and set(). No more memorizing 15+ different function names.

```yaml
# All of these work with get():
query: get('q') # Query parameter
auth: get('Authorization') # Header
data: get('llmResource') # Resource output
user: get('user_name', 'session')   # Session storage
```
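The write side is `set()`. Its exact signature is not shown on this page, so the sketch below assumes it mirrors `get()` with a key, a value, and an optional scope:

```yaml
# Hypothetical sketch: assumes set(key, value, scope) mirrors get(key, scope);
# the exact signature is not documented here.
save: set('user_name', get('q'), 'session')   # Write to session storage
user: get('user_name', 'session')             # Read it back from any resource
```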
## Any LLM Backend

Use any LLM backend, local or cloud, and mix and match different models in the same workflow.

| Backend | Description |
|---|---|
| Ollama | Local model serving (default) |
| OpenAI | GPT-4, GPT-3.5 |
| Anthropic | Claude models |
| Google | Gemini models |
| Mistral | Mistral AI |
| Together | Together AI |
| Groq | Fast inference |
| + more | vLLM, TGI, LocalAI, LlamaCpp |
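Since models are declared in the workflow's `agentSettings`, mixing backends can be as simple as listing more than one model. The snippet below is a sketch under the assumption that cloud models are listed by name the same way local ones are; the exact backend and credential configuration may differ:

```yaml
# Hypothetical sketch: one workflow mixing a local and a cloud model.
# Assumes cloud models are declared by name like local ones; see the
# KDeps docs for backend selection and API-key setup.
agentSettings:
  models:
    - llama3.2:1b   # local, served via Ollama (the default backend)
    - gpt-4o        # assumed cloud entry via OpenAI; requires an API key
```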
## Quick Start

```sh
# Install KDeps (Mac/Linux)
curl -LsSf https://raw.githubusercontent.com/kdeps/kdeps/main/install.sh | sh
# Or via Homebrew (Mac)
brew install kdeps/tap/kdeps
# Create a new agent interactively
kdeps new my-agent
```

**workflow.yaml**
```yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: chatbot
  version: "1.0.0"
targetActionId: responseResource
settings:
  apiServerMode: true
  apiServer:
    portNum: 3000
    routes:
      - path: /api/v1/chat
        methods: [POST]
  agentSettings:
    models:
      - llama3.2:1b
```

**resources/llm.yaml**
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: llmResource
  name: LLM Chat
run:
  chat:
    model: llama3.2:1b
    prompt: "{{ get('q') }}"
    jsonResponse: true
    jsonResponseKeys:
      - answer
```

**resources/response.yaml**
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: responseResource
requires:
  - llmResource
run:
  apiResponse:
    success: true
    response:
      data: get('llmResource')
```

**Test it:**

```sh
kdeps run workflow.yaml
curl -X POST http://localhost:3000/api/v1/chat -d '{"q": "What is AI?"}'
```
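Given the `apiResponse` block above (`success: true`, `data` from the LLM resource) and the LLM's `jsonResponseKeys` of `answer`, a reply along these lines is expected. The shape is illustrative only, an assumed envelope rather than captured output:

```json
{
  "success": true,
  "data": {
    "answer": "AI is the simulation of human intelligence by machines..."
  }
}
```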
## v1 vs v2

| Feature | v1 (PKL) | v2 (YAML) |
|---|---|---|
| Configuration | PKL (Apple's language) | Standard YAML |
| Functions | 15+ to learn | 2 (get, set) |
| Startup time | ~30 seconds | < 1 second |
| Docker | Required | Optional |
| Python env | Anaconda (~20GB) | uv (97% smaller) |
| Learning curve | 2-3 days | ~1 hour |