Multi-Source I/O
Accept input from audio hardware, cameras, telephony, and HTTP APIs, all simultaneously. Transcribe with local Whisper or cloud STT.

Edge AI Workflow Framework
Build AI agents for edge devices and APIs with YAML: audio, video, and telephony input, offline LLMs, wake-phrase activation, and speech output. No cloud required.
KDeps is a YAML-based workflow framework for building AI agents on edge devices and API backends. It combines multi-source hardware I/O (audio, video, telephony), offline-capable LLMs, speech recognition, wake-phrase activation, and text-to-speech into portable, self-contained units that run anywhere, from Raspberry Pi to cloud servers.
KDeps accepts input from hardware devices and HTTP APIs simultaneously. Configure audio, video, telephony, and API sources in one workflow.yaml:
```yaml
settings:
  input:
    sources: [audio]        # audio | video | telephony | api
    audio:
      device: hw:0,0        # ALSA device (Linux), microphone name (macOS/Windows)
    activation:
      phrase: "hey kdeps"   # Wake phrase: workflow runs only when heard
      mode: offline
      offline:
        engine: faster-whisper
        model: small
    transcriber:
      mode: offline         # Fully local, no cloud required
      output: text
      offline:
        engine: faster-whisper
        model: small
```

| Source | Hardware |
|---|---|
| audio | ALSA microphone, line-in, USB audio |
| video | V4L2 camera, USB webcam, CSI camera |
| telephony | SIP/ATA adapter, Twilio |
| api | HTTP REST (default) |
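The prose above says sources can be combined in one workflow; here is a minimal sketch of a two-source setup. The `video.device` key is an assumption, inferred by analogy with the `audio.device` key shown earlier:

```yaml
# Hedged sketch: audio and video feeding the same workflow.
# The 'video' block's key names are assumed, mirroring 'audio'.
settings:
  input:
    sources: [audio, video]
    audio:
      device: hw:0,0          # ALSA microphone
    video:
      device: /dev/video0     # V4L2 camera (key name assumed)
```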
Every AI component has an offline alternative, so you can run completely air-gapped:
| Component | Offline Options | Cloud Options |
|---|---|---|
| LLM | Ollama (llama3, mistral, phi) | OpenAI, Anthropic, Google, Groq |
| STT | Whisper, Faster-Whisper, Vosk, Whisper.cpp | OpenAI Whisper API, Deepgram, Google STT |
| TTS | Piper, eSpeak-NG, Festival, Coqui TTS | OpenAI TTS, ElevenLabs, Azure TTS |
| Wake Phrase | Faster-Whisper, Vosk | Deepgram, AssemblyAI |
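Building on the transcriber block from the first example, a sketch of swapping between two of the offline STT engines listed above. The `cloud:` sub-block is a hypothetical counterpart, assumed by analogy with the documented `offline:` sub-block:

```yaml
# Offline: swap Faster-Whisper for Vosk without touching the rest
transcriber:
  mode: offline
  offline:
    engine: vosk        # any offline engine from the table above
    model: small

# Cloud: 'cloud:' sub-block and 'engine: deepgram' are assumptions,
# shown by analogy with the 'offline:' block
# transcriber:
#   mode: cloud
#   cloud:
#     engine: deepgram
```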
Build workflows from simple, self-contained YAML configuration blocks. No complex programming is required: define your resources and let KDeps handle the orchestration.
```yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: my-agent
  version: "1.0.0"
targetActionId: responseResource
settings:
  apiServerMode: true
  apiServer:
    portNum: 16395
    routes:
      - path: /api/v1/chat
        methods: [POST]
```

Run workflows instantly on your local machine with sub-second startup time. Docker is optional and only needed for deployment.
```shell
# Run locally (instant startup)
kdeps run workflow.yaml

# Hot reload for development
kdeps run workflow.yaml --dev
```

Access data from any source with just two functions: get() and set(). No more memorizing 15+ different function names.
```yaml
# All of these work with get():
query: get('q')                      # Query parameter
auth: get('Authorization')           # Header
data: get('llmResource')             # Resource output
user: get('user_name', 'session')    # Session storage
```

KDeps v2 supports both expr-lang and Mustache-style variable interpolation:
```yaml
# expr-lang (functions and logic)
prompt: "{{ get('q') }}"
time: "{{ info('current_time') }}"

# Mustache (simple variable access)
prompt: "{{q}}"
time: "{{current_time}}"

# Mix in the same workflow
message: "Hello {{name}}, your score is {{ get('points') * 2 }}"
```

Use Mustache for simple variable access; use expr-lang for function calls, arithmetic, and conditionals. `{{q}}` and `{{ get('q') }}` are identical.
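Conditionals are mentioned above but not shown; a sketch using expr-lang's standard ternary operator, where the `score` key is hypothetical:

```yaml
# Hedged sketch: expr-lang ternary for conditional output.
# 'score' is a hypothetical stored value.
grade: "{{ get('score') >= 80 ? 'pass' : 'fail' }}"
```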
Use Ollama for local model serving or any OpenAI-compatible API. Vision, tools, and streaming are supported.
```shell
# Install KDeps (Mac/Linux)
curl -LsSf https://raw.githubusercontent.com/kdeps/kdeps/main/install.sh | sh

# Or via Homebrew (Mac)
brew install kdeps/tap/kdeps

# Create a new agent interactively
kdeps new my-agent
```

workflow.yaml
```yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: chatbot
  version: "1.0.0"
targetActionId: responseResource
settings:
  apiServerMode: true
  apiServer:
    portNum: 16395
    routes:
      - path: /api/v1/chat
        methods: [POST]
  agentSettings:
    models:
      - llama3.2:1b
```

resources/llm.yaml
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: llmResource
  name: LLM Chat
run:
  chat:
    model: llama3.2:1b
    prompt: "{{ get('q') }}"
    jsonResponse: true
    jsonResponseKeys:
      - answer
```

resources/response.yaml
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: responseResource
requires:
  - llmResource
run:
  apiResponse:
    success: true
    response:
      data: get('llmResource')
```

Test it:
```shell
kdeps run workflow.yaml
curl -X POST http://localhost:16395/api/v1/chat -d '{"q": "What is AI?"}'
```

| Feature | v1 (PKL) | v2 (YAML) |
|---|---|---|
| Configuration | PKL (Apple's language) | Standard YAML |
| Functions | 15+ to learn | 2 (get, set) |
| Startup time | ~30 seconds | < 1 second |
| Docker | Required | Optional |
| Python env | Anaconda (~20GB) | uv (97% smaller) |
| Learning curve | 2-3 days | ~1 hour |
Explore working examples:
Edge AI / Voice:
API Backends: