KDeps AI Agent Framework

Build, configure, and deploy AI agent workflows with simple YAML configuration

Introduction

KDeps is a framework for building, configuring, and deploying AI agent workflows through simple YAML configuration. It packages everything needed for RAG and AI agents, eliminating the complexity of building self-hosted APIs with LLMs.

Key Highlights

YAML-First Configuration

Build AI agents using simple, self-contained YAML configuration blocks. No complex programming is required: just define your resources and let KDeps handle the orchestration.

yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: my-agent
  version: "1.0.0"
  targetActionId: responseResource
settings:
  apiServerMode: true
  apiServer:
    portNum: 3000
    routes:
      - path: /api/v1/chat
        methods: [POST]

Local-First Execution

Run workflows instantly on your local machine with sub-second startup time. Docker is optional and only needed for deployment.

bash
# Run locally (instant startup)
kdeps run workflow.yaml

# Hot reload for development
kdeps run workflow.yaml --dev

Unified API

Access data from any source with just two functions: get() and set(). No more memorizing 15+ different function names.

yaml
# All of these work with get():
query: get('q')                    # Query parameter
auth: get('Authorization')         # Header
data: get('llmResource')           # Resource output
user: get('user_name', 'session')  # Session storage
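
The set() half of the API is not shown above. As a hedged sketch (the argument order and the save_* keys are illustrative assumptions, not taken from this page), writing values looks symmetrical to reading them:

yaml
# Illustrative only; verify the exact set() signature in the API reference:
save_user: set('user_name', 'Alice', 'session')     # Write to session storage
save_answer: set('last_answer', get('llmResource')) # Store a computed value for later steps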

Multi-LLM Support

Use any LLM backend, local or cloud, and mix and match different models in the same workflow (see the sketch after the table below).

| Backend | Description |
| --- | --- |
| Ollama | Local model serving (default) |
| OpenAI | GPT-4, GPT-3.5 |
| Anthropic | Claude models |
| Google | Gemini models |
| Mistral | Mistral AI |
| Together | Together AI |
| Groq | Fast inference |
| + more | VLLM, TGI, LocalAI, LlamaCpp |
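
Because each resource's chat block names its own model, mixing backends can be as simple as pointing two resources at different models. The sketch below is hedged: the cloud model identifier, and the assumption that the model name alone selects the backend, are not confirmed by this page.

yaml
# resources/local_llm.yaml - local Ollama model (matches the chatbot example below)
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: localLlmResource
  name: Local LLM
run:
  chat:
    model: llama3.2:1b
    prompt: "{{ get('q') }}"

# resources/cloud_llm.yaml - cloud-hosted model (identifier below is an assumption)
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: cloudLlmResource
  name: Cloud LLM
run:
  chat:
    model: gpt-4          # assumed OpenAI model identifier
    prompt: "{{ get('q') }}"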

Enterprise-Ready Features

  • Session persistence with SQLite or in-memory storage
  • Connection pooling for databases
  • Retry logic with exponential backoff
  • Response caching with TTL
  • CORS configuration for web applications
  • WebServer mode for static files and reverse proxying
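
For illustration, here is a hedged sketch of how a couple of these options might appear under settings; the cors and cache keys are hypothetical names, not the documented schema:

yaml
# Hypothetical key names for illustration only; consult the Configuration reference.
settings:
  apiServerMode: true
  apiServer:
    portNum: 3000
    cors:
      allowOrigins: ["https://app.example.com"]   # assumed key for CORS configuration
    routes:
      - path: /api/v1/chat
        methods: [POST]
        cache:
          ttl: 60s                                # assumed key for response caching with TTL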

Quick Start

bash
# Install KDeps (Mac/Linux)
curl -LsSf https://raw.githubusercontent.com/kdeps/kdeps/main/install.sh | sh

# Or via Homebrew (Mac)
brew install kdeps/tap/kdeps

# Create a new agent interactively
kdeps new my-agent

Example: Simple Chatbot

workflow.yaml

yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: chatbot
  version: "1.0.0"
  targetActionId: responseResource
settings:
  apiServerMode: true
  apiServer:
    portNum: 3000
    routes:
      - path: /api/v1/chat
        methods: [POST]
  agentSettings:
    models:
      - llama3.2:1b

resources/llm.yaml

yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: llmResource
  name: LLM Chat
run:
  chat:
    model: llama3.2:1b
    prompt: "{{ get('q') }}"
    jsonResponse: true
    jsonResponseKeys:
      - answer

resources/response.yaml

yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: responseResource
  requires:
    - llmResource
run:
  apiResponse:
    success: true
    response:
      data: get('llmResource')

Test it:

bash
kdeps run workflow.yaml
curl -X POST http://localhost:3000/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"q": "What is AI?"}'
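
Given the apiResponse block above and the jsonResponseKeys set on the LLM resource, the reply should be a JSON envelope roughly of this shape (the exact nesting is inferred from the config, not from captured output):

json
{
  "success": true,
  "response": {
    "data": {
      "answer": "..."
    }
  }
}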

Documentation

  • Getting Started
  • Configuration
  • Resources
  • Concepts
  • Deployment
  • Tutorials

Why KDeps v2?

| Feature | v1 (PKL) | v2 (YAML) |
| --- | --- | --- |
| Configuration | PKL (Apple's language) | Standard YAML |
| Functions | 15+ to learn | 2 (get, set) |
| Startup time | ~30 seconds | < 1 second |
| Docker | Required | Optional |
| Python env | Anaconda (~20GB) | uv (97% smaller) |
| Learning curve | 2-3 days | ~1 hour |

Community

Released under the MIT License.