Kdeps is an all-in-one AI framework for building Dockerized, full-stack AI applications (frontend and backend) that ship with open-source LLMs out of the box.

## Key Features

Kdeps is loaded with powerful features to streamline AI app development:

🧩 **Low-code/no-code capabilities**: Build production-ready, full-stack AI apps with minimal code, keeping development accessible to non-technical users.
```pkl
// workflow.pkl
name = "ticketResolutionAgent"
description = "Automates customer support ticket resolution with LLM responses."
version = "1.0.0"
targetActionID = "responseResource"
settings {
  APIServerMode = true
  APIServer {
    hostIP = "127.0.0.1"
    portNum = 3000
    routes {
      new { path = "/api/v1/ticket"; methods { "POST" } }
    }
    cors { enableCORS = true; allowOrigins { "http://localhost:8080" } }
  }
  agentSettings {
    timezone = "Etc/UTC"
    models { "llama3.2:1b" }
    ollamaImageTag = "0.6.8"
  }
}
```
```pkl
// resources/fetch_data.pkl
actionID = "httpFetchResource"
name = "CRM Fetch"
description = "Fetches ticket data via CRM API."
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/ticket" }
  preflightCheck {
    validations { "@(request.data().ticket_id)" != "" }
  }
  HTTPClient {
    method = "GET"
    url = "https://crm.example.com/api/ticket/@(request.data().ticket_id)"
    headers { ["Authorization"] = "Bearer @(session.getRecord('crm_token'))" }
    timeoutDuration = 30.s
  }
}
```
```pkl
// resources/llm.pkl
actionID = "llmResource"
name = "LLM Ticket Response"
description = "Generates responses for customer tickets."
requires { "httpFetchResource" }
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/ticket" }
  chat {
    model = "llama3.2:1b"
    role = "assistant"
    prompt = "Provide a professional response to the customer query: @(request.data().query)"
    scenario {
      new { role = "system"; prompt = "You are a customer support assistant. Be polite and concise." }
      new { role = "system"; prompt = "Ticket data: @(client.responseBody("httpFetchResource"))" }
    }
    JSONResponse = true
    JSONResponseKeys { "response_text" }
    timeoutDuration = 60.s
  }
}
```
```pkl
// resources/response.pkl
actionID = "responseResource"
name = "API Response"
description = "Returns ticket resolution response."
requires { "llmResource" }
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/ticket" }
  APIResponse {
    success = true
    response {
      data { "@(llm.response('llmResource'))" }
    }
    meta { headers { ["Content-Type"] = "application/json" } }
  }
}
```
🐳 **Dockerized full-stack AI apps**: Build applications with batteries included for seamless development and deployment, as detailed in the AI agent settings.
```shell
# Creating a Docker image of the kdeps AI agent is easy!
# First, package the AI agent project.
$ kdeps package tickets-ai/
INFO kdeps package created package-file=tickets-ai-1.0.0.kdeps
# Then build a docker image and run.
$ kdeps run tickets-ai-1.0.0.kdeps
# It also generates a Docker Compose configuration file.
```
```yaml
# docker-compose.yml
version: '3.8'
services:
  kdeps-tickets-ai-cpu:
    image: kdeps-tickets-ai:1.0.0
    ports:
      - "127.0.0.1:3000"
    restart: on-failure
    volumes:
      - ollama:/root/.ollama
      - kdeps:/.kdeps
volumes:
  ollama:
    external:
      name: ollama
  kdeps:
    external:
      name: kdeps
```
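With the image built, the generated Compose file can start the agent directly; a typical invocation (assuming Docker Compose v2) looks like:
```shell
# Start the packaged agent in the background
docker compose up -d
# Follow the logs to confirm the API server is listening on port 3000
docker compose logs -f kdeps-tickets-ai-cpu
```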
🖼️ **Support for vision and multimodal LLMs**: Process text, images, and other data types in a single workflow.
```pkl
// workflow.pkl
name = "visualTicketAnalyzer"
description = "Analyzes images in support tickets for defects using a vision model."
version = "1.0.0"
targetActionID = "responseResource"
settings {
  APIServerMode = true
  APIServer {
    hostIP = "127.0.0.1"
    portNum = 3000
    routes {
      new { path = "/api/v1/visual-ticket"; methods { "POST" } }
    }
    cors { enableCORS = true; allowOrigins { "http://localhost:8080" } }
  }
  agentSettings {
    timezone = "Etc/UTC"
    models { "llama3.2-vision" }
    ollamaImageTag = "0.6.8"
  }
}
```
```pkl
// resources/fetch_data.pkl
actionID = "httpFetchResource"
name = "CRM Fetch"
description = "Fetches ticket data via CRM API."
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/visual-ticket" }
  preflightCheck {
    validations { "@(request.data().ticket_id)" != "" }
  }
  HTTPClient {
    method = "GET"
    url = "https://crm.example.com/api/ticket/@(request.data().ticket_id)"
    headers { ["Authorization"] = "Bearer @(session.getRecord('crm_token'))" }
    timeoutDuration = 30.s
  }
}
```
```pkl
// resources/llm.pkl
actionID = "llmResource"
name = "Visual Defect Analyzer"
description = "Analyzes ticket images for defects."
requires { "httpFetchResource" }
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/visual-ticket" }
  preflightCheck {
    validations { "@(request.filecount())" > 0 }
  }
  chat {
    model = "llama3.2-vision"
    role = "assistant"
    prompt = "Analyze the image for product defects and describe any issues found."
    files { "@(request.files()[0])" }
    scenario {
      new { role = "system"; prompt = "You are a support assistant specializing in visual defect detection." }
      new { role = "system"; prompt = "Ticket data: @(client.responseBody("httpFetchResource"))" }
    }
    JSONResponse = true
    JSONResponseKeys { "defect_description"; "severity" }
    timeoutDuration = 60.s
  }
}
```
```pkl
// resources/response.pkl
actionID = "responseResource"
name = "API Response"
description = "Returns defect analysis result."
requires { "llmResource" }
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/visual-ticket" }
  APIResponse {
    success = true
    response {
      data { "@(llm.response('llmResource'))" }
    }
    meta { headers { ["Content-Type"] = "application/json" } }
  }
}
```
🔌 **Create custom AI APIs**: Serve open-source LLMs behind your own API endpoints for robust AI-driven applications; an example request is shown below.
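For instance, with the ticket-resolution agent above running locally, a client could exercise its route like this (hypothetical payload; the response shape is defined by the `APIResponse` block of `responseResource`):
```shell
# Hypothetical request against the /api/v1/ticket route defined earlier
curl -X POST http://127.0.0.1:3000/api/v1/ticket \
  -H "Content-Type: application/json" \
  -d '{"ticket_id": "TCK-1042", "query": "My order arrived damaged."}'
```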
🌐 **Pair APIs with frontend apps**: Integrate with frontend apps like Streamlit, Node.js, and more for interactive AI-driven user interfaces, as outlined in the web server settings.
```pkl
// workflow.pkl
name = "frontendAIApp"
description = "Pairs an AI API with a Streamlit frontend for text summarization."
version = "1.0.0"
targetActionID = "responseResource"
settings {
  APIServerMode = true
  WebServerMode = true
  APIServer {
    hostIP = "127.0.0.1"
    portNum = 3000
    routes {
      new { path = "/api/v1/summarize"; methods { "POST" } }
    }
  }
  WebServer {
    hostIP = "127.0.0.1"
    portNum = 8501
    routes {
      new {
        path = "/app"
        publicPath = "/fe/1.0.0/web/"
        serverType = "app"
        appPort = 8501
        command = "streamlit run app.py"
      }
    }
  }
  agentSettings {
    timezone = "Etc/UTC"
    pythonPackages { "streamlit" }
    models { "llama3.2:1b" }
    ollamaImageTag = "0.6.8"
  }
}
```
```python
# data/fe/web/app.py (Streamlit frontend)
import streamlit as st
import requests

st.title("Text Summarizer")
text = st.text_area("Enter text to summarize")
if st.button("Summarize"):
  response = requests.post("http://localhost:3000/api/v1/summarize", json={"text": text})
  if response.ok:
    st.write(response.json()['response']['data']['summary'])
  else:
    st.error("Error summarizing text")
```
```pkl
// resources/llm.pkl
actionID = "llmResource"
name = "Text Summarizer"
description = "Summarizes input text using an LLM."
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/summarize" }
  chat {
    model = "llama3.2:1b"
    role = "assistant"
    prompt = "Summarize this text in 50 words or less: @(request.data().text)"
    JSONResponse = true
    JSONResponseKeys { "summary" }
    timeoutDuration = 60.s
  }
}
```
🛠️ **Let LLMs run tools automatically (in the style of MCP or A2A)**: Extend functionality with external tools, scripts, and sequential, chained tool pipelines.
```pkl
// workflow.pkl
name = "toolChainingAgent"
description = "Uses LLM to query a database and generate a report via tools."
version = "1.0.0"
targetActionID = "responseResource"
settings {
  APIServerMode = true
  APIServer {
    hostIP = "127.0.0.1"
    portNum = 3000
    routes {
      new { path = "/api/v1/report"; methods { "POST" } }
    }
  }
  agentSettings {
    timezone = "Etc/UTC"
    models { "llama3.2:1b" }
    ollamaImageTag = "0.6.8"
  }
}
```
```pkl
// resources/llm.pkl
actionID = "llmResource"
name = "Report Generator"
description = "Generates a report using a database query tool."
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/report" }
  chat {
    model = "llama3.2:1b"
    role = "assistant"
    prompt = "Generate a sales report based on database query results. Date range: @(request.params("date_range"))"
    tools {
      new {
        name = "query_sales_db"
        script = "@(data.filepath('tools/1.0.0', 'query_sales.py'))"
        description = "Queries the sales database for recent transactions"
        parameters {
          ["date_range"] { required = true; type = "string"; description = "Date range for query (e.g., '2025-01-01:2025-05-01')" }
        }
      }
    }
    JSONResponse = true
    JSONResponseKeys { "report" }
    timeoutDuration = 60.s
  }
}
```
```python
# data/tools/query_sales.py
import sqlite3
import sys

def query_sales(date_range):
  start, end = date_range.split(':')
  conn = sqlite3.connect('sales.db')
  cursor = conn.execute("SELECT * FROM transactions WHERE date BETWEEN ? AND ?", (start, end))
  results = cursor.fetchall()
  conn.close()
  return results

print(query_sales(sys.argv[1]))
```

## Additional Features

📈 **Context-aware RAG workflows**: Enable accurate, knowledge-intensive tasks with retrieval-augmented generation (RAG) workflows, as sketched below.
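A dedicated RAG primitive is not required for a basic setup: a retrieval resource can feed its results into a chat step through `client.responseBody()`. A minimal sketch, assuming a hypothetical `kbFetchResource` HTTPClient resource that queries your knowledge base:
```pkl
// resources/rag_llm.pkl (sketch; kbFetchResource and its endpoint are illustrative)
actionID = "ragLLMResource"
name = "RAG Answerer"
description = "Answers questions grounded in retrieved context."
requires { "kbFetchResource" } // hypothetical HTTPClient resource retrieving relevant passages
run {
  chat {
    model = "llama3.2:1b"
    role = "assistant"
    prompt = "Answer the question using only the provided context: @(request.data().question)"
    scenario {
      new { role = "system"; prompt = "Context: @(client.responseBody('kbFetchResource'))" }
    }
    JSONResponse = true
    JSONResponseKeys { "answer" }
    timeoutDuration = 60.s
  }
}
```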
📊 **Generate structured outputs**: Create consistent, machine-readable responses from LLMs, as described in the chat block documentation.
```pkl
// workflow.pkl
name = "structuredOutputAgent"
description = "Generates structured JSON responses from LLM."
version = "1.0.0"
targetActionID = "responseResource"
settings {
  APIServerMode = true
  APIServer {
    hostIP = "127.0.0.1"
    portNum = 3000
    routes {
      new { path = "/api/v1/structured"; methods { "POST" } }
    }
  }
  agentSettings {
    timezone = "Etc/UTC"
    models { "llama3.2:1b" }
    ollamaImageTag = "0.6.8"
  }
}
```
```pkl
// resources/llm.pkl
actionID = "llmResource"
name = "Structured Response Generator"
description = "Generates structured JSON output."
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/structured" }
  chat {
    model = "llama3.2:1b"
    role = "assistant"
    prompt = "Analyze this text and return a structured response: @(request.data().text)"
    JSONResponse = true
    JSONResponseKeys { "summary"; "keywords" }
    timeoutDuration = 60.s
  }
}
```
🔄 **Items iteration**: Iterate over multiple items in a resource and process them sequentially using `item.current()`, `item.prev()`, and `item.next()`.
```pkl
// workflow.pkl
name = "mtvScenarioGenerator"
description = "Generates MTV video scenarios based on song lyrics."
version = "1.0.0"
targetActionID = "responseResource"
settings {
  APIServerMode = true
  APIServer {
    hostIP = "127.0.0.1"
    portNum = 3000
    routes {
      new { path = "/api/v1/mtv-scenarios"; methods { "GET" } }
    }
    cors { enableCORS = true; allowOrigins { "http://localhost:8080" } }
  }
  agentSettings {
    timezone = "Etc/UTC"
    models { "llama3.2:1b" }
    ollamaImageTag = "0.6.8"
  }
}
```
```pkl
// resources/llm.pkl
actionID = "llmResource"
name = "MTV Scenario Generator"
description = "Generates MTV video scenarios for song lyrics."
items {
  "A long, long time ago"
  "I can still remember"
  "How that music used to make me smile"
  "And I knew if I had my chance"
}
run {
  restrictToHTTPMethods { "GET" }
  restrictToRoutes { "/api/v1/mtv-scenarios" }
  skipCondition {
    "@(item.current())" == "And I knew if I had my chance" // Skip this lyric
  }
  chat {
    model = "llama3.2:1b"
    role = "assistant"
    prompt = """
    Based on the lyric @(item.current()) from the song "American Pie," generate a suitable scenario for an MTV music video. The scenario should include a vivid setting, key visual elements, and a mood that matches the lyric's tone.
    """
    scenario {
      new { role = "system"; prompt = "You are a creative director specializing in music video production." }
    }
    JSONResponse = true
    JSONResponseKeys { "setting"; "visual_elements"; "mood" }
    timeoutDuration = 60.s
  }
}
```
```pkl
// resources/response.pkl
actionID = "responseResource"
name = "API Response"
description = "Returns MTV video scenarios."
requires { "llmResource" }
run {
  restrictToHTTPMethods { "GET" }
  restrictToRoutes { "/api/v1/mtv-scenarios" }
  APIResponse {
    success = true
    response {
      data { "@(llm.response('llmResource'))" }
    }
    meta { headers { ["Content-Type"] = "application/json" } }
  }
}
```
🤖 **Leverage multiple open-source LLMs**: Use LLMs from Ollama and Hugging Face for diverse AI capabilities.
```pkl
// workflow.pkl
models {
  "tinydolphin"
  "llama3.3"
  "llama3.2-vision"
  "llama3.2:1b"
  "mistral"
  "gemma"
  "mistral"
}
```
🗂️ **Upload documents or files**: Process uploaded files with LLMs, ideal for document analysis tasks, as shown in the file upload tutorial.
```pkl
// workflow.pkl
name = "docAnalysisAgent"
description = "Analyzes uploaded documents with LLM."
version = "1.0.0"
targetActionID = "responseResource"
settings {
  APIServerMode = true
  APIServer {
    hostIP = "127.0.0.1"
    portNum = 3000
    routes {
      new { path = "/api/v1/doc-analyze"; methods { "POST" } }
    }
  }
  agentSettings {
    timezone = "Etc/UTC"
    models { "llama3.2-vision" }
    ollamaImageTag = "0.6.8"
  }
}
```
```pkl
// resources/llm.pkl
actionID = "llmResource"
name = "Document Analyzer"
description = "Extracts text from uploaded documents."
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/doc-analyze" }
  preflightCheck {
    validations { "@(request.filecount())" > 0 }
  }
  chat {
    model = "llama3.2-vision"
    role = "assistant"
    prompt = "Extract key information from this document."
    files { "@(request.files()[0])" }
    JSONResponse = true
    JSONResponseKeys { "key_info" }
    timeoutDuration = 60.s
  }
}
```
🔄 **Reusable AI agents**: Compose flexible workflows from reusable AI agents.
```pkl
// workflow.pkl
name = "docAnalysisAgent"
description = "Analyzes uploaded documents with LLM."
version = "1.0.0"
targetActionID = "responseResource"
workflows { "@ticketResolutionAgent" }
settings {
  APIServerMode = true
  APIServer {
    hostIP = "127.0.0.1"
    portNum = 3000
    routes {
      new { path = "/api/v1/doc-analyze"; methods { "POST" } }
    }
  }
  agentSettings {
    timezone = "Etc/UTC"
    models { "llama3.2-vision" }
    ollamaImageTag = "0.6.8"
  }
}
```
```pkl
// resources/response.pkl
actionID = "responseResource"
name = "API Response"
description = "Returns defect analysis result."
requires {
  "llmResource"
  "@ticketResolutionAgent/llmResource:1.0.0"
}
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/doc-analyze" }
  APIResponse {
    success = true
    response {
      data {
        "@(llm.response("llmResource"))"
        "@(llm.response('@ticketResolutionAgent/llmResource:1.0.0'))"
      }
    }
    meta { headers { ["Content-Type"] = "application/json" } }
  }
}
```
🐍 **Execute Python in isolated environments**: Run Python code securely in isolated Anaconda environments.
```pkl
// resources/python.pkl
actionID = "pythonResource"
name = "Data Formatter"
description = "Formats extracted data for storage."
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/scan-document" }
  python {
    script = """
import pandas as pd

def format_data(data):
  df = pd.DataFrame([data])
  return df.to_json()

print(format_data(@(llm.response('llmResource'))))
"""
    timeoutDuration = 60.s
  }
}
```
🌍 **Make API calls**: Perform API calls directly from configuration, as detailed in the client documentation.
```pkl
// resources/http_client.pkl
actionID = "httpResource"
name = "DMS Submission"
description = "Submits extracted data to document management system."
run {
  restrictToHTTPMethods { "POST" }
  restrictToRoutes { "/api/v1/scan-document" }
  HTTPClient {
    method = "POST"
    url = "https://dms.example.com/api/documents"
    data { "@(python.stdout('pythonResource'))" }
    headers { ["Authorization"] = "Bearer @(session.getRecord('dms_token'))" }
    timeoutDuration = 30.s
  }
}
```
🚀 **Run in Lambda or API mode**: Operate in Lambda mode or API mode for flexible deployment; a Lambda-style sketch follows.
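The examples above all run in API mode (`APIServerMode = true`). For one-shot, Lambda-style execution, the API server can be left disabled; a minimal sketch, assuming the agent then executes `targetActionID` once and exits:
```pkl
// workflow.pkl (sketch: Lambda-style one-shot execution)
name = "batchSummarizer"
description = "Runs a single summarization pass without an HTTP server."
version = "1.0.0"
targetActionID = "llmResource"
settings {
  APIServerMode = false // assumption: with the API server off, the workflow runs once and exits
  agentSettings {
    timezone = "Etc/UTC"
    models { "llama3.2:1b" }
    ollamaImageTag = "0.6.8"
  }
}
```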
✅ **Built-in validations and checks**: Use API request validations, custom validation checks, and skip conditions for robust workflows.
```pkl
restrictToHTTPMethods { "POST" }
restrictToRoutes { "/api/v1/scan-document" }
preflightCheck {
  validations { "@(request.filetype('document'))" == "image/jpeg" }
}
skipCondition { "@(request.data().query.length)" < 5 }
```
📁 **Serve static websites or reverse-proxied apps**: Host static websites or reverse-proxied apps directly.
```pkl
// workflow.pkl
name = "frontendAIApp"
description = "Pairs an AI API with a Streamlit frontend for text summarization."
version = "1.0.0"
targetActionID = "responseResource"
settings {
  APIServerMode = true
  WebServerMode = true
  APIServer {
    hostIP = "127.0.0.1"
    portNum = 3000
    routes {
      new { path = "/api/v1/summarize"; methods { "POST" } }
    }
  }
  WebServer {
    hostIP = "127.0.0.1"
    portNum = 8501
    routes {
      new {
        path = "/app"
        serverType = "app"
        appPort = 8501
        command = "streamlit run app.py"
      }
    }
  }
  agentSettings {
    timezone = "Etc/UTC"
    pythonPackages { "streamlit" }
    models { "llama3.2:1b" }
    ollamaImageTag = "0.6.8"
  }
}
```
💾 **Manage state with memory operations**: Store, retrieve, and clear persistent data using memory operations.
```pkl
expr {
  "@(memory.setRecord('user_data', request.data().data))"
}
local user_data = "@(memory.getRecord('user_data'))"
```
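Clearing state works the same way; the helper name below is an assumption, so check the memory operations documentation for the exact call:
```pkl
expr {
  "@(memory.deleteRecord('user_data'))" // remove a stored record (assumed helper name)
}
```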
🔒 **Configure CORS rules**: Set CORS rules directly in the workflow for secure API access.
```pkl
// workflow.pkl
cors {
  enableCORS = true
  allowOrigins { "https://example.com" }
  allowMethods { "GET"; "POST" }
}
```
🛡️ **Set trusted proxies**: Enhance API and frontend security with trusted proxies.
```pkl
// workflow.pkl
APIServerMode = true
APIServer {
  hostIP = "127.0.0.1"
  portNum = 3000
  routes {
    new { path = "/api/v1/proxy"; methods { "GET" } }
  }
  trustedProxies { "192.168.1.1"; "10.0.0.0/8" }
}
```
🖥️ **Run shell scripts**: Execute shell scripts seamlessly within workflows.
```pkl
// resources/exec.pkl
actionID = "execResource"
name = "Shell Script Runner"
description = "Runs a shell script."
run {
  exec {
    command = """
echo "Processing request at $(date)"
"""
    timeoutDuration = 60.s
  }
}
```
📦 **Install Ubuntu packages**: Install Ubuntu packages via configuration for customized environments.
```pkl
// workflow.pkl
agentSettings {
  timezone = "Etc/UTC"
  packages {
    "tesseract-ocr"
    "poppler-utils"
    "npm"
    "ffmpeg"
  }
  ollamaImageTag = "0.6.8"
}
```
📜 **Define Ubuntu repositories or PPAs**: Configure Ubuntu repositories or PPAs for additional package sources.
```pkl
// workflow.pkl
repositories {
  "ppa:alex-p/tesseract-ocr-devel"
}
```
⚡ **Written in high-performance Go**: Benefit from the speed and efficiency of Go.
📥 **Easy to install**: Install and use Kdeps with a single command, as outlined in the installation guide.
```shell
# On macOS
brew install kdeps/tap/kdeps
# Windows, Linux, and macOS
curl -LsSf https://raw.githubusercontent.com/kdeps/kdeps/refs/heads/main/install.sh | sh
```

## Getting Started

Ready to explore Kdeps? Install it with a single command, as shown in the Installation Guide.

Check out practical examples to jumpstart your projects.