# Building a Chatbot
This tutorial walks you through building a simple chatbot using KDeps v2. You'll learn how to set up a workflow, configure an LLM resource, and handle API requests.
## Prerequisites
- KDeps installed (see Installation)
- Ollama installed and running (for local LLM)
- A model pulled in Ollama:

```bash
ollama pull llama3.2:1b
```
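Before continuing, you can confirm Ollama is reachable. Its HTTP API listens on port 11434 by default, and listing the local models should show the one you just pulled:

```bash
# List models available to the local Ollama server (default port 11434)
curl http://localhost:11434/api/tags
```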
## Step 1: Create the Workflow
Create a new directory for your chatbot:
```bash
mkdir my-chatbot
cd my-chatbot
```

Create `workflow.yaml`:
```yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: chatbot
  description: Simple LLM chatbot
  version: "1.0.0"
targetActionId: responseResource
settings:
  apiServerMode: true
  apiServer:
    hostIp: "127.0.0.1"
    portNum: 3000
    routes:
      - path: /api/v1/chat
        methods: [POST]
    cors:
      enableCors: true
      allowOrigins:
        - http://localhost:8080
  agentSettings:
    timezone: Etc/UTC
    pythonVersion: "3.12"
    models:
      - llama3.2:1b
```
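For reference, the finished project contains just three files:

```
my-chatbot/
├── workflow.yaml
└── resources/
    ├── llm.yaml
    └── response.yaml
```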
## Step 2: Create the LLM Resource

Create `resources/llm.yaml`:
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: llmResource
  name: LLM Chat
run:
  chat:
    model: llama3.2:1b
    prompt: "{{ get('q') }}"
    jsonResponse: true
    jsonResponseKeys:
      - answer
```

**Key Points:**

- `get('q')` retrieves the query parameter from the request
- `jsonResponse: true` ensures structured JSON output
- `jsonResponseKeys` defines the expected keys in the response
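Because `jsonResponseKeys` lists a single `answer` key, the output of `llmResource` is a JSON object of the form shown below; this is the value the response resource reads later via `get('llmResource')`:

```json
{
  "answer": "Artificial intelligence (AI) is..."
}
```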
## Step 3: Create the Response Resource
Create `resources/response.yaml`:
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: responseResource
  name: API Response
requires:
  - llmResource
run:
  apiResponse:
    success: true
    response:
      data: get('llmResource')
      query: get('q')
```

**Key Points:**

- `requires: [llmResource]` ensures the LLM resource runs first
- `get('llmResource')` accesses the output from the LLM resource
- `get('q')` includes the original query in the response
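Putting the two resources together: KDeps starts from the workflow's `targetActionId` and walks the `requires` graph, so a request flows like this:

```
POST /api/v1/chat  →  llmResource (chat)  →  responseResource (apiResponse)  →  JSON reply
```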
## Step 4: Run the Chatbot
Start the workflow:
```bash
kdeps run workflow.yaml
```

You should see output indicating the server is running on port 3000.
## Step 5: Test the Chatbot
Send a test request:
```bash
curl -X POST http://localhost:3000/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"q": "What is artificial intelligence?"}'
```

Expected response:

```json
{
  "success": true,
  "data": {
    "answer": "Artificial intelligence (AI) is the simulation of human intelligence by machines..."
  },
  "query": "What is artificial intelligence?"
}
```

## Understanding the Unified API
This chatbot demonstrates KDeps' unified API with the `get()` function.

### Data Sources

The `get()` function automatically detects the data source:
```yaml
# Query parameters
prompt: "{{ get('q') }}"

# Resource outputs
data: get('llmResource')

# Headers
auth: get('Authorization')

# Session storage
user: get('user_name', 'session')
```

### Automatic Detection
KDeps automatically determines where to look for data:
- `get('q')` → Query parameter `?q=...`
- `get('llmResource')` → Output from `llmResource`
- `get('Authorization')` → HTTP header
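As a minimal sketch (a hypothetical `debugResource`, not part of this tutorial), a single resource can read several of these sources at once:

```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: debugResource
  name: Echo Request Data
run:
  apiResponse:
    success: true
    response:
      query: get('q')             # query parameter
      auth: get('Authorization')  # HTTP header
      llm: get('llmResource')     # another resource's output
```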
## Adding Validation
Add input validation to ensure the query is not empty:
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: llmResource
  name: LLM Chat
run:
  validations:
    - get('q') != ''
    - len(get('q')) > 3
  chat:
    model: llama3.2:1b
    prompt: "{{ get('q') }}"
    jsonResponse: true
    jsonResponseKeys:
      - answer
```

If validation fails, the resource returns an error before the chat step executes.
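For example, this request fails the `len(get('q')) > 3` rule, so the model is never called (the exact error payload is up to KDeps):

```bash
# "hi" has length 2, so validation rejects the request before the LLM runs
curl -X POST http://localhost:3000/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"q": "hi"}'
```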
## Adding Conversation Context
Add system prompts and conversation history:
```yaml
run:
  chat:
    model: llama3.2:1b
    scenario:
      - role: system
        prompt: "You are a helpful assistant that provides clear, concise answers."
      - role: user
        prompt: "{{ get('q') }}"
    jsonResponse: true
    jsonResponseKeys:
      - answer
```
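The `scenario` list can hold as many turns as you like. As a sketch, assuming extra `user`/`assistant` pairs are treated as prior conversation turns (the usual chat convention; see the LLM Resource reference for specifics), a few-shot exchange looks like this:

```yaml
scenario:
  - role: system
    prompt: "You are a helpful assistant that provides clear, concise answers."
  - role: user
    prompt: "What is Python?"
  - role: assistant
    prompt: "Python is a high-level, general-purpose programming language."
  - role: user
    prompt: "{{ get('q') }}"
```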
## Adding Session Support

Enable session storage to maintain conversation context:
```yaml
settings:
  session:
    enabled: true
    type: sqlite
    path: ./chatbot.db
```

Then access session data:
```yaml
run:
  chat:
    model: llama3.2:1b
    scenario:
      - role: system
        prompt: "You are a helpful assistant."
      - role: assistant
        prompt: "{{ get('previous_response', 'session') }}"
      - role: user
        prompt: "{{ get('q') }}"
```
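With sessions enabled, consecutive requests can build on each other. How KDeps correlates requests to a session (for example, via a cookie or header) depends on your session configuration, but the flow is roughly:

```bash
# First request: no previous_response exists in the session yet
curl -X POST http://localhost:3000/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"q": "Recommend a sci-fi novel"}'

# Follow-up request: the assistant turn is seeded from session storage
curl -X POST http://localhost:3000/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"q": "Who wrote it?"}'
```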
## Adding Error Handling

Handle errors gracefully:
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: responseResource
  name: API Response
requires:
  - llmResource
run:
  apiResponse:
    success: true
    response:
      data: get('llmResource')
      query: get('q')
    onError:
      success: false
      response:
        error: "Failed to process request"
        message: get('error')
```
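Given this `onError` mapping, a failed request returns a body shaped like the following, where the `message` value comes from `get('error')` (the text here is only illustrative, e.g. when the model isn't available):

```json
{
  "success": false,
  "error": "Failed to process request",
  "message": "model 'llama3.2:1b' not found"
}
```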
## Next Steps

- Add Tools: Learn about function calling to give your chatbot capabilities
- Add Memory: Use session storage for conversation history
- Add Validation: Implement input validation and error handling
- Deploy: Package your chatbot with Docker
## Complete Example
See the full example in `examples/chatbot/`:
```bash
kdeps run examples/chatbot/workflow.yaml
```

## Related Documentation
- LLM Resource - Complete LLM configuration reference
- Unified API - Understanding `get()` and `set()`
- Workflow Configuration - Full workflow settings
- Session & Storage - Conversation persistence