# Quickstart
This guide walks you through building your first AI agent with KDeps. You'll create a simple chatbot that responds to user queries.
## Prerequisites
Make sure KDeps is installed on your system:
```bash
# Install via script (Mac/Linux)
curl -LsSf https://raw.githubusercontent.com/kdeps/kdeps/main/install.sh | sh

# Or via Homebrew (Mac)
brew install kdeps/tap/kdeps
```

For more options, see the Installation Guide.
## Option 1: Use the `new` Command (Easiest)
The `new` command creates a complete agent project interactively:
```bash
kdeps new my-agent
```

Follow the prompts to:
- Select an agent template
- Choose required resources
- Configure basic settings (port, models, etc.)
Or use a template directly:
```bash
kdeps new my-agent --template api-service
```

## Option 2: Create Manually
### Step 1: Create Project Structure
```bash
mkdir my-agent
cd my-agent
mkdir resources
```

### Step 2: Create the Workflow
Create `workflow.yaml`:
```yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: my-agent
  description: My first AI agent
  version: "1.0.0"
targetActionId: responseResource
settings:
  apiServerMode: true
  apiServer:
    hostIp: "127.0.0.1"
    portNum: 3000
    routes:
      - path: /api/v1/chat
        methods: [POST]
    cors:
      enableCors: true
      allowOrigins:
        - http://localhost:8080
  agentSettings:
    timezone: Etc/UTC
    models:
      - llama3.2:1b
```

### Step 3: Create the LLM Resource
Create `resources/llm.yaml`:
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: llmResource
  name: LLM Chat
run:
  restrictToHttpMethods: [POST]
  restrictToRoutes: [/api/v1/chat]
  preflightCheck:
    validations:
      - get('q') != ''
    error:
      code: 400
      message: Query parameter 'q' is required
  chat:
    model: llama3.2:1b
    role: user
    prompt: "{{ get('q') }}"
    scenario:
      - role: assistant
        prompt: You are a helpful AI assistant.
    jsonResponse: true
    jsonResponseKeys:
      - answer
    timeoutDuration: 60s
```

### Step 4: Create the Response Resource
Create `resources/response.yaml`:
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: responseResource
  name: API Response
requires:
  - llmResource
run:
  restrictToHttpMethods: [POST]
  restrictToRoutes: [/api/v1/chat]
  apiResponse:
    success: true
    response:
      data: get('llmResource')
      query: get('q')
    meta:
      headers:
        Content-Type: application/json
```

### Step 5: Validate Your Workflow
```bash
kdeps validate workflow.yaml
```

You should see:

```
Workflow validated successfully
```

### Step 6: Run the Agent
```bash
kdeps run workflow.yaml
```

You'll see output like:

```
Starting API server on 127.0.0.1:3000
Routes:
  POST /api/v1/chat
```

### Step 7: Test the API
In another terminal:
```bash
curl -X POST http://localhost:3000/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"q": "What is artificial intelligence?"}'
```

Response:
```json
{
  "success": true,
  "response": {
    "data": {
      "answer": "Artificial intelligence (AI) refers to..."
    },
    "query": "What is artificial intelligence?"
  }
}
```

## Development Mode
For faster development with auto-reload:
```bash
kdeps run workflow.yaml --dev
```

Changes to your YAML files will automatically reload the server.
## Understanding the Flow
1. Request arrives at `/api/v1/chat` with `{"q": "..."}`
2. Preflight check validates that `q` is not empty
3. LLM resource sends the prompt to the model
4. Response resource formats the output as JSON
5. API responds with the formatted result
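The flow above can be sketched as a small Python pipeline. This is an illustrative model only, not KDeps internals: `call_llm` is a stand-in for the `llama3.2:1b` chat step, and the dict shapes mirror the preflight error and API response defined in the resources above.

```python
def preflight(params):
    # Mirror of the preflightCheck: reject an empty 'q' with a 400-style error.
    if not params.get("q"):
        return {"code": 400, "message": "Query parameter 'q' is required"}
    return None

def call_llm(prompt):
    # Stand-in for the chat step; a real run queries the configured model.
    return {"answer": f"Echo: {prompt}"}

def handle_request(params):
    error = preflight(params)
    if error:
        return {"success": False, "error": error}
    llm_output = call_llm(params["q"])   # what llmResource produces
    return {                             # what responseResource formats
        "success": True,
        "response": {"data": llm_output, "query": params["q"]},
    }
```

Note how an invalid request short-circuits before the LLM step, which is exactly the branch shown in the diagram below.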
```
Request → Preflight → LLM → Response → Output
              ↓
         (if invalid)
              ↓
          400 Error
```

## Project Structure
```
my-agent/
├── workflow.yaml        # Main workflow configuration
└── resources/
    ├── llm.yaml         # LLM chat resource
    └── response.yaml    # API response formatting
```

## Key Concepts
### Workflow
The `workflow.yaml` defines:
- Agent metadata (name, version)
- Target action to execute
- API server settings
- LLM models to use
### Resources
Resources are the building blocks:
- Each resource has a unique `actionId`
- Resources can depend on other resources via `requires`
- Resources execute in dependency order
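Dependency ordering can be pictured with a toy resolver (a sketch for intuition, not KDeps's scheduler), where each resource runs only after everything it `requires`:

```python
def execution_order(resources):
    """Return actionIds in dependency order (dependencies first).

    `resources` maps actionId -> list of required actionIds.
    """
    order, visited = [], set()

    def visit(action_id):
        if action_id in visited:
            return
        visited.add(action_id)
        for dep in resources.get(action_id, []):
            visit(dep)               # run dependencies before the resource itself
        order.append(action_id)

    for action_id in resources:
        visit(action_id)
    return order
```

For the quickstart agent, `execution_order({"responseResource": ["llmResource"], "llmResource": []})` puts `llmResource` before `responseResource`, which is why `get('llmResource')` has a value by the time the response is built.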
### Unified API
Access data with `get()`:
- `get('q')` - Get a query parameter or body field
- `get('llmResource')` - Get output from another resource
- `{{ get('q') }}` - String interpolation
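One way to picture the lookup (a toy model, not the actual KDeps resolver): `get()` checks request data first, then falls back to resource outputs, and `{{ ... }}` templates expand through the same accessor.

```python
import re

# Assumed sample data standing in for a live request and a finished resource.
request = {"q": "What is AI?"}
resource_outputs = {"llmResource": {"answer": "AI refers to..."}}

def get(key):
    # Request parameters/body fields take precedence, then resource outputs.
    if key in request:
        return request[key]
    return resource_outputs.get(key)

def interpolate(template):
    # Expand {{ get('key') }} placeholders using the same accessor.
    return re.sub(r"\{\{\s*get\('([^']+)'\)\s*\}\}",
                  lambda m: str(get(m.group(1))), template)
```

Under this model, the `prompt: "{{ get('q') }}"` line in `llm.yaml` resolves to the raw query text before the chat step runs.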
## Next Steps
- CLI Reference - Complete command reference
- Workflow Configuration - Deep dive into settings
- LLM Resource - Advanced LLM configuration
- HTTP Client - Make external API calls
- SQL Resource - Query databases
- Unified API - Master get() and set()
## Examples
Explore more examples in the examples directory:
| Example | Description |
|---|---|
| chatbot | Basic LLM chatbot |
| http-advanced | Authentication, caching, retries |
| sql-advanced | Database queries with pooling |
| file-upload | File upload handling |
| vision | Image analysis with vision models |
| tools | LLM function calling |
| session-auth | Session management |
| webserver-static | Static file serving |