# Inline Resources
Inline resources let you embed additional LLM, HTTP, Exec, SQL, and Python steps that execute before or after the main resource, all within a single resource definition.
## Overview
Instead of creating separate resource files for preparatory or cleanup tasks, inline resources let you:
- Execute tasks before the main resource runs
- Perform post-processing after the main resource completes
- Keep related operations organized in one place
- Reduce boilerplate and improve readability
## Basic Syntax
```yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: example
  name: Example Resource
run:
  # Inline resources to run BEFORE the main resource
  before:
    - httpClient:
        method: GET
        url: "https://api.example.com/config"
    - exec:
        command: "echo 'Preparing environment...'"
  # Main resource (chat, httpClient, sql, python, or exec)
  chat:
    model: llama3.2:1b
    role: user
    prompt: "Process this data"
  # Inline resources to run AFTER the main resource
  after:
    - sql:
        connection: "sqlite3://./db.sqlite"
        query: "INSERT INTO logs VALUES (?)"
    - python:
        script: "print('Post-processing complete')"
```

## Supported Resource Types
Each inline resource can be one of:
- `chat`: LLM interaction (Ollama, OpenAI, Anthropic, etc.)
- `httpClient`: HTTP requests
- `sql`: Database queries
- `python`: Python script execution
- `exec`: Shell command execution
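Any of these five types can appear as an entry in `before` or `after` with the same fields it takes as a main resource. A minimal illustrative sketch (the URLs, scripts, and commands are placeholders; field names follow the examples on this page):

```yaml
run:
  before:
    - exec:
        command: "echo 'setup'"                  # shell step
    - httpClient:
        method: GET
        url: "https://api.example.com/data"      # HTTP step (placeholder URL)
  chat:                                          # main resource: LLM interaction
    model: llama3.2:1b
    prompt: "Summarize the fetched data"
  after:
    - sql:
        connection: "sqlite3://./db.sqlite"      # database step
        query: "INSERT INTO logs VALUES (?)"
    - python:
        script: "print('done')"                  # Python step
```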
## Execution Order
A resource that defines inline resources executes in the following order:
1. `exprBefore` expressions (if configured)
2. `before` inline resources (executed sequentially)
3. The main resource (the primary resource type)
4. `after` inline resources (executed sequentially)
5. `expr` / `exprAfter` expressions (if configured)
6. `apiResponse` formatting (if configured)
Example:

```yaml
run:
  exprBefore:
    - set('start_time', now())
  before:
    - httpClient: { ... }   # Step 1
    - exec: { ... }         # Step 2
  chat: { ... }             # Step 3 (main resource)
  after:
    - sql: { ... }          # Step 4
    - python: { ... }       # Step 5
  expr:
    - set('duration', now() - get('start_time'))
  apiResponse:
    data: { ... }
```

## Common Use Cases
### 1. Data Enrichment
Fetch additional data before processing:
```yaml
run:
  before:
    - httpClient:
        method: GET
        url: "https://api.example.com/user/{{get('user_id')}}"
        timeout: 5s
  chat:
    model: llama3.2:1b
    prompt: "Analyze user: {{get('_output')}}"
```

### 2. Logging and Auditing
Record operations in a database:
```yaml
run:
  chat:
    model: llama3.2:1b
    prompt: "{{get('prompt')}}"
  after:
    - sql:
        connection: "postgresql://localhost/logs"
        query: "INSERT INTO audit_log (action, timestamp) VALUES (?, NOW())"
        params: ["chat_completion"]
```

### 3. Notifications
Send alerts after completion:
```yaml
run:
  python:
    script: "process_data.py"
  after:
    - httpClient:
        method: POST
        url: "https://api.example.com/notify"
        data:
          status: "completed"
          timestamp: "{{now()}}"
```

### 4. Environment Setup
Prepare files or environment before execution:
```yaml
run:
  before:
    - exec:
        command: "mkdir -p /tmp/workspace"
    - exec:
        command: "cp config.json /tmp/workspace/"
  python:
    script: "process_with_config.py"
  after:
    - exec:
        command: "rm -rf /tmp/workspace"
```

### 5. Caching
Store results for future use:
```yaml
run:
  chat:
    model: gpt-4
    prompt: "{{get('query')}}"
  after:
    - sql:
        connection: "redis://localhost"
        query: "SET cache:{{get('query_hash')}} {{get('_output')}}"
```

## Multiple Inline Resources
You can have multiple inline resources of the same or different types:
```yaml
run:
  before:
    - httpClient:
        method: GET
        url: "https://api.example.com/config"
    - httpClient:
        method: GET
        url: "https://api.example.com/user"
    - exec:
        command: "echo 'Starting...'"
  chat:
    model: llama3.2:1b
    prompt: "{{get('prompt')}}"
  after:
    - sql:
        connection: "sqlite3://./db.sqlite"
        query: "INSERT INTO results VALUES (?)"
    - python:
        script: "send_metrics.py"
    - httpClient:
        method: POST
        url: "https://api.example.com/complete"
```

## Resources Without a Main Type
You can have a resource with only inline resources and no main resource type:
```yaml
run:
  before:
    - httpClient:
        method: GET
        url: "https://api.example.com/data"
  after:
    - sql:
        connection: "sqlite3://./db.sqlite"
        query: "INSERT INTO cache VALUES (?)"
```

This is useful for orchestration tasks where you need to coordinate multiple operations.
## Error Handling
If an inline resource fails:
- Execution stops immediately
- The error is propagated to the resource level
- Subsequent inline resources are not executed
- The main resource is not executed (if the failure occurred in `before`)
You can use the resource's `onError` configuration to handle errors:
```yaml
run:
  before:
    - httpClient:
        method: GET
        url: "https://api.example.com/config"
  chat:
    model: llama3.2:1b
    prompt: "{{get('prompt')}}"
  onError:
    action: continue
    fallback:
      error: true
      message: "Processing failed"
```

## Accessing Context
Inline resources have access to the full execution context:
```yaml
run:
  exprBefore:
    - set('user_id', get('input.user_id'))
  before:
    # Access variables set in exprBefore
    - httpClient:
        method: GET
        url: "https://api.example.com/user/{{get('user_id')}}"
  chat:
    model: llama3.2:1b
    # Access results from previous steps
    prompt: "User data: {{get('_output')}}"
```

## Configuration Options
Each inline resource supports the same configuration options as the standalone resource:
### HTTP Client
```yaml
- httpClient:
    method: POST
    url: "https://api.example.com"
    headers:
      Authorization: "Bearer {{get('token')}}"
    data:
      key: "value"
    timeout: 10s
    retry:
      maxAttempts: 3
      backoff: 1s
```

### SQL
```yaml
- sql:
    connection: "postgresql://localhost/db"
    query: "SELECT * FROM users WHERE id = ?"
    params:
      - "{{get('user_id')}}"
    timeout: 5s
```

### Python
```yaml
- python:
    script: |
      import json
      result = process_data()
      print(json.dumps(result))
    timeout: 30s
    venvName: "myenv"
```

### Exec
```yaml
- exec:
    command: "process_file.sh"
    args:
      - "{{get('filename')}}"
    timeout: 60s
```

### Chat (LLM)
```yaml
- chat:
    backend: ollama
    model: llama3.2:1b
    role: user
    prompt: "{{get('prompt')}}"
    timeout: 30s
```

## Best Practices
- Keep inline resources focused: each should perform a single, well-defined task
- Use descriptive configurations: make it clear what each inline resource does
- Handle errors appropriately: consider using `onError` for critical workflows
- Set appropriate timeouts: prevent hanging on slow operations
- Order matters: inline resources execute sequentially in the order defined
- Use expressions: access context data with `{{get('variable')}}`
- Consider alternatives: for complex workflows, separate resources may be clearer
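Several of these practices can be combined in one definition. A hedged sketch (the endpoint, model, and table names are placeholders) that gives every step an explicit timeout and a fallback for failures:

```yaml
run:
  before:
    - httpClient:
        method: GET
        url: "https://api.example.com/config"    # placeholder endpoint
        timeout: 5s                              # never hang on a slow fetch
  chat:
    model: llama3.2:1b
    prompt: "{{get('prompt')}}"
    timeout: 30s
  after:
    - sql:
        connection: "sqlite3://./db.sqlite"
        query: "INSERT INTO audit_log VALUES (?)"  # placeholder table
        timeout: 5s
  onError:                                       # fallback for critical workflows
    action: continue
    fallback:
      error: true
      message: "Workflow failed"
```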
## Comparison with Separate Resources
### Traditional Approach (Separate Resources)
```yaml
# 5 separate resource files
- fetch-config.yaml
- prepare-env.yaml
- main-processing.yaml
- store-results.yaml
- send-notification.yaml
```

### With Inline Resources
```yaml
# Single resource file
run:
  before:
    - httpClient: { ... }   # Fetch config
    - exec: { ... }         # Prepare env
  chat: { ... }             # Main processing
  after:
    - sql: { ... }          # Store results
    - httpClient: { ... }   # Send notification
```

Benefits:
- Fewer files to manage
- Related operations grouped together
- Clearer execution flow
- Reduced boilerplate
- Easier to understand and maintain
## Advanced Patterns
### Conditional Inline Resources
Use expressions with inline resources:
```yaml
run:
  exprBefore:
    - set('should_notify', get('input.notify') == true)
  chat:
    model: llama3.2:1b
    prompt: "{{get('prompt')}}"
  expr:
    - if(get('should_notify'),
        set('notification_sent', true),
        set('notification_sent', false))
```

### Combining with Items
Inline resources work with the `items` feature:
```yaml
items:
  - item1
  - item2
run:
  before:
    - httpClient:
        url: "https://api.example.com/prepare/{{item()}}"
  chat:
    model: llama3.2:1b
    prompt: "Process {{item()}}"
  after:
    - sql:
        query: "INSERT INTO results VALUES (?)"
        params: ["{{item()}}"]
```

## See Also
- Expression Blocks - Using `exprBefore` and `exprAfter`
- Error Handling - Handling errors in resources
- Items - Iterating over collections
- Examples - Complete example with inline resources