Tools
The tools block lets open-source AI models (like LLaMA or Mistral) run scripts for tasks like math or file operations. It supports Python (.py), TypeScript (.ts), JavaScript (.js), Ruby (.rb), and shell scripts (e.g., .sh), with inputs passed via argv (e.g., sys.argv in Python) or $1, $2, etc., for shell scripts. Scripts run with the interpreter matching their extension: .py uses python3, .ts uses ts-node, .js uses node, .rb uses ruby, and everything else uses sh. The LLM can automatically pick and chain multiple tools based on a prompt, using one tool’s output as input for the next. With JSONResponse and JSONResponseKeys, tool outputs are structured as JSON for easier parsing. Tools are triggered via prompts, or handled manually with @(tools.getRecord(id)), runScript, or history. KDeps tools are script-based by design and can be extended via adapters to connect to external tool protocols if needed.
What It Does
Inside a chat resource, the tools block lets the AI call scripts automatically via prompts or manually. The LLM can chain tools, passing one tool’s output as another’s input, and with JSONResponseKeys it structures results as JSON. It is KDeps’ open-source, script-based tool-calling system.
How It Looks
Create a chat resource:
kdeps scaffold [aiagent] llm
Define the tools block in the chat block. Here’s an excerpt:
Chat {
  Model = "llama3.2" // Open-source AI model
  Role = "user"
  Prompt = "Run the task using tools: @(request.params("q"))"
  JSONResponse = true
  JSONResponseKeys {
    "result"         // calculate_sum output
    "squared_result" // square_number output
    "file_path"      // write_result output
  }
  Tools {
    new {
      Name = "calculate_sum"
      Script = "@(data.filepath("tools/1.0.0", "calculate_sum.py"))"
      Description = "Add two numbers"
      Parameters {
        ["a"] { Required = true; Type = "number"; Description = "First number" }
        ["b"] { Required = true; Type = "number"; Description = "Second number" }
      }
    }
    new {
      Name = "square_number"
      Script = "@(data.filepath("tools/1.0.0", "square_number.js"))"
      Description = "Square a number"
      Parameters {
        ["num"] { Required = true; Type = "number"; Description = "Number to square" }
      }
    }
    new {
      Name = "write_result"
      Script = "@(data.filepath("tools/1.0.0", "write_result.sh"))"
      Description = "Write a number to a file"
      Parameters {
        ["path"] { Required = true; Type = "string"; Description = "File path" }
        ["content"] { Required = true; Type = "string"; Description = "Number to write" }
      }
    }
  }
  // Other settings like scenario, files, timeoutDuration...
}
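To see why Name, Description, and Parameters matter, it can help to picture a tool definition as the structured description the model reads when deciding what to call. Here is a rough, illustrative rendering of calculate_sum in Python (the field names are for intuition only, not KDeps’ internal format):

# Illustrative only: a generic function-calling style view of the tool, not KDeps' exact format.
calculate_sum_tool = {
    "name": "calculate_sum",
    "description": "Add two numbers",  # the text the LLM weighs when selecting a tool
    "parameters": {
        "a": {"type": "number", "required": True, "description": "First number"},
        "b": {"type": "number", "required": True, "description": "Second number"},
    },
}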
Sample Scripts
Stored in tools/1.0.0/:
Python (calculate_sum.py)
Runs with python3, inputs via sys.argv.
import sys
print(float(sys.argv[1]) + float(sys.argv[2]))
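Run locally with python3 calculate_sum.py 6 4, which prints 10.0 (sys.argv[0] is the script path, so the first user argument is sys.argv[1]).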
JavaScript (square_number.js)
Runs with node, inputs via process.argv.
const num = parseFloat(process.argv[2]);
console.log(num * num);
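Run locally with node square_number.js 10, which prints 100 (process.argv[0] is node and process.argv[1] is the script path, so the first user argument is process.argv[2]).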
Shell (write_result.sh)
Runs with sh, inputs via $1, $2.
echo "$2" > "$1"
Key Pieces
- new: Defines a tool.
- Name: Unique name, like calculate_sum.
- Script: Absolute path to the script, or one resolved with @(data.filepath(...)).
- Description: Tool’s purpose; the LLM uses it to select tools.
- Parameters:
  - Key: Parameter name, like a.
  - Required: Whether the parameter must be supplied.
  - Type: Parameter type, like number or string.
  - Description: Parameter’s role.
Schema Functions
- getRecord(id): Gets JSON output via @(tools.getRecord("id")). Returns text or an empty string.
- runScript(id, script, params): Runs a script with comma-separated parameters, returns JSON output.
- history(id): Returns output history.
Running Scripts
kdeps picks the interpreter by file extension (a conceptual sketch follows the list):
- .py: python3, inputs via sys.argv.
- .ts: ts-node, inputs via process.argv.
- .js: node, inputs via process.argv.
- .rb: ruby, inputs via ARGV.
- Others (e.g., .sh): sh, inputs as $1, $2, etc.
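As a conceptual sketch only (not KDeps’ actual implementation), the dispatch logic amounts to mapping the extension to an interpreter and forwarding the tool’s parameters as positional arguments:

import subprocess
from pathlib import Path

# Extension -> interpreter mapping described above; anything else falls back to sh.
INTERPRETERS = {".py": "python3", ".ts": "ts-node", ".js": "node", ".rb": "ruby"}

def run_tool(script_path: str, args: list[str]) -> str:
    """Run a tool script with positional arguments and return its stdout."""
    interpreter = INTERPRETERS.get(Path(script_path).suffix, "sh")
    result = subprocess.run(
        [interpreter, script_path, *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Example: run_tool("tools/1.0.0/calculate_sum.py", ["6", "4"]) returns "10.0"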
Sample Prompts with Multi-Tool Chaining
The LLM selects and chains tools, structuring outputs as JSON. Prompts don’t name tools:
Prompt: “Add 6 and 4, square the result, and save it to ‘output.txt’.”
- Flow:
  - LLM picks calculate_sum for 6 + 4 = 10.
  - Uses square_number for 10² = 100.
  - Calls write_result to save 100 to output.txt.
- JSON Output:
  { "result": 10, "squared_result": 100, "file_path": "output.txt" }
Prompt: “Sum 8 and 2, then write the sum to ‘sum.txt’.”
- Flow:
  - LLM uses calculate_sum for 8 + 2 = 10.
  - Uses write_result to save 10 to sum.txt.
- JSON Output:
  { "result": 10, "file_path": "sum.txt" }
Prompt: “Add 5 and 5, square it twice, and save to ‘final.txt’.”
- Flow:
  - LLM uses calculate_sum for 5 + 5 = 10.
  - Uses square_number for 10² = 100.
  - Uses square_number again for 100² = 10000.
  - Uses write_result to save 10000 to final.txt.
- JSON Output:
  { "result": 10, "squared_result": 10000, "file_path": "final.txt" }
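A client receiving one of these responses can parse it with any standard JSON library; here is a small illustrative Python example using the first response above:

import json

# The structured response produced for the first prompt above.
response = '{ "result": 10, "squared_result": 100, "file_path": "output.txt" }'
data = json.loads(response)
print(data["squared_result"], "written to", data["file_path"])  # 100 written to output.txt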
Manual Invocation
Run a script or fetch its JSON output manually:
local result = "@(tools.runScript("square_number_123", "<path_to_script>", "10"))"
local output = "@(tools.getRecord("square_number_123"))"
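The first argument is the record id: the same id passed to runScript is later used with getRecord (and history) to retrieve that run’s output.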
Extending to External Protocols
- KDeps tools are script-first. If you need to integrate external tool protocols, implement a script adapter that bridges between KDeps’ tool interface and the external client/SDK; a rough adapter sketch follows this list.
- Examples:
  - Call an Anthropic MCP server from a tool script using the vendor’s client/SDK or a thin wrapper/CLI.
  - Call Google A2A from a tool script via its client/SDK or an HTTP bridge you control.
  - Translate KDeps tool JSON inputs/outputs to the external protocol’s request/response format.
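As a purely illustrative sketch, the Python adapter below accepts a KDeps-style positional argument, forwards it as JSON to a hypothetical HTTP bridge you run yourself (the URL, payload shape, and "query" field are placeholders, not any real protocol), and prints the bridge’s JSON reply so KDeps can capture it as the tool’s output:

import json
import sys
import urllib.request

# Hypothetical bridge endpoint you operate; replace with your own adapter service.
BRIDGE_URL = "http://localhost:8080/invoke"

def main() -> None:
    # KDeps passes tool parameters positionally via argv; assume a single string parameter here.
    payload = json.dumps({"query": sys.argv[1]}).encode("utf-8")
    req = urllib.request.Request(
        BRIDGE_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Print the bridge's JSON reply to stdout so it becomes the tool's recorded output.
        print(resp.read().decode("utf-8"))

if __name__ == "__main__":
    main()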
Tips
- Use unique Name values.
- Write clear Description fields to help the LLM select tools.
- Define JSONResponseKeys for structured outputs.
- Validate inputs with Required and Type.
- Reference scripts safely with @(data.filepath(...)).
- Set a higher timeoutDuration in the chat resource for longer tool chains.
Open-Source Only
KDeps focuses on open-source AI models and local inference.
See LLM Resource Functions for more.