Tutorial: Build a Telegram Bot with LLM Replies
This tutorial walks through creating a Telegram bot that replies to every message using an LLM running locally via Ollama. The same pattern applies to Discord, Slack, and WhatsApp with minor config changes.
Prerequisites
- kdeps CLI installed (`kdeps version`)
- Docker (for running the agent container)
- A Telegram bot token — create one with @BotFather (`/newbot`)
- Ollama installed locally, or `installOllama: true` in `agentSettings`
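Before continuing, you can confirm the tools are reachable on your PATH with a quick shell check (a minimal sketch; adjust the tool names if your installs differ):

```shell
# Report which prerequisite CLIs are installed and reachable on PATH.
for tool in kdeps docker ollama; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```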
Step 1 — Create the Workflow File
```yaml
# workflow.yaml
apiVersion: kdeps.io/v1
kind: Workflow
metadata:
  name: telegram-llm-bot
  description: Telegram bot that answers messages with an LLM
  version: "1.0.0"
  targetActionId: reply
settings:
  agentSettings:
    timezone: Etc/UTC
    installOllama: true
    models:
      - llama3.2:3b
  input:
    sources: [bot]
    bot:
      executionType: polling
      telegram:
        botToken: "{{ env('TELEGRAM_BOT_TOKEN') }}"
        pollIntervalSeconds: 1
```

Key points:

- `sources: [bot]` enables the bot input subsystem
- `executionType: polling` keeps the process running and polls Telegram for new messages
- `botToken` uses the `env()` expression so the token is never hard-coded
- `targetActionId: reply` — the workflow ends by executing the `reply` resource
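Because `env()` reads from the process environment, the token only needs to be exported in the shell that launches the workflow. A minimal sketch (with a made-up token value) showing that child processes such as kdeps inherit the exported variable:

```shell
# Hypothetical token value, for illustration only.
export TELEGRAM_BOT_TOKEN="1234567890:AAH-example"
# Any child process started from this shell inherits the variable:
sh -c 'echo "token is ${TELEGRAM_BOT_TOKEN:+set}"'
```

This prints `token is set`; the token itself never appears in the workflow file.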
Step 2 — Create the LLM Resource
```yaml
# resources/llm.yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: llm
  name: LLM Response
run:
  chat:
    backend: ollama
    model: llama3.2:3b
    messages:
      - role: user
        content: "{{ input('message') }}"
```

`input('message')` retrieves the text the Telegram user sent.
Step 3 — Create the Reply Resource
The `botReply` resource sends the text back to the originating platform. In polling mode it calls the platform's API; in stateless mode it writes to stdout. The dispatcher loop continues after this resource returns — no explicit restart is needed.
```yaml
# resources/reply.yaml
apiVersion: kdeps.io/v1
kind: Resource
metadata:
  actionId: reply
  name: Reply
dependencies:
  - llm
run:
  botReply:
    text: "{{ get('llm') }}"
```

Step 4 — Run the Bot
Export your bot token, then start the workflow:
```shell
export TELEGRAM_BOT_TOKEN="1234567890:AAH..."
kdeps run workflow.yaml
```

You should see:

```
Bot input sources active:
  • Telegram (polling)
Starting bot runners... (press Ctrl+C to stop)
```

Send a message to your bot in Telegram — it replies with the LLM's answer.
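If the bot never responds, a common cause is a bad token. Telegram's standard `getMe` endpoint verifies credentials independently of kdeps; the sketch below skips the call when the variable is unset:

```shell
# Verify the token against the Telegram Bot API (getMe returns bot metadata).
if [ -n "${TELEGRAM_BOT_TOKEN:-}" ]; then
  curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/getMe"
else
  echo "TELEGRAM_BOT_TOKEN not set; skipping check"
fi
```

A valid token returns a JSON object with `"ok":true` and your bot's username.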
Step 5 — Add a System Prompt (Optional)
Give your bot a persona by adding a `scenario` block to the LLM resource:

```yaml
run:
  chat:
    backend: ollama
    model: llama3.2:3b
    scenario:
      - role: assistant
        prompt: |
          You are Kodi, a helpful AI assistant.
          Keep your answers short and friendly.
          Always respond in the same language as the user.
    messages:
      - role: user
        content: "{{ input('message') }}"
```

Stateless Mode
Stateless mode runs the workflow exactly once from a shell command — no long-running process needed. Useful for cron jobs, CI pipelines, or custom integrations.
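For the cron case, one approach is a small wrapper script (a hypothetical helper, assuming stateless mode is configured as described below and kdeps is on PATH) that builds the JSON payload the stateless input expects and pipes it to `kdeps run`:

```shell
#!/bin/sh
# ask-bot.sh (hypothetical): ask the bot one question via stateless mode.
question="$1"
# Build the stateless JSON payload and pipe it to the workflow.
# Note: this simple printf sketch does not escape quotes in the question.
printf '{"message":"%s","chatId":"cron","userId":"cron","platform":"custom"}\n' "$question" \
  | kdeps run workflow.yaml
```

A crontab entry could then run it on a schedule, e.g. `0 9 * * * /path/to/ask-bot.sh "Summarize the overnight alerts"`.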
Change `executionType` to `stateless`:

```yaml
settings:
  input:
    sources: [bot]
    bot:
      executionType: stateless
```

Then pipe a JSON message to `kdeps run`:

```shell
echo '{"message":"What is 2+2?","chatId":"42","userId":"u1","platform":"custom"}' \
  | kdeps run workflow.yaml
```

Output (stdout):

```
4
```

Or use environment variables instead of JSON:
```shell
export KDEPS_BOT_MESSAGE="What is the capital of France?"
export KDEPS_BOT_PLATFORM="cli"
kdeps run workflow.yaml
```

Adding More Platforms
Extend the `bot` block to run on Discord and Telegram simultaneously:

```yaml
settings:
  input:
    sources: [bot]
    bot:
      executionType: polling
      discord:
        botToken: "{{ env('DISCORD_BOT_TOKEN') }}"
      telegram:
        botToken: "{{ env('TELEGRAM_BOT_TOKEN') }}"
```

The same workflow resources receive messages from both platforms. Use `input('platform')` to branch if needed:
```yaml
# resources/reply.yaml
run:
  botReply:
    text: |
      {{ if eq (input('platform')) "discord" }}
      **{{ get('llm') }}**
      {{ else }}
      {{ get('llm') }}
      {{ end }}
```

Full Directory Structure
```
telegram-llm-bot/
├── workflow.yaml
└── resources/
    ├── llm.yaml
    └── reply.yaml
```

See Also
- Input Sources — All bot platform configs and field reference
- Telegram Bot Example — Ready-to-run example
- Stateless Bot Example — One-shot stdin/stdout example
- LLM Resource — Chat, scenario, backend options