🏗️ Declarative Pipelines
Compose chat, code, and data into declarative YAML pipelines with explicit control flow. No glue code, no boilerplate.
Straightforward LLM dependency orchestration for multi-agent workflows.
Export AI workflows as a single binary, an ISO, a Docker image, or Kubernetes pods. Use Ollama, llamafile, or any cloud AI provider.
```yaml
run:
  chat:
    model: llama3.2:1b
    prompt: "Summarize: {{ get('q') }}"
  apiResponse:
    success: true
    response:
      data: "{{ get('chat') }}"
# No glue code. No legacy code.
```
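The `{{ get('chat') }}` template in `apiResponse` reads the output of the earlier `chat` step. As an illustration only (a minimal sketch, not this tool's implementation), the data flow between steps can be modeled as substitution over a dictionary of accumulated step outputs:

```python
import re


def render(template: str, outputs: dict) -> str:
    """Replace {{ get('name') }} placeholders with stored step outputs."""
    return re.sub(
        r"\{\{\s*get\('([^']+)'\)\s*\}\}",
        lambda m: str(outputs[m.group(1)]),
        template,
    )


# Simulated run: each step's result is stored under its key,
# so later steps can reference it by name.
outputs = {"q": "A long article about YAML pipelines..."}
prompt = render("Summarize: {{ get('q') }}", outputs)
# ...the chat step would produce a summary here...
outputs["chat"] = "YAML pipelines compose LLM steps declaratively."
data = render("{{ get('chat') }}", outputs)
```

Each step only needs to know the names of the steps it depends on, which is what makes the YAML above work without glue code.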