
Straightforward LLM dependency orchestration for multi-agent workflows.

Compose chat, code, and data steps into declarative YAML pipelines. Export AI workflows as a single binary, an ISO, a Docker image, or Kubernetes pods. Use Ollama, llamafile, or any cloud AI provider.

workflow.yaml
run:
  chat:
    model: llama3.2:1b
    prompt: "Summarize: {{ get('q') }}"

  apiResponse:
    success: true
    response:
      data: "{{ get('chat') }}"

# No glue code. No legacy code.
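A rough sketch of how the `{{ get('...') }}` references in the workflow above chain together, assuming (as the example implies) that each step's output is stored under the step's key and later steps look it up by name. The `render` helper here is illustrative only, not the engine's actual templating implementation:

```python
import re

def render(template, outputs):
    # Replace each {{ get('key') }} placeholder with the stored
    # output for 'key'. (Minimal stand-in for the real templating.)
    def resolve(match):
        return str(outputs[match.group(1)])
    return re.sub(r"\{\{\s*get\('([^']+)'\)\s*\}\}", resolve, template)

# The 'chat' step renders its prompt from the request input 'q'...
outputs = {"q": "LLM orchestration in YAML"}
prompt = render("Summarize: {{ get('q') }}", outputs)
print(prompt)  # Summarize: LLM orchestration in YAML

# ...its reply is stored under the step name 'chat' (stubbed here,
# no model is called)...
outputs["chat"] = "A one-line summary."

# ...and 'apiResponse' references that output the same way.
body = render("{{ get('chat') }}", outputs)
print(body)  # A one-line summary.
```

Because every step writes to the same named output map, later steps declare their inputs purely by key, which is what lets the pipeline stay glue-code-free.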

Released under the Apache 2.0 License.