Pipeline Library

Overview of the 45 AI pipelines in GridWork HQ — categories, model routing, multi-pass execution, and how to trigger pipelines.

GridWork HQ ships with 45 AI pipelines organized across five categories. Each pipeline is a Markdown definition file in the Knowledge Vault that the pipeline server reads at runtime. No code changes are needed to add, modify, or remove pipelines.

Pipeline Categories

| Category | Count | Purpose |
| --- | --- | --- |
| Web Agency | 12 | Client delivery workflows — audits, builds, SEO, brand, content |
| Marketing | 8 | Lead generation, outreach, proposals, follow-ups |
| Design | 7 | Brand systems, style guides, design briefs, asset planning |
| Operations | 10 | Internal audits, archiving, knowledge maintenance, scope checks |
| General | 8 | Reports, specs, plans, and cross-category utilities |

Model Routing

Pipelines use different Claude models depending on the complexity and purpose of the task:

| Model | Used For | Examples |
| --- | --- | --- |
| Opus | Direct client deliverables, complex analysis | audit, propose, build, brand |
| Sonnet | Draft generation, structured output | content, seo, report, outreach |
| Haiku | Lightweight cron tasks, quick checks | kb-librarian, scope-audit, friday-update |

Model assignment is defined in each pipeline's Markdown frontmatter. You can override it by editing the model field in the pipeline definition file.
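As a rough illustration, a loader could extract the model field from the frontmatter like this. This is a minimal sketch using stdlib regex rather than a full YAML parser, and the `"sonnet"` fallback is an assumption, not documented server behavior:

```python
import re

DEFAULT_MODEL = "sonnet"  # assumed fallback, not confirmed by the docs

def routed_model(definition: str) -> str:
    """Return the `model` value from a pipeline definition's frontmatter."""
    match = re.search(r"^---\n(.*?)\n---", definition, re.DOTALL)
    if not match:
        return DEFAULT_MODEL
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "model":
            return value.strip()
    return DEFAULT_MODEL

definition = "---\nname: audit\nmodel: opus\n---\n## Instructions\n"
print(routed_model(definition))  # → opus
```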

Multi-Pass Execution

Complex pipelines run in multiple passes rather than a single prompt. Each pass builds on the previous output:

  1. Context gathering — reads relevant files from the Knowledge Vault (client folder, templates, memory)
  2. Analysis — processes the input against gathered context
  3. Generation — produces structured output (Markdown, JSON, or both)
  4. Review — validates output against the pipeline's quality criteria
  5. Storage — saves results to the Knowledge Vault output directory

Not every pipeline uses all five passes. Simple pipelines like kb-librarian may need only two passes, while build or brand may use all five.
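The pass sequence above can be sketched as a simple runner. The pass names and the assumption that a shorter pipeline runs a prefix of the full sequence are illustrative, not the actual server implementation:

```python
# Canonical pass order from the docs; shorter pipelines are assumed
# (illustratively) to run a prefix of this sequence.
ALL_PASSES = ["context", "analysis", "generation", "review", "storage"]

def run_pipeline(passes: int, initial_input: str) -> list[str]:
    """Run `passes` passes, each building on the previous output."""
    output = initial_input
    executed = []
    for name in ALL_PASSES[:passes]:
        output = f"{name}({output})"  # each pass consumes the prior output
        executed.append(name)
    return executed

print(run_pipeline(2, "kb-librarian job"))  # a simple two-pass pipeline
print(run_pipeline(5, "brand job"))         # a full five-pass pipeline
```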

Triggering Pipelines

From Mission Control

The Mission Control page in the dashboard shows cards for each pipeline with input fields and a Run button. Select a pipeline, provide the required input, and click Run to start a job.

From the CLI

```shell
curl -X POST http://localhost:8750/pipelines/run \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"pipeline": "prospect", "input": "web design agency in Atlanta"}'
```
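The same call can be made from Python with the standard library. The endpoint and payload shape come from the curl example above; `YOUR_TOKEN` is a placeholder:

```python
import json
import urllib.request

def run_pipeline_request(pipeline: str, input_text: str, token: str) -> urllib.request.Request:
    """Build a POST request for the pipeline server's run endpoint."""
    payload = json.dumps({"pipeline": pipeline, "input": input_text}).encode()
    return urllib.request.Request(
        "http://localhost:8750/pipelines/run",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = run_pipeline_request("prospect", "web design agency in Atlanta", "YOUR_TOKEN")
print(req.get_full_url())  # → http://localhost:8750/pipelines/run
# Send with: urllib.request.urlopen(req)
```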

From Cron

Automated pipelines are triggered by the cron scheduler. See Cron Configuration for schedule definitions.

Pipeline Definition Format

Each pipeline is defined in a Markdown file under knowledge/system/rules/pipelines/:

```markdown
---
name: prospect
description: "Research and qualify potential leads"
category: marketing
model: sonnet
passes: 3
inputs:
  - name: query
    type: text
    required: true
    description: "Business type or search query"
---

## Instructions

[Pipeline instructions for the AI agent...]
```
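The `inputs` block lets the server reject a job before it runs. A minimal sketch of that check, assuming the spec is already parsed into dicts mirroring the frontmatter above (the validator itself is illustrative, not the actual server code):

```python
def validate_inputs(spec: list[dict], provided: dict) -> list[str]:
    """Return error messages for any missing required inputs."""
    errors = []
    for field in spec:
        if field.get("required") and field["name"] not in provided:
            errors.append(f"missing required input: {field['name']}")
    return errors

# The `prospect` pipeline's inputs spec, as shown in the definition above.
prospect_inputs = [
    {"name": "query", "type": "text", "required": True,
     "description": "Business type or search query"},
]

print(validate_inputs(prospect_inputs, {}))
# → ['missing required input: query']
print(validate_inputs(prospect_inputs, {"query": "web design agency in Atlanta"}))
# → []
```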

Job Queue

The pipeline server processes a maximum of 3 jobs in parallel (configurable via MAX_PARALLEL_PIPELINES). Additional jobs are queued and processed in order. Job status streams back to the dashboard via SSE.
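The slot accounting described above can be modeled with a toy queue. Only the running/waiting bookkeeping is shown here; the real server's job execution and SSE streaming are out of scope, and the class structure is an assumption:

```python
from collections import deque

MAX_PARALLEL_PIPELINES = 3  # default from the docs, configurable via env

class JobQueue:
    """Toy model: at most max_parallel jobs run; the rest wait in FIFO order."""

    def __init__(self, max_parallel: int = MAX_PARALLEL_PIPELINES):
        self.max_parallel = max_parallel
        self.running: list[str] = []
        self.waiting: deque[str] = deque()

    def submit(self, job: str) -> None:
        if len(self.running) < self.max_parallel:
            self.running.append(job)
        else:
            self.waiting.append(job)

    def finish(self, job: str) -> None:
        self.running.remove(job)
        if self.waiting:  # promote the oldest queued job
            self.running.append(self.waiting.popleft())

q = JobQueue()
for job in ["audit", "build", "prospect", "seo"]:
    q.submit(job)
print(q.running)        # → ['audit', 'build', 'prospect']
print(list(q.waiting))  # → ['seo']
q.finish("audit")
print(q.running)        # → ['build', 'prospect', 'seo']
```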

Adding a Pipeline

  1. Create a definition file in knowledge/system/rules/pipelines/
  2. Register the pipeline in the pipeline server's registry
  3. Add a card to Mission Control if the pipeline is user-triggered
  4. See AI Pipelines Overview for the full reference
