You write `echo "a dvd bouncing tui" > idea.md`, point a runner at `dotpowers.dot`, and come back to a tested, reviewed project on a git branch. One DOT file, ~1300 lines. It encodes the superpowers dev methodology as a pipeline graph: brainstorming, planning, TDD implementation, multi-model review, and shipping decisions, all wired together with failure handling and human gates.
Install
```shell
git clone https://github.com/2389-research/dotpowers.git
```
Run it with any attractor-compliant DOT runner:
```shell
mkdir my-project && cd my-project
git init
echo "a terminal dashboard that shows system metrics" > idea.md
# tracker, mammoth, smasher, or whatever runner you have
tracker /path/to/dotpowers.dot --tui
```
What it does
Six phases from idea to shipping decision:

- **Brainstorm** reads your idea and asks questions one at a time (YAGNI enforced), then writes a design brief with 2-3 architectural approaches.
- **Plan** has GPT-5.2 draft a TDD plan, a shell script reject vague steps, and Opus audit every requirement against the brief, up to 5 iterations.
- **Setup** creates a feature branch and installs dependencies.
- **Implement** runs a TDD loop per task: failing test, minimal code, spec review (Opus), quality review (GPT-5.4), commit.
- **Review** has three models independently review the finished project, then each critiques the other two (6 cross-critiques), and Opus makes the call.
- **Ship** lets you merge locally, push a PR, keep the branch, or discard it.
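The phase spine can be sketched as a tiny DOT graph. The node names and edge labels below are illustrative only, not the identifiers the real 1300-line file uses:

```dot
// Hypothetical sketch of the six-phase spine; real node names differ.
digraph phases {
    brainstorm -> plan -> setup -> implement -> review -> ship;

    // A failed plan audit loops back for another draft (capped at 5).
    plan -> plan [label="audit failed"];

    // A failed final review can send the project back for rework.
    review -> implement [label="rework"];
}
```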
Four models, each doing what it’s best at. Opus 4.6 handles spec audits, consensus decisions, and debugging. GPT-5.4 writes the code. GPT-5.2 drafts and patches plans. Gemini 3.5 Flash provides the third opinion in reviews.
Graduated failure handling. When something breaks, the pipeline escalates through four stages before it needs you: retry the node (up to 2 times), run a debug investigation with root-cause analysis, replan the task with a revised approach, and finally escalate to a human gate. Loop caps prevent runaway token burn: plan validation gets 5 iterations, implementation review gets 5 per task, final rework gets 2 full cycles.
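That escalation ladder is just edges in the graph. Here is a hedged sketch of what it might look like for one task; every name and label is invented for illustration, and the real file follows whatever syntax the attractor spec defines:

```dot
// Illustrative failure routing for a single implementation task.
digraph failure_handling {
    implement_task -> task_done [label="tests pass"];

    // First rung: retry the same node, up to 2 times.
    implement_task -> retry [label="failure"];
    retry -> implement_task [label="attempts < 2"];

    // Second rung: a debug node does root-cause analysis.
    retry -> debug [label="attempts >= 2"];
    debug -> implement_task [label="fix identified"];

    // Third rung: replan the task with a revised approach.
    debug -> replan [label="no fix found"];
    replan -> implement_task;

    // Last rung: escalate to a human gate.
    replan -> human_gate [label="replan exhausted"];
}
```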
Human-in-the-loop, not human-on-the-hook. You approve the design brief. You review batch checkpoints every 3 completed tasks. You choose how to ship. If the implementer has questions, it pauses and asks. The pipeline runs headless between those gates.
How it works
The pipeline is a single Graphviz DOT file. Each node is either an LLM prompt (with model assignment via CSS-like stylesheets), a shell script (format validation, counter management, build verification), or a human gate (design approval, batch review, shipping decision). Edges encode success/failure routing, loop-backs, and escalation paths.
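A sketch of the three node kinds, with a CSS-like class standing in for model assignment. The attribute names here are assumptions for illustration; the actual attributes are whatever the attractor runner defines:

```dot
// Illustrative node declarations; attribute names are invented.
digraph node_types {
    // LLM prompt node: a stylesheet rule maps class="opus" to a model.
    audit_plan    [class="opus", label="Audit plan against brief"];

    // Shell node: deterministic validation, no tokens spent.
    check_format  [shape=box, label="sh: validate plan format"];

    // Human gate: the pipeline blocks here until a person decides.
    approve_brief [shape=diamond, label="human: approve design brief?"];

    check_format -> audit_plan -> approve_brief;
}
```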
State lives in two places: docs/plans/ for artifacts the LLMs read and write (brainstorm notes, design brief, TDD plan, audit results), and .tracker/ for pipeline mechanics (loop counters, batch counts, checkpoint data). The plans directory is committed; the tracker directory is gitignored.
Fan-out nodes run the three final reviews in parallel. Fan-in nodes collect the results before cross-critiques begin. The consensus node reads all reviews and critiques, then routes to ship, rework, or fail.
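The review phase's parallel structure might look like this in DOT form (again, node names are hypothetical, not taken from the actual file):

```dot
// Illustrative fan-out/fan-in for the final review phase.
digraph review_phase {
    fan_out -> review_opus;
    fan_out -> review_gpt;
    fan_out -> review_gemini;

    // Fan-in waits for all three reviews before cross-critiques start.
    review_opus   -> fan_in;
    review_gpt    -> fan_in;
    review_gemini -> fan_in;

    fan_in -> cross_critiques -> consensus;

    // Consensus routes to one of three terminal outcomes.
    consensus -> ship   [label="ship"];
    consensus -> rework [label="rework"];
    consensus -> fail   [label="fail"];
}
```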
