Architecture docs for converting GTM autoresearch experiment outputs into
client-specialized fine-tuned LLMs.
Branch: feature/finetune-pipeline
→Six-phase pipeline from autoresearch experiment logging through the fine-tune runner, OpenClaw integration, and flywheel automation. Includes the JSONL training record format and Claude Code prompt for Phase 1.
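To make the Phase 1 output concrete, here is a minimal sketch of one JSONL training record distilled from an experiment log. The field names (`messages`, `meta`, `client_id`, `experiment_id`, `score`) are illustrative assumptions, not the pipeline's actual schema:

```python
import json

# Hypothetical JSONL training record; all field names are assumptions
# standing in for the schema defined in the Phase 1 doc.
record = {
    "messages": [
        {"role": "system", "content": "You are a GTM analyst for this client."},
        {"role": "user", "content": "Summarize last week's container changes."},
        {"role": "assistant", "content": "Three tags were updated..."},
    ],
    "meta": {
        "client_id": "acme",          # which client this example trains
        "experiment_id": "exp-0042",  # provenance: source autoresearch run
        "score": 0.91,                # quality score assigned at logging time
    },
}

# One serialized record per line in the .jsonl file.
line = json.dumps(record)
assert json.loads(line)["meta"]["client_id"] == "acme"
```

One-record-per-line keeps the file streamable, so later phases can filter and deduplicate without loading the whole corpus.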
→Full AccountState schema, MCP tool call map for GTM / Google Ads / Pipeboard Meta, GTM container JSON normalization transform steps, and rendered system prompt output for HRE. Includes Claude Code prompt for Phase 2.
→Score filter distribution, Chroma deduplication, system prompt anatomy with token budget, full JSONL record schema, quality gates, CLI output walkthrough, and Claude Code prompt for Phase 3.
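The Phase 3 filter-and-dedup pass can be sketched under two simplifying assumptions: records carry a `meta.score`, and near-duplicate detection is reduced here to an exact content hash (the doc describes embedding-based deduplication via Chroma, which this sketch does not reproduce):

```python
import hashlib
import json

MIN_SCORE = 0.8  # assumed quality gate; the real threshold lives in the Phase 3 doc

def filter_and_dedup(records):
    """Keep records above the score gate, dropping exact-duplicate conversations."""
    seen, kept = set(), []
    for rec in records:
        if rec["meta"]["score"] < MIN_SCORE:
            continue  # fails the score filter
        key = hashlib.sha256(
            json.dumps(rec["messages"], sort_keys=True).encode()
        ).hexdigest()
        if key in seen:
            continue  # duplicate conversation content
        seen.add(key)
        kept.append(rec)
    return kept

recs = [
    {"messages": [{"role": "user", "content": "hi"}], "meta": {"score": 0.9}},
    {"messages": [{"role": "user", "content": "hi"}], "meta": {"score": 0.95}},
    {"messages": [{"role": "user", "content": "lo"}], "meta": {"score": 0.3}},
]
assert len(filter_and_dedup(recs)) == 1  # one duplicate, one low-score record dropped
```

Hash-based deduplication only catches exact repeats; the embedding approach in the doc also collapses paraphrased near-duplicates, which matters for training-set diversity.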
→Track A (OpenAI cloud) vs Track B (Ollama local M3 Ultra) comparison, track selection matrix, OpenAI API flow, Modelfile generation, model registry schema, eval harness, CLI output, and Claude Code prompt for Phase 4.
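For Track B, Modelfile generation is essentially string templating over Ollama's Modelfile directives (`FROM`, `PARAMETER`, `SYSTEM`). A minimal sketch, where the base model tag, temperature, and prompt are placeholders rather than the pipeline's actual values:

```python
def render_modelfile(base: str, system_prompt: str, temperature: float) -> str:
    """Render an Ollama Modelfile for a client-specialized model."""
    return (
        f"FROM {base}\n"
        f"PARAMETER temperature {temperature}\n"
        f'SYSTEM """{system_prompt}"""\n'
    )

mf = render_modelfile("llama3.1:8b", "You are a GTM analyst for acme.", 0.2)
assert mf.startswith("FROM llama3.1:8b")
```

The rendered file would then be built locally with `ollama create <model-name> -f Modelfile`, producing the registry entry the eval harness scores.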
→Request routing architecture through OpenClaw :18789, full middleware stack (Auth → ClientID → ModelRouter → Telemetry → Fallback), per-client config schema, fallback chain, active client status, and Claude Code prompt for Phase 5.
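The middleware stack reads naturally as a chain of handlers, each passing the request to the next. A toy version covering the first three stages (Auth → ClientID → ModelRouter); handler names, the request dict shape, and the model-naming convention are all assumptions for illustration:

```python
def auth(req, nxt):
    # Reject requests without a valid key before any routing happens.
    if req.get("api_key") != "secret":
        return {"status": 401}
    return nxt(req)

def client_id(req, nxt):
    # Resolve the client from a header; fall back to a default tenant.
    req["client"] = req.get("headers", {}).get("x-client-id", "default")
    return nxt(req)

def model_router(req, nxt):
    # Map the client to its fine-tuned model (naming scheme is assumed).
    req["model"] = f"{req['client']}-gtm"
    return nxt(req)

def compose(middlewares, handler):
    """Wrap handler in each middleware, outermost first."""
    for mw in reversed(middlewares):
        handler = (lambda m, nxt: lambda req: m(req, nxt))(mw, handler)
    return handler

app = compose(
    [auth, client_id, model_router],
    lambda req: {"status": 200, "model": req["model"]},
)
resp = app({"api_key": "secret", "headers": {"x-client-id": "acme"}})
assert resp == {"status": 200, "model": "acme-gtm"}
```

Telemetry and Fallback would wrap the same chain: Telemetry records timing around `nxt(req)`, and Fallback catches failures and retries against the fallback chain from the per-client config.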
→Complete compounding loop diagram, watcher trigger events, drift detection with auto-rollback visualization, notification events, flywheel config schema, version pruner, and Claude Code prompt for Phase 6 — the final phase.
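The drift-detection-with-rollback step can be sketched as a comparison between the active model's eval score and its predecessor's. The registry shape and the threshold value are illustrative assumptions, not the flywheel config's actual fields:

```python
DRIFT_THRESHOLD = 0.05  # assumed max tolerated eval-score regression

def check_drift(registry: dict) -> dict:
    """Swap active and previous model versions if eval score regressed past threshold."""
    active, previous = registry["active"], registry["previous"]
    if previous["eval_score"] - active["eval_score"] > DRIFT_THRESHOLD:
        # Auto-rollback: promote the previous version, demote the drifted one.
        registry["active"], registry["previous"] = previous, active
    return registry

reg = {
    "active":   {"version": "v7", "eval_score": 0.78},
    "previous": {"version": "v6", "eval_score": 0.86},
}
assert check_drift(reg)["active"]["version"] == "v6"  # 0.08 regression triggers rollback
```

In the full loop, a rollback like this would also emit a notification event and mark the drifted version as a candidate for the version pruner.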