Organized AI // gtm-autoresearch

Fine-Tune Pipeline
Documentation

Architecture docs for converting GTM autoresearch experiment outputs into client-specialized fine-tuned LLMs.
Branch: feature/finetune-pipeline

Phase 1–6 // Full Architecture

Client Intelligence Training Loop

Six-phase pipeline running from autoresearch experiment logging through the fine-tune runner, OpenClaw integration, and flywheel automation. Includes the JSONL training record format and the Claude Code prompt for Phase 1.
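A JSONL training record can be sketched as one JSON object per line. The shape below (OpenAI-style chat `messages` plus a `metadata` envelope) and all field values are assumptions for illustration, not the pipeline's actual schema:

```python
import json

# Hypothetical record: chat-format training turns plus pipeline metadata.
# Field names (client_id, experiment_id, score) are assumed, not confirmed.
record = {
    "messages": [
        {"role": "system", "content": "You are a GTM research analyst for Acme Co."},
        {"role": "user", "content": "Which campaigns drove the CTR lift last week?"},
        {"role": "assistant", "content": "The branded-search refresh drove most of the lift."},
    ],
    "metadata": {"client_id": "acme", "experiment_id": "exp-0042", "score": 0.91},
}

# JSONL = one serialized record per line; round-trips cleanly through json.
line = json.dumps(record, ensure_ascii=False)
parsed = json.loads(line)
```

Each experiment output would append one such line to the training file.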

Phase 2 // Deep Dive

Account State Collector

Full AccountState schema, the MCP tool call map for GTM / Google Ads / Pipeboard Meta, the steps for normalizing GTM container JSON, and the rendered system prompt output for HRE. Includes the Claude Code prompt for Phase 2.
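The container normalization step can be sketched as a reduce-and-summarize transform: take a raw GTM container export and keep only the compact shape an AccountState might carry. The function name and output fields below are assumptions; `containerId`, `tag`, and `trigger` mirror the keys a GTM export typically uses, but the real transform steps live in the Phase 2 doc:

```python
# Hedged sketch: collapse a raw GTM container export into a small summary
# suitable for embedding in an AccountState. Output field names are assumed.
def normalize_container(container: dict) -> dict:
    tags = container.get("tag", [])
    triggers = container.get("trigger", [])
    return {
        "container_id": container.get("containerId"),
        "tag_count": len(tags),
        "tag_types": sorted({t.get("type", "unknown") for t in tags}),
        "trigger_count": len(triggers),
    }

# Illustrative input, loosely shaped like a GTM container export.
raw = {
    "containerId": "GTM-ABC123",
    "tag": [{"type": "ua"}, {"type": "gaawe"}, {"type": "gaawe"}],
    "trigger": [{"type": "pageview"}],
}
state = normalize_container(raw)
```

The summarized shape keeps prompt token cost bounded regardless of container size.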

Phase 3 // Deep Dive

JSONL Training Data Pipeline

Score filter distribution, Chroma deduplication, system prompt anatomy with token budget, full JSONL record schema, quality gates, CLI output walkthrough, and Claude Code prompt for Phase 3.
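The score filter and deduplication stages can be sketched together as a single pass. The real pipeline reportedly dedupes via Chroma embeddings; a content hash stands in here as a simpler exact-match illustration, and the threshold value is an assumption:

```python
import hashlib

def filter_and_dedupe(records, min_score=0.8):
    """Keep records at or above a score threshold, dropping exact duplicates.
    Hash-based dedup is a stand-in for the pipeline's Chroma-based
    near-duplicate detection; min_score=0.8 is an assumed default."""
    seen, kept = set(), []
    for rec in records:
        if rec["score"] < min_score:
            continue  # quality gate: below-threshold experiments are dropped
        digest = hashlib.sha256(rec["text"].encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # exact duplicate of an earlier, higher-priority record
        seen.add(digest)
        kept.append(rec)
    return kept

records = [
    {"text": "insight A", "score": 0.92},
    {"text": "insight A", "score": 0.95},  # duplicate text, later in file
    {"text": "insight B", "score": 0.41},  # fails the score gate
]
kept = filter_and_dedupe(records)
```

First-seen-wins ordering means upstream sorting decides which duplicate survives.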

Phase 4 // Deep Dive

Fine-Tune Runner — Dual Track

Track A (OpenAI cloud) vs Track B (Ollama local M3 Ultra) comparison, track selection matrix, OpenAI API flow, Modelfile generation, model registry schema, eval harness, CLI output, and Claude Code prompt for Phase 4.
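For Track B, Modelfile generation can be sketched as simple string templating. `FROM`, `SYSTEM`, and `PARAMETER` are real Ollama Modelfile directives; the function name, base model tag, and temperature default below are assumptions:

```python
def render_modelfile(base_model: str, system_prompt: str, temperature: float = 0.2) -> str:
    """Render a minimal Ollama Modelfile for a client-specialized model.
    Directive keywords are Ollama's; the specific values are illustrative."""
    return "\n".join([
        f"FROM {base_model}",                 # base weights to specialize
        f'SYSTEM """{system_prompt}"""',      # baked-in client system prompt
        f"PARAMETER temperature {temperature}",
    ])

modelfile = render_modelfile("llama3.1:8b", "You are the Acme GTM analyst.")
```

The rendered text would be fed to `ollama create` to register the local model.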

Phase 5 // Deep Dive

OpenClaw Client Brain

Request routing architecture through OpenClaw on port 18789, the full middleware stack (Auth → ClientID → ModelRouter → Telemetry → Fallback), per-client config schema, fallback chain, active client status, and the Claude Code prompt for Phase 5.
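A linear middleware chain like the one named above can be sketched as a list of functions applied in order. Everything below — the request dict shape, the key-prefix client lookup, the model naming scheme, the fallback terminus — is illustrative, not OpenClaw's actual API:

```python
# Hedged sketch of Auth → ClientID → ModelRouter → Telemetry → Fallback.
def auth(req):
    if not req.get("api_key"):
        raise PermissionError("missing api_key")
    return req

def client_id(req):
    # Assumed convention: key prefix identifies the client.
    req["client"] = req["api_key"].split("-")[0]
    return req

def model_router(req):
    # Route to the client's latest fine-tune (naming scheme assumed).
    req["model"] = f"{req['client']}-finetune-latest"
    return req

def telemetry(req):
    req.setdefault("events", []).append("routed")
    return req

def fallback(req):
    # Record the chain downstream callers retry on serving errors.
    req["fallback_models"] = [req["model"], "base-generalist"]
    return req

MIDDLEWARE = [auth, client_id, model_router, telemetry, fallback]

def handle(req):
    for mw in MIDDLEWARE:
        req = mw(req)
    return req

out = handle({"api_key": "acme-key-123"})
```

Keeping each stage a plain function makes the per-client config easy to test in isolation.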

Phase 6 // Deep Dive

The Flywheel

Complete compounding loop diagram, watcher trigger events, drift detection with auto-rollback visualization, notification events, flywheel config schema, version pruner, and Claude Code prompt for Phase 6 — the final phase.
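The drift check at the heart of the auto-rollback can be sketched as a comparison of the live model's eval score against the registered baseline. The tolerance value and return shape are assumptions, not the flywheel's actual config:

```python
def check_drift(current_score: float, baseline_score: float, tolerance: float = 0.05):
    """Illustrative drift check: if the live model's eval score falls more
    than `tolerance` below the registry baseline, signal a rollback.
    The 0.05 threshold is an assumed default, not the pipeline's setting."""
    drift = baseline_score - current_score
    return {"drift": round(drift, 4), "rollback": drift > tolerance}

# A new version scoring 0.78 against a 0.86 baseline would trip the rollback.
status = check_drift(current_score=0.78, baseline_score=0.86)
```

On a `rollback` signal the watcher would repoint serving at the prior registry version and emit a notification event.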