FORGE · Live
Intelligence · Phase A

The FORGE brain

A closed loop between every draft, review comment, and award outcome. Quality and speed compound as you ship more proposals.

Corpus size
0
Outcomes
0
Win rate
0%
Patterns
0
Architecture
Learning loop
  1. Capture

    Every draft, revision, comment, win theme, and debrief is stored with context (agency, NAICS, section type, evaluator criterion).

  2. Score

    An evaluator mirror predicts how a draft section would score against Section M criteria, trained on your own Pink / Red / Gold review history.

  3. Retrieve

    pgvector retrieval with a learned reranker weights past passages by outcome; winning sections from the same agency float to the top.

  4. Generate

    Prompts are versioned and attributed. Every generation logs template version, retrieval set, tokens, and the human decision that followed.

  5. Feedback

    Win / loss outcomes plus evaluator notes close the loop. The reranker, evaluator mirror, and prompt library update overnight from the signal stream.
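The Retrieve step above can be sketched as a toy reranker. This is a minimal illustration, not FORGE's actual model: the `Passage` fields and the `win_boost` / `agency_boost` weights are assumptions standing in for whatever the learned reranker fits from outcome data.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    agency: str
    similarity: float  # raw cosine similarity from the pgvector search
    won: bool          # outcome of the proposal this passage came from

def rerank(passages, query_agency, win_boost=0.3, agency_boost=0.2):
    """Adjust raw vector-search similarity by outcome and agency match.

    The boost values are illustrative constants; in FORGE they would be
    learned from win / loss signals rather than hand-set.
    """
    def score(p):
        s = p.similarity
        if p.won:
            s += win_boost          # winning passages float upward
        if p.agency == query_agency:
            s += agency_boost       # same-agency passages float upward
        return s
    return sorted(passages, key=score, reverse=True)
```

With these weights, a lower-similarity passage from a winning, same-agency proposal can outrank a higher-similarity passage from a loss, which is exactly the "winning sections float to the top" behavior the step describes.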

Corpus
Captured artifacts
0
Drafts, comments, win themes, debriefs
Embedding coverage 0%

Every write enqueues an embedding job. Coverage climbs as the background worker processes the stream.
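The write-enqueue / background-worker pattern described here can be modeled in a few lines. The class and method names are illustrative, not FORGE's actual schema; the point is that coverage is simply embedded artifacts over total artifacts.

```python
from collections import deque

class EmbeddingQueue:
    """Toy model of the capture -> embed pipeline (names are assumptions)."""

    def __init__(self):
        self.pending = deque()   # embedding jobs awaiting the worker
        self.embedded = set()    # artifacts with a stored vector
        self.total = set()       # every artifact ever captured

    def capture(self, artifact_id):
        # Every write enqueues an embedding job.
        self.total.add(artifact_id)
        self.pending.append(artifact_id)

    def work(self, n=1):
        # The background worker drains up to n jobs from the stream.
        for _ in range(min(n, len(self.pending))):
            self.embedded.add(self.pending.popleft())

    @property
    def coverage_pct(self):
        # Coverage climbs as the worker catches up to the writes.
        return 100 * len(self.embedded) // max(len(self.total), 1)
```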

Evaluator mirror
Scoring model
Warming up
0 of 100 training samples
Training readiness 0%

Once 100 reviewed sections are captured, nightly jobs fit a classifier that predicts draft scores in the editor.
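The readiness gate above reduces to simple arithmetic. A minimal sketch, assuming the 100-sample threshold from the card; the function names are hypothetical.

```python
MIN_SAMPLES = 100  # threshold stated in the card copy

def training_readiness(reviewed_sections: int) -> int:
    """Readiness as a capped percentage of the 100-sample gate."""
    return min(100, 100 * reviewed_sections // MIN_SAMPLES)

def should_train(reviewed_sections: int) -> bool:
    """The nightly job fits the classifier only once the gate is met."""
    return reviewed_sections >= MIN_SAMPLES
```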

Outcomes
Win / loss signal
0 outcomes captured
Signal rate 0

Win → positive reward to every retrieval used. Loss → negative. The reranker updates on each decided bid.
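The reward rule above can be sketched as a per-passage weight update. The additive update and the `lr` step size are assumptions for illustration; the source doesn't specify how the reranker actually trains.

```python
def apply_outcome(weights: dict, retrieval_ids: list, won: bool, lr: float = 0.1) -> dict:
    """Nudge the weight of every retrieval used in the bid.

    Win -> positive reward to each passage; loss -> negative.
    Hypothetical update rule, not FORGE's actual training step.
    """
    delta = lr if won else -lr
    for pid in retrieval_ids:
        weights[pid] = weights.get(pid, 0.0) + delta
    return weights
```

Each decided bid applies one such update, so passages that keep appearing in winning proposals accumulate weight while those tied to losses sink.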

Prompt library
Versioned prompts with outcome attribution
No prompt runs yet. Once the Editor's AI assistant is wired in, each AI generation will write a PromptVersion entry here with acceptance rate, edit distance, and review severity.
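A PromptVersion entry like the one described could take roughly this shape. The field names and methods are assumptions sketched from the copy, not the actual schema; only acceptance rate is modeled here.

```python
from dataclasses import dataclass

@dataclass
class PromptVersion:
    """Hypothetical prompt-library record (field names assumed)."""
    template_id: str
    version: int
    runs: int = 0
    accepted: int = 0

    def record_run(self, accepted: bool) -> None:
        # Each AI generation logs one run and the human's Accept decision.
        self.runs += 1
        if accepted:
            self.accepted += 1

    @property
    def acceptance_rate(self) -> float:
        return self.accepted / self.runs if self.runs else 0.0
```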
Audit
Last 10 signals
No training signals yet. Every Accept / Revise / Reject click in the Editor, every Pink Team comment, and every win or loss writes a signal here.
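A "last 10 signals" view is naturally a bounded log that drops the oldest entry as new signals arrive. The structure below is an assumption; the signal kinds are taken from the card copy.

```python
from collections import deque

class SignalLog:
    """Keeps only the most recent signals (structure is an assumption)."""

    def __init__(self, limit: int = 10):
        # deque with maxlen silently discards the oldest entry when full
        self.entries = deque(maxlen=limit)

    def write(self, kind: str, source: str) -> None:
        # e.g. write("accept", "editor") or write("loss", "outcome")
        self.entries.append((kind, source))
```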
Learned patterns
What the brain has concluded
No patterns yet: the brain needs data. Patterns surface after at least 10 decided opportunities. Examples: 'NAVSEA wins correlate with explicit latency p95 claims', 'Two-page exec summaries outscore three', 'Pink Team CRITICAL comments on section 3.2 correlate with loss'.
Phase A — plumbing. The brain currently records to localStorage and the UI is stubbed. The Postgres + pgvector backend, the nightly training jobs, and the Editor's live predicted-score pill ship in Phase B.