The FORGE brain
A closed loop between every draft, review comment, and award outcome. Quality and speed compound as you ship more proposals.
- 1. Capture
Every draft, revision, comment, win theme, and debrief is stored with context (agency, NAICS, section type, evaluator criterion).
- 2. Score
An evaluator-mirror model predicts how a draft section would score against Section M criteria, trained on your own Pink / Red / Gold review history.
- 3. Retrieve
pgvector retrieval plus a learned reranker weights past passages by outcome — winning sections in the same agency float to the top.
- 4. Generate
Prompts are versioned and attributed. Every generation logs template version, retrieval set, tokens, and the human decision that followed.
- 5. Feedback
Win / loss + evaluator notes close the loop. Reranker, evaluator-mirror, and prompt library update overnight from the signal stream.
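The retrieve step above can be sketched in miniature. This is a minimal illustration, not the production pgvector query: the `Passage` fields, the boost values, and the pure-Python cosine similarity are all assumptions standing in for the real embedding column and learned reranker.

```python
import math
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    embedding: list[float]   # stands in for the pgvector column
    agency: str
    won: bool                # outcome label from the bid debrief

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rerank(query_emb, query_agency, passages, win_boost=0.2, agency_boost=0.1):
    # Base similarity plus outcome-weighted boosts: winning sections in the
    # same agency float to the top. Boost magnitudes are illustrative.
    def score(p: Passage) -> float:
        s = cosine(query_emb, p.embedding)
        if p.won:
            s += win_boost
        if p.agency == query_agency:
            s += agency_boost
        return s
    return sorted(passages, key=score, reverse=True)
```

In the real system the boosts would be learned from the outcome stream rather than fixed constants.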
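The generate step's attribution log might look like the following sketch. The record shape and function names are assumptions; the point is that every generation is an append-only record tying output to template version, retrieval set, token count, and the human decision that followed.

```python
import time

generation_log: list[dict] = []  # append-only, stands in for a DB table

def log_generation(template_version: str, retrieval_ids: list[str],
                   tokens: int, output: str) -> int:
    # Record everything needed to attribute this generation later.
    generation_log.append({
        "template_version": template_version,
        "retrieval_set": list(retrieval_ids),
        "tokens": tokens,
        "output": output,
        "human_decision": None,  # filled in once the writer accepts/edits/rejects
        "ts": time.time(),
    })
    return len(generation_log) - 1

def record_decision(entry_id: int, decision: str) -> None:
    generation_log[entry_id]["human_decision"] = decision
```

Because the retrieval set is logged per generation, the feedback step can later credit or penalize exactly the passages that were used.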
Every write enqueues an embedding job. Coverage climbs as the background worker processes the stream.
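The enqueue-on-write pattern can be sketched as below. The in-memory queue and the `fake_embed` placeholder are assumptions; in production the queue would be a durable job table and the worker would call a real embedding model.

```python
import queue

embedding_jobs: queue.Queue = queue.Queue()
embeddings: dict[str, list[float]] = {}  # doc_id -> vector, stands in for pgvector

def save_document(doc_id: str, text: str) -> None:
    # Persist the write (stubbed here), then enqueue its embedding job.
    embedding_jobs.put((doc_id, text))

def fake_embed(text: str) -> list[float]:
    # Placeholder for the real embedding model call.
    return [float(len(text))]

def process_pending() -> None:
    # Background worker: drain the queue, writing embeddings back.
    # Coverage climbs as this runs.
    while not embedding_jobs.empty():
        doc_id, text = embedding_jobs.get()
        embeddings[doc_id] = fake_embed(text)
```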
Once 100 reviewed sections are captured, nightly jobs fit a classifier that predicts draft scores in the editor.
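The nightly fit might resemble this stdlib-only logistic regression sketch. The features, learning rate, and training loop are illustrative assumptions, not the production model, which would train on real section features and review scores.

```python
import math

def train_score_model(examples, lr=0.1, epochs=200):
    # examples: list of (feature_vector, label), label 1 = section scored well.
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(model, x) -> float:
    # Probability the draft section scores well; drives the editor's pill.
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```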
Win → positive reward to every retrieval used. Loss → negative. The reranker updates on each decided bid.
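The reward rule above reduces to a per-passage weight update, sketched here. The step size and the flat dictionary are assumptions; the real reranker would fold these signals into its learned weighting.

```python
retrieval_weights: dict[str, float] = {}  # passage_id -> learned boost

def record_outcome(passage_ids: list[str], won: bool, step: float = 0.05) -> None:
    # Win -> positive reward to every passage retrieved for the bid; loss -> negative.
    delta = step if won else -step
    for pid in passage_ids:
        retrieval_weights[pid] = retrieval_weights.get(pid, 0.0) + delta
```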
The UI is currently stubbed with localStorage. The Postgres + pgvector backend, the nightly training jobs, and the Editor's live predicted-score pill ship in Phase B.