AI tools for federal capture, built by a practitioner.
A catalog of small, sharp tools — Prompt packs, Skills, Plugins, Agents — that compress federal proposal and capture work from hours to minutes. Public-data-only. Run in your own Claude environment. Priced from $49 for the impulse-tier prompt pack to $3,500/yr per seat for the team-tier color-team agent.
Impulse
$49–$99
Solo BD consultant, fractional capture lead
Personal credit card. <90-second decision.
- SAM.gov Daily Triage Pack — $49
- L/M Crosswalk Prompt Pack — $99
- Acronym Builder — $99
<5 min to first useful output
Prosumer
$199–$999
Proposal manager / director at a small-mid prime
Owner-operator approval. 5–15 min decision.
- Compliance Matrix Builder Skill — $499
- Forecast Sweep Skill — $199
- Capability Statement Skill — $299
<10 min to first useful output
Team
$3,500/yr/seat and up
Capture VP / Proposal Director at a mid-tier prime
PO threshold. 1–4 week decision. NET-30 invoicing for 5+ seats.
- Color Team Reviewer Agent — $3,500/yr/seat
- Proposal Stack Bundle — $8,500/yr/10 seats
60-day pilot
The Day-0 catalog
Three tools, one per tier. Each one solves a specific federal capture or proposal problem with a measured benchmark, a concrete time saved, and a price you can approve without a procurement cycle.
Impulse · Prompt pack
SAM.gov Daily Triage Pack
Stop scrolling SAM.gov. Paste your saved-search export, get a 5-bullet daily brief in 5 minutes.
30–60 min/day → 5 min
Prosumer · Skill
Compliance Matrix Builder
Drop in a 200-page RFP. Get a compliance matrix covering Sections L, M, C, and H in Word, Excel, and Markdown — in 15 minutes.
6–16 hours → 15 minutes · ≥90% recall on a 3-RFP benchmark
Team · Agent
Pink/Red/Gold Color Team Agent
Three reviewer simulations on a 200-page proposal in 30 minutes per pass. Your humans review the judgment calls.
130–200 reviewer hours → 30 min per color · ≥80% recall
What sits between ChatGPT and a $200K SaaS
Federal capture and proposal work currently has two AI options. Neither one fits the people actually doing the work.
On one end: generic ChatGPT prompts. Free, but unsourced, drifting, and full of the "AI-flavored" prose that makes a Section M reviewer reach for the pass pile. The output looks productive in a demo and falls apart on the second page of an RFP.
On the other end: enterprise federal SaaS. VisibleThread, GovDash, Procurement Sciences. Quoted, sales-gated, feature-loaded — built for shops with $50K+ procurement budgets and an InfoSec team that can run a six-month security review. Most of the actual work is done by solo BD consultants, small-to-mid primes, and proposal shops where that procurement cycle is the whole problem.
Capture.kit sits between them. Productized AI tools, priced from $49 to $3,500/yr/seat. Each one cites verbatim source text for every output. Each one runs inside your own Claude environment — the catalog never sees your RFPs, your proposals, or your pursuit list. Each one ships with a sample input and a sample output, no email gate. You can decide if it's useful in five minutes.
How it works
Pick a tool
Three Day-0 tools, one per tier. Read the page, watch the Loom, download the sample output. No demo call.
Buy with a credit card or a PO
Stripe Checkout for everything under $5,000. NET-30 invoicing on the team-tier subscriptions for 5+ seats. No accounts, no SSO, no enterprise quote.
Drop the asset into your Claude environment
Prompt packs go in a fresh Claude.ai chat. Skills go in Claude.ai Projects or Claude Code. Agents go in Cowork or Claude Code as plugins. License-key validation happens once at install for prosumer and team tiers.
Run on your own work
RFPs, drafts, capability statements, opportunity lists. The asset processes them in your account. We never see your content. Output is verbatim-source-cited, with `[VERIFY]` flags on anything inferred.
Refund if it doesn't work
14-day refund on impulse, 30-day on prosumer, 60-day pilot on team. The bar is honest, not exhaustive. If you opened it, ran it once on a real piece of work, and it didn't help — we refund.
How we handle your data
Public data only
Every tool processes the inputs you bring to it — public RFPs from SAM.gov, your own capability profile, your own proposal drafts. The acceptable-use policy (AUP) forbids CUI, SSI, and source-selection-sensitive content. Hard line.
Runs in your Claude environment
Capture.kit ships assets, not a SaaS platform. Skills, Plugins, and Agents execute in your Claude.ai or Claude Code session. We never see your work product. Zero retention via Anthropic's API contract.
Not FedRAMP authorized
Stated plainly because it matters: Capture.kit is not FedRAMP authorized. For FedRAMP-required workflows, use a Claude model through Amazon Bedrock in AWS GovCloud — a GovCloud offering is a separate product on the roadmap. Don't deploy these tools on classified or controlled-unclassified work.
Verify before submit
Every output cites verbatim source text. Inferred elements are tagged `[VERIFY]`. The catalog does not pretend AI replaces senior judgment; it compresses the rote work so the human can spend hours on the calls that need them.
From the field
Why Section H is where small primes lose proposals
Most compliance tools stop at L and M. The requirements that decide responsiveness usually live in C and H — and the typical proposal-shop workflow doesn't shred them. Here's what that looks like on a real RFP.
7 min read
The $30K-per-RFP rote-work tax
A 200-page proposal pulls 130–200 reviewer hours. Most of those hours go to consistency checking, acronym audits, and readability — work that doesn't need senior judgment but absolutely has to get done. Here's the math.
5 min read
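The headline figure can be sketched as back-of-envelope arithmetic. The hour range comes from the post above; the blended reviewer rate is an illustrative assumption, not a figure from the catalog:

```python
# Rote-work tax sketch, under stated assumptions.
HOURS_LOW, HOURS_HIGH = 130, 200   # reviewer hours on a 200-page proposal (from the post)
BLENDED_RATE = 180                 # assumed $/hr blended reviewer rate -- hypothetical

hours_mid = (HOURS_LOW + HOURS_HIGH) / 2   # 165 hours, midpoint of the range
tax = hours_mid * BLENDED_RATE             # 165 * 180 = 29,700 -> roughly $30K per RFP

print(f"~${tax:,.0f} per RFP")             # prints "~$29,700 per RFP"
```

Swap in your own shop's rate; anywhere in the $150–$230/hr band lands the tax between roughly $20K and $46K per pursuit.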
Sample-first, not demo-first
If a federal proposal tool can't be evaluated in 5 minutes from a sample output, the tool is the wrong abstraction. A note on why everything in the catalog ships with a public sample and skips the sales call.
4 min read
10 GovCon AI mistakes most small primes make
A short PDF on the specific failure modes — hallucinated fit scores, missing Section C requirements, ChatGPT-prose in a Section M response — that get small-prime proposals downgraded. Free. No demo call.
Who builds this
Capture.kit is built by one practitioner who has spent enough years on the capture and proposal side of federal contracting to be unimpressed by both the existing tools and the latest AI hype. The catalog is the tooling that didn't exist when I needed it: small, productized, priced for the people actually doing the work, with verifiable output and no enterprise sales motion in front of it.
The tools run in your Claude environment because the procurement and InfoSec ceremony of a centralized SaaS doesn't fit the people the catalog serves. The output cites verbatim source text because federal review culture is "show your work" — and AI tooling that doesn't show its work shouldn't be in a proposal-shop workflow.
If a tool in the catalog doesn't compress real hours into real minutes on your first real piece of work, you get a refund. That's the contract.
Capture.kit — built by a practitioner
Pick a tool. Use it on real work today.
The Day-0 trio is live. Read a product page, watch a Loom, download a sample, decide.