Year in RFPs: 2025 — the data and the narrative
The canonical year-end synthesis. What moved in the RFP category in 2025, what did not, what the public data says about vendors and buyers, and three predictions for 2026 with the evidence behind them. 5,000 words, twenty-six sources.
This is the year-end synthesis for 2025. It is a long report — about 5,000 words — and it draws on the twenty-six public sources we tracked across the year, plus the full corpus of posts we have published on the blog. The point is not to catalog events. The point is to sort signal from noise in a category that produced a lot of both.
A note before starting. PursuitAgent is a participant in the category; we launched publicly inside 2025 and have a position in the market we are reporting on. The obvious bias applies. We have tried to counter it by citing competitors and industry observers against their own customer reviews, filings, and public statements — not against our opinions. Every number has a source. Every competitive claim has a link.
The report is structured in nine sections. The executive summary is first, for readers who will not read the rest. The data, the vendors, the shift to grounded, the SME bottleneck, what did not change, what we shipped, predictions for 2026, and a pointer to the sources in the methodology footer.
1. Executive summary
Seven signals defined 2025 in the RFP category.
First, RFP volume continued to rise in every tracked segment. Federal contracting, enterprise SaaS procurement, state and local government, and DDQ / security-questionnaire traffic all increased on public indicators. The increase is not driven by AI (that is a 2026 story in the making); it is driven by procurement maturity continuing to formalize purchase decisions that were previously relationship-based.
Second, the proposal function’s staffing did not keep up with volume. APMP salary data and practitioner sentiment across the year suggest teams grew under 10% while inbound work grew materially more. The gap was absorbed by process compression and, increasingly, by tooling.
Third, “AI with citations” became category table stakes. Every serious proposal-software vendor shipped some form of citation-rendering in 2025. The gap between citation-rendering and actual claim-level grounding — the gap Stanford HAI’s legal-RAG study mapped at 17–33% hallucination — widened operationally even as marketing narrowed it.
Fourth, the 48% SME-collaboration bottleneck did not move. It has now held steady for five consecutive years, 2025 included. The practical interventions that might move it — async patterns, draft-packet generation, SME-friendly KBs — saw adoption but not the aggregate impact required to bend the headline number.
Fifth, color-team discipline remained rare outside large federal shops. Shipley’s orthodoxy continues to be imported wholesale by 10-person shops where the ritual overhead exceeds the value. 2025 did not produce a widely-adopted middle-ground process.
Sixth, post-mortems still did not happen in most functions. Every practitioner piece that surveyed the issue in 2025 came back with the same answer: the losses that could teach a function never do, because the retrospective that would extract the lesson is never held.
Seventh, practitioner skepticism of generic AI in proposals intensified. Trident, 1up, and industry commentary across the year made the case that ChatGPT-style output does not meet the trust bar proposal readers apply. The grounded-AI wave rose on that skepticism.
These seven points are the scaffolding for the rest of the report.
2. The data we tracked
Four data streams anchored 2025.
RFP volume signals. SAM.gov, the federal-contracting procurement system, continued to publish active-opportunity data through the year. The public numbers, consistent with the trend from 2024, showed double-digit growth in total active opportunities year-over-year in several categories, particularly IT services, cybersecurity, and managed-services acquisitions. The exact figures shift as the system is refreshed; the directional signal is unambiguous. Federal procurement is not slowing. Enterprise and private-sector RFP volume is harder to track publicly, but practitioner surveys and procurement-technology vendor disclosures from 2025 align with the federal signal — more RFPs, more sellers per RFP, more formalization of what used to be handshake deals.
Proposal-team staffing. The APMP annual compensation survey (2025 release, covering 2024 data, with practitioner commentary through 2025) showed proposal management remained a growing but slow-growing field. Headcount in the proposal function has lagged the growth in proposal volume for several years. The consequence is a compression: fewer hours per bid, more bids per proposal manager, tighter deadlines, higher probability of failure modes like rushed drafting, missed addenda, and skipped post-mortems.
DDQ / security-questionnaire volume. Safe Security’s public reporting put the figure at 500-plus questionnaires per year for some large enterprises, with individual questionnaires running 200-400 questions. This is the most concrete anchor for the “DDQ fatigue” narrative that dominated the procurement-side conversation in 2025. The narrative is accurate. The public data supports it.
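The scale of that figure is easy to under-appreciate, so a hedged back-of-envelope helps. Using only the publicly reported numbers cited above (500 questionnaires per year, 200-400 questions each), the annual answer volume lands in six figures:

```python
# Back-of-envelope on the publicly reported Safe Security figures cited above.
# All inputs are the cited ranges; nothing here is proprietary data.
questionnaires_per_year = 500
questions_low, questions_high = 200, 400

answers_low = questionnaires_per_year * questions_low    # low end of the range
answers_high = questionnaires_per_year * questions_high  # high end of the range

print(f"{answers_low:,} to {answers_high:,} individual answers per year")
```

At even two minutes per answer, the low end works out to more than 3,300 staff-hours a year, which is the arithmetic behind the fatigue narrative.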
AI hallucination rates. Stanford HAI’s legal RAG paper continued to be the most-cited foundational research on the hallucination problem in 2025. The 17–33% hallucination range, from commercial-grade retrieval-augmented tools with purpose-built corpora, remains the benchmark against which any vendor claim of “grounded AI” should be read. The Hacker News threads on RAG and hallucination and reverse-RAG verification are the best public debates on what it takes to get those numbers down.
The through-line across the four data streams is this. Volume is up across RFPs and DDQs, staffing has not kept pace, the bottleneck is a mixture of workflow and talent constraints, and the AI wave claimed to be the answer while the underlying research showed the claim required verification no vendor was yet delivering at scale.
3. The vendors
The five incumbent / challenger vendors whose posture we tracked through the year moved in distinguishable ways. We covered each in more depth in the Wave 1 State of Proposal Tools report; the summary below captures 2025-specific shifts.
Loopio. Entered 2025 with a review base that had coalesced around the “overpriced document repository” framing — documented in autorfp.ai’s review synthesis. The company’s 2025 posture shifted toward content-freshness as a marketed feature, not a hidden maintenance burden. The shift is directionally correct. Whether the product has shifted in step is a question the review base will answer across 2026. Capterra review patterns continued to show the split the category has had for years — teams that actively maintain the library love the product, teams that do not accumulate frustration that the library’s rot becomes the product’s failure mode. The Loopio teardown we published this year walks through the specifics. Our 2025 read: Loopio is attempting a slope-of-enlightenment move. Whether the slope extends depends on proof the library-rot problem has been structurally addressed, not just named.
Responsive (formerly RFPIO). Responsive’s 2025 was quiet at the marketing layer and loud at the review layer. The G2 pros-and-cons aggregation captured the persistent complaints: search that “constantly misidentifies what I’m searching for,” a UX described as “sooooo clunky, impossible to locate exactly what you’re trying to find,” and the framing of the tool as a legacy CMS whose AI “pales in comparison to basic ad-hoc GenAI.” The vendor did not crash in 2025. It plateaued while its review base drifted negative. Our Responsive teardown covered the specifics. The pattern is the pattern of a category leader whose lead is running down rather than a vendor in outright decline.
AutogenAI. AutogenAI’s 2025 was the peak-descent pattern. The company’s own piece on AI hallucination risk remained one of the most-cited vendor pieces in the category. Their conference presence through mid-2025 was the loudest in the category. By late 2025 the differentiation of “AI drafting” had commoditized. The AutogenAI teardown earlier this year argued that the 2026 posture has to be grounded-retrieval-centric or the vendor cedes the ground it pioneered. We do not have public evidence yet on what the 2026 posture will be.
Qorus (QorusDocs). Qorus remained in the trough. Capterra reviews through the year continued to name speed issues, a 10-pursuit dashboard cap that limits the team’s situational awareness, and content-search relevance problems. Our read: the product has a real footprint, the customer base is real, and the public-review surface keeps underperforming the market’s moving expectations. 2026 will be a referendum on whether Qorus’s roadmap is closing the gap.
Upland Qvidian. Qvidian is the vendor least changed by 2025. The G2 review pattern — UI “could be more modern,” AI that is inadequate, performance that is slow, price that is high — is durable, consistent, and not moving. Upland’s portfolio strategy keeps Qvidian alive without the investment that would move the product off the plateau. Plateau forever is a position. Some buyers are fine with it.
The adjacent class. A handful of generalist-first and AI-first tools continued to pressure the bottom of the market — the 1up class of tools, whose 2025 critique of RFP software is the cleanest statement of the generalist position we have seen. Whether any of them graduates to the enterprise tier in 2026 is open. Our guess — supported in section 8 — is that exactly one will.
Pricing posture across the category in 2025. The single most consistent pattern across the incumbent class was defensive pricing. None of the incumbent vendors published meaningful price reductions, and none published material new premium tiers worth buying up into. The practical 2025 procurement conversation, in the feedback we saw from buyers evaluating the category, was negotiated discounts on existing pricing structures — not re-architected pricing that reflected the AI-drafting features being added. This is the posture of a category whose vendors are trying to hold ground rather than expand it. The pricing narrative will shift in 2026 if grounded-retrieval features force a per-consumption or per-verification line item; we do not yet see that shift in the public pricing data.
What did not happen in the vendor layer. No incumbent in the category was acquired in 2025, to our public knowledge. No IPO landed. No meaningfully new entrant broke into the enterprise segment. The category is in a holding pattern that feels different from the normal rhythm of a SaaS market: the incumbents are defending, the challengers are waiting for a trust-bar inflection, and the buyer side is pressuring both with increased volume without increased budget. Holding patterns in technology markets rarely hold for two years running. 2026 is the likely break.
4. The shift to grounded
2025 was the year “AI with citations” became table stakes. Every vendor with a credible proposal-software offering shipped a citation panel, a provenance display, or a grounded-retrieval marketing narrative. The category converged on the word “grounded” faster than it converged on what grounded actually meant.
The gap between citations and verification is the defining technical gap of the year. A citation points at a source. Verification confirms that the claim is in fact supported by the source the citation points at. Stanford HAI’s legal RAG research makes the point explicitly: commercial tools with real retrieval systems still hallucinate at 17–33% rates, and citation rendering at the UI level does not mean the claim under the citation is verified.
The practitioner community was honest about this. The Hacker News thread on reverse-RAG — Mayo Clinic’s per-claim verification approach — is a long argument in the comments about whether span-level verification is economically viable at enterprise scale. The consensus is not “yes it works,” it is “yes it works, and the cost is material, and if your application cannot absorb the cost you will ship tools that look grounded without being grounded.” Proposal software sits in that awkward middle — the cost of verification per draft is within reach, but most vendors have not built it yet.
What changed in 2025: the serious vendors started to talk about verification as a separate step from retrieval. What did not change: most vendor implementations stop at retrieval. A UI panel showing which chunk a claim cites does not verify the chunk supports the claim. The 2026 question — which the predictions in section 8 return to — is whether the vendor class that ships real verification separates from the class that ships citation-rendering alone. We think it will.
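To make the citation-vs-verification distinction concrete, here is a minimal sketch of a post-retrieval verification pass. The function names and the lexical-overlap heuristic are illustrative assumptions, not any vendor's actual implementation; a production verifier would run an entailment model per claim, which is exactly where the per-claim cost debated in the reverse-RAG thread comes from.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    cited_chunk: str  # the retrieved passage the UI renders a citation for

def is_supported(claim: Claim, min_overlap: float = 0.6) -> bool:
    """Crude stand-in for a per-claim support check: what fraction of the
    claim's content words actually appear in the cited chunk. A real system
    would use an NLI/entailment model here; the point is that this is a
    separate step after retrieval, not a property of rendering a citation."""
    claim_words = {w.lower().strip(".,") for w in claim.text.split() if len(w) > 3}
    chunk_words = {w.lower().strip(".,") for w in claim.cited_chunk.split()}
    if not claim_words:
        return True
    return len(claim_words & chunk_words) / len(claim_words) >= min_overlap

def flag_ungrounded(claims: list[Claim]) -> list[Claim]:
    """Return the claims a human must rewrite or re-source before the draft ships."""
    return [c for c in claims if not is_supported(c)]
```

The design point the sketch makes: a claim can carry a rendered citation and still fail `is_supported`, and that failure is the 17-33% gap the marketing copy elides.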
Our own position is documented in the grounded retrieval pillar from earlier this year. We ship a verification step downstream of retrieval, flag ungrounded spans, and treat span-level support as a first-class output of the drafting pipeline. The year-one regressions post two days ago is honest about where that has broken and where it held up. The broader category will catch up to this architecture over the next 18-24 months. The ones that do not will differentiate on price, not on trust — which is a difficult position to hold once the trust-bar tools mature.
A separate observation on AutogenAI’s hallucination piece — the piece itself is a candid vendor statement of the three failure modes that matter most in proposal AI: invented case studies, incorrect compliance claims, and fabricated statistics. The failure modes are real. The question is whether retrieval alone is sufficient defense against them. We think not. Case studies are particularly vulnerable because they are the exact content type where the buyer is least able to verify independently, which makes them the exact content type where hallucination is most damaging if it ships. A grounded-AI system that does not explicitly defend against case-study invention — by restricting the drafter from generating case-study narratives except from tagged, human-approved source blocks — is leaving the failure mode wide open. We have built toward that defense; most vendors have not published architectures that address it.
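The defense described above reduces to a hard gate in the drafting pipeline. The block schema and tag names below are hypothetical illustrations of the pattern, not our published data model: the drafter only ever sees case-study material a human has approved, so it cannot invent one.

```python
from dataclasses import dataclass, field

@dataclass
class KBBlock:
    text: str
    tags: set = field(default_factory=set)
    human_approved: bool = False

def case_study_sources(kb: list[KBBlock]) -> list[KBBlock]:
    """Only tagged, human-approved blocks are eligible as case-study source
    material. If this returns an empty list, the drafter should decline the
    case-study section rather than generate one from parametric memory."""
    return [b for b in kb if "case-study" in b.tags and b.human_approved]
```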
The broader shift in 2025 around grounded AI produced two secondary effects worth naming. First, the procurement-side conversation about AI shifted from “are you using AI” to “how are you using AI and what are the controls.” Buyer RFPs increasingly include questions about AI-drafting disclosure, prompt auditability, and hallucination-defense processes. Vendors without clear answers to those questions started losing late-stage evaluations that they would have won a year earlier. Second, the security-questionnaire category absorbed the shift fastest. DDQs now regularly include sections asking vendors to disclose whether AI drafted any portion of the response and what human review was applied — a meta-question that makes the grounded-vs-ungrounded distinction consequential at the contract layer, not just at the draft quality layer. Arphie’s framing captured the operational dimension of this shift.
5. The 5-year SME bottleneck — still there
Qorus published the 48% figure as a five-year trend: 48% of proposal teams cite SME collaboration as their top challenge, and the number has not moved across five years of surveys. 2025 did not move it. We expect the next release will not move it either.
Why it does not move is the heart of the problem. The best engineers are the busiest engineers. Proposal work competes for their time against billable work, against product deadlines, against incident response. The economics of the SME’s day are against responding to proposal questions, and the tooling interventions of the last five years — ticketing systems, Slack integrations, draft-packet generators — have improved the mechanics without changing the underlying economics.
Quilt’s bottleneck analysis captured the cost dimension: sales engineers spend 100 to 300 hours per RFP response, pulling some of the most expensive talent in the organization away from discovery and demos to write about their own product. The SME bottleneck is the direct consequence.
Lohfeld Consulting’s 2025 piece diagnosed the symptomatic layer: proposal managers spend more time chasing SME responses than building strategy. The chasing is visible in the manager’s day. The underlying economics are the reason chasing is the manager’s day.
What might move it in 2026: three candidates. First, grounded AI that drafts from an SME-approved KB reduces the SME’s load to review rather than write, which is a different cost profile. Whether teams adopt the pattern widely enough to show up in next year’s survey is an open question — we covered the design space in SME collaboration reconsidered. Second, organizational shifts that treat proposal contribution as part of the SME’s quota (rather than on top of it) — but these are rare and hard to replicate. Third, the maturation of async-first draft-packet tooling that removes the synchronous interview bottleneck entirely.
Our prediction is that the 48% number drops to something like 42-45% in 2026 if one or two of those conditions land for a larger share of the practitioner base. The 5-year stability of the number suggests we should be cautious about predicting bigger moves.
6. What didn’t change
Beyond the SME number, a lot did not move in 2025.
Procurement vocabulary. The words buyers write in RFPs remained the same as they were in 2022. “Offeror shall provide,” “describe in detail,” “the vendor will demonstrate” — the vocabulary is frozen. VisibleThread’s government proposal writing piece made the point that even experienced contractors struggle with the checklists inside every RFP because the vocabulary is precise and the penalties for mis-parsing are real.
Color-team discipline outside large shops. Shipley’s color-team process — pink, red, gold, white — remained the canonical reference and remained mismatched with the needs of 10-person proposal shops. Shipley’s own piece on color teams acknowledged the review process is “as painful as they can be for writers to hear, and for Proposal Managers to coordinate.” Bid Lab’s piece makes the stronger point: imported wholesale, the process adds meetings to a function that needs fewer meetings. 2025 did not produce a right-sized middle-ground review ritual that broke into the mainstream. Our bet is that 2026 or 2027 will — the pressure is mounting — but 2025 did not.
Post-mortems. Leulu & Co’s piece on post-mortems captured the pattern: the debrief ends, the document is published, people move on. Nobody asks at sprint planning what happened to last month’s actions. Lessons do not embed in the next bid. The same mistakes recur. 2025 did not change this. The Saturday retro ritual we published is our attempt at a low-overhead intervention; practitioner adoption is slow.
KBs rotted. The 2022-2025 pattern continued. Sparrow Genie described content libraries full of outdated Google Docs and PDFs written for other verticals. Shelf named the consequence: outdated KB content actively undermines user trust. Proposal-specific KBs, where the user is often an external evaluator, pay a higher trust cost for rot than internal-facing KBs do. Vendor tooling in 2025 began to tackle freshness at the metadata level; the cultural and staffing interventions that actually keep a KB fresh did not get easier.
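A metadata-level freshness signal can be as simple as exponential decay on a block's last-reviewed date. A minimal sketch follows; the half-life constant and function names are assumptions for illustration, and note that a signal like this addresses the detection half of the rot problem, not the staffing half.

```python
from datetime import date

def freshness_score(last_reviewed: date, today: date, half_life_days: int = 180) -> float:
    """Exponential-decay freshness: 1.0 the day a block is reviewed, 0.5 after
    one half-life, approaching 0 as the block rots. half_life_days is a tuning
    assumption, not a published constant."""
    age_days = (today - last_reviewed).days
    return 0.5 ** (age_days / half_life_days)

def needs_review(last_reviewed: date, today: date, threshold: float = 0.5) -> bool:
    # Surface the block to a maintainer once the score falls below threshold,
    # i.e. once the block is more than one half-life old.
    return freshness_score(last_reviewed, today) < threshold
```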
Win themes stayed generic. PropLibrary’s piece on the “swap test” — if you can swap competitor names with yours and the theme still reads, the theme is too generic — applied as well at the end of 2025 as it did at the beginning. Most proposals shipped in the year had themes that would have passed a competitor-brochure Turing test. The evaluators noticed; the win rates on generic themes continued to underperform specific ones.
Buyer-side RFP quality did not improve. Fairmarkit’s analysis of buyer pain points held across 2025. Operational teams continue to draft RFPs that read like wish lists — every feature the buyer has heard of, bundled into a single procurement without internal prioritization. Vendors respond by inflating price or promising features they cannot deliver; the buyer’s evaluation panel then distrusts every response. The cycle is structural and did not improve in 2025. The category that could fix it — procurement-tooling for the buyer side — saw incremental product improvement without category-level progress. An opening remains for a serious buyer-side toolchain that pressures the demand side of the market to write better inbound. No vendor has credibly taken that opportunity yet.
Color-team vocabulary did not expand. Shipley’s pink/red/gold/white color set remains the category’s shared vocabulary, and 2025 did not introduce a meaningfully new review framework that teams outside federal shops adopted at scale. Practitioner writing on “right-sized review” continued without producing a named alternative that caught. The category does not yet have the equivalent of a “pink team for 10-person shops” that teams can adopt by name. This is an open writing opportunity for the practitioner community in 2026.
7. What we shipped at PursuitAgent
An honest ledger for the year, tied to the shipped-* posts on the blog.
We shipped content-block versioning (post), diff views on answer blocks (post), per-block permissions (post), and bulk edit (post). The theme of the KB-layer work was making the library’s freshness a visible, manipulable property of the tool rather than an invisible background burden.
We shipped diagram extraction (post), multi-document RFP ingest (post), a DDQ classifier (post), and compliance-matrix autogeneration (post). The theme of the intake work was collapsing the three-day manual extraction step that historically opens every RFP response cycle.
We shipped freshness scores on blocks (post), freshness-alerting (post), and the citation-verify button (post). The theme was addressing the trust gap that Stanford HAI’s research and the category’s broader commentary named as the central AI-proposal problem.
We shipped the SME-ticket SLA tracker (post) and the win/loss pair capture mechanism (post). Both are compounding-mechanism features — the former attacks the SME-bottleneck cost, the latter builds the closed-loop write-back that the compounding pillar two weeks ago argued is the product’s entire edge.
We shipped the quarterly eval dashboard (post) and the question-router v2 (post) and the answer-block inheritance model (post). Plus a long tail of smaller infrastructure posts that did not get formal launch announcements because they were build-log notes.
What we did not ship: the federal SAM.gov native integration, the team-level win-rate reporting as we described it in January, and a self-serve onboarding path. Bo’s year-one letter covered the misses publicly. The lesson is that roadmap commitments made in January do not all survive contact with the year; the work is in being explicit about the ones that did not.
The compounding architecture — KB write-back, theme tagging, style inheritance, account-graph capture intelligence — is the throughline. We are one year into a multi-year architectural commitment. The next year’s report will grade how it held up at higher scale.
8. Predictions for 2026
Three predictions, each with the evidence behind it.
Prediction 1: At least one incumbent proposal-software vendor will see a material public shock in 2026 — a CEO change, a pricing reset, a large public customer departure, or a PE-driven restructuring. The evidence: review sentiment on Responsive and Loopio has drifted negative consistently through 2024-2025. Pricing posture on both has tightened. The category is consolidating on a grounded-retrieval narrative that the incumbents have not fully shipped against. Consolidated pressure of this magnitude historically produces a visible event within 18 months. We will not name which vendor; we think the odds are distributed across two, not locked on one.
Prediction 2: The “AI with citations” claim without verification will become publicly untenable as a competitive position in 2026. The evidence: Stanford HAI’s 17–33% hallucination range is becoming common knowledge in procurement-side conversations. The Hacker News threads on RAG limits circulate in the buyer community. The practitioner pushback against generic AI — Trident’s piece is the clearest statement — is accumulating. The buyers’ technical staff increasingly ask verification questions that citation-rendering alone does not answer. The vendors who ship verification will differentiate; the ones who ship citations-only will lose the trust bar battle.
Prediction 3: The 48% SME-collaboration bottleneck number will move in 2026 — modestly. Our estimate is a drop to the 42-45% range. The evidence: the grounded-drafting tools that move the SME’s load from author to reviewer are maturing. The SME collaboration reconsidered post laid out the design space; vendors are catching up to it. The 5-year flat trend will probably not break cleanly, but the aggregate tooling improvements should be enough to bend the headline number a few points. If the number does not move in 2026, the premise that the bottleneck is primarily a workflow problem — rather than an organizational-incentive problem — is weaker than the industry has assumed, and the 2027 conversation has to restart from a different frame.
One prediction we do not make: that the generalist AI-assistant wave displaces the incumbents at the enterprise layer. It will erode the bottom of the market, where pricing has always been the softest, but the trust bar at the enterprise layer — especially in regulated verticals — will continue to favor purpose-built proposal tools. The generalist wave wins on a segment. The segment is real. It is not the whole market.
A further observation we are confident enough to flag but not to elevate to a prediction. Color-team review methodology will, we think, produce its first widely-adopted right-sized framework in 2026 or 2027, driven not by a Shipley update but by a practitioner-community publication. The pressure for a middle-ground review ritual has been building for several years — Bid Lab’s argument captured the need — and the proposal-craft community’s online output in 2025 had the ingredients of a named alternative. Somebody will publish it. When they do, adoption will be faster than most category observers expect, because the demand has been pent up.
A second observation, similarly flagged. Post-mortem discipline — the year’s most-discussed missing ritual — will see a specific tooling play in 2026. Not from the incumbent vendors (who have six years of not shipping post-mortem tools) but from an adjacent category, possibly from the revenue-operations or sales-enablement toolchain. When it ships, it will catch on fast, because Leulu’s diagnosis is correct and the problem has been unsolved long enough that any credible tool will find adopters.
9. Sources and methodology
This report cites twenty-six public sources plus PursuitAgent’s own blog corpus. The twenty-six external sources are listed in the frontmatter of this post and include G2, Capterra, Stanford HAI, Hacker News, vendor blogs (Loopio, Qorus, AutogenAI), practitioner blogs (Lohfeld, Shipley, Bid Lab, VisibleThread, Trident, Leulu, Sparrow Genie, Shelf, Fairmarkit, Quilt, 1up, Arphie, PropLibrary, Safe Security), and peer-reviewed research (Stanford).
The methodology is the same as the Wave 1 State of Proposal Tools report. We do not cite proprietary data. We do not cite vendor-supplied metrics that cannot be publicly verified. We do not cite anonymous customer claims. Every competitive claim links to a public artifact that produced it.
Wave 2 of the State of Proposal Tools report lands in February 2026. It will grade the three predictions above against their six-month check-in, update the vendor map with any public moves that landed in Q1, and extend the data analysis with new data streams that become available. The annual Year in RFPs 2026 report will land December 31, 2026.
If you read this to the end, thank you. The blog operates on the thesis that a category’s best writing is the writing that treats the reader as a serious operator looking for signal, not a target for a marketing funnel. We try to earn that trust one post at a time. The counter — counter to the category’s default, counter to the “comprehensive guide” conventions of SEO-optimized content farms, counter to the temptation to gild a slow-moving category as a transformational one — is a long project. 2025 was year one of it. 2026 is the second draft.
One last note on methodology before we close. The corpus of posts on this blog now runs past 200 entries across eight pillars. Cross-references inside this report, and inside the other pillars we have published, are deliberate — the eight-stage pipeline, the DDQ response playbook, the grounded retrieval pillar, the vendor teardowns, and this year-end synthesis are meant to compose rather than stand alone. If a reader uses a single post for a single operational decision, it has earned its place. If a team reads four of them together and derives a process change, that is the longer game. The blog is structured for the longer game; we think the longer game compounds in the same way the product is designed to.
See you in January.
Sources
- 1. Capterra — Loopio reviews
- 2. Autorfp — Loopio reviews summary
- 3. G2 — Responsive (formerly RFPIO)
- 4. G2 — Responsive pros and cons
- 5. Capterra — Qorus for proposal management
- 6. G2 — Upland Qvidian
- 7. Loopio — Best DDQ software
- 8. Safe Security — Vendor security questionnaire best practices
- 9. AutogenAI — AI hallucination, how proposal teams reduce risk
- 10. Stanford HAI — Hallucination-Free? Assessing Legal RAG Tools
- 11. Hacker News — RAG and the hallucination question
- 12. Hacker News — Mayo Clinic reverse-RAG
- 13. VisibleThread — Government proposal writing
- 14. Qorus — Winning proposals: stop wrangling SMEs
- 15. Quilt — How to identify bottlenecks in your RFP process
- 16. 1up — The problem with RFP software
- 17. PropLibrary — Proposal win themes
- 18. Lohfeld Consulting — How to fix the proposal processes holding you back
- 19. Leulu & Co — The proposal post-mortem
- 20. Shipley — Color team reviews
- 21. Bid Lab — Color team reviews explained
- 22. Arphie — How AI is transforming security questionnaire processes
- 23. Sparrow Genie — RFP content library best practices
- 24. Shelf — Outdated knowledge base
- 25. Fairmarkit — 4 RFP pain points
- 26. Trident Proposals — Why ChatGPT and AI should not write your next proposal
- 27. PursuitAgent — Grounded retrieval pillar
- 28. PursuitAgent — DDQ response playbook
- 29. PursuitAgent — State of Proposal Tools, Wave 1 2025
- 30. PursuitAgent — The 8-stage RFP pipeline
- 31. PursuitAgent — Loopio teardown
- 32. PursuitAgent — Responsive teardown
- 33. PursuitAgent — AutogenAI teardown
- 34. PursuitAgent — SME collaboration reconsidered
- 35. PursuitAgent — The Saturday retro ritual
See grounded retrieval in the product.
Start a trial workspace and watch PursuitAgent draft cited answers from the documents you provide.