1. Overview & philosophy
BenchmarkHQ aggregates SaaS benchmark data from 15+ published industry research sources and normalizes it into a single, filterable dataset segmented by ARR band, GTM motion, ACV, and vertical.
We are not a primary data collector. We do not survey companies directly. Every data point in our system traces back to a named external source — you can always see which sources inform each metric. Our value is in synthesis: taking data that exists across dozens of reports, normalizing the definitions, resolving conflicts, and making it filterable in ways no single source allows.
Core principle: We never hide methodology behind "proprietary model" language. Every calculation, every inclusion rule, and every reconciliation decision is documented here. If something is unclear, email us.
We target the $1–20M ARR B2B SaaS window because most free public benchmarks (VC-published, analyst reports) target $10M–$100M+ ARR companies. The dynamics at $1–20M ARR are meaningfully different — higher churn rates, longer CAC payback periods, lower NRR — and conflating them with mature-company benchmarks leads to bad target-setting.
ARR segmentation: Public previews show 3 rolled-up ARR bands. Members unlock 5 finer-grained peer groups across the $1–20M range.
2. Data sources
The following sources contribute data to BenchmarkHQ reports. Each source has a different focus, methodology, and company sample — which is why synthesis is necessary.
Benchmarkit
Annual survey-based benchmark report. Strong coverage of $1M–$50M ARR SaaS. Primary metrics: NRR, churn, gross margin, CAC payback.
Survey · Annual
SaaS Capital
Annual SaaS benchmarking surveys on retention and growth. Strong on NRR and growth benchmarks across ARR bands.
Survey · Annual
KeyBanc Capital Markets
Annual Private SaaS Company Survey. Strong on sales efficiency, go-to-market metrics, and ARR growth rates.
Survey · Annual
Maxio (formerly SaaSOptics + Chargify)
Billing data aggregates covering thousands of B2B SaaS companies. Strong on MRR growth, churn, and expansion revenue.
Billing data · Quarterly
Recurly
Subscription billing data. Strong on churn, payment failure, and involuntary churn benchmarks.
Billing data · Quarterly
OpenView Partners
Annual "SaaS Benchmarks" report with strong PLG and bottom-up SaaS coverage. Focus on product-led growth metrics.
Survey · Annual
Pacific Crest / KPMG
Annual SaaS survey. Strong on headcount ratios, burn multiples, and sales efficiency for growth-stage companies.
Survey · Annual
hiBob
HR and headcount benchmarks for SaaS companies. Covers hiring ratios, headcount per ARR, and org structure.
HR data · Annual
Lighter Capital
Revenue-based financing firm with benchmark data skewed toward bootstrapped and lightly funded SaaS ($1–5M and $5–10M ARR).
Portfolio data · Annual
Bessemer Cloud Index
Public cloud company benchmarks. Used primarily for gross margin and Rule of 40 calibration at higher ARR bands.
Public co. data · Quarterly
Meritech SaaS Comps
Public SaaS company financial data. Used for calibration and trend analysis on mature-stage benchmarks.
Public co. data · Quarterly
ChartMogul Benchmarks
SaaS metrics from ChartMogul's platform. Strong on ARR growth, MRR, and expansion/contraction breakdown.
Platform data · Quarterly
TomTunguz.com (Redpoint)
Independent analysis from Tomasz Tunguz. Used for trend context, growth rate analysis, and sales efficiency commentary.
Analysis · Irregular
a16z SaaS Metrics
Andreessen Horowitz published benchmark frameworks and data points. Used for metric definition calibration.
Research · Irregular
Stripe Atlas Reports
SMB-skewed SaaS data from Stripe's startup ecosystem. Strong on early-stage ($1–5M ARR) payment and revenue patterns.
Platform data · Annual
+ 3 additional sources
Three additional proprietary and research sources are incorporated on a selective basis for specific metrics and verticals.
Varies
Source attribution: Every data point in our exports includes a "Source(s)" column indicating which sources informed that benchmark. You can always see where numbers come from.
3. Data inclusion criteria
Not all data from source reports is included in BenchmarkHQ. We apply the following inclusion rules to ensure data quality and relevance.
Company type
Only B2B SaaS companies are included. Consumer SaaS, marketplace businesses, hardware/software hybrids, and transactional businesses (even if software-based) are excluded. When source reports don't segment by business model, we apply our best estimate of the B2B SaaS portion based on the source's described methodology.
ARR band eligibility
Data points are only included in an ARR band if the source explicitly segments by ARR range or provides sufficient disaggregation to infer band-level benchmarks. We do not extrapolate overall benchmarks (e.g., "all ARR ranges") into specific bands.
Sample size minimum
We require a minimum sample size of n ≥ 25 for a data point to be reported. Data points with n < 25 are suppressed and marked as "insufficient sample." We report sample sizes in all exports so you can weight data points appropriately.
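As a minimal sketch of this gate (the function name and return shape are illustrative, not our internal schema):

```python
MIN_SAMPLE = 25  # inclusion threshold: n >= 25 per data point

def report_value(value, n):
    """Return the benchmark value, or suppress it when the sample is too small."""
    if n < MIN_SAMPLE:
        return {"value": None, "n": n, "status": "insufficient sample"}
    return {"value": value, "n": n, "status": "ok"}
```

Because exports carry `n` alongside every value, suppressed points stay visible as labeled gaps rather than silently disappearing.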
Recency
For our quarterly reports, we include data published within the past 18 months. Older data is retained in our historical archive but not included in current benchmark calculations. This prevents stale data from diluting current benchmarks.
| Criterion | Rule | Rationale |
| --- | --- | --- |
| Business model | B2B SaaS only | Consumer and transactional metrics are not comparable |
| ARR band segmentation | Must be explicitly segmented or inferable | No extrapolation from aggregate data |
| Sample size | n ≥ 25 per data point | Suppresses high-variance, unrepresentative data |
| Data age | Published within 18 months | SaaS benchmarks shift meaningfully year-over-year |
| Geographic bias | US-centric; non-US data labeled | Geographic market affects CAC, pricing, and growth norms |
4. Metric definitions & formulas
Different sources define the same metric differently. Our definitions are documented below. When a source uses a different definition, we note how we adjusted its data to conform to our standard.
Net Revenue Retention (NRR)
NRR = (MRR at end of period from cohort) / (MRR at start of period from cohort)
Includes expansion, contraction, and churn from an existing customer cohort. Excludes new logo revenue. Measured over a 12-month rolling period. Some sources call this "Net Dollar Retention (NDR)" — these are equivalent. Churned customers remain in the denominator (the same full-cohort starting base used for gross revenue retention); their ending MRR simply contributes zero. This is consistent with most institutional definitions.
Sources: Benchmarkit, SaaS Capital, KeyBanc, OpenView
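A sketch of the computation under this definition (the dict-of-MRR input shape is an assumption for illustration):

```python
def nrr(start_mrr, end_mrr):
    """
    Net Revenue Retention over a 12-month window.

    start_mrr: {customer_id: MRR at period start} for the existing cohort
    end_mrr:   {customer_id: MRR at period end}; churned customers may
               simply be absent and contribute zero to the numerator
    New logos signed during the period must not appear in either mapping.
    """
    denominator = sum(start_mrr.values())  # full starting cohort, churn included
    numerator = sum(end_mrr.get(cid, 0.0) for cid in start_mrr)
    return numerator / denominator
```

With a cohort starting at $300 of MRR where one customer expands to $150, one contracts to $80, and one churns, NRR = 230 / 300 ≈ 0.77.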
CAC Payback Period
CAC Payback = (Total S&M spend in period) / (New ARR added in period × Gross Margin %)
Expressed in months. Uses gross-margin-adjusted new ARR to account for the cost of serving new revenue. Some sources report "blended" CAC payback (including expansion), others use new-logo-only. We standardize on new-logo-only CAC payback, which is the more common institutional definition. Because blended CAC payback runs shorter than new-logo-only, sources using blended CAC are adjusted upward by a correction factor derived from the source's own expansion/new ARR ratio where available.
Sources: KeyBanc, OpenView, Pacific Crest
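In code, with an explicit period length so the ratio comes out in months (parameter names and the 12-month default are illustrative assumptions):

```python
def cac_payback_months(sm_spend, new_arr, gross_margin, period_months=12):
    """
    Months of gross-margin-adjusted new ARR needed to recover S&M spend.
    sm_spend and new_arr cover the same period; gross_margin is a fraction.
    """
    margin_adjusted_new_arr = new_arr * gross_margin
    return sm_spend / margin_adjusted_new_arr * period_months
```

For example, $1.2M of S&M producing $1.0M of new ARR at 80% gross margin pays back in 18 months.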
Gross Margin
Gross Margin % = (Revenue − COGS) / Revenue × 100
COGS includes hosting/infrastructure, customer success headcount (when directly attributed to service delivery), and third-party software costs. It excludes sales, marketing, R&D, and G&A. Capitalized software development costs are excluded from COGS per standard SaaS accounting. "Pure SaaS" gross margins (no professional services, no hardware) are reported separately from blended margins when available.
Sources: Benchmarkit, Meritech, Bessemer, Pacific Crest
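The COGS composition above can be made concrete with a small sketch (the line-item names are illustrative):

```python
# Items that belong in COGS under the definition above.
COGS_KEYS = {"hosting", "delivery_cs_headcount", "third_party_software"}

def gross_margin_pct(revenue, expenses):
    """Gross margin %, counting only COGS line items; S&M, R&D, G&A, and
    capitalized development are ignored even if present in `expenses`."""
    cogs = sum(v for k, v in expenses.items() if k in COGS_KEYS)
    return (revenue - cogs) / revenue * 100.0
```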
Rule of 40
Rule of 40 = ARR Growth Rate (%) + FCF Margin (%)
Primary (BenchmarkHQ standardized): ARR Growth Rate (%) + FCF Margin (%). FCF margin uses free cash flow (operating cash flow minus capex) divided by revenue. ARR growth rate is calculated year-over-year.
Alternative (source-specific variant): Some sources substitute EBITDA margin for FCF margin — when a source makes this substitution, we note it explicitly. EBITDA-based Rule of 40 typically runs 5–10 points higher than FCF-based for growth-stage companies, so the two variants are not directly comparable without labeling.
Sources: SaaS Capital, Benchmarkit, Bessemer
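Both variants side by side, so the labeling requirement is concrete (function names are illustrative):

```python
def rule_of_40_fcf(arr_growth_pct, operating_cash_flow, capex, revenue):
    """BenchmarkHQ standardized variant: YoY ARR growth % + FCF margin %."""
    fcf_margin_pct = (operating_cash_flow - capex) / revenue * 100.0
    return arr_growth_pct + fcf_margin_pct

def rule_of_40_ebitda(arr_growth_pct, ebitda, revenue):
    """Source-specific variant; typically runs higher than the FCF form,
    so the two must never be compared without labels."""
    return arr_growth_pct + ebitda / revenue * 100.0
```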
Logo Churn Rate
Annual Logo Churn = Customers churned in 12 months / Customers at start of period
Customer-count based (logos), not revenue-based. Expressed as an annual rate. Monthly churn rates from sources are annualized using the formula: Annual = 1 − (1 − Monthly)^12. Involuntary churn (failed payments) is included in some sources and excluded in others — we report each source's definition and note when involuntary churn is included.
Sources: Benchmarkit, Maxio, Recurly, ChartMogul
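The annualization step as code, matching the formula above:

```python
def annualize_churn(monthly_churn):
    """Compound a monthly logo churn rate: annual = 1 - (1 - monthly)^12."""
    return 1.0 - (1.0 - monthly_churn) ** 12
```

Compounding matters here: 2% monthly churn works out to roughly 21.5% annually, not the 24% a naive multiplication by 12 would suggest.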
Magic Number (Sales Efficiency)
Magic Number = (New ARR in quarter) / (S&M spend in prior quarter)
Measures how much new ARR is generated per dollar of sales and marketing spend. Values > 1.0 indicate strong sales efficiency. Annualized new ARR (quarterly new ARR × 4) is sometimes used instead of quarterly; when a source reports the annualized form, we divide by 4 to recover the quarterly-equivalent. S&M spend is the reported sales and marketing operating expense line, excluding capitalized commissions where relevant.
Sources: OpenView, Pacific Crest, KeyBanc
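A sketch of the quarterly computation and the annualized-source conversion described above (names are illustrative):

```python
def magic_number(new_arr_quarter, sm_spend_prior_quarter):
    """New ARR in the quarter per dollar of prior-quarter S&M spend."""
    return new_arr_quarter / sm_spend_prior_quarter

def magic_number_from_annualized(annualized_new_arr, sm_spend_prior_quarter):
    """Sources reporting annualized new ARR (quarterly delta x 4) are
    divided by 4 to recover the quarterly-equivalent."""
    return (annualized_new_arr / 4.0) / sm_spend_prior_quarter
```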
Burn Multiple
Burn Multiple = Net Cash Burned / Net New ARR
Measures how much cash is burned to generate each dollar of net new ARR. Lower is better. Values < 1.0 are considered efficient for growth-stage companies. This metric has gained prominence post-2022 as a key investor efficiency signal. Note that burn multiple is highly sensitive to growth rate: at the same level of burn, a faster-growing company posts a lower burn multiple, and a growth slowdown can spike the metric even when spending is unchanged.
Sources: SaaS Capital, Benchmarkit, a16z
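The computation itself is trivial, but worth pinning down the sign convention (net burn is a positive number here):

```python
def burn_multiple(net_cash_burned, net_new_arr):
    """Dollars of cash burned per dollar of net new ARR; lower is better.
    net_cash_burned is positive when the company is burning cash."""
    return net_cash_burned / net_new_arr
```

For example, $2M burned to add $4M of net new ARR gives a burn multiple of 0.5, which sits in the efficient range described above.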
Core formulas are shown here; additional metric notes appear in the glossary, report footnotes, and export metadata.
5. Normalization & reconciliation
When multiple sources report the same metric for the same ARR band, they often produce different results. This section explains how we handle conflicts.
Weighted averaging
When sources agree within ±5 percentage points (or ±5 units for non-percentage metrics), we report a weighted average. Weights are assigned based on: (1) sample size (larger sample → higher weight), (2) recency (newer data → higher weight), and (3) methodology similarity (how closely the source's definition matches our standard).
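A sketch of the weighting scheme; the specific factors (sample size × recency × methodology score) and their multiplicative combination are illustrative assumptions, since the exact weights involve per-metric judgment:

```python
def weighted_benchmark(points):
    """
    points: list of dicts with 'value', 'n' (sample size), and 'recency' /
    'methodology' scores in (0, 1]. Larger samples, fresher data, and a
    closer definitional match all pull the blended value toward that source.
    """
    weights = [p["n"] * p["recency"] * p["methodology"] for p in points]
    total = sum(weights)
    return sum(w * p["value"] for w, p in zip(weights, points)) / total
```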
Conflict resolution
When sources disagree by more than ±5 percentage points, we apply the following resolution hierarchy:
- Definition mismatch check — If the conflict stems from definitional differences (e.g., one source includes involuntary churn, another doesn't), we adjust the outlier source to match our standard definition and re-evaluate.
- Sample bias check — If one source has a materially different company profile (e.g., heavy enterprise bias vs. SMB-heavy), we apply a correction or flag the source as a separate data point.
- Credibility weighting — If unresolvable, we weight toward the source with the larger sample size and more transparent methodology. We document which source was de-weighted and why in the report footnotes.
- Disclosure — If sources remain in conflict after the above steps, we report both values with a note explaining the discrepancy rather than presenting a false consensus.
Example reconciliation: For CAC Payback at $1–5M ARR, KeyBanc reported 15 months and OpenView reported 24 months in the same period. Investigating, we found KeyBanc's sample skewed toward PLG/product-led companies with naturally lower CAC, while OpenView's sample was sales-led. We report both segmented results (PLG: 14 mo, Sales-led: 23 mo) rather than blending them.
Percentile reporting
We report p25 (bottom quartile), p50 (median), and p75 (top quartile) rather than averages. This matters because SaaS metric distributions are typically right-skewed — the mean is pulled up by outliers and misrepresents the typical company's experience. The median is a more reliable "what does a normal company look like" signal.
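A sketch using the standard library's inclusive quantile convention (one of several interpolation conventions; individual sources may use others):

```python
from statistics import mean, quantiles

def summarize(values):
    """p25 / p50 / p75 plus the mean, to show why the median is reported."""
    p25, p50, p75 = quantiles(values, n=4, method="inclusive")
    return {"p25": p25, "p50": p50, "p75": p75, "mean": mean(values)}
```

On a right-skewed sample such as [0.9, 1.0, 1.0, 1.1, 1.2, 3.5], the single outlier drags the mean above even the top quartile, while the median stays put.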
6. Update cadence
BenchmarkHQ publishes a new benchmark report each quarter. Here's how the update cycle works:
| Timeline | Activity |
| --- | --- |
| Quarter-end (Day 0) | New source data reviewed for inclusion; quarterly sources incorporated as they publish |
| Days 1–14 | New data points normalized, reconciled, and incorporated into the database |
| Days 15–21 | Report drafted, reviewed, and formatted; metric changes from prior quarter flagged |
| By Day 21 | Report published; members notified by email |
| Ongoing corrections / database updates | Database remains live and searchable; interim corrections applied with change log |
Commitment: Quarterly reports are published within 2–3 weeks of quarter-end. If a report is delayed, members are notified 48 hours in advance with an estimated date.
Annual sources (survey-based reports) are typically published between January and April each year. When a major annual source publishes new data, we issue a supplemental update to our database, noting which benchmarks changed and by how much.
7. Limitations & caveats
We believe transparency about limitations is more valuable than projecting false confidence. Read these carefully before using the data to make decisions.
We are not a primary data source
BenchmarkHQ does not collect data directly from companies. We depend on the accuracy and representativeness of our source reports. If a source has a selection bias (e.g., companies that use a particular billing platform tend to have higher NRR), that bias may be present in our data.
Survivorship bias
Most benchmark sources survey or measure companies that are still operating. Companies that churned or shut down between data collection and publication are typically excluded. This means benchmarks likely overstate how well "typical" companies do, especially at early ARR stages where failure rates are higher.
Geographic concentration
The majority of our data sources are US-centric. Non-US SaaS companies face different pricing environments, CAC structures, and growth dynamics. We label data as "US-weighted" where relevant and note when a source has meaningful international representation.
Definition drift
Metric definitions evolve over time. What counts as "CAC" has shifted as attribution models have matured. We document our current definitions but acknowledge that year-over-year comparisons may reflect definitional changes as much as actual performance shifts.
Not financial advice
Benchmark data describes what companies have achieved historically, under varying market conditions. It is not a guarantee of what's achievable or appropriate for your company. Use it as directional context, not as a hard target. Discuss with your investors and advisors before setting formal goals.
Questions or corrections? If you find a methodology error, a definition that seems off, or a data point that doesn't match your experience, email us at support@benchmarkhq.co. We take accuracy seriously and will investigate promptly.
8. Source taxonomy
Sources in the BenchmarkHQ dataset fall into three distinct categories. Understanding the difference matters for interpreting metric confidence and freshness. Every benchmark displayed in reports and exports is tagged with its source type.
Tier 1 — Current recurring sources
These sources publish on a regular cadence (annual or quarterly) and are actively updated in our database each cycle. They form the backbone of every benchmark report.
Benchmarkit
Annual survey. Primary metrics: NRR, GRR, logo churn, gross margin, CAC payback, Magic Number. Updated Q4 each year.
Survey · Annual · Active
SaaS Capital
Annual benchmark report. Primary metrics: NRR, ARR growth by band, efficiency metrics. Updated Q1 each year.
Survey · Annual · Active
KeyBanc Capital Markets (KBCM)
Annual Private SaaS Survey. Primary metrics: ARR growth, CAC, sales efficiency, gross margin. Published each spring.
Survey · Annual · Active
OpenView Partners
Annual SaaS Benchmarks report. Strong PLG and product-led coverage. Published each fall.
Survey · Annual · Active
Maxio Institute
Quarterly billing data aggregates from 1,000+ SaaS companies. Primary metrics: MRR growth, churn, expansion. Updated quarterly.
Billing data · Quarterly · Active
Recurly Research
Quarterly subscription billing benchmarks. Primary metrics: churn, payment recovery, involuntary churn.
Billing data · Quarterly · Active
Tier 2 — Historical benchmark series
These sources have published in the past and contributed data to earlier reports. Their data is preserved in our historical trend database to enable QoQ and YoY comparisons, but they are not actively updated in new cycles. Historical data from these sources is labeled with the year it was collected.
Pacific Crest / KPMG
Historical annual SaaS surveys (2014–2022). Used for trend analysis. Not actively updated post-KPMG acquisition restructuring.
Survey · Historical · 2014–2022
Bessemer Venture Partners (State of the Cloud)
Historical cloud benchmarks and growth framework data. Used for long-term trend analysis on Rule of 40, growth frameworks.
VC report · Historical · 2016–2023
Meritech Capital
Historical public SaaS benchmarks used for cross-referencing private vs. public company efficiency metrics.
Public data · Historical · 2018–2023
a16z (Andreessen Horowitz)
Historical SaaS efficiency benchmarks. Burn Multiple framework originated here. Used for historical burn/growth context.
VC research · Historical · 2019–2022
Tier 3 — Reference & comparison sources
These sources are used for spot-checks, cross-referencing, and contextual notes in reports — but are not primary data contributors to calculated benchmarks. They appear in methodology notes and source disclosures when relevant.
Lighter Capital
Revenue-based financing portfolio benchmarks. Useful for bootstrapped/lightly-funded $1–5M ARR segment cross-reference.
Portfolio data · Reference
hiBob
HR/headcount benchmarks. Referenced in reports where headcount-per-ARR context is relevant.
HR data · Reference
ChartMogul
MRR analytics platform aggregate data. Referenced for churn cross-checks and MRR growth context.
Platform data · Reference
Why this matters: When you see a source name like "Pacific Crest" in an export, it refers to historical data (2014–2022) used for trend analysis — not a current benchmark. Current benchmarks are always Tier 1 sources. Tier labels appear in the full metadata column of CSV exports.