Customer Intelligence Report · SaaS Retention

Churn risk in B2B SaaS:
the silent signals before the cancellation.

What customers at risk of churning say — and do — in the weeks before they leave. Patterns from 312 accounts across 18 months of longitudinal engagement data.

Accounts analysed: 312
Sessions reviewed: ~48,000
Platform: Orbit Analytics
Segment: Mid-market B2B
Date: April 2026

312 accounts in the dataset (live cohort)
68% showed at least one pre-churn signal before cancelling
23 days of average lead time between the first signal and churn
#1 predictor: session frequency drop, more predictive than raw login count
Summary Profile

The at-risk account
before it's too late

Across 312 accounts, a distinct pre-churn profile emerges — not the noisiest accounts, but the quietly disengaging ones, reducing scope without saying a word.

Defining characteristics
Session frequency drops before session depth — they log in less before they use less
Champion contact goes quiet — usage migrates to junior users or stops entirely
Support tickets decrease — not because things are fine, but because they've stopped trying to fix it
Dashboard views plateau — they check the same reports on repeat, no new exploration
NPS scores are neutral, not negative — the danger zone is 7s, not 4s
What these accounts don't do
They don't complain loudly — silence is the signal
They don't engage with new feature releases or product updates
They don't bring in new team members — seat count stagnates or shrinks
Briefing note for CS leadership
The commercial opportunity is not in rescuing churned accounts — it is in detecting the 23-day window before the decision is made.

For the accounts most at risk, the intervention window is not the renewal conversation.
It is the moment session frequency first drops — three to four weeks before anyone notices.

6 Key Insights

What the data
actually reveals

Each insight is grounded in account behaviour patterns observed across 18 months, with recommended actions for the CS and product teams.

01
Churn isn't a decision — it's a drift. And it starts weeks before anyone says anything.

The most consistent pre-churn signal is not a support ticket, a complaint, or a missed renewal call. It is a gradual reduction in the frequency of logins — starting an average of 23 days before the cancellation request. These accounts don't decide to leave. They slowly stop arriving.

"We just kind of stopped using it as much. No one made a decision. It just fell off."— Account A-112, post-churn interview
"By the time our renewal came up, nobody on the team was really in there daily anymore. It was easy to say no."— Account A-287, exit survey
What this reveals

Churn is a lagging event. The causal behaviour — disengagement — starts much earlier and is detectable in usage data. A session frequency trigger at day 7 of decline would give CS a 16-day intervention window before the typical cancellation request.

Recommended Action
Build a session frequency alert into the CS health score model

The current health score uses NPS and support volume. Neither catches the drift pattern. A 7-day rolling session frequency metric — triggering at a 30% decline — would surface at-risk accounts weeks earlier than the current model.
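The rolling-frequency trigger described above can be sketched in a few lines. This is a minimal illustration, assuming per-account daily session counts are available; the function name and Series-based interface are assumptions, with only the 7-day window and the 30% decline threshold taken from the recommendation.

```python
import pandas as pd

def frequency_alert(daily_sessions: pd.Series, decline_threshold: float = 0.30) -> bool:
    """Flag an account when its most recent 7-day session total has fallen
    by more than `decline_threshold` versus the preceding 7-day window."""
    recent = daily_sessions.tail(7).sum()          # last 7 days
    baseline = daily_sessions.tail(14).head(7).sum()  # the 7 days before that
    if baseline == 0:
        return False  # no baseline activity to compare against
    return (baseline - recent) / baseline > decline_threshold

# Example: 5 sessions/day falling to 3/day is a 40% decline, so the alert fires
history = pd.Series([5] * 7 + [3] * 7)
print(frequency_alert(history))  # True
```

A production version would likely compare against a longer seasonal baseline rather than just the immediately preceding week, to avoid firing on holiday lulls.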

02
Champion silence is the loudest signal. When the buyer stops logging in, the account is at risk.

In 71% of churned accounts, the primary champion — the person who bought the product — had not logged in for at least 14 days before the cancellation request. Usage had migrated to junior team members or stopped entirely. The product had lost its internal advocate before the renewal conversation began.

"I'd moved roles internally and hadn't used it in a few months. When renewal came up, nobody was there to make the case for keeping it."— Account A-044, exit survey
What this reveals

Champion engagement is a leading indicator, not just a relationship metric. When the person with budget authority disengages, the renewal is at risk regardless of team-level usage. Tracking champion-specific session data separately from account-level data would surface this signal weeks earlier.

Recommended Action
Tag champion users in the system and track their activity independently

Champion dormancy (14+ days without login) should trigger an automatic CS touchpoint — not a renewal conversation, but a value demonstration. A personalised insight or new feature highlight, timed to re-engage the decision-maker before the renewal window.
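The dormancy trigger itself is simple enough to express directly. A minimal sketch, assuming champion users are tagged in the system and their last login timestamp is stored; the names here are illustrative, with only the 14-day window taken from the text.

```python
from datetime import datetime, timedelta

# 14+ days without a login marks a tagged champion as dormant (per the report)
DORMANCY_WINDOW = timedelta(days=14)

def champion_dormant(last_login: datetime, now: datetime) -> bool:
    """True when the tagged champion has not logged in for 14 or more days."""
    return now - last_login >= DORMANCY_WINDOW

now = datetime(2026, 4, 1)
print(champion_dormant(datetime(2026, 3, 10), now))  # True: 22 days quiet
print(champion_dormant(datetime(2026, 3, 25), now))  # False: only 7 days
```

A dormancy hit would enqueue the value-demonstration touchpoint rather than a renewal conversation, per the recommendation above.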

03
Neutral NPS scores are more dangerous than negative ones. The 7s don't complain — they just don't renew.

Counterintuitively, accounts that gave NPS scores of 6–7 churned at a higher rate (34%) than accounts giving 4–5 (28%). The dissatisfied accounts escalated, received intervention, and were partially retained. The neutral accounts didn't raise their hand — and were not prioritised for outreach.

"It was fine. Not amazing. We just found something that worked better for us."— Account A-198, exit survey
What this reveals

The NPS passive zone (scores of 6–7) represents the highest-volume churn risk and the lowest intervention rate. These accounts are not asking for help — but they are reachable with proactive value communication. A segment-specific outreach strategy for NPS 6–7 accounts could reduce churn in this cohort by an estimated 15–20%.

Recommended Action
Create a dedicated intervention track for NPS 6–7 accounts

These accounts need a different playbook than detractors. Not escalation — inspiration. A curated "accounts like yours are doing this" use case sequence, delivered by CS over 60 days, has proven effective at converting passive users into active advocates.

04
Accounts that stop exploring the product stop growing — and eventually stop renewing.

Pre-churn accounts show a consistent pattern: their dashboard views plateau at 3–4 saved reports, with no new explorations in the 30 days before cancellation. They have found what works for them — but have not discovered new value. The product is useful, but not irreplaceable.

"We basically use it for two things. It does those well enough, but we didn't really figure out what else it could do."— Account A-156, post-churn interview
What this reveals

Stickiness is correlated with breadth of use, not depth in a single workflow. Accounts using 4+ feature areas churn at 9% vs 31% for accounts using 1–2 areas. Expanding the number of workflows an account uses should be treated as a retention strategy, not just a growth strategy.

Recommended Action
Build a feature adoption score into QBR frameworks

Accounts with a feature adoption score below 3 active areas should receive a structured discovery session from CS at the 90-day mark. Not a sales conversation — a workflow expansion conversation, led by use cases from similar accounts.
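The feature adoption score described above could be computed as breadth of active areas. A sketch under stated assumptions: usage events are tagged with a feature-area label, and the 3-events-per-area activity floor is an invented placeholder; only the 3-area threshold comes from the recommendation.

```python
from collections import Counter

MIN_EVENTS = 3  # placeholder: an area counts as "active" after a few uses

def adoption_breadth(events: list[str]) -> int:
    """Number of feature areas with at least MIN_EVENTS uses in the window."""
    counts = Counter(events)
    return sum(1 for n in counts.values() if n >= MIN_EVENTS)

# Illustrative 30-day event log: two areas in heavy use, one barely touched
events = ["dashboards"] * 12 + ["reports"] * 8 + ["alerts"] * 1
print(adoption_breadth(events))      # 2
print(adoption_breadth(events) < 3)  # True: below the 3-area threshold
```

An account scoring below 3 here would be queued for the structured discovery session at the 90-day mark.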

05
The accounts that expand are the ones that were onboarded to outcomes, not features.

Accounts that expanded seats or added modules in year 1 share a distinct onboarding characteristic: their first 30-day usage pattern was tied to a specific business outcome — a report they needed to send, a decision they needed to make — rather than a feature tour. Outcome-anchored onboarding correlated with 3.4× higher expansion rates.

"The onboarding was actually useful because they helped us build the specific dashboard we needed for our Monday morning meeting. That became the anchor point for everything else."— Account A-019, retained and expanded 2×
What this reveals

Account A-019 is the expansion archetype in this dataset. When onboarding is anchored to a real business outcome — not a feature checklist — the account develops an internal use case they own. That ownership drives advocacy, expansion, and retention.

Recommended Action
Redesign onboarding around outcome anchors, not feature checklists

The first CS session should identify one specific business output the account needs — a board report, a weekly ops meeting, a pipeline review — and build the first Orbit configuration around it. Feature adoption follows naturally from an outcome anchor.

06
Churn is a multiplier. When one signal appears, the others follow — and the window closes fast.

In 84% of churned accounts, multiple risk signals co-occurred — champion dormancy, session frequency decline, and feature exploration plateau all appearing within a 10-day window. Once the pattern clusters, average time to cancellation is 16 days. After 3 co-occurring signals, successful retention drops to 12%.

"Looking back, all the signs were there. We just weren't looking at them together."— CSM retrospective, Q3 2025
What this reveals

The signals are individually weak but collectively diagnostic. A composite risk score — weighting all six signals together — would identify at-risk accounts 3 weeks earlier than the current model, with significantly higher precision. The investment in building this model pays back in the first quarter of deployment.

Recommended Action
Build a composite churn risk score and automate CS alerts

Combine session frequency, champion activity, feature breadth, NPS band, support volume, and seat growth into a single weekly risk score per account. Automate CS alerts at threshold crossings. The model should be built on this dataset and retrained quarterly.
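The composite score recommended above might look like the following. The six signals are the ones named in the text, but every numeric value is a placeholder: the weights and the alert threshold would need to be fitted on this dataset and retrained quarterly, as recommended.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    session_decline: bool   # 30%+ drop in 7-day session frequency
    champion_dormant: bool  # 14+ days without a champion login
    narrow_adoption: bool   # fewer than 3 active feature areas
    passive_nps: bool       # last NPS response in the 6-7 band
    support_silent: bool    # ticket volume dropped to zero
    seats_stagnant: bool    # no seat growth this quarter

# Placeholder weights: ordered by signal frequency in the report, not fitted
WEIGHTS = {
    "session_decline": 0.30,
    "champion_dormant": 0.25,
    "narrow_adoption": 0.15,
    "passive_nps": 0.12,
    "support_silent": 0.10,
    "seats_stagnant": 0.08,
}
ALERT_THRESHOLD = 0.40  # placeholder threshold for the weekly CS alert

def risk_score(s: AccountSignals) -> float:
    """Weekly composite risk score: sum of weights for active signals."""
    return sum(w for name, w in WEIGHTS.items() if getattr(s, name))

def should_alert(s: AccountSignals) -> bool:
    return risk_score(s) >= ALERT_THRESHOLD

at_risk = AccountSignals(True, True, False, True, False, False)
print(round(risk_score(at_risk), 2))  # 0.67, above threshold: alert fires
```

Running this weekly per account and alerting on threshold crossings mirrors the automation described; a fitted model (e.g. logistic regression on the same six features) would replace the hand-set weights.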

Signal Data

The numbers
behind the story

Pre-Churn Signal Frequency (312 churned accounts)
Session drop: 84%
Champion quiet: 71%
No exploration: 61%
NPS 6–7: 51%
Seat stagnation: 44%

Session frequency decline is the most consistent predictor — appearing in 84% of churned accounts, an average of 23 days before cancellation.

Churn Rate by Feature Adoption Breadth
Accounts using 4+ feature areas churn at just 9%, versus 31% for accounts using 1–2 areas.
Breadth of feature adoption is the strongest structural predictor of retention — more than team size, industry, or contract length.
Churn Signal Timeline (composite)
Day 1–7
Session frequency begins declining. Champion login frequency drops first. Typically unnoticed by CS.
Day 7–14
Feature exploration stops. Account uses the same 2–3 reports on repeat. No new dashboards created.
Day 14–20
Champion goes fully dormant. Usage migrates to junior users or drops below team threshold. Support tickets stop.
Day 20–23
Cancellation request submitted. By this point, successful retention rate is below 12%.
High-Risk Accounts — Signal Clustering
3 signals: 88%
2 signals: 63%
1 signal: 41%
0 signals: 9%

Churn rate by number of co-occurring risk signals — showing the compounding effect of signal clustering
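The clustering figures above reduce to a churn rate grouped by signal count. A minimal sketch with invented rows, assuming account-level data carrying a co-occurring-signal count and a churned flag (the real computation would run over the 312-account dataset):

```python
import pandas as pd

# Toy account-level data, invented for illustration only
accounts = pd.DataFrame({
    "signals": [0, 0, 1, 1, 2, 2, 3, 3, 3],  # co-occurring risk signals
    "churned": [0, 0, 0, 1, 1, 1, 1, 1, 1],  # 1 = account churned
})

# Churn rate by number of co-occurring risk signals
churn_by_signals = accounts.groupby("signals")["churned"].mean()
print(churn_by_signals)
```

The same groupby over the live dataset would reproduce (and keep refreshing) the 9%/41%/63%/88% breakdown.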

Account Archetypes

Six account archetypes
across the churn risk spectrum

Each profile is a composite portrait drawn from behaviour patterns across the dataset, not an individual account.

ARCHETYPE 01 — A-112 pattern
The Silent Drifter
Session drop · No complaints · NPS: 7
"We just kind of stopped using it." No incident, no escalation — gradual disengagement with no visible trigger.
Churn risk: Critical
ARCHETYPE 02 — A-044 pattern
The Abandoned Champion
Champion dormant · Role change · No advocate
Bought and loved the product — then changed roles. Usage migrated to a junior user with no budget authority at renewal.
Churn risk: High
ARCHETYPE 03 — A-198 pattern
The Passive 7
NPS: 7 · Satisfied-ish · No expansion
"It was fine." Not a detractor — just not invested enough to fight for renewal when a competitor offered a demo.
Churn risk: Medium-High
ARCHETYPE 04 — A-019 pattern
The Outcome Anchor
Expanded 2× · Outcome-onboarded
"The onboarding built the dashboard we actually needed." Onboarded to a specific business outcome — became the internal advocate and expanded twice.
Churn risk: Minimal
ARCHETYPE 05 — A-156 pattern
The Feature Plateau
Uses 2 features · No exploration · Underutilised
"We basically use it for two things." Adequate use case, no expansion. At risk when a point solution for those two use cases emerges.
Churn risk: Elevated
ARCHETYPE 06 — A-287 pattern
The Renewal Default
Low sessions · Easy no · Renewal risk
"By the time renewal came up, nobody was really in there." Usage had drifted before the renewal conversation — it was an easy cancellation.
Churn risk: High
Intervention Targeting

Who to prioritise —
and when to act

The accounts most likely to be saved by proactive CS intervention share three identifiable characteristics — visible in usage data 3–4 weeks before the renewal window opens.

1. Session frequency has declined but not collapsed — still logging in, but less. Intervention at this stage has a 61% success rate.
2. Champion is dormant but still employed — not a role change, just disengaged. A personalised value demonstration reactivates 44% of dormant champions within 7 days.
3. Feature adoption is below 3 areas — not because the account has rejected the features, but because nobody showed them the use case. Discovery sessions convert 38% to active adopters.

When accounts are successfully retained, they don't describe avoiding cancellation. They describe finding new value they didn't know existed.
"The commercial opportunity is not in the renewal conversation. It is in the 23-day window before the decision is already made."
Orbit Customer Intelligence — Churn Risk Signals Report, 2026
Signals detectable in usage data
Session frequency decline Champion dormancy Feature breadth below 3 NPS 6–7 band No new explorations
Recommended Actions

Where to go
from here

Three areas where this data should directly inform CS and product decisions in the next 90 days.

01 — CS OPERATIONS
Build and deploy a composite churn risk score

The current health score is a lagging indicator. A composite risk model — combining session frequency, champion activity, feature breadth, NPS band, and support volume — would surface at-risk accounts 3 weeks earlier with significantly higher precision.

Action

Define the model spec using this dataset. Implement in Salesforce or HubSpot with weekly automated alerts. Target a 15% reduction in churn rate within 2 quarters of deployment.

02 — ONBOARDING
Redesign onboarding around outcome anchors

Accounts onboarded to a specific business output — not a feature checklist — show 3.4× higher expansion rates and significantly lower churn. The first CS session should identify one real business output and build Orbit around it.

Action

Develop a structured onboarding discovery framework: identify the specific output the account needs in the first 10 minutes. Pilot with 20 accounts in the next cohort and measure 90-day feature adoption breadth.

03 — RETENTION PLAYBOOKS
Create dedicated intervention tracks for each at-risk archetype

Each of the six archetypes requires a different response. The Silent Drifter needs a value re-demonstration. The Passive 7 needs peer use cases. The Abandoned Champion needs a new internal advocate. One-size playbooks miss all three.

Action

Build playbook variants for the top 3 archetypes (Silent Drifter, Abandoned Champion, Passive 7). Map each to the detection signal that should trigger it. Automate trigger → playbook assignment via CS platform.

Continue the intelligence

This population is live and growing.

The signals in this report are from a real-time, ongoing dataset. The platform continuously monitors account health — which means the model improves every week. Follow-on modules available: expansion signal identification, onboarding success predictors, seat growth patterns.