Churn risk in B2B SaaS.
The silent signals
before the cancellation.
What customers at risk of churning say — and do — in the weeks before they leave. Patterns from 312 accounts across 18 months of longitudinal engagement data.
The at-risk account
before it's too late
Across 312 accounts, a distinct pre-churn profile emerges — not the noisiest accounts, but the quietly disengaging ones, reducing their usage without saying a word.
For the accounts most at risk, the intervention window is not the renewal conversation.
It is the moment session frequency first drops — three to four weeks before anyone notices.
What the data
actually reveals
Each insight is grounded in account behaviour patterns across 18 months. Click to expand, explore, and see recommended actions for the CS and product teams.
The most consistent pre-churn signal is not a support ticket, a complaint, or a missed renewal call. It is a gradual reduction in the frequency of logins — starting an average of 23 days before the cancellation request. These accounts don't decide to leave. They slowly stop arriving.
Churn is a lagging event. The causal behaviour — disengagement — starts much earlier and is detectable in usage data. A session frequency trigger at day 7 of decline would give CS a 16-day intervention window before the typical cancellation request.
The current health score uses NPS and support volume. Neither catches the drift pattern. A 7-day rolling session frequency metric — triggering at a 30% decline — would surface at-risk accounts weeks earlier than the current model.
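A 7-day rolling metric of this kind is straightforward to compute from raw login timestamps. A minimal sketch, assuming login events are available as a list of dates per account; the 28-day baseline window and the function names are illustrative, not from the report:

```python
from datetime import date, timedelta

def rolling_session_counts(login_dates, window_days=7):
    """One value per calendar day: sessions in the trailing 7-day window."""
    if not login_dates:
        return []
    start, end = min(login_dates), max(login_dates)
    counts = []
    for i in range((end - start).days + 1):
        day = start + timedelta(days=i)
        window_start = day - timedelta(days=window_days - 1)
        counts.append(sum(window_start <= d <= day for d in login_dates))
    return counts

def decline_alert(counts, baseline_days=28, threshold=0.30):
    """Flag when the latest window sits 30%+ below the trailing baseline.

    baseline_days is an assumption: the report specifies the 30% trigger
    but not the baseline period it is measured against.
    """
    if len(counts) < baseline_days + 1:
        return False  # not enough history to judge a decline
    baseline = sum(counts[-baseline_days - 1:-1]) / baseline_days
    return baseline > 0 and counts[-1] <= baseline * (1 - threshold)
```

In practice the same logic is a few lines of `pandas` rolling-window code; the plain-Python version just makes the trigger condition explicit.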
In 71% of churned accounts, the primary champion — the person who bought the product — had not logged in for at least 14 days before the cancellation request. Usage had migrated to junior team members or stopped entirely. The product had lost its internal advocate before the renewal conversation began.
Champion engagement is a leading indicator, not just a relationship metric. When the person with budget authority disengages, the renewal is at risk regardless of team-level usage. Tracking champion-specific session data separately from account-level data would surface this signal weeks earlier.
Champion dormancy (14+ days without login) should trigger an automatic CS touchpoint — not a renewal conversation, but a value demonstration. A personalised insight or new feature highlight, timed to re-engage the decision-maker before the renewal window.
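The dormancy trigger itself is a one-line check. The sketch below, with hypothetical account ids, shows how the 14-day rule could drive a weekly alert list (the touchpoint content, a value demonstration rather than a renewal conversation, stays with CS):

```python
from datetime import date

DORMANCY_DAYS = 14  # threshold from the report: 14+ days without a champion login

def champion_dormant(last_champion_login, today, threshold_days=DORMANCY_DAYS):
    """True when the champion has not logged in for threshold_days or more."""
    return (today - last_champion_login).days >= threshold_days

def dormancy_alerts(last_logins, today):
    """last_logins: dict of account_id -> most recent champion login date.

    Returns the account ids that should trigger a CS touchpoint.
    """
    return [acct for acct, last in last_logins.items()
            if champion_dormant(last, today)]
```

This assumes champion sessions are tracked separately from account-level sessions, which is exactly the instrumentation change the insight above argues for.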
Counterintuitively, accounts that gave NPS scores of 6–7 churned at a higher rate (34%) than accounts giving 4–5 (28%). The dissatisfied accounts escalated, received intervention, and were partially retained. The neutral accounts didn't raise their hand — and were not prioritised for outreach.
The neutral NPS band (6–7) represents the highest-volume churn risk and the lowest intervention rate. These accounts are not asking for help — but they are reachable with proactive value communication. A segment-specific outreach strategy for NPS 6–7 accounts could reduce churn in this cohort by an estimated 15–20%.
These accounts need a different playbook than detractors. Not escalation — inspiration. A curated "accounts like yours are doing this" use case sequence, delivered by CS over 60 days, has proven effective at converting passive users into active advocates.
Pre-churn accounts show a consistent pattern: their dashboard views plateau at 3–4 saved reports, with no new explorations in the 30 days before cancellation. They have found what works for them — but have not discovered new value. The product is useful, but not irreplaceable.
Stickiness is correlated with breadth of use, not depth in a single workflow. Accounts using 4+ feature areas churn at 9% vs 31% for accounts using 1–2 areas. Expanding the number of workflows an account uses should be treated as a retention strategy, not just a growth strategy.
Accounts with a feature adoption score below 3 active areas should receive a structured discovery session from CS at the 90-day mark. Not a sales conversation — a workflow expansion conversation, led by use cases from similar accounts.
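Feature breadth is cheap to derive from raw usage events. A sketch, assuming events arrive as (account, feature area) pairs; the account ids and area names are invented for illustration:

```python
def feature_breadth(events):
    """events: iterable of (account_id, feature_area) usage events.

    Returns account_id -> number of distinct feature areas used.
    """
    areas_by_account = {}
    for account, area in events:
        areas_by_account.setdefault(account, set()).add(area)
    return {acct: len(areas) for acct, areas in areas_by_account.items()}

def discovery_candidates(events, threshold=3):
    """Accounts below the 3-area adoption threshold (the report's 90-day rule)."""
    return sorted(a for a, n in feature_breadth(events).items() if n < threshold)
```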
Accounts that expanded seats or added modules in year 1 share a distinct onboarding characteristic: their first 30-day usage pattern was tied to a specific business outcome — a report they needed to send, a decision they needed to make — rather than a feature tour. Outcome-anchored onboarding correlated with 3.4× higher expansion rates.
Account A-019 is the expansion archetype in this dataset. When onboarding is anchored to a real business outcome — not a feature checklist — the account develops an internal use case they own. That ownership drives advocacy, expansion, and retention.
The first CS session should identify one specific business output the account needs — a board report, a weekly ops meeting, a pipeline review — and build the first Orbit configuration around it. Feature adoption follows naturally from an outcome anchor.
In 84% of churned accounts, multiple risk signals co-occurred — champion dormancy, session frequency decline, and feature exploration plateau all appearing within a 10-day window. Once the pattern clusters, average time to cancellation is 16 days. After 3 co-occurring signals, successful retention drops to 12%.
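Detecting the clustering pattern amounts to checking whether three or more signal first-seen dates fall within any 10-day span. A sketch, assuming each signal's first-observed date is tracked per account; the signal names are illustrative:

```python
from datetime import date, timedelta

CLUSTER_WINDOW = timedelta(days=10)  # co-occurrence window from the report

def signals_cluster(signal_dates, min_signals=3, window=CLUSTER_WINDOW):
    """signal_dates: dict of signal name -> date first observed (None if unseen).

    True if min_signals distinct signals fired within a single window.
    """
    fired = sorted(d for d in signal_dates.values() if d is not None)
    # Slide over the sorted dates and test each run of min_signals signals.
    for i in range(len(fired) - min_signals + 1):
        if fired[i + min_signals - 1] - fired[i] <= window:
            return True
    return False
```

Given the 16-day average from cluster to cancellation, an alert at the moment this returns True is effectively the last reliable intervention point.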
The signals are individually weak but collectively diagnostic. A composite risk score — weighting all six signals together — would identify at-risk accounts 3 weeks earlier than the current model, with significantly higher precision. The investment in building this model pays back in the first quarter of deployment.
Combine session frequency, champion activity, feature breadth, NPS band, support volume, and seat growth into a single weekly risk score per account. Automate CS alerts at threshold crossings. The model should be built on this dataset and retrained quarterly.
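A sketch of what such a composite could look like in code. The weights and alert threshold below are placeholders to be fitted on the 312-account dataset, not values taken from the report:

```python
# Hypothetical weights over the six signals named above; the report does not
# specify them, and they would be learned from the dataset and retrained quarterly.
WEIGHTS = {
    "session_decline":    0.30,
    "champion_dormant":   0.25,
    "low_feature_breadth": 0.15,
    "nps_passive":        0.10,
    "support_volume":     0.10,
    "no_seat_growth":     0.10,
}

ALERT_THRESHOLD = 0.5  # illustrative cut-off for a weekly CS alert

def weekly_risk_score(signals):
    """signals: dict mapping each signal name to a 0.0-1.0 severity.

    Returns the weighted composite score (0.0-1.0 when weights sum to 1).
    """
    return sum(WEIGHTS[name] * min(max(value, 0.0), 1.0)
               for name, value in signals.items())

def cs_alert(signals):
    """True when the composite crosses the alert threshold."""
    return weekly_risk_score(signals) >= ALERT_THRESHOLD
```

In production this would run as a weekly batch job per account, writing scores back to the CRM and firing alerts only on threshold crossings, not on every high score.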
The numbers
behind the story
Session frequency decline is the most consistent predictor — appearing in 84% of churned accounts, an average of 23 days before cancellation.
Churn rate by number of co-occurring risk signals — showing the compounding effect of signal clustering
Six account archetypes
across the churn risk spectrum
Click a profile to understand the archetype in depth. These are composite portraits drawn from behaviour patterns, not individual accounts.
Who to prioritise —
and when to act
The accounts most likely to be saved by proactive CS intervention share three identifiable characteristics — visible in usage data 3–4 weeks before the renewal window opens.
1. Session frequency has declined but not collapsed — still logging in, but less. Intervention at this stage has a 61% success rate.
2. Champion is dormant but still employed — not a role change, just disengaged. A personalised value demonstration reactivates 44% of dormant champions within 7 days.
3. Feature adoption is below 3 areas — not because they've rejected them, but because nobody showed them the use case. Discovery sessions convert 38% to active adopters.
→ When accounts are successfully retained, they don't describe avoiding cancellation. They describe finding new value they didn't know existed.
Where to go
from here
Three areas where this data should directly inform CS and product decisions in the next 90 days.
The current health score is a lagging indicator. A composite risk model — combining session frequency, champion activity, feature breadth, NPS band, and support volume — would surface at-risk accounts 3 weeks earlier with significantly higher precision.
Define the model spec using this dataset. Implement in Salesforce or HubSpot with weekly automated alerts. Target a 15% reduction in churn rate within 2 quarters of deployment.
Accounts onboarded to a specific business output — not a feature checklist — show 3.4× higher expansion rates and significantly lower churn. The first CS session should identify one real business output and build Orbit around it.
Develop a structured onboarding discovery framework: identify the specific output the account needs in the first 10 minutes. Pilot with 20 accounts in the next cohort and measure 90-day feature adoption breadth.
Each of the six archetypes requires a different response. The Silent Drifter needs a value re-demonstration. The Passive 7 needs peer use cases. The Abandoned Champion needs a new internal advocate. One-size playbooks miss all three.
Build playbook variants for the top 3 archetypes (Silent Drifter, Abandoned Champion, Passive 7). Map each to the detection signal that should trigger it. Automate trigger → playbook assignment via CS platform.
This population is live and growing.
The signals in this report are from a real-time, ongoing dataset. The platform continuously monitors account health — which means the model improves every week. Follow-on modules available: expansion signal identification, onboarding success predictors, seat growth patterns.