The Only 3 Questions You Need to Predict Churn (and Expansions)
Most revenue teams forecast the same way: they look at usage dashboards or CRM stages, check the sentiment, and make a gut call. Then they act surprised when a "healthy" account churns, or when a deal that looked solid falls apart at the last minute, or when an expansion opportunity appears out of nowhere three weeks before close.
The problem isn't that you're not paying attention. It's that you're paying attention to the wrong things.
Most teams default to usage data or deal stages, but these only tell you what has happened. You're not interrogating the things that predict what will happen, and that's what your boss cares about. Whether my team is calling new business, customer expansions, retention, or churn, the same thing is true: I only care about numbers I can validate.
Good forecasting is the difference between the good and the great.
A great forecast is built on 3 things:
Data. Objective data on the current situation.
A narrative that explains the data honestly.
Experience, not instinct. (Experience understands data; instinct often ignores it.)
The great are 6 months ahead. They know what’s likely to happen, and they’re working hard to make sure that the outcome they want is the one that comes to pass. Sure, “healthy” customers still churn, and deals still evaporate in the paper process, but both are significantly more likely if you’re not reading the room right. I’ve sat in a lot of churn reviews and even more closed/lost deal analysis sessions, and 9/10 times, an optimistic deal owner was ignoring a massive red flag. A red flag that, if addressed 6 months ago, would have shown us what to focus on to make a good outcome more likely.
Enter the 3 questions.
Ask them often and answer them honestly, and your forecasts will instantly improve. Whether you’re building an account expansion plan, a save plan, or forecasting new business, the 3 questions work because they point you to the problem areas. The underlying mechanics are simple: is the customer moving towards me, or away?
You need about five minutes per account, and the answers will change how you forecast. And, very likely, what your boss thinks of your forecast.
A quick note on scoring: you can use RAG (red/amber/green), you can use a 1-5 scale, you can use anything you like. In reality, there is only red or green. Stop making yourself feel better; you either have what you need or you have something to work on. That's it. Keep it simple.
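If it helps to make the red-or-green idea concrete, here is a minimal sketch. The field names and the example answers are purely illustrative, not from any real CRM schema:

```python
# Red-or-green scoring: you either have what you need or you have
# something to work on. No amber.

def score(has_what_you_need: bool) -> str:
    """Collapse any scale down to the only two states that matter."""
    return "green" if has_what_you_need else "red"

# An account scored on the three questions (illustrative answers):
account = {
    "who_cares": score(True),        # active champion + exec contact
    "how_deep": score(True),         # deeply integrated, high switching cost
    "whats_changing": score(False),  # momentum flat or negative
}

print(account["whats_changing"])  # "red": something to work on this week
```

Any red in the dictionary is a work item, not a shade of amber to hide behind.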
Question 1: Who Cares?
This is the single biggest predictor of blindside churn and stalled deals. Not usage. Not deal stage. Stakeholder health.
You need to know two things: do you have an active champion, and does someone at the executive level know you exist?
For CS: When both are in place, and both can articulate why they use your product and what it does for their business, you can forecast that renewal with confidence. When your champion is mid-level, a bit vague on the ROI, and you haven't spoken to anyone senior in three months, you've got a problem brewing. When your champion has left the business or changed roles, and you don't know who holds the budget? That's not amber. That's red.
For sales: This is your qualification foundation. A deal without a champion who can sell internally is a deal that stalls. A deal without access to the economic buyer is a deal you can't close. If you're forecasting a deal as "commit" but you can't name both people, you're forecasting based on hope.
Usage can be perfect. A demo can go brilliantly. But if no one with authority is engaged, you are one reorg, one budget review, one new CTO away from losing, and you'll never see it coming.
The trap on both sides is assuming that silence is stability. It isn't. It's a gap in your data.
Question 2: How Deep/How Complicated?
This is about switching cost. For existing customers, that means integration depth. For prospects, it means how entrenched the incumbent is, or how painful the switch might be.
For CS: At one end, you've got customers using a single, basic feature. No complex dependencies, could swap you out in a couple of weeks. At the other end, deeply integrated across multiple products, critical production dependencies, a migration that would take months and carry real risk. Most CS teams treat high embedding as automatically "safe." It's not, and that’s why this is question 2. A deeply embedded customer with no stakeholder relationship is one of your most dangerous accounts. They're running on inertia, not intent. And inertia runs out.
For sales: The same question works in reverse. How embedded is the prospect with their current solution? Low switching cost means your deal can move fast, but it also means they could leave you just as easily later. High switching cost means longer sales cycles, but if you win, you win properly. If your incumbent is "build", consider how complicated even the simplest solution can get at scale. Anything built must be supported and improved over time; this is ultimately why they're interested in what you're selling. What's the pain, and can it be quantified (if you want to get all MEDDICC-y)? An interested customer who doesn't really have a complicated pain probably won't get a budget, regardless of how much they love your solution.
For forecasting, think of embedding depth as your risk floor. It tells you how much effort it takes for things to change. But it doesn't tell you whether they will. That's the next question.
Question 3: What's Changing?
This is where most scoring models completely fall over, because they measure state, not direction.
You need to look for momentum. Is it positive, negative, or flat?
Positive momentum looks like: exploring new capabilities, asking about additional products, increasing development velocity, launching new initiatives where you could add value, growing the team that works with you. And critically, the customer or prospect can articulate why they want to work with you and what the value is. That's the foundation. Without it, even apparently positive signals might just be organic drift rather than intentional investment.
Negative momentum looks like: asking about pricing or competitors, declining engagement, slower response rates, frustrated support interactions, stalled projects, budget concerns. Any one of these on its own might mean nothing. Two or three together? That's a pattern.
Flat is fine for short-term retention forecasting, but flat should never be treated as "good." An account that isn't growing should be considered at risk, because the market around it isn't standing still, even if it is. A deal that isn't progressing is a deal that's dying. Neutral momentum is a warning, not a comfort.
This is where you can add usage or consumption. It's intentionally last because it's the least interesting. The two biggest churns of my career were customers with astronomical usage and double-figure month-on-month growth. No champions, no exec contact, no one cared. High usage doesn't stop a customer from shifting that usage to a competitor.
For forecasting: This is your leading indicator. Questions 1 and 2 give you the structural picture. Question 3 tells you which direction it's moving. Strong stakeholder relationship plus positive momentum? That's a renewal or a deal you can commit with confidence. Weak stakeholders plus negative momentum? That's a risk you need to action now, not at quarter end.
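The decision logic above can be sketched as a tiny function. The labels and the order of the rules are my illustrative reading of the article's combinations, not a definitive model:

```python
# Hypothetical sketch: turning the three answers into a forecast call.
# Inputs and output labels are illustrative assumptions.

def forecast_call(stakeholders_green: bool, deeply_embedded: bool,
                  momentum: str) -> str:
    """momentum is one of 'positive', 'flat', 'negative'."""
    if stakeholders_green and momentum == "positive":
        return "commit"        # a renewal or deal you can stand behind
    if not stakeholders_green and momentum == "negative":
        return "action now"    # work this risk today, not at quarter end
    if deeply_embedded and not stakeholders_green:
        return "silent risk"   # running on inertia, not intent
    return "watch"             # flat is a warning, not a comfort

print(forecast_call(True, True, "positive"))   # commit
print(forecast_call(False, True, "flat"))      # silent risk
```

The point is not the code; it's that the call falls out mechanically once the three questions are answered honestly.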
Why This Works Better Than Health Scores and Deal Stages
I've seen (and built) plenty of models. Weighted metrics, colour-coded dashboards, stage-based probabilities. They all share the same flaw: they aggregate away the signal.
A customer with great usage and decent sentiment but no executive relationship gets a "green" health score. Then they churn. A deal at "verbal commit" stage with no champion and a competitor circling gets a 90% probability. Then it slips.
The model said they were fine. The model was wrong.
These three questions force you to look at the things that actually drive outcomes: relationships, stickiness, and direction. They don't average them into a number. They sit alongside each other so you can see the full picture.
And because they're simple, you can actually do them. Every quarter for account management. Every week for pipeline. For every account. In fifteen minutes or less.
Making It Work
Here's the practical bit.
If you answer these three questions regularly for every account or deal you own, you will know:
Which renewals are safe, and which need intervention now. Not "soon." Now. Because by the time you get to renewal with a stakeholder gap or negative momentum, you've already lost the initiative.
Which deals are real, and which are wishful thinking. Because "they loved the demo" is not a forecast input. "I have an engaged champion and the economic buyer has committed budget" is.
Where expansion is genuinely likely, because the relationship exists and the momentum is there, versus where you're hoping for expansion because the usage looks good on paper.
Which accounts are sitting in your most dangerous position: deeply embedded, zero relationship, running on autopilot. These are your silent risks, and they're the ones that will bite you hardest because nobody flagged them.
The goal isn't perfection. You're making a call based on the data you have. But "the data you have" should include more than a dashboard metric and a feeling. It should include who cares about you, how hard it would be to leave (or how hard it is to get in), and whether things are getting better or worse. I guarantee this is what your boss cares about. That you have the data, you’re using it to quantify the value and pain, and that you’re moving in the right direction.
That's a forecast. Everything else is a guess.