HubSpot Lead Scoring Changed in 2025. Here's What Still Doesn't Work.
Feb 22, 2026

HubSpot overhauled its lead scoring system in 2025. The old single-score model was replaced with three distinct score types: Fit, Engagement, and Combined. It was the most significant change to HubSpot scoring in years — and for good reason. The old approach conflated two fundamentally different questions: does this lead match our ICP, and is this lead actively engaged?
The new model is better. But better doesn't mean solved.
What Actually Changed
Before 2025, HubSpot lead scoring gave you one number. You built rules that mixed fit criteria (industry, company size, job title) with engagement criteria (page views, email opens, form submissions) into a single score. A VP at a Series B SaaS company who visited your pricing page and a marketing intern who downloaded three ebooks could end up with the same score.
The 2025 overhaul split scoring into three types:
Fit Score measures how closely a contact matches your ideal customer profile — industry, company size, job title, revenue, geography, technology stack. This score doesn't care whether the lead has ever visited your website.
Engagement Score tracks behavioral signals — email interactions, page visits, form submissions, meeting bookings. This score doesn't care whether the lead is a good fit.
Combined Score merges both into a single number for teams that need one value to route leads and trigger workflows.
This is a real improvement. Separating fit from engagement means you can finally see whether your pipeline problem is a targeting problem (low-fit, high-engagement leads) or an outreach problem (high-fit, low-engagement leads). That diagnostic clarity didn't exist before.
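To make that diagnostic concrete, here's a minimal Python sketch of the quadrant view, assuming you've exported contacts with their fit and engagement scores. The property names and the threshold of 60 are illustrative assumptions, not HubSpot field names or defaults.

```python
# A minimal sketch of the fit/engagement diagnostic described above.
# Property names ("fit_score", "engagement_score") and the threshold of 60
# are illustrative assumptions, not HubSpot's actual fields or defaults.

def diagnose(contacts, threshold=60):
    """Bucket contacts into quadrants to separate targeting problems
    from outreach problems."""
    buckets = {"act_now": [], "outreach_gap": [], "targeting_gap": [], "ignore": []}
    for c in contacts:
        good_fit = c["fit_score"] >= threshold
        engaged = c["engagement_score"] >= threshold
        if good_fit and engaged:
            buckets["act_now"].append(c)        # route to sales now
        elif good_fit:
            buckets["outreach_gap"].append(c)   # right companies, no engagement yet
        elif engaged:
            buckets["targeting_gap"].append(c)  # engaged, but off-ICP
        else:
            buckets["ignore"].append(c)
    return buckets
```

If the "targeting_gap" bucket keeps growing, you have a top-of-funnel problem. If "outreach_gap" is full, your ICP is reachable but nobody is working it.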
Three Things That Still Don't Work
The structural improvement is welcome. But the three fundamental problems with rule-based scoring remain.
1. Rules Are Subjective
In HubSpot's manual scoring, you decide that "VP" in a job title is worth +15 points and "Director" is worth +10. You decide that company size 50-500 is worth +20. These aren't data-driven decisions — they're assumptions. And they're usually made by a single RevOps person or marketing manager based on intuition, not on analysis of actual closed-won patterns.
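Strip away the UI and a manual scoring model is just a list of weighted if-statements. Here's a rough sketch (not HubSpot's actual rule engine) that makes the subjectivity visible: every weight below is a number someone typed in.

```python
# A sketch of how manual fit rules behave. The weights are the assumptions
# described above; swap them and the ranking of your leads changes.

FIT_RULES = [
    (lambda c: "VP" in c.get("job_title", ""), 15),
    (lambda c: "Director" in c.get("job_title", ""), 10),
    (lambda c: 50 <= c.get("company_size", 0) <= 500, 20),
]

def fit_score(contact):
    """Sum the points for every rule the contact matches."""
    return sum(points for rule, points in FIT_RULES if rule(contact))
```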
Two teams in the same industry with the same product and similar customers will build completely different scoring models. If the output depends entirely on who builds the rules, the system is measuring the rule-builder's assumptions, not lead quality.
2. Scores Inflate Without Maintenance
HubSpot introduced score decay for engagement scores — a necessary feature that reduces points over time. But most teams either don't enable it or set it too conservatively. The result is predictable: after 3-6 months, scores creep upward. Leads who engaged once months ago still carry those points. The MQL threshold that once represented your top 15-20% of leads now captures 50%+. Sales gets flooded. Trust erodes.
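The math behind decay is simple; the hard part is turning it on and tuning it. Here's a rough sketch of the idea using a half-life curve. The 30-day half-life is an illustrative assumption, not a HubSpot setting.

```python
# A rough sketch of engagement decay, the kind of maintenance a decay setting
# is meant to automate. The 30-day half-life is an illustrative assumption.

import math
from datetime import datetime, timezone

def decayed_points(points, event_date, half_life_days=30):
    """Halve an event's point value every `half_life_days` since it happened."""
    age_days = (datetime.now(timezone.utc) - event_date).days
    return points * math.pow(0.5, age_days / half_life_days)

# Without something like this, a pricing-page visit from last quarter
# counts as much as one from yesterday, and MQL thresholds inflate.
```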
Fit scores don't decay at all — because they shouldn't. But fit criteria do change. If your best customers six months ago were enterprise companies with 500+ employees and your fastest-growing segment today is Series A startups with 30-80 employees, your fit scoring rules are optimizing for yesterday's ICP. Updating them requires manual intervention that most teams don't schedule.
3. Predictive Scoring Is a Black Box
HubSpot Enterprise (starting at $3,600/month) includes predictive lead scoring powered by machine learning. It analyzes your CRM data and assigns a Likelihood to Close percentage and a Priority tier to each contact. No rules to build. No weights to set. The algorithm figures it out.
The problem: your reps see a score with no explanation. A lead is labeled "High Priority" but nobody can tell you why. When a rep asks "Why should I call this lead first?", the answer is "the algorithm says so." That's not a compelling answer. And it doesn't help the rep personalize their outreach. There's a reason similarity-based approaches are gaining traction — they solve the explainability gap that predictive models leave open.
Predictive scoring also requires significant historical data — hundreds or thousands of closed deals with good data quality. Early-stage companies or teams that recently changed their ICP don't have the dataset to train a reliable model.
The Maintenance Problem Nobody Talks About
Setting up HubSpot lead scoring takes a day. Maintaining it is a permanent job.
Every quarter, you should be pulling closed-won and closed-lost deals, comparing their scores at time of first sales contact, and checking whether high scores actually correlate with higher win rates. If they don't, the model needs recalibration.
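That check doesn't need to be elaborate. Here's a minimal sketch, assuming you can export deals with the score they carried at first sales contact and their final outcome; the field names are placeholders for whatever your CRM export uses.

```python
# A sketch of the quarterly audit described above: do high scores at first
# sales contact actually correspond to higher win rates? Field names are
# assumptions about how you export deals, not HubSpot properties.

def win_rate_by_score_band(deals, band_size=20):
    """deals: list of dicts with 'score_at_first_contact' and 'won' (bool)."""
    bands = {}
    for d in deals:
        band = (d["score_at_first_contact"] // band_size) * band_size
        wins, total = bands.get(band, (0, 0))
        bands[band] = (wins + int(d["won"]), total + 1)
    return {f"{b}-{b + band_size - 1}": round(wins / total, 2)
            for b, (wins, total) in sorted(bands.items())}

# If the 80-99 band doesn't win meaningfully more often than the 20-39 band,
# the model needs recalibration.
```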
Every time your ICP shifts — new vertical, new company size, new geography — every fit rule needs review. Every time your marketing team launches a new content type or event series, engagement rules need updating. Every time a score threshold stops separating good leads from bad, the thresholds need adjustment.
Most teams do this diligently for the first quarter after launch. By month six, the scoring model is running on autopilot with outdated rules. By month twelve, sales has stopped looking at scores entirely.
This isn't a HubSpot-specific problem. It's inherent to any scoring system that relies on manually defined rules.
What Would Actually Solve This
The root cause of every problem above is the same: manual rules. Rules require assumptions. Assumptions require maintenance. Maintenance requires discipline that competes with every other RevOps priority.
The alternative is to remove rules entirely and score leads based on actual customer data. Instead of a human deciding that "VP" is worth more than "Director" or that SaaS companies are worth more than FinTech companies, the system analyzes your closed-won customers and determines which attributes actually predict success — based on your data, not your assumptions.
This is the premise of similarity-based scoring. You connect your CRM. The system analyzes your best accounts across hundreds of attributes — firmographics, technographics, growth signals, hiring patterns. New leads are scored by how closely they resemble those customers. The output isn't just a number. It's an explanation: "This lead scored 87 because they're similar to three of your best customers. Matching factors: Series B SaaS, 120 employees, uses HubSpot, recently hired two SDRs."
That explanation is what makes the score trustworthy. A rep who can see why a lead scored high knows how to open the conversation. A RevOps leader who can see which attributes drive scores knows whether the model still reflects reality.
And because the model learns from your actual customers — not from rules you wrote six months ago — it updates automatically as your customer base evolves. No quarterly recalibration. No rule maintenance. No score inflation.
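For the curious, the mechanics behind that kind of score can be sketched in a few lines. This is a deliberately simplified illustration, not any vendor's production model: the four attributes, the encoding, and the top-three averaging are all assumptions standing in for a much larger feature set.

```python
# A simplified sketch of similarity-based scoring: encode accounts as
# attribute vectors and score a lead by its resemblance to closed-won
# customers. The attributes and weighting here are illustrative; real systems
# draw on hundreds of firmographic, technographic, and growth signals.

import numpy as np

def encode(account):
    # Toy feature vector; a real encoder would cover far more attributes.
    return np.array([
        account["employee_count"] / 1000,
        1.0 if account["industry"] == "SaaS" else 0.0,
        1.0 if "HubSpot" in account["tech_stack"] else 0.0,
        float(account["sdr_hires_last_90d"]),
    ])

def similarity_score(lead, closed_won_accounts, k=3):
    """Score = average cosine similarity to the k most similar won customers,
    scaled to 0-100."""
    v = encode(lead)
    sims = []
    for acct in closed_won_accounts:
        w = encode(acct)
        sims.append(np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w) + 1e-9))
    top_k = sorted(sims, reverse=True)[:k]
    return round(100 * sum(top_k) / len(top_k))
```

The explanation side falls out of the same computation: the accounts that produced the top similarities, and the attributes where they overlap with the lead, are the "matching factors" a rep sees next to the score.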
When HubSpot Native Scoring Is Enough
HubSpot's scoring tools aren't broken. For certain teams, they're exactly right:
You have a simple, stable ICP. If your ideal customer hasn't changed in 12 months and can be described in 4-5 attributes, manual fit scoring works fine.
Your sales cycle is behavior-driven. If demo requests and pricing page visits are strong conversion signals and you just need to catch them, engagement scoring does the job.
You have RevOps capacity. If someone on your team can review and update scoring rules quarterly, the maintenance burden is manageable.
You're on Enterprise and have the data. If you have 1,000+ contacts with clear outcomes and your team is comfortable with black-box predictions, predictive scoring adds real value.
When You've Outgrown It
You've outgrown native scoring when:
Sales ignores the scores because they don't match reality
Your MQL volume has inflated to the point where the label is meaningless
Your ICP has shifted but nobody has updated the rules
Your reps can't explain why a lead is high-priority
You're spending more time maintaining scoring rules than acting on scores
At that point, adding more rules doesn't fix the problem. The answer is a fundamentally different approach — one grounded in your actual customer data, with scoring that updates itself and reasoning your team can see.
The Bottom Line
HubSpot's 2025 scoring overhaul was a genuine step forward. Separating fit from engagement gives teams a clearer picture of their pipeline. But the core limitations of rule-based scoring haven't changed: subjective rules, score inflation, and opaque predictions.
For teams that need scoring their reps will actually trust and use, the path forward isn't more rules. It's a model built on what your best customers actually look like — with transparent reasoning behind every score.
Want to see how similarity-based scoring works with your HubSpot data? Book a 15-minute demo and we'll score your leads live.
