Why a review health score matters
Most businesses monitor reviews in fragments. Someone checks the star rating. Someone else watches new reviews. Another person responds when a bad one appears. The result is activity without a system.
A review health score solves that by combining the signals that actually matter into one monthly operating view. It helps teams answer a practical question: are our review surfaces getting healthier, staying flat, or quietly becoming a risk?
This is especially important when your business depends on multiple platforms. A profile can look fine at a distance while recency drops, response speed slips, or suspicious review patterns begin to compound.
If you need the freshness lens first, read 2026 Review Recency Benchmarks. If you are actively defending against bad actors, pair this scorecard with Navigating Fake Reviews.
The five metrics that matter most
You do not need a huge dashboard. You need a small set of metrics that reveal trust, responsiveness, and risk.
1. Rating trend
Static rating snapshots hide too much. Track rating change over 30 and 90 days, not just the current average. This helps you see whether the profile is stabilizing, improving, or slipping.
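As a minimal sketch of this windowed comparison, assuming reviews are available as `(date, stars)` pairs (the tuple shape and function name are illustrative, not any platform's API), the 30- or 90-day trend can be computed against the period before it:

```python
from datetime import date, timedelta

def rating_trend(reviews, today, window_days):
    """Average stars over the trailing window vs. the window before it.

    `reviews` is a list of (review_date, stars) tuples -- an assumed shape.
    Returns (current_avg, prior_avg, delta); a value is None when its
    window contains no reviews.
    """
    window = timedelta(days=window_days)
    current = [s for d, s in reviews if today - window <= d <= today]
    prior = [s for d, s in reviews if today - 2 * window <= d < today - window]
    cur_avg = sum(current) / len(current) if current else None
    pri_avg = sum(prior) / len(prior) if prior else None
    delta = (cur_avg - pri_avg
             if cur_avg is not None and pri_avg is not None else None)
    return cur_avg, pri_avg, delta
```

A positive delta over both the 30- and 90-day windows suggests the profile is improving; a negative 30-day delta against a flat 90-day one is an early slip worth watching.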
2. Review recency
Days since the newest review is one of the clearest trust signals. Buyers notice dates quickly. A quiet profile may still have a strong lifetime average, but stale reviews weaken conversion confidence. This is why freshness benchmarks deserve a permanent place in the score.
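Days-since-newest maps naturally onto the 1/3/5 bands used later in the scoring model. A sketch, with threshold defaults that are assumptions to tune per platform:

```python
from datetime import date

def recency_score(newest_review_date, today, fresh_days=30, stale_days=90):
    """Map days since the newest review onto 1/3/5 scorecard bands.

    The 30- and 90-day thresholds are illustrative defaults, not benchmarks.
    """
    age = (today - newest_review_date).days
    if age <= fresh_days:
        return 5  # current
    if age <= stale_days:
        return 3  # acceptable
    return 1      # stale
```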
3. Review velocity
Volume should be monitored as a steady flow, not a vanity total. Track rolling 30-day review count and compare it to the prior period. Sudden spikes and long gaps both matter.
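The rolling comparison above can be sketched as follows, assuming you have a plain list of review dates (an assumed input shape):

```python
from datetime import date, timedelta

def review_velocity(review_dates, today, window_days=30):
    """Rolling review count for the trailing window vs. the prior period.

    Returns (current_count, prior_count, delta). A large positive delta is
    a spike to sanity-check; a prior count that dwarfs the current one is
    a gap forming.
    """
    window = timedelta(days=window_days)
    current = sum(1 for d in review_dates if today - window <= d <= today)
    prior = sum(1 for d in review_dates
                if today - 2 * window <= d < today - window)
    return current, prior, current - prior
```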
4. Response performance
Two numbers tell most of the story:
- response rate,
- average response time.
These metrics show whether your team is actually operating the profile, not just collecting reviews. If response quality is part of your recovery strategy, keep negative review response workflows close to the same dashboard.
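Both numbers fall out of the same pass over the review set. A sketch, assuming each review record carries a `responded` flag and, when responded, a `response_hours` field (the record shape is an assumption):

```python
def response_performance(reviews):
    """Response rate and average response time in hours.

    `reviews` is a list of dicts with `responded` (bool) and, for responded
    reviews, `response_hours` (float) -- an assumed record shape.
    Returns (rate, avg_hours); avg_hours is None with no responses.
    """
    if not reviews:
        return 0.0, None
    responded = [r for r in reviews if r["responded"]]
    rate = len(responded) / len(reviews)
    times = [r["response_hours"] for r in responded]
    avg_hours = sum(times) / len(times) if times else None
    return rate, avg_hours
```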
5. Suspicious-pattern risk
This is the metric most teams skip. You should track whether recent reviews show:
- duplicated phrasing,
- unusual timing clusters,
- weak reviewer credibility,
- claims that do not match customer records.
You are not trying to declare fraud from one signal. You are trying to notice patterns early enough to investigate.
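Two of these signals, duplicated phrasing and timing clusters, lend themselves to a cheap first-pass check. A sketch, assuming each review record carries `text` and a `created_at` datetime (the shape and thresholds are assumptions); the output is a list of flags to investigate, not a fraud verdict:

```python
from collections import Counter
from datetime import datetime, timedelta

def suspicious_flags(reviews, cluster_window_hours=2, cluster_size=3):
    """First-pass checks for duplicated phrasing and timing clusters.

    `reviews` is a list of dicts with `text` and `created_at` (datetime) --
    an assumed record shape. Flags patterns worth a closer look.
    """
    flags = []

    # Duplicated phrasing: identical normalized text appearing more than once.
    texts = Counter(r["text"].strip().lower() for r in reviews)
    if any(count > 1 for count in texts.values()):
        flags.append("duplicated_phrasing")

    # Timing clusters: `cluster_size` or more reviews inside a short window.
    stamps = sorted(r["created_at"] for r in reviews)
    window = timedelta(hours=cluster_window_hours)
    for i in range(len(stamps) - cluster_size + 1):
        if stamps[i + cluster_size - 1] - stamps[i] <= window:
            flags.append("timing_cluster")
            break

    return flags
```

Reviewer credibility and customer-record matching need richer data and belong in a manual investigation step, not an automated flag.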
A simple scoring model
The easiest way to run a monthly review health score is to score each category from 1 to 5:
| Metric | 1 | 3 | 5 |
|---|---|---|---|
| Rating trend | declining sharply | stable | improving steadily |
| Recency | stale | acceptable | current |
| Velocity | bursty or weak | uneven but manageable | steady and credible |
| Response performance | slow or inconsistent | mixed | fast and consistent |
| Risk monitoring | no visibility | partial | documented and active |
That gives you a 25-point model that leadership can understand quickly.
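The model itself is just a validated sum of the five category scores. A sketch (the category keys are illustrative names, not a standard):

```python
def review_health_score(scores):
    """Combine five 1-5 category scores into the 25-point model.

    `scores` maps category name -> int in 1..5; the key names here are
    illustrative assumptions.
    """
    expected = {"rating_trend", "recency", "velocity", "response", "risk"}
    if set(scores) != expected:
        raise ValueError(f"expected exactly the categories {sorted(expected)}")
    if any(not 1 <= v <= 5 for v in scores.values()):
        raise ValueError("each category score must be between 1 and 5")
    return sum(scores.values())
```

Validating the keys and ranges keeps a typo in one category from silently inflating or deflating the monthly number.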
The exact thresholds should vary by platform and category. A local service business may need tighter recency than a low-volume B2B firm, and a mature SaaS company on G2 should value a steady cadence of detailed reviews over a quarterly trickle of generic praise.
How to use the score operationally
The point of the score is not reporting for reporting's sake. It should trigger action.
When the score drops, ask which area moved:
- if recency fell, adjust request flow,
- if response speed slipped, fix ownership,
- if rating trend fell, check product or service issues,
- if risk signals increased, gather evidence and investigate.
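The triage above can be encoded so the month-over-month comparison routes to an owner automatically. A sketch, assuming both months' scores use the same category keys (the keys and action strings paraphrase the list above and are illustrative):

```python
def triage(previous, current):
    """Return the owner action for each category that dropped month over month.

    `previous` and `current` map category name -> 1-5 score; the keys and
    action strings are illustrative assumptions.
    """
    actions = {
        "recency": "adjust request flow",
        "response": "fix ownership of responses",
        "rating_trend": "check product or service issues",
        "risk": "gather evidence and investigate",
    }
    return {
        k: actions[k]
        for k in actions
        if current.get(k, 5) < previous.get(k, 5)
    }
```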
If your team is setting recovery targets, connect the score to the TrustScore Calculator and Review Velocity Planner. That turns the dashboard into an operating plan instead of a passive report.
Platform notes
Google
Recency, review count, and response consistency usually matter most. Local buyers compare visible proof quickly, so quiet periods are expensive.
Trustpilot
Trustpilot health should weight recency and moderation-related anomalies more heavily. If reviews are delayed or disappear unexpectedly, the pattern matters. For process coverage, review Trustpilot review removal workflows.
G2 and Capterra
For B2B directories, quality and specificity matter as much as pace. Detailed, current reviews are more valuable than a thin volume spike. The operating model from The Complete Guide to G2 Review Management for SaaS Companies is useful here even if you manage multiple platforms.
Common mistakes when building the score
Measuring only stars
Star rating is too blunt on its own. It hides whether the newest proof is stale, whether the team is slow to respond, and whether suspicious patterns are building.
Ignoring cross-platform differences
One scorecard can work, but the benchmark ranges should vary by platform role and customer volume.
Treating the score as monthly paperwork
The score should trigger ownership and action. If it does not change team behavior, it is just another report.
Recommended monthly workflow
- Pull 30-day and 90-day data for each platform.
- Score the five core metrics.
- Document any sudden anomalies or disputed reviews.
- Decide which single metric needs the most attention next month.
- Set the cadence target in the planner and route teams accordingly.
If you want a central place to connect these workflows, the Resources page is the cleanest internal hub for calculators, planning tools, the Trustpilot exporter, and partner operations.
Conclusion
A review health score gives operators a way to see what star rating alone cannot. It combines trust, responsiveness, and risk into a format that is usable every month.
The most useful version is simple enough to maintain and specific enough to drive action. Track rating trend, recency, velocity, response performance, and suspicious-pattern risk. Then use the result to decide what the team should fix next, not just what the dashboard says.

