Synthetic Monitoring
Active • Controlled. Scripted tests (browser/API) from chosen regions and devices.
- Best for pre-prod, uptime/SLA, regression checks
- Works even with low traffic or off-hours
- Stable baselines, proactive alerts on journeys
Synthetic monitoring runs scripted checks from controlled locations and browsers to catch regressions and verify uptime 24/7. Real user monitoring (RUM) captures what real users actually experience on real devices, networks, and geographies, which makes it ideal for prioritizing work that moves Core Web Vitals (INP/LCP/CLS) and revenue. Best practice is to combine both: synthetics for proactive reliability, RUM for field truth and business impact.
- Synthetic: scripted tests (browser/API) from chosen regions and devices.
- RUM: measures real users on real devices, networks, and geographies.
- Together: pair proactive reliability with real-world experience.
Updated: October 22, 2025 • INP replaced FID as the responsiveness vital (use RUM to track field performance).
Skim this side-by-side comparison to see where each shines.
| Dimension | Synthetic Monitoring | Real User Monitoring (RUM) |
|---|---|---|
| Nature | Active, scripted tests run from chosen regions/browsers/devices. | Passive, field data from real users on real devices & networks. |
| Environment | Great in pre-prod/staging and production canaries. | Best in production (actual traffic & behavior). |
| Traffic dependency | Works with zero traffic. | Needs real traffic (sampling helps at scale). |
| Best for | Uptime/SLA, regression detection, 24/7 journey checks. | Prioritizing by business impact, trends, UX reality. |
| Core Web Vitals | Lab baselines; good for change control & guardrails. | Field CWV at p75: INP, LCP, CLS. |
| Uptime / SLA | Primary use case (HTTP/API + browser flows). | Indirect (errors & availability as experienced by users). |
| Transactions | Deterministic scripted journeys (login, checkout). | Observes real funnels; reveals drop-offs & variance. |
| Alerting | Threshold/availability & step failures (proactive). | Distribution shifts (p75) & outlier segments (geo/ISP/device). |
| Debug depth | Repeatable filmstrips, HAR, controlled repro. | Real-world session context, errors, optional replay. |
| Outlier detection | Limited (unless you simulate many geos/ISPs). | Strong (actual geos, ISPs, devices, pages). |
| Privacy & governance | Lower risk (robots). Data mostly synthetic/logs. | Needs PII masking, consent (CMP), RBAC, EU hosting options. |
| Cost model | By number of checks/locations/frequency. | By sessions/pageviews; sampling controls spend. |
| Limitations | May miss real-world variance & human behavior. | Needs traffic; less deterministic for exact repro. |
| When it shines | Before launch; at night; SLAs; catching regressions early. | After launch; proving impact; SEO/CWV; market/geo insights. |
Best together: use synthetics for guardrails & early alerts, and RUM for field truth, prioritization, and Core Web Vitals outcomes.
Tip: align metric names across both (e.g., route names, journey IDs) and correlate to APM/logs for faster root-cause.
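One lightweight way to implement this tip is a shared constants module that both the RUM tagging code and the synthetic suite import, so dashboards in either tool group by the same keys. A minimal sketch (the IDs are illustrative):

```ts
// journeys.ts: one canonical list of journey IDs, imported by both the
// RUM tagging code and the synthetic test suite, so metrics from either
// tool can be joined on the same keys in APM/log queries.
export const JOURNEYS = {
  login: 'journey.login',
  search: 'journey.search',
  addToCart: 'journey.add_to_cart',
  checkout: 'journey.checkout',
} as const;

export type JourneyId = (typeof JOURNEYS)[keyof typeof JOURNEYS];
```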
Pick the right tool for the moment. Each scenario below pairs a recommendation (Synthetic, RUM, or Both) with a short why.
- Synthetic: you need repeatable, controlled tests to catch regressions before production.
- Synthetic: RUM can’t alert without users; synthetics give 24/7 availability and baselines.
- RUM: only field data reflects real user experience at p75 for INP, LCP, and CLS.
- RUM: real traffic exposes variance by geography, ISP, and device mix.
- Both: guardrails plus real impact; synthetics catch outages, RUM shows actual drop-offs.
- Both: RUM finds real sessions and INP outliers; synthetics reproduce with control.
- RUM: only field distributions capture perceived gains across devices and networks.
- Synthetic: synthetics minimize privacy risk; RUM requires strict masking and governance.
Use this lightweight sequence to pair proactive guardrails (synthetics) with field truth (RUM). It’s vendor-neutral and works for web apps, SPAs, and APIs.
1. List the business-critical flows (login, search, add-to-cart, checkout) and the highest-traffic templates/routes. These will anchor both synthetic checks and RUM dashboards.
2. Add the browser tag/SDK, enable SPA navigation tracking, and surface p75 distributions for INP, LCP, and CLS. Upload source maps to make JS errors readable. A minimal sketch follows.
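A minimal field-collection sketch using the open-source web-vitals library. The /rum/vitals endpoint and the route-normalization helper are assumptions for illustration; a real setup would use your RUM vendor's SDK or your own collector:

```ts
// rum-vitals.ts: report field Core Web Vitals with the `web-vitals` library.
import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';

// Normalize the pathname to a stable route name so p75 aggregates per
// template (swap in your router's route IDs in a real app).
function currentRouteName(): string {
  return location.pathname.replace(/\/\d+/g, '/:id');
}

function report(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,     // 'INP' | 'LCP' | 'CLS'
    value: metric.value,   // ms for INP/LCP, unitless for CLS
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    route: currentRouteName(),
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!navigator.sendBeacon('/rum/vitals', body)) {
    fetch('/rum/vitals', { method: 'POST', body, keepalive: true });
  }
}

onINP(report);
onLCP(report);
onCLS(report);
```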
3. Use the same route names / journey IDs in both RUM and synthetics, and tag data with release / app.version so regressions map to deploys. Configure sampling and environments (prod/stage) to control cost and noise.
4. Script journeys for the flows from step 1 and add API checks for dependencies. Run every 5–15 min from 2–3 regions and at least 2 browsers/devices; see the sketch below.
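A Playwright-style sketch of one browser journey plus one API probe; the URLs, selectors, and budgets below are placeholders, not real endpoints:

```ts
// checkout.journey.spec.ts: a synthetic browser journey plus an API check,
// written with @playwright/test. Selectors, URLs, and budgets are examples.
import { test, expect, request } from '@playwright/test';

test('journey.checkout: browse, add to cart, review', async ({ page }) => {
  await page.goto('https://shop.example.com/');
  await page.getByRole('button', { name: 'Add to cart' }).first().click();
  await page.getByRole('link', { name: 'Cart' }).click();
  await expect(page.getByText('Order summary')).toBeVisible();
  // Assert a step-level timing budget, not just success.
  const navMs = await page.evaluate(
    () => performance.getEntriesByType('navigation')[0].duration,
  );
  expect(navMs).toBeLessThan(4000); // example guardrail, in ms
});

test('api.payments dependency is healthy', async () => {
  const api = await request.newContext();
  const started = Date.now();
  const res = await api.get('https://api.example.com/v1/payments/health');
  expect(res.status()).toBe(200);
  expect(Date.now() - started).toBeLessThan(800); // example latency budget, ms
});
```

Scheduling (every 5–15 minutes, from multiple regions) lives in the runner rather than the script, so the same spec can serve both CI gates and production probes.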
5. When RUM flags a regression (e.g., an INP spike in country X), jump to session context (and optional replay), then reproduce it with a targeted synthetic scenario. Link to APM/logs for root cause.
6. Enforce PII masking, CMP consent, RBAC, and EU hosting options. Set alerts on synthetic availability/latency and on RUM p75 shifts (INP/LCP/CLS), as sketched below, and review weekly.
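The p75-shift alert can be as simple as comparing the current distribution against a trailing baseline. A sketch, with the 20% tolerance as an example threshold:

```ts
// p75-guardrail.ts: flag a regression when the current p75 of a metric
// worsens by more than `tolerance` versus a baseline window.
function p75(samples: number[]): number {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.ceil(0.75 * sorted.length) - 1];
}

function isRegression(
  baseline: number[],
  current: number[],
  tolerance = 0.2,
): boolean {
  return p75(current) > p75(baseline) * (1 + tolerance);
}

// Example: INP samples (ms) for one route.
const lastWeek = [80, 120, 140, 150, 180, 190, 210, 240];
const today = [90, 160, 220, 260, 280, 300, 340, 380];
console.log(isRegression(lastWeek, today)); // true: p75 went 190 -> 300 ms
```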
Default to masking, restrict replay fields, log access, and prefer EU hosting with sovereignty guarantees when required.
Use this vendor-neutral matrix to decide whether a metric is best tracked with RUM, Synthetic, or Both.
| Metric | Best measured in | Why / when | Typical alert / target |
|---|---|---|---|
| INP (Interaction to Next Paint) | RUM | Represents real responsiveness across the whole visit; needs field data and real interactions. | p75 INP ≤ 200 ms (good), 200–500 ms (needs improvement), > 500 ms (poor). |
| LCP (Largest Contentful Paint) | Both | RUM for impact by geo/device; Synthetic for controlled baselines and regression tests. | p75 LCP ≤ 2.5 s (good), 2.5–4.0 s (needs improvement), > 4.0 s (poor). |
| CLS (Cumulative Layout Shift) | RUM | Layout shifts are driven by real content/ads/user behavior; field data tells the truth. | p75 CLS ≤ 0.1 (good), 0.1–0.25 (needs improvement), > 0.25 (poor). |
| TTFB | Both | RUM reveals real networks/CDNs; Synthetic isolates server/regression with fixed nodes. | Watch p75; common guardrail around 0.8–1.8 s depending on stack & region. |
| Uptime / Availability | Synthetic | Deterministic, round-the-clock checks independent of traffic; ideal for SLAs. | Availability ≥ 99.9% monthly; fail on 2 of 3 probe errors. |
| API latency & 5xx rate | Synthetic | Scripted API assertions and multi-region probes to de-risk dependencies. | p95 latency < baseline +30%; 5xx rate < 1%. |
| Transaction (login/checkout) duration | Both | Synthetic = guardrails; RUM = real drop-offs by segment and device mix. | Synth: duration +25% vs baseline; RUM: p75 step times increasing > 20%. |
| JS error rate / stack traces | RUM | Field stacks + source maps locate real crashes; Synthetic can reproduce once identified. | Error rate > baseline +X% or new top error appears. |
| Long tasks / main-thread blocking | RUM | Often device/CPU dependent; field data surfaces segments causing INP regressions. | Time blocked > threshold on target routes; INP p75 ↑ 20%. |
| 3rd-party / tag impact | Both | Synthetic for clean before/after baselines; RUM for real impact on users & conversions. | Alert when tag adds > X ms to LCP/INP or increases JS errors. |
| DNS / Connect / TLS | Synthetic | Best isolated in lab from multiple nodes to detect provider or routing issues early. | p95 connect/TLS spikes vs baseline +30%. |
| Experience availability (as felt) | RUM | Captures real-world outages or blockers users hit despite green synthetics. | Drop in successful sessions or conversion beyond normal seasonality. |
Tip: align route/journey names across tools and correlate to APM/logs. Reminder: INP replaced FID in 2024 — track INP in the field.
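Since both tools should agree on what “good” means, the thresholds in this matrix can be encoded once and reused by dashboards and alerts. A sketch of the Good / Needs improvement / Poor bands above:

```ts
// cwv-rating.ts: encode the p75 thresholds from the matrix so every
// dashboard and alert classifies Core Web Vitals the same way.
type Rating = 'good' | 'needs-improvement' | 'poor';

const THRESHOLDS = {
  INP: { good: 200, poor: 500 },   // ms
  LCP: { good: 2500, poor: 4000 }, // ms
  CLS: { good: 0.1, poor: 0.25 },  // unitless
} as const;

function rate(metric: keyof typeof THRESHOLDS, p75Value: number): Rating {
  const t = THRESHOLDS[metric];
  if (p75Value <= t.good) return 'good';
  if (p75Value <= t.poor) return 'needs-improvement';
  return 'poor';
}

console.log(rate('INP', 180));  // 'good'
console.log(rate('LCP', 3100)); // 'needs-improvement'
console.log(rate('CLS', 0.3));  // 'poor'
```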
Use a two-lane workflow: synthetics to block regressions before release, then RUM to validate real-world impact after go-live. This section shows who does what — and when — so nothing slips through.
Synthetic lane (pre-release guardrails):
- Script the critical journeys: login, add-to-cart, checkout, account, with assertions on text, status, and timings.
- Probe the APIs: auth, catalog, payments; validate p95 latency and error rates against SLAs.
- Create stable baselines and catch geo/device-specific regressions early.
- Fail the pipeline when steps break or exceed thresholds; store HAR files/filmstrips.

RUM lane (post-release field truth):
- Track INP/LCP/CLS at p75 by route/template and cohort (geo/ISP/device).
- Use JS errors (with source maps) and APM/logs to find and explain outliers.
- Raise alerts when p75 worsens (e.g., INP up 20%) or conversion drops on a step.
- Turn new RUM findings into targeted synthetic scenarios to reproduce and prevent (see the sketch after this list).
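That last step can be semi-automated. A sketch that turns a RUM finding into a targeted synthetic check config; every shape here is hypothetical and would map onto your actual tools:

```ts
// finding-to-synthetic.ts: turn a RUM finding (metric, journey, cohort)
// into a targeted synthetic scenario. All interfaces are hypothetical.
interface RumFinding {
  metric: 'INP' | 'LCP' | 'CLS';
  journeyId: string; // e.g., 'journey.checkout', shared with RUM dashboards
  p75: number;
  cohort: { country: string; device: 'mobile' | 'desktop' };
}

interface SyntheticCheck {
  name: string;
  journeyId: string;
  region: string;           // probe region near the affected cohort
  device: 'mobile' | 'desktop';
  frequencyMinutes: number;
  budgetMs: number;         // fail the check beyond this duration
}

function toSyntheticCheck(f: RumFinding): SyntheticCheck {
  return {
    name: `repro:${f.journeyId}:${f.metric}:${f.cohort.country}`,
    journeyId: f.journeyId,            // same ID as the RUM dashboards
    region: f.cohort.country,
    device: f.cohort.device,
    frequencyMinutes: 10,              // within the 5–15 min baseline
    budgetMs: Math.round(f.p75 * 0.8), // demand better than the field p75
  };
}
```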
RUM and Synthetic tools fall into a few clear categories. Skim these neutral descriptions to see which type of platform fits your context and its typical strengths.
- All-in-one observability suites: RUM, browser/API synthetics, traces & logs, and alerting in one UI.
- Frontend performance specialists: deep diagnostics for INP/LCP/CLS, visual comparisons, and developer-friendly insights.
- Uptime & transaction monitors: straightforward synthetic uptime and transactions, with an optional RUM overlay for websites/APIs.
- API monitoring platforms: programmable probes and assertions for HTTP(S), auth flows, third-party dependencies, and SLAs.
- Sovereignty-focused platforms: RUM + synthetics with options for EU hosting, sovereignty controls, and on-prem/hybrid deployments.
- Open-source / self-hosted stacks: own the pipeline with community agents; shift cost to infrastructure & operations.
Note: Examples are illustrative and vendor-neutral. Replace/expand based on your ecosystem and compliance needs.
**How do Synthetic monitoring and RUM differ?** Synthetic runs scripted tests from controlled locations, browsers, and devices, which makes it great for uptime, SLAs, and preventing regressions. RUM measures what real users experience in production, which makes it ideal for prioritizing work that improves Core Web Vitals (INP/LCP/CLS) and conversions.
**Which one is better?** Neither is “better” in all cases. Use Synthetic when you need repeatable guardrails and 24/7 coverage; use RUM when you need field truth and business impact. Most teams get the best results by combining both.
**Can one replace the other?** They are complementary. Synthetic catches issues before users do; RUM confirms how users are affected. Replacing one with the other usually leaves blind spots (either no field reality or no proactive guardrails).
**Does RUM help with SEO and Core Web Vitals?** Yes: RUM provides field measurements for INP, LCP, and CLS at the 75th percentile. Improving these for real users strengthens page experience signals and often correlates with better business outcomes.
**What changed with INP?** INP (Interaction to Next Paint) replaced FID as the responsiveness vital in 2024. INP looks at all interactions across a visit, so track it in RUM and add Synthetic guardrails for critical user actions.
**How often should synthetic checks run?** Common baselines are every 5–15 minutes for critical browser journeys and 1–5 minutes for key API endpoints. Use multiple regions and at least two browsers/devices for coverage.
**What about privacy and governance?** RUM: enable PII masking by default, integrate your CMP, apply RBAC, and choose EU hosting when required. Synthetic: lower privacy risk (robots), but still treat credentials and test data securely.