3 Things I Wish I Knew About the Soft Navigation Heuristics API Before Measuring My SPA
Measuring performance in Single-Page Apps has always been a game of guesswork—until the browser started treating client-side transitions as first-class citizens.
Most of your Single-Page App (SPA) performance metrics are essentially fiction. We’ve spent years patting ourselves on the back for a fast Largest Contentful Paint (LCP) on the initial landing page, while blissfully ignoring the fact that as soon as a user clicks a "Dashboard" link and waits four seconds for a skeleton screen to populate, the browser thinks absolutely nothing has happened.
The Soft Navigation Heuristics API is the browser's attempt to finally admit that client-side transitions are real navigations. But after wrestling with it in production, I realized it's not a "plug and play" magic wand. It has opinions—very specific ones.
If you’re about to start measuring your SPA with this API, here are three things that would have saved me a week of "Why isn't this event firing?" debugging sessions.
1. The Heuristics are a "Three-Way Handshake," not a suggestion
I initially thought that if I changed the URL via the History API, the browser would just *know* I navigated. It turns out the Soft Navigation API is much more cynical. It requires a specific sequence of events to occur within a tight window for a "Soft Navigation" entry to be emitted.
To the browser, a soft navigation only exists if:
1. It is triggered by a user interaction (click, keydown, etc.).
2. It results in a URL change (via the History or Navigation API).
3. It results in a DOM change (specifically, something meaningful being added or removed).
If you’re missing one of these, the API remains silent. For instance, I had a search bar that updated the URL query parameters but didn't actually "navigate" the user to a new view. I wanted to measure the performance of that search result render. Because it didn't feel like a "new page" to the browser's heuristics, I got zero data.
The Lesson: You can’t just trigger history.pushState() in a setTimeout and expect it to work. The URL change must be linked to the task started by the user interaction.
// This will likely trigger a Soft Navigation entry
button.addEventListener('click', async () => {
  // 1. User interaction (the click) starts the attributable task
  const res = await fetch('/api/details');
  const data = await res.json();
  const html = renderDetails(data);
  // 2. URL change
  history.pushState({}, '', '/details');
  // 3. DOM change
  document.getElementById('app').innerHTML = html;
});

2. The "Task Attribution" trap is real
This is where things get technical and slightly annoying. The browser uses Task Attribution to link your click event to the eventual DOM change. If you "break the chain," the heuristic fails.
I ran into a bug where our navigation logic was wrapped in a complex series of nested promises and custom event emitters. By the time the DOM actually changed, the browser had "lost the thread" and no longer associated that change with the original click.
If your framework uses an asynchronous scheduler (looking at you, React) that defers updates to a different task entirely, you might find your soft navigations aren't being recorded.
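To make the contrast concrete, here’s a minimal sketch. The function names and data are illustrative, and the real attribution bookkeeping happens inside the browser; the point is which continuation style keeps the eventual update inside the click’s task chain.

```javascript
// Illustrative sketch only: actual task attribution is internal to the browser.
// What matters is how the continuation that touches the DOM gets scheduled.

async function navigateAttributable(update) {
  // Awaited promises keep the continuation in the same task chain,
  // so the eventual DOM update stays linked to the originating click.
  const data = await Promise.resolve({ title: 'Details' });
  update(data);
}

function navigateDetached(update) {
  // setTimeout schedules a brand-new macrotask; the browser "loses
  // the thread" between the click and this update.
  setTimeout(() => update({ title: 'Details' }), 0);
}
```

Both functions "work" in the sense that the UI updates either way, which is exactly why this failure mode is so easy to ship.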
Here is how you actually observe these entries to see if your chain is broken:
const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    if (entry.entryType === 'soft-navigation') {
      console.log('Soft Nav Detected!', {
        name: entry.name, // The URL
        startTime: entry.startTime,
        navigationId: entry.navigationId // This is gold for correlation
      });
    }
  });
});
observer.observe({ type: 'soft-navigation', buffered: true });

If you see nothing in the console after a navigation, check if you’re using setTimeout(fn, 0) or something similar that escapes the original interaction's task scope.
3. LCP is recalculated, but it’s a "Soft" LCP
This is the most valuable part of the API, but also the most confusing. When a soft navigation is detected, the browser resets its LCP calculation. This is huge! It means we can finally see how long it takes for the main content of a *second* page to appear.
However, this "Soft LCP" is reported as a separate largest-contentful-paint entry that is tagged with a navigationId. You have to specifically filter for these, or your analytics will just see a bunch of weird LCP spikes that don't make sense in the context of the initial page load.
Here’s the gotcha: The Soft Navigation Heuristics API is still experimental (available behind a flag in Chrome or via an Origin Trial). You can't point a standard PerformanceObserver at it and expect your old code to just work. You have to look at the navigationId to correlate which LCP belongs to which "page."
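Because support isn't guaranteed, it's worth feature-detecting before you observe. A minimal check using the standard PerformanceObserver API might look like this:

```javascript
// Guard against browsers (and non-browser environments) that don't
// expose the experimental 'soft-navigation' entry type yet.
const softNavSupported =
  typeof PerformanceObserver !== 'undefined' &&
  Array.isArray(PerformanceObserver.supportedEntryTypes) &&
  PerformanceObserver.supportedEntryTypes.includes('soft-navigation');

if (softNavSupported) {
  const observer = new PerformanceObserver((list) => {
    // ... handle soft-navigation entries here
  });
  observer.observe({ type: 'soft-navigation', buffered: true });
}
```

If the check fails, you can silently fall back to whatever SPA timing you used before, rather than throwing in browsers that haven't shipped the experiment.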
const lcpObserver = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  entries.forEach((entry) => {
    // If navigationId exists, this LCP happened during a soft nav
    if (entry.navigationId) {
      console.log(`Soft LCP for ${entry.navigationId}: ${entry.startTime}ms`);
    } else {
      console.log(`Initial LCP: ${entry.startTime}ms`);
    }
  });
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });

Why this matters for your SPA
Before this API, we were trying to "fake" these metrics using the User Timing API (marking performance.mark('nav-start') and performance.mark('nav-end')). It was better than nothing, but it didn't account for the actual paint cycles the browser was performing.
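For reference, that hand-rolled approach looked roughly like the sketch below. The route-change hook names are hypothetical; in a real app you'd wire them to your router's transition events.

```javascript
// Hand-rolled SPA navigation timing with the User Timing API.
// onRouteChangeStart/onRouteChangeEnd are illustrative router hooks.
function onRouteChangeStart(route) {
  performance.mark(`spa-nav-start:${route}`);
}

function onRouteChangeEnd(route) {
  performance.mark(`spa-nav-end:${route}`);
  const measure = performance.measure(
    `spa-nav:${route}`,
    `spa-nav-start:${route}`,
    `spa-nav-end:${route}`
  );
  // Duration in ms: this tells you when your JS finished,
  // not when the pixels actually hit the screen.
  return measure.duration;
}
```

That last comment is the whole problem: the measurement ends when your code ends, which is exactly the blind spot the soft navigation heuristics close.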
The Soft Navigation Heuristics API isn't just another way to time your code; it's a way to see what the user is actually experiencing. It’s the difference between knowing your JavaScript finished and knowing the pixels actually hit the screen.
It’s finicky, it requires your code to follow "good" patterns of task management, and it’s still evolving. But once you get those first few soft-navigation entries appearing in your logs, you’ll realize just how much performance data you’ve been leaving on the table. Stop guessing how your SPA feels—start measuring the handshakes.

