
Measured Memory

Track real-world heap usage in production with the cross-origin-isolated Memory Measurement API.


I used to stare at window.performance.memory.usedJSHeapSize in the Chrome console and genuinely believe I was seeing the truth. It felt like a direct line to the heart of my application's health. Then I tried to use that data to debug a memory leak in production, and everything fell apart. The numbers were inconsistent, they didn't account for iframes or workers, and—worst of all—the API wasn't even standard. It was a Chrome-only playground that lied by omission.

The real world is messier. Your app isn't just one big blob of memory; it’s a collection of execution contexts. If you want to know what’s actually happening under the hood without guessing, you need the Memory Measurement API.

The "Security Tax" for Real Data

Before we touch a single line of code, we have to talk about the catch. The performance.measureUserAgentSpecificMemory() API is powerful, which means it’s also a security risk. If a malicious site could precisely measure how much memory another origin is using, it could potentially mount side-channel attacks (like Spectre).

Because of this, the API only works if your page is cross-origin isolated.

To get this working, you have to convince your server to send two specific HTTP headers:

Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp

It’s a bit of a hassle to set up—especially if you rely on third-party scripts that aren't configured for CORS—but it's the price of admission for high-resolution performance data. Without these, the function won't even exist on the performance object.

Invoking the Beast

Unlike the old synchronous property, the new API is asynchronous and returns a Promise. Why? Because the browser needs to perform a garbage collection (GC) cycle to give you an accurate number. You don't want to lock up the main thread while the browser is taking out the trash.

Here is the most basic way to call it:

async function checkMemory() {
  if (!window.crossOriginIsolated) {
    console.warn("Not isolated. Cannot measure memory accurately.");
    return;
  }

  try {
    const result = await performance.measureUserAgentSpecificMemory();
    console.log("Memory usage:", result);
  } catch (error) {
    if (error instanceof DOMException && error.name === 'SecurityError') {
      console.error("The context is not secure or not cross-origin isolated.");
    } else {
      console.error("Something went wrong:", error);
    }
  }
}

Reading the Map

When that promise resolves, you don't just get a single number. You get an object that describes the memory footprint of your entire "browsing context group." This is where it gets interesting. You can see how much memory your main window is using versus that rogue third-party chat widget in an iframe.

The result looks something like this:

{
  "bytes": 25000000,
  "breakdown": [
    {
      "bytes": 20000000,
      "attribution": [
        { "url": "https://myapp.com/", "scope": "Window" }
      ],
      "types": ["JS"]
    },
    {
      "bytes": 5000000,
      "attribution": [
        { "url": "https://chat-widget.io/iframe", "scope": "Window" }
      ],
      "types": ["JS"]
    }
  ]
}

This breakdown is the killer feature. If your total bytes count is climbing, you can look at the breakdown array to see if it’s your core application or a specific web worker that’s bloating.
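One practical way to use it is to collapse the breakdown into per-origin totals, so a climbing third-party frame stands out immediately. The helper below is my own sketch (the function name isn’t part of the API); it assumes the result shape shown above.

```javascript
// Sketch: aggregate a measurement result into per-origin byte totals.
// `result` has the shape returned by measureUserAgentSpecificMemory().
function totalsByOrigin(result) {
  const totals = {};
  for (const entry of result.breakdown) {
    // Entries with an empty attribution array are shared or
    // unattributable memory; bucket them separately.
    const key = entry.attribution.length
      ? new URL(entry.attribution[0].url).origin
      : '(unattributed)';
    totals[key] = (totals[key] || 0) + entry.bytes;
  }
  return totals;
}
```

Feed it the example result above and you’d get one total for your app’s origin and one for the chat widget’s, which is usually all you need to point the finger.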

Don't Spam the Collector

Since this API triggers a garbage collection, you absolutely should not call it in a tight loop or every few seconds. If you do, your users will experience constant micro-stutters as the browser keeps stopping everything to count its bytes.

The best approach is a sampling strategy. I like to trigger a measurement after significant transitions (like navigating to a heavy dashboard) or at a very slow, random interval.

// A simple "don't kill the UX" scheduler
function scheduleMemoryMeasurement() {
  // Wait at least 5 minutes between checks to avoid GC thrashing
  const INTERVAL = 5 * 60 * 1000;

  setTimeout(async () => {
    try {
      if (window.crossOriginIsolated && performance.measureUserAgentSpecificMemory) {
        const report = await performance.measureUserAgentSpecificMemory();

        // Send this off to your analytics endpoint
        navigator.sendBeacon('/analytics/memory', JSON.stringify({
          total: report.bytes,
          timestamp: Date.now()
        }));
      }
    } catch (error) {
      console.error("Memory measurement failed:", error);
    } finally {
      // Always reschedule, even if this measurement threw
      scheduleMemoryMeasurement();
    }
  }, INTERVAL + Math.random() * 10000); // Add jitter
}

The Gotchas

1. Browser Support: Currently, this is heavily skewed toward Chromium-based browsers. If you're building for Firefox or Safari, this API will likely be undefined. Always feature-detect.
2. The "Slow" Promise: Sometimes the promise takes a long time to resolve. If the browser decides now is a bad time for a GC (e.g., the user is mid-animation), it might delay the result. Don't write code that *depends* on this promise returning quickly.
3. Local Dev: Testing cross-origin isolation locally can be a pain. If you're using Vite, Webpack, or similar, you’ll need to configure the dev server to send those COOP/COEP headers, or you'll be scratching your head wondering why performance.measureUserAgentSpecificMemory is undefined.
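For the Vite case, a config sketch along these lines does the trick (assuming a standard Vite setup; `server.headers` applies to the dev server, and you’ll still need the same headers in production):

```javascript
// vite.config.js — send the COOP/COEP pair from the dev server so
// crossOriginIsolated is true on localhost.
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    headers: {
      'Cross-Origin-Opener-Policy': 'same-origin',
      'Cross-Origin-Embedder-Policy': 'require-corp',
    },
  },
});
```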

Why bother?

We spent years guessing why our SPAs would crash after three hours of use on a low-end laptop. We used to tell users to "just refresh the page." With measured memory, we can actually see the leak happening in the wild, correlate it with specific user actions, and fix the actual problem.

It’s not as simple as a single property access, but the data you get back is actually grounded in reality. And in performance tuning, reality is the only thing that matters.