loke.dev

What Nobody Tells You About Minor GC: Why Short-Lived Objects Are Secretly Sabotaging Your INP

Even if your heap looks healthy, a high frequency of 'Scavenge' cycles creates the micro-stutters that turn a 100 Lighthouse score into a failing Interaction to Next Paint (INP) grade.

7 min read

Your 0MB memory leak is lying to you. You can have a perfectly flat memory profile, zero detached DOM nodes, and a heap that looks as clean as a whistle, yet still suffer from a "janky" UI that fails the Interaction to Next Paint (INP) metric.

We’ve been conditioned to fear memory leaks—the slow, creeping growth of the heap that eventually crashes the tab. But for modern web applications, the silent killer isn't the memory you *keep*; it's the memory you *discard*. High-frequency "Scavenge" cycles (Minor GC) create hundreds of microscopic "stop-the-world" pauses. Individually, each lasts only a few milliseconds. Collectively, they are the reason your button click feels like it's wading through molasses.

The Generational Lie

V8 (and most modern engines) operates on the "Generational Hypothesis." This is the assumption that most objects die young. To handle this, the heap is split into two main areas: the New Space (Young Generation) and the Old Space (Old Generation).

The New Space is small (typically just a few megabytes; the exact size is configurable and varies by engine version) and incredibly fast to allocate into. But it fills up quickly. When it does, V8 triggers a Minor GC, also known as a Scavenge.

Unlike a Major GC, which is a heavy-duty operation, a Scavenge is designed to be fast. It uses Cheney's algorithm (a semi-space copying collector): surviving objects are moved from one half of the New Space to the other. If an object survives a couple of these cycles, it gets promoted to the Old Space.

Here is the problem: Minor GC is still a "stop-the-world" event. While it's running, your JavaScript is paused. If you are churning through temporary objects—think object spreading in Redux, temporary arrays in filter/map chains, or coordinate objects in a mouse-move listener—you are triggering Scavenge cycles every few hundred milliseconds.
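That kind of churn is easy to produce without noticing. A sketch of a per-frame transform, with illustrative values for `points`, `scale`, and `width` (none of these names come from a real codebase):

```javascript
const points = [{ x: 1, y: 1 }, { x: 10, y: 1 }];
const scale = 2;
const width = 10;

// Each pass allocates: one temporary object per element in map(),
// plus the whole intermediate array that filter() immediately discards.
const visible = points
    .map(p => ({ x: p.x * scale, y: p.y * scale }))
    .filter(p => p.x < width);
```

Run this on every animation frame with a few thousand points and the New Space fills in a hurry.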

If a user clicks a button during one of those cycles, the main thread is busy moving memory around. That’s your INP budget being eaten alive before your event listener even fires.

Anatomy of an INP Disaster

INP measures the time from a user interaction until the browser paints the next frame showing the result, and the reported score is roughly the worst interaction on the page. The budget for a "Good" score is 200ms.

Imagine this scenario:
1. User clicks a "Filter List" button.
2. V8's New Space is 95% full because of some background polling or an animation.
3. The click event triggers.
4. Before the click handler even runs, allocating the event object (and whatever your framework wraps around it) tips the New Space over the edge.
5. Boom. Minor GC fires. The main thread stops for 15ms.
6. Your handler runs, calculates a new state, and uses an immutable pattern that creates 5,000 temporary objects.
7. New Space fills again. Another Minor GC triggers during the reconciliation phase of your framework. Another 15ms.
8. The browser finally paints.

You just lost 30ms to garbage collection. That’s 15% of your total INP budget gone, and you haven't even touched your actual business logic yet.

The "Immutability" Tax

We love immutability. It makes state management predictable. But in high-frequency scenarios, it's a performance suicide note. Look at this common pattern in a dashboard that updates every 100ms:

// The "Clean" Functional Way
function updateData(oldData, updates) {
    return oldData.map(item => {
        const update = updates.find(u => u.id === item.id);
        if (update) {
            // Every time this runs, we create a brand new object
            // and throw the old one in the trash.
            return { ...item, ...update, lastModified: Date.now() };
        }
        return item;
    });
}

If oldData has 1,000 items and we update 10 of them every 100ms, we are generating thousands of short-lived objects. In a complex React or Vue app, this happens in the render loop.
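If you can't drop immutability at the boundary, you can at least stop paying twice. A lower-churn sketch of the same update (assuming, as above, that both arrays carry an `id` field): indexing the updates in a Map removes the per-item `find()` closure and its linear scan, and untouched items are passed through by reference rather than copied.

```javascript
function updateDataLowChurn(oldData, updates) {
    // One Map allocation up front instead of a closure + O(n) scan per item.
    const byId = new Map(updates.map(u => [u.id, u]));
    return oldData.map(item => {
        const update = byId.get(item.id);
        // Only changed items allocate a new object; everything else survives as-is.
        return update ? { ...item, ...update, lastModified: Date.now() } : item;
    });
}
```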

I’ve seen apps where the "Minor GC" row in Chrome DevTools Performance tab looks like a bar code. Every one of those little green bars is a moment where the user's input was ignored.

How to Spot the Sabotage

You won't find this by looking at the "Memory" tab's heap snapshot. That only shows you what's currently alive. To see the *churn*, you need the Performance Tab.

1. Open Chrome DevTools -> Performance.
2. Check the "Memory" box.
3. Record a 10-second slice of you interacting with the app.
4. Look at the "Interactions" track.
5. Look at the "Main" track for "Minor GC" or "Scavenge" tasks.
6. Look at the "JS Heap" graph (the blue line). If it looks like a saw-tooth pattern (sharp rises followed by sudden vertical drops), you have high churn.

If those vertical drops align with your interaction latency, you've found your culprit.

Practical Fix 1: The "Lazy Object" Pattern

One of the biggest contributors to New Space pressure is creating objects for one-time use inside loops. We can often reuse a single "carrier" object if the consumer is synchronous.

Instead of this:

function getBoundingBox(elements) {
    return elements.map(el => {
        const rect = el.getBoundingClientRect();
        // Creating a new object every iteration
        return {
            x: rect.left,
            y: rect.top,
            width: rect.width,
            height: rect.height
        };
    });
}

Try a more surgical approach if you're just doing a calculation:

// Reuse a static object for calculations
const REUSABLE_RECT = { x: 0, y: 0, width: 0, height: 0 };

function processElements(elements, callback) {
    for (let i = 0; i < elements.length; i++) {
        const rect = elements[i].getBoundingClientRect();
        REUSABLE_RECT.x = rect.left;
        REUSABLE_RECT.y = rect.top;
        REUSABLE_RECT.width = rect.width;
        REUSABLE_RECT.height = rect.height;
        
        callback(REUSABLE_RECT); 
        // Note: The callback must use the data immediately, 
        // as the object will be mutated in the next iteration.
    }
}

Practical Fix 2: Object Pooling for High-Frequency Events

If you are dealing with mouse tracking, touch events, or WebSockets, you should be using an Object Pool. I recently worked on a canvas-based drawing tool where every mouse move created a "Point" object. The INP was hovering around 250ms. By pooling the points, we dropped it to 40ms.

class PointPool {
    constructor(size) {
        this.pool = Array.from({ length: size }, () => ({ x: 0, y: 0 }));
        this.free = size; // top of the free stack
    }

    get(x, y) {
        // Pop in O(1). No closure is allocated here, unlike a find() scan.
        const point = this.free > 0 ? this.pool[--this.free] : { x: 0, y: 0 };
        point.x = x;
        point.y = y;
        return point;
    }

    release(point) {
        // Callers must release each point exactly once.
        if (this.free < this.pool.length) this.pool[this.free++] = point;
    }
}

const myPool = new PointPool(100);

// In your high-frequency event
window.addEventListener('mousemove', (e) => {
    const p = myPool.get(e.clientX, e.clientY);
    renderCursor(p);
    myPool.release(p); // Put it back for the next frame
});

Practical Fix 3: The Hidden Cost of ...spread

Object spread behaves much like Object.assign({}, ...): it always creates a brand-new object. In many cases, developers use spread to update a single property in a large object.

If you are inside a critical path (like a requestAnimationFrame or a heavy data-processing loop), mutation is not a sin—it's a tool.

// GC Pressure High
const newState = { ...state, count: state.count + 1 };

// GC Pressure Zero
state.count++;

I know, I know. "But my Redux state must be immutable!"

Fine. Keep your global state immutable. But the internal processing of your data doesn't have to be. Use a "Draft" pattern (like Immer) or simply perform your heavy transformations using mutable logic, then commit the final result as a single new object at the end.
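A minimal sketch of that commit-at-the-end idea, using a hypothetical state shape with `sum`, `max`, and `count` fields:

```javascript
function absorbTicks(state, ticks) {
    // Mutable scratch variables do the heavy lifting: zero allocations in the loop.
    let sum = state.sum;
    let max = state.max;
    for (let i = 0; i < ticks.length; i++) {
        sum += ticks[i];
        if (ticks[i] > max) max = ticks[i];
    }
    // A single new object at the end keeps the external contract immutable.
    return { ...state, sum, max, count: state.count + ticks.length };
}
```

One allocation per batch instead of one per tick, and callers still never see mutation.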

The TypedArray Secret

If you are handling large amounts of numeric data (coordinates, pixel data, financial ticks), stop using Arrays of Objects. An array of 10,000 objects like {x: 1, y: 2} is a nightmare for V8's Scavenger. It has to track 10,000 separate pointers.

Use TypedArrays (like Float32Array). They store their data in a single, contiguous block of memory, and the Scavenger doesn't have to scan it element by element: V8 knows the block contains only numbers, not references.

// High Churn
const points = [{x: 1, y: 2}, {x: 3, y: 4}, ...];

// Zero Churn
const points = new Float32Array(20000); // 10,000 pairs of x, y
points[0] = 1; // x1
points[1] = 2; // y1
points[2] = 3; // x2
points[3] = 4; // y2

This is how high-performance libraries like Three.js and PixiJS handle thousands of objects without causing GC spikes.
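Reading the flat layout back out doesn't have to allocate either. A sketch: compute an aggregate over the x/y pairs and write the result into a caller-supplied object (the "out parameter" convention those libraries use; `centroid` is a made-up helper):

```javascript
// points: Float32Array laid out as [x0, y0, x1, y1, ...]
function centroid(points, out) {
    let sx = 0, sy = 0;
    for (let i = 0; i < points.length; i += 2) {
        sx += points[i];
        sy += points[i + 1];
    }
    const n = points.length / 2;
    out.x = sx / n; // write into the caller's object: no garbage created here
    out.y = sy / n;
    return out;
}
```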

Stop the Closure Churn

Closures are another hidden source of Minor GC. Every time you define a function inside another function, you're potentially creating a new object (the function itself) and an "environment" object to hold the closed-over variables.

// Every time 'heavyProcess' is called, 
// a new 'isValid' function is created.
function heavyProcess(items) {
    const threshold = 10;
    const isValid = (item) => item.value > threshold;
    
    return items.filter(isValid);
}

Move the helper functions outside or memoize them. It seems like a micro-optimization until you realize heavyProcess is being called 60 times a second.
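The fix is mechanical: hoist the helper to module scope so it is allocated once, not per call. (The closed-over `threshold` becomes a module constant here; if it genuinely varies per call, pass it as an argument instead.)

```javascript
const THRESHOLD = 10;

// Allocated once at module load, not on every heavyProcess() call.
const isValid = (item) => item.value > THRESHOLD;

function heavyProcess(items) {
    return items.filter(isValid);
}
```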

Why This Matters for INP

Google’s Core Web Vitals now measure responsiveness directly: INP replaced First Input Delay as a Core Web Vital in March 2024. While we used to worry about the main thread being blocked by long-running calculations, it is just as often blocked by the *cleanup* of our own convenience.

A Minor GC pause of 10ms might not seem like much, but when it happens right after a user taps their screen, it pushes the entire browser rendering pipeline back. It adds to the Input Delay, the Processing Duration, and the Presentation Delay.

Final Thoughts: The Middle Ground

I’m not advocating for writing 1990s-style C code in JavaScript. Immutability, closures, and high-level abstractions make our lives easier and our code safer.

However, we need to stop pretending that memory allocation is free.

If your INP is failing and you’ve already optimized your long tasks, look at the "Scavenge" count. If you're seeing more than 2 or 3 Minor GCs per second during interaction phases, you are sabotaging your performance with short-lived objects.

Clean up your loops, reuse your objects, and remember: The fastest garbage collection is the one that never has to run.