
What Nobody Tells You About the Image Decoder: Why Your 'Optimized' WebP Is Still Freezing the Main Thread
We dive into the browser's hidden rasterization pipeline to explain why perfectly compressed images can still trigger Interaction to Next Paint (INP) failures.
You’ve done everything by the book. You’ve run your assets through squoosh.app, converted your JPEGs to WebP, and your Lighthouse score is sitting at a comfortable 98. Yet, when a user scrolls down your landing page or tries to click a menu button while a hero image is loading, there’s a perceptible, frustrating stutter.
We’ve spent a decade obsessing over "bytes on the wire," but we’ve largely ignored what happens once those bytes actually reach the browser. The reality is that your users don't feel the file size; they feel the CPU cost of decompression.
The Lie of the Small File Size
The biggest misconception in web performance is that a 50KB WebP is "lighter" than a 200KB JPEG in every way that matters. On the network? Absolutely. But inside the browser’s memory and CPU, they might be identical—or the WebP might actually be more expensive.
When a browser downloads an image, it can't just slap those compressed bytes onto the screen. It has to go through a process called Decoding.
Regardless of whether your image is a 20KB SVG, a 50KB WebP, or a 5MB PNG, once it is decoded for painting, it takes up:
Width x Height x 4 bytes (RGBA)
A 2000x2000 pixel image, even if it’s a highly compressed 40KB WebP, requires 16 million bytes (16MB) of RAM once it's decoded. If your site triggers this decoding process on the main thread during a critical user interaction, your Interaction to Next Paint (INP) is going to tank.
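That per-pixel cost is easy to compute yourself. A quick sketch (the helper names here are mine, not a browser API):

```javascript
// Decoded bitmap memory depends only on pixel dimensions, never on file size:
// width * height * 4 bytes (one byte each for R, G, B, and alpha).
const decodedBytes = (width, height) => width * height * 4;
const toMB = (bytes) => bytes / 1e6;

console.log(decodedBytes(2000, 2000)); // 16000000 bytes
console.log(toMB(decodedBytes(2000, 2000))); // 16 (MB)
```

Run this against your own hero image's natural dimensions and the numbers are usually sobering.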
The Image Decoding Pipeline: Where the Lag Lives
To understand why your "optimized" images are freezing the UI, we have to look at the browser’s internal pipeline. Most developers think the process is:
Download -> Display
In reality, it looks more like this:
1. Fetch: The bytes arrive.
2. Identify: The browser reads the header to figure out the format and dimensions.
3. Decode: The CPU takes the compressed data (WebP/AVIF/JPEG) and turns it into a raw bitmap (RGB/RGBA).
4. Rasterize: The bitmap is turned into instructions for the GPU.
5. Composite/Paint: The pixels finally hit the screen.
The Decode step is the silent killer. For complex formats like WebP and especially AVIF, the prediction, transform, and entropy-decoding work required to "unpack" those pixels is significantly more intensive than the comparatively simple discrete cosine transform pipeline of a baseline JPEG. On a high-end MacBook, you won’t notice it. On a mid-range Android device with a budget processor, decoding a large, highly compressed image can lock up the main thread for 100ms or more.
Why This Triggers INP Failures
Interaction to Next Paint (INP) measures how long it takes the browser to respond to a user input (like a click or a keypress) and actually present a frame.
If a user clicks a "Buy Now" button at the exact millisecond the browser starts decoding a large WebP image located just off-screen, the browser is stuck. It's busy grinding through the math of decompression. Because decoding—by default—often happens synchronously on the main thread just before painting, the browser cannot respond to the click. The user sees nothing change, feels the lag, and your INP score turns red.
Measuring the Invisible: How to See Decoding Costs
You can't fix what you can't measure. You won't see "Image Decoding" in your basic network tab. You have to go into the Performance Profile in Chrome DevTools.
Look for the "Image Decode" events in the flame chart. In recent Chrome versions these often appear on the Raster threads rather than the main thread, but the scheduling and synchronization cost still lands on the main thread. If you see a long decode task while your CPU is pegged at 100%, you’ve found your culprit.
Alternatively, you can approximate these costs in the field. There is no standard "image-decode" performance entry; the closest widely available proxy is the Element Timing API (currently Chromium-only), which reports the gap between an image finishing its load and actually rendering:
const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    // renderTime - loadTime includes decode time plus paint scheduling
    const delay = entry.renderTime - entry.loadTime;
    console.log(`${entry.identifier}: ~${delay.toFixed(1)}ms from load to render`);
  });
});
// Requires opting the image in with an attribute:
// <img src="heavy-hero.webp" elementtiming="hero" alt="...">
// Treat this as a proxy, not an exact decode measurement.
observer.observe({ type: 'element', buffered: true });
The Solution Part 1: The decoding Attribute
The simplest way to stop an image from hijacking the main thread is a tiny HTML attribute that most developers ignore: decoding="async".
<!-- The Default: Can block the main thread -->
<img src="heavy-hero.webp" alt="Background">
<!-- The Fix: Tells the browser to decode the image off-main-thread -->
<img src="heavy-hero.webp" decoding="async" alt="Background">
By adding decoding="async", you are giving the browser permission to finish the rest of the page layout and handle user inputs *before* it finishes processing the image pixels. The image might "pop in" a few milliseconds later, but the button the user just clicked will actually respond.
Gotcha: Don't use decoding="async" for your LCP (Largest Contentful Paint) image if you can help it. You want that image to appear as fast as possible. But for everything else? It should be your default.
The Solution Part 2: The decode() API
If you are loading images dynamically via JavaScript—say, for a gallery or a slider—you shouldn't just append the element to the DOM immediately. If you do, the browser will likely freeze the UI the moment it tries to render that new element.
Instead, use the decode() method. It returns a promise that resolves once the image is fully "unpacked" in memory.
const loadImage = async (src) => {
const img = new Image();
img.src = src;
try {
// This happens off-main-thread!
await img.decode();
// Now it is safe to append to the DOM
// The browser already has the bitmap ready to go.
document.getElementById('gallery').appendChild(img);
img.classList.add('fade-in');
} catch (err) {
console.error("Decoding failed", err);
}
};
loadImage('high-res-asset.webp');
By using await img.decode(), you ensure that the heavy lifting is done *before* the image becomes part of the layout. This is the difference between a jerky, stuttering transition and a smooth 60fps experience.
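img.decode() is well supported in modern browsers, but if you need to be defensive you can fall back to the load event. A minimal sketch (readyToInsert is my own helper name, not a standard API):

```javascript
// Resolve once the image is safe to insert: prefer decode(), fall back to 'load'.
const readyToInsert = (img) =>
  typeof img.decode === 'function'
    ? img.decode()
    : new Promise((resolve, reject) => {
        img.onload = () => resolve();
        img.onerror = () => reject(new Error('Image failed to load'));
      });
```

In practice every evergreen browser ships decode(), so the fallback mostly matters for very old engines and for unit tests.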
The Solution Part 3: The "Resolution Trap"
We often optimize for *file size* when we should be optimizing for *dimensions*.
I recently consulted for an e-commerce site where the developers were proud of their 80KB WebP product images. However, the images were 3000px wide, being scaled down via CSS to fit a 300px container.
While the 80KB download was fast, the browser still had to:
1. Decode 3000x3000px (36MB of raw data).
2. Store that 36MB in memory.
3. Execute a downsampling algorithm to shrink it to 300px.
This is a massive waste of CPU cycles. The browser is doing the work of a photo editor every time the page loads.
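The waste in that three-step process is quantifiable. A rough sketch (decodeWaste is a hypothetical helper, not a browser API):

```javascript
// Compare what the browser must decode against what the layout actually needs.
const decodeWaste = (naturalW, naturalH, displayW, displayH, dpr = 1) => {
  const decoded = naturalW * naturalH * 4;                 // bytes actually decoded
  const needed = (displayW * dpr) * (displayH * dpr) * 4;  // bytes the layout needs
  return { decoded, needed, factor: decoded / needed };
};

// 3000x3000 source displayed in a 300x300 container at 1x DPR:
console.log(decodeWaste(3000, 3000, 300, 300).factor); // 100
```

A factor of 100 means the browser decoded one hundred times more pixels than the screen ever showed.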
The Fix: Proper Responsive Images
Use srcset not just for high-DPI screens, but to ensure the browser never has to decode more pixels than it actually needs to show.
<img
src="product-small.webp"
srcset="product-small.webp 300w,
product-medium.webp 600w,
product-large.webp 1200w"
sizes="(max-width: 600px) 300px, 600px"
decoding="async"
loading="lazy"
alt="Product Detail"
>
By providing properly sized versions, you reduce the memory pressure and the time the CPU spends in the decoding phase. A 300px image takes roughly 360KB of memory when decoded, compared to the 36MB of the 3000px version. That is a 100x reduction in memory and significantly less CPU work.
AVIF: The Next Frontier (and Next Bottleneck)
If you thought WebP decoding was expensive, wait until you meet AVIF. AVIF offers incredible compression, with files often 20-50% smaller than WebP at equivalent quality. But that compression comes at a price. AVIF is based on the AV1 video codec, and decoding an AVIF image is essentially like decoding a single keyframe of a high-complexity 4K video.
If you are switching to AVIF, you must be diligent about:
1. Strictly sizing your images.
2. Using `decoding="async"`.
3. Providing a WebP fallback for lower-powered devices that might struggle with the AV1 math.
<picture>
<source srcset="image.avif" type="image/avif">
<source srcset="image.webp" type="image/webp">
<img src="image.jpg" decoding="async" loading="lazy" alt="Contextual description">
</picture>
The Hidden Interaction: Script Execution vs. Decoding
There's a subtle edge case I've seen in the field. If you have a lot of JavaScript running (React hydration, third-party analytics, etc.) and you are also decoding several large images simultaneously, they are all fighting for the same CPU cores.
Even though the browser *tries* to move decoding to worker threads, there is still a synchronization cost on the main thread. If you are seeing high "Task Duration" in your performance tab, try staggering your image reveals.
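One way to stagger the work is to serialize the decodes so only one image is "unpacking" at a time. A minimal sketch (runSequentially is a hypothetical helper, not a browser API):

```javascript
// Run async tasks one at a time so decodes don't pile up on the same CPU cores.
const runSequentially = async (tasks) => {
  const results = [];
  for (const task of tasks) {
    // The next decode does not start until the previous one has finished.
    results.push(await task());
  }
  return results;
};

// In the browser, wrap each image decode in a task (sources is assumed):
// await runSequentially(sources.map((src) => async () => {
//   const img = new Image();
//   img.src = src;
//   await img.decode();
//   document.getElementById('gallery').appendChild(img);
// }));
```

This trades a slightly longer total reveal time for a main thread that stays responsive throughout.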
Don't load 20 images at once. Use the IntersectionObserver to only trigger the src assignment (and thus the decode) when the image is nearing the viewport.
const observer = new IntersectionObserver((entries) => {
entries.forEach(entry => {
if (entry.isIntersecting) {
const img = entry.target;
// Start the decode process only when close to the viewport
img.src = img.dataset.src;
observer.unobserve(img);
}
});
}, { rootMargin: '200px' }); // Start loading 200px before the image enters the viewport
document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
Summary: Stop Chasing Bytes, Start Chasing Cycles
The next time you're looking at your site's performance, don't just look at the Waterfall chart in the Network tab. Look at the Main Thread.
If your "optimized" WebP is 4000px wide, it’s not optimized. If it’s being decoded synchronously while your JavaScript is trying to hydrate your app, it’s a performance bottleneck.
The checklist for a truly optimized image strategy:
1. Dimensions > File Size: Don't ship more pixels than the display needs.
2. `decoding="async"`: Make it your default for non-critical images.
3. Use `img.decode()`: For any JS-driven image logic.
4. Mind the CPU: Remember that AVIF and WebP take more brainpower to unpack than JPEG.
5. Audit for INP: Watch for "Image Decode" events during interaction tests in DevTools.
Performance isn't just about how fast a file downloads; it's about how quickly a device can turn those bytes into an experience. Don't let your image decoder be the reason your site feels "janky."


