
I Finally Stopped Paying the 'Array Tax': How JavaScript Iterator Helpers Rescued My Memory Usage

Stop creating intermediate arrays for every transformation—learn how to use lazy evaluation to keep your memory footprint low and your code clean.



Have you ever looked at a chain of .filter().map().slice() and wondered just how many thousands of objects you’re forcing your user's RAM to juggle simultaneously?

For years, we’ve been writing "clean" functional JavaScript that, under the hood, is actually quite rude to the garbage collector. Every time you chain a standard array method, JavaScript allocates a brand-new array. If you have 100,000 items and you chain four operations, you've briefly occupied space for up to 400,000 items.

I call this the Array Tax. It’s the price we pay for readability. But thanks to the new JavaScript Iterator Helpers, we can finally stop paying it.

The Problem: The "Intermediate Array" Bloat

Let's look at a typical piece of code that processes a large dataset. Imagine we’re dealing with a massive list of system logs.

const logs = getMassiveLogs(); // Let's say 100,000 entries

const criticalErrors = logs
  .filter(log => log.type === 'error')
  .map(log => log.message)
  .filter(msg => msg.includes('database'))
  .slice(0, 10);

On the surface, this is beautiful. It’s declarative. But here is what's actually happening:
1. filter creates a new array of all errors.
2. map creates a new array of just the messages.
3. The second filter creates yet another new array.
4. slice creates a final new array of 10 items.

If that initial logs array is huge, you’re potentially spiking memory usage and triggering the Garbage Collector to work overtime, which leads to those annoying micro-stutters in the UI.

Enter Iterator Helpers: The Lazy Revolution

Iterator helpers (now landing in modern browsers and Node.js) allow us to perform these transformations lazily. Instead of processing the whole list at every step, the iterator only does the work when you actually ask for the next value.

It’s like an assembly line where the worker at the end only asks for a part when they are ready to package it, rather than having a giant pile of half-finished parts dumped on their desk.

Here is how that same code looks using the new values() iterator and its helper methods:

const logs = getMassiveLogs();

// logs.values() returns an Iterator
const criticalErrors = logs.values()
  .filter(log => log.type === 'error')
  .map(log => log.message)
  .filter(msg => msg.includes('database'))
  .take(10); // 'take' is the iterator version of slice(0, 10)

// At this point, NO processing has happened. 
// The "work" only happens when we consume the iterator:
const result = Array.from(criticalErrors); 

Why this is a game changer:

1. Zero Intermediate Arrays: No extra arrays are created in the middle.
2. Short-Circuiting: In the array version, filter and map run on *every single item* even if we only need the first 10 results. In the iterator version, as soon as .take(10) gets its 10th item, the entire process stops. We don't even look at the rest of the 100,000 logs.
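You can watch the short-circuiting happen yourself. Here's a minimal sketch of the idea using plain generator functions (lazyFilter, lazyMap, and lazyTake are illustrative names I made up to mimic the helpers, not the spec API), so it runs even in environments that don't ship the native helpers yet:

```javascript
// Lazy pipeline stages, sketched as generators.
function* lazyFilter(iter, pred) {
  for (const x of iter) if (pred(x)) yield x;
}
function* lazyMap(iter, fn) {
  for (const x of iter) yield fn(x);
}
function* lazyTake(iter, n) {
  if (n <= 0) return;
  let i = 0;
  for (const x of iter) {
    yield x;
    if (++i >= n) return; // stop pulling as soon as we have n items
  }
}

// 100,000 fake logs: every even index is an error.
const logs = Array.from({ length: 100_000 }, (_, i) => ({
  type: i % 2 === 0 ? 'error' : 'info',
  message: `database issue #${i}`,
}));

let examined = 0;
const errors = lazyFilter(logs, log => {
  examined++; // count how many logs the filter actually looks at
  return log.type === 'error';
});
const first10 = [...lazyTake(lazyMap(errors, log => log.message), 10)];

console.log(first10.length); // 10
console.log(examined);       // 19 — only 19 of the 100,000 logs were examined
```

The array version would have run the filter callback 100,000 times; the lazy version stops the moment the tenth match is found.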

Transforming Non-Arrays

One of my favorite things about these helpers is that they work on anything that implements the iterable protocol, like Map or Set. Previously, if you wanted to filter a Map, you had to turn it into an array first.

const users = new Map([
  [1, { admin: true, name: 'Alice' }],
  [2, { admin: false, name: 'Bob' }],
  [3, { admin: true, name: 'Charlie' }]
]);

// The old way: Array Tax applied
const adminNames = Array.from(users.values())
  .filter(u => u.admin)
  .map(u => u.name);

// The new way: Efficient and clean
const adminNamesIter = users.values()
  .filter(u => u.admin)
  .map(u => u.name);

for (const name of adminNamesIter) {
  console.log(name); // Alice, Charlie
}

The "Gotchas" (Nothing is free)

As much as I love these new helpers, they aren't a drop-in replacement for *every* situation.

1. It’s a one-way street

An iterator is a stream. Once you've consumed it (e.g., by looping over it or using Array.from()), it's spent. You can't go back to the start. If you need to access the data multiple times, you’ll eventually have to convert it back to an array.
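You can see the one-way street with a plain array iterator, no helper methods required:

```javascript
const iter = ['a', 'b', 'c'].values();

const firstPass = [...iter];  // ['a', 'b', 'c']
const secondPass = [...iter]; // [] — the iterator is already spent

console.log(firstPass, secondPass);
```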

2. No Index Access

You can't do myIterator[5]. If you need random access to elements, you need an array.

3. Missing .length

Iterators don't know how long they are until they finish. If your UI logic depends on list.length to show a count, you'll still need to pay the array tax at some point.
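The usual pattern is to keep the pipeline lazy and materialize exactly once, at the very end, when you actually need a count or random access:

```javascript
const names = new Set(['Alice', 'Bob']).values();

console.log(names.length); // undefined — iterators don't carry a length

// Pay the array tax once, at the end of the pipeline:
const materialized = Array.from(names);
console.log(materialized.length); // 2
```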

Practical Example: Dealing with Infinite Streams

Because iterators are lazy, they can handle data sources that technically never end (like a paginated API wrapper or a mathematical sequence).

function* infiniteIdGenerator() {
  let id = 1;
  while (true) yield id++;
}

const oddIds = infiniteIdGenerator()
  .filter(id => id % 2 !== 0)
  .map(id => `ID-${id}`)
  .take(5);

console.log([...oddIds]); // ["ID-1", "ID-3", "ID-5", "ID-7", "ID-9"]

Try doing that with .filter() on an array. Your browser tab will explode before it finishes.

How to use it today

Iterator Helpers are available in Chrome (122+), Node.js (22+), and Firefox (131+), and are being implemented across the rest of the ecosystem. If you need to support older environments, the core-js polyfill has excellent support for them.
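If you're shipping to mixed environments, a coarse runtime check works: the helpers hang off the global Iterator prototype, so probing one method tells you whether they're present (this sketch assumes checking take alone is good enough for your case):

```javascript
// typeof on an undeclared global is safe, so this never throws.
const hasIteratorHelpers =
  typeof Iterator === 'function' &&
  typeof Iterator.prototype.take === 'function';

// Pick a code path (or load the core-js polyfill) based on the check.
const firstThree = hasIteratorHelpers
  ? [1, 2, 3, 4, 5].values().take(3).toArray()
  : [1, 2, 3, 4, 5].slice(0, 3);

console.log(firstThree); // [1, 2, 3] either way
```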

If you’re dealing with datasets larger than a few hundred items, or if you're working in memory-constrained environments (like Lambda functions or low-end mobile devices), give these a shot. Your RAM—and your users—will thank you.