loke.dev

Why Your IndexedDB Iteration Is Silently Throttling Your App

Why the classic cursor-based approach to reading web storage is sabotaging your application's startup time and how to switch to a high-throughput bulk-loading strategy.

· 9 min read

If you open up a standard tutorial on IndexedDB, you'll almost certainly find an example featuring openCursor(). It’s the "standard" way to read data. You open a cursor, you wait for the onsuccess event, you grab the value, and you call cursor.continue(). It looks clean, it feels idiomatic, and for 50 records, it works perfectly.

But then your app grows. Your local cache hits 5,000 items. Suddenly, that "lightning-fast" offline-first experience feels like it’s wading through molasses. You check your requestAnimationFrame logs and see massive drops. You check your startup time, and it’s bloated by hundreds of milliseconds.

The culprit isn't the disk speed. It isn't even the size of your data. The culprit is the "ping-pong" effect of the IndexedDB cursor API. Every time you call continue(), you are paying a tax that most developers don't realize they've signed up for.

The Event Loop Ping-Pong

To understand why cursors are slow, you have to look at what happens behind the scenes. JavaScript is single-threaded (mostly), but IndexedDB is not. When you interact with IndexedDB, you aren't talking to a local variable; you are communicating with a separate database process or a storage thread managed by the browser.

When you use a cursor to iterate over 1,000 items, this is the sequence of events:

1. Main Thread: "Hey, give me the first item."
2. Storage Thread: Finds the item, serializes it, and sends it back.
3. Main Thread: onsuccess fires. You process the item. You call cursor.continue().
4. Main Thread: Goes back to the event loop.
5. Storage Thread: Finds the next item, serializes it, and sends it back.
6. Main Thread: onsuccess fires again...

Repeat this 1,000 times. You are performing 1,000 asynchronous round-trips. Even if the storage thread is incredibly fast, you are forcing the browser to schedule 1,000 separate tasks on the main thread's event loop. Between each continue() and the next onsuccess, other tasks (like UI rendering or input handling) can sneak in. While that sounds good for responsiveness, the overhead of the "context switch" between your code and the internal IDB state machine is massive.

I’ve benchmarked this across different browsers. On a mid-range Android device, the overhead of a single cursor "hop" can be anywhere from 0.5ms to 2ms. If you have 2,000 items, you’re looking at up to 4 seconds of just *management overhead*, before you've even spent a millisecond actually processing your data.
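The scheduling cost can be demonstrated without IndexedDB at all. The sketch below is not a real IDB benchmark — each setTimeout stands in for one storage-thread reply, so per-item delivery burns one event-loop turn per record while bulk delivery burns a single turn. All names here are made up for illustration.

```javascript
// One scheduled task per item, mimicking the cursor.continue() ping-pong.
function deliverOneByOne(items) {
  return new Promise((resolve) => {
    const results = [];
    let turns = 0;
    const step = (i) => {
      if (i >= items.length) return resolve({ results, turns });
      setTimeout(() => {
        turns += 1;            // each "reply" costs a full event-loop turn
        results.push(items[i]);
        step(i + 1);
      }, 0);
    };
    step(0);
  });
}

// One scheduled task total, mimicking a single getAll() delivery.
function deliverInBulk(items) {
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve({ results: [...items], turns: 1 });
    }, 0);
  });
}
```

Timing `deliverOneByOne` against `deliverInBulk` on a few thousand items makes the per-turn tax visible even though no storage is involved.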

The "Bulk" Solution: getAll()

A few years ago, the spec authors recognized that cursors were a bottleneck for high-performance apps, and IndexedDB 2.0 introduced IDBObjectStore.getAll() and IDBIndex.getAll().

Instead of asking for one item at a time, getAll() asks the storage thread to grab everything that matches a range (or the whole store) and send it back in one single, massive delivery.

// The Slow Way (Cursor)
const getAllSlowly = (db) => {
  return new Promise((resolve, reject) => {
    const results = [];
    const transaction = db.transaction("products", "readonly");
    const store = transaction.objectStore("products");
    const request = store.openCursor();

    request.onerror = () => reject(request.error);
    request.onsuccess = (event) => {
      const cursor = event.target.result;
      if (cursor) {
        results.push(cursor.value);
        cursor.continue();
      } else {
        resolve(results);
      }
    };
  });
};

// The Fast Way (Bulk)
const getAllQuickly = (db) => {
  return new Promise((resolve, reject) => {
    const transaction = db.transaction("products", "readonly");
    const store = transaction.objectStore("products");
    const request = store.getAll(); // The magic happens here

    request.onerror = () => reject(request.error);
    request.onsuccess = (event) => {
      resolve(event.target.result);
    };
  });
};

The difference is night and day. With getAll(), you have one request, one cross-thread communication, and one event loop task. In my tests, fetching 5,000 small objects via getAll() is often 10x to 20x faster than using a cursor.

Why Cursors Exist at All

If getAll() is so much faster, why do we still have cursors? Why were they the default for so long?

The answer is memory.

getAll() pulls every requested record into memory at once. If you have a database of 100,000 high-resolution images stored as Blobs, calling store.getAll() will likely crash your browser tab (or at least trigger a massive GC hit that freezes the UI).

Cursors were designed for a time when "low memory" was the default state of the web. They allow you to process data in a streaming fashion. If you only need to find *one* specific item or if you are looking for the first item that matches a complex condition that can't be expressed in an IDBKeyRange, a cursor makes sense.
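That early-exit use case looks something like this — a hedged sketch assuming a "products" store (the store name and predicate are illustrative); the predicate is deliberately something an IDBKeyRange can't express:

```javascript
// A pure, testable predicate the index can't answer for us.
const isDiscontinuedBargain = (p) => p.discontinued && p.price < 10;

// Walk the store and stop at the first match — the cursor's real strength:
// we never deserialize records past the one we wanted.
function findFirst(db, storeName, predicate) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction(storeName, "readonly");
    const request = tx.objectStore(storeName).openCursor();
    request.onerror = () => reject(request.error);
    request.onsuccess = (e) => {
      const cursor = e.target.result;
      if (!cursor) return resolve(undefined);                   // exhausted, no match
      if (predicate(cursor.value)) return resolve(cursor.value); // early exit
      cursor.continue();
    };
  });
}
```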

But let's be real: most of us are using IndexedDB to store JSON metadata, user preferences, or small cached API responses. For the vast majority of web app use cases, your entire dataset is likely under 10MB. In that world, getAll() is objectively better.

The Middle Ground: Chunked Iteration

What if you actually *do* have a lot of data? Say, 50,000 records, where you can't risk a single getAll() but a cursor is too slow?

I’ve found that the best approach is a "chunked" strategy: fetch a bounded batch of records, remember the last key you saw, and start the next batch just past it using an IDBKeyRange.

The limiting piece is simpler than it sounds: getAll() accepts an optional count parameter as its second argument, which caps how many records each call returns.

async function* chunkedLoader(db, storeName, chunkSize = 100) {
  let lastKey = null;
  let done = false;

  while (!done) {
    const items = await new Promise((resolve, reject) => {
      const tx = db.transaction(storeName, "readonly");
      const store = tx.objectStore(storeName);

      // If we have a lastKey, start the range just past it (exclusive bound)
      const range = lastKey ? IDBKeyRange.lowerBound(lastKey, true) : null;
      const request = store.getAll(range, chunkSize);

      request.onerror = () => reject(request.error);
      request.onsuccess = (e) => resolve(e.target.result);
    });

    if (items.length === 0) {
      done = true;
    } else {
      lastKey = items[items.length - 1].id; // Assuming 'id' is the keyPath
      yield items;
      if (items.length < chunkSize) done = true;
    }
  }
}

// Usage
for await (const chunk of chunkedLoader(db, "large_collection")) {
  processItems(chunk); 
}

This gives you the best of both worlds. You get the throughput of bulk loading with the memory safety of cursors. By fetching 100 or 500 items at a time, you drastically reduce the number of event loop turns while keeping the memory footprint predictable.

The Index Trap

Iteration speed is also heavily affected by *how* you access the data. Accessing an Object Store directly by its primary key is the fastest operation in IndexedDB. Accessing it via an Index is slower because the browser has to look up the value in the index and then "hop" over to the object store to get the actual value.

If you are calling index.getAll(), the browser is doing a lot of heavy lifting. It has to iterate the B-Tree of the index, find the primary keys, and then fetch the records.

If you only need the keys, for the love of performance, use index.getAllKeys().

I've seen developers fetch 2,000 full objects just to pluck a single ID property from them. That’s a massive waste of CPU and memory. When you call getAll(), every single object has to be "Structured Cloned." The browser takes the raw data from the disk, creates a brand new JavaScript object, and copies all the properties. If you fetch 2,000 objects, that's 2,000 clones. If you fetch 2,000 keys (which are usually just strings or numbers), the overhead is negligible.
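As a concrete sketch of the keys-only approach — the store name and the tiny request-to-promise helper (`toPromise`, my own naming) are assumptions for illustration:

```javascript
// Minimal helper: turn a single IDBRequest into a promise.
const toPromise = (request) =>
  new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });

async function getProductIds(db) {
  const store = db.transaction("products", "readonly").objectStore("products");
  // Wasteful: (await toPromise(store.getAll())).map((p) => p.id)
  // clones every full object just to read one property.
  // Cheap: keys are plain strings/numbers — no structured clone of values.
  return toPromise(store.getAllKeys());
}
```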

Real-World Optimization: The "Pre-flight" Key Check

I recently worked on a synchronization engine where I needed to compare 10,000 local items with a server-side manifest. Initially, I used a cursor to iterate through and check versions. It took roughly 800ms on a desktop machine.

I switched to this pattern:
1. Fetch all local keys and version timestamps using index.getAll(). I used an index that covered [id, version].
2. Because the index contained the version, getAll() on the index returned an array of small arrays ([id, version]).
3. I performed the diff in pure JavaScript (very fast).
4. I only fetched the *actual* full objects for the items that needed updating.

Performance went from 800ms to about 40ms. By avoiding the cursor and only fetching the bare minimum data needed for the logic, the "bottleneck" simply vanished.
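The diff step from that pattern is pure JavaScript and easy to sketch. This assumes the index getAll() returned entries shaped [id, version] and that the server manifest is a Map of id → version — both shapes are illustrative, not the exact ones from that project.

```javascript
// Compare local [id, version] pairs against a server manifest and return
// only the ids whose full objects actually need fetching.
function diffVersions(localEntries, serverManifest) {
  const stale = [];
  const localIds = new Set();
  for (const [id, version] of localEntries) {
    localIds.add(id);
    const serverVersion = serverManifest.get(id);
    if (serverVersion !== undefined && serverVersion > version) stale.push(id);
  }
  // Items the server has that we've never seen locally also need fetching.
  for (const id of serverManifest.keys()) {
    if (!localIds.has(id)) stale.push(id);
  }
  return stale; // only these get a full-object get() or getAll()
}
```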

Dealing with the Structured Clone Algorithm

We can't talk about IndexedDB speed without mentioning the Structured Clone Algorithm. Every time data moves out of IDB, it is cloned. This is deep-copying. If your objects have deeply nested arrays or large strings, the time spent in onsuccess isn't just "your code" — it's the browser laboriously reconstructing that object in your heap.

When using getAll(), this happens all at once. If you find your app "freezing" right when the data returns, it's likely the Structured Clone Algorithm choking the main thread.

Pro tip: If you have massive objects, consider splitting them. Keep the "searchable" metadata in IndexedDB and store the "heavy" part (like a large JSON string or a massive array of coordinates) as a Blob. Browsers are much more efficient at handling Blobs because they don't have to parse them into JS objects until you actually need the data (via FileReader or Response).
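One hypothetical shape for that split — the store names, the "coordinates" field, and `splitRecord` itself are all made up for this sketch:

```javascript
// Separate the searchable metadata from the heavy payload, which becomes
// a Blob that IndexedDB can store opaquely (no structured clone of its
// contents on read until you explicitly decode it).
function splitRecord(record, heavyField) {
  const { [heavyField]: heavy, ...meta } = record;
  const blob = new Blob([JSON.stringify(heavy)], { type: "application/json" });
  return { meta, blob };
}

// Write both halves in one transaction so they stay consistent.
function saveSplit(db, record) {
  const { meta, blob } = splitRecord(record, "coordinates");
  const tx = db.transaction(["meta", "payloads"], "readwrite");
  tx.objectStore("meta").put(meta);
  tx.objectStore("payloads").put({ id: meta.id, blob });
  return new Promise((resolve, reject) => {
    tx.oncomplete = resolve;
    tx.onerror = () => reject(tx.error);
  });
}
```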

Transaction Lifecycles and Throughput

Another silent killer is transaction management. A common mistake is opening a new transaction for every single read operation in a loop.

// DO NOT DO THIS
for (const id of idsToFetch) {
  const tx = db.transaction("store", "readonly");
  const store = tx.objectStore("store");
  const item = await wrap(store.get(id)); // Some promise wrapper
  process(item);
}

Every time you create a transaction, the browser has to perform internal bookkeeping. It has to ensure data integrity, manage locks, and potentially spin up resources. If you have 50 IDs to fetch, do it in one transaction.

// DO THIS
const tx = db.transaction("store", "readonly");
const store = tx.objectStore("store");
const promises = idsToFetch.map((id) => {
  return new Promise((res, rej) => {
    const req = store.get(id);
    req.onsuccess = () => res(req.result);
    req.onerror = () => rej(req.error);
  });
});
const results = await Promise.all(promises);

While Promise.all with multiple get() calls is faster than individual transactions, a single getAll(IDBKeyRange.bound(minId, maxId)) is often even better if the IDs are contiguous. (There is no built-in way to fetch an arbitrary set of keys in one request; key ranges only cover contiguous runs.)
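A range-based version can be sketched like this — assuming numeric primary keys; `isContiguousRun` is my own helper name, not part of the IDB API:

```javascript
// Tiny pure helper: does this sorted id list form an unbroken numeric run?
const isContiguousRun = (ids) =>
  ids.every((id, i) => i === 0 || id === ids[i - 1] + 1);

// When the run is contiguous, one bound() range replaces N get() calls.
function getContiguous(db, storeName, ids) {
  return new Promise((resolve, reject) => {
    const store = db.transaction(storeName, "readonly").objectStore(storeName);
    // bound(lower, upper) is inclusive on both ends by default
    const request = store.getAll(IDBKeyRange.bound(ids[0], ids[ids.length - 1]));
    request.onerror = () => reject(request.error);
    request.onsuccess = () => resolve(request.result);
  });
}
```

Check `isContiguousRun` first and fall back to the Promise.all pattern above when the ids have gaps; otherwise the range would return records you didn't ask for.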

The Browser Differences

It's worth noting that Chromium (Chrome/Edge), WebKit (Safari), and Gecko (Firefox) handle IndexedDB very differently.

* Chromium: Uses a multi-process architecture. IDB usually lives in the Browser process, while your JS lives in the Renderer process. This makes the IPC (Inter-Process Communication) cost of cursors particularly high.
* Safari: Has historically had a rocky relationship with IndexedDB performance and stability. While it has improved, getAll() is significantly more optimized than cursors in recent WebKit versions.
* Firefox: Uses SQLite as a backend. Its cursor implementation is relatively robust, but the event loop overhead still applies.

In all three, getAll() is the clear winner for throughput.

Summary: When to Use What?

I don't believe in "never use cursors." I believe in "don't use cursors as your default."

Here is the hierarchy of retrieval I use when building production apps:

1. Need a single item? Use get(key).
2. Need a small, known set of items? Use a single transaction and fire off multiple get(key) requests in parallel, then Promise.all.
3. Need to load a collection for a list or view? Use getAll(range, limit).
4. Need to search? Use an index with getAll(keyRange).
5. Need to process 100MB+ of data? Use the "Chunked Loader" pattern (keyed getAll(range, limit) batches).
6. Need to migrate data or perform a complex filter that isn't indexed? This is the only place where the classic openCursor() remains the correct tool.

If your app feels slow, don't just blame the "web platform." The platform is actually quite fast if you stop making it talk to itself a thousand times a second. Kill the cursor ping-pong, switch to bulk loading, and watch your startup times drop. Your users (and their battery life) will thank you.