loke.dev

The Sub-Frame Precision Gap: Why Your Canvas UI Feels 'Laggy' on High-End Hardware

Reclaim the high-frequency polling data trapped between browser frames to build web-based drawing tools that actually rival native performance.

· 4 min read

Your fancy 1000Hz gaming mouse and $3,000 liquid-cooled rig are likely being sabotaged by the very software you’re building. You might think your Canvas-based drawing app is "smooth" because it hits 60 frames per second, but to a user with a high-end stylus or a high-polling-rate mouse, your app probably feels like drawing with a piece of soap on a wet floor.

The culprit isn't your render loop or your GPU. It’s the Sub-Frame Precision Gap.

The 60Hz Illusion

Most developers hook into mousemove or pointermove, update a coordinate, and wait for the next requestAnimationFrame (rAF) to paint. On a standard 60Hz monitor, rAF fires every 16.6ms.

Here is the problem: a modern gaming mouse polls at 1000Hz (once every 1ms). A Wacom tablet or Apple Pencil isn't far behind. This means that between the moment your last frame finished and your current frame started, the user’s input device sent 15 or 16 distinct location updates.
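The arithmetic above is worth sanity-checking in a one-liner (the function name is just for illustration):

```javascript
// Rough arithmetic: how many input samples land inside one display frame?
function samplesPerFrame(pollHz, refreshHz) {
  return Math.floor(pollHz / refreshHz);
}

samplesPerFrame(1000, 60);  // a 1000Hz mouse hides ~16 updates per 60Hz frame
samplesPerFrame(1000, 144); // even at 144Hz, ~6 updates per frame go unseen
```

Note that a faster monitor shrinks the gap but never closes it.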

If you only use the coordinates provided by the most recent PointerEvent, you are effectively throwing away over 90% of the user's movement data. You aren't drawing a curve; you're drawing a series of jagged "connect-the-dot" chords that look terrible when the user moves quickly.

The "Naive" Way (Stop Doing This)

Usually, we see code that looks like this:

let isDrawing = false;

canvas.addEventListener('pointerdown', () => { isDrawing = true; });
canvas.addEventListener('pointerup', () => { isDrawing = false; });

canvas.addEventListener('pointermove', (e) => {
  if (!isDrawing) return;

  // This only gives us the SINGLE most recent point.
  // We lose everything that happened since the last event fired.
  drawSegment(e.clientX, e.clientY);
});

When the user swings their mouse fast, drawSegment gets called with points that are 50 pixels apart. The result? A geometric, low-poly mess where a smooth circle should be.

Reclaiming the Lost Data: getCoalescedEvents()

Modern browsers (Chromium and Firefox, specifically) realized this was a disaster for creative tools. They introduced a method on the PointerEvent object called getCoalescedEvents().

This method returns an array of every single pointer event that occurred since the last time the browser dispatched an event to your code. It’s the "hidden" data trapped in the sub-frame gap.

Here is how you actually implement high-precision drawing:

canvas.addEventListener('pointermove', (e) => {
  if (e.buttons !== 1) return;

  // Check if the browser supports the API
  if (typeof e.getCoalescedEvents === 'function') {
    // We iterate through EVERY sub-frame point.
    // event.clientX and event.clientY are high-precision here.
    for (const event of e.getCoalescedEvents()) {
      renderPathPoint(event.clientX, event.clientY, event.pressure);
    }
  } else {
    // Fallback for browsers living in the stone age
    renderPathPoint(e.clientX, e.clientY, e.pressure);
  }
});

By iterating through the coalesced events, you go from 60 points per second to potentially 1000. The difference in "feel" is immediate. The lines follow the cursor with a fidelity that feels native, not "webby."

Predicting the Future (Literally)

Even with every coalesced point, there is still input latency. By the time the browser draws those points to the screen, the user's hand has already moved another 8-16ms ahead. This creates the "lag" where the line trails behind the brush tip.
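You can put a number on that trailing gap with back-of-envelope math (the velocity figure below is an assumed example, not a measurement):

```javascript
// How far does the ink trail behind the cursor? Distance = speed * latency.
function lagDistancePx(velocityPxPerMs, latencyMs) {
  return velocityPxPerMs * latencyMs;
}

lagDistancePx(2, 16); // a brisk 2 px/ms stroke trails the brush tip by 32 px
```

At fast sketching speeds, a 32-pixel gap between brush tip and ink is very visible.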

To solve this, we use getPredictedEvents().

This API uses an internal heuristic algorithm to guess where the pointer *will* be in the next few milliseconds based on current velocity and acceleration. It sounds like magic (or a recipe for glitches), but for low-latency ink, it's essential.

const draw = (e) => {
  // 1. Clear the "prediction layer" or temporary canvas
  clearPredictionCanvas();

  // 2. Commit the real, historical data
  const historical = e.getCoalescedEvents ? e.getCoalescedEvents() : [e];
  historical.forEach(point => commitPointToPermanentCanvas(point));

  // 3. Draw the predicted data to a temporary layer
  if (e.getPredictedEvents) {
    const predicted = e.getPredictedEvents();
    predicted.forEach(point => drawToPredictionLayer(point));
  }
};

Warning: Never "commit" predicted events to your actual image data. Predictions can be wrong. You should only use them to draw a temporary "lead" on the line that gets wiped and replaced by real data in the next frame.
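One way to honor that rule is to keep the prediction stroke on its own transparent overlay canvas and pass both drawing contexts in explicitly. This is a sketch with made-up names (paintFrame, inkCtx, overlayCtx); any object exposing the standard 2D-context methods works, which also makes the routing logic testable outside a browser:

```javascript
// inkCtx: 2D context of the permanent canvas. overlayCtx: 2D context of a
// transparent canvas stacked on top of it. Only real points ever touch inkCtx.
function paintFrame(e, inkCtx, overlayCtx, width, height) {
  // Commit historical (coalesced) data permanently.
  const real = e.getCoalescedEvents ? e.getCoalescedEvents() : [e];
  for (const point of real) {
    inkCtx.lineTo(point.clientX, point.clientY);
  }
  inkCtx.stroke();

  // The overlay is wiped and redrawn from scratch every frame, so a wrong
  // guess never survives longer than ~16ms.
  overlayCtx.clearRect(0, 0, width, height);
  if (e.getPredictedEvents) {
    overlayCtx.beginPath();
    overlayCtx.moveTo(e.clientX, e.clientY);
    for (const point of e.getPredictedEvents()) {
      overlayCtx.lineTo(point.clientX, point.clientY);
    }
    overlayCtx.stroke();
  }
}
```

Because predicted points never reach inkCtx, a bad prediction costs you one frame of a slightly wrong "lead", nothing more.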

The Mathematical Catch: De-duplication

One thing I found out the hard way: getCoalescedEvents() can sometimes include the main event itself, and depending on the OS/Browser combo, you might get duplicate coordinates if the hardware polling rate is lower than the refresh rate (rare, but it happens).

Always check your distances. If the distance between point[n] and point[n-1] is zero, skip it. Your stroke simplification algorithms (like Ramer-Douglas-Peucker) will thank you later.
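A minimal filter for that check might look like this (dedupePoints is a hypothetical helper operating on plain {x, y} objects; the epsilon default is an arbitrary choice):

```javascript
// Drop any point that is (nearly) coincident with the previously kept point.
function dedupePoints(points, epsilon = 0.01) {
  const kept = [];
  for (const p of points) {
    const prev = kept[kept.length - 1];
    // Skip if we haven't moved at least epsilon pixels since the last point.
    if (prev && Math.hypot(p.x - prev.x, p.y - prev.y) < epsilon) continue;
    kept.push(p);
  }
  return kept;
}

dedupePoints([{ x: 0, y: 0 }, { x: 0, y: 0 }, { x: 3, y: 4 }]); // keeps 2 points
```

Run it on each batch of coalesced points before they hit your stroke pipeline.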

Why Does This Matter?

If you're building a simple button, you don't need this. But if you're building:
1. A whiteboarding tool (like Excalidraw or FigJam).
2. A photo editor or digital painting app.
3. A signature pad.

Then ignoring the sub-frame gap is the difference between a tool that feels like a toy and a tool that feels like a professional instrument.

We spend so much time optimizing our render() functions and shaders, but we often forget that garbage data in equals garbage frames out. Start polling the hardware for the truth, and stop relying on the rhythm of the display refresh.