loke.dev

A Native Definition of Visibility

The days of hacking around offsetParent and opacity: 0 are over thanks to a new DOM method that actually understands what 'visible' means in a modern CSS world.


Checking if an element is actually "visible" on a webpage has historically been a bit of a dark art. For years, we've relied on messy heuristics: checking if offsetWidth is greater than zero, verifying that getComputedStyle(el).display isn't none, or the classic "is the offsetParent null?" trick. These hacks worked most of the time, but they were fragile, slow, and completely ignored modern CSS properties like content-visibility.

Enter element.checkVisibility(). This isn't just another helper method; it's the first time the browser has given us a native, spec-compliant answer to the question: "Is this thing actually being rendered in a way a human could see it?"

The "Old Way" Was Kind of a Mess

Before we look at the new hotness, let's acknowledge the trauma of the old ways. If I wanted to see if a button was visible to prevent a user from clicking a ghost, I’d usually end up with something like this:

function isVisibleLegacy(el) {
  // Is it hidden via display: none?
  if (el.offsetWidth === 0 || el.offsetHeight === 0) return false;
  
  // Is it hidden via a parent?
  if (!el.offsetParent && getComputedStyle(el).position !== 'fixed') return false;
  
  // What about opacity? Or visibility: hidden? 
  // You'd have to manually check every single style property.
  const style = window.getComputedStyle(el);
  if (style.visibility === 'hidden' || style.opacity === '0') return false;

  return true;
}

It’s exhausting. And even this snippet fails to account for content-visibility: hidden (a performance-boosting CSS property) or elements tucked inside a closed <details> element.

Using checkVisibility()

The new API simplifies this into a single call. It returns a boolean based on whether the element is "rendered." By default, it returns false if the element or any of its ancestors is display: none, or if an ancestor hides it with content-visibility: hidden.

const myElement = document.querySelector('.sidebar-menu');

if (myElement.checkVisibility()) {
  // It's rendered! We can safely do things like focus it.
  myElement.classList.add('active');
}

This is already cleaner, but the real power lies in the options object. The browser doesn't assume what "visible" means to you, so it lets you toggle specific checks.

Checking for Opacity and Visibility Styles

By default, checkVisibility() doesn't care if your element has opacity: 0 or visibility: hidden. In the browser's eyes, those elements are still "rendered" in the layout. If you need to know if the element is *visually* transparent or hidden via CSS, you pass an options object:

const isTrulyVisible = element.checkVisibility({
  checkOpacity: true,      // Returns false if opacity is 0
  checkVisibilityCSS: true // Returns false if visibility is 'hidden' or 'collapse'
});

if (isTrulyVisible) {
  console.log("The element is not only in the DOM, but it's also not transparent!");
}
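If you always want the strictest interpretation, it can be worth wrapping this in a tiny helper. Here's a minimal sketch (the helper name is my own, not part of the API). Worth knowing: the spec later introduced opacityProperty and visibilityProperty as the preferred names for these two flags, with checkOpacity and checkVisibilityCSS kept as historic aliases — passing both spellings covers either generation of browser:

```javascript
// Convenience wrapper (name is mine) for the strictest built-in checks.
// `opacityProperty` / `visibilityProperty` are the newer spec names for
// `checkOpacity` / `checkVisibilityCSS`; setting both is harmless.
function isFullyVisible(el) {
  return el.checkVisibility({
    checkOpacity: true,       // historic alias
    opacityProperty: true,    // newer spec name
    checkVisibilityCSS: true, // historic alias
    visibilityProperty: true, // newer spec name
  });
}
```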

Why This Matters for Performance

I found this especially useful when dealing with content-visibility: auto. If you aren't familiar, content-visibility allows the browser to skip the rendering work (layout and painting) for off-screen elements until they are needed.

If you try to use old-school checks on an element that is currently skipped due to content-visibility, you might get inconsistent results. checkVisibility() is explicitly designed to understand these states. It knows that an element skipped for rendering isn't "visible" in the traditional sense, even if it hasn't been explicitly set to display: none.
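The options dictionary covers this case too: a contentVisibilityAuto flag tells the method to treat content that is currently being skipped under content-visibility: auto as not visible. A quick sketch — the helper name is my own, and this flag shipped later than the base method, so check support before relying on it:

```javascript
// Treat content currently skipped under `content-visibility: auto`
// (e.g. far off-screen) as not visible. Helper name is my own.
function isRenderedRightNow(el) {
  return el.checkVisibility({ contentVisibilityAuto: true });
}
```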

Real World Example: Focus Management

One of the most annoying bugs in web dev is trying to programmatically focus an input that is currently hidden behind a collapsed menu or a tab panel. Tabbing into a hidden element is a major accessibility failure.

const searchInput = document.querySelector('#global-search');

function openSearch() {
  const container = document.querySelector('.search-overlay');
  container.style.display = 'block';

  // The display change above applies synchronously, so checkVisibility()
  // already reflects it -- no need to wait a frame before checking.
  if (searchInput.checkVisibility()) {
    searchInput.focus();
  }
}
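The same guard generalizes to any focus target. Here's a small utility — the name and shape are my own, not a standard API — that only moves focus when the element is actually rendered, and reports whether it did:

```javascript
// Focus an element only if it is currently rendered; returns true when
// focus was attempted. Avoids "focusing" elements hidden via display: none.
function focusIfVisible(el) {
  if (el && typeof el.checkVisibility === 'function' && el.checkVisibility()) {
    el.focus();
    return true;
  }
  return false;
}
```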

A Few Gotchas (The "What It Isn't" Section)

I should be clear: checkVisibility() is not a replacement for IntersectionObserver.

1. Viewport checking: checkVisibility() will return true even if the element is 5,000 pixels down the page, as long as it isn't hidden via CSS. It checks *rendered state*, not *viewport position*. If you need to know if the user's eyeballs are currently looking at it, stick with IntersectionObserver.
2. Ancestors: This one has shifted over time. Early implementations only inspected the element's *own* opacity, so calling checkVisibility({ checkOpacity: true }) on the child of an opacity: 0 parent could still return true. The current spec walks ancestors for opacity, and since visibility inherits, the checkVisibilityCSS flag effectively covers ancestors too. If you support older browsers, don't rely on ancestor opacity being caught. display: none on an ancestor, however, has always been respected.
3. Browser Support: It's widely supported in modern Chrome, Edge, and Safari. Firefox has it in the stable release as well. If you’re supporting legacy browsers, you'll still need those old hacks as a fallback.
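Until those legacy browsers age out, one pragmatic pattern is feature detection: use checkVisibility() when it exists and fall back to the old heuristics otherwise. A rough sketch — the fallback is deliberately the same approximate logic from earlier, with all its caveats:

```javascript
// Prefer the native API; otherwise fall back to legacy heuristics.
// The fallback is approximate -- it misses content-visibility, for example.
function isVisibleWithFallback(el) {
  if (typeof el.checkVisibility === 'function') {
    return el.checkVisibility({ checkOpacity: true, checkVisibilityCSS: true });
  }
  // Legacy path: zero layout size usually means display: none.
  if (el.offsetWidth === 0 || el.offsetHeight === 0) return false;
  const style = getComputedStyle(el);
  return style.visibility !== 'hidden' && style.opacity !== '0';
}
```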

Wrapping Up

We've spent too long trying to guess what the rendering engine is doing. element.checkVisibility() finally gives us a direct line to that logic. It’s cleaner, it’s more performant than forcing a reflow with offsetHeight, and it handles modern CSS features that old-school hacks simply can't touch.

Next time you find yourself writing a 10-line function to determine if a modal is open, just check its visibility natively. Your codebase will thank you.