
What Nobody Tells You About URL.canParse(): Why Your URL Validation Is a Silent Performance Killer
Stop relying on expensive try-catch blocks for validation and learn how a single native method can reclaim lost cycles in your hot code paths.
Ever wondered why your high-traffic Node.js service or client-side form feels like it’s wading through molasses the moment you start validating a massive list of inputs?
If you've been around the JavaScript block, you know the drill for checking if a string is a valid URL. For years, we’ve been forced into a pattern that is, frankly, a bit of a hack. We’ve been using try...catch blocks as a control flow mechanism, and it’s been silently eating our CPU cycles for breakfast.
The "Try-Catch" Tax
Until recently, the standard way to check a URL's validity was to just... try building it. If the constructor blew up, the URL was bad.
function isValidUrl(string) {
  try {
    new URL(string);
    return true;
  } catch (err) {
    return false;
  }
}

This looks innocent. It’s readable. It’s idiomatic. It’s also a performance disaster if you’re processing thousands of strings where many might be malformed.
In JavaScript engines (like V8), throwing an exception is expensive. When an error is thrown, the engine has to capture the current execution state, build a stack trace, and unwind the stack looking for a handler. Doing this in a hot loop—like validating a CSV upload or a stream of telemetry data—is like hitting the brakes every time you see a yellow light. You’re not just checking the light; you’re coming to a full, screeching halt.
Enter URL.canParse()
A new contender entered the ring recently: URL.canParse(). It’s a static method that does exactly what it says on the tin. It returns a boolean. No exceptions, no stack traces, no drama.
const isValid = URL.canParse("https://developer.mozilla.org");
console.log(isValid); // true

const isInvalid = URL.canParse("not-a-url");
console.log(isInvalid); // false

The difference is night and day. Because URL.canParse() returns a primitive boolean, the engine doesn't have to prepare for the "worst-case scenario" of an error. It just runs the internal parsing logic and gives you a thumbs up or down.
Just how much faster is it?
In my own informal benchmarks (running on Node 20), URL.canParse() is consistently 4x to 6x faster than the try...catch approach when dealing with invalid strings.
The irony? When the URL *is* valid, both methods are roughly the same speed. The "silent killer" isn't the validation itself; it's the penalty for being wrong. If your data is messy, try...catch will punish your event loop.
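If you want to sanity-check this on your own machine, a minimal micro-benchmark sketch looks something like the following. The iteration count and the mix of valid/invalid inputs are arbitrary choices for illustration, and absolute numbers will vary by machine and runtime:

```javascript
// Compare try/catch validation against URL.canParse() on a batch of
// mostly-malformed strings. Requires Node 18.17+ for URL.canParse.
const inputs = [];
for (let i = 0; i < 100_000; i++) {
  // Roughly 1 in 4 inputs is valid; the rest simulate messy data
  inputs.push(i % 4 === 0 ? `https://example.com/${i}` : `not a url ${i}`);
}

function viaTryCatch(s) {
  try { new URL(s); return true; } catch { return false; }
}

let t = performance.now();
let a = 0;
for (const s of inputs) if (viaTryCatch(s)) a++;
const tryCatchMs = performance.now() - t;

t = performance.now();
let b = 0;
for (const s of inputs) if (URL.canParse(s)) b++;
const canParseMs = performance.now() - t;

// Both approaches agree on which strings are valid; only the cost differs.
console.log({ tryCatchMs, canParseMs, agree: a === b });
```

The gap widens as the share of invalid inputs grows, which is exactly the "penalty for being wrong" described above.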
Practical Implementation: The Base URL Gotcha
One thing people often forget is that the URL constructor (and canParse) accepts a second base argument. This is vital for validating relative paths.
// This fails because it's relative
console.log(URL.canParse("/api/v1/users")); // false

// This passes because we provide context
console.log(URL.canParse("/api/v1/users", "https://my-app.com")); // true

If you’re building a scraper or a proxy, you should definitely be using that second argument instead of trying to manually concatenate strings before validating.
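For instance, a link-extraction step might validate scraped hrefs against the page they came from and resolve the survivors in one pass. The sample hrefs and page URL here are made up for illustration:

```javascript
// Resolve scraped hrefs (often relative) against the page's own URL,
// keeping only the ones that actually parse.
const pageUrl = "https://my-app.com/blog/post-1";
const hrefs = ["/api/v1/users", "../about", "https://other.site/x", "https://"];

const resolved = hrefs
  .filter((href) => URL.canParse(href, pageUrl)) // "https://" has no host, so it's dropped
  .map((href) => new URL(href, pageUrl).href);   // safe: we know these parse

console.log(resolved);
// ["https://my-app.com/api/v1/users", "https://my-app.com/about", "https://other.site/x"]
```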
Browser and Environment Support
This is the part where I have to be the bearer of "slightly" bad news. URL.canParse() is relatively new. It hit the major browsers (Chrome 120, Firefox 115, Safari 17) and Node.js (v18.17.0+) recently.
If you’re supporting older environments, you can’t just drop it in without a check. But you can write a tiny helper that favors the fast path:
function safelyParseUrl(string, base) {
  if (typeof URL.canParse === 'function') {
    return URL.canParse(string, base);
  }
  // Fallback for the old guard
  try {
    new URL(string, base);
    return true;
  } catch {
    return false;
  }
}

The "Valid but Weird" Edge Case
Don't let the name fool you into thinking URL.canParse() is a magic security filter. It validates against the WHATWG URL Living Standard, which is surprisingly permissive.
For example, did you know these are "valid" URLs?
URL.canParse("http://localhost:8080"); // true (standard)
URL.canParse("mailto:someone@example.com"); // true (valid scheme)
URL.canParse("http://google"); // true (valid, even if no TLD)

If you need to ensure a URL is specifically http or https, canParse won't do that for you. You still need a quick check after the fact:
function isWebUrl(string) {
  if (URL.canParse(string)) {
    const url = new URL(string);
    return ['http:', 'https:'].includes(url.protocol);
  }
  return false;
}

Wait, didn't I just say we're avoiding new URL()? In this case, we only call new URL() *after* we know it won't throw. We get the performance of the boolean check first, and only pay for the object instantiation when we actually need the properties.
The Bottom Line
Stop using try...catch for logic. It’s a habit we picked up because we didn't have a better tool, but now we do. Switching to URL.canParse() makes your code cleaner, signals your intent better to other developers, and gives your performance a nice little boost for free.
If your code is spending its life validating user input or processing webhooks, your CPU will thank you for making the switch.


