Parsing URLs on the client side has been common practice for two decades. The early days meant illegible regular expressions, but the JavaScript specification eventually gained the URL constructor for parsing URLs. While new URL is incredibly useful when a valid URL is provided, an invalid string will throw an error. Yikes! A new method, URL.canParse, will soon be available to validate URLs!
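For reference, a successful parse exposes the URL's components as properties (the query string below is just for illustration):

// A parsed URL exposes its components as properties
const url = new URL('https://davidwalsh.name/pornhub-interview?ref=rss');
url.protocol; // 'https:'
url.hostname; // 'davidwalsh.name'
url.pathname; // '/pornhub-interview'
url.searchParams.get('ref'); // 'rss'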
Providing a malformed URL to new URL will throw an error, so every use of new URL would need to be within a try/catch block:
// The correct, safest way
try {
  const url = new URL('https://davidwalsh.name/pornhub-interview');
} catch (e) {
  console.log("Bad URL provided!");
}
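If you parse URLs in several places, one option is to wrap that try/catch in a small helper that returns the parsed URL or null. parseUrl below is a hypothetical name, not a built-in:

// Hypothetical helper: returns a URL instance, or null instead of throwing
function parseUrl(value) {
  try {
    return new URL(value);
  } catch (e) {
    return null;
  }
}

const parsed = parseUrl('/pornhub-interview');
if (parsed === null) {
  console.log("Bad URL provided!");
}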
// Oops, these are problematic (mostly relative URLs)
new URL('/');
new URL('../');
new URL('/pornhub-interview');
new URL('?q=search+term');
new URL('davidwalsh.name');
// This one works, though ("javascript:" is a valid scheme)
new URL('javascript:;');
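Most of those failures are relative URLs, and the URL constructor does accept a second base argument that makes them parse:

// Relative URLs parse fine once you provide a base
new URL('/pornhub-interview', 'https://davidwalsh.name');
// => https://davidwalsh.name/pornhub-interview
new URL('?q=search+term', 'https://davidwalsh.name/page');
// => https://davidwalsh.name/page?q=search+term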
As you can see, strings that would work properly with an <a> tag sometimes won’t with new URL. With URL.canParse, you can avoid the try/catch mess to determine URL validity:
// Detect problematic URLs
URL.canParse('/'); // false
URL.canParse('/pornhub-interview'); // false
URL.canParse('davidwalsh.name'); // false
// Proper usage
if (URL.canParse('https://davidwalsh.name/pornhub-interview')) {
  const parsed = new URL('https://davidwalsh.name/pornhub-interview');
}
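URL.canParse takes the same optional base argument as the constructor, so relative URLs can be validated too:

// Relative URLs validate when paired with a base
URL.canParse('/pornhub-interview', 'https://davidwalsh.name'); // true
URL.canParse('../', 'https://davidwalsh.name/blog/'); // true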
We’ve come a long way from cryptic regexes and burner <a> elements to the URL and URL.canParse APIs. URLs represent so much more than location these days, so having a reliable API for handling them helps web developers immensely!