config file. Looking back now, things seem to have changed quite significantly.

<head>-order and prefetching critical assets (we’ll cover them later).
preload).

<body>, and merges its <head>, all without incurring the cost of a full page load. You can check quick details and full documentation about the stack (Hotwire).

requestIdleCallback. Consider lazy loading parts of the UI using webpack’s dynamic import() support, avoiding the load, parse, and compile cost until users really need them (thanks Addy!).

ReactDOMServer module on a Node server like Express, and then call the renderToString method to render the top-level components as a static HTML string.

renderToString. In Angular, we can use @nguniversal to turn client requests into fully server-rendered HTML pages. A fully server-rendered experience can also be achieved out of the box with Next.js (React) or Nuxt.js (Vue).

renderToString(), we can use renderToNodeStream() to pipe the response and send the HTML down in chunks. In Vue, we can use renderToStream() that can be piped and streamed. With React Suspense, we might use asynchronous rendering for that purpose, too.

renderToStaticMarkup method instead of renderToString method during builds, with the main JS chunk being preloaded and future routes prefetched, without DOM attributes that aren’t needed for simple static pages.

Content-Type header to tell the compressor if it should use a dictionary for HTML, JavaScript or CSS. The result was a "negligible performance impact (1% to 3% more CPU compared to 12% normally) when compressing web content at high compression levels, using a limited dictionary use approach."

srcset, sizes and the <picture> element. Especially for sites with a heavy media footprint, we can take it a step further with adaptive media loading (in this example React + Next.js), serving a light experience to slow networks and low-memory devices and a full experience to fast networks and high-memory devices. In the context of React, we can achieve it with client hints on the server and react-adaptive-hooks on the client.
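Where the text above mentions webpack’s dynamic import() support, a minimal sketch might look like this (the "./charts.js" module, the initCharts export and the element IDs are hypothetical names, not from the article):

```javascript
// A minimal sketch of lazy loading via dynamic import(): the module is
// only downloaded, parsed and compiled the first time it’s needed.
async function openAnalyticsPanel() {
  const { initCharts } = await import("./charts.js");
  initCharts(document.querySelector("#analytics"));
}

if (typeof document !== "undefined") {
  // Pay the load/parse/compile cost only when the user opens the panel.
  document.querySelector("#analytics-tab")
    ?.addEventListener("click", openAnalyticsPanel, { once: true });
}
```

Bundlers such as webpack turn each import() argument into a separate chunk, so none of the lazily loaded code lands in the main bundle.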
DPR, Viewport-Width, Width, Save-Data, Accept (to specify image format preferences) and others. They are supposed to inform the server about the specifics of the user’s browser, screen, connection etc.

<picture> element provides the necessary art-direction control in the HTML markup. Client hints provide annotations on resulting image requests that enable resource selection automation. Service Worker provides full request and response management capabilities on the client."

<meta> tag for Client Hints, then a supporting browser will evaluate the responsive images markup and request the appropriate image source using the Client Hints HTTP headers.

image-set, now supported in Safari 14 and in most modern browsers except Firefox, we can serve responsive background images as well:
background-image: url("fallback.jpg");
background-image:
  image-set( "photo-small.jpg" 1x,
    "photo-large.jpg" 2x,
    "photo-print.jpg" 600dpi);
1x descriptor, and higher-resolution images with 2x descriptor, and even a print-quality image with 600dpi descriptor. Beware though: browsers do not provide any special information on background images to assistive technology, so ideally these photos would be merely decoration.

<picture> element and a JPEG fallback if needed (see Andreas Bovens' code snippet) or by using content negotiation (using Accept headers).

<picture> element with React, styled components or gatsby-image.

picture element. If AVIF is supported, we send an AVIF image; if not, we fall back to WebP first, and if WebP isn’t supported either, we switch to JPEG or PNG as fallback (applying @media conditions if needed):
<picture>
  <source srcset="image.avif" type="image/avif">
  <source srcset="image.webp" type="image/webp">
  <img src="image.jpg" alt="Photo" width="450" height="350">
</picture>
picture element though:
<picture>
  <source
    sizes="(max-width: 608px) 100vw, 608px"
    srcset="
      /img/Z1s3TKV-1920w.avif 1920w,
      /img/Z1s3TKV-1280w.avif 1280w,
      /img/Z1s3TKV-640w.avif   640w,
      /img/Z1s3TKV-320w.avif   320w"
    type="image/avif"
  />
  <source
    sizes="(max-width: 608px) 100vw, 608px"
    srcset="
      /img/Z1s3TKV-1920w.webp 1920w,
      /img/Z1s3TKV-1280w.webp 1280w,
      /img/Z1s3TKV-640w.webp   640w,
      /img/Z1s3TKV-320w.webp   320w"
    type="image/webp"
  />
  <source
    sizes="(max-width: 608px) 100vw, 608px"
    srcset="
      /img/Z1s3TKV-1920w.jpg 1920w,
      /img/Z1s3TKV-1280w.jpg 1280w,
      /img/Z1s3TKV-640w.jpg   640w,
      /img/Z1s3TKV-320w.jpg   320w"
    type="image/jpeg"
  />
  <img src="fallback-image.jpg" alt="Photo" width="450" height="350">
</picture>
prefers-reduced-motion:
<picture>
  <source media="(prefers-reduced-motion: reduce)" srcset="no-motion.avif" type="image/avif">
  <source media="(prefers-reduced-motion: reduce)" srcset="no-motion.jpg" type="image/jpeg">
  <source srcset="motion.avif" type="image/avif"></source>
  <img src="motion.jpg" alt="Animated AVIF">
</picture>
Accept header, and then add the webp/avif etc. classes as appropriate.

<img src=mp4>, outperforming GIF and WebP at large, but MP4 still performs better.

<img src=mp4>.
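As a sketch of the Accept-header approach mentioned above, server-side format negotiation could look like this (the helper name and the emitted class names are illustrative assumptions, not from the article):

```javascript
// Pick the best image format a browser supports by inspecting its
// Accept request header on the server (hypothetical helper).
function bestImageFormat(acceptHeader) {
  if (/image\/avif/.test(acceptHeader)) return "avif";
  if (/image\/webp/.test(acceptHeader)) return "webp";
  return "jpeg"; // safe fallback for everything else
}

// The server can then emit e.g. <html class="avif"> so that CSS
// background images can switch formats without client-side detection.
console.log(bestImageFormat("text/html,image/avif,image/webp,*/*")); // → "avif"
```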
srcset and sizes alone will reap significant benefits.

<img src>, and then hide it off the screen.

sizes to swap sources in a magnifier component.

width and height on images. Watch out for the aspect-ratio property in CSS and the intrinsicsize attribute, which will allow us to set aspect ratios and dimensions for images, so the browser can reserve a pre-defined layout slot early and avoid layout jumps during page load.

-opt for image names — for example, brotli-compression-opt.png; whenever an image contains that postfix, everybody on the team knows that the image has already been optimized.

<video> content, but HTML5 videos tend to be much lighter and smaller than GIFs. Not an option? Well, at least we can add lossy compression to GIFs with Lossy GIF, gifsicle or giflossy.

img tags in Safari Technology Preview display at least 20× faster and decode 7× faster than the GIF equivalent, in addition to being a fraction of the file size. However, it’s not supported in other browsers.
<!-- By Houssein Djirdeh. https://web.dev/replace-gifs-with-videos/ -->
<!-- A common scenario: MP4 with a WebM fallback. -->
<video autoplay loop muted playsinline>
  <source src="my-animation.webm" type="video/webm">
  <source src="my-animation.mp4" type="video/mp4">
</video>
AV1 source in your <video> tag is reasonable, as all browser vendors seem to be on board.

video elements only allow one image as the poster, which isn’t necessarily optimal. We can use Responsive Video Poster, a JavaScript library that allows you to use different poster images for different screens, while also adding a transitioning overlay and full styling control of video placeholders.

autoplay attribute from the video tag altogether and use JavaScript to insert autoplay for larger screens. Additionally, we need to add preload="none" on video to tell the browser not to download any of the video files until it actually needs the file:
<!-- Based on Doug Sillars's post. https://dougsillars.com/2020/01/06/hiding-videos-on-the-mbile-web/ -->
<video id="hero-video"
          preload="none"
          playsinline
          muted
          loop
          width="1920"
          height="1080"
          poster="poster.jpg">
  <source src="video.webm" type="video/webm">
  <source src="video.mp4" type="video/mp4">
</video>
<!-- Based on Doug Sillars's post. https://dougsillars.com/2020/01/06/hiding-videos-on-the-mbile-web/ -->
<video id="hero-video"
          preload="none"
          playsinline
          muted
          loop
          width="1920"
          height="1080"
          poster="poster.jpg">
  <source src="video.av1.mp4" type="video/mp4; codecs=av01.0.05M.08">
  <source src="video.hevc.mp4" type="video/mp4; codecs=hevc">
  <source src="video.webm" type="video/webm">
  <source src="video.mp4" type="video/mp4">
</video>
autoplay over a certain threshold (e.g. 1000px):
<!-- By Doug Sillars. https://dougsillars.com/2020/01/06/hiding-videos-on-the-mbile-web/ -->
<script>
    var videoLocation = document.getElementById("hero-video");
    window.onload = addAutoplay;
    function addAutoplay() {
        if (window.innerWidth > 1000) {
            videoLocation.setAttribute("autoplay", "");
        }
    }
</script>
<video id="hero-video"
          preload="none"
          playsinline
          muted
          loop
          width="1920"
          height="1080"
          poster="poster.jpg">
  <source src="video-large.av1.mp4" type="video/mp4; codecs=av01.0.05M.08" media="all and (min-width: 1500px)">
  <source src="video-small.av1.mp4" type="video/mp4; codecs=av01.0.05M.08" media="all and (min-width: 400px)">
  <source src="video-large.mp4" type="video/mp4" media="all and (min-width: 1500px)">
  <source src="video-small.mp4" type="video/mp4" media="all and (min-width: 400px)">
</video>
preload and "The Compromise" method. Both of them use a two-stage render for delivering web fonts in steps — first a small supersubset required to render the page fast and accurately with the web font, and then load the rest of the family async. The difference is that "The Compromise" technique loads the polyfill asynchronously only if font load events are not supported, so you don’t need to load the polyfill by default. Need a quick win? Zach Leatherman has a quick 23-min tutorial and case study to get your fonts in order.

preload resource hint to preload fonts, but in your markup include the hints after the link to critical CSS and JavaScript. With preload, there is a puzzle of priorities, so consider injecting rel="preload" elements into the DOM just before the external blocking scripts. According to Andy Davies, "resources injected using a script are hidden from the browser until the script executes, and we can use this behaviour to delay when the browser discovers the preload hint." Otherwise, font loading will cost you in the first render time.

local() value (which refers to a local font by name) when defining a font-family in the @font-face rule:

/* Warning! Not a good idea! */
@font-face {
  font-family: "Open Sans";
  src: local('Open Sans Regular'),
       local('OpenSans-Regular'),
       url('opensans.woff2') format('woff2'),
       url('opensans.woff') format('woff');
}

@font-face rules. Google Fonts has followed suit by disabling local() on the CSS results for all users, other than Android requests for Roboto.

font-display CSS descriptor, we can control the font loading behavior and enable content to be readable immediately (with font-display: optional) or almost immediately (with a timeout of 3s, as long as the font gets successfully downloaded — with font-display: swap). (Well, it’s a bit more complicated than that.)

FontFace object, then try to fetch them all, and only then apply them to the page. This way, we group all repaints by loading all fonts asynchronously, and then switch from fallback fonts to the web font exactly once. Take a look at Zach’s explanation (starting at 32:15) and the code snippet:
/* Load two web fonts using JavaScript */
/* Zach Leatherman: https://noti.st/zachleat/KNaZEg/the-five-whys-of-web-font-loading-performance#sWkN4u4 */
// Remove existing @font-face blocks
// Create two
let font = new FontFace("Noto Serif", /* ... */);
let fontBold = new FontFace("Noto Serif", /* ... */);
// Load two fonts
let fonts = await Promise.all([
  font.load(),
  fontBold.load()
])
// Group repaints and render both fonts at the same time!
fonts.forEach(font => document.fonts.add(font));
&nbsp; at the top of the body, and hide it visually with aria-hidden="true" and a .hidden class:
<body class="no-js">
  <!-- ... Website content ... -->
  <div aria-hidden="true" class="hidden" style="font-family: '[web-font-name]'">
      <!-- There is a non-breaking space here -->
  </div>
  <script>
    document.getElementsByTagName("body")[0].classList.remove("no-js");
  </script>
</body>
body:not(.wf-merriweather--loaded):not(.no-js) {
  font-family: [fallback-system-font];
  /* Fallback font styles */
}
.wf-merriweather--loaded,
.no-js {
  font-family: "[web-font-name]";
  /* Webfont styles */
}
/* Accessible hiding */
.hidden {
  position: absolute;
  overflow: hidden;
  clip: rect(0 0 0 0);
  height: 1px;
  width: 1px;
  margin: -1px;
  padding: 0;
  border: 0;
}
<!-- By Harry Roberts.
https://csswizardry.com/2020/05/the-fastest-google-fonts/
- 1. Preemptively warm up the fonts’ origin.
- 2. Initiate a high-priority, asynchronous fetch for the CSS file. Works in
-    most modern browsers.
- 3. Initiate a low-priority, asynchronous fetch that gets applied to the page
-    only after it’s arrived. Works in all browsers with JavaScript enabled.
- 4. In the unlikely event that a visitor has intentionally disabled
-    JavaScript, fall back to the original method. The good news is that,
-    although this is a render-blocking request, it can still make use of the
-    preconnect which makes it marginally faster than the default.
-->
<!-- [1] -->
<link rel="preconnect"
      href="https://fonts.gstatic.com"
      crossorigin />
<!-- [2] -->
<link rel="preload"
      as="style"
      href="$CSS&display=swap" />
<!-- [3] -->
<link rel="stylesheet"
      href="$CSS&display=swap"
      media="print" onload="this.media='all'" />
<!-- [4] -->
<noscript>
  <link rel="stylesheet"
        href="$CSS&display=swap" />
</noscript>
&text. Plus, the support for font-display was added recently to Google Fonts as well, so we can use it out of the box.

font-display: optional, it might be suboptimal to also use preload as it will trigger that web font request early (causing network congestion if you have other critical path resources that need to be fetched). Use preconnect for faster cross-origin font requests, but be cautious with preload as preloading fonts from a different origin will incur network contention. All of these techniques are covered in Zach’s Web font loading recipes.

Save-Data header), or when the user has slow connectivity (via the Network Information API).

prefers-reduced-data CSS media query to not define font declarations if the user has opted into data-saving mode (there are other use-cases, too). The media query would basically expose whether the Save-Data request header from the Client Hint HTTP extension is on or off, to allow for usage with CSS. It’s currently supported only in Chrome and Edge behind a flag.

font-display descriptor, use the Font Loading API to group repaints and store fonts in a persistent service worker’s cache. On the first visit, inject the preloading of scripts just before the blocking external scripts. You could fall back to Bram Stein’s Font Face Observer if necessary. And if you’re interested in measuring the performance of font loading, Andreas Marschke explores performance tracking with the Font API and UserTiming API.

unicode-range to break down a large font into smaller language-specific fonts, and use Monica Dinculescu’s font-style-matcher to minimize a jarring shift in layout, due to sizing discrepancies between the fallback and the web fonts.

<script type="module">, also known as the module/nomodule pattern (also introduced by Jeremy Wagner as differential serving).
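The module/nomodule pattern mentioned above is usually written as two static <script> tags; as a rough sketch, the same split can also be set up from JavaScript (the bundle file names here are hypothetical):

```javascript
// A rough sketch of the module/nomodule pattern built from JavaScript
// rather than static HTML. Modern browsers run the type="module" bundle
// and skip the nomodule one; legacy browsers do the opposite.
function injectBundles(doc) {
  const modern = doc.createElement("script");
  modern.type = "module";
  modern.src = "/app.esm.js"; // ES2017+ build: smaller, no transpilation
  const legacy = doc.createElement("script");
  legacy.setAttribute("nomodule", "");
  legacy.defer = true;
  legacy.src = "/app.legacy.js"; // transpiled ES5 build with polyfills
  doc.head.append(modern, legacy);
  return [modern, legacy];
}

if (typeof document !== "undefined") {
  injectBundles(document);
}
```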
import chaining can be flattened and converted into one inlined function without compromising the code. With webpack, we can also use JSON Tree Shaking.
<link rel="preload"> or <link rel="prefetch">. Webpack inline directives also give some control over preload/prefetch. (Watch out for prioritization issues though.)

/*#__PURE__*/ which is supported by Uglify and Terser — done!

optimization.splitChunks: 'all' with split-chunks-plugin. This would make webpack automatically code-split your entry bundles for better caching.

optimization.runtimeChunk: true. This would move webpack’s runtime into a separate chunk — and would also improve caching.

preload/prefetch for all JavaScript chunks.

script type="module", plus we can also use dynamic imports for lazy-loading code without blocking execution of the worker.

babelEsmPlugin to only transpile ES2017+ features unsupported by the modern browsers you are targeting.
script type="module" to let browsers with ES module support load the file, while older browsers could load legacy builds with script nomodule.

<link rel="modulepreload"> header provides a way to initiate early (and high-priority) loading of module scripts. Basically, it’s a nifty way to help maximize bandwidth usage, by telling the browser about what it needs to fetch so that it’s not stuck with nothing to do during those long roundtrips. Also, Jake Archibald has published a detailed article with gotchas and things to keep in mind with ES Modules that’s worth reading.

import() (see the entire thread). Then repeat the coverage profile and validate that it’s now shipping less code on initial load.
dead/ directory, e.g. /assets/img/dead/comments.gif.

requestIdleCallback.

Data-Saver is on. So is Instant.page if the mode is set to use viewport prefetching (which is a default).

async or defer scripts to be parsed on a separate background thread once downloading begins, hence in some cases improving page loading times by up to 10%. Practically, use <script defer> in the <head>, so that the browsers can discover the resource early and then parse it on the background thread.
defer will be ignored, resulting in blocking rendering until the script has been evaluated (thanks Jeremy!).

options parameter, so browsers can scroll the page immediately, rather than after the listener has finished. (via Kayce Basques)

scroll or touch* listeners, pass passive: true to addEventListener. This tells the browser you’re not planning to call event.preventDefault() inside, so it can optimize the way it handles these events. (via Ivan Akulov)

dns-prefetch or preconnect.
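A minimal sketch of the passive-listener advice above (the handler and helper names are illustrative):

```javascript
// Passing { passive: true } promises the browser that the handler never
// calls preventDefault(), so scrolling doesn’t have to wait for it.
function onScroll() {
  // Read layout values, update a progress bar, etc. — but never
  // event.preventDefault(): a passive listener is not allowed to cancel.
}

function attachPassiveScroll(target) {
  target.addEventListener("scroll", onScroll, { passive: true });
  target.addEventListener("touchstart", onScroll, { passive: true });
}

if (typeof window !== "undefined") {
  attachPassiveScroll(window);
}
```

Because the browser knows preventDefault() can’t be called, it can start scrolling without waiting for onScroll to return.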
// Before
const App = () => {
  return <div>
    <script>
      window.dataLayer = window.dataLayer || [];
      function gtag(){...}
      gtag('js', new Date());
    </script>
  </div>
}
// After
const App = () => {
  const [isRendered, setRendered] = useState(false);
  useEffect(() => setRendered(true));
  return <div>
  {isRendered ?
    <script>
      window.dataLayer = window.dataLayer || [];
      function gtag(){...}
      gtag('js', new Date());
    </script>
  : null}
  </div>
}
body of the document with opacity: 0, then adds a function that gets called after a few seconds to bring the opacity back. This often results in massive delays in rendering due to heavy client-side execution costs.

blackhole.webpagetest.org that you can point specific domains to in your hosts file.

<iframe> so that the scripts are running in the context of the iframe and hence don’t have access to the DOM of the page, and can’t run arbitrary code on your domain. Iframes can be further constrained using the sandbox attribute, so you can disable any functionality that the iframe may do, e.g. prevent scripts from running, prevent alerts, form submission, plugins, access to the top navigation, and so on.
/* Via Tim Kadlec. https://timkadlec.com/remembers/2020-02-20-in-browser-performance-linting-with-feature-policies/ */
/* Block the use of the Geolocation API with a Feature-Policy header. */
Feature-Policy: geolocation 'none'
allow values on the sandbox attribute. Sandboxing is supported almost everywhere, so constrain third-party scripts to the bare minimum of what they should be allowed to do.

expires, max-age, cache-control, and other HTTP cache headers have been set properly. Without proper HTTP cache headers, browsers will set them automatically at 10% of elapsed time since last-modified, ending up with potential under- and over-caching.
Cache-Control and Expires headers to the browser to only allow assets to expire in a year. Hence, the browser wouldn’t even make a request for the asset if it has it in the cache.

/api/user). To prevent caching, we can use private, no-store, and not max-age=0, no-store:

Cache-Control: private, no-store

Cache-control: immutable to avoid revalidation of long explicit cache lifetimes when users hit the reload button. For the reload case, immutable saves HTTP requests and improves the load time of the dynamic HTML as they no longer compete with the multitude of 304 responses.

immutable are CSS/JavaScript assets with a hash in their name. For them, we probably want to cache as long as possible, and ensure they never get re-validated:
Cache-Control: max-age=31556952, immutable
immutable reduces 304 redirects by around 50% as even with max-age in use, clients still re-validate and block upon refresh. It’s supported in Firefox, Edge and Safari, and Chrome is still debating the issue.

Cache-Control response header (e.g. Cache-Control: max-age=604800), after max-age expires, the browser will re-fetch the requested content, causing the page to load slower. This slowdown can be avoided with stale-while-revalidate; it basically defines an extra window of time during which a cache can use a stale asset as long as it revalidates it async in the background. Thus, it "hides" latency (both in the network and on the server) from clients.

stale-while-revalidate in the HTTP Cache-Control header, so as a result, it should improve subsequent page load latencies as stale assets are no longer in the critical path. Result: zero RTT for repeat views.

x-powered-by, pragma, x-ua-compatible, expires, X-XSS-Protection and others) and that you include useful security and performance headers (such as Content-Security-Policy, X-Content-Type-Options and others). Finally, keep in mind the performance cost of CORS requests in single-page applications.

defer to load critical JavaScript asynchronously?

defer and async attributes in HTML.
defer instead of async. Ah, what's the difference again? According to Steve Souders, once async scripts arrive, they are executed immediately — as soon as the script is ready. If that happens very fast, for example when the script is in cache already, it can actually block the HTML parser. With defer, the browser doesn’t execute scripts until the HTML is parsed. So, unless you need JavaScript to execute before start render, it’s better to use defer. Also, multiple async files will execute in a non-deterministic order.

async and defer. Most importantly, async doesn’t mean that the code will run whenever the script is ready; it means that it will run whenever the script is ready and all preceding sync work is done. In Harry Roberts' words, "If you put an async script after sync scripts, your async script is only as fast as your slowest sync script."

async and defer. Modern browsers support both, but whenever both attributes are used, async will always win.

loading attribute (only Chromium). Under the hood, this attribute defers the loading of the resource until it reaches a calculated distance from the viewport.
<!-- Lazy loading for images, iframes, scripts.
Probably for images outside of the viewport. -->
<img loading="lazy" ... />
<iframe loading="lazy" ... />
<!-- Prompt an early download of an asset.
For critical images, e.g. hero images. -->
<img loading="eager" ... />
<iframe loading="eager" ... />
importance attribute (high or low) on a <script>, <img>, or <link> element (Blink only). In fact, it’s a great way to deprioritize images in carousels, as well as re-prioritize scripts. However, sometimes we might need a bit more granular control.
<!--
When the browser assigns "High" priority to an image,
but we don’t actually want that.
-->
<img src="less-important-image.svg" importance="low" ... />
<!--
We want to initiate an early fetch for a resource,
but also deprioritize it.
-->
<link rel="preload" importance="low" href="/script.js" as="script" />
IntersectionObserver object, which receives a callback function and a set of options. Then we add a target to observe.
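As a sketch, a lazy-loading observer along those lines might look like this (the data-src convention and the numeric values are illustrative assumptions):

```javascript
// Swap data-src into src once an image approaches the viewport.
function onIntersection(entries, observer) {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    entry.target.src = entry.target.dataset.src; // start the real download
    observer.unobserve(entry.target);            // each image only once
  }
}

if (typeof IntersectionObserver !== "undefined") {
  const observer = new IntersectionObserver(onIntersection, {
    rootMargin: "200px 0px", // start loading 200px before it’s visible
    threshold: 0.01          // fire as soon as a sliver is in view
  });
  document.querySelectorAll("img[data-src]").forEach(img => observer.observe(img));
}
```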
rootMargin (margin around the root) and threshold (a single number or an array of numbers which indicate at what percentage of the target’s visibility we are aiming).

content-visibility?

content-visibility: auto, we can prompt the browser to skip the layout of the children while the container is outside of the viewport.
footer {
  content-visibility: auto;
  contain-intrinsic-size: 1000px;
  /* 1000px is an estimated height for sections that are not rendered yet. */
}
padding-left and padding-right instead of the default margin-left: auto;, margin-right: auto; and a declared width. The padding basically allows elements to overflow the content-box and enter the padding-box without leaving the box model as a whole and getting cut off.

contain-intrinsic-size with a placeholder properly sized (thanks, Una!).

contain-intrinsic-size is calculated by the browser, Malte Ubl shows how you can calculate it, and a brief video explainer by Jake and Surma explains how it all works.

decoding="async"?

decoding="async" to give the browser permission to decode the image off the main thread, avoiding the user impact of the CPU time used to decode the image (via Malte Ubl):
<img decoding="async" … />
<head> of the page, thus reducing roundtrips. Due to the limited size of packages exchanged during the slow start phase, your budget for critical CSS is around 14KB.
media="print", you can trick the browser into fetching the CSS asynchronously but applying it to the screen environment once it loads. (thanks, Scott!)
<!-- Via Scott Jehl. https://www.filamentgroup.com/lab/load-css-simpler/ -->
<!-- Load CSS asynchronously, with low priority -->
<link rel="stylesheet"
  href="full.css"
  media="print"
  onload="this.media='all'" />
opacity: 0; in inlined CSS and opacity: 1 in the full CSS file, and display it when CSS is available. It has a major downside though, as users on slow connections might never be able to read the content of the page. That’s why it’s better to always keep the content visible, even though it might not be styled properly.

<link rel="stylesheet" /> before async snippets. If scripts don’t depend on stylesheets, consider placing blocking scripts above blocking styles. If they do, split that JavaScript in two and load it either side of your CSS.

style element so that it’s easy to find using JavaScript; then a small piece of JavaScript finds that CSS and uses the Cache API to store it in a local browser cache (with a content type of text/css) for use on subsequent pages. To avoid inlining on subsequent pages and instead reference the cached assets externally, we then set a cookie on the first visit to a site. Voilà!

save-data, you might be wondering? 18% of global Android Chrome users have Lite Mode enabled (with Save-Data on), and the number is likely to be higher. According to Simon Hearne’s research, the opt-in rate is highest on cheaper devices, but there are plenty of outliers. For example: users in Canada have an opt-in rate of over 34% (compared to ~7% in the US) and users on the latest Samsung flagship have an opt-in rate of almost 18% globally.

Save-Data mode on, Chrome Mobile will provide an optimized experience, i.e. a proxied web experience with deferred scripts, enforced font-display: swap and enforced lazy loading.
It’s just more sensible to build the experience on your own rather than relying on the browser to make these optimizations.

navigator.connection.effectiveType uses RTT, downlink, effectiveType values (and a few others) to provide a representation of the connection and the data that users can handle.

<Media /> component in a news article might output:

Offline: a placeholder with alt text,
2G / save-data mode: a low-resolution image,
3G on a non-Retina screen: a mid-resolution image,
3G on Retina screens: a high-res Retina image,
4G: an HD video.

canplaythrough event and use Promise.race() to timeout the source loading if the canplaythrough event doesn’t fire within 2 seconds.

navigator.deviceMemory returns how much RAM the device has in gigabytes, rounded down to the nearest power of two. The API also features a Client Hints Header, Device-Memory, that reports the same value.
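Putting the connection and memory signals together, a hedged sketch of such a tier choice might look like this (the tier names, thresholds and the CSS class convention are made up for illustration; the APIs themselves are Chromium-only):

```javascript
// Hypothetical adaptive-loading helper: choose an asset tier from
// navigator.connection / navigator.deviceMemory signals.
function pickMediaTier({ saveData, effectiveType, deviceMemory }) {
  if (saveData || effectiveType === "slow-2g" || effectiveType === "2g") {
    return "low";  // e.g. low-resolution image
  }
  if (effectiveType === "3g" || (deviceMemory && deviceMemory < 4)) {
    return "mid";  // e.g. mid-resolution image
  }
  return "high";   // e.g. HD video or high-res image
}

if (typeof navigator !== "undefined" && navigator.connection) {
  const tier = pickMediaTier({
    saveData: navigator.connection.saveData,
    effectiveType: navigator.connection.effectiveType,
    deviceMemory: navigator.deviceMemory
  });
  // Expose the tier to CSS/components, e.g. <html class="media-low">.
  document.documentElement.classList.add(`media-${tier}`);
}
```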
dns-prefetch (which performs a DNS lookup in the background), preconnect (which asks the browser to start the connection handshake (DNS, TCP, TLS) in the background), prefetch (which asks the browser to request a resource) and preload (which prefetches resources without executing them, among other things). Well supported in modern browsers, with support coming to Firefox soon.
prerender? The resource hint used to prompt the browser to build out the entire page in the background for the next navigation. The implementation issues were quite problematic, ranging from a huge memory footprint and bandwidth usage to multiple registered analytics hits and ad impressions.

prerender hint as a NoState Prefetch instead, so we can still use it today. As Katie Hempenius explains in that article, "like prerendering, NoState Prefetch fetches resources in advance; but unlike prerendering, it does not execute JavaScript or render any part of the page in advance."

IDLE Net Priority. Since Chrome 69, NoState Prefetch adds the header Purpose: Prefetch to all requests in order to make them distinguishable from normal browsing.

preview of the content for seamless navigations.

preconnect and dns-prefetch, and we’ll be cautious with using prefetch, preload and prerender. Note that even with preconnect and dns-prefetch, the browser has a limit on the number of hosts it will look up/connect to in parallel, so it’s a safe bet to order them based on priority (thanks Philip Tellis!).

preload. However, double-check if it actually helps performance, as there is a puzzle of priorities when preloading fonts: as preload is seen as high importance, it can leapfrog even more critical resources like critical CSS. (thanks, Barry!)
<!-- Loading two rendering-critical fonts, but not all their weights. -->
<!--
crossorigin="anonymous" is required due to CORS.
Without it, preloaded fonts will be ignored.
https://github.com/w3c/preload/issues/32
via https://twitter.com/iamakulov/status/1275790151642423303
-->
<link rel="preload" as="font"
      href="Elena-Regular.woff2"
      type="font/woff2"
      crossorigin="anonymous"
      media="only screen and (min-width: 48rem)" />
<link rel="preload" as="font"
      href="Mija-Bold.woff2"
      type="font/woff2"
      crossorigin="anonymous"
      media="only screen and (min-width: 48rem)" />
<link rel="preload"> accepts a media attribute, you could choose to selectively download resources based on @media query rules, as shown above.

imagesrcset and imagesizes attributes to preload late-discovered hero images faster, or any images that are loaded via JavaScript, e.g. movie posters:
<!-- Addy Osmani. https://addyosmani.com/blog/preload-hero-images/ -->
<link rel="preload" as="image"
     href="poster.jpg"
     imagesrcset="
        poster_400px.jpg 400w,
        poster_800px.jpg 800w,
        poster_1600px.jpg 1600w"
    imagesizes="50vw">
<!-- Addy Osmani. https://addyosmani.com/blog/preload-hero-images/ -->
<link rel="preload" as="fetch" href="foo.com/api/movies.json" crossorigin>
// Adding a preload hint to the head
var link = document.createElement("link");
link.href = "myscript.js";
link.rel = "preload";
link.as = "script";
document.head.appendChild(link);
// Injecting a script when we want it to execute
var script = document.createElement("script");
script.src = "myscript.js";
document.body.appendChild(script);
preload is good for moving the start download time of an asset closer to the initial request, but preloaded assets land in the memory cache, which is tied to the page making the request. preload plays well with the HTTP cache: a network request is never sent if the item is already in the HTTP cache.

background-image, inlining critical CSS (or JavaScript) and pre-loading the rest of the CSS (or JavaScript).

preload tag can initiate a preload only after the browser has received the HTML from the server and the lookahead parser has found the preload tag. Preloading via the HTTP header could be a bit faster since we don’t have to wait for the browser to parse the HTML to start the request (it’s debated though).

preload, the as attribute must be defined or nothing loads, plus preloaded fonts without the crossorigin attribute will double fetch. If you’re using prefetch, beware of the Age header issues in Firefox.

DOMException: Quota exceeded error in the browser console, then look into Gerardo’s article When 7KB equals 7MB.

crossorigin attribute to the <img> tag.”

will-change to inform the browser of which elements and properties will change.
opacity and transform, you’ll be on the right track. Anna Migas has provided a lot of practical advice in her talk on Debugging UI Rendering Performance, too. And to understand how to debug paint performance in DevTools, check Umar’s Paint Performance audit video.

overflow-y: scroll on html to enforce a scrollbar at first paint. The latter helps because scrollbars can cause non-trivial layout shifts due to above-the-fold content reflowing when the width changes. This should mostly happen on platforms with non-overlay scrollbars like Windows, though. But: it breaks position: sticky because those elements will never scroll out of the container.

margin-top on the content. An exception should be cookie consent banners that shouldn’t have an impact on CLS, but sometimes they do: it depends on the implementation. There are a few interesting strategies and takeaways in this Twitter thread.

0. There is a great guide by Milica Mihajlija and Philip Walton on what CLS is and how to measure it. It’s a good starting point to measure and maintain perceived performance and avoid disruption, especially for business-critical tasks.

crossorigin attribute, so the browser would be forced to open a new connection.
tcp_notsent_lowat to 16KB for HTTP/2 prioritization to work reliably on Linux 4.9 kernels and later (thanks, Yoav!). Andy Davies did similar research for HTTP/2 prioritization across browsers, CDNs and Cloud Hosting Services.
/* Performance Diagnostics CSS */
/* via Harry Roberts. https://twitter.com/csswizardry/status/1346477682544951296 */
img[loading=lazy] {
  outline: 10px solid red;
}
<head> of each template. For CSS/JS, operate within a critical file size budget of max. 170KB gzipped (0.7MB decompressed).

<script type="module"> and the module/nomodule pattern.

dns-prefetch, preconnect, prefetch, preload and prerender.

font-display in CSS for fast first rendering.