Performance

CSS Containment

When making a web app, or even a complex site, a key performance challenge is limiting the effects of styles, layout and paint. Oftentimes the entirety of the DOM is considered “in scope” for computation work, which can mean that attempting a self-contained “view” in a web app can prove tricky: changes in one part of the DOM can affect other parts, and there’s no way to tell the browser what should be in or out of scope.

[…]

The good news is that modern browsers are getting really smart about limiting the scope of styles, layout, and paint work automatically, meaning that things are getting faster without you having to do anything.

But the even better news is that there’s a new CSS property that hands scope controls over to developers: Containment.

The contain property

CSS Containment comes in the form of a new property, contain, which supports four values:

  • layout
  • paint
  • size
  • style

Each of these values allows you to limit how much rendering work the browser needs to do.

Note that this is currently only supported in Blink browsers. Firefox has a partial implementation behind a flag, and Edge is under consideration.
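
The article doesn’t include a snippet, but as a rough illustration, opting a self-contained widget into containment might look like this (the element and selector are hypothetical):

Code language: JavaScript

// A minimal sketch: tell the browser that this widget's layout and paint
// effects stay inside its own subtree. Equivalent CSS would be:
//   .chat-panel { contain: layout paint; }
const widget = document.querySelector('.chat-panel'); // hypothetical element
widget.style.contain = 'layout paint';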

Using the Page Visibility API to optimize an application for the background

Web developers should be aware that users often have a lot of tabs open in the background, which can have a serious effect on power usage and battery life. Work in the background should be kept to a minimum unless it’s absolutely necessary to provide a particular user experience. The Page Visibility API should be used to detect when the page is backgrounded, so that unnecessary work like visual updates can be suspended.

Code language: JavaScript

var doVisualUpdates = true;

// Track visibility: document.hidden is true when the tab is backgrounded.
document.addEventListener('visibilitychange', function () {
  doVisualUpdates = !document.hidden;
});

// Skip visual work entirely while the page isn't visible.
function update() {
  if (!doVisualUpdates) {
    return;
  }
  doStuff();
}

hyperHTML: A Virtual DOM Alternative

The easiest way to describe hyperHTML is through an example.

Code language: JavaScript

// this is React's first tick example
// https://facebook.github.io/react/docs/state-and-lifecycle.html
function tick() {
  const element = (
    <div>
      <h1>Hello, world!</h1>
      <h2>It is {new Date().toLocaleTimeString()}.</h2>
    </div>
  );
  ReactDOM.render(
    element,
    document.getElementById('root')
  );
}
setInterval(tick, 1000);
 
// this is hyperHTML
function tick(render) {
  render`
    <div>
      <h1>Hello, world!</h1>
      <h2>It is ${new Date().toLocaleTimeString()}.</h2>
    </div>
  `;
}
// setInterval forwards its extra argument (the bound render function)
// to tick on every call
setInterval(tick, 1000,
  hyperHTML.bind(document.getElementById('root'))
);

Features

  • Zero dependencies and it fits in less than 2KB (minzipped)
  • Directly uses the native DOM instead of inventing new syntax/APIs, DOM diffing, or a virtual DOM
  • Designed for template literals, a templating feature built into JS
  • Compatible with vanilla DOM elements and vanilla JS data structures *
  • Also compatible with Babel transpiled output, hence suitable for every browser you can think of

* actually, this is just a 100% vanilla JS utility, that’s why it’s most likely the fastest and also the smallest. I also feel like I’m writing Assembly these days … anyway …

Understanding the Critical Rendering Path

When a browser receives the HTML response for a page from the server, there are a lot of steps to be taken before pixels are drawn on the screen. The sequence the browser needs to run through for the initial paint of the page is called the “Critical Rendering Path”.

Knowledge of the CRP is incredibly useful for understanding how a site’s performance can be improved. There are six stages to the CRP (a small measurement sketch follows the list):

  1. Constructing the DOM Tree
  2. Constructing the CSSOM Tree
  3. Running JavaScript
  4. Creating the Render Tree
  5. Generating the Layout
  6. Painting
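
The CRP stages themselves aren’t directly observable from script, but some related milestones are. As a small sketch (not from the original article), the Navigation Timing Level 2 API exposes a few of them:

Code language: JavaScript

// Rough milestones related to the CRP; note these don't map one-to-one
// onto the six stages listed above.
const [nav] = performance.getEntriesByType('navigation');
console.log('First byte received:', nav.responseStart, 'ms');
console.log('DOM parsing finished:', nav.domInteractive, 'ms');
console.log('DOMContentLoaded handlers done:', nav.domContentLoadedEventEnd, 'ms');
console.log('Load event finished:', nav.loadEventEnd, 'ms');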

Why we do what we do

I think this sums up why I’m so impassioned about web development:

I don’t get excited about frameworks or languages—I get excited about potential; about playing my part in building a more inclusive web.

I care about making something that works well for someone who has only ever known the web by way of a five-year-old Android device, because that’s what they have—someone who might feel like they’re being left behind by the web a little more every day. I want to build something better for them.

Duoload: Simplest website load comparison tool, ever

This is a pretty excellent tool. (Note that it cannot be used on sites that disable iframe embedding, however.)

Today I needed a quick tool to compare the loading progression (not just loading time, but also incremental rendering) of two websites, one remote and one in my localhost. Just have them side by side and see how they load relative to each other. Maybe even record the result on video and study it afterwards. That’s all. No special features, no analysis, no stats.

Performant infinite scrolling

TL;DR: Re-use your DOM elements and remove the ones that are far away from the viewport. Use placeholders to account for delayed data. Here’s a demo and the code for the infinite scroller.

Infinite scrollers pop up all over the internet. Google Music’s artist list is one, Facebook’s timeline is one and Twitter’s live feed is one as well. You scroll down and before you reach the bottom, new content magically appears seemingly out of nowhere. It’s a seamless experience for users and it’s easy to see the appeal.

The technical challenge behind an infinite scroller, however, is harder than it seems. The range of problems you encounter when you want to do The Right Thing™ is vast. It starts with simple things like the links in the footer becoming practically unreachable because content keeps pushing the footer away. But the problems get harder. How do you handle a resize event when someone turns their phone from portrait to landscape or how do you prevent your phone from grinding to a painful halt when the list gets too long?
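
The article’s own scroller is considerably more sophisticated, but a hand-rolled sketch of the core recycling idea might look like this (the '#scroller' markup, the fixed row height, and the inline item rendering are all assumptions):

Code language: JavaScript

// Recycle a fixed pool of rows instead of growing the DOM. The container
// is assumed to have position: relative and overflow-y: auto.
const ITEM_HEIGHT = 80;    // assumed fixed row height, in px
const TOTAL_ITEMS = 10000;
const container = document.querySelector('#scroller');

// A sizer element reserves scroll space for every item up front.
const sizer = document.createElement('div');
sizer.style.height = (TOTAL_ITEMS * ITEM_HEIGHT) + 'px';
container.appendChild(sizer);

// Enough rows to cover the viewport, plus a small buffer.
const POOL_SIZE = Math.ceil(container.clientHeight / ITEM_HEIGHT) + 5;
const pool = Array.from({ length: POOL_SIZE }, () => {
  const row = document.createElement('div');
  row.style.position = 'absolute';
  row.style.top = '0';
  row.style.height = ITEM_HEIGHT + 'px';
  container.appendChild(row);
  return row;
});

// On scroll, reposition and rebind the same rows rather than creating new ones.
container.addEventListener('scroll', () => {
  const first = Math.floor(container.scrollTop / ITEM_HEIGHT);
  pool.forEach((row, i) => {
    const index = first + i;
    row.style.transform = 'translateY(' + (index * ITEM_HEIGHT) + 'px)';
    row.textContent = 'Item ' + index; // real code would bind fetched data here
  });
});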

A Comprehensive Guide to Font Loading Strategies

This is an incredibly thorough guide to web font loading strategies, and their pros and cons.

If you’re looking for a specific approach, I’ve prepared some handy links that will take you to the section you need. Let’s say you want an approach that:

  • is the most well rounded approach that will be good enough for most use cases: FOUT with a Class (a rough sketch of the idea follows this list).

  • is the easiest possible thing to implement: I’ve learned a lot about web fonts, and at the time of writing, browser support is lacking for the easiest methods of effective/robust web font implementation. It is with that in mind that I will admit—if you’re looking for the easy way out already, you should consider not using web fonts. If you don’t know what web fonts are doing to improve your design, they may not be right for you. Don’t get me wrong, web fonts are great. But educate yourself on the benefit first. (In Defense of Web Fonts, The Value of a Web Font by Robin Rendle is a good start. If you have others, please leave a comment below!)

  • is the best performance-oriented approach: Use one of the Critical FOFT approaches. Personally, at the time of writing my preference is Critical FOFT with Data URI, but that will shift toward Critical FOFT with preload as browser support for preload increases.

  • will work well with a large number of web fonts: If you’re web font obsessed (anything more than 4 or 5 web fonts or a total file size of more than 100KB) this one is kind of tricky. I’d first recommend trying to pare your web font usage down, but if that isn’t possible stick with a standard FOFT, or FOUT with Two Stage Render approach. Use separate FOFT approaches for each typeface (grouping of Roman, Bold, Italic, et cetera).

  • will work with my existing cloud/web font hosting solution: FOFT approaches generally require self hosting, so stick with the tried and true FOUT with a Class approach.
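
The guide’s own implementation of FOUT with a Class uses the Font Face Observer library; as a minimal sketch of the same idea using the native CSS Font Loading API instead (the font and class names are illustrative):

Code language: JavaScript

// Fallback text renders immediately; once the web font files are ready,
// a class on <html> switches the font-family over via CSS.
if ('fonts' in document) {
  Promise.all([
    document.fonts.load('1em Lato'),
    document.fonts.load('bold 1em Lato')
  ]).then(function () {
    document.documentElement.classList.add('fonts-loaded');
  });
}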

Front-End Performance Checklist 2017

This is an incredibly exhaustive list of performance tweaks, improvements, and best practices. Some may be outside the scope of smaller websites, but there are plenty of things for everyone.

Back in the day, performance was often a mere afterthought. Often deferred till the very end of the project, it would boil down to minification, concatenation, asset optimization and potentially a few fine adjustments on the server’s config file. Looking back now, things seem to have changed quite significantly.

Performance isn’t just a technical concern: It matters, and when baking it into the workflow, design decisions have to be informed by their performance implications. Performance has to be measured, monitored and refined continually, and the growing complexity of the web poses new challenges that make it hard to keep track of metrics, because metrics will vary significantly depending on the device, browser, protocol, network type and latency (CDNs, ISPs, caches, proxies, firewalls, load balancers and servers all play a role in performance).

So, if we created an overview of all the things we have to keep in mind when improving performance — from the very start of the process until the final release of the website — what would that list look like? Below you’ll find a (hopefully unbiased and objective) front-end performance checklist for 2017 — an overview of the issues you might need to consider to ensure that your response times are fast and your website smooth.

Don’t go single-page-app too soon, or how GitHub’s reimplementation of navigation in JavaScript loses the browser’s streaming capability

A few weeks ago I was at Heathrow airport getting a bit of work done before a flight, and I noticed something odd about the performance of GitHub: it was quicker to open links in a new window than to simply click them.

[…]

When you load a page, the browser takes a network stream and pipes it to the HTML parser, and the HTML parser is piped to the document. This means the page can render progressively as it’s downloading. The page may be 100k, but it can render useful content after only 20k is received.

This is a great, ancient browser feature, but as developers we often engineer it away. Most load-time performance advice boils down to “show them what you got” - don’t hold back, don’t wait until you have everything before showing the user anything.

GitHub cares about performance, so they server-render their pages. However, when you navigate within the same tab, navigation is reimplemented entirely in JavaScript. Something like…

Code language: JavaScript

// …lots of code to reimplement browser navigation…
const response = await fetch('page-data.inc');
const html = await response.text();
document.querySelector('.content').innerHTML = html;
// …loads more code to reimplement browser navigation…

This breaks the rule, as all of page-data.inc is downloaded before anything is done with it. The server-rendered version doesn’t hoard content this way, it streams, making it faster. For GitHub’s client-side render, a lot of JavaScript was written to make this slow.

I’m just using GitHub as an example here - this anti-pattern is used by almost every single-page-app.

Switching content in the page can have some benefits, especially if you have some heavy scripts, as you can update content without re-evaluating all that JS. But can we do that without losing streaming?

[…]

Newline-delimited JSON

A lot of sites deliver their dynamic updates as JSON. Unfortunately JSON isn’t a streaming-friendly format. There are streaming JSON parsers out there, but they aren’t easy to use.

So instead of delivering a chunk of JSON:

Code language: JSON

{
  "Comments": [
    {"author": "Alex", "body": "…"},
    {"author": "Jake", "body": "…"}
  ]
}

…deliver each JSON object on a new line:

Code language: JSON

{"author": "Alex", "body": "…"}
{"author": "Jake", "body": "…"}

This is called “newline-delimited JSON” and there’s a sort-of standard for it. Writing a parser for the above is much simpler. In 2017 we’ll be able to express this as a series of composable transform streams:

Code language: JavaScript

const response = await fetch('comments.ndjson');
const comments = response.body
  // From bytes to text (TextDecoderStream is the pipeable form that shipped):
  .pipeThrough(new TextDecoderStream())
  // Buffer until newlines:
  .pipeThrough(splitStream('\n'))
  // Parse chunks as JSON:
  .pipeThrough(parseJSON());
 
for await (const comment of comments) {
  // Process each comment and add it to the page:
  // (via whatever template or VDOM you're using)
  addCommentToPage(comment);
}

…where splitStream and parseJSON are reusable transform streams. But in the meantime, for maximum browser compatibility we can hack it on top of XHR.
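
The article leaves those helpers undefined. One possible sketch, assuming the standard TransformStream API (the buffering strategy here is just the simplest thing that works):

Code language: JavaScript

// Split incoming text chunks on a separator, buffering partial lines.
function splitStream(separator) {
  let buffer = '';
  return new TransformStream({
    transform(chunk, controller) {
      buffer += chunk;
      const parts = buffer.split(separator);
      buffer = parts.pop(); // keep the trailing partial line for later
      parts.forEach(part => controller.enqueue(part));
    },
    flush(controller) {
      if (buffer) controller.enqueue(buffer);
    }
  });
}

// Parse each complete line as a JSON value.
function parseJSON() {
  return new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(JSON.parse(chunk));
    }
  });
}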

Again, I’ve built a little demo where you can compare the two; here are the 3G results:

Versus normal JSON, ND-JSON gets content on screen 1.5 seconds sooner, although it isn’t quite as fast as the iframe solution. Since it has to wait for a complete JSON object before it can create elements, you may run into a lack of streaming if your JSON objects are huge.

Don’t go single-page-app too soon

As I mentioned above, GitHub wrote a lot of code to create this performance problem. Reimplementing navigations on the client is hard, and if you’re changing large parts of the page it might not be worth it.

[…]

[A] simple no-JavaScript browser navigation to a server-rendered page is roughly as fast. The test page is really simple aside from the comments list; your mileage may vary if you have a lot of complex content repeated between pages (basically, I mean horrible ad scripts), but always test! You might be writing a lot of code for very little benefit, or even making it slower.