Module 4 | General Web » Performance
Performance considerations in applications are often overlooked or ignored until there's an actual problem. Browsers have become so fast at parsing and rendering our front-end code that it's easy to build an application that seems fast without putting in much thoughtful effort. The problem with waiting until there's a performance issue to address it is that by then, your application is likely quite large and complex, and much more difficult to refactor.
If we take a more proactive approach, we'll find that our applications are more scalable, more compatible across platforms, and, of course, faster. While you may not notice that your application is slow, shaving even a few milliseconds off load times can make a significant difference in how it feels to interact with.
What are some strategies you currently take to make your applications more performant?
We'll talk about more performance strategies in just a minute, but recognize that there are already things you do to increase performance in your applications, like bundling and minifying files. Strategies like these are so built into the development process that it's easy to forget they exist for performance reasons.
In the 'Audits' panel of Chrome Canary dev tools, pick a large website or application you frequent. (e.g. nytimes.com, nhl.com, etc.) Run a performance audit on the site and read through the results. Write down all the words or phrases you are unfamiliar with.
A render-blocking resource is one that prevents any further content from being rendered to the DOM until it has finished processing. CSS is considered a render-blocking resource. Why, then, do we put our CSS tags in the head of our HTML files?
The short answer is that we need to -- CSS is critical to displaying our content in a visually appropriate and meaningful manner. Rendering a bunch of content to the page with no styling or visual hierarchy would make it much harder to understand and mentally organize.
We can, however, take a couple steps to reduce the amount of CSS that blocks our rendering. When we have CSS that is only used conditionally - say for print styles or mobile devices - we can extract those out into their own files and use media attributes to denote that they should be non-blocking. For example:
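A minimal sketch of what this might look like (the filenames here are hypothetical):

```html
<!-- Render-blocking: needed for the initial paint -->
<link rel="stylesheet" href="styles.css">

<!-- Non-blocking: only applies when printing -->
<link rel="stylesheet" href="print.css" media="print">

<!-- Non-blocking on larger screens: only applies up to 480px wide -->
<link rel="stylesheet" href="mobile.css" media="(max-width: 480px)">
```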
The two CSS tags with media attributes won't block the render of the rest of our DOM because they're marked as only being necessary during print or on a device with a max-width of 480px.
Like CSS, JavaScript is also considered a render-blocking resource unless explicitly told not to be. We've previously learned to keep our JavaScript tags at the end of our body element for this exact reason. Generally we're using JavaScript for interactivity and DOM manipulation, which aren't possible until the DOM tree is rendered and painted to the screen, so there's no need to put them in the head tag.
When we put JavaScript inline in our HTML file, it completely blocks the parser. Because the browser doesn't know what the JavaScript is going to do to the page, it hands over complete control, pauses the parser, and lets the script do its thing. When we use a script tag to fetch an external JavaScript file, the behavior is even worse: not only is the parser blocked, but we also have to wait for that resource to be fetched from the network, which can add even more milliseconds onto our wait time.
One way to get around this and tell the browser to let the script execute only when it's ready is by using an async attribute:
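For example (the filename is hypothetical):

```html
<!-- Fetched in parallel with parsing; executes as soon as it arrives -->
<script src="app.js" async></script>
```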
There's also a defer attribute that behaves slightly differently: a deferred script is also fetched in parallel with parsing, but it waits to execute until the document has been fully parsed, and deferred scripts run in the order they appear. Read about the differences here
When rendering the CSSOM and DOM, it's important to keep the DOM tree light. The more nodes you have, the more elements need to be styled, sized, and positioned before they can be painted to the page.
One of the more expensive processes in our applications is making network requests. They're unavoidable, but we can do a lot to lessen the toll they take on the performance of our apps.
Many applications will have a build process and tooling in place that bundles files together - it concatenates your JavaScript files into a single file so that only one network request has to be made to get your JavaScript in place. This allows us to reduce the number of network requests required to get our resources loaded. Not only does every trip to the server take some time to come back, but the more trips we have to make, the more likely we are to have some of our requests stalled or queued.
With HTTP/1.1, requests for assets on the same server can be stalled if too many are happening all at once: browsers typically allow only about six concurrent connections per origin, and any further requests are queued. This means we want each request to be as small as possible so it comes back quickly and frees up a connection for the next one. HTTP/2 can multiplex many requests over a single connection, so this is less of a problem, but because the old limits were the norm for so long, we've become accustomed to minifying our files to reduce the byte size of our requests.
If you've ever worked with Photoshop or some other image editing software, you likely ran into some different options for saving your images. A lot of these options depend on whether you're saving the image for print purposes or for the web. The resolution images must be printed at for magazines has historically been much higher than what's needed for the web, which also led to larger file sizes.
A lot of the images you'll see on websites are .PNGs or .JPGs. JPGs are good for photographs - they use lossy compression, which can shrink a photo dramatically with little visible loss of quality. PNGs use lossless compression and allow for transparency, which makes them good for logos, icons, line drawings, and text.
It's important not only to save your images with the appropriate resolutions and file formats, but also at the correct sizes. You don't want to be scaling down a 2,000px-wide image for a mobile viewport that can only display 480px. Make multiple copies of the image at different sizes for different viewports.
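One way to serve those copies is the srcset attribute, which lets the browser pick the appropriately sized file for the current viewport (the filenames here are hypothetical):

```html
<img src="photo-480.jpg"
     srcset="photo-480.jpg 480w,
             photo-1000.jpg 1000w,
             photo-2000.jpg 2000w"
     sizes="100vw"
     alt="A description of the photo">
```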
Another strategy for avoiding performance hits for images is called lazy loading. This means that you don't actually request an image until it's in the visible portion of the viewport. This relies on detecting when the images are scrolled into the viewport. Check out this code example on CSS-Tricks.
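A rough sketch of the idea, using IntersectionObserver as a more ergonomic alternative to raw scroll listeners (the `img[data-src]` markup convention here is an assumption, not a standard):

```javascript
// Lazy loading sketch: images start out with a data-src attribute instead of
// src, so the browser doesn't request them up front. Once an image scrolls
// into view, we copy data-src into src to trigger the real network request.
function loadImage(img) {
  if (img.dataset && img.dataset.src) {
    img.src = img.dataset.src;   // kicks off the actual image download
    delete img.dataset.src;      // mark this image as already loaded
  }
}

// In the browser, an IntersectionObserver calls loadImage as each image
// enters the viewport. (Guarded so this sketch also runs outside a browser.)
if (typeof IntersectionObserver !== 'undefined') {
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        loadImage(entry.target);
        observer.unobserve(entry.target); // each image only loads once
      }
    });
  });

  document.querySelectorAll('img[data-src]')
    .forEach((img) => observer.observe(img));
}
```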
The JavaScript we write in our applications is often littered with places where we can improve performance. There are a lot of factors that go into writing performant JavaScript, and sometimes you have to make compromises for readability or maintainability purposes. Keeping these strategies in mind will help make prioritizing performance more second-nature.
One quick trick you can use to determine how long a particular function takes is the performance.now() API, which returns a high-resolution timestamp you can compare before and after the function runs.
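For example, a quick sketch of timing a function this way (sumOfSquares is just a placeholder workload):

```javascript
// A stand-in for whatever function you actually want to measure.
function sumOfSquares(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    total += i * i;
  }
  return total;
}

// Take a timestamp before and after, then compare the two.
const start = performance.now();
sumOfSquares(1_000_000);
const end = performance.now();

console.log(`sumOfSquares took ${(end - start).toFixed(2)}ms`);
```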
There are also tools like JSPerf where members of the community create benchmarks for multiple approaches to achieving some common functionality.
DOM manipulations are a particularly slow aspect of client-side code. One common mistake new developers make is looping through an array of data and calling `.append()` on each iteration. This means we're doing a DOM manipulation every time the loop runs, which can lock up the UI when we have very large datasets. What we really want to do is build up all the HTML we need in memory, as a "virtual" chunk of HTML, and then append it all at once after it has finished building up.
We can use Document Fragments for this purpose. Document Fragments allow you to build up a chunk of HTML by appending to a fragment, stored in a variable, in your JavaScript code, rather than appending each item to the DOM one at a time.
Take a look at the following example, and play with the two different solutions to see the performance gains achieved by using document fragments:
See the Pen DocumentFragments Example by Brittany Storoz (@brittanystoroz) on CodePen.
Garbage collection is a process in which the browser reclaims blocks of memory (objects) which are no longer reachable.
Take a look at the following example:
See the Pen Memory Allocation Example by Brittany Storoz (@brittanystoroz) on CodePen.
In the first scenario, simply calling foo() means that anything within that function is available for garbage collection after the function finishes executing, because nothing in it is reachable after that point.
In the second scenario, because we store the function returned by foo() in a variable, we cannot clean up the Person instance created within foo() - it's technically still reachable by calling baz().
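The pattern described above looks roughly like this (a hypothetical reconstruction - the actual code lives in the pen):

```javascript
function Person(name) {
  this.name = name;
}

function foo() {
  const person = new Person('Ada'); // a new Person is allocated on every call
  return function () {              // this closure keeps `person` reachable
    return person.name;
  };
}

foo();             // return value discarded: that Person can be garbage collected
const baz = foo(); // closure stored: its Person stays reachable as long as baz does
baz();             // still works, so that Person can never be reclaimed
```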
We can see this by looking at the Memory tab of devtools and generating a heap snapshot after running through both code examples. In the example where our object is not available for garbage collection, the snapshot will still show the retained Person instance.
A rule of thumb to avoid these types of mistakes is to declare variables in the smallest scope where they're needed or being used, and to avoid polluting the global scope - global values are reachable from anywhere in the code, so they will never be available for garbage collection.