Web Performance Optimization: Client Side


The last area I want to examine in my series on WPO is the client side. Many people currently regard the browser as the only client, but I think clients like apps can benefit from the same ideas, as they are built on the same concepts nowadays, some even in HTML.

One especially interesting fact about browser optimization is that it is mostly built on guessing. It shares this with SEO, where guessing plays an even bigger role, as nobody really knows how search engines operate. As a result, the majority of WPO tools report a “grade”. That’s right: as in school, you don’t know exactly what to do to get better, so you just try some more. However, there are several tools available that analyze your sites and give you pointers as to what you might want to fix or change to improve your grade. Two examples are YSlow by Yahoo and PageSpeed by Google, which both check your sites against a catalog of best practices.

You can also use an aggregator like webpagetest.org or showslow.org to run various tests for you. They all offer checklists that tell you where your site violates best practices and how to fix it. What you do not get, however, is an estimate of how much your changes will affect page performance.

Requests waste time

Although we unfortunately do not know exactly how a site’s behavior will change when applying best practices, it is very clear that extra requests are to be avoided at all costs. Requests cost time and money. There are two aspects of a request that can be looked at:

  1. Connection time
  2. Payload transfer time

Of course payload transfer never happens without a connection. Avoiding payload transfer can be achieved easily by two practices:

  1. Compression
  2. Caching

Compression

Compression is a piece of cake, but very often neglected. The neglect dates back to a time long ago when early versions of IE could not handle compressed responses. How you add compression varies, but it is usually very easy. I recommend letting your container do this (rather than coding it into your application). For Apache it is as simple as:

<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css \
    text/javascript application/x-javascript application/javascript \
    application/json application/xml text/x-component
</IfModule>

Of course, making images as small as possible is a sensible idea too. A few days ago Kornel Lesinski published a neat tutorial on PNGs that work, which also takes care of the transparency issues in IE 6.

Caching

Activating caching is also easy, but requires a bit of thought to avoid delivering outdated content. While you can try to cache all your HTML, most people do not need that and should only cache truly static content. So if you have a folder managed by your CMS, like “export”, or a place where all your images go, give the whole folder a far-future expiry. In Apache this is easy as well:

<IfModule mod_expires.c>
 ExpiresActive On
 ExpiresDefault "access plus 10 years"
</IfModule>

The only problem is that if you replace an image, it has to get a new name, otherwise it will not be downloaded again (we declared the old version good for ten years). The usual practice is to include a version number or timestamp in the file name. Problem solved. The process can be partially or fully automated, as discussed later.

Connection

Regarding connection time, I already described in my second post how it can be cut down using the slow-start trick. Bundling resources is even more effective, since it also avoids multiple requests altogether, especially for resources with little content. This is commonly done by concatenating JavaScript files into a single file and by so-called CSS sprites, a technique borrowed from early computer games, where one large image file contains all images and only a section of it is displayed at a time. Of course, the best advice is to not include anything on your pages that you do not need.

JavaScripts waste time

Browsers perform multiple duties, but the one that takes the most time is parsing and executing JavaScript. This is also the reason browser makers keep improving their JS engines. As a consequence, I do not know how long benchmarks like the great JavaScript Loop Benchmark by Greg Reimer will hold true. The main issue I personally see is the amount of really bad JavaScript code on the net. Developers are tempted to copy and paste snippets from the internet without actually understanding them. JavaScript is a powerful language, but it is shocking to see how badly it gets used (I have done this myself).

The loop benchmark linked above shows how easily you can (or at least could) get it wrong: a smart use of the language might let a loop take 15 ms, an incorrect usage 3203 ms (example: HTML collection, length=1000, looped 100 times, IE7).
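The classic pitfall that benchmark measures can be sketched in a few lines; a plain array stands in for the HTML collection here, and the numbers above came from old engines like IE7, where the gap was dramatic:

```javascript
// Sketch of the loop pitfall: re-reading a collection's length on every
// iteration versus caching it once. With live DOM collections in old engines,
// each .length access could be expensive.
const items = new Array(1000).fill(0);

// slow pattern: items.length is read on every one of the 1000 iterations
let sum = 0;
for (let i = 0; i < items.length; i++) sum += 1;

// fast pattern: the length is read once and cached in a local variable
let sum2 = 0;
for (let i = 0, n = items.length; i < n; i++) sum2 += 1;
```

Modern engines optimize the first form for plain arrays, but the cached variant is still the safe habit for live DOM collections.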
Additionally, most JavaScript execution blocks page rendering. This is undesirable and frequently seen with advertisements. Unobtrusive JavaScript is the answer, but it is rarely used. The way to go is to move all JavaScript to the end of the page and remove all inline JavaScript. However, a change like this is seldom realistic.

What we can do is profile JavaScript code. The timing specs of the W3C Web Performance Working Group are still in the works, but Firefox already includes a profiler in the fantastic Firebug extension, and the Chrome Developer Tools follow Firebug closely. Both let us watch code execution and find hotspots to fix.
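Even without a full profiler, you can bracket a suspected hotspot by hand. A minimal sketch, with a made-up hotspot function:

```javascript
// Sketch: manual timing with console.time/console.timeEnd, a poor man's
// profiler available in Firebug, Chrome DevTools, and Node alike.
function hotspot() {
  let total = 0;
  for (let i = 0; i < 1e6; i++) total += i; // stand-in for expensive work
  return total;
}

console.time('hotspot');
const result = hotspot();
console.timeEnd('hotspot'); // prints something like "hotspot: 3.2ms"
```

A real profiler goes further by attributing time across the whole call tree instead of one hand-picked function.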

Automatic fix

Recently Google released an Apache module called mod_pagespeed. The idea behind it is that most best practices should simply be followed and cause no issues; some, however, are hard to apply upfront but can easily be applied at runtime. That is the job of mod_pagespeed: it rewrites your HTML, links, and cache configuration. It works best on unoptimized sites; since it is extra code being executed, it might slow already optimized sites down a bit. My usual advice: measure your results. Besides mod_pagespeed, there are several commercial solutions available, some of which also include CDNs.

Another kind of automatic fix is the HTML5 Boilerplate, a web site template that ships the proven best-practice configuration, preconfigured and documented. I can highly recommend it for its ideas, especially the CSS part.

Summary

I do believe that employing best practices makes your web pages faster, but finding the 20% change that improves your page by 80% is not that easy. Tooling support is still limited, so it is left to us to experiment with ideas and measure their effects. Browsers are getting faster day by day, so cheating and voodoo have a short lifespan; the web, too, needs clean and simple design. If you want expert consulting, feel free to contact us. We can find out what makes your pages slow and what will give you the boost with the best return on investment on all layers: infrastructure, server software, and client side.

I hope you have enjoyed my short introduction on Web Performance Optimization. Happy Holidays!

My WPO series:

  1. Introduction into Web Performance Optimization
  2. Web Performance Optimization: The Infrastructure
  3. Web Performance Optimization: Serverside Software
  4. Web Performance Optimization: Client Side
