Page speed is one of the most important optimization targets for web developers today. With users accessing websites on all kinds of devices and connections, a fast-loading page means a better user experience, higher conversion rates and, last but not least, higher search engine rankings.

With PageSpeed Insights, Google offers a service both for measuring page speed in a comparable way and for improving it. The requirements for a very good rating are demanding, but with a mix of knowledge and experience they are mostly achievable.

There is one requirement, though, that I’ve always had a hard time fixing: “Eliminate render-blocking CSS”. It is fixable: you just have to extract the CSS rules needed for your above-the-fold content, inline them in a style tag in the head and load the rest of the CSS in the footer. There is even an automatic solution to extract the critical path CSS.
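For the loading part, one common approach is a tiny script at the end of the body that appends the full stylesheet, while the critical rules sit inline in a style tag in the head. A minimal sketch (the stylesheet path is just a placeholder):

// assumption: the critical rules are already inlined in a <style> tag in the
// <head>, and the full stylesheet lives at /css/main.css
var link = document.createElement('link');
link.rel = 'stylesheet';
link.href = '/css/main.css';
// appending the link at the end of the body keeps it from blocking the first render
document.body.appendChild(link);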

The problem is that the critical CSS (the styles for what you see first on your screen) differs with the content of your page and the device you are reading it on. The critical CSS for a mobile device might be completely different from the one for a desktop device. It might even differ from page to page: the critical path CSS for a page with a big header image might be completely different from that of a page without one.

To solve these problems, you have to create above-the-fold CSS for every page, at least for mobile and desktop sizes, and load it according to the target device. On top of that, you have to regenerate it as soon as the content of the page changes. That is nearly impossible to solve at build time, even with one of those lovely task runners (see the sketch below). You could do it manually, but depending on the site’s update frequency, that could be hours of work every month, week or day.
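To give an idea of what the build-time variant looks like, here is a minimal sketch using the critical npm module (the generator mentioned below; option names vary a bit between its versions), generating a combined above-the-fold stylesheet for a mobile and a desktop viewport of a single page:

var critical = require('critical');

critical.generate({
  base: 'dist/',
  src: 'index.html',
  dest: 'css/critical.css',
  // one set of dimensions per target device class
  dimensions: [
    { width: 320, height: 480 },  // mobile
    { width: 1300, height: 900 }  // desktop
  ]
}, function (err, output) {
  if (err) {
    console.log('Error generating critical CSS!');
    return;
  }
  console.log('Critical CSS written to css/critical.css');
});

And you would have to repeat this for every page or template and rerun it whenever the content changes.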

Eliminate render blocking CSS as a service

Another possibility is to generate the critical path CSS automatically and on demand. I’ve been working on a service that does exactly that for the last couple of weeks, and today we are glad to launch the first preview of eliminate-render-blocking-css.com. It’s built on the incredible critical CSS generator by Addy Osmani and will provide an API to extract precisely the above-the-fold CSS you need. To make it easy to integrate into your CMS, we are working on plugins for WordPress, Drupal and Joomla, which will be published soon.

If this sounds interesting, take a look and register your e-mail address to stay updated!

In one of my bigger projects, we are using Redis with Node.js as a buffer for large sets of monitoring data.

When we scaled that setup from a test case with just one data-sending instance to about 150 instances, Redis ran out of memory quite fast. Depending on the version, the operating system and some other factors, that leads either to a crash of the Redis node or to a significant performance decrease.

Redis with Node.js: when zipping is appropriate

It is important to know that Redis’ limit for storing data is its server’s memory limit. Basically, that is also what makes Redis so fast. This is what the Redis FAQ tells us:

Redis is an in-memory but persistent on disk database, so it represents a different trade off where very high write and read speed is achieved with the limitation of data sets that can’t be larger than memory. (Source: Redis FAQ)

However, for large data sets like ours, that leaves you with a difficult decision: buy more expensive memory for your server, or use CPU power on the client to reduce the data size by compressing it. Which way is the right one for your specific setup is something you have to figure out yourself. A good starting point is this study about a similar problem:

On the other hand, it seems that when we have an application that uses Redis mostly for reading stuff from it, we might think about compressing the input data. However as I mentioned before – try not to overdo it and implement such a mechanism only when you really need it.
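To get a feel for what compression would buy you with your own data, a quick measurement on a sample of your real payloads helps with that decision. A minimal sketch (sample.json is a placeholder for one of your typical payloads):

var fs = require('fs'),
  zlib = require('zlib');

var raw = fs.readFileSync('sample.json');
var zipped = zlib.deflateSync(raw);

console.log('raw bytes:    ' + raw.length);
console.log('zipped bytes: ' + zipped.length);
console.log('ratio:        ' + (zipped.length / raw.length * 100).toFixed(1) + '%');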

Now, did you make your decision? Read on if you want to know how to implement this with Node.js.

Compress Data for Redis with Node.js

The implementation is built on the following Node.js modules:

  • redis
  • zlib

Here is how you write zipped data to Redis:

var zlib = require('zlib'),
  redis = require('redis'),
  client = redis.createClient();

client.on("error", function (err) {
  console.log("Error " + err);
});

var redisKey = 'add_your_key_here';
var redisValue = 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim   ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.';

zlib.deflate(redisValue, function (err, zippedValue) {

  if (err) {
    console.log('Error deflating!');
    return;
  }

  // you have to encode the binary zip-data to base64 to be able to read it
  // later
  client.set(redisKey, zippedValue.toString('base64'), function (err) {

    if (err) {
      console.log('Error saving to redis!');
      return;
    }

    console.log('Zipped value saved!');

  });
});

And that’s how you read it:

var zlib = require('zlib'),
  redis = require('redis'),
  client = redis.createClient();

client.on("error", function (err) {
  console.log("Error " + err);
});

var redisKey = 'add_your_key_here';

client.get(redisKey, function (err, zippedValue) {

  if (err) {
    console.log('Error reading from redis!');
    return;
  }

  // decode the base64 string back to the binary zip-data (Buffer.from replaces
  // the deprecated new Buffer constructor)
  zlib.inflate(Buffer.from(zippedValue, 'base64'), function (err, redisValue) {

    if (err) {
      console.log('Error inflating!');
      return;
    }

    // inflate returns a Buffer, so convert it back to a string
    console.log(redisValue.toString());

  });
});

Be aware of the base64 encoding. It is required so you can write and read the binary zipped data as a string in JavaScript. You can find more information, for example, here.
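To illustrate why: the deflated Buffer survives a base64 round trip unchanged, while forcing it through a plain UTF-8 string corrupts it. A small standalone sketch using zlib’s synchronous API:

var zlib = require('zlib');

var binary = zlib.deflateSync('Lorem ipsum dolor sit amet');

// round trip via base64 keeps the binary data intact
var viaBase64 = Buffer.from(binary.toString('base64'), 'base64');
console.log(binary.equals(viaBase64)); // true

// round trip via a plain utf8 string does not: invalid byte sequences get replaced
var viaUtf8 = Buffer.from(binary.toString('utf8'), 'utf8');
console.log(binary.equals(viaUtf8)); // false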

I am a great fan of Elasticsearch, the ElasticPress plugin and the ElasticPress WooCommerce module. But the other day, while working on a client’s WooCommerce shop, I was for the first time not satisfied with the quality of the search results.

After some digging with the Debug Bar plugin and its ElasticPress extension, I could narrow the problem down to a combination of many similar titles in our database and the Elasticsearch fuzziness parameter. Of course, the fantastic folks at 10up provide a filter for that in their ElasticPress plugin.

So here is how to disable fuzziness in the search:

// returning 0 disables fuzzy matching, so only exact matches count
function themeslug_deactivate_ep_fuzziness( $fuzz ) {
    return 0;
}
add_filter( 'ep_fuzziness_arg', 'themeslug_deactivate_ep_fuzziness' );

Of course, you can just as well raise the fuzziness with the same method. If you want to adjust the fuzziness depending on other search parameters, the filter provides two more arguments, $search_fields and $args, which might help.
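For illustration, here is a minimal sketch (the function name and the length threshold are assumptions, and it assumes the search term is available in $args['s']) that keeps the default fuzziness for longer terms but disables it for very short ones:

function themeslug_adjust_ep_fuzziness( $fuzz, $search_fields, $args ) {
    // exact matching only for very short search terms
    if ( isset( $args['s'] ) && strlen( $args['s'] ) <= 4 ) {
        return 0;
    }
    return $fuzz;
}
add_filter( 'ep_fuzziness_arg', 'themeslug_adjust_ep_fuzziness', 10, 3 );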