Zoning out with Moment Timezone

I’ve recently been heavily involved in adding time zone sensitivity to a web application, and I thought I’d share my first experiences of handling this from the perspective of the browser.

A great little library for handling this tricky problem can be found in the form of Moment Timezone, which sits proudly beside Moment.js as a fuller date parsing solution that incorporates time zones.

The part of the library that really caught my attention was its time zone inferring abilities: the superbly named ‘guess’ function (loving the name!). The function, despite the name, is actually pretty sophisticated, so let’s take a look at a working example and at how the documentation defines the ‘guts’ of its time zone guessing powers.

Moment Timezone can be installed and used in a number of different ways, as described in its installation documentation, but I went with the good old classic method of adding a NuGet package via Visual Studio:

Adding Moment Timezone via NuGet.

Or, if you want to use the Package Manager Console, then use this nugget instead:

Install-Package Moment.Timezone.js

Once the package is installed, alive and kicking, we need to (as you would expect) reference the core Moment JavaScript library, followed by the Moment Timezone library, as follows:

<script src="~/Scripts/moment.min.js" type="text/javascript"></script>
<script src="~/Scripts/moment-timezone-with-data.min.js" type="text/javascript"></script>

You are then ready to utilise the guess function in a stupendous one-liner, just like this (wrapped in a jQuery document ready function, in this example):

<script type="text/javascript">
    // On page load, grab a value denoting the time zone of the browser
    $(function () {
        // Log to the console the result of the call to moment.tz.guess()
        console.log(moment.tz.guess());
    });
</script>
The screenshots below show just a few examples of how the guess function works; it returns a tz database (i.e. IANA time zone database) value denoting which time zone Moment Timezone has inferred the client is in.

Moment Guess Usage London.

Moment Guess Usage Cairo.

Moment Guess Usage Havana.

For newer, supporting browsers, Moment Timezone can utilise the Internationalization API (Intl.DateTimeFormat().resolvedOptions().timeZone) to obtain time zone information from the browser. For other browsers, Moment Timezone will gather data for a handful of moments from around the current year, using Date#getTimezoneOffset and Date#toString, to intelligently infer as much about the user’s environment as possible. From this information, a comparison is made against entries in the time zone database and the best match is returned. The most interesting part of this process is what happens in the case of a tied match; in this instance, a city’s population becomes the deciding factor (the time zone linked to the city with the largest population is returned).
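
If you fancy peeking under the bonnet, a minimal sketch of the Intl portion looks like this (assuming the scripts referenced earlier are already on the page):

<script type="text/javascript">
    // Where the Internationalization API is available, you can ask the browser directly
    if (window.Intl && typeof Intl.DateTimeFormat === "function") {
        console.log("Intl says: " + Intl.DateTimeFormat().resolvedOptions().timeZone);
    }

    // moment.tz.guess() wraps this up, with the sampling fallback described above
    console.log("Guess says: " + moment.tz.guess());
</script>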

A full listing of tz database values can be found via the link below, showing the range of options available, including historical time zones. It’s worth noting that the tz database also forms the backbone of the very popular Joda-Time and Noda Time date/time and time zone handling libraries (Java and C# respectively; Noda Time coming from the legendary Mr Skeet!).

List of tz database zones

For the project I was involved with, I ended up using Noda Time to perform the actual conversions server-side, utilising Moment Timezone to provide a ‘best stab’ at a user’s time zone on first access of the system. I’d like to give this the attention it deserves in a follow-up post.
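
To give a flavour of how the two halves meet, here is a minimal sketch of the client-side piece; the '/api/user/timezone' endpoint and 'ianaZoneId' parameter are hypothetical stand-ins for whatever your own server exposes:

<script type="text/javascript">
    $(function () {
        // On first access, post the browser's best-guess zone back to the server
        // ('/api/user/timezone' is a made-up endpoint, purely for illustration)
        $.post("/api/user/timezone", { ianaZoneId: moment.tz.guess() })
            .done(function () {
                console.log("Time zone preference stored.");
            });
    });
</script>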

Have a great week everyone, until the next time!

SEO – Gouging Eyes out with Rusty Spoons

A very short piece on my current, off-the-wall opinions on my little SEO journey so far and the shenanigans we’ve faced up to this point.

Ok, so it hasn’t been that bad! Plus, it’s interesting that I opted to use ‘spoons’ and not just a singular spoon; based on the post title, it looks like I plan to gouge out both my eyes at once!

I have certainly had moments of scratching my head as to what the hell is going on. SEO often throws up questions like ‘is option a, b, c, z, or a combination of some or all of them, the best way to approach this?’, which quickly leads to questioning every blooming decision you plan to make to the nth degree!

With the website ‘fundamentally’ finished (ok, I’m completely and utterly lying; I want to rewrite massive chunks, in the usual ‘unhappy with the code as soon as you’ve finished writing it’ developer condition, or sickness if you will!), we have been desperately trying to hike our way up the rankings. This has, after a painstaking process, happened; to some degree at least.

I picked this book up for starters, as all good things start with a good read; don’t be put off by the whopping great name:

SEO 2016: Learn search engine optimization with smart internet marketing strategies

After reading this, picking the brains of a few SEO boffins (thanks for that, by the way) and asking friends to pick additional boffin brains for me, I was ready to start making some of the larger, more critical changes.

So, in no particular order, here were the ideas and concepts that I feel have made the most impact in the short time we have been climbing the SEO mountain…

Page Titles and Page Descriptions

This, without a shadow of a doubt, had the largest impact. We, quite literally, just added unique metadata to each page and ensured our titles and descriptions were informative, without skimping on the keyword combinations we were targeting.
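
For the avoidance of doubt, this is the kind of per-page metadata I mean; the title and description below are invented examples rather than our actual content:

<head>
    <!-- A unique, descriptive title per page (the values here are made-up examples) -->
    <title>Handmade Oak Furniture in Yorkshire | Example Co</title>

    <!-- A unique description per page; this often becomes the search result snippet -->
    <meta name="description" content="Bespoke, handmade oak furniture crafted in Yorkshire, with free delivery on larger orders." />
</head>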

Up until this point, the web applications I have produced out in the wild have been linked to from other sites (local authorities, for example); those sites took on the burden of SEO and, in most cases, were sought out for the services they provide (i.e. it was non-commercial in a sense, and the traffic was not being fought over).

Now that I find myself competing for these precious nuggets of traffic, this takes the biggest bite of the biscuit for me. We went from completely impossible to find, for the small subset of keywords we were targeting, to being on page 3 of the search results (with a little variance across Google and Bing, so that’s a rough approximation). Not perfect, but a very good start!

Keyword Optimisation

This took a couple of days to do in the end. It boiled down, in essence, to looking for keyword combinations with good levels of traffic but lower degrees of difficulty when it came to competition. We ended up spreading our eggs amongst a few baskets here and did target a few competitive keywords, but only in an effort to make our content as natural and accurate as possible (clarity and quality of content is also a ranking factor, after all).

We used the Moz tools for this job, checking out the keyword densities of comparable sites and then planning some alternatives. They can be found here (you get a month’s free trial, by the way):

Moz Keyword Explorer

Moz Open Site Explorer

This has had some impact since our site was re-crawled and re-indexed. Just from a content quality perspective, it was also well worth running everything through Grammarly. I have the Chrome plugin but, in this instance, I used the Word plugin to get the job done (you can get a free account, with additional paid-for options available):

Grammarly for Word

Grammarly for Chrome

Keywords in Content, Image Titles/Alt Attributes and Anchor Text

A simple one, but one that, after a few hours of looking around the site (and having built up a new keyword strategy using the Moz tools), we knew we could improve on.

We now have a much better spread of keywords across our pages, including LSI (Latent Semantic Indexing) keywords; essentially just related phrases.

As for image titles and title/alt attributes, we went on a massive, night-long rampage to improve these, based on our new-found keyword knowledge (using hyphens to separate the words in resource names).

Lastly, a great deal of effort went into using natural and informative text within anchor tags, which, to be honest, we were a little weak on. This appears to have had a positive impact.
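
To illustrate (with entirely made-up file names and links), this is the flavour of the changes we made across the board:

<!-- Before: a meaningless resource name, no alt text and vague anchor text -->
<img src="/images/img01.jpg" alt="" />
<a href="/products">Click here</a>

<!-- After: hyphen-separated resource name, descriptive alt/title text and natural anchor text -->
<img src="/images/handmade-oak-coffee-table.jpg"
     alt="Handmade oak coffee table with a natural wax finish"
     title="Handmade oak coffee table" />
<a href="/products">Browse our handmade oak furniture range</a>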

Sitemap Encoding

This one got me! I generated a sitemap that, on the face of it, looked prim and proper. However, after checking it in the Google Search Console, I found I had been sucker-punched by dodgy encoding (a UTF-8 BOM, to be exact). There, at the start of my file, sat a Byte Order Mark (otherwise invisible in Notepad/Visual Studio, as these things often are). This equalled an instant red flag and is something worth not getting caught out by.

I found a reasonable-looking sitemap.xml validator which flagged the issue:

Robots.txt Checker

Recreating the file in Notepad (this time without the BOM) and checking it in fixed the issue, along with a resubmit via the Google Search Console.
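
If you would rather catch this yourself before Google does, a quick and dirty sketch (assuming Node.js is to hand; the file name is just an example) is to inspect the first three bytes of the file:

// checkBom.js - report (and strip) a UTF-8 BOM; run with: node checkBom.js sitemap.xml
const fs = require("fs");

const path = process.argv[2] || "sitemap.xml";
const buffer = fs.readFileSync(path);

// The UTF-8 BOM is the three-byte sequence EF BB BF at the very start of the file
const hasBom = buffer.length >= 3 &&
    buffer[0] === 0xEF && buffer[1] === 0xBB && buffer[2] === 0xBF;

if (hasBom) {
    console.log(path + " starts with a UTF-8 BOM; rewriting without it...");
    fs.writeFileSync(path, buffer.slice(3));
} else {
    console.log(path + " looks clean; no BOM found.");
}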

One last thing (luckily I didn’t do this!): make sure to double-check that your robots.txt file isn’t disallowing access to your entire site for legitimate bots. Thankfully, I was dead careful in this regard!
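
For reference, the difference between locking every compliant bot out and letting them all in is a single character; both snippets below are illustrative:

# Disaster: blocks ALL compliant crawlers from the ENTIRE site
User-agent: *
Disallow: /

# Fine: an empty Disallow value means 'no restrictions'
User-agent: *
Disallow: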

So, what has raised eyebrows (in the ‘why does this matter’ stakes)? One thing that has really got my goat is the Flesch-Kincaid Reading Ease score. I have spent a fair bit of time racking my brains as to whether content really needs to be targeted at a thirteen- to fifteen-year-old student (when it comes to reading ability, that is). In all honesty, I’m not sure about this…

The content outlined in SEO 2016 references Searchmetrics findings showing that sites appearing in the top ten results have, on average, a Flesch reading score of 76.00; which puts a stake in the ground at the aforementioned thirteen- to fifteen-year-old reading ability.
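
For the curious, the score comes from a simple formula over average sentence length and syllables per word; a minimal sketch follows (the word, sentence and syllable counts are supplied by hand, since counting syllables programmatically is a thorny problem in itself):

// Flesch Reading Ease = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
function fleschReadingEase(totalWords, totalSentences, totalSyllables) {
    return 206.835
        - 1.015 * (totalWords / totalSentences)
        - 84.6 * (totalSyllables / totalWords);
}

// Example: 100 words across 8 sentences, containing 130 syllables in total
console.log(fleschReadingEase(100, 8, 130).toFixed(2)); // roughly 84.17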

This doesn’t sound all that bad to me; but, after some initial testing, I found that our content was falling below this bar on every occasion. Not by much, but enough to concern me. Upon testing some other comparable, higher-ranking sites, I wasn’t able to determine that any of them had much better scores, to be honest.

We reread the content multiple times before coming to the conclusion that it was easy enough to read. I personally felt that certain sentences and words were being penalised too harshly for being ‘too complex’ or having ‘too many syllables’, when I’m sure they could be easily read by children far below the thirteen- to fifteen-year-old mark.

It felt as though we would have to dumb down our content to a point where it read very unnaturally, as if we couldn’t compose a sentence of more than five words. That’s borderline insulting to most people, so we’ll be sticking to our guns for the time being.

What’s Next?

Claire and I have a few other strategies in play, such as getting some strong backlinks going; this is the next port of call. We also need to focus more on the all-important social media. I have additional tasks to perform too, such as adjusting how JavaScript is loaded (looking at async/defer options for all of my script resources, without busting pages!), plus resource bundling and caching, including resource optimisation (i.e. a little more image compression work). I’ll be a busy bee for a fair while yet!
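
On the async/defer front, this is the shape of the change being weighed up (reusing the Moment script paths from earlier; 'analytics.js' is a made-up example of an independent script). defer preserves execution order, which matters here because Moment Timezone depends on Moment:

<!-- defer: downloads in parallel, executes in document order once parsing has finished -->
<script src="~/Scripts/moment.min.js" defer></script>
<script src="~/Scripts/moment-timezone-with-data.min.js" defer></script>

<!-- async: downloads and executes as soon as it arrives; only safe for fully independent scripts -->
<script src="~/Scripts/analytics.js" async></script>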

We’ll see how this first batch of alterations plays out; it is incredibly difficult to shake the feeling that a good chunk of this is highly experimental!

If any SEO experts come across this and can offer further advice, or would like to comment, please do! I’d love to hear from you.

Until the next time, toodles!