Experimenting with Azure CDN

As I gradually piece together the Lego bricks of the slow move of the Frog & Pencil website over to a more managed approach (building a custom CMS and an all-around better ASP.NET MVC architecture), I thought it would be interesting to document moving the Frog & Pencil images over to a CDN. I was inspired to give this a go after watching Scott Hanselman make the switch for his podcast site images and other Azure Friday videos, as documented here:

Scott Hanselman lifting and shifting images over to a CDN.
Azure CDN with Akamai.

It seemed like a relatively painless process and is a step in the right direction for our site as a whole; so, let’s give it a go!

NOTE: A short way into this post I realised that I was making a few missteps. This is cool, I think, as I would rather document the journey I took with the mistakes listed, to be honest – #KeepingItReal! However, for sanity (mine and yours) I'll specify the 'correct' order of events to follow here, which you can marry up with the ramblings below:

  1. Sign in to the Azure Portal.
  2. Create a storage container, if you don’t already have one.
  3. Download and utilise a storage explorer application (such as Azure Storage Explorer).
  4. Create a CDN Profile and CDN endpoint (that ties explicitly to your storage container, in this instance).
  5. Go to your DNS settings and generate a CNAME property, mapping a custom domain to your CDN if you wish to.
  6. Optionally, learn how to programmatically interact with your storage container.

Azure Portal – First Steps (documenting the journey)

First things first, we must hop on over to the Azure Portal. I searched the marketplace for ‘CDN’ and clicked create in the right-hand pane, as shown:

Creating a CDN.

The next phase involves configuring a CDN profile. The profile needs to be given a name and should be attached to an Azure Subscription. I've created a new Resource Group, by specifying a name for it, but it is possible to select an existing one for use here. There are some guidelines surrounding Resource Groups, such as that items within a group should share the same lifecycle; more details can be found within this handy documentation article, read away!

The Azure CDN service is, of course, global but a Resource Group location must be set, which governs where resource metadata is ultimately stored. This could be an interesting facet to consider if there are particular compliance considerations regarding the storage of information and where it should be placed. I’m going with West Europe either way; a nice, easy choice this time around.

As for pricing, I have decided to head down the Akamai route, using the Standard Akamai pricing tier. I will have to see how this ultimately pans out cost-wise over time, but it seems reasonable:

Azure CDN Provider Pricing.

At this point, we can explicitly create a CDN endpoint (where resources will be ultimately exposed). The endpoint has a suffix of ‘.azureedge.net’ and I’ve simply specified the first part of our domain, ‘frogandpencil’ as the prefix.

This is where I hit a bit of a revelation with the 'Origin Type' drop-down. You can select from Storage, Cloud service, Web app or Custom origin (which is cool!), of which I want to use Storage. After selecting this I can pick an 'Origin hostname'. The light bulb moment here, for me, is that I should have created a storage container first! I'd watched enough videos to have dodged this little problem, but I still managed to stumble… all part of the learning process 😉

So… Let's Create a Storage Container

Back to the marketplace then. The obvious pick seems to be 'Storage account – blob, file, table, queue', so I've gone ahead and clicked create here:

Setup Azure Storage.

When creating the storage account there are a fair few options to consider, a good number of which read as if they will impact pricing. I had to use the documentation found here to make my choices. I settled on the setup described below (for images; and, as the site isn't yet using https, I've gone with the secure transfer feature disabled – one to review in the future):

As an overview, the guidance suggests the use of the 'Resource manager' deployment model for new applications. There doesn't seem to be a penalty for using the 'StorageV2' account kind, which extends the types that can be stored beyond just blob data, so that is what I am going for.

Performance-wise, the 'standard' option seems like an acceptable setting at the moment and, for the kind of data I'll be storing (images for now, and possibly other static content later down the line), I can opt out of any geo-redundant replication options. In the event of resource downtime, I can easily switch to the use of resources local to the website. Plus, no data would really be lost; it's all easily rebuilt and recoverable.

As for the ‘Access tier’, I’m heading down the ‘Hot’ route as images will be accessed quite frequently (we have the CDN to consider here so I might tinker later on down the line).

I then pick a Subscription, give the Resource Group a name and select my region of choice before continuing.

I then get a new blade on the dashboard (which took a minute to create) and, on accessing, am presented with the following:

Storage Setup.

Managing the Storage Container

The first and perhaps most obvious choice for managing and actually getting some content up into the storage container is the Azure Storage Explorer, which I’ll be downloading and using.

After a painless install process, you should see the following, where you will be asked to connect to Azure Storage:

Connect to Azure Storage.

I simply used my Azure account sign-in details here. I did notice, however, that the Azure Portal exposes keys and connection strings under 'Access Keys' (within the storage account dashboard). I'm assuming these are for other kinds of access, including programmatic access, which I'll give a go as part of this post (as a wee bonus).

I used the right-click context menu to create a new container called ‘images’ and then used the upload button to push up a test image:

Azure Storage Explorer Upload Image.

Again, against the container I used the right-click context menu to select ‘Set Public Access Level…’, which I’ve set as follows to allow public access to the blob data but not the container:

Container Public Access Setup.

I now have a blob container with a single image in it and the appropriate access rights configured. The question is: can I access the image in its current state? We're looking pretty good from what I can see.

Successful Access.

Adding a custom domain

Next up, I plan on adding a custom domain to the storage account. To do this, I access the ‘Custom domain’ option as shown here:

Register Custom Domain.

I followed option 1 as listed here and created a CNAME record mapping images.frogandpencil.com to frogandpencilstorage.blob.core.windows.net (I'm happy to wait for this to propagate).

Register images.frogandpencil.com.

Once the CNAME record is created you simply have to place your target URL in the text box provided and hit save.

New CNAME property.

Lastly, let’s take it for a spin and see whether we can access the image in the storage container via the custom URL…and voila:

Custom Domain Active.

Back to the CDN bit!

We’ve come full circle! With a storage container in place I can now use that to feed a configured CDN. Consequently, I backtracked and followed the previously listed steps, being sure to set the 'Origin hostname' to point to the newly created storage container:

CDN Profile & Endpoint Configuration.

On clicking create it takes a short time for the CDN to be configured.

So, what do I do now?

Looking through the videos I made another discovery. This is where I want to adjust the previously created CNAME property (that I set up for the storage container) and hook this up to the CDN endpoint instead. The portal exposes custom domain mapping for a CDN much like for a storage container:

Change CNAME to map to CDN.

Portal CDN Custom Domain Mapping.

Again, I had to wait a short time for the CNAME property change to propagate but, after that, I was all set. I then spent a little time verifying that the CDN was up and running. There are quite a few optimisation options including the ability to set a custom ‘Origin path’ (such as ‘images’) but I’m leaving these be for the time being.

The Bonus Section – Programmatically Add Items to Azure Storage

As promised, this next section discusses (in a very bare bones fashion) what is required to write to an Azure storage container. I’ve created a stub Console Application to get running with and the process itself is simple (not considering errors, existence checks and threading, of course!).

You need to:

  1. Reference the WindowsAzure.Storage NuGet package.
  2. Add a reference to System.Configuration (if you want to put connection strings, folder paths and container names in configuration files and read them out).
  3. Then simply follow the code outlined below to get started.

In my test setup, the ‘SourceDirectory’ is looking at ‘C:\test-files\’ (contains just images) and the ‘TargetContainer’ is called ‘images’, as per my earlier configuration. The connection string can be obtained from the Azure Portal, under ‘Storage Account > Settings > Access Keys’.

Test Files ready for upload.

Storage Access Keys.

The App.config for the test application is structured like this, with the connection string being set to the correct value as per the information found in the Azure Portal:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6.1" />
    </startup>
  <connectionStrings>
    <add name="FrogAndPencilStorageConnection" connectionString="[OBTAINED_FROM_THE_AZURE_PORTAL]" />
  </connectionStrings>
  <appSettings>
    <add key="SourceDirectory" value="C:\test-files\"/>
    <add key="TargetContainer" value="images"/>
  </appSettings>
</configuration>

Then, finally, the actual test code which…

  • Attempts to connect to the storage account, creating a CloudStorageAccount object based on the connection string information supplied.
  • Then uses the CloudStorageAccount object to create a new CloudBlobContainer object (based on the container name stored in the configuration settings).
  • Finally, utilises this CloudBlobContainer, along with information about the files to process, to actually perform the upload.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.IO;
using System.Linq;

namespace WriteToAzureStorageTestApp
{
    /// <summary>
    /// Test application for writing to Azure Storage.
    /// Basic, test code only (throwaway code).
    /// </summary>
    internal class Program
    {
        #region Main (Entry Point) Method

        /// <summary>
        /// Main entry point method for this console application.
        /// </summary>
        /// <param name="args">Optional input arguments.</param>
        private static void Main(string[] args)
        {
            DemoWritingToAzureStorage();
        }

        #endregion Main (Entry Point) Method

        #region Private Static Methods

        /// <summary>
        /// Private static demo method illustrating how to upload to Azure Storage.
        /// </summary>
        private static void DemoWritingToAzureStorage()
        {
            // First use the FrogAndPencilStorageConnection connection string (for Azure Storage) to obtain a CloudStorageAccount, if possible
            CloudStorageAccount.TryParse(ConfigurationManager.ConnectionStrings["FrogAndPencilStorageConnection"].ConnectionString, out CloudStorageAccount storageAccount);
            if (storageAccount != null)
            {
                // We have a CloudStorageAccount...proceed to grab a CloudBlobContainer and attempt to upload any files found in the 'SourceDirectory' to Azure Storage
                Console.WriteLine("Obtaining CloudBlobContainer.");

                CloudBlobContainer container = GetCloudBlobContainer(storageAccount);

                Console.WriteLine("Container resolved.");

                Console.WriteLine("Obtaining files to process.");

                List<string> filesToProcess = Directory.GetFiles(ConfigurationManager.AppSettings["SourceDirectory"]).ToList();

                UploadFilesToStorage(container, filesToProcess);
            }

            Console.WriteLine("Processing complete. Press any key to exit...");
            Console.ReadLine();
        }

        /// <summary>
        /// Private static utility method that obtains a CloudBlobContainer
        /// using the container name stored in app settings.
        /// </summary>
        /// <param name="storageAccount">The cloud storage account to retrieve a container based on.</param>
        /// <returns>A fully instantiated CloudBlobContainer, based on the TargetContainer app setting.</returns>
        private static CloudBlobContainer GetCloudBlobContainer(CloudStorageAccount storageAccount)
        {
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            return blobClient.GetContainerReference(ConfigurationManager.AppSettings["TargetContainer"]);
        }

        /// <summary>
        /// Private static utility method that, using a CloudBlobContainer, uploads the
        /// files passed in to Azure Storage.
        /// </summary>
        /// <param name="container">A reference to the container to upload to.</param>
        /// <param name="filesToProcess">The files to upload to the container.</param>
        private static void UploadFilesToStorage(CloudBlobContainer container, List<string> filesToProcess)
        {
            // Process each file, uploading it to storage and deleting the local file reference as we go
            filesToProcess.ForEach(filePath =>
            {
                Console.WriteLine($"Processing and uploading file from path '{ filePath } (then deleting)'.");

                // Upload the file based on name (note - there is no existence check or guarantee of uniqueness - production code would need this)
                container.GetBlockBlobReference(Path.GetFileName(filePath)).UploadFromFile(filePath);

                RemoveFileFromLocalDirectory(filePath);
            });
        }

        /// <summary>
        /// Private static utility method for deleting a file.
        /// </summary>
        /// <param name="filePath">The file path (full) to delete based upon.</param>
        private static void RemoveFileFromLocalDirectory(string filePath)
        {
            // Only attempt the delete if the file exists
            if (File.Exists(filePath))
            {
                File.Delete(filePath);
            }
        }

        #endregion Private Static Methods
    }
}

Test Upload Application Running.

Test Files Uploaded.

There you have it; a rather around-the-houses and off-the-wall tour of setting up an Azure storage container and then linking this to an Azure CDN. Plenty of images still need to be brought over into the new storage container (and a few code changes to boot), but I feel like I am on a pilgrimage to a better place. I hope this proves useful nonetheless and, until the next time, happy coding!

Addendum

After a further play I realised that the C# example I’d knocked up was not setting the content type correctly on upload, as follows:

Incorrect Content Type.

To this end, I adjusted the UploadFilesToStorage method to set the content type on a CloudBlockBlob before the upload is triggered, as illustrated here:

/// <summary>
/// Private static utility method that, using a CloudBlobContainer, uploads the
/// files passed in to Azure Storage.
/// </summary>
/// <param name="container">A reference to the container to upload to.</param>
/// <param name="filesToProcess">The files to upload to the container.</param>
private static void UploadFilesToStorage(CloudBlobContainer container, List<string> filesToProcess)
{
	CloudBlockBlob blockBlob;

	// Process each file, uploading it to storage and deleting the local file reference as we go
	filesToProcess.ForEach(filePath =>
	{
		Console.WriteLine($"Processing and uploading file from path '{ filePath } (then deleting)'.");

		// Upload the file based on name (note - there is no existence check or guarantee of uniqueness - production code would need this)
		blockBlob = container.GetBlockBlobReference(Path.GetFileName(filePath));

		// Correctly configure the content type before uploading ("image/jpeg" is the standard MIME type
		// for JPEG images; a fuller implementation could derive the type from the file extension)
		blockBlob.Properties.ContentType = "image/jpeg";

		blockBlob.UploadFromFile(filePath);

		RemoveFileFromLocalDirectory(filePath);
	});
}

You should then see items with the correct content type in the container:

Correct Content Type.

To see the updated items via the custom domain (essentially, via the CDN) I also had to 'purge' the endpoint at this point.
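
As everything being uploaded here is a JPEG the hard-coded content type does the job, but a small helper that maps file extensions to MIME types would make the upload code a touch more general. A rough sketch (the helper name and extension list are mine, purely for illustration):

/// <summary>
/// Hypothetical helper that maps a file extension to a MIME content type so the
/// upload code isn't hard-coded to a single image format (illustrative list only).
/// </summary>
/// <param name="filePath">The full path of the file being uploaded.</param>
/// <returns>A best-guess content type for the file.</returns>
private static string GetContentTypeForFile(string filePath)
{
    switch (Path.GetExtension(filePath).ToLowerInvariant())
    {
        case ".jpg":
        case ".jpeg":
            return "image/jpeg";
        case ".png":
            return "image/png";
        case ".gif":
            return "image/gif";
        default:
            return "application/octet-stream";
    }
}

The UploadFilesToStorage method could then simply call blockBlob.Properties.ContentType = GetContentTypeForFile(filePath); before each upload.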

Again, happy coding.

Looking for a Lighthouse

Happy New Year all and I truly hope you all enjoyed a terrific festive period.

As a little weekend treat, I decided to pick myself up a copy of the ‘net’ magazine, mainly due to the included feature articles on which web development and design tools are ‘en-flique’ (that one is for Claire; a trip to the urban dictionary is in order) for 2018. There are also some amazing looking articles discussing things like colour palette choices, and tools that assist in creating them, which should prove handy as I will be looking to potentially change up the Frog & Pencil website a little as I craft a fully functional CMS this year; something that has been well overdue.

As a quick aside, I’ve decided to check out Google’s own automated page analysis tool, Lighthouse. This can run performance and accessibility analysis on public or password protected sites. Information for getting started can be found here.

I’m opting to run this within Chrome development tools, but you can run this from the command line or as a Node module if you prefer (which allows you to hook this into a continuous integration setup, which could be incredibly useful).

Up to this point, I have been using YSlow as a web page dissection tool, but I’m happy to bust out an alternative to keep things interesting.

So, what kind of feedback does the tool provide and how does it present it? I'll run this against the Frog & Pencil homepage and show you the results (no matter how bad they turn out, I'll be honest!). When using Chrome you just need to inspect the Audits tab, within Chrome development tools (accessed via F12 or Ctrl+Shift+I, on Windows), to get up and running as follows:

Lighthouse via the Audits Tab.

On clicking ‘Perform an audit…’ you'll be presented with the options below (I'll leave them all checked for this particular test run):

Audit Options for Lighthouse.

The report is, on first inspection, very detailed and, as you can see, I have a fair bit of work to do (although I’m happy with the accessibility rating at least). The report is downloadable using the highlighted button:

Lighthouse Report Header.

The tests performed also appear to be stricter than the YSlow V2 test, which is nice to see:

YSlow V2 Test.

There have been some surprising opportunities for improvement highlighted. I’ve long known that I should switch out the entire site to run over https and when the site is overhauled I intend to make better use of bundling for static files and will consider the use of a CDN. I have plenty of work to do with image compression also.

Here are a few things that really caught my attention:

1) How poor the site ran under simulated 3G speeds:

Simulated 3G Speeds.

2) The scale of the improvements still to be made by reducing render blocking scripts/stylesheets (a boo boo that I should really be covering) and image management:

Performance Improvement Opportunities.

3) The report highlighted that I was using libraries with known vulnerabilities and that I have left in code that was writing errors to the console (doh!):

Third Party Libraries.

This does bring into focus the core need of revisiting the website this year and giving it a thorough tune-up, as opportunities towards the end of last year were at a premium. All in all, if you’ve not used Lighthouse yet I would suggest giving it a look; especially as it takes seconds to run. I’ll be working my way through the highlighted areas in the report in the meantime!

All the best 😉

Session State Behaviour & Async Headaches

I was battling a little issue today surrounding an action method no longer being called asynchronously; the issue turned out to be related to some recent session-based code being added to our code base. In short, the minute session is detected in the underlying code, the ‘default’ behaviour for session state handling throws a monkey wrench in asynchronicity, regardless of the operation being performed on session data (i.e. writing to the session or just reading from the session). This, for me, turned into a performance headache.

There is an attribute that can be placed at controller level that states ‘I’m reading from session only, please continue to allow asynchronous operations’, which when used looks like this:

[SessionState(System.Web.SessionState.SessionStateBehavior.ReadOnly)]
public class TestController : Controller
{
    // ...
}

However, if you want to implement a control mechanism at the action level you need to travel down the custom controller factory/attribute route. This post turned out to be a lifesaver: Session State Behaviour Per Action in ASP.NET MVC

In short, this setup enables you to set session state behaviour handling at the action level by adorning the target method with a custom attribute; bonza!
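
The attribute itself is tiny. Here's a minimal sketch of what it could look like; the name and the Behaviour property simply mirror how the attribute is consumed in the factory override further down (the exact implementation in the linked article may differ slightly):

using System;
using System.Web.SessionState;

/// <summary>
/// Custom attribute used to declare the desired session state behaviour
/// for an individual action method (sketch only).
/// </summary>
[AttributeUsage(AttributeTargets.Method)]
public class ActionSessionStateAttribute : Attribute
{
    /// <summary>
    /// Gets the session state behaviour requested for the adorned action method.
    /// </summary>
    public SessionStateBehavior Behaviour { get; private set; }

    public ActionSessionStateAttribute(SessionStateBehavior behaviour)
    {
        Behaviour = behaviour;
    }
}

Usage then mirrors the controller-level attribute, just on the specific action method instead (the action name here is purely illustrative):

[ActionSessionState(SessionStateBehavior.ReadOnly)]
public ActionResult Dashboard()
{
    // Session is only read here, so asynchronous processing elsewhere is preserved
    return View();
}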

When inspecting this and the underlying base class implementations you will most likely discover that it's not immediately clear how to handle scenarios where overloaded methods exist (where methods match by name but differ by signature). This, for me, caused several crunches into the dreaded AmbiguousMatchException.

The implementation below shows my modified override of the DefaultControllerFactory GetControllerSessionBehavior method that is designed to a) avoid exceptions and b) only try to ‘discover’ the attribute and apply custom session state behaviour handling where a single method is ‘matched’ (based on the supplied RequestContext). If the custom attribute is not found, or more than one method is found matching by name (or another error occurs) base logic kicks in and takes precedence:

        /// <summary>
        /// Public overridden method that looks at the controller/action method being called and attempts
        /// to see if a custom ActionSessionStateAttribute (determining how session state behaviour should work) is in play.
        /// If it is, return the custom attributes SessionStateBehaviour value via the Behaviour property, in all other instances
        /// refer to the base class for obtaining a SessionStateBehavior value (via base.GetControllerSessionBehavior).
        /// </summary>
        /// <param name="requestContext">The request context object (to get information about the action called).</param>
        /// <param name="controllerType">The controller type linked to this request (used in a reflection operation to access a MethodInfo object).</param>
        /// <returns>A SessionStateBehavior enumeration value (either dictated by us based on ActionSessionStateAttribute usage or the base implementation).</returns>
        protected override SessionStateBehavior GetControllerSessionBehavior(RequestContext requestContext, Type controllerType)
        {
            try
            {
                // At the time of writing base.GetControllerSessionBehavior just returns SessionStateBehaviour.Default but to make this robust we should just call
                // base.GetControllerSessionBehavior if the controllerType is null so any changes to the base behaviour in future are adhered to
                if (controllerType != null)
                {
                    // Defensive code to check the state of RouteData before proceeding
                    if (requestContext.RouteData != null
                        && requestContext.RouteData.Values != null
                        && requestContext.RouteData.Values["action"] != null)
                    {
                        // Attempt to find the MethodInfo type behind the action method requested. There is a limitation here (just because of what we are provided with) that
                        // this piece of custom attribute handling (for ActionSessionStateAttribute) can only be accurately determined if we find just one matching method
                        string actionName = requestContext.RouteData.Values["action"].ToString();
                        List<MethodInfo> controllerMatchingActionMethods = controllerType.GetMethods(BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance)
                            .Where(method => method.Name.Equals(actionName, StringComparison.InvariantCultureIgnoreCase)).ToList();

                        // In order to avoid ambiguous match exceptions (plus we don't have enough information about method parameter types to pick the correct method in the case
                        // where more than one match exists) I needed to rig this in such a way that it can only work where one matching method, by name, exists (works for our current use cases) 
                        if (controllerMatchingActionMethods != null && controllerMatchingActionMethods.Count == 1)
                        {
                            MethodInfo matchingActionMethod = controllerMatchingActionMethods.FirstOrDefault();

                            if (matchingActionMethod != null)
                            {
                                // Does the action method requested use the custom ActionSessionStateAttribute. If yes, we can return the SessionStateBehaviour specified by the
                                // developer who used the attribute. Otherwise, just fail over to base logic
                                ActionSessionStateAttribute actionSessionStateAttr =
                                    matchingActionMethod.GetCustomAttributes(typeof(ActionSessionStateAttribute), false)
                                        .OfType<ActionSessionStateAttribute>()
                                            .FirstOrDefault();

                                if (actionSessionStateAttr != null)
                                {
                                    return actionSessionStateAttr.Behaviour;
                                }
                            }                       
                        }
                    }
                }
            }
            catch
            {
                // If any issues occur with our custom SessionStateBehavior inferring handling we're best to just let the base method calculate this instead (best efforts 
                // have been made to avoid exceptions where possible). Could consider logging here in future (but we're in an odd place in the MVC lifecycle, could cause
                // ourselves more issues by attempting this so will only do if absolutely required)
            }

            return base.GetControllerSessionBehavior(requestContext, controllerType);   
        }

This appeared to be a pretty robust solution in my case (and we gained back the asynchronous processing on the targeted methods = big plus), so, hopefully, this comes in handy for others at some point.
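
One last piece worth calling out: none of this kicks in until MVC is told to use the custom controller factory. Assuming the factory class is called SessionStateControllerFactory (a hypothetical name purely for illustration), registration is a one-liner in Global.asax.cs:

protected void Application_Start()
{
    // Register the custom controller factory so our GetControllerSessionBehavior override is actually used
    ControllerBuilder.Current.SetControllerFactory(new SessionStateControllerFactory());

    // ...existing route/bundle registration code...
}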

Cheers all!

Zoning out with Moment Timezone

I’ve recently been heavily embedded in implementing time zone sensitivity into a web application and I thought I’d share my first experiences on handling this from the perspective of the browser.

A great little library for handling this kind of tricky number can be found in the form of Moment Timezone, which sits proudly beside Moment.js, but as a full date parsing solution incorporating time zones.

The part of the library that really caught my attention was its time zone inferring abilities; namely the superbly named 'guess' function (loving the name!). The function, despite the name, is actually pretty sophisticated, so let's take a look at a working example and how the documentation defines the 'guts' of its time zone guessing powers.

Moment Timezone can be installed and used in a number of different ways, as described here, but I went with the good old classic method of adding a NuGet package via Visual Studio:

Adding Moment Timezone via NuGet.

Or, if you want to use the Package Manager Console then use this nugget instead:

Install-Package Moment.Timezone.js

Once the package is installed, alive and kicking we need to (as you would expect) reference the supporting Moment JavaScript library followed by the Moment Timezone based library, as follows:

<script src="~/Scripts/moment.min.js" type="text/javascript"></script>
<script src="~/Scripts/moment-timezone-with-data.min.js" type="text/javascript"></script>

You are then ready to utilise the guess function in a stupendous one-liner, just like this (wrapped in a jQuery document ready function, in this example):

<script type="text/javascript">
    // On page load grab a value denoting the time zone of the browser
    $(function () {
        // Log to the console the result of the call to moment.tz.guess()
        console.log(moment.tz.guess());
    });
</script>

The screenshots listed here show just a few examples of how the guess function works (by providing a tz database, or IANA database, value denoting which time zone Moment Timezone has inferred the client is in).

Moment Guess Usage London.

Moment Guess Usage Cairo.

Moment Guess Usage Havana.

For newer, supporting browsers, Moment Timezone can utilise the Internationalization API (Intl.DateTimeFormat().resolvedOptions().timeZone) to obtain time zone information from the browser. For other browsers, Moment Timezone will gather data for a handful of moments from around the current year, using Date#getTimezoneOffset and Date#toString, to intelligently infer as much about the user's environment as possible. From this information, a comparison is made against entries in the time zone database and the best match is returned. The most interesting part of this process is what happens in the case of a tied match; in this instance, a city's population becomes a deciding factor (the time zone linked to the city with the largest population is returned).

A full listing of tz database values can be found using the link below, showing the range of options available including historical time zones. It's worth noting that the tz database also forms the backbone of the very popular Joda-Time and Noda Time date/time and timezone handling libraries (Java and C#, respectively; the latter from the legendary Mr Skeet!).

List of tz database zones

For the project I was involved with, I ended up using Noda Time to actually perform conversions server side, utilising Moment Timezone to provide a ‘best stab’ at a user’s timezone on first access of the system. I’d like to give this the attention it deserves in a follow-up post.
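
As a very rough taster of that server-side half, here is a minimal sketch (assuming the NodaTime NuGet package) of taking a tz database id, as returned by moment.tz.guess() and posted back from the browser, and using it to convert the current UTC instant into the user's local time:

using System;
using NodaTime;

public static class TimeZoneConversionExample
{
    public static void Main()
    {
        // The tz database id captured client-side via moment.tz.guess() (hard-coded here for illustration)
        string timeZoneId = "Europe/London";

        // Resolve the zone from the tz (IANA) database and convert the current instant into that zone
        DateTimeZone zone = DateTimeZoneProviders.Tzdb[timeZoneId];
        ZonedDateTime userLocalTime = SystemClock.Instance.GetCurrentInstant().InZone(zone);

        Console.WriteLine(userLocalTime);
    }
}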

Have a great week everyone, until the next time!

SEO – Gouging Eyes out with Rusty Spoons

A very short piece on my current, off-the-wall, opinions on my little SEO journey so far and the shenanigans we've faced up to this point.

Ok, so it hasn’t beenĀ that bad! Plus it’s interesting I have opted to use ‘spoons’, and not just a singular spoon. It looks like I plan toĀ gougeĀ out both my eyes at once based on the post title!

I have certainly had moments of scratching my head as to what the hell is going on and SEO often brings up questions like 'is option a, b, c, z or a combination of some or all, the best way to approach it?'; which quickly leads to the questioning of every blooming decision you plan to make to the nth degree!

With the website 'fundamentally' finished (ok, I'm completely and utterly lying, I want to rewrite massive chunks in the usual 'unhappy with the code as soon as you've finished writing it', developer condition, or sickness if you will!), we have desperately been trying to hike our way up the rankings. This has, after a painstaking process, happened; to some degree at least.

I picked this book up for starters, as all good things start with a good read; don't be put off by the whopping great name:

SEO 2016: Learn search engine optimization with smart internet marketing strategies

After reading this and (thanks for this by the way) picking a few brains of SEO boffins and asking friends to pick additional boffin brains for me, I was ready to start making some of the larger, more critical changes.

So, in no particular order, here were the ideas and concepts that I feel have made the most impact in the short time we have been climbing the SEO mountain…

Page Titles and Page Descriptions

This, without a shadow of a doubt, had the largest impact. We, quite literally, just added unique metadata to each page and ensured our titles and descriptions were informative, without skipping out on the correct keyword combinations we were targeting.
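
For anyone wondering what that boils down to in markup terms, it's nothing more exotic than ensuring each page has its own take on the following (the title and description text here are invented, purely for illustration):

<title>Bespoke Illustration & Design | Frog and Pencil</title>
<meta name="description" content="Frog and Pencil offers hand-crafted illustration and design work, from logos to wedding stationery." />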

Up until this point the web applications that I have produced, out in the wild, have been linked to from other sites (local authorities, for example); these other sites took on the burden of SEO and in most cases were sought out for the services they provide (i.e. it was non-commercial in a sense and the traffic was not being fought over).

Now that I find myself competing for these precious nuggets of traffic, this takes the biggest bite of the biscuit for me. We went from completely impossible to find, for a small subset of keywords we were targeting, to being on page 3 of the search results (with a little variance across Google and Bing, but that's a rough approximation). Not perfect, but a very good start!

Keyword Optimisation

This took a couple of days to do in the end. This boiled down to, in essence, looking for keyword combinations with good levels of traffic but lower degrees of difficulty when it came to competition. We ended up spreading our eggs amongst a few baskets here and did target a few competitive keywords; only in an effort to make our content as natural and accurate as possible (clarity/quality of content is also a ranking factor, after all).

We used the Moz tools for this job; this was to check out keyword densities of comparable sites and then plan some alternatives. These references can be found here (you get a month’s free trial by the way):

Moz Keyword Explorer

Moz Open Site Explorer

This has had some impact, once our site was re-crawled and re-indexed. Just from a content quality perspective, it was well worth running everything through Grammarly. I have the Chrome Plugin but, in this instance, I used a Word Plugin to get the job done (you can get a free account, with additional paid-for options available):

Grammarly for Word

Grammarly for Chrome

Keywords in Content, Image Titles/Alt Attributes and Anchor Text

A simple one, but one that we knew we could improve on after a few hours of looking around the site, having built up a new keyword strategy using the Moz tools.

We now have a much better spread of keywords across our pages, including LSI keywords (Latent Semantic Indexing); essentially just related phrases.

As for image titles and title/alt attributes, we went on a massive, night-long rampage to improve these, based on our new-found keyword knowledge (using hyphens to separate words in resource names).

Lastly, a great deal of effort was placed upon using natural and informative text within anchor tags which, to be honest, we were a little weak on. This appears to have had a positive impact.
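
To make that a little more concrete, the general shape we moved towards looked something like this (file names, attribute text and link copy invented purely for illustration):

<!-- Descriptive, hyphen-separated resource name plus informative alt/title attributes -->
<img src="/images/frog-and-pencil-watercolour-pet-portrait.jpg"
     alt="Watercolour pet portrait illustration by Frog and Pencil"
     title="Watercolour pet portrait illustration" />

<!-- Natural, informative anchor text rather than a bare 'click here' -->
<a href="/services/wedding-stationery">Hand-illustrated wedding stationery</a>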

Sitemaps.xml

This one got me! I generated a sitemap that, on the face of it, looked prim and proper. However, after checking it in the Google Search Console I had been sucker punched by dodgy encoding (UTF-8 BOM, to be exact). There, at the start of my file, sat a Byte Order Mark (otherwise invisible in Notepad/Visual Studio, as these things often are). This equalled an instant red flag and is something worth not getting caught out on.

I found a reasonable-looking Sitemaps.xml validator which flagged the issue:

Robots.txt Checker

Recreating the file in Notepad and checking it in fixed the issue (along with a resubmit via Google Search Console).
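
Incidentally, if you generate the sitemap from code rather than by hand, one way to dodge the problem entirely is to write the file out with a UTF-8 encoding that suppresses the byte order mark. A small sketch (the class, path and content parameter are placeholders of my own):

using System.IO;
using System.Text;

public static class SitemapWriterExample
{
    public static void WriteSitemap(string sitemapXml)
    {
        // Passing 'false' to UTF8Encoding suppresses the byte order mark that upset Google Search Console
        var utf8WithoutBom = new UTF8Encoding(false);

        File.WriteAllText(@"C:\site\sitemap.xml", sitemapXml, utf8WithoutBom);
    }
}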

One last thing (luckily I didn't do this!): make sure to double-check that your robots.txt file isn't disallowing access to your entire site when it comes to legitimate bots. Thankfully, I was dead careful in this regard!

So, what has raised eyebrows (in the 'why does this matter' stakes)? One thing that has really got my goat is the Flesch-Kincaid Reading Ease score. I have spent a fair bit of time wracking my brains as to whether content really needs to be targeted at a thirteen to fifteen-year-old student (when it comes to reading ability, that is). In honesty, I'm not sure on this…

The content outlined in SEO 2016 references Searchmetrics findings showing that sites appearing in the top ten results have, on average, a Flesch reading score of 76.00; which puts a stake in the ground at the aforementioned thirteen to fifteen-year-old reading ability level.

This doesn’t sound all that bad to me; but, after some initial testing I was finding that our content was ranking below this bar on allĀ occasions. Not by too much, but enough to concern me. Upon testing some other comparable, higher ranking sites, I wasn’t ableĀ to determine that any other site had much betterĀ scores, to be honest.

We reread the content, multiple times, before coming to the conclusion that it was easy enough to read. I personally felt that certain sentences and words were being penalised too harshly for being 'too complex' or having 'too many syllables' when I'm sure the content could be easily read by children (far below the thirteen to fifteen-year-old mark).

It felt as though we had to dumb down our content to a point where it felt very unnatural; as if we couldn’t compose a sentence of more than five words. Possibly borderline insulting to most people; so we’ll be sticking to our guns for the time being.

What’s Next?

Claire and I have a few other strategies going on, such as getting some strong backlinks in play; this is the next port of call. We also need to focus more on the all-important social media. I have additional tasks to perform, such as adjusting how JavaScript is loaded (looking at async/defer options for all of my resources; without busting pages!), looking at resource bundling and caching including resource optimisation (i.e. a little bit more image compression work). I’ll be a busy bee for a fair while yet!

We’ll see how the first batch of alterations plays out first; it is incredibly difficult to shake the feeling that a good chunk of this is highly experimental!

If any SEO experts come across this and can offer any further advice or would like to comment, please do! I'd love to hear from you.

Until the next time, toodles!

The Bear's Whereabouts

Where have I been, you may ask! Apologies for the lack of content; we've (including my wife in the equation here) hit a sizeable brick wall of work.

I think this has been mentioned previously, but I've recently started a new position and this has involved me spending a good dollop of time getting acclimatised; in particular finding my way around Amazon Web Services (which I'm having a super time with by the way!).

Coupled with this, I'm incredibly busy ploughing through work items for my wife's business website, frogandpencil, using the magic of a (beastly) Trello board.

What does this mean for bearandhammer posts? Well, I'm kind of hoping that, as I work my way through website tasks for frogandpencil and keep chewing through AWS-related subjects at work, a handful of small posts will come out in the wash before normal service is resumed (i.e. the larger post on Web API).

So, with any luck, I'll have something new plastered up on here soon enough. In the meantime, happy coding!

All the best from the Bear!

Christmas Wind-down

Hi everyone,

With the holidays almost upon us I just wanted to wish everyone a very Merry Christmas and all the best for the New Year!

If possible, I will do my best to crank out a further post between now and the New Year. Here’s a teaser for the first part of the year (likely approximately the first quarter), taken from my original ‘pot of things’ to cover with a few additions that are taking precedence, based on current interests:

  • Continued prodding around in F# (likely starting with charting as promised).
  • The Razor View Engine (basic overview).
  • An Arduino starter project.
  • Deeper dives into JavaScript (in the build up to some exams…more to follow here).
  • As an extension to this, I want to do some ES6 coverage.
  • A return to Unity (I’ll probably do a big hit around the start of Spring).
  • Plus other things from my original cooking pot, including Udemy courses and sneak peeks at other libraries/utilities as I can, etc.

I may well revise my posting frequency a little also. After planning on doing two posts a week I’ve become acutely aware that I’m not really hitting that mark, so I’ll either go down to one post a week or learn to produce posts faster (and better!). We’ll have to see how all of this pans out.

Anyway, have a wonderful holiday!