Session State Behaviour & Async Headaches

I was battling a little issue today surrounding an action method no longer being called asynchronously; the culprit turned out to be some recently added session-based code in our code base. In short, the minute session state is detected in the underlying code, the ‘default’ session state handling throws a monkey wrench into asynchronicity, regardless of the operation being performed on the session data (i.e. writing to the session or just reading from it). This, for me, turned into a performance headache.

There is an attribute that can be placed at the controller level that states ‘I’m reading from session only, please continue to allow asynchronous operations’, which, when used, looks like this:

[SessionState(System.Web.SessionState.SessionStateBehavior.ReadOnly)]
public class TestController : Controller
{
          ……
}

However, if you want to implement a control mechanism at the action level you need to travel down the custom controller factory/attribute route. This post turned out to be a lifesaver: Session State Behaviour Per Action in ASP.NET MVC

In short, this setup enables you to set session state behaviour handling at the action level by adorning the target method with a custom attribute; bonza!
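
For context, here is a minimal sketch of what the custom ActionSessionStateAttribute can look like (the exact shape here is my assumption; all it really needs to do is expose a Behaviour property for the factory code further down to read):

// Requires System.Web.SessionState for the SessionStateBehavior enumeration
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class ActionSessionStateAttribute : Attribute
{
    // The session state behaviour the adorned action method should run under
    public SessionStateBehavior Behaviour { get; private set; }

    public ActionSessionStateAttribute(SessionStateBehavior behaviour)
    {
        Behaviour = behaviour;
    }
}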

When inspecting this, and the underlying base class implementation, you will most likely discover that it’s not immediately clear how to handle scenarios where overloaded methods exist (methods that match by name but differ by signature). This, for me, caused several run-ins with the dreaded AmbiguousMatchException.

The implementation below shows my modified override of the DefaultControllerFactory GetControllerSessionBehavior method, designed to a) avoid exceptions and b) only attempt to ‘discover’ the attribute and apply custom session state behaviour where a single method is ‘matched’ (based on the supplied RequestContext). If the custom attribute is not found, more than one method matches by name, or another error occurs, the base logic kicks in and takes precedence:

        /// <summary>
        /// Public overridden method that looks at the controller/action method being called and attempts
        /// to see if a custom ActionSessionStateAttribute (determining how session state behaviour should work) is in play.
        /// If it is, return the custom attribute's SessionStateBehavior value via its Behaviour property; in all other instances
        /// refer to the base class for obtaining a SessionStateBehavior value (via base.GetControllerSessionBehavior).
        /// </summary>
        /// <param name="requestContext">The request context object (to get information about the action called).</param>
        /// <param name="controllerType">The controller type linked to this request (used in a reflection operation to access a MethodInfo object).</param>
        /// <returns>A SessionStateBehavior enumeration value (either dictated by us based on ActionSessionStateAttribute usage or the base implementation).</returns>
        protected override SessionStateBehavior GetControllerSessionBehavior(RequestContext requestContext, Type controllerType)
        {
            try
            {
                // At the time of writing base.GetControllerSessionBehavior just returns SessionStateBehavior.Default, but to keep this robust we should still call
                // base.GetControllerSessionBehavior if the controllerType is null so that any changes to the base behaviour in future are adhered to
                if (controllerType != null)
                {
                    // Defensive code to check the state of RouteData before proceeding
                    if (requestContext.RouteData != null
                        && requestContext.RouteData.Values != null
                        && requestContext.RouteData.Values["action"] != null)
                    {
                        // Attempt to find the MethodInfo type behind the action method requested. There is a limitation here (just because of what we are provided with) that
                        // this piece of custom attribute handling (for ActionSessionStateAttribute) can only be accurately determined if we find just one matching method
                        string actionName = requestContext.RouteData.Values["action"].ToString();
                        List<MethodInfo> controllerMatchingActionMethods = controllerType.GetMethods(BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance)
                            .Where(method => method.Name.Equals(actionName, StringComparison.InvariantCultureIgnoreCase)).ToList();

                        // In order to avoid ambiguous match exceptions (plus we don't have enough information about method parameter types to pick the correct method in the case
                        // where more than one match exists) I needed to rig this in such a way that it can only work where one matching method, by name, exists (works for our current use cases) 
                        if (controllerMatchingActionMethods != null && controllerMatchingActionMethods.Count == 1)
                        {
                            MethodInfo matchingActionMethod = controllerMatchingActionMethods.FirstOrDefault();

                            if (matchingActionMethod != null)
                            {
                                // Does the action method requested use the custom ActionSessionStateAttribute. If yes, we can return the SessionStateBehaviour specified by the
                                // developer who used the attribute. Otherwise, just fail over to base logic
                                ActionSessionStateAttribute actionSessionStateAttr =
                                    matchingActionMethod.GetCustomAttributes(typeof(ActionSessionStateAttribute), false)
                                        .OfType<ActionSessionStateAttribute>()
                                            .FirstOrDefault();

                                if (actionSessionStateAttr != null)
                                {
                                    return actionSessionStateAttr.Behaviour;
                                }
                            }                       
                        }
                    }
                }
            }
            catch
            {
                // If any issues occur with our custom SessionStateBehavior inferring handling we're best to just let the base method calculate this instead (best efforts 
                // have been made to avoid exceptions where possible). Could consider logging here in future (but we're in an odd place in the MVC lifecycle, could cause
                // ourselves more issues by attempting this so will only do if absolutely required)
            }

            return base.GetControllerSessionBehavior(requestContext, controllerType);   
        }
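
For completeness, here is a rough sketch of how this hangs together end to end; the factory class name used here (SessionStateControllerFactory) is purely illustrative, so substitute the name of your own custom controller factory:

// In Global.asax.cs, swap in the custom controller factory so the override above is actually consulted
protected void Application_Start()
{
    ControllerBuilder.Current.SetControllerFactory(new SessionStateControllerFactory());

    // ...remaining start-up code (routes, bundles, etc.)
}

// Then adorn the relevant action method; read-only session access keeps asynchronous processing intact
[ActionSessionState(System.Web.SessionState.SessionStateBehavior.ReadOnly)]
public async Task<ActionResult> TestAsyncAction()
{
    // ...action implementation that reads, but never writes, session data
}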

This appeared to be a pretty robust solution in my case (and we gained back the asynchronous processing on the targeted methods = big plus), so, hopefully, this comes in handy for others at some point.

Cheers all!

Zoning out with Moment Timezone

I’ve recently been heavily embedded in implementing time zone sensitivity into a web application and I thought I’d share my first experiences on handling this from the perspective of the browser.

A great little library for handling this kind of tricky number can be found in the form of Moment Timezone, which sits proudly beside Moment.js, but as a full date parsing solution incorporating time zones.

The part of the library that really caught my attention was the time zone inferring abilities of the library; the superbly named ‘guess’ function (loving the name!). The function, despite the name, is actually pretty sophisticated, so let’s take a look at a working example and how the documentation defines the ‘guts’ of its time zone guessing powers.

Moment Timezone can be installed and used in a number of different ways, as described here, but I went with the good old classic method of adding a NuGet package via Visual Studio:

Adding Moment Timezone via NuGet.

Or, if you want to use the Package Manager Console then use this nugget instead:

Install-Package Moment.Timezone.js

Once the package is installed, alive and kicking, we need to (as you would expect) reference the supporting Moment JavaScript library, followed by the Moment Timezone library, as follows:

<script src="~/Scripts/moment.min.js" type="text/javascript"></script>
<script src="~/Scripts/moment-timezone-with-data.min.js" type="text/javascript"></script>

You are then ready to utilise the guess function in a stupendous one-liner, just like this (wrapped in a jQuery document ready function, in this example):

<script type="text/javascript">
    // On page load grab a value denoting the time zone of the browser
    $(function () {
        // Log to the console the result of the call to moment.tz.guess()
        console.log(moment.tz.guess());
    });
</script>

The screenshots below show just a few examples of how the guess function works (by providing a tz database, or IANA database, value denoting which time zone Moment Timezone has inferred the client is in).

Moment Guess Usage London.

Moment Guess Usage Cairo.

Moment Guess Usage Havana.

For newer, supporting browsers, Moment Timezone can utilise the Internationalization API (Intl.DateTimeFormat().resolvedOptions().timeZone) to obtain time zone information from the browser. For other browsers, Moment Timezone will gather data for a handful of moments from around the current year, using Date#getTimezoneOffset and Date#toString, to intelligently infer as much about the user’s environment as possible. From this information, a comparison is made against entries in the time zone database and the best match is returned. The most interesting part of this process is what happens in the case of a tied match; in this instance, a city’s population becomes the deciding factor (the time zone linked to the city with the largest population is returned).

A full listing of tz database values can be found using the link below, showing the range of options available including historical time zones. It’s worth noting that the tz database also forms the backbone of the very popular Joda-Time and Noda Time date/time and time zone handling libraries (Java and C#, respectively; the latter from the legendary Mr Skeet!).

List of tz database zones

For the project I was involved with, I ended up using Noda Time to actually perform conversions server side, utilising Moment Timezone to provide a ‘best stab’ at a user’s timezone on first access of the system. I’d like to give this the attention it deserves in a follow-up post.
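
As a small taster ahead of that follow-up post, here is a minimal sketch (the method name and shape are my own, purely for illustration) of the server-side conversion with Noda Time, taking the IANA id captured from moment.tz.guess():

using System;
using NodaTime;

public static ZonedDateTime ConvertUtcToUserZone(DateTime utcDateTime, string ianaZoneId)
{
    // Look up the tz database zone using the id supplied by the browser (e.g. "Europe/London")
    DateTimeZone zone = DateTimeZoneProviders.Tzdb[ianaZoneId];

    // Treat the incoming value as UTC and turn it into a Noda Time Instant
    Instant instant = Instant.FromDateTimeUtc(DateTime.SpecifyKind(utcDateTime, DateTimeKind.Utc));

    // Project that instant into the user's time zone
    return instant.InZone(zone);
}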

Have a great week everyone, until the next time!

SEO – Gouging Eyes out with Rusty Spoons

A very short piece on my current, off-the-wall, opinions on my little SEO journey so far and the shenanigans we’ve faced up to this point.

Ok, so it hasn’t been that bad! Plus it’s interesting I have opted to use ‘spoons’, and not just a singular spoon. It looks like I plan to gouge out both my eyes at once based on the post title!

I have certainly had moments of scratching my head as to what the hell is going on; SEO often brings up questions like ‘is option a, b, c, z, or a combination of some or all of them, the best way to approach it?’, which quickly leads to questioning every blooming decision you plan to make to the nth degree!

With the website ‘fundamentally’ finished (ok, I’m completely and utterly lying; I want to rewrite massive chunks in the usual ‘unhappy with the code as soon as you’ve finished writing it’ developer condition, or sickness if you will!), we have been desperately trying to hike our way up the rankings. This has, after a painstaking process, happened; to some degree at least.

I picked this book up for starters, as all good things start with a good read; don’t be put off by the whopping great name:

SEO 2016: Learn search engine optimization with smart internet marketing strategies

After reading this and (thanks for this, by the way) picking the brains of a few SEO boffins, as well as asking friends to pick additional boffin brains for me, I was ready to start making some of the larger, more critical changes.

So, in no particular order, here were the ideas and concepts that I feel have made the most impact in the short time we have been climbing the SEO mountain…

Page Titles and Page Descriptions

This, without a shadow of a doubt, had the largest impact. We quite literally just added unique metadata to each page and ensured our titles and descriptions were informative, without skimping on the keyword combinations we were targeting.
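
To make that concrete, this simply means each page gets its own variant of something along these lines (placeholder wording, of course; the point is that the title and description are unique, informative and built around the targeted keywords):

<title>Primary Keyword Phrase | Brand Name</title>
<meta name="description" content="A short, informative summary of this specific page, worked naturally around the keyword combinations being targeted." />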

Up until this point, the web applications I have produced out in the wild have been linked to from other sites (local authorities, for example); those sites took on the burden of SEO and, in most cases, were sought out for the services they provide (i.e. it was non-commercial in a sense and the traffic was not being fought over).

Now that I find myself competing for these precious nuggets of traffic, this takes the biggest bite of the biscuit for me. We went from being completely impossible to find, for the small subset of keywords we were targeting, to sitting on page 3 of the search results (with a little variance across Google and Bing; just a rough approximation). Not perfect, but a very good start!

Keyword Optimisation

This took a couple of days to do in the end. It boiled down, in essence, to looking for keyword combinations with good levels of traffic but a lower degree of difficulty when it came to competition. We ended up spreading our eggs amongst a few baskets here and did target a few competitive keywords, but only in an effort to make our content as natural and accurate as possible (clarity/quality of content is also a ranking factor, after all).

We used the Moz tools for this job; this was to check out keyword densities of comparable sites and then plan some alternatives. These references can be found here (you get a month’s free trial by the way):

Moz Keyword Explorer

Moz Open Site Explorer

This has had some impact once our site was re-crawled and re-indexed. Just from a content quality perspective, it was also well worth running everything through Grammarly. I have the Chrome plugin but, in this instance, I used the Word plugin to get the job done (you can get a free account, with additional paid-for options available):

Grammarly for Word

Grammarly for Chrome

Keywords in Content, Image Titles/Alt Attributes and Anchor Text

A simple one, but one that we knew we could improve on after a few hours of looking around the site once we had built up the new keyword strategy using the Moz tools.

We now have a much better spread of keywords across our pages, including LSI keywords (Latent Semantic Indexing); essentially just related phrases.

As for image titles and title/alt attributes, we went on a massive, night-long rampage to improve these based on our new-found keyword knowledge (using hyphens to separate words in resource names).

Lastly, a great deal of effort was placed upon using natural and informative text within anchor tags which, to be honest, we were a little weak on. This appears to have had a positive impact.
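
Again, purely illustrative markup (the file names and wording here are placeholders, not our actual content), but this is the flavour of change made across images and links:

<!-- Hyphen-separated resource name, plus descriptive title/alt attributes -->
<img src="~/Content/Images/hand-drawn-wedding-invitation.jpg" title="Hand-drawn wedding invitation" alt="Hand-drawn wedding invitation" />

<!-- Natural, informative anchor text rather than a bare 'click here' -->
<a href="/wedding-invitations">Browse our hand-drawn wedding invitations</a>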

Sitemaps.xml

This one got me! I generated a sitemap that, on the face of it, looked prim and proper. However, after checking it in the Google Search Console, I found I had been sucker-punched by dodgy encoding (a UTF-8 BOM, to be exact). There, at the start of my file, sat a Byte Order Mark (otherwise invisible in Notepad/Visual Studio, as these things often are). This was an instant red flag and is something worth not getting caught out by.

I found a reasonable-looking sitemap.xml validator which flagged the issue:

Robots.txt Checker

Recreating the file in Notepad (saved without the BOM this time), checking it in and resubmitting it via the Google Search Console fixed the issue.
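
If your sitemap is generated in code, you can also guard against this programmatically; a minimal .NET sketch (assuming you already have the sitemap XML available as a string) is to write the file using an encoder that explicitly omits the BOM:

using System.IO;
using System.Text;

public static void WriteSitemapWithoutBom(string path, string sitemapXml)
{
    // UTF8Encoding with encoderShouldEmitUTF8Identifier set to false writes plain UTF-8 with no Byte Order Mark
    var utf8WithoutBom = new UTF8Encoding(encoderShouldEmitUTF8Identifier: false);
    File.WriteAllText(path, sitemapXml, utf8WithoutBom);
}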

One last thing (luckily I didn’t do this!): make sure to double-check that your robots.txt file isn’t disallowing access to your entire site for legitimate bots. Thankfully, I was dead careful in this regard!

So, what has raised eyebrows (in the ‘why does this matter’ stakes)? One thing that has really got my goat is the Flesch Reading Ease score. I have spent a fair bit of time racking my brains as to whether content really needs to be targeted at a thirteen- to fifteen-year-old student (when it comes to reading ability, that is). In honesty, I’m not sure about this…

The content outlined in SEO 2016 references Searchmetrics findings showing that sites appearing in the top ten results have, on average, a Flesch reading ease score of 76.00, which puts a stake in the ground at the aforementioned thirteen- to fifteen-year-old reading level.
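
For anyone wondering what actually feeds that score, the standard Flesch Reading Ease formula only has two inputs, average sentence length and average syllables per word:

206.835 − (1.015 × average words per sentence) − (84.6 × average syllables per word)

That goes some way to explaining why long sentences and multi-syllable words are the things being penalised.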

This doesn’t sound all that bad to me but, after some initial testing, I found that our content ranked below this bar on every occasion. Not by much, but enough to concern me. Upon testing some other comparable, higher-ranking sites, I wasn’t able to determine that any of them had much better scores, to be honest.

We reread the content multiple times before coming to the conclusion that it was easy enough to read. I personally felt that certain sentences and words were being penalised too harshly for being ‘too complex’ or having ‘too many syllables’, when I’m sure the content could be easily read by children far below the thirteen- to fifteen-year-old mark.

It felt as though we had to dumb down our content to a point where it felt very unnatural; as if we couldn’t compose a sentence of more than five words. Possibly borderline insulting to most people; so we’ll be sticking to our guns for the time being.

What’s Next?

Claire and I have a few other strategies in the works, such as getting some strong backlinks in play; this is the next port of call. We also need to focus more on the all-important social media. I have additional tasks to perform too, such as adjusting how JavaScript is loaded (looking at async/defer options for all of my resources, without busting pages!), plus resource bundling and caching, including resource optimisation (i.e. a little more image compression work). I’ll be a busy bee for a fair while yet!

We’ll see how the first batch of alterations plays out first; it is incredibly difficult to shake the feeling that a good chunk of this is highly experimental!

If any SEO experts come across and can offer any further advice or would like to comment please do! I’d love to hear from you.

Until the next time, toodles!

The Bear’s Whereabouts

Where have I been, you may ask! Apologies for the lack of content; we’ve (including my wife in the equation here) hit a sizeable brick wall of work.

I think I’ve mentioned this previously, but I recently started a new position and this has involved spending a good dollop of time getting acclimatised; in particular, finding my way around Amazon Web Services (which I’m having a super time with, by the way!).

Coupled with this, I’m incredibly busy ploughing through work items for my wife’s business website, frogandpencil, using the magic of a (beastly) Trello board.

What does this mean for bearandhammer posts? Well, I’m hoping that, as I work my way through the website tasks for frogandpencil and keep chewing through AWS-related subjects at work, a handful of smaller posts will come out in the wash before normal service is resumed (i.e. the larger post on Web API).

So, with any luck, I’ll have something new plastered up on here soon enough. In the meantime, happy coding!

All the best from the Bear!

Christmas Wind-down

Hi everyone,

With the holidays almost upon us I just wanted to wish everyone a very Merry Christmas and all the best for the New Year!

If possible, I will do my best to crank out a further post between now and the New Year. Here’s a teaser for the first part of the new year (likely the first quarter or so), taken from my original ‘pot of things’ to cover, with a few additions that are taking precedence based on current interests:

  • Continued prodding around in F# (likely starting with charting as promised).
  • The Razor View Engine (basic overview).
  • An Arduino starter project.
  • Deeper dives into JavaScript (in the build-up to some exams…more to follow here).
  • As an extension to this, I want to do some ES6 coverage.
  • A return to Unity (I’ll probably do a big hit around the start of Spring).
  • Plus other things from my original cooking pot, including Udemy courses and sneak peeks at other libraries/utilities as I can, etc.

I may well revise my posting frequency a little, too. After planning to do two posts a week, I’ve become acutely aware that I’m not really hitting that mark, so I’ll either go down to one post a week or learn to produce posts faster (and better!). We’ll have to see how all of this pans out.

Anyway, have a wonderful holiday!

Modernizr – Detecting Screen Size Changes

A brief titbit today, but one I felt was worth sharing and has come in handy for work/personal projects recently for me.

I’ve had a couple of requirements to gracefully show/hide and adjust web page layouts based on screen size (and screen re-sizing), and I came across the following solution, which works pretty damn well.

First things first, you’ll need Modernizr, which is, in essence, a feature detection JavaScript library. In this case, however, I’m using its media query support (Modernizr.mq, as shown below) rather than feature detection as such, to react to browser re-sizing. There are a few options for obtaining this for your projects but, as far as Visual Studio is concerned, I used the Package Manager Console with the following command:

Install Modernizr via the Package Manager Console.
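
For reference, the package id is simply Modernizr (matching the packages.config entry shown further down), so the equivalent console command should just be:

Install-Package Modernizr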

Once installed, we end up with the JavaScript library included under the default Scripts folder:

Modernizr in Scripts Folder.

On installing the package, as I didn’t specify a particular version, I ended up with the following declaration in my packages.config file (part of my ASP.NET MVC project), with 2.8.3 denoting the most recent version at the time:

<package id="Modernizr" version="2.8.3" targetFramework="net452" />

Next up, simply chuck the usual script element into your page to reference the library and you’re all set:

<script src="~/Scripts/modernizr-2.8.3.js" type="text/javascript"></script>

The following snippet shows the basic scaffolding code to start capturing screen size changes (I’ve declared this code in my jQuery document ready function). The doneResizing function is tied to the window resize event and you can easily use Modernizr to read and react to the screen size as required:

//Function to react to screen re-sizing
function doneResizing() {
	if (Modernizr.mq("screen and (min-width:868px)")) {
		//Implement jQuery/JS to handle a larger screen (i.e. Laptops/Desktops). In my case adding/removing a class to show/hide elements
	}
	else if (Modernizr.mq("screen and (max-width:867px)")) {
		//Implement jQuery/JS to handle a smaller screen (i.e. Tablets/Mobiles). In my case adding/removing a class to show/hide elements
	}
}

//Call doneResizing on re-size of the window
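//The 0ms delay simply defers the call until the current resize handling has finished; increase it to debounce rapid re-size events if needed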
var id;
$(window).resize(function () {
	clearTimeout(id);
	id = setTimeout(doneResizing, 0);
});

//Call doneResizing on instantiation
doneResizing();

Currently, I’m using this to show/hide element containers within a web page based on screen size (and to apply/remove a few classes on the fly to ensure everything looks as it should on desktop, tablet and mobile displays). It appears to function very well and is worth investigating for your own projects. See here for the original Stack Overflow post detailing the ideas surrounding this concept (including other CSS-related solutions).

Bye for now!

Frantic Keyboard Mashing

Howdy all!

I’ve been away mashing keys and helping my wonderful wife get started with her super-awesome Wedding Stationery business. It’s early days, and there is still plenty to do, but here is some shameless link touting:

Website (landing page): frogandpencil.com
Blog: frogandpencil.net

I’ll do my best to keep up the blogging in the meantime (on a random array of interests); I’m expecting to come across a few worthy topics along the way as I work on the final website.

Until the next time…