Zoning out with Moment Timezone

I’ve recently been heavily embedded in implementing time zone sensitivity into a web application and I thought I’d share my first experiences on handling this from the perspective of the browser.

A great little library for handling this kind of tricky problem can be found in the form of Moment Timezone, which sits proudly beside Moment.js as a full date parsing solution incorporating time zones.

The part of the library that really caught my attention was its time zone inferring abilities; the superbly named ‘guess’ function (loving the name!). The function, despite the name, is actually pretty sophisticated, so let’s take a look at a working example and how the documentation defines the ‘guts’ of its time zone guessing powers.

Moment Timezone can be installed and used in a number of different ways, as described here, but I went with the good old classic method of adding a NuGet package via Visual Studio:

Adding Moment Timezone via NuGet.

Or, if you want to use the Package Manager Console then use this nugget instead:

Install-Package Moment.Timezone.js

Once the package is installed, alive and kicking, we need to (as you would expect) reference the supporting Moment JavaScript library, followed by the Moment Timezone based library, as follows:

<script src="~/Scripts/moment.min.js" type="text/javascript"></script>
<script src="~/Scripts/moment-timezone-with-data.min.js" type="text/javascript"></script>

You are then ready to utilise the guess function in a stupendous one-liner, just like this (wrapped in a jQuery document ready function, in this example):

<script type="text/javascript">
    // On page load grab a value denoting the time zone of the browser
    $(function () {
        // Log to the console the result of the call to moment.tz.guess()
        console.log(moment.tz.guess());
    });
</script>

The screenshots listed here show just a few examples of how the guess function works (by providing a tz database, or IANA database, value denoting which time zone Moment Timezone has inferred the client is in).

Moment Guess Usage London.

Moment Guess Usage Cairo.

Moment Guess Usage Havana.

For newer, supporting browsers, Moment Timezone can utilise the Internationalization API (Intl.DateTimeFormat().resolvedOptions().timeZone) to obtain time zone information from the browser. For other browsers, Moment Timezone will gather data for a handful of moments from around the current year, using Date#getTimezoneOffset and Date#toString, to intelligently infer as much about the user’s environment as possible. From this information, a comparison is made against entries in the time zone database and the best match is returned. The most interesting part of this process is what happens in the case of a tied match; in this instance, a city’s population becomes a deciding factor (the time zone linked to the city with the largest population is returned).
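
As a rough illustration of the first branch, the Intl check can be approximated in plain JavaScript. This is just a minimal sketch of the idea, not Moment Timezone’s actual code; the real library does far more work in the fallback path, sampling offsets across the year and comparing them against the tz database:

```javascript
// Approximate the 'newer browser' branch of moment.tz.guess():
// ask the Internationalization API for the environment's IANA zone name.
function detectTimeZone() {
    if (typeof Intl !== "undefined" && typeof Intl.DateTimeFormat === "function") {
        var zone = Intl.DateTimeFormat().resolvedOptions().timeZone;
        if (zone) {
            return zone; // e.g. "Europe/London"
        }
    }
    // Fallback: only the raw offset in minutes is available via
    // Date#getTimezoneOffset (Moment Timezone turns this kind of
    // sampled data into a best-match tz database entry).
    return "UTC offset (minutes): " + new Date().getTimezoneOffset();
}

console.log(detectTimeZone());
```

Running this in a modern browser (or Node) prints an IANA zone identifier such as “Europe/London”, mirroring what the screenshots above show for the real guess function.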

A full listing of tz database values can be found using the link below, showing the range of options available, including historical time zones. It’s worth noting that the tz database also forms the backbone of the very popular Joda-Time and Noda Time date/time and time zone handling libraries (Java and C#, respectively; the latter from the legendary Mr Skeet!).

List of tz database zones

For the project I was involved with, I ended up using Noda Time to actually perform conversions server side, utilising Moment Timezone to provide a ‘best stab’ at a user’s time zone on first access of the system. I’d like to give this the attention it deserves in a follow-up post.

Have a great week everyone, until the next time!

Groovy JavaScript Regex Name Capitalisation Handling

Greetings!

A tidbit found by a friend of mine online forms the basis for a small piece of work I’ve done this week surrounding name capitalisation. This was pulled from a Stack Overflow post, so credit where credit is due for starters:

js-regex-for-human-names

This is fairly robust, covering Mc, Mac and O’ prefixes as well as double-barrelled, hyphenated names. It does capitalise the first character directly after an apostrophe (regardless of placement), which may or may not be a problem. As for usage, I went with the following setup (with the relevant JavaScript and jQuery hooks being properly abstracted in the production code, of course).

Firstly, the example HTML structure:

<div id="container">
	<!--An example form illustrating the fixNameCasing function being called on a test forename, middle names and surname field (when focus is lost)-->
	<form action="/" method="post">
		<div>
			<label id="forename-txt-label">Forename:</label>
		</div>
		<div>
			<input id="forename-text" name="forename-text" class="control-top-margin fix-name-casing" type="text" />
		</div>
		<div>
			<label id="middlename-text-label">Middle names:</label>
		</div>
		<div>
			<input id="middlename-text" name="middlename-text" class="control-top-margin fix-name-casing" type="text" />
		</div>
		<div>
			<label id="surname-text-label">Surname:</label>
		</div>
		<div>
			<input id="surname-text" name="surname-text" class="control-top-margin fix-name-casing" type="text" />
		</div>
		<div>
			<button id="submit-button" type="submit" class="control-top-margin">Submit</button>
		</div>
	</form>
</div>

Then, our jQuery/JavaScript juicy bits:

<!--Bring jQuery into scope so we can hook up a function to relevant elements on 'blur' event (lost focus)-->
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.2/jquery.min.js"></script>
<script type="text/javascript">
	
	// The name casing fix function utilising regex
	function fixNameCasing(name) {
		var replacer = function (whole, prefix, word) {
			var ret = [];
			
			if (prefix) {
				ret.push(prefix.charAt(0).toUpperCase());
				ret.push(prefix.substr(1).toLowerCase());
			}
			
			ret.push(word.charAt(0).toUpperCase());
			ret.push(word.substr(1).toLowerCase());
			return ret.join('');
		};
		var pattern = /\b(ma?c)?([a-z]+)/ig;
		return name.replace(pattern, replacer);
	}
	
	// On document ready, wire up the 'blur' event for the relevant controls (those carrying the 'fix-name-casing' class). When focus is lost, in a given control, we take the control's input and format it based on the return value from fixNameCasing
	$(function() {
		$(".fix-name-casing").blur(function() {
			$(this).val(fixNameCasing($(this).val()));
		});
	});

</script>
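
Taken in isolation, a few quick calls show how the function behaves against the name styles mentioned earlier (this is a self-contained copy of fixNameCasing, reproduced here purely so the snippet runs on its own):

```javascript
// Self-contained copy of the fixNameCasing function, for illustration
function fixNameCasing(name) {
    var replacer = function (whole, prefix, word) {
        var ret = [];

        // A 'Mc'/'Mac' prefix, when present, is capitalised separately
        // so the following letter is also upper-cased
        if (prefix) {
            ret.push(prefix.charAt(0).toUpperCase());
            ret.push(prefix.substr(1).toLowerCase());
        }

        ret.push(word.charAt(0).toUpperCase());
        ret.push(word.substr(1).toLowerCase());
        return ret.join('');
    };
    var pattern = /\b(ma?c)?([a-z]+)/ig;
    return name.replace(pattern, replacer);
}

console.log(fixNameCasing("mcdonald"));    // McDonald
console.log(fixNameCasing("o'brien"));     // O'Brien
console.log(fixNameCasing("smith-jones")); // Smith-Jones
console.log(fixNameCasing("MACINTYRE"));   // MacIntyre
```

Note how the word boundary (\b) handles the apostrophe and hyphen cases for free, while the optional (ma?c) capture group deals with the Mc/Mac prefixes.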

The results! Each field in the following screenshot received fully lowercase or uppercase input before being tabbed out of (i.e. lost focus):

Name Capitalisation Test Output.

Lastly, here’s the entire code snippet:

<!DOCTYPE html>
<html>
<head>
	<title>Name Capitalisation Test</title>
	<style type="text/css">
		
		.control-top-margin {
			margin-top: 5px;
		}
	
	</style>
	<!--Bring jQuery into scope so we can hook up a function to relevant elements on 'blur' event (lost focus)-->
	<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.2/jquery.min.js"></script>
	<script type="text/javascript">
		
		// The name casing fix function utilising regex
		function fixNameCasing(name) {
			var replacer = function (whole, prefix, word) {
				var ret = [];
				
				if (prefix) {
					ret.push(prefix.charAt(0).toUpperCase());
					ret.push(prefix.substr(1).toLowerCase());
				}
				
				ret.push(word.charAt(0).toUpperCase());
				ret.push(word.substr(1).toLowerCase());
				return ret.join('');
			};
			var pattern = /\b(ma?c)?([a-z]+)/ig;
			return name.replace(pattern, replacer);
		}
		
		// On document ready, wire up the 'blur' event for the relevant controls (those carrying the 'fix-name-casing' class). When focus is lost, in a given control, we take the control's input and format it based on the return value from fixNameCasing
		$(function() {
			$(".fix-name-casing").blur(function() {
				$(this).val(fixNameCasing($(this).val()));
			});
		});

	</script>
</head>
<body>
	<div id="container">
		<!--An example form illustrating the fixNameCasing function being called on a test forename, middle names and surname field (when focus is lost)-->
		<form action="/" method="post">
			<div>
				<label id="forename-txt-label">Forename:</label>
			</div>
			<div>
				<input id="forename-text" name="forename-text" class="control-top-margin fix-name-casing" type="text" />
			</div>
			<div>
				<label id="middlename-text-label">Middle names:</label>
			</div>
			<div>
				<input id="middlename-text" name="middlename-text" class="control-top-margin fix-name-casing" type="text" />
			</div>
			<div>
				<label id="surname-text-label">Surname:</label>
			</div>
			<div>
				<input id="surname-text" name="surname-text" class="control-top-margin fix-name-casing" type="text" />
			</div>
			<div>
				<button id="submit-button" type="submit" class="control-top-margin">Submit</button>
			</div>
		</form>
	</div>
</body>
</html>

The likelihood is that I’ll be using this just as a basis for my current requirements and adjusting as needed.

I hope this proves useful and kudos to my friend who found this and the original stackoverflow contributor. If anyone has any other examples of code that tackles this problem, that they would like to contribute, just let me know by commenting below.

Cheers!

Implementing reCAPTCHA

I wanted to outline some recent work I’ve done with the reCAPTCHA Google API. Although not too difficult to implement, I did struggle a little to find C# based server side examples on how to ultimately validate a CAPTCHA. To start us off however, what is reCAPTCHA?

reCAPTCHA is essentially a mechanism to protect your site’s functionality from spam and other kinds of abusive activity. It’s free, which is a massive bonus, and as a cherry on top every solved CAPTCHA is used to annotate images and build on machine learning datasets. This data feeds into solving a myriad of problems, including improving maps and solving AI conundrums. The actual term is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart. For any history buffs, details on how this concept came about can be found here (in fact, there seems to be an interesting ‘origin of’ debate here):

Wiki CAPTCHA Documentation

To get started using reCAPTCHA, and for further information, you just need to visit the following link:

Google reCAPTCHA

Utilising reCAPTCHA version 2.0 seemed like the way to go for me and has a number of benefits. For example, it’s possible for this implementation to automatically confirm some requests as not being malicious in nature, without the need for a CAPTCHA to be solved. In addition, the CAPTCHAs themselves in version 2 are much nicer for a human to solve, relying on a person picking out characteristics in an image, rather than trying to read ever more complex and convoluted character strings embedded in a given image. Picking out particular objects in an image is still something programs struggle with somewhat, so this form of reCAPTCHA falls into a more secure bracket also.

Using reCAPTCHA

The basic process boils down to following these steps:

  • Go to the Google reCAPTCHA site and click on Get reCAPTCHA.
  • Sign in or sign up, do what you’ve got to do!
  • Register the domain where you want to embed reCAPTCHA. This will enable you to receive the relevant API keys to create and validate CAPTCHAs.
  • Add the relevant JavaScript to your page.
  • Embed the Site key in the page being served to the user (we’ll go over this below).
  • Use the Secret Key in your server side logic to validate the CAPTCHA response (based on user input). This is done by sending a request to the Google API siteverify address. Again, I’ll cover this below.
  • Get the response and see if the CAPTCHA has been solved correctly, simple as that.

First things first, you’ll want to safely note down your Site and Secret keys for further use; these can be viewed again at any time by logging into the reCAPTCHA portal (where you signed up). So, you’ve registered your domain and have the relevant keys; we now need to embed reCAPTCHA by adding the following elements to the page you want to target:

<head>
...
    <!--Use async/defer as necessary if you desire-->
    <script src='https://www.google.com/recaptcha/api.js'></script>
...
</head>
<body>
    ...
    <!--The id attribute is not absolutely required, but I have listed it here as I make further use of it (basically a style choice) in a jQuery AJAX call (could just use the class however)-->
    <div id="g-recaptcha-response" class="g-recaptcha" data-sitekey="YOUR_SITE_KEY_GOES_HERE"></div>
    ...
</body>

Be sure to drop the Site key you were provided with in the data-sitekey attribute, within the div outlined (and add the JavaScript reference listed to your page). Load up your page and you should see something akin to the following:

reCAPTCHA V2 Control.

This is a super start. If you are doing a simple post on submit, you’ll be able to pull information out of the standard request object and use this server side. For me however, I wanted something incredibly lightweight so I went with the following jQuery AJAX call (I may tweak this in my personal implementation so treat this as not yet finalised, but it provides you with an idea of the structure nonetheless):


//Defines an outline (structure) for a JavaScript contact object
function Contact(name, email, message, recaptchaClientResponse) {
	this.Name = name;
	this.Email = email;
	this.Message = message;
	this.RecaptchaClientResponse = recaptchaClientResponse;
}

...

//Submit Enquiry button click handler
$(".submit-enquiry").click(function (e) {

	//Hide the alert bar on every new request (TODO - More code required to tidy up classes on the alert div)
	$(".alert").hide();

	//Use ajax to call the service HandleEmailRequest method
	$.ajax({
		cache: false,
		async: true,
		type: "POST",
		dataType: "json",
		processData: false,
		data: JSON.stringify(
			{
				contactObj: new Contact
					(
						$("#NameTextBox").val(),
						$("#EmailTextBox").val(),
						$("#MessageTextArea").val(),
						$("#g-recaptcha-response").val()
					)
			}),
		url: "URL_TO_A_SERVICE.svc/HandleEmailRequest",
		contentType: "application/json;charset=utf-8",
		success: function (evt) {
			//Evaluate the response and add content to alert bar
			if (evt.SendEmailResult)
			{
				$(".alert").addClass("alert-success").html("<p>Message successfully sent!</p>").slideDown(1000);
			}
			else
			{
				$(".alert").addClass("alert-danger").html("<p>We could not send the message, sorry about that.</p>").slideDown(1000);
			}

			//Reset the recaptcha control after every request
			grecaptcha.reset();
		},
		error: function (evt) {
			//Add content to the alert bar to show the request failed
			$(".alert").addClass("alert-danger").html("<p>We could not send the message, sorry about that.</p>").slideDown(1000);

			//Reset the recaptcha control after every request
			grecaptcha.reset();
		}
	});
});

The first part of this code encapsulates the idea of a contact, in my case at least (i.e. a user leaving a message on the web page that will become an email). This is just an easy way for me to bundle up details during the AJAX call. Using jQuery, I’ve attached a handler to the submit button on my page which, apart from a little UI manipulation (for an alert bar element), in essence just makes a call to a service (via the url parameter) using details that the client has provided, including information on the solved CAPTCHA. This is passed to the service using the data parameter; note the use of jQuery to get details of the CAPTCHA the user has completed ($("#g-recaptcha-response").val()). This is passed as JSON to the service.

Once a request has been validated, the return value (a simple boolean in my case) is inspected and an alert is shown to the user before resetting the reCAPTCHA control (another spam control mechanism that I’ve added in for extra peace of mind). Lastly, for me, the use of JSON.stringify was absolutely key as I want to work with JSON data over the wire. More details can be found here:

JSON.stringify() Documentation
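
To make the wire format concrete, here is a small self-contained sketch of what JSON.stringify produces for the data parameter. The values are placeholders of my own; the Contact constructor simply mirrors the one shown earlier:

```javascript
// Plain stand-in for the Contact constructor shown earlier
function Contact(name, email, message, recaptchaClientResponse) {
    this.Name = name;
    this.Email = email;
    this.Message = message;
    this.RecaptchaClientResponse = recaptchaClientResponse;
}

// Serialise the wrapped object exactly as the AJAX call's data parameter does
var payload = JSON.stringify({
    contactObj: new Contact("Jane Doe", "jane@example.com", "Hello!", "RECAPTCHA_TOKEN_PLACEHOLDER")
});

console.log(payload);
// {"contactObj":{"Name":"Jane Doe","Email":"jane@example.com","Message":"Hello!","RecaptchaClientResponse":"RECAPTCHA_TOKEN_PLACEHOLDER"}}
```

The outer contactObj wrapper matches the parameter name on the service method, which is what allows the WCF plumbing to rebuild a Contact object on the server from this JSON.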

This is where it got a little trickier to proceed. On the reCAPTCHA documentation site, for version 2.0, I could only see examples for PHP:

reCAPTCHA Code Examples Available.

So, what you’ll see next is the culmination of my digging around for a jQuery/AJAX/C# solution to this particular head-scratcher. Hopefully, it proves useful to anyone interested in going down this route.

Let’s get going! On the service side, you’ll need something like the following, to gather up the AJAX request:

/// <summary>
/// Represents a Contact (Potential Customer) contacting
/// the recipient with an enquiry.
/// </summary>
[DataContract]
public class Contact
{
	#region Automatic Properties (Data Members)

	/// <summary>
	/// The Contacts full name.
	/// </summary>
	[DataMember]
	public string Name { get; set; }

	/// <summary>
	/// The Contacts email address.
	/// </summary>
	[DataMember]
	public string Email { get; set; }

	/// <summary>
	/// The Contacts message to the recipient.
	/// </summary>
	[DataMember]
	public string Message { get; set; }

	/// <summary>
	/// A string that represents the clients reCAPTCHA
	/// (V2) response (passed along with other Contact
	/// information and processed before a message can be sent).
	/// </summary>
	[DataMember]
	public string RecaptchaClientResponse { get; set; }

	#endregion Automatic Properties (Data Members)
}

...

/// <summary>
/// Outlines the HandleEmailRequest method that is part of this service.
/// Consumes and returns a JSON format message (called from the client
/// with details that instantiate a Contact object). Method designed to 
/// process reCAPTCHA details and, on success, send an email to
/// the designated (recipient) email address. 
/// </summary>
/// <param name="contactObj">Contact details associated with the person requesting information.</param>
/// <returns></returns>
[OperationContract]
[WebInvoke(Method = "POST", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.Wrapped)]
bool HandleEmailRequest(Contact contactObj);

...

/// <summary>
/// Public service method that attempts to send a
/// user message to the recipient as an email.
/// </summary>
/// <param name="contactObj">The Contact object constructed from JSON (passed from the client).</param>
/// <returns>A boolean that represents if this process was successful.</returns>
public bool HandleEmailRequest(Contact contactObj) => new EmailSender(contactObj).SendEmail();

I’ve given you a roll-up of an example Contact class (that is instantiated from the JSON automatically on call to the service), an example service interface definition and the outline of a service method (contained in a class implementing this interface). These of course are in separate files, but I’ve lined them up side-by-side to make it easier to absorb. In my case, the details are passed to and wrapped in an EmailSender class, the reCAPTCHA validation being called internally by the SendEmail method (as a private method called ValidateRecaptchaClientResponse):

/// <summary>
/// Private helper method that looks at the Contact object
/// associated with this EmailSender and attempts to verify
/// if the reCAPTCHA client response is valid (before attempting to
/// send an email message to the recipient). 
/// </summary>
/// <returns>A boolean that represents if reCAPTCHA validation succeeded or failed.</returns>
private bool ValidateRecaptchaClientResponse()
{
	//Online reference used as a basis for this solution: http://www.codeproject.com/Tips/851004/How-to-Validate-Recaptcha-V-Server-side
	try
	{
		//Make a web request to the reCAPTCHA siteverify (api) with the clients reCAPTCHA response. Utilise the Response Stream to attempt to resolve the JSON returned
		HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(string.Concat("https://www.google.com/recaptcha/api/siteverify?secret=YOUR_SITE_SECRET_KEY_GOES_HERE&response=", contactObj.RecaptchaClientResponse));

		using (WebResponse response = wr.GetResponse())
		{
			using (StreamReader sr = new StreamReader(response.GetResponseStream()))
			{
				//Use a JavaScriptSerializer to transpose the JSON Response (sr.ReadToEnd()) into a RecaptchaResponse object. Alter the 'Success' string of this object to a bool if possible
				bool success = false;
				bool.TryParse(new JavaScriptSerializer().Deserialize<RecaptchaResponse>(sr.ReadToEnd()).Success, out success);

				//Return a value that denotes if this reCAPTCHA request was a success or failure
				return success;
			}
		}
	}
	catch (Exception ex)
	{
		//Catch any exceptions and write them to the output window (better logging required in future). Return false at the end of this method, issue occurred
		System.Diagnostics.Debug.WriteLine($"An error occurred whilst validating the ReCaptcha user response. Type: { ex.GetType().Name } Error: { ex.Message }.");
	}

	//If we hit this portion of the code something definitely went wrong - Return false
	return false;
}

The WebRequest.Create and Deserialize calls are of the most interest here. In the former, you will be required to insert your site ‘Secret key’ into a request to the siteverify address we mentioned earlier. The response that needs to be appended to this string is equal to the reCAPTCHA information you gleaned from the client. You’ll notice that the latter makes use of a RecaptchaResponse type, which is basically an object wrapper used to contain information from the deserialised JSON response (as part of a reCAPTCHA check). This is outlined as follows:

/// <summary>
/// Represents an object (constructed from JSON)
/// that outlines a reCAPTCHA Response, and the pieces
/// of information returned from a verification check.
/// </summary>
public class RecaptchaResponse
{
	#region Automatic Properties

	/// <summary>
	/// The success status of the reCAPTCHA request.
	/// </summary>
	public string Success { get; set; }

	/// <summary>
	/// The Error Descriptions returned (Possibly to implement
	/// in the future).
	/// </summary>
	//public string ErrorDescription { get; set; }

	#endregion Automatic Properties
}

The actual JSON returned from the response stream takes the following form, so it is possible to extract error codes also if you desire (for me, I’m ripping a simple boolean out of this based on the success value):

{
  "success": true|false,
  "error-codes": [...]   // optional
}

On a very last note, cited in the code above but to reiterate, this link was invaluable:

Code Project reCAPTCHA Validation Example

That’s about it for a basic end to end sample.

The API documentation (and the steps listed on the reCAPTCHA site after registration) are pretty good, so you should be in safe enough hands:

reCAPTCHA API Documentation

Thanks all and take care until the next time.

Future Decoded 2015 Play-by-play

Hello beautiful people!

It’s a fantastic, gorgeous Saturday morning (it’ll be Monday by the time I hit the publish button, such is the enormity of the post!); the birds are chirping, the sun is shining through the balcony windows (and there is a bloody wasp outside, STILL!!!) and my wife has left me…………to go on a girly weekend (that probably sounded more alarming than intended; hmmm, oh well, it stays!). Whilst she is away fighting the good fight, this gives me the opportunity to go over my thoughts on the recent Future Decoded 2015 event that took place at ExCeL in London.

The links to outline this event have been posted before on my blog, but just in case, here are the goods again:

Future Decoded 2015
Future Decoded 2015: Technical Day Highlights

Before we begin, it’s worth pointing out that I attended this event a couple of weeks ago, so apologies if any inaccuracies pop up. I’ll do my best to stick to the facts of what I can remember and specific points that interested me; other commitments ended up preventing me from getting to this particular post sooner. You’ll all let me off, being the super gracious, awesome folks you are, I’m sure :-).

So, FIGHT!!!!!

Sorry, I had a dream about Mortal Kombat last night and upper-cutting people into the pit – What a great stage that was! Ah, the memories….Let’s begin/start/get on with it then.

Morning Key Notes

The morning Key Notes were varied and expansive in nature. I won’t discuss all of them here, only the takeaway points from the talks that struck a chord with me.

1) Scott Guthrie. EVP Cloud and Enterprise, Microsoft (Azure).

I was particularly looking forward to this talk, being a keen follower of Scott Guthrie (and Scott Hanselman), and I normally try to catch up with Channel 9 features and Azure Fridays whenever possible (I’ve linked both, although I’m sure most of you, if not all, have come across Channel 9 before or heard of Azure Fridays).

The talk did have primer elements as you would expect, i.e. here’s the Azure Portal and what you can expect to find (in relation to resources, templates you can access for applications, services, Content Distribution Networks (CDN), etc). The next bit really caught me cold, who was expecting a giant image slide of a cow! I certainly wasn’t…

Estrus in Cows

What followed was a full example of real-time data recording and assessment surrounding the monitoring of cows in Asia. I’ve provided a link below that sums up the concept of Estrus (being in heat) nicely enough, but in layman’s terms it relates to cows ‘being in the mooooooood’ (wife insisted I added that joke). Obviously, a farmer’s ability to accurately detect this, urm, state of being in a cow is an incredibly important factor in the ability to produce calves.

It turns out that a cow tends to move more when in the Estrus state; something that can certainly be measured. So, with pedometers attached to cows to measure steps taken, and an Azure based service receiving and providing feedback in real-time, the farmer in question was able to take action to maximise calf production. Further to this, analysis of the data gathered was able to identify trends between how long cows had been in the Estrus state and the gender of offspring. Crazy stuff, but all very interesting. Feel free to read further to your heart’s content:

Cow Estrus Detection

The Internet of Things (IoT) was briefly touched on and another brief, live coding example ensued.

Scott produced a small, bog-standard heat sensor (apparently just a few pounds; I was impressed he didn’t say dollars!) and proceeded to demonstrate a basic WinForms application passing a JSON payload to Azure in real-time (measurements taken a few times a second). This strikes me as exciting territory, and I have friends who already develop applications working in tandem with sensors, backed up by technologies such as the Raspberry Pi and Arduino, for example. The talk closed with the conceptual idea that the majority of data in the world today is still largely unmeasured, and the hope that Azure would be an important platform in unlocking developers’ potential to measure previously untapped data.

2) Kevin Ashton. Inventor of the “Internet of Things”.

Kevin coined the term the Internet of Things (IoT), and gave a very good talk on what this means, as well as identifying certain ‘predictions’ for the future. For instance, that we, as a species, would survive climate change. He quickly noted that calling ‘BS’ on this particular one would be tricky should we suffer a doomsday style event at the hands of climate change (I don’t imagine the last thoughts of humanity to be, ‘oh, Kevin Ashton was so bloody wrong!’). Another interesting prediction: we would all own a self-driving car by 2030. Prototype examples already exist, such as Google’s (and Apple’s) efforts, and the Tesla:

Google/Apple (Titan) Self Driving Cars
The Tesla

Self-driving cars being one of the cases in point, the IoT relates to how a whole new host of devices will now become ‘connected’. Besides cars rigged up to the internet, we are all aware of the hooking up of internal systems in our homes (heating, etc.) and utility devices (the washing machine), so as to always be online and accessible at a moment’s notice. This world isn’t coming per se; it’s essentially already here.

Pushing past this initial definition, Kevin was keen to stress that the IoT is not limited in its definition to just ‘the connecting of hardware to the internet’. Wiki sums this up quite nicely on this occasion: software (services and analytics) that moves forward with hardware changes will ultimately change the way we live, work, shop and go about our daily lives. Whether this be data relayed from the fridge to Google Glass (yes, you are out of milk!), or perhaps a self-driving car ordering ‘click and collect’ shopping and driving you to the collection point after work (not to mention triggering the heating x miles from home!). Software, and the analysis of the new kinds of data we can record from interconnected elements, will be a huge driving force in how our world changes:

Internet of Things (IoT)

Lastly, before I forget and move on, a key phrase voiced several times (although I cannot remember the exact speaker, so apologies for that; it was probably David Chappell) was to reset your defaults. Standard client/server architecture was discussed, and for those of us that are part of long running businesses this is what we are exclusively, or at least partially, dealing with on a daily basis still. However, the use of mobile devices, tablets, etc., as clients, with the cloud as the underpinning location for the services these clients communicate with, is becoming the norm. For start-ups today, mobile-first development and the cloud (Azure or Amazon Web Services (AWS)) are probably the initial go-to.

For some of us (speaking from a personal standpoint only), a major factor in our success as developers could simply be determined by understanding the cloud and getting the necessary experience to make the transition (for those who are not actively taking part in this world of course).

So, now we have the IoT, let’s talk security…

3) Graham Cluley. Security Analyst, grahamcluley.com.

Graham delivered a funny and insightful talk surrounding everyone’s ‘Oh my God, the horror, please kill me’ subject: the wonderful world of security.

In a nutshell, he argues (and certainly proves his point as you’ll read next) that the IoT will bring wonders to our world, but not without issues. We now have a scenario whereby a breadth of new devices have suddenly become internet connected. However, are the driving forces behind these changes the people who are used to dealing with the murky world of malware, viruses and hacking attempts (such as OS developers)? The initial answer is ‘probably not’. This is, of course, just a cultural divide between those used to traversing the security world and protecting devices from such attacks, and those tasked with bringing new devices to the interconnected world.

The hacking of self-driving cars (big topic it would seem) was discussed:

Fiat Chrysler Recalls

Also, the potential hacking of pacemakers (Bluetooth/WiFi enabled) was covered; famously featured in the TV series Homeland, this actually led to Vice President Dick Cheney’s cardiologist disabling the wireless functionality of his device:

Pacemaker Hacking
Could Pacemakers Be Hacked?

Although funny, the talk did indeed bring up a very serious issue. The ramifications could be catastrophic, depending on the types of devices that ultimately end up being exposed to the masses via the web. Essentially, as the IoT age develops, extra care must be taken to ensure that security is right on up there, in the hierarchy of priorities, when developing software for these devices.

4) Chris Bishop. Scientist and Lab Director, Microsoft Research.

The last talk I would personally like to discuss briefly was by Chris Bishop; there were a few great nuggets here that are well worth covering.

The idea of Machine Learning (not a topic I was overly familiar with for starters), Neural Networks and Pattern Recognition laid the foundation for a talk looking at the possibility of producing machines with human-level, or even super-human, intelligence.

The Microsoft Kinect was used to demonstrate hand-tracking software that, I have to admit, had an incredible amount of fidelity in recognising hand positions and shapes.

Lastly, a facial recognition demonstration that could estimate, with good accuracy, the emotional state of a person was kicked off for us all to see. Very, very impressive. There was most certainly an underlying feeling here (and as much was hinted at) that this kind of technology has many hurdles to jump. For instance, building something that can consume an image and accurately describe what is in that image is still a flaky concept, at best (and the difficulties of producing something capable of this are relatively vast).

Still, a greatly enjoyable talk! A book was touted, and I believe (please don’t shout at me if I’m wrong) this is the one:

Pattern Recognition and Machine Learning

After the morning Key Notes, a series of smaller talks and break-out sessions were available to us. Here’s how I spent my time…

Unity3D Grok Talk

Josh Taylor. Unity Technologies.

It’s my sincere hope that, on discovering this, my employer won’t decide to sack me! This was over lunch and was a self-indulgent decision I’m afraid! You’ll know from some of my historical posts that I have a keen interest in Unity3D (and have spent time making the odd modest prototype game here and there), and I was interested to see how Unity 5 was progressing, especially as a greater cohesive experience with Visual Studio had been promised.

In this short, 20-minute talk, we experienced how Visual Studio (finally) integrates nicely into the Unity3D content creation pipeline. Unity3D now defaults to using Visual Studio as the editor of choice, with MonoDevelop being pushed aside. Apologies to anyone who likes MonoDevelop, but I’ve never been able to get behind it. Between its wacky IntelliSense and what I can only describe as a crash-tastic experience in past use, I haven’t seen anything yet to sway me from using Visual Studio. In fact, it was demonstrated that you can even use Visual Studio Code and, as it’s cross-platform, even Mac and Linux users can switch to this if they wish. More reasons to leave MonoDevelop in the dust? It’s not for me to say really, go ahead and do what you’ve got to do at the end of the day!

In the past, a paid-for plugin was required in order to debug Unity projects in Visual Studio. This particular plugin has been purchased by Microsoft and is now available to all. Being able to easily debug code doesn’t sound like much, but trust me, it’s like having a basic human right re-established – such good news!

The new licensing model was also commented on, a massive plus for everyone. The previous Free/Pro divide is no more; now everyone gets access to the lion’s share of the core features. You only need to start spending money as you make it (fair for Unity to ask for a piece of the pie if you start rolling in profit/expanding a team to meet the new demand). For me, this means I actually get to use the Unity Pro water effects, hoorah ;-).

Following this, I spent a bit of time last weekend watching the Unite 2015 Key Notes, discussing 2D game development enhancements, cloud based builds and Oculus support. Well worth a look if and when time allows:

Unite 2015 Key Notes

Plus, if Oculus technology interests you, then it’s definitely worth watching John Carmack’s (formerly of id Software, the mind behind Wolfenstein and Doom) Key Note from the Oculus Connect 2 event:

John Carmack Oculus Keynote

Very exciting times ahead for Unity3D I believe. Self-indulgence over, moving forward then…

Journey to the Intelligent Cloud

Corey Sanders. Director of Program Management, Azure.

Following the Unity3D talk, I made my way back to the ICC Auditorium (I missed a small section of this particular talk, but caught the bulk of it) to catch up on some basic examples of how the new Azure Portal can be used. This took the form of a brief overview of what’s available via the portal, essentially a primer session.

In my recent, personal work with Azure I’ve used the publishing capability within Visual Studio to great effect; it was very transparent and seamless to use by all accounts. A sample was provided within this particular session which demonstrated live coding changes, made in GitHub, being published back to a site hosted on Azure.

Going off on a tangent…

Very much a personal opinion here, but I did find (and I wasn’t the only one) that a good portion of the content I wanted to see was a) on at the same time (the 1:15pm slot) and b) was during the core lunch period where everyone was ravenous, I’m a ‘hanger’ sufferer I’m afraid. C# with Mads Torgerson, ASP.NET 5, Nano Servers and Windows 10 (UWP) sessions all occupied this slot, which drove me a little nuts :-(. This felt like a scheduling issue if I’m honest. I’d be interested to hear from anyone who did (or didn’t) feel the same.

I was so disappointed to miss Mads Torgerson, I very much enjoyed the recent C# language features overview and would have loved to have made this breakout session! I did walk past him later in the day, and I hope he never reads this, but he seemed ridiculously tall (perhaps Godly C# skills made him appear several inches taller, who knows!). It doesn’t help that I’m on the shorter side either, I just wanted to be 5′ 11″, that’s all I ever wanted (break out the rack, I need to get stretching!). I should have said hello, but wimped out!

F# Language Breakout Session

Don Syme. Principal Researcher, Microsoft Research.

This was easily the part of the event that resonated the most with me, and strongly influenced the foray into F# that I undertook recently. Don Syme, the designer and architect of the F# language, took us through a quality primer of the syntax and how F# can be used (and scaled) for the cloud.

All of this aside, the most impressive part of the talk was a live demonstration of F# Type Providers. Again, this is fully covered in my previous post so I’ll just direct you to that, which in turn will aid me in cutting down what is now becoming a gargantuan post. In summary, the ability to draw information directly from web pages, rip data straight from files and databases, and combine and aggregate it all together using minimal code produces a terse, easy to understand and pretty darn good experience in my book. Even the code behind producing visual feedback, in the form of the charting API, is succinct; the bar really isn’t set too high for new starters to get involved.

If you decide to give anything a go in the near future, I would give F# the nod (followed closely, just a hair’s breadth away, by jQuery in my opinion). Certainly check it out if you get the chance.

Final Key Note

Professor Brian Cox. Physicist.
Krysta Svore. Senior Researcher, Microsoft Research.

The day proceeded in fast forward and, before we’d really had the chance to gather our thoughts, we were sitting in the main auditorium again faced by Professor Brian Cox, Krysta Svore and a menagerie of confused attendees staring at mathematical formulas outlining quantum theory.

Into the wonderful world of quantum computers we dance, and in my case, dragging my brain along from somewhere back yonder in a desperate attempt to keep up. Thankfully, I’m an avid TED talk fanatic and had, in the run up to the event, brushed up on a few quantum theory and quantum mechanics videos; lucky I did really. The content was dense but, for the most part, well put together and outlined the amazing (and potentially frightening) world of possibilities that quantum computers could unlock for us all.

Professor Brian Cox cruised through the theories we’d need to be intimate with in order to understand the onslaught of oncoming content surrounding quantum computers. In essence, a traditional ‘bit’ has a defined state (like a switch), on or off. However, and this is the simple essence of what they were trying to get to, traditional bits are reaching limitations that will prevent us from solving more complex problems in a timely manner (you’ll see what I mean in a second). Therefore, qubits, born from quantum theory, are the answer.

Now, I’m not going to insult your intelligence and go into too much detail on a subject that I am clearly not an expert in. So, just in ‘layman’s bullet points’, here is what I took from all that was said and done across the Key Note:

  • With bits, you are dealing with entities that can have a fixed state (0 or 1). A deterministic system if you will, that has limitations in its problem crunching power.
  • Qubits, however, take us into the realm of a probabilistic system. The qubit can be in a superposition of all of the allowed states, not just 0 or 1.
  • Therefore, the problem crunching powers of qubits are exponential in nature, but the probabilistic nature makes measuring them (and interactions involving them) difficult to get to grips with.
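
To put a rough number on ‘exponential’ (my own illustration, not a figure from the talk): describing an n-qubit register classically means tracking 2^n amplitudes, one per basis state.

```javascript
// A register of n classical bits is in exactly one of 2^n states at any moment;
// n qubits can be in a superposition across all 2^n basis states at once, so a
// full classical description needs 2^n amplitudes.
function basisStates(n) {
    return Math.pow(2, n);
}

basisStates(1);   // 2 states: our familiar 0 or 1
basisStates(10);  // 1024 basis states
basisStates(300); // more basis states than atoms in the observable universe
```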

So is it worth fighting through the technical problems in order to harness qubits? What kind of gains are we talking about here?

Krysta Svore outlined an example showing that it would take roughly one billion years for a current supercomputer to crack (more complex than standard) RSA encryption. How long would it take a quantum computer, you may ask? Significantly less time is the answer; estimated at around one hundred seconds, in fact. This clearly defines for us the amazing problems we’ll be able to solve, whilst simultaneously illustrating the dangerous times that lie ahead from a security standpoint. Let’s just hope cryptography keeps up (I can see a few sniffs to suggest things are in the pipeline, so I will keep an eye out for news as it pops up).
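
Just as back-of-envelope arithmetic (my own, not from the talk), those quoted figures imply a speedup factor somewhere around 3 × 10^14:

```javascript
// Rough scale of the claimed quantum speedup: one billion years versus
// roughly one hundred seconds.
var SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60;  // approx 3.156e7
var classicalSeconds = 1e9 * SECONDS_PER_YEAR; // one billion years, in seconds
var quantumSeconds = 100;
var speedupFactor = classicalSeconds / quantumSeconds; // approx 3.16e14
```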

So you want a quantum computer, I hear you say! Hmmm, I wouldn’t put it on the Christmas list anytime soon. Current quantum computers need to be super-cooled (and, from the pictures we got to see, don’t look like something you could hike around with!), so we’re not likely to get our hands directly on them in the near future.

Can you get your mitts on quantum simulators today? Apparently yes is the answer (completely untested links, just for you to peruse on your own, good luck):

QC Simulators
Project Liquid

Taking nothing away from the Key Note though, it was a concrete finish to an excellent event. Would I go again? You bet! Should we get the train next time instead of driving? Taking into account the mountains of free beer and wine on offer, of course! To finish up, before summarising the Expo itself, if you haven’t been and get the opportunity (in fact, actively seek the opportunity, enough said) then definitely book this in your calendar, thoroughly brilliant.

Expo

Very, very quickly, as I am acutely aware that your ability to focus on this post (if not already) must have completely diminished by this point, I wanted to describe what the Expo itself had to offer. If you’re still reading, give yourself a pat on the back!

One of the more compelling items we saw was the use of the new Lumia phone as a (kind of) desktop replacement attempt. Let’s get one thing straight, you’re not going to be doing hardcore software development using Visual Studio or any other intensive task on this device anytime soon. However, there was certainly enough evidence to suggest that basic productivity tasks would be possible using a mobile phone as a backbone to facilitate this.

The Lumia can be hooked up to a dock, akin to the Surface Pro 4 (the docks are subtly different apparently, so are not cross-compatible), and that allows it to be tied to a display device. You can also get a folding mouse and keyboard, for a very lightweight, on-the-go experience. Interesting certainly, but there is a definite horse-power issue that will prevent anyone working on anything remotely intensive from getting on board. Anyway, for those interested the link below will get you started:

Lumia Docking Station

I saw a few Surface Pros, and wondered whether we could potentially smuggle a few out of the Expo! Only kidding, no need to call the Police (or for anyone I work with thinking I am some kind of master criminal in the making) :-).

An Oculus demonstration booth was on the Expo floor, and displays were hooked up to show what the participants were experiencing. It was noted that a few of the people using the Oculus seemed to miss the point a bit, and kept their head completely still as they were transported through the experience. Once the heads started moving (to actually take in the world) you could visibly see people getting incredibly immersed. Alas, the queues were pretty darn large every time I made my way past, so I didn’t get a chance to experience it first-hand. One for the future.

There was also a programmable cocktail maker, an IoT masterpiece I think you’ll agree. A perfect union of hardware, software and alcohol, a visionary piece illustrating the future has arrived!

The next time an event like this comes around I will endeavour to get a post up in a timely fashion (which will vastly improve the content I hope).

Thanks for reading and a high five from me if you made it this far. Back to coding in the upcoming post I promise, until the next time, cheers from me (and would you believe it, it’s now Tuesday)!

Modernizr – Detecting Screen Size Changes

A brief titbit today, but one I felt was worth sharing and has come in handy for work/personal projects recently for me.

I’ve had a couple of requirements to gracefully show/hide and adjust web page layouts based on screen sizes (and screen re-sizing). I came across the following solution which works pretty damn well.

First things first, you’ll need Modernizr, which is in essence a feature-detection JavaScript library. In this case, however, I’m using its media query support to react to browser re-sizing. There are a few options for obtaining this for your projects but, as far as Visual Studio is concerned, I used the Package Manager Console with the following command:

Install Modernizr via the Package Manager Console.

Once installed, we end up with the JavaScript library included under the default Scripts folder:

Modernizr in Scripts Folder.

On installing the package, as I didn’t specify a specific version, I end up with the following declaration in my packages.config file (part of my ASP.NET MVC project) – 2.8.3 denoting the most recent version:

<package id="Modernizr" version="2.8.3" targetFramework="net452" />

Next up, simply chuck the usual script element into your page to reference the library – Now you’re all set!

<script src="~/Scripts/modernizr-2.8.3.js" type="text/javascript"></script>

The following snippet shows the basic scaffolding code to start capturing screen size changes (I’ve declared this code in my jQuery document ready function). The doneResizing function is tied to the window resize event and you can easily use Modernizr to read and react to the screen size as required:

//Function to react to screen re-sizing
function doneResizing() {
	if (Modernizr.mq("screen and (min-width:868px)")) {
		//Implement jQuery/JS to handle a larger screen (i.e. Laptops/Desktops). In my case adding/removing a class to show/hide elements
	}
	else if (Modernizr.mq("screen and (max-width:867px)")) {
		//Implement jQuery/JS to handle a smaller screen (i.e. Tablets/Mobiles). In my case adding/removing a class to show/hide elements
	}
}

//Call doneResizing on re-size of the window
var id;
$(window).resize(function () {
	clearTimeout(id);
	id = setTimeout(doneResizing, 0);
});

//Call doneResizing on instantiation
doneResizing();

Currently, I’m using this to show/hide element containers within a web page based on screen size (and apply/remove a few classes on the fly to ensure everything looks as it should on desktop, tablet and mobile displays). It appears to function very well; one worth investigating for your own projects. See here for the original Stack Overflow post detailing ideas surrounding this concept (including other CSS-related solutions).
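
One tweak worth considering: the setTimeout in the snippet above uses a zero-millisecond delay, so doneResizing still fires very frequently during a drag-resize. A small generic debounce helper (my own plain-JavaScript sketch, no library required) lets you choose a real delay:

```javascript
// Generic debounce: keep resetting a timer while events pour in, and only run
// the handler once they have stopped arriving for delayMs milliseconds.
function debounce(fn, delayMs) {
    var id;
    return function () {
        var self = this, args = arguments;
        clearTimeout(id);
        id = setTimeout(function () { fn.apply(self, args); }, delayMs);
    };
}
```

With this in place, the resize wiring collapses to something like $(window).resize(debounce(doneResizing, 200)); where 200ms is just an illustrative delay.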

Bye for now!

Back Online: Normal Service Resumed

I’m back from my hiatus which encompassed getting married, eating far too much food and drinking wine and beer on the wonderful Adriatic coast. It’s time to get back to some serious coding and perhaps reconsider the longer term plans for this blog.

To start us off, I’ve been contemplating pushing a little money into this to sharpen up the experience a little and will most likely give the blog some dedicated presence on Facebook/Twitter. Why do it by halves; I’ll go balls deep and hope for the best!

There are numerous items that I previously wanted to, and still plan on, covering but other nuggets of technology have caught my eye in the interim. In addition to just writing code, I would also like to reflect on my own methodologies for learning subject matter and trying to improve comprehension as I progress on this journey. Anything I do to this end will get ‘air time’ within this blog and I’ll let you all know if I come across anything that works particularly well (or falls flat on its face!) as and when it happens.

Lastly, although not strictly ‘code’ based, my wife (weird to say that!) plans on starting her own business this year so it provides us both with an opportunity to reimagine our workspace in the home. The plan is to turn our crap-hole of a box room into a useable work area; as we get stuck into this I’ll post updates to show how this evolves.

As we all know, putting something down on paper (or the internet!) is the first step on any journey. Here are the redefined hubs of activity, as I see them, covering things you can expect to see on this blog in 2015/2016.

  • Reimagining of the Blog and some kind of dedicated presence on Facebook/Twitter.
  • Changes to our home workspace to show you how this progresses.
  • Updates covering any learning techniques as I study them. If these are useful to coding then expect them to get ‘air time’.
  • Coverage on the following topics (not sure how basic/advanced this will be – most likely this will comprise feelers into a topic unless something really takes my fancy):
    • Finishing off the Epic Quest project.
    • F# Forays.
    • Coverage of Python.
    • Some further raw JavaScript coverage including jQuery.
    • Hello World level Raspberry Pi.
    • Coding against the Leap Motion Controller API.
    • Xamarin Tools.
    • ASP.NET MVC.
    • My friend Pete has written a superb object-orientated take on JavaScript – Picket.JS.
    • Further C# Unity game development (I still see myself covering a larger scale project encompassing the use of Blender, Gimp and Unity to make a standalone title).
    • Posts covering C# and TSQL (I’ve done some MySQL work recently so I will incorporate this into the proceedings if possible) as interesting topics catch my eye.
    • WPF (Rooting around in XAML) as time allows.

In and around this, I’m starting to sniff around the idea of completing full Microsoft Certifications in the next year to year and a half, so as I hop hurdles surrounding this I’ll give you all of the details.

This is not really designed to be a personal ransom note and I’m not going to outright hold myself to completing all of these things, but I do want to make a commitment to producing content as and when I can and keeping this fun (for myself and anyone reading along).

All that’s left to say is wish me luck and watch this space!

jQuery is just a little bit sexy!

Cramming the word ‘sexy’ into the title was a sure-fire way of drumming up some attention. I’m happy to resort to dirty tactics as I’m still rocking my noob colours :-). It’s time to take a trip down the yellow brick, urm, jQuery road? Yes, I’ve said it now, so I’m sticking with it. I’ll start this in the way I mean to continue; with all of you wondering whether I’m mentally stable…..moving on!

I’ve been having a tonne of fun with this awesome JavaScript library. So much so, I think I’ve probably been driving all of my work colleagues a little bit nuts talking about it. I’m really not winning many friends here I’m sure but jQuery is, urm, just a little bit sexy dare I say it (sorry Claire, I’m sure I’ll pay for this later!).

I’m positive that I won’t need to tell too many people what this funky little API is all about; most developer buddies I have spoken to know of it, or are well versed (much more so than me) in the wonders of what this has to offer.

jQuery allows you to perform HTML manipulation and traversing of the DOM (using CSS class selectors) as well as orchestrating some neat animation tricks.

I’ll aim to cover this API in further detail but I wanted to do a little ‘bear’ bones post to get this started (sorry, I couldn’t help it, bad joke I know!).

You’ve got a couple of choices on how to bring jQuery into scope; you can use a CDN (Content Delivery Network) or simply reference it from a known directory path. This can be done by dropping one of the following into the head element of the web page:

<!--Using the jQuery CDN (and using a designated version)-->
<script type="text/javascript" src="//code.jquery.com/jquery-1.11.1.min.js"></script>

<!--Using the Google CDN (and using a designated version)-->
<script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>

<!--Using a downloaded version of the file (stored within the _js directory of the website in this case)-->
<script type="text/javascript" src="_js/jquery-1.11.1.min.js"></script>

I’ll dodge the downloaded file vs CDN debate for now and just concentrate on jQuery’s usage, mainly due to the fact that I get the impression this incites violence among normally civilised folk.

For the purposes of a quick showcase I’ll utilise the jQuery ready function (as we’re going to be manipulating loaded HTML I want to make sure that this process is fully completed before the tinkering begins – Placing code in this function removes the possibility of us attempting to reference unloaded parts of the page).

Immediately beneath the script tag created to bring jQuery into scope bang in the following code:

<!--Add the long-hand ready function declaration to perform actions once the page HTML is loaded-->
<script type="text/javascript">
	//Long hand ready function declaration
	$(document).ready(function () {
		//Code goes here, for instance...
		console.log("Hello world! In the ready function!");
	});
</script>

I’ve decided though, with barely a second thought for the poor long-handed approach, to adopt the short-hand declaration myself:

<script type="text/javascript">
	//Short hand ready function declaration
	$(function () {
		//Code goes here, for instance...
		console.log("Hello world! In the ready function!");
 	});
</script>

You’ve got some choices in regards to how you want to target your code on the various elements found on a given web page. In general, at least for me so far in a very basic sense, I’ve been targeting elements by id, class and the actual tag type (element type) as follows:

<script type="text/javascript">
	//Short hand ready function declaration
	$(function ()
	{
		console.log("Start of ready...");

		//Get access to a DOM element (pure JavaScript) using traditional means. You'll only have access to normal JavaScript members
		var headerContainer = document.getElementById("HeaderContainer");
		console.log(headerContainer.innerHTML);         //Getting container html

		//Retrieve a 'jQuery Object' based on 'id'. You'll get access to jQuery specific members this way (equals awesome). Notice the use of hash in this scenario
		var $headerDiv = $("#HeaderContainer");
		console.log($headerDiv.html());                 //Getting container html

		//Retrieve a 'jQuery Object' based on CSS Class (representing all objects using that class). Notice the use of a period before the class name
		var $specialParagraph = $(".SpecialParagraph");
		console.log($specialParagraph.text());          //Getting element content

		//Retrieve a 'jQuery Object' by tag type (again, representing all objects using that element type). Just specify the element type using a string, no special prefix required
		var $h1Elements = $("h1");
		console.log($h1Elements.text());                //Getting element content

		console.log("End of ready...");
	});
</script>

Here, you can see that there are a couple of nuances when compared to selecting elements via pure JavaScript. The first is the use of the ‘$’ character, which is basically short-hand for specifying ‘jQuery(“selector”).function()’ – Essentially you’re saying, “hey, I want to work with a jQuery Object please.”. Next up, you’re required to pop in a couple of parentheses and between these you provide a string specifying a ‘selector’, in turn denoting what kind of element/s you are trying to retrieve, and ultimately manipulate. Using an ‘id’ requires you to prefix the value with a ‘#’ and a CSS class requires a ‘.’ prefix. Grabbing elements by type doesn’t require a specific prefix at all so just place in the element you’re looking for and you are good to go.

In the examples above I have stored the DOM Objects and jQuery Objects in variables before operating on them. So far, I’ve been led to believe that some developers prefix jQuery Object variables with a ‘$’ just to avoid any ambiguity in relation to any DOM Object variables sitting nearby, so I’ve followed this format. Anyone who stumbles across this with thoughts on whether this is correct, please feel free to drop me a comment; I’d love to know how developers have decided to proceed with variable naming conventions in production-level code.

Just for simplicity, I’ve utilised the innerHTML property on the DOM Object and the .html() and .text() functions on the jQuery Objects (to get the selected element’s HTML and text content respectively) and pushed these out to the console. The results of this can be seen below:

Various jQuery selectors in action.
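
As an aside, the difference between .html() and .text() is easy to state: .html() returns the inner markup, while .text() returns just the character data. Conceptually (and this is only an illustration, not how jQuery is implemented), .text() is the markup with the tags stripped:

```javascript
// Illustration only: a naive regex is fine to demonstrate the idea, but never
// use regexes for real HTML parsing.
var markup = "Welcome to the <em>wonderful</em> jQuery test website";

var asHtml = markup;                         // roughly what .html() hands back
var asText = markup.replace(/<[^>]+>/g, ""); // roughly what .text() returns
```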

In order to have a little play around and provide a few simple illustrations of jQuery’s usage I’ve knocked up, what I’m sure you’ll agree, is some of the world’s greatest HTML markup (added to the page’s body element) – Booyah:

<div id="HeaderContainer">
	<h1>jQuery Test Site</h1>
	<p>Welcome to the wonderful jQuery test website where we are going to do, urm, not a hell of a lot really.</p>
	<p class="SpecialParagraph">Only people who like craft beer see this paragraph.</p>
</div>
<div id="MainContainer">
	<h2>Here is a list box of my favourite things:</h2>
	<select id="GreatThingsListBox" multiple="multiple">
		<option>Chocolate Cake</option>
		<option>Chocolate Brownie</option>
		<option>Chocolate Something or Other</option>
		<option>Craft Beer</option>
		<option>Video Games</option>
	</select>
</div>
<div id="AnotherContainer">
	<button style="height: 35px; width: 75px" id="ClickMeButton">Click Me!</button>
</div>

So, I won’t be winning any awards for that! I’ve been staring at the colour palette of the test site whilst creating this post and I’m definitely starting to feel a tad nauseous; get the sick buckets at the ready :-).

Firstly, let’s take a look at a quick example that shows the simplicity of jQuery at work. The first code snippet shows my attempt at using JavaScript to retrieve all ‘option’ elements on the page. After retrieval, I loop through the found elements and call setAttribute to add a title attribute based on the text value of the option:

<script type="text/javascript">
	//Short hand ready function declaration
	$(function () {
		//Possible approach, using pure JavaScript, for adding title attributes to 'option' elements
		var greatThingsListBox = document.getElementsByTagName("option");

		for (var i = 0; i < greatThingsListBox.length; i++) {
			greatThingsListBox[i].setAttribute("title", greatThingsListBox[i].text);
		}
	});
</script>

Here’s the same thing using jQuery (using the .each() function to iterate over the found ‘option’ elements).

<script type="text/javascript">
	//Short hand ready function declaration
	$(function () {
		//Set the title attribute on every option element (based on the options value) - using jQuery
		$("option").each(function () {
			$(this).attr("title", $(this).text());
		});
	});
</script>

The .each() function can be passed a normal JavaScript function name (which will run for each item selected), but in this case I’ve used an anonymous function call. Within this function, and this is something that I really love, it’s a real cinch to get hold of the jQuery Object currently in scope using the ‘$(this)’ syntax (this is the current ‘option’ element). Using plain ‘this’, without the ‘$()’ wrapper, gives you each item as a traditional DOM element if you want/need that instead. From a keystrokes perspective, and I think readability, we’re onto a winner.

This is the end result of the above jQuery code (my cursor is placed over the List Box item):

Title attributes (resulting in tool tips) added to each list item.
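
The way .each() hands you ‘this’ can be sketched in plain JavaScript (a simplified sketch of the pattern, not jQuery’s actual implementation): the callback is invoked with ‘this’ bound to the current raw element via Function.prototype.call.

```javascript
// Simplified .each()-style helper: `this` inside the callback is the current
// element, mirroring why $(this) works inside jQuery's .each().
function each(elements, callback) {
    for (var i = 0; i < elements.length; i++) {
        callback.call(elements[i], i, elements[i]);
    }
}

// Plain objects standing in for DOM 'option' elements:
var options = [{ text: "Chocolate Cake" }, { text: "Craft Beer" }];
var titles = [];
each(options, function () {
    titles.push(this.text); // `this` is the current element
});
```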

Looking at the page, you’ll notice that we have a very special paragraph implemented in the ‘HeaderContainer’ div. If you’re not a craft beer drinker like myself then I would really like to hide this element from view (p.s. beer comments/recommendations welcome too!). jQuery provides a very tidy and convenient way to accomplish this task via the use of the .hide() function.

Note: from this point forward I’ll be omitting the ready function declaration to simply emphasise the jQuery code snippets themselves:

//Hide the paragraph with the '.SpecialParagraph' CSS class
$(".SpecialParagraph").hide();

This removes the paragraph from view, as shown below:

Paragraph hidden via jQuery.

A quick sniff around the DOM using your browser development tool of choice will indicate that jQuery has applied a style to the element in reaction to the .hide() function call (style=”display: none”). I think you’ll agree this is nice and simple, and with the power of using CSS selectors it’s quite easy to target a number of elements simultaneously. Also, for me, it’s just so darn readable; I’ve really found the syntax incredibly easy to pick up and play with. It’s clear to see that one of the core crowd pulling powers of jQuery is that it’s not all that intimidating and papers over some of the inherent complexities of JavaScript quite nicely. Put simply, it’s bonza!

Once you’ve brought a shiny jQuery Object into scope you are free to draw on a concept called ‘Function Chaining’ where multiple functions can be called on the object using a single statement. Syntactically and aesthetically, coming from a C# background with a serious love of LINQ and extension methods, this ticks all of the right boxes for me.

The code snippet below first hides our special paragraph (using the CSS class applied as the selector), then applies a small piece of custom CSS using a construct called an Object Literal (essentially, property/value pairs) and finally uses the .slideDown function to elegantly animate the paragraph into view, over the specified time frame:

//Retrieve a jQuery Object representing elements with the SpecialParagraph class and hide them on load (just the one is this case). But this time,
//use 'function chaining' and apply some custom css (or we could have called addClass to apply some kind of style) and animate the paragraph/s
//so they 'slide' down into view
$(".SpecialParagraph").hide().css(
	{
		"background-color": "black",
		"color": "red"
	}).slideDown(1500); //Slide down into view over 1.5 seconds
Paragraph sliding into view (with a bit of custom CSS applied).

Hooking up to events is also a doddle. For example, to hook up a click event to a selected element/s you can call the .click() function and pass it an anonymous function or an existing raw JavaScript function that performs the required task, as below:

//Rig the ClickMeButton's (selected by id here) click event up (tied to an anonymous function in this case)
$("#ClickMeButton").click(function () {
	alert("Click me clicked. I could really do some complicated work now!");
});
//JavaScript function to handle the click
function respondToClick() {
	//complicated code goes here
	alert("Click me clicked. I could really do some complicated work now!");
}

//Short hand ready function declaration (included in this example to provide context)
$(function ()
{
	console.log("Start of ready...");

	$("#ClickMeButton").click(respondToClick); //Could just be a raw JavaScript function

	console.log("End of ready...");
});
Example of a click event firing using jQuery to tie the event up to the element.

Moving further down this lovely yellow brick road, and this is something I’ve really enjoyed playing around with, it’s possible to call the jQuery .on() function to bind multiple events at the same time. As before, you can hook in raw JavaScript functions or anonymous functions, the latter seems to be the order of the day from what I’ve learnt so far (and rightly so, they are ace!). In the next code snippet I’m concurrently binding a click, mouse enter and mouse leave event to a button (again, selected by ‘id’) to build a slightly more interactive element (i.e. chopping and changing styles depending on whether the mouse is over the element):

//Simultaneously hook up the ClickMeButton's click, mouseenter and mouseleave events to various anonymous functions. 'On' comes in handy for 'event delegation'
//which I'll cover in a future post
$("#ClickMeButton").on(
	{
		"click": function () {
			console.log("click.");
			alert("Click me clicked. I could really do some complicated work now!");    //Handle a 'click'
		},
		"mouseenter": function () {
			console.log("AltStyle added.");
			$(this).addClass("AltStyle");                                               //Add the AltStyle to this element on 'mouseenter'
		},
		"mouseleave": function () {
			console.log("AltStyle removed.");
			$(this).removeClass("AltStyle");                                            //Remove the AltStyle from this element on 'mouseleave'
		}
	});
jQuery 'on' function example in motion.

The .on() function has a unique part to play in relation to the concept of Event Delegation, which I’ll cover fully in a later post. To serve as a very brief example, you may have an existing unordered list (ul) element on a web page, simply with no list items yet (li elements). The list items are dynamically created at some point after the page has loaded, perhaps on a button click, for instance. In this scenario, if you wanted to tie a click event to each list item, you would have to do it on creation of each element (plus each item would require a separate handler, which might not be all that yummy once the list becomes larger). This couldn’t be achieved on page load as the elements wouldn’t exist yet. With Event Delegation it is possible to hand the click event off to the parent unordered list element on load, so that any list items added later on can respond to it, reducing stress levels and hair loss for the developer.
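To sketch that scenario quickly (the element id here is hypothetical), delegation hands the handler to the ul, which then fires it for any li, including ones that don’t exist yet:

```javascript
//Delegate the click event to the parent ul; the 'li' filter argument means the handler
//runs for any list item, including items created long after this code executes
$("#GreatThingsList").on("click", "li", function () {
	console.log("List item clicked: " + $(this).text());
});

//A list item appended later still responds to the delegated click handler above
$("#GreatThingsList").append("<li>jQuery</li>");
```

One handler on the parent, however many items come and go, which is rather tidier than wiring each li up individually.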

Rounding off the ‘cool’ factor of jQuery I wanted to show you an example of the hover function with a simple animation routine plugged in. I’ve rigged the code formatting a little to make this more readable. As I am used to C# formatting I still find the default placement of braces a little annoying from time to time when formatting via Visual Studio. Here’s the weird and wonderful snippet anyway:

//Utilise the hover function on the ClickMeButton that allows you to specify two anonymous functions (or tie in two existing JavaScript functions). The first one
//handles mouseenter and the second one handles mouseleave. In this example we are using .animate() to manipulate a button's height/width. The jQuery UI API allows
//you to animate colour transitions and lets you utilise some interesting 'easing' functions if you bring this into scope also
$("#ClickMeButton").hover
	(
		//Mouseenter
		function ()
		{
			console.log("Entering element.");

			//Make the button larger over half a second. Calling .stop() prevents animations from 'queuing' up
			$(this).stop().animate
				(
					{
						"height": "70px",
						"width": "150px"
					},
					500
				);
		},
		//Mouseleave
		function ()
		{
			console.log("Exiting element.");

			//Make the button smaller (set back to the original size) over half a second. Calling .stop() prevents animations from 'queuing' up
			$(this).stop().animate
				 (
					 {
						 "height": "35px",
						 "width": "75px"
					 },
					 500
				 );
		}
	);
Button before the hover event fires.

Button after the hover event fires.

The hover function is a compound function that enables the developer to simultaneously handle the mouse enter and mouse leave events in a single statement. I class this as simple syntactic sugar, but I love it all the same (give me lambdas and anonymous methods any day thank you!). This probably boils down to the fact that I’m easily pleased by the smaller things in life.

As the snippet above shows, the hover function is taking two separate anonymous functions as arguments. I’ve coded these functions to manipulate the element’s height and width properties, using the .animate() function, to alter the values over a specified time frame.

jQuery UI, another branch off from jQuery, enables more complex animation effects to be kicked off (such as animating an element’s colour properties). It’s certainly something to check out if you’re finding any of this interesting.
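As a rough sketch of what that looks like (this assumes the jQuery UI script is referenced after jQuery, which is what extends .animate() to understand colour-based properties):

```javascript
//With jQuery UI in scope, .animate() can transition colour-based CSS properties too,
//so the earlier hover example could fade colours rather than resize the button
$("#ClickMeButton").hover(
	function () {
		//Fade to a black background with red text over half a second on mouseenter
		$(this).stop().animate({ "background-color": "#000000", "color": "#ff0000" }, 500);
	},
	function () {
		//Fade back to a white background with black text on mouseleave
		$(this).stop().animate({ "background-color": "#ffffff", "color": "#000000" }, 500);
	});
```

The specific colours are just for illustration; jQuery UI also opens up extra ‘easing’ functions you can pass as an additional argument to .animate().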

If you want an easy way to add, remove, copy and move HTML content on the fly easily then you’re really in luck here. There are a good number of functions built-in that will enable you to get your head down and deal with the task of handling changing HTML content dynamically. This next example illustrates one of the more basic functions, .append(), which enables you to specify a string of HTML content to place at the end of the selected element/s:

//When the ClickMeButton is clicked append HTML to the List Box (select element)
$("#ClickMeButton").click(function () {
	//Add a new hard-coded option to the List Box (this would obviously be a much more dynamic example)
	$("#GreatThingsListBox").append("<option>Coding</option>");

	//Grab the last 'option' element on the page and set its title attribute based on the text value (giving us a nice little tool tip)
	//You could do this by using: $("option").last().attr("title", $("option").last().text()); but storing the jQuery Object in a variable saves on processing (i.e. the re-selection of the element)
	var $lastOptionOnPage = $("option").last();
	$lastOptionOnPage.attr("title", $lastOptionOnPage.text());
});
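In the same family as .append(), here’s a quick sketch of a few of the sibling functions for removing, copying and moving content (the option values are purely illustrative):

```javascript
//.prepend() places new content at the start of the selected element instead of the end
$("#GreatThingsListBox").prepend("<option>Craft Beer</option>");

//.remove() deletes the selected element/s from the DOM entirely
$("#GreatThingsListBox option:last").remove();

//.clone() copies an element, ready to be re-inserted elsewhere via .appendTo()
$("#GreatThingsListBox option:first").clone().appendTo("#GreatThingsListBox");
```

Between these and friends like .before(), .after() and .html(), reshaping a page on the fly really is a short statement away.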

This last snippet introduces the Event Object, which can be specified as a function parameter (which the engine is implicitly passing for use) and used within an anonymous function. This stores handy nuggets of information about the event that has been fired. In this case, I’m just pushing screen co-ordinates to the console to illustrate where a click occurred:

//Using the event object (any parameter name can be picked, I'm using 'evt') and pulling out the current screen co-ordinates on click
$(document).click(function (evt) {
	console.log("X Co-ord: " + evt.pageX + ", Y Co-ord: " + evt.pageY);
});
The Event Object in action.

The post is nowhere near as short as I would have liked but I have developed a bit of a soft spot for jQuery so I’ve gone over the top. Why the hell not, I guess; it’s free, reliable, heavily used and distributed under an MIT licence, so what’s not to like! Whilst I’m on a ‘love rant’ I may as well mention that an extensive range of plug-ins is available, adding to the fun in my opinion. It’s also incredibly easy to draw out hidden and selected elements using jQuery (from a set of check boxes, for example), so read up if you’re interested. I hope you’ve enjoyed the first, real, coding post. Enjoy, like, comment, subscribe, talk to me about craft beer and code, etc. Whatever takes your fancy.
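On that last point, drawing out checked and hidden elements is a one-liner thanks to jQuery’s filter selectors; a tiny taster (the selectors are real jQuery, the page content is hypothetical):

```javascript
//Grab every checked check box on the page and log its value
$("input[type='checkbox']:checked").each(function () {
	console.log($(this).val());
});

//Similarly, :hidden and :visible select elements by their current display state
console.log($("p:hidden").length + " hidden paragraph(s) on the page");
```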

I haven’t decided on my next topic yet but I’d like to get something out of the door by the end of the week, so watch this space. Cheers to Pete for the jQuery book you’ve loaned me and to Claire for reading through this (it would have been much messier without your intervention!).

DISCLAIMER: All of the code snippets were ruined seconds before posting by the proof reading tool so I had to mend them on the fly, with any luck I’ve caught all of the screw ups!

Useful links:

jQuery Website
jQuery API Reference Guide
The Missing Manual Series JavaScript and jQuery

Cheers all!