Groovy JavaScript Regex Name Capitalisation Handling

Greetings!

A tidbit found by a friend of mine online, which formed the basis for a small piece of work I’ve done this week surrounding name capitalisation. It was pulled from a Stack Overflow post, so credit where credit is due for starters:

js-regex-for-human-names

This is fairly robust, covering Mc, Mac, O’ and double-barrelled, hyphenated names. It does capitalise the first character directly after an apostrophe (regardless of placement), which may or may not be a problem; “o’brien” becomes “O’Brien” as desired, but an input such as “fran’ces” would also become “Fran’Ces”. As for usage, I went with the following setup (with the relevant JavaScript and jQuery hooks being properly abstracted in the production code, of course).

Firstly, the example HTML structure:

<div id="container">
	<!--An example form illustrating the fixNameCasing function being called on a test forename, middle names and surname field (when focus is lost)-->
	<form action="/" method="post">
		<div>
			<label id="forename-txt-label">Forename:</label>
		</div>
		<div>
			<input id="forename-text" name="forename-text" class="control-top-margin fix-name-casing" type="text" />
		</div>
		<div>
			<label id="middlename-text-label">Middle names:</label>
		</div>
		<div>
			<input id="middlename-text" name="middlename-text" class="control-top-margin fix-name-casing" type="text" />
		</div>
		<div>
			<label id="surname-text-label">Surname:</label>
		</div>
		<div>
			<input id="surname-text" name="surname-text" class="control-top-margin fix-name-casing" type="text" />
		</div>
		<div>
			<button id="submit-button" type="submit" class="control-top-margin">Submit</button>
		</div>
	</form>
</div>

Then, our jQuery/JavaScript juicy bits:

<!--Bring jQuery into scope so we can hook up a function to relevant elements on 'blur' event (lost focus)-->
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.2/jquery.min.js"></script>
<script type="text/javascript">
	
	// The name casing fix function utilising regex
	function fixNameCasing(name) {
		var replacer = function (whole, prefix, word) {
			var ret = [];
			
			if (prefix) {
				ret.push(prefix.charAt(0).toUpperCase());
				ret.push(prefix.substr(1).toLowerCase());
			}
			
			ret.push(word.charAt(0).toUpperCase());
			ret.push(word.substr(1).toLowerCase());
			return ret.join('');
		};
		var pattern = /\b(ma?c)?([a-z]+)/ig;
		return name.replace(pattern, replacer);
	}
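	
	// A few illustrative results (hedged examples; worth verifying against your own inputs):
	// fixNameCasing("o'brien") -> "O'Brien"
	// fixNameCasing("macdonald-smith") -> "MacDonald-Smith"
	// fixNameCasing("MCDERMOTT") -> "McDermott"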
	
	// On document ready, wire up the 'blur' event for the relevant controls (those carrying the 'fix-name-casing' class). When a control loses focus, we take its input and reformat it using the return value of fixNameCasing
	$(function() {
		$(".fix-name-casing").blur(function() {
			$(this).val(fixNameCasing($(this).val()));
		});
	});

</script>

The results! Each field in the following screenshot received fully lowercase or uppercase input before being tabbed out of (i.e. lost focus):

Image showing name capitalisation of three example name fields.

Name Capitalisation Test Output.

Lastly, here’s the entire code snippet:

<!DOCTYPE html>
<html>
<head>
	<title>Name Capitalisation Test</title>
	<style type="text/css">
		
		.control-top-margin {
			margin-top: 5px;
		}
	
	</style>
	<!--Bring jQuery into scope so we can hook up a function to relevant elements on 'blur' event (lost focus)-->
	<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.2/jquery.min.js"></script>
	<script type="text/javascript">
		
		// The name casing fix function utilising regex
		function fixNameCasing(name) {
			var replacer = function (whole, prefix, word) {
				var ret = [];
				
				if (prefix) {
					ret.push(prefix.charAt(0).toUpperCase());
					ret.push(prefix.substr(1).toLowerCase());
				}
				
				ret.push(word.charAt(0).toUpperCase());
				ret.push(word.substr(1).toLowerCase());
				return ret.join('');
			};
			var pattern = /\b(ma?c)?([a-z]+)/ig;
			return name.replace(pattern, replacer);
		}
		
		// On document ready, wire up the 'blur' event for the relevant controls (those carrying the 'fix-name-casing' class). When a control loses focus, we take its input and reformat it using the return value of fixNameCasing
		$(function() {
			$(".fix-name-casing").blur(function() {
				$(this).val(fixNameCasing($(this).val()));
			});
		});

	</script>
</head>
<body>
	<div id="container">
		<!--An example form illustrating the fixNameCasing function being called on a test forename, middle names and surname field (when focus is lost)-->
		<form action="/" method="post">
			<div>
				<label id="forename-txt-label">Forename:</label>
			</div>
			<div>
				<input id="forename-text" name="forename-text" class="control-top-margin fix-name-casing" type="text" />
			</div>
			<div>
				<label id="middlename-text-label">Middle names:</label>
			</div>
			<div>
				<input id="middlename-text" name="middlename-text" class="control-top-margin fix-name-casing" type="text" />
			</div>
			<div>
				<label id="surname-text-label">Surname:</label>
			</div>
			<div>
				<input id="surname-text" name="surname-text" class="control-top-margin fix-name-casing" type="text" />
			</div>
			<div>
				<button id="submit-button" type="submit" class="control-top-margin">Submit</button>
			</div>
		</form>
	</div>
</body>
</html>

The likelihood is that I’ll be using this just as a basis for my current requirements and adjusting as needed.

I hope this proves useful; kudos to my friend who found this and to the original Stack Overflow contributor. If anyone has any other examples of code that tackle this problem and would like to contribute, just let me know by commenting below.

Cheers!

Developer Testing Hints and Tips

Howdy happy campers.

I want to discuss a piece somewhat divergent from the topic of physical coding, although still a facet of development that is close to my heart (and easy to overlook in many respects when constantly mashing keys and churning out code): developer testing. More specifically, I want to provide a set of guidelines that ‘may’ (insert disclaimer) help with the process and provide some food for thought.

This is in no way a definitive guide or best practice for that matter; more just a personal take on what I find works for me and the guts of a generally beneficial ‘templated’ approach to follow.

I would love to invite discussion on this one (or just get a take on what works for you), so please do hit me up on Twitter or add a comment below; I’d love to hear from you.

My Process

As with any process, ground work and preparation can be vital for achieving a good result. To this end, I invariably start my developer testing on a given work item with a template document that looks like this:

Illustration of how to structure your Developer Testing.

Developer Testing Helper Document Structure.

What goes into your document will largely depend on what technologies you are using, of course. For instance, you may never have a database-centric element to the development you perform, rendering the ‘Database Upgrade’ section null and void in all cases. Ultimately, add and remove sections as you see fit, but do strive for consistency. I myself test a mixture of items that may or may not include T-SQL elements. However, I choose to include the ‘Database Upgrade’ section on every occasion, preferring to note that there were no T-SQL-related parts to the item, even just to mark it as ‘N/A’ (for my own sanity and for easy recollection later down the line, without the need to scan a lengthy list of changes elsewhere in the notes). Basically, my OCD kicks in and I start to wonder why I haven’t included a section that I ‘always’ include, leading to paranoia that I’ve missed something!

Each section (other than Notes, which is probably self-explanatory) can result in a PASS, QUERY or PASS-BACK state. Section state obviously knocks on and influences the result recorded against the ‘Developer Testing Summary’ header. PASS denotes an ‘A-okay’ state; good to rock and roll! QUERY gives you the opportunity to mark a section with ‘discussion points’ or things you would like to check, without necessarily marking it off as incorrect (I tend to do this a lot, as I love to talk!). PASS-BACK is used in the circumstance whereby an error can be replicated/reproduced consistently, or a logic problem definitely flies in the face of the ‘Acceptance Criteria’ for the story. In circumstances whereby things such as coding standards have been contradicted, I tend to use a mixture of QUERY/PASS-BACK, depending on the notes the developer has provided (it could be a flat PASS, of course, as there are always occasions where the rules need to be broken!).

So, section by section, let’s go over what we have and why…

Notes

It’s incredibly tempting to start diving into code, comparing files and trying to make sense of what the hell is going on, but (I may get in trouble here) I’m going to tell you to stop right there. It’s so easy, and I’ve done it (probably) hundreds of times, to get eye-deep in code, wasting large pots of time, before the basic question of ‘what are we doing and why’ has been answered. This is where this section comes in.

Use this area of your notes to compile a few short paragraphs (or bullet points, whatever you prefer) on the following:

  • Read over the developer's notes and, after discovering if any changes have occurred to the underlying requirements for the story, start to create…
  • Your own summary of the ‘Acceptance Criteria’ for this particular story (or item, whatever term floats your boat; I’m going to use both interchangeably to avoid bombarding you with the same term too much!).
  • Then, list any other pertinent information surrounding how the developer has coded the item (e.g. decisions that have shaped how the story has turned out). For example, did they place code into a different service than originally expected for ‘x’ reason, or did some logic end up in a different layer of the technology stack than first conceived?
  • Lastly, note any of your initial thoughts, concerns or things you intend to check/look for based on this initial scoop of information.

The core reason I do this is to try to solidify my expectations, begin thinking about a test plan (yes, I like to always perform (rudimentary at the bare minimum) application testing, this isn’t just down to QA in my mind!) and to try to mitigate the chances of any massive surprises. Surprises, although they will always eventually happen one way or another, will lead to more confusion and increase the chances of things slipping through the net. You’ll be able to, by just following this exercise or a similar routine, cross-reference your expectations with the code changes you see and more easily be able to pick up errors, incorrect logic or unrequired alterations. This will limit the chances that something will slip past your mental filter as an ‘I guess that’s correct’ or ‘perhaps that class needed to be changed also, ok’ moment (don’t lie, we’ve all had them 😉 !).

Cool, we’ve formed in our own minds what this item is for, how it’s been developed and what, as a baseline, we are expecting to see. Let’s test (and along the way, discuss a few more tactics).

Database Upgrade

Some of what I’ll discuss here is formed around how my personal development role operates, so feel free to modify this approach to your needs. Again, if you don’t deal in the realm of database development at all, pass go and collect £200; you’ve bypassed this step, congratulations!

The essence of this section surrounds you being able to state that new Stored Procedures, Functions, Views, Triggers, etc. can be ‘created’ without error on a database in a suitable ‘versioned’ state. Also, can ad-hoc data scripts, that are part of the development item, be run without error?

Some other considerations…

  • Are object creation scripts/ad-hoc scripts expected to be re-runnable? If yes, then specifically test and note this down here.
  • If you are in an environment whereby this kind of testing needs to be performed on multiple databases then mark this down here also (splitting notes down into sections against each target database/environment, whatever is applicable).
  • We work with a ‘versioned’ database so I make an effort to state which version I am on at the start of the testing run for reference.

An example of what this section may look like is illustrated below for reference:

Illustration of how to structure the Database Upgrade Developer Testing Document Section.

Developer Testing Database Upgrade Section Example.

A QUERY/PASS-BACK at this stage will bubble up and alter the status listed for the entire developer testing process. An additional note here: depending on how many queries/issues you find (and the length of the testing notes in general), you may want to copy the core query/error text to the top of the notes for easy review by the developer later (this applies to all of the following sections, in fact).

Code Review

Moving on to the main filling of your developer testing sandwich: the actual code review! Obviously, you’ll be reviewing files here and looking at scripts, new files or amended code, but definitely take a second or two out (unless your setup has automated builds/continuous integration, or some other clever solution, to tell you this) to make sure the code compiles before proceeding (and make the relevant note). A simple step, but one easily forgotten, meaning you can get to the end of a code review before realising parts of the code don’t compile, eek!

From a structural and sanity point of view (clarity is key), I tend to split my testing notes here into sections based on technology (i.e. T-SQL, C#, JavaScript, etc.) or, at the least, make some effort to order a single list of files by file type. For C# changes, I group code files by the related project (given that projects should represent a logical grouping of types, hence allowing you to dice up changes by functional area, i.e. common extensions, data access helpers, etc.).

The point you should take away from this, however, is that a little bit of thought and structuring at this phase will make your life easier, especially as the number of code files racks up.

If you’re looking for a small sample on how this section could look, after being fleshed out, then here you go:

Illustration of how to structure the Code Review Developer Testing Document Section.

Developer Testing Code Review Section Example.

However, what about the code review procedure itself, I hear you cry! What follows next shouldn’t be taken as an exhaustive list, or correct in every given situation for that matter; these are more just suggestions as to what I’ve found helpful over time (my mental kit bag):

  • For C# (and other object-orientated languages that support this concept), ensure that null values are correctly handled, whether by capturing nulls on a call to a given method and throwing an ArgumentNullException, or by doing a ‘not equal to null’ check (!= null) around code that would otherwise fail (a few of these null/string handling points are sketched in C# after this list).
  • Strings can be tricky buggers, especially in case-sensitive environments! In most cases, comparisons should be performed taking case-sensitivity out of the equation (another case-by-case situation of course). I’d keep an eye out, again for C#, for the correct use of String.ToUpperInvariant, String.ToLowerInvariant and String.Equals. For String.Equals, use an overload containing a StringComparison enumeration type, for case/culture-insensitive options.
  • Keep an eye out for instances of checks being performed against strings being null or an empty string (either one or the other only). This can quickly lead to chaos, switch out for a null, empty or whitespace check (e.g. String.IsNullOrWhiteSpace).
  • Empty try/catch handlers are evil. Kill any you find.
  • Check up for instances whereby a class consists of all static members, but the class is not marked as static.
  • Train the eye to look for casting operations; you’ll always catch a few where the casting operation ‘could’ throw exceptions and should, therefore, be subject to more careful handling.
  • A big bugbear in the realm of coding: if a method requires scrolling to get through, it’s a significant indication right off the bat that it is a prime candidate for refactoring. Unless there is a good reason, or it is clearly performing one logical function, consider having a conversation about breaking the method down.
  • Look for missed opportunities to rationalise code using inheritance. The most common one I see (and forget myself) is the abstraction of code to base classes and the use of virtual methods/overrides in subclasses. Hawk-eye for types that should be abstract.
  • A simple one, but something that could easily slap you in the face if you’re not careful. When ‘language switching’, in a developer testing sense, take a second to make a mental note that you should be changing mind-sets (i.e. the syntax is changing, get your game-face on!). For example, stare into the abyss of C# for long enough (seeing ‘!= null’) and you may, on switching to T-SQL, not notice a ‘!= NULL’ that should have been an ‘IS NOT NULL’. Those trees can be damn hard to find in the woods, after all!
  • Watch out for expensive operations, whereby values should be obtained once, ideally, then cached. It can be easy to let code skip by that repeatedly calls a database, for instance, to the detriment of performance (or possible errors, depending on the nature of the functionality called).
  • I love, love, loooovvvveeeee comments! Probably (ok, to the levels of being a little OCD about it!) too much. I prefer (but don’t fail on this basis alone) XML Comments for C# and like to see comments on bulkier pieces of T-SQL. If there is a sizeable piece of code whereby its function stretches beyond ‘trivial’, I like to see at least a short statement stating intent (what the developer is expecting the code to do is key, by the way… as discussed next).
  • Where you have comments, link the intent in these comments back to what the code is actually doing; then trail it back to the item's ‘Acceptance Criteria’ where appropriate. I have been rescued (as a developer submitting my work for developer testing) countless times by those testing my code, just by someone relaying to me that ‘what I thought my code was doing’ (based on my comments) doesn’t tie up to the actual functionality being offered. This has led to me, of course, face-palming myself, but being relieved that the gap between my intent and my actual code had been picked up by somebody in time to catch it before QA (or deployment, gulp!). State intent, then reap the rewards when mistakes you make are more rapidly picked up for rectification.
  • Be sure to look for the use of language constructs/keywords or syntactic-sugar that is not permissible on your baseline, minimum supported environment (i.e. older versions of SQL Server or .NET), if what you work on has this concept of course. This is sure to be something that will get picked up by QA causing bounce backs, or by your consumers later on if you’re not careful!
  • Keep a look out for code that could (or should) be shared, or that has been placed in a project/location that does not make logical sense. At a bare minimum, picking up on this sooner rather than later will keep your code base tidier and provide ample opportunities to put great code in places where it can be leveraged as much as possible. In other cases, asking these kinds of questions can expose flaws and issues with the way a solution has been architected, which occasionally will steer you clear of tight spots later down the line.
  • Where shared code has been changed, look for instances whereby other applications/areas of the code base could be broken as a result of the changes. Recompile code to check for this as required. I got bitten on the bum by this recently :-?.
  • Keep up to date with any coding standards documents that should be adhered to and make sure the guidelines are followed (within reason of course; you’ll always find a scenario whereby a rule can, and should, be broken).
  • Really do consider writing and using Unit Tests wherever possible. They are a useful facet in the grand scheme of things (I believe at least) and they do carry weight when pitched up against visually checking code and application testing in general.
  • Last little nuggets, which I see from time to time. Look for objects constantly being created inside loops, heavy amounts of string concatenation not using the correct constructs (e.g. a StringBuilder in C#) or missed opportunities to create sub Stored Procedures in T-SQL (sectioning off code to gain performance boosts and obtain better execution plans). In fact, for T-SQL it can be a useful exercise to check the performance of non-trivial pieces of code yourself by changing how it’s structured, whilst obtaining the same results of course. You may or may not be able to increase performance along the way, but you’ll have far better comprehension of the code by the end regardless.
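
To make a few of the null/string points above concrete, here's a minimal C# sketch. Treat it as illustrative only; the type, the method and the 'unknown' sentinel value are hypothetical, not lifted from any real code base:

using System;
using System.Collections.Generic;
using System.Text;

/// <summary>
/// Hypothetical helper, purely to illustrate a few of the review points above.
/// </summary>
public static class NameListFormatter
{
	/// <summary>
	/// Builds a comma separated string from the supplied names.
	/// </summary>
	/// <param name="names">The names to combine.</param>
	/// <returns>A comma separated string of usable names.</returns>
	public static string BuildCommaSeparatedNames(IEnumerable<string> names)
	{
		//Capture a null argument up front with an ArgumentNullException, rather than letting a NullReferenceException surface deeper down
		if (names == null)
		{
			throw new ArgumentNullException(nameof(names));
		}

		//Repeated string concatenation in a loop is a classic StringBuilder candidate
		StringBuilder builder = new StringBuilder();

		foreach (string name in names)
		{
			//Check for null, empty or whitespace in one hit, rather than a null/empty-only check
			if (string.IsNullOrWhiteSpace(name))
			{
				continue;
			}

			//Case-insensitive comparison via a StringComparison overload, rather than ToUpper/ToLower juggling ('unknown' is just an illustrative sentinel)
			if (string.Equals(name.Trim(), "unknown", StringComparison.OrdinalIgnoreCase))
			{
				continue;
			}

			if (builder.Length > 0)
			{
				builder.Append(", ");
			}

			builder.Append(name.Trim());
		}

		return builder.ToString();
	}
}

Something like NameListFormatter.BuildCommaSeparatedNames(new[] { "Steve", null, "UNKNOWN", "Joanne" }) would then yield "Steve, Joanne", with the dodgy inputs filtered out along the way.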

Hopefully, this little snapshot from my bag o’ tricks is enough to get you started, or get the brain-juices flowing. Let me know what you think of these suggestions anyway; I’d really appreciate the opportunity to collate others general thoughts and get a collective consensus going.

Application Testing

Here is where I will defer the giving of advice to my beloved QA counterparts on this beautiful planet; this, of course, isn’t my area of expertise. My only opinion here is (developers will possibly hate me for stating it) that developers ‘should’ always perform application testing alongside a code review. In fact, I’m a keen advocate for developers being involved in performing QA on the odd occasion. I personally like doing this, provided I have a trusty QA on hand to assist me (thankfully, I work with the best one around ;-), so no worries there). The simple reasons for this are:

  • One way or the other, acquisition of Product Knowledge is going to be invaluable to you. It’s just as valuable to start using your products in anger as it is to analyse code for hours on end. The side-note here is that this is part of your overall ‘worth’ as a developer, so don’t neglect it.
  • At this stage, you get to think as the customer might. Ideas and thoughts you have at this stage, which direct more development or changes to the product, will be amongst some of the best (and most rewarding when it comes to getting that warm and fuzzy feeling!).
  • Urm… it’s embarrassing to say ‘oh yeah, that code’s great, thumbs up!’ only for it to explode in someone else’s face on the first press of a button! Easily avoided by following the process through from end to end, no matter what.

Ok, I’ll have a go at channelling one QA thought… got it! Here’s one from a mysterious and wise QA guru:

Mysterious and wise guru here… a friendly reminder to developers…never, ever, test your items using only one record! The reason? Well, I’ll test it with more than one record and break it instantly!

If anyone doing QA reads this, feel free to feed us your arcane knowledge… God knows we need it! I would advise you to keep the original item requirements in mind whilst testing, of course; securing in your thoughts any process variants that could potentially lay carefully made plans to waste (e.g. what if we go back and forth between screens x and y before completing process z, or save the same form information twice, etc.). Your knowledge of the code can help at this stage, so use the opportunity whilst you have it.

Before I forget, an example of this section could look like this:

Illustration of how to structure the Application Testing Developer Testing Document Section.

Developer Testing Application Testing Section Example.

Code Review/Application Testing – The Most Important Point…

Do it!!! If you’re not sure (as I am still on a regular basis) then ask the question and run the risk of looking like an idiot! Be a spanner; who cares at the end of the day? I dread to think of how many developers have stared at code and, ultimately, let stuff slide because they refused to pipe up and just say they weren’t sure or ‘didn’t get it’. It’s better to ask questions, and if there turn out to be no issues, or it’s a simple misunderstanding, then no harm, no foul. On a good number of occasions I query things only to later realise that I missed a line of code, meaning it does work as intended, or that there’s some process that had slipped my mind… it hasn’t got me sacked (ahem, yet!). So my advice is just to open up and have a natter at the end of the day; it’ll be worth the ratio of ‘idiot’ to ‘bug-saving’ moments, trust me :-).

Admin

As with any process, there will always be (and if there isn’t for you then let me know where you work because it’s awesome!) a certain amount of ‘red tape’. Use this last section to keep track of whether any procedural bits and bobs have been handled. For example, I’m expected to cover the creation of a Release Note (as part of the practices I follow) for any item I work on, so it should be marked down in this section as to whether I’ve completed it or not. It could end up just being a very simple section, like the following:

Illustration of how to structure the Admin Developer Testing Document Section.

Developer Testing Admin Section Example.

I hope this has been helpful and informative; or, at least, got the mind going to start thinking about this process. Again, as mentioned above, I would love to hear your thoughts so please do get in touch either here or via social media.

Cheers all, keep smacking keys and producing coding loveliness in the meantime 🙂

A Little FOR XML PATH Nugget

A very small post this one, covering a little nugget that I’d almost forgotten until it came up trumps again this week: T-SQL's FOR XML PATH can be a nice solution for concatenating strings across rows (in a given column).

It’s fairly common to need to concatenate column-based data, as the following example illustrates:

--Standard concatenation of column values (comma separated, produces multiple rows)
SELECT 
FORENAME 
+ ', ' 
+ SURNAME AS [FULL_NAME]
FROM dbo.tblTEST t
WHERE t.ID < 4;

However, don’t forget that working through and concatenating row based data, in a particular column, can be achieved simply using the FOR XML PATH construct, just like this:

--Concatenation of row values for a particular column (imagine we wanted comma separated forenames for example) - Provides a single column as structured
SELECT
CONVERT                                     --Conversion to NVARCHAR required (a fixed length here, or MAX depending on string size) - The result will be typed as XML when using FOR XML PATH initially
(
    NVARCHAR(100)
    ,
    (
        SELECT
        t.FORENAME + 
        CASE
            WHEN t.ID < 3                    --Don't add a comma after the last value (just for illustration purposes)
            THEN ', '
            ELSE ''
        END
        FROM dbo.tblTEST t
        WHERE t.ID < 4
        FOR XML PATH('')                     --Specify FOR XML PATH using an empty string (we don't want a wrapping element when concatenating strings)
    )
) AS [COMMA_SEPARATED_FORENAMES];
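--Illustrative output (assuming three matching rows and the comma handling above): a single value along the lines of 'Forename1, Forename2, Forename3'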

An interesting use of FOR XML PATH that’s well worth keeping in mind; it can come in dead handy. Apologies for the short and sweet post; it’s the order of the day! I’ve managed to pick up the dreaded lurgy, so I’m dosed up on medication and drinking a tonne of coffee! Here’s hoping that this post makes sense when I read it later on.

Until the next time, bye for now!

Implementing reCAPTCHA

I wanted to outline some recent work I’ve done with the Google reCAPTCHA API. Although not too difficult to implement, I did struggle a little to find C#-based server-side examples of how to ultimately validate a CAPTCHA. To start us off, however, what is reCAPTCHA?

reCAPTCHA is essentially a mechanism to protect your site's functionality from spam and other kinds of abusive activity. It’s free, which is a massive bonus, and as a cherry on top every solved CAPTCHA is used to annotate images and build machine learning datasets. This data feeds into solving a myriad of problems, including improving maps and solving AI conundrums. The actual term is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart. For any history buffs, details on how this concept came about can be found here (in fact, there seems to be an interesting ‘origin of’ debate):

Wiki CAPTCHA Documentation

To get started using reCAPTCHA, and for further information, you just need to visit the following link:

Google reCAPTCHA

Utilising reCAPTCHA version 2.0 seemed like the way to go for me, and it has a number of benefits. For example, it’s possible for this implementation to automatically confirm some requests as not being malicious in nature, without the need for a CAPTCHA to be solved. In addition, the CAPTCHAs themselves in version 2 are much nicer for a human to solve, relying on the user picking out characteristics in an image rather than trying to read ever more complex and convoluted character strings embedded in a given image. Image recognition (picking out particular objects) is still a field in which programs struggle somewhat, so this form of reCAPTCHA falls into a more secure bracket also.

Using reCAPTCHA

The basic process boils down to following these steps:

  • Go to the Google reCAPTCHA site and click on Get reCAPTCHA.
  • Sign in or sign up, do what you’ve got to do!
  • Register the domain where you want to embed reCAPTCHA. This will enable you to receive the relevant API keys to create and validate CAPTCHAs.
  • Add the relevant JavaScript to your page.
  • Embed the Site key in the page being served to the user (we’ll go over this below).
  • Use the Secret Key in your server-side logic to validate the CAPTCHA response (based on user input). This is done by sending a request to the Google API siteverify address; the shape of that request is shown just after this list, and I’ll cover the code itself below.
  • Get the response and see if the CAPTCHA has been solved correctly, simple as that.
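
In other words (just to make the shape of the thing clear before we see any code), the validation step boils down to a request along these lines, with the placeholder values swapped for your own and the real C# to follow shortly:

https://www.google.com/recaptcha/api/siteverify?secret=YOUR_SECRET_KEY&response=CAPTCHA_RESPONSE_FROM_CLIENT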

First things first, you’ll want to safely note down your Site and Secret key for further use; these can be viewed again at any time by logging into the reCAPTCHA portal (where you signed up). So, you’ve registered your domain and have the relevant keys; we now need to embed reCAPTCHA by adding the following element to the page you want to target:

<head>
...
    <!--Use async/defer as necessary if you desire-->
    <script src='https://www.google.com/recaptcha/api.js'></script>
...
</head>
<body>
    ...
    <!--The id attribute is not absolutely required, but I have listed it here as I make further use of it (basically a style choice) in a jQuery AJAX call (could just use the class however)-->
    <div id="g-recaptcha-response" class="g-recaptcha" data-sitekey="YOUR_SITE_KEY_GOES_HERE"></div>
    ...
</body>

Be sure to drop the Site key you were provided with in the data-sitekey attribute, within the div outlined (and add the JavaScript reference listed to your page). Load up your page and you should see something akin to the following:

reCAPTCHA V2 Control.

This is a super start. If you are doing a simple post on submit, you’ll be able to pull information out of the standard request object and use it server-side. However, I wanted something incredibly lightweight, so I went with the following jQuery AJAX call (I may tweak this in my personal implementation, so treat it as not yet finalised, but it provides you with an idea of the structure nonetheless):


//Defines an outline (structure) for a JavaScript contact object
function Contact(name, email, message, recaptchaClientResponse) {
	this.Name = name;
	this.Email = email;
	this.Message = message;
	this.RecaptchaClientResponse = recaptchaClientResponse;
}

...

//Submit Enquiry button click handler
$(".submit-enquiry").click(function (e) {

	//Hide the alert bar on every new request (TODO - More code required to tidy up classes on the alert div)
	$(".alert").hide();

	//Use ajax to call the service HandleEmailRequest method
	$.ajax({
		cache: false,
		async: true,
		type: "POST",
		dataType: "json",
		processData: false,
		data: JSON.stringify(
			{
				contactObj: new Contact
					(
						$("#NameTextBox").val(),
						$("#EmailTextBox").val(),
						$("#MessageTextArea").val(),
						$("#g-recaptcha-response").val()
					)
			}),
		url: "URL_TO_A_SERVICE.svc/HandleEmailRequest",
		contentType: "application/json;charset=utf-8",
		success: function (evt) {
			//Evaluate the response and add content to alert bar
			if (evt.SendEmailResult)
			{
				$(".alert").addClass("alert-success").html("<p>Message successfully sent!</p>").slideDown(1000);
			}
			else
			{
				$(".alert").addClass("alert-danger").html("<p>We couldn not send the message, sorry about that.</p>").slideDown(1000);
			}

			//Reset the recaptcha control after every request
			grecaptcha.reset();
		},
		error: function (evt) {
			//Add content to the alert bar to show the request failed
			$(".alert").addClass("alert-danger").html("<p>We could not send the message, sorry about that.</p>").slideDown(1000);

			//Reset the recaptcha control after every request
			grecaptcha.reset();
		}
	});
});

The first part of this code encapsulates the idea of a contact, in my case at least (i.e. a user leaving a message on the web page that will become an email). This is just an easy way for me to package up details during the AJAX call. Using jQuery, I’ve attached a handler to the submit button on my page which, apart from a little UI manipulation (for an alert bar element), in essence just makes a call to a service (via the url parameter) using details that the client has provided, including information on the solved CAPTCHA. This is passed to the service using the data parameter; note the use of jQuery to get details of the CAPTCHA the user has completed ($("#g-recaptcha-response").val()). This is passed as JSON to the service. Once a request has been validated, the return value (a simple boolean in my case) is inspected and an alert is shown to the user before the reCAPTCHA control is reset (another spam control mechanism that I’ve added in for extra peace of mind). Lastly, for me, the use of JSON.stringify was absolutely key, as I want to work with JSON data over the wire. More details can be found here:

JSON.stringify() Documentation

This is where it got a little trickier. On the reCAPTCHA documentation site, for version 2.0, I could only see examples for PHP:

reCAPTCHA Code Examples Available.

So, what you’ll see next is the culmination of my digging around for a jQuery/AJAX/C# solution to this particular head-scratcher. Hopefully, it proves useful to anyone interested in going down this route.

Let’s get going! On the service side, you’ll need something like the following, to gather up the AJAX request:

/// <summary>
/// Represents a Contact (Potential Customer) contacting
/// the recipient with an enquiry.
/// </summary>
[DataContract]
public class Contact
{
	#region Automatic Properties (Data Members)

	/// <summary>
	/// The Contact's full name.
	/// </summary>
	[DataMember]
	public string Name { get; set; }

	/// <summary>
	/// The Contact's email address.
	/// </summary>
	[DataMember]
	public string Email { get; set; }

	/// <summary>
	/// The Contact's message to the recipient.
	/// </summary>
	[DataMember]
	public string Message { get; set; }

	/// <summary>
	/// A string that represents the client's reCAPTCHA
	/// (V2) response (passed along with other Contact
	/// information and processed before a message can be sent).
	/// </summary>
	[DataMember]
	public string RecaptchaClientResponse { get; set; }

	#endregion Automatic Properties (Data Members)
}

...

/// <summary>
/// Outlines the HandleEmailRequest method that is part of this service.
/// Consumes and returns a JSON format message (called from the client
/// with details that instantiate a Contact object). Method designed to 
/// process reCAPTCHA details and, on success, send an email to
/// the designated (recipient) email address. 
/// </summary>
/// <param name="contactObj">Contact details associated with the person requesting information.</param>
/// <returns>A boolean denoting whether the email request was handled successfully.</returns>
[OperationContract]
[WebInvoke(Method = "POST", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.Wrapped)]
bool HandleEmailRequest(Contact contactObj);

...

/// <summary>
/// Public service method that attempts to send a
/// user message to the recipient as an email.
/// </summary>
/// <param name="contactObj">The Contact object constructed from JSON (passed from the client).</param>
/// <returns>A boolean that represents if this process was successful.</returns>
public bool HandleEmailRequest(Contact contactObj) => new EmailSender(contactObj).SendEmail();

I’ve given you a roll-up of an example Contact class (instantiated from the JSON automatically on a call to the service), an example service interface definition and the outline of a service method (contained in a class implementing this interface). These, of course, live in separate files, but I’ve lined them up together to make them easier to absorb. In my case, the details are passed to and wrapped in an EmailSender class, with the reCAPTCHA validation being called internally by the SendEmail method (as a private method named ValidateRecaptchaClientResponse):

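//Namespaces this snippet relies on (not shown above): System.Net (HttpWebRequest/WebResponse), System.IO (StreamReader) and System.Web.Script.Serialization (JavaScriptSerializer)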
/// <summary>
/// Private helper method that looks at the Contact object
/// associated with this EmailSender and attempts to verify
/// if the reCAPTCHA client response is valid (before attempting to
/// send an email message to the recipient). 
/// </summary>
/// <returns>A boolean that represents if reCAPTCHA validation succeeded or failed.</returns>
private bool ValidateRecaptchaClientResponse()
{
	//Online reference used as a basis for this solution: http://www.codeproject.com/Tips/851004/How-to-Validate-Recaptcha-V-Server-side
	try
	{
		//Make a web request to the reCAPTCHA siteverify (api) with the client's reCAPTCHA response. Utilise the Response Stream to attempt to resolve the JSON returned
		HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(string.Concat("https://www.google.com/recaptcha/api/siteverify?secret=YOUR_SITE_SECRET_KEY_GOES_HERE&response=", contactObj.RecaptchaClientResponse));

		using (WebResponse response = wr.GetResponse())
		{
			using (StreamReader sr = new StreamReader(response.GetResponseStream()))
			{
				//Use a JavaScriptSerializer to transpose the JSON Response (sr.ReadToEnd()) into a RecaptchaResponse object. Alter the 'Success' string of this object to a bool if possible
				bool success = false;
				bool.TryParse(new JavaScriptSerializer().Deserialize<RecaptchaResponse>(sr.ReadToEnd()).Success, out success);

				//Return a value that denotes if this reCAPTCHA request was a success or failure
				return success;
			}
		}
	}
	catch (Exception ex)
	{
		//Catch any exceptions and write them to the output window (better logging required in future). Return false at the end of this method, issue occurred
		System.Diagnostics.Debug.WriteLine($"An error occurred whilst validating the ReCaptcha user response. Type: { ex.GetType().Name } Error: { ex.Message }.");
	}

	//If we hit this portion of the code something definitely went wrong - Return false
	return false;
}

The WebRequest.Create and Deserialize calls are of the most interest here. In the former, you will be required to insert your site ‘Secret key’ into the request to the siteverify address we mentioned earlier; the response that needs to be appended to this string is equal to the reCAPTCHA information you gleaned from the client. You’ll notice that the Deserialize call makes use of a RecaptchaResponse type, which is basically an object wrapper used to contain information from the deserialised JSON response (as part of a reCAPTCHA check). This is outlined as follows:

/// <summary>
/// Represents an object (constructed from JSON)
/// that outlines a reCAPTCHA Response, and the pieces
/// of information returned from a verification check.
/// </summary>
public class RecaptchaResponse
{
	#region Automatic Properties

	/// <summary>
	/// The success status of the reCAPTCHA request.
	/// </summary>
	public string Success { get; set; }

	/// <summary>
	/// The Error Descriptions returned (Possibly to implement
	/// in the future).
	/// </summary>
	//public string ErrorDescription { get; set; }

	#endregion Automatic Properties
}

The actual JSON returned from the response stream takes the following form, so it is possible to extract error codes also if you desire (for me, I’m ripping a simple boolean out of this based on the success value):

{
  "success": true|false,
  "error-codes": [...]   // optional
}

On a very last note, cited in the code above but to reiterate, this link was invaluable:

Code Project reCAPTCHA Validation Example

That’s about it for a basic end to end sample.

The API documentation (and the steps listed on the reCAPTCHA site after registration) are pretty good, so you should be in safe enough hands:

reCAPTCHA API Documentation

Thanks all and take care until the next time.

Future Decoded 2015 Play-by-play

Hello beautiful people!

It’s a fantastic, gorgeous Saturday morning (it’ll be Monday by the time I hit the publish button, such is the enormity of the post!); the birds are chirping, the sun is shining through the balcony windows (and there is a bloody wasp outside, STILL!!!) and my wife has left me… to go on a girly weekend (that probably sounded more alarming than intended; hmmm, oh well, it stays!). Whilst she is away fighting the good fight, I have the opportunity to go over my thoughts on the recent Future Decoded 2015 event that took place at ExCeL in London.

The links to outline this event have been posted before on my blog, but just in case, here are the goods again:

Future Decoded 2015
Future Decoded 2015: Technical Day Highlights

Before we begin, it’s worth pointing out that I attended this event a couple of weeks ago, so apologies if any inaccuracies pop up. I’ll do my best to stick to the facts of what I can remember and specific points that interested me; other commitments ended up preventing me from getting to this particular post sooner. You’ll all let me off, being the super gracious, awesome folks you are, I’m sure :-).

So, FIGHT!!!!!

Sorry, I had a dream about Mortal Kombat last night and upper-cutting people into the pit – what a great stage that was! Ah, the memories… Let’s begin/start/get on with it then.

Morning Key Notes

The morning Key Notes were varied and expansive in nature. I won’t discuss all of them here, only the takeaway points from the talks that struck a chord with me.

1) Scott Guthrie. EVP Cloud and Enterprise, Microsoft (Azure).

I was particularly looking forward to this talk, being a keen follower of Scott Guthrie (and Scott Hanselman, for that matter), and I normally try to catch up with Channel 9 features and Azure Fridays whenever possible (I’ve linked both, although I’m sure most of you, if not all, have come across Channel 9 before or heard of Azure Fridays).

The talk did have primer elements, as you would expect, i.e. here’s the Azure Portal and what you can expect to find (in relation to resources, templates you can access for applications, services, Content Distribution Networks (CDNs), etc.). The next bit really caught me cold: who was expecting a giant image slide of a cow? I certainly wasn’t…

Estrus in Cows

What followed was a full example of real-time data recording and assessment surrounding the monitoring of cows in Asia. I’ve provided a link below that sums up the concept of Estrus (being in heat) nicely enough, but in layman's terms it relates to cows ‘being in the mooooooood’ (my wife insisted I add that joke). Obviously, a farmer's ability to accurately detect this, urm, state of being in a cow is an incredibly important factor in the ability to produce calves.

It turns out that a cow tends to move more when in the Estrus state; something that can certainly be measured. So, with pedometers attached to cows to measure steps taken, and an Azure-based service receiving and providing feedback in real-time, the farmer in question was able to take action to maximise calf production. Further to this, analysis of the data gathered was able to identify trends linking how long cows had been in the Estrus state to the gender of offspring. Crazy stuff, but all very interesting. Feel free to read further to your heart's content:

Cow Estrus Detection

The Internet of Things (IoT) was touched on briefly, and another short, live coding example ensued.

Scott produced a small, bog-standard heat sensor (apparently just a few pounds; I was impressed he didn’t say dollars!) and proceeded to demonstrate a basic WinForms application passing a JSON payload to Azure in real-time (measurements taken a few times a second). This strikes me as exciting territory, and I have friends who already develop applications working in tandem with sensors, backed up by technologies such as the Raspberry Pi and Arduino, for example. The talk closed with the conceptual idea that the majority of data in the world today is still largely unmeasured, and the hope that Azure would be an important platform in unlocking developers' potential to measure previously untapped data.

2) Kevin Ashton. Inventor of the “Internet of Things”.

Kevin coined the term the Internet of Things (IoT) and gave a very good talk on what this means, as well as identifying certain ‘predictions’ for the future; for instance, that we, as a species, will survive climate change. He quickly noted that calling ‘BS’ on this particular one would be tricky should we suffer a doomsday-style event at the hands of climate change (I don’t imagine the last thoughts of humanity to be, ‘oh, Kevin Ashton was so bloody wrong!’). Another interesting prediction: we will all own a self-driving car by 2030. Prototype examples already exist, such as Google's (and Apple's) efforts, and the Tesla:

Google/Apple (Titan) Self Driving Cars
The Tesla

Self-driving cars being one of the cases in point, the IoT relates to how a whole new host of devices will now become ‘connected’. Besides cars rigged up to the internet, we are all aware of the hooking up of internal systems in our homes (heating, etc.) and utility devices (the washing machine), so as to always be online and accessible at a moment's notice. This world isn’t coming per se; it’s essentially already here.

Pushing past this initial definition, Kevin was keen to stress that the IoT is not limited to just ‘the connecting of hardware to the internet’. Wiki sums this up quite nicely on this occasion, but software (services and analytics) that moves forward with hardware changes will ultimately change the way we live, work, shop and go about our daily lives. Whether this be data relayed from the fridge to Google Glass (yes, you are out of milk!), or perhaps a self-driving car ordering ‘click and collect’ shopping and driving you to the collection point after work (not to mention triggering the heating x miles from home!), software, and the analysis of the new kinds of data we can record from interconnected elements, will be a huge driving force in how our world changes:

Internet of Things (IoT)

Lastly, before I forget and move on, a key phrase voiced several times (although I cannot remember the exact speaker, so apologies for that; it was probably David Chappell) was to reset your defaults. Standard client/server architecture was discussed and, for those of us that are part of long-running businesses, this is what we are exclusively, or at least partially, dealing with on a daily basis still. However, the use of mobile devices, tablets, etc., as clients, with the cloud as the underpinning location for the services these clients communicate with, is becoming the norm. For start-ups today, mobile-first development and the cloud (Azure or Amazon Web Services (AWS)) are probably the initial go-to.

For some of us (speaking from a personal standpoint only), a major factor in our success as developers could simply be determined by understanding the cloud and getting the necessary experience to make the transition (for those who are not actively taking part in this world of course).

So, now we have the IoT, let’s talk security…

3) Graham Cluley. Security Analyst, grahamcluley.com.

Graham delivered a funny and insightful talk surrounding everyone's ‘oh my God, the horror, please kill me’ subject: the wonderful world of security.

In a nutshell, he argues (and certainly proves his point, as you’ll read next) that the IoT will bring wonders to our world, but not without issues. We now have a scenario whereby a breadth of new devices have suddenly become internet-connected. However, are the driving forces behind these changes the people who are used to dealing with the murky world of malware, viruses and hacking attempts (such as OS developers)? Probably not, is the initial answer. This is, of course, just a cultural divide between those used to traversing the security world and protecting devices from such attacks, and those tasked with bringing new devices to the interconnected world.

The hacking of self-driving cars (big topic it would seem) was discussed:

Fiat Chrysler Recalls

Also, the potential for hacking pacemakers (bluetooth/wifi enabled) was covered; famously featured in the TV series Homeland, and something which actually led to Vice President Dick Cheney’s cardiologist disabling the wireless functionality of his device:

Pacemaker Hacking
Could Pacemakers Be Hacked?

Although funny, the talk did indeed bring up a very serious issue. The ramifications could be catastrophic, depending on the types of devices that ultimately end up being exposed to the masses via the web. Essentially, as the IoT age develops, extra care must be taken to ensure that security is right on up there, in the hierarchy of priorities, when developing software for these devices.

4) Chris Bishop. Scientist and Lab Director, Microsoft Research.

The last talk I would personally like to discuss briefly was by Chris Bishop; there were a few great nuggets here that are well worth covering.

The idea of Machine Learning (not a topic I was overly familiar with for starters), Neural Networks and Pattern Recognition laid the foundation for a talk looking at the possibility of producing machines with human-level, or even super-human, intelligence.

The Microsoft Kinect was used to demonstrate hand-tracking software that, I have to admit, had an incredible amount of fidelity in recognising hand positions and shapes.

Lastly, a facial recognition demonstration that could estimate, with good accuracy, the emotional state of a person was kicked off for us all to see. Very, very impressive. There was most certainly an underlying feeling here (and as much was hinted at) that this kind of technology has many hurdles to jump. For instance, building something that can consume an image and accurately describe what is in that image is still a flaky concept at best (and the difficulties of producing something capable of this are relatively vast).

Still, a greatly enjoyable talk! A book was touted, and I believe (please don’t shout at me if I’m wrong) this is the one:

Pattern Recognition and Machine Learning

After the morning Key Notes, a series of smaller talks and break-out sessions were available to us. Here’s how I spent my time…

Unity3D Grok Talk

Josh Taylor. Unity Technologies.

It’s my sincere hope that, on discovering this, my employer won’t decide to sack me! This was over lunch and was a self-indulgent decision, I’m afraid! You’ll know from some of my historical posts that I have a keen interest in Unity3D (and have spent time making the odd modest prototype game here and there), and I was interested to see how Unity 5 was progressing, especially as a more cohesive experience with Visual Studio had been promised.

In this short, 20-minute talk, we experienced how Visual Studio (finally) integrates nicely into the Unity3D content creation pipeline. Unity3D now defaults to using Visual Studio as the editor of choice, with MonoDevelop being pushed aside. Apologies to anyone who likes MonoDevelop, but I’ve never been able to get behind it. With wacky IntelliSense, and what I can only describe as a crash-tastic experience in past use, I haven’t seen anything yet to sway me from using Visual Studio. In fact, it was demonstrated that you can even use Visual Studio Code if you wish and, as it’s cross-platform, even Mac and Linux users can switch to it. More reasons to leave MonoDevelop in the dust? It’s not for me to say really; go ahead and do what you’ve got to do at the end of the day!

In the past, a paid-for plugin was required in order to debug Unity projects in Visual Studio. This particular plugin has been purchased by Microsoft and is now available to all. Being able to easily debug code doesn’t sound like much, but trust me, it’s like having a basic human right re-established – such good news!!!

The new licensing model was also commented on; a massive plus for everyone. The previous Free/Pro divide is no more; now everyone gets access to the lion's share of the core features. You only need to start spending money as you make it (fair enough for Unity to ask for a piece of the pie if you start rolling in profit/expanding a team to meet new demand). For me, this means I actually get to use the Unity Pro water effects, hoorah ;-).

Following this, I spent a bit of time last weekend watching the Unite 2015 Key Notes, which discussed 2D game development enhancements, cloud-based builds and Oculus support. Well worth a look if and when time allows:

Unite 2015 Key Notes

Plus, if Oculus technology interests you, then it’s definitely worth watching the keynote from John Carmack (formerly of id Software, the mind behind Wolfenstein and Doom) at the Oculus Connect 2 event:

John Carmack Oculus Keynote

Very exciting times ahead for Unity3D I believe. Self-indulgence over, moving forward then…

Journey to the Intelligent Cloud

Corey Sanders. Director of Program Management, Azure.

Following the Unity3D talk, I made my way back to the ICC Auditorium (I missed a small section of this particular talk, but caught the bulk of it) to catch up on some basic examples of how the new Azure Portal can be used. This took the form of a brief overview of what’s available via the portal, essentially a primer session.

In my recent personal work with Azure, I’ve used the publishing capability within Visual Studio to great effect; it was very transparent and seamless to use by all accounts. A sample was provided within this particular session which demonstrated live coding changes, made in GitHub, being published back to a site hosted on Azure.

Going into a tangent….

Very much a personal opinion here, but I did find (and I wasn’t the only one) that a good portion of the content I wanted to see was a) on at the same time (the 1:15pm slot) and b) during the core lunch period when everyone was ravenous; I’m a ‘hanger’ sufferer, I’m afraid. C# with Mads Torgersen, ASP.NET 5, Nano Server and Windows 10 (UWP) sessions all occupied this slot, which drove me a little nuts :-(. This felt like a scheduling issue, if I’m honest. I’d be interested to hear from anyone who did (or didn’t) feel the same.

I was so disappointed to miss Mads Torgersen; I very much enjoyed the recent C# language features overview and would have loved to have made this breakout session! I did walk past him later in the day and, I hope he never reads this, but he seemed ridiculously tall (perhaps Godly C# skills make him appear several inches taller, who knows!). It doesn’t help that I’m on the shorter side either; I just wanted to be 5′ 11″, that’s all I ever wanted (break out the rack, I need to get stretching!). I should have said hello, but wimped out!

F# Language Breakout Session

Don Syme. Principal Researcher, Microsoft Research.

This was easily the part of the event that resonated the most with me, and strongly influenced the foray into F# that I undertook recently. Don Syme, the designer and architect of the F# language, took us through a quality primer of the syntax and how F# can be used (and scaled) for the cloud.

All of this aside, the most impressive part of the talk was a live demonstration of F# Type Providers. Again, this is fully covered in my previous post so I’ll just direct you to that, which in turn will aid me in cutting down what is now becoming a gargantuan post. In summary, the ability to draw information directly from web pages, rip data straight from files and databases, and combine and aggregate it all together using minimal code produces a terse, easy to understand and pretty darn good experience in my book. Even the code behind producing visual feedback, in the form of the charting API, is succinct; the bar really isn’t set too high for new starters to get involved.

If you decide to give anything a go in the near future, I would give F# the nod (followed closely, just a hair’s breadth away, by jQuery in my opinion). Certainly check it out if you get the chance.

Final Keynote

Professor Brian Cox. Physicist.
Krysta Svore. Senior Researcher, Microsoft Research.

The day proceeded in fast forward and, before we’d really had the chance to gather our thoughts, we were sitting in the main auditorium again faced by Professor Brian Cox, Krysta Svore and a menagerie of confused attendees staring at mathematical formulas outlining quantum theory.

Into the wonderful world of quantum computers we dance, and in my case, dragging my brain along from somewhere back yonder in a desperate attempt to keep up. Thankfully, I’m an avid TED talk fanatic and had, in the run up to the event, brushed up on a few quantum theory and quantum mechanics videos; lucky I did really. The content was dense but, for the most part, well put together and outlined the amazing (and potentially frightening) world of possibilities that quantum computers could unlock for us all.

Professor Brian Cox cruised through the theories we’d need to be intimate with in order to understand the onslaught of oncoming content surrounding quantum computers. In essence, a traditional ‘bit’ has a defined state (like a switch): on or off. However, and this is the simple essence of what they were trying to get to, traditional bits are reaching limitations that will prevent us from solving more complex problems in a timely manner (you’ll see what I mean in a second). Therefore, qubits, born from quantum theory, are the answer.

Now, I’m not going to insult your intelligence and go into too much detail on a subject that I am clearly not an expert in. So, just in ‘layman’s bullet points’, here is what I took from all that was said and done across the keynote:

  • With bits, you are dealing with entities that can have a fixed state (0 or 1). A deterministic system if you will, that has limitations in its problem crunching power.
  • Qubits, however, take us into the realm of a probabilistic system. The qubit can be in a superposition of all of the allowed states, not just 0 or 1.
  • Therefore, the problem crunching powers of qubits are exponential in nature, but the probabilistic nature makes measuring them (and interactions involving them) difficult to get to grips with (the standard notation is sketched just below).
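For anyone who fancies seeing the standard notation (my own addition here, not something covered in the keynote), a single qubit’s state is typically written as a superposition of the two basis states:

\[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 \]

Measuring the qubit then yields 0 with probability |α|² and 1 with probability |β|², which is exactly the probabilistic behaviour described above.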

So is it worth fighting through the technical problems in order to harness qubits? What kind of gains are we talking about here?

Krysta Svore outlined an example showing that it would take roughly one billion years for a current supercomputer to crack (more complex than standard) RSA encryption. How long would it take a quantum computer, you may ask? Significantly less time is the answer: an estimated one hundred seconds in fact. This clearly defines for us the amazing problems we’ll be able to solve, whilst simultaneously illustrating the dangerous times that lie ahead from a security standpoint. Let’s just hope cryptography keeps up (I can see a few sniffs to suggest things are in the pipeline, so I will keep an eye out for news as it pops up).

So you want a quantum computer, I hear you say! Hmmm, I wouldn’t put it on the Christmas list anytime soon. Current quantum computers need to be super-cooled (and, from the pictures we got to see, don’t look like something you could hike around with!), so we’re not likely to get our hands directly on them in the near future.

Can you get your mitts on quantum simulators today? Apparently yes is the answer (completely untested links, just for you to peruse on your own, good luck):

QC Simulators
Project Liquid

Taking nothing away from the keynote though, it was a concrete finish to an excellent event. Would I go again? You bet! Should we get the train next time instead of driving? Taking into account the mountains of free beer and wine on offer, of course! To finish up, before summarising the Expo itself: if you haven’t been and get the opportunity (in fact, actively seek the opportunity, enough said) then definitely book this in your calendar, it was thoroughly brilliant.

Expo

Very, very quickly, as I am acutely aware that your ability to focus on this post (if it hasn’t evaporated already) must have completely diminished by this point, I wanted to describe what the Expo itself had to offer. If you’re still reading, give yourself a pat on the back!

One of the more compelling items we saw was the use of the new Lumia phone as a (kind of) desktop replacement attempt. Let’s get one thing straight: you’re not going to be doing hardcore software development using Visual Studio or any other intensive task on this device anytime soon. However, there was certainly enough evidence to suggest that basic productivity tasks would be possible using a mobile phone as the backbone.

The Lumia can be hooked up to a dock, akin to the Surface Pro 4 (the docks are subtly different apparently, so are not cross-compatible), which allows it to be tied to a display device. You can also get a folding mouse and keyboard for a very lightweight, on-the-go experience. Interesting, certainly, but there is a definite horsepower issue that will prevent anyone working on anything remotely intensive from getting on board. Anyway, for those interested, the link below will get you started:

Lumia Docking Station

I saw a few Surface Pros, and wondered whether we could potentially smuggle a few out of the Expo! Only kidding, no need to call the Police (or for anyone I work with thinking I am some kind of master criminal in the making) :-).

An Oculus demonstration booth was on the Expo floor, and displays were hooked up to show what the participants were experiencing. It was noted that a few of the people using the Oculus seemed to miss the point a bit, and kept their head completely still as they were transported through the experience. Once the heads started moving (to actually take in the world) you could visibly see people getting incredibly immersed. Alas, the queues were pretty darn large every time I made my way past, so I didn’t get a chance to experience it first-hand. One for the future.

There was also a programmable cocktail maker, an IoT masterpiece I think you’ll agree. A perfect union of hardware, software and alcohol, a visionary piece illustrating the future has arrived!

The next time an event like this comes around I will endeavour to get a post up in a timely fashion (which will vastly improve the content I hope).

Thanks for reading and a high five from me if you made it this far. Back to coding in the upcoming post I promise, until the next time, cheers from me (and would you believe it, it’s now Tuesday)!

Fighting The Python

A random spin-off for today, but thankfully a much, much shorter post for anyone who bled from their eyeballs when reading my last post! The focus for today is Python; what you’ll read about here is my initial insights. Looking at the clock, this equates to just under an hour of reading and learning, so don’t expect to see anything too advanced or perhaps even technically perfect (sound the ‘possibly wrong on the internet’ alarms now, please, if you will).

What is Python?

Everyone, I’m sure, just wants to say it’s a big-ass snake; and of course it is! Programming language wise however, Python is designed to be a very easy to read, terse, dynamically typed language which allows for rapid development. I’m familiar with it from a procedural/scripting sense but Python does support an object-orientated paradigm (something I haven’t looked into as of the time of writing).

So, some reiteration here, but bear with me; the key takeaway points are as follows (with a tiny sketch after the list):

  • Dynamically typed.
  • Standard files use the .py extension.
  • Whitespace sensitive (uses indentation, like F#, to define blocks and control flow).
  • Similar ethos to F#, easy to read and terse.
  • Allows for rapid development cycles.
  • Information that I’ve gathered so far touts this as a great starter programming language.
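To make a couple of these points concrete, here’s a tiny sketch of my own (not lifted from any of the resources below) showing dynamic typing and indentation in action:

#A tiny sketch: dynamic typing and indentation-based blocks
name = "Ada"            #No type declaration needed; 'name' currently holds a string
name = 42               #The same name can be rebound to a value of a different type

if name == 42:          #A colon plus indentation (not braces) defines the block
    print("'name' has been rebound to an integer")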

Setup

You’ve got a few options for getting started. There appear to be multiple online interpreters where you can go and code in Python without downloading any resources:

Python Online Interpreters (Google)

Codecademy also has a course on offer, which I used as a primer for writing this post (the first few sections at least). The python.org site also has, based on an initial nosey around, some solid-looking documentation along with downloads for the latest versions of Python:

Codecademy Python Course
www.python.org (Downloads/Documentation)

As I’m a Windows/Visual Studio kind of guy (something I should probably step away from occasionally to properly fly the ‘Random Coding Journeys’ banner in future!) the examples you’ll see next are formed using the Python Visual Studio templates (for creating a Python Command Line Application).

On debugging the application for the first time you will be prompted to download an interpreter; CPython is the option I selected, but there were various options to peruse, so if you try this yourself have a good root around. After one simple installation I was away and debugging.

Python VS Interpreter.

Python Command Line.

So, without further ado, let’s get to some coding!

The Basics

As Python is dynamically typed, as stated before, to get up and running you simply declare a variable name, followed by the equals (‘=’) operator, then a value, to start working with data as follows:

language = "Python"

Python then infers the type from the assigned value (superficially akin to F#’s type inference, although Python determines types dynamically at runtime rather than statically).
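As a quick check (my own addition, not from the course material), the built-in type function shows what Python has settled on for each value:

#Inspect the runtime types Python has assigned
language = "Python"
count = 10
ratio = 1.5

print(type(language))   #<class 'str'>
print(type(count))      #<class 'int'>
print(type(ratio))      #<class 'float'>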

Single-line comments are defined using #, with multi-line comments requiring content to be wrapped inside triple double quotes:

#A single-line comment
language = "Python"

"""
A multi-line comment
"""
intNumber = 10

A super-fast blast through the documentation on the python.org site and Codecademy illustrates that the +, -, *, / and % operators all function as you would expect. The ** syntactic rule is used for exponential operations. The input/print functions can be used to read in/output information to the console respectively. The def keyword is used to define functions (parameters can be supplied using parentheses). In a slight syntactic twist to what I’m used to, colons are used at the end of if, elif, else, try and except statements before any newlines/indentation:

#Classic Hello World - Just in case you really, really wanted to see it!
print ("Hello World")

print (5 + 5)       #Addition (10)
print (10 - 5)      #Subtraction (5)
print (5 * 5)       #Multiplication (25)
print (10 / 5)      #Division (2.0)
print (10 % 4)      #Modulo/Modulus (2)
print (10 ** 2)     #Exponential (100) 

#Get a number input from the user
intValue = input("Enter any number: ")

#No checking on ints here - Completely unsafe cast (gulp!)

#Notice the use of colons here. Standard ==, <, >, etc operators are fine. Can also define an 'in' statement (if, elif and else supported)
if int(intValue) <= 8:
    print("intValue less than or equal to 8")
elif int(intValue) in (9, 10, 11):
    print("intValue in 9, 10, 11")
else:
    print("intValue is greater than or equal to 12")

#Retrieve another input from the user
intValueTwo = input("Enter another number: ")

#Illustrate some other decision making constructs. 'or' and 'and' are substituted (when compared to C#) for && and ||
if int(intValueTwo) == 0 or int(intValueTwo) == 1:
    print("intValueTwo is 0 or 1")
elif int(intValue) > 10 and int(intValueTwo) == 2:
    print("intValue is greater than 10 and intValueTwo is equal to 2")
else:
    print("Some other condition")

Sample Application

To finish this post off here’s an incredibly rudimentary code sample that’s designed to calculate the hypotenuse of a right-angled triangle (with the lengths of the two shorter sides of the triangle provided):

"""
A basic example of using python: Pythagoras' Theorem (and a multi-line comment!)
"""
#Import math helpers as required
from math import sqrt, floor

#Create a function up front to parse the input to an integer (that's all I'm allowing for now). Basically to demonstrate very simple error handling
def intTryParse(value):
    try:
        int(value.strip())
        return True
    except ValueError:
        print("Could not convert input value to an integer.")
    except:
        print("Unknown error occurred whilst processing the input.")
    return False

#Create a function to calculate the third side (assuming we have a right angled triangle!) of a triangle based on the two side lengths provided
def calculateTrianglesThirdSide(firstSideLen, secondSideLen):
    return floor(sqrt((firstSideLen ** 2) + (secondSideLen ** 2))) #Use floor to round, ok with the slight inaccuracy (I just wanted to use more helper functions)

#print to the console - We're here and we are alive!
print ("Pythagoras Example (calculate Hypotenuse)\r\n=========================\r\n")

#Get string based input from the user for the first two sides of the triangle
sideOne = input("How long is the first shortest side of the triangle: ")
sideTwo = input("How long is the second shortest side of the triangle: ")

#Only proceed if both values provided are integers
if intTryParse(sideOne) and intTryParse(sideTwo):
    #Both values provided for the first two sides parse correctly (strip space from the values)
    sideOneInt = int(sideOne.strip())
    sideTwoInt = int(sideTwo.strip())

    #Calculate the remaining side using the values provided (output to the console)
    print (calculateTrianglesThirdSide(sideOneInt, sideTwoInt))
else:
    #Invalid input - Cease processing
    print ("Processing halted due to invalid input.")

Thanks for reading, until the next time…

A Few Hours With….F#

What can I learn about F# in the time it takes to drink a coffee, eat some biscuits and listen to a few tracks on Spotify? Let’s find out. This post was supposed to be entitled “Thirty Minutes With…F#” but I ended up getting far too engrossed and a couple of hours skipped by (followed by more coffee and biscuits)!

This is something that I’ve been meaning to cover and is a topic that has come up among friends and colleagues; there’s certainly an interest in this language so I’ll pop the hood and have a little look.

For starters, although it’s a few years old now, I found this discussion on the topic interesting:

F# with Richard Minerich & Phillip Trelford (Hanselminutes.com)

As a basis for this little peek, I’ll be using http://www.tryfsharp.org, instead of any of the integrated Visual Studio features (which do exist, so I may do a follow-up after this taster).

EXTRA NOTE: In the interim since starting this post, I actually attended the Future Decoded 2015 event, where there was a fantastic F# breakout session by Don Syme; this has very much piqued my interest further.

What is F#?

Prior to writing this post I’d pegged F# as a scripting language geared to solving complex mathematical problems in a clean manner; hence its inherent link with the financial market and software created to function in this particular sector. However, based on the Hanselminutes.com podcast and the content presented on tryfsharp.org, it seems I may be underselling it. F# is now pushing itself as a fully-featured and rich alternative (or at least a powerful language to commonly interoperate with, not just for mathematical problems) to existing object-orientated languages such as C# and Java.

So, the take away points:

  • Strongly-typed, functional-first programming language.
  • Open source.
  • Cross platform.
  • Expressive and concise (i.e. terse).
  • Whitespace sensitive.

Setup

When using the tryfsharp.org site, the first hurdle you’ll have to jump is compatibility with the Silverlight plugin used to drive the web editor. During my testing I had issues in Chrome and Edge, as outlined below:

Chrome Unsupported.

I came up trumps in Firefox, IE 11 and Opera (for now):

Opera Supported!

I settled on using Firefox and didn’t come across any issues whilst going through the tutorials presented in the getting started section. Perhaps the browser compatibility is something that will get addressed, or something that I just need to read up on. Either way, Silverlight is still slowly coughing and spluttering its way to an eventual demise!

Getting Started in F# – Section by Section

I’m going to give a very basic and brief rundown of the content presented in the first few sections provided on the site. This isn’t an exhaustive list of all of the content presented or a full representation of the information available; this just represents my first ‘comb through’.

Bindings, Values and REPLs

To begin with, we are presented with the REPL (Read Evaluate Print Loop), which executes the code within the editor. The REPL provides an ongoing output so you can build up programs incrementally (i.e. keep track of basic mathematical output, for instance). You can opt to execute code in its entirety, in blocks, or line by line by highlighting statements and using the run command (or using Ctrl+Enter). One gotcha here: the equivalent command in Visual Studio, called F# Interactive, which allows you to execute scripts line by line or in blocks, requires you to use Alt+Enter. I’ve seen resources that state this could cross over with some ReSharper keyboard shortcuts; this is one to look out for, so adjust key bindings as required.

Binding names to values (declaring and setting variables in a traditional sense) is achieved using the let keyword:

//Declare and set a variable called bookCount to 10
let bookCount = 10

In a variation to C#, let bindings are immutable in F#, meaning that out of the gate the following will cause errors and behaviour you may not expect if you’re not on your guard:

//The following code results in "Book Count: 10" - bookCount does not allow mutation after the original assignment (binding)
let bookCount = 10 
bookCount = 11                              //Not an assignment; here '=' is an equality test that simply evaluates to false
printfn "Book Count: %d" bookCount          //printfn is a utility function (in FSharp.Core.dll) that allows printing to the output window (various formats are allowed for variables that you want to inject into the resultant string)

//Duplicate definition of value 'valueOne' error thrown
let valueOne = "Value One"
let valueOne = "Value Two"

You’ll notice that I’ve thrown in a reference to the printfn function, a real staple for providing formatted output as and when needed. Further help can be found here:

Core.Printf Module
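Before moving on, here’s a tiny sketch of my own (not from the tutorial) covering the format specifiers I’ve leant on most so far:

//A few common printfn format specifiers
printfn "An integer: %d" 42
printfn "A float: %f" 3.14
printfn "A string: %s" "F#"
printfn "Anything at all (structured formatting): %A" [1; 2; 3]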

So what if you need to actually ‘mutate’ a value after its original assignment? Thankfully, this is possible via the mutable keyword, as illustrated in this simple example of a looping construct:

//Declare a particular variable using the mutable keyword so the value can change after assignment
let mutable countVar = 1

//A list to iterate over (defined here so the snippet actually runs)
let itemArray = [10; 20; 30]

for item in itemArray do
    //Perform some kind of function (using the countVar value in this case)...
    countVar <- countVar + 1 //Increment countVar

As a side note, you can see that '<-' is used for assignment purposes (as opposed to the usual '=' operator).

Also, if executing statements singularly, it is possible to use a technique called shadowing to overwrite an initial binding as in this example:

//Executing this line...
let shadowed = "first value"

//Then this line, separately...
let shadowed = "second value"

//Prints "value = second value"
printfn "value = %s" shadowed

It’s important to note that the second statement here (let shadowed = “second value”) creates a new binding that shadows, or masks, the original.

F# is very much statically typed, as this code illustrates (types are inferred correctly in the output window on inspection):

let intTest = 10
let floatTest = 10.55
let stringTest = "Twenty Two"

A key plus of F# (as touted anyway) is that it goes to much greater lengths to perform type inference, to a larger degree than most statically/strongly typed languages. It’s looking to walk the very fine line between the enhanced readability of a dynamic language and the more robust nature of a statically typed language. My early impressions suggest that F# is doing a good job at finding a balance.
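To illustrate (a quick sketch of my own, not taken from the site), none of the following bindings carries a single type annotation, yet each one receives a precise static type:

//No annotations anywhere, yet everything below is statically typed
let addTen x = x + 10               //Inferred as int -> int (from the integer literal)
let halve x = x / 2.0               //Inferred as float -> float (from the float literal)
let describe x = sprintf "%A" x     //Inferred as the generic 'a -> string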

Functions

Functions in F#, in line with traditional data values, are also created using the let keyword to bind a name to a function. You’ll notice that in a very simple function definition, such as the next example, specifying types for parameters isn’t strictly required. Based on how I’ve called the function, F# infers the types of x and y to be integer:

let calculateRectPerimeter x y =
    ((x * 2) + (y * 2))

calculateRectPerimeter 4 6

As another example, this multiply function is defined without annotations and called with float arguments; F# infers the types of both parameters correctly:

let multiply x y =
    x * y 

multiply 4.5 6.2

A few gotchas crept into the fold here, namely surrounding mixing of types (integers and floats in my case) which, as opposed to C#, is not allowed in F# without explicit casting. Here’s a useful troubleshooting guide for anyone interested in doing some further reading:

F# Troubleshooting

So far you’ll have observed that the scaffolding required to define and use a simple function is very lightweight. No curly braces, brackets (until you start to enforce types, as specified below) or comma-separation of arguments is required at this stage (which is quite nice, I have to admit).

This leads nicely onto the topic of Type Annotations. In a nutshell, this simply means defining types in a function signature when the function operates on an input parameter in a type-specific manner. This example is pulled directly from the learning resources provided online:

let toHackerTalk (phrase:string) =
    phrase.Replace('t', '7').Replace('o', '0')

toHackerTalk "terry ono"

F#, in wording stripped from the site, treats functions as First Class Citizens. This means, due to their nature of being bound to a name using the let keyword, you can create helper functions within functions and use functions (as with C# methods) as arguments to other functions (termed Higher Order Functions):

//1) FUNCTIONS WITHIN FUNCTIONS

//Monthly pay is 4x weekly salary (addBonus doubles this value, if only!)
let calculateMonthlyPay x =    
    //Function defined within calculateMonthlyPay as an additional helper (called directly by this function)
    let addBonus x =
        x * 2

    addBonus(x * 4)

//Call calculateMonthlyPay ((250 * 4) * 2) = 2000    
let payResult = calculateMonthlyPay 250

//2) FUNCTIONS AS ARGUMENTS

//First, we define a higher-order function that applies whatever function it is given to the value 7
let magicNumber number =
    number 7

//Next, we create a function to test an input    
let testMagicNumber number =
    if number = 7 then
        "It's the magic number!"
    else
        "It's not the magic number..."

//Call magicNumber with the testMagicNumber function as an argument (testMagicNumber gets applied to 7)
magicNumber testMagicNumber

This seemed very intuitive apart from, being completely honest, the way in which you provide functions as arguments; the function being passed in is defined before the function you actually want to call. A definite switch up from C# and something that made me stare at the screen for a couple of minutes :-?. Also, note the use of the ‘=’ operator in the comparison here; this doesn’t mean an assignment is occurring.

The last thing presented in this section is Lambda Functions, the syntax for which seems tight and easy to interpret for someone with a C# background:

//Lambdas (Anonymous Function) in action
let divide = (fun x y -> x / y)
divide 8 2

//testFunction applies the supplied function to the value 16
let testFunction test =
    test 16

//Using testFunction as an argument for an anonymous function (x will be 16). Divide (using our previous function) 32 by x, if the result is 2 or greater return true, else false
testFunction (fun x -> divide 32 x >= 2)

Chaining Functions and the Forward Pipe Operator

Hopefully anyone reading this hasn’t dozed off, head hitting the keyboard, etc so apologies in advance; this has turned into a bit of a ‘beast’! Thankfully, this particular section will be a little shorter.

For starters, lists in F# are handled in an incredibly succinct manner. You can create lists of defined values or a list based on a value range very easily as follows:

//A list of ages (note the semi-colon separation of values)
let ages = [24; 31; 32; 50]

//A list of numbers (0 through to 250 using the '..' syntax)
let aLotOfNumbers = [0..250]

//Just to prove it... (251 values: 0 through 250 inclusive)
printfn "aLotOfNumbers has a length of: %d" aLotOfNumbers.Length

Here, the lists are inferred to be of type int based on the contained values. I can see how this could be an incredibly useful way to generate data for load testing or as part of a larger unit testing strategy.

The List type provides functions for operating on lists, whether this be performing calculations, aggregation, filtering or projecting whole new lists. I worked with a few additional examples in my ‘culmination experiment’, but here I’ll focus on List.map and List.filter. Firstly, List.map is used to project a new data set based on the mapping function supplied. List.filter takes a predicate and keeps only the values for which that predicate returns true. If using more than one ‘List’ function, as denoted by the training resources, you can use the Forward Pipe Operator to improve readability significantly:

//Bind a list using the let keyword (numbers 0 to 100)
let firstHundred = [0..100]

//Project a new list that contains numbers in the firstHundred list doubled
let doubledValues = List.map (fun x -> x * 2) firstHundred

//Filter values from 0 to 100 to even values only. Project a new list that contains these values doubled
List.map (fun x -> x * 2)
    (List.filter (fun x -> x % 2 = 0) firstHundred)
    
//Not using the let keyword this time, specify a starting list containing numbers 0 to 50 (Total = 1300) - Using Forward Pipe Operator
[0..50]
|> List.filter (fun x -> x % 2 = 0)     //Filter to only even numbers
|> List.map (fun x -> x * 2)            //Double the remaining numbers after filtering
|> List.sum                             //Sum the total values remaining (after filtering/doubling)

There’s a great stock data based example on the site listed at this point which I’ll leave for you to discover on your own 😉 (taking into account the size of this post so far!); needless to say it encapsulates the ideas covered here and served as a grounding for my forays in building the larger program below.

Data Structures

I haven’t yet delved into the F# object-orientated type system, but I did have just enough time to look over the simpler, more lightweight options.

Record Types allow for the basic grouping together of data. The tutorial uses the example of a book, so I’ll use the source material directly to serve as an example:

//Book Record Type (outline for a 'Book' which defines fields and underlying types)
type Book = 
  { Name: string;
    AuthorName: string;
    Rating: int;
    ISBN: string }

//Type of Book inferred here based on the fields (properties) defined here matching the Book Record Type
let expertFSharp = 
  { Name = "Expert F#";
    AuthorName = "Don Syme, Adam Granicz, Antonio Cisternino";
    Rating = 5;
    ISBN = "1590598504" }

//Use the book's Rating to provide output
printfn "I give this book %d stars out of 5!" 
    expertFSharp.Rating

Record bindings are also immutable, so altering a book’s ‘AuthorName’, for example, will error. F# provides syntax for creating a new Record Type instance, based on an existing one, with an updated field value:

//Record Type bindings are immutable also; this throws an error
expertFSharp.AuthorName <- "Chris Marinos"    

//Create a new Book type, based on the existing book, with a new 'Name'
let partDeux = { expertFSharp with Name = "Expert F# 2.0" }

The type inference that occurs here can cause issues if another Record Type contains the same field names but with an opposing type. Thankfully, explicitly stating which type you want a bound value to relate to is as easy as specifying a prefix label on one of the field assignments (you only need to do one to cover the entire type):

type Book = 
  { Name: string;
    AuthorName: string;
    Rating: int;
    ISBN: string }

type VHS =
  { Name: string;
    AuthorName: string;
    Rating: string;                                                 // Videos use a different rating system.
    ISBN: string }

//Binding fixed here by using Book.Rating
let expertFSharp = 
  { Name = "Expert F#";
    AuthorName = "Don Syme, Adam Granicz, Antonio Cisternino";
    Book.Rating = 5;                                                //Using the explicit label 'Book' here means this in its entirety is now treated as a Book, neat trick            
    ISBN = "1590598504" }

In the scenario whereby a particular field may or may not contain a value (the classic nullable scenario), F# provides the option, Some and None keywords. I’d love to read up on this further; the approach, to me at least, seems to be to create a safety blanket that doesn’t allow for the pesky null reference exception. When calling traditional .Net code this will still be an issue of course; but within the realm of F# the idea of making the developer accountable for this state management (and not just letting something ‘be null’) really intrigues me – note to self for further reading! In addition, you are able to perform a Pattern Matching check using the match/with construct. Here’s a complete example to whet the appetite:

//All people have a Name and Age, but not all people have a Book Club Membership Number
type Person = 
    { Name: string;
      Age: int;
      BookClubMemNo: int option }   //option denotes that data may not exist

//Define Steve - He has no Book Club Membership (the None keyword is used to represent this)   
let steve = 
    { Name = "Steve Smith";
      Age = 32;
      BookClubMemNo = None }
      
//Define Jane - She loves her books, therefore she has a Book Club Membership (Some 'value' used to represent this)
let jane = 
    { Name = "Jane Smith";
      Age = 29;
      BookClubMemNo = Some 125485422 } 
      
//Define a function to 'Pattern Match' - Does the BookClubMemNo of a person have a value?
let bookClubMemDetails person =
    match person.BookClubMemNo with
        | Some memNo -> printfn "%s has a Book Club Membership Number of %d" person.Name memNo
        | None -> printfn "%s has no Book Club Membership" person.Name

//Test our function
bookClubMemDetails steve
bookClubMemDetails jane

Pattern Matching In Action.

The last thing I’d like to discuss, and the final item on the learning resources checklist for getting started, is Discriminated Unions. These look and feel, on the face of it, like something akin to a C# enum type. You can use the same Pattern Match construct (which is very much an equivalent, in a basic sense, to a ‘case’ statement) as above to operate on a Discriminated Union. As a final note, and you’ll see this in both the sketch and the code sample below, Discriminated Union members can be ‘bundled’ together to create more complex scenarios (i.e. defining a sponsor sub-type that’s associated with large football club sponsors).
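Before the full sample, here’s a minimal sketch of my own (not pulled from the learning resources) showing a Discriminated Union with ‘bundled’ data being taken apart via Pattern Matching:

//A Discriminated Union where members carry bundled data
type Shape =
| Circle of float                   //A radius
| Rectangle of float * float        //A width and a height

//Pattern Match on the union to calculate an area
let area shape =
    match shape with
    | Circle radius -> System.Math.PI * radius * radius
    | Rectangle (width, height) -> width * height

area (Circle 2.0)       //Roughly 12.57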

Melting Pot Example

So, as a culmination to what I’d managed to glean in the first hour and a bit, I came up with a little fictional, football data interrogation sample (because…urm, I’m geeking out!). Here’s the final code sample, with full comments to boot:

//Test Scenario
//------------------------
//Calculate football team point totals (only for teams whereby they have 10 or more wins) and provide the name of the team who is winning the league

//Defines a team's strip colour options
type StripColour =
| Red
| Yellow
| Green
| Blue

//Defines some secondary information about team sponsors and sponsor sizes
type LargeSponsorSubTypes =
| Finance
| Oil
| Military

type SponsorSizeType = 
| Large of LargeSponsorSubTypes     //Large sponsors have a defined sub-type as above
| Medium
| Small

//Defines the structure of a football team data item
type FootballTeam =
    { Name: string;
      Wins: int;
      Draws: int; 
      MainKitColour: StripColour;
      SponsorSize: SponsorSizeType option
      Sponsor: string option; }

//Defines a data set surrounding football teams' win/draw totals (data item types inferred based on 'property' names)
let footballTeamData = 
    [
        { Name = "Roaring Tigers FC"; Wins = 8; Draws = 10; MainKitColour = Red; SponsorSize = Some Small; Sponsor = Some "Tiger Juice" };
        { Name = "Squeaking Mice Town"; Wins = 13; Draws = 6; MainKitColour = Green; SponsorSize = None; Sponsor = None };
        { Name = "Elephant City FC"; Wins = 12; Draws = 3; MainKitColour = Yellow; SponsorSize = Some Medium; Sponsor = Some "Elephant House Movers" };
        { Name = "Lazy Leopard Wanderers"; Wins = 3; Draws = 6; MainKitColour = Blue; SponsorSize = None; Sponsor = None };
        { Name = "Jolly Pirates FC"; Wins = 16; Draws = 6; MainKitColour = Green; SponsorSize = Some Small; Sponsor = Some "Pirate Clothing"; };
        { Name = "Norwich City FC"; Wins = 9; Draws = 9; MainKitColour = Yellow; SponsorSize = Some (Large Finance); Sponsor = Some "Aviva"; };
    ]

//Helper function to calculate a team's points based on wins and draws
let calculatePoints wins draws =
    ((wins * 3) + draws)
 
//1) Who is leading the league as it stands (based on total points) - In my weird and wonderful world you must have at least 10 wins to be included in the calculation
printfn "\r\nCurrent League Leader\r\n==============================" 
   
footballTeamData 
|> List.filter(fun team -> team.Wins >= 10)                                                                                     //10 win minimum
|> List.maxBy(fun team -> calculatePoints team.Wins team.Draws)                                                                 //Get the max by the amount of points the team has got                    
|> (fun team -> printfn "%s is leading the league with %d points.\r\n" team.Name (calculatePoints team.Wins team.Draws))        //Print out the teams name and points total

//DO SOME OTHER PROCESSING FOR FUN
//2) Print out a league table (all teams included - Order by point totals)
printfn "Current Standings and Team Information\r\n=============================================" 

let orderedTeamData = 
    footballTeamData
    |> List.sortBy(fun team -> -calculatePoints team.Wins team.Draws)       //Seems to be a sortByDescending in F# 4.0, the negative hack fits purpose here for the time being
    
let mutable teamPlace = 1 
for team in orderedTeamData do
    printfn "%s is at position %d in the league with %d points and a %A kit colour." team.Name teamPlace (calculatePoints team.Wins team.Draws) team.MainKitColour
    teamPlace <- teamPlace + 1
    
//3) Process team sponsors and types and do some pattern matching to finish up
printfn "\r\nTeam Sponsor Information\r\n=============================================" 

//Define a helper function to process each team's sponsor information
let handleTeamSponsorInfo team =
    match team.SponsorSize with
    //Team has a sponsor size so keep interrogating data (we have a sponsor name)
    | Some sponsorSize -> match sponsorSize with
                            //Handle each sponsor size separately (large sponsors have a sub-type so demonstrate how these could be handled differently)
                            | Large largeSponsorSubType -> match largeSponsorSubType with
                                                            | Finance -> printfn "%s has a Large-sized Sponsor (name: '%A') with a Finance subtype. As this is Finance based do blah blah..." team.Name team.Sponsor
                                                            | Oil -> printfn "%s has a Large-sized Sponsor (name: '%A') with an Oil subtype. As this is Oil based do blah blah..." team.Name team.Sponsor
                                                            | Military -> printfn "%s has a Large-sized Sponsor (name: '%A') with a Military subtype. As this is Military based do blah blah..." team.Name team.Sponsor
                            | Medium -> printfn "%s has a Medium-sized Sponsor (name: '%A')." team.Name team.Sponsor
                            | Small -> printfn "%s has a Small-sized Sponsor (name: '%A')." team.Name team.Sponsor
    //No sponsor size (hence, no sponsor type as defined by the data I've rigged for this example)
    | None -> printfn "%s has no sponsor." team.Name
    
//Call the function to inspect team sponsor info (for each and every team in the original, un-ordered data set)
for team in footballTeamData do
    handleTeamSponsorInfo team
    
//TO IMPROVE 
//-> Include a team 'Points' property and calculate this up front (to reduce calls to calculatePoints)
//-> Alter formatting options to remove the 'Some' part of the output within the handleTeamSponsorInfo function

Type Providers

I had another half hour to spare, and I desperately wanted to have a very quick play with F# Type Providers (following the Future Decoded demonstrations). This example doesn’t go into charting, which I’ll reserve as a cherry-on-top topic for another day, but covers an example of scraping a web page for data using the HtmlProvider. In the example below I’m pulling data on browser usage from the w3schools website (directly from a table on the browsers_stats.asp page of the site).

The basic premise is that you provide a sample HTML snippet to define a ‘type’; this essentially gives you strongly typed members to run with in order to rip information out of a web page on-demand. You’ll notice that this approach is indeed sensitive to restructuring changes made at the website page level, which is to be expected, but it’s an interesting premise for interrogating online data sources in real-time.

Here’s the full code snippet, produced in Visual Studio this time around with a NuGet package installed for FSharp.Data:

//Bring the FSharp.Data namespace into scope
open FSharp.Data

//Define a type to house browser stats information (based on an example file)
type BrowserStats = HtmlProvider<"DataFormat.htm">

[<EntryPoint>]
let main argv = 

    //Retrieve the latest, 2015, browser statistics from w3schools (the example file defines an 'outline' so we know how the data is structured - a schema of sorts)
    let browserStatInfo = BrowserStats.Load("http://www.w3schools.com/browsers/browsers_stats.asp").Tables.``Browser Statistics``

    //Write a title to the console
    System.Console.WriteLine("2015 Browser Statistics\r\n=============================\r\n")

    //Write, to the console, stats for each browser (row by row from the relevant html table)
    for row in browserStatInfo.Rows do
        System.Console.WriteLine("{0} -> Chrome: {1}, IE: {2}, FireFox {3}, Safari {4}, Opera {5}", 
            row.``2015``, row.Chrome, row.IE, row.Firefox, row.Safari, row.Opera)

    //Do not close until we have had time to inspect the data
    System.Console.ReadKey() |> ignore

    0 // return an integer exit code

Example output from the console application utilising the above code:

Browser Statistics Console Output.

In summary, I’ve really enjoyed working with F# (in this initial exploratory sense). I can certainly see enough here to warrant a further deep dive, and possibly purchasing a book here or there down the road. I hope you’ve enjoyed reading through. Now, where did I put my semi-colons again? I can’t find them at the end of any statements…

All the best!