Coding Snippets in SSMS (Microsoft SQL Server)

Hi all,

Continuing on with the idea of coding snippets, here’s a follow up showing the (very similar) procedure for creating snippets in SQL Server Management Studio (SSMS).

As before, you’ll need a file with the ‘.snippet’ extension to begin, which contains a snippet in an XML format. A basic example is shown below for reference, encapsulating some code that can quickly search a Stored Procedure definition for a particular search term:

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
<_locDefinition xmlns="urn:locstudio">
    <_locDefault _loc="locNone" />
    <_locTag _loc="locData">Title</_locTag>
    <_locTag _loc="locData">Description</_locTag>
    <_locTag _loc="locData">Author</_locTag>
    <_locTag _loc="locData">ToolTip</_locTag>
</_locDefinition>
    <CodeSnippet Format="1.0.0">
		<Header>
			<Title>Sys.procedures SQL Check</Title>
			<Shortcut></Shortcut>
			<Description>Creates SQL to check sys.procedures for a literal string.</Description>
			<Author>Lewis Grint</Author>
			<SnippetTypes>
				<SnippetType>Expansion</SnippetType>
			</SnippetTypes>
		</Header>
		<Snippet>
			<Declarations>
				<Literal>
					<ID>SearchTerm</ID>
					<ToolTip>The term to search for.</ToolTip>
					<Default>SearchTerm</Default>
				</Literal>
			</Declarations>
			<Code Language="SQL">
				<![CDATA[--Check SP content for a literal string
SELECT 
name
FROM sys.procedures 
WHERE OBJECT_DEFINITION(OBJECT_ID) LIKE '%$SearchTerm$%';
	]]>
			</Code>
		</Snippet>
    </CodeSnippet>
</CodeSnippets>

As you can see, this file takes the familiar ‘.snippet’ format, supporting Snippet, Code and Declarations elements. These include the code snippet itself, and placeholder definitions, as before. You will also find the Header element, where pertinent information about the snippet can be stored, such as author and description.

Easily recognisable territory so far, I think you’ll agree.

Within SSMS, much as in Visual Studio, navigate to Tools > Code Snippets Manager (Ctrl + K, Ctrl + B) to get underway:

Image showing how to access the Code Snippets Manager.

Access Code Snippets Manager.

Again, pretty much plain sailing from here on in. You’ll have the same options as in Visual Studio to import a snippet file (just use the Import button):

Image showing the Code Snippets Manager before Import.

Code Snippets Manager before Import.

Navigate to your snippet file and select it. You’ll now have the opportunity to map the snippet to a nominated folder. In this instance, we’ll use the My Code Snippets folder. Click Finish to complete the process.

Image showing the MSSQL Server Code Snippets Manager Import Dialog.

SSMS Code Snippets Manager Import Dialog.

You should then be able to see your snippet under the nominated folder:

Image showing the Code Snippets Manager after Import.

Code Snippets Manager after Import.

That’s it, you’ve imported the snippet! You can now open a new query and use Ctrl + K, Ctrl + X to launch the Code Snippet Picker, whereby you can use the arrow keys to move between folders and snippets:

Image showing a Snippet before being expanded.

Snippet before Expansion.

Hit Enter to select your snippet; you should then see the following:

Image showing a Snippet after being expanded.

Snippet after Expansion.

Then you can start using your code, utilising tab to cycle through any placeholders. Another trick I’ve used is to add an entire folder of snippets using the Code Snippets Manager (instead of using Import). Using a ‘tactically’ named folder, you can gain faster access to your snippets:

Shows the results of adding a Custom Snippets Folder.

Adding a Custom Snippets Folder.

This type of snippet is an Expansion snippet. You can write other kinds, such as the Surrounds With snippet, an example of which can be found here on MSDN, illustrating a try/catch construct:

Surrounds With Snippet Example

I hope this has been helpful. Please comment below if you have any thoughts!

Cheerio!

Code Snippets in Visual Studio (C#)

Hi all,

This post forms part of a mini-series that I’d like to do surrounding the topic of code snippets. I have, for some time, been using these merrily within Microsoft SQL Server (which I’ll cover) but have neglected to look at creating custom snippets in every other environment I am privy to. Time to put a stop to that :-).

The following is a small demonstration of how to knock up a simple, custom code snippet in C#. Along the way, I’ll discuss a known limitation of creating snippets for this language specifically (as opposed to Visual Basic, for example). With any luck, this will get you to investigate the topic if you haven’t already done so.

Creating Custom Code Snippets

If you’re looking for a simple introduction, this MSDN article is a great place to head:

Walkthrough: Creating a Code Snippet

Once you’re primed and at the start line you should be pushing out snippets in no time.

You’ll need to craft your own file with the ‘.snippet’ extension to begin, where the content takes the form of standard XML. The one I have built for this example is below; I’ll discuss it as we move forward.

<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets
    xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
    <CodeSnippet Format="1.0.0">
		<Header>
			<Title>INotifyPropertyChanged Full Definition Snippet</Title>
			<Author>Lewis Grint</Author>
			<Description>Generates a stub class (including namespace/using statements) implementing INotifyPropertyChanged fully.</Description>
			<Shortcut>inotifyfull</Shortcut>
		</Header>
        <Snippet>
			<Declarations>
			    <Literal>
					<ID>NamespaceName</ID>
					<ToolTip>Replace with the namespace.</ToolTip>
					<Default>NamespaceName</Default>
				</Literal>
			    <Literal>
					<ID>ClassName</ID>
					<ToolTip>Replace with the class name.</ToolTip>
					<Default>ClassName</Default>
				</Literal>				
			</Declarations>
            <Code Language="CSharp">
                <![CDATA[using System.ComponentModel;
using System.Runtime.CompilerServices;

namespace $NamespaceName$
{
    public class $ClassName$ : INotifyPropertyChanged
    {
        #region INotifyPropertyChanged Interface Support

        /// <summary>
        /// PropertyChangedEventHandler for INotifyPropertyChanged support.
        /// </summary>
        public event PropertyChangedEventHandler PropertyChanged;

        /// <summary>
        /// Protected virtual method that should be called when a
        /// supporting property's value is changed (for binding to
        /// work correctly in the application).
        /// </summary>
        /// <param name="propertyName">The property that has been altered.</param>
        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = "")
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        #endregion INotifyPropertyChanged Interface Support
    }
}]]>
            </Code>
        </Snippet>
    </CodeSnippet>	
    <CodeSnippet Format="1.0.0">
		<Header>
			<Title>INotifyPropertyChanged Class Snippet</Title>
			<Author>Lewis Grint</Author>
			<Description>Generates a stub class implementing INotifyPropertyChanged.</Description>
			<Shortcut>inotify</Shortcut>
		</Header>
        <Snippet>
			<Declarations>
				<Literal>
					<ID>ClassName</ID>
					<ToolTip>Replace with the class name.</ToolTip>
					<Default>ClassName</Default>
				</Literal>
			</Declarations>
            <Code Language="CSharp">
                <![CDATA[    public class $ClassName$ : INotifyPropertyChanged
    {
        #region INotifyPropertyChanged Interface Support

        /// <summary>
        /// PropertyChangedEventHandler for INotifyPropertyChanged support.
        /// </summary>
        public event PropertyChangedEventHandler PropertyChanged;

        /// <summary>
        /// Protected virtual method that should be called when a
        /// supporting property's value is changed (for binding to
        /// work correctly in the application).
        /// </summary>
        /// <param name="propertyName">The property that has been altered.</param>
        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = "")
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        #endregion INotifyPropertyChanged Interface Support
    }]]>
            </Code>
        </Snippet>
    </CodeSnippet>
</CodeSnippets>

After a wrapping CodeSnippets element, you construct a CodeSnippet containing a Header and Snippet element, at a bare minimum. Within the Header you are able to express metadata relating to the snippet, such as the Title and Author, alongside the Description (which shows as an IntelliSense tooltip on use) and the actual Shortcut value associated with bringing the snippet into scope, which is typed directly into the IDE.

The Snippet element contains a Code element and is, as you would expect, where you place your physical code ‘slice of goodness’. It’s important to place your code within the square brackets of the CDATA declaration.

A smaller, valid C# snippet file, for example, could look like this:

<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets
    xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
    <CodeSnippet Format="1.0.0">
        <Header>
            <Title>MessageBox Show Snippet</Title>
            <Author>Lewis Grint</Author>
            <Description>Generates a full MessageBox Show call.</Description>
            <Shortcut>mb</Shortcut>
        </Header>
        <Snippet>
            <Code Language="CSharp">
                <![CDATA[MessageBox.Show("Hello World");]]>
            </Code>
        </Snippet>
    </CodeSnippet>
</CodeSnippets>

Not the most useful example, as within Visual Studio you already have the ‘mbox’ snippet covering this (which uses a fully qualified expression for MessageBox.Show), but it illustrates the most basic form of a snippet nicely.

If you want placeholders in your code, which can be accessed and changed sequentially using tab once you fully bring the snippet into scope within the IDE, you need to create a Declarations element under Snippet. Here you can create a Literal, to be altered by the snippet user on-the-fly, with a tooltip and default value to boot. Embed these in your code snippets by copying the ID element value into the code wrapped in ‘$’ symbols (again, as shown above in the more detailed example).

That should be enough to run with, as it stands. We have an idea of how to create snippets, but how do you import them into Visual Studio? Let’s find out…

Importing Custom Code Snippets into Visual Studio

Now you have your snippets file in tow it’s time to rock on over to Visual Studio. You can access the Code Snippets Manager by navigating to Tools > Code Snippets Manager (Ctrl + K, Ctrl + B), as illustrated here:

Image showing how to access the Code Snippets Manager within Visual Studio.

Accessing the Code Snippets Manager.

Once inside the Code Snippets Manager alter the language to the one of your choosing, which is ‘CSharp’ in my case, and click the My Code Snippets folder to select it as the repository for your snippets, before clicking Import to proceed:

Shows the Code Snippets Manager Dialog.

Code Snippets Manager Dialog.

You can now browse to your snippet file and, on selection, will be presented with a confirmation screen that should look like this:

Shows the Code Snippets Manager Import Dialog.

Code Snippets Manager Import Dialog.

Your snippet file will be validated at this stage and you will be notified if the markup is malformed/invalid.

After clicking Finish, you’ll find yourself back in the Code Snippets Manager screen and, with any luck, you should be able to spot your individual snippets under the My Code Snippets directory:

Shows custom snippets in the Code Snippets Manager after import.

Code Snippets Manager After Import.

As easy as pie. The snippets are imported, so let’s use them.

Using Imported Custom Code Snippets

My test bed snippets file provides two snippets for generating a stub class definition implementing the INotifyPropertyChanged interface, which comes into play when constructing MVVM applications in WPF, for example, so that a client can be notified of (and adjust to) an underlying property change in a binding scenario.
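
As a quick aside before we use the snippets themselves, here’s a sketch of why the interface matters (my own illustration, using a hypothetical PersonViewModel with a Name property; this isn’t part of the snippets). A bound property raises PropertyChanged from its setter so that any bindings targeting it refresh automatically:

using System.ComponentModel;
using System.Runtime.CompilerServices;

public class PersonViewModel : INotifyPropertyChanged
{
    private string name;

    public event PropertyChangedEventHandler PropertyChanged;

    /// <summary>
    /// Example bound property; raising PropertyChanged in the setter
    /// lets any bindings targeting Name refresh automatically.
    /// </summary>
    public string Name
    {
        get { return name; }
        set
        {
            if (name != value)
            {
                name = value;
                OnPropertyChanged(); // [CallerMemberName] supplies "Name"
            }
        }
    }

    protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = "")
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}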

Firstly, let’s inspect the partial snippet, which generates the class code only, along with the required event object and OnPropertyChanged method definition (just start typing the shortcut and press tab twice to push the whole snippet into the IDE, whereby you can tab to each placeholder):

Shows a code file before using the partial custom snippet.

Before Partial Snippet Usage.

Shows a code file after using the partial custom snippet.

After Partial Snippet Usage.

You can see that some using statements need to be brought into play to resolve some types here. Frustratingly, Visual Basic supports the concept of including References and Import statements in the snippet XML, whereas C# does not. More details can be found here:

How to: Create a New Snippet with Imports and References

In my scenario, this is an entire stub class so I didn’t have a particular issue with placing the using statements, including the namespace definition, into the code snippet itself (with placeholders):

Shows a code file before using the full custom snippet.

Before Full Snippet Usage.

Shows a code file after using the full custom snippet.

After Full Snippet Usage.

Voila!

I’ve seen some murmurings that this Snippet Editor on Codeplex is worth a look if you are looking for a more robust editor to create your snippets. Support is only listed for Visual Studio 2010 and its version date is listed as 2009, so it doesn’t appear to be that ‘current’. Perhaps a better tool is out there (an extension, or some such); worthy of more investigation:

Codeplex Snippet Editor

That’s all for now, thanks for reading through and I’ll catch you next time!

Developer Testing Hints and Tips

Howdy happy campers.

I want to discuss a piece somewhat divergent from the topic of physical coding, although still a facet of development that is close to my heart (and easy to overlook in many respects when constantly mashing keys and churning out code): developer testing. More specifically, I want to provide a set of guidelines that ‘may’ (insert disclaimer) help with the process and provide some food for thought.

This is in no way a definitive guide or best practice for that matter; more just a personal take on what I find works for me and the guts of a generally beneficial ‘templated’ approach to follow.

I would love to invite discussion on this one (or just get a take on what works for you), so please do hit me up on Twitter or add a comment below; I’d love to hear from you.

My Process

As with any process, ground work and preparation can be vital for achieving a good result. To this end, I invariably start my developer testing on a given work item with a template document that looks like this:

Illustration of how to structure your Developer Testing.

Developer Testing Helper Document Structure.

What goes into your document will largely depend on what technologies you are using, of course. For instance, you may never have a database-centric element to the development you perform, rendering the ‘Database Upgrade’ section null and void in all cases. Ultimately, add and remove sections as you see fit, but do strive for consistency. I myself test a mixture of items that may or may not include T-SQL elements. However, I choose to include the ‘Database Upgrade’ section on every occasion, preferring to mark it as ‘N/A’ when there were no T-SQL-related parts to the item (for my own sanity and for easy recollection later down the line, without the need to scan a lengthy list of changes elsewhere in the notes). Basically, my OCD kicks in and I start to wonder why I haven’t included a section that I ‘always’ include, leading to paranoia that I’ve missed something!

Each section other than Notes (which is probably self-explanatory) can result in a PASS, QUERY or PASS-BACK state. Section state obviously knocks on and influences the result recorded against the ‘Developer Testing Summary’ header. PASS denotes an ‘A-Okay’ state, good to rock and roll! QUERY gives you the opportunity to mark a section with ‘discussion points’ or things you would like to check, without necessarily marking it off as incorrect (I tend to do this a lot, as I love to talk!). PASS-BACK is used in the circumstance whereby an error can be replicated/reproduced consistently, or a logic problem definitely flies in the face of the ‘Acceptance Criteria’ for the story. In circumstances whereby things such as coding standards have been contradicted, I tend to use a mixture of QUERY/PASS-BACK, depending on the notes the developer has provided (it could be a flat PASS, of course, as there are always occasions where the rules need to be broken!).

So, section by section, let’s go over what we have and why…

Notes

It’s incredibly tempting to start diving into code, comparing files and trying to make sense of what the hell is going on but…I may get in trouble here; I’m going to tell you to stop right there. It’s so easy, and I’ve done it (probably) hundreds of times, to get eye-deep in code, wasting large pots of time, before the basic question of ‘what are we doing and why’ has been answered. This is where this section comes in.

Use this area of your notes to compile a few short paragraphs (or bullet points, whatever you prefer) on the following:

  • Read over the developer’s notes and, after discovering if any changes have occurred to the underlying requirements for the story, start to create…
  • Your own summary of the ‘Acceptance Criteria’ for this particular story (or item, whatever term floats your boat. I’m going to use both interchangeably to alleviate bombarding you with the same term too much!).
  • Then, list any other pertinent information surrounding how the developer has coded the item (e.g. decisions that have shaped how the story has turned out). For example, did they place code into a different service than was originally expected because of ‘x’ reason, or did some logic end up in a different layer in the technology stack than conceived originally.
  • Lastly, note any of your initial thoughts, concerns or things you intend to check/look for based on this initial scoop of information.

The core reason I do this is to try to solidify my expectations, begin thinking about a test plan (yes, I like to always perform (rudimentary at the bare minimum) application testing, this isn’t just down to QA in my mind!) and to try to mitigate the chances of any massive surprises. Surprises, although they will always eventually happen one way or another, will lead to more confusion and increase the chances of things slipping through the net. You’ll be able to, by just following this exercise or a similar routine, cross-reference your expectations with the code changes you see and more easily be able to pick up errors, incorrect logic or unrequired alterations. This will limit the chances that something will slip past your mental filter as an ‘I guess that’s correct’ or ‘perhaps that class needed to be changed also, ok’ moment (don’t lie, we’ve all had them 😉 !).

Cool, we’ve formed in our own minds what this item is for, how it’s been developed and what, as a baseline, we are expecting to see. Let’s test (and along the way, discuss a few more tactics).

Database Upgrade

Some of what I’ll discuss here is formed around how my personal development role operates, so feel free to modify this approach to your needs. Again, if you don’t deal in the realm of database development at all, pass go and collect £200; you’ve bypassed this step, congratulations!

The essence of this section surrounds you being able to state that new Stored Procedures, Functions, Views, Triggers, etc. can be ‘created’ without error on a database in a suitable ‘versioned’ state. Also, can ad-hoc data scripts that are part of the development item be run without error?

Some other considerations…

  • Are object creation scripts/ad-hoc scripts expected to be re-runnable? If yes, then specifically test and note this down here.
  • If you are in an environment whereby this kind of testing needs to be performed on multiple databases then mark this down here also (splitting notes down into sections against each target database/environment, whatever is applicable).
  • We work with a ‘versioned’ database so I make an effort to state which version I am on at the start of the testing run for reference.

An example of what this section may look like is illustrated below for reference:

Illustration of how to structure the Database Upgrade Developer Testing Document Section.

Developer Testing Database Upgrade Section Example.

A QUERY/PASS-BACK at this stage will bubble up and alter the status listed for the entire developer testing process. An additional note here; depending on how many queries/issues you find (and the length of the testing notes in general), you may want to copy the core query/error text to the top of the notes for easy review by the developer later (this applies to all of the following sections in fact).

Code Review

Moving on to the main filling of your developer testing sandwich, the actual code review! Obviously, you’ll be reviewing files here and looking at scripts, new files or amended code but definitely take a second or two out (unless your setup has automated builds/continuous integration, or some other clever solution, to tell you this) to make sure the code compiles before proceeding (and make the relevant note). A simple step but one easily forgotten, meaning you can get to the end of a code review before realising parts of the code don’t compile, eek!

I tend to, from a structural and sanity point of view (clarity is key), split my testing notes here into sections based on technology (i.e. T-SQL, C#, JavaScript, etc.) or, at least, make some effort to order up a single list of files by file type. For C# changes, I group code files by the related project (given that projects should represent a logical grouping of types, hence allowing you to dice up changes by functional area, i.e. common extensions, data access helpers, etc.).

The point that you should take away from this, however, is that a little bit of thought and structuring at this phase will make your life easier; especially as a number of code files rack up.

If you’re looking for a small sample on how this section could look, after being fleshed out, then here you go:

Illustration of how to structure the Code Review Developer Testing Document Section.

Developer Testing Code Review Section Example.

However, what about the code review procedure itself I hear you cry! What follows next shouldn’t be taken as an exhaustive list, or correct in every given situation for that matter; more just suggestions as to what I’ve found helpful over time (mental kit bag):

  • For C# (and other object-orientated languages that support this concept), ensure that null values are correctly handled. Whether this is by capturing nulls on call to a given method and throwing an ArgumentNullException, or by doing a ‘not equal to null’ check (!= null) around code that would otherwise fail (a few of these checks are sketched in the code example just after this list).
  • Strings can be tricky buggers, especially in case-sensitive environments! In most cases, comparisons should be performed taking case-sensitivity out of the equation (another case-by-case situation of course). I’d keep an eye out, again for C#, for the correct use of String.ToUpperInvariant, String.ToLowerInvariant and String.Equals. For String.Equals, use an overload containing a StringComparison enumeration type, for case/culture-insensitive options.
  • Keep an eye out for instances of checks being performed against strings being null or an empty string (either one or the other only). This can quickly lead to chaos; switch out for a null, empty or whitespace check (e.g. String.IsNullOrWhiteSpace).
  • Empty try/catch handlers are evil. Kill any you find.
  • Check up for instances whereby a class consists of all static members, but the class is not marked as static.
  • Train the eye to look for casting operations; you’ll always catch a few where the casting operation ‘could’ throw exceptions and should, therefore, be subject to more careful handling.
  • Big bugbear in the realm of coding: if a method requires scrolling to get through, it’s a significant indication right off the bat that it is a prime candidate for refactoring. Unless there is a good reason, or it is clearly performing one logical function, consider having a conversation about breaking the method down.
  • Look for missed opportunities to rationalise code using inheritance. The most common one I see (and forget myself) is the abstraction of code to base classes and then using virtual methods/overrides in subclasses. Hawk-eye for types that should be abstract.
  • A simple one, but something that could easily slap you in the face if you’re not careful. When ‘language switching’, in a DT sense, take a second to make a mental note that you should be changing mind-sets (i.e. syntax is changing, get your game-face on!). For example, stare into the abyss of C# for long enough (seeing ‘!= null’) and you may, on switching to T-SQL, not notice a ‘!= NULL’ that should have been an ‘IS NOT NULL’. Those trees can be damn hard to find in the woods, after all!
  • Watch out for expensive operations, whereby values should be obtained once, ideally, then cached. It can be easy to let code skip by that repeatedly calls a database, for instance, to the detriment of performance (or possible errors, depending on the nature of the functionality called).
  • I love, love, loooovvvveeeee comments! Probably (ok, to the levels of being a little OCD about it!) too much. As far as C# code goes, I prefer (but don’t fail on this basis alone) XML Comments for C# and like to see comments on bulkier pieces of T-SQL. If there is a sizeable piece of code, whereby its function stretches beyond ‘trivial’, I like to see at least a short statement stating intent (what the developer is expecting the code to do is key by the way…as discussed next).
  • Where you have comments, link the intent in these comments back to what the code is actually doing; then trail it back to the item’s ‘Acceptance Criteria’ where appropriate. I have been rescued (as a developer, submitting my work for DT) countless times by those performing DT on my code, just by someone relaying to me that ‘what I thought my code was doing’ (based on my comments) doesn’t tie up to the actual functionality being offered up. This has led to me, of course, face-palming myself but being relieved that the gap in my intent, when checked off against my actual code, had been picked up by somebody in time to catch it before QA (or deployment, gulp!). State intent, then reap the rewards when mistakes you make are more rapidly picked up for rectification.
  • Be sure to look for the use of language constructs/keywords or syntactic-sugar that is not permissible on your baseline, minimum supported environment (i.e. older versions of SQL Server or .NET), if what you work on has this concept of course. This is sure to be something that will get picked up by QA causing bounce backs, or by your consumers later on if you’re not careful!
  • Keep a look out for code that could (or should) be shared or has been placed in a project/location that does not make logical sense. At a bare minimum, picking up on this sooner rather than later will keep your code base tidier and allow ample opportunities to put great code in places where it can be leveraged as much as possible. In other cases, asking these kinds of questions can expose flaws and issues with the way a solution has been architected, which occasionally will steer you clear of tight spots later down the line.
  • Where shared code has been changed, look for instances whereby other applications/areas of the code base could be broken as a result of the changes. Recompile code to check for this as required. I got bitten on the bum by this recently :-?.
  • Keep up to date with any coding standards documents that should be adhered to and make sure the guidelines are followed (within reason of course; you’ll always find a scenario whereby a rule can, and should, be broken).
  • Really do consider writing and using Unit Tests wherever possible. They are a useful facet in the grand scheme of things (I believe at least) and they do carry weight when pitched up against visually checking code and application testing in general.
  • Last little nuggets, which I see from time to time. Look for objects constantly being created inside loops, heavy amounts of string concatenation not using the correct constructs (e.g. a StringBuilder in C#) or missed opportunities to create sub Stored Procedures in T-SQL (sectioning off code to gain performance boosts and obtain better execution plans). In fact, for T-SQL it can be a useful exercise to check the performance of non-trivial pieces of code yourself by changing how it’s structured, whilst obtaining the same results of course. You may or may not be able to increase performance along the way, but you’ll have far better comprehension of the code by the end regardless.
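
To make a few of these points concrete, here’s a small, contrived C# sketch (entirely hypothetical names, purely to illustrate the kinds of patterns I keep an eye out for: null argument handling, null/empty/whitespace checks, case-insensitive comparison and a StringBuilder in a loop):

using System;
using System.Collections.Generic;
using System.Text;

public static class ReportBuilder
{
    public static string BuildCsvLine(IEnumerable<string> values, string header)
    {
        // Null arguments captured up front with an ArgumentNullException.
        if (values == null)
        {
            throw new ArgumentNullException("values");
        }

        // Null, empty or whitespace handled in one check, rather than just "".
        if (string.IsNullOrWhiteSpace(header))
        {
            header = "DefaultHeader";
        }

        // Case-insensitive comparison via a StringComparison overload.
        bool isTotals = string.Equals(header, "TOTALS", StringComparison.OrdinalIgnoreCase);

        // StringBuilder instead of repeated string concatenation in a loop.
        var builder = new StringBuilder(header);
        foreach (string value in values)
        {
            builder.Append(',').Append(value ?? string.Empty);
        }

        return isTotals ? builder.ToString().ToUpperInvariant() : builder.ToString();
    }
}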

Hopefully, this little snapshot from my bag o’ tricks is enough to get you started, or get the brain-juices flowing. Let me know what you think of these suggestions anyway; I’d really appreciate the opportunity to collate others general thoughts and get a collective consensus going.

Application Testing

Here is where I will defer the giving of advice to my beloved QA counterparts on this beautiful planet; this, of course, isn’t my area of expertise. My only opinion here is (developers will possibly hate me for stating it) that developers ‘should’ always perform application testing alongside a code review. In fact, I’m a keen advocate for developers being involved in performing QA on the odd occasion. I personally like doing this, provided I have a trusty QA on hand to assist me (thankfully, I work with the best one around ;-), so no worries there). The simple reasons for this are:

  • One way or the other, acquisition of Product Knowledge is going to be invaluable to you. It’s just as valuable to start using your products in anger as it is to analyse code for hours on end. The side-note here is that this is part of your overall ‘worth’ as a developer, so don’t neglect it.
  • At this stage, you get to think as the customer might. Ideas and thoughts you have at this stage, which direct more development or changes to the product, will be amongst some of the best (and most rewarding when it comes to getting that warm and fuzzy feeling!).
  • Urm…it’s embarrassing to say ‘oh yeah, that code’s great, thumbs up!’ for it then to explode in someone else’s face on the first press of a button! Easily avoided by following the process through from end to end, no matter what.

Ok, I’ll have a go at channelling one QA thought. Ok, I got it, here’s one from a mysterious and wise QA guru:

Mysterious and wise guru here… a friendly reminder to developers…never, ever, test your items using only one record! The reason? Well, I’ll test it with more than one record and break it instantly!

If anyone doing QA reads this feel free to feed us your arcane knowledge…God knows we need it! I would advise you keep the original item requirements in mind here of course, whilst testing; securing any process variants in your thoughts that could potentially throw carefully laid plans to waste (e.g. what if we go back and forth from screens x and y between completing process z, or we save the same form information twice, etc.). Your knowledge of the code can help at this stage so use the opportunity whilst you have it.

Before I forget, an example of this section could look like the following:

Illustration of how to structure the Application Testing Developer Testing Document Section.

Developer Testing Application Testing Section Example.

Code Review/Application Testing – The Most Important Point…

Do it!!! If you’re not sure (as I am still on a regular basis) then ask the question and run the risk of looking like an idiot! Be a spanner, who cares at the end of the day. I dread to think of how many developers have stared at code and, ultimately, let stuff slide because they refused to pipe up and just say they weren’t sure or ‘didn’t get it’. At the end of the day, it’s better to ask questions and if there turns out to be no issues, or it’s a simple misunderstanding, then no harm, no foul. On a good number of occasions I query things to later realise that I missed a line of code meaning it does work as intended, or there’s some process that had slipped my mind…it hasn’t got me sacked (ahem, yet!). So my advice is just to open up and have a natter at the end of the day; it’ll be worth the ratio of ‘idiot’ to ‘bug-saving’ moments, trust me :-).

Admin

As with any process, there will always be (and if there isn’t for you then let me know where you work because it’s awesome!) a certain amount of ‘red tape’. Use this last section to keep track of whether any procedural bits and bobs have been handled. For example, I’m expected to cover the creation of a Release Note (as part of the practices I follow) for any item I work on, so it should be marked down in this section as to whether I’ve completed it or not. It could end up just being a very simple section, like the following:

Illustration of how to structure the Admin Developer Testing Document Section.

Developer Testing Admin Section Example.

I hope this has been helpful and informative; or, at least, got the mind going to start thinking about this process. Again, as mentioned above, I would love to hear your thoughts so please do get in touch either here or via social media.

Cheers all, keep smacking keys and producing coding loveliness in the meantime 🙂

Future Decoded 2015 Play-by-play

Hello beautiful people!

It’s a fantastic, gorgeous Saturday morning (it’ll be Monday by the time I hit the publish button, such is the enormity of the post!); the birds are chirping, the sun is shining through the balcony windows (and there is a bloody wasp outside, STILL!!!) and my wife has left me…………to go on a girly weekend (that probably sounded more alarming than intended; hmmm, oh well, it stays!). Whilst she is away fighting the good fight, this gives me the opportunity to go over my thoughts on the recent Future Decoded 2015 event that took place at ExCeL London.

The links to outline this event have been posted before on my blog, but just in case, here are the goods again:

Future Decoded 2015
Future Decoded 2015: Technical Day Highlights

Before we begin, it’s worth pointing out that I attended this event a couple of weeks ago, so apologies if any inaccuracies pop up. I’ll do my best to stick to the facts of what I can remember and specific points that interested me; other commitments ended up preventing me from getting to this particular post sooner. You’ll all let me off, being the super gracious, awesome folks you are, I’m sure :-).

So, FIGHT!!!!!

Sorry, I had a dream about Mortal Kombat last night and upper-cutting people into the pit – What a great stage that was! Ah, the memories….Let’s begin/start/get on with it then.

Morning Key Notes

The morning Key Notes were varied and expansive in nature. I won’t discuss all of them here, only the takeaway points from the talks that struck a chord with me.

1) Scott Guthrie. EVP Cloud and Enterprise, Microsoft (Azure).

I was particularly looking forward to this talk, being a keen follower of Scott Guthrie (and Scott Hanselman), and I normally try to catch up with Channel 9 features and Azure Fridays whenever possible (I’ve linked both, although I’m sure most of you, if not all, have come across Channel 9 before or heard of Azure Fridays).

The talk did have primer elements as you would expect, i.e. here’s the Azure Portal and what you can expect to find (in relation to resources, templates you can access for applications, services, Content Distribution Networks (CDN), etc). The next bit really caught me cold: who was expecting a giant image slide of a cow? I certainly wasn’t…

Estrus in Cows

What followed was a full example of real-time data recording and assessment surrounding the monitoring of cows in Asia. I’ve provided a link below that sums up the concept of Estrus (being in heat) nicely enough, but in layman’s terms it relates to cows ‘being in the mooooooood’ (wife insisted I added that joke). Obviously, a farmer’s ability to accurately detect this, urm, state of being in a cow is an incredibly important factor in the ability to produce calves.

It turns out that a cow tends to move more when in the Estrus state; something that can certainly be measured. So, with pedometers attached to cows to measure steps taken and an Azure based service receiving and providing feedback in real-time, the farmer in question was able to take action to maximise calf production. Further to this, analysis of the data gathered was able to identify trends between how long cows had been in the Estrus state and the gender of offspring. Crazy stuff, but all very interesting. Feel free to read further to your heart’s content:

Cow Estrus Detection

The Internet of Things (IoT) was touched on next and another brief, live coding example ensued.

Scott produced a small, bog-standard heat sensor (apparently just a few pounds; I was impressed he didn’t say dollars!) and proceeded to demonstrate a basic WinForms application passing a JSON payload to Azure in real-time (measurements taken a few times a second). This strikes me as exciting territory, and I have friends who already develop applications working in tandem with sensors, backed up by technologies such as the Raspberry Pi and Arduino, for example. The talk closed with the conceptual idea that the majority of data in the world today is still largely unmeasured, and the hope that Azure would be an important platform in unlocking developers’ potential to measure previously untapped data.

2) Kevin Ashton. Inventor of the “Internet of Things”.

Kevin coined the term the Internet of Things (IoT), and gave a very good talk on what this means, as well as identifying certain ‘predictions’ for the future; for instance, that we, as a species, would survive climate change. He quickly noted that calling ‘BS’ on this particular one would be tricky should we suffer a doomsday-style event at the hands of climate change (I don’t imagine the last thoughts of humanity to be ‘oh, Kevin Ashton was so bloody wrong!’). Another interesting prediction: we would all own a self-driving car by 2030. Prototype examples already exist, such as Google’s (and Apple’s) efforts, and the Tesla:

Google/Apple (Titan) Self Driving Cars
The Tesla

Self-driving cars being one of the cases in point, the IoT relates to how a whole new host of devices will now become ‘connected’. Besides cars rigged up to the internet, we are all aware of the hooking up of internal systems in our homes (heating, etc.) and utility devices (the washing machine), so as to always be online and accessible at a moment’s notice. This world isn’t coming per se; it’s essentially already here.

Pushing past this initial definition, Kevin was keen to stress that the IoT is not limited to just ‘the connecting of hardware to the internet’. Wiki sums this up quite nicely on this occasion, but software (services and analytics) that moves forward with hardware changes will ultimately change the way we live, work, shop and go about our daily lives. Whether this be data relayed from the fridge to Google Glass (yes, you are out of milk!), or perhaps a self-driving car ordering ‘click and collect’ shopping and driving you to the collection point after work (not to mention triggering the heating x miles from home!). Software, and the analysis of the new kinds of data we can record from interconnected elements, will be a huge driving force in how our world changes:

Internet of Things (IoT)

Lastly, before I forget and move on, a key phrase voiced several times (although I cannot remember the exact speaker, so apologies for that; it was probably David Chappell) was to reset your defaults. Standard client/server architecture was discussed and, for those of us who are part of long-running businesses, this is what we are exclusively, or at least partially, dealing with on a daily basis still. However, the use of mobile devices, tablets, etc., as clients, with the cloud as the underpinning location for the services these clients communicate with, is becoming the norm. For start-ups today, mobile-first development and the cloud (Azure or Amazon Web Services (AWS)) are probably the initial go-to.

For some of us (speaking from a personal standpoint only), a major factor in our success as developers could simply be determined by understanding the cloud and getting the necessary experience to make the transition (for those who are not actively taking part in this world of course).

So, now we have the IoT, let’s talk security…

3) Graham Cluley. Security Analyst, grahamcluley.com.

Graham delivered a funny and insightful talk surrounding everyone’s ‘Oh my God, the horror, please kill me’ subject: the wonderful world of security.

In a nutshell, he argues (and certainly proves his point, as you’ll read next) that the IoT will bring wonders to our world, but not without issues. We now have a scenario whereby a breadth of new devices have suddenly become internet connected. However, are the driving forces behind these changes the people who are used to dealing with the murky world of malware, viruses and hacking attempts (such as OS developers)? Probably not, is the initial answer. This is, of course, just a cultural divide between those used to traversing the security world and protecting devices from such attacks, and those tasked with bringing new devices to the interconnected world.

The hacking of self-driving cars (big topic it would seem) was discussed:

Fiat Chrysler Recalls

Also, the potential of hacking pacemakers (Bluetooth/Wi-Fi enabled) was covered, famously featured in the TV series Homeland, which actually led to Vice President Dick Cheney’s cardiologist disabling the wireless functionality of his device:

Pacemaker Hacking
Could Pacemakers Be Hacked?

Although funny, the talk did indeed bring up a very serious issue. The ramifications could be catastrophic, depending on the types of devices that ultimately end up being exposed to the masses via the web. Essentially, as the IoT age develops, extra care must be taken to ensure that security is right up there in the hierarchy of priorities when developing software for these devices.

4) Chris Bishop. Scientist and Lab Director, Microsoft Research.

The last talk I would personally like to discuss briefly was by Chris Bishop; there were a few great nuggets here that are well worth covering.

The idea of Machine Learning (not a topic I was overly familiar with for starters), Neural Networks and Pattern Recognition laid the foundation for a talk looking at the possibility of producing machines with human-level, or even super-human, intelligence.

The Microsoft Kinect was used to demonstrate hand-tracking software that, I have to admit, had an incredible amount of fidelity in recognising hand positions and shapes.

Lastly, a facial recognition demonstration that could estimate, with good accuracy, the emotional state of a person was kicked off for us all to see. Very, very impressive. There was most certainly an underlying feeling here (and as much was hinted at) that this kind of technology has many hurdles to jump. For instance, building something that can consume an image and accurately describe what is in that image is still a flaky concept, at best (and the difficulties of producing something capable of this are relatively vast).

Still, a greatly enjoyable talk! A book was touted, and I believe (please don’t shout at me if I’m wrong) this is the one:

Pattern Recognition and Machine Learning

After the morning Key Notes, a series of smaller talks and break-out sessions were available to us. Here’s how I spent my time…

Unity3D Grok Talk

Josh Taylor. Unity Technologies.

It’s my sincere hope that, on discovering this, my employer won’t decide to sack me! This was over lunch and was a self-indulgent decision I’m afraid! You’ll know from some of my historical posts that I have a keen interest in Unity3D (and have spent time making the odd modest prototype game here and there), and I was interested to see how Unity 5 was progressing, especially as a more cohesive experience with Visual Studio had been promised.

In this short, 20 minute talk, we experienced how Visual Studio (finally) integrates nicely into the Unity3D content creation pipeline. Unity3D now defaults to using Visual Studio as the editor of choice, with Monodevelop being pushed aside. Apologies to anyone who likes Monodevelop, but I’ve never been able to get behind it. With wacky intellisense and what I can only describe as a crash-tastic experience in past use, I haven’t seen anything yet to sway me from using Visual Studio. In fact, it was demonstrated that you can even use Visual Studio Code and, as it’s cross-platform, Mac and Linux users can switch to this too if they wish. More reasons to leave Monodevelop in the dust? It’s not for me to say really, go ahead and do what you’ve got to do at the end of the day!

In order to debug Unity projects in Visual Studio in the past, a paid-for plugin was required. This particular plugin has been purchased by Microsoft and is now available to all. Being able to easily debug code doesn’t sound like much, but trust me it’s like having a basic human right re-established – such good news!!!

The new licensing model was also commented on, a massive plus for everyone. The previous Free/Pro divide is no more; now everyone gets access to the lion’s share of the core features. You only need to start spending money as you make it (fair for Unity to ask for a piece of the pie if you start rolling in profit/expanding a team to meet the new demand). For me, this means I actually get to use the Unity Pro water effects, hoorah ;-).

Following this, I spent a bit of time last weekend watching the Unite 2015 Key Notes, discussing 2D game development enhancements, cloud based builds and Oculus support. Well worth a look if and when time allows:

Unite 2015 Key Notes

Plus, if Oculus technology interests you, then it’s definitely worth watching John Carmack’s (formerly of id Software, the mind behind Wolfenstein and Doom) Key Note from the Oculus Connect 2 event:

John Carmack Oculus Keynote

Very exciting times ahead for Unity3D I believe. Self-indulgence over, moving forward then…

Journey to the Intelligent Cloud

Corey Sanders. Director of Program Management, Azure.

Following the Unity3D talk, I made my way back to the ICC Auditorium (I missed a small section of this particular talk, but caught the bulk of it) to catch up on some basic examples of how the new Azure Portal can be used. This took the form of a brief overview of what’s available via the portal, essentially a primer session.

In my recent, personal work with Azure I’ve used the publishing capability within Visual Studio to great effect; it was very transparent and seamless to use by all accounts. A sample was provided within this particular session which demonstrated live coding changes, made in GitHub, being published back to a site hosted on Azure.

Going off on a tangent….

Very much a personal opinion here, but I did find (and I wasn’t the only one) that a good portion of the content I wanted to see was a) on at the same time (the 1:15pm slot) and b) during the core lunch period when everyone was ravenous; I’m a ‘hanger’ sufferer I’m afraid. C# with Mads Torgerson, ASP.NET 5, Nano Servers and Windows 10 (UWP) sessions all occupied this slot, which drove me a little nuts :-(. This felt like a scheduling issue if I’m honest. I’d be interested to hear from anyone who did (or didn’t) feel the same.

I was so disappointed to miss Mads Torgerson, I very much enjoyed the recent C# language features overview and would have loved to have made this breakout session! I did walk past him later in the day, and I hope he never reads this, but he seemed ridiculously tall (perhaps Godly C# skills made him appear several inches taller, who knows!). It doesn’t help that I’m on the shorter side either, I just wanted to be 5′ 11″, that’s all I ever wanted (break out the rack, I need to get stretching!). I should have said hello, but wimped out!

F# Language Breakout Session

Don Syme. Principal Researcher, Microsoft Research.

This was easily the part of the event that resonated the most with me, and strongly influenced the foray into F# that I undertook recently. Don Syme, the designer and architect of the F# language, took us through a quality primer of the syntax and how F# can be used (and scaled) for the cloud.

All of this aside, the most impressive part of the talk was a live demonstration of F# Type Providers. Again, this is fully covered in my previous post so I’ll just direct you to that, which in turn will aid me in cutting down what is now becoming a gargantuan post. In summary, the ability to draw information directly from web pages, rip data straight from files and databases, and combine and aggregate it all together using minimal code produces a terse, easy to understand and pretty darn good experience in my book. Even the code behind producing visual feedback, in the form of the charting API, is succinct; the bar really isn’t set too high for new starters to get involved.

If you decide to give anything a go in the near future, I would give F# the nod (followed closely, just a hair’s breadth away, by jQuery in my opinion). Certainly check it out if you get the chance.

Final Key Note

Professor Brian Cox. Physicist.
Krysta Svore. Senior Researcher, Microsoft Research.

The day proceeded in fast forward and, before we’d really had the chance to gather our thoughts, we were sitting in the main auditorium again faced by Professor Brian Cox, Krysta Svore and a menagerie of confused attendees staring at mathematical formulas outlining quantum theory.

Into the wonderful world of quantum computers we dance, and in my case, dragging my brain along from somewhere back yonder in a desperate attempt to keep up. Thankfully, I’m an avid TED talk fanatic and had, in the run up to the event, brushed up on a few quantum theory and quantum mechanics videos; lucky I did really. The content was dense but, for the most part, well put together and outlined the amazing (and potentially frightening) world of possibilities that quantum computers could unlock for us all.

Professor Brian Cox cruised through the theories we’d need to be intimate with in order to understand the onslaught of oncoming content surrounding quantum computers. In essence, a traditional ‘bit’ has a defined state (like a switch), on or off. However, and this is the simple essence of what they were trying to get to, traditional bits are reaching limitations that will prevent us from solving more complex problems in a timely manner (you’ll see what I mean in a second). Therefore, qubits, born from quantum theory, are the answer.

Now, I’m not going to insult your intelligence and go into too much detail on a subject that I am clearly not an expert in. So, just in ‘layman’s bullet points’, here is what I took from all that was said and done across the Key Note:

  • With bits, you are dealing with entities that have a fixed state (0 or 1); a deterministic system, if you will, with limitations in its problem crunching power.
  • Qubits, however, take us into the realm of a probabilistic system. A qubit can be in a superposition of all of the allowed states, not just 0 or 1.
  • Therefore, the problem crunching powers of qubits are exponential in nature, but their probabilistic nature makes measuring them (and interactions involving them) difficult to get to grips with (the standard notation is sketched just after this list).
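
For anyone who likes to see the notation (my own addition, standard textbook stuff rather than a slide from the talk), a single qubit’s state is written as a superposition of the two basis states:

$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \quad |\alpha|^2 + |\beta|^2 = 1$

Measurement yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$, and a register of n qubits is described by $2^n$ such amplitudes; which is where the exponential problem crunching power mentioned above comes from.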

So is it worth fighting through the technical problems in order to harness qubits? What kind of gains are we talking about here?

Krysta Svore outlined an example showing that it would take roughly one billion years for a current super computer to crack (more complex than standard) RSA encryption. How long would it take a quantum computer, you may ask? Significantly faster is the answer, estimated at around one hundred seconds in fact. This clearly defines for us the amazing problems we’ll be able to solve, whilst simultaneously illustrating the dangerous times that lay ahead from a security standpoint. Let’s just hope cryptography keeps up (I can see a few sniffs to suggest things are in the pipeline, so I will keep an eye out for news as it pops up).

So you want a quantum computer, I hear you say! Hmmm, I wouldn’t put it on the Christmas list anytime soon. Due to the fact that current quantum computers need to be super cooled (and, from the pictures we got to see, don’t look like something you could hike around with!), we’re not likely to get our hands directly on them in the near future.

Can you get your mitts on quantum simulators today? Apparently yes is the answer (completely untested links, just for you to peruse on your own, good luck):

QC Simulators
Project Liquid

Taking nothing away from the Key Note though, it was a concrete finish to an excellent event. Would I go again? You bet! Should we get the train next time instead of driving? Taking into account the mountains of free beer and wine on offer, of course! To finish up, before summarising the Expo itself, if you haven’t been and get the opportunity (in fact, actively seek the opportunity, enough said) then definitely book this in your calendar, thoroughly brilliant.

Expo

Very, very quickly, as I am acutely aware that your ability to focus on this post (if not already exhausted) must have completely diminished by this point, I wanted to describe what the Expo itself had to offer. If you’re still reading, give yourself a pat on the back!

One of the more compelling items we saw was the use of the new Lumia phone as a (kind of) desktop replacement attempt. Let’s get one thing straight, you’re not going to be doing hardcore software development using Visual Studio or any other intensive task on this device anytime soon. However, there was certainly enough evidence to suggest that basic productivity tasks would be possible using a mobile phone as a backbone to facilitate this.

The Lumia can be hooked up to a dock, akin to the Surface Pro 4 (the docks are subtly different apparently, so are not cross-compatible), which allows it to be tied to a display device. You can also get a folding mouse and keyboard for a very lightweight, on-the-go experience. Interesting certainly, but there is a definite horse-power issue that will prevent anyone working on anything remotely intensive from getting on board. Anyway, for those interested, the link below will get you started:

Lumia Docking Station

I saw a few Surface Pros, and wondered whether we could potentially smuggle a few out of the Expo! Only kidding, no need to call the Police (or for anyone I work with to start thinking I am some kind of master criminal in the making) :-).

An Oculus demonstration booth was on the Expo floor, and displays were hooked up to show what the participants were experiencing. It was noted that a few of the people using the Oculus seemed to miss the point a bit, and kept their heads completely still as they were transported through the experience. Once the heads started moving (to actually take in the world) you could visibly see people getting incredibly immersed. Alas, the queues were pretty darn large every time I made my way past, so I didn’t get a chance to experience it first-hand. One for the future.

There was also a programmable cocktail maker, an IoT masterpiece I think you’ll agree. A perfect union of hardware, software and alcohol, a visionary piece illustrating the future has arrived!

The next time an event like this comes around I will endeavour to get a post up in a timely fashion (which will vastly improve the content I hope).

Thanks for reading and a high five from me if you made it this far. Back to coding in the upcoming post I promise, until the next time, cheers from me (and would you believe it, it’s now Tuesday)!

Visual Studio 2015 First Impressions – PerfTips

I’ve had a super quick tour of Visual Studio 2015 Community Edition and I have to say I’m pretty impressed as it stands. I’m currently downloading Unity 5 and will shortly be installing the latest toolkits that go hand in hand with it.

In the meantime, I’ve been looking at some of the new debugging performance aids; namely, PerfTips.

Full information can be found here:

Visual Studio PerfTips

In the example screenshot below I have rigged some methods (using the new async syntax in a rough and ready way) to simulate a slight delay in the retrieval of information. I’ve placed breakpoints in the code and you can see that, when hit, a small diagnostic message is shown in the editor detailing the time, in milliseconds, that has elapsed during the previous method call. Very, very nice:

PerfTips In Action

PerfTips In Action
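For anyone wanting to rig up something similar, here’s a minimal sketch along the lines of what I used (the method and variable names are made up purely for illustration). Set breakpoints around the blocking call and the PerfTip has something worth measuring:

using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Breakpoint here and on the WriteLine below; when the second one is
        // hit, the PerfTip shows the milliseconds elapsed since the first.
        string information = RetrieveInformationAsync().Result;
        Console.WriteLine(information);
    }

    // Simulates a slight delay in the retrieval of information.
    static async Task<string> RetrieveInformationAsync()
    {
        await Task.Delay(1500);
        return "Some hard-won information";
    }
}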

You will also notice that some extra diagnostics, listing the events that have occurred so far and interesting metrics such as memory usage, are being shown in the Diagnostic Tools window on the far right. This is something I’ll have a further dig into; it looks interesting at first glance.

The only thing I didn’t like off the bat was the default colour of the PerfTip text itself (which is nondescript when not highlighted, as standard). I have a penchant for the Visual Studio Dark theme, so I prefer colours that pop a little more; just personal taste really. Thankfully, this can be easily rectified by navigating to Tools > Options > Environment > Fonts and ‘Colours’ (sorry, I’m British, I can’t help myself!) as shown here:

PerfTips Before Customisation.

PerfTips Before Customisation.

From here, you can adjust the Item foreground value (ensuring that PerfTips is selected from the Show Settings for drop-down and that Text is highlighted in the Display items selection box). Of course, you’re also free to set the colour of the PerfTip text when highlighted by selecting the Highlighted Text item from the Display items selection box and adjusting the Item foreground to the desired colour. If the colours on offer don’t float your boat, the usual custom colour picker is present also.

You can fly by this handy, dandy post for more details:

Customise PerfTips

The end result (depending on the colour you pick) is as follows:

PerfTips After Customisation.

PerfTips After Customisation.

I did receive a message stating that the application may need a restart for changes to take hold. In this case, I got away with it and the change kicked in immediately.

Anyway, just a short post for tonight. I will detail any other interesting finds as they crop up.

Until the next time!

Decompiling/Recompiling to fix a real issue – It’s been a while!

I won’t be using ‘real world’ coding samples here, for obvious reasons, but I’ll lay out a scenario so you can (hopefully!) see how a spot of decompiling/recompiling could come in handy.

An issue rolled its way to me that related to a legacy/archaic .Net website utilising a WCF service.

First issue: important parts of the source code were not available to me. I’m sure you’ll agree, this is not a great position to be in considering we had only just reached the starting block!

After a small amount of toiling, I hit the inevitable point whereby I’d solved any IIS/WCF configuration issues and now, desperately, needed to get into the meat of the source code. The only option was to extract the dlls that made up the application and supporting services and get decompiling!

There are some great tools for this. If you want to pay money, and fancy a nice little selection of additional perks, then Redgate’s .Net Reflector is still a great pick:

Redgate .Net Reflector

On the free end of the spectrum, I prefer (and this certainly seems to be a common pick amongst my colleagues) Telerik’s JustDecompile:

Telerik JustDecompile

Armed with the dll dump and JustDecompile, I went to work. Logical steps to enable basic logging at the service level had solved a handful of issues, which was all fine and dandy. Unfortunately, I hit a dead end when the application surfaced an ‘unknown error’ in the UI and there was no sniff of an exception in the trace logs; bloody dark days if you ask me!

I took the offending dlls and started digging. A situation similar to the following presented itself:

JustDecompile Showing Exception Handling Structure.

JustDecompile Showing Exception Handling Structure.

Using the powers of imagination, simply insert 20 or 30 lines of code (some calling out to other helper methods, all using types capable of throwing exceptions) in place of MethodThatCanThrowAMultitudeOfExceptionTypes and you’re all set!
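To save your imagination some legwork, here’s a hedged C# reconstruction of the shape the decompiler presented (the member names are lifted from the IL dump further down; the bodies are placeholders rather than the real code):

using System;
using System.IO;

namespace RandomApplication
{
    public class MyApplication
    {
        public int DoSomethingReallyCool()
        {
            int result = 0;
            try
            {
                MethodThatCanThrowAMultitudeOfExceptionTypes();
            }
            catch (InvalidOperationException)
            {
                result = 1;
            }
            catch (IOException)
            {
                result = 2;
            }
            catch (Exception)
            {
                // The troublesome handler: the exception is discarded and
                // reduced to a magic number, with no detail logged anywhere.
                result = 3;
            }
            return result;
        }

        private void MethodThatCanThrowAMultitudeOfExceptionTypes()
        {
            // Imagine 20 or 30 lines here, some calling helper methods, all
            // using types capable of throwing exceptions.
        }
    }
}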

So, the core issue here is that setting an integer variable really isn’t giving us the feedback we need in every exception case (I don’t like this approach across the board, but I digress!). Based on knowledge further up the call chain, I knew that I must be hitting the general ‘Exception’ handler here, which made my plight even worse; I could be hitting any old exception type (insert big sigh here).

Without the source files, I’m unable to directly manipulate the original code and improve my situation. Enter ildasm.exe (launched here from a Visual Studio command prompt):

Example of Running ildasm.exe.

Example of Running ildasm.exe.

A full explanation of this tool can be found here:

Ildasm.exe

The tool, in a nutshell, can be used to inspect an assembly’s manifest in a visual way and, most importantly, access the Common Intermediate Language (CIL, MSIL or just IL) that describes the assembly. I won’t go into an in-depth discussion of what IL is but, in short, it comprises the platform-agnostic instructions produced by compilation, along with type metadata. This code is compiled on the fly, when required, by a JIT (just-in-time) compiler geared towards the specific platform environment. In addition, the manifest describes the actual assembly. Although lower-level in nature, IL is still fairly readable and can be dumped into an IL file from ildasm.exe using the File > Dump command, as shown below.

Dump Options for ildasm.exe.

Dump Options for ildasm.exe.
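As an aside, if you’d rather bypass the GUI, ildasm.exe can also write the IL straight to a file via its /out switch; something along these lines (the file names here are hypothetical, borrowed from the namespace in the IL below):

ildasm RandomApplication.exe /out=RandomApplication.il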

If you want to read up on the nature of what CIL is/how it is used, here is a wiki-based primer:

Common Intermediate Language

Anyway, here is the CIL for our troublesome member as it stands (along with a screenshot to show you roughly what you can expect to see after producing an IL file):

Generating and Inspecting CIL (IL).

Generating and Inspecting CIL (IL).

  .method public hidebysig instance int32 
          DoSomethingReallyCool() cil managed
  {
    // Code size       42 (0x2a)
    .maxstack  1
    .locals init (int32 V_0,
             int32 V_1)
    IL_0000:  nop
    IL_0001:  ldc.i4.0
    IL_0002:  stloc.0
    .try
    {
      IL_0003:  nop
      IL_0004:  ldarg.0
      IL_0005:  call       instance void RandomApplication.MyApplication::MethodThatCanThrowAMultitudeOfExceptionTypes()
      IL_000a:  nop
      IL_000b:  nop
      IL_000c:  leave.s    IL_0023

    }  // end .try
    catch [mscorlib]System.InvalidOperationException 
    {
      IL_000e:  pop
      IL_000f:  nop
      IL_0010:  ldc.i4.1
      IL_0011:  stloc.0
      IL_0012:  nop
      IL_0013:  leave.s    IL_0023

    }  // end handler
    catch [mscorlib]System.IO.IOException 
    {
      IL_0015:  pop
      IL_0016:  nop
      IL_0017:  ldc.i4.2
      IL_0018:  stloc.0
      IL_0019:  nop
      IL_001a:  leave.s    IL_0023

    }  // end handler
    catch [mscorlib]System.Exception 
    {
      IL_001c:  pop
      IL_001d:  nop
      IL_001e:  ldc.i4.3
      IL_001f:  stloc.0
      IL_0020:  nop
      IL_0021:  leave.s    IL_0023

    }  // end handler
    IL_0023:  nop
    IL_0024:  ldloc.0
    IL_0025:  stloc.1
    IL_0026:  br.s       IL_0028

    IL_0028:  ldloc.1
    IL_0029:  ret
  } // end of method MyApplication::DoSomethingReallyCool

The key issue, as previously stated, was the catch statement for the general ‘Exception’ type (defined in IL as follows):

    catch [mscorlib]System.Exception 
    {
      IL_001c:  pop
      IL_001d:  nop
      IL_001e:  ldc.i4.3
      IL_001f:  stloc.0
      IL_0020:  nop
      IL_0021:  leave.s    IL_0023

    }  // end handler

This certainly isn’t going to be a masterclass in IL coding, but just as a demonstration I’m going to alter the catch handler as follows, using IL geared to throw the exception back up to the calling code. I’m sure this is an art in itself, but I’m definitely not the one to teach you (maybe I’ll add it to the hit list as an interesting learning topic!). Each statement begins with a ‘label’ (starting with ‘IL_’ and ending with a colon); one golden rule to remember here is not to duplicate any labels across statements. Doing so will result in errors when you try to recompile these instructions back into an exe or dll.

    catch [mscorlib]System.Exception 
    {
      IL_001c:  pop
      IL_001d:  nop
      IL_001e:  rethrow
    }  // end handler
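For reference, that handler is now equivalent to the following C#; the rethrow instruction throws the original exception onwards, preserving the stack trace (unlike throwing a brand new exception):

    catch (Exception)
    {
        throw; // 'throw' with no argument compiles down to the IL 'rethrow'
    }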

With this change made, I can now use another tool called ilasm.exe to recompile these instructions back into an exe (although this was a dll in the scenario I previously dealt with; ilasm.exe allows additional command flags to be specified, such as /dll, to determine the output file type):

Example of Running ilasm.exe.

Example of Running ilasm.exe.
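From memory, the invocation looks something like the below (file names hypothetical again; /output names the resulting file and /dll, as mentioned, switches the output type, both being documented ilasm.exe flags):

ilasm RandomApplication.il /output=RandomApplication.exe

Or, for a class library:

ilasm RandomApplication.il /dll /output=RandomApplication.dll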

Another word of warning at this point: I made the mistake (in the test scenario) of recompiling my dll using an incorrect version of ilasm.exe, targeting the wrong .Net Framework. I lucked out, as simply changing the IIS Application Pool to run under a newer .Net Framework version saved me from recompiling the dll again. Certainly something to keep in mind, however.

Full details on this specific tool can be found here:

Ilasm.exe

Looking at this recompiled assembly you can see that I now have the desired ‘throw’ statement, exactly where I need it:

JustDecompile Showing Updated Code.

JustDecompile Showing Updated Code.

In my test scenario I was able to push this dll into the given environment, use service trace logs to identify the exception (including stack trace and additional information) and finally fix all of the remaining issues.

So, if you’re stuck in future with .Net code you can’t directly alter, and you have an exe/dll that you need to change in order to get meaningful outputs (caveat: you’ll have to work harder if the assembly is strongly-named/signed), consider this approach.

Until the next time, thanks all!

Work Practices and the Quest for (a little) Self Improvement

An interesting week at work has just been topped off (this was a great little delectable cherry as far as I’m concerned) by a Tech Day, providing developers an opportunity to spend time reviewing content on a myriad of new technologies and coding methodologies. As we all know, the life of a developer can occasionally feel like sprinting, barefoot, over ground covered in broken glass, hands outstretched to grab the prize…the Holy Grail of knowledge…which is unfortunately dangling from a piece of string attached to a stealth bomber hauling ass up, up and away. Yeah, yeah; perhaps a little over the top, agreed, but the underlying feeling this particular vision instils has likely been experienced by every developer at one time or another: the feeling of utter helplessness in the face of technologies and learning curves far outpacing the average mortal. So, as an aside to coding, I’d like to briefly cover some material that I myself have found useful, in part at least, in pushing back the tide.

I managed to get a video featuring Scott Hanselman (Program Manager at Microsoft, details below), entitled “It’s not what you read, it’s what you ignore”, featured as part of our Tech Day, and it seemed to get a good reception. Having viewed it several months ago, and having had time to experiment with a few of the suggestions made in the presentation, I was confident it would push a few buttons with the other developers I work with. Overall, the general feeling seemed fairly positive and I’ve had some feedback to suggest it has, at the very least, got juices flowing regarding how we approach work practices in general. It’s from a few years ago but still feels relevant:

Scott Hanselman Presentation

Other links:

www.hanselman.com
www.hanselman.com/blog

Scott is a charismatic, funny and ultimately candid presenter who puts across his opinions with grace and decisiveness. He has produced a myriad of other talks on coding and technology (e.g. web development, ASP.NET and Azure), which are his areas of expertise, but this window into his day-to-day thoughts and work management strategies could well be just as invaluable as any technical presentation.

Here are a few take-away points from this content:

  • The Pomodoro Technique. A technique for time management/instigating short, focused work ‘bursts’.
  • Trello Boards. A scrum-like piece of software for workload management/to-do lists.
  • RescueTime. Software that monitors a user’s activity to show work trends over days, months and years.
  • Instapaper. A way to receive information, at a time convenient for you, in a clean, newspaper format.
  • Outlook tips and tricks for handling email.

First up is the Pomodoro Technique, so called due to its link with the ‘pomodoro’ (tomato) kitchen timer. The concept is deceptively simple but has, for me anyway, had noticeable effects. The idea is that you fire up a timer for 25 minutes and sit down to do a piece of focused, uninterrupted work during this time. At the end of the 25-minute period (a ‘Pomodoro’) a short 5-minute break is taken before starting a further Pomodoro. The cycle is then repeated for up to 4 Pomodoros before an extended break is taken (in the implementation of the technique I have been following). During each Pomodoro it’s necessary to note down how many distractions…………………………….So, I’ve been hunting through Steam just looking at the featured games this weekend…Damn it – You see what I mean!!!

Distraction!!!

Distraction!!!

Ahem…So, you log how many times you get distracted during the Pomodoro, covering both times you distract yourself and times others distract you, in the hope of limiting this as much as possible. As noted, it’s surprising just how difficult 25 minutes of focused work is. Try it for yourself and see. This has, in no small part, contributed to the development of what Claire calls my ‘arse-face’ – the face pulled when I don’t want to be distracted! Sorry Claire…Kudos to you for having to look at my face for any length of time – especially during any kind of coding (hat-tip).
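Purely for fun (this is a coding blog, after all), here’s the whole cycle boiled down into a throwaway little console sketch, using the timings from the implementation I follow:

using System;
using System.Threading;

class PomodoroTimer
{
    static void Main()
    {
        // Four 25-minute Pomodoros, each followed by a short 5-minute break...
        for (int pomodoro = 1; pomodoro <= 4; pomodoro++)
        {
            Console.WriteLine($"Pomodoro {pomodoro}: focus for 25 minutes (and log those distractions!).");
            Thread.Sleep(TimeSpan.FromMinutes(25));
            Console.WriteLine("Take a 5 minute break.");
            Thread.Sleep(TimeSpan.FromMinutes(5));
        }

        // ...then an extended break before the next cycle begins.
        Console.WriteLine("Cycle complete; take an extended break!");
    }
}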

Here are some links that may prove useful (one for the Pomodoro Technique’s official site, another for an online timer, and an Amazon search for kitchen timers if you need the physical object!):

The Pomodoro Technique
Online Pomodoro Stopwatch
Amazon Search

Moving on, Trello Boards are a useful tool for general work management and for treating the work items that drop on your desk in a scrum-like fashion. It’s certainly worth a poke around; it can be found here:

Trello

In fact, outside of the scope of ‘development’, Claire and I have found this a great way to visualise things we have to do for our Wedding (yes, I have tried to ‘scrum’ our Wedding, full-blown nerd alert!). The boards can be shared with other people, so contributions can be made by multiple team members (with supporting notes, due dates, to-do lists and basic colour coding of board items).

Below is an example Trello Board outlining how it may look on any given day for me (well, with a few additions that probably won’t normally make an appearance but you get the picture):

Example Trello Board

Example Trello Board

I have a few explicit sections on my board:

  • Categories, which acts as a key (colour coded) for board items.
  • ‘Long Term/Massive Asteroids’, outlining those big, bad-boy hopes and dreams to achieve in the long term (currently at a personal development level).
  • A ‘Rule of 3’ section, where I list the 3 key things I would like to achieve on a given day. These can be physical pieces of work or more abstract concepts (such as understanding a particular coding concept, or doing some investigation or research).
  • A Today list, where I list the actual items of work I plan to complete on a given day (colour coded, with due dates and task lists where appropriate).
  • A final ‘Done’ list; once items are completed they get moved here for archiving later that day.

There are obviously a multitude of ways to leverage this organisational tool but this is just my personal preference.

RescueTime is a utility that, after reviewing the content of the presentation, I adopted right off the bat to monitor my own working habits. Personally, I’m not entirely sure the results it collates on a daily basis are 100% accurate, but as an indicative tool it’s great.

Here’s the link for anyone who wants to try it out:

RescueTime

My own standpoint is that these metrics are very useful for deciding whether you are spending your time in the right places for your role (i.e. are you spending too much time on distracting tasks, or unnecessary administration, instead of actual coding?). If you are looking for an indication of your effectiveness then this could be your first ‘finger in the wind’: a basis for reconsidering your general working habits, should you find yourself straying from the tasks you should be chewing into.

A friend at work contacted me today and made me aware of this alternative (which works fully offline):

ManicTime

Thanks to Mike for this. I haven’t had a chance to look into it yet, but I know he plans on checking it out, so I’ll grab a copy to see how the information provided stacks up against RescueTime.

A particular problem that I started to face (thankfully, quickly nipped in the bud), which I know is a commonplace office plague, is ‘craptonneofchrometabsitis’. I’m sure, if you’re not already this person, you’ve walked past someone with a gazillion Chrome tabs open, slowly squeezing the life out of whatever box is wheezing its way through this malady. Either that, or a billion favourites are currently sitting in your browser’s favourites list as ‘to read’ sites for if and when ‘time allows’. This is where Instapaper comes in.

Via a browser extension (shown below) you are able to flag a piece of web content for addition to your queue of Instapaper ‘to read’ articles. Instapaper is great for cutting out all of the fluff and rubbish associated with a site, so you are literally just left with the base content. The bit I like most, however, is the ability (for free) to configure a schedule that ‘posts’ you a digital newspaper containing all of the content you’ve marked as interesting in a given time frame. In my case, I get information pushed directly to my Kindle every Friday afternoon (provided I’ve marked at least three pieces of content that week as of interest). Bloody brilliant…A great excuse to close tabs and destroy some unneeded favourites!

Instapaper Extension

Instapaper Extension

You can have a little read up on Instapaper by following this link:

Instapaper

Here’s an example of how the web interface looks (along with a few articles I read many moons ago). It’s themeable, and contains great configuration options for pushing out content as and when to designated devices (or for just reading the content there and then, minus the adverts and other guff found on the related sites):

Instapaper Web Interface

Instapaper Web Interface

Lastly, if you watch the presentation and read into the advice surrounding managing Outlook, I think you’ll find a few golden nuggets of sage advice. Some of these I’ve had trouble implementing, although I agree with them in principle and completely see the point; such as explicitly responding to email in the afternoons (and not even checking it in the mornings). Others have been a massive triumph, such as how to manage and split your Outlook Inbox. I now have folders containing emails from management, CC-only emails and external emails, in addition to my physical Inbox (i.e. things intended for me specifically). All other junk has been moved out to a dumping-ground folder that simply isn’t my Inbox; this is the first time my actual Inbox has been empty in seven years! If anything, this produces a feeling of liberation at the bare minimum, so I’m counting it as a win!

Whilst this was fresh in my mind I wanted to blurt it out into a post, with the caveat being that I’ll try to get some damn code up on this blog again soon! Either way, I hope all or some of this turns out to be interesting and/or helpful.

Happy Easter! Eat Chocolate, drink beer, etc. Thanks all and bye for now.