PERT Estimation 101


Task estimates: the very real enemy of the developer. The numbers you provide for a task or piece of project work (whether in hours, days, story points, etc.) as an ‘estimate’ are often not viewed as an approximation by all involved. This particular subject is covered rather marvellously by Robert C. Martin, aka Uncle Bob, in his book The Clean Coder: A Code of Conduct for Professional Programmers; a book that I implore every developer to read. To quote Robert C. Martin directly, as I believe this captures the underlying issue pretty well in a good number of cases:

The problem is that we view estimates in different ways. Business likes to view estimates as commitments. Developers like to view estimates as guesses. The difference is profound.

On a good number of occasions, I have found myself taking a good hiding due to the estimates I have offered up on project work; normally where the estimate has been relayed to the customer as a concrete guarantee. In many cases, both parties should have shouldered more responsibility for this disparity: as the developer, I should have been more explicit about what my estimates actually meant. In turn, those relaying information to the customer should probably have dug deeper into the heart of the matter, to extract further raw information as to the ‘likelihood’ that something could be achieved by a certain time.

Good dialogue and communication are key factors, of course, as is making sure that you are always clear about when you are providing an ‘estimate’ and when you are making a ‘commitment’.

This is a subject that I always find myself circling back to and that I truly want to get better at, plain and simple. For that reason, this post is geared towards looking at one potential strategy for improving the estimation process: the Program Evaluation and Review Technique (which is, again, covered pretty well in ‘The Clean Coder’).

What is PERT?

The Program Evaluation and Review Technique (PERT) is an analysis technique used for more accurately estimating large and complex projects. The basic idea is to reveal a distribution, or create a probabilistic structure, to reduce inaccuracies in estimates and to account for uncertainty in a project’s schedule. It was created in 1957 for use by the U.S. Navy Special Projects Office to manage the Polaris nuclear submarine project.

In short, the process involves identifying likely values for the minimum and maximum time a task will take, whilst pinpointing the ‘most likely’ estimate for a task’s completion from an observed distribution. Uncertainty is factored in using standard deviation values.

How does it work?

The concept is as follows: first, when an estimate is required, you provide three numbers:

  • O: Optimistic Estimate (known as the ‘wow, how the hell did I do it that quick!’ estimate).
  • N: Nominal Estimate (the estimate with the greatest chance of being successful, or the correct estimate essentially).
  • P: Pessimistic Estimate (the doomsday estimate, where everything goes wrong except the Earth’s tectonic plates opening up to consume you as you code the feature).

The first figure, ‘O’, involves thinking about the ‘pipe dream’ figure, i.e. if you could type the code at 1,000 lines a minute, making zero errors, and everything plugged into place and worked the first time. If Chuck Norris were coding the feature, this would be his estimate (it should have less than a one percent chance of being the actual time taken, in order for the calculation to be meaningful). Next up, come up with the figure for N, which is defined as the estimate with the highest ‘chance’ of being the correct one; on a distribution bar chart of estimates, this would be the highest bar. Lastly, define P, which should be the ‘worst case’ scenario estimate, stopping short of accounting for absolute catastrophes (again, this should have less than a one percent chance of being the ‘correct’ estimate when all is said and done).

You’ve got three numbers, hoorah! An expected duration can be gleaned from these figures simply enough (with ET being the expected time; I’ve changed the names of the variables to make more sense to me, when compared to ‘The Clean Coder’ or the wiki, but feel free to go with what fits for you):

ET = (O + 4N + P) / 6

The wiki definitions for all of these numbers are good, so I’ve included them below:

Wiki Definitions for PERT Calculation Facets.

The last part of this wonderful little puzzle is calculating the standard deviation (SD), which equates to getting your mitts on a value that indicates the degree of uncertainty associated with a given task:

SD = (P - O) / 6

Let’s observe a detailed example to get to grips with this fully, concentrating on how some extra elements come into play for a series of tasks.

A detailed example

In this example, Ryan (a name that came totally off the top of my head; milliseconds of thought went into that!) has a total of five items that he needs to estimate, all part of a full project. Here are the specific tasks and Ryan’s initial estimates (expressed in days), before applying any PERT calculations:

SIDE NOTE: Breaking a large project or piece of work into smaller tasks can often be a good strategy for mitigating risk up front and weeding out bugbears ahead of time.

Task | Estimate (days)
Write stored procedures for the customer export page | 3
Write Web API components for the customer export page | 2
Build the business logic layer for the customer export page | 5
Build and style the customer export page (UI) | 3
Assist Hayley with automated tests | 1

That comes to a grand total of 14 days.

Ryan is now asked to apply the PERT approach to his estimates, and the mental cogs start to turn. Here’s a run-down of Ryan’s thoughts on each part of the project, in order:

  1. Stored Procedures: “Ok, I think I am fairly likely to complete this within 3 days, but there are quite a few elements of Dave’s existing helper stored procedures that I don’t understand. So, I’ll actually have to factor that in. Oh, and I have to do a data conversion!”
  2. Web API Components: “I’ve written most of the Web API components up to this point, so I’m well versed in this area at least and am confident with this figure. Even if everything went wrong it would only stretch to 3 days.”
  3. Business Logic Layer: “Jane wrote most of this layer and I’m not that well-versed in how it hangs together. The domain behind this is actually quite new to me. This could take significantly more time…”
  4. Building the page and styling: “The specification looks good and I’ve done a few similar pages in the past. It could take slightly longer if things did go wrong, as there are a few UI bells and whistles, including effects, that are required…”
  5. Assisting Hayley: “I was just kind of time boxing this in my mind, it’s almost certain she will need support beyond this, however – it really is an unknown!”

After a bit of head scratching and discussion with other team members, the figures below come out of the boiling pot. PERT has been applied; times are expressed in days and abbreviations are used for the sake of space. Values are rounded to one decimal place:

Task | O | N | P | ET | SD
Write stored procedures for the customer export page | 3 | 5 | 11 | 5.7 | 1.3
Write Web API components for the customer export page | 1 | 2 | 3 | 2 | 0.3
Build the business logic layer for the customer export page | 2 | 6 | 13 | 6.5 | 1.8
Build and style the customer export page (UI) | 1 | 4 | 9 | 4.3 | 1.3
Assist Hayley with automated tests | 1 | 3 | 7 | 3.3 | 1

ET and SD values are calculated using the formulas previously discussed. Firstly, using the ET values, we can say that Ryan will ‘likely’ be finished with these tasks (based on the sum of the ET values) in 21.8 days; let’s call it 22 days. To get a feel for the ‘uncertainty’ across these tasks, we take the square root of the sum of the squares of the SD values (a bit of a mouthful), as shown here:

√(1.3² + 0.3² + 1.8² + 1.3² + 1²)
= √(1.69 + 0.09 + 3.24 + 1.69 + 1)
= √7.71 ≈ 2.8 days

We can surmise from this that there are roughly 2.8 days, so let’s say 3 days, of uncertainty to factor in; therefore Ryan could well take somewhere in the region of 25 days to complete the series of tasks, or possibly even 28 days (if we double the calculated standard deviation across the sequence). However, we know from this that anything beyond these values is an unlikely outcome. We are a far cry from the original estimate of 14 days to cover all of the tasks.
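To make the arithmetic concrete, here is a minimal C# sketch of the calculations above (the type and member names are my own invention, purely for illustration; this isn’t from any estimation library):

using System;
using System.Linq;

// A PERT-estimated task; ET and SD follow the formulas discussed above.
public class PertTask
{
    public string Name { get; set; }
    public double Optimistic { get; set; }   // O
    public double Nominal { get; set; }      // N
    public double Pessimistic { get; set; }  // P

    // ET = (O + 4N + P) / 6
    public double ExpectedTime => (Optimistic + (4 * Nominal) + Pessimistic) / 6;

    // SD = (P - O) / 6
    public double StandardDeviation => (Pessimistic - Optimistic) / 6;
}

public static class Program
{
    public static void Main()
    {
        var tasks = new[]
        {
            new PertTask { Name = "Stored procedures", Optimistic = 3, Nominal = 5, Pessimistic = 11 },
            new PertTask { Name = "Web API components", Optimistic = 1, Nominal = 2, Pessimistic = 3 },
            new PertTask { Name = "Business logic layer", Optimistic = 2, Nominal = 6, Pessimistic = 13 },
            new PertTask { Name = "Page build and styling", Optimistic = 1, Nominal = 4, Pessimistic = 9 },
            new PertTask { Name = "Assist with automated tests", Optimistic = 1, Nominal = 3, Pessimistic = 7 }
        };

        // The sum of the ET values gives the 'likely' overall duration.
        double totalExpected = tasks.Sum(t => t.ExpectedTime);

        // Overall uncertainty: square root of the sum of the squared SD values.
        double totalDeviation = Math.Sqrt(tasks.Sum(t => t.StandardDeviation * t.StandardDeviation));

        // Prints: Expected: 21.8 days (+/- 2.8 days)
        Console.WriteLine($"Expected: {totalExpected:F1} days (+/- {totalDeviation:F1} days)");
    }
}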

Summary

The general premise is to add some pragmatism and realism to the whole process of estimation. I have, on many occasions, estimated too optimistically and then struggled to hit the mark; this is just one technique for attempting to mitigate that risk by being more methodical in the estimation process.

I hope you’ve enjoyed this primer and happy coding, estimating, etc. Until the next time!

SEO – Gouging Eyes out with Rusty Spoons

A very short piece on my current, off-the-wall, opinions on my little SEO journey so far and the shenanigans we’ve faced up to this point.

Ok, so it hasn’t been that bad! Plus, it’s interesting that I have opted to use ‘spoons’, and not just a singular spoon; it looks like I plan to gouge out both my eyes at once, based on the post title!

I have certainly had moments of scratching my head as to what the hell is going on; SEO often brings up questions like ‘is option a, b, c, z, or a combination of some or all, the best way to approach it?’, which quickly leads to questioning every blooming decision you plan to make to the nth degree!

With the website ‘fundamentally’ finished (ok, I’m completely and utterly lying; I want to rewrite massive chunks, in the usual ‘unhappy with the code as soon as you’ve finished writing it’ developer condition, or sickness if you will!), we have desperately been trying to hike our way up the rankings. This has, after a painstaking process, happened; to some degree at least.

I picked this book up for starters, as all good things start with a good read; don’t be put off by the whopping great name:

SEO 2016: Learn search engine optimization with smart internet marketing strategies

After reading this, picking the brains of a few SEO boffins (thanks for that, by the way) and asking friends to pick additional boffin brains for me, I was ready to start making some of the larger, more critical changes.

So, in no particular order, here were the ideas and concepts that I feel have made the most impact in the short time we have been climbing the SEO mountain…

Page Titles and Page Descriptions

This, without a shadow of a doubt, had the largest impact. We quite literally just added unique metadata to each page and ensured our titles and descriptions were informative, without skipping the correct keyword combinations we were targeting.
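For illustration, the change boils down to each page getting its own head metadata, along these lines (a made-up example, not our actual markup):

<head>
    <!-- Hypothetical example: a unique, descriptive title and meta description per page. -->
    <title>Handmade Pine Furniture in Norwich | Example Co.</title>
    <meta name="description" content="Bespoke, handmade pine furniture, built to order in Norwich. Free local delivery." />
</head>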

Up until this point, the web applications that I have produced out in the wild have been linked to from other sites (local authorities, for example); those other sites took on the burden of SEO and, in most cases, were sought out for the services they provide (i.e. it was non-commercial in a sense and the traffic was not being fought over).

Now that I find myself competing for these precious nuggets of traffic, this change takes the biggest bite of the biscuit for me. We went from completely impossible to find, for the small subset of keywords we were targeting, to being on page 3 of the search results (with a little variance across Google and Bing, but that’s a rough approximation). Not perfect, but a very good start!

Keyword Optimisation

This took a couple of days to do in the end. It boiled down to, in essence, looking for keyword combinations with good levels of traffic but lower degrees of difficulty when it came to competition. We ended up spreading our eggs amongst a few baskets here and did target a few competitive keywords, but only in an effort to make our content as natural and accurate as possible (clarity/quality of content is also a ranking factor, after all).

We used the Moz tools for this job, to check out the keyword densities of comparable sites and then plan some alternatives. The references can be found here (you get a month’s free trial, by the way):

Moz Keyword Explorer

Moz Open Site Explorer

This has had some impact since our site was re-crawled and re-indexed. Just from a content quality perspective, it was well worth running everything through Grammarly. I have the Chrome plugin but, in this instance, I used the Word plugin to get the job done (you can get a free account, with additional paid-for options available):

Grammarly for Word

Grammarly for Chrome

Keywords in Content, Image Titles/Alt Attributes and Anchor Text

A simple one, but one that, after building up a new keyword strategy using the Moz tools and spending a few hours looking around the site, we knew we could improve on.

We now have a much better spread of keywords across our pages, including LSI (Latent Semantic Indexing) keywords; essentially just related phrases.

As for image titles and title/alt attributes, we went on a massive, night-long rampage to improve these, based on our new-found keyword knowledge (using hyphens to separate words in resource names).

Lastly, a great deal of effort was placed upon using natural and informative text within anchor tags which, to be honest, we were a little weak on. This appears to have had a positive impact.

Sitemaps.xml

This one got me! I generated a sitemap that, on the face of it, looked prim and proper. However, after checking it in the Google Search Console, I had been sucker-punched by dodgy encoding (a UTF-8 BOM, to be exact). There, at the start of my file, sat a Byte Order Mark (otherwise invisible in Notepad/Visual Studio, as these things often are). This equalled an instant red flag and is something worth not getting caught out by.

I found a reasonable-looking Sitemaps.xml validator which flagged the issue:

Robots.txt Checker

Recreating the file in Notepad (this time without the BOM) and resubmitting it via the Google Search Console fixed the issue.
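If you generate your sitemap programmatically, you can sidestep the problem entirely by writing UTF-8 without the BOM. Here is a quick C# sketch (the file name and content are illustrative only):

using System.IO;
using System.Text;

class SitemapGenerator
{
    static void Main()
    {
        const string sitemap =
            "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" +
            "<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n" +
            "  <url><loc>https://www.example.com/</loc></url>\n" +
            "</urlset>";

        // UTF8Encoding(false) emits UTF-8 without a Byte Order Mark;
        // the stock Encoding.UTF8 instance would write one.
        File.WriteAllText("sitemap.xml", sitemap, new UTF8Encoding(false));
    }
}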

One last thing (luckily, I didn’t do this!): make sure to double-check that your robots.txt file isn’t disallowing legitimate bots from accessing your entire site. Thankfully, I was dead careful in this regard!
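For reference, a deliberately permissive robots.txt looks something like the following (example.com being a stand-in, of course); the dangerous variant is ‘Disallow: /’, which blocks well-behaved bots from everything:

# Allow all well-behaved bots to crawl the whole site.
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml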

So, what has raised eyebrows (in the ‘why does this matter’ stakes)? One thing that has really got my goat is the Flesch Reading Ease score. I have spent a fair bit of time racking my brains as to whether content really needs to be targeted at a thirteen to fifteen-year-old student (when it comes to reading ability, that is). In honesty, I’m not sure on this…

The content outlined in SEO 2016 references Searchmetrics findings showing that sites appearing in the top ten results have, on average, a Flesch Reading Ease score of 76.00; which puts a stake in the ground at the aforementioned thirteen to fifteen-year-old reading level.

This doesn’t sound all that bad to me but, after some initial testing, I found that our content ranked below this bar on every occasion. Not by too much, but by enough to concern me. Upon testing some other comparable, higher-ranking sites, I wasn’t able to determine that any of them had much better scores, to be honest.

We reread the content multiple times before coming to the conclusion that it was easy enough to read. I personally felt that certain sentences and words were being penalised too harshly for being ‘too complex’ or for having ‘too many syllables’, when I’m sure they could be easily read by children far below the thirteen to fifteen-year-old mark.

It felt as though we had to dumb down our content to a point where it read very unnaturally, as if we couldn’t compose a sentence of more than five words; possibly borderline insulting to most people, so we’ll be sticking to our guns for the time being.

What’s Next?

Claire and I have a few other strategies going on, such as getting some strong backlinks in play; this is the next port of call. We also need to focus more on the all-important social media. I have additional tasks to perform, such as adjusting how JavaScript is loaded (looking at async/defer options for all of my resources; without busting pages!), looking at resource bundling and caching including resource optimisation (i.e. a little bit more image compression work). I’ll be a busy bee for a fair while yet!

We’ll see how the first batch of alterations plays out first; it is incredibly difficult to shake the feeling that a good chunk of this is highly experimental!

If any SEO experts come across this and can offer any further advice, or would like to comment, please do! I’d love to hear from you.

Until the next time, toodles!

Coding Snippets in SSMS (Microsoft SQL Server)

Hi all,

Continuing on with the idea of coding snippets, here’s a follow up showing the (very similar) procedure for creating snippets in SQL Server Management Studio (SSMS).

As before, you’ll need a file with the ‘.snippet’ extension to begin, which contains a snippet in an XML format. A basic example is shown below for reference, encapsulating some code that can quickly search a Stored Procedure definition for a particular search term:

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
<_locDefinition xmlns="urn:locstudio">
    <_locDefault _loc="locNone" />
    <_locTag _loc="locData">Title</_locTag>
    <_locTag _loc="locData">Description</_locTag>
    <_locTag _loc="locData">Author</_locTag>
    <_locTag _loc="locData">ToolTip</_locTag>
</_locDefinition>
    <CodeSnippet Format="1.0.0">
		<Header>
		<Title>Sys.procedures SQL Check</Title>
			<Shortcut></Shortcut>
		<Description>Creates SQL to check sys.procedures for a literal string.</Description>
		<Author>Lewis Grint</Author>
		<SnippetTypes>
			<SnippetType>Expansion</SnippetType>
		</SnippetTypes>
		</Header>
		<Snippet>
			<Declarations>
				<Literal>
					<ID>SearchTerm</ID>
					<ToolTip>The term to search for.</ToolTip>
					<Default>SearchTerm</Default>
				</Literal>
			</Declarations>
			<Code Language="SQL">
				<![CDATA[--Check SP content for a literal string
SELECT 
name
FROM sys.procedures 
WHERE OBJECT_DEFINITION(OBJECT_ID) LIKE '%$SearchTerm$%';
	]]>
			</Code>
		</Snippet>
    </CodeSnippet>
</CodeSnippets>

As you can see, this file takes the familiar ‘.snippet’ format, supporting Snippet, Code and Declarations elements. These include the code snippet itself, and placeholder definitions, as before. You will also find the Header element, where pertinent information about the snippet can be stored, such as author and description.

Easily recognisable territory so far, I think you’ll agree.

Within SSMS, much as in Visual Studio, navigate to Tools > Code Snippets Manager (Ctrl + K, Ctrl + B) to get underway:

Access Code Snippets Manager.

Again, pretty much plain sailing from here on in. You’ll have the same options as in Visual Studio to import a snippet file (just use the Import button):

Code Snippets Manager before Import.

Navigate to your snippet file and select it. You’ll now have the opportunity to map the snippet to a nominated folder. In this instance, we’ll use the My Code Snippets folder. Click Finish to complete the process.

SSMS Code Snippets Manager Import Dialog.

You should then be able to see your snippet under the nominated folder:

Code Snippets Manager after Import.

That’s it, you’ve imported the snippet! You can now open a new query and use Ctrl + K, Ctrl + X to launch the Code Snippet Picker, whereby you can use the arrow keys to move between folder and snippets:

Snippet before Expansion.

Hit Enter to select your snippet; you should then see the following:

Snippet after Expansion.

Then you can start using your code, utilising tab to cycle through any placeholders. Another trick that I’ve used is to add an entire folder of snippets using the Code Snippets Manager’s Add button (instead of using Import). Using a ‘tactically’ named folder, you can gain faster access to your snippets:

Adding a Custom Snippets Folder.

This type of snippet is an Expansion snippet. You can write other kinds, such as the Surrounds With snippet, an example of which can be found here on MSDN, illustrating a try/catch construct:

Surrounds With Snippet Example
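For a flavour of what that looks like, here is my own rough sketch of a ‘Surrounds With’ snippet (modelled on the MSDN example rather than copied from it); the $selected$ placeholder is replaced with whatever text you had highlighted when invoking the snippet:

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
    <CodeSnippet Format="1.0.0">
        <Header>
            <Title>Try/Catch Surround</Title>
            <Description>Wraps the selected SQL in a TRY/CATCH construct.</Description>
            <Author>Lewis Grint</Author>
            <SnippetTypes>
                <SnippetType>SurroundsWith</SnippetType>
            </SnippetTypes>
        </Header>
        <Snippet>
            <Code Language="SQL">
                <![CDATA[BEGIN TRY
    $selected$
END TRY
BEGIN CATCH
    -- Re-throw the original error (SQL Server 2012 and later).
    THROW;
END CATCH;]]>
            </Code>
        </Snippet>
    </CodeSnippet>
</CodeSnippets>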

I hope this has been helpful. Please comment below if you have any thoughts!

Cheerio!

Code Snippets in Visual Studio (C#)

Hi all,

This post forms part of a mini-series that I’d like to do surrounding the topic of code snippets. I have, for some time, been using these merrily within Microsoft SQL Server (which I’ll cover) but have neglected to look at creating custom snippets in the other environments I am privy to. Time to put a stop to that :-).

The following is a small demonstration of how to knock up a simple, custom code snippet in C#. Along the way, I’ll discuss a known limitation of creating snippets for this language specifically (as opposed to Visual Basic, for example). With any luck, this will get you to investigate the topic if you haven’t already done so.

Creating Custom Code Snippets

If you’re looking for a simple introduction, this MSDN article is a great place to head:

Walkthrough: Creating a Code Snippet

Once you’re primed and at the start line you should be pushing out snippets in no time.

You’ll need to craft your own file with the ‘.snippet’ extension to begin, where the content takes the form of standard XML. The one I have built for this example is below; I’ll discuss it as we move forward.

<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets
    xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
    <CodeSnippet Format="1.0.0">
		<Header>
			<Title>INotifyPropertyChanged Full Definition Snippet</Title>
			<Author>Lewis Grint</Author>
			<Description>Generates a stub class (including namespace/using statements) implementing INotifyPropertyChanged fully.</Description>
			<Shortcut>inotifyfull</Shortcut>
		</Header>
        <Snippet>
			<Declarations>
			    <Literal>
					<ID>NamespaceName</ID>
					<ToolTip>Replace with the namespace.</ToolTip>
					<Default>NamespaceName</Default>
				</Literal>
			    <Literal>
					<ID>ClassName</ID>
					<ToolTip>Replace with the class name.</ToolTip>
					<Default>ClassName</Default>
				</Literal>				
			</Declarations>
            <Code Language="CSharp">
                <![CDATA[using System.ComponentModel;
using System.Runtime.CompilerServices;

namespace $NamespaceName$
{
    public class $ClassName$ : INotifyPropertyChanged
    {
        #region INotifyPropertyChanged Interface Support

        /// <summary>
        /// PropertyChangedEventHandler for INotifyPropertyChanged support.
        /// </summary>
        public event PropertyChangedEventHandler PropertyChanged;

        /// <summary>
        /// Protected virtual method that should be called when a
        /// supporting property's value is changed (for binding to
        /// work correctly in the application).
        /// </summary>
        /// <param name="propertyName">The property that has been altered.</param>
        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = "")
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        #endregion INotifyPropertyChanged Interface Support
    }
}]]>
            </Code>
        </Snippet>
    </CodeSnippet>	
    <CodeSnippet Format="1.0.0">
		<Header>
			<Title>INotifyPropertyChanged Class Snippet</Title>
			<Author>Lewis Grint</Author>
			<Description>Generates a stub class implementing INotifyPropertyChanged.</Description>
			<Shortcut>inotify</Shortcut>
		</Header>
        <Snippet>
			<Declarations>
				<Literal>
					<ID>ClassName</ID>
					<ToolTip>Replace with the class name.</ToolTip>
					<Default>ClassName</Default>
				</Literal>
			</Declarations>
            <Code Language="CSharp">
                <![CDATA[    public class $ClassName$ : INotifyPropertyChanged
    {
        #region INotifyPropertyChanged Interface Support

        /// <summary>
        /// PropertyChangedEventHandler for INotifyPropertyChanged support.
        /// </summary>
        public event PropertyChangedEventHandler PropertyChanged;

        /// <summary>
        /// Protected virtual method that should be called when a
        /// supporting property's value is changed (for binding to
        /// work correctly in the application).
        /// </summary>
        /// <param name="propertyName">The property that has been altered.</param>
        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = "")
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        #endregion INotifyPropertyChanged Interface Support
    }]]>
            </Code>
        </Snippet>
    </CodeSnippet>
</CodeSnippets>

After a wrapping CodeSnippets element, you construct a CodeSnippet containing a Header and Snippet element, at a bare minimum. Within the Header you are able to express metadata relating to the snippet, such as the Title and Author, alongside the Description (which shows as an IntelliSense tooltip on use) and the actual Shortcut value associated with bringing the snippet into scope, which is typed directly into the IDE.

The Snippet element contains a Code element and is, as you would expect, where you place your physical code ‘slice of goodness’. It’s important to place your code within the square brackets of the CDATA declaration.

A smaller, valid C# snippet could, for example, look like this:

<CodeSnippet Format="1.0.0">
    <Header>
        <Title>MessageBox Show Snippet</Title>
        <Author>Lewis Grint</Author>
        <Description>Generates a full MessageBox Show call.</Description>
        <Shortcut>mb</Shortcut>
    </Header>
    <Snippet>
        <Code Language="CSharp">
            <![CDATA[MessageBox.Show("Hello World");]]>
        </Code>
    </Snippet>
</CodeSnippet>

Not the most useful example, as Visual Studio already ships with the ‘mbox’ snippet covering this (which uses a fully qualified expression for MessageBox.Show), but it illustrates the most basic form of a snippet nicely.

If you want placeholders in your code, which can be tabbed through sequentially after you bring the snippet into scope within the IDE, you need to create a Declarations element under Snippet. Here you can create a Literal, to be altered by the snippet user on the fly, with tooltip and default values to boot. Embed these in your code by wrapping the ID element value in ‘$’ symbols (again, as shown above in the more detailed example).

That should be enough to run with, as it stands. We have an idea of how to create snippets, but how do you import them into Visual Studio? Let’s find out…

Importing Custom Code Snippets into Visual Studio

Now you have your snippets file in tow it’s time to rock on over to Visual Studio. You can access the Code Snippets Manager by navigating to Tools > Code Snippets Manager (Ctrl + K, Ctrl + B), as illustrated here:

Accessing the Code Snippets Manager.

Once inside the Code Snippets Manager, alter the language to the one of your choosing (‘CSharp’ in my case), click the My Code Snippets folder to select it as the repository for your snippets, then click Import to proceed:

Code Snippets Manager Dialog.

You can now browse to your snippet file and, on selection, will be presented with a confirmation screen that should look like this:

Code Snippets Manager Import Dialog.

Your snippet file will be validated at this stage and you will be notified if the markup is malformed/invalid.

After clicking Finish, you’ll find yourself back in the Code Snippets Manager screen and, with any luck, you should be able to spot your individual snippets under the My Code Snippets directory:

Code Snippets Manager After Import.

As easy as pie. The snippets are imported, so let’s use them.

Using Imported Custom Code Snippets

My test bed snippets file provides two snippets for generating a stub class definition implementing the INotifyPropertyChanged interface (which comes into play when constructing MVVM applications in WPF, for example, so that clients can be notified, and adjust, when an underlying property changes in a binding scenario).

Firstly, let’s inspect the partial snippet, which generates only the class code, the required event object and the OnPropertyChanged method definition (just start typing the shortcut and press Tab twice to push the whole snippet into the IDE, whereby you can tab between the placeholders):

Before Partial Snippet Usage.

After Partial Snippet Usage.

You can see that some using statements need to be brought into play to resolve some types here. Frustratingly, Visual Basic supports the concept of including References and Imports statements in the snippet XML, whereas C# does not. More details can be found here:

How to: Create a New Snippet with Imports and References

In my scenario, this is an entire stub class so I didn’t have a particular issue with placing the using statements, including the namespace definition, into the code snippet itself (with placeholders):

Before Full Snippet Usage.

After Full Snippet Usage.

Voila!
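As a quick, hypothetical usage example (the property name is invented for illustration), a bindable property on the generated class would then lean on OnPropertyChanged like so; the [CallerMemberName] attribute fills in the property name for you:

private string customerName;

/// <summary>
/// Bindable property; raises PropertyChanged when the value is altered.
/// </summary>
public string CustomerName
{
    get { return customerName; }
    set
    {
        if (customerName != value)
        {
            customerName = value;
            OnPropertyChanged(); // [CallerMemberName] supplies "CustomerName".
        }
    }
}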

I’ve seen some murmurings that the Snippet Editor on CodePlex is worth a look if you are after a more robust editor for creating your snippets. Support is only listed for Visual Studio 2010 and its version date is given as 2009, so it doesn’t appear to be that ‘current’. Perhaps a better tool is out there (an extension, or some such); worthy of more investigation:

Codeplex Snippet Editor

That’s all for now, thanks for reading through and I’ll catch you next time!

Developer Testing Hints and Tips

Howdy happy campers.

I want to discuss a piece somewhat divergent from the topic of physical coding, although still a facet of development that is close to my heart (and easy to overlook in many respects when constantly mashing keys and churning out code): developer testing. More specifically, I want to provide a set of guidelines that ‘may’ (insert disclaimer) help with the process and provide some food for thought.

This is in no way a definitive guide or best practice for that matter; more just a personal take on what I find works for me and the guts of a generally beneficial ‘templated’ approach to follow.

I would love to invite discussion on this one (or just get a take on what works for you), so please do hit me up on Twitter or add a comment below; I’d love to hear from you.

My Process

As with any process, ground work and preparation can be vital for achieving a good result. To this end, I invariably start my developer testing on a given work item with a template document that looks like this:

Developer Testing Helper Document Structure.

What goes into your document will largely depend on what technologies you are using, of course. For instance, you may never have a database-centric element to the development you perform, rendering the ‘Database Upgrade’ section null and void in all cases. Ultimately, add and remove sections as you see fit, but do strive for consistency. I myself test a mixture of items that may or may not include T-SQL elements. However, I choose to include the ‘Database Upgrade’ section on every occasion, preferring to note that there were no T-SQL-related parts to the item, even if just to mark it as ‘N/A’ (for my own sanity and for easy recollection later down the line, without the need to scan a lengthy list of changes elsewhere in the notes). Basically, my OCD kicks in and I start to wonder why I haven’t included a section that I ‘always’ include, leading to paranoia that I’ve missed something!

Each section (other than Notes, which is probably self-explanatory) can result in a PASS, QUERY or PASS-BACK state. Section state obviously knocks on and influences the result recorded against the ‘Developer Testing Summary’ header. PASS denotes an ‘A-okay’ state: good to rock and roll! QUERY gives you the opportunity to mark a section with ‘discussion points’ or things you would like to check, without necessarily marking it off as incorrect (I tend to do this a lot, as I love to talk!). PASS-BACK is used in the circumstance whereby an error can be replicated/reproduced consistently, or a logic problem definitely flies in the face of the ‘Acceptance Criteria’ for the story. In circumstances whereby things such as coding standards have been contradicted, I tend to use a mixture of QUERY/PASS-BACK, depending on the notes the developer has provided (it could be a flat PASS, of course, as there are always occasions where the rules need to be broken!).

So, section by section, let’s go over what we have and why…

Notes

It’s incredibly tempting to start diving into code, comparing files and trying to make sense of what the hell is going on, but… I may get in trouble here; I’m going to tell you to stop right there. It’s so easy, and I’ve done it (probably) hundreds of times, to get eye-deep in code, wasting large pots of time, before the basic question of ‘what are we doing and why’ has been answered. This is where this section comes in.

Use this area of your notes to compile a few short paragraphs (or bullet points, whatever you prefer) on the following:

  • Read over the developer’s notes and, after discovering whether any changes have occurred to the underlying requirements for the story, start to create…
  • Your own summary of the ‘Acceptance Criteria’ for this particular story (or item, whatever term floats your boat. I’m going to use both interchangeably to alleviate bombarding you with the same term too much!).
  • Then, list any other pertinent information surrounding how the developer has coded the item (e.g. decisions that have shaped how the story has turned out). For example, did they place code into a different service than originally expected for ‘x’ reason, or did some logic end up in a different layer of the technology stack than originally conceived?
  • Lastly, note any of your initial thoughts, concerns or things you intend to check/look for based on this initial scoop of information.

The core reason I do this is to try to solidify my expectations, begin thinking about a test plan (yes, I like to always perform at least rudimentary application testing; this isn’t just down to QA in my mind!) and to mitigate the chances of any massive surprises. Surprises, although they will always happen eventually one way or another, lead to more confusion and increase the chances of things slipping through the net. Just by following this exercise, or a similar routine, you’ll be able to cross-reference your expectations with the code changes you see and more easily pick up errors, incorrect logic or unrequired alterations. This will limit the chances of something slipping past your mental filter as an ‘I guess that’s correct’ or ‘perhaps that class needed to be changed also, ok’ moment (don’t lie, we’ve all had them 😉 !).

Cool, we’ve formed in our own minds what this item is for, how it’s been developed and what, as a baseline, we are expecting to see. Let’s test (and along the way, discuss a few more tactics).

Database Upgrade

Some of what I’ll discuss here is formed around how my personal development role operates, so feel free to modify this approach to your needs. Again, if you don’t deal in the realm of database development at all, pass Go and collect £200; you’ve bypassed this step, congratulations!

The essence of this section surrounds you being able to state that new Stored Procedures, Functions, Views, Triggers, etc. can be ‘created’ without error on a database in a suitable ‘versioned’ state. Also, can any ad-hoc data scripts that are part of the development item be run without error?

Some other considerations…

  • Are object creation scripts/ad-hoc scripts expected to be re-runnable? If yes, then specifically test and note this down here (see the T-SQL sketch after this list for what ‘re-runnable’ means in practice).
  • If you are in an environment whereby this kind of testing needs to be performed on multiple databases then mark this down here also (splitting notes down into sections against each target database/environment, whatever is applicable).
  • We work with a ‘versioned’ database so I make an effort to state which version I am on at the start of the testing run for reference.
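As promised, a minimal T-SQL sketch of a ‘re-runnable’ object creation script (the object and column names here are hypothetical); it can be executed repeatedly without erroring:

-- Re-runnable: drop the procedure if it already exists, then recreate it.
IF OBJECT_ID(N'dbo.CustomerExport', N'P') IS NOT NULL
BEGIN
    DROP PROCEDURE dbo.CustomerExport;
END;
GO

CREATE PROCEDURE dbo.CustomerExport
AS
BEGIN
    SELECT CustomerId, CustomerName
    FROM dbo.Customer;
END;
GO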

An example of what this section may look like is illustrated below for reference:

Developer Testing Database Upgrade Section Example.

A QUERY/PASS-BACK at this stage will bubble up and alter the status listed for the entire developer testing process. An additional note here: depending on how many queries/issues you find (and the length of the testing notes in general), you may want to copy the core query/error text to the top of the notes for easy review by the developer later (this applies to all of the following sections, in fact).

Code Review

Moving on to the main filling of your developer testing sandwich: the actual code review! Obviously, you’ll be reviewing files here, looking at scripts, new files and amended code, but definitely take a second or two (unless your setup has automated builds/continuous integration, or some other clever solution, to tell you this) to make sure the code compiles before proceeding (and make the relevant note). A simple step, but one easily forgotten, meaning you can get to the end of a code review before realising parts of the code don’t compile. Eek!

From a structural and sanity point of view (clarity is key), I tend to split my testing notes here into sections based on technology (i.e. T-SQL, C#, JavaScript, etc.) or, at least, make some effort to order a single list of files by file type. For C# changes, I group code files by the related project (given that projects should represent a logical grouping of types, hence allowing you to dice up changes by functional area, i.e. common extensions, data access helpers, etc.).

The point that you should take away from this, however, is that a little bit of thought and structuring at this phase will make your life easier, especially as the number of code files racks up.

If you’re looking for a small sample on how this section could look, after being fleshed out, then here you go:

Developer Testing Code Review Section Example.

However, what about the code review procedure itself, I hear you cry! What follows shouldn’t be taken as an exhaustive list, or as correct in every given situation for that matter; these are more suggestions as to what I’ve found helpful over time (my mental kit bag):

  • For C# (and other object-orientated languages that support this concept), ensure that null values are correctly handled. Whether this is by capturing nulls on call to a given method and throwing an ArgumentNullException, or by doing a ‘not equal to null’ check (!= null) around code that would otherwise fail.
  • Strings can be tricky buggers, especially in case-sensitive environments! In most cases, comparisons should be performed with case-sensitivity taken out of the equation (another case-by-case situation, of course). I’d keep an eye out, again in C#, for the correct use of String.ToUpperInvariant, String.ToLowerInvariant and String.Equals. For String.Equals, use an overload containing a StringComparison enumeration type for case/culture-insensitive options (see the C# sketch after this list).
  • Keep an eye out for instances of checks being performed against strings being null or an empty string (either one or the other only). This can quickly lead to chaos, switch out for a null, empty or whitespace check (e.g. String.IsNullOrWhiteSpace).
  • Empty try/catch handlers are evil. Kill any you find.
  • Check up for instances whereby a class consists of all static members, but the class is not marked as static.
  • Train the eye to look for casting operations; you’ll always catch a few where the casting operation ‘could’ throw exceptions and should, therefore, be subject to more careful handling.
  • Big bugbear in the realm of coding: if a method requires scrolling to get through it, that’s a significant indication right off the bat that it is a prime candidate for refactoring. Unless there is a good reason, or it is clearly performing one logical function, consider having a conversation about breaking the method down.
  • Look for missed opportunities to rationalise code using inheritance. The most common one I see (and forget myself) is the abstraction of code to base classes and the use of virtual methods/overrides in subclasses. Hawk-eye for types that should be abstract.
  • A simple one, but something that could easily slap you in the face if you’re not careful. When ‘language switching’, in a DT sense, take a second to make a mental note that you should be changing mind-sets (i.e. syntax is changing, get your game-face on!). For example, you stare into the abyss of C# for long enough (seeing ‘!= null’) you may, on switching to T-SQL, not notice a ‘!= NULL’ that should have been an ‘IS NOT NULL’. Those trees can be damn hard to find in the woods, after all!
  • Watch out for expensive operations, whereby values should be obtained once, ideally, then cached. It can be easy to let code skip by that repeatedly calls a database, for instance, to the detriment of performance (or possible errors, depending on the nature of the functionality called).
  • I love, love, loooovvvveeeee comments! Probably (ok, to the levels of being a little OCD about it!) too much. I prefer (but don’t fail on this basis alone) XML comments for C# code and like to see comments on bulkier pieces of T-SQL. If there is a sizeable piece of code whose function stretches beyond ‘trivial’, I like to see at least a short statement stating intent (what the developer is expecting the code to do is key, by the way… as discussed next).
  • Where you have comments, link the intent in these comments back to what the code is actually doing; then trail it back to the items ‘Acceptance Criteria’ where appropriate. I have been rescued (as a developer, submitting my work for DT) countless times by those performing DT on my code, just by someone relaying to me that ‘what I thought my code was doing’ (based on my comments) doesn’t tie up to the actual functionality being offered up. This has led to me, of course, face-palming myself but being relieved that the gap in my intent, when checked off against my actual code, had been picked up by somebody in time to catch it before QA (or deployment, gulp!). State intent, then reap the rewards when mistakes you make are more rapidly picked up for rectification.
  • Be sure to look for the use of language constructs/keywords or syntactic-sugar that is not permissible on your baseline, minimum supported environment (i.e. older versions of SQL Server or .NET), if what you work on has this concept of course. This is sure to be something that will get picked up by QA causing bounce backs, or by your consumers later on if you’re not careful!
  • Keep a look out for code that could (or should) be shared or has been placed in a project/location that does not make logical sense. At a bare minimum, picking up on this sooner rather than later will keep your code base tidier, allow for ample opportunities to put great code in places to be leveraged as much as possible. In other cases, asking these kinds of questions can expose flaws and issues with the way a solution has been architected, which occasionally will steer you clear of tight spots later down the line.
  • Where shared code has been changed, look for instances whereby other applications/areas of the code base could be broken as a result of the changes. Recompile code to check for this as required. I got bitten on the bum by this recently :-?.
  • Keep up to date with any coding standards documents that should be adhered to and make sure the guidelines are followed (within reason of course; you’ll always find a scenario whereby a rule can, and should, be broken).
  • Really do consider writing and using Unit Tests wherever possible. They are a useful facet in the grand scheme of things (I believe at least) and they do carry weight when pitched up against visually checking code and application testing in general.
  • Last little nuggets, which I see from time to time. Look for objects constantly being created inside loops, heavy amounts of string concatenation not using the correct constructs (e.g. a StringBuilder in C#) or missed opportunities to create sub Stored Procedures in T-SQL (sectioning off code to gain performance boosts and obtain better execution plans). In fact, for T-SQL it can be a useful exercise to check the performance of non-trivial pieces of code yourself by changing how it’s structured, whilst obtaining the same results of course. You may or may not be able to increase performance along the way, but you’ll have far better comprehension of the code by the end regardless.
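To put a handful of these points into code form, here is a contrived C# sketch (all names invented for illustration) showing up-front argument guarding, case-insensitive comparison via a StringComparison overload and loop concatenation done with a StringBuilder:

using System;
using System.Text;

public static class ReviewExamples
{
    // Guard arguments with a null/empty/whitespace check up front,
    // rather than letting a bad value fail somewhere deeper down.
    public static string FormatName(string name)
    {
        if (string.IsNullOrWhiteSpace(name))
        {
            throw new ArgumentNullException(nameof(name));
        }

        // Case-insensitive comparison via a StringComparison overload,
        // instead of the culture-sensitive default.
        if (string.Equals(name, "unknown", StringComparison.OrdinalIgnoreCase))
        {
            return string.Empty;
        }

        return name.Trim();
    }

    // Heavy concatenation inside a loop: prefer a StringBuilder over '+='.
    public static string BuildCsvLine(string[] values)
    {
        var builder = new StringBuilder();

        for (int i = 0; i < values.Length; i++)
        {
            if (i > 0)
            {
                builder.Append(',');
            }

            builder.Append(values[i]);
        }

        return builder.ToString();
    }
}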

Hopefully, this little snapshot from my bag o’ tricks is enough to get you started, or get the brain-juices flowing. Let me know what you think of these suggestions anyway; I’d really appreciate the opportunity to collate others general thoughts and get a collective consensus going.

Application Testing

Here is where I will defer the giving of advice to my beloved QA counterparts on this beautiful planet; this, of course, isn’t my area of expertise. My only opinion here (developers will possibly hate me for stating it) is that developers ‘should’ always perform application testing alongside a code review. In fact, I’m a keen advocate of developers being involved in performing QA on the odd occasion. I personally like doing this, provided I have a trusty QA on hand to assist me (thankfully, I work with the best one around ;-), so no worries there). The simple reasons for this are:

  • One way or the other, acquisition of Product Knowledge is going to be invaluable to you. It’s just as valuable to start using your products in anger as it is to analyse code for hours on end. The side-note here is that this is part of your overall ‘worth’ as a developer, so don’t neglect it.
  • At this stage, you get to think as the customer might. Ideas and thoughts you have at this stage, which direct more development or changes to the product, will be amongst some of the best (and most rewarding when it comes to getting that warm and fuzzy feeling!).
  • Urm… it’s embarrassing to say ‘oh yeah, that code’s great, thumbs up!’ only for it to explode in someone else’s face at the first press of a button! This is easily avoided by following the process through from end to end, no matter what.

Ok, I’ll have a go at channelling one QA thought. Ok, I’ve got it; here’s one from a mysterious and wise QA guru:

Mysterious and wise guru here… a friendly reminder to developers…never, ever, test your items using only one record! The reason? Well, I’ll test it with more than one record and break it instantly!

If anyone doing QA reads this, feel free to feed us your arcane knowledge… God knows we need it! I would advise keeping the original item requirements in mind whilst testing, securing in your thoughts any process variants that could potentially throw carefully laid plans to waste (e.g. what if we go back and forth between screens x and y before completing process z, or save the same form information twice, etc.). Your knowledge of the code can help at this stage, so use the opportunity whilst you have it.

Before I forget, an example of this section could look like this:

Developer Testing Application Testing Section Example.

Code Review/Application Testing – The Most Important Point…

Do it!!! If you’re not sure (as I am still, on a regular basis) then ask the question and run the risk of looking like an idiot! Be a spanner; who cares, at the end of the day? I dread to think of how many developers have stared at code and, ultimately, let stuff slide because they refused to pipe up and just say they weren’t sure or ‘didn’t get it’. It’s better to ask questions; if there turn out to be no issues, or it’s a simple misunderstanding, then no harm, no foul. On a good number of occasions I query things, only to later realise that I missed a line of code meaning it does work as intended, or that there’s some process that had slipped my mind… it hasn’t got me sacked (ahem, yet!). So my advice is just to open up and have a natter; it’ll be worth the ratio of ‘idiot’ to ‘bug-saving’ moments, trust me :-).

Admin

As with any process, there will always be (and if there isn’t for you, then let me know where you work, because it’s awesome!) a certain amount of ‘red tape’. Use this last section to keep track of whether any procedural bits and bobs have been handled. For example, I’m expected to cover the creation of a Release Note (as part of the practices I follow) for any item I work on, so it should be marked down in this section whether I’ve completed it or not. It could end up being a very simple section, like the following:

Developer Testing Admin Section Example.

I hope this has been helpful and informative; or, at least, got the mind going to start thinking about this process. Again, as mentioned above, I would love to hear your thoughts so please do get in touch either here or via social media.

Cheers all, keep smacking keys and producing coding loveliness in the meantime 🙂