PERT Estimation 101

Task estimates: the very real enemy of the developer. The numbers you provide for a task or piece of project work (whether in hours, days, story points, etc.) as an ‘estimate’ are often not viewed as an approximation by all involved. This particular subject is covered rather marvellously by Robert C. Martin, aka Uncle Bob, in his book The Clean Coder: A Code of Conduct for Professional Programmers; a book that I implore every developer to read. To quote Robert C. Martin directly, as I believe this covers the underlying issue pretty well in a good number of cases:

The problem is that we view estimates in different ways. Business likes to view estimates as commitments. Developers like to view estimates as guesses. The difference is profound.

On a good number of occasions, I have found myself taking a good hiding due to the estimates I have offered up on project work; normally where the estimate has been relayed to the customer as a concrete guarantee. In many cases, both parties should have shouldered more responsibility for this disparity: as the developer, I should have been more explicit about what my estimates actually meant, and those relaying information to the customer should probably have dug deeper into the heart of the matter, to extract further raw information as to the ‘likelihood’ that something could be achieved by a certain time.

Good dialogue and communication are key factors, of course, as is making sure that you are always clear about when you are providing an ‘estimate’ and when you are making a ‘commitment’.

This is a subject that I always find myself circling back to and that I truly want to get better at, plain and simple. For that reason, this post will look at one potential strategy for improving the estimation process: the Program Evaluation and Review Technique (again, covered pretty well in ‘The Clean Coder’).

What is PERT?

The Program Evaluation and Review Technique (PERT) is an analysis technique used to estimate large and complex projects more accurately. The basic idea is to reveal a distribution, or create a probabilistic structure, to reduce inaccuracies in estimates and attempt to account for uncertainty in a project’s schedule. It was created in 1957 for the U.S. Navy Special Projects Office to manage the Polaris nuclear submarine project.

In short, the process involves identifying likely values for the minimum and maximum time a task will take, whilst pinpointing the ‘most likely’ estimate for a task’s completion from an observed, calculated distribution. Uncertainty is factored in using standard deviation values.

How does it work?

The concept is as follows: when an estimate is required, you provide three numbers:

  • O: Optimistic Estimate (known as the ‘wow, how the hell did I do it that quick!’ estimate).
  • N: Nominal Estimate (the estimate with the greatest chance of being successful; essentially, the correct estimate).
  • P: Pessimistic Estimate (the doomsday estimate, where everything goes wrong short of the Earth’s tectonic plates opening up to consume you as you code the feature).

The first figure, ‘O’, involves thinking about the ‘pipe dream’ figure; i.e. if you could type the code at 1,000 lines a minute, making zero errors, and everything plugged into place and worked the first time. If Chuck Norris was coding the feature, this would be his estimate (it should have less than a one percent chance of being the actual time taken in order for the calculation to be meaningful). Next up, come up with the figure for N, which is defined as the estimate with the highest ‘chance’ of being the correct one; on a distribution bar chart of estimates, this would be the highest bar. Lastly, define P, the ‘worst case’ scenario estimate, stopping short of accounting for absolute catastrophes (again, this should have less than a one percent chance of being the ‘correct’ estimate when all is said and done).

You’ve got three numbers, hoorah! An expected time can be gleaned from these figures simply enough (with ET being the expected time; I’ve changed the variable names to make more sense to me, when compared to ‘The Clean Coder’ or the wiki, but feel free to go with what fits for you):

ET = (O + 4N + P) / 6

The wiki definitions for all of these numbers are good, so I’ve included them below:

Wiki Definitions for PERT Calculation Facets.

The last part of this wonderful little puzzle is calculating the standard deviation (SD), which gives you a value indicating the degree of uncertainty associated with a given task:

SD = (P - O) / 6
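
If it helps to see the two formulas as code, here’s a minimal C# sketch (the PertEstimate type and its member names are my own, purely for illustration):

public class PertEstimate
{
    // O: the 'everything goes perfectly' figure.
    public double Optimistic { get; private set; }

    // N: the estimate with the greatest chance of being correct.
    public double Nominal { get; private set; }

    // P: the 'worst case, short of catastrophe' figure.
    public double Pessimistic { get; private set; }

    public PertEstimate(double optimistic, double nominal, double pessimistic)
    {
        Optimistic = optimistic;
        Nominal = nominal;
        Pessimistic = pessimistic;
    }

    // ET = (O + 4N + P) / 6
    public double ExpectedTime
    {
        get { return (Optimistic + (4 * Nominal) + Pessimistic) / 6; }
    }

    // SD = (P - O) / 6
    public double StandardDeviation
    {
        get { return (Pessimistic - Optimistic) / 6; }
    }
}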

Let’s observe a detailed example to get to grips with this fully, concentrating on how some extra elements come into play for a series of tasks.

A detailed example

In this example, Ryan (a name that came totally off the top of my head; milliseconds of thought went into that!) has a total of five items to estimate as part of a full project. Here are the specific tasks and Ryan’s initial estimates (expressed in days), before applying any PERT calculations:

SIDE NOTE: Breaking a large project or piece of work into smaller tasks can often be a good strategy for mitigating risk up front and weeding out bugbears ahead of time.

Task                                                          Estimate
Write stored procedures for the customer export page             3
Write Web API components for the customer export page            2
Build the business logic layer for the customer export page      5
Build and style the customer export page (UI)                    3
Assist Hayley with automated tests                               1

That comes to a grand total of 14 days.

Ryan is now asked to apply the PERT approach to his estimates, and the mental cogs start to turn. Here’s a run-down of Ryan’s thoughts on each part of the project, in order:

  1. Stored Procedures: “Ok, I think I am fairly likely to complete this within 3 days, but there are quite a few elements of Dave’s existing helper stored procedures that I don’t understand. So, I’ll actually have to factor that in. Oh, and I have to do a data conversion!”
  2. Web API Components: “I’ve written most of the Web API components up to this point, so I’m well versed in this area at least and am confident with this figure. Even if everything went wrong it would only stretch to 3 days.”
  3. Business Logic Layer: “Jane wrote most of this layer and I’m not that well-versed in how it hangs together. The domain behind this is actually quite new to me. This could take significantly more time…”
  4. Building the page and styling: “The specification looks good and I’ve done a few similar pages in the past. It could take slightly longer if things did go wrong, as there are a few UI bells and whistles, including effects, that are required…”
  5. Assisting Hayley: “I was just kind of time boxing this in my mind, it’s almost certain she will need support beyond this, however – it really is an unknown!”

After a bit of head scratching and discussion with other team members, the figures below come out of the boiling pot. PERT has been applied, times are expressed in days and abbreviations used for the sake of space. Values are rounded to one decimal place:

Task                                                          O    N    P    ET     SD
Write stored procedures for the customer export page          3    5    11   5.7    1.3
Write Web API components for the customer export page         1    2    3    2.0    0.3
Build the business logic layer for the customer export page   2    6    13   6.5    1.8
Build and style the customer export page (UI)                 1    4    9    4.3    1.3
Assist Hayley with automated tests                            1    3    7    3.3    1.0

ET and SD values are calculated using the formulas previously discussed. Firstly, using the sum of the ET values, we can see that Ryan will ‘likely’ be finished with these tasks in 21.8 days; let’s say 22 days. To get a feel for the ‘uncertainty’ behind these tasks, we take the square root of the sum of the squares of the SD values for the tasks (a bit of a mouthful), as shown here:

√(1.3² + 0.3² + 1.8² + 1.3² + 1²) =
√(1.69 + 0.09 + 3.24 + 1.69 + 1) =
√7.71 ≈ 2.8 days

We can surmise from this that there could be 2.8 days (let’s say 3 days) of uncertainty to factor in; Ryan could therefore take somewhere in the region of 25 days to complete the series of tasks, or possibly even 28 days (if we double the calculated SD across the sequence). However, we know from this that anything beyond these values is an unlikely outcome. We are a far cry from the original estimate of 14 days to cover all of the tasks.
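
To sanity-check the arithmetic, here’s how Ryan’s figures could be run through the illustrative PertEstimate type sketched earlier, with the combined uncertainty coming from the root of the sum of the squares:

using System;
using System.Linq;

public class PertExample
{
    public static void Main()
    {
        // Ryan's five tasks, expressed as (O, N, P) triples in days.
        var tasks = new[]
        {
            new PertEstimate(3, 5, 11), // Stored procedures
            new PertEstimate(1, 2, 3),  // Web API components
            new PertEstimate(2, 6, 13), // Business logic layer
            new PertEstimate(1, 4, 9),  // Page build and styling (UI)
            new PertEstimate(1, 3, 7)   // Assisting Hayley with automated tests
        };

        // Sum of the ET values: ~21.8 days.
        double likelyTotal = tasks.Sum(t => t.ExpectedTime);

        // Square root of the sum of the squares of the SD values: ~2.8 days.
        double uncertainty = Math.Sqrt(tasks.Sum(t => t.StandardDeviation * t.StandardDeviation));

        Console.WriteLine("Likely finish: {0:F1} days, uncertainty: {1:F1} days", likelyTotal, uncertainty);
    }
}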

Summary

The general premise is to add some pragmatism and realism to the whole process of estimation. I have, on many occasions, estimated too optimistically and then struggled to hit the mark; this is just one technique for attempting to mitigate that risk by being more methodical in the estimation process.

I hope you’ve enjoyed this primer and happy coding, estimating, etc. Until the next time!

Coding Snippets in SSMS (Microsoft SQL Server)

Hi all,

Continuing on with the idea of coding snippets, here’s a follow up showing the (very similar) procedure for creating snippets in SQL Server Management Studio (SSMS).

As before, you’ll need a file with the ‘.snippet’ extension to begin, which contains a snippet in an XML format. A basic example is shown below for reference, encapsulating some code that can quickly search a Stored Procedure definition for a particular search term:

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
<_locDefinition xmlns="urn:locstudio">
    <_locDefault _loc="locNone" />
    <_locTag _loc="locData">Title</_locTag>
    <_locTag _loc="locData">Description</_locTag>
    <_locTag _loc="locData">Author</_locTag>
    <_locTag _loc="locData">ToolTip</_locTag>
</_locDefinition>
    <CodeSnippet Format="1.0.0">
		<Header>
		<Title>Sys.procedures SQL Check</Title>
			<Shortcut></Shortcut>
		<Description>Creates SQL to check sys.procedures for a literal string.</Description>
		<Author>Lewis Grint</Author>
		<SnippetTypes>
			<SnippetType>Expansion</SnippetType>
		</SnippetTypes>
		</Header>
		<Snippet>
			<Declarations>
				<Literal>
					<ID>SearchTerm</ID>
					<ToolTip>The term to search for.</ToolTip>
					<Default>SearchTerm</Default>
				</Literal>
			</Declarations>
			<Code Language="SQL">
				<![CDATA[--Check SP content for a literal string
SELECT 
name
FROM sys.procedures 
WHERE OBJECT_DEFINITION(OBJECT_ID) LIKE '%$SearchTerm$%';
	]]>
			</Code>
		</Snippet>
    </CodeSnippet>
</CodeSnippets>

As you can see, this file takes the familiar ‘.snippet’ format, supporting Snippet, Code and Declarations elements. These include the code snippet itself, and placeholder definitions, as before. You will also find the Header element, where pertinent information about the snippet can be stored, such as author and description.

Easily recognisable territory so far, I think you’ll agree.

Within SSMS, much as in Visual Studio, navigate to Tools > Code Snippets Manager (Ctrl + K, Ctrl + B) to get underway:

Access Code Snippets Manager.

Again, pretty much plain sailing from here on in. You’ll have the same options as in Visual Studio to import a snippet file (just use the Import button):

Code Snippets Manager before Import.

Navigate to your snippet file and select it. You’ll now have the opportunity to map the snippet to a nominated folder. In this instance, we’ll use the My Code Snippets folder. Click Finish to complete the process.

SSMS Code Snippets Manager Import Dialog.

You should then be able to see your snippet under the nominated folder:

Code Snippets Manager after Import.

That’s it, you’ve imported the snippet! You can now open a new query and use Ctrl + K, Ctrl + X to launch the Code Snippet Picker, whereby you can use the arrow keys to move between folders and snippets:

Snippet before Expansion.

Hit Enter to select your snippet; you should then see the following:

Snippet after Expansion.

Then you can start using your code, using Tab to cycle through any placeholders. Another trick I’ve used is to add an entire folder of snippets via the Code Snippets Manager’s Add button (instead of using Import). With a ‘tactically’ named folder, you can gain faster access to your snippets:

Adding a Custom Snippets Folder.

This type of snippet is an Expansion snippet. You can write other kinds, such as the Surrounds With snippet; an example illustrating a try/catch construct can be found here on MSDN:

Surrounds With Snippet Example

I hope this has been helpful. Please comment below if you have any thoughts!

Cheerio!

Code Snippets in Visual Studio (C#)

Hi all,

This post forms part of a mini-series that I’d like to do surrounding the topic of code snippets. I have, for some time, been using these merrily within Microsoft SQL Server (which I’ll cover) but have neglected to look at creating custom snippets in every other environment I am privy to. Time to put a stop to that :-).

The following is a small demonstration of how to knock up a simple, custom code snippet in C#. Along the way, I’ll discuss a known limitation of creating snippets for this language specifically (as opposed to Visual Basic, for example). With any luck, this will get you to investigate the topic if you haven’t already done so.

Creating Custom Code Snippets

If you’re looking for a simple introduction, this MSDN article is a great place to head:

Walkthrough: Creating a Code Snippet

Once you’re primed and at the start line you should be pushing out snippets in no time.

You’ll need to craft your own file with the ‘.snippet’ extension to begin, where the content takes the form of standard XML. The one I have built for this example is below; I’ll discuss it as we move forward.

<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets
    xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
    <CodeSnippet Format="1.0.0">
		<Header>
			<Title>INotifyPropertyChanged Full Definition Snippet</Title>
			<Author>Lewis Grint</Author>
			<Description>Generates a stub class (including namespace/using statements) implementing INotifyPropertyChanged fully.</Description>
			<Shortcut>inotifyfull</Shortcut>
		</Header>
        <Snippet>
			<Declarations>
			    <Literal>
					<ID>NamespaceName</ID>
					<ToolTip>Replace with the namespace.</ToolTip>
					<Default>NamespaceName</Default>
				</Literal>
			    <Literal>
					<ID>ClassName</ID>
					<ToolTip>Replace with the class name.</ToolTip>
					<Default>ClassName</Default>
				</Literal>				
			</Declarations>
            <Code Language="CSharp">
                <![CDATA[using System.ComponentModel;
using System.Runtime.CompilerServices;

namespace $NamespaceName$
{
    public class $ClassName$ : INotifyPropertyChanged
    {
        #region INotifyPropertyChanged Interface Support

        /// <summary>
        /// PropertyChangedEventHandler for INotifyPropertyChanged support.
        /// </summary>
        public event PropertyChangedEventHandler PropertyChanged;

        /// <summary>
        /// Protected virtual method that should be called when a
        /// supporting property's value is changed (for binding to 
        /// work correctly in the application).
        /// </summary>
        /// <param name="propertyName">The property that has been altered.</param>
        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = "")
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        #endregion INotifyPropertyChanged Interface Support
    }
}]]>
            </Code>
        </Snippet>
    </CodeSnippet>	
    <CodeSnippet Format="1.0.0">
		<Header>
			<Title>INotifyPropertyChanged Class Snippet</Title>
			<Author>Lewis Grint</Author>
			<Description>Generates a stub class implementing INotifyPropertyChanged.</Description>
			<Shortcut>inotify</Shortcut>
		</Header>
        <Snippet>
			<Declarations>
				<Literal>
					<ID>ClassName</ID>
					<ToolTip>Replace with the class name.</ToolTip>
					<Default>ClassName</Default>
				</Literal>
			</Declarations>
            <Code Language="CSharp">
                <![CDATA[    public class $ClassName$ : INotifyPropertyChanged
    {
        #region INotifyPropertyChanged Interface Support

        /// <summary>
        /// PropertyChangedEventHandler for INotifyPropertyChanged support.
        /// </summary>
        public event PropertyChangedEventHandler PropertyChanged;

        /// <summary>
        /// Protected virtual method that should be called when a
        /// supporting property's value is changed (for binding to 
        /// work correctly in the application).
        /// </summary>
        /// <param name="propertyName">The property that has been altered.</param>
        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = "")
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        #endregion INotifyPropertyChanged Interface Support
    }]]>
            </Code>
        </Snippet>
    </CodeSnippet>
</CodeSnippets>

After a wrapping CodeSnippets element, you construct a CodeSnippet containing, at a bare minimum, a Header and Snippet element. Within the Header you can express metadata relating to the snippet, such as the Title and Author, alongside the Description (which shows as an IntelliSense tooltip on use) and the actual Shortcut value associated with bringing the snippet into scope, which is typed directly into the IDE.

The Snippet element contains a Code element and is, as you would expect, where you place your physical code ‘slice of goodness’. It’s important to place your code within the square brackets of the CDATA declaration.

A smaller, valid C# snippet could, for example, look like:

    <CodeSnippet Format="1.0.0">
    <Header>
        <Title>MessageBox Show Snippet</Title>
        <Author>Lewis Grint</Author>
        <Description>Generates a full MessageBox Show call.</Description>
        <Shortcut>mb</Shortcut>
    </Header>
    <Snippet>
        <Code Language="CSharp">
            <![CDATA[MessageBox.Show("Hello World");]]>
        </Code>
    </Snippet>
</CodeSnippet>

Not the most useful example, as within Visual Studio you already have the ‘mbox’ snippet covering this (which uses a fully qualified expression for MessageBox.Show), but it illustrates the most basic form of a snippet nicely (just remember it still needs the wrapping CodeSnippets element shown earlier).

If you want placeholders in your code that can be tabbed through sequentially once the snippet is brought into scope within the IDE, you need to create a Declarations element under Snippet. Here you can create a Literal, to be altered by the snippet user on-the-fly, with a tooltip and default value to boot. Embed these in your code snippets by wrapping the ID element value in ‘$’ symbols within the code (again, as shown above in the more detailed example).

That should be enough to run with, as it stands. We have an idea of how to create snippets, but how do you import them into Visual Studio? Let’s find out…

Importing Custom Code Snippets into Visual Studio

Now you have your snippets file in tow it’s time to rock on over to Visual Studio. You can access the Code Snippets Manager by navigating to Tools > Code Snippets Manager (Ctrl + K, Ctrl + B), as illustrated here:

Accessing the Code Snippets Manager.

Once inside the Code Snippets Manager alter the language to the one of your choosing, which is ‘CSharp’ in my case, and click the My Code Snippets folder to select it as the repository for your snippets, before clicking Import to proceed:

Code Snippets Manager Dialog.

You can now browse to your snippet file and, on selection, will be presented with a confirmation screen that should look like this:

Code Snippets Manager Import Dialog.

Your snippet file will be validated at this stage and you will be notified if the markup is malformed/invalid.

After clicking Finish, you’ll find yourself back in the Code Snippets Manager screen and, with any luck, you should be able to spot your individual snippets under the My Code Snippets directory:

Code Snippets Manager After Import.

As easy as pie. The snippets are imported, so let’s use them.

Using Imported Custom Code Snippets

My test bed snippets file provides two snippets for generating a stub class definition implementing the INotifyPropertyChanged interface (which comes into play when constructing MVVM applications in WPF, for example, so that a client can be notified of, and adjust to, an underlying property change in a binding scenario).

Firstly, let’s inspect a partial snippet that generates the class code only, with the required event object and OnPropertyChanged method definition (just start typing the shortcut and press Tab twice to push the whole snippet into the IDE, whereby you can tab to each placeholder):

Before Partial Snippet Usage.

After Partial Snippet Usage.

You can see that some using statements need to be brought into play to resolve some types here. Frustratingly, Visual Basic supports the concept of including References and Import statements in the snippet XML, whereas C# does not. More details can be found here:

How to: Create a New Snippet with Imports and References

In my scenario, this is an entire stub class, so I didn’t have a particular issue with placing the using statements, and the namespace definition, into the code snippet itself (with placeholders):

Before Full Snippet Usage.

After Full Snippet Usage.

Voila!
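
For a bit of extra context (this part isn’t generated by the snippet itself), a bindable property leaning on the generated OnPropertyChanged method might look something like the sketch below; the Customer and CustomerName names are hypothetical:

using System.ComponentModel;
using System.Runtime.CompilerServices;

namespace SnippetDemo
{
    public class Customer : INotifyPropertyChanged
    {
        private string customerName;

        public event PropertyChangedEventHandler PropertyChanged;

        // A bindable property; because OnPropertyChanged uses [CallerMemberName],
        // the property name ("CustomerName") is supplied automatically.
        public string CustomerName
        {
            get { return customerName; }
            set
            {
                if (customerName != value)
                {
                    customerName = value;
                    OnPropertyChanged();
                }
            }
        }

        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = "")
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }
    }
}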

I’ve seen some murmurings that the Snippet Editor on CodePlex is worth a look if you want a more robust editor for creating your snippets. Support is only listed for Visual Studio 2010 and its version date is listed as 2009, so it doesn’t appear to be that ‘current’. Perhaps a better tool is out there (an extension, or some such); worthy of more investigation:

Codeplex Snippet Editor

That’s all for now, thanks for reading through and I’ll catch you next time!

Developer Testing Hints and Tips

Howdy happy campers.

I want to discuss a piece somewhat divergent from the topic of physical coding, although still a facet of development that is close to my heart (and easy to overlook in many respects when constantly mashing keys and churning out code): developer testing. More specifically, I want to provide a set of guidelines that ‘may’ (insert disclaimer) help with the process and provide some food for thought.

This is in no way a definitive guide or best practice for that matter; more just a personal take on what I find works for me and the guts of a generally beneficial ‘templated’ approach to follow.

I would love to invite discussion on this one (or just get a take on what works for you), so please do hit me up on Twitter or add a comment below.

My Process

As with any process, groundwork and preparation can be vital to achieving a good result. To this end, I invariably start my developer testing on a given work item with a template document that looks like this:

Developer Testing Helper Document Structure.

What goes into your document will largely depend on what technologies you are using, of course. For instance, you may never have a database-centric element to the development you perform, rendering the ‘Database Upgrade’ section null and void in all cases. Ultimately, add and remove sections as you see fit, but do strive for consistency. I myself test a mixture of items that may or may not include T-SQL elements; however, I choose to include the ‘Database Upgrade’ section on every occasion, preferring to note that there were no T-SQL-related parts to the item, even just to mark it as ‘N/A’ (for my own sanity and for easy recollection later down the line, without the need to scan a lengthy list of changes elsewhere in the notes). Basically, my OCD kicks in and I start to wonder why I haven’t included a section that I ‘always’ include, leading to paranoia that I’ve missed something!

Each section other than Notes (which is probably self-explanatory) can result in a PASS, QUERY or PASS-BACK state. Section state obviously knocks on and influences the result recorded against the ‘Developer Testing Summary’ header. PASS denotes an ‘A-Okay’ state; good to rock and roll! QUERY gives you the opportunity to mark a section with ‘discussion points’ or things you would like to check, without necessarily marking it off as incorrect (I tend to do this a lot, as I love to talk!). PASS-BACK is used when an error can be replicated/reproduced consistently, or a logic problem definitely flies in the face of the ‘Acceptance Criteria’ for the story. Where things such as coding standards have been contradicted, I tend to use a mixture of QUERY/PASS-BACK, depending on the notes the developer has provided (it could be a flat PASS, of course, as there are always occasions where the rules need to be broken!).

So, section by section, let’s go over what we have and why…

Notes

It’s incredibly tempting to start diving into code, comparing files and trying to make sense of what the hell is going on, but (I may get in trouble here) I’m going to tell you to stop right there. It’s so easy, and I’ve done it (probably) hundreds of times, to get eye-deep in code, wasting large pots of time, before the basic question of ‘what are we doing and why’ has been answered. This is where this section comes in.

Use this area of your notes to compile a few short paragraphs (or bullet points, whatever you prefer) on the following:

  • Read over the developer’s notes and, after discovering whether any changes have occurred to the underlying requirements for the story, start to create…
  • …your own summary of the ‘Acceptance Criteria’ for this particular story (or item; whatever term floats your boat. I’m going to use both interchangeably to alleviate bombarding you with the same term too much!).
  • Then, list any other pertinent information surrounding how the developer has coded the item (e.g. decisions that have shaped how the story has turned out). For example, did they place code into a different service than was originally expected because of ‘x’ reason, or did some logic end up in a different layer in the technology stack than conceived originally.
  • Lastly, note any of your initial thoughts, concerns or things you intend to check/look for based on this initial scoop of information.

The core reason I do this is to try to solidify my expectations, begin thinking about a test plan (yes, I like to always perform application testing, even if rudimentary at the bare minimum; this isn’t just down to QA in my mind!) and to try to mitigate the chances of any massive surprises. Surprises, although they will always happen eventually one way or another, lead to more confusion and increase the chances of things slipping through the net. Just by following this exercise, or a similar routine, you’ll be able to cross-reference your expectations with the code changes you see, and more easily pick up errors, incorrect logic or unrequired alterations. This will limit the chances of something slipping past your mental filter as an ‘I guess that’s correct’ or ‘perhaps that class needed to be changed also, ok’ moment (don’t lie, we’ve all had them 😉 !).

Cool, we’ve formed in our own minds what this item is for, how it’s been developed and what, as a baseline, we are expecting to see. Let’s test (and along the way, discuss a few more tactics).

Database Upgrade

Some of what I’ll discuss here is formed around how my personal development role operates, so feel free to modify this approach to your needs. Again, if you don’t deal in the realm of database development at all pass go and collect £200, you’ve bypassed this step; congratulations!

The essence of this section is being able to state that new Stored Procedures, Functions, Views, Triggers, etc. can be ‘created’ without error on a database in a suitable ‘versioned’ state. Also, can any ad-hoc data scripts that are part of the development item be run without error?

Some other considerations…

  • Are object creation scripts/ad-hoc scripts expected to be re-runnable? If yes, then specifically test and note this down here.
  • If you are in an environment whereby this kind of testing needs to be performed on multiple databases then mark this down here also (splitting notes down into sections against each target database/environment, whatever is applicable).
  • We work with a ‘versioned’ database so I make an effort to state which version I am on at the start of the testing run for reference.

An example of what this section may look like is illustrated below for reference:

Developer Testing Database Upgrade Section Example.

A QUERY/PASS-BACK at this stage will bubble up and alter the status listed for the entire developer testing process. An additional note here; depending on how many queries/issues you find (and the length of the testing notes in general), you may want to copy the core query/error text to the top of the notes for easy review by the developer later (this applies to all of the following sections in fact).

Code Review

Moving on to the main filling of your developer testing sandwich: the actual code review! Obviously, you’ll be reviewing files, scripts and new or amended code here, but definitely take a second or two out to make sure the code compiles before proceeding (unless your setup has automated builds/continuous integration, or some other clever solution, to tell you this), and make the relevant note. A simple step, but one easily forgotten, meaning you can get to the end of a code review before realising parts of the code don’t compile; eek!

I tend to, from a structural and sanity point of view (clarity is key), split my testing notes here into sections based on technology (i.e. T-SQL, C#, JavaScript, etc.) or, at least, make some effort to order a single list of files by file type. For C# changes, I group code files by the related project (given that projects should represent a logical grouping of types, hence allowing you to dice up changes by functional area, i.e. common extensions, data access helpers, etc.).

The point that you should take away from this, however, is that a little bit of thought and structuring at this phase will make your life easier, especially as the number of code files racks up.

If you’re looking for a small sample on how this section could look, after being fleshed out, then here you go:

Developer Testing Code Review Section Example.

However, what about the code review procedure itself, I hear you cry! What follows shouldn’t be taken as an exhaustive list, or as correct in every given situation for that matter; these are more suggestions as to what I’ve found helpful over time (my mental kit bag):

  • For C# (and other object-orientated languages that support this concept), ensure that null values are correctly handled; whether this is by capturing nulls on a call to a given method and throwing an ArgumentNullException, or by doing a ‘not equal to null’ check (!= null) around code that would otherwise fail.
  • Strings can be tricky buggers, especially in case-sensitive environments! In most cases, comparisons should be performed taking case-sensitivity out of the equation (another case-by-case situation of course). I’d keep an eye out, again for C#, for the correct use of String.ToUpperInvariant, String.ToLowerInvariant and String.Equals. For String.Equals, use an overload containing a StringComparison enumeration type, for case/culture-insensitive options.
  • Keep an eye out for instances whereby checks are performed against strings being null or an empty string (either one or the other only). This can quickly lead to chaos; switch it out for a null, empty or whitespace check (e.g. String.IsNullOrWhiteSpace).
  • Empty try/catch handlers are evil. Kill any you find.
  • Check up for instances whereby a class consists of all static members, but the class is not marked as static.
  • Train the eye to look for casting operations; you’ll always catch a few where the casting operation ‘could’ throw exceptions and should, therefore, be subject to more careful handling.
  • Big bugbear in the realm of coding: if a method requires scrolling to get through, it’s a significant indication right off the bat that it is a prime candidate for refactoring. Unless there is a good reason, or it is clearly performing one logical function, consider having a conversation about breaking the method down.
  • Look for missed opportunities to rationalise code using inheritance. The most common one I see (and forget myself) is the abstraction of code to base classes and the use of virtual methods/overrides in subclasses. Hawk-eye for types that should be abstract.
  • A simple one, but something that could easily slap you in the face if you’re not careful. When ‘language switching’, in a DT sense, take a second to make a mental note that you should be changing mind-sets (i.e. the syntax is changing, get your game-face on!). For example, if you stare into the abyss of C# for long enough (seeing ‘!= null’), you may, on switching to T-SQL, not notice a ‘!= NULL’ that should have been an ‘IS NOT NULL’. Those trees can be damn hard to find in the woods, after all!
  • Watch out for expensive operations, whereby values should be obtained once, ideally, then cached. It can be easy to let code skip by that repeatedly calls a database, for instance, to the detriment of performance (or possible errors, depending on the nature of the functionality called).
  • I love, love, loooovvvveeeee comments! Probably (ok, to the levels of being a little OCD about it!) too much. As far as C# code goes, I prefer (but don’t fail on this basis alone) XML Comments for C# and like to see comments on bulkier pieces of T-SQL. If there is a sizeable piece of code, whereby its function stretches beyond ‘trivial’, I like to see at least a short statement stating intent (what the developer is expecting the code to do is key by the way…as discussed next).
  • Where you have comments, link the intent in these comments back to what the code is actually doing; then trail it back to the item’s ‘Acceptance Criteria’ where appropriate. I have been rescued (as a developer submitting my work for DT) countless times by those performing DT on my code, just by someone relaying to me that ‘what I thought my code was doing’ (based on my comments) doesn’t tie up with the actual functionality being offered up. This has led to me, of course, face-palming myself but being relieved that the gap between my intent and my actual code had been picked up by somebody in time to catch it before QA (or deployment, gulp!). State intent, then reap the rewards when mistakes you make are more rapidly picked up for rectification.
  • Be sure to look for the use of language constructs/keywords or syntactic-sugar that is not permissible on your baseline, minimum supported environment (i.e. older versions of SQL Server or .NET), if what you work on has this concept of course. This is sure to be something that will get picked up by QA causing bounce backs, or by your consumers later on if you’re not careful!
  • Keep a look out for code that could (or should) be shared, or has been placed in a project/location that does not make logical sense. At a bare minimum, picking up on this sooner rather than later will keep your code base tidier and provide ample opportunity to put great code in places where it can be leveraged as much as possible. In other cases, asking these kinds of questions can expose flaws and issues with the way a solution has been architected, which occasionally will steer you clear of tight spots later down the line.
  • Where shared code has been changed, look for instances whereby other applications/areas of the code base could be broken as a result of the changes. Recompile code to check for this as required. I got a bite on the bum from this recently :-?.
  • Keep up to date with any coding standards documents that should be adhered to and make sure the guidelines are followed (within reason of course; you’ll always find a scenario whereby a rule can, and should, be broken).
  • Really do consider writing and using Unit Tests wherever possible. They are a useful facet in the grand scheme of things (I believe at least) and they do carry weight when pitched up against visually checking code and application testing in general.
  • Last little nuggets, which I see from time to time: look for objects constantly being created inside loops, heavy amounts of string concatenation not using the correct constructs (e.g. a StringBuilder in C#) or missed opportunities to create sub Stored Procedures in T-SQL (sectioning off code to gain performance boosts and obtain better execution plans). In fact, for T-SQL it can be a useful exercise to check the performance of non-trivial pieces of code yourself by changing how they’re structured, whilst obtaining the same results of course. You may or may not be able to increase performance along the way, but you’ll have far better comprehension of the code by the end regardless (a few of the C# points above are pulled together in the short sketch just after this list).
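
To make a few of those points concrete, here’s a small, hedged C# sketch (all names are hypothetical, purely for illustration) pulling together the null guard, the null/empty/whitespace check, the StringComparison overload and the StringBuilder suggestion:

using System;
using System.Collections.Generic;
using System.Text;

public static class ReviewExamples
{
    // Capture nulls on call and throw an ArgumentNullException, rather than
    // letting the loop below fail with a NullReferenceException.
    public static string BuildCsvLine(IEnumerable<string> values)
    {
        if (values == null)
        {
            throw new ArgumentNullException(nameof(values));
        }

        // A StringBuilder avoids repeated string concatenation inside the loop.
        StringBuilder builder = new StringBuilder();

        foreach (string value in values)
        {
            // Null, empty and whitespace are all handled with a single check.
            if (!string.IsNullOrWhiteSpace(value))
            {
                builder.Append(value.Trim()).Append(',');
            }
        }

        return builder.ToString().TrimEnd(',');
    }

    // A case-insensitive comparison via a StringComparison overload, instead
    // of relying on default, culture-sensitive behaviour.
    public static bool TermsMatch(string first, string second)
    {
        return string.Equals(first, second, StringComparison.OrdinalIgnoreCase);
    }
}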

Hopefully, this little snapshot from my bag o’ tricks is enough to get you started, or get the brain-juices flowing. Let me know what you think of these suggestions anyway; I’d really appreciate the opportunity to collate others general thoughts and get a collective consensus going.

Application Testing

Here is where I will defer the giving of advice to my beloved QA counterparts on this beautiful planet; this, of course, isn’t my area of expertise. My only opinion here (developers will possibly hate me for stating it) is that developers ‘should’ always perform application testing alongside a code review. In fact, I’m a keen advocate of developers being involved in performing QA on the odd occasion. I personally like doing this, provided I have a trusty QA on hand to assist me (thankfully, I work with the best one around ;-), so no worries there). The simple reasons for this are:

  • One way or the other, acquisition of Product Knowledge is going to be invaluable to you. It’s just as valuable to start using your products in anger as it is to analyse code for hours on end. The side-note here is that this is part of your overall ‘worth’ as a developer, so don’t neglect it.
  • At this stage, you get to think as the customer might. Ideas and thoughts you have at this stage, which direct more development or changes to the product, will be amongst some of the best (and most rewarding when it comes to getting that warm and fuzzy feeling!).
  • Urm…it’s embarrassing to say ‘oh yeah, that code’s great, thumbs up!’ for it then to explode in someone else’s face at the first press of a button! Easily avoided by following the process through from end to end, no matter what.

Ok, I’ll have a go at channelling one QA thought. Ok, I got it, here’s one from a mysterious and wise QA guru:

Mysterious and wise guru here… a friendly reminder to developers…never, ever, test your items using only one record! The reason? Well, I’ll test it with more than one record and break it instantly!

If anyone doing QA reads this, feel free to feed us your arcane knowledge…God knows we need it! I would advise keeping the original item requirements in mind whilst testing, of course; securing in your thoughts any process variants that could potentially throw carefully laid plans to waste (e.g. what if we go back and forth between screens x and y before completing process z, or save the same form information twice, etc.). Your knowledge of the code can help at this stage, so use the opportunity whilst you have it.

Before I forget, an example of this section could look like this:

Developer Testing Application Testing Section Example.

Code Review/Application Testing – The Most Important Point…

Do it!!! If you’re not sure (as I still am on a regular basis) then ask the question and run the risk of looking like an idiot! Be a spanner; who cares at the end of the day? I dread to think how many developers have stared at code and, ultimately, let stuff slide because they refused to pipe up and just say they weren’t sure or ‘didn’t get it’. It’s better to ask questions, and if there turn out to be no issues, or it’s a simple misunderstanding, then no harm, no foul. On a good number of occasions I query things, only to later realise that I missed a line of code meaning it does work as intended, or that there’s some process that had slipped my mind…it hasn’t got me sacked (ahem, yet!). So my advice is just to open up and have a natter at the end of the day; it’ll be worth the ratio of ‘idiot’ to ‘bug-saving’ moments, trust me :-).

Admin

As with any process, there will always be (and if there isn’t for you then let me know where you work because it’s awesome!) a certain amount of ‘red tape’. Use this last section to keep track of whether any procedural bits and bobs have been handled. For example, I’m expected to cover the creation of a Release Note (as part of the practices I follow) for any item I work on, so it should be marked down in this section as to whether I’ve completed it or not. It could end up just being a very simple section, like the following:

Developer Testing Admin Section Example.

I hope this has been helpful and informative; or, at least, got the mind going to start thinking about this process. Again, as mentioned above, I would love to hear your thoughts so please do get in touch either here or via social media.

Cheers all, keep smacking keys and producing coding loveliness in the meantime 🙂

Codecademy.com – JavaScript Course Impressions

In my day-to-day job we use JavaScript in anger (or at least have to deal with it) on a sporadic basis. We dive into T-SQL, C# (either for server-side logic in web applications or for desktop applications) and other parts of web development (CSS and HTML) on a rapidly spinning merry-go-round of coding fun. The variation is great, but getting some solid exposure time is also nice to have. JavaScript in particular, as a pillar of web development (alongside HTML and CSS), is something I’ve wanted to spend more time on (even though I have a solid grounding now); so I’ve decided to slowly edge my way towards a Microsoft Certification, namely the MCSD: Web Applications Certification.

This ties in nicely with a personal goal of mine to test out codecademy.com; just to take it for a spin more than anything else. So, I’ve signed up to run through the beginner JavaScript course and give my impressions of it (and to see if anything can be gleaned; start with the basics and work on up!).

Although the content is basic, I actually found it pretty well laid out; I can imagine somebody new to coding finding this fairly easy to follow which is fantastic. The interface is clean and simple.

The basics of throwing dialogs are covered early on (confirm and prompt; I haven’t seen alert yet, however), along with a simple discussion on datatypes (numbers, strings and booleans, as you’d expect). The classic debugging aid that is console.log is also discussed (always a bit of a go-to, I find!).

Beyond simple mathematical operations and string manipulation techniques, I came across the first thing that is always on the periphery of my mind and can easily catch you unawares (it differs slightly from the C# world, so it’s all the more easy to get wrong): the equality operators. To compare that something is of equal value and equal type, ‘===’ and ‘!==’ must be used; a subtlety worth remembering and easy to trip up on when the old noggin’ is not switched on.

Comparison Operator Differences.

The full w3schools documentation relating to this can be found here:

JavaScript Comparison Operators

Also, I tried to throw the site out a few times by leaving some humdinger code in place; all handled well in every instance:

Does It Catch Mistakes?

Does Not Compute!

I did, in some areas, go out of my way to code the examples slightly differently to how they were specified (looking to get the same result in all cases, of course) and, for the most part, it seemed quite resilient to this (including the use of functions not specifically mentioned in the examples).

The little section regarding comments was a joy to behold. I’m well known for writing a tonne of comments; I’m pretty sure everyone hates trawling through them (what can I say, it’s a habit!). This is certainly what comments often boil down to, however:

What Code Comments Are Really About!

You also get offered up the classic programming golden nugget; the piece of knowledge no one can do without: the origin of the term debugging! Always a classic!

Lastly, I actually quite like working with zero IntelliSense. It’s always useful to get that little nudge in the back; a prod to say ‘hey you, switch on and get your act in gear. I’m not going to auto-complete everything for you and let you tab your way to glory!’. I’ve actually sat down in the past and done rudimentary C# programming and very basic websites using Notepad alone; well worth doing, in my opinion.

It’s been an interesting side-line at any rate, so I’ll continue on for now. I plan to move on to the w3schools certification as a middle ground before the MCSD Certification, so more to follow shortly.

Cheerio!

Back Online: Normal Service Resumed

I’m back from my hiatus which encompassed getting married, eating far too much food and drinking wine and beer on the wonderful Adriatic coast. It’s time to get back to some serious coding and perhaps reconsider the longer term plans for this blog.

To start us off, I’ve been contemplating pushing a little money into this to sharpen up the experience, and will most likely give the blog some dedicated presence on Facebook/Twitter. Why do it by halves; I’ll go balls deep and hope for the best!

There are numerous items that I previously wanted to cover, and still plan on covering, but other nuggets of technology have caught my eye in the interim. In addition to just writing code, I would also like to reflect on my own methodologies for learning subject matter and trying to improve comprehension as I progress on this journey. Anything I do to this end will get ‘air time’ within this blog, and I’ll let you all know if I come across anything that works particularly well (or falls flat on its face!) as and when it happens.

Lastly, although not strictly ‘code’ based, my wife (weird to say that!) plans on starting her own business this year so it provides us both with an opportunity to reimagine our workspace in the home. The plan is to turn our crap-hole of a box room into a useable work area; as we get stuck into this I’ll post updates to show how this evolves.

As we all know, putting something down on paper (or the internet!) is the first step on any journey. Here are the redefined hubs of activity as I see them, covering things you can expect to see on this blog in 2015/2016.

  • Reimagining of the Blog and some kind of dedicated presence on Facebook/Twitter.
  • Changes to our home workspace to show you how this progresses.
  • Updates covering any learning techniques as I study them. If these are useful to coding then expect them to get ‘air time’. For starters, look out for:
  • Coverage on the following topics (not sure on how basic/advanced this will be – Most likely this will comprise of feelers into a topic unless something really takes my fancy):
    • Finishing off the Epic Quest project.
    • F# Forays.
    • Coverage of Python.
    • Some further raw JavaScript coverage including jQuery.
    • Hello World level Raspberry Pi.
    • Coding against the Leap Motion Controller API.
    • Xamarin Tools.
    • ASP.NET MVC.
    • My friend Pete has written a superb object-orientated take on JavaScript – Picket.JS.
    • Further C# Unity game development (I still see myself covering a larger scale project encompassing the use of Blender, Gimp and Unity to make a standalone title).
    • Posts covering C# and T-SQL (I’ve done some MySQL work recently so I will incorporate this into the proceedings if possible) as interesting topics catch my eye.
    • WPF (Rooting around in XAML) as time allows.

In and around this, I’m starting to sniff around the idea of completing full Microsoft Certifications in the next year to year and a half, so as I hop hurdles surrounding this I’ll give you all of the details.

This is not really designed to be a personal ransom note and I’m not going to outright hold myself to completing all of these things, but I do want to make a commitment to producing content as and when I can and keeping this fun (for myself and anyone reading along).

All that’s left to say is wish me luck and watch this space!

Work Practices and the Quest for (a little) Self Improvement

An interesting week at work has just been topped off (this was a great little delectable cherry as far as I’m concerned) by a Tech Day, providing developers with an opportunity to spend time reviewing content on a myriad of new technologies and coding methodologies. As we all know, the life of a developer can occasionally feel like sprinting, barefoot, over ground covered in broken glass, hands outstretched to grab the prize…the Holy Grail of knowledge…which is unfortunately dangling from a piece of string attached to a stealth bomber hauling ass up, up and away. Yeah, yeah; perhaps a little over the top, agreed, but the underlying feeling this particular vision instils has likely been experienced by every developer at one time or another: the feeling of utter helplessness in the face of technologies and learning curves far outpacing the average mortal. So, as an aside to coding, I’d like to briefly cover some material that I myself have found useful, in part at least, in pushing back the tide.

I managed to get a video featuring Scott Hanselman (Program Manager at Microsoft; details below), entitled “It’s not what you read, it’s what you ignore”, featured as part of our Tech Day, and it seemed to get a good reception. Having viewed this several months ago, and having had time to experiment with a few of the suggestions made in the presentation, I was confident it would push a few buttons with the other developers I work with. Overall, the general feeling on this seemed fairly good, and I’ve had some feedback to suggest it has, at the very least, got juices flowing regarding how we approach work practices in general. It’s from a few years ago but still feels relevant:

Scott Hanselman Presentation

Other links:

www.hanselman.com
www.hanselman.com/blog

Scott is a charismatic, funny and ultimately candid presenter who portrays his opinions with grace and decisiveness. He has produced a myriad of other talks on coding and technology (i.e. surrounding web development, ASP.NET and Azure), which is his area of expertise, but this window into his day-to-day thoughts and work management strategies could well be just as invaluable as any technical presentation.

Here are a few takeaway points from this content:

  • The Pomodoro Technique. A technique for time management/instigating short, focused work ‘bursts’.
  • Trello Boards. A scrum-like piece of software for workload management/to-do lists.
  • RescueTime. Software that monitors a user’s activity to show work trends over days, months and years.
  • Instapaper. A way to receive information, at a time convenient for you, in a clean, newspaper format.
  • Outlook tips and tricks for handling email.

First up is the Pomodoro Technique; so called due to its link with the ‘pomodoro’ (tomato) kitchen timer. The concept is deceptively simple but has, for me anyway, had noticeable effects. The idea is that you fire up a timer for 25 minutes and sit down to do a piece of focused, uninterrupted work during this time. At the end of the 25-minute period (a ‘Pomodoro’) a short 5-minute break is taken before starting a further Pomodoro. The cycle is then repeated for up to four Pomodoros before an extended break is taken (in the implementation of the technique I have been following). During each Pomodoro it’s necessary to note down how many distractions…………………………….So, I’ve been hunting through Steam just looking at the featured games this weekend…Damn it – You see what I mean!!!

Distraction!!!

Ahem…so, you log how many times you get distracted during the Pomodoro, covering both times you distract yourself and times others distract you, in the hope of limiting this as much as possible. As noted, it’s surprising just how difficult 25 minutes of focused work is; try it for yourself and see. This has, in no small part, contributed to the development of what Claire calls my ‘arse-face’: the face pulled when I don’t want to be distracted! Sorry Claire…kudos to you for having to look at my face for any length of time, especially during any kind of coding (hat-tip).

Here are some links that may prove useful (one for the Pomodoro Technique’s official site, another for an online timer, and an Amazon search for kitchen timers if you need the physical object!):

The Pomodoro Technique
Online Pomodoro Stopwatch
Amazon Search

Moving on, Trello Boards are a useful tool for general work management and for treating work entities that drop on your desk in a scrum-like fashion. It’s certainly worth a poke around; it can be found here:

Trello

In fact, outside the scope of ‘development’, Claire and I have found this a great way to visualise things we have to do for our wedding (yes, I have tried to ‘scrum’ our wedding; full-blown nerd alert!). Boards can be shared with other people, so contributions can be made by multiple team members (supporting notes, due dates, to-do lists and basic colour coding of board items).

Below is an example Trello Board outlining how it may look on any given day for me (well, with a few additions that probably won’t normally make an appearance but you get the picture):

Example Trello Board

I have a few explicit sections on my board:

  • Categories, which acts as a key (colour coded) for board items.
  • ‘Long Term/Massive Asteroids’, outlining those big, bad-boy hopes and dreams to achieve in the long term (currently on a personal development level).
  • A ‘Rule of 3’ section, where I like to list the three key things I would like to achieve on a given day. These can be physical pieces of work or more abstract concepts (such as understanding a particular coding concept, or doing some investigation or research).
  • A Today list, where I list the actual items of work I plan to complete on a given day (colour coded, with due dates and task lists where appropriate).
  • Once items are completed they get moved to the final ‘Done’ list for archiving later that day.

There are obviously a multitude of ways to leverage this organisational tool but this is just my personal preference.

RescueTime is a utility that I adopted right off the bat, after reviewing the content of the presentation, to monitor my own working habits. I’m not entirely sure the results it collates on a daily basis are 100% accurate, but as an indicative tool it’s great.

Here’s the link for anyone who wants to try it out:

RescueTime

My own standpoint is that these metrics are very useful for deciding whether you are spending your time in the right places for your role (i.e. are you spending too much time on distracting or unnecessary administration tasks instead of physical coding?). If you are looking for an indication of your effectiveness then this could be your first ‘finger in the wind’: a basis for reconsidering your general working habits, should you find yourself straying from the tasks you should be chewing into.

A friend at work contacted me today and made me aware of this alternative (which works fully offline):

ManicTime

Thanks to Mike for this. I haven’t had a chance to look into it yet, but I know he plans on checking it out, so I’ll grab a copy to see how the information provided stacks up against RescueTime.

A particular problem that I started to face (thankfully, quickly nipped in the bud), which I know is a commonplace office plague, is ‘craptonneofchrometabsitis’. I’m sure, if you’re not already this person, you’ve walked past someone with a gazillion Chrome tabs open, slowly squeezing the life out of whatever box is wheezing its way through this malady. Either that, or a billion favourites are currently sitting in your browser’s favourites list as ‘to read’ sites, if and when ‘time allows’. This is where Instapaper comes in.

Via a browser extension (shown below) you are able to mark a piece of web content for addition to your queue of Instapaper ‘to read’ articles. Instapaper is great for cutting out all of the fluff and rubbish associated with a site, so you are literally just left with the base content. The bit I like most, however, is the ability (for free) to configure a schedule to ‘post’ a digital newspaper containing all of the content you’ve marked as interesting in a given time frame. In my case, I get information pushed directly to my Kindle every Friday afternoon (if I’ve marked at least three pieces of content that week as of interest). Bloody brilliant…a great excuse to close tabs and destroy some unneeded favourites!

Instapaper Extension

You can have a little read up on Instapaper by following this link:

Instapaper

Here’s an example of how the web interface looks (with a few articles I read many moons ago). It’s themeable and contains great configuration options for pushing out content as and when to designated devices (or just reading the content there and then, minus adverts and other guff found on the related sites):

Instapaper Web Interface

Lastly, if you watch the presentation and read into the advice surrounding managing Outlook, I think you’ll find a few golden nuggets of sage advice. Some of these I’ve had trouble implementing, although I agree with them in principle and completely see the point; such as responding to email explicitly in the afternoons (and not even checking it in the mornings). Others have been a massive triumph, such as how to manage and split your Outlook Inbox. I now have folders containing emails from Management, CC-only emails and External emails, in addition to my physical Inbox (i.e. things intended for me specifically). All other junk has been moved out to a dumping-ground folder that simply isn’t my Inbox; this is the first time my actual Inbox has been empty in seven years! If anything, this at the bare minimum produces a feeling of liberation, so I’m counting it as a win!

Whilst this was fresh in my mind I wanted to blurt it out into a post, with the caveat being that I’ll try to get some damn code up on this blog again soon! Either way, I hope all or some of this turns out to be interesting and/or helpful.

Happy Easter! Eat chocolate, drink beer, etc. Thanks all and bye for now.