Top Ten Tips to Prevent Insanity (Software Psychosis)

Yoyo you wonderful bear followers, I hope today finds you well and, for those of you in the UK, I hope your snow-related torment hasn’t been too painful to endure and you’ve all stayed safe 🙂

In the shower today (apologies for the mental imagery!) I started to mull over what I would tell someone, off the cuff of course, if they asked me: ‘Give me your top ten tips for surviving the day-to-day stuff. Y’know, as quickfire as possible!’ Don’t ask me why this was relevant to my shower, but it’s fairly commonplace for me to get highly introspective standing under a stream of water; it’s one of my things, if you will.

So, in no particular order of importance, here we go. Some are geared towards software development and some are straight up general comments on how I try to hop, skip and jump from day to day.

  1. Take a deep breath and time-box your day, as best you can at least. Here I would recommend, outside of any other work management software you have to bump heads with, a simple Trello board to get yourself organised. Create cards and put a ‘due date’ on them. An unembellished setup of two columns called ‘TODO’ and ‘DONE’ to drop cards between will serve you just fine; give it a go (I’ve been using it pretty much daily for years now).
  2. Adopt the rule of three. Three things to accomplish for the day, week and year; although I have to admit I struggle with the year one, I’m still a lowly padawan most likely! Scott Hanselman discusses this concept in this excellent video.
  3. Since coming into contact with Git I have learned a couple of things that are having a profound effect on a) my sanity and b) the way I work: commit often, push often and, as far as possible, keep change sets small! I like unit tests too so, without getting into the politics of whether you should or shouldn’t adopt them, I will settle for a simple ‘try them and see how you get on with them’ at the very least. It’s a good place to leave it for now 😉
  4. Get comfortable with having patchy knowledge! It’s going to happen, no question. Sometimes just knowing that a ‘thing’ exists is enough to get yourself going in the right direction, or to nudge a colleague so they can find the right solution. You can always follow up and learn the ins and outs of something later. Don’t stress yourself out with the crazy notion of knowing the nitty-gritty of everything you come into contact with. That’s the problem with knowledge… the more you have, the more you realise there are massive expanses of information out there (at the end of one horizon is another; don’t sweat it)!
  5. Walk away, take a break, have a shower…do something else when you’re stuck. I’m a hardcore breaker of this rule and suffer for it!
  6. Listen to others when they tell you to stop, from time to time at least (outside observers often know best and will see the crazy-loon face you have adopted in a time of stress; my wife often braves this and politely says I need to stop)!
  7. Pomodoros are good!!! The basic setup is: a) pick a task, b) work at it for 25 minutes in a focused manner and finally c) take a 5-minute break. Wash, rinse and repeat (with a larger break after several ‘pomodoros’). For complex tasks where I need to perform focused bursts to produce ‘mini-sprints’ of work, this is an excellent way of working to adopt.
  8. I like to talk…try it more often! Instant chats are all good and well but if a message begins to stray into ‘non-trivial’ territory, for example several paragraphs of information that could be misinterpreted (or just take too long to actually key in!), just opt for walking around the office or picking up the phone. I like nattering and connecting; it’s a liberating feeling that will break down walls, especially if you’ve been cooped up for an extended period of time crunching a problem.
  9. Be careful with that lunch-break-skipping behaviour! It’s an easy habit to get into and almost everyone I know does it – I’ve recently been attempting to, at a minimum, always get in thirty minutes unless an apocalyptic development event is in progress. Drink water, eat food and read an article on something unrelated (and mix up your work environment from time to time)!
  10. One from my wife and an excellent piece of advice – do a ‘power dance’ (five or ten minutes is the recommended time I’m told). Harder if you work in an office, but see what you can get away with I guess (I take no responsibility for dance-related disciplinary events)!

If any of these are helpful to you then I’ll walk away from writing this a happy man (I mean, bear, ahem). If this is completely useless, then perhaps it’ll at least steer you in a direction where you find something that does work for you; in which case, bravo.

I hope you’ve enjoyed this little stream of consciousness and, until the next time, happy coding and honey scoffing!

Looking for a Lighthouse

Happy New Year all and I truly hope you all enjoyed a terrific festive period.

As a little weekend treat, I decided to pick myself up a copy of ‘net’ magazine, mainly for the included feature articles on which web development and design tools are ‘en-flique’ for 2018 (that one is for Claire; a trip to Urban Dictionary is in order). There are also some amazing-looking articles discussing things like colour palette choices, and tools that assist in creating them, which should prove handy as I look to change up the Frog & Pencil website a little while crafting a fully functional CMS this year; something that is well overdue.

As a quick aside, I’ve decided to check out Google’s own automated page analysis tool, Lighthouse. This can run performance and accessibility analysis against public or password-protected sites. Information for getting started can be found here.

I’m opting to run this within Chrome developer tools, but you can run it from the command line or as a Node module if you prefer (the latter allows you to hook it into a continuous integration setup, which could be incredibly useful).
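For anyone curious about the Node module route, here’s a rough sketch of what a programmatic run can look like, based on Lighthouse’s documented usage alongside the chrome-launcher package; the exact option names and result fields vary between Lighthouse versions, and the URL is just a stand-in:

```typescript
// A rough sketch of a programmatic Lighthouse run under Node.
// Assumes the 'lighthouse' and 'chrome-launcher' npm packages;
// exact options and result fields vary between versions.
import * as fs from 'fs';
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function audit(url: string): Promise<void> {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

  const options = {
    output: 'html' as const,
    onlyCategories: ['performance', 'accessibility', 'best-practices', 'seo'],
    port: chrome.port, // the debugging port of the Chrome instance above
  };

  const result = await lighthouse(url, options);

  // Save the full HTML report and log the performance score (0 to 1).
  fs.writeFileSync('lighthouse-report.html', result.report as string);
  console.log('Performance:', result.lhr.categories.performance.score);

  await chrome.kill();
}

// Stand-in URL; point this at the site you want to audit.
audit('https://www.example.com').catch(console.error);
```

From there, wiring this into a CI build is just a case of running the script and failing the build whenever a category score dips below a threshold of your choosing.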

Up to this point, I have been using YSlow as a web page dissection tool, but I’m happy to bust out an alternative to keep things interesting.

So, what kind of feedback does the tool provide and how does it present it? I’ll run this against the Frog & Pencil homepage and show you the results (no matter how bad they turn out, I’ll be honest!). When using Chrome you just need to inspect the Audits tab, within Chrome developer tools (accessed via F12 or Ctrl+Shift+I on Windows), to get up and running as follows:

[Image: Lighthouse via the Audits tab.]

On clicking ‘Perform an audit…’ you will be presented with options as below (I’ll leave them all checked for this particular test run):

[Image: Audit options for Lighthouse.]

The report is, on first inspection, very detailed and, as you can see, I have a fair bit of work to do (although I’m happy with the accessibility rating at least). The report is downloadable using the highlighted button:

[Image: Lighthouse report header.]

The tests performed also appear to be stricter than the YSlow V2 test, which is nice to see:

[Image: YSlow V2 test.]

There have been some surprising opportunities for improvement highlighted. I’ve long known that I should switch the entire site over to run on HTTPS, and when the site is overhauled I intend to make better use of bundling for static files and will consider using a CDN. I have plenty of work to do on image compression also.

Here are a few things that really caught my attention:

1) How poorly the site ran under simulated 3G speeds:

[Image: Simulated 3G speeds.]

2) The scale of the improvements still to be made by reducing render-blocking scripts/stylesheets (a boo-boo that I should really have covered already) and image management:

[Image: Performance improvement opportunities.]

3) The report highlighted that I was using libraries with known vulnerabilities and that I had left in code that was writing errors to the console (doh!):

[Image: Third-party libraries.]

This does bring into focus the core need to revisit the website this year and give it a thorough tune-up, as opportunities towards the end of last year were at a premium. All in all, if you’ve not used Lighthouse yet I would suggest giving it a look, especially as it takes seconds to run. I’ll be working my way through the highlighted areas in the report in the meantime!

All the best 😉

Where You At…

Hi there wonderful people! I hope everyone is keeping well and remembering to be awesome :o)

The great wave of life has risen up and crashed back down a fair few times in the last handful of months, which has, in turn, led to a serious drought in new content; apologies for this.

Being brutally honest, I’ve found myself a little bit zapped of energy and inspiration in general. We had a very long and protracted house move that involved a good number of twists and turns; mix that in with a busy period at work (and a stretch without good internet access at the new house, in the midst of all this) and I think I did approach, and am just getting out the other side of, something close to burnout; it happens to all of us at some point or another. It has meant that I’ve fallen into a little rut of ‘imposter syndrome’, and a dose of passion is required to climb out! With all of the change going on around me, nagging self-doubt as to my abilities has come into the fold and I’ve been scratching my head over how best to deal with it; nothing but the honest truth from me.

Therefore, I’m planning on taking some responsibility for getting back into the game and rolling some more content onto the board, reflecting a drive to inject some confidence and passion into proceedings once again. Thanks for sticking with this. It’s a key thing in the development business to keep reaching for new horizons (and to pick which horizons should be pursued and which should be ignored, as there are so many damn options), so getting the engine kick-started again, and carving out the time to keep it ticking over, is now a priority.

Content is on the way then…happy coding until the next time :o)

PERT Estimation 101

[Image: Bear with a to-do list.]

Task estimates: the very real enemy of the developer. The numbers you provide as an ‘estimate’ for a task or piece of project work (whether in hours, days, story points, etc.) are often not viewed as an approximation by all involved. This particular subject is covered rather marvellously by Robert C. Martin, aka Uncle Bob, in his book The Clean Coder: A Code of Conduct for Professional Programmers; a book that I implore every developer to read. To quote Robert C. Martin directly, as I believe this covers the underlying issue pretty well in a good number of cases:

The problem is that we view estimates in different ways. Business likes to view estimates as commitments. Developers like to view estimates as guesses. The difference is profound.

On a good number of occasions I have found myself taking a good hiding due to the estimates I have offered up on project work, normally where the estimate has been relayed to the customer as a concrete guarantee. In many cases, both parties should have shouldered more responsibility for this disparity: as the developer, I should have been more explicit about what my estimates actually meant and, in turn, those relaying information to the customer should probably have dug deeper into the heart of the matter, to extract further raw information as to the ‘likelihood’ that something could be achieved by a certain time.

Good dialogue and communication are key factors, of course, as is always making it clear whether you are providing an ‘estimate’ or making a ‘commitment’.

This is a subject that I always find myself circling back to and that I truly want to get better at, plain and simple. For that reason, this post will be geared towards looking at one potential strategy for improving the estimation process; the Program Evaluation and Review Technique (which is covered again pretty well in ‘The Clean Coder’).

What is PERT?

The Program Evaluation and Review Technique (PERT) encapsulates an analysis technique that is used for more accurately estimating large and complex projects. The basic idea is to reveal a distribution, or create a probabilistic structure, to reduce inaccuracies in estimates and attempt to account for uncertainty in a project’s schedule. It was created in 1957 for use by the U.S. Navy Special Projects Office to manage the Polaris nuclear submarine project.

In short, the process involves identifying the likely values for the minimum time and maximum time a task will take whilst pinpointing the ‘most likely’ estimate for a task’s completion, from observing a calculated distribution. Uncertainty is factored in using some standard deviation values.

How does it work?

The concept is as follows: first, when an estimate is required, you provide three numbers:

  • O: Optimistic Estimate (known as the ‘wow, how the hell did I do it that quick!’ estimate).
  • N: Nominal Estimate (the estimate with the greatest chance of being successful, or the correct estimate essentially).
  • P: Pessimistic Estimate (the doomsday estimate, where everything goes wrong short of the Earth’s tectonic plates opening up to consume you as you code the feature).

The first figure, ‘O’, involves thinking about the ‘pipe dream’ figure, i.e. if you could type the code at a thousand lines a minute, making zero errors, and everything plugged into place and worked the first time. If Chuck Norris was coding the feature, this would be his estimate (it should have less than a one percent chance of being the actual time taken in order for the calculation to be meaningful). Next up, come up with the figure for N, which is defined as the estimate with the highest ‘chance’ of being the correct one; on a distribution bar chart of estimates, this would be the highest bar. Lastly, define P, which should be presented as the ‘worst case’ scenario estimate, stopping short of accounting for absolute catastrophes (again, this should have less than a one percent chance of being the ‘correct’ estimate, when all is said and done).

You’ve got three numbers, hoorah! An expected time can be gleaned from these figures simply enough (with ET being the expected time; I’ve changed the names of the variables to make more sense to me, when compared to ‘The Clean Coder’ or the wiki, but feel free to go with what fits for you):

ET = (O + 4N + P) / 6

The wiki definitions for all of these numbers are good, so I’ve included them below:

[Image: Wiki definitions for the PERT calculation facets.]

The last part of this wonderful little puzzle is calculating the standard deviation (SD), which equates to getting your mitts on a value that indicates the degree of uncertainty associated with a given task:

SD = (P - O) / 6
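To make the arithmetic concrete, here’s a minimal sketch of both formulas in code (the function names are my own, purely for illustration):

```typescript
// Expected time: ET = (O + 4N + P) / 6
function expectedTime(o: number, n: number, p: number): number {
  return (o + 4 * n + p) / 6;
}

// Uncertainty: SD = (P - O) / 6
function standardDeviation(o: number, p: number): number {
  return (p - o) / 6;
}

// For example, a task with O = 3, N = 5 and P = 11 (days):
console.log(expectedTime(3, 5, 11).toFixed(1));   // 5.7
console.log(standardDeviation(3, 11).toFixed(1)); // 1.3
```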

Let’s observe a detailed example to get to grips with this fully, concentrating on how some extra elements come into play for a series of tasks.

A detailed example

In this example, Ryan (a name that came totally off the top of my head; milliseconds of thought went into that!) has a total of five items that he needs to estimate, which form part of a full project. Here are the specific tasks and Ryan’s initial estimates (expressed in days), before applying any PERT calculations:

SIDE NOTE: Breaking a large project or piece of work into smaller tasks can often be a good strategy for mitigating risk up front and weeding out bugbears ahead of time.

| Task | Estimate (days) |
| --- | --- |
| Write stored procedures for the customer export page | 3 |
| Write Web API components for the customer export page | 2 |
| Build the business logic layer for the customer export page | 5 |
| Build and style the customer export page (UI) | 3 |
| Assist Hayley with automated tests | 1 |

That comes to a grand total of 14 days.

Ryan is now asked to apply the PERT approach to his estimates, and the mental cogs start to turn. Here’s a run-down of Ryan’s thoughts on each part of the project, in order:

  1. Stored Procedures: “Ok, I think I am fairly likely to complete this within 3 days, but there are quite a few elements of Dave’s existing helper stored procedures that I don’t understand. So, I’ll actually have to factor that in. Oh, and I have to do a data conversion!”
  2. Web API Components: “I’ve written most of the Web API components up to this point, so I’m well versed in this area at least and am confident with this figure. Even if everything went wrong it would only stretch to 3 days.”
  3. Business Logic Layer: “Jane wrote most of this layer and I’m not that well-versed in how it hangs together. The domain behind this is actually quite new to me. This could take significantly more time…”
  4. Building the page and styling: “The specification looks good and I’ve done a few similar pages in the past. It could take slightly longer if things did go wrong, as there are a few UI bells and whistles, including effects, that are required…”
  5. Assisting Hayley: “I was just kind of time boxing this in my mind, it’s almost certain she will need support beyond this, however – it really is an unknown!”

After a bit of head-scratching and discussion with other team members, the figures below come out of the boiling pot. PERT has been applied, times are expressed in days and abbreviations are used for the sake of space. Values are rounded to one decimal place:

| Task | O | N | P | ET | SD |
| --- | --- | --- | --- | --- | --- |
| Write stored procedures for the customer export page | 3 | 5 | 11 | 5.7 | 1.3 |
| Write Web API components for the customer export page | 1 | 2 | 3 | 2.0 | 0.3 |
| Build the business logic layer for the customer export page | 2 | 6 | 13 | 6.5 | 1.8 |
| Build and style the customer export page (UI) | 1 | 4 | 9 | 4.3 | 1.3 |
| Assist Hayley with automated tests | 1 | 3 | 7 | 3.3 | 1.0 |

ET and SD values are calculated using the formulas previously discussed. Firstly, summing the ET values, we can say that Ryan will ‘likely’ be finished with these tasks in 21.8 days; let’s call it 22 days. To get a feel for the ‘uncertainty’ across these tasks, we take the square root of the sum of the squares of the SD values (a bit of a mouthful), as shown here:

√(1.3² + 0.3² + 1.8² + 1.3² + 1²) =
√(1.69 + 0.09 + 3.24 + 1.69 + 1) =
√7.71 ≈ 2.8 days

We can surmise from this that there could be 2.8 days, so let’s say 3 days, of uncertainty to factor in; therefore Ryan could well take somewhere in the region of 25 days to complete the series of tasks, or possibly even 28 days (if we allow for twice the calculated SD across the sequence). However, we know from this that anything over these values is an unlikely outcome. We are a far cry from the original estimate of 14 days to cover all of the tasks.
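As a final sanity check, here’s a small sketch that reproduces the figures above for the whole series of tasks (task names abbreviated; the code structure is mine for illustration rather than anything prescribed by PERT):

```typescript
interface Task { name: string; o: number; n: number; p: number; }

// Ryan's estimates from the table above (days).
const tasks: Task[] = [
  { name: 'Stored procedures',    o: 3, n: 5, p: 11 },
  { name: 'Web API components',   o: 1, n: 2, p: 3 },
  { name: 'Business logic layer', o: 2, n: 6, p: 13 },
  { name: 'Page build/styling',   o: 1, n: 4, p: 9 },
  { name: 'Assisting Hayley',     o: 1, n: 3, p: 7 },
];

// Total expected time: the sum of the per-task ET values.
const totalEt = tasks.reduce((sum, t) => sum + (t.o + 4 * t.n + t.p) / 6, 0);

// Combined uncertainty: the square root of the sum of the squares
// of the per-task SD values.
const totalSd = Math.sqrt(
  tasks.reduce((sum, t) => sum + ((t.p - t.o) / 6) ** 2, 0)
);

console.log(totalEt.toFixed(1)); // 21.8 days
console.log(totalSd.toFixed(1)); // 2.8 days
```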

Summary

The general premise is to add some pragmatism and realism to the whole process of estimation. I have, on many occasions, estimated too optimistically and then struggled to hit the mark; this is just one technique for attempting to mitigate that risk by being more methodical in the estimation process.

I hope you’ve enjoyed this primer and happy coding, estimating, etc. Until the next time!