AWS Scheduled Lambda – Starting & Stopping EC2 Instances

I’ve been meaning to give this a go for a month or two now and, at last, I finally have the spare hours needed to actually bring it all together.

The outcome I have in my head – which is to replace a process running as a Task Scheduler job on an EC2 instance where I work – is to write a scheduled Lambda (triggered by a CloudWatch Events rule with a ‘cron’ expression in tow) that can start and stop EC2 instances, completely removing the need for the instance this process currently runs on. EC2 instances will be targeted by the presence of a specific tag. For something that kicks in once a day, a Lambda, where you pay per execution, is much more in line with what we want than paying for an EC2 instance to be permanently spun up to service this kind of request.

This was also a nice opportunity to play around a bit more within my personal AWS ‘space’, a nice bonus as I’ve not done a hell of a lot with it of late.

.NET Core SDK

I decided to get this separately as it could come in handy for general .NET Core development. I did originally think this was tied to the ability to use the .NET Core Lambda templates within Visual Studio, although I don’t actually believe this is the case (the AWS Toolkit for Visual Studio 2017 is the component that governs this).

Either way, the SDK can be found here.

AWS Toolkit for Visual Studio 2017

I’ve been bringing my poor, ageing laptop at home up to date. I’ve installed Visual Studio Community Edition 2017 and will look to get it ‘Lambda creation’ ready in short order. To get all of the lovely, sugar-coated AWS support within Visual Studio (including an easy method of publishing Lambdas) I’m going to grab the AWS Toolkit for Visual Studio 2017. Navigate to Tools > Extensions and Updates > Online and search for ‘AWS’; this should bring back the AWS Toolkit for Visual Studio 2017. Go ahead and install it if you don’t already have it (closing Visual Studio to trigger the installation):

AWS Toolkit for Visual Studio 2017.

AWS Toolkit User Configuration

AWS Toolkit Credential Setup.

Before creating the Lambda, I’ve followed the provided configuration advice to go and create a new user via the IAM console. I’ll detail the whole process I followed just for clarity.

Start by accessing the AWS Console and open the IAM Management Console > Users section and click ‘Add user’. The user we are going to create needs ‘programmatic’ access, so be sure to check the correct box, also giving your user an appropriate name in the process:

After hitting ‘Next’, we need to assign an existing group with appropriate permissions or, as I am going to do, create a new group using the ‘Create group’ button. A modal popup will launch; here the ‘Group name’ can be added, along with an opportunity to link the group to an existing policy (or to a brand new one). I’m keeping this simple and, as outlined in the guidance, assigning the ‘AdministratorAccess’ policy. Click ‘Create group’ and then ‘Next: Review’ to proceed.

Bash the ‘Create user’ button and you should be golden! Make sure to hit the ‘Download .csv’ button on the subsequent screen to get credentials at the ready.

Visual Studio AWS Toolkit Setup Screen

On the Visual Studio ‘Getting Started with the AWS Toolkit for Visual Studio’ screen, I opted to download the CSV for my ‘lew-admin-programmatic-user’ and use the ‘Import from a csv file…’ button. I left the profile name as ‘default’ for now. After selecting the relevant CSV credential file, hit ‘Save & Close’ to continue.

To cement my place as a ‘completionist geek’ I also updated Visual Studio at this point as I was a touch behind, so follow suit if you want to.

A little tip – If you’ve already closed the AWS Toolkit ‘setup’ screen a ‘Profile’ can be configured via the AWS Explorer window. This can be accessed within Visual Studio via View > AWS Explorer:
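
As an aside, the toolkit can keep profiles in the encrypted SDK Credential Store or in the shared AWS credentials file. If you go down the latter route, I believe the file looks along these lines (the keys below are the stock placeholder examples from the AWS documentation, not real credentials):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFIEMI/K7MDENG/bPxRfiCYEXAMPLEKEY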

New AWS Profile.

Creating the Lambda functions and supporting project

I had to close and reopen Visual Studio at this point to get the .NET Core Lambda templates to do their magic trick and appear. Navigate to File > New Project > Visual C# > AWS Lambda and you should be presented with an ‘AWS Lambda Project (.NET Core)’ option. I’m going to create a project called ‘LGAws.StartInstances’, wrapping everything in a solution for good measure. Once the solution is loaded I then opted to create a second Lambda project called ‘LGAws.StopInstances’. In both cases, I used the ‘Empty function’ blueprint as I want to roll with this fully from scratch.

For the purposes of keeping a clean abstraction between the Lambda functions and the logic behind them, I have also created a separate .NET Core project called ‘LGAws.Operations’. This is a helper library that acts as a repository for the logic calling the AWS EC2 SDK (which we’ll get to in a bit). All projects are then modified to target .NET Core 2.0 (right-click each project and select ‘Properties’).

We’re on to actually writing the code then, which I’ll detail as best I can as we go (providing full samples to boot so you can follow along with every decision made).

The code

Let’s start with inspecting the solution:

Solution Configuration.

The LGAws.Operations project represents, as previously discussed, a supporting library which avoids the need to embed all of the logic within the Lambda functions themselves. Don’t treat this as a fully-fledged, complete solution or an absolute authority on how to structure this; I just thought a little separation of concerns wouldn’t go amiss here. Apart from the code that actually calls the AWS EC2 SDK, nothing else you see here is technically required to get going with your own version of this.

First up, the extensions folder is a nice little haven for a couple of small extension classes called ExceptionExtensions and MessagingExtensions. Nothing magical here, just types that provide some nicely formatted output for exceptions and other messaging. The content is as follows:

using System;
using System.Text;

namespace LGAws.Operations.Extensions
{
    /// <summary>
    /// Public static class holding exception type extension methods.
    /// </summary>
    public static class ExceptionExtensions
    {
        #region Extension Methods

        /// <summary>
        /// Public static exception extension designed to produce a formatted string
        /// from the targeted exception.
        /// </summary>
        /// <param name="exception">The exception to process.</param>
        /// <param name="includeStack">A boolean that denotes if we should include stack trace information in the returned string.</param>
        /// <returns>A formatted exception string based on the supplied parameters.</returns>
        public static string ToFriendlyExceptionString(this Exception exception, bool includeStack = true)
        {
            StringBuilder exceptionStringBuilder = new StringBuilder();

            if (exception != null)
            {
                // A valid exception is in scope - append messages from this exception and any inner exception (if present)
                exceptionStringBuilder.AppendLine($"The following exception has occurred: { exception.Message }");
                exceptionStringBuilder.AppendLine(exception.InnerException != null
                    ? $"An inner exception was detected as follows: { exception.InnerException.Message }" : "No inner exception was detected.");

                // Include stack information as specified by the caller
                if (includeStack && !string.IsNullOrWhiteSpace(exception.StackTrace))
                {
                    exceptionStringBuilder.AppendLine($"Stack trace: { exception.StackTrace }");
                }
            }

            return exceptionStringBuilder.ToString();
        }

        #endregion Extension Methods
    }
}
using System.Net;
using System.Runtime.CompilerServices;

namespace LGAws.Operations.Extensions
{
    /// <summary>
    /// Public static class holding 'messaging' type extension methods.
    /// </summary>
    public static class MessagingExtensions
    {
        #region Extension Methods

        /// <summary>
        /// Public static 'http status code' extension designed to produce a formatted string
        /// from the targeted httpstatuscode.
        /// </summary>
        /// <param name="statusCode">The http status code to inspect and provide a formatted string for.</param>
        /// <param name="methodName">The calling method's name (when called via async you'll get 'MoveNext', based on async state machine antics).</param>
        /// <returns>A formatted string for reporting, based on the supplied http status code and method name parameters.</returns>
        public static string GetStatusMessageFromHttpStatusCode(this HttpStatusCode statusCode, [CallerMemberName] string methodName = "") =>
            statusCode == HttpStatusCode.OK
                ? $"The { methodName } method returned 'OK' - the operation completed successfully."
                : $"The { methodName } method returned an HTTP Status Code of { (int)statusCode } ({ statusCode }). Please check that the operation completed as expected.";

        #endregion Extension Methods
    }
}

Within the Models folder, I’ve created a basic object hierarchy to encapsulate the idea of different AWS operations, such as describing and manipulating EC2 instances. The BaseOperationModel is the top-level base class that contains a single string property called OperationReport; the idea here is that all AWS operations will support a ‘report’ that details how the operation went. I then have two derived classes in the mix, named DescribeEC2Operation and ManipulateEC2Operation (the ‘manipulate’ class itself is just an empty stub, but acts as a ‘marker’ object to make the return value and the operation being performed easily identifiable and unique in future). I utilise these types as return values when triggering logic to obtain instance ids (by a specific tag) and physically starting and stopping EC2 instances. These classes are defined as follows:

namespace LGAws.Operations.Models
{
    /// <summary>
    /// Base class model for AWS operations.
    /// </summary>
    public abstract class BaseOperationModel
    {
        #region Public Properties

        /// <summary>
        /// All AWS operations surface a string to detail
        /// a 'report' on the operation.
        /// </summary>
        public string OperationReport { get; set; }

        #endregion Public Properties
    }
}
using System.Collections.Generic;

namespace LGAws.Operations.Models
{
    /// <summary>
    /// Model that represents 'describe' EC2 operations.
    /// </summary>
    public class DescribeEC2Operation : BaseOperationModel
    {
        #region Public Properties

        /// <summary>
        /// Represents the obtained instance ids.
        /// </summary>
        public List<string> InstanceIds { get; set; } = new List<string>();

        #endregion Public Properties
    }
}
namespace LGAws.Operations.Models
{
    /// <summary>
    /// Model that represents 'manipulate' EC2 operations (such as 
    /// starting and stopping instances).
    /// </summary>
    public class ManipulateEC2Operation : BaseOperationModel
    {
        // Further implementation details for a ManipulateEC2Operation to be added here as and when needed
    }
}

There is also a static utility class holding some constant strings used throughout the library:

namespace LGAws.Operations.Shared
{
    /// <summary>
    /// Public static helper class that holds constants used
    /// for all AWS-based operations.
    /// </summary>
    public static class Constants
    {
        #region Constant Definitions

        /// <summary>
        /// Represents a stock message for when a response is null.
        /// </summary>
        public const string NULL_RESPONSE_MESSAGE = "The returned response was null. Please investigate the cause and/or try again.";

        /// <summary>
        /// Represents the stock EC2 auto start 'tag'.
        /// </summary>
        public const string AUTO_START_TAG = "auto-start";

        /// <summary>
        /// Represents the stock EC2 auto stop 'tag'.
        /// </summary>
        public const string AUTO_STOP_TAG = "auto-stop";

        #endregion Constant Definitions
    }
}

Lastly, the EC2OperationsHelper class is the core utility wrapper that encapsulates the code to obtain instance ids, by tag, and utilise those instance ids to start and stop the relevant instances (using the model classes and extensions previously observed). In order to actually use the relevant AWS EC2 APIs you’ll need to right-click this project (if you’re following along) and select ‘Manage NuGet Packages…’. Then add the AWSSDK.EC2 package to begin using the AmazonEC2Client type – you’ll be looking for the following after installing the package:

AWSSDK.EC2 NuGet Package.

The AmazonEC2Client type is the gateway to the underlying methods we require to obtain EC2 instance ids by tag and subsequently start and stop those instances. This is done via the DescribeInstancesRequest/DescribeInstancesResponse, StartInstancesRequest/StartInstancesResponse and StopInstancesRequest/StopInstancesResponse constructs. You’ll notice that the AmazonEC2Client type implements IDisposable so, as is good practice with any type implementing this particular interface, I have used the good old using statement to ensure everything is mopped up after use. A DescribeInstancesRequest can accept a List of type ‘Filter’, which is our way of searching for instances by tag name. This particular implementation does not concern itself with the value behind the tag, but there are ways to factor this in if required (see the aside after the listing). Lastly, the AmazonEC2Client uses its parameterless constructor, which essentially means AWS credentials will be inferred; we’ll see this all come together when we ‘Publish’ the Lambda to AWS (the role specified at that point determines what the Lambda will be able to access and what credentials it ultimately runs under). See below for the entire code listing for this class:

using Amazon.EC2;
using Amazon.EC2.Model;
using LGAws.Operations.Extensions;
using LGAws.Operations.Models;
using LGAws.Operations.Shared;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace LGAws.Operations.EC2
{
    /// <summary>
    /// Helper class that represents operations that can be triggered
    /// against EC2 instances (such as starting/stopping instances).
    /// </summary>
    public class EC2OperationsHelper
    {
        #region EC2 Operation Methods

        /// <summary>
        /// Method that returns a custom DescribeEC2Operation object that holds details
        /// on EC2 instances discovered by the tag supplied.
        /// </summary>
        /// <param name="tag">Specifies the tag 'key' to identify targeted EC2 instances by.</param>
        /// <returns>A Task containing a custom DescribeEC2Operation object (containing discovered instance ids).</returns>
        public async Task<DescribeEC2Operation> GetInstancesByTag(string tag)
        {
            DescribeEC2Operation describeOperation = new DescribeEC2Operation();

            try
            {
                // Establish an AmazonEC2Client and use the DescribeInstancesRequest/DescribeInstancesResponse objects to find instances by tag
                using (AmazonEC2Client ec2Client = new AmazonEC2Client())
                {
                    DescribeInstancesRequest describeRequest = new DescribeInstancesRequest
                    {
                        Filters = new List<Filter> { new Filter("tag-key", new List<string> { tag }) }
                    };

                    DescribeInstancesResponse describeResponse = await ec2Client.DescribeInstancesAsync(describeRequest);

                    // The response stores instance details in a Reservation wrapper, so drill down as required to obtain the instance ids
                    if (describeResponse?.Reservations?.Count > 0)
                    {
                        describeResponse.Reservations.ForEach(reservation =>
                        {
                            if (reservation?.Instances?.Count > 0)
                            {
                                reservation.Instances.ForEach(instance =>
                                {
                                    // Add discovered instance ids to the describeOperation helper object
                                    describeOperation.InstanceIds.Add(instance.InstanceId);
                                });
                            }
                        });
                    }

                    // Set the OperationReport property for logging purposes (to be handled by the caller) - details how this operation went
                    describeOperation.OperationReport = describeResponse != null
                        ? describeResponse.HttpStatusCode.GetStatusMessageFromHttpStatusCode()
                        : Constants.NULL_RESPONSE_MESSAGE;
                }
            }
            catch (Exception ex)
            {
                // Get a 'friendly', formatted version of the exception on error (storing it against the OperationReport property on the returned object)
                describeOperation.OperationReport = ex.ToFriendlyExceptionString();
            }

            return describeOperation;
        }

        /// <summary>
        /// Method that returns a custom ManipulateEC2Operation object that holds details
        /// on the attempted operation to 'start' EC2 instances.
        /// </summary>
        /// <param name="instanceIds">The list of EC2 instance ids to start.</param>
        /// <returns>A Task containing a custom ManipulateEC2Operation object (containing details on the start operation).</returns>
        public async Task<ManipulateEC2Operation> StartEC2InstancesByInstanceIds(List<string> instanceIds)
        {
            ManipulateEC2Operation changeOperation = new ManipulateEC2Operation();

            try
            {
                // Establish an AmazonEC2Client and use the StartInstancesRequest/StartInstancesResponse objects to attempt to start the instances passed in (by id)
                using (AmazonEC2Client ec2Client = new AmazonEC2Client())
                {
                    StartInstancesRequest startRequest = new StartInstancesRequest(instanceIds);

                    StartInstancesResponse startResponse = await ec2Client.StartInstancesAsync(startRequest);

                    // Set the OperationReport property for logging purposes (to be handled by the caller) - details how this operation went
                    changeOperation.OperationReport = startResponse != null
                        ? startResponse.HttpStatusCode.GetStatusMessageFromHttpStatusCode()
                        : Constants.NULL_RESPONSE_MESSAGE;
                }
            }
            catch (Exception ex)
            {
                // Get a 'friendly', formatted version of the exception on error (storing it against the OperationReport property on the returned object)
                changeOperation.OperationReport = ex.ToFriendlyExceptionString();
            }

            return changeOperation;
        }

        /// <summary>
        /// Method that returns a custom ManipulateEC2Operation object that holds details
        /// on the attempted operation to 'stop' EC2 instances.
        /// </summary>
        /// <param name="instanceIds">The list of EC2 instance ids to stop.</param>
        /// <returns>A Task containing a custom ManipulateEC2Operation object (containing details on the stop operation).</returns>
        public async Task<ManipulateEC2Operation> StopEC2InstancesByInstanceIds(List<string> instanceIds)
        {
            ManipulateEC2Operation changeOperation = new ManipulateEC2Operation();

            try
            {
                // Establish an AmazonEC2Client and use the StopInstancesRequest/StopInstancesResponse objects to attempt to stop the instances passed in (by id)
                using (AmazonEC2Client ec2Client = new AmazonEC2Client())
                {
                    StopInstancesRequest stopRequest = new StopInstancesRequest(instanceIds);

                    StopInstancesResponse stopResponse = await ec2Client.StopInstancesAsync(stopRequest);

                    // Set the OperationReport property for logging purposes (to be handled by the caller) - details how this operation went
                    changeOperation.OperationReport = stopResponse != null
                        ? stopResponse.HttpStatusCode.GetStatusMessageFromHttpStatusCode()
                        : Constants.NULL_RESPONSE_MESSAGE;
                }
            }
            catch (Exception ex)
            {
                // Get a 'friendly', formatted version of the exception on error (storing it against the OperationReport property on the returned object)
                changeOperation.OperationReport = ex.ToFriendlyExceptionString();
            }

            return changeOperation;
        }

        #endregion EC2 Operation Methods
    }
}
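
As the aside promised above: if you did want to match on a tag’s value as well as its key, the describe call also supports ‘tag:<key>’ style filters. A minimal sketch follows (the ‘environment’ tag and ‘production’ value are hypothetical, not part of this project):

// Sketch only - matches instances where the hypothetical 'environment' tag has the value 'production'
DescribeInstancesRequest describeRequest = new DescribeInstancesRequest
{
    Filters = new List<Filter> { new Filter("tag:environment", new List<string> { "production" }) }
};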

The documentation surrounding the operations the AWS SDK for .NET supports is fairly detailed and well laid out; it can be found here for anyone interested in digging around further.

So, we move on lastly to the key component of this entire configuration: the physical Lambda functions. I’ve created two distinct functions, as discussed previously – one to cover starting EC2 instances and another to kick off the stopping operation. Lambda functions are relatively simple in their setup, with the stock template providing a class called Function containing a single method called FunctionHandler. I’ve amended the signature of this method in my sample so that it doesn’t return a value (the template returns a string as-is). The signature is also geared to accept an input string argument, along with an ILambdaContext implementing object. I’m not interested in accepting input at the moment, so I’ve cut the input arguments down and just left the ILambdaContext implementing object in scope; this is a cool little object that exposes metadata about the triggered Lambda function (i.e. the function name, allocated memory limits, etc.).

The main idea I’ve gone with here, as also discussed previously, is abstracting all of the core logic to the external ‘business logic’ library. The Lambda simply creates an instance of the EC2OperationsHelper class and then uses that as the workhorse, meaning our function definition is as simple as possible. The only other additional statements in play undertake logging, the details of which can be seen in AWS CloudWatch, which we’ll review later.

using Amazon.Lambda.Core;
using LGAws.Operations.EC2;
using LGAws.Operations.Models;
using LGAws.Operations.Shared;
using System.Threading.Tasks;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace LGAws.StartInstances
{
    /// <summary>
    /// Holds logic for the Start EC2 Instance Lambda function.
    /// </summary>
    public class Function
    {
        #region Function Handler Definition

        /// <summary>
        /// Start EC2 Instance Lambda function definition.
        /// </summary>
        /// <param name="context">An implementation of the ILambdaContext interface (for extracting information about the Lambda).</param>
        /// <returns>A task wrapping this operation.</returns>
        public async Task FunctionHandler(ILambdaContext context)
        {
            LambdaLogger.Log($"Executing the { context.FunctionName } function with a { context.MemoryLimitInMB }MB limit.");

            EC2OperationsHelper helper = new EC2OperationsHelper();

            // First, obtain instance ids to start
            DescribeEC2Operation describeOperation = await helper.GetInstancesByTag(Constants.AUTO_START_TAG);
            LambdaLogger.Log(describeOperation.OperationReport);

            // start instances based on the returned ids
            ManipulateEC2Operation changeOperation = await helper.StartEC2InstancesByInstanceIds(describeOperation.InstanceIds);
            LambdaLogger.Log(changeOperation.OperationReport);

            LambdaLogger.Log($"Finished executing the { context.FunctionName } function.");
        }

        #endregion Function Handler Definition
    }
}
using Amazon.Lambda.Core;
using LGAws.Operations.EC2;
using LGAws.Operations.Models;
using LGAws.Operations.Shared;
using System.Threading.Tasks;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace LGAws.StopInstances
{
    /// <summary>
    /// Holds logic for the Stop EC2 Instance Lambda function.
    /// </summary>
    public class Function
    {
        #region Function Handler Definition

        /// <summary>
        /// Stop EC2 Instance Lambda function definition.
        /// </summary>
        /// <param name="context">An implementation of the ILambdaContext interface (for extracting information about the Lambda).</param>
        /// <returns>A task wrapping this operation.</returns>
        public async Task FunctionHandler(ILambdaContext context)
        {
            LambdaLogger.Log($"Executing the { context.FunctionName } function with a { context.MemoryLimitInMB }MB limit.");

            EC2OperationsHelper helper = new EC2OperationsHelper();

            // First, obtain instance ids to stop
            DescribeEC2Operation describeOperation = await helper.GetInstancesByTag(Constants.AUTO_STOP_TAG);
            LambdaLogger.Log(describeOperation.OperationReport);

            // Stop instances based on the returned ids
            ManipulateEC2Operation changeOperation = await helper.StopEC2InstancesByInstanceIds(describeOperation.InstanceIds);
            LambdaLogger.Log(changeOperation.OperationReport);

            LambdaLogger.Log($"Finished executing the { context.FunctionName } function.");
        }

        #endregion Function Handler Definition
    }
}

We’ve now reached the stage of finally getting our Lambdas published to AWS, which we’ll review now.

Upload of the Lambda function to AWS

The AWS Toolkit for Visual Studio provides a publishing wizard, but Lambda functions can be zipped and then uploaded via the AWS Console > Lambda admin screen if you prefer. Let’s review the upload process for one of our two Lambda functions, a process I then repeated for the other function too (behind the scenes, for brevity).

I want my Lambdas to be able to run wild with EC2 instances, so I’ve again popped on over to the AWS Console > IAM > Roles > ‘Create role’ to generate the ‘lg-ec2-full-access-role’. You’ll want to select ‘Lambda’ as the AWS service type when creating the role; after creation it should look like the screenshots below. I also attached the ‘AmazonEC2FullAccess’ and ‘AWSLambdaFullAccess’ policies to the role:

EC2 Full Access Role Summary.

EC2 Full Access Role Attached Policies.
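
For reference, choosing ‘Lambda’ as the service type gives the role a trust relationship along the following lines; this is the standard policy that allows the Lambda service to assume the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}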

We’re going to need this role in the next step.

To start with the publishing process, right-click the Lambda function project in the Solution Explorer within Visual Studio and select the ‘Publish to AWS Lambda…’ context menu item. You should be presented with a modal popup that looks similar to the image listed below. I’ve modified a few of the options at this point, which you may need to also do:

  • The functions I have created are using .NET Core version 2.0, so I’ve adjusted the ‘Language Runtime’ to ‘.NET Core v2.0’.
  • I’ve listed my function name as ‘LGAwsStartInstances’, dropping the period character, which isn’t valid here.
  • For convenience, I’ve set the ‘Save settings to aws-lambda-tools-defaults.json for future deployments’ flag (a sketch of this file follows the list below).
  • All other options should be valid at this point. I’ll be using the ‘default’ profile, in the ‘EU (Ireland)’ region (I could have switched to ‘EU (London)’ I guess, but I invariably remember too late that this exists!), adjust your region as needed.
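
For reference, the aws-lambda-tools-defaults.json file mentioned above captures these deployment choices. A rough sketch of what mine ends up containing once publishing completes (values mirror the choices above; the account id in the role ARN is a placeholder, and exact keys may vary by toolkit version):

{
  "profile"              : "default",
  "region"               : "eu-west-1",
  "configuration"        : "Release",
  "framework"            : "netcoreapp2.0",
  "function-runtime"     : "dotnetcore2.0",
  "function-name"        : "LGAwsStartInstances",
  "function-handler"     : "LGAws.StartInstances::LGAws.StartInstances.Function::FunctionHandler",
  "function-memory-size" : 256,
  "function-timeout"     : 30,
  "function-role"        : "arn:aws:iam::000000000000:role/lg-ec2-full-access-role"
}
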
Upload Lambda Function.

Click ‘Next’ to proceed, where you’ll be presented with one last modal screen, which allows you to set further configuration details, such as memory execution limits and timeouts for your particular function. The key thing on this particular screen, which we will need to address, is selecting a fitting value for the ‘Role Name’ dropdown:

Advanced Function Details.

Here, in my case at least, I ensure that the recently created ‘lg-ec2-full-access-role’ role is selected – be sure to select an appropriate value and then click ‘Upload’ to complete the process. At this point I performed the same steps for the other Lambda function project. With any luck, the upload will be error-free and, on completion, we’ll be able to go back to the AWS Console and create our test EC2 instance. You’ll also notice that Visual Studio loads a ‘test’ screen for you to trigger your function with (there is a settings flag on the upload progress modal that governs this). Lambdas are also testable within the AWS Console itself.

Creation of a test EC2, with tag, to turn on and off

We now need to actually create the targeted entity of our Lambda functions: an EC2 instance sporting the appropriate ‘tags’. We’re going to create a bare-bones EC2 from a standard Windows base AMI, although it really doesn’t matter what you opt to use, so fill your boots with whatever you want. The AMI I am using is eligible for free-tier usage, depending on the current state of your AWS account.

To begin, we need to run on over to the AWS Console > EC2 > Launch Instance option and pick an AMI. I’m opting to go with this:

Choose Base AMI.

After hitting ‘Select’ I go through the following motions to configure and launch the instance.

  1. Choose an Instance Type > Pick t2.micro.
  2. Configure Instance Details > Skip over this.
  3. Add Storage > Defaults are fine here also, skip over this.
  4. Add Tags > We’ll add three here. Add a ‘Name’, ‘auto-start’ and ‘auto-stop’ tag as shown in the screenshot below.
  5. Configure Security Group > Skip over this (in the real world, of course, you’d want some clearly defined Security Groups but for the purposes of testing our Lambda this is fine for now).
  6. Launch the instance! Create a new key pair if you need to (keeping the .pem file to one side, although we’ll be decommissioning this instance right after our testing anyway) or use an existing key pair.
Lambda EC2 Tag Setup.

Once launched feel free to stop the instance for now. We’ll be using a Lambda to spin it up very shortly!

Test Instance Ready.

CloudWatch Event Rule trigger

The whole concept behind what I’m looking for is to trigger a Lambda on a cron schedule. The method I’m going to use involves a CloudWatch Event ‘Rule’, which can be configured manually via the CloudWatch section of the AWS Console or, more conveniently, via the Lambda section instead. To complete the ‘scheduling’ setup on a Lambda function, go to the AWS Console > Lambda and, in the ‘Designer’ section’s ‘Add triggers’ sidebar, click ‘CloudWatch Events’. This adds a node that serves as a step to ‘feed’ the triggering of the Lambda:

CloudWatch Event Trigger Setup.

Scroll down to configure the CloudWatch Event further and, in the ‘Rule’ drop-down, select ‘Create a new rule’. You can then give the rule a name, a description and, most importantly (with the ‘Schedule expression’ radio option set), a cron schedule. The sample expression I’ve used here will trigger the Lambda every 10 minutes, Monday to Sunday (you can use the documentation to configure any schedule you like). I’ve used this particular format so I can easily switch it to run Monday to Friday instead, with one trigger per day being the end game I’m looking for. Click ‘Add’ to complete setting up the rule and then ‘Save’ in the top-right hand corner of the screen to finish up.
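
For reference, CloudWatch schedule expressions use a six-field cron syntax of the form cron(Minutes Hours Day-of-month Month Day-of-week Year). A sketch of the kind of expressions in play here (my exact rule aside), assuming that documented syntax:

cron(0/10 * ? * * *)    – fires every 10 minutes, every day
cron(0 7 ? * MON-FRI *) – fires once per day at 07:00 UTC, weekdays only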

Is it working?

At this moment in time our test EC2 instance is stopped, so the desired effect is for the CloudWatch Event to trigger, based on the configured rule, and run the ‘LGAwsStartInstances’ Lambda function – our EC2 should then be kicked into life! On the Lambda function page, the link to the rule can be clicked to see details of the schedule, as displayed below:

Start EC2 Rule.

CloudWatch Event Rule Schedule.

After waiting for the next ‘schedule slot’ to roll around, the ‘Logs’ menu option within CloudWatch can be accessed. A log group for our Lambda can be seen which, when drilled into, shows the logging statements produced by the ‘LGAwsStartInstances’ function; this ties directly to the use of the ‘LambdaLogger’ type in the sample code.

CloudWatch Logs.

Start Instance Lambda Logs Content.

After verifying the existence of log data reporting a successful operation, we can finally go over to the EC2 admin section of the AWS Console and witness the EC2 instance up and running:

EC2 Started.

After proving this operation works correctly, I opted to disable the event rule tied to this Lambda and created another rule, mirroring the setup process listed above, to prove that the ‘LGAwsStopInstances’ function triggers as expected:

Stop Instance Lambda Logs Content.

So, success then – happy days all around!

Asides and final thoughts

One really interesting thing to note with the sample code, which I didn’t end up changing just so I could bring it up as a discussion point, is that if an exception occurs within the ‘meat’ of the Lambda code the use of ‘[CallerMemberName]’ will not give you the results you may expect. During testing I triggered some test exceptions, with the aim of being sure that my logging code was registering the correct calling method name. I discovered, however, that the calling method name was getting logged as ‘MoveNext’ in all instances. After a few minutes of pondering, I realised that we were in the scope of asynchronous code, which actually explains everything. When using asynchronous methods everything is bundled into a ‘state machine’ construct, with an iterator controlling the flow of how we move through the code. This construct, behind the scenes, has a ‘MoveNext’ method where the code I’d created would now be housed; hence the little logging nuance. One to be aware of; more details are available here if you’re interested (this is true regardless of whether you use MethodBase.GetCurrentMethod().Name as a calling parameter or the [CallerMemberName] attribute).
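
To see the reflection side of this in isolation, here’s a tiny throwaway sketch (not from the project; it assumes C# 7.1+ for the async Main):

using System;
using System.Reflection;
using System.Threading.Tasks;

internal static class AsyncNamingDemo
{
    private static async Task Main()
    {
        ShowName();            // Prints "ShowName"
        await ShowNameAsync(); // Prints "MoveNext"
    }

    // A normal method reports its own name via runtime reflection
    private static void ShowName() =>
        Console.WriteLine(MethodBase.GetCurrentMethod().Name);

    // The compiler rewrites an async method's body into the generated state
    // machine's MoveNext method, so runtime reflection reports 'MoveNext' here
    private static async Task ShowNameAsync()
    {
        await Task.Delay(1);
        Console.WriteLine(MethodBase.GetCurrentMethod().Name);
    }
}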

There is more I plan to add to this; one example is assigning elastic IPs to the EC2 instances on startup. However, as a grassroots template this serves pretty well, and I hope it helps anyone else looking to do something similar. A pretty long post then, but one I’ve enjoyed knocking up! Until the next time, happy coding as always 🙂

Experimenting with Azure CDN

With the gradual piecing together of the Lego bricks forming the slow move of the Frog & Pencil website over to a more managed approach (the building of a custom CMS and an all-around better ASP.NET MVC architecture), I thought it would be interesting to document moving the Frog & Pencil images over to a CDN. I was inspired to give this a go after watching Scott Hanselman make the switch for his podcast site images and other Azure Friday videos, as documented here:

Scott Hanselman lifting and shifting images over to a CDN.
Azure CDN with Akamai.

It seemed like a relatively painless process and is a step in the right direction for our site as a whole; so, let’s give it a go!

NOTE: A short way into this post I realised that I was making a few missteps. This is cool, I think, as I would rather document the journey I took with the mistakes listed, to be honest – #KeepingItReal! However, for sanity (mine and yours) I’ll specify the ‘correct’ order of events that you should follow here that you can marry up with the ramblings below:

  1. Sign in to the Azure Portal.
  2. Create a storage container, if you don’t already have one.
  3. Download and utilise a storage explorer application (such as Azure Storage Explorer).
  4. Create a CDN Profile and CDN endpoint (that ties explicitly to your storage container, in this instance).
  5. Go to your DNS settings and generate a CNAME property, mapping a custom domain to your CDN if you wish to.
  6. Optionally, learn how to programmatically interact with your storage container.

Azure Portal – First Steps (documenting the journey)

First things first, we must hop on over to the Azure Portal. I searched the marketplace for ‘CDN’ and clicked create in the right-hand pane, as shown:

Creating a CDN.

The next phase involves configuring a CDN profile. The profile needs to be given a name and should be attached to an Azure Subscription. I’ve created a new Resource Group by specifying a name for it, but it is possible to select an existing one for use here. There are some guidelines surrounding Resource Groups, such as that items within a group should share the same lifecycle; more details can be found within this handy documentation article, read away!

The Azure CDN service is, of course, global but a Resource Group location must be set, which governs where resource metadata is ultimately stored. This could be an interesting facet to consider if there are particular compliance considerations regarding the storage of information and where it should be placed. I’m going with West Europe either way; a nice, easy choice this time around.

As for pricing, I have decided to head down the Akamai route, using the Standard Akamai pricing tier. I will have to see how this ultimately pans out cost wise over time, but it seems reasonable:

Azure CDN Provider Pricing.

At this point, we can explicitly create a CDN endpoint (where resources will be ultimately exposed). The endpoint has a suffix of ‘.azureedge.net’ and I’ve simply specified the first part of our domain, ‘frogandpencil’ as the prefix.

This is where I hit a bit of a revelation with the ‘Origin Type’ drop down. You can select from Storage, Cloud service, Web app or Custom origin (which is cool!), of which I want to use Storage. After selecting this I can pick an ‘Origin hostname’. The light bulb moment here, for me, is that I should have created a storage container first! I’d watched enough videos to have dodged this little problem, but I still managed to stumble…all part of the learning process 😉

So… Let’s Create a Storage Container

Back to the market place then. The obvious pick seems to be ‘Storage account – blob, file, table, queue’, so I’ve gone ahead and clicked create here:

Setup Azure Storage.

When creating the storage account there are a fair few options to consider, a good number of which read as if they will impact pricing. I had to use the documentation found here to make my choices. I settled on the setup described below (this is for images and, as the site isn’t yet using HTTPS, I’ve gone with the secure transfer feature disabled – one for review in the future):

As an overview, the guidance suggests the use of the ‘Resource manager’ type of ‘Deployment model’ for new applications. There doesn’t seem to be a penalty for using the ‘StorageV2’ ‘Account kind’, which extends the types that can be stored outside of just blob data, so that is what I am going for.

Performance-wise, the ‘Standard’ option seems acceptable for the moment and, for the kind of data I’ll be storing (images for now, and possibly other static content later down the line), I can opt out of any geo-redundant replication options. In the event of resource downtime I can easily switch to using resources local to the website. Plus, no data will really be lost; it’s all easily rebuilt and recoverable.

As for the ‘Access tier’, I’m heading down the ‘Hot’ route as images will be accessed quite frequently (we have the CDN to consider here so I might tinker later on down the line).

I then pick a Subscription, give the Resource Group a name and select my region of choice before continuing.
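
For anyone who prefers the command line, I believe the Azure CLI equivalent looks roughly like this (a sketch: the resource group name is a placeholder and ‘Standard_LRS’ reflects the locally-redundant, non-geo-replicated choice above):

az storage account create \
    --name frogandpencilstorage \
    --resource-group frog-and-pencil-rg \
    --location westeurope \
    --sku Standard_LRS \
    --kind StorageV2 \
    --access-tier Hot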

I then get a new blade on the dashboard (which took a minute to create) and, on accessing it, am presented with the following:

Storage Setup.

Managing the Storage Container

The first and perhaps most obvious choice for managing and actually getting some content up into the storage container is the Azure Storage Explorer, which I’ll be downloading and using.

After a painless install process, you should see the following, where you will be asked to connect to Azure Storage:

Connect to Azure Storage.

I simply used my Azure account sign-in details here. I did notice, however, that the Azure Portal exposes keys and connection strings under ‘Access Keys’ (within the storage container dashboard). I’m assuming this is for other kinds of access, including programmatic access, which I’ll give a go as part of this post (as a wee bonus).

I used the right-click context menu to create a new container called ‘images’ and then used the upload button to push up a test image:

Azure Storage Explorer Upload Image.

Again, against the container I used the right-click context menu to select ‘Set Public Access Level…’, which I’ve set as follows to allow public access to the blob data but not the container:

Container Public Access Setup.
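
As a side note, the same access level can be set programmatically via the storage SDK. A minimal sketch using the WindowsAzure.Storage package (which features in the bonus section later in this post):

using Microsoft.WindowsAzure.Storage.Blob;

public static class ContainerAccessHelper
{
    /// <summary>
    /// Sketch: sets 'Blob' level public access, which exposes individual blobs
    /// anonymously while keeping container enumeration private.
    /// </summary>
    public static void MakeBlobsPubliclyReadable(CloudBlobContainer container) =>
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });
}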

I now have a blob container with a single image in it and the appropriate access rights configured. The question is: can I access the image in its current state? We’re looking pretty good from what I can see.

Successful Access.

Adding a custom domain

Next up, I plan on adding a custom domain to the storage account. To do this, I access the ‘Custom domain’ option as shown here:

Register Custom Domain.

I followed option 1 as listed here and created a CNAME record mapping images.frogandpencil.com to frogandpencilstorage.blob.core.windows.net (I’m happy to wait for this to propagate).

Register images.frogandpencil.com.
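
In plain zone-file terms (exact syntax varies by DNS provider), the record amounts to this:

images.frogandpencil.com.    CNAME    frogandpencilstorage.blob.core.windows.net.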

Once the CNAME record is created you simply have to place your target URL in the text box provided and hit save.

New CNAME property.

Lastly, let’s take it for a spin and see whether we can access the image in the storage container via the custom URL…and voila:

Custom Domain Active.

Back to the CDN bit!

We’ve come full circle! With a storage container in place, I can now use it to feed a configured CDN. Consequently, I backtracked and followed the previously listed steps, being sure to point the ‘Origin hostname’ at the newly created storage container:

CDN Profile & Endpoint Configuration.

On clicking create it takes a short time for the CDN to be configured.

So, what do I do now?

Looking through the videos I made another discovery. This is where I want to adjust the previously created CNAME property (that I set up for the storage container) and hook it up to the CDN endpoint instead. The portal exposes custom domain mapping for a CDN much like it does for a storage container:

Change CNAME to map to CDN.

Portal CDN Custom Domain Mapping.

Again, I had to wait a short time for the CNAME property change to propagate but, after that, I was all set. I then spent a little time verifying that the CDN was up and running. There are quite a few optimisation options, including the ability to set a custom ‘Origin path’ (such as ‘images’), but I’m leaving these be for the time being.

The Bonus Section – Programmatically Add Items to Azure Storage

As promised, this next section discusses (in a very bare-bones fashion) what is required to write to an Azure storage container. I’ve created a stub Console Application to get up and running, and the process itself is simple (not considering errors, existence checks and threading, of course!).

You need to:

  1. Reference the WindowsAzure.Storage NuGet package.
  2. Add a reference to System.Configuration (if you want to put connection strings, folder paths and container names in configuration files and read them out).
  3. Then simply follow the code outlined below to get started.

In my test setup, the ‘SourceDirectory’ is looking at ‘C:\test-files\’ (contains just images) and the ‘TargetContainer’ is called ‘images’, as per my earlier configuration. The connection string can be obtained from the Azure Portal, under ‘Storage Account > Settings > Access Keys’.

Test Files ready for upload.

Storage Access Keys.

The App.config for the test application is structured like this, with the connection string set to the correct value as per the information found in the Azure Portal:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6.1" />
    </startup>
  <connectionStrings>
    <add name="FrogAndPencilStorageConnection" connectionString="[OBTAINED_FROM_THE_AZURE_PORTAL]" />
  </connectionStrings>
  <appSettings>
    <add key="SourceDirectory" value="C:\test-files\"/>
    <add key="TargetContainer" value="images"/>
  </appSettings>
</configuration>

Then, finally, the actual test code which…

  • Attempts to connect to the storage account by creating a CloudStorageAccount object, based on the connection string information supplied.
  • Then uses the CloudStorageAccount object to create a new CloudBlobContainer object (based on the container name stored in the configuration settings).
  • Finally, utilises this CloudBlobContainer, along with information about the files to process, to actually perform the upload.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.IO;
using System.Linq;

namespace WriteToAzureStorageTestApp
{
    /// <summary>
    /// Test application for writing to Azure Storage.
    /// Basic, test code only (throwaway code).
    /// </summary>
    internal class Program
    {
        #region Main (Entry Point) Method

        /// <summary>
        /// Main entry point method for this console application.
        /// </summary>
        /// <param name="args">Optional input arguments.</param>
        private static void Main(string[] args)
        {
            DemoWritingToAzureStorage();
        }

        #endregion Main (Entry Point) Method

        #region Private Static Methods

        /// <summary>
        /// Private static demo method illustrating how to upload to Azure Storage.
        /// </summary>
        private static void DemoWritingToAzureStorage()
        {
            // First use the FrogAndPencilStorageConnection connection string (for Azure Storage) to obtain a CloudStorageAccount, if possible
            CloudStorageAccount.TryParse(ConfigurationManager.ConnectionStrings["FrogAndPencilStorageConnection"].ConnectionString, out CloudStorageAccount storageAccount);
            if (storageAccount != null)
            {
                // We have a CloudStorageAccount...proceed to grab a CloudBlobContainer and attempt to upload any files found in the 'SourceDirectory' to Azure Storage
                Console.WriteLine("Obtaining CloudBlobContainer.");

                CloudBlobContainer container = GetCloudBlobContainer(storageAccount);

                Console.WriteLine("Container resolved.");

                Console.WriteLine("Obtaining files to process.");

                List<string> filesToProcess = Directory.GetFiles(ConfigurationManager.AppSettings["SourceDirectory"]).ToList();

                UploadFilesToStorage(container, filesToProcess);
            }

            Console.WriteLine("Processing complete. Press any key to exit...");
            Console.ReadLine();
        }

        /// <summary>
        /// Private static utility method that obtains a CloudBlobContainer
        /// using the container name stored in app settings.
        /// </summary>
        /// <param name="storageAccount">The cloud storage account to retrieve a container based on.</param>
        /// <returns>A fully instantiated CloudBlobContainer, based on the TargetContainer app setting.</returns>
        private static CloudBlobContainer GetCloudBlobContainer(CloudStorageAccount storageAccount)
        {
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            return blobClient.GetContainerReference(ConfigurationManager.AppSettings["TargetContainer"]);
        }

        /// <summary>
        /// Private static utility method that, using a CloudBlobContainer, uploads the
        /// files passed in to Azure Storage.
        /// </summary>
        /// <param name="container">A reference to the container to upload to.</param>
        /// <param name="filesToProcess">The files to upload to the container.</param>
        private static void UploadFilesToStorage(CloudBlobContainer container, List<string> filesToProcess)
        {
            // Process each file, uploading it to storage and deleting the local file reference as we go
            filesToProcess.ForEach(filePath =>
            {
                Console.WriteLine($"Processing and uploading file from path '{ filePath } (then deleting)'.");

                // Upload the file based on name (note - there is no existence check or guarantee of uniqueness - production code would need this)
                container.GetBlockBlobReference(Path.GetFileName(filePath)).UploadFromFile(filePath);

                RemoveFileFromLocalDirectory(filePath);
            });
        }

        /// <summary>
        /// Private static utility method for deleting a file.
        /// </summary>
        /// <param name="filePath">The file path (full) to delete based upon.</param>
        private static void RemoveFileFromLocalDirectory(string filePath)
        {
            // Only attempt the delete if the file exists
            if (File.Exists(filePath))
            {
                File.Delete(filePath);
            }
        }

        #endregion Private Static Methods
    }
}
Test Upload Application Running.

Test Files Uploaded.

There you have it: a rather around-the-houses and off-the-wall tour of setting up an Azure storage container and then linking it to an Azure CDN. Plenty of images still need to be brought over into the new storage container (and a few code changes to boot), but I feel like I am on a pilgrimage to a better place. I hope this proves useful nonetheless and, until the next time, happy coding!

Addendum

After a further play I realised that the C# example I’d knocked up was not setting the content type correctly on upload, as follows:

Incorrect Content Type.

To this end, I adjusted the UploadFilesToStorage method to set the content type on a CloudBlockBlob before the upload is triggered, as illustrated here:

/// <summary>
/// Private static utility method that, using a CloudBlobContainer, uploads the
/// files passed in to Azure Storage.
/// </summary>
/// <param name="container">A reference to the container to upload to.</param>
/// <param name="filesToProcess">The files to upload to the container.</param>
private static void UploadFilesToStorage(CloudBlobContainer container, List<string> filesToProcess)
{
	CloudBlockBlob blockBlob;

	// Process each file, uploading it to storage and deleting the local file reference as we go
	filesToProcess.ForEach(filePath =>
	{
		Console.WriteLine($"Processing and uploading file from path '{ filePath } (then deleting)'.");

		// Upload the file based on name (note - there is no existence check or guarantee of uniqueness - production code would need this)
		blockBlob = container.GetBlockBlobReference(Path.GetFileName(filePath));

		// Correctly configure the content type before uploading (note: "image/jpeg" is the registered JPEG MIME type; it's hard-coded here, so other file types would need their type inferred)
		blockBlob.Properties.ContentType = "image/jpeg";

		blockBlob.UploadFromFile(filePath);

		RemoveFileFromLocalDirectory(filePath);
	});
}

You should then see items with the correct content type in the container:

Correct Content Type.

To access images via the custom domain (essentially my CDN) I also had to ‘purge’ the endpoint at this point, so that the freshly uploaded content was served rather than stale cached copies.

Again, happy coding.

Generic Value Type List CSV Extension

I came across a piece of code on my travels whereby a comma-separated string was split and then parsed into long values, ultimately returned to the method caller as a list of longs. A similar method had also been created for converting values to integers; not particularly DRY code, but it functioned fine. All good and well, I thought, but there is no reason not to encapsulate this into a method (I opted to create a string extension) that encompasses working with value types in general. Not a 100% solution, but a start down the right track.

Thirty minutes of tinkering yielded the following string extension (and supporting unit tests), which is currently constrained to value types only but could possibly be constrained further. I’ve provided some unit test declarations to give you an idea of its usage. It has a piece of cheeky boolean handling in it which is perhaps not best placed, as you can quickly end up on a dark road when shoehorning type-specific code into generic methods. For now, though, it seems like an acceptable solution.

Here’s the extension for starters:

using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

namespace GenericExtensions
{
    public static class ObjectExtensions
    {
        /// <summary>
        /// Public static string extension that can convert a comma-separated list (string)
        /// into a List of type T (where T is a struct, just to make this more constrained).
        /// </summary>
        /// <typeparam name="T">The struct type to attempt a conversion to (for each value in the comma-separated string).</typeparam>
        /// <param name="csvString">The comma-separated source string to split into values and then attempt conversions on.</param>
        /// <param name="errorList">A List of type string that catches conversion errors.</param>
        /// <returns>A list containing types of T where a conversion is possible.</returns>
        public static List<T> GetValuesFromCsvString<T>(this string csvString, out List<string> errorList) where T : struct
        {
            List<T> convertedValues = new List<T>();

            errorList = new List<string>();

            // Only proceed (and attempt conversions) where the string provided contains content
            if (!string.IsNullOrWhiteSpace(csvString))
            {
                // Trim up csv values (we don't want whitespace to interfere with the conversion)
                IEnumerable<string> trimmedCsvValues = csvString.Split(',').Select(csv => csv.Trim());

                // Attempt the conversion for each value in the comma-separated list (value to type T). Note errors if and when they occur and store
                // errors/converted values in the appropriate lists
                foreach (string csv in trimmedCsvValues)
                {
                    try
                    {
                        // Trigger manual conversion for bool types. Not the most ideal but sufficient for basic needs
                        if (typeof(T) == typeof(bool))
                        {
                            switch (csv.ToLowerInvariant())
                            {
                                case "1":
                                case "yes":
                                case "on":
                                    convertedValues.Add((T)Convert.ChangeType(true, typeof(T), CultureInfo.InvariantCulture));
                                    break;

                                case "0":
                                case "no":
                                case "off":
                                    convertedValues.Add((T)Convert.ChangeType(false, typeof(T), CultureInfo.InvariantCulture));
                                    break;

                                default:
                                    // Conversion is not possible
                                    throw new InvalidCastException();
                            }
                        }
                        else
                        {
                            // Standard conversion attempt for other structs
                            convertedValues.Add((T)Convert.ChangeType(csv, typeof(T), CultureInfo.InvariantCulture));
                        }
                    }
                    catch (Exception ex)
                    {
                        errorList.Add($"Could not convert value '{ csv }' to type '{ typeof(T).Name }'. Exception type: { ex.GetType().Name }; Exception Message: { ex.Message }");
                    }
                }
            }

            // Return successfully converted values
            return convertedValues;
        }
    }
}
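
Before moving on to the tests, here's a minimal usage sketch to show the extension being called in everyday code (this assumes a console application with a using directive for the GenericExtensions namespace):

// Convert a comma-separated string to integers, collecting any conversion errors along the way
List<int> ids = "10, 20, not-a-number, 30".GetValuesFromCsvString<int>(out List<string> errors);

Console.WriteLine(string.Join(", ", ids));  // Prints: 10, 20, 30
errors.ForEach(Console.WriteLine);          // Prints the single error entry (for 'not-a-number')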

Lastly, here are the unit tests to put the extension through its paces:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System;
using System.Collections.Generic;

namespace GenericExtensions.Tests
{
    [TestClass]
    public class ObjectExtensionTests
    {
        [TestMethod]
        public void GetValuesFromCsvString_ConversionToBool_InRangeValuesConverted()
        {
            List<bool> booleans = "1,0, 1 ,20,test,on,off,No,YES,  3.40282347E+38   , 99.9 "
                .GetValuesFromCsvString<bool>(out List<string> errorList);

            Assert.IsTrue(booleans.Count == 7);
            Assert.IsTrue(errorList.Count == 4);

            Assert.AreEqual(true, booleans[0]);
            Assert.AreEqual(false, booleans[1]);
            Assert.AreEqual(true, booleans[2]);
            Assert.AreEqual(true, booleans[3]);
            Assert.AreEqual(false, booleans[4]);
            Assert.AreEqual(false, booleans[5]);
            Assert.AreEqual(true, booleans[6]);
        }

        [TestMethod]
        public void GetValuesFromCsvString_ConversionToLong_InRangeValuesConverted()
        {
            List<long> longs = "test,99.9,9223372036854775807 ,9223372036854775808"
                .GetValuesFromCsvString<long>(out List<string> errorList);

            Assert.IsTrue(longs.Count == 1);
            Assert.IsTrue(errorList.Count == 3);

            Assert.AreEqual(9223372036854775807, longs[0]);
        }

        [TestMethod]
        public void GetValuesFromCsvString_ConversionToFloat_InRangeValuesConverted()
        {
            List<float> floats = "1,99.998, 3.40282347E+38 ,9223372036854775808, random string "
                .GetValuesFromCsvString<float>(out List<string> errorList);

            Assert.IsTrue(floats.Count == 4);
            Assert.IsTrue(errorList.Count == 1);

            Assert.AreEqual(1f, floats[0]);
            Assert.AreEqual(99.998f, floats[1]);
            Assert.AreEqual(3.40282347E+38f, floats[2]);
            Assert.AreEqual(9.223372E+18f, floats[3]);
        }

        [TestMethod]
        public void GetValuesFromCsvString_ConversionToInt_InRangeValuesConverted()
        {
            List<int> ints = "1,2147483647, random string  , 3.40282347E+38 ,-2147483649, 22 "
                .GetValuesFromCsvString<int>(out List<string> errorList);

            Assert.IsTrue(ints.Count == 3);
            Assert.IsTrue(errorList.Count == 3);

            Assert.AreEqual(1, ints[0]);
            Assert.AreEqual(2147483647, ints[1]);
            Assert.AreEqual(22, ints[2]);
        }

        [TestMethod]
        public void GetValuesFromCsvString_ConversionToShort_InRangeValuesConverted()
        {
            List<short> shorts = "1,2147483647, random string  , 32767 ,32768,-32768,-32769,-2147483649, 22 "
                .GetValuesFromCsvString<short>(out List<string> errorList);

            Assert.IsTrue(shorts.Count == 4);
            Assert.IsTrue(errorList.Count == 5);

            Assert.AreEqual(1, shorts[0]);
            Assert.AreEqual(32767, shorts[1]);
            Assert.AreEqual(-32768, shorts[2]);
            Assert.AreEqual(22, shorts[3]);
        }

        [TestMethod]
        public void GetValuesFromCsvString_ConversionToUInt16_InRangeValuesConverted()
        {
            List<UInt16> UInt16s = "1,2147483647, random string, -1, -22,  , 65535 ,65536,-32768,-32769,-2147483649, 22 "
                .GetValuesFromCsvString<UInt16>(out List<string> errorList);

            Assert.IsTrue(UInt16s.Count == 3);
            Assert.IsTrue(errorList.Count == 9);

            Assert.AreEqual(1, UInt16s[0]);
            Assert.AreEqual(65535, UInt16s[1]);
            Assert.AreEqual(22, UInt16s[2]);
        }
    }
}

A very quick prototype piece of code for sure, which needs further testing. I'd also like to performance test this implementation and perhaps get a better idea of how Convert.ChangeType works under the hood; a crude Stopwatch harness is probably where I'd start, something along these lines (the iteration count is plucked out of thin air):
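
using System.Diagnostics;

...

// Rough and ready timing of the extension (by no means a rigorous benchmark)
Stopwatch stopwatch = Stopwatch.StartNew();

for (int iteration = 0; iteration < 100000; iteration++)
{
    List<int> values = "1, 2, 3, 4, 5".GetValuesFromCsvString<int>(out List<string> errors);
}

stopwatch.Stop();

Console.WriteLine($"Elapsed: { stopwatch.ElapsedMilliseconds }ms");

I hope everyone is having a super weekend and take care until the next time. 🙂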

Session State Behaviour & Async Headaches

I was battling a little issue today surrounding an action method no longer being called asynchronously; the issue turned out to be related to some recent session-based code being added to our code base. In short, the minute session use is detected in the underlying code, the 'default' session state handling behaviour throws a monkey wrench into asynchronicity, regardless of the operation being performed on the session data (i.e. writing to the session or just reading from it). This, for me, turned into a performance headache.

There is an attribute that can be placed at controller level that states ‘I’m reading from session only, please continue to allow asynchronous operations’, which when used looks like this:

[SessionState(System.Web.SessionState.SessionStateBehavior.ReadOnly)]
public class TestController : Controller
{
          ……
}

However, if you want to implement a control mechanism at the action level you need to travel down the custom controller factory/attribute route. This post turned out to be a lifesaver: Session State Behaviour Per Action in ASP.NET MVC

In short, this setup enables you to set session state behaviour handling at the action level by adorning the target method with a custom attribute; bonza!
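
To give a flavour of the moving parts, here's a minimal sketch of what such an attribute can look like, along with an example of it adorning an async action (the exact implementation in the linked post may differ slightly, and the action shown here is purely illustrative):

using System;
using System.Web.SessionState;

/// <summary>
/// Method-level attribute allowing an action to declare its desired session state behaviour.
/// </summary>
[AttributeUsage(AttributeTargets.Method)]
public sealed class ActionSessionStateAttribute : Attribute
{
    /// <summary>
    /// The session state behaviour requested for the adorned action method.
    /// </summary>
    public SessionStateBehavior Behaviour { get; private set; }

    public ActionSessionStateAttribute(SessionStateBehavior behaviour)
    {
        Behaviour = behaviour;
    }
}

...

// Usage - a session-reading action can then remain asynchronous
[ActionSessionState(SessionStateBehavior.ReadOnly)]
public async Task<ActionResult> GetDashboardData()
{
          ……
}

The custom controller factory housing the override shown below then needs registering, typically in Global.asax Application_Start; something along these lines, assuming the factory class is named SessionStateControllerFactory:

ControllerBuilder.Current.SetControllerFactory(typeof(SessionStateControllerFactory));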

When inspecting this, and the underlying base class implementations, you will most likely discover that it's not immediately clear how to handle scenarios where overloaded methods exist (where methods match by name but differ by signature). This, for me, caused several crunches into the dreaded AmbiguousMatchException.

The implementation below shows my modified override of the DefaultControllerFactory GetControllerSessionBehavior method, which is designed to a) avoid exceptions and b) only try to 'discover' the attribute and apply custom session state behaviour handling where a single method is 'matched' (based on the supplied RequestContext). If the custom attribute is not found, more than one method is found matching by name, or another error occurs, base logic kicks in and takes precedence:

        /// <summary>
        /// Public overridden method that looks at the controller/action method being called and attempts
        /// to see if a custom ActionSessionStateAttribute (determining how session state behaviour should work) is in play.
        /// If it is, return the custom attribute's SessionStateBehavior value via the Behaviour property; in all other instances
        /// refer to the base class for obtaining a SessionStateBehavior value (via base.GetControllerSessionBehavior).
        /// </summary>
        /// <param name="requestContext">The request context object (to get information about the action called).</param>
        /// <param name="controllerType">The controller type linked to this request (used in a reflection operation to access a MethodInfo object).</param>
        /// <returns>A SessionStateBehavior enumeration value (either dictated by us based on ActionSessionStateAttribute usage or the base implementation).</returns>
        protected override SessionStateBehavior GetControllerSessionBehavior(RequestContext requestContext, Type controllerType)
        {
            try
            {
                // At the time of writing base.GetControllerSessionBehavior just returns SessionStateBehavior.Default, but to make this robust we should still call
                // base.GetControllerSessionBehavior if the controllerType is null so any changes to the base behaviour in future are adhered to
                if (controllerType != null)
                {
                    // Defensive code to check the state of RouteData before proceeding
                    if (requestContext.RouteData != null
                        && requestContext.RouteData.Values != null
                        && requestContext.RouteData.Values["action"] != null)
                    {
                        // Attempt to find the MethodInfo type behind the action method requested. There is a limitation here (just because of what we are provided with) that
                        // this piece of custom attribute handling (for ActionSessionStateAttribute) can only be accurately determined if we find just one matching method
                        string actionName = requestContext.RouteData.Values["action"].ToString();
                        List<MethodInfo> controllerMatchingActionMethods = controllerType.GetMethods(BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance)
                            .Where(method => method.Name.Equals(actionName, StringComparison.InvariantCultureIgnoreCase)).ToList();

                        // In order to avoid ambiguous match exceptions (plus we don't have enough information about method parameter types to pick the correct method in the case
                        // where more than one match exists) I needed to rig this in such a way that it can only work where one matching method, by name, exists (works for our current use cases) 
                        if (controllerMatchingActionMethods != null && controllerMatchingActionMethods.Count == 1)
                        {
                            MethodInfo matchingActionMethod = controllerMatchingActionMethods.FirstOrDefault();

                            if (matchingActionMethod != null)
                            {
                                // Does the action method requested use the custom ActionSessionStateAttribute. If yes, we can return the SessionStateBehaviour specified by the
                                // developer who used the attribute. Otherwise, just fail over to base logic
                                ActionSessionStateAttribute actionSessionStateAttr =
                                    matchingActionMethod.GetCustomAttributes(typeof(ActionSessionStateAttribute), false)
                                        .OfType<ActionSessionStateAttribute>()
                                            .FirstOrDefault();

                                if (actionSessionStateAttr != null)
                                {
                                    return actionSessionStateAttr.Behaviour;
                                }
                            }                       
                        }
                    }
                }
            }
            catch
            {
                // If any issues occur with our custom SessionStateBehavior inferring handling we're best to just let the base method calculate this instead (best efforts 
                // have been made to avoid exceptions where possible). Could consider logging here in future (but we're in an odd place in the MVC lifecycle, could cause
                // ourselves more issues by attempting this so will only do if absolutely required)
            }

            return base.GetControllerSessionBehavior(requestContext, controllerType);   
        }

This appeared to be a pretty robust solution in my case (and we gained back the asynchronous processing on the targeted methods = big plus), so, hopefully, this comes in handy for others at some point.

Cheers all!

Classes and instances…what gives!

My brother, who is a DevOps and integrations whizz, got around to quizzing me, after hearing chatter amongst the nearby developer folk in his building, about the wonderful world of classes and instances, as they pertain to C#.

I reeled off the best explanation I could as I sipped on the best damn gin ever (actually, voted the UK's best, check this out) and scoffed down some superb steak and chips. I didn't think my musings were all that bad, but I got to thinking that formalising and solidifying my thoughts on the matter wouldn't hurt. One last aside: if you're in Norfolk and fancy a good meal, this is worth hitting up:

The Boars Spooner Row

What is a steak…I mean, class!?

Food on the brain! Ok, in layman's terms, a class simply defines a template or blueprint for anything being represented in a given computer program. This blueprint contains (but doesn't have to, and is not limited to), on a basic level, properties that describe the thing being templated and methods that represent actions or functions (which may or may not receive external stimuli, or variables) the, for want of a better term, thing can perform. The class, in and of itself, does nothing up until the point it is brought to life…meaning when an instance is created (ignoring static classes, for the purposes of this explanation).

So, what is an instance?

Instances are typically brought to life for actual use, in C#, using the new keyword. All we are doing here is bringing an occurrence (to try and avoid typing instance, again) of a given blueprint into being, so the descriptive values of the object can be accessed and its functionality triggered.

I would normally use the tried and tested example of vehicles to show how this actually works, with a little dip into inheritance to boot, but I'm going off-piste with the first thing that came into my head…different types of homes is what I'm going with.

Let’s start with a blueprint (or class) for a Home. I don’t want this to be too complicated but going too trivial may not get the key points across, so hopefully this middle ground will make sense:

/// <summary>
/// The blueprint, in our application, for a place
/// to live.
/// </summary>
public class Home
{
	#region Private Readonly Data Fields

	/// <summary>
	/// Every home is expected to have rooms. This value, as it's marked
	/// as readonly, can only be set with a value here as part of the declaration or
	/// as part of a 'constructor' (that is involved in building an instance of a home) - in this 
	/// first iteration the number of rooms in a home isn't going to change (we'll come back to this!).
	/// </summary>
	private readonly int numberOfRooms;

	#endregion Private Readonly Data Fields

	#region Private Data Fields

	/// <summary>
	/// A private variable that keeps track of whether the 
	/// door to the home is open or closed. The door to a home
	/// can only be opened/closed by triggering the OpenDoor/CloseDoor
	/// methods on an 'instance' of the type, no direct 
	/// access is allowed = encapsulation.
	/// </summary>
	private bool doorOpen = false;

	#endregion Private Data Fields

	#region Public Properties

	/// <summary>
	/// Allow an object user to get a value representing if a home's
	/// door is open or closed, without allowing them to directly 
	/// change the state of the door.
	/// </summary>
	public bool IsDoorOpen
	{
		get
		{
			return doorOpen;
		}
	}

	/// <summary>
	/// Much like with IsDoorOpen, allow an object user to get a 
	/// readout of the number of rooms in this home without any direct
	/// access to change it at this point (and the underlying variable
	/// is currently readonly anyway, disallowing changes at this time).
	/// </summary>
	public int NumberOfRooms
	{
		get
		{
			return numberOfRooms;
		}
	}

	#endregion Public Properties

	#region Constructor

	/// <summary>
	/// The 'constructor' for a Home that is used to setup object
	/// state for each and every instance of a home.
	/// </summary>
	/// <param name="roomCount">The number of rooms that are in this house (provided by the object user).</param>
	public Home(int roomCount)
	{
		numberOfRooms = roomCount;
	}

	#endregion Constructor

	#region Public Methods

	/// <summary>
	/// Public method that triggers an action on this home, i.e. opens
	/// the door of this home.
	/// </summary>
	public void OpenDoor()
	{
		// Opens the door to the house
		doorOpen = true;

		// Perhaps other things happen as a result of this...
		Console.WriteLine("The door on this home has been opened.");
	}

	/// <summary>
	/// Public method that triggers an action on this home, i.e. closes
	/// the door of this home. 
	/// </summary>
	public void CloseDoor()
	{
		// Closes the door to the house
		doorOpen = false;

		// Perhaps other things happen as a result of this...
		Console.WriteLine("The door on this home has been closed.");
	}

	#endregion Public Methods
}

I’ve outlined the starting concept of what I think a ‘Home’ looks and feels like. A home has, from my very barebones view (forgetting about things like walls, ahem!):

  • A number of rooms.
  • A door.
  • A way for the door to be opened and closed.

Obviously, homes are far more complicated than this, but this will get us going. Regardless of the keywords and definitions used, this is nothing more than a blueprint; an instance is required to start interacting with a home, as follows:

        /// <summary>
        /// Create an instance of a home, using the blueprint provided, and open
        /// then close the door (as well as read out the number of rooms).
        /// </summary>
        private static void PlayWithAHome()
        {
            // Use the 'Home' class blueprint to create an 'instance' of a Home so we can actually start reading/triggering facets of it
            // The Home blueprint demands, in this case, that we provide the number or rooms (as part of the constructor)
            Home testHome = new Home(6);

            // Let's use our home...
            Console.WriteLine($"The home has { testHome.NumberOfRooms } rooms.");               // How many rooms does the home have
            Console.WriteLine($"The door is { (testHome.IsDoorOpen ? "open" : "closed") }.");   // Is the door open or closed (should start closed)

            // Let's open the door (we should get a console readout as part of triggering this functionality on a Home)
            testHome.OpenDoor();

            Console.WriteLine($"The door is now { (testHome.IsDoorOpen ? "open" : "closed") }.");   // Is the door open or closed (should now be open)

            // Stop the application so we can read the output
            Console.Read();
        }

Home object being used.

A simple run-through then: a home has a blueprint defining that it will contain a certain number of rooms and a door, a way to read out the number of rooms and whether the door is open (private fields and properties), and a mechanism for opening and closing the door (methods). This is the class (or type). To actually get a readout on the number of rooms and start opening and closing the door we need to build the home, end of; this is the instance.

There are a few extra comments in the Home class that discuss ‘readonly’ variables, ‘getter only’ properties (which ties in encapsulation) and the constructor; I’ll leave you to peruse them as I’ve covered the meat of classes and instances at this point.

Sideline question…how does inheritance come into this?

Just before my poor Mum looked destined to snooze off at the dinner table, meaning for everyone’s sanity the subject had to be changed, we also skimmed inheritance; so I’ll give one brief example below using our ‘Home’ class from before (modified to make it simpler this time around).

Inheritance, in short, is the idea of building a 'chain' of related classes, by building common functionality into a 'base' class and then reusing/overriding this functionality in one or more sub-classes; basically, new classes can be created using an existing class as a starting point. The core concept behind classical inheritance is the 'is a' relationship between types; below we have a one man tent, a bungalow and a house, and each can be paired with the term 'is a' to establish a valid-sounding relationship (a house 'is a' home, for example).

Firstly, although not required for inheritance, I’ve created an interface, or contract, that outlines the common functionality that any implementing class must define (and subsequently, will be tied to subclasses). This wasn’t mandatory for the example I was putting together but I’ve opted to roll with it.

namespace HomeApplication
{
    /// <summary>
    /// Public interface that defines the properties and behaviours
    /// that all homes should exhibit. The Home class will use this interface
    /// that basically states that the class will implement the described properties/methods;
    /// this can be thought of as a contract (a promise that the facets will be found on the class).
    /// </summary>
    public interface IHome
    {
        /// <summary>
        /// Homes all have a certain number of floors, or living 'levels'.
        /// </summary>
        int NumberOfFloors { get; }

        /// <summary>
        /// Homes all have a certain number of rooms.
        /// </summary>
        int NumberOfRooms { get; }

        /// <summary>
        /// Homes all have a way to tell if the door is open or closed.
        /// </summary>
        bool IsDoorOpen { get; }

        /// <summary>
        /// Homes (for my example) are expected to have a way to open the door.
        /// </summary>
        void OpenDoor();

        /// <summary>
        /// Homes (for my example) are expected to have a way to close the door.
        /// </summary>
        void CloseDoor();

        /// <summary>
        /// Homes (for my example) are expected to have a way to turn on the heating.
        /// </summary>
        void TurnOnHeating();
    }
}

Using our IHome interface, the Home class outlines common functionality and properties to be shared by all subclasses; we are ultimately just using, as stated before, this class as a starting point to create other classes.

This class has been marked as abstract (which is not a requirement for implementing inheritance), meaning that a 'Home' is an abstract concept only; I want to disallow users from creating an instance of this type, so only instances of subclasses can be created. In as short a description as possible, virtual members provide a default implementation that can be optionally overridden by subclasses; abstract members, however, require subclasses to provide the full implementation (we are simply stating, in this case, that subclasses should implement their own particular flavour of the functionality). Other than that, I've described other pertinent details in the comments within the class definition itself.

using System;

namespace HomeApplication
{
    /// <summary>
    /// The blueprint, in our application, for a place
    /// to live. This is 'abstract', meaning no one can create
    /// a home as an instance to use directly; they can only create
    /// sub-classes of 'Home' for use in an application.
    /// </summary>
    public abstract class Home : IHome      // IHome defines a contract that 'Home' has to conform to (and therefore, that all sub-classes will be locked in to)
    {
        #region Public Properties

        /// <summary>
        /// Allow an object user to read the number of floors
        /// in this home (this value can only be set privately
        /// within this class, not from another class or sub-class directly).
        /// </summary>
        public int NumberOfFloors { get; private set; }

        /// <summary>
        /// Allow an object user to read the number of rooms
        /// in this home (this value can only be set privately
        /// within this class, not from another class or sub-class directly).
        /// </summary>
        public int NumberOfRooms { get; private set; }

        #endregion Public Properties

        #region Public Virtual Properties

        /// <summary>
        /// Allow an object user to obtain a value that represents if
        /// the door is open or closed. This is virtual as I want to allow
        /// derived types to optionally override how this is determined.
        /// </summary>
        public virtual bool IsDoorOpen { get; private set; }

        #endregion Public Virtual Properties

        #region Constructor

        /// <summary>
        /// When an 'instance' of a home is created we expect
        /// to be provided with the number of floors and rooms
        /// available within the home.
        /// </summary>
        /// <param name="floors">The default number of floors on offer.</param>
        /// <param name="rooms">The default number of rooms on offer.</param>
        public Home(int floors, int rooms)
        {
            // Store the provided values in the appropriate properties
            NumberOfFloors = floors;
            NumberOfRooms = rooms;
        }

        #endregion Constructor

        #region Protected Methods

        /// <summary>
        /// Protected members are only accessible from within this type and from direct
        /// descendant types, not from an external class. I want sub-types to be able to alter
        /// how many rooms (by adding rooms) can be found in the home.
        /// </summary>
        /// <param name="numberOfRooms">The number of rooms to add.</param>
        protected void AddExtraRooms(int numberOfRooms)
        {
            NumberOfRooms += numberOfRooms;
        }

        #endregion Protected Methods

        #region Public Virtual Methods

        /// <summary>
        /// Public virtual method that closes a home's door - this represents
        /// the 'default' implementation only. This is virtual as I want derived 
        /// classes to be able to optionally override how this process 
        /// happens (see the OneManTent, for example).
        /// </summary>
        public virtual void CloseDoor()
        {
            // Closes the door to the house (enhanced to fully use auto properties)
            IsDoorOpen = false;

            // Perhaps other things happen as a result of this...
            Console.WriteLine("The door on this home has been closed.");
        }


        /// <summary>
        /// Public virtual method that opens a home's door - this represents
        /// the 'default' implementation only. This is virtual as I want derived 
        /// classes to be able to optionally override how this process 
        /// happens (see the OneManTent, for example).
        /// </summary>
        public virtual void OpenDoor()
        {
            // Opens the door to the house (enhanced to fully use auto properties)
            IsDoorOpen = true;

            // Perhaps other things happen as a result of this...
            Console.WriteLine("The door on this home has been opened.");
        }

        #endregion Public Virtual Methods

        #region Public Abstract Methods

        /// <summary>
        /// Final method...this is abstract as we are enforcing a situation whereby derived
        /// types of 'Home' have to implement this themselves (every home's method of heating will
        /// vary in my test setup) - there is no default implementation.
        /// </summary>
        public abstract void TurnOnHeating();

        #endregion Public Abstract Methods
    }
}

Our other classes are utilising inheritance directly, using the Home class as a ‘template’ and using ‘overrides’ where applicable to provide their own spin on functionality, as required.

For example, all types support opening and closing of the door; however, tents override this functionality to take the 'zip getting stuck' into account. Houses allow for extensions to be built, which ultimately means that further rooms get added to the home. Further inline comments are there for more in-depth explanations as to what is going on.

using System;

namespace HomeApplication
{
    /// <summary>
    /// Blueprint that defines what a house looks like
    /// (this 'is a' home in my example).
    /// </summary>
    public class House : Home       // A house 'is a' home, but has some differences, which this class outlines
    {
        #region Constructor

        /// <summary>
        /// The constructor for a house consumes values that represent
        /// the number of floors and rooms that are available - these are passed
        /// directly to the Home base class's constructor.
        /// </summary>
        /// <param name="floors">The default number of floors on offer.</param>
        /// <param name="rooms">The default number of rooms on offer.</param>
        public House(int floors, int rooms) 
            : base(floors, rooms)
        {

        }

        #endregion Constructor

        #region Public Methods

        /// <summary>
        /// This method is house specific, in my example (could apply to a bungalow, of course, but
        /// I've opted to not allow this for now). A house can have an extension added by calling the protected
        /// (only accessible from the Home class or derived types, like this 'House') AddExtraRooms method. The room
        /// count for this House will therefore be increased by one.
        /// </summary>
        public void AddAnExtension()
        {
            Console.WriteLine("Adding an extension to the house (+1 rooms).");
            AddExtraRooms(1);
        }

        #endregion Public Methods

        #region Public Overridden Methods

        /// <summary>
        /// This represents what happens when the heating is turned on in a house 
        /// (remember, this was marked as abstract on the base class so this class
        /// has no choice but to offer up some kind of implementation). Super toasty
        /// central heating is on offer here!
        /// </summary>
        public override void TurnOnHeating()
        {
            Console.WriteLine("Turning on the central heating in the house.");
        }

        #endregion Public Overridden Methods
    }
}
using System;

namespace HomeApplication
{
    /// <summary>
    /// Blueprint that defines what a bungalow looks like
    /// (this 'is a' home in my example).
    /// </summary>
    public class Bungalow : Home        // A bungalow 'is a' home, but has some differences, which this class outlines
    {
        #region Constructor

        /// <summary>
        /// The constructor for a bungalow consumes a value that represent
        /// the number of rooms that are available - this is passed
        /// directly to the Home base class's constructor. Notice that we are internally
        /// setting the number of floors to 1 (illustration only, to show how a derived type
        /// can take control of its own state).
        /// </summary>
        /// <param name="rooms">The default number of rooms on offer.</param>
        public Bungalow(int rooms) 
            : base(1, rooms)            // Bungalows - we only allow a single floor in our example
        {

        }

        #endregion Constructor

        #region Public Overridden Methods

        /// <summary>
        /// This represents what happens when the heating is turned on in a bungalow 
        /// (remember, this was marked as abstract on the base class so this class
        /// has no choice but to offer up some kind of implementation). A coal fire
        /// has been selected as the weapon of choice in this case.
        /// </summary>
        public override void TurnOnHeating()
        {
            Console.WriteLine("Lighting up the coal fire in the bungalow.");
        }

        #endregion Public Overridden Methods
    }
}
using System;

namespace HomeApplication
{
    /// <summary>
    /// Blueprint that defines what a one man tent looks like
    /// (this 'is a' home in my example).
    /// </summary>
    public class OneManTent : Home      // A one man tent 'is a' home, but has some differences, which this class outlines
    {
        #region Public Properties

        /// <summary>
        /// The door for a tent has an added element to worry about...the bloody zip!
        /// If the zip is broken the door (in my example) is classed as stuck open; this might not
        /// be true to reality but serves as illustration only.
        /// </summary>
        public bool IsZipBroken { get; set; }

        #endregion Public Properties

        #region Public Overridden Properties

        /// <summary>
        /// Overridden functionality from the 'Home' base class. If the zip is broken
        /// the door is classed as open. If the zip isn't broken we simply read if the door
        /// is open or closed from the base class.
        /// </summary>
        public override bool IsDoorOpen
        {
            get
            {
                return IsZipBroken ? true : base.IsDoorOpen;
            }
        }

        #endregion Public Overridden Properties

        #region Constructor

        /// <summary>
        /// The constructor for a one man tent consumes a value that represent
        /// the number of rooms that are available - this is passed
        /// directly to the Home base class's constructor. Notice that we are internally
        /// setting the number of floors to 1 (illustration only, to show how a derived type
        /// can take control of its own state).
        /// </summary>
        /// <param name="rooms">The default number of rooms on offer.</param>
        public OneManTent(int rooms) 
            : base(1, rooms)                // Tents - we only allow a single floor in our example
        {

        }

        #endregion Constructor

        #region Public Overridden Methods

        /// <summary>
        /// A tent overrides how the door is opened. If the zip is broken the tent
        /// door is stuck open. Otherwise, the door opens as normal (via functionality
        /// found on the 'base' class).
        /// </summary>
        public override void OpenDoor()
        {
            if (!IsZipBroken)
            {
                // Zip is not stuck, open the door as normal
                base.OpenDoor();
            }
            else
            {
                // The zip is stuck!!!
                Console.WriteLine("The zip is broken so the tent door is stuck open");
            }
        }

        /// <summary>
        /// A tent overrides how the door is closed. If the zip is broken the tent
        /// door is stuck open. Otherwise, the door closes as normal (via functionality
        /// found on the 'base' class).
        /// </summary>
        public override void CloseDoor()
        {
            if (!IsZipBroken)
            {
                // Zip is not stuck, close the door as normal
                base.CloseDoor();
            }
            else
            {
                // The zip is stuck!!!
                Console.WriteLine("The zip is broken so the tent door is stuck open");
            }
        }

        /// <summary>
        /// This represents what happens when the heating is turned on in a one man
        /// tent (remember, this was marked as abstract on the base class so this class
        /// has no choice but to offer up some kind of implementation). Hot water bottles
        /// are the only choice here!
        /// </summary>
        public override void TurnOnHeating()
        {
            Console.WriteLine("Urm...using the hotwater bottle for extra heat!");
        }

        #endregion Public Overridden Methods
    }
}
/// <summary>
/// Further fun and games with homes!
/// </summary>
private static void PlayWithHomes()
{
	// A House, Bungalow and OneManTent are 'Homes', therefore share some of the blueprint information (as they are derived classes). Let's use them, and explore the differences

	// Configure instances, with floor and room numbers, as available to us
	House myHouse = new House(2, 8);
	Bungalow myBungalow = new Bungalow(7);
	OneManTent myTent = new OneManTent(2);

	// 1) The House...
	Console.WriteLine("Details about myHouse..." + Environment.NewLine);
	Console.WriteLine($"The house has { myHouse.NumberOfRooms } rooms.");
	Console.WriteLine($"The house has { myHouse.NumberOfFloors } floors.");
	Console.WriteLine($"The house door is { (myHouse.IsDoorOpen ? "open" : "closed") }.");

	// Open the door and check the door state
	myHouse.OpenDoor();
	Console.WriteLine($"The house door is { (myHouse.IsDoorOpen ? "open" : "closed") }.");

	// Turn on the heating in the house
	myHouse.TurnOnHeating();

	// Add an extension (house specific)
	myHouse.AddAnExtension();
	Console.WriteLine($"The house has { myHouse.NumberOfRooms } rooms (after adding an extension)." + Environment.NewLine);

	// ---------------------------------------------------------------------------------------------------

	// 2) The Bungalow...
	Console.WriteLine("Details about myBungalow..." + Environment.NewLine);
	Console.WriteLine($"The bungalow has { myBungalow.NumberOfRooms } rooms.");
	Console.WriteLine($"The bungalow has { myBungalow.NumberOfFloors } floor.");
	Console.WriteLine($"The bungalow door is { (myBungalow.IsDoorOpen ? "open" : "closed") }.");

	// Open the door and check the door state
	myBungalow.OpenDoor();
	Console.WriteLine($"The bungalow door is { (myBungalow.IsDoorOpen ? "open" : "closed") }.");

	// And close it this time, for good measure
	myBungalow.CloseDoor();
	Console.WriteLine($"The bungalow door is { (myBungalow.IsDoorOpen ? "open" : "closed") }.");

	// Turn on the heating in the bungalow
	myBungalow.TurnOnHeating();

	Console.WriteLine();

	// ---------------------------------------------------------------------------------------------------

	// 3) The One Man Tent...
	Console.WriteLine("Details about myTent..." + Environment.NewLine);
	Console.WriteLine($"The tent has { myTent.NumberOfRooms } rooms.");
	Console.WriteLine($"The tent has { myTent.NumberOfFloors } floor.");
	Console.WriteLine($"The tent door is { (myTent.IsDoorOpen ? "open" : "closed") }.");

	// Let's break the zip!
	myTent.IsZipBroken = true;

	// Open the door and check the door state (it should be stuck open)
	myTent.OpenDoor();
	Console.WriteLine($"The tent door is { (myTent.IsDoorOpen ? "open" : "closed") }.");

	// And close it this time, for good measure
	myTent.CloseDoor();
	Console.WriteLine($"The tent door is { (myTent.IsDoorOpen ? "open" : "closed") }.");

	// Fix the zip and try to re-open and close the door
	myTent.IsZipBroken = false;

	myTent.OpenDoor();
	Console.WriteLine($"The tent door is { (myTent.IsDoorOpen ? "open" : "closed") }.");

	myTent.CloseDoor();
	Console.WriteLine($"The tent door is { (myTent.IsDoorOpen ? "open" : "closed") }.");

	// Turn on the heating in the tent
	myTent.TurnOnHeating();

	// Stop the application so we can read the output
	Console.Read();
}

Finally, the following diagram shows that tents, bungalows and houses ‘are’ homes; they share the common facets of a home whilst providing their own functionality and overridden logic, that’s essentially it!

Home class diagram.

Home instances output.
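
One final snippet before signing off on the example; because every one of these types 'is a' home, instances can also be treated uniformly through the Home base class, which is polymorphism in a nutshell. Here's a small sketch using the classes above (the method name is just for illustration):

	/// <summary>
	/// Bonus round...handle every variety of home via the base type.
	/// </summary>
	private static void PlayWithHomesPolymorphically()
	{
		// Any derived type can be stored in a list typed to the base class
		List<Home> homes = new List<Home>
		{
			new House(2, 8),
			new Bungalow(7),
			new OneManTent(2)
		};

		// Each instance responds with its own TurnOnHeating implementation
		homes.ForEach(home => home.TurnOnHeating());
	}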

I’ll do a more in depth OOP principle post in the future so watch this space.

Happy Easter!!!

OpenCover UI – Unit Test Code Coverage

A little sideline post to tide everyone over (as I’m still working on the Alexa piece, which I want to do proper justice to when it’s released). I’ve been messing around with a few rough and ready projects and wanted to get an idea of how to dig into code coverage, in respect of Unit Tests.

I’m currently using Visual Studio 2015 Community Edition and from what I gather no built-in support exists for non-enterprise editions, at the moment. The first hit I found was for the OpenCover UI extension; so I thought I’d take it for a spin to see what it’s made of:

Stack Overflow OpenCover UI Mention

Just so that you can get a feel for where I am at, here is an image outlining a home-brew project that shows some Unit Tests in play:

Unit Test Structure.

Nothing too miraculous here, I’m just using the standard Unit Testing framework and a little Moq for kicks. To follow this up, I then grabbed hold of the OpenCover UI (.vsix extension) from here and installed it:

OpenCover UI VS Marketplace Link

Let’s roll on from here with some ‘off the cuff’ observations, rather than in-depth review of features, etc. This serves as simply my first impressions and, ultimately, an insight into if we can get the coverage metrics I am after quickly and easily. For starters, you’ll notice a new context menu for ‘OpenCover’ when Visual Studio boots up:

OpenCover Menu.

I have quickly shoved the inbuilt Test Explorer window next to the OpenCover variant; they appear to offer a similar 'look and feel', in addition to functional grouping options (the default Test Explorer window appears to have a few more options, in fact). The OpenCover Test Explorer oddly doesn't have 'Run' or 'Run All' tests buttons, on the face of it anyway (or debugging options). Right-clicking a test gives a 'Cover with OpenCover' context menu option…guess I'll see what that does now!

Cover with OpenCover Context Menu Option.

At this point I hit the following, immediate, explosion:

OpenCover EXE Error.

You then get prompted to hunt down the relevant .exe file. As I was fishing around for this I decided to go back to the trusty Stack Overflow, to see what wisdom could be uncovered. This was the first hit, which outlined that a configuration file, with set content, needed to be stuffed in with the solution content:

Further Stack Overflow Wisdom

Further comment sniffing did highlight, under Tools > Options, that additional configuration should be performed (i.e. the .exe path should be specified):

OpenCover VS Options.

I decided that hunting on NuGet might be the best way to expose an .exe file here (i.e. getting something dropped into a packages directory, which I can easily pick up). So, I followed the hunch by adding this package (just to the ‘tests’ project, for starters, as I wasn’t sure which projects needed targeting):

OpenCover NuGet Package.

I don’t feel as if we’ve fallen into a rabbit hole just yet, but at this point, I’ve started to wonder if ‘storms’ are on the horizon! Hopefully, we won’t have to tread too much further to get this machine churning. Installing the NuGet package had the desired effect, I have now got the .exe I was looking for lingering in a ‘Tools’ directory, under the OpenCover folder within packages, which I’ve setup in the Visual Studio Options section:

OpenCover EXE Path Configured.

This shouldn’t be marked down as ‘ideal’ configuration, of course; we’re more leaning towards a ‘just get it working’ stance.

The moment of truth…right-clicking and selecting 'Cover with OpenCover' now…success! Well, good things appear to have happened anyway; let's have a quick review to see if we can make sense of it (code with incomplete XML comments is about to be on show, so apologies for that!). I only ran the one test, by the way:

Unit Test Code Coverage.

First observation: it did take a good few seconds before all of the lines covered (green dots) and not covered (red dots) were highlighted correctly; nothing too catastrophic, however. As far as the unit testing specific code goes, you can clearly see which tests I ran in this instance; the UI pointers are very self-explanatory. One additional observation: it looks like it could be a touch tricky to pick out breakpoints amongst the code coverage markers, but I don't see this as a big issue at the moment (I'll have to see how I feel after extended use). In fact, the OpenCover Results window has an option for enabling/disabling these markers, so we're all good.

You’ll notice that the unit test method denoted here is placing the ‘AddItemCleansingMappingElementToConfiguration’ method under test, so I am keen to see what lines we hit (or ‘covered’) within the targeted method:

Code Actually Under Test Covered.

The idea here is that the XML configuration passed to this method is, in fact, malformed so the statement where the ‘addSuccessful’ variable is set to true is not hit (an exception is triggered, and caught, by the preceding line of code); which mirrors the indicator provided by OpenCover, nice! I call this a success!

I’m now going to run this across the board and see what floats to the surface.

Ah, look at this! For starters, OpenCover has highlighted a problem with one of my unit tests in a very solid, visual way (I was wondering why no lines of this test were covered, until I realised I had omitted the 'TestMethod' attribute!):

Missing Test Method Attribute.

A couple of quirks deserve to be noted; firstly, I did have to run the 'Cover with OpenCover' command twice to register coverage on all tests (some seemed to be omitted from the process, but were then included on the second run through). Also, tests that are geared to expect exceptions to be thrown are always marked with their closing brace as 'not covered' (I'm assuming that an exception being thrown legitimately causes the final line to never be hit, and therefore not covered, which in my head is expected behaviour – it would be good if there was a way to disregard these instances):

Unit Test Missing Coverage.
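
To make that quirk concrete, this is the shape of test I mean (a contrived but self-contained illustration; the real tests target my configuration code, of course):

[TestMethod]
[ExpectedException(typeof(DivideByZeroException))]
public void Calculate_DivideByZero_ThrowsException()
{
    int zero = 0;

    // The legitimately thrown exception exits the method before it completes...
    int result = 1 / zero;
}   // ...so this closing brace is flagged by OpenCover as 'not covered'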

So what about the actual code 'under test' and the metrics provided to show how much of it has been covered? In instances where code had been highlighted as not covered (spot checks only, of course), I have to say it appears accurate and has been useful in flagging areas I should really have tested.

As for the actual report metrics, it is exactly what I was after when I started on my way down this road. You get to see the percentage of code coverage at project, class and member level (along with ‘Sequence Points’ visited against the total count of possible points):

OpenCover Metrics Report.

Sequence points don’t tie directly to ‘lines’, as outlined here. You’ll notice that this is a link detailing a ‘Report Generator’, which uses XML extracted using OpenCover directly. To finish up, I’ll follow the steps outlined on Stack Overflow again (got to love it, especially if you need information on the double!):

Using the Report Generator

The Report Generator can be downloaded by using NuGet again:

OpenCover Report Generator on NuGet.

The Report Generator source code itself can be downloaded from this link.

It looks like you can create a custom report via C#, by implementing an interface, etc. For now, I’m going to do a simple run through using the command line interface. This is the command (after a bit of trial and error) that got me the XML report, for starters:

"C:\Source\Utility Applications\DesktopManager\packages\OpenCover.4.6.519\tools\OpenCover.Console.exe" -register:user -target:"C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\mstest.exe" -targetargs:"/noisolation /testcontainer:\"C:\Source\Utility Applications\DesktopManager\DesktopManager.Tests\bin\Debug\DesktopManager.Tests.dll\" /resultsfile:C:\Reports\MSTest\.trx" -mergebyhash -output:C:\Reports\MSTest\projectCoverageReport.xml

This was just a case of specifying locations for the OpenCover.Console.exe, the mstest.exe and the 'Tests' dll for my specific application. XML file in hand, I triggered this command to generate the final report resources:

"C:\Source\Utility Applications\DesktopManager\packages\ReportGenerator.2.5.2\tools\ReportGenerator.exe" -reports:"C:\Reports\MSTest\projectCoverageReport.xml" -targetdir:"C:\Reports\CodeCoverage"

In a few quick steps you’ll have a set of HTML ‘reports’, as you can see here:

HTML Reports Generated.

Let’s finish up with a couple of examples illustrating the outputs:

Index Report.

Configuration Helper Report Details.

Configuration Helper Report Line Coverage.

I think that brings us to a close. This seems like pretty powerful stuff, but I think I'll need more time to go through some of the outputs and try this with a larger project. I hope this has been fun and/or useful.

Thanks all!

LINQ Joins – Multiple Match Conditions

Hi all,

Just a very quick post on an interesting piece of LINQ knowledge demonstrating how to find common matches between two C# lists, whereby you want to include multiple matching conditions.

To start, here is a typical LINQ statement (and setup) dealing with a single join condition, using the extension method approach (which I prefer), joining two lists; one defining humans and one defining aliens (just because!):

// Setup
public interface ILivingBeing
{
    string Name { get; set; }
    int Age { get; set; }
    void WriteInfo();
}

...

public abstract class LivingBeing : ILivingBeing
{
    public int Age { get; set; }
    public string Name { get; set; }
    public abstract void WriteInfo();
}

...

public class Human : LivingBeing
{
    public override void WriteInfo()
    {
        Console.WriteLine("I am a human who is {0} years old and my name is {1}", Age, Name);
    }
}

...

public class Alien : LivingBeing
{
    public override void WriteInfo()
    {
        Console.WriteLine("I am an alien who is {0} years old and my name is {1}", Age, Name);
    }
}

...

// And finally, the List setup and LINQ itself:
List<Human> humans = new List<Human>
{
    new Human { Name = "Steve", Age = 31 },
    new Human { Name = "Dave", Age = 34 },
    new Human { Name = "Alexa", Age = 56 }
};

List<Alien> aliens = new List<Alien>
{
    new Alien { Name = "Steve", Age = 31 },
    new Alien { Name = "Dave", Age = 76 },
    new Alien { Name = "Poppy", Age = 56 }
};

var matches = humans.Join(aliens, human => human.Name, alien => alien.Name, (human, alien) => human);

Here is an illustration showing the first set of results being used:

LINQ Single Join Condition.

In the case above the results returned by ‘matches’ include both ‘Steve’ and ‘Dave’ (humans), as a match on the ‘Name’ property can be established across both lists. Everything is fine up until the point we want matches based on, say, the Name and Age properties (on the assumption we want to stick with the extension method approach). This is surprisingly easy to achieve, using anonymous types in our LINQ statement join condition, as follows:

// Return just a list containing the human 'Steve' (as he is the only one who directly matches an alien based on 'Name' and 'Age')
var matches = humans.Join(aliens, human => new { human.Name, human.Age }, alien => new { alien.Name, alien.Age }, (human, alien) => human);

The inner and outer key selectors simply use anonymous types, constructed using the new keyword, to bring the related properties into scope that we wish to match on. Then, hey presto, you can easily handle multiple matching conditions. In this instance, only the human ‘Steve’ is returned, based on a direct match with an alien of the same name/age:

LINQ Multiple Join Condition.
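
As a quick footnote, the same multiple-condition join can also be expressed in query syntax, if that's more to your taste (functionally equivalent to the extension method approach above):

// Query syntax equivalent; the anonymous types are compared on both Name and Age
var matches = from human in humans
              join alien in aliens
                  on new { human.Name, human.Age } equals new { alien.Name, alien.Age }
              select human;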

I hope this comes in handy at some point. I’ve been kept fairly busy on the work front recently, hence my blog and twitter feed haven’t exactly been a hive of activity…I’ll work to change that as best I can in the coming weeks and months and carve out more time to publish content.

Cheers, until the next time!