AWS Scheduled Lambda – Starting & Stopping EC2 Instances

I’ve been meaning to give this a go for a month or two now and, at last, I have the spare hours needed to actually bring it all together.

The outcome I have in my head, which is actually to replace a process running as a Task Scheduler job on an EC2 instance where I work, is to write a scheduled Lambda (using CloudWatch Events, via a configured rule with a ‘cron’ expression in tow) that can be triggered to start and stop EC2 instances; completely removing the need for the instance this process runs on. EC2 instances will be targeted by the presence of a specific tag. For something that kicks in once a day, a Lambda, where you pay per execution, is much more in line with what we want than paying for an EC2 instance to be permanently spun up to service this kind of request.

Here are some resource links, before we get started, which illustrate some of the things I found useful to read in the run-up to trying this myself:

This was also a nice opportunity to play around a bit more within my personal AWS ‘space’, a nice bonus as I’ve not done a hell of a lot with it of late.

.NET Core SDK

I decided to get this separately as it could come in handy for general .NET Core development. I originally thought this was tied to the ability to use the .NET Core Lambda templates within Visual Studio, although I don’t actually think this is the case (the AWS Toolkit for Visual Studio 2017 is the component that governs this).

Either way, the SDK can be found here.

AWS Toolkit for Visual Studio 2017

I’ve been bringing my poor, ageing laptop at home up to date. I’ve installed Visual Studio Community Edition 2017 and will look to get it ‘Lambda creation’ ready in short order. To get all of the lovely, sugar-coated AWS support within Visual Studio (including an easy method of publishing Lambdas) I’m going to grab the AWS Toolkit for Visual Studio 2017. Navigate to Tools > Extensions and Updates > Online and search for ‘AWS’; this should bring back the AWS Toolkit for Visual Studio 2017. Go ahead and install it if you don’t already have it (closing Visual Studio to trigger the installation):

AWS Toolkit for Visual Studio 2017.

AWS Toolkit User Configuration

AWS Toolkit Credential Setup.

Before creating the Lambda, I’ve followed the provided configuration advice to go and create a new user via the IAM console. I’ll detail the whole process I followed just for clarity.

Start by accessing the AWS Console and open the IAM Management Console > Users section and click ‘Add user’. The user we are going to create needs ‘programmatic’ access, so be sure to check the correct box, also giving your user an appropriate name in the process:

After hitting ‘Next’, we need to assign an existing group with appropriate permissions or, as I am going to do, create a new group using the ‘Create group’ button. A modal popup will launch where the ‘Group name’ can be added, along with an opportunity to link the group to an existing policy (or a brand new one). I’m keeping this simple and, as outlined in the guidance, assigning the ‘AdministratorAccess’ policy. Click ‘Create group’ and then ‘Next: Review’ to proceed.

Bash the ‘Create user’ button and you should be golden! Make sure to hit the ‘Download .csv’ button on the subsequent screen to get credentials at the ready.

Visual Studio AWS Toolkit Setup Screen

On the Visual Studio ‘Getting Started with the AWS Toolkit for Visual Studio’ screen, I opted to download the CSV for my ‘lew-admin-programmatic-user’ and use the ‘Import from a csv file…’ button. I left the profile name as ‘default’ for now. After selecting the relevant CSV credential file, hit ‘Save & Close’ to continue.

To cement my place as a ‘completionist geek’ I also updated Visual Studio at this point as I was a touch behind, so follow suit if you want to.

A little tip – If you’ve already closed the AWS Toolkit ‘setup’ screen a ‘Profile’ can be configured via the AWS Explorer window. This can be accessed within Visual Studio via View > AWS Explorer:

New AWS Profile.
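
Another aside: the AWS SDK for .NET will also pick up profiles defined by hand in the shared credentials file (~/.aws/credentials, or %USERPROFILE%\.aws\credentials on Windows). The toolkit may keep imported profiles in its own encrypted store, so treat this as an alternative route rather than a description of what the import button does; the values below are obviously placeholders:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY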

Creating the Lambda functions and supporting project

I had to close and reopen Visual Studio at this point to get the .NET Core Lambda templates to do their magic trick and appear. Navigate to File > New Project > Visual C# > AWS Lambda and you should be presented with an ‘AWS Lambda Project (.NET Core)’ option. I’m going to create a project called ‘LGAws.StartInstances’, wrapping everything in a solution for good measure. Once the solution is loaded I then opted to create a second Lambda project called ‘LGAws.StopInstances’. In both cases, I used the ‘Empty function’ blueprint as I want to roll with this fully from scratch.

For the purposes of keeping a clean abstraction between the Lambda functions and the logic behind them, I have also created a separate .NET Core project called ‘LGAws.Operations’. This will be a helper library that will act as a repository for the logic that calls the AWS EC2 SDK (which we’ll get to in a bit). All projects are then modified to use .NET Core 2.0 using the right-click context menu and selecting ‘Properties’.
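
Incidentally, if you would rather work from the command line than the Visual Studio templates, the same blueprints are available via the Amazon.Lambda.Templates NuGet package. A hedged sketch (the project name matches my setup above):

dotnet new -i Amazon.Lambda.Templates
dotnet new lambda.EmptyFunction --name LGAws.StartInstances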

We’re on to actually writing the code then, which I’ll detail as best I can as we go (providing full samples to boot so you can follow along with every decision made).

The code

Let’s start with inspecting the solution:

Solution Configuration.

The LGAws.Operations project represents, as previously discussed, a supporting library which avoids the need to embed all of the logic within the Lambda functions themselves. Don’t treat this as a fully-fledged, complete solution or an absolute authority on how to structure this; I just thought a little separation of concerns wouldn’t go amiss here. Apart from the code that actually calls the AWS EC2 SDK, nothing else you see here is technically required to get going with your own version of this.

First up, the extensions folder is a nice little haven for a couple of small extension classes called ExceptionExtensions and MessagingExtensions. Nothing magical here, just types that provide some nicely formatted output for exceptions and other messaging. The content is as follows:

using System;
using System.Text;

namespace LGAws.Operations.Extensions
{
    /// <summary>
    /// Public static class holding exception type extension methods.
    /// </summary>
    public static class ExceptionExtensions
    {
        #region Extension Methods

        /// <summary>
        /// Public static exception extension designed to produce a formatted string
        /// from the targeted exception.
        /// </summary>
        /// <param name="exception">The exception to process.</param>
        /// <param name="includeStack">A boolean that denotes if we should include stack trace information in the returned string.</param>
        /// <returns>A formatted exception string based on the supplied parameters.</returns>
        public static string ToFriendlyExceptionString(this Exception exception, bool includeStack = true)
        {
            StringBuilder exceptionStringBuilder = new StringBuilder();

            if (exception != null)
            {
                // A valid exception is in scope - append messages from this exception and any inner exception (if present)
                exceptionStringBuilder.AppendLine($"The following exception has occurred: { exception.Message }");
                exceptionStringBuilder.AppendLine(exception.InnerException != null
                    ? $"An inner exception was detected as follows: { exception.InnerException.Message }" : "No inner exception was detected.");

                // Include stack information as specified by the caller
                if (includeStack && !string.IsNullOrWhiteSpace(exception.StackTrace))
                {
                    exceptionStringBuilder.AppendLine($"Stack trace: { exception.StackTrace }");
                }
            }

            return exceptionStringBuilder.ToString();
        }

        #endregion Extension Methods
    }
}

using System.Net;
using System.Runtime.CompilerServices;

namespace LGAws.Operations.Extensions
{
    /// <summary>
    /// Public static class holding 'messaging' type extension methods.
    /// </summary>
    public static class MessagingExtensions
    {
        #region Extension Methods

        /// <summary>
        /// Public static 'HTTP status code' extension designed to produce a formatted string
        /// from the targeted HttpStatusCode.
        /// </summary>
        /// <param name="statusCode">The HTTP status code to inspect and provide a formatted string based on.</param>
        /// <param name="methodName">The calling method's name (when called via async you'll get 'MoveNext', based on async state machine antics).</param>
        /// <returns>A formatted string for reporting, based on the supplied HTTP status code and method name parameters.</returns>
        public static string GetStatusMessageFromHttpStatusCode(this HttpStatusCode statusCode, [CallerMemberName] string methodName = "") =>
            statusCode == HttpStatusCode.OK
                ? $"The { methodName } method returned 'OK' - the operation completed successfully."
                : $"The { methodName } method returned an HTTP Status Code of { (int)statusCode } ({ statusCode }). Please check that the operation completed as expected.";

        #endregion Extension Methods
    }
}

Within the Models folder, I’ve created a basic object hierarchy to encapsulate the idea of different AWS operations, such as describing and manipulating EC2 instances. The BaseOperationModel is the top-level base class that contains a single string property called OperationReport; the idea here is that all AWS operations will support a ‘report’ that details how the operation went. I then have two derived classes in the mix named DescribeEC2Operation and ManipulateEC2Operation (the ‘manipulate’ class itself is just an empty stub, but acts as a ‘marker’ object to make the return value and the operation being performed easily identifiable in future). I utilise these types as return values when triggering logic to obtain instance ids (by a specific tag) and physically starting and stopping EC2 instances. These classes are defined as follows:

namespace LGAws.Operations.Models
{
    /// <summary>
    /// Base class model for AWS operations.
    /// </summary>
    public abstract class BaseOperationModel
    {
        #region Public Properties

        /// <summary>
        /// All AWS operations surface a string to detail
        /// a 'report' on the operation.
        /// </summary>
        public string OperationReport { get; set; }

        #endregion Public Properties
    }
}

using System.Collections.Generic;

namespace LGAws.Operations.Models
{
    /// <summary>
    /// Model that represents 'describe' EC2 operations.
    /// </summary>
    public class DescribeEC2Operation : BaseOperationModel
    {
        #region Public Properties

        /// <summary>
        /// Represents the obtained instance ids.
        /// </summary>
        public List<string> InstanceIds { get; set; } = new List<string>();

        #endregion Public Properties
    }
}

namespace LGAws.Operations.Models
{
    /// <summary>
    /// Model that represents 'manipulate' EC2 operations (such as 
    /// starting and stopping instances).
    /// </summary>
    public class ManipulateEC2Operation : BaseOperationModel
    {
        // Further implementation details for a ManipulateEC2Operation to be added here as and when needed
    }
}

There is also a static utility class for some constant strings used throughout the library.

namespace LGAws.Operations.Shared
{
    /// <summary>
    /// Public static helper class that holds constants to use
    /// for all AWS-based operations.
    /// </summary>
    public static class Constants
    {
        #region Constant Definitions

        /// <summary>
        /// Represents a stock message for when a response is null.
        /// </summary>
        public const string NULL_RESPONSE_MESSAGE = "The returned response was null. Please investigate the cause and/or try again.";

        /// <summary>
        /// Represents the stock EC2 auto start 'tag'.
        /// </summary>
        public const string AUTO_START_TAG = "auto-start";

        /// <summary>
        /// Represents the stock EC2 auto stop 'tag'.
        /// </summary>
        public const string AUTO_STOP_TAG = "auto-stop";

        #endregion Constant Definitions
    }
}

Lastly, the EC2OperationsHelper class is the core utility wrapper that encapsulates the code to obtain instance ids, by tag, and utilise those instance ids to start and stop the relevant instances (using the model classes and extensions previously observed). In order to actually use the relevant AWS EC2 APIs you’ll need to right-click this project (if you’re following along) and select ‘Manage NuGet Packages…’. Then, add the AWSSDK.EC2 package to begin using the AmazonEC2Client type – you’ll be looking for the following after installing the package:

AWSSDK.EC2 NuGet Package.
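
Alternatively, the same package can be added from the command line, run from within the project directory:

dotnet add package AWSSDK.EC2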

The AmazonEC2Client type is the gateway to the underlying methods we require to obtain EC2 instance ids by tag and subsequently start and stop those instances. This is done via the DescribeInstancesRequest/DescribeInstancesResponse, StartInstancesRequest/StartInstancesResponse and StopInstancesRequest/StopInstancesResponse constructs. You’ll notice that the AmazonEC2Client type implements IDisposable so, as is good practice with any type implementing this particular interface, I have used the good old using statement to ensure everything is mopped up after use. A DescribeInstancesRequest can accept a List of type ‘Filter’, which is our way of searching for instances by tag name. This particular implementation does not concern itself with the value behind the tag, but there are ways to factor this in if required. Lastly, the AmazonEC2Client uses its parameterless constructor, which essentially means AWS credentials will be inferred; we’ll see this all come together when we ‘Publish’ the Lambda to AWS (the role specified at that point determines what the Lambda will be able to access and what credentials it ultimately runs under). See below for the entire code listing for this class:

using Amazon.EC2;
using Amazon.EC2.Model;
using LGAws.Operations.Extensions;
using LGAws.Operations.Models;
using LGAws.Operations.Shared;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace LGAws.Operations.EC2
{
    /// <summary>
    /// Helper class that represents operations that can be triggered
    /// against EC2 instances (such as starting/stopping instances).
    /// </summary>
    public class EC2OperationsHelper
    {
        #region EC2 Operation Methods

        /// <summary>
        /// Method that returns a custom DescribeEC2Operation object that holds details
        /// on EC2 instances discovered by the tag supplied.
        /// </summary>
        /// <param name="tag">Specifies the tag 'key' to identify targeted EC2 instances by.</param>
        /// <returns>A Task containing a custom DescribeEC2Operation object (containing discovered instance ids).</returns>
        public async Task<DescribeEC2Operation> GetInstancesByTag(string tag)
        {
            DescribeEC2Operation describeOperation = new DescribeEC2Operation();

            try
            {
                // Establish an AmazonEC2Client and use the DescribeInstancesRequest/DescribeInstancesResponse objects to find instances by tag
                using (AmazonEC2Client ec2Client = new AmazonEC2Client())
                {
                    DescribeInstancesRequest describeRequest = new DescribeInstancesRequest
                    {
                        Filters = new List<Filter> { new Filter("tag-key", new List<string> { tag }) }
                    };

                    DescribeInstancesResponse describeResponse = await ec2Client.DescribeInstancesAsync(describeRequest);

                    // The response stores instance details in a Reservation wrapper, so drill down as required to obtain the instance ids
                    if (describeResponse?.Reservations?.Count > 0)
                    {
                        describeResponse.Reservations.ForEach(reservation =>
                        {
                            if (reservation?.Instances?.Count > 0)
                            {
                                reservation.Instances.ForEach(instance =>
                                {
                                    // Add discovered instance ids to the describeOperation helper object
                                    describeOperation.InstanceIds.Add(instance.InstanceId);
                                });
                            }
                        });
                    }

                    // Set the OperationReport property for logging purposes (to be handled by the caller) - details how this operation went
                    describeOperation.OperationReport = describeResponse != null
                        ? describeResponse.HttpStatusCode.GetStatusMessageFromHttpStatusCode()
                        : Constants.NULL_RESPONSE_MESSAGE;
                }
            }
            catch (Exception ex)
            {
                // Get a 'friendly', formatted version of the exception on error (storing it against the OperationReport property on the returned object)
                describeOperation.OperationReport = ex.ToFriendlyExceptionString();
            }

            return describeOperation;
        }

        /// <summary>
        /// Method that returns a custom ManipulateEC2Operation object that holds details
        /// on the attempted operation to 'start' EC2 instances.
        /// </summary>
        /// <param name="instanceIds">The list of EC2 instance ids to start.</param>
        /// <returns>A Task containing a custom ManipulateEC2Operation object (containing details on the start operation).</returns>
        public async Task<ManipulateEC2Operation> StartEC2InstancesByInstanceIds(List<string> instanceIds)
        {
            ManipulateEC2Operation changeOperation = new ManipulateEC2Operation();

            try
            {
                // Establish an AmazonEC2Client and use the StartInstancesRequest/StartInstancesResponse objects to attempt to start the instances passed in (by id)
                using (AmazonEC2Client ec2Client = new AmazonEC2Client())
                {
                    StartInstancesRequest startRequest = new StartInstancesRequest(instanceIds);

                    StartInstancesResponse startResponse = await ec2Client.StartInstancesAsync(startRequest);

                    // Set the OperationReport property for logging purposes (to be handled by the caller) - details how this operation went
                    changeOperation.OperationReport = startResponse != null
                        ? startResponse.HttpStatusCode.GetStatusMessageFromHttpStatusCode()
                        : Constants.NULL_RESPONSE_MESSAGE;
                }
            }
            catch (Exception ex)
            {
                // Get a 'friendly', formatted version of the exception on error (storing it against the OperationReport property on the returned object)
                changeOperation.OperationReport = ex.ToFriendlyExceptionString();
            }

            return changeOperation;
        }

        /// <summary>
        /// Method that returns a custom ManipulateEC2Operation object that holds details
        /// on the attempted operation to 'stop' EC2 instances.
        /// </summary>
        /// <param name="instanceIds">The list of EC2 instance ids to stop.</param>
        /// <returns>A Task containing a custom ManipulateEC2Operation object (containing details on the stop operation).</returns>
        public async Task<ManipulateEC2Operation> StopEC2InstancesByInstanceIds(List<string> instanceIds)
        {
            ManipulateEC2Operation changeOperation = new ManipulateEC2Operation();

            try
            {
                // Establish an AmazonEC2Client and use the StopInstancesRequest/StopInstancesResponse objects to attempt to stop the instances passed in (by id)
                using (AmazonEC2Client ec2Client = new AmazonEC2Client())
                {
                    StopInstancesRequest stopRequest = new StopInstancesRequest(instanceIds);

                    StopInstancesResponse stopResponse = await ec2Client.StopInstancesAsync(stopRequest);

                    // Set the OperationReport property for logging purposes (to be handled by the caller) - details how this operation went
                    changeOperation.OperationReport = stopResponse != null
                        ? stopResponse.HttpStatusCode.GetStatusMessageFromHttpStatusCode()
                        : Constants.NULL_RESPONSE_MESSAGE;
                }
            }
            catch (Exception ex)
            {
                // Get a 'friendly', formatted version of the exception on error (storing it against the OperationReport property on the returned object)
                changeOperation.OperationReport = ex.ToFriendlyExceptionString();
            }

            return changeOperation;
        }

        #endregion EC2 Operation Methods
    }
}
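
As an aside, the listing above matches on the tag key only. If you did want to factor in the tag’s value as well, the EC2 filter syntax supports a ‘tag:<key>’ filter name. A hypothetical variant of the request construction (same usings as the listing above; the ‘true’ value is purely an assumption for illustration):

// Hypothetical: match only instances where the 'auto-start' tag has the value 'true'
DescribeInstancesRequest describeRequest = new DescribeInstancesRequest
{
    Filters = new List<Filter> { new Filter("tag:auto-start", new List<string> { "true" }) }
};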

The documentation surrounding what operations the AWS SDK for .NET supports was fairly detailed and well laid out; it can be found here for anyone interested in digging around further.

So, we move on lastly to the key component of this entire configuration: the physical Lambda functions. I’ve created two distinct functions, as discussed previously – one to cover the starting of EC2 instances and another to kick off the stopping operation. Lambda functions are relatively simple in their setup, with the stock template providing a class called Function containing a single method called FunctionHandler. I’ve amended the signature of this method in my sample so that it doesn’t return a value (the template returns a string as standard). The stock signature is also geared to accept an input string argument, along with an object implementing ILambdaContext. I’m not interested in accepting input at the moment, so I’ve cut the input arguments down and just left the ILambdaContext object in scope, which is a cool little object that exposes metadata about the Lambda function being triggered (i.e. the function name, allocated memory limit, etc.).

The main idea I’ve gone with here is abstracting, as discussed previously, all of the core logic to the external ‘business logic’ library. The Lambda simply creates an instance of the EC2OperationsHelper class and then uses that as the workhorse, meaning our function definition is as simple as possible. The only other additional statements in play undertake logging, the details of which can be seen in AWS CloudWatch, which we’ll review later.

using Amazon.Lambda.Core;
using LGAws.Operations.EC2;
using LGAws.Operations.Models;
using LGAws.Operations.Shared;
using System.Threading.Tasks;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace LGAws.StartInstances
{
    /// <summary>
    /// Holds logic for the Start EC2 Instance Lambda function.
    /// </summary>
    public class Function
    {
        #region Function Handler Definition

        /// <summary>
        /// Start EC2 Instance Lambda function definition.
        /// </summary>
        /// <param name="context">An implementation of the ILambdaContext interface (for extracting information about the Lambda).</param>
        /// <returns>A task wrapping this operation.</returns>
        public async Task FunctionHandler(ILambdaContext context)
        {
            LambdaLogger.Log($"Executing the { context.FunctionName } function with a { context.MemoryLimitInMB }MB limit.");

            EC2OperationsHelper helper = new EC2OperationsHelper();

            // First, obtain instance ids to start
            DescribeEC2Operation describeOperation = await helper.GetInstancesByTag(Constants.AUTO_START_TAG);
            LambdaLogger.Log(describeOperation.OperationReport);

            // Start instances based on the returned ids
            ManipulateEC2Operation changeOperation = await helper.StartEC2InstancesByInstanceIds(describeOperation.InstanceIds);
            LambdaLogger.Log(changeOperation.OperationReport);

            LambdaLogger.Log($"Finished executing the { context.FunctionName } function.");
        }

        #endregion Function Handler Definition
    }
}

using Amazon.Lambda.Core;
using LGAws.Operations.EC2;
using LGAws.Operations.Models;
using LGAws.Operations.Shared;
using System.Threading.Tasks;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace LGAws.StopInstances
{
    /// <summary>
    /// Holds logic for the Stop EC2 Instance Lambda function.
    /// </summary>
    public class Function
    {
        #region Function Handler Definition

        /// <summary>
        /// Stop EC2 Instance Lambda function definition.
        /// </summary>
        /// <param name="context">An implementation of the ILambdaContext interface (for extracting information about the Lambda).</param>
        /// <returns>A task wrapping this operation.</returns>
        public async Task FunctionHandler(ILambdaContext context)
        {
            LambdaLogger.Log($"Executing the { context.FunctionName } function with a { context.MemoryLimitInMB }MB limit.");

            EC2OperationsHelper helper = new EC2OperationsHelper();

            // First, obtain instance ids to stop
            DescribeEC2Operation describeOperation = await helper.GetInstancesByTag(Constants.AUTO_STOP_TAG);
            LambdaLogger.Log(describeOperation.OperationReport);

            // Stop instances based on the returned ids
            ManipulateEC2Operation changeOperation = await helper.StopEC2InstancesByInstanceIds(describeOperation.InstanceIds);
            LambdaLogger.Log(changeOperation.OperationReport);

            LambdaLogger.Log($"Finished executing the { context.FunctionName } function.");
        }

        #endregion Function Handler Definition
    }
}

We’ve now reached the stage of finally getting our Lambdas published to AWS, which we’ll review now.

Upload of the Lambda function to AWS

The AWS Toolkit for Visual Studio provides a publishing wizard, but Lambda functions can be zipped and then uploaded using the AWS Console > Lambda admin screen if you prefer. Let’s review the upload process for one of our two Lambda functions, a process that I will repeat for the other function too (behind the scenes, for brevity).

I want my Lambdas to be able to run wild with EC2 instances, so I’ve again popped on over to the AWS Console > IAM > Roles > ‘Create role’ to generate the ‘lg-ec2-full-access-role’. You’ll want to select ‘Lambda’ as the AWS service type when creating the role; I also attached the ‘AmazonEC2FullAccess’ and ‘AWSLambdaFullAccess’ policies to it. The role should look like this after creation:

EC2 Full Access Role Summary.

EC2 Full Access Role Attached Policies.
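
For reference, picking ‘Lambda’ as the service type gives the role a trust relationship along these lines (the standard policy document that allows the Lambda service to assume the role):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}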

We’re going to need this role in the next step.

To start with the publishing process, right-click the Lambda function project in the Solution Explorer within Visual Studio and select the ‘Publish to AWS Lambda…’ context menu item. You should be presented with a modal popup that looks similar to the image listed below. I’ve modified a few of the options at this point, which you may need to also do:

  • The functions I have created are using .NET Core version 2.0, so I’ve adjusted the ‘Language Runtime’ to ‘.NET Core v2.0’.
  • I’ve listed my function name as ‘LGAwsStartInstances’, not using the period character which is invalid in this instance.
  • For convenience, I’ve set the ‘Save settings to aws-lambda-tools-defaults.json for future deployments’ flag.
  • All other options should be valid at this point. I’ll be using the ‘default’ profile, in the ‘EU (Ireland)’ region (I could have switched to ‘EU (London)’ I guess, but I invariably remember too late that this exists!); adjust your region as needed.

Upload Lambda Function.

Click ‘Next’ to proceed, where you’ll be presented with one last modal screen, which allows you to set further configuration details, such as memory execution limits and timeouts for your particular function. The key thing on this particular screen, which we will need to address, is selecting a fitting value for the ‘Role Name’ dropdown:

Advanced Function Details.

Here, in my case at least, I ensure that the recently created ‘lg-ec2-full-access-role’ role is selected – be sure to select an appropriate value and then click ‘Upload’ to complete the process. I’ve then, at this point, performed the same steps for the other Lambda function project. With any luck, the upload will be error-free and, on completion, we’ll be able to go back to the AWS Console and create our test EC2 instance. You’ll notice that Visual Studio also loads a ‘test’ screen for you to trigger your function from (there is a settings flag on the upload progress modal that governs this). Lambdas are also testable within the AWS Console itself.
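
As a side note, the ‘aws-lambda-tools-defaults.json’ file saved earlier is also understood by the Amazon.Lambda.Tools CLI extensions, so the same deployment can be scripted outside of Visual Studio. A hedged sketch, assuming the Amazon.Lambda.Tools package is wired up to the dotnet CLI and using the function name from my setup:

dotnet lambda deploy-function LGAwsStartInstances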

Creation of a test EC2, with tag, to turn on and off

We now need to actually create the targeted entity of our Lambda functions; an EC2 instance that sports the appropriate ‘tags’. We’re going to create a bare-bones EC2 from a standard Windows base AMI, although it really doesn’t matter what you opt to use, so fill your boots with whatever you want. The AMI I am using is eligible for free-tier usage, depending on the current state of your AWS account.

To begin, we run on over to the AWS Console > EC2 > Launch Instance option and pick an AMI. I’m opting to go with this:

Choose Base AMI.

After hitting ‘Select’ I go through the following motions to configure and launch the instance.

  1. Choose an Instance Type > Pick t2.micro.
  2. Configure Instance Details > Skip over this.
  3. Add Storage > Defaults are fine here also, skip over this.
  4. Add Tags > We’ll add three here. Add a ‘Name’, ‘auto-start’ and ‘auto-stop’ tag as shown in the screenshot below.
  5. Configure Security Group > Skip over this (in the real world, of course, you’d want some clearly defined Security Groups but for the purposes of testing our Lambda this is fine for now).
  6. Launch the instance! Create a new key pair if you need to (keeping the .pem file to one side, although we’ll be decommissioning this instance right after our testing anyway) or use an existing key pair.

Lambda EC2 Tag Setup.

Once launched feel free to stop the instance for now. We’ll be using a Lambda to spin it up very shortly!

Test Instance Ready.
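
As an aside, tags can also be added to an existing instance via the AWS CLI if you would rather not click through the console (the instance id here is a hypothetical placeholder):

aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=auto-start,Value=true Key=auto-stop,Value=true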

CloudWatch Event Rule trigger

The whole concept behind what I’m looking for is to trigger a Lambda on a cron schedule. The method I’m going to use to achieve this involves utilising a CloudWatch Event ‘Rule’, which can be configured manually via the CloudWatch section of the AWS Console or, more conveniently, via the Lambda section of the AWS Console instead. Therefore, to complete the ‘scheduling’ setup on a Lambda function go to the AWS Console > Lambda and then, in the ‘Designer’, click ‘CloudWatch Events’ in the ‘Add triggers’ sidebar. This will add a node that serves as a step to ‘feed’ the triggering of the Lambda:

CloudWatch Event Trigger Setup.

Scroll down to configure the CloudWatch Event further and in the ‘Rule’ drop-down select ‘Create a new rule’. You can then give the rule a name, description and, most importantly (with the ‘Schedule expression’ radio option set), a cron schedule. The sample expression I’ve used here will trigger the Lambda every 10 minutes, Monday to Sunday (you can use the documentation to configure any schedule you like); the general shape of this is sketched below. I’ve used this particular format so I can easily switch this to run Monday to Friday instead, with one trigger per day being the end game I’m looking for. Click ‘Add’ to complete setting up the rule and then ‘Save’ in the top right-hand corner of the screen to finish up.
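
For reference, AWS cron expressions have six fields (minutes, hours, day-of-month, month, day-of-week, year), and one of day-of-month/day-of-week must be ‘?’. I haven’t reproduced my exact expression here, but an ‘every 10 minutes, every day’ schedule and the Monday-to-Friday variant I mentioned would look something like this:

cron(0/10 * ? * * *)
cron(0/10 * ? * MON-FRI *)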

Is it working?

At this moment in time our test EC2 instance is stopped, so the desired effect is for the CloudWatch Event to trigger, based on the configured rule, and run the ‘LGAwsStartInstances’ Lambda function – our EC2 should then be kicked into life! On the Lambda function page, the link to the rule can be clicked to see details of the schedule, as displayed below:

Start EC2 Rule.

CloudWatch Event Rule Schedule.

After waiting for the next ‘schedule slot’ to roll around, the ‘Logs’ menu option within CloudWatch can be accessed. A log group for our Lambda can be seen which, when drilled into, shows the logging statements produced by the ‘LGAwsStartInstances’ function; this ties directly to the use of the ‘LambdaLogger’ type in the sample code.

CloudWatch Logs.

Start Instance Lambda Logs Content.

After verifying the existence of log data, reporting a successful operation, we can finally go over to the EC2 admin section of the AWS Console and witness the EC2 instance started:

EC2 Started.

After proving this operation works correctly I opted to disable the event rule tied to this Lambda and created another event, mirroring the setup process already listed above, to prove that the ‘LGAwsStopInstances’ function also triggers as expected:

Stop Instance Lambda Logs Content.

So, success then – happy days all around!

Asides and final thoughts

One really interesting thing to note with the sample code, which I didn’t end up changing just so I could bring it up as a discussion point, is that if an exception occurs within the ‘meat’ of the Lambda code the use of ‘[CallerMemberName]’ will not give you the results you may expect. During testing, I triggered some test exceptions with the aim of making sure that my logging code was registering the correct calling method name. I discovered, however, that the calling method name was getting logged as ‘MoveNext’ in all instances. After a few minutes of pondering, I realised that we were in the scope of asynchronous code, which actually explains everything. When using asynchronous methods everything is bundled into a ‘state machine’ construct, with an iterator controlling the flow of how we move through the code. This construct, behind the scenes, has a ‘MoveNext’ method where the code I’d created would now be housed; hence the little logging nuance. One to be aware of; more details are available here if you’re interested (this is true regardless of whether you use MethodBase.GetCurrentMethod().Name as a calling parameter or the [CallerMemberName] attribute). A minimal illustration follows.
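
This throwaway console sketch (hypothetical demo code, not part of the solution above) shows the reflection side of the nuance – a name lookup performed inside an async method executes within the compiler-generated state machine and so reports ‘MoveNext’:

using System;
using System.Reflection;
using System.Threading.Tasks;

namespace AsyncNamingDemo
{
    internal static class Program
    {
        /// <summary>
        /// Async method that inspects its own name via reflection. The body is
        /// relocated into the state machine's MoveNext method at compile time,
        /// so 'MoveNext' is printed rather than 'ProbeAsync'.
        /// </summary>
        private static async Task ProbeAsync()
        {
            await Task.Delay(1);

            Console.WriteLine(MethodBase.GetCurrentMethod().Name);
        }

        private static void Main(string[] args) => ProbeAsync().GetAwaiter().GetResult();
    }
}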

There is more I plan to add to this; one example of which is the assigning of elastic IPs to the EC2 instances on startup. However, as a grassroots template, this serves pretty well and I hope it helps anyone else looking to do something similar. A pretty long post then, but one I’ve enjoyed knocking up! Until the next time, happy coding as always πŸ™‚

Experimenting with Azure CDN

With the gradual piecing together of the Lego bricks that form the slow migration of the Frog & Pencil website to a more managed approach (the building of a custom CMS and an all-around better ASP.NET MVC architecture), I thought it would be interesting to document the move of the Frog & Pencil images over to a CDN. I was inspired to give this a go after watching Scott Hanselman make the switch for his podcast site images and other Azure Friday videos, as documented here:

Scott Hanselman lifting and shifting images over to a CDN.
Azure CDN with Akamai.

It seemed like a relatively painless process and is a step in the right direction for our site as a whole; so, let’s give it a go!

NOTE: A short way into this post I realised that I was making a few missteps. This is cool, I think, as I would rather document the journey I took with the mistakes listed, to be honest – #KeepingItReal! However, for sanity (mine and yours) I’ll specify here the ‘correct’ order of events that you should follow, which you can marry up with the ramblings below:

  1. Sign in to the Azure Portal.
  2. Create a storage container, if you don’t already have one.
  3. Download and utilise a storage explorer application (such as Azure Storage Explorer).
  4. Create a CDN Profile and CDN endpoint (that ties explicitly to your storage container, in this instance).
  5. Go to your DNS settings and generate a CNAME property, mapping a custom domain to your CDN if you wish to.
  6. Optionally, learn how to programmatically interact with your storage container.

Azure Portal – First Steps (documenting the journey)

First things first, we must hop on over to the Azure Portal. I searched the marketplace for ‘CDN’ and clicked create in the right-hand pane, as shown:

Creating a CDN.

The next phase involves configuring a CDN profile. The profile needs to be given a name and should be attached to an Azure Subscription. I’ve created a new Resource Group, by specifying a name for it, but it is possible to select an existing one for use here. There are some guidelines surrounding Resource Groups, such as items within a group should share the same lifecycle; more details can be found within this handy documentation article, read away!

The Azure CDN service is, of course, global but a Resource Group location must be set, which governs where resource metadata is ultimately stored. This could be an interesting facet to consider if there are particular compliance considerations regarding the storage of information and where it should be placed. I’m going with West Europe either way; a nice, easy choice this time around.

As for pricing, I have decided to head down the Akamai route, using the Standard Akamai pricing tier. I will have to see how this ultimately pans out cost-wise over time, but it seems reasonable:

Azure CDN Provider Pricing.

At this point, we can explicitly create a CDN endpoint (where resources will ultimately be exposed). The endpoint has a suffix of ‘.azureedge.net’ and I’ve simply specified the first part of our domain, ‘frogandpencil’, as the prefix.

This is where I hit a bit of a revelation with the ‘Origin Type’ drop down. You can select from Storage, Cloud service, Web app or Custom origin (which is cool!), of which I want to use Storage. After selecting this I can pick an ‘Origin hostname’. The light bulb moment here, for me, is that I should have created a storage container first! I’d watched enough videos to have dodged this little problem, but I still managed to stumble…all part of the learning process πŸ˜‰

So… Let’s Create a Storage Container

Back to the marketplace then. The obvious pick seems to be ‘Storage account – blob, file, table, queue’, so I’ve gone ahead and clicked create here:

Setup Azure Storage.

When creating the storage account there are a fair few options to consider, a good number of which read as if they will impact pricing. I had to use the documentation found here to make choices. I settled on the setup described below (this is for images and, as the site isn’t yet using https, I’ve gone with the secure transfer feature disabled – one for review in the future):

As an overview, the guidance suggests the use of the ‘Resource manager’ type of ‘Deployment model’ for new applications. There doesn’t seem to be a penalty for using the ‘StorageV2’ ‘Account kind’, which extends the types that can be stored outside of just blob data, so that is what I am going for.

Performance-wise, the ‘standard’ option seems like an acceptable setting at the moment and, for the kind of data I’ll be storing (images for now, and possibly other static content later down the line), I can opt out of any geo-redundant replication options. In the event of resource downtime, I can easily switch to the use of resources local to the website. Plus, no data would really be lost; it is all easily rebuilt and recoverable.

As for the ‘Access tier’, I’m heading down the ‘Hot’ route as images will be accessed quite frequently (we have the CDN to consider here so I might tinker later on down the line).

I then pick a Subscription, give the Resource Group a name and select my region of choice before continuing.

I then get a new blade on the dashboard (which took a minute to create) and, on accessing, am presented with the following:

Storage Setup.

Managing the Storage Container

The first and perhaps most obvious choice for managing and actually getting some content up into the storage container is the Azure Storage Explorer, which I’ll be downloading and using.

After a painless install process, you should see the following, where you will be asked to connect to Azure Storage:

Connect to Azure Storage.

I simply used my Azure account sign-in details here. I did notice, however, that the Azure Portal does expose keys and connection strings under ‘Access Keys’ (within the storage container dashboard). I’m assuming these are for other kinds of access, including programmatic access, which I’ll give a go as part of this post (as a wee bonus).

I used the right-click context menu to create a new container called ‘images’ and then used the upload button to push up a test image:

Azure Storage Explorer Upload Image.

Again, against the container I used the right-click context menu to select ‘Set Public Access Level…’, which I’ve set as follows to allow public access to the blob data but not the container:

Container Public Access Setup.

I now have a blob container with a single image in it and appropriate access rights configured. The question is: can I access the image in its current state? We’re looking pretty good from what I can see.
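
As an aside, the same access level can be set in code using the WindowsAzure.Storage package that features later in this post. A hedged sketch (the helper wrapper here is purely illustrative):

using Microsoft.WindowsAzure.Storage.Blob;

internal static class ContainerAccessHelper
{
    /// <summary>
    /// Allows anonymous read access to blobs within the container,
    /// without allowing the container contents to be enumerated.
    /// </summary>
    internal static void MakeBlobsPubliclyReadable(CloudBlobContainer container) =>
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });
}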

Successful Access.

Adding a custom domain

Next up, I plan on adding a custom domain to the storage account. To do this, I access the ‘Custom domain’ option as shown here:

Register Custom Domain.

I followed option 1 as listed here and created a CNAME record to map images.frogandpencil.com to frogandpencilstorage.blob.core.windows.net (I’m happy to wait for this to propagate).

Register images.frogandpencil.com.
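
In zone-file terms the record looks something like the following sketch (your DNS provider’s UI will differ):

images.frogandpencil.com.    CNAME    frogandpencilstorage.blob.core.windows.net.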

Once the CNAME record is created you simply have to place your target URL in the text box provided and hit save.

New CNAME property.

Lastly, let’s take it for a spin and see whether we can access the image in the storage container via the custom URL…and voila:

Custom Domain Active.

Back to the CDN bit!

We’ve come full circle! With a storage container in place I can now use that to feed a configured CDN. Consequently, I backtracked and followed the previously listed steps, being sure to point the ‘Origin hostname’ at the newly created storage container:

CDN Profile & Endpoint Configuration.

On clicking create it takes a short time for the CDN to be configured.

So, what do I do now?

Looking through the videos I made another discovery. This is where I want to adjust the previously created CNAME property (that I set up for the storage container) and hook it up to the CDN endpoint instead. The portal exposes custom domain mapping for a CDN much like it does for a storage container:

Change CNAME to map to CDN.

Portal CDN Custom Domain Mapping.

Again, I had to wait a short time for the CNAME property change to propagate but, after that, I was all set. I then spent a little time verifying that the CDN was up and running. There are quite a few optimisation options including the ability to set a custom ‘Origin path’ (such as ‘images’) but I’m leaving these be for the time being.

The Bonus Section – Programmatically Add Items to Azure Storage

As promised, this next section discusses (in a very bare bones fashion) what is required to write to an Azure storage container. I’ve created a stub Console Application to get running with and the process itself is simple (not considering errors, existence checks and threading, of course!).

You need to:

  1. Reference the WindowsAzure.Storage NuGet package.
  2. Add a reference to System.Configuration (if you want to put connection strings, folder paths and container names in configuration files and read them out).
  3. Then simply follow the code outlined below to get started.

In my test setup, the ‘SourceDirectory’ is looking at ‘C:\test-files\’ (contains just images) and the ‘TargetContainer’ is called ‘images’, as per my earlier configuration. The connection string can be obtained from the Azure Portal, under ‘Storage Account > Settings > Access Keys’.

Test Files ready for upload.

Storage Access Keys.

The App.config for the test application is structured like this, with the connection string being set to the correct value as per the information found in the Azure Portal.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6.1" />
    </startup>
  <connectionStrings>
    <add name="FrogAndPencilStorageConnection" connectionString="[OBTAINED_FROM_THE_AZURE_PORTAL]" />
  </connectionStrings>
  <appSettings>
    <add key="SourceDirectory" value="C:\test-files\"/>
    <add key="TargetContainer" value="images"/>
  </appSettings>
</configuration>

Then, finally, the actual test code which…

  • Attempts to connect to the storage account, creating a CloudStorageAccount object based on the connection string information supplied.
  • Then uses the CloudStorageAccount object to create a new CloudBlobContainer object (based on the container name stored in the configuration settings).
  • Finally, utilises this CloudBlobContainer, along with information about the files to process, to actually perform the upload.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.IO;
using System.Linq;

namespace WriteToAzureStorageTestApp
{
    /// <summary>
    /// Test application for writing to Azure Storage.
    /// Basic, test code only (throwaway code).
    /// </summary>
    internal class Program
    {
        #region Main (Entry Point) Method

        /// <summary>
        /// Main entry point method for this console application.
        /// </summary>
        /// <param name="args">Optional input arguments.</param>
        private static void Main(string[] args)
        {
            DemoWritingToAzureStorage();
        }

        #endregion Main (Entry Point) Method

        #region Private Static Methods

        /// <summary>
        /// Private static demo method illustrating how to upload to Azure Storage.
        /// </summary>
        private static void DemoWritingToAzureStorage()
        {
            // First use the FrogAndPencilStorageConnection connection string (for Azure Storage) to obtain a CloudStorageAccount, if possible
            CloudStorageAccount.TryParse(ConfigurationManager.ConnectionStrings["FrogAndPencilStorageConnection"].ConnectionString, out CloudStorageAccount storageAccount);
            if (storageAccount != null)
            {
                // We have a CloudStorageAccount...proceed to grab a CloudBlobContainer and attempt to upload any files found in the 'SourceDirectory' to Azure Storage
                Console.WriteLine("Obtaining CloudBlobContainer.");

                CloudBlobContainer container = GetCloudBlobContainer(storageAccount);

                Console.WriteLine("Container resolved.");

                Console.WriteLine("Obtaining files to process.");

                List<string> filesToProcess = Directory.GetFiles(ConfigurationManager.AppSettings["SourceDirectory"]).ToList();

                UploadFilesToStorage(container, filesToProcess);
            }

            Console.WriteLine("Processing complete. Press any key to exit...");
            Console.ReadLine();
        }

        /// <summary>
        /// Private static utility method that obtains a CloudBlobContainer
        /// using the container name stored in app settings.
        /// </summary>
        /// <param name="storageAccount">The cloud storage account to retrieve a container based on.</param>
        /// <returns>A fully instantiated CloudBlobContainer, based on the TargetContainer app setting.</returns>
        private static CloudBlobContainer GetCloudBlobContainer(CloudStorageAccount storageAccount)
        {
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            return blobClient.GetContainerReference(ConfigurationManager.AppSettings["TargetContainer"]);
        }

        /// <summary>
        /// Private static utility method that, using a CloudBlobContainer, uploads the
        /// files passed in to Azure Storage.
        /// </summary>
        /// <param name="container">A reference to the container to upload to.</param>
        /// <param name="filesToProcess">The files to upload to the container.</param>
        private static void UploadFilesToStorage(CloudBlobContainer container, List<string> filesToProcess)
        {
            // Process each file, uploading it to storage and deleting the local file reference as we go
            filesToProcess.ForEach(filePath =>
            {
                Console.WriteLine($"Processing and uploading file from path '{ filePath } (then deleting)'.");

                // Upload the file based on name (note - there is no existence check or guarantee of uniqueness - production code would need this)
                container.GetBlockBlobReference(Path.GetFileName(filePath)).UploadFromFile(filePath);

                RemoveFileFromLocalDirectory(filePath);
            });
        }

        /// <summary>
        /// Private static utility method for deleting a file.
        /// </summary>
        /// <param name="filePath">The file path (full) to delete based upon.</param>
        private static void RemoveFileFromLocalDirectory(string filePath)
        {
            // Only attempt the delete if the file exists
            if (File.Exists(filePath))
            {
                File.Delete(filePath);
            }
        }

        #endregion Private Static Methods
    }
}

Test Upload Application Running.

Test Files Uploaded.

There you have it; a rather round-the-houses and off-the-wall tour of setting up an Azure storage container and then linking this to an Azure CDN. Plenty of images still need to be brought over into the new storage container (and a few code changes to boot), but I feel like I am on a pilgrimage to a better place. I hope this proves useful nonetheless and, until the next time, happy coding!

Addendum

After a further play I realised that the C# example I’d knocked up was not setting the content type correctly on upload, as follows:

Incorrect Content Type.

To this end, I adjusted the UploadFilesToStorage method to set the content type on a CloudBlockBlob before the upload is triggered, as illustrated here:

/// <summary>
/// Private static utility method that, using a CloudBlobContainer, uploads the
/// files passed in to Azure Storage.
/// </summary>
/// <param name="container">A reference to the container to upload to.</param>
/// <param name="filesToProcess">The files to upload to the container.</param>
private static void UploadFilesToStorage(CloudBlobContainer container, List<string> filesToProcess)
{
	CloudBlockBlob blockBlob;

	// Process each file, uploading it to storage and deleting the local file reference as we go
	filesToProcess.ForEach(filePath =>
	{
		Console.WriteLine($"Processing and uploading file from path '{ filePath } (then deleting)'.");

		// Upload the file based on name (note - there is no existence check or guarantee of uniqueness - production code would need this)
		blockBlob = container.GetBlockBlobReference(Path.GetFileName(filePath));

		// Correctly configure the content type before uploading
		blockBlob.Properties.ContentType = "image/jpg";

		blockBlob.UploadFromFile(filePath);

		RemoveFileFromLocalDirectory(filePath);
	});
}

You should then see items with the correct content type in the container:

Correct Content Type.

To see the new images via the custom domain, essentially my CDN, I also had to ‘purge’ the endpoint at this point.
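
If you prefer to script this, a purge can also be issued from the Azure CLI (a hedged example; the resource group, profile and endpoint names here are hypothetical):

az cdn endpoint purge --resource-group frogandpencil-rg --profile-name frogandpencil-cdn --name frogandpencil --content-paths '/*'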

Again, happy coding.