AWS Scheduled Lambda – Starting & Stopping EC2 Instances

I’ve been meaning to give this a go for a month or two now and, at last, I finally have the spare hours needed to actually bring it all together.

The outcome I have in my head is to replace a process that currently runs as a Task Scheduler job on an EC2 instance where I work. The plan is to write a scheduled Lambda (triggered by a CloudWatch Events rule with a ‘cron’ expression in tow) that starts and stops EC2 instances, completely removing the need for the instance this process runs on. EC2 instances will be targeted by the presence of a specific tag. For something that kicks in once a day, a Lambda, where you pay per execution, is much more in line with what we want, as opposed to paying for an EC2 instance to be permanently spun up to service this kind of request.

Here are some resource links, before we get started, which illustrate some of the things I found useful to read in the run-up to trying this myself:

This was also a nice opportunity to play around a bit more within my personal AWS ‘space’, a nice bonus as I’ve not done a hell of a lot with it of late.

.NET Core SDK

I decided to get this separately as it could come in handy for general .NET Core development. I did originally think this was tied to being able to use the .NET Core Lambda templates within Visual Studio, although I don’t actually think this is the case (the AWS Toolkit for Visual Studio 2017 is the core component that governs this).

Either way, the SDK can be found here.

AWS Toolkit for Visual Studio 2017

I’ve been bringing my poor, ageing laptop at home up to date. I’ve installed Visual Studio Community Edition 2017 and will look to get it ‘Lambda creation’ ready in short order. To get all of the lovely, sugar-coated AWS support within Visual Studio (including an easy method of publishing Lambdas), I’m going to grab the AWS Toolkit for Visual Studio 2017. Navigate to Tools > Extensions and Updates > Online and search for ‘AWS’; this should bring back the AWS Toolkit for Visual Studio 2017. Go ahead and install it if you don’t already have it (closing Visual Studio to trigger the installation):

AWS Toolkit for Visual Studio 2017.

AWS Toolkit User Configuration

AWS Toolkit Credential Setup.

Before creating the Lambda, I’ve followed the provided configuration advice to go and create a new user via the IAM console. I’ll detail the whole process I followed just for clarity.

Start by accessing the AWS Console and open the IAM Management Console > Users section and click ‘Add user’. The user we are going to create needs ‘programmatic’ access, so be sure to check the correct box, also giving your user an appropriate name in the process:

After hitting ‘Next’, we need to assign an existing group with appropriate permissions or, as I am going to do, create a new group using the ‘Create group’ button. A modal popup will launch; here the ‘Group name’ can be added, along with an opportunity to link the group to an existing policy (or a brand new one). I’m keeping this simple and, as outlined in the guidance, assigning the ‘AdministratorAccess’ policy. Click ‘Create group’ and then ‘Next: Review’ to proceed.

Bash the ‘Create user’ button and you should be golden! Make sure to hit the ‘Download .csv’ button on the subsequent screen to get credentials at the ready.

Visual Studio AWS Toolkit Setup Screen

On the Visual Studio ‘Getting Started with the AWS Toolkit for Visual Studio’ screen, I opted to download the CSV for my ‘lew-admin-programmatic-user’ and use the ‘Import from a csv file…’ button. I left the profile name as ‘default’ for now. After selecting the relevant CSV credential file, hit ‘Save & Close’ to continue.

To cement my place as a ‘completionist geek’ I also updated Visual Studio at this point as I was a touch behind, so follow suit if you want to.

A little tip – If you’ve already closed the AWS Toolkit ‘setup’ screen a ‘Profile’ can be configured via the AWS Explorer window. This can be accessed within Visual Studio via View > AWS Explorer:

New AWS Profile.

Creating the Lambda functions and supporting project

I had to close and reopen Visual Studio at this point to get the .NET Core Lambda templates to do their magic trick and appear. Navigate to File > New Project > Visual C# > AWS Lambda and you should be presented with an ‘AWS Lambda Project (.NET Core)’ option. I’m going to create a project called ‘LGAws.StartInstances’, wrapping everything in a solution for good measure. Once the solution is loaded I then opted to create a second Lambda project called ‘LGAws.StopInstances’. In both cases, I used the ‘Empty function’ blueprint as I want to roll with this fully from scratch.

For the purposes of keeping a clean abstraction between the Lambda functions and the logic behind them, I have also created a separate .NET Core project called ‘LGAws.Operations’. This will be a helper library that will act as a repository for the logic that calls the AWS EC2 SDK (which we’ll get to in a bit). All projects are then modified to use .NET Core 2.0 using the right-click context menu and selecting ‘Properties’.

We’re on to actually writing the code then, which I’ll detail as best I can as we go (providing full samples to boot so you can follow along with every decision made).

The code

Let’s start with inspecting the solution:

Solution Configuration.

The LGAws.Operations project represents, as previously discussed, a supporting library which avoids the need to embed all of the logic within the Lambda functions themselves. Don’t treat this as a fully-fledged, complete solution or an absolute authority on how to structure this; I just thought a little separation of concerns wouldn’t go amiss here. Apart from the code that actually calls the AWS EC2 SDK, nothing else you see here is technically required to get going with your own version of this.

First up, the extensions folder is a nice little haven for a couple of small extension classes called ExceptionExtensions and MessagingExtensions. Nothing magical here, just types that provide some nicely formatted output for exceptions and other messaging. The content is as follows:

using System;
using System.Text;

namespace LGAws.Operations.Extensions
{
    /// <summary>
    /// Public static class holding exception type extension methods.
    /// </summary>
    public static class ExceptionExtensions
    {
        #region Extension Methods

        /// <summary>
        /// Public static exception extension designed to produce a formatted string
        /// from the targeted exception.
        /// </summary>
        /// <param name="exception">The exception to process.</param>
        /// <param name="includeStack">A boolean that denotes if we should include stack trace information in the returned string.</param>
        /// <returns>A formatted exception string based on the supplied parameters.</returns>
        public static string ToFriendlyExceptionString(this Exception exception, bool includeStack = true)
        {
            StringBuilder exceptionStringBuilder = new StringBuilder();

            if (exception != null)
            {
                // A valid exception is in scope - append messages from this exception and any inner exception (if present)
                exceptionStringBuilder.AppendLine($"The following exception has occurred: { exception.Message }");
                exceptionStringBuilder.AppendLine(exception.InnerException != null
                    ? $"An inner exception was detected as follows: { exception.InnerException.Message }" : "No inner exception was detected.");

                // Include stack information as specified by the caller
                if (includeStack && !string.IsNullOrWhiteSpace(exception.StackTrace))
                {
                    exceptionStringBuilder.AppendLine($"Stack trace: { exception.StackTrace }");
                }
            }

            return exceptionStringBuilder.ToString();
        }

        #endregion Extension Methods
    }
}
using System.Net;
using System.Runtime.CompilerServices;

namespace LGAws.Operations.Extensions
{
    /// <summary>
    /// Public static class holding 'messaging' type extension methods.
    /// </summary>
    public static class MessagingExtensions
    {
        #region Extension Methods

        /// <summary>
        /// Public static 'http status code' extension designed to produce a formatted string
        /// from the targeted httpstatuscode.
        /// </summary>
        /// <param name="statusCode">The http status code to inspect and provide a formatted string based on.</param>
        /// <param name="methodName">The calling method's name (when called via async you'll get 'MoveNext', based on async state machine antics).</param>
        /// <returns>A formatted string for reporting, based on the supplied http status code and method name parameters.</returns>
        public static string GetStatusMessageFromHttpStatusCode(this HttpStatusCode statusCode, [CallerMemberName] string methodName = "") =>
            statusCode == HttpStatusCode.OK
                ? $"The { methodName } method returned 'OK' - the operation completed successfully."
                : $"The { methodName } method returned an HTTP Status Code of { (int)statusCode } ({ statusCode }). Please check that the operation completed as expected.";

        #endregion Extension Methods
    }
}
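As a quick aside, the [CallerMemberName] attribute used in GetStatusMessageFromHttpStatusCode means the compiler fills in the calling member’s name automatically at each call site. A tiny standalone sketch of the mechanism (CallerDemo and Example are illustrative names of my own, not part of the project):

```csharp
using System.Runtime.CompilerServices;

public static class CallerDemo
{
    // The compiler substitutes the name of the invoking member at the call site
    public static string WhoCalledMe([CallerMemberName] string methodName = "") => methodName;
}

public static class Example
{
    // Calling WhoCalledMe from here yields "FromMethod"
    public static string FromMethod() => CallerDemo.WhoCalledMe();
}
```

Note that, as the doc comment above warns, calling such a method from within an async method can surface ‘MoveNext’ instead, thanks to the compiler-generated state machine.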

Within the Models folder, I’ve created a basic object hierarchy to encapsulate the idea of different AWS operations, such as describing and manipulating EC2 instances. The BaseOperationModel is the top-level base class that contains a single string property called OperationReport; the idea here is that all AWS operations will support a ‘report’ that details how the operation went. I then have two derived classes in the mix named DescribeEC2Operation and ManipulateEC2Operation (the ‘manipulate’ class itself is just an empty stub, but acts as a ‘marker’ object to make the return value and the operation being performed easily identifiable and unique in future). I utilise these types as return values when triggering logic to obtain instance ids (by a specific tag) and physically starting and stopping EC2 instances. These classes are defined as follows:

namespace LGAws.Operations.Models
{
    /// <summary>
    /// Base class model for AWS operations.
    /// </summary>
    public abstract class BaseOperationModel
    {
        #region Public Properties

        /// <summary>
        /// All AWS operations surface a string to detail
        /// a 'report' on the operation.
        /// </summary>
        public string OperationReport { get; set; }

        #endregion Public Properties
    }
}
using System.Collections.Generic;

namespace LGAws.Operations.Models
{
    /// <summary>
    /// Model that represents 'describe' EC2 operations.
    /// </summary>
    public class DescribeEC2Operation : BaseOperationModel
    {
        #region Public Properties

        /// <summary>
        /// Represents the obtained instance ids.
        /// </summary>
        public List<string> InstanceIds { get; set; } = new List<string>();

        #endregion Public Properties
    }
}
namespace LGAws.Operations.Models
{
    /// <summary>
    /// Model that represents 'manipulate' EC2 operations (such as 
    /// starting and stopping instances).
    /// </summary>
    public class ManipulateEC2Operation : BaseOperationModel
    {
        // Further implementation details for a ManipulateEC2Operation to be added here as and when needed
    }
}
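Because both operation types derive from BaseOperationModel, callers can handle any result uniformly through the base type. A minimal standalone sketch (the model classes are re-declared locally here so the snippet compiles on its own, and Summarise is an illustrative helper of mine, not part of the project):

```csharp
using System.Collections.Generic;

// Local re-declarations mirroring the model hierarchy above
public abstract class BaseOperationModel
{
    public string OperationReport { get; set; }
}

public class DescribeEC2Operation : BaseOperationModel
{
    public List<string> InstanceIds { get; set; } = new List<string>();
}

public class ManipulateEC2Operation : BaseOperationModel
{
}

public static class Reporting
{
    // Any operation result can be logged via the shared base type,
    // regardless of which concrete operation produced it
    public static string Summarise(BaseOperationModel operation) =>
        $"[{operation.GetType().Name}] {operation.OperationReport}";
}
```

This is the main payoff of the small hierarchy: logging code only ever needs to know about BaseOperationModel.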

There is also a static utility class for some constant strings used throughout the library.

namespace LGAws.Operations.Shared
{
    /// <summary>
    /// Public static helper class that holds constants to use
    /// for all AWS-based operations.
    /// </summary>
    public static class Constants
    {
        #region Constant Definitions

        /// <summary>
        /// Represents a stock message for when a response is null.
        /// </summary>
        public const string NULL_RESPONSE_MESSAGE = "The returned response was null. Please investigate the cause and/or try again.";

        /// <summary>
        /// Represents the stock EC2 auto start 'tag'.
        /// </summary>
        public const string AUTO_START_TAG = "auto-start";

        /// <summary>
        /// Represents the stock EC2 auto stop 'tag'.
        /// </summary>
        public const string AUTO_STOP_TAG = "auto-stop";

        #endregion Constant Definitions
    }
}

Lastly, the EC2OperationsHelper class is the core utility wrapper that encapsulates the code to obtain instance ids, by tag, and utilise those instance ids to start and stop the relevant instances (using the model classes and extensions previously observed). In order to actually use the relevant AWS EC2 APIs you’ll need to right-click this project (if you’re following along) and select ‘Manage Nuget Packages…’. Then, add the AWSSDK.EC2 package to begin using the AmazonEC2Client type – you’ll be looking for the following after installing the package:

AWSSDK.EC2 Nuget Package.

The AmazonEC2Client type is the gateway to the underlying methods we require to obtain EC2 instance ids by tag and subsequently start and stop those instances. This is done via the DescribeInstancesRequest/DescribeInstancesResponse, StartInstancesRequest/StartInstancesResponse and StopInstancesRequest/StopInstancesResponse constructs. You’ll notice that the AmazonEC2Client type implements IDisposable so, as is good practice with any type implementing this particular interface, I have used the good old using statement to ensure everything is mopped up after use. A DescribeInstancesRequest can accept a List of type ‘Filter’, which is our way of searching for instances by tag name. This particular implementation does not concern itself with the value behind the tag, but there are ways to factor this in if required. Lastly, the AmazonEC2Client is constructed via its parameterless constructor, which essentially means AWS credentials will be inferred; we’ll see this all come together when we ‘Publish’ the Lambda to AWS (the role specified at that point determines what the Lambda will be able to access and what credentials it ultimately runs under). See below for the entire code listing for this class:

using Amazon.EC2;
using Amazon.EC2.Model;
using LGAws.Operations.Extensions;
using LGAws.Operations.Models;
using LGAws.Operations.Shared;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace LGAws.Operations.EC2
{
    /// <summary>
    /// Helper class that represents operations that can be triggered
    /// against EC2 instances (such as starting/stopping instances).
    /// </summary>
    public class EC2OperationsHelper
    {
        #region EC2 Operation Methods

        /// <summary>
        /// Method that returns a custom DescribeEC2Operation object that holds details
        /// on EC2 instances discovered by the tag supplied.
        /// </summary>
        /// <param name="tag">Specifies the tag 'key' to identify targeted EC2 instances by.</param>
        /// <returns>A Task containing a custom DescribeEC2Operation object (containing discovered instance ids).</returns>
        public async Task<DescribeEC2Operation> GetInstancesByTag(string tag)
        {
            DescribeEC2Operation describeOperation = new DescribeEC2Operation();

            try
            {
                // Establish an AmazonEC2Client and use the DescribeInstancesRequest/DescribeInstancesResponse objects to find instances by tag
                using (AmazonEC2Client ec2Client = new AmazonEC2Client())
                {
                    DescribeInstancesRequest describeRequest = new DescribeInstancesRequest
                    {
                        Filters = new List<Filter> { new Filter("tag-key", new List<string> { tag }) }
                    };

                    DescribeInstancesResponse describeResponse = await ec2Client.DescribeInstancesAsync(describeRequest);

                    // The response stores instance details in a Reservation wrapper, so drill down as required to obtain the instance ids
                    if (describeResponse?.Reservations?.Count > 0)
                    {
                        describeResponse.Reservations.ForEach(reservation =>
                        {
                            if (reservation?.Instances?.Count > 0)
                            {
                                reservation.Instances.ForEach(instance =>
                                {
                                    // Add discovered instance ids to the describeOperation helper object
                                    describeOperation.InstanceIds.Add(instance.InstanceId);
                                });
                            }
                        });
                    }

                    // Set the OperationReport property for logging purposes (to be handled by the caller) - details how this operation went
                    describeOperation.OperationReport = describeResponse != null
                        ? describeResponse.HttpStatusCode.GetStatusMessageFromHttpStatusCode()
                        : Constants.NULL_RESPONSE_MESSAGE;
                }
            }
            catch (Exception ex)
            {
                // Get a 'friendly', formatted version of the exception on error (storing it against the OperationReport property on the returned object)
                describeOperation.OperationReport = ex.ToFriendlyExceptionString();
            }

            return describeOperation;
        }

        /// <summary>
        /// Method that returns a custom ManipulateEC2Operation object that holds details
        /// on the attempted operation to 'start' EC2 instances.
        /// </summary>
        /// <param name="instanceIds">The list of EC2 instance ids to start.</param>
        /// <returns>A Task containing a custom ManipulateEC2Operation object (containing details on the start operation).</returns>
        public async Task<ManipulateEC2Operation> StartEC2InstancesByInstanceIds(List<string> instanceIds)
        {
            ManipulateEC2Operation changeOperation = new ManipulateEC2Operation();

            try
            {
                // Establish an AmazonEC2Client and use the StartInstancesRequest/StartInstancesResponse objects to attempt to start the instances passed in (by id)
                using (AmazonEC2Client ec2Client = new AmazonEC2Client())
                {
                    StartInstancesRequest startRequest = new StartInstancesRequest(instanceIds);

                    StartInstancesResponse startResponse = await ec2Client.StartInstancesAsync(startRequest);

                    // Set the OperationReport property for logging purposes (to be handled by the caller) - details how this operation went
                    changeOperation.OperationReport = startResponse != null
                        ? startResponse.HttpStatusCode.GetStatusMessageFromHttpStatusCode()
                        : Constants.NULL_RESPONSE_MESSAGE;
                }
            }
            catch (Exception ex)
            {
                // Get a 'friendly', formatted version of the exception on error (storing it against the OperationReport property on the returned object)
                changeOperation.OperationReport = ex.ToFriendlyExceptionString();
            }

            return changeOperation;
        }

        /// <summary>
        /// Method that returns a custom ManipulateEC2Operation object that holds details
        /// on the attempted operation to 'stop' EC2 instances.
        /// </summary>
        /// <param name="instanceIds">The list of EC2 instance ids to stop.</param>
        /// <returns>A Task containing a custom ManipulateEC2Operation object (containing details on the stop operation).</returns>
        public async Task<ManipulateEC2Operation> StopEC2InstancesByInstanceIds(List<string> instanceIds)
        {
            ManipulateEC2Operation changeOperation = new ManipulateEC2Operation();

            try
            {
                // Establish an AmazonEC2Client and use the StopInstancesRequest/StopInstancesResponse objects to attempt to stop the instances passed in (by id)
                using (AmazonEC2Client ec2Client = new AmazonEC2Client())
                {
                    StopInstancesRequest stopRequest = new StopInstancesRequest(instanceIds);

                    StopInstancesResponse stopResponse = await ec2Client.StopInstancesAsync(stopRequest);

                    // Set the OperationReport property for logging purposes (to be handled by the caller) - details how this operation went
                    changeOperation.OperationReport = stopResponse != null
                        ? stopResponse.HttpStatusCode.GetStatusMessageFromHttpStatusCode()
                        : Constants.NULL_RESPONSE_MESSAGE;
                }
            }
            catch (Exception ex)
            {
                // Get a 'friendly', formatted version of the exception on error (storing it against the OperationReport property on the returned object)
                changeOperation.OperationReport = ex.ToFriendlyExceptionString();
            }

            return changeOperation;
        }

        #endregion EC2 Operation Methods
    }
}
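As an aside, the nested ForEach calls in GetInstancesByTag that drill through Reservations into Instances can also be expressed as a single LINQ query. A standalone sketch using simplified stand-in types (these are deliberately not the SDK’s Reservation/Instance classes, just enough shape for the example to compile on its own):

```csharp
using System.Collections.Generic;
using System.Linq;

// Simplified stand-ins for the SDK's Reservation/Instance shapes
public class FakeInstance { public string InstanceId { get; set; } }
public class FakeReservation { public List<FakeInstance> Instances { get; set; } }

public static class InstanceIdFlattener
{
    // Equivalent to the nested ForEach calls: flatten every reservation's
    // instances into one list of instance ids, tolerating nulls along the way
    public static List<string> Flatten(List<FakeReservation> reservations) =>
        (reservations ?? new List<FakeReservation>())
            .Where(reservation => reservation?.Instances != null)
            .SelectMany(reservation => reservation.Instances)
            .Select(instance => instance.InstanceId)
            .ToList();
}
```

Whether you prefer this over the ForEach style is a matter of taste; the behaviour is the same.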

The documentation surrounding the operations the AWS SDK for .NET supports is fairly detailed and well laid out; it can be found here for anyone interested in digging around further.
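One extra note on the ‘tag-key’ filter used earlier: the EC2 DescribeInstances API matches on the tag’s key alone with ‘tag-key’, whereas a ‘tag:&lt;key&gt;’ filter name matches against a specific tag’s value. A small illustrative helper for building those filter names (FilterNames is my own hypothetical type, not part of the SDK):

```csharp
public static class FilterNames
{
    // 'tag-key' matches any instance carrying the tag, regardless of its value
    public const string ByKey = "tag-key";

    // 'tag:<key>' matches against the value held by a specific tag key
    public static string ByValue(string tagKey) => $"tag:{tagKey}";
}
```

So, for example, a filter constructed with FilterNames.ByValue("auto-start") and a value list of { "true" } would (assuming the same Filter construction shown in the listing above) only match instances whose auto-start tag is set to "true".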

So, we move on lastly to the key component of this entire configuration: the Lambda functions themselves. I’ve created two distinct functions, as discussed previously – one to cover the starting of EC2 instances and another to kick off the stopping operation. Lambda functions are relatively simple in their setup, with the stock template providing a class called Function containing a single method called FunctionHandler. I’ve amended the signature of this method in my sample to not return any value (the template returns a string as is). The template signature is also geared to accept an input string argument, along with an ILambdaContext implementing object. I’m not interested in accepting input at the moment, so I’ve cut the input arguments down and just left the ILambdaContext implementing object in scope; this is a cool little object that exposes metadata about the triggered Lambda function (i.e. the function name, allocated memory limit, etc.).

The main idea I’ve gone with here, as discussed previously, is abstracting all of the core logic to the external ‘business logic’ library. The Lambda simply creates an instance of the EC2OperationsHelper class and then uses that as the workhorse, meaning our function definition is as simple as possible. The only other additional statements in play undertake logging, the details of which can be seen in AWS CloudWatch (which we’ll review later).

using Amazon.Lambda.Core;
using LGAws.Operations.EC2;
using LGAws.Operations.Models;
using LGAws.Operations.Shared;
using System.Threading.Tasks;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace LGAws.StartInstances
{
    /// <summary>
    /// Holds logic for the Start EC2 Instance Lambda function.
    /// </summary>
    public class Function
    {
        #region Function Handler Definition

        /// <summary>
        /// Start EC2 Instance Lambda function definition.
        /// </summary>
        /// <param name="context">An implementation of the ILambdaContext interface (for extracting information about the Lambda).</param>
        /// <returns>A task wrapping this operation.</returns>
        public async Task FunctionHandler(ILambdaContext context)
        {
            LambdaLogger.Log($"Executing the { context.FunctionName } function with a { context.MemoryLimitInMB }MB limit.");

            EC2OperationsHelper helper = new EC2OperationsHelper();

            // First, obtain instance ids to start
            DescribeEC2Operation describeOperation = await helper.GetInstancesByTag(Constants.AUTO_START_TAG);
            LambdaLogger.Log(describeOperation.OperationReport);

            // Start instances based on the returned ids
            ManipulateEC2Operation changeOperation = await helper.StartEC2InstancesByInstanceIds(describeOperation.InstanceIds);
            LambdaLogger.Log(changeOperation.OperationReport);

            LambdaLogger.Log($"Finished executing the { context.FunctionName } function.");
        }

        #endregion Function Handler Definition
    }
}
using Amazon.Lambda.Core;
using LGAws.Operations.EC2;
using LGAws.Operations.Models;
using LGAws.Operations.Shared;
using System.Threading.Tasks;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace LGAws.StopInstances
{
    /// <summary>
    /// Holds logic for the Stop EC2 Instance Lambda function.
    /// </summary>
    public class Function
    {
        #region Function Handler Definition

        /// <summary>
        /// Stop EC2 Instance Lambda function definition.
        /// </summary>
        /// <param name="context">An implementation of the ILambdaContext interface (for extracting information about the Lambda).</param>
        /// <returns>A task wrapping this operation.</returns>
        public async Task FunctionHandler(ILambdaContext context)
        {
            LambdaLogger.Log($"Executing the { context.FunctionName } function with a { context.MemoryLimitInMB }MB limit.");

            EC2OperationsHelper helper = new EC2OperationsHelper();

            // First, obtain instance ids to stop
            DescribeEC2Operation describeOperation = await helper.GetInstancesByTag(Constants.AUTO_STOP_TAG);
            LambdaLogger.Log(describeOperation.OperationReport);

            // Stop instances based on the returned ids
            ManipulateEC2Operation changeOperation = await helper.StopEC2InstancesByInstanceIds(describeOperation.InstanceIds);
            LambdaLogger.Log(changeOperation.OperationReport);

            LambdaLogger.Log($"Finished executing the { context.FunctionName } function.");
        }

        #endregion Function Handler Definition
    }
}

We’ve now reached the stage of finally getting our Lambdas published to AWS, which we’ll review now.

Upload of the Lambda function to AWS

The AWS Toolkit for Visual Studio provides a publishing wizard, but Lambda functions can also be zipped and uploaded via the AWS Console > Lambda admin screen if you prefer. Let’s review the upload process for one of our two Lambda functions, a process I will repeat for the other function too (behind the scenes, for brevity).

I want my Lambdas to be able to run wild with EC2 instances, so I’ve again popped on over to the AWS Console > IAM > Roles > ‘Create role’ to generate the ‘lg-ec2-full-access-role’. You’ll want to select ‘Lambda’ as the AWS service type when creating the role; it should look like the following after creation. I also attached the ‘AmazonEC2FullAccess’ and ‘AWSLambdaFullAccess’ policies to the role:

EC2 Full Access Role Summary.

EC2 Full Access Role Attached Policies.

We’re going to need this role in the next step.

To start with the publishing process, right-click the Lambda function project in the Solution Explorer within Visual Studio and select the ‘Publish to AWS Lambda…’ context menu item. You should be presented with a modal popup that looks similar to the image listed below. I’ve modified a few of the options at this point, which you may need to also do:

  • The functions I have created are using .NET Core version 2.0, so I’ve adjusted the ‘Language Runtime’ to ‘.NET Core v2.0’.
  • I’ve listed my function name as ‘LGAwsStartInstances’, not using the period character which is invalid in this instance.
  • For convenience, I’ve set the ‘Save settings to aws-lambda-tools-defaults.json for future deployments’ flag.
  • All other options should be valid at this point. I’ll be using the ‘default’ profile, in the ‘EU (Ireland)’ region (I could have switched to ‘EU (London)’ I guess, but I invariably remember too late that this exists!), adjust your region as needed.
Upload Lambda Function.

Click ‘Next’ to proceed, where you’ll be presented with one last modal screen, which allows you to set further configuration details, such as memory execution limits and timeouts for your particular function. The key thing on this particular screen, which we will need to address, is selecting a fitting value for the ‘Role Name’ dropdown:

Advanced Function Details.

Here, in my case at least, I ensure that the recently created ‘lg-ec2-full-access-role’ role is selected – be sure to select an appropriate value and then click ‘Upload’ to complete the process. I then performed the same steps for the other Lambda function project. With any luck, the upload will be error-free and, on completion, we’ll be able to go back to the AWS Console and create our test EC2 instance. You’ll notice that Visual Studio also loads a ‘test’ screen for you to trigger your function with (a settings flag on the upload progress modal governs this). Lambdas are also testable within the AWS Console itself.

Creation of a test EC2, with tag, to turn on and off

We now need to actually create the targeted entity of our Lambda functions: an EC2 instance that sports the appropriate ‘tags’. We’re going to create a bare-bones EC2 instance from a standard Windows base AMI, although it really doesn’t matter what you opt to use, so fill your boots with whatever you want. The AMI I am using is eligible for free-tier usage, depending on the current state of your AWS account.

To begin, run on over to the AWS Console > EC2 > ‘Launch Instance’ and pick an AMI. I’m opting to go with this:

Choose Base AMI.

After hitting ‘Select’ I go through the following motions to launch the instance.

  1. Choose an Instance Type > Pick t2.micro.
  2. Configure Instance Details > Skip over this.
  3. Add Storage > Defaults are fine here also, skip over this.
  4. Add Tags > We’ll add three here. Add a ‘Name’, ‘auto-start’ and ‘auto-stop’ tag as shown in the screenshot below.
  5. Configure Security Group > Skip over this (in the real world, of course, you’d want some clearly defined Security Groups but for the purposes of testing our Lambda this is fine for now).
  6. Launch the instance! Create a new key pair if you need to (keeping the .pem file to one side, although we’ll be decommissioning this instance right after our testing anyway) or use an existing key pair.
Lambda EC2 Tag Setup.

Once launched, feel free to stop the instance for now. We’ll be using a Lambda to spin it up again very shortly!

Test Instance Ready.

CloudWatch Event Rule trigger

The whole concept behind what I’m looking for is to trigger a Lambda on a cron schedule. The method I’m going to use involves a CloudWatch Event ‘Rule’, which can be configured manually via the CloudWatch section of the AWS Console or, more conveniently, via the Lambda section of the AWS Console instead. Therefore, to complete the ‘scheduling’ setup on a Lambda function, go to the AWS Console > Lambda and, in the Designer’s ‘Add triggers’ sidebar, click ‘CloudWatch Events’. This adds a node that serves as a step to ‘feed’ the triggering of the Lambda:

CloudWatch Event Trigger Setup.

Scroll down to configure the CloudWatch Event further and, in the ‘Rule’ drop-down, select ‘Create a new rule’. You can then give the rule a name, a description and, most importantly (with the ‘Schedule expression’ radio option set), a cron schedule. The sample expression I’ve used here will trigger the Lambda every 10 minutes, Monday to Sunday (you can use the documentation to configure any schedule you like). I’ve used this particular format so I can easily switch it to run Monday to Friday instead, with one trigger per day being the end game I’m after. Click ‘Add’ to complete the rule and then ‘Save’ in the top right-hand corner of the screen to finish up.
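For reference, CloudWatch schedule expressions use a six-field cron(Minutes Hours Day-of-month Month Day-of-week Year) format, where one of the two day fields must be a ‘?’. The expressions below are illustrative sketches of the kinds of schedule discussed (times are UTC), not the exact rule from my screenshots:

```
cron(0/10 * * * ? *)      every 10 minutes, every day
cron(0 7 ? * MON-FRI *)   07:00 UTC, once a day, Monday to Friday
```

Swapping between the two is then just a matter of editing the rule’s expression, which is exactly the flexibility I was after.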

Is it working?

At this moment in time our test EC2 instance is stopped, so the desired effect is for the CloudWatch Event to fire, based on the configured rule, and run the ‘LGAwsStartInstances’ Lambda function – our EC2 instance should then be kicked into life! On the Lambda function page, the link to the rule can be clicked to see details of the schedule, as displayed below:

Start EC2 Rule.

CloudWatch Event Rule Schedule.

After waiting for the next ‘schedule slot’ to roll around, the ‘Logs’ menu option within CloudWatch can be accessed. A log group for our Lambda can be seen which, when drilled into, shows the logging statements produced by the ‘LGAwsStartInstances’ function; this ties directly to the use of the ‘LambdaLogger’ type in the sample code.

CloudWatch Logs.

Start Instance Lambda Logs Content.

After verifying the existence of log data reporting a successful operation, we can finally go over to the EC2 section of the AWS Console and witness the instance started:

EC2 Started.

After proving this operation works correctly, I opted to disable the event rule tied to this Lambda and created another event, mirroring the setup process above, to prove the ‘LGAwsStopInstances’ function triggers as expected:

Stop Instance Lambda Logs Content.

So, success then – happy days all around!

Asides and final thoughts

One really interesting thing to note with the sample code, which I didn’t end up changing just to bring it up as a discussion point, is that if an exception occurs within the ‘meat’ of the Lambda code the use of ‘[CallerMemberName]’ will not give you the results you may expect. During testing, I triggered some test exceptions with the aim of making sure that my logging code registered the correct calling method name. I discovered, however, that the calling method name was getting logged as ‘MoveNext’ in all instances. After a few minutes of pondering, I realised that we were in the scope of asynchronous code, which explains everything. When using asynchronous methods everything is bundled into a ‘state machine’ construct, with an iterator controlling the flow of how we move through the code. Behind the scenes, this construct has a ‘MoveNext’ method where the code I’d created is now housed; hence the little logging nuance. One to be aware of; more details are available here if you’re interested (this is true regardless of whether you use MethodBase.GetCurrentMethod().Name as a calling parameter or the [CallerMemberName] attribute).

There is more I plan to add to this; one example of which is the assigning of elastic IPs to the EC2 instances on startup. However, as a grassroots template, this serves pretty well and I hope this helps anyone else looking to do something similar. A pretty long post then but one I’ve enjoyed knocking up! Until the next time happy coding as always 🙂

A Couple of Hours with Azure Maps

I’m having a random ‘pick a Channel 9 video and blog from there’ session; the subject of the day is Azure Maps and the inspiration came in the form of this video.

The plan is to see what I can achieve in an hour or two, so here’s my quick rundown to whet your appetite. Firstly, a quick resource list to get you going, which gives an idea of the product itself as well as details on pricing and the core API:

  1. Azure Maps
  2. Pricing
  3. Quick Starts

I’ll be (partly) following this quick start guide, but I may break away a bit and engage ‘rebel’ mode as that’s my style. 😛

Within the Azure Portal start by creating a new resource, searching using the keyword ‘Maps’; nice and simple for starters. Click ‘create’ as shown below:

Creating a Maps Resource.

For our next course of yumminess, simply fill in the mandatory fields specifying a Name, selecting a Subscription, an existing Resource Group (or creating a new one, which I did here for some clean separation) and finally selecting a Resource Group location that makes sense for you. I’ve opted to pin this resource to my dashboard for easy access later.

Create a Maps Account.

Once created, like many resources, we then just need to obtain the access key by going to ‘YOUR_MAP_RESOURCE’ in the Azure Portal > Settings > Keys. The sample application referenced on the demo resources page is doing a wonderful 404 trick at the time of writing, so I’ll see what I can put together as a basic sample myself, as I have the key in tow.

At this point I engaged ‘full nosiness mode’ and poking around further led me to some step-by-step samples; this looks like a good starting template. Using this template to generate my own code example (and throwing in some ES6 concepts for good measure) I came up with this lightweight, ‘one-shot’ HTML page in VS Code (I really need to use VS Code more as it’s going great guns now and getting excellent traction in the development community from what I can gather):

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, user-scalable=no" />
    <title>Azure Map Test</title>
    <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/css/atlas.min.css?api-version=1.0" type="text/css" />
    <script src="https://atlas.microsoft.com/sdk/js/atlas.min.js?api-version=1.0"></script>
    <style>
        html,
        body {
            width: 100%;
            height: 100%;
            padding: 0;
            margin: 0;
        }

        #mapContainer {
            width: 100%;
            height: 100%;
        }
    </style>
</head>
<body>    
    <div id="mapContainer"></div>
    <script>
        // Encapsulation class that is a holding container for search parameters
        class SearchOptions {
            constructor(subscriptionKey, searchTerm, startLatitude, startLongitude, searchRadius ) {
                this.subscriptionKey = subscriptionKey;
                this.searchTerm = searchTerm;
                this.startLatitude = startLatitude;
                this.startLongitude = startLongitude;
                this.searchRadius = searchRadius;
            }
            // Utility function for generating a search url based on the class properties
            generateSearchUrl() {
                return `https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&query=${ this.searchTerm }&subscription-key=${ this.subscriptionKey }&lat=${ this.startLatitude }&lon=${ this.startLongitude }&radius=${ this.searchRadius }`;
            }
        }

        // Function for generating a map (using the mapContainer element reference provided and subscription key)
        function getMap(mapContainer, subscriptionKey) {
            return new atlas.Map(mapContainer, {
                "subscription-key": subscriptionKey
            });
        }

        // Function for preparing the pin layer on the targeted map using the provided layer name
        function prepareMapPins(map, searchLayerName, pinType) {
            map.addPins([], {
                name: searchLayerName,
                cluster: false,
                icon: pinType
            });
        }

        // Function that processes the data from 'fetch' and adds pins (POIs) to the map using the returned json data
        function processMapData(data, searchLayerName, map, cameraPadding) {
            if (data != null && data.results != null && data.results.length > 0) {
                // Initialise a searchPins array and limit the returned json data to those that are marked as POIs
                let searchPins = [],
                    poiResults = data.results.filter((result) => { return result.type === "POI" }) || [];

                // Extract features from the returned data and add it to the searchPins array (this contains location-based information)
                searchPins = poiResults.map((poiResult) => {
                    let poiPosition = [poiResult.position.lon, poiResult.position.lat];

                    return new atlas.data.Feature(new atlas.data.Point(poiPosition), {
                        name: poiResult.poi.name,
                        address: poiResult.address.freeformAddress,
                        position: poiResult.position.lat + ", " + poiResult.position.lon
                    });
                });

                // Add POIs discovered to the appropriate search layer
                map.addPins(searchPins, {
                    name: searchLayerName
                });

                // Set the map camera to be fixed on the 'searchPins'
                let lons = searchPins.map((pin) => pin.geometry.coordinates[0] ),
                    lats = searchPins.map((pin) => pin.geometry.coordinates[1] ),
                    swLon = Math.min.apply(null, lons),
                    swLat = Math.min.apply(null, lats),
                    neLon = Math.max.apply(null, lons),
                    neLat = Math.max.apply(null, lats);

                map.setCameraBounds({
                    bounds: [swLon, swLat, neLon, neLat],
                    padding: cameraPadding
                });             
            }
        }

        // Function that is triggered on 'mouseover' of a pin element to display extra information
        function createMouseOverPopUps(e, popup, map) {
            let popupContentElement = document.createElement("div");
            popupContentElement.style.padding = "5px";

            let popupNameElement = document.createElement("div");
            popupNameElement.innerText = e.features[0].properties.name;
            popupContentElement.appendChild(popupNameElement);

            let popupAddressElement = document.createElement("div");
            popupAddressElement.innerText = e.features[0].properties.address;
            popupContentElement.appendChild(popupAddressElement);

            let popupPositionElement = document.createElement("div");
            popupPositionElement.innerText = e.features[0].properties.position;
            popupContentElement.appendChild(popupPositionElement);

            popup.setPopupOptions({
                position: e.features[0].geometry.coordinates,
                content: popupContentElement
            });

            popup.open(map);
        }

        // Function to actually create the map
        function createMap() {
            // Alter the query parameters here for testing, add a subscription key, search term (e.g. 'hairdressers' or 'pubs'), 
            // the latitude/longitude to begin the search from and the radius to search (in metres)
            const subscriptionKey = "INSERT_SUBSCRIPTION_KEY_HERE",
                  searchTerm = 'pubs',
                  startLatitude = '52.630181',
                  startLongitude = '1.297415',
                  searchRadius = 1000,
                  // The 'search layer' that will contain the discovered 'pins' and will tie to mouse over pop-ups
                  searchLayerName = "search-results",
                  // Use this to switch out the pin type on render (https://docs.microsoft.com/en-us/javascript/api/azure-maps-javascript/pinproperties?view=azure-iot-typescript-latest)
                  pinType = "pin-red",
                  // Had issues when searching a small radius and having this value too high (overlapping pins???) - but adjust as necessary
                  cameraPadding = 1;

            // Encapsulate the search constants into a utility class which contains a function for calculating a 
            // search url. Also, generate a map/popup object pre-search to get us started
            let options = new SearchOptions(subscriptionKey, searchTerm, startLatitude, startLongitude, searchRadius),
                popup = new atlas.Popup(),
                map = getMap('mapContainer', subscriptionKey);

            // Initialise the pin layer for the targeted map
            prepareMapPins(map, searchLayerName, pinType);

            // Use fetch to call the generated search URL and process the response to add data points (POIs in this case) to the map
            fetch(options.generateSearchUrl())
                .then(response => response.json())
                .then(data => processMapData(data, searchLayerName, map, cameraPadding));

            // Add a popup to the map which will display some basic information about a search result on hover over a pin
            map.addEventListener("mouseover", searchLayerName, (e) => {
                createMouseOverPopUps(e, popup, map);
            });
        }

        // Create the sample map!
        createMap();
    </script>
</body>
</html>

I’ve added inline comments to try and explain the core workings of the objects on show. In essence, you just need to:

  1. Ensure the atlas.min.css style sheet is in scope.
  2. Ensure the atlas.min.js script is in scope.
  3. Create a div with a selector (using an id in this instance) so it can be targeted.
  4. Call atlas.Map specifying the container (the div you previously created) you want to render the map within, along with a valid subscription key.

In this example, I create a SearchOptions class that acts as a way of encapsulating the configurable parts of a search and provides a way of generating a dynamic search URL using a template string (template literal). The createMap function is called first and creates a SearchOptions instance up front; this function is where you can modify search parameters as you see fit. When using this sample code be sure to switch out ‘INSERT_SUBSCRIPTION_KEY_HERE’ for a valid subscription key. You can specify a latitude/longitude as a linchpin for the search, a search radius in metres and a search term to target specific points of interest (POIs).

Along with the SearchOptions object, a ‘popup’ utility object (to handle how popups are rendered when a map pin is ‘moused over’) and the physical map are created, the latter using the getMap function. This is where atlas.Map is called, for reference.

To render pins on the map for POIs a named ‘layer’ must be created against the map object in scope. This is handled via a call to prepareMapPins. There is some ability to customise how a rendered pin looks so see the URL listed against the pinType constant, in the sample code, for more details.

I use ‘fetch’ to call the API with a generated URL/embedded query and then process the returned JSON data using the processMapData function. This is where the physical pins for POIs are added. Each POI discovered has a latitude/longitude, which is extracted in the form of an atlas.data.Feature (for each POI discovered). These are added to the map via a call to the addPins function, specifying the search layer to attach the pin to (so I’ve inferred here that you can indeed have multiple layers rendering different information which is awesome).

Some calculations are then performed to generate values to set the ‘camera’ location so that it focuses in on the area of the discovered POIs. All in all, it is actually pretty simple and is easy to get fully up and running within the first hour or so.
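That camera calculation boils down to taking the minimum and maximum of the pin longitudes and latitudes to form a south-west/north-east bounding box. Pulled out as a standalone function (boundsFor is my own name for illustration, not part of the Atlas API), it looks like this:

```javascript
// Given pin positions as [lon, lat] pairs, find the [swLon, swLat, neLon, neLat]
// bounding box that contains them all - the shape setCameraBounds expects.
function boundsFor(positions) {
    const lons = positions.map(p => p[0]),
          lats = positions.map(p => p[1]);
    return [Math.min(...lons), Math.min(...lats), Math.max(...lons), Math.max(...lats)];
}

// Three pins scattered around the sample search area
console.log(boundsFor([[1.29, 52.63], [1.31, 52.62], [1.30, 52.64]]));
// → [ 1.29, 52.62, 1.31, 52.64 ]
```

The sample page does the same thing with Math.min.apply/Math.max.apply; spread syntax is just the more modern ES6 spelling of the same trick.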

Lastly, a small mouseover event listener is added to provide a popup (using the previously created popup utility object) for each pin. The createMouseOverPopUps function takes care of this little monkey for us.

The only difficulty I had was that large padding values on the camera didn’t seem to play ball when using a small search radius. It took me a while to figure out that a) results were inaccurate when doing this and b) pins were overlapping and looked as if they were missing, so this is something to watch out for! I’m not sure why this knocked on to the pin locations, as it appears to be a camera setting. I’ve left this as 1, but a value of around 5 appeared to work fine.

So… what does it look like, I hear you ask? Here’s the first results set for pubs which, for anyone who knows me, is most likely not a surprise! 😉

Brewdog Location.

The accuracy here (me being from Norwich, I can judge) is… a little mixed. The location of Brewdog is near enough and another pin for Gonzos is on the mark, although the returned metadata lists it as ‘Havanas’, which is out of date. Some of the other listed POIs are flat out wrong (or omitted, perhaps because they are listed as ‘bars’ or ‘restaurants’, for example, even when I know they are in range based on the radius). I did a follow-up search for hairdressers, which seemed to be much more on the mark:

Anglian Hair Academy Map.

Anglian Hair Academy Street View.

I had no idea that the Anglian Hair Academy even existed; thankfully my wife was there to set me straight – it’s in the right place apparently. From what I know, the other pins are pretty accurate (in this instance Google Maps looked a little out of date). I tested this one last time on supermarkets in the area and it was reasonably accurate in the majority of cases.

This is an interesting little API to experiment with and please feel free to take this code and play around with it as you see fit. Also, please get in touch if the inaccuracies I saw here are due to some kind of error on my part, I’d love to hear how you all get on.

Thanks all and keep coding!

Future Decoded 2015 Play-by-play

Hello beautiful people!

It’s a fantastic, gorgeous Saturday morning (it’ll be Monday by the time I hit the publish button, such is the enormity of the post!); the birds are chirping, the sun is shining through the balcony windows (and there is a bloody wasp outside, STILL!!!) and my wife has left me…………to go on a girly weekend (that probably sounded more alarming than intended; hmmm, oh well, it stays!). Whilst she is away fighting the good fight, this gives me the opportunity to go over my thoughts on the recent Future Decoded 2015 event that took place at ExCel in London.

The links to outline this event have been posted before on my blog, but just in case, here are the goods again:

Future Decoded 2015
Future Decoded 2015: Technical Day Highlights

Before we begin, it’s worth pointing out that I attended this event a couple of weeks ago, so apologies if any inaccuracies pop up. I’ll do my best to stick to the facts of what I can remember and specific points that interested me; other commitments ended up preventing me from getting to this particular post sooner. You’ll all let me off, being the super gracious, awesome folks you are, I’m sure :-).

So, FIGHT!!!!!

Sorry, I had a dream about Mortal Kombat last night and upper-cutting people into the pit – What a great stage that was! Ah, the memories….Let’s begin/start/get on with it then.

Morning Key Notes

The morning Key Notes were varied and expansive in nature. I won’t discuss all of them here, only the takeaway points from the talks that struck a chord with me.

1) Scott Guthrie. EVP Cloud and Enterprise, Microsoft (Azure).

I was particularly looking forward to this talk, being a keen follower of Scott Guthrie (and Scott Hanselman), and I normally try to catch up with Channel 9 features and Azure Fridays whenever possible (I’ve linked both, although I’m sure most of you, if not all, have come across Channel 9 before or heard of Azure Fridays).

The talk did have primer elements as you would expect, i.e. here’s the Azure Portal and what you can expect to find (in relation to resources, templates you can access for applications, services, Content Distribution Networks (CDN), etc). The next bit really caught me cold, who was expecting a giant image slide of a cow! I certainly wasn’t…

Estrus in Cows

What followed was a full example of real-time data recording and assessment surrounding the monitoring of cows in Asia. I’ve provided a link below that sums up the concept of Estrus (being in heat) nicely enough, but in layman’s terms it relates to cows ‘being in the mooooooood’ (wife insisted I added that joke). Obviously, a farmer’s ability to accurately detect this, urm, state of being in a cow is an incredibly important factor in the ability to produce calves.

It turns out that a cow tends to move more when in the Estrus state; something that can certainly be measured. So, with pedometers attached to cows to measure steps taken and an Azure based service receiving and providing feedback in real-time, the farmer in question was able to take action to maximise calf production. Further to this, analysis of the data gathered was able to identify trends against how long cows have been in the Estrus state, and the gender of offspring. Crazy stuff, but all very interesting. Feel free to read further to your hearts content:

Cow Estrus Detection

The Internet of Things (IoT) was briefly touched on and another brief, live coding example ensued.

Scott produced a small, bog-standard heat sensor (apparently just a few pounds; I was impressed he didn’t say dollars!) and proceeded to demonstrate a basic WinForms application passing a JSON payload to Azure in real-time (measurements taken a few times a second). This strikes me as exciting territory, and I have friends who already develop applications working in tandem with sensors, backed up by technologies such as the Raspberry Pi and Arduino, for example. The talk closed with the conceptual idea that the majority of data in the world today is still largely unmeasured, and the hope that Azure would be an important platform in unlocking developers’ potential to measure previously untapped data.

2) Kevin Ashton. Inventor of the “Internet of Things”.

Kevin coined the term the Internet of Things (IoT), and gave a very good talk on what this means, as well as identifying certain ‘predictions’ for the future. For instance, that we, as a species, would survive climate change. He quickly noted that calling ‘BS’ on this particular one would be tricky should we suffer a doomsday-style event at the hands of climate change (I don’t imagine the last thoughts of humanity to be, ‘oh, Kevin Ashton was so bloody wrong!’). Another interesting prediction: we would all own a self-driving car by 2030. Prototype examples already exist, such as Google’s (and Apple’s) efforts, and the Tesla:

Google/Apple (Titan) Self Driving Cars
The Tesla

Self-driving cars being one of the cases in point, the IoT relates to how a whole new host of devices will now become ‘connected’. Besides cars rigged up to the internet, we are all aware of the hooking up of internal systems in our homes (heating, etc) and utility devices (the washing machine), so as to always be online and accessible at a moment’s notice. This world isn’t coming per se; it’s essentially already here.

Pushing past this initial definition, Kevin was keen to stress that the IoT is not limited to just ‘the connecting of hardware to the internet’. Wiki sums this up quite nicely on this occasion, but software (services and analytics) that moves forward with hardware changes will ultimately change the way we live, work, shop and go about our daily lives. Whether this be data relayed from the fridge to Google Glass (yes, you are out of milk!), or perhaps a self-driving car ordering ‘click and collect’ shopping and driving you to the collection point after work (not to mention triggering the heating x miles from home!). Software, and the analysis of the new kinds of data we can record from interconnected elements, will be a huge driving force in how our world changes:

Internet of Things (IoT)

Lastly, before I forget and move on, a key phrase voiced several times (although I cannot remember the exact speaker, so apologies for that, it was probably David Chappell) was to reset your defaults. Standard client/server architecture was discussed, and for those of us that are part of long running businesses this is what we are exclusively, or at least partially, dealing with on a daily basis still. However, the change to the use of mobile devices, tablets, etc, as clients and the cloud as the underpinning location for the services these clients communicate with is becoming the norm. For start-ups today, mobile first development and the cloud (Azure or Amazon Web Services (AWS)) are probably the initial go-to.

For some of us (speaking from a personal standpoint only), a major factor in our success as developers could simply be determined by understanding the cloud and getting the necessary experience to make the transition (for those who are not actively taking part in this world of course).

So, now we have the IoT, let’s talk security…

3) Graham Cluley. Security Analyst, grahamcluley.com.

Graham delivered a funny and insightful talk surrounding everyone’s ‘Oh my God, the horror, please kill me’ subject: the wonderful world of security.

In a nutshell, he argues (and certainly proves his point as you’ll read next) that the IoT will bring wonders to our world, but not without issues. We now have a scenario whereby a breadth of new devices have suddenly become internet connected. However, are the driving forces behind these changes the people who are used to dealing with the murky world of malware, viruses and hacking attempts (such as OS developers)? Probably not, is the initial answer. This is, of course, just a cultural divide between those used to traversing the security world and protecting devices from such attacks, and those tasked with bringing new devices to the interconnected world.

The hacking of self-driving cars (big topic it would seem) was discussed:

Fiat Chrysler Recalls

Also, the potential of hacking pacemakers was covered (bluetooth/wifi enabled), famously featured in the TV series Homeland and which actually led to Vice President Dick Cheney’s cardiologist disabling the wireless functionality of his device:

Pacemaker Hacking
Could Pacemakers Be Hacked?

Although funny, the talk did indeed bring up a very serious issue. The ramifications could be catastrophic, depending on the types of devices that ultimately end up being exposed to the masses via the web. Essentially, as the IoT age develops, extra care must be taken to ensure that security is right on up there, in the hierarchy of priorities, when developing software for these devices.

4) Chris Bishop. Scientist and Lab Director, Microsoft Research.

The last talk I would personally like to discuss briefly was by Chris Bishop; there were a few great nuggets here that are well worth covering.

The idea of Machine Learning (not a topic I was overly familiar with for starters), Neural Networks and Pattern Recognition laid the foundation for a talk looking at the possibility of producing machines with human-level, or even super-human, intelligence.

The Microsoft Kinect was used to demonstrate hand-tracking software that, I have to admit, had an incredible amount of fidelity in recognising hand positions and shapes.

Lastly, a facial recognition demonstration that could estimate, with good accuracy, the emotional state of a person was kicked off for us all to see. Very, very impressive. There was most certainly an underlying feeling here (and as much was hinted at) that this kind of technology has many hurdles to jump. For instance, building something that can consume an image and accurately describe what is in that image is still a flaky concept at best (and the difficulties of producing something capable of this are relatively vast).

Still, a greatly enjoyable talk! A book was touted, and I believe (please don’t shout at me if I’m wrong) this is the one:

Pattern Recognition and Machine Learning

After the morning Key Notes, a series of smaller talks and break-out sessions were available to us. Here’s how I spent my time…

Unity3D Grok Talk

Josh Taylor. Unity Technologies.

It’s my sincere hope that, on discovering this, my employer won’t decide to sack me! This was over lunch and was a self-indulgent decision I’m afraid! You’ll know from some of my historical posts that I have a keen interest in Unity3D (and have spent time making the odd modest prototype game here and there), and I was interested to see how Unity 5 was progressing, especially as a greater cohesive experience with Visual Studio had been promised.

In this short, 20 minute talk, we experienced how Visual Studio (finally) integrates nicely into the Unity3D content creation pipeline. Unity3D now defaults to using Visual Studio as the editor of choice, with Monodevelop being pushed aside. Apologies to anyone who likes Monodevelop, but I’ve never been able to get behind it. With wacky intellisense and with what I can only describe as a crash-tastic experience in past use, I haven’t seen anything yet to sway me from using Visual Studio. In fact, it was demonstrated that you can even use Visual Studio Code if you wish and, as it’s cross-platform, even Mac and Linux users can switch to this if they wish. More reasons to leave Monodevelop in the dust? It’s not for me to say really, go ahead and do what you’ve got to do at the end of the day!

In order to debug Unity projects in Visual Studio in the past, a paid-for plugin was required. This particular plugin has been purchased by Microsoft and is now available to all. Being able to easily debug code doesn’t sound like much, but trust me, it’s like having a basic human right re-established – such good news!!!

The new licensing model was also commented on, a massive plus for everyone. The previous Free/Pro divide is no more; now everyone gets access to the lion’s share of the core features. You only need to start spending money as you make it (fair for Unity to ask for a piece of the pie if you start rolling in profit/expanding a team to meet the new demand). For me, this means I actually get to use the Unity Pro water effects, hoorah ;-).

Following this, I spent a bit of time last weekend watching the Unite 2015 Key Notes, discussing 2D game development enhancements, cloud based builds and Oculus support. Well worth a look if and when time allows:

Unite 2015 Key Notes

Plus, if Oculus technology interests you, then it’s definitely worth watching John Carmack’s (formerly of id Software, the mind behind Wolfenstein and Doom) Key Note from the Oculus Connect 2 event:

John Carmack Oculus Keynote

Very exciting times ahead for Unity3D I believe. Self-indulgence over, moving forward then…

Journey to the Intelligent Cloud

Corey Sanders. Director of Program Management, Azure.

Following the Unity3D talk, I made my way back to the ICC Auditorium (I missed a small section of this particular talk, but caught the bulk of it) to catch up on some basic examples of how the new Azure Portal can be used. This took the form of a brief overview of what’s available via the portal, essentially a primer session.

In my recent, personal work with Azure I’ve used the publishing capability within Visual Studio to great effect; it was very transparent and seamless to use by all accounts. A sample was provided within this particular session which demonstrated live coding changes, made in GitHub, being published back to a site hosted on Azure.

Going into a tangent….

Very much a personal opinion here, but I did find (and I wasn’t the only one) that a good portion of the content I wanted to see was a) on at the same time (the 1:15pm slot) and b) during the core lunch period where everyone was ravenous; I’m a ‘hanger’ sufferer I’m afraid. C# with Mads Torgersen, ASP.NET 5, Nano Servers and Windows 10 (UWP) sessions all occupied this slot, which drove me a little nuts :-(. This felt like a scheduling issue if I’m honest. I’d be interested to hear from anyone who did (or didn’t) feel the same.

I was so disappointed to miss Mads Torgersen; I very much enjoyed his recent C# language features overview and would have loved to have made this breakout session! I did walk past him later in the day and, I hope he never reads this, but he seemed ridiculously tall (perhaps godly C# skills make him appear several inches taller, who knows!). It doesn’t help that I’m on the shorter side either; I just wanted to be 5′ 11″, that’s all I ever wanted (break out the rack, I need to get stretching!). I should have said hello, but I wimped out!

F# Language Breakout Session

Don Syme. Principal Researcher, Microsoft Research.

This was easily the part of the event that resonated the most with me, and strongly influenced the foray into F# that I undertook recently. Don Syme, the designer and architect of the F# language, took us through a quality primer of the syntax and how F# can be used (and scaled) for the cloud.

All of this aside, the most impressive part of the talk was a live demonstration of F# Type Providers. Again, this is fully covered in my previous post, so I’ll just direct you to that, which in turn will help me cut down what is now becoming a gargantuan post. In summary, the ability to draw information directly from web pages, rip data straight from files and databases, and combine and aggregate it all using minimal code makes for a terse, easy-to-understand and pretty darn good experience in my book. Even the code behind producing visual feedback, in the form of the charting API, is succinct; the bar really isn’t set too high for new starters to get involved.

If you decide to give anything a go in the near future, I would give F# the nod (followed closely, just a hair’s breadth away, by jQuery in my opinion). Certainly check it out if you get the chance.

Final Key Note

Professor Brian Cox. Physicist.
Krysta Svore. Senior Researcher, Microsoft Research.

The day proceeded in fast forward and, before we’d really had the chance to gather our thoughts, we were sitting in the main auditorium again faced by Professor Brian Cox, Krysta Svore and a menagerie of confused attendees staring at mathematical formulas outlining quantum theory.

Into the wonderful world of quantum computers we danced, with me dragging my brain along from somewhere back yonder in a desperate attempt to keep up. Thankfully, I’m an avid TED talk fanatic and had, in the run-up to the event, brushed up on a few quantum theory and quantum mechanics videos; luckily I did, really. The content was dense but, for the most part, well put together, and it outlined the amazing (and potentially frightening) world of possibilities that quantum computers could unlock for us all.

Professor Brian Cox cruised through the theories we’d need to be intimate with in order to understand the onslaught of oncoming content surrounding quantum computers. In essence, a traditional ‘bit’ has a defined state (like a switch), on or off. However, and this is the simple essence of what they were trying to get across, traditional bits are reaching limitations that will prevent us from solving more complex problems in a timely manner (you’ll see what I mean in a second). Therefore, qubits, born from quantum theory, are the answer.

Now, I’m not going to insult your intelligence and go into too much detail on a subject that I am clearly not an expert in. So, just in ‘layman’s bullet points’, here is what I took from all that was said and done across the Key Note:

  • With bits, you are dealing with entities that can have a fixed state (0 or 1). A deterministic system, if you will, that has limitations in its problem-crunching power.
  • Qubits, however, take us into the realm of a probabilistic system. The qubit can be in a superposition of all of the allowed states, not just 0 or 1.
  • Therefore, the problem-crunching powers of qubits are exponential in nature, but the probabilistic nature makes measuring them (and interactions involving them) difficult to get to grips with.
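
To make the superposition point above a little more concrete: a single qubit can be described by a pair of complex amplitudes whose squared magnitudes give the measurement probabilities. A minimal sketch in Python (the function and variable names are mine, purely for illustration, and this is of course simulating the maths, not real quantum hardware):

```python
import math

# A single qubit modelled as a pair of complex amplitudes (alpha, beta)
# for the basis states |0> and |1>. The probabilities of measuring 0 or 1
# are |alpha|^2 and |beta|^2, which must sum to 1.

def probabilities(alpha, beta):
    return abs(alpha) ** 2, abs(beta) ** 2

# A classical bit corresponds to one of the two fixed basis states:
p0, p1 = probabilities(1, 0)   # definitely measures 0

# A qubit in equal superposition (e.g. after a Hadamard gate on |0>):
h = 1 / math.sqrt(2)
q0, q1 = probabilities(h, h)   # 50/50 chance of measuring 0 or 1
```

The key difference from a classical bit is that alpha and beta can be any complex pair satisfying |alpha|² + |beta|² = 1, so the qubit occupies a continuum of states rather than just two.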

So is it worth fighting through the technical problems in order to harness qubits? What kind of gains are we talking about here?

Krysta Svore outlined an example showing that it would take roughly one billion years for a current supercomputer to crack (more complex than standard) RSA encryption. How long would it take a quantum computer, you may ask? Significantly less time is the answer; around one hundred seconds, in fact. This clearly illustrates the amazing problems we’ll be able to solve, whilst simultaneously highlighting the dangerous times that lie ahead from a security standpoint. Let’s just hope cryptography keeps up (there are a few signs that things are in the pipeline, so I will keep an eye out for news as it pops up).
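
For a rough sense of where that gap comes from (this is my own aside, not something spelled out in the talk): the best known classical factoring algorithm, the general number field sieve, runs in sub-exponential time in the bit length n of the number, whereas Shor’s quantum algorithm runs in polynomial time:

```latex
% Rough asymptotic running times for factoring an n-bit number
T_{\text{classical}}(n) \approx \exp\!\left(c \, n^{1/3} (\log n)^{2/3}\right)
  \quad \text{(general number field sieve)}
\qquad
T_{\text{quantum}}(n) = O\!\left(n^{3}\right)
  \quad \text{(Shor's algorithm)}
```

That jump from sub-exponential to polynomial is what turns ‘a billion years’ into ‘a hundred seconds’ once the key sizes get large enough.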

So you want a quantum computer, I hear you say! Hmmm, I wouldn’t put it on the Christmas list anytime soon. Due to the fact that current quantum computers need to be supercooled (and, from the pictures we got to see, don’t look like something you could hike around with!), we’re not likely to get our hands directly on them in the near future.

Can you get your mitts on quantum simulators today? Apparently yes is the answer (completely untested links, just for you to peruse on your own, good luck):

QC Simulators
Project Liquid

Taking nothing away from the Key Note though, it was a concrete finish to an excellent event. Would I go again? You bet! Should we get the train next time instead of driving? Taking into account the mountains of free beer and wine on offer, of course! To finish up, before summarising the Expo itself, if you haven’t been and get the opportunity (in fact, actively seek the opportunity, enough said) then definitely book this in your calendar, thoroughly brilliant.

Expo

Very, very quickly, as I am acutely aware that your ability to focus on this post must have completely diminished by this point (if it hadn’t already), I wanted to describe what the Expo itself had to offer. If you’re still reading, give yourself a pat on the back!

One of the more compelling items we saw was the use of the new Lumia phone as a (kind of) desktop replacement attempt. Let’s get one thing straight: you’re not going to be doing hardcore software development using Visual Studio, or any other intensive task, on this device anytime soon. However, there was certainly enough evidence to suggest that basic productivity tasks would be possible using a mobile phone as the backbone to facilitate this.

The Lumia can be hooked up to a dock, akin to the Surface Pro 4 (the docks are subtly different apparently, so are not cross-compatible), and that allows it to be tied to a display device. You can also get a folding mouse and keyboard, for a very lightweight, on-the-go experience. Interesting, certainly, but there is a definite horsepower issue that will prevent anyone working on anything remotely intensive from getting on board. Anyway, for those interested, the link below will get you started:

Lumia Docking Station

I saw a few Surface Pros, and wondered whether we could potentially smuggle a few out of the Expo! Only kidding, no need to call the Police (or for anyone I work with thinking I am some kind of master criminal in the making) :-).

An Oculus demonstration booth was on the Expo floor, and displays were hooked up to show what the participants were experiencing. It was noted that a few of the people using the Oculus seemed to miss the point a bit, and kept their head completely still as they were transported through the experience. Once the heads started moving (to actually take in the world) you could visibly see people getting incredibly immersed. Alas, the queues were pretty darn large every time I made my way past, so I didn’t get a chance to experience it first-hand. One for the future.

There was also a programmable cocktail maker, an IoT masterpiece I think you’ll agree. A perfect union of hardware, software and alcohol, a visionary piece illustrating the future has arrived!

The next time an event like this comes around I will endeavour to get a post up in a timely fashion (which will vastly improve the content I hope).

Thanks for reading and a high five from me if you made it this far. Back to coding in the upcoming post I promise, until the next time, cheers from me (and would you believe it, it’s now Tuesday)!