Execution Plans via T-SQL – Super Simple

I recently had to do a very quick ‘rip and tear’ (high fives for the ‘Doom’ reference!) of stored execution plans for a bit of exploratory work I was involved with. It seems like a simple, shareable piece so here it is for you to grab, use and modify as and when you need it.

The backbone of this query relies on the following Dynamic Management Views and Functions:

  1. sys.dm_exec_cached_plans – returns a row for each cached query plan. This is used here to identify the object type linked to the cached plan, along with details such as the number of times the plan has been reused.
  2. sys.dm_exec_query_plan – a function whose primary job here is to return the XML query plan content for a given plan handle.
  3. sys.dm_exec_sql_text – a function used here to get the SQL text tied to the cached execution plan.
  4. sys.dm_exec_query_stats – acts as a gateway to performance metrics for a given query plan.

You’ll notice that a couple of these objects are actually table-valued functions that consume a plan handle; the common examples use a CROSS APPLY to link everything together and I’ve stuck with this approach. Other than that, the SQL is fairly straightforward; some inline comments have been added for clarity:

SELECT
COALESCE(OBJECT_NAME(st.objectid, st.[dbid]), '[No Object]') AS [ObjectName]
, cp.objtype AS [ObjectType]
, cp.usecounts AS [TimesExecuted]
, DB_NAME(qp.[dbid]) AS [DatabaseName]
, cp.size_in_bytes AS [SizeInBytes]
, cp.cacheobjtype AS [CacheObjectType]
, st.[text] AS [PhysicalQuery]
, qp.query_plan AS [QueryPlanXml]
, qs.last_execution_time AS [LastRan]
, qs.last_elapsed_time AS [LastRunElapsedTime]
FROM sys.dm_exec_cached_plans cp
	CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
	CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
	-- Extra join added for optional execution stats (like the last 'run' time and execution times)
	LEFT OUTER JOIN sys.dm_exec_query_stats qs ON cp.plan_handle = qs.plan_handle
-- Omit this if you want results for all databases contained on an instance
WHERE DB_NAME(qp.[dbid]) = 'JsonForTestDatabase'
-- Place the most 'used' first within the results set
ORDER BY cp.usecounts DESC;

I’ve gathered a small subset of the returnable columns to show the object name, type, query text (along with the XML query plan) as well as some simple usage information. I’m filtering the returned data by database name and placing the most used query plans first in the returned results; feel free to modify as needed.

You should retrieve output in the following vein:

Querying Cached Plans.

Simply clicking content in the QueryPlanXml column will display the execution plan in the usual, expected format:

Execution Plan Content.
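
If you’d rather hunt down the plans relating to one specific query or table, a small tweak along the same lines does the trick. The variation below is a quick, untested sketch of this idea; swap 'Robots' for whatever keyword or object name you’re interested in:

-- Variation: find cached plans whose underlying SQL mentions a specific keyword/table name
SELECT
cp.usecounts AS [TimesExecuted]
, st.[text] AS [PhysicalQuery]
, qp.query_plan AS [QueryPlanXml]
FROM sys.dm_exec_cached_plans cp
	CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
	CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
-- Swap 'Robots' for the object or text snippet you are chasing
WHERE st.[text] LIKE '%Robots%'
ORDER BY cp.usecounts DESC;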

All pretty easy to get to grips with. Happy coding!

SQL FOR JSON Tour

I’ve worked on numerous occasions with the various flavours of the FOR XML clause within SQL Server. Taking that a step further, this seemed like as good a time as ever to check out how to work with the FOR JSON clause.

This clause can, simply put, be used to boil down data retrieved via a SELECT statement into formatted JSON, with a custom structure as required – simples. So, how do we go about this?

To illustrate how this all strings together I’ve scripted a couple of examples that you can lift and test yourself, as desired. Coming up with examples can be the hardest thing sometimes; in an effort to avoid falling into the classic ‘Person’ table example (Car for C#, and so on and so forth 😉 !) I’ve gone with two ‘super imaginative’ (well, maybe not; but give me some credit here!) fictitious relational data setups. Firstly, I use the concept of ‘Robot’ and ‘RobotDanceMoves’ for a look at one-to-one relationships (which gets extended a little later on). Next up, I use the idea of ‘Houses’ to represent a more complex example. Houses have ‘Rooms’, which in turn contain ‘RoomObjects’; this gives us some opportunities to explore one-to-many relationships and how to structure the SQL to get the desired JSON output.

Our test table structures look like this:

Robot Table Structure.

House Table Structure.

We’ll start with utilising the Robot and RobotDanceMoves tables and then use the other tables in the last couple of examples to show a more complex, nested setup.

To get us out of the gate, our first piece of SQL uses FOR JSON PATH in its simplest form; we’re enforcing a one-to-one relationship via the INNER JOIN clause by specifying ‘FavouriteMove = 1’.

SQL:

-- Produce a very simple, flat representation of a Robot and their favourite dance move
SELECT 
r.Id
, r.[Name]
, r.Active
, rdm.[Name] AS [FavouriteDanceMove]
FROM dbo.Robots r
	INNER JOIN dbo.RobotDanceMoves rdm ON 
	(
		r.Id = rdm.RobotId
		AND rdm.FavouriteMove = 1
	)
FOR JSON PATH;

JSON output:

[
    {
        "Id": 1,
        "Name": "Barry",
        "Active": true,
        "FavouriteDanceMove": "Moonwalk"
    },
    {
        "Id": 2,
        "Name": "Steve",
        "Active": false,
        "FavouriteDanceMove": "Thunder Clap"
    },
    {
        "Id": 3,
        "Name": "Dave",
        "Active": true,
        "FavouriteDanceMove": "Moonwalk"
    },
    {
        "Id": 4,
        "Name": "Zoe",
        "Active": true,
        "FavouriteDanceMove": "The Robot"
    },
    {
        "Id": 5,
        "Name": "Claire",
        "Active": true,
        "FavouriteDanceMove": "The Robot"
    },
    {
        "Id": 6,
        "Name": "Tracey",
        "Active": false,
        "FavouriteDanceMove": "Moonwalk"
    }
]
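
As a quick aside, the generated JSON comes back as a single piece of text, so it can also be captured into a variable if you need to hand it off elsewhere. A minimal, untested sketch using the same Robots table:

-- Capture the generated JSON into a variable (e.g. for passing to an application tier)
DECLARE @RobotJson NVARCHAR(MAX) =
(
	SELECT
	r.Id
	, r.[Name]
	, r.Active
	FROM dbo.Robots r
	FOR JSON PATH
);

SELECT @RobotJson AS [RobotJson];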

It is possible to compartmentalise properties further by using column aliases directly to create a key and nested object. Here the ‘Name’ and ‘Active’ flag for a Robot are nested within ‘RobotCoreDetails’, and the ‘Name’ of a Robot’s favourite dance move is nested within ‘RobotDanceMove’ as ‘FavouriteDanceMove’.

SQL:

-- Illustration of how the JSON can be manipulated and formatted further by altering column aliases
SELECT 
r.Id AS [Id]
, r.[Name] AS [RobotCoreDetails.Name]
, r.Active AS [RobotCoreDetails.Active]
, rdm.[Name] AS [RobotDanceMove.FavouriteDanceMove]
FROM dbo.Robots r
	INNER JOIN dbo.RobotDanceMoves rdm ON 
	(
		r.Id = rdm.RobotId
		AND rdm.FavouriteMove = 1
	)
FOR JSON PATH;

JSON output:

[
    {
        "Id": 1,
        "RobotCoreDetails": {
            "Name": "Barry",
            "Active": true
        },
        "RobotDanceMove": {
            "FavouriteDanceMove": "Moonwalk"
        }
    },
    {
        "Id": 2,
        "RobotCoreDetails": {
            "Name": "Steve",
            "Active": false
        },
        "RobotDanceMove": {
            "FavouriteDanceMove": "Thunder Clap"
        }
    },
    {
        "Id": 3,
        "RobotCoreDetails": {
            "Name": "Dave",
            "Active": true
        },
        "RobotDanceMove": {
            "FavouriteDanceMove": "Moonwalk"
        }
    },
    {
        "Id": 4,
        "RobotCoreDetails": {
            "Name": "Zoe",
            "Active": true
        },
        "RobotDanceMove": {
            "FavouriteDanceMove": "The Robot"
        }
    },
    {
        "Id": 5,
        "RobotCoreDetails": {
            "Name": "Claire",
            "Active": true
        },
        "RobotDanceMove": {
            "FavouriteDanceMove": "The Robot"
        }
    },
    {
        "Id": 6,
        "RobotCoreDetails": {
            "Name": "Tracey",
            "Active": false
        },
        "RobotDanceMove": {
            "FavouriteDanceMove": "Moonwalk"
        }
    }
]

As with the FOR XML clause, it is possible to apply a ‘ROOT’ option. Here, this is used as a wrapper around the Robot objects.

SQL:

-- Illustration of adding a ROOT clause to wrap the produced JSON content
SELECT 
r.Id AS [Id]
, r.[Name] AS [RobotCoreDetails.Name]
, r.Active AS [RobotCoreDetails.Active]
, rdm.[Name] AS [RobotDanceMove.FavouriteDanceMove]
FROM dbo.Robots r
	INNER JOIN dbo.RobotDanceMoves rdm ON 
	(
		r.Id = rdm.RobotId
		AND rdm.FavouriteMove = 1
	)
FOR JSON PATH, ROOT('Robots');

JSON output:

{
    "Robots": [
        {
            "Id": 1,
            "RobotCoreDetails": {
                "Name": "Barry",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Moonwalk"
            }
        },
        {
            "Id": 2,
            "RobotCoreDetails": {
                "Name": "Steve",
                "Active": false
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Thunder Clap"
            }
        },
        {
            "Id": 3,
            "RobotCoreDetails": {
                "Name": "Dave",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Moonwalk"
            }
        },
        {
            "Id": 4,
            "RobotCoreDetails": {
                "Name": "Zoe",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "The Robot"
            }
        },
        {
            "Id": 5,
            "RobotCoreDetails": {
                "Name": "Claire",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "The Robot"
            }
        },
        {
            "Id": 6,
            "RobotCoreDetails": {
                "Name": "Tracey",
                "Active": false
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Moonwalk"
            }
        }
    ]
}

In this next example, we move forward into a one-to-many relationship by removing ‘FavouriteMove = 1’ from the INNER JOIN (I’ve left the alias as ‘FavouriteDanceMove’ for the moment, but this gets tidied up in a subsequent example). You may be tempted to structure the SQL in this way when getting started but the outcome, as shown below, is a duplication of the Robot object – not a singular Robot supporting multiple dance moves.

SQL:

-- Represents an approach that fails to produce correctly nested results, on this occasion
SELECT 
r.Id AS [Id]
, r.[Name] AS [RobotCoreDetails.Name]
, r.Active AS [RobotCoreDetails.Active]
, rdm.[Name] AS [RobotDanceMove.FavouriteDanceMove]
FROM dbo.Robots r
	INNER JOIN dbo.RobotDanceMoves rdm ON r.Id = rdm.RobotId
FOR JSON PATH, ROOT('Robots');

JSON output:

{
    "Robots": [
        {
            "Id": 1,
            "RobotCoreDetails": {
                "Name": "Barry",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Moonwalk"
            }
        },
        {
            "Id": 1,
            "RobotCoreDetails": {
                "Name": "Barry",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "The Robot"
            }
        },
        {
            "Id": 1,
            "RobotCoreDetails": {
                "Name": "Barry",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Thunder Clap"
            }
        },
        {
            "Id": 2,
            "RobotCoreDetails": {
                "Name": "Steve",
                "Active": false
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "The Robot"
            }
        },
        {
            "Id": 2,
            "RobotCoreDetails": {
                "Name": "Steve",
                "Active": false
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Moonwalk"
            }
        },
        {
            "Id": 2,
            "RobotCoreDetails": {
                "Name": "Steve",
                "Active": false
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Thunder Clap"
            }
        },
        {
            "Id": 3,
            "RobotCoreDetails": {
                "Name": "Dave",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Moonwalk"
            }
        },
        {
            "Id": 3,
            "RobotCoreDetails": {
                "Name": "Dave",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "The Robot"
            }
        },
        {
            "Id": 3,
            "RobotCoreDetails": {
                "Name": "Dave",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Thunder Clap"
            }
        },
        {
            "Id": 4,
            "RobotCoreDetails": {
                "Name": "Zoe",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "The Robot"
            }
        },
        {
            "Id": 4,
            "RobotCoreDetails": {
                "Name": "Zoe",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Moonwalk"
            }
        },
        {
            "Id": 4,
            "RobotCoreDetails": {
                "Name": "Zoe",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Thunder Clap"
            }
        },
        {
            "Id": 5,
            "RobotCoreDetails": {
                "Name": "Claire",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Thunder Clap"
            }
        },
        {
            "Id": 5,
            "RobotCoreDetails": {
                "Name": "Claire",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Moonwalk"
            }
        },
        {
            "Id": 5,
            "RobotCoreDetails": {
                "Name": "Claire",
                "Active": true
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "The Robot"
            }
        },
        {
            "Id": 6,
            "RobotCoreDetails": {
                "Name": "Tracey",
                "Active": false
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Moonwalk"
            }
        },
        {
            "Id": 6,
            "RobotCoreDetails": {
                "Name": "Tracey",
                "Active": false
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "Thunder Clap"
            }
        },
        {
            "Id": 6,
            "RobotCoreDetails": {
                "Name": "Tracey",
                "Active": false
            },
            "RobotDanceMove": {
                "FavouriteDanceMove": "The Robot"
            }
        }
    ]
}

This modification correctly provides a single Robot with multiple dance moves. The way to achieve this is by using a correlated subquery which is directly linked to the outer SELECT (see the ‘WHERE rdm.RobotId = r.Id’ clause). Both the inner and outer SELECT statements require a FOR JSON PATH clause to be present. Some additional formatting has been put in place here to output ‘Yep’ or ‘Nope’ to denote whether a particular dance move is the Robot’s favourite.

SQL:

-- Correctly nested results, illustration one
SELECT 
r.Id AS [Id]
, r.[Name] AS [RobotCoreDetails.Name]
, r.Active AS [RobotCoreDetails.Active] 
,
(
	SELECT
	rdm.[Name]
	, 
	-- Use of CASE statement just to show that normal manipulation of data is possible, as expected
	CASE
		WHEN rdm.FavouriteMove = 1
		THEN 'Yep'
		ELSE 'Nope'
	END AS [FavouriteMove]
	FROM dbo.RobotDanceMoves rdm
	WHERE rdm.RobotId = r.Id
	FOR JSON PATH
) AS [RobotDanceMoves]
FROM dbo.Robots r
FOR JSON PATH, ROOT('Robots');

JSON output:

{
    "Robots": [
        {
            "Id": 1,
            "RobotCoreDetails": {
                "Name": "Barry",
                "Active": true
            },
            "RobotDanceMoves": [
                {
                    "Name": "Moonwalk",
                    "FavouriteMove": "Yep"
                },
                {
                    "Name": "The Robot",
                    "FavouriteMove": "Nope"
                },
                {
                    "Name": "Thunder Clap",
                    "FavouriteMove": "Nope"
                }
            ]
        },
        {
            "Id": 2,
            "RobotCoreDetails": {
                "Name": "Steve",
                "Active": false
            },
            "RobotDanceMoves": [
                {
                    "Name": "The Robot",
                    "FavouriteMove": "Nope"
                },
                {
                    "Name": "Moonwalk",
                    "FavouriteMove": "Nope"
                },
                {
                    "Name": "Thunder Clap",
                    "FavouriteMove": "Yep"
                }
            ]
        },
        {
            "Id": 3,
            "RobotCoreDetails": {
                "Name": "Dave",
                "Active": true
            },
            "RobotDanceMoves": [
                {
                    "Name": "Moonwalk",
                    "FavouriteMove": "Yep"
                },
                {
                    "Name": "The Robot",
                    "FavouriteMove": "Nope"
                },
                {
                    "Name": "Thunder Clap",
                    "FavouriteMove": "Nope"
                }
            ]
        },
        {
            "Id": 4,
            "RobotCoreDetails": {
                "Name": "Zoe",
                "Active": true
            },
            "RobotDanceMoves": [
                {
                    "Name": "The Robot",
                    "FavouriteMove": "Yep"
                },
                {
                    "Name": "Moonwalk",
                    "FavouriteMove": "Nope"
                },
                {
                    "Name": "Thunder Clap",
                    "FavouriteMove": "Nope"
                }
            ]
        },
        {
            "Id": 5,
            "RobotCoreDetails": {
                "Name": "Claire",
                "Active": true
            },
            "RobotDanceMoves": [
                {
                    "Name": "Thunder Clap",
                    "FavouriteMove": "Nope"
                },
                {
                    "Name": "Moonwalk",
                    "FavouriteMove": "Nope"
                },
                {
                    "Name": "The Robot",
                    "FavouriteMove": "Yep"
                }
            ]
        },
        {
            "Id": 6,
            "RobotCoreDetails": {
                "Name": "Tracey",
                "Active": false
            },
            "RobotDanceMoves": [
                {
                    "Name": "Moonwalk",
                    "FavouriteMove": "Yep"
                },
                {
                    "Name": "Thunder Clap",
                    "FavouriteMove": "Nope"
                },
                {
                    "Name": "The Robot",
                    "FavouriteMove": "Nope"
                }
            ]
        }
    ]
}

Finally, here we have altered the dance move output so that it sits inside a ‘Move’ object. It’s really interesting how alias name parts, separated by a period, directly control how the content is pushed into nested objects.

SQL:

-- Correctly nested results, illustration two (RobotDanceMoves abstracted into 'Move' using aliases)
SELECT 
r.Id AS [Id]
, r.[Name] AS [RobotCoreDetails.Name]
, r.Active AS [RobotCoreDetails.Active] 
,
(
	SELECT
	rdm.[Name] AS [Move.Name]
	, 
	-- Use of CASE statement just to show that normal manipulation of data is possible, as expected
	CASE
		WHEN rdm.FavouriteMove = 1
		THEN 'Yep'
		ELSE 'Nope'
	END AS [Move.Favourite]
	FROM dbo.RobotDanceMoves rdm
	WHERE rdm.RobotId = r.Id
	FOR JSON PATH
) AS [RobotDanceMoves]
FROM dbo.Robots r
FOR JSON PATH, ROOT('Robots');

JSON output:

{
    "Robots": [
        {
            "Id": 1,
            "RobotCoreDetails": {
                "Name": "Barry",
                "Active": true
            },
            "RobotDanceMoves": [
                {
                    "Move": {
                        "Name": "Moonwalk",
                        "Favourite": "Yep"
                    }
                },
                {
                    "Move": {
                        "Name": "The Robot",
                        "Favourite": "Nope"
                    }
                },
                {
                    "Move": {
                        "Name": "Thunder Clap",
                        "Favourite": "Nope"
                    }
                }
            ]
        },
        {
            "Id": 2,
            "RobotCoreDetails": {
                "Name": "Steve",
                "Active": false
            },
            "RobotDanceMoves": [
                {
                    "Move": {
                        "Name": "The Robot",
                        "Favourite": "Nope"
                    }
                },
                {
                    "Move": {
                        "Name": "Moonwalk",
                        "Favourite": "Nope"
                    }
                },
                {
                    "Move": {
                        "Name": "Thunder Clap",
                        "Favourite": "Yep"
                    }
                }
            ]
        },
        {
            "Id": 3,
            "RobotCoreDetails": {
                "Name": "Dave",
                "Active": true
            },
            "RobotDanceMoves": [
                {
                    "Move": {
                        "Name": "Moonwalk",
                        "Favourite": "Yep"
                    }
                },
                {
                    "Move": {
                        "Name": "The Robot",
                        "Favourite": "Nope"
                    }
                },
                {
                    "Move": {
                        "Name": "Thunder Clap",
                        "Favourite": "Nope"
                    }
                }
            ]
        },
        {
            "Id": 4,
            "RobotCoreDetails": {
                "Name": "Zoe",
                "Active": true
            },
            "RobotDanceMoves": [
                {
                    "Move": {
                        "Name": "The Robot",
                        "Favourite": "Yep"
                    }
                },
                {
                    "Move": {
                        "Name": "Moonwalk",
                        "Favourite": "Nope"
                    }
                },
                {
                    "Move": {
                        "Name": "Thunder Clap",
                        "Favourite": "Nope"
                    }
                }
            ]
        },
        {
            "Id": 5,
            "RobotCoreDetails": {
                "Name": "Claire",
                "Active": true
            },
            "RobotDanceMoves": [
                {
                    "Move": {
                        "Name": "Thunder Clap",
                        "Favourite": "Nope"
                    }
                },
                {
                    "Move": {
                        "Name": "Moonwalk",
                        "Favourite": "Nope"
                    }
                },
                {
                    "Move": {
                        "Name": "The Robot",
                        "Favourite": "Yep"
                    }
                }
            ]
        },
        {
            "Id": 6,
            "RobotCoreDetails": {
                "Name": "Tracey",
                "Active": false
            },
            "RobotDanceMoves": [
                {
                    "Move": {
                        "Name": "Moonwalk",
                        "Favourite": "Yep"
                    }
                },
                {
                    "Move": {
                        "Name": "Thunder Clap",
                        "Favourite": "Nope"
                    }
                },
                {
                    "Move": {
                        "Name": "The Robot",
                        "Favourite": "Nope"
                    }
                }
            ]
        }
    ]
}

In an effort to show a more complex example, the following SQL produces nested content illustrating the relationship between ‘Houses’, ‘Rooms’ and ‘RoomObjects’. This uses the same principle of aliasing subqueries and linking the inner/outer queries as before.

SQL:

-- Illustration of producing a query that handles one-to-many relationships gracefully
SELECT 
h.Id AS [Id]
, h.[Name] AS [Name]
,
(
	SELECT
	r.Id AS [Id]
	, r.[Name] AS [Name]
	,
	(
		SELECT
		ro.Id AS [Id]
		, ro.[Name] AS [Name]
		FROM dbo.RoomObjects ro
		WHERE ro.RoomId = r.Id
		FOR JSON PATH
	) AS [RoomObjects]
	FROM dbo.Rooms r
	WHERE r.HouseId = h.Id
	FOR JSON PATH
) AS [Rooms]
FROM dbo.Houses h
FOR JSON PATH, ROOT('Houses');

JSON output:

{
    "Houses": [
        {
            "Id": 1,
            "Name": "House One",
            "Rooms": [
                {
                    "Id": 1,
                    "Name": "Lounge",
                    "RoomObjects": [
                        {
                            "Id": 1,
                            "Name": "Lamp"
                        },
                        {
                            "Id": 2,
                            "Name": "Sofa"
                        }
                    ]
                },
                {
                    "Id": 2,
                    "Name": "Kitchen",
                    "RoomObjects": [
                        {
                            "Id": 3,
                            "Name": "Knife"
                        },
                        {
                            "Id": 4,
                            "Name": "Kettle"
                        }
                    ]
                }
            ]
        },
        {
            "Id": 2,
            "Name": "House Two",
            "Rooms": [
                {
                    "Id": 3,
                    "Name": "Lounge Diner",
                    "RoomObjects": [
                        {
                            "Id": 5,
                            "Name": "Coffee Table"
                        },
                        {
                            "Id": 6,
                            "Name": "Armchair"
                        }
                    ]
                },
                {
                    "Id": 4,
                    "Name": "Kitchen Utility",
                    "RoomObjects": [
                        {
                            "Id": 7,
                            "Name": "Coffee Machine"
                        },
                        {
                            "Id": 8,
                            "Name": "Microwave"
                        }
                    ]
                }
            ]
        }
    ]
}

To really improve the nesting of the output it is possible to combine and align column and subquery aliases to get a nice, clean structure.

SQL:

-- Example extended with some more aliases to format the structure further 
-- (full nesting of House -> Rooms -> Room -> RoomObjects -> Object)
SELECT 
h.Id AS [House.Id]
, h.[Name] AS [House.Name]
,
(
	SELECT
	r.Id AS [Room.Id]
	, r.[Name] AS [Room.Name]
	,
	(
		SELECT
		ro.Id AS [Object.Id]
		, ro.[Name] AS [Object.Name]
		FROM dbo.RoomObjects ro
		WHERE ro.RoomId = r.Id
		FOR JSON PATH
	) AS [Room.RoomObjects]
	FROM dbo.Rooms r
	WHERE r.HouseId = h.Id
	FOR JSON PATH
) AS [House.Rooms]
FROM dbo.Houses h
FOR JSON PATH, ROOT('Houses');

JSON output:

{
    "Houses": [
        {
            "House": {
                "Id": 1,
                "Name": "House One",
                "Rooms": [
                    {
                        "Room": {
                            "Id": 1,
                            "Name": "Lounge",
                            "RoomObjects": [
                                {
                                    "Object": {
                                        "Id": 1,
                                        "Name": "Lamp"
                                    }
                                },
                                {
                                    "Object": {
                                        "Id": 2,
                                        "Name": "Sofa"
                                    }
                                }
                            ]
                        }
                    },
                    {
                        "Room": {
                            "Id": 2,
                            "Name": "Kitchen",
                            "RoomObjects": [
                                {
                                    "Object": {
                                        "Id": 3,
                                        "Name": "Knife"
                                    }
                                },
                                {
                                    "Object": {
                                        "Id": 4,
                                        "Name": "Kettle"
                                    }
                                }
                            ]
                        }
                    }
                ]
            }
        },
        {
            "House": {
                "Id": 2,
                "Name": "House Two",
                "Rooms": [
                    {
                        "Room": {
                            "Id": 3,
                            "Name": "Lounge Diner",
                            "RoomObjects": [
                                {
                                    "Object": {
                                        "Id": 5,
                                        "Name": "Coffee Table"
                                    }
                                },
                                {
                                    "Object": {
                                        "Id": 6,
                                        "Name": "Armchair"
                                    }
                                }
                            ]
                        }
                    },
                    {
                        "Room": {
                            "Id": 4,
                            "Name": "Kitchen Utility",
                            "RoomObjects": [
                                {
                                    "Object": {
                                        "Id": 7,
                                        "Name": "Coffee Machine"
                                    }
                                },
                                {
                                    "Object": {
                                        "Id": 8,
                                        "Name": "Microwave"
                                    }
                                }
                            ]
                        }
                    }
                ]
            }
        }
    ]
}
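
Before wrapping up, it’s worth mentioning that FOR JSON PATH accepts a couple of other options beyond ROOT: WITHOUT_ARRAY_WRAPPER strips the surrounding square brackets when you only expect a single object back, and INCLUDE_NULL_VALUES emits NULL columns as explicit JSON nulls. A quick, untested sketch against the Robots table showing the former:

-- Return a single Robot as a bare JSON object (no surrounding array brackets)
SELECT TOP 1
r.Id AS [Id]
, r.[Name] AS [Name]
, r.Active AS [Active]
FROM dbo.Robots r
ORDER BY r.Id
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;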

The full code sample is available here (creation of the tables, test data and some example stored procedures are clubbed together for you to play with as you want):

/*

	FOR JSON PATH - Full code example

*/

PRINT ('FOR JSON PATH - Full code example' + CHAR(13) + '--------------------------------------------');

-- 1) Create the 'JsonForTestDatabase', if required
IF (DB_ID('JsonForTestDatabase') IS NULL)
BEGIN

	PRINT ('Creating the ''JsonForTestDatabase'' sample database.');

	CREATE DATABASE JsonForTestDatabase;

END;
GO

-- 2) Explicitly switch the session to 'point' at the JsonForTestDatabase
USE [JsonForTestDatabase]
GO

-- 3) Create the test tables for example purposes (Robot/House tables)
IF(OBJECT_ID('Robots', 'U') IS NULL)
BEGIN

	PRINT ('Creating the ''Robots'' table.');

	CREATE TABLE dbo.Robots
	(
		Id					BIGINT			NOT NULL	PRIMARY KEY		IDENTITY(1, 1)
		, [Name]			NVARCHAR(20)	NOT NULL
		, Active			BIT				NOT NULL	DEFAULT(1)
	);

END;

IF(OBJECT_ID('RobotDanceMoves', 'U') IS NULL)
BEGIN

	PRINT ('Creating the ''RobotDanceMoves'' table.');

	CREATE TABLE dbo.RobotDanceMoves
	(
		Id					BIGINT			NOT NULL	PRIMARY KEY		IDENTITY(1, 1)
		, RobotId			BIGINT			FOREIGN KEY (RobotId) REFERENCES dbo.Robots(Id)
		, [Name]			NVARCHAR(20)	NOT NULL
		, FavouriteMove		BIT				NOT NULL	DEFAULT(0)
	);

END;

IF(OBJECT_ID('Houses', 'U') IS NULL)
BEGIN

	PRINT ('Creating the ''Houses'' table.');

	CREATE TABLE dbo.Houses
	(
		Id					BIGINT			PRIMARY KEY		IDENTITY(1, 1)
		, [Name]			NVARCHAR(20)
	);

END;

IF(OBJECT_ID('Rooms', 'U') IS NULL)
BEGIN

	PRINT ('Creating the ''Rooms'' table.');

	CREATE TABLE dbo.Rooms 
	( 
		Id					BIGINT			PRIMARY KEY		IDENTITY(1, 1)
		, HouseId			BIGINT			FOREIGN KEY (HouseId) REFERENCES dbo.Houses(Id)
		, [Name]			NVARCHAR(20)
	);

END;

IF(OBJECT_ID('RoomObjects', 'U') IS NULL)
BEGIN

	PRINT ('Creating the ''RoomObjects'' table.');

	CREATE TABLE dbo.RoomObjects 
	( 
		Id					BIGINT			PRIMARY KEY		IDENTITY(1, 1)
		, RoomId			BIGINT			FOREIGN KEY (RoomId) REFERENCES dbo.Rooms(Id)
		, [Name]			NVARCHAR(20)
	);

END;

-- 4) Drop/recreate the Utility SPs
IF(OBJECT_ID('GetRobotsWithFavouriteDanceMoveJson', 'P') IS NOT NULL)
BEGIN
	
	PRINT ('Dropping the ''GetRobotsWithFavouriteDanceMoveJson'' stored procedure.');

	DROP PROCEDURE dbo.GetRobotsWithFavouriteDanceMoveJson;

END;

PRINT ('Creating the ''GetRobotsWithFavouriteDanceMoveJson'' stored procedure.');
GO

CREATE PROCEDURE dbo.GetRobotsWithFavouriteDanceMoveJson
AS
BEGIN

	-- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
	SET NOCOUNT ON;

	-- Produce a very simple, flat representation of a Robot and their favourite dance move
	SELECT 
	r.Id
	, r.[Name]
	, r.Active
	, rdm.[Name] AS [FavouriteDanceMove]
	FROM dbo.Robots r
		INNER JOIN dbo.RobotDanceMoves rdm ON 
		(
			r.Id = rdm.RobotId
			AND rdm.FavouriteMove = 1
		)
	FOR JSON PATH;

	-- Return execution status of success
	RETURN 0;

END;
GO

IF(OBJECT_ID('GetFormattedRobotsWithFavouriteDanceMoveJson', 'P') IS NOT NULL)
BEGIN
	
	PRINT ('Dropping the ''GetFormattedRobotsWithFavouriteDanceMoveJson'' stored procedure.');

	DROP PROCEDURE dbo.GetFormattedRobotsWithFavouriteDanceMoveJson;

END;
	
PRINT ('Creating the ''GetFormattedRobotsWithFavouriteDanceMoveJson'' stored procedure.');
GO

CREATE PROCEDURE dbo.GetFormattedRobotsWithFavouriteDanceMoveJson
AS
BEGIN

	-- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
	SET NOCOUNT ON;

	-- Illustration of how the JSON can be manipulated and formatted further by altering column aliases
	SELECT 
	r.Id AS [Id]
	, r.[Name] AS [RobotCoreDetails.Name]
	, r.Active AS [RobotCoreDetails.Active]
	, rdm.[Name] AS [RobotDanceMove.FavouriteDanceMove]
	FROM dbo.Robots r
		INNER JOIN dbo.RobotDanceMoves rdm ON 
		(
			r.Id = rdm.RobotId
			AND rdm.FavouriteMove = 1
		)
	FOR JSON PATH;

	-- Return execution status of success
	RETURN 0;

END;
GO

IF(OBJECT_ID('GetRobotsWithFavouriteDanceMoveAndRootJson', 'P') IS NOT NULL)
BEGIN

	PRINT ('Dropping the ''GetRobotsWithFavouriteDanceMoveAndRootJson'' stored procedure.');
	
	DROP PROCEDURE dbo.GetRobotsWithFavouriteDanceMoveAndRootJson;

END;

PRINT ('Creating the ''GetRobotsWithFavouriteDanceMoveAndRootJson'' stored procedure.');
GO

CREATE PROCEDURE dbo.GetRobotsWithFavouriteDanceMoveAndRootJson
AS
BEGIN

	-- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
	SET NOCOUNT ON;

	-- Illustration of adding a ROOT clause to wrap the produced JSON content
	SELECT 
	r.Id AS [Id]
	, r.[Name] AS [RobotCoreDetails.Name]
	, r.Active AS [RobotCoreDetails.Active]
	, rdm.[Name] AS [RobotDanceMove.FavouriteDanceMove]
	FROM dbo.Robots r
		INNER JOIN dbo.RobotDanceMoves rdm ON 
		(
			r.Id = rdm.RobotId
			AND rdm.FavouriteMove = 1
		)
	FOR JSON PATH, ROOT('Robots');

	-- Return execution status of success
	RETURN 0;

END;
GO

IF(OBJECT_ID('GetFullRobotDetailAsJson', 'P') IS NOT NULL)
BEGIN

	PRINT ('Dropping the ''GetFullRobotDetailAsJson'' stored procedure.');
	
	DROP PROCEDURE dbo.GetFullRobotDetailAsJson;

END;

PRINT ('Creating the ''GetFullRobotDetailAsJson'' stored procedure.');
GO

CREATE PROCEDURE dbo.GetFullRobotDetailAsJson
AS
BEGIN

	-- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
	SET NOCOUNT ON;

	/*
	-- Represents an approach that fails to produce correctly nested results, on this occasion
	SELECT 
	r.Id AS [Id]
	, r.[Name] AS [RobotCoreDetails.Name]
	, r.Active AS [RobotCoreDetails.Active]
	, rdm.[Name] AS [RobotDanceMove.FavouriteDanceMove]
	FROM dbo.Robots r
		INNER JOIN dbo.RobotDanceMoves rdm ON r.Id = rdm.RobotId
	FOR JSON PATH, ROOT('Robots');

	-- Correctly nested results, illustration one
	SELECT 
	r.Id AS [Id]
	, r.[Name] AS [RobotCoreDetails.Name]
	, r.Active AS [RobotCoreDetails.Active] 
	,
	(
		SELECT
		rdm.[Name]
		, 
		-- Use of CASE statement just to show that normal manipulation of data is possible, as expected
		CASE
			WHEN rdm.FavouriteMove = 1
			THEN 'Yep'
			ELSE 'Nope'
		END AS [FavouriteMove]
		FROM dbo.RobotDanceMoves rdm
		WHERE rdm.RobotId = r.Id
		FOR JSON PATH
	) AS [RobotDanceMoves]
	FROM dbo.Robots r
	FOR JSON PATH, ROOT('Robots');
	*/

	-- Correctly nested results, illustration two (RobotDanceMoves abstracted into 'Move' using aliases)
	SELECT 
	r.Id AS [Id]
	, r.[Name] AS [RobotCoreDetails.Name]
	, r.Active AS [RobotCoreDetails.Active] 
	,
	(
		SELECT
		rdm.[Name] AS [Move.Name]
		, 
		-- Use of CASE statement just to show that normal manipulation of data is possible, as expected
		CASE
			WHEN rdm.FavouriteMove = 1
			THEN 'Yep'
			ELSE 'Nope'
		END AS [Move.Favourite]
		FROM dbo.RobotDanceMoves rdm
		WHERE rdm.RobotId = r.Id
		FOR JSON PATH
	) AS [RobotDanceMoves]
	FROM dbo.Robots r
	FOR JSON PATH, ROOT('Robots');

	-- Return execution status of success
	RETURN 0;

END;
GO

IF(OBJECT_ID('GetHouseDetailAsJson', 'P') IS NOT NULL)
BEGIN

	PRINT ('Dropping the ''GetHouseDetailAsJson'' stored procedure.');
	
	DROP PROCEDURE dbo.GetHouseDetailAsJson;

END;

PRINT ('Creating the ''GetHouseDetailAsJson'' stored procedure.');
GO

CREATE PROCEDURE dbo.GetHouseDetailAsJson
AS
BEGIN

	-- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
	SET NOCOUNT ON;

	/*
	-- Illustration of producing a query that handles one-to-many relationships gracefully
	SELECT 
	h.Id AS [Id]
	, h.[Name] AS [Name]
	,
	(
		SELECT
		r.Id AS [Id]
		, r.[Name] AS [Name]
		,
		(
			SELECT
			ro.Id AS [Id]
			, ro.[Name] AS [Name]
			FROM dbo.RoomObjects ro
			WHERE ro.RoomId = r.Id
			FOR JSON PATH
		) AS [RoomObjects]
		FROM dbo.Rooms r
		WHERE r.HouseId = h.Id
		FOR JSON PATH
	) AS [Rooms]
	FROM dbo.Houses h
	FOR JSON PATH, ROOT('Houses');
	*/

	-- Example extended with some more aliases to format the structure further 
	-- (full nesting of House -> Rooms -> Room -> RoomObjects -> Object)
	SELECT 
	h.Id AS [House.Id]
	, h.[Name] AS [House.Name]
	,
	(
		SELECT
		r.Id AS [Room.Id]
		, r.[Name] AS [Room.Name]
		,
		(
			SELECT
			ro.Id AS [Object.Id]
			, ro.[Name] AS [Object.Name]
			FROM dbo.RoomObjects ro
			WHERE ro.RoomId = r.Id
			FOR JSON PATH
		) AS [Room.RoomObjects]
		FROM dbo.Rooms r
		WHERE r.HouseId = h.Id
		FOR JSON PATH
	) AS [House.Rooms]
	FROM dbo.Houses h
	FOR JSON PATH, ROOT('Houses');

	-- Return execution status of success
	RETURN 0;

END;
GO

-- 5) Insert sample data into the test tables...part 1, robots...
IF NOT EXISTS
(
	SELECT 1
	FROM dbo.Robots
)
BEGIN

	PRINT ('Adding dbo.Robot sample data.');

	INSERT INTO dbo.Robots
	(
		[Name]
		, Active
	)
	VALUES
	('Barry', 1)
	, ('Steve', 0)
	, ('Dave', 1)
	, ('Zoe', 1)
	, ('Claire', 1)
	, ('Tracey', 0);

END;

IF NOT EXISTS
(
	SELECT 1
	FROM dbo.RobotDanceMoves
)
BEGIN

	PRINT ('Adding dbo.RobotDanceMoves sample data.');
	
	-- If no RobotDanceMoves have been inserted yet then produce a mock sample of randomised data, where each Robot
	-- receives all three dance moves with a randomly assigned 'favourite'
	INSERT INTO dbo.RobotDanceMoves
	(
		 RobotId
		 , [Name]
		 , FavouriteMove
	)
	SELECT 
	r.Id
	, rm.DanceMove
	, rm.FavouriteMove
	FROM dbo.Robots r
		CROSS APPLY 
		( 
			SELECT 
			TOP 3 
			am.DanceMove
			, 
			CASE 
				WHEN ROW_NUMBER() OVER (ORDER BY NEWID()) = 3
				THEN CAST(1 AS BIT)
				ELSE CAST(0 AS BIT)
			END AS [FavouriteMove]
			FROM
			(
				SELECT
				'Moonwalk' AS [DanceMove]
				UNION
				SELECT
				'Thunder Clap'
				UNION
				SELECT
				'The Robot'
			) AS am
			-- Reference to the outer Robots row correlates the APPLY, so the random ordering is evaluated per robot
			WHERE r.Id = r.Id
			ORDER BY NEWID()
		) AS rm;

END;

--...Part two, houses (very fixed, just as an illustration)
IF NOT EXISTS
(
	SELECT 1
	FROM dbo.Houses
)
BEGIN

	PRINT ('Adding dbo.Houses sample data.');

	INSERT INTO dbo.Houses
	(
		[Name]
	)
	VALUES
	('House One')
	, ('House Two');

END;

IF NOT EXISTS
(
	SELECT 1
	FROM dbo.Rooms
)
BEGIN

	PRINT ('Adding dbo.Rooms sample data.');

	INSERT INTO dbo.Rooms
	(
		HouseId
		, [Name]
	)
	VALUES
	(1, 'Lounge')
	, (1, 'Kitchen')
	, (2, 'Lounge Diner')
	, (2, 'Kitchen Utility');

END;

IF NOT EXISTS
(
	SELECT 1
	FROM dbo.RoomObjects
)
BEGIN

	PRINT ('Adding dbo.RoomObjects sample data.');

	INSERT INTO dbo.RoomObjects
	(
		RoomId
		, [Name]
	)
	VALUES
	(1, 'Lamp')
	, (1, 'Sofa')
	, (2, 'Knife')
	, (2, 'Kettle')
	, (3, 'Coffee Table')
	, (3, 'Armchair')
	, (4, 'Coffee Machine')
	, (4, 'Microwave');

END;
GO

--6) Execute all of the sample stored procedures
PRINT (CHAR(13) + 'Processing complete...running example stored procedures...');

EXEC dbo.GetRobotsWithFavouriteDanceMoveJson;
EXEC dbo.GetFormattedRobotsWithFavouriteDanceMoveJson;
EXEC dbo.GetRobotsWithFavouriteDanceMoveAndRootJson;
EXEC dbo.GetFullRobotDetailAsJson;
EXEC dbo.GetHouseDetailAsJson;

I hope there are some valuable and useful ideas in this post and, until the next time, happy coding.

Going Up…or…Going Down? OOP Inheritance!

A completely trivial subject coming up next…triggered by a really interesting debate I was involved in the other day. When dealing with class hierarchies and chaining constructor arguments through that hierarchy, what’s our mindset as to the direction of travel? Do we think of passing arguments ‘up’ or ‘down’ the hierarchy of objects (initially anyway, before the other constructors in the chain are called in the opposing direction)? The direction of travel, granted, is totally inconsequential and we’re dealing with a non-concrete concept anyway, but it seems like fun to debate it so why the hell not!

I’ve always thought of constructor arguments being chained ‘up’ to the ‘base’ class, which is where we hit the area that causes conceptual confusion. It was debated that a ‘base’ class ‘underpins’ something and sits as a stabilising entity at the ‘root’ (i.e. the bottom of a pyramid, the roots of a tree, the foundations of a building, etc.). This is an argument that seems to make a lot of sense and made me wonder if I’d been inverting this in my head all along. There was a split amongst us about the direction of travel, which turned out to be a good laugh to be fair!

So, the word ‘base’ – it certainly does lend some weight to the idea that we should be visualising our way travelling down, even though I’ve always thought of flying my way upwards in this regard! I decided to see what I could find, to answer what is (most likely) the most nonsensical and unimportant of questions!

1) A poor man’s Google image search for ‘object inheritance example’:

Object Hierarchies.

There are some interesting things going on here, just from a visual perspective. Classic inheritance (X ‘is a’ Y) appears to show the ‘base’ class at the top of (or, in a spatial sense, above the other ‘nodes’ in) most object hierarchies; this is where I would always term a base class a parent class. You can also see the odd example of the containment/delegation model on show (X ‘has a’ Y), with objects shown to the side of the referencing class (now we have to think about left and right! 🙂 ).

2) Blogs and Wiki:

The above two resources again show a hierarchy configuration with the parent class sitting above the inheriting (or derived) classes – this is how I’ve always viewed it. Sub-classes inherit from the parent class, like a family tree; think of a parent class as a family predecessor (a Grandfather, say), with sub-classes inheriting traits (genes), like classes that represent the Father and a subsequent Child. Therefore, the passing of constructor arguments, in my head, occurs in an upwards direction (from the Child back towards the Grandfather).

3) Good Old Fashioned Book:

The next source is a good old fashioned book (a really great one, by the way): Pro C# 7: With .NET and .NET Core.

Again, the term ‘parent’ class is frequently substituted for ‘base’, and when inheritance is discussed derived classes are denoted as ‘sub-classes’ or ‘children’. These are always visualised with the parent/base class shown at the top of the hierarchy tree. My mindset has always followed the logic outlined here:

Constructor chaining occurs through the use of inheritance. A sub-class constructor method’s first task is to call its superclass’ constructor method. This ensures that the creation of the subclass object starts with the initialization of the classes above it in the inheritance chain.

4) Default behaviour of a Class Diagram:

Lastly, what does a standard class diagram look like if you drag and drop a set of related classes onto it (Enemy being the top-level superclass, with Blob and Green Blob being descendants):

Example Classes.

Example Default Diagram Generated.

In this instance, the Enemy ‘base/parent’ class is shown at the top of the diagram with its derived classes below it. When a Green Blob is created we are chained through Blob to Enemy (via the ‘base’ keyword) in the first instance, to instantiate the Enemy class members first – did we go up or down to get there, that is the question (and are we going up or down when each sub-class’s constructor is called, in turn 😉 )?

So, from a hierarchical perspective, should we illustrate diagrams with a ‘base’ class ‘above’ or ‘below’ its derived classes? Or perhaps the debate lies in the semantics of how the process works? It could just come down to how a particular developer is wired and how they picture hierarchies in their own mind.

Everything I see (visually, and from every resource I seem to pick up) seems to back up my initial idea that constructor arguments should be considered to be chained ‘up’ through the hierarchy to a parent class (as opposed to chained ‘down’ to the parent class), but it doesn’t hold true for everyone (with the classic debate over the word ‘base’ suggesting something lurking at the ‘root’); perhaps my mind is slightly warped! Is there an answer? I don’t know. Does it matter to anyone? Again, I’m not sure! Has it been a waste of a few hours on a Saturday? Almost certainly!

I did have a good search for constructor chaining in relation to going ‘down’ and it does come up, but only in regard to overloading constructors and chaining calls on a singular object (not inheritance-based). I couldn’t see an object hierarchy illustrated with the base class at the, um, base (as a bottom node) – but it could well be out there! If anyone else fancies chipping into a debate that has probably never once been asked for (ever!) then feel free to pitch in a comment below, I’d love to hear from you. Cheers all and happy coding!

SQL Identity Column Monitoring Script

I recently found myself in a position whereby I needed to know how the identity columns within a given database were configured. The information I was hunting for was as follows:

  1. The identity column name and the table the column belongs to.
  2. The column Data Type.
  3. The last seed value used.
  4. The difference between the maximum value allowed for the Data Type and the current seed value.

I wanted something that gave a rough, first-pass account of this detail that I could script in under ten minutes (so not a complete solution by any account). I thought I’d share the result, which can be taken and expanded upon as needed. The core part of this script relies on the sys.identity_columns view, which contains the essential detail behind an identity column, such as the seed value, increment and last seed value used. As this view uses sys.columns as its backbone, you also get all of the standard column metadata included, like the column name, information on which table the column relates to and the column Data Type to boot; bonus!

You’ll notice that the following code assumes a seed ‘increment_value’ of 1, but catering for other increments is an enhancement that can easily be made. Here is the current SQL in its raw form:

-- A first stab at some operational monitoring. Grab all of the columns that support 'seeding' and, based on type, look at the last seed value recorded against
-- the maximum supported value (could expand this, use it for reporting, etc.)
SELECT 
st.[name] AS [TableName]
, ic.[Name] AS [SeedColumnName]
, stype.system_type_id AS [DataTypeId]
, stype.[name] AS [DataTypeName]
, 
CASE
	WHEN stype.system_type_id = 56
	THEN '2147483647'
	WHEN stype.system_type_id = 127
	THEN '9223372036854775807'
	ELSE 'Unknown'
END AS [MaximumAllowedPositiveValue]
, COALESCE(CONVERT(NVARCHAR(20), ic.last_value), 'Last Value Unset') AS [LastSeedValue]
, 
CASE
	WHEN COALESCE(CONVERT(NVARCHAR(20), ic.last_value), '') = ''
	THEN 'Cannot Calculate'
	WHEN stype.system_type_id = 56
	THEN CONVERT(NVARCHAR(20), (2147483647 - CAST(ic.last_value AS INT)))
	WHEN stype.system_type_id = 127
	THEN CONVERT(NVARCHAR(20), (9223372036854775807 - CAST(ic.last_value AS BIGINT)))
	ELSE 'Unknown'
END AS [CurrentDifferenceBetweenSeedAndMax]
FROM sys.identity_columns ic
	INNER JOIN sys.tables st ON ic.object_id = st.object_id	
	INNER JOIN sys.types stype ON ic.system_type_id = stype.system_type_id
ORDER BY st.[Name];

You should get output a little something like this:

Identity Columns Script Example Output.

The ‘last_value’ column is of type sql_variant, so note the use of casting where appropriate. I’ve observed that there is a possibility of ‘last_value’ being NULL, so I’ve attempted to cater for this using a CASE statement (more robust code is most likely needed here). The use of NVARCHAR(20) for conversion is built with INT/BIGINT in mind; this can be adjusted for identity values that use Data Types outside of this range.
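
As a possible next step (an untested sketch rather than part of the script above), the same sys.identity_columns metadata exposes the configured seed and increment values, so a rough ‘percentage of the positive range used’ figure can also be derived for INT/BIGINT columns:

-- Sketch: surface seed/increment values and an approximate percentage of the positive range consumed
SELECT 
st.[name] AS [TableName]
, ic.[name] AS [SeedColumnName]
, CONVERT(BIGINT, ic.seed_value) AS [SeedValue]
, CONVERT(BIGINT, ic.increment_value) AS [IncrementValue]
, 
CASE
	WHEN ic.last_value IS NULL
	THEN NULL
	WHEN stype.system_type_id = 56
	THEN CAST((CAST(ic.last_value AS FLOAT) / 2147483647) * 100 AS DECIMAL(5, 2))
	WHEN stype.system_type_id = 127
	THEN CAST((CAST(ic.last_value AS FLOAT) / 9223372036854775807) * 100 AS DECIMAL(5, 2))
END AS [ApproxPercentageOfPositiveRangeUsed]
FROM sys.identity_columns ic
	INNER JOIN sys.tables st ON ic.object_id = st.object_id
	INNER JOIN sys.types stype ON ic.system_type_id = stype.system_type_id
ORDER BY st.[name];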

I hope this comes in handy and/or forms the basis for a more complete solution in the future, I’ll tinker with this in due course no doubt as I need to expand it.

Here’s me signing off until the next time.

A Couple of Hours with Azure Maps

I’m having a random ‘pick a Channel 9 video and blog from there’ session; the subject of the day is Azure Maps and the inspiration came in the form of this video.

The plan is to see what I can achieve in an hour or two, so here’s my quick rundown to whet your whistle. Firstly, a quick resource list to get you going, which gives an idea of the product itself as well as details on pricing and the core API:

  1. Azure Maps
  2. Pricing
  3. Quick Starts

I’ll be (partly) following this quick start guide, but I may break away a bit and engage ‘rebel’ mode as that’s my style. 😛

Within the Azure Portal start by creating a new resource, searching using the keyword ‘Maps’; nice and simple for starters. Click ‘create’ as shown below:

Creating a Maps Resource.

For our next course of yumminess, simply fill in the mandatory fields specifying a Name, selecting a Subscription, an existing Resource Group (or creating a new one, which I did here for some clean separation) and finally selecting a Resource Group location that makes sense for you. I’ve opted to pin this resource to my dashboard for easy access later.

Create a Maps Account.

Once created, like many resources, we then just need to obtain the access key by going to ‘YOUR_MAP_RESOURCE’ in the Azure Portal > Settings > Keys. The sample application referenced on the demo resources page is doing a wonderful 404 trick at the time of writing, so I’ll see what I can put together as a basic sample myself, as I have the key in tow.

At this point I engaged ‘full nosiness mode’ and poking around further led me to some step-by-step samples; this looks like a good starting template. Using this template to generate my own code example (and throwing in some ES6 concepts for good measure), I came up with this lightweight, ‘one-shot’ HTML page in VS Code (I really need to use VS Code more, as it’s going great guns now and getting excellent traction in the development community from what I can gather):

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, user-scalable=no" />
    <title>Azure Map Test</title>
    <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/css/atlas.min.css?api-version=1.0" type="text/css" />
    <script src="https://atlas.microsoft.com/sdk/js/atlas.min.js?api-version=1.0"></script>
    <style>
        html,
        body {
            width: 100%;
            height: 100%;
            padding: 0;
            margin: 0;
        }

        #mapContainer {
            width: 100%;
            height: 100%;
        }
    </style>
</head>
<body>    
    <div id="mapContainer"></div>
    <script>
        // Encapsulation class that is a holding container for search parameters
        class SearchOptions {
            constructor(subscriptionKey, searchTerm, startLatitude, startLongitude, searchRadius ) {
                this.subscriptionKey = subscriptionKey;
                this.searchTerm = searchTerm;
                this.startLatitude = startLatitude;
                this.startLongitude = startLongitude;
                this.searchRadius = searchRadius;
            }
            // Utility function for generating a search url based on the class properties
            generateSearchUrl() {
                return `https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&query=${ this.searchTerm }&subscription-key=${ this.subscriptionKey }&lat=${ this.startLatitude }&lon=${ this.startLongitude }&radius=${ this.searchRadius }`;
            }
        }

        // Function for generating a map (using the mapContainer element reference provided and subscription key)
        function getMap(mapContainer, subscriptionKey) {
            return new atlas.Map(mapContainer, {
                "subscription-key": subscriptionKey
            });
        }

        // Function for preparing the pin layer on the targeted map using the provided layer name
        function prepareMapPins(map, searchLayerName, pinType) {
            map.addPins([], {
                name: searchLayerName,
                cluster: false,
                icon: pinType
            });
        }

        // Function that processes the data from 'fetch' and adds pins (POIs) to the map using the returned json data
        function processMapData(data, searchLayerName, map, cameraPadding) {
            if (data != null && data.results != null && data.results.length > 0) {
                // Initialise a searchPins array and limit the returned json data to those that are marked as POIs
                let searchPins = [],
                    poiResults = data.results.filter((result) => { return result.type === "POI" }) || [];

                // Extract features from the returned data and add it to the searchPins array (this contains location-based information)
                searchPins = poiResults.map((poiResult) => {
                    let poiPosition = [poiResult.position.lon, poiResult.position.lat];

                    return new atlas.data.Feature(new atlas.data.Point(poiPosition), {
                        name: poiResult.poi.name,
                        address: poiResult.address.freeformAddress,
                        position: poiResult.position.lat + ", " + poiResult.position.lon
                    });
                });

                // Add POIs discovered to the appropriate search layer
                map.addPins(searchPins, {
                    name: searchLayerName
                });

                // Set the map camera to be fixed on the 'searchPins'
                let lons = searchPins.map((pin) => pin.geometry.coordinates[0] ),
                    lats = searchPins.map((pin) => pin.geometry.coordinates[1] ),
                    swLon = Math.min.apply(null, lons),
                    swLat = Math.min.apply(null, lats),
                    neLon = Math.max.apply(null, lons),
                    neLat = Math.max.apply(null, lats);

                map.setCameraBounds({
                    bounds: [swLon, swLat, neLon, neLat],
                    padding: cameraPadding
                });             
            }
        }

        // Function that is triggered on 'mouseover' of a pin element to display extra information
        function createMouseOverPopUps(e, popup, map) {
            let popupContentElement = document.createElement("div");
            popupContentElement.style.padding = "5px";

            let popupNameElement = document.createElement("div");
            popupNameElement.innerText = e.features[0].properties.name;
            popupContentElement.appendChild(popupNameElement);

            let popupAddressElement = document.createElement("div");
            popupAddressElement.innerText = e.features[0].properties.address;
            popupContentElement.appendChild(popupAddressElement);

            let popupPositionElement = document.createElement("div");
            popupPositionElement.innerText = e.features[0].properties.position;
            popupContentElement.appendChild(popupPositionElement);

            popup.setPopupOptions({
                position: e.features[0].geometry.coordinates,
                content: popupContentElement
            });

            popup.open(map);
        }

        // Function to actually create the map
        function createMap() {
            // Alter the query parameters here for testing, add a subscription key, search term (e.g. 'hairdressers' or 'pubs'), 
            // the latitude/longitude to begin the search from and the radius to search (in metres)
            const subscriptionKey = "INSERT_SUBSCRIPTION_KEY_HERE",
                  searchTerm = 'pubs',
                  startLatitude = '52.630181',
                  startLongitude = '1.297415',
                  searchRadius = 1000,
                  // The 'search layer' that will contain the discovered 'pins' and will tie to mouse over pop-ups
                  searchLayerName = "search-results",
                  // Use this to switch out the pin type on render (https://docs.microsoft.com/en-us/javascript/api/azure-maps-javascript/pinproperties?view=azure-iot-typescript-latest)
                  pinType = "pin-red",
                  // Had issues when searching a small radius and having this value too high (overlapping pins???) - but adjust as necessary
                  cameraPadding = 1;

            // Encapsulate the search constants into a utility class which contains a function for calculating a
            // search URL. Also, generate a map/popup object pre-search to get us started
            let options = new SearchOptions(subscriptionKey, searchTerm, startLatitude, startLongitude, searchRadius),
                popup = new atlas.Popup(),
                map = getMap('mapContainer', subscriptionKey);

            // Initialise the pin layer for the targeted map
            prepareMapPins(map, searchLayerName, pinType);

            // Use fetch to call the generated search URL and process the response to add data points (POIs in this case) to the map
            fetch(options.generateSearchUrl())
                .then(response => response.json())
                .then(data => processMapData(data, searchLayerName, map, cameraPadding));

            // Add a popup to the map which will display some basic information about a search result on hover over a pin
            map.addEventListener("mouseover", searchLayerName, (e) => {
                createMouseOverPopUps(e, popup, map);
            });
        }

        // Create the sample map!
        createMap();
    </script>
</body>
</html>

I’ve added inline comments to try and explain the core workings of the objects on show. In essence, you just need to:

  1. Ensure the atlas.min.css style sheet is in scope.
  2. Ensure the atlas.min.js script is in scope.
  3. Create a div with a selector (using an id in this instance) so it can be targeted.
  4. Call atlas.Map specifying the container (the div you previously created) you want to render the map within, along with a valid subscription key.

In this example, I create a SearchOptions class that acts as a way of encapsulating the configurable parts of a search and provides a means of generating a dynamic search URL using a template string (template literal). The createMap function is called first and creates a SearchOptions instance up front; this function is where you can modify the search parameters as you see fit. When using this sample code be sure to switch out ‘INSERT_SUBSCRIPTION_KEY_HERE’ for a valid subscription key. You can specify a latitude/longitude as a linchpin for the search, a search radius in metres and a search term to target specific points of interest (POIs).

Along with the SearchOptions object, a ‘popup’ utility object (to handle how popups are rendered when a map pin is ‘moused over’) and the physical map itself are created, the latter via the getMap function. This is where atlas.Map is called, for reference.

To render pins on the map for POIs a named ‘layer’ must be created against the map object in scope. This is handled via a call to prepareMapPins. There is some scope to customise how a rendered pin looks; see the URL listed against the pinType constant in the sample code for more details.

I use ‘fetch’ to call the API with a generated URL/embedded query and then process the returned JSON data using the processMapData function. This is where the physical pins for POIs are added. Each POI discovered has a latitude/longitude, which is extracted in the form of an atlas.data.Feature. These features are added to the map via a call to the addPins function, specifying the search layer to attach them to (so I’ve inferred here that you can indeed have multiple layers rendering different information, which is awesome).

Some calculations are then performed to generate values to set the ‘camera’ location so that it focuses in on the area of the discovered POIs. All in all, it is actually pretty simple and is easy to get fully up and running within the first hour or so.

Lastly, a small mouseover event listener is added to provide a popup (using the previously created popup utility object) for each pin. The createMouseOverPopUps function takes care of this little monkey for us.

The only difficulty I had was that large padding values on the camera didn’t seem to play ball when using a small search radius; it took me a while to figure out that a) results were inaccurate when doing this and b) pins were overlapping and looked as if they were missing, so this is something to watch out for! I’m not sure why this knocked on to the pin locations, as it appears to be a camera setting. I’ve left the padding as 1, but a value of around 5 also appeared to work fine.

So…what does it look like, I hear you ask? Here’s the first results set, for pubs, which, for anyone who knows me, is most likely not a surprise! 😉

Brewdog Location.

Brewdog Location.

The accuracy here, speaking as someone from Norwich, is…a little mixed. The location of Brewdog is near enough and another pin for Gonzos is on the mark, although the returned metadata lists it as ‘Havanas’, which is out of date. Some of the other listed POIs are flat out wrong, or omitted entirely (perhaps because they are categorised as ‘bars’ or ‘restaurants’, for example), even when I know they are in range based on the radius. I did a follow-up search for hairdressers, which seemed to be much more on the mark:

Anglian Hair Academy Map.

Anglian Hair Academy Map.

Anglian Hair Academy Street View.

Anglian Hair Academy Street View.

I had no idea that the Anglian Hair Academy even existed; thankfully my wife was there to set me straight, and it’s in the right place apparently. From what I know, the other pins are pretty accurate (Google Maps actually looked a little out of date this time around). I tested this one last time on supermarkets in the area and it was reasonably accurate in the majority of cases.

This is an interesting little API to experiment with and please feel free to take this code and play around with it as you see fit. Also, please get in touch if the inaccuracies I saw here are due to some kind of error on my part, I’d love to hear how you all get on.

Thanks all and keep coding!

Top Ten Tips to Prevent Insanity (Software Psychosis)

Yoyo, you wonderful bear followers! I hope today finds you well and, for those of you in the UK, that your snow-related torment hasn’t been too painful to endure and you’ve all stayed safe 🙂

In the shower today (apologies for the mental imagery!) I started to mull over what I would tell someone, off the cuff of course, if they asked me ‘give me your top ten tips for surviving the day-to-day stuff. Y’know, as quickfire as possible!’. Don’t ask me why this was relevant to my shower, but it’s fairly commonplace for me to get highly introspective standing under a stream of water; it’s one of my things, if you will.

So, in no particular order of importance, here we go. Some are geared towards software development and some are straight up general comments on how I try to hop, skip and jump from day to day.

  1. Take a deep breath and time-box your day, as best as you can at least. Here I would recommend, outside of any other work management software you have to bump heads with, a simple Trello board to get yourself organised. Create cards and put a ‘due date’ on them. Two columns can suffice; an unembellished setup of ‘TODO’ and ‘DONE’ to drop cards between will serve you just fine, so give it a go (I’ve been using it pretty much daily for years now).
  2. Adopt the rule of three. Three things to accomplish for the day, week and year; although I have to admit I struggle with the year one, I’m still a lowly padawan most likely! Scott Hanselman discusses this concept in this excellent video.
  3. Since coming into contact with Git I have learned two things that are having a profound effect on a) my sanity and b) the way I work: commit often, push often and, as far as possible, keep change sets small! I like Unit Tests too so, without getting into the politics of whether you should or shouldn’t adopt them, I will settle for a simple ‘try them and see how you get on with them’ at the very least. It’s a good place to leave it for now 😉
  4. Get comfortable with having patchy knowledge! It’s going to happen, no question. Sometimes just knowing that a ‘thing’ exists is enough to get yourself going in the right direction or nudge a colleague so they can find the right solution. You can always follow up and learn the ins and outs of something later. Don’t stress yourself out with the crazy notion of knowing the nitty-gritty on everything you come into contact with. That’s the problem with knowledge…the more you have the more you realise there are massive expanses of information out there (at the end of one horizon is another, don’t sweat it)!
  5. Walk away, take a break, have a shower…do something else when you’re stuck. I’m a hardcore breaker of this rule and suffer for it!
  6. Listen to others when they tell you to stop, from time to time at least (outside observers often know best and will see the crazy-loon face you have adopted in a time of stress; my wife often braves this and politely says I need to stop)!
  7. Pomodoros are good!!! The basic setup is a) pick a task and b) work at it for 25 minutes in a focused manner and finally c) take a 5-minute break. Wash, rinse and repeat (with a larger break after several ‘pomodoros’). For complex tasks where I need to perform focused bursts to produce ‘mini-sprints’ of work, this is an excellent way of working to adopt.
  8. I like to talk…try it more often! Instant chats are all well and good but if a message begins to stray into ‘non-trivial’ territory, for example several paragraphs of information that could be misinterpreted (or just take too long to actually key in!), just opt for walking around the office or picking up the phone. I like nattering and connecting; it’s a liberating feeling that will break down walls, especially if you’ve been cooped up for an extended period of time crunching a problem.
  9. Be careful with that lunch-break-skipping behaviour! It’s an easy habit to get into and almost everyone I know does it – I’ve recently been attempting to, at a minimum, always get in thirty minutes unless an apocalyptic development event is in progress. Drink water, eat food and read an article on something unrelated (and mix up your work environment from time to time)!
  10. One from my wife and an excellent piece of advice – do a ‘power dance’ (five or ten minutes is the recommended time I’m told). Harder if you work in an office, but see what you can get away with I guess (I take no responsibility for dance-related disciplinary events)!

If any of these are helpful to you then I’ll walk away from writing this a happy man (I mean, bear, ahem). If it’s completely useless, perhaps it’ll at least steer you in a direction where you find something that does work for you; in which case, bravo.

I hope you’ve enjoyed this little stream of consciousness and, until the next time, happy coding and honey scoffing!

Experimenting with Azure CDN

With the gradual piecing together of the Lego bricks forming the slow migration of the Frog & Pencil website to a more managed approach (the building of a custom CMS and an all-around better ASP.NET MVC architecture), I thought it would be interesting to document moving the Frog & Pencil images over to a CDN. I was inspired to give this a go after watching Scott Hanselman make the switch for his podcast site images and other Azure Friday videos, as documented here:

Scott Hanselman lifting and shifting images over to a CDN.
Azure CDN with Akamai.

It seemed like a relatively painless process and is a step in the right direction for our site as a whole; so, let’s give it a go!

NOTE: A short way into this post I realised that I was making a few missteps. This is cool, I think, as I would rather document the journey I took with the mistakes listed, to be honest – #KeepingItReal! However, for sanity (mine and yours) I’ll specify here the ‘correct’ order of events that you should follow, which you can marry up with the ramblings below:

  1. Sign in to the Azure Portal.
  2. Create a storage container, if you don’t already have one.
  3. Download and utilise a storage explorer application (such as Azure Storage Explorer).
  4. Create a CDN Profile and CDN endpoint (that ties explicitly to your storage container, in this instance).
  5. Go to your DNS settings and generate a CNAME property, mapping a custom domain to your CDN if you wish to.
  6. Optionally, learn how to programmatically interact with your storage container.

Azure Portal – First Steps (documenting the journey)

First things first, we must hop on over to the Azure Portal. I searched the marketplace for ‘CDN’ and clicked create in the right-hand pane, as shown:

Creating a CDN

Creating a CDN.

The next phase involves configuring a CDN profile. The profile needs to be given a name and should be attached to an Azure Subscription. I’ve created a new Resource Group, by specifying a name for it, but it is possible to select an existing one for use here. There are some guidelines surrounding Resource Groups, such as items within a group should share the same lifecycle; more details can be found within this handy documentation article, read away!

The Azure CDN service is, of course, global but a Resource Group location must be set, which governs where resource metadata is ultimately stored. This could be an interesting facet to consider if there are particular compliance considerations regarding the storage of information and where it should be placed. I’m going with West Europe either way; a nice, easy choice this time around.

As for pricing, I have decided to head down the Akamai route, using the Standard Akamai pricing tier. I will have to see how this ultimately pans out cost-wise over time, but it seems reasonable:

Azure CDN Provider Pricing

Azure CDN Provider Pricing.

At this point, we can explicitly create a CDN endpoint (where resources will be ultimately exposed). The endpoint has a suffix of ‘.azureedge.net’ and I’ve simply specified the first part of our domain, ‘frogandpencil’ as the prefix.

This is where I hit a bit of a revelation with the ‘Origin Type’ drop-down. You can select from Storage, Cloud service, Web app or Custom origin (which is cool!); I want to use Storage. After selecting this I can pick an ‘Origin hostname’. The light-bulb moment here, for me, was that I should have created a storage container first! I’d watched enough videos to have dodged this little problem, but I still managed to stumble…all part of the learning process 😉

So… Let’s Create a Storage Container

Back to the marketplace then. The obvious pick seems to be ‘Storage account – blob, file, table, queue’, so I’ve gone ahead and clicked create here:

Setup Azure Storage.

Setup Azure Storage.

When creating the storage account there are a fair few options to consider, a good number of which read as if they will impact pricing. I had to use the documentation found here to make my choices. I settled on the setup described below (for images; as the site isn’t yet using HTTPS, I’ve gone with the secure transfer feature disabled – one for review in the future):

As an overview, the guidance suggests the use of the ‘Resource manager’ type of ‘Deployment model’ for new applications. There doesn’t seem to be a penalty for using the ‘StorageV2’ ‘Account kind’, which extends the types of data that can be stored beyond just blobs, so that is what I am going for.

Performance-wise, the ‘standard’ option seems like an acceptable setting at the moment and, for the kind of data I’ll be storing (images for now, and possibly other static content later down the line), I can opt out of any geo-redundant replication options. In the event of resource downtime, I can easily switch to using resources local to the website. Plus, no data would really be lost; everything is easily rebuilt and recoverable.

As for the ‘Access tier’, I’m heading down the ‘Hot’ route as images will be accessed quite frequently (we have the CDN to consider here so I might tinker later on down the line).

I then pick a Subscription, give the Resource Group a name and select my region of choice before continuing.

I then get a new blade on the dashboard (which took a minute to create) and, on accessing it, am presented with the following:

Storage Setup.

Storage Setup.

Managing the Storage Container

The first and perhaps most obvious choice for managing and actually getting some content up into the storage container is the Azure Storage Explorer, which I’ll be downloading and using.

After a painless install process, you should see the following, where you will be asked to connect to Azure Storage:

Connect to Azure Storage.

Connect to Azure Storage.

I simply used my Azure account sign-in details here. I did notice, however, that the Azure Portal exposes keys and connection strings under ‘Access Keys’ (within the storage account dashboard). I’m assuming these support other kinds of access, including programmatic access, which I’ll give a go as part of this post (as a wee bonus).

I used the right-click context menu to create a new container called ‘images’ and then used the upload button to push up a test image:

Azure Storage Explorer Upload Image.

Azure Storage Explorer Upload Image.

Again, against the container I used the right-click context menu to select ‘Set Public Access Level…’, which I’ve set as follows to allow public access to the blob data but not the container:

Container Public Access Setup.

Container Public Access Setup.

I now have a blob container with a single image in it and appropriate access rights configured. The question is: can I access the image in its current state? We’re looking pretty good from what I can see.

Successful Access.

Successful Access.

Adding a custom domain

Next up, I plan on adding a custom domain to the storage account. To do this, I access the ‘Custom domain’ option as shown here:

Register Custom Domain.

Register Custom Domain.

I followed option 1 as listed here and created a CNAME record mapping images.frogandpencil.com to frogandpencilstorage.blob.core.windows.net (I’m happy to wait for this to propagate).

Register images.frogandpencil.com.

Register images.frogandpencil.com.

Once the CNAME record is created you simply have to place your custom domain in the text box provided and hit save.

New CNAME property.

New CNAME property.

Lastly, let’s take it for a spin and see whether we can access the image in the storage container via the custom URL…and voila:

Custom Domain Active.

Custom Domain Active.
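
As a quick aside (and purely for illustration, as none of this appears in the screenshots above), it’s worth being clear on the URL shapes in play here. The standard public blob endpoint follows the pattern https://<account>.blob.core.windows.net/<container>/<blob>, and once the images.frogandpencil.com CNAME has propagated the host portion can simply be swapped out. A minimal C# sketch, where the blob name is a hypothetical example:

// Illustrative sketch only - the blob name passed in is hypothetical
// (assumes the usual 'using System;' for Console)
private static void WriteImageUrls(string blobName)
{
    // Standard public blob endpoint: https://<account>.blob.core.windows.net/<container>/<blob>
    Console.WriteLine($"https://frogandpencilstorage.blob.core.windows.net/images/{ blobName }");

    // Equivalent URL via the custom domain, once the CNAME record has propagated
    Console.WriteLine($"http://images.frogandpencil.com/images/{ blobName }");
}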

Back to the CDN bit!

We’ve come full circle! With a storage container in place I can now use it to feed the configured CDN. Consequently, I backtracked and followed the previously listed steps, being sure to select an ‘Origin hostname’ pointing to the newly created storage container:

CDN Profile & Endpoint Configuration.

CDN Profile & Endpoint Configuration.

On clicking create it takes a short time for the CDN to be configured.

So, what do I do now?

Looking through the videos I made another discovery. This is where I want to adjust the previously created CNAME property (that I set up for the storage container) and hook it up to the CDN endpoint instead. The portal exposes custom domain mapping for a CDN much like it does for a storage container:

Change CNAME to map to CDN.

Change CNAME to map to CDN.

Portal CDN Custom Domain Mapping.

Portal CDN Custom Domain Mapping.

Again, I had to wait a short time for the CNAME property change to propagate but, after that, I was all set. I then spent a little time verifying that the CDN was up and running. There are quite a few optimisation options including the ability to set a custom ‘Origin path’ (such as ‘images’) but I’m leaving these be for the time being.

The Bonus Section – Programmatically Add Items to Azure Storage

As promised, this next section discusses (in a very bare-bones fashion) what is required to write to an Azure storage container. I’ve created a stub Console Application to get up and running with, and the process itself is simple (not considering errors, existence checks and threading, of course!).

You need to:

  1. Reference the WindowsAzure.Storage NuGet package.
  2. Add a reference to System.Configuration (if you want to put connection strings, folder paths and container names in configuration files and read them out).
  3. Then simply follow the code outlined below to get started.

In my test setup, the ‘SourceDirectory’ is looking at ‘C:\test-files\’ (contains just images) and the ‘TargetContainer’ is called ‘images’, as per my earlier configuration. The connection string can be obtained from the Azure Portal, under ‘Storage Account > Settings > Access Keys’.

Test Files ready for upload.

Test Files.

Storage Access Keys.

Storage Access Keys.

The App.config for the test application is structured like this, with the connection string being set to the correct value as per the information found in the Azure Portal.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6.1" />
    </startup>
  <connectionStrings>
    <add name="FrogAndPencilStorageConnection" connectionString="[OBTAINED_FROM_THE_AZURE_PORTAL]" />
  </connectionStrings>
  <appSettings>
    <add key="SourceDirectory" value="C:\test-files\"/>
    <add key="TargetContainer" value="images"/>
  </appSettings>
</configuration>

Then, finally, the actual test code which…

  • Attempts to connect to the storage account, creating a CloudStorageAccount object based on the connection string information supplied.
  • Then uses the CloudStorageAccount object to create a new CloudBlobContainer object (based on the container name stored in the configuration settings).
  • Finally, utilises this CloudBlobContainer, along with information about the files to process, to actually perform the upload.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.IO;
using System.Linq;

namespace WriteToAzureStorageTestApp
{
    /// <summary>
    /// Test application for writing to Azure Storage.
    /// Basic, test code only (throwaway code).
    /// </summary>
    internal class Program
    {
        #region Main (Entry Point) Method

        /// <summary>
        /// Main entry point method for this console application.
        /// </summary>
        /// <param name="args">Optional input arguments.</param>
        private static void Main(string[] args)
        {
            DemoWritingToAzureStorage();
        }

        #endregion Main (Entry Point) Method

        #region Private Static Methods

        /// <summary>
        /// Private static demo method illustrating how to upload to Azure Storage.
        /// </summary>
        private static void DemoWritingToAzureStorage()
        {
            // First use the FrogAndPencilStorageConnection connection string (for Azure Storage) to obtain a CloudStorageAccount, if possible
            CloudStorageAccount.TryParse(ConfigurationManager.ConnectionStrings["FrogAndPencilStorageConnection"].ConnectionString, out CloudStorageAccount storageAccount);
            if (storageAccount != null)
            {
                // We have a CloudStorageAccount...proceed to grab a CloudBlobContainer and attempt to upload any files found in the 'SourceDirectory' to Azure Storage
                Console.WriteLine("Obtaining CloudBlobContainer.");

                CloudBlobContainer container = GetCloudBlobContainer(storageAccount);

                Console.WriteLine("Container resolved.");

                Console.WriteLine("Obtaining files to process.");

                List<string> filesToProcess = Directory.GetFiles(ConfigurationManager.AppSettings["SourceDirectory"]).ToList();

                UploadFilesToStorage(container, filesToProcess);
            }

            Console.WriteLine("Processing complete. Press any key to exit...");
            Console.ReadLine();
        }

        /// <summary>
        /// Private static utility method that obtains a CloudBlobContainer
        /// using the container name stored in app settings.
        /// </summary>
        /// <param name="storageAccount">The cloud storage account to retrieve a container based on.</param>
        /// <returns>A fully instantiated CloudBlobContainer, based on the TargetContainer app setting.</returns>
        private static CloudBlobContainer GetCloudBlobContainer(CloudStorageAccount storageAccount)
        {
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            return blobClient.GetContainerReference(ConfigurationManager.AppSettings["TargetContainer"]);
        }

        /// <summary>
        /// Private static utility method that, using a CloudBlobContainer, uploads the
        /// files passed in to Azure Storage.
        /// </summary>
        /// <param name="container">A reference to the container to upload to.</param>
        /// <param name="filesToProcess">The files to upload to the container.</param>
        private static void UploadFilesToStorage(CloudBlobContainer container, List<string> filesToProcess)
        {
            // Process each file, uploading it to storage and deleting the local file reference as we go
            filesToProcess.ForEach(filePath =>
            {
                Console.WriteLine($"Processing and uploading file from path '{ filePath }' (then deleting).");

                // Upload the file based on name (note - there is no existence check or guarantee of uniqueness - production code would need this)
                container.GetBlockBlobReference(Path.GetFileName(filePath)).UploadFromFile(filePath);

                RemoveFileFromLocalDirectory(filePath);
            });
        }

        /// <summary>
        /// Private static utility method for deleting a file.
        /// </summary>
        /// <param name="filePath">The file path (full) to delete based upon.</param>
        private static void RemoveFileFromLocalDirectory(string filePath)
        {
            // Only attempt the delete if the file exists
            if (File.Exists(filePath))
            {
                File.Delete(filePath);
            }
        }

        #endregion Private Static Methods
    }
}
Test Upload Application Running.

Test Upload Application Running.

Test Files Uploaded.

Test Files Uploaded.

There you have it; a rather around-the-houses and off-the-wall tour of setting up an Azure storage container and then linking it to an Azure CDN. Plenty of images still need to be brought over into the new storage container (and a few code changes made to boot), but I feel like I am on a pilgrimage to a better place. I hope this proves useful nonetheless and, until the next time, happy coding!

Addendum

After a further play I realised that the C# example I’d knocked up was not setting the content type correctly on upload, as follows:

Incorrect Content Type.

Incorrect Content Type.

To this end, I adjusted the UploadFilesToStorage method to set the content type on a CloudBlockBlob before the upload is triggered, as illustrated here:

/// <summary>
/// Private static utility method that, using a CloudBlobContainer, uploads the
/// files passed in to Azure Storage.
/// </summary>
/// <param name="container">A reference to the container to upload to.</param>
/// <param name="filesToProcess">The files to upload to the container.</param>
private static void UploadFilesToStorage(CloudBlobContainer container, List<string> filesToProcess)
{
	CloudBlockBlob blockBlob;

	// Process each file, uploading it to storage and deleting the local file reference as we go
	filesToProcess.ForEach(filePath =>
	{
		Console.WriteLine($"Processing and uploading file from path '{ filePath }' (then deleting).");

		// Upload the file based on name (note - there is no existence check or guarantee of uniqueness - production code would need this)
		blockBlob = container.GetBlockBlobReference(Path.GetFileName(filePath));

		// Correctly configure the content type before uploading
		blockBlob.Properties.ContentType = "image/jpg";

		blockBlob.UploadFromFile(filePath);

		RemoveFileFromLocalDirectory(filePath);
	});
}
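
As a side note, hard-coding ‘image/jpg’ is fine for this JPEG-only test folder (strictly speaking, ‘image/jpeg’ is the registered MIME type, although browsers tend to accept both), but if a mix of file types ends up in the source directory it may be worth deriving the content type from the file extension instead. The following is a minimal sketch of a helper that could replace the hard-coded value; the extension-to-type mappings here are an illustrative assumption, so extend them (or swap in a dedicated MIME type library) as required:

/// <summary>
/// Private static utility method that maps a file extension to a MIME content type.
/// Illustrative sketch only - extend the mappings (or use a dedicated library) as required.
/// </summary>
/// <param name="filePath">The file path (full) to derive a content type from.</param>
/// <returns>A best-guess MIME content type for the given file.</returns>
private static string ResolveContentType(string filePath)
{
    // Path.GetExtension returns the extension including the leading dot (e.g. ".jpg")
    switch (Path.GetExtension(filePath).ToLowerInvariant())
    {
        case ".jpg":
        case ".jpeg":
            return "image/jpeg";
        case ".png":
            return "image/png";
        case ".gif":
            return "image/gif";
        default:
            // Safe fallback for anything unrecognised
            return "application/octet-stream";
    }
}

The line setting blockBlob.Properties.ContentType could then call ResolveContentType(filePath) rather than using the hard-coded string.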

You should then see items with the correct content type in the container:

Correct Content Type.

Correct Content Type.

To see the re-uploaded images via the custom domain (essentially my CDN) I also had to ‘purge’ the endpoint at this point, presumably because the previously cached copies were still being served.

Again, happy coding.