Aidan Garnish

Collaboration Not Competition

On the move!

After 7 years of blogging on this platform I have decided to try something new. I will now be blogging using the Ghost blogging platform.

All the existing SharePoint content still gets a reasonable number of hits (it is remarkable how many people are still interested in the STSAdm commands for deploying a wsp!) so rather than trying to complete the almost impossible task of moving it without breaking existing links I will be leaving it in place. If I produce any SharePoint blogs in future I will most likely put them here but for now new content is likely to be slim pickings on this blog. Fear not though as there is already a scintillating tale of the Internet of Things and home automation over on the new blog so go and take a look.

Support Spanish City Regeneration

No, not Granada or Seville, the Spanish City in Whitley Bay, a grade II listed building on the north east coast just down the road from Newcastle!

North Tyneside Council is currently working to regenerate the iconic building after years of neglect and needs your support to convince the Heritage Lottery Fund that some of its cash is worth spending on this project.

After visiting Spanish City yesterday on a rare open day, I decided to put together a website to make it easier to find the online form that enables people to express their support for the scheme (I still haven't found a link to it from the council's own website!).

So, all you need to do is visit the Spanish City site to find out more and to pledge your support by filling in an extremely short form.

If you also fancy sharing the link and encouraging your friends to add their support too, well, the more the merrier. The majestic dome thanks you for your support!

SQLServerSpatial.dll and Azure Cloud Services

There is a great article here by Alistair Aitchison detailing how to get the SQL Server spatial types working with Azure roles. Even with the aid of that article it took a little bit of trial and error to get it right.

The main issue I found was that when adding SQLServerSpatial.dll from C:\Windows\System32 the Add Existing Item dialogue was picking up the SQLServerSpatial.dll file from the SysWOW64 folder.

You can tell the difference by the file size: the SysWOW64 file is around 230k whilst the System32 file is over 400k. Make sure you add the larger file. I eventually had to copy the file to another location and add it from there to get the correct file added.

I also found that I didn't need to add the msvcp and msvcr dlls - this is probably because the standard VM running an Azure Cloud Service now comes with these available.

Hope this saves someone the few hours it took me to get this working!

Configuring Azure AD as an Identity Provider in ACS

There have been some big improvements recently in the ease of configuring applications to authenticate using Azure AD. It is now possible to manage configuration of your applications through the Azure portal as part of managing Azure AD.

There are lots of tutorials on how to set up Azure AD to work with your apps.

There is also lots of information on using ACS to work with Google, Facebook, Yahoo, Microsoft accounts and on premise AD FS 2.0.

Where there is a slight gap is for the scenario where you want to authenticate your users using Azure AD through ACS. The app I am building allows users to register using a Microsoft account or a Google account, but we also want to add Azure AD to allow organisations to take advantage of single sign-on using their own organisation AD credentials.

I am starting from a position where my web app is already configured to use ACS and is happily authenticating users with Microsoft and Google accounts.

To also include Azure AD in the identity provider mix is a three step process:

1. Configure ACS

In ACS select Identity Providers and click Add

Leave the default selection of WS-Federation identity provider and click Next

Enter a display name and then enter the URL for your Azure AD WS-Federation metadata document - the URL contains [myTenantName], which you should replace with whatever your tenant name is.

Enter some login link text - this is what will be displayed when your user is selecting the IP they want to use.

Select the relying party applications you want to make the Azure AD IP available for and then hit save.

In the rule group for your relying party application you will need to add a new rule to pass through claims from Azure AD (or do whatever transformations are appropriate)

2. Configure your application in Azure AD

This step is only really necessary if you want to make the app available to external users or you want to enable your app to read or write directory data. If you only require straight authentication this step could be skipped.

Log in to the Azure portal and, in the Applications tab of your Azure AD directory, click Add

Follow the wizard and fill in the fields as relevant for your app

3. Provision a service principal in the directory tenant for the ACS namespace

After completing the first two steps I was getting the following error when logging into my app using Azure AD as the IP:

HTTP Error Code:400

Message:ACS50000: There was an error issuing a token.

Inner Message:ACS50001: Relying party with identifier 'https://[mynamespace]' was not found

The solution is to provision a service principal in the AD tenant for your ACS namespace. This is the bit that took me some time to figure out as it looks like it is still something that can only be done using PowerShell. Hat tip to Ross Dargan for suggesting this could be the issue.

For a full explanation see Vittorio Bertocci's post, but the crucial bit you need is the following script from his example (remember to replace the lefederateur URLs and display name with your own ACS namespace):

    Import-Module MSOnlineExtended -Force

    Connect-MsolService

    $replyUrl = New-MsolServicePrincipalAddresses -Address "https://lefederateur.accesscontrol.windows.net/"

    New-MsolServicePrincipal -ServicePrincipalNames @("https://lefederateur.accesscontrol.windows.net/") -DisplayName "LeFederateur ACS Namespace" -Addresses $replyUrl
Once those steps are complete you should be able to start up your app and select Azure AD as the IP option and sign in using your Azure AD account.

Logging Elmah Exceptions To Azure Storage

I wanted to store the Elmah errors from a web application using Windows Azure Table Storage and came across this post by Dina Berry which outlines the steps required very nicely.

However, it was written in 2011 and the Windows Azure Table Storage client has moved on since then so some of the syntax is out of date.

Below is the code updated for the current Windows Azure Storage client (2.0) as of 13/06/2013.

I wanted to use the Azure Storage connection string that I already have set up in the Azure portal rather than exposing the username and password in the Elmah web.config settings, so this code no longer uses the IDictionary config and instead reads the connection string via ConfigurationManager.

using Elmah;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;
using Microsoft.WindowsAzure.Storage.Table.DataServices;
using System;
using System.Collections;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;

namespace TIROne.ElmahAzureStorage
{
    public class WindowsAzureErrorLogs : ErrorLog
    {
        /// <summary>
        /// Table Name To Use In Windows Azure Storage
        /// </summary>
        private readonly string tableName = "ElmahExceptions";

        /// <summary>
        /// Cloud Table Client To Use When Accessing Windows Azure Storage
        /// </summary>
        private readonly CloudTableClient cloudTableClient;

        /// <summary>
        /// Initialize a new instance of the WindowsAzureErrorLogs class.
        /// </summary>
        /// <param name="config"></param>
        public WindowsAzureErrorLogs(IDictionary config)
        {
            ConnectionStringSettings settings = ConfigurationManager.ConnectionStrings["StorageConnectionString"];

            if (settings == null || string.IsNullOrWhiteSpace(settings.ConnectionString))
            {
                throw new Elmah.ApplicationException("Connection string is missing for the Windows Azure error log.");
            }

            CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(settings.ConnectionString);
            this.cloudTableClient = cloudStorageAccount.CreateCloudTableClient();
        }

        /// <summary>
        /// Log an Error To Windows Azure Storage
        /// </summary>
        /// <param name="error"></param>
        /// <returns>Error Identifier (Guid)</returns>
        public override string Log(Error error)
        {
            ErrorEntity entity = new ErrorEntity(error.Time, Guid.NewGuid())
            {
                HostName = error.HostName,
                Type = error.Type,
                ErrorXml = ErrorXml.EncodeString(error),
                Message = error.Message,
                StatusCode = error.StatusCode,
                User = error.User,
                Source = error.Source
            };

            CloudTable table = this.cloudTableClient.GetTableReference(this.tableName);

            TableOperation insertOperation = TableOperation.Insert(entity);

            if (entity.RowKey != null && entity.PartitionKey != null)
            {
                table.Execute(insertOperation);
            }

            return entity.Id.ToString();
        }

        /// <summary>
        /// Get an Error From Windows Azure Storage
        /// </summary>
        /// <param name="id">Error Identifier (Guid)</param>
        /// <returns>Error Fetched (or Null If Not Found)</returns>
        public override ErrorLogEntry GetError(string id)
        {
            TableServiceContext tableServiceContext = this.cloudTableClient.GetTableServiceContext();

            var query = from entity in tableServiceContext.CreateQuery<ErrorEntity>(this.tableName).AsTableServiceQuery(tableServiceContext)
                        where ErrorEntity.GetRowKey(Guid.Parse(id)) == entity.RowKey
                        select entity;

            ErrorEntity errorEntity = query.FirstOrDefault();
            if (errorEntity == null)
            {
                return null;
            }

            return new ErrorLogEntry(this, id, ErrorXml.DecodeString(errorEntity.ErrorXml));
        }

        /// <summary>
        /// Get a Page of Errors From Windows Azure Storage
        /// </summary>
        public override int GetErrors(int pageIndex, int pageSize, IList errorEntryList)
        {
            if (pageIndex < 0)
            {
                throw new ArgumentOutOfRangeException("pageIndex", pageIndex, null);
            }

            if (pageSize < 0)
            {
                throw new ArgumentOutOfRangeException("pageSize", pageSize, null);
            }

            TableServiceContext tableServiceContext = this.cloudTableClient.GetTableServiceContext();

            // Server side call to get all data
            ErrorEntity[] serverSideQuery = tableServiceContext.CreateQuery<ErrorEntity>(this.tableName).AsTableServiceQuery(tableServiceContext).Execute().ToArray();

            // Sort so the newest errors come first
            var sorted = serverSideQuery.OrderByDescending(entity => entity.TimeUtc);

            // Trim to just the requested page
            ErrorEntity[] page = sorted.Skip(pageIndex * pageSize).Take(pageSize).ToArray();

            // Convert the Windows Azure table entities to ErrorLogEntry instances
            IEnumerable<ErrorLogEntry> errorLogEntries = page.Select(errorEntity => new ErrorLogEntry(this, errorEntity.Id.ToString(), ErrorXml.DecodeString(errorEntity.ErrorXml)));

            // Stuff them into the list we were passed
            foreach (var errorLogEntry in errorLogEntries)
            {
                errorEntryList.Add(errorLogEntry);
            }

            return serverSideQuery.Length;
        }
    }
}

Get XML Node InnerText from SPFile

A method to get the value from an XML node stored in an InfoPath XML document in SharePoint:

public static string GetValueFromSPListItemXml(SPListItem item, string node)
{
    SPFile file = item.File;
    byte[] xmlFile = file.OpenBinary();
    XmlDocument xml = new XmlDocument();
    using (MemoryStream ms = new MemoryStream(xmlFile))
    {
        // Load the InfoPath document before reading its XML
        xml.Load(ms);
    }
    string s = xml.OuterXml;
    XPathDocument x = new XPathDocument(new StringReader(s));
    XPathNavigator xPathNav = x.CreateNavigator();
    // Move from the document root to the root element so the
    // InfoPath namespaces are in scope
    xPathNav.MoveToFollowing(XPathNodeType.Element);
    IDictionary<string, string> namespaceDictionary = xPathNav.GetNamespacesInScope(XmlNamespaceScope.All);
    XmlNamespaceManager nsmgr = new XmlNamespaceManager(xml.NameTable);
    nsmgr.AddNamespace("my", namespaceDictionary["my"]);
    XmlNode root = xml.DocumentElement;
    return root.SelectSingleNode(node, nsmgr).InnerText;
}



Example use:

GetValueFromSPListItemXml(item, "//my:Details//my:Title")

CV Once

CV Once is the answer to your job hunting woes!

When applying for jobs it isn't unusual to send copies of your CV out to 10 or more companies and recruitment agents. This is usually done by emailing a static document like a PDF or Word file.

The downside of this approach is that those 10+ copies of your CV become stale and out-of-date very quickly as you gain new skills and experience.

This is where CV Once comes to the rescue by providing an online version of your CV that companies can access from their own systems so that they always have the most recent version of your CV.

Simply login with your Google or Microsoft account and create your CV. A link is then generated for you that you can give to companies and recruitment agents so that they can keep their systems up to date with the most recent CV you have to offer.

You can even download your own JSON or XML version of your CV to host on your own website or use the API call to display your CV on your own web page.

A new blog theme

I have been thinking about giving my blog a bit of a freshen up for a while and this is the result. It is still a work in progress (the about page still needs a proper rework) but the aim is to remove any unnecessary clutter and leave only the content. The SharePoint banner ads are gone and so is the post and category navigation. I have also removed comments and pingbacks.

The vast majority of visitors to my blog arrive via a search engine, so internal navigation is largely unused. I may relent and add the search box and some navigation somewhere on my about page. Very few people comment, despite getting a respectable amount of traffic for a small blog like mine (~5000 visits per month), and the comments that are left are 50% spam and 50% a simple "thanks for the info", so comments won't be missed. If anyone wants to get in touch with me directly they can find my email address in the footer. What is left is simply the information they were looking for when they typed their search terms into Google/Bing....hopefully!

I would ask what people think of this approach (is it too severe?) but as comments are now turned off that would be pointless...if you care that much you can always email me...

Getting a SharePoint 2013 App Submitted to the Office Store

I recently had my first SharePoint 2013 app accepted to the Office Store and thought it would be worth sharing some of the lessons I have learned over the last 3 and a bit months whilst trying to get it through the validation process.

The app I submitted is a CSV Uploader that I had previously developed as a full trust wsp solution that used server side code to upload a selected CSV file into a SharePoint custom list. To try and get a better understanding of the new SharePoint 2013 app model I decided to redevelop this functionality using client side code in a SharePoint hosted app. For more information on the three types of app (SharePoint Hosted; Auto Hosted and Provider Hosted) take a look at this Apps for SharePoint overview.

In the end it took me quite a while to get the app submitted to the store despite the initial development process being relatively quick. There were a few reasons for this that were mostly my own daft fault. However, there were some initial teething problems with the process that meant my submission disappeared down a rabbit hole for a few weeks early on and my Office 365 preview developer site undergoing maintenance for several weeks also didn't help.

So...these are my top tips for a smooth SharePoint app submission process. Of course submitting to the Office Store is entirely optional - you could always just distribute the app directly to your sites/clients and avoid Microsoft's validation rules, but where is the fun in that?

1. Read the validation guidelines very carefully. The two pages you want to look at are Validation policies for the apps submitted to the Office Store (version 1.2) and Validation policies for apps FAQ

  • The main thing I missed was that the app has to work in IE8/9 as well as IE10. Since I was originally using the HTML 5 File API to read the CSV file this caused my app to be rejected as IE8/9 does not support File API.
  • Completing the version number correctly on the submission forms was another brief stumbling block. The version number you enter must match the version number in the AppManifest.xml of your solution.
  • Make sure you include the SupportedLocales tag in your AppManifest.xml - Locale support information is required for all apps in the store.
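On that last point, the SupportedLocales tag sits inside the Properties element of AppManifest.xml. A minimal sketch (the Title, StartPage and locale values here are examples - declare whichever locales your app actually supports):

```xml
<Properties>
  <Title>CSV Uploader</Title>
  <StartPage>~appWebUrl/Pages/Default.aspx?{StandardTokens}</StartPage>
  <SupportedLocales>
    <SupportedLocale CultureName="en-US" />
  </SupportedLocales>
</Properties>
```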

2. Test in Firefox and Chrome as well as IE8/9/10

  • This might sound obvious but it is easy to assume that once you have your app working in a couple of these browsers your testing is done. This cost me a couple of failed submissions highlighting small things that were due to browser inconsistencies. What didn't help was that one of the Microsoft examples includes code to populate a dropdown list with SharePoint list names but the code to add items to the dropdown list did not work in Firefox despite working fine in IE and Chrome! The example is here and I have submitted a comment to flag the issue.
  • It is also worth checking that any of the newer HTML5 features that your app relies on will function in all browsers supported by SharePoint (IE8/9/10; latest releases of Chrome, Firefox and Safari) using the handy Can I Use website. E.g. I originally started out using the File API but had to switch to an alternative approach to support IE8/9.
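The browser-support point above boils down to feature detection: test for the HTML5 API before using it and branch to a fallback for IE8/9. A minimal sketch (the fallback path itself is whatever suits your app):

```javascript
// Returns true when the HTML5 File API is usable in the given scope
// (defaults to the browser window when one exists).
function canReadFilesClientSide(scope) {
  var g = scope || (typeof window !== "undefined" ? window : globalThis);
  return typeof g.FileReader === "function" && typeof g.Blob === "function";
}

// Example: choose a code path at startup.
// if (canReadFilesClientSide()) { /* read the CSV with FileReader */ }
// else { /* fall back to a server-assisted upload for IE8/9 */ }
```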

3. Make sure that the test steps you submit to the validation team are crystal clear

  • This probably caused me the most head scratching! The test I asked the validation team to carry out was to try and upload a CSV file with a header row of "Title, Description" to a custom list with a "Title" and a "Description" column. Time and again they came back with an error about the "Description" field not being found but it was working fine on my machine, damn it! I eventually figured out that they had added a site column called "Description" to their custom list but the site column had a static name of "kpidescription" which was causing this error. Once that was figured out and the validation team created a column that had a static name of "Description" the tests passed and all was good.
  • The main learning point here is that if your test steps can be misinterpreted it is far more likely that you will have problems. Trying to debug an issue raised by a remote testing team is very frustrating so it is up to you to keep your tests as clear and as unambiguous as possible.

It has been a long and at times frustrating but ultimately satisfying experience getting my first app approved and I am confident that my future submissions will be made much faster by following the advice above.

Finally, I would like to thank a few people who helped along the way. Thanks to Jeremy Thake who nudged the right people when my submission got lost in the process for three weeks. Huge thanks go to Jes Brown of the Office Store Dev Communications team for all his help, communication and general hand holding through the process. He really went above and beyond in the assistance he provided, even going so far as making some sightseeing suggestions while I was in Seattle! Finally, thanks to the validation team for the excellent and detailed feedback that ultimately helped my app to pass validation and hit the store!

Journalism is broken but where are all the new business models?

Journalism as a business is failing. Not a particularly new message, but one that was driven home by last night's SuperMondays "Local and Hyperlocal" event.

Speakers from Addiply, Keep Your Eyes Open and JesmondLocal all lined up to deliver the same story of falling readership figures for traditional print media and the subsequent reduction in advertising revenues that have led to many newspapers running at a loss or closing their doors entirely. The resulting shift to online distribution channels has got off to a shaky start and doesn't seem to be profitable for most people either, so what is the answer?

Addiply was introduced by the CEO, Rick Waghorn, who has developed a site to connect local advertisers with local web-based content producers. This allows locally focussed websites to carry better targeted advertising and retain a good portion (90%) of the revenue generated. However, this just feels like "more of the same": an ad-supported business model that only works for a small number of businesses with very high page views. Whilst this could be the basis of a successful business for Addiply, it didn't sound like it was providing enough revenue for sites like JesmondLocal to make them sustainable businesses.

Stephen Noble talked about his site Keep Your Eyes Open, "The North East's Arts and Culture Dispatch". KYEO is producing some great content and making clever use of the tools at its disposal to create good quality video articles. However, as Stephen discussed, the KYEO side of the business does not turn a profit and relies on repurposing the skills of the company to offer corporate video production services to remain viable. Maybe this is it; maybe the future of journalism is as a loss leader for other services. KYEO have been able to successfully showcase their video production skills and develop a business selling them to corporates, but based on his talk Stephen's passion is journalism rather than making corporate videos.

JesmondLocal was started by ex-Guardian journalist Ian Wylie to provide a "'hyperlocal' news service for the people who live and work in Jesmond". The site does a great job of reporting on local issues and carrying out the fourth estate role of journalism by keeping local democracy in check. The production of JesmondLocal is achieved using a small army of student and local volunteers, which is a great testament to local collaboration and community building, but it doesn't offer a sustainable business model for the future of journalism. It sounds like this year Ian is going to get the latest intake of students to look more closely at ways to make journalism pay, which will hopefully lead to some radically new business models.

The facts that traditional media is struggling to survive and that very few people are making online journalism pay are nothing new, but I was surprised at the lack of radically new ideas coming from journalists who are trying to come up with new models. There seemed to be a general consensus that this is just how things are and that they will get much worse before they get better. There was some support for public funding for journalism but little agreement on what form that would take or who should receive it, and in any case a business based on politically vulnerable public funding is always going to be built on shaky foundations.

One of the core themes that came through during the event was that society needs organisations that we can trust to provide news and that this needs to be paid for, somehow. In reality I am not sure that people really do trust organisations, though. Instead people tend to trust people (not faceless organisations) who behave consistently and with integrity, and maybe building this trust could be one route to a sustainable business for some journalists. I know that most of the content I consume on a daily basis comes from individuals that I trust and that produce content I am interested in. For example, people like Seth Godin consistently produce free content and have built decent-sized audiences and huge amounts of trust. I would be interested to know if anyone is already trying this approach in journalism, i.e. building an audience by giving content away and behaving in a consistently trustworthy way and then occasionally releasing a more substantial piece of paid-for content.

It feels like we need to spend more time thinking about how people really want to consume content and how they are already consuming it. With the rise of sites like Instapaper and the use of RSS feed readers and eBook readers, people are not consuming content on the original publishing site anymore. Instead people are choosing to access content without branding or advertising in increasingly innovative ways, as demonstrated by this post from Scott Hanselman. The challenge, then, is how to engage with those readers and turn them into loyal fans - *hint* the answer isn't more advertising.

I don't think the issue is a lack of quality content, as one commenter suggested during the Q&A session. The quality of content is better than ever if you know where to look and how to curate. I also don't think that the answer is content being displayed in increasingly novel ways, like this Pitchfork/Bat For Lashes example, as the novelty soon wears off and becomes a distraction from the actual content.

It is clear that nobody has the answers yet but what is also clear is that far more radical thought is required to come up with sustainable business models for journalism. Any ideas anyone?