A solution to WinRM in an NLB cluster…

I’ve written a couple of posts discussing the remoting options for PowerShell:
• fan-out model – Windows Remote Management service (WinRM)
• fan-in model – IIS hosted PowerShell endpoint (using the IIS WinRM extension)

When running load-balanced WCF services in IIS that are secured using Windows Authentication, the web applications are mapped to application pools that run under a domain account. Kerberos requires this so that encrypted messages can be decrypted using a common set of credentials across the farm. By default the HTTP SPN is registered against the machine account; here it was changed to map to the domain account instead. This broke WinRM, which is also an HTTP endpoint but runs as the Network Service account: Kerberos authentication failed because the endpoint was expected to be running under the domain account.

PowerShell supports two machine name formats for the Invoke-Command -ComputerName parameter: the NETBIOS name and the fully qualified domain name (FQDN). To call the WinRM service and authenticate using Kerberos, you need to use the machine name format that is not used in the SPN. For example, if

HTTP/myserver.domain.com

is the SPN registered against the domain account used by the application pools, then

PS> icm -ComputerName myserver.domain.com -ScriptBlock { 'foo' }

will fail, however

PS> icm -ComputerName myserver -ScriptBlock { 'foo' }

will succeed. This works because the SPN must be an exact match for the machine name used (though the comparison is case insensitive on Windows); if HTTP/myserver had also been registered against the domain account, the second command would fail too. [I tried using the IP address as well, but PowerShell reports an error saying it does not support that scenario unless the IP address is in the TrustedHosts list.] This is still a little 'magic'; the cleaner approach is to enable CredSSP in PowerShell.
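
As a rough sketch of the CredSSP route (the delegation target below is a placeholder, not a recommendation), enable CredSSP on both ends and then authenticate with it explicitly, which avoids depending on which name format matches the SPN:

# On the client: allow delegation of fresh credentials to the target server(s)
Enable-WSManCredSSP -Role Client -DelegateComputer '*.domain.com' -Force

# On the target server: accept delegated credentials
Enable-WSManCredSSP -Role Server -Force

# Authenticate using CredSSP rather than relying on the SPN format
icm -ComputerName myserver.domain.com -Authentication CredSSP -Credential (Get-Credential) -ScriptBlock { 'foo' }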

This discovery removes the need to use the fan-in model, which we’ve found to be more problematic than the WinRM Windows Service:
• Cannot use the IIS:/AppPools/ path, returns no results
• Cannot use IIS:/Sites/, throws a COM exception
• AppPool identity must have the 'Generate security audits' right on the machine
• Intermittent failures with the Windows Process Activation Service

Another recent discovery is around the effect of the NETBIOS name with IE zone security. If a resource is considered to be outside the local intranet or trusted sites zone, then Kerberos does not work – the ticket is not issued. Therefore using the FQDN requires the domain to be added to the local intranet zone sites. The NETBIOS name, however, is considered to be within the local intranet zone and therefore no amendment to the zones is required.

One last tangential gotcha… it is possible to extend the probe path that IIS uses when looking for assemblies beyond the standard bin directory.

<runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <probing privatePath="bin;SharedBin" />
    </assemblyBinding>
</runtime>

However, files outside the normal bin directory are not shadow copied, so you can hit unexpected file locking when updating them – in the case above, anything in SharedBin.

Co-ordinating deployments using the Parallel class in .NET 4.0

It's been a long time since the last entry; the new year brings with it a fresh post based on some of the deployment work I've been looking at recently. This work has opened my eyes to the support for parallel co-ordination of work within .NET 4…

Recently I've been looking at the deployment approach we have for our services with an eye to reducing the time it takes for a full deployment. Two simple concepts leapt out: the first is to use a pull rather than a push model; the second is to deploy to all of the servers in parallel. This second point becomes increasingly important as more servers get involved in hosting the services.

Pull versus Push
One of the most basic operations performed by the deployment engine is the copying of files to the application servers that host the various services within our product. The file copying was originally implemented as a push: the deployment agent performs the copy to the target server using an administration share, e.g. \\appserver01.domain.com\d$\AderantExpert\Live\ . This requires the deployment engine to run with administrator privilege on the remote machines which is not ideal.

An alternative is to send a script to the target server containing the copy commands; the target server is then responsible for pulling the files to its local storage from a network share (which can be secured appropriately). The deployment engine is responsible for creating the script from the deployment model and co-ordinating the execution of the scripts across the various application servers.

PowerShell remoting is a great option for the remote execution of scripts, and it's quite straightforward to transform an object model into a PowerShell script using LINQ. I created a small script library class that provides common functions, for example:

internal class PowerShellScriptLibrary {
    internal static void ImportModules(StringBuilder script) {
        script.AppendLine("import-module WebAdministration");
        script.AppendLine("import-module ApplicationServer");
    }

    internal static void StopWindowsServices(string filter, StringBuilder script) {
        script.AppendLine("# Stop Windows Services");
        script.AppendLine(string.Format("Stop-Service {0}", filter));
    }

    internal static void CreateTargetDirectories(string rootPath, IEnumerable fileSpecifications, StringBuilder script) {
        script.AppendLine("# Create the required folder structure");
        fileSpecifications
            .Where(spec => !string.IsNullOrWhiteSpace(spec.TargetFile.TargetRelativePath))
            .Select(x => x.TargetFile)
            .Distinct()
            .ToList()
            .ForEach(targetFile => {
                string path = Path.Combine(rootPath, targetFile.TargetRelativePath);
                script.AppendLine(string.Format("if(-not(Test-Path '{0}'))", path));
                script.AppendLine("{");
                script.AppendLine(string.Format("\tNew-Item '{0}' -ItemType directory", path));
                script.AppendLine("}");
            });
    }
}


The library is then used to create the required script by calling the various functions; the examples below are for the patching approach that allows updates to be installed without requiring a full remove and redeploy:

private string GenerateInstallScriptForPatch(Server server, IEnumerable filesToDeploy, Environment environment, string patchFolder) {
    StringBuilder powershellScript = new StringBuilder();

    PowerShellScriptLibrary.ImportModules(powershellScript);
    PowerShellScriptLibrary.StopWindowsServices("ADERANT*", powershellScript);
    PowerShellScriptLibrary.StopAppFabricServices(environment, powershellScript);
    PowerShellScriptLibrary.CreateTargetDirectories(server.ExpertPath, filesToDeploy, powershellScript);
    PowerShellScriptLibrary.CreatePatchRollback(server, patchFolder, filesToDeploy, powershellScript);
    PowerShellScriptLibrary.CopyFilesFromSourceToServer(environment, server, filesToDeploy, powershellScript);
    PowerShellScriptLibrary.UpdateFactoryBinFromExpertShare(server, environment.NetworkSharePath, powershellScript);
    PowerShellScriptLibrary.StartAppFabricServices(environment, powershellScript);
    PowerShellScriptLibrary.StartWindowsServices("ADERANT*", powershellScript);

    return powershellScript.ToString();
}

Though it is possible to treat NTFS as a transactional system (see http://msdn.microsoft.com/en-us/library/bb968806(v=VS.85).aspx ), and therefore have it participate in atomic actions, I didn't walk this path. Instead I chose the compensation route: when the model is transformed into a script, I create both an install script and a compensating script which is executed in the event of anything going wrong.

private string GenerateRollbackScriptForPatch(Server server, IEnumerable filesToDeploy, Environment environment, string patchFolder) {
    StringBuilder powershellScript = new StringBuilder();

    PowerShellScriptLibrary.ImportModules(powershellScript);
    PowerShellScriptLibrary.StopWindowsServices("ADERANT*", powershellScript);
    PowerShellScriptLibrary.StopAppFabricServices(environment, powershellScript);
    PowerShellScriptLibrary.RollbackPatchedFiles(server, patchFolder, filesToDeploy, powershellScript);
    PowerShellScriptLibrary.StartAppFabricServices(environment, powershellScript);
    PowerShellScriptLibrary.StartWindowsServices("ADERANT*", powershellScript);

    return powershellScript.ToString();
}

The scripts simply take a copy of the existing files that will be replaced before replacing them with the new versions. If anything goes wrong during the patch install, the compensating script is executed to restore the previous files.
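
To illustrate the pattern only – the file and folder names here are invented, not taken from the real generated scripts – an emitted install/compensate pair boils down to something like:

# Install fragment: back up the current file, then copy in the new version
Copy-Item 'D:\AderantExpert\Live\ExpertAssistant.dll' 'D:\AderantExpert\Live\PatchRollback\ExpertAssistant.dll' -Force
Copy-Item '\\fileshare\Patch\ExpertAssistant.dll' 'D:\AderantExpert\Live\ExpertAssistant.dll' -Force

# Compensate fragment: restore the backed-up file if the patch install fails
Copy-Item 'D:\AderantExpert\Live\PatchRollback\ExpertAssistant.dll' 'D:\AderantExpert\Live\ExpertAssistant.dll' -Force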

Given that a server-specific script is now generated per application server, because different servers host different roles and therefore require different files, the deployment engine has the opportunity to pass the script to each server, ask it to execute it and then wait for the OK from each server. If one server reports an error then all can have the compensation script executed as required.
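
From the co-ordination side, the remote execution and compensation could be sketched in PowerShell along these lines (the session handling and variable names are assumptions, not the production engine):

$session = New-PSSession -ComputerName $serverName
try {
    # Run the generated install script on the target server
    Invoke-Command -Session $session -ScriptBlock ([ScriptBlock]::Create($installScript))
} catch {
    # Something went wrong: run the compensating script to restore the previous files
    Invoke-Command -Session $session -ScriptBlock ([ScriptBlock]::Create($rollbackScript))
} finally {
    Remove-PSSession $session
}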

Parallelizing a deployment
Before looking at some co-ordination code for the deployment engine, I want to explicitly note that there are two different and often confused concepts:
• Asynchronous execution
• Parallel execution

An asynchronous execution involves a call to begin a method and then a callback from that method when the work is complete. IO operations are natural candidates for asynchronous calls to ensure that the calling thread is not blocked waiting on the IO to complete. Single-threaded frameworks such as UIs are the most common place to see a push for asynchronous programming. In .NET 3, Windows Workflow Foundation provided an excellent asynchronous programming model where asynchronous activities are co-ordinated by a single scheduler thread. It is bad practice to have this scheduler thread block or perform long-running operations, as it stalls workflow progress when in a parallel activity. It is better to schedule multiple asynchronous activities in parallel when possible and have these execute on separate worker threads.

Parallel execution involves breaking a problem into small parts that can be executed in parallel, exploiting the multi-core nature of today's CPUs. Rather than having a single core work towards an answer, many cores can participate in the calculation. To reduce the elapsed time of a calculation – the time experienced by the end user – it may be possible to execute a LINQ query over all available cores (typically 2, 4 or 8). LINQ now has the .AsParallel() extension method, which can be applied to queries to enable parallel execution. Of course, profiling is required to determine whether the query actually performs better in parallel for typical data sets.

.NET 4 added the Task Parallel Library into the core runtime. This library adds numerous classes to the BCL to make parallel programming and the writing of co-ordination logic much simpler. In particular the Parallel class can be used to easily schedule multiple threads of work. For example:

Parallel.Invoke(
    () => Parallel.ForEach(updateMap, server =>
        serverInstallationScripts.Add(server.Key, GenerateInstallScriptForPatch(server.Key, server.Value, environment, patchFolder))),
    () => Parallel.ForEach(updateMap, server =>
        serverRollbackScripts.Add(server.Key, GenerateRollbackScriptForPatch(server.Key, server.Value, environment, patchFolder)))
);

The above code is responsible for creating the install and compensate PowerShell scripts from the deployment model discussed above. There are two levels of parallelism going on here. First, the generation of the install and compensate scripts is scheduled at the same time using a Parallel.Invoke() call. Then a Parallel.ForEach() is used to generate the required script for each application server defined in the environment in parallel. The runtime is responsible for figuring out how best to achieve this; as programmers we simply declare what we want to happen. In the above code, updateMap is an IDictionary<server, IList> containing the list of files to deploy to each server, keyed on the server.

I was simply blown away by how simple and yet how powerful this programming model is.

Accessing the LSA from managed code

This blog entry would be filed under the ‘it should not be this hard’ category if I had one. A reasonably common requirement is to determine the rights a user has and then to add additional rights as necessary. After much searching I could not find a ‘managed’ way to do this so I ended up with the following…

This post very much stands on the shoulders of others and so here are the links to the original articles I used:

LSA .NET from Code Project
“RE: Unmarshalling LsaEnumerateAccountRights() list of privileges”

When installing a new service, it is often necessary to add additional rights to the user that the service runs as, for example 'Log on as a service'. Doing so from either managed code or PowerShell would seem like a reasonably obvious ask, but I could not find any type that allowed me to do it. The security information is managed by the Local Security Authority (LSA), which has an unmanaged API available from advapi32.dll; to access this from C# requires P/Invoke and a reasonable amount of code to marshal the types. I'm not a C++ programmer and so I first looked for an alternative.

The Windows Server 2003 Resource Kit includes a utility NTRights.exe which allows rights to be added and removed from a user via the command line. Unfortunately this tool no longer ships in the Windows Server 2008 Resource Kit but the 2003 version still works on both Windows 7 and Server 2008 (R2). The tool provided part of the solution but I also wanted to be able to find out the rights that have already been assigned to the user as well as add and remove.

No matter which way I turned, I was always led back to advapi32 and writing a wrapper to allow the functions to be called from C#. Thankfully most of the hard work had already been done and documented by Corinna John, with a sample project posted on Code Project. The original article comes from 2003, so I was a little surprised that it still hasn't made it into a managed library. The sample by Corinna showed how to add rights to a user but unfortunately did not include listing the rights. For that I have to thank Seng, who lists sample code here.

By combining both efforts and cleaning up the code a little, I ended up with the wrapper class given at the end of the posting (there is plenty of room for improvement in my code). This was compiled in VS2010 and the API ended up as:

public IList<string> GetRights(string accountName)
public void SetRight(string accountName, string privilegeName)
public void SetRights(string accountName, IList<string> rights)

I had to compile for .NET 2.0 so that I could call it from PowerShell…

[void][Reflection.Assembly]::LoadFile('C:\Samples\LSAController.dll') # void suppresses the output of the message text
$LsaController = New-Object -TypeName 'LSAController.LocalSecurityAuthorityController'
$LsaRights = New-Object -TypeName 'LSAController.LocalSecurityAuthorityRights' # a convenience class containing common rights
$LsaController.SetRight('ADERANT_AP\stefan.sewell', [LSAController.LocalSecurityAuthorityRights]::LogonAsBatchJob)
$LsaController.GetRights('ADERANT_AP\stefan.sewell')

The code for the wrapper follows; I hope it saves someone the two days I spent on this.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.InteropServices;
//
// This code has been adapted from http://www.codeproject.com/KB/cs/lsadotnet.aspx
// The rights enumeration code came from http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.interop/2004-11/0394.html
//
// Windows security via .NET is covered by Pluralsight: http://alt.pluralsight.com/wiki/default.aspx/Keith.GuideBook/HomePage.html
//

namespace LSAController {
    //
    // Provides methods for the Local Security Authority, which controls user rights. Normally managed via secpol.msc.
    //
    public class LocalSecurityAuthorityController {
        private const int Access = (int)(
            LSA_AccessPolicy.POLICY_AUDIT_LOG_ADMIN |
            LSA_AccessPolicy.POLICY_CREATE_ACCOUNT |
            LSA_AccessPolicy.POLICY_CREATE_PRIVILEGE |
            LSA_AccessPolicy.POLICY_CREATE_SECRET |
            LSA_AccessPolicy.POLICY_GET_PRIVATE_INFORMATION |
            LSA_AccessPolicy.POLICY_LOOKUP_NAMES |
            LSA_AccessPolicy.POLICY_NOTIFICATION |
            LSA_AccessPolicy.POLICY_SERVER_ADMIN |
            LSA_AccessPolicy.POLICY_SET_AUDIT_REQUIREMENTS |
            LSA_AccessPolicy.POLICY_SET_DEFAULT_QUOTA_LIMITS |
            LSA_AccessPolicy.POLICY_TRUST_ADMIN |
            LSA_AccessPolicy.POLICY_VIEW_AUDIT_INFORMATION |
            LSA_AccessPolicy.POLICY_VIEW_LOCAL_INFORMATION
            );

        [DllImport("advapi32.dll", PreserveSig = true)]
        private static extern UInt32 LsaOpenPolicy(ref LSA_UNICODE_STRING SystemName, ref LSA_OBJECT_ATTRIBUTES ObjectAttributes, Int32 DesiredAccess, out IntPtr PolicyHandle);

        [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)]
        private static extern int LsaAddAccountRights(IntPtr PolicyHandle, IntPtr AccountSid, LSA_UNICODE_STRING[] UserRights, int CountOfRights);

        [DllImport("advapi32")]
        public static extern void FreeSid(IntPtr pSid);

        [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true, PreserveSig = true)]
        private static extern bool LookupAccountName(string lpSystemName, string lpAccountName, IntPtr psid, ref int cbsid, StringBuilder domainName, ref int cbdomainLength, ref int use);

        [DllImport("advapi32.dll")]
        private static extern bool IsValidSid(IntPtr pSid);

        [DllImport("advapi32.dll")]
        private static extern int LsaClose(IntPtr ObjectHandle);

        [DllImport("kernel32.dll")]
        private static extern int GetLastError();

        [DllImport("advapi32.dll")]
        private static extern int LsaNtStatusToWinError(int status);

        [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)]
        private static extern int LsaEnumerateAccountRights(IntPtr PolicyHandle, IntPtr AccountSid, out IntPtr UserRightsPtr, out int CountOfRights);

        [StructLayout(LayoutKind.Sequential)]
        private struct LSA_UNICODE_STRING {
            public UInt16 Length;
            public UInt16 MaximumLength;
            public IntPtr Buffer;
        }

        [StructLayout(LayoutKind.Sequential)]
        private struct LSA_OBJECT_ATTRIBUTES {
            public int Length;
            public IntPtr RootDirectory;
            public LSA_UNICODE_STRING ObjectName;
            public UInt32 Attributes;
            public IntPtr SecurityDescriptor;
            public IntPtr SecurityQualityOfService;
        }

        [Flags]
        private enum LSA_AccessPolicy : long {
            POLICY_VIEW_LOCAL_INFORMATION = 0x00000001L,
            POLICY_VIEW_AUDIT_INFORMATION = 0x00000002L,
            POLICY_GET_PRIVATE_INFORMATION = 0x00000004L,
            POLICY_TRUST_ADMIN = 0x00000008L,
            POLICY_CREATE_ACCOUNT = 0x00000010L,
            POLICY_CREATE_SECRET = 0x00000020L,
            POLICY_CREATE_PRIVILEGE = 0x00000040L,
            POLICY_SET_DEFAULT_QUOTA_LIMITS = 0x00000080L,
            POLICY_SET_AUDIT_REQUIREMENTS = 0x00000100L,
            POLICY_AUDIT_LOG_ADMIN = 0x00000200L,
            POLICY_SERVER_ADMIN = 0x00000400L,
            POLICY_LOOKUP_NAMES = 0x00000800L,
            POLICY_NOTIFICATION = 0x00001000L
        }

        // Returns the Local Security Authority rights granted to the account
        public IList<string> GetRights(string accountName) {
            IList<string> rights = new List<string>();
            string errorMessage = string.Empty;

            long winErrorCode = 0;
            IntPtr sid = IntPtr.Zero;
            int sidSize = 0;
            StringBuilder domainName = new StringBuilder();
            int nameSize = 0;
            int accountType = 0;

            LookupAccountName(string.Empty, accountName, sid, ref sidSize, domainName, ref nameSize, ref accountType);

            domainName = new StringBuilder(nameSize);
            sid = Marshal.AllocHGlobal(sidSize);

            if (!LookupAccountName(string.Empty, accountName, sid, ref sidSize, domainName, ref nameSize, ref accountType)) {
                winErrorCode = GetLastError();
                errorMessage = ("LookupAccountName failed: " + winErrorCode);
            } else {
                LSA_UNICODE_STRING systemName = new LSA_UNICODE_STRING();

                IntPtr policyHandle = IntPtr.Zero;
                IntPtr userRightsPtr = IntPtr.Zero;
                int countOfRights = 0;

                LSA_OBJECT_ATTRIBUTES objectAttributes = CreateLSAObject();

                uint policyStatus = LsaOpenPolicy(ref systemName, ref objectAttributes, Access, out policyHandle);
                winErrorCode = LsaNtStatusToWinError(Convert.ToInt32(policyStatus));

                if (winErrorCode != 0) {
                    errorMessage = string.Format("OpenPolicy failed: {0}.", winErrorCode);
                } else {
                    int result = LsaEnumerateAccountRights(policyHandle, sid, out userRightsPtr, out countOfRights);
                    winErrorCode = LsaNtStatusToWinError(result);
                    if (winErrorCode != 0) {
                        errorMessage = string.Format("LsaAddAccountRights failed: {0}", winErrorCode);
                    }

                    long ptr = userRightsPtr.ToInt64();
                    LSA_UNICODE_STRING userRight;

                    for (int i = 0; i < countOfRights; i++) {
                        userRight = (LSA_UNICODE_STRING)Marshal.PtrToStructure(new IntPtr(ptr), typeof(LSA_UNICODE_STRING));
                        string userRightStr = Marshal.PtrToStringAuto(userRight.Buffer);
                        rights.Add(userRightStr);
                        ptr += Marshal.SizeOf(userRight);
                    }
                    LsaClose(policyHandle);
                }
                FreeSid(sid);
            }
            if (winErrorCode > 0) {
                throw new ApplicationException(string.Format("Error occured in LSA, error code {0}, detail: {1}", winErrorCode, errorMessage));
            }
            return rights;
        }

        // Adds a privilege to an account
        public void SetRight(string accountName, string privilegeName) {
            long winErrorCode = 0;
            string errorMessage = string.Empty;

            IntPtr sid = IntPtr.Zero;
            int sidSize = 0;
            StringBuilder domainName = new StringBuilder();
            int nameSize = 0;
            int accountType = 0;

            LookupAccountName(String.Empty, accountName, sid, ref sidSize, domainName, ref nameSize, ref accountType);

            domainName = new StringBuilder(nameSize);
            sid = Marshal.AllocHGlobal(sidSize);

            if (!LookupAccountName(string.Empty, accountName, sid, ref sidSize, domainName, ref nameSize, ref accountType)) {
                winErrorCode = GetLastError();
                errorMessage = string.Format("LookupAccountName failed: {0}", winErrorCode);
            } else {
                LSA_UNICODE_STRING systemName = new LSA_UNICODE_STRING();
                IntPtr policyHandle = IntPtr.Zero;
                LSA_OBJECT_ATTRIBUTES objectAttributes = CreateLSAObject();

                uint resultPolicy = LsaOpenPolicy(ref systemName, ref objectAttributes, Access, out policyHandle);
                winErrorCode = LsaNtStatusToWinError(Convert.ToInt32(resultPolicy));

                if (winErrorCode != 0) {
                    errorMessage = string.Format("OpenPolicy failed: {0} ", winErrorCode);
                } else {
                    LSA_UNICODE_STRING[] userRights = new LSA_UNICODE_STRING[1];
                    userRights[0] = new LSA_UNICODE_STRING();
                    userRights[0].Buffer = Marshal.StringToHGlobalUni(privilegeName);
                    userRights[0].Length = (UInt16)(privilegeName.Length * UnicodeEncoding.CharSize);
                    userRights[0].MaximumLength = (UInt16)((privilegeName.Length + 1) * UnicodeEncoding.CharSize);

                    int res = LsaAddAccountRights(policyHandle, sid, userRights, 1);
                    winErrorCode = LsaNtStatusToWinError(Convert.ToInt32(res));
                    if (winErrorCode != 0) {
                        errorMessage = string.Format("LsaAddAccountRights failed: {0}", winErrorCode);
                    }

                    LsaClose(policyHandle);
                }
                FreeSid(sid);
            }

            if (winErrorCode > 0) {
                throw new ApplicationException(string.Format("Failed to add right {0} to {1}. Error detail:{2}", accountName, privilegeName, errorMessage));
            }
        }

        public void SetRights(string accountName, IList<string> rights) {
            rights.ToList().ForEach(right => SetRight(accountName, right));
        }

        private static LSA_OBJECT_ATTRIBUTES CreateLSAObject() {
            LSA_OBJECT_ATTRIBUTES newInstance = new LSA_OBJECT_ATTRIBUTES();

            newInstance.Length = 0;
            newInstance.RootDirectory = IntPtr.Zero;
            newInstance.Attributes = 0;
            newInstance.SecurityDescriptor = IntPtr.Zero;
            newInstance.SecurityQualityOfService = IntPtr.Zero;

            return newInstance;
        }
    }

    // Local security rights managed by the Local Security Authority
    public class LocalSecurityAuthorityRights {
        // Log on as a service right
        public const string LogonAsService = "SeServiceLogonRight";
        // Log on as a batch job right
        public const string LogonAsBatchJob = "SeBatchLogonRight";
        // Interactive log on right
        public const string InteractiveLogon = "SeInteractiveLogonRight";
        // Network log on right
        public const string NetworkLogon = "SeNetworkLogonRight";
        // Generate security audit logs right
        public const string GenerateSecurityAudits = "SeAuditPrivilege";
    }
}

PowerShell Part 2 – Installing a new service

Following on from the brief introduction to PowerShell, let’s walk through the installation script…

The script installs a simple Magic Eight Ball service that will return a pseudo-random answer to any question it's given. The service is written as a WCF service in C#; the files to deploy are available from http://public.me.com/stefsewell/ – have a look in TechEd2010/DEV306-WindowsServerAppFabric/InstallationSource. The folder contains a web.config to set up the service activation and a bin folder with the service implementation. The PowerShell scripts are also available from the file share; look in the PowerShell folder in DEV306…

Pre-requisite Checking

The script begins by checking a couple of pre-requisites. If any of these checks fail then we do not attempt to install the service; instead the installing admin is told of the failed checks. There are a number of different checks we can make. In this script we check the OS version, that dependent services are installed and that the correct version of the .NET framework is available.

First we need a variable to hold whether or not we have a failure:

$failedPrereqs = $false

Next we move on to our first check: that the correct version of Windows is being used:

$OSVersion = Get-WmiObject Win32_OperatingSystem
if(-not $OSVersion.Version.StartsWith('6.1')) {
    Write-Host "The operating system version is not supported, Windows 7 or Windows Server 2008 required."
    $failedPrereqs = $true
    # See http://msdn.microsoft.com/en-us/library/aa394239(v=VS.85).aspx for other properties of Win32_OperatingSystem
    # See http://msdn.microsoft.com/en-us/library/aa394084(VS.85).aspx for additional WMI classes
}

The script fetches the Win32_OperatingSystem WMI object for interrogation using Get-WmiObject. This object contains a good deal of useful information; links are provided above to let you drill down into other properties. The script checks the Version to ensure that we are working with either Windows 7 or Windows Server 2008 R2, both of which report a version starting with "6.1".

Next we look for a couple of installed services:

# IIS is installed
$IISService = Get-Service -Name 'W3SVC' -ErrorAction SilentlyContinue
if(-not $IISService) {
    Write-Host "IIS is not installed on" $env:computername
    $FailedPrereqs = $true
}

# AppFabric is installed
$AppFabricMonitoringService = Get-Service -Name 'AppFabricEventCollectionService' -ErrorAction SilentlyContinue
if(-not $AppFabricMonitoringService) {
    Write-Host "AppFabric Monitoring Service is not installed on" $env:computername
    $FailedPrereqs = $true
}

$AppFabricMonitoringService = Get-Service -Name 'AppFabricWorkflowManagementService' -ErrorAction SilentlyContinue
if(-not $AppFabricMonitoringService) {
    Write-Host "AppFabric Workflow Management Service is not installed on" $env:computername
    $FailedPrereqs = $true
}

A basic pattern is repeated here using the Get-Service command to determine if a particular Windows Service is installed on the machine.

With the service requirements checked, we look to see if we have the correct version of the .NET framework installed. In our case we want the RTM of version 4 and go to the registry to validate this.

$frameworkVersion = get-itemProperty -Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -ErrorAction SilentlyContinue
if(-not($frameworkVersion) -or (-not($frameworkVersion.Version -eq '4.0.30319'))){
    Write-Host "The RTM version of the full .NET 4 framework is not installed."
    $FailedPrereqs = $true
}

The registry provider, HKLM: [HKEY_LOCAL_MACHINE], is used to look up a path in the registry that should contain the version. If the key is not found or the value is incorrect, we fail the test.

Those are all the checks made in the original script from the DEV306 session; however there is a great feature in Windows Server 2008 R2 that allows very simple querying of the installed Windows features. I found this by accident:

> Get-Module -ListAvailable

This command lists all of the available modules on a system, the ServerManager module looked interesting:

> Get-Command -Module ServerManager

CommandType Name Definition
----------- ---- ----------
Cmdlet Add-WindowsFeature Add-WindowsFeature [-Name] [-IncludeAllSubFeature] [-LogPath ] [-...
Cmdlet Get-WindowsFeature Get-WindowsFeature [[-Name] ] [-LogPath ] [-Verbose] [-Debug] [-Err...
Cmdlet Remove-WindowsFeature Remove-WindowsFeature [-Name] [-LogPath ] [-Concurrent] [-Restart...

A simple add/remove/get interface which allows you to easily determine which Windows roles and features are installed – then add or remove them as required. This is ideal for pre-requisite checking, as we can now explicitly check whether, for example, the WinRM IIS Extensions are installed:

import-module ServerManager

if(-not (Get-WindowsFeature 'WinRM-IIS-Ext').Installed) {
    Write-Host "The WinRM IIS Extension is not installed"
}

Simply calling Get-WindowsFeature lists all features and marks up those that are installed with [X]:

PS C:\Windows\system32> Get-WindowsFeature

Display Name Name
------------ ----
[ ] Active Directory Certificate Services AD-Certificate
[ ] Certification Authority ADCS-Cert-Authority
[ ] Certification Authority Web Enrollment ADCS-Web-Enrollment
[ ] Certificate Enrollment Web Service ADCS-Enroll-Web-Svc
[ ] Certificate Enrollment Policy Web Service ADCS-Enroll-Web-Pol
[ ] Active Directory Domain Services AD-Domain-Services
[ ] Active Directory Domain Controller ADDS-Domain-Controller
[ ] Identity Management for UNIX ADDS-Identity-Mgmt
[ ] Server for Network Information Services ADDS-NIS
[ ] Password Synchronization ADDS-Password-Sync
[ ] Administration Tools ADDS-IDMU-Tools
[ ] Active Directory Federation Services AD-Federation-Services
[ ] Federation Service ADFS-Federation
[ ] Federation Service Proxy ADFS-Proxy
[ ] AD FS Web Agents ADFS-Web-Agents
[ ] Claims-aware Agent ADFS-Claims
[ ] Windows Token-based Agent ADFS-Windows-Token
[ ] Active Directory Lightweight Directory Services ADLDS
[ ] Active Directory Rights Management Services ADRMS
[ ] Active Directory Rights Management Server ADRMS-Server
[ ] Identity Federation Support ADRMS-Identity
[X] Application Server Application-Server
[X] .NET Framework 3.5.1 AS-NET-Framework
[X] AppFabric AS-AppServer-Ext
[X] Web Server (IIS) Support AS-Web-Support
[X] COM+ Network Access AS-Ent-Services
[X] TCP Port Sharing AS-TCP-Port-Sharing
[X] Windows Process Activation Service Support AS-WAS-Support
[X] HTTP Activation AS-HTTP-Activation
[X] Message Queuing Activation AS-MSMQ-Activation
[X] TCP Activation AS-TCP-Activation
...

The right-hand column contains the feature name to use with these cmdlets.

I ended up writing a simple function to check for a list of features:

<#
.SYNOPSIS
Checks to see if a given set of Windows features are installed.    

.DESCRIPTION
Checks to see if a given set of Windows features are installed.

.PARAMETER featureSetArray
An array of strings containing the Windows features to check for.

.PARAMETER featuresName
A description of the feature set being tested for.

.EXAMPLE
Check that a couple of web server features are installed.

Check-FeatureSet -featureSetArray @('Web-Server','Web-WebServer','Web-Common-Http') -featuresName 'Required Web Features'

#>
function Check-FeatureSet{
    param(
        [Parameter(Mandatory=$true)]
        [array] $featureSetArray,
        [Parameter(Mandatory=$true)]
        [string]$featuresName
    )
    Write-Host "Checking $featuresName for missing features..."

    foreach($feature in $featureSetArray){
        if(-not (Get-WindowsFeature $feature).Installed){
            Write-Host "The feature $feature is not installed"
        }
    }
}

The function introduces a number of PowerShell features such as comment documentation, functions, parameters and parameter attributes. I don’t intend to dwell on any as I hope the code is quite readable.

Then to use this:

# array of strings containing .NET related features
$dotNetFeatureSet = @('NET-Framework','NET-Framework-Core','NET-Win-CFAC','NET-HTTP-Activation','NET-Non-HTTP-Activ')

# array of strings containing MSMQ-related features
$messageQueueFeatureSet = @('MSMQ','MSMQ-Services','MSMQ-Server')

Check-FeatureSet $dotNetFeatureSet '.NET'
Check-FeatureSet $messageQueueFeatureSet 'Message Queuing'

To complete the pre-requisite check, after making each individual test the failure variable is evaluated. If true then the script ends with a suitable message, otherwise we go ahead with the install.
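
A minimal sketch of that final evaluation, using the $failedPrereqs variable from above:

if($failedPrereqs) {
    Write-Host "One or more pre-requisites are missing, aborting the installation."
    return
}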

Installing the Service

The first step in the installation is to copy the required files from a known location. This is a pull model – the target server pulls the files across the network, rather than having the files pushed on to the server via an administration share or such like [e.g. \\myMachine\c$\Services\].

$sourcePath = '\\SomeMachine\MagicEightBallInstaller\'
$installPath = 'C:\Services\MagicEightBall'

if(-not (Test-Path $sourcePath)) {
    Write-Host 'Cannot find the source path ' $sourcePath
    Throw (New-Object System.IO.FileNotFoundException)
}

if(-not (Test-Path $installPath)) {
    New-Item -type directory -path $installPath
    Write-Host 'Created service directory at ' $installPath
}

Copy-Item -Path (Join-Path $sourcePath "*") -Destination $installPath -Recurse

Write-Host 'Copied the required service files to ' $installPath

The file structure is copied from a network share onto the machine the script is running on. The Test-Path command determines whether a path exists and allows appropriate action to be taken. To perform a recursive copy the Copy-Item command is called, using the Join-Path command to establish the source path. These path commands can be used with any provider, not just the file system.

With the files and directories in place, we now need to host the service in IIS. To do this we need to use the PowerShell module for IIS:

import-module WebAdministration # requires admin-level privileges

Next…

$found = Get-ChildItem IIS:\AppPools | Where-Object {$_.Name -eq "NewAppPool"}
if(-not $found){
    New-WebAppPool 'NewAppPool'
}

We want to isolate our service into its own pool, so we check to see if NewAppPool exists and if not we create it. We are using the IIS: provider to treat the web server as if it was a file system; again we just use standard commands to query the path.

Set-ItemProperty IIS:\AppPools\NewAppPool -Name ProcessModel -Value @{IdentityType=3;Username="MyServer\Service.EightBall";Password="p@ssw0rd"} # 3 = Custom

Set-ItemProperty IIS:\AppPools\NewAppPool -Name ManagedRuntimeVersion -Value v4.0

Write-Host 'Created application pool NewAppPool'

Having created the application pool we set some properties. In particular we ensure that .NET v4 is used and that a custom identity is used. The @{} syntax allows us to construct new object instances – in this case a new process model object.

New-WebApplication -Site 'Default Web Site' -Name 'MagicEightBall' -PhysicalPath $installPath -ApplicationPool 'NewAppPool' -Force

With the application pool in place and configured, we next set up the web application itself. The New-WebApplication command is all we need, giving it the site, application name, physical file system path and application pool.

Set-ItemProperty 'IIS:/Sites/Default Web Site/MagicEightBall' -Name EnabledProtocols -Value 'http,net.tcp' # do not include spaces in the list!

Write-Host 'Created web application MagicEightBall'

To enable both HTTP and net.tcp endpoints, we simply update the EnabledProtocols property of the web application. Thanks to default endpoints in WCF4, this is all we need to do to get both protocols supported. Note: do not put spaces into the list of protocols.

Configuring AppFabric Monitoring

We now have enough script to create the service host, but we want to add AppFabric monitoring. Windows Server AppFabric has a rich PowerShell API, to access it we need to import the module:

import-module ApplicationServer

Next we need to create our monitoring database:

[Reflection.Assembly]::LoadWithPartialName("System.Data")

$monitoringDatabase = 'MagicEightBallMonitoring'
$monitoringConnection = New-Object System.Data.SqlClient.SqlConnectionStringBuilder -argumentList "Server=localhost;Database=$monitoringDatabase;Integrated Security=true"
$monitoringConnection.Pooling = $true

We need a couple of variables: a database name and a connection string. We use the SqlConnectionStringBuilder out of the System.Data assembly to get our connection string. This demonstrates the deep integration between PowerShell and .NET.

Add-WebConfiguration -Filter connectionStrings -PSPath "MACHINE/WEBROOT/APPHOST/Default Web Site/MagicEightBall" -Value @{name="MagicEightBallMonitoringConnection"; connectionString=$monitoringConnection.ToString()}

We add the connection string to our web application configuration.

Initialize-ASMonitoringSqlDatabase -Admins 'Domain\AS_Admins' -Readers 'DOMAIN\AS_Observers' -Writers 'DOMAIN\AS_MonitoringWriters' -ConnectionString $monitoringConnection.ToString() -Force

And then we create the actual database, passing in the security groups. While local machine groups can be used, in this case I’m mocking a domain group which is more appropriate for load balanced scenarios.

Set-ASAppMonitoring -SiteName 'Default Web Site' -VirtualPath 'MagicEightBall' -MonitoringLevel 'HealthMonitoring' -ConnectionStringName 'MagicEightBallMonitoringConnection'

The last step is to enable monitoring for the web application; above we are setting a 'health monitoring' level, which is enough to populate the AppFabric dashboard inside IIS Manager.

Set-ASAppServiceMetadata -SiteName 'Default Web Site' -VirtualPath 'MagicEightBall' -HttpGetEnabled $True

Last of all we ensure that metadata publishing is available for our service. This allows us to test the service using the WCFTestClient application.

PowerShell Part 1 – Getting Started

As part of 'DEV306: Taming SOA Deployments using Windows Server AppFabric' I showed a couple of PowerShell scripts that can be used to deploy a simple WCF service. The demo was pretty quick due to the 60-minute session length and the fact that reading PowerShell is not the most exciting presentation. Over the next couple of blogs I'm going to walk through the scripts, which are available from http://public.me.com/stefsewell. This first post is just to whet the appetite and introduce some PowerShell basics and concepts.

To dig into PowerShell I’ve been using the MEAP edition of Windows PowerShell in Action, 2nd Edition by Bruce Payette and I definitely recommend it.

The Basics

To run PowerShell commands you can use either the PowerShell console or the PowerShell ISE (integrated scripting environment). The ISE has some neat features such as breakpoints and allows you to easily build up scripts rather than issuing single commands.

Note: On a 64-bit system there is a 32-bit and a 64-bit version of the PowerShell console and ISE. Confusingly, the 64-bit version runs out of the C:\Windows\System32 directory while the 32-bit version runs out of C:\Windows\SysWOW64. You want to be using the 64-bit version; we've seen some strange behavior and errors when trying to use the 32-bit version on a 64-bit OS.

Getting Help…

The first script, 1-PowerShell basics, is just an introduction to some of the PowerShell goodness. The first useful thing is knowing how to get help, and as with all PowerShell commands this takes the form of a verb-noun pairing:

> Get-Help

This gets you into the first page of the help system and from here you’ll want to drill down into specific commands:

> Get-Help invoke-command

You’ll get a description of the command, including the supported parameters. One of the very useful standard parameters for get-help is the -examples:

> Get-Help Invoke-Command -examples

This returns you a number of usage examples. Not only is help provided for specific commands but there is also help on a number of more general topics:

> Get-Help about_remoting

This will give you a good overview of the PowerShell remoting features.

Wildcards are supported so to see all the ‘about’ topics:

> Get-Help about*

Using Aliases…

Next up is navigating around using a familiar set of commands. The standard PowerShell commands can take a little getting used to, especially after years of either UNIX or DOS. To make you feel at home, there is the concept of an alias. An alias is simply another name for a command, for example Get-ChildItem will be more familiar as ls or dir to most people. To see the list of mapped aliases:

> Get-Alias

You can use cd to change directory which is an alias for Set-Location.

Variables…

PowerShell supports variables and uses a $ prefix:

> $foo = "TechEd"

To display the contents of $foo:

> Write-Host "The value of foo is $foo"
The value of foo is TechEd

A double-quoted string is evaluated (variables are expanded) prior to printing. If you use single quotes then a literal string is created:

> Write-Host 'The value of foo is $foo'
The value of foo is $foo

Conditionals…

To check to see if a variable is not null:

if(-not $foo) {
    # do something
} else {
    # do something else
}

Slightly odd syntax, but you check for -not of the variable; ! can be used as shorthand for -not. The comment character in PowerShell is #.

Loops…

Within a script, foreach and while loops are supported:

foreach ($file in Get-ChildItem C:\) {
    $file.Name
}

$count = 0;

While($count -lt 10) {
    $count++
    "$count"
}

To get access to environment variables you use $env:, for example:

> Write-Host $env:ComputerName

Using Pipes…

Both DOS and UNIX support piping the output from one command into another, allowing complex chains of commands to be linked together. PowerShell also supports this:

> get-service | where-object {$_.Status -eq "Stopped"}

This returns all of the installed Windows services with a status of stopped. The $_ variable represents the current object in the pipeline, allowing you to filter each of the results returned from the get-service command. The equality operator is -eq, in the same style as -not.

Additional Modules…

Before going much further, we need to relax the default security setting slightly. Out of the box a script execution policy of Restricted is set; this prevents the loading of configuration and the running of scripts. I find that changing this to RemoteSigned works well: it allows local scripts to run, and scripts downloaded from the internet to run only if they are signed by a trusted publisher.

> Set-ExecutionPolicy RemoteSigned

A number of Microsoft technologies have an accompanying PowerShell module that contains commands allowing automation. For example IIS comes with WebAdministration and Windows Server AppFabric brings along ApplicationServer. To use these modules you first need to be running in an elevated PowerShell console (run as Administrator) then import the module:

> Import-Module WebAdministration
> Import-Module ApplicationServer

To see the commands available in a module:

> Get-Command -module WebAdministration
> Get-Command -module ApplicationServer

There are commands allowing you to manage web applications, virtual directories, application pools, the AppFabric monitoring and workflow stores, and much more. We’ll see examples of these in the WCF service installation script in the next post.

A great feature of PowerShell is the concept of the provider, which allows a hierarchical structure to be navigated as if it were a physical drive. Consider how we navigate and administer the file system: cd (set-location), dir (get-childitem), mkdir (new-item) etc. These same commands can be used to navigate any hierarchy that has a provider, such as:

cert: the certificate store
wsman: WinRM settings
HKLM: registry HKEY_LOCAL_MACHINE
IIS: Internet Information Server

This allows you to do the following:

> dir 'IIS:\Sites\Default Web Site\'
> dir HKLM:\SOFTWARE\Microsoft\MSDTC

To change to the IIS ‘drive’:

> IIS:

Your PowerShell prompt will now show you an IIS path rather than a file system path. You navigate around using the standard commands. Note that typing WSMAN: on its own doesn't work; you need to cd WSMAN: explicitly.

.NET Integration

PowerShell is tightly integrated with .NET allowing objects to be constructed and consumed directly. For example:

> Write-Host ([System.DateTime]::Now)

The () indicates the expression is to be evaluated, the [] indicates a .NET type, and the :: denotes a call to a static member.

> [Reflection.Assembly]::LoadWithPartialName("System.Messaging")
> [System.Messaging.MessageQueue]::Create(".\Private$\MyNewQueue")

This second example shows how to create a private message queue in MSMQ. The System.Messaging assembly is loaded via the Reflection API.

This is really only just scratching the surface, however it gives us enough to be able to read through the installation script and understand what is going on. That’s for the next post…

PS: The canonical Hello, World! in PowerShell is simply:

> 'Hello, World!'

Not tremendously useful but we’ve now ticked that box.

DEV306: PowerShell Scripts Available

Thanks to everyone who attended the sessions at TechEd New Zealand. The PowerShell deployment files demonstrated are now available from http://public.me.com/stefsewell

Have a look in the TechEd2010/DEV306-WindowsServerAppFabric folder, it contains a simple VS2010 project showing how to call PowerShell from C#. It also contains the PowerShell scripts that deploy, validate and remove a simple WCF service.

Pete has updated his blog ( http://blog.petegoo.com ) with the demo code from his workflow services.

Feedback from the sessions has been mixed. The workflow introduction seems to have worked for a high percentage of those who attended. For a 200 level session, I thought the content was pretty technical but sorry to the few who thought it was too lightweight. All I can say is that it was an introduction to workflow and the goal was to get the basics across. I would recommend the additional resources included in the slide decks to drill down further.

The Windows Server AppFabric session was not as successful as the workflow introduction. Windows Server AppFabric is a great addition to the service hosting capabilities of Windows Server 2008 – if you have WCF services in IIS/WAS, you should be using it if possible. The convenience of monitoring is a little difficult to get across in a demo, it has made our lives so much easier in support and in development. The workflow service host opens up many scenarios that were previously very hard to implement. Microsoft has taken on the heavy lifting (persistence, tracking, failover, scale-out) and we have a very simple model to work with. The PowerShell demo was quick but I didn’t want to spend 30 minutes walking through a page of commands. The scripts are available for download and commented; please take the time to explore the commands and experiment with your own services. The remote shell capabilities of PowerShell make large scale deployments much simpler than previously. The DSL demo at the end was a taste of what is possible with a model driven approach, I’m leaving you to connect the dots and transform from model to PowerShell.

Thanks again to all that attended, I hope there is something useful for you either in the session or in this blog.

Two new TechEd sessions

The New Zealand TechEd conference will be running from August 30th through to the 1st of September. Last week I was confirmed to present a couple of sessions:

DEV208: Getting Started with Workflow in .NET 4
DEV306: Taming SOA Deployments using Windows Server AppFabric

I've managed to talk one of ADERANT's lead developers, Peter Goodman, into sharing the stage and the workload. Pete is our workflow & DSL ninja; it's going to be fun presenting with him.

The workflow session is an introduction, and rather than focusing purely on the technical aspects we are going to also talk about when using workflow makes sense and which problem types it is particularly suited for. There will be demos too and we plan to build a workflow and a workflow service on stage during the talk.

The Windows Server AppFabric session will be a more technical session and I want to cover three areas: deployment (using the PowerShell API), monitoring and scale-out of workflow services. Most of the content of the session can be found on this blog already, but I do have plans for a new code sample if I can squeeze it into my schedule.

Microsoft now posts TechEd sessions online and makes them publicly available after the conference.

Update: Sessions are now available online…
• Getting Started with Workflow in .NET 4, http://www.msteched.com/2010/NewZealand/DEV208
• Taming SOA Deployments using Windows Server AppFabric, http://www.msteched.com/2010/NewZealand/DEV306

Configuration options for Remote PowerShell and WS-Management

Here's the wish list:
• to be able to run WCF and workflow services in IIS that use a basicHttpBinding.
• to scale out services in an application farm using the network load balancing service in Windows Server 2008.
• to authenticate users using Kerberos to flow the Windows Identity.
• to administer servers remotely using PowerShell.

It's not exactly an exotic or out-there set of needs; however, it has taken over three weeks of working through various attempts to get this up and running reliably.

The crux of the issue is around the use of HTTP and Kerberos. To get the services to work in a load balanced environment with Kerberos, a set of SPNs needed to be added to the Active Directory for the domain. The web applications hosting the service needed to run under a domain identity (e.g. MyDomain\service.expert), so they are mapped to an application pool with this identity. SPNs are then added to map the HTTP protocol to this user, rather than the machine account. In our case, four SPNs are added to the service.expert user – one for the network load balancer's virtual host name and one for each server in the application farm:

HTTP/SVNLB301.ap.aderant.com
HTTP/SVEXPGG302.ap.aderant.com
HTTP/SVEXPGG303.ap.aderant.com
HTTP/SVEXPGG304.ap.aderant.com
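
For reference, SPNs of this kind are typically registered with the setspn tool; a sketch only, using the example account and host names above (-S checks for duplicates and is available from Windows Server 2008, older versions use -A):

setspn -S HTTP/SVNLB301.ap.aderant.com MyDomain\service.expert
setspn -S HTTP/SVEXPGG302.ap.aderant.com MyDomain\service.expert

# List the SPNs currently registered against the account
setspn -L MyDomain\service.expert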

Doing this breaks the default WinRM service configuration because the WinRM HTTP listener runs under a machine account, not service.expert, so the SPN is incorrect and Kerberos negotiation fails. This is pretty much where we left off in the last posting; since then I have been looking at using HTTPS as the transport for the PowerShell remoting calls, and at other authentication mechanisms.

There are two options for hosting the WinRM service:

1. as a Windows Service (this is the default)
2. in IIS, using a WinRM v2 feature called 'WinRM IIS Extensions'. This is an optional install in Windows Server 2008 to support the 'fan-in' model for PowerShell remoting, which is targeted at the cloud.

Hosting the WinRM service using HTTPS is meant to be simple so long as you have an appropriate certificate installed on the server for SSL. The command is:

> winrm quickconfig -transport:HTTPS

I have never been able to get this to work. Before explaining how I did get a WinRM HTTPS endpoint working, let’s cover off the certificate.

Windows Server 2008 has a role which allows a server to act as a certificate authority (CA) for a domain. This role includes a self-service website from which any machine on the domain can request a certificate. I used this to request certificates created using the web server template, with the common name (CN) set to the fully qualified domain name of each server in my application farm. The self-service website is pretty straightforward, but note that the certificate is installed into the current user store, not the local machine store, so you need to move it. The easiest way to see this is to use the certificate provider within PowerShell:

> cd cert:\CurrentUser\My
> ls
> cd cert:\LocalMachine\My
> ls

This will show you all of the certificates installed in the current user\my and the local machine\my stores. You can also use the management console (MMC) and add in the certificate plug-in for both the current user and local computer.

The WSMAN provider allows you to configure the WinRM service from within PowerShell.

> cd WSMAN:\localhost\Listener
> new-item . -Address * -Transport HTTPS -CertificateThumbprint "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

You need the 40 character certificate thumbprint which can be easily found by listing the certificates in cert:\LocalMachine\My. With the real thumbprint replacing the Xs, the above command will create an HTTPS listener that is hosted in the WinRM service.

To connect to the machine from a remote client, using kerberos to authenticate as the current user:

> icm -ComputerName targetServer -UseSSL -Authentication NegotiateWithImplicitCredential -ScriptBlock {get-host}

The script block is executed on the remote machine. If a test certificate has been used to set up the HTTPS channel, then the remote call will fail: the certificate must have been issued by the domain CA, the CN must match the machine name and the revocation list is checked. It is possible to switch off these checks by adding the following parameter to the call:

> icm ... -SessionOption (new-PSSessionOption -SkipCNCheck -SkipCACheck -SkipRevocationCheck)

Any combination of the three skips can be used.

This again proved somewhat unreliable for me, due to the use of Kerberos over HTTPS to authorize the user. There are other authentication options available such as basic, which is secure over an HTTPS channel since the channel is encrypted.

The change in identity of the HTTP SPN just seemed to keep tripping me up, which made me wonder why not host the management service in IIS and then set it to run in an application pool with the same identity as our other services? Finding out how to do this took me some time and led me to the fan-in model for PowerShell mentioned earlier.

Fan-In Model
Within WinRM v2 there is a plug-in model to allow ISVs to supply a module that allows their software to be managed via WS-Management. The PowerShell team ships such a module, pwrshplugin.dll, which can be found in %windir%\system32. To be able to host such a module in IIS, you need to ensure that you have the WinRM IIS Extensions option installed; I have only seen it available on Windows Server 2008 and not Windows 7.

[ On Windows Server 2008 R2, you can use the ServerManager module to check the installed features:

> Import-Module ServerManager
> Get-WindowsFeature
]

With this option enabled, you can create a new web application and drop in a web.config file similar to the following which is discussed here:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <system.management.wsmanagement.config>
      <PluginModules>
        <OperationsPlugins>
          <Plugin Name="PowerShellplugin" Filename="%windir%\system32\pwrshplugin.dll" SDKVersion="1" XmlRenderingType="text">
            <InitializationParameters>
                <Param Name="PSVersion" Value="2.0" />
            </InitializationParameters>
            <Resources>
                <Resource ResourceUri="http://schemas.microsoft.com/powershell/Microsoft.PowerShell" SupportsOptions="true">
                    <Capability Type="Shell" />
                </Resource>
            </Resources>
          </Plugin>
        </OperationsPlugins>
      </PluginModules>
    </system.management.wsmanagement.config>
        <security>
            <access sslFlags="Ssl" />
            <authentication>
                <anonymousAuthentication enabled="false" />
                <basicAuthentication enabled="true" />
                <windowsAuthentication enabled="true" />
            </authentication>
        </security>
        <modules>
            <add name="WSMan" />
        </modules>
  </system.webServer>
</configuration>

The web application is configured to use SSL, and Basic or Windows authentication is accepted. You might need to edit your applicationHost.config file to unlock the authentication sections under system.webServer/security. The web application can be mapped to an application pool that has the same identity as the other services, in our case MyDomain\service.expert, so the SPNs should work.
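Rather than editing applicationHost.config by hand, appcmd can unlock configuration sections. A sketch, assuming it is the authentication sections that are locked on your install:

appcmd unlock config -section:system.webServer/security/authentication/anonymousAuthentication
appcmd unlock config -section:system.webServer/security/authentication/basicAuthentication
appcmd unlock config -section:system.webServer/security/authentication/windowsAuthentication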

[Note: do not set up an HTTPS listener in both IIS and WinRM at the same time on the same certificate; if you do, recycling the app pool will drop the HTTPS binding from IIS – the WinRM Windows Service gets precedence.]

To connect to the machine from a remote client (using basic authentication), the following is required:

> $secpasswd = ConvertTo-SecureString "myPassword" -AsPlainText -Force
> $mycreds = New-Object System.Management.Automation.PSCredential ("MyDomain\MyUsername", $secpasswd)
> icm -ConnectionUri https://svexpgg303.ap.aderant.com/Powershell -Authentication Basic -Credential $mycreds -ScriptBlock {get-host}

The password is captured in a secure string and then a new PSCredential object is created to hold the username and password. This is passed to the Invoke-Command cmdlet using the -Credential parameter. Note that we are also using the -ConnectionUri parameter rather than -ComputerName.
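Rather than embedding a plain-text password in a script, the credential can be gathered interactively; this is equivalent to the snippet above:

> $mycreds = Get-Credential "MyDomain\MyUsername"
> icm -ConnectionUri https://svexpgg303.ap.aderant.com/Powershell -Authentication Basic -Credential $mycreds -ScriptBlock {get-host}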


UPDATE [2nd October 2010]: I finally got to the bottom of the 1300 error I saw in the Windows Remote Management event log thanks to this post: http://blogs.msdn.com/b/wmi/archive/2010/02/25/winrm-hosted-in-iis-fails-to-start-with-error-1300-in-event-log.aspx

The account that the application pool is using must have the ‘Generate security audits’ right granted. Also when testing, it is important to reset IIS after each change to ensure that you are running against the correct set-up.

Retesting with security set-up correctly proved that any app pool can be used and the web application path could contain subfolders.

Having managed to establish a secure connection for remote PowerShell via IIS using basic auth and HTTPS, I’ve pretty much given up on getting it to work over Kerberos. I might try just once more to do Kerberos over HTTP when the management service is hosted in IIS but I’ve already been fighting with this for way too long. I hope the above saves someone the pain I went through…

Unexpected consequences…

Having set-up a load balanced environment as per the previous post, I then discovered some knock on effects…

By changing the SPNs for HTTP to be account specific rather than machine specific, the remote PowerShell calls were broken – so our automated deployments were broken. By default the WinRM service connection from the client to the target server is authenticated using kerberos. The communication channel is HTTP through a separate listener process and it expects a machine SPN to be registered. In our case it was expecting HTTP/LRSRV310.lr.aderant.com to be registered against the machine account LRSRV310; instead this SPN was mapped to our application pool identity service.workflow.lr, and so we were broken.

I added the SPN mapping to the LRSRV310 machine account and remote PowerShell sessions were available again, however this meant duplicate SPNs in AD, which is against the rules. After a little thought and some digging it turns out there are (at least) two options available to us:
1. use an HTTPS channel rather than HTTP for the WinRM service.
2. add the client machine names to the TrustedHosts list for WinRM.

I’ve tried option 2 and it works, though I think option 1 may be a more secure approach. To get option 2 to work, from a PowerShell prompt:

PS> set-item WSMAN:\localhost\Client\TrustedHosts -value "*.aderant.com"

In the command above I’m using a wildcard but you can be more specific and list individual machines that you trust. Note that you need to enable the trusted hosts setting before you set up the SPNs against the application pool identity or else you won’t be able to use the WSMAN provider.
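To see what is already trusted, or to append to the list rather than overwrite it, something along these lines works (the extra host name is illustrative):

> get-item WSMAN:\localhost\Client\TrustedHosts
> $current = (get-item WSMAN:\localhost\Client\TrustedHosts).Value
> set-item WSMAN:\localhost\Client\TrustedHosts -value "$current,LRSRV311.lr.aderant.com" -Force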

Update…
Turns out the TrustedHosts list option is not so great. It appears to work while the kerberos ticket is valid, which makes it look like everything is good: local access to the WSMAN settings is available, but remote access runs into kerberos issues again once the ticket expires. So next we will try setting up HTTPS for WinRM.

> winrm quickconfig -transport:https

However, this requires that a certificate is installed to validate the server identity. Tomorrow we will be using the certificate server for our domain to generate a certificate, however not all environments will have this. I'll also have a look at the other authentication options and try turning off kerberos support [WSMAN:\localhost\Service\Auth].
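As a starting point, the current service authentication settings can be inspected and changed from the same WSMAN provider. A sketch; note that disabling Kerberos here affects every client of that WinRM service:

> ls WSMAN:\localhost\Service\Auth
> set-item WSMAN:\localhost\Service\Auth\Kerberos $false
> set-item WSMAN:\localhost\Service\Auth\Basic $true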

When we sort this out, I’ll post the solution.

Configuration for Kerberos

This is a summary of the voodoo required to get WCF services hosted in IIS to work with a load balancer and kerberos. This took me way longer than I had hoped to figure out so I hope I can save someone else that pain.

We have recently been running some load and stress tests against our latest Golden Gate SP1 product which supports the horizontal scale out of workflow services. This scale out capability is one of the core features of Windows Server AppFabric. Our software is designed to run in an ‘on premise’ scenario and leverages Windows integrated security for authorization of users. A major performance improvement we discovered during our original Golden Gate testing was to ensure kerberos was used rather than NTLM when performing Windows Authentication. We wanted to ensure that our new services were using kerberos for Windows authentication since we had moved some of our services from being hosted as a Windows Service to being hosted in IIS, in particular the workflow services.

Note: in addition to performance advantages, you need to use Kerberos if you want to achieve multi-hop delegation of credentials, NTLM does not support this. The resources at the end of this post discuss this further.

In this post I’m going to walk through a worked example and give a checklist to follow. In a later post I may drill down into a little more of the background, in the meantime I’ll include some additional resources at the end.

Scenario
The scenario involves three application servers that are configured into a network load balanced (NLB) cluster using NLB in Windows Server 2008. The machine names are:
• svexpgg310.ap.aderant.com
• svexpgg311.ap.aderant.com
• svexpgg312.ap.aderant.com

The virtual host name for the NLB is svnlb301.ap.aderant.com.

The NLB is set-up to load balance traffic on port 80, for our HTTP based services and the port range 18180-18199 for our Windows Services. Each of the servers runs all of the services that we support horizontal scale out for and one of the servers (310) runs the services that only support a single instance. In a typical installation we have around 15 services, rather than list out all of these I’ll concentrate on two types:
• services hosted in IIS that expose HTTP endpoints
• services hosted as Windows Services that expose net.tcp endpoints

Alongside the three application servers is a database server that hosts the ADERANT Expert database, the AppFabric monitoring database and the AppFabric workflow persistence database.

The basicHttpBinding configuration used to enable Windows authentication is as follows:

      <basicHttpBinding>
        <binding name="expertBasicHttpBinding" maxReceivedMessageSize="2147483647">
          <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" />
          <security mode="TransportCredentialOnly">
            <transport clientCredentialType="Windows" proxyCredentialType="Windows">
              <extendedProtectionPolicy policyEnforcement="Never" />
            </transport>
          </security>
        </binding>
      </basicHttpBinding>

1. The servers must be in the local intranet zone of any calling machines.
As of Windows Server 2003, by default only the local intranet zone supports the passing of credentials for Windows Integrated authentication between machines. This makes sense, as you rarely want to pass your Windows credentials beyond your own domain. At ADERANT we have a group policy set up so that on every machine, any host with a name matching *.aderant.com is registered in the local intranet zone.

You can explicitly name the servers for the zone, also ensure that the servers are not listed in the Trusted Sites zone.
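Group policy is the cleanest way to push the zone mapping out; for a single machine the per-user registry equivalent looks roughly like this (a sketch; a value of 1 maps the domain to the Local intranet zone):

> $domains = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains"
> New-Item "$domains\aderant.com" -Force | Out-Null
> New-ItemProperty "$domains\aderant.com" -Name "*" -Value 1 -PropertyType DWord -Force | Out-Null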

2. Windows Services exposing WCF net.tcp endpoints must have SPNs registered for both the application server and the network load balancer addresses.

When a non-basicHttpBinding is used, such as net.tcp, the WCF infrastructure checks to ensure that the service is running under the identity that the client expects. This prevents ‘man-in-the-middle’ attacks where someone spoofs the service you want to call with their own for some nefarious purpose. When you generate a service proxy against a net.tcp endpoint you’ll see something similar to the following configuration snippet in the app.config:

<client>
  <endpoint
    address="net.tcp://myserver.mydomain.com:8003/servicemodelsamples/service/spnIdentity"
    binding="netTcpBinding"
    bindingConfiguration="netTcpBinding_ICalculator_Windows"
    contract="ICalculator"
    name="netTcpBinding_ICalculator">
    <identity>
      <servicePrincipalName value="CalculatorSvc/myServer.myDomain.com:8003" />
    </identity>
  </endpoint>
</client>

There is an identity element that specifies the expected identity of the service host. Two different options are supported: userPrincipalName and servicePrincipalName. If your service is published on a domain and you always expect the client calling the service to be online, then the userPrincipalName is easiest to configure. The value attribute contains the identity that the service is running as, e.g. value="ADERANT_AP\service.expert".

Alternatively you can set a servicePrincipalName, as above. The service principal name (SPN) is broken down into three parts:

serviceClassName / address [: portNumber]

The service class name is a token that uniquely represents the service. Common service classes are HTTP and HOST; the example above uses CalculatorSvc to uniquely identify a calculation service. At ADERANT we use class names such as ExpertConfigurationSvc. After the service class name comes the machine name, e.g. SVEXPGG310. Note that the NetBIOS name and the fully qualified domain name are considered to be different, so it is commonplace to register both. For example:

ExpertConfigurationSvc/SVEXPGG310.ap.aderant.com:18180
ExpertConfigurationSvc/SVEXPGG310:18180

Once we have an SPN, it must be registered in Active Directory (AD) against the user account used to run the service. We recommend a service account along the lines of myDomain\service.expert to run the ADERANT services. To register this account with an SPN there is a command line tool setspn:

setspn -A ExpertConfigurationSvc/SVEXPGG310.ap.aderant.com:18180 service.expert

As part of our deployment tooling we automatically generate a batch file containing all the SPNs that need to be registered in AD for a given environment. An SPN must not be registered twice; this will cause errors. To see the SPNs currently registered against a user account, use the setspn tool with the -L option, passing the account name:

setspn -L service.expert

If we take our configuration service as an example, we need the following SPNs registered in AD for the scenario environment:

ExpertConfigurationSvc/SVNLB301.ap.aderant.com:18180
ExpertConfigurationSvc/SVNLB301:18180
ExpertConfigurationSvc/SVEXPGG310.ap.aderant.com:18180
ExpertConfigurationSvc/SVEXPGG310:18180
ExpertConfigurationSvc/SVEXPGG311.ap.aderant.com:18180
ExpertConfigurationSvc/SVEXPGG311:18180
ExpertConfigurationSvc/SVEXPGG312.ap.aderant.com:18180
ExpertConfigurationSvc/SVEXPGG312:18180
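Because the list grows quickly as servers are added, a small script along these lines can generate the registrations; a sketch using the scenario's server names and port:

$servers = "SVNLB301", "SVEXPGG310", "SVEXPGG311", "SVEXPGG312"
foreach ($server in $servers) {
    # register both the FQDN and NetBIOS forms against the service account
    setspn -A "ExpertConfigurationSvc/$server.ap.aderant.com:18180" service.expert
    setspn -A "ExpertConfigurationSvc/${server}:18180" service.expert
}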

If you are running a development workstation, you will often see HOST/localhost as the SPN generated by the svcutil for locally hosted WCF services. This indicates that the service is expected to be running on the local machine.

If the service needs to support delegation then the AD account used to run the service must have delegation enabled (the Delegation tab on the account in Active Directory Users and Computers):

The account must also be granted ‘Log on as a service’ rights on the application server hosting the service. This can be set-up using the local machine policies admin tool or pushed out via group policy.

3. Load balanced WCF Services hosted in IIS, using HTTP bindings, must have HTTP SPNs added for the account of the application pool.

By default an SPN is created in AD for the machine account of a server running IIS, for example HTTP/SVEXPGG310. In a load balanced scenario the machine account SPN cannot be used to issue a kerberos ticket because it is different for each machine in the application farm. Instead the kerberos ticket needs to be issued using the identity of the application pool that the web service is running under. If you have multiple application pools, these must all be running under the same account. The application pool account must have SPNs registered for the HTTP service as follows:

setspn -A HTTP/svnlb301.ap.aderant.com service.expert
setspn -A HTTP/svnlb301 service.expert
setspn -A HTTP/svexpgg310.ap.aderant.com service.expert
setspn -A HTTP/svexpgg310 service.expert
setspn -A HTTP/svexpgg311.ap.aderant.com service.expert
setspn -A HTTP/svexpgg311 service.expert
setspn -A HTTP/svexpgg312.ap.aderant.com service.expert
setspn -A HTTP/svexpgg312 service.expert

Here we have both the NetBIOS and FQDNs for the servers and the load balancer.
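Before moving on it is worth checking for duplicates. Recent versions of setspn (Windows Server 2008 onwards) can search for duplicate SPNs directly; otherwise listing the SPNs on the machine accounts and the service account lets you compare by eye:

setspn -X
setspn -L service.expert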

4. Load balanced WCF services hosted in IIS, using HTTP bindings, must use the Application Pool credentials to issue kerberos tickets.

In addition to adding the SPNs in 3, now change IIS so that it uses the app pool credentials for the kerberos ticket. This can be done either through the configuration manager in IIS or from the command line.

The relevant configuration section path is system.webServer/security/authentication/windowsAuthentication.
From a command line:
appcmd set config /section:windowsAuthentication /useAppPoolCredentials:true

This has to be set on all of the application servers within the application farm.

While in IIS configuration, it is also worth setting authPersistNonNTLM to true, see http://support.microsoft.com/kb/954873 for details.
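That setting can be applied with the same appcmd approach, again on every server in the farm:

appcmd set config /section:windowsAuthentication /authPersistNonNTLM:true /commit:apphost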

5. Enable Windows Authentication on the required web applications in IIS.
There are two parts to this, the first of which is to ensure that the Windows Authentication provider for IIS is installed. This can be checked in the Windows Features control panel.

The next step is to enable Windows Authentication on the website itself. From the dashboard for the site, open the Authentication manager and then ensure that Windows Authentication is enabled:

While you are here, it’s worth checking the advanced properties of the Windows Authentication (available from the context menu) to ensure that Kernel-mode authentication is set.

This can also be set programmatically:

appcmd set config "Default Web Site/MyWebService" -section:system.webServer/security/authentication/windowsAuthentication /enabled:true /commit:apphost

Wrap up & Testing
Those are the key steps required to get kerberos working in a load balanced environment:
1. ensure the servers are in the local intranet zone.
2. create and register SPNs for net.tcp services for all app servers and the load balancer.
3. create and register HTTP SPNs for all app servers and the load balancer.
4. take care to avoid duplicate SPNs.
5. understand that NetBIOS and FQDNs require separate SPNs.
6. set useAppPoolCredentials to true on all IIS servers in the app farm.
7. run all application pools using a common domain service account, give this account permission to delegate and log on as a service.
8. ensure the web applications for the services have Windows authentication enabled.

It’s mostly straightforward once you’ve been through the steps once.

The easiest way to test is with a browser and Fiddler. From within Fiddler you can look at the authorization headers for the HTTP requests, which will show you whether kerberos or NTLM is being used. We expose an OData service which requires Windows authentication; it was very easy to trace the authentication negotiation going on for this site within Fiddler.
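Another quick check that doesn't need Fiddler is to make a request with default credentials and then look at the Kerberos ticket cache with klist (built into Windows 7 / Server 2008 R2); if a ticket for the HTTP/ SPN of the load balancer shows up, kerberos was used. A sketch, with a hypothetical service URL:

> $client = New-Object System.Net.WebClient
> $client.UseDefaultCredentials = $true
> $client.DownloadString("http://svnlb301.ap.aderant.com/Expert/Query.svc/") | Out-Null
> klist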

Resources
Security in WCF (MSDN Magazine): http://msdn.microsoft.com/en-us/magazine/cc163570.aspx

Patterns & Practices Kerberos Overview: http://msdn.microsoft.com/en-us/library/ff649429.aspx

Patterns & Practices WCF Security Guide: http://msdn.microsoft.com/en-us/library/ff650794.aspx

A Tale of Two Services

Now back in New Zealand after two weeks in the US, first week at TechEd and then a week in our US development centre. I finally feel free of jet lag and so it’s time to make good on a promise to write up a couple of samples I didn’t show at TechEd. The first is a quick introduction to authoring services…

The source code to accompany this post can be downloaded from http://public.me.com/stefsewell/ from the TechEd2010 folder. The sample code is in the archive ServiceAuthoringSample.zip.


A service is simply a piece of software that provides some functionality, access to this functionality is formalized into a contract. A service is often hosted in a separate process and utilized by a number of different consumers. The service does not know anything about the consumer, it just performs some work on their request. Between the consumer and service is most likely a process, machine and possibly a network boundary, therefore any data to be exchanged must be serializable. For the consumer to call the service, it must know where it lives, therefore the service has an address. The consumer must also be able to understand and be understood by the service, the supported communication protocols are captured as bindings. So there we have the ABC of Windows Communication Foundation; the Address, the Binding and the Contract.

Services in Code

With each release of Visual Studio, the key use cases that Microsoft is targeting with its tooling become easier to perform. In VS2010 the ease of service authoring and hosting has taken a leap forward and the code line count required to implement a service dropped. Let’s look at a very simple service that provides a random answer to a question, a Magic Eight Ball service. The contract for the magic eight ball is very simple and is captured as the following class:

using System.ServiceModel;

namespace MagicEightBall.CodedService {
    [ServiceContract]
    public interface MagicEightBallContract {
        [OperationContract]
        string AskQuestion(string question);
    }
}

There is a single method that takes a string containing a question and returns a string containing the answer. The System.ServiceModel namespace is the hint that we are going to use WCF to take care of our service. To provide an implementation of the service we have the following code.

using System;

namespace MagicEightBall.CodedService {
    public class MagicEightBallService : MagicEightBallContract {
        public string AskQuestion(string question) {
            return EightBall.Shake();
        }
    }

    internal sealed class EightBall {
        private readonly static Random random = new Random();
        private readonly static string[] answers = { "Yes", "No", "Ask again", "Definitely", "Bad idea", "Perhaps", "Unsure" };

        public static string Shake(){
            return answers[random.Next(0, answers.Length)];
        }
    }
}

The eight ball is captured as a simple class with a Shake method, the service is not enforcing any validation such as ensuring a question is asked to keep things simple. Note that there is no System.ServiceModel using statement, this is vanilla .NET. We have a service contract and an implementation, our coding is complete. The next step is to host the service and allow our consumers to call it. The service host can be implemented in a number of ways, for this example we are going to use WAS (Windows Process Activation Service) which uses the IIS infrastructure to host the service – we don’t need to write a host, we’ll just use one that Microsoft provides. To access the service, the host exposes an endpoint, the endpoint is composed of the address, binding and contract. One of the criticisms of WCF in .NET 3 was the steep initial learning curve required to get a service hosted and configured. In .NET 4, the idea of defaults has been introduced which greatly reduces the amount of WCF configuration required to get up and running (to the point where it is possible to have no explicit configuration). In the example below we have a little configuration due to a slightly non-standard approach.

<?xml ="1.0"?>
<configuration>
  <system.serviceModel>
    <serviceHostingEnvironment>
      <serviceActivations>
        <add relativeAddress="MagicEightBall.svc" service="MagicEightBall.CodedService.MagicEightBallService"/>
      </serviceActivations>
    </serviceHostingEnvironment>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <serviceMetadata httpGetEnabled="True"/>
          <serviceDebug includeExceptionDetailInFaults="False"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

Here we are using the serviceActivations element to specify the last part of the address of the service rather than having a separate .svc file. Personally I think this is quite a tidy approach rather than having separate .config and .svc files. The serviceBehaviors section states that we want to publish metadata about this service and that we want to hide any exception details from consumers of our service. By publishing metadata about our service we allow tooling to generate a proxy class for us that allows our service to be easily called. Visual Studio provides such tooling; from within a project you can add a Service Reference:

The service reference needs to know the address of the service and then from the metadata it creates a class, the proxy, that allows the project to make use of the service. After clicking on OK, the service reference is listed as part of the project, in the sample below the MagicEightBall client is making use of two separate services.

I’m jumping a little bit ahead though, since we haven’t got the service host set up yet. We want to publish the service which we can do from within VS2010 by choosing Publish… from the context menu for the project:

A dialog pops up asking for a location to publish to; I used http://localhost/MagicEightBall which set up a new web application in IIS. By default the web application is set up to support the http protocol. If you want to change this you need to alter the ‘Enabled Protocols’ in the Advanced Settings dialog, which is available from the web application context menu in IIS Manager [Manage application | Advanced Settings…].

In the example above I added the net.tcp protocol in addition to http. Note that there is no space between the comma and net.tcp; putting a space in here will break the enabled protocols! Now we have created and published a WCF service. To test it, point your browser to http://localhost/MagicEightBall/MagicEightBall.svc. You should see the standard metadata page for your service explaining how to create a proxy class and consume it.
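The enabled protocols can also be set from the command line rather than through the Advanced Settings dialog; again, no spaces in the list:

appcmd set app "Default Web Site/MagicEightBall" /enabledProtocols:http,net.tcp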

[Note that I have .NET 4 registered as the default framework version for IIS and so the default app pool uses .NET 4. The command C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i registers .NET 4 as the default for IIS.]

To test the service, create a console application and add a service reference called MagicEightBallService using the http URL. The code to call the service is as follows:

using System;

using MagicEightBall.Client.MagicEightBallService;

namespace MagicEightBall.Client {
    class Program {
        private const string CodeEndpointNameHttp = "BasicHttpBinding_MagicEightBallContract";

        static void Main(string[] args) {
            string question = "Will you answer my questions?";
            string answer = string.Empty;

            // The generated proxy class calls the remote service over the configured endpoint.
            using (MagicEightBallContractClient client = new MagicEightBallContractClient(CodeEndpointNameHttp)) {
                answer = client.AskQuestion(question);
            }

            Console.WriteLine(answer);
        }
    }
}

In total there are fewer than 30 lines of code to write in order to define, implement, host and consume a WCF service.
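As a quick smoke test without a console project, PowerShell can build a proxy on the fly from the published metadata. A sketch; the generated proxy exposes the AskQuestion operation, though the exact member name can vary:

> $ball = New-WebServiceProxy -Uri "http://localhost/MagicEightBall/MagicEightBall.svc?wsdl" -UseDefaultCredential
> $ball.AskQuestion("Will this work first time?")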

Services as Workflows
There is an alternative way to author services which uses a workflow to define the service implementation. A functionally equivalent Magic Eight Ball service can be developed as a workflow service as follows…

First create a new project in VS2010 that is a ‘WCF Workflow Service Application’ which sets up the basic send / receive service template. We need to set up a couple of variables within our workflow so click on the variables button at the bottom left having selected the outer scope:

The handle is created by the template so we need to add the question and answer strings. The variables are used to pass data into and out of activities; the activity is the equivalent of a program statement and acts on the data. In workflow it is possible to author new activities, such as the EightBall activity used in this example. The code for the activity is as follows:

using System;
using System.Activities;

namespace MagicEightBall.WorkflowService {
    public sealed class EightBall : CodeActivity<string> {
        private static readonly Random random = new Random();
        private static readonly string[] answers = { "Yes", "No", "Ask again", "Definitely", "Bad idea", "Perhaps", "Unsure" };

        public InArgument<string> Question { get; set; }

        protected override string Execute(CodeActivityContext context) {
            // The question is pulled from the workflow runtime at execution time.
            string question = context.GetValue(this.Question);
            string answer = answers[random.Next(0, answers.Length)];

            return answer;
        }
    }
}

This activity is essentially the same code as the EightBall class in the original service. The question is captured as an InArgument to the activity and the result is a string, specified via the generic CodeActivity<string> base class. Note the use of the CodeActivityContext to get the value of the question from the workflow runtime at execution time.

After compiling the project we get an EightBall activity in our toolbox and this can be dragged into the service workflow. The completed implementation looks as follows with the addition of the EightBall activity:

The EightBall activity needs to have its arguments mapped to variables. The properties of the activity are defined as follows:

In the receive activity, the operation name is changed to AskQuestion and the content is changed to:

Here the receive activity expects to get a string parameter called question which is mapped to the question variable we created earlier. The receive/send activity pairing is analogous to the AskQuestion method in our coded service.

The send activity returns a string and is paired with the ‘Receive Question’ receive activity, as shown in its Request field.

Here we are returning the answer that we got from the EightBall activity. This workflow is now functionally equivalent to our original coded example: a string containing a question is passed in, a string containing an answer is returned.

To host the workflow service, the same steps are taken as before: you simply choose to publish the service from Visual Studio into IIS. The service exposes metadata in the same way as the coded service, therefore you can ask Visual Studio to generate a service reference for you and then consume the service in the same way as we did for the coded service.

So we have two ways to solve a problem – which is better? It depends on the work that the service is performing. If the service is co-ordinating work across multiple services then a workflow makes sense as it can be easier to visualize the intended flow of control. If the service co-ordination is long running and needs to be persisted then again a workflow makes sense as this long running, durable capability is built right into the workflow service host that Microsoft ships out of the box.

The sample code contains some additional concepts not discussed such as a separate activity library and instrumentation options for service code. The code is small and so hopefully this does not clutter the examples too much.

Migration from .NET 2/3/3.5 to .NET 4

During the TechEd session, the question was asked:

“How do I migrate my services from WCF3 to WCF4?”

The simple answer is that you recompile your source under .NET 4 and you should be done. .NET 4 is backwards compatible with .NET 2/3.X but you need to recompile for the new CLR (common language runtime).

TechEd NZ 2009 Sessions

This year Microsoft has opened up the TechEd sessions to the public, so you no longer have to be a TechEd attendee to watch the sessions online. This includes sessions from previous years, which means the sessions I co-presented at New Zealand TechEd last year are now available.

A first look at WCF and WF in .NET 4.0
http://www.msteched.com/2009/NewZealand/SOA206

This session covered the new features in .NET 4 for WCF and WF. The slide deck was prepared and originally presented by Aaron Skonnard from Pluralsight. Mark, a colleague at ADERANT, and I were asked to present in New Zealand due to our .NET 4.0 TAP involvement (Technology Adoption Program). The demos were our own and so the content is slightly different to the original presentation.

Building declarative apps in .NET 4.0
http://www.msteched.com/2009/NewZealand/SOA306

In this session we wanted to show how Microsoft is choosing a declarative approach for much of its new technology, freeing the developer from the how and letting them concentrate on the what. Using the Visual Studio DSL toolkit it is possible to build your own visual DSLs and designers. From these models you can then use T4 to transform the model into code. This approach is at the heart of a software factory we use internally in ADERANT and has saved us from technology churn as well as speeding up product development.

Note: The DSL toolkit has been renamed for VS2010 and is now the Visual Studio Visualization and Modeling SDK.

TechEd Follow-up

The morning after TechEd was spent on a cruiser bike with Jeff from http://confederacyofcruisers.com/. This was an awesome way to see the city and hear about the history and unique culture of New Orleans. It was pretty hot out there (I came from the New Zealand winter) and so I’m now hiding in an air conditioned room and following through on my promise to make the slide deck from the talk available.

The slide deck for the ASI02-INT session can be downloaded from the TechEd2010 folder at http://public.me.com/stefsewell.

It was prepared in Office 2010 and I’ve left in the slide notes just to give some additional context. Please let me know if there are any issues.

A big thank you to everyone who came along to the session; I will follow up on some of the questions asked in subsequent posts. The feedback has been mixed: on the positive side, some found real value in hearing how we are tackling the same problems they face, and some were interested directly in using our framework. On the less positive side, it was felt the session was too biased towards ADERANT and not focused enough on WF, WCF and AppFabric. My goal was to show how WF, WCF and AppFabric are used as a platform to build an application framework. Looking back, the balance could have been tipped further towards the out of the box technologies that Microsoft is shipping and less on where we used the extensibility model. To try to redress the balance I’m going to write up the demos I didn’t show and make the source code available over the coming weeks. Please bear with me and I hope that in the end everyone will get something useful either out of the session directly or out of the follow-up material. If you attended the session and there is a particular topic or problem that you would like me to cover then please let me know (stefan.sewell at aderant.com).

The samples I’ll cover will be:
• Creating and hosting a simple code based WCF service, then implementing the same functionality as a workflow service and hosting it.
• A walkthrough of the sample deployment DSL I demoed to show how to get started with the VS2010 DSL Toolkit.

Service Deployment

Deployment is one of those tasks that is often left late in the development lifecycle, though it is a non-trivial problem. The adoption of continuous integration as part of an agile approach encourages the deployment aspects to be undertaken alongside the development, so that at the end of each sprint the stakeholder has an installable piece of software delivered. When creating a service orientated architecture the deployment problem increases in complexity. Gone are the days of a SQL script for the database server and an installer for the client machine. Now there are often tens of servers interacting in a medium scale solution, often in a web or application server farm to provide both resilience and scale out capabilities. Almost two years ago I took a step back, looked at how we were deploying software and saw that there had to be a better way. We were installing early versions of the Golden Gate software onto customer sites and experiencing a lot of teething problems getting the system running. Often the problems were due to the servers not having the required pre-requisites installed, such as the .NET framework, not having the correct services running, and so on. In an attempt to document the installation process we ended up with an installation guide that was rapidly approaching 100 pages. There had to be a better way…

Environment and Role Manifests
I’m on occasion reminded that I’m primarily paid to think, so I took a deep breath and started to think about the problem. What would the ideal situation be? The first, and in many ways the biggest, realization was that we wanted to treat the deployment of the whole system as a single unit of work. We wanted to allow an administrator to define where they wanted our software to be deployed in their site and then simply click ‘go’.

The definition of the system would include a list of the servers they wanted to use and the roles they wanted each server to perform. Windows Server has the concept of a role: when setting up a new installation you choose what you want the server to do; is it the Active Directory controller, an application server, a file server, a web server? Depending upon which roles you allocate, different features are available. Some roles are incompatible on the same server; some roles are dependent upon other roles being satisfied by other servers. The role concept was something we also required, as we had a number of different server components: configuration, security, workflow, messaging and application services. Each component was a unit of deployment; a server could be allocated the workflow role, for example, which contained a number of services such as instance management and task management. We did not want to have to walk up to or remote onto each server and perform an installation; we wanted a central process to co-ordinate and manage the installation across all of the servers.

We needed a collective term for the definition of a complete deployment and in the end I chose the term environment. This came from my days working for an internet bank where we had a strictly defined set of staging platforms (environments) that code had to work its way through on the way to production; integration test, system test, user acceptance test, pre production. The environment is the root level object in a system deployment and contains information such as the environment name, the list of servers to install to, common file locations such as the install directory and others. A firm is expected to have multiple environments, as a minimum: development, test and production/live.

The concepts of the environment and the role are similar to the two manifests that ClickOnce uses to control client installations: the publisher manifest and the application manifest. The publisher manifest is owned by the company that is running the software and includes information specific to them, such as the installation URL. The application manifest is owned by the company who authored the software and includes all of the files required on the client to run the software (amongst other details). In fact I drew a lot of inspiration from ClickOnce: what we wanted was a ClickOnce mechanism for server deployment. ClickOnce is driven from the two XML manifest files that declare what is required; these are given to the ClickOnce engine to action and the deployment takes place. I’m a big fan of both declarative programming and modeling so I wanted a deployment model that could be actioned. This was 12 months before all the excitement around Oslo and DSLs flared up (and then died down again). We had seen that both WPF and WF worked well as XAML driven runtimes (in .NET 3.X) and so the basic concepts of a deployment model and runtime took shape.

In summary an environment contains a mapping of servers to roles. A role represents an installable server component. Both the environment and role details are captured as manifest files which can be described in XML.

Environment Manifest
The environment manifest is quite simple and most easily explained with an example:

<environment    name="Local" 
                networkSharePath="C:\ExpertShare\Beaker" 
                sourcePath="C:\ExpertSource"
                createClickOnceDeployments="true" 
                expertServiceUser="Domain\service.expert"
                expertServicePassword="SOrtabXXXXX5GF3SDKIEw==">
  <expertDatabaseServer serverName="dbserver.domain.com" serverInstance="">
    <databaseConnection     databaseName="Expert" 
                            username="cmsdbo" 
                            password="eo4G3S2KLO05EzgQb3Q==" />
  </expertDatabaseServer>
  <servers>
    <server name="appserver.domain.com" 
            expertPath="C:\AderantExpert\{{Name}}" 
            skipPrerequisitesCheck="false" servicesWebsite="Default Web Site">
      <roles>
        <role type="configuration"/>
        <role type="customworkflows"/>
        <role type="employeeIntake"/>
        <role type="fileopening"/>
        <role type="identity"/>
        <role type="messaging"/>
        <role type="queryservice"/>
        <role type="security"/>
        <role type="workflow">
          <roleParameters>
            <roleParameter name="defaultSmtpHost" value="smtp.dev.domain.com" />
            <roleParameter name="defaultSmtpPort" value="25" />
            <roleParameter name="defaultFromEmailAddress" value="wfadmin@domain.com" />
          </roleParameters>
        </role>
      </roles>
    </server>
  </servers>
</environment>

This example manifest captures the environment details specific to the installing firm, such as the server names, database details, installation source and so on. In this simple example only one application server is specified, for brevity, and it runs all of the roles. In reality there would be multiple servers listed, each running its roles in a load balanced configuration.

Role Manifest
A role manifest defines the pre-requisites, the files and the services deployed as a unit.

Prerequisite Checking
As mentioned, the first problem we hit during a deployment was pre-requisites. How could we be sure that a server was capable of running our software? There were a number of aspects to this:
• was a supported OS installed
• were the correct operating system components installed
• were third party dependencies met
• were the correct supporting services running
• were the components correctly configured

The pre-requisites vary by component, so in the role definition we have a section of checks that must all pass before the deployment can proceed. One of the first examples we saw was that the Microsoft Distributed Transaction Co-ordinator (MSDTC) was not enabled on many of the servers. If it was enabled, then the configuration was incorrect and the machine would not accept remote transactions. For Windows Services, the service control manager (SCM) can be queried to find the state of a service, and the registry contains the configuration keys for the component settings. The big problem here was the poor support for remote processes in Windows; coming from a UNIX background this has always frustrated me. At the time PowerShell v1 was full of promise but it did not support remote sessions; that was coming in v2, which was still a CTP and did not look like it would be ready in time. While a number of shell commands have built-in support for running against a remote machine, there were enough gaps, version incompatibilities between 2003 and 2008, or performance issues that in the end I wrote a Windows service to perform the checking. Using an xcopy deployment and the SC command it is possible to remotely deploy, register and start a Windows service (a sketch follows the examples below). This service accepts a list of pre-requisites to check and returns a list of results: pass or fail. The pre-requisites required by a role are defined within the role manifest; examples are:

<registryPrerequisite
    path="HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSDTC\Security\NetworkDtcAccess"
    value="1"
    description="MSDTC configured to allow remote access." />

<servicePrerequisite
    serviceName="WinRM"
    description="Ensure Windows Remote Management (WS-Management) service is available" />

Required Files
A role contains a list of the files required to be installed on the server and where the files need to go. An installation of Expert has a root directory specified by the installing administrator and then the structure is fixed under that:

Each file to be copied is captured in a files section in the role manifest, an example is:

    <file   filename="Aderant.Framework.Notes.dll"
            deploymentLocation="Local"
            targetRelativePath="LegacyServices" />
    <file   filename="Aderant.Framework.Notes.Presentation.dll"
            deploymentLocation="Local"
            targetRelativePath="LegacyServices" />
    <file   filename="Aderant.Framework.Notes.Services.dll"
            deploymentLocation="Local"
            targetRelativePath="LegacyServices" />

In order to be flexible, the file specification allows the source and target paths to be specified as well as the source and target filenames. This allows us to perform any manipulation of the file structure that we need to.

Services
In Golden Gate SP1 we support hosting services either as Windows Services under the SCM or in IIS under AppFabric. We are in the process of moving all of our services to AppFabric/IIS, however this is not yet complete. Therefore a role manifest may contain a section for Windows Services:

  <serviceHost exeName="Expert.Notes.Service"
             serviceName="Aderant.Framework.Services.NotesService:{{Name}}"
             displayName="ADERANT Notes Services ({{Name}} instance)"
             description="Host for Notes Services for the {{Name}} environment."
             watchFiles="Aderant.Framework.*.dll"
             dependencies="MSMQ">
    <services>
      <service name="Notes"
               assemblyName="Aderant.Framework.Notes.Services.dll"
               entryPoint="Aderant.Framework.Notes.Service.Host.NotesService"
               requiresThread="true"
               serviceName="ADERANT Notes Service"
               proxyInterface="Aderant.Framework.Notes.Service.INotesService"
               serviceClass="ExpertNotesSvc"
               port="[[notesServicePort]]" />
    </services>
  </serviceHost>

and AppFabric hosted services:

  <appFabricServiceHost>
    <applicationPools>
      <applicationPool name="[[workflowApplicationPool]]"
                       netVersion="V4.0" />
    </applicationPools>
    <services>
      <service
        name="TaskManagement"
        proxyInterface="Aderant.Tasks.Interfaces.Service.ITaskManagementService"
        applicationPool="[[workflowApplicationPool]]"
        serviceType="FrameworkServices"
        supportedProtocols="http"
        allowAnonymousAuthentication="true"
        allowWindowsAuthentication="true" />
    </services>
  </appFabricServiceHost>

In both cases, the information required to create and host a service is provided. For Windows-based services we have a reusable service host exe; AppFabric extends IIS and WAS to provide the hosting.

Deployment Engine
Up to this point we have really been looking at the deployment model and how it is captured in the two manifests. These manifests are just an XML serialization of a deployment model; when we load an environment we map from the XML into an in-memory object graph of the environment. We now need something to action the model, and this is the deployment engine.

The deployment engine itself is the coordinator that executes a number of deployment actions. A deployment action performs a piece of work required in a deployment; its interface is as follows:

namespace Aderant.Framework.Deployment.Actions {
    public interface IDeploymentAction: IDeploymentMessage {
        void Deploy(Environment environment);
        void Clean(Environment environment);
        void Validate(Environment environment);
    }
}

The deployment engine supports a set of actions that can be performed on an environment. The three key actions are deploy, remove (Clean in the interface) and validate. When the deployment engine is asked to perform a ‘deploy’, it asks each of the deployment actions in turn to ‘deploy’. We have a library of around 30 deployment actions, examples of which are:

• AppFabricHostingAction
• FileDeploymentAction
• LoadBalancingConfigurationAction
• ServiceHostBuilderAction
• SQLScriptRunnerAction

Each action in turn knows how to deploy, remove and validate its part of a deployment. The validate action is very important: it allows an administrator to check whether a pre-installed environment still meets the pre-requisites, still has the required files in place and has the required services up and running. For example, it allows an administrator to easily see that a registry setting is no longer correctly set. The deployment actions in turn rely on a set of controller classes that interact with external components such as AppFabric, the file system, the Windows service manager, MSMQ and others. The separation of controllers from the deployment actions also allows a high degree of code re-use as well as better unit testing.

While the deployment engine is currently C# code, it would be relatively easy to move it to a workflow. The deployment engine is a coordinator and therefore the control flow would be quite naturally captured as a workflow. The deployment actions would become an activity library.

As it stands the deployment engine is a command line utility, however it does have a WPF UI that calls through to it (in a very similar model to AppFabric calling the PowerShell API from the IIS Manager add-in).

The environment manifest in the screenshot above shows a small load balanced environment being used to host multiple instances of our services.

The declarative deployment model and runtime is a good candidate for a DSL. In fact we prototyped a visual DSL using the Visual DSL toolkit for Visual Studio. This allowed an administrator to literally draw out the deployment diagram for an environment, which was then transformed via a T4 template into an environment XML file. This could then be executed via the deployment engine and used to deploy a full system.

Data over the Web

It’s been a little while since the last posting, in no small part due to my broadband usage exceeding the monthly allowance. Dial-up speed is just painful, and made me realize just how much I use the internet for media: music, movies, podcasts, blogs, … It was also a great reminder just how sensitive applications are when you have a constrained network connection.

One of the most significant changes made to the Expert architecture with SP1 is the introduction of a query service. Prior to SP1, the architectural layering required that data transfer objects (DTOs) were used to move data from a service boundary to the client. The domain model was mapped to whatever shape was required by the client requesting the entities.

Writing the DTO and mapper classes is very repetitive and quite dull and so it was automated using the Visual DSL Toolkit for Visual Studio (now renamed to the Visual Studio Visualization and Modeling SDK). A key component in the Expert framework is our software factory which builds code from 3 models: relational model, domain model and the view model. The view model provides a model and tooling to generate use case specific views of the domain model and the mappers required to transform from domain model to view model and back. An optimization we made when sending data back to the service to update the domain model was just to send back the changes. This required the view model to track any updates made to the model between the time it was fetched from a service and the time it was sent back to the service. The mechanism we wrote to achieve that is worth a few blog entries on its own and I’m going to skip over the details here.

One of the primary clients within this architecture is our workflow service which allows data from the business services to be managed within a long running workflow process before being updated back into the main line of business system. In the original Golden Gate release, the data associated with a workflow instance is sent out with every task within the workflow (a task is a workflow activity that requires human interaction). For very large workflow processes, this can be an issue, particularly over restricted network connections such as VPN or very remote sites. For SP1 we took a look at this particular areas and addressed it in the following ways:

• Tasks now have a data contract so that only the required data is sent.
• The way we fetch data is now via a dedicated query service rather than combining reads and write operations in the same service contract. The query service is http based and therefore can take advantage of out-of-the-box optimizations such as caching and compression.

The separation of query from command at the architectural level is currently being explored by a number of people, most vocally Greg Young and Udi Dahan. The architectural pattern is Command Query Responsibility Segregation (CQRS) and is similar in spirit to the Command-Query Separation (CQS) concept first discussed by Bertrand Meyer in Object-Oriented Software Construction back in 1988. This is another topic worthy of blog posts in its own right; InfoQ has a great presentation from Greg Young.

Our query service implementation is a WCF Data Service which takes an Entity Framework 4 model and exposes it as a RESTful service.


All data required by a client is fetched via the query service and this is delivered over an http channel. The use of IIS and HTTP gives us the following:

• monitoring via AppFabric
• compression via the dynamic compression in IIS7
• caching using standard HTTP based caches
• cross platform capable data feed

The lifecycle for data is now:

There are a number of interesting aspects to this, not least of which is that we now use two different ORM technologies: NHibernate and Entity Framework. This is based on both historical use and feature set: given the data model that we map to, we need the rich extension points available in NHibernate to support the desired object model. The WCF Data Services and EF4 features in .NET 4 / Visual Studio 2010 take most of the heavy lifting out of exposing a domain model via REST. Microsoft is now promoting the Open Data Protocol, built on top of HTTP/Atom/JSON, as a cross platform mechanism for interoperating with data over the web; an ODBC for the web, perhaps. The Mix10 keynote included a chapter on OData and how Microsoft is tooling it.

Given that we have a software factory that already contains a domain model which includes the persistence mapping, we just needed to add additional T4 transforms to the software factory so that we could generate a query service and view model implementation from the existing domain model. Along the way we also simplified the change tracking approach, as the explicit task data contract reduced the potential for merge conflicts.

Now that we have a query service and Microsoft is doing all it can to promote OData as a cross platform solution, an interesting number of options are opening up. One of these is the iPhone/iPad platform from Apple. As part of the OData SDK, Microsoft has released a library and tooling to make consuming OData feeds from the Apple platform straightforward. This includes a tool that generates Objective-C classes from the metadata available from an OData stream (via the $metadata directive).
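The metadata those tools work from is easy to inspect directly; a sketch, with a hypothetical query service URL (the single quotes stop PowerShell treating $metadata as a variable):

> $client = New-Object System.Net.WebClient
> $client.UseDefaultCredentials = $true
> $client.DownloadString('https://svexpgg310.ap.aderant.com/Expert/QueryService.svc/$metadata') | Out-File metadata.xml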