A solution to WinRM in an NLB cluster…

I’ve written a couple of posts discussing the remoting options for PowerShell:
• fan-out model – Windows Remote Management service (WinRM)
• fan-in model – IIS hosted PowerShell endpoint (using the IIS WinRM extension)

When running load balanced WCF services in IIS that are secured using Windows Authentication, the web applications are mapped to app pools that run under a domain account. Kerberos requires this so that encrypted messages can be decrypted using a common set of credentials across the farm. By default the HTTP SPN is registered against the machine account, so it was changed to map to the domain account instead. This broke WinRM, which is also an HTTP endpoint but runs as the network service: Kerberos authentication failed because the SPN now says the HTTP service runs under the domain account, while WinRM does not.

PowerShell supports two machine name formats when setting the Invoke-Command -ComputerName parameter: the NETBIOS name and the fully qualified domain name (FQDN). To call the WinRM service and authenticate using Kerberos, you need to use the machine name format that is not used in the SPN. For example, if

HTTP/myserver.domain.com

is the SPN registered against the domain account used by the application pools, then

PS> icm -ComputerName myserver.domain.com -ScriptBlock { 'foo' }

will fail, however

PS> icm -ComputerName myserver -ScriptBlock { 'foo' }

will succeed. It works because the SPN must be an exact match for the machine name used (though case insensitive on Windows); if HTTP/myserver had been registered instead, the NETBIOS form would be the one to fail. [I tried using the IP address too, but PowerShell reports an error saying it does not support that scenario unless the IP address is in the TrustedHosts list.] This is still a little ‘magic’ and the better way to do this is to enable CredSSP in PowerShell.
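As an aside, the SPN registrations themselves can be inspected and amended with the setspn utility – a quick sketch, where the account and host names are illustrative (-S, available from Server 2008, checks for duplicates before adding):

# list the SPNs currently registered against the service account
setspn -L MYDOMAIN\service.account

# register the HTTP SPN against the domain account
setspn -S HTTP/myserver.domain.com MYDOMAIN\service.account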

This discovery removes the need to use the fan-in model, which we’ve found to be more problematic than the WinRM Windows Service:
• Cannot use the IIS:\AppPools path – it returns no results
• Cannot use IIS:\Sites – it throws a COM exception
• The app pool identity must have the ‘Generate Security Audit’ right on the machine
• Intermittent failures with the Windows Process Activation Service

Another recent discovery is around the effect of the NETBIOS name on IE zone security. If a resource is considered to be outside the local intranet or trusted sites zones, then Kerberos does not work – the ticket is not issued. Using the FQDN therefore requires the domain to be added to the local intranet zone sites. The NETBIOS name, however, is considered to be within the local intranet zone, so no amendment to the zones is required.
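For reference, the zone mapping itself can be scripted – a sketch assuming a per-user setting is acceptable (group policy is the usual way to manage this centrally; the domain name is illustrative):

$domains = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains'
New-Item -Path $domains -Name 'domain.com' -Force
# zone 1 = Local intranet, zone 2 = Trusted sites
Set-ItemProperty -Path "$domains\domain.com" -Name 'http' -Value 1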

One last tangential gotcha… it is possible to extend the probe path that IIS uses when looking for assemblies beyond the standard bin directory.

<runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <probing privatePath="bin;SharedBin" />
    </assemblyBinding>
</runtime>

However, files outside the normal bin directory are not shadow copied, so you can run into unexpected file locking when updating them – in the case above, anything in SharedBin.

Co-ordinating deployments using the Parallel class in .NET 4.0

It’s been a long time since the last entry; the new year brings with it a fresh post based on some of the deployment work I’ve been looking at recently. This work has opened my eyes to the support for parallel co-ordination of work within .NET 4…

Recently I’ve been looking at the deployment approach we have for our services with an eye to reducing the time it takes for a full deployment. There are two simple concepts that leapt out: the first is to use a pull rather than a push model; the second is to deploy to all of the servers in parallel. This second point becomes increasingly important as more servers get involved in hosting the services.

Pull versus Push
One of the most basic operations performed by the deployment engine is the copying of files to the application servers that host the various services within our product. The file copying was originally implemented as a push: the deployment agent performs the copy to the target server using an administration share, e.g. \\appserver01.domain.com\d$\AderantExpert\Live\. This requires the deployment engine to run with administrator privileges on the remote machines, which is not ideal.

An alternative is to send a script to the target server containing the copy commands; the target server is then responsible for pulling the files to its local storage from a network share (which can be secured appropriately). The deployment engine is responsible for creating the script from the deployment model and co-ordinating the execution of the scripts across the various application servers.

PowerShell remoting is a great option for the remote execution of scripts and it’s quite straightforward to transform an object model into a PowerShell script using LINQ. I created a small script library class that provides common functions, for example:

internal class PowerShellScriptLibrary {
    internal static void ImportModules(StringBuilder script) {
        script.AppendLine("import-module WebAdministration");
        script.AppendLine("import-module ApplicationServer");
    }

    internal static void StopWindowsServices(string filter, StringBuilder script) {
        script.AppendLine("# Stop Windows Services");
        script.AppendLine(string.Format("Stop-Service {0}", filter));
    }

    // Note: the generic type parameters were lost in the original listing;
    // IEnumerable<FileSpecification> is assumed here for the file collections.
    internal static void CreateTargetDirectories(string rootPath, IEnumerable<FileSpecification> fileSpecifications, StringBuilder script) {
        script.AppendLine("# Create the required folder structure");
        fileSpecifications
            .Where(spec => !string.IsNullOrWhiteSpace(spec.TargetFile.TargetRelativePath))
            .Select(x => x.TargetFile)
            .Distinct()
            .ToList()
            .ForEach(targetFile => {
                string path = Path.Combine(rootPath, targetFile.TargetRelativePath);
                script.AppendLine(string.Format("if(-not(Test-Path '{0}'))", path));
                script.AppendLine("{");
                script.AppendLine(string.Format("\tNew-Item '{0}' -ItemType directory", path));
                script.AppendLine("}");
            });
    }
}


The library is then used to create the required script by calling the various functions. The examples below are for the patching approach, which allows updates to be installed without requiring a full remove and redeploy:

private string GenerateInstallScriptForPatch(Server server, IEnumerable<FileSpecification> filesToDeploy, Environment environment, string patchFolder) {
    StringBuilder powershellScript = new StringBuilder();

    PowerShellScriptLibrary.ImportModules(powershellScript);
    PowerShellScriptLibrary.StopWindowsServices("ADERANT*", powershellScript);
    PowerShellScriptLibrary.StopAppFabricServices(environment, powershellScript);
    PowerShellScriptLibrary.CreateTargetDirectories(server.ExpertPath, filesToDeploy, powershellScript);
    PowerShellScriptLibrary.CreatePatchRollback(server, patchFolder, filesToDeploy, powershellScript);
    PowerShellScriptLibrary.CopyFilesFromSourceToServer(environment, server, filesToDeploy, powershellScript);
    PowerShellScriptLibrary.UpdateFactoryBinFromExpertShare(server, environment.NetworkSharePath, powershellScript);
    PowerShellScriptLibrary.StartAppFabricServices(environment, powershellScript);
    PowerShellScriptLibrary.StartWindowsServices("ADERANT*", powershellScript);

    return powershellScript.ToString();
}

Though it is possible to treat NTFS as a transactional system (see http://msdn.microsoft.com/en-us/library/bb968806(v=VS.85).aspx) and therefore have it participate in atomic actions, I didn’t walk this path. Instead I chose the compensation route: when the model is transformed into a script, I create both an install script and a compensating script, which is executed in the event of anything going wrong.

private string GenerateRollbackScriptForPatch(Server server, IEnumerable<FileSpecification> filesToDeploy, Environment environment, string patchFolder) {
    StringBuilder powershellScript = new StringBuilder();

    PowerShellScriptLibrary.ImportModules(powershellScript);
    PowerShellScriptLibrary.StopWindowsServices("ADERANT*", powershellScript);
    PowerShellScriptLibrary.StopAppFabricServices(environment, powershellScript);
    PowerShellScriptLibrary.RollbackPatchedFiles(server, patchFolder, filesToDeploy, powershellScript);
    PowerShellScriptLibrary.StartAppFabricServices(environment, powershellScript);
    PowerShellScriptLibrary.StartWindowsServices("ADERANT*", powershellScript);

    return powershellScript.ToString();
}

The scripts simply take a copy of the existing files that will be replaced before replacing them with the new versions. If anything goes wrong during the patch install, the compensating script is executed to restore the previous files.
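The generated script fragments follow a simple copy-based pattern along these lines (a sketch – the file and folder paths are illustrative):

# Install: back up the current file, then replace it with the new version
Copy-Item 'D:\AderantExpert\Live\Bin\MyService.dll' 'D:\Patch\Rollback\MyService.dll'
Copy-Item '\\fileshare\Patch\MyService.dll' 'D:\AderantExpert\Live\Bin\MyService.dll' -Force

# Compensate: restore the backed-up version
Copy-Item 'D:\Patch\Rollback\MyService.dll' 'D:\AderantExpert\Live\Bin\MyService.dll' -Force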

Given that a server-specific script is now generated per application server (different servers host different roles and therefore require different files), the deployment engine can pass each script to its server, ask the server to execute it, and then wait for the OK from each one. If any server reports an error, the compensation script can be executed on all of them as required.
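In PowerShell terms the co-ordination looks something like this (a sketch, assuming remoting is enabled on the application servers; the dictionaries hold the generated scripts keyed by server):

try {
    foreach ($server in $serverInstallationScripts.Keys) {
        $installBlock = [ScriptBlock]::Create($serverInstallationScripts[$server])
        Invoke-Command -ComputerName $server -ScriptBlock $installBlock -ErrorAction Stop
    }
} catch {
    # a failure on any server triggers the compensation script on all of them
    foreach ($server in $serverRollbackScripts.Keys) {
        $rollbackBlock = [ScriptBlock]::Create($serverRollbackScripts[$server])
        Invoke-Command -ComputerName $server -ScriptBlock $rollbackBlock
    }
}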

Parallelizing a deployment
Before looking at some co-ordination code for the deployment engine, I want to explicitly note that there are two different and often confused concepts:
• Asynchronous execution
• Parallel execution

An asynchronous execution involves a call to begin a method and then a callback when the work is complete. IO operations are natural candidates for asynchronous calls, to ensure that the calling thread is not blocked waiting on the IO to complete. Single-threaded environments, such as UI frameworks, are the most common place to see a push for asynchronous programming. In .NET 3.0, Windows Workflow Foundation provided an excellent asynchronous programming model in which asynchronous activities are co-ordinated by a single scheduler thread. It is bad practice to have this scheduler thread block or perform long-running operations, as it stalls workflow progress when in a parallel activity. It is better to schedule multiple asynchronous activities in parallel when possible and have these execute on separate worker threads.

Parallel execution involves breaking a problem into small parts that can be executed in parallel, exploiting the multi-core nature of today's CPUs. Rather than having a single core work towards an answer, many cores can participate in the calculation. To reduce the elapsed time of a calculation – the time experienced by the end user – it may be possible to execute a LINQ query over all available cores (typically 2, 4 or 8). LINQ now has the .AsParallel() extension method, which can be applied to queries to enable parallel execution. Of course, profiling is required to determine whether the query actually performs better in parallel for typical data sets.

.NET 4 added the Task Parallel Library into the core runtime. This library adds numerous classes to the BCL to make parallel programming and the writing of co-ordination logic much simpler. In particular the Parallel class can be used to easily schedule multiple threads of work. For example:

Parallel.Invoke(
    () => Parallel.ForEach(updateMap, server =>
        serverInstallationScripts.Add(server.Key, GenerateInstallScriptForPatch(server.Key, server.Value, environment, patchFolder))),
    () => Parallel.ForEach(updateMap, server =>
        serverRollbackScripts.Add(server.Key, GenerateRollbackScriptForPatch(server.Key, server.Value, environment, patchFolder)))
);

The above code is responsible for creating the install and compensate PowerShell scripts from the deployment model discussed above. There are two levels of parallelism going on here. First, the generation of the install and compensate scripts is scheduled at the same time using a Parallel.Invoke() call. Then a Parallel.ForEach() is used to generate the required script for each application server defined in the environment in parallel. The runtime is responsible for figuring out how best to achieve this; as programmers we simply declare what we want to happen. In the above code, updateMap is an IDictionary keyed on Server, holding the IList of files to deploy to each server. One caveat: the script dictionaries are populated from multiple threads, so they must be thread-safe collections (e.g. ConcurrentDictionary) or the Add calls must be synchronized.

I was simply blown away by how simple and yet how powerful this programming model is.

Accessing the LSA from managed code

This blog entry would be filed under the ‘it should not be this hard’ category if I had one. A reasonably common requirement is to determine the rights a user has and then to add additional rights as necessary. After much searching I could not find a ‘managed’ way to do this so I ended up with the following…

This post very much stands on the shoulders of others and so here are the links to the original articles I used:

LSA .NET from Code Project
“RE: Unmarshalling LsaEnumerateAccountRights() list of privileges”

When installing a new service, it is often necessary to add additional rights to the user that the service runs as, for example ‘Log on as a service’. To do so from either managed code or PowerShell would seem like a reasonably obvious ask, but I could not find any type that allowed me to. The security information is managed by the Local Security Authority (LSA), which has an unmanaged API available from advapi32.dll; to access this from C# requires P/Invoke and a reasonable amount of code to marshal the types. I’m not a C++ programmer and so I first looked for an alternative.

The Windows Server 2003 Resource Kit includes a utility NTRights.exe which allows rights to be added and removed from a user via the command line. Unfortunately this tool no longer ships in the Windows Server 2008 Resource Kit but the 2003 version still works on both Windows 7 and Server 2008 (R2). The tool provided part of the solution but I also wanted to be able to find out the rights that have already been assigned to the user as well as add and remove.

No matter which way I turned, I was always led back to advapi32 and writing a wrapper to allow the functions to be called from C#. Thankfully most of the hard work had already been done and documented by Corinna John, with a sample project posted on Code Project. The original article comes from 2003, so I was a little surprised that it still hasn’t made it into a managed library. The sample by Corinna showed how to add rights to a user but unfortunately did not include listing the rights. For that I have to thank Seng, who lists sample code here.

By combining the efforts of both together and cleaning up the code a little, I ended up with the wrapper class given at the end of this posting (there is plenty of room for improvement in my code). This was compiled in VS2010, and the API ended up as:

public IList<string> GetRights(string accountName)
public void SetRight(string accountName, string privilegeName)
public void SetRights(string accountName, IList<string> rights)

I had to compile for .NET 2.0 so that I could call it from PowerShell…

[void][Reflection.Assembly]::LoadFile('C:\Samples\LSAController.dll') # void suppresses the output of the message text
$LsaController = New-Object -TypeName 'LSAController.LocalSecurityAuthorityController'
$LsaRights = New-Object -TypeName 'LSAController.LocalSecurityAuthorityRights' # a convenience class containing common rights
$LsaController.SetRight('ADERANT_AP\stefan.sewell', [LSAController.LocalSecurityAuthorityRights]::LogonAsBatchJob)
$LsaController.GetRights('ADERANT_AP\stefan.sewell')

The code for the wrapper follows; I hope this saves someone the 2 days I spent on this.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.InteropServices;
//
// This code has been adapted from http://www.codeproject.com/KB/cs/lsadotnet.aspx
// The rights enumeration code came from http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.interop/2004-11/0394.html
//
// Windows Security via .NET is covered by Pluralsight: http://alt.pluralsight.com/wiki/default.aspx/Keith.GuideBook/HomePage.html
//

namespace LSAController {
    //
    // Provides methods to access the Local Security Authority, which controls user rights. Normally managed via secpol.msc.
    //
    public class LocalSecurityAuthorityController {
        private const int Access = (int)(
            LSA_AccessPolicy.POLICY_AUDIT_LOG_ADMIN |
            LSA_AccessPolicy.POLICY_CREATE_ACCOUNT |
            LSA_AccessPolicy.POLICY_CREATE_PRIVILEGE |
            LSA_AccessPolicy.POLICY_CREATE_SECRET |
            LSA_AccessPolicy.POLICY_GET_PRIVATE_INFORMATION |
            LSA_AccessPolicy.POLICY_LOOKUP_NAMES |
            LSA_AccessPolicy.POLICY_NOTIFICATION |
            LSA_AccessPolicy.POLICY_SERVER_ADMIN |
            LSA_AccessPolicy.POLICY_SET_AUDIT_REQUIREMENTS |
            LSA_AccessPolicy.POLICY_SET_DEFAULT_QUOTA_LIMITS |
            LSA_AccessPolicy.POLICY_TRUST_ADMIN |
            LSA_AccessPolicy.POLICY_VIEW_AUDIT_INFORMATION |
            LSA_AccessPolicy.POLICY_VIEW_LOCAL_INFORMATION
            );

        [DllImport("advapi32.dll", PreserveSig = true)]
        private static extern UInt32 LsaOpenPolicy(ref LSA_UNICODE_STRING SystemName, ref LSA_OBJECT_ATTRIBUTES ObjectAttributes, Int32 DesiredAccess, out IntPtr PolicyHandle);

        [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)]
        private static extern int LsaAddAccountRights(IntPtr PolicyHandle, IntPtr AccountSid, LSA_UNICODE_STRING[] UserRights, int CountOfRights);

        [DllImport("advapi32")]
        public static extern void FreeSid(IntPtr pSid);

        [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true, PreserveSig = true)]
        private static extern bool LookupAccountName(string lpSystemName, string lpAccountName, IntPtr psid, ref int cbsid, StringBuilder domainName, ref int cbdomainLength, ref int use);

        [DllImport("advapi32.dll")]
        private static extern bool IsValidSid(IntPtr pSid);

        [DllImport("advapi32.dll")]
        private static extern int LsaClose(IntPtr ObjectHandle);

        [DllImport("kernel32.dll")]
        private static extern int GetLastError();

        [DllImport("advapi32.dll")]
        private static extern int LsaNtStatusToWinError(int status);

        [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)]
        private static extern int LsaEnumerateAccountRights(IntPtr PolicyHandle, IntPtr AccountSid, out IntPtr UserRightsPtr, out int CountOfRights);

        [StructLayout(LayoutKind.Sequential)]
        private struct LSA_UNICODE_STRING {
            public UInt16 Length;
            public UInt16 MaximumLength;
            public IntPtr Buffer;
        }

        [StructLayout(LayoutKind.Sequential)]
        private struct LSA_OBJECT_ATTRIBUTES {
            public int Length;
            public IntPtr RootDirectory;
            public LSA_UNICODE_STRING ObjectName;
            public UInt32 Attributes;
            public IntPtr SecurityDescriptor;
            public IntPtr SecurityQualityOfService;
        }

        [Flags]
        private enum LSA_AccessPolicy : long {
            POLICY_VIEW_LOCAL_INFORMATION = 0x00000001L,
            POLICY_VIEW_AUDIT_INFORMATION = 0x00000002L,
            POLICY_GET_PRIVATE_INFORMATION = 0x00000004L,
            POLICY_TRUST_ADMIN = 0x00000008L,
            POLICY_CREATE_ACCOUNT = 0x00000010L,
            POLICY_CREATE_SECRET = 0x00000020L,
            POLICY_CREATE_PRIVILEGE = 0x00000040L,
            POLICY_SET_DEFAULT_QUOTA_LIMITS = 0x00000080L,
            POLICY_SET_AUDIT_REQUIREMENTS = 0x00000100L,
            POLICY_AUDIT_LOG_ADMIN = 0x00000200L,
            POLICY_SERVER_ADMIN = 0x00000400L,
            POLICY_LOOKUP_NAMES = 0x00000800L,
            POLICY_NOTIFICATION = 0x00001000L
        }

        // Returns the Local Security Authority rights granted to the account
        public IList<string> GetRights(string accountName) {
            IList<string> rights = new List<string>();
            string errorMessage = string.Empty;

            long winErrorCode = 0;
            IntPtr sid = IntPtr.Zero;
            int sidSize = 0;
            StringBuilder domainName = new StringBuilder();
            int nameSize = 0;
            int accountType = 0;

            LookupAccountName(string.Empty, accountName, sid, ref sidSize, domainName, ref nameSize, ref accountType);

            domainName = new StringBuilder(nameSize);
            sid = Marshal.AllocHGlobal(sidSize);

            if (!LookupAccountName(string.Empty, accountName, sid, ref sidSize, domainName, ref nameSize, ref accountType)) {
                winErrorCode = Marshal.GetLastWin32Error(); // reliable, unlike calling GetLastError via P/Invoke
                errorMessage = ("LookupAccountName failed: " + winErrorCode);
            } else {
                LSA_UNICODE_STRING systemName = new LSA_UNICODE_STRING();

                IntPtr policyHandle = IntPtr.Zero;
                IntPtr userRightsPtr = IntPtr.Zero;
                int countOfRights = 0;

                LSA_OBJECT_ATTRIBUTES objectAttributes = CreateLSAObject();

                uint policyStatus = LsaOpenPolicy(ref systemName, ref objectAttributes, Access, out policyHandle);
                winErrorCode = LsaNtStatusToWinError(Convert.ToInt32(policyStatus));

                if (winErrorCode != 0) {
                    errorMessage = string.Format("OpenPolicy failed: {0}.", winErrorCode);
                } else {
                    int result = LsaEnumerateAccountRights(policyHandle, sid, out userRightsPtr, out countOfRights);
                    winErrorCode = LsaNtStatusToWinError(result);
                    if (winErrorCode != 0) {
                        errorMessage = string.Format("LsaAddAccountRights failed: {0}", winErrorCode);
                    }

                    // Use IntPtr arithmetic so the enumeration is safe on 64-bit (ToInt32 overflows on x64)
                    IntPtr ptr = userRightsPtr;
                    LSA_UNICODE_STRING userRight;

                    for (int i = 0; i < countOfRights; i++) {
                        userRight = (LSA_UNICODE_STRING)Marshal.PtrToStructure(ptr, typeof(LSA_UNICODE_STRING));
                        string userRightStr = Marshal.PtrToStringAuto(userRight.Buffer);
                        rights.Add(userRightStr);
                        ptr = new IntPtr(ptr.ToInt64() + Marshal.SizeOf(userRight));
                    }
                    LsaClose(policyHandle);
                }
                FreeSid(sid);
            }
            if (winErrorCode > 0) {
                throw new ApplicationException(string.Format("Error occured in LSA, error code {0}, detail: {1}", winErrorCode, errorMessage));
            }
            return rights;
        }

        // Adds a privilege to an account
        public void SetRight(string accountName, string privilegeName) {
            long winErrorCode = 0;
            string errorMessage = string.Empty;

            IntPtr sid = IntPtr.Zero;
            int sidSize = 0;
            StringBuilder domainName = new StringBuilder();
            int nameSize = 0;
            int accountType = 0;

            LookupAccountName(String.Empty, accountName, sid, ref sidSize, domainName, ref nameSize, ref accountType);

            domainName = new StringBuilder(nameSize);
            sid = Marshal.AllocHGlobal(sidSize);

            if (!LookupAccountName(string.Empty, accountName, sid, ref sidSize, domainName, ref nameSize, ref accountType)) {
                winErrorCode = Marshal.GetLastWin32Error(); // reliable, unlike calling GetLastError via P/Invoke
                errorMessage = string.Format("LookupAccountName failed: {0}", winErrorCode);
            } else {
                LSA_UNICODE_STRING systemName = new LSA_UNICODE_STRING();
                IntPtr policyHandle = IntPtr.Zero;
                LSA_OBJECT_ATTRIBUTES objectAttributes = CreateLSAObject();

                uint resultPolicy = LsaOpenPolicy(ref systemName, ref objectAttributes, Access, out policyHandle);
                winErrorCode = LsaNtStatusToWinError(Convert.ToInt32(resultPolicy));

                if (winErrorCode != 0) {
                    errorMessage = string.Format("OpenPolicy failed: {0} ", winErrorCode);
                } else {
                    LSA_UNICODE_STRING[] userRights = new LSA_UNICODE_STRING[1];
                    userRights[0] = new LSA_UNICODE_STRING();
                    userRights[0].Buffer = Marshal.StringToHGlobalUni(privilegeName);
                    userRights[0].Length = (UInt16)(privilegeName.Length * UnicodeEncoding.CharSize);
                    userRights[0].MaximumLength = (UInt16)((privilegeName.Length + 1) * UnicodeEncoding.CharSize);

                    int res = LsaAddAccountRights(policyHandle, sid, userRights, 1);
                    winErrorCode = LsaNtStatusToWinError(Convert.ToInt32(res));
                    if (winErrorCode != 0) {
                        errorMessage = string.Format("LsaAddAccountRights failed: {0}", winErrorCode);
                    }

                    LsaClose(policyHandle);
                }
                FreeSid(sid);
            }

            if (winErrorCode > 0) {
                throw new ApplicationException(string.Format("Failed to add right {0} to {1}. Error detail:{2}", accountName, privilegeName, errorMessage));
            }
        }

        public void SetRights(string accountName, IList<string> rights) {
            rights.ToList().ForEach(right => SetRight(accountName, right));
        }

        private static LSA_OBJECT_ATTRIBUTES CreateLSAObject() {
            LSA_OBJECT_ATTRIBUTES newInstance = new LSA_OBJECT_ATTRIBUTES();

            newInstance.Length = 0;
            newInstance.RootDirectory = IntPtr.Zero;
            newInstance.Attributes = 0;
            newInstance.SecurityDescriptor = IntPtr.Zero;
            newInstance.SecurityQualityOfService = IntPtr.Zero;

            return newInstance;
        }
    }

    // Local security rights managed by the Local Security Authority
    public class LocalSecurityAuthorityRights {
        // Log on as a service right
        public const string LogonAsService = "SeServiceLogonRight";
        // Log on as a batch job right
        public const string LogonAsBatchJob = "SeBatchLogonRight";
        // Interactive log on right
        public const string InteractiveLogon = "SeInteractiveLogonRight";
        // Network log on right
        public const string NetworkLogon = "SeNetworkLogonRight";
        // Generate security audit logs right
        public const string GenerateSecurityAudits = "SeAuditPrivilege";
    }
}

PowerShell Part 2 – Installing a new service

Following on from the brief introduction to PowerShell, let’s walk through the installation script…

The script installs a simple Magic Eight Ball service that will return a pseudo-random answer to any question it’s given. The service is written as a WCF service in C#; the files to deploy are available from http://public.me.com/stefsewell/ – have a look in TechEd2010/DEV306-WindowsServerAppFabric/InstallationSource. The folder contains a web.config to set up the service activation and a bin folder with the service implementation. The PowerShell scripts are also available from the file share; look in the PowerShell folder in DEV306…

Pre-requisite Checking

The script begins by checking a couple of pre-requisites. If any of these checks fail then we do not attempt to install the service; instead the installing admin is told of the failed checks. There are a number of different checks we can make: in this script we check the OS version, that dependent services are installed, and that the correct version of the .NET framework is available.

First we need a variable to hold whether or not we have a failure:

$failedPrereqs = $false

Next we move on to our first check: that the correct version of Windows is being used:

$OSVersion = Get-WmiObject Win32_OperatingSystem
if(-not $OSVersion.Version.StartsWith('6.1')) {
    Write-Host "The operating system version is not supported, Windows 7 or Windows Server 2008 required."
    $failedPrereqs = $true
    # See http://msdn.microsoft.com/en-us/library/aa394239(v=VS.85).aspx for other properties of Win32_OperatingSystem
    # See http://msdn.microsoft.com/en-us/library/aa394084(VS.85).aspx for additional WMI classes
}

The script fetches the Win32_OperatingSystem WMI object for interrogation using Get-WmiObject. This object contains a good deal of useful information; links are provided above to let you drill down into other properties. The script checks the Version to ensure that we are working with either Windows 7 or Windows Server 2008 R2, in which case the version starts with “6.1”.

Next we look for a couple of installed services:

# IIS is installed
$IISService = Get-Service -Name 'W3SVC' -ErrorAction SilentlyContinue
if(-not $IISService) {
    Write-Host "IIS is not installed on" $env:computername
    $FailedPrereqs = $true
}

# AppFabric is installed
$AppFabricMonitoringService = Get-Service -Name 'AppFabricEventCollectionService' -ErrorAction SilentlyContinue
if(-not $AppFabricMonitoringService) {
    Write-Host "AppFabric Monitoring Service is not installed on" $env:computername
    $FailedPrereqs = $true
}

$AppFabricMonitoringService = Get-Service -Name 'AppFabricWorkflowManagementService' -ErrorAction SilentlyContinue
if(-not $AppFabricMonitoringService) {
    Write-Host "AppFabric Workflow Management Service is not installed on" $env:computername
    $FailedPrereqs = $true
}

A basic pattern is repeated here using the Get-Service command to determine if a particular Windows Service is installed on the machine.

With the service requirements checked, we look to see if we have the correct version of the .NET framework installed. In our case we want the RTM of version 4 and go to the registry to validate this.

$frameworkVersion = get-itemProperty -Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -ErrorAction SilentlyContinue
if(-not($frameworkVersion) -or (-not($frameworkVersion.Version -eq '4.0.30319'))){
    Write-Host "The RTM version of the full .NET 4 framework is not installed."
    $FailedPrereqs = $true
}

The registry provider, HKLM: [HKEY_LOCAL_MACHINE], is used to look up a path in the registry that should contain the version. If the key is not found or the value is incorrect we fail the test.

Those are all the checks made in the original script from the DEV306 session; however there is a great feature in Windows Server 2008 R2 that allows very simple querying of the installed Windows features. I found this by accident:

>Get-Module -ListAvailable

This command lists all of the available modules on a system, the ServerManager module looked interesting:

>Get-Command -Module ServerManager

CommandType Name Definition
----------- ---- ----------
Cmdlet Add-WindowsFeature Add-WindowsFeature [-Name] [-IncludeAllSubFeature] [-LogPath ] [-...
Cmdlet Get-WindowsFeature Get-WindowsFeature [[-Name] ] [-LogPath ] [-Verbose] [-Debug] [-Err...
Cmdlet Remove-WindowsFeature Remove-WindowsFeature [-Name] [-LogPath ] [-Concurrent] [-Restart...

A simple add/remove/get interface which allows you to easily determine which Windows roles and features are installed – then add or remove as required. This is ideal for pre-requisite checking as we can now explicitly check to see if the WinRM IIS Extensions are installed for example:

import-module ServerManager

if(-not (Get-WindowsFeature 'WinRM-IIS-Ext').Installed) {
    Write-Host "The WinRM IIS Extension is not installed"
}

Simply calling Get-WindowsFeature lists all features and marks up those that are installed with [X]:

PS C:\Windows\system32> Get-WindowsFeature

Display Name Name
------------ ----
[ ] Active Directory Certificate Services AD-Certificate
[ ] Certification Authority ADCS-Cert-Authority
[ ] Certification Authority Web Enrollment ADCS-Web-Enrollment
[ ] Certificate Enrollment Web Service ADCS-Enroll-Web-Svc
[ ] Certificate Enrollment Policy Web Service ADCS-Enroll-Web-Pol
[ ] Active Directory Domain Services AD-Domain-Services
[ ] Active Directory Domain Controller ADDS-Domain-Controller
[ ] Identity Management for UNIX ADDS-Identity-Mgmt
[ ] Server for Network Information Services ADDS-NIS
[ ] Password Synchronization ADDS-Password-Sync
[ ] Administration Tools ADDS-IDMU-Tools
[ ] Active Directory Federation Services AD-Federation-Services
[ ] Federation Service ADFS-Federation
[ ] Federation Service Proxy ADFS-Proxy
[ ] AD FS Web Agents ADFS-Web-Agents
[ ] Claims-aware Agent ADFS-Claims
[ ] Windows Token-based Agent ADFS-Windows-Token
[ ] Active Directory Lightweight Directory Services ADLDS
[ ] Active Directory Rights Management Services ADRMS
[ ] Active Directory Rights Management Server ADRMS-Server
[ ] Identity Federation Support ADRMS-Identity
[X] Application Server Application-Server
[X] .NET Framework 3.5.1 AS-NET-Framework
[X] AppFabric AS-AppServer-Ext
[X] Web Server (IIS) Support AS-Web-Support
[X] COM+ Network Access AS-Ent-Services
[X] TCP Port Sharing AS-TCP-Port-Sharing
[X] Windows Process Activation Service Support AS-WAS-Support
[X] HTTP Activation AS-HTTP-Activation
[X] Message Queuing Activation AS-MSMQ-Activation
[X] TCP Activation AS-TCP-Activation
...

The right-hand column contains the name of the feature to use with these commands.
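If a feature turns out to be missing, it can be added from the same module – for example the WinRM IIS extension checked for above (this needs an elevated console; a quick sketch):

import-module ServerManager
Add-WindowsFeature 'WinRM-IIS-Ext'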

I ended up writing a simple function to check for a list of features:

<#
.SYNOPSIS
Checks to see if a given set of Windows features are installed.    

.DESCRIPTION
Checks to see if a given set of Windows features are installed.

.PARAMETER featureSetArray
An array of strings containing the Windows features to check for.

.PARAMETER featuresName
A description of the feature set being tested for.

.EXAMPLE
Check that a couple of web server features are installed.

Check-FeatureSet -featureSetArray @('Web-Server','Web-WebServer','Web-Common-Http') -featuresName 'Required Web Features'

#>
function Check-FeatureSet{
    param(
        [Parameter(Mandatory=$true)]
        [array] $featureSetArray,
        [Parameter(Mandatory=$true)]
        [string]$featuresName
    )
    Write-Host "Checking $featuresName for missing features..."

    foreach($feature in $featureSetArray){
        if(-not (Get-WindowsFeature $feature).Installed){
            Write-Host "The feature $feature is not installed"
        }
    }
}

The function introduces a number of PowerShell features such as comment documentation, functions, parameters and parameter attributes. I don’t intend to dwell on any as I hope the code is quite readable.

Then to use this:

# array of strings containing .NET related features
$dotNetFeatureSet = @('NET-Framework','NET-Framework-Core','NET-Win-CFAC','NET-HTTP-Activation','NET-Non-HTTP-Activ')

# array of strings containing MSMQ related features
$messageQueueFeatureSet = @('MSMQ','MSMQ-Services','MSMQ-Server')

Check-FeatureSet $dotNetFeatureSet '.NET'
Check-FeatureSet $messageQueueFeatureSet 'Message Queuing'

To complete the pre-requisite check, after making each individual test the failure variable is evaluated. If true then the script ends with a suitable message, otherwise we go ahead with the install.

Installing the Service

The first step in the installation is to copy the required files from a known location. This is a pull model – the target server pulls the files across the network, rather than having the files pushed on to the server via an administration share or such like [e.g. \\myMachine\c$\Services\].

$sourcePath = '\\SomeMachine\MagicEightBallInstaller\'
$installPath = 'C:\Services\MagicEightBall'

if(-not (Test-Path $sourcePath)) {
    Write-Host 'Cannot find the source path ' $sourcePath
    Throw (New-Object System.IO.FileNotFoundException)
}

if(-not (Test-Path $installPath)) {
    New-Item -type directory -path $installPath
    Write-Host 'Created service directory at ' $installPath
}

Copy-Item -Path (Join-Path $sourcePath "*") -Destination $installPath -Recurse

Write-Host 'Copied the required service files to ' $installPath

The file structure is copied from a network share onto the machine the script is running on. The Test-Path command determines whether a path exists and allows appropriate action to be taken. To perform a recursive copy the Copy-Item command is called, using the Join-Path command to establish the source path. These path commands can be used with any provider, not just the file system.
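For example, the same commands work unchanged against the registry provider used earlier:

Test-Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full'
Join-Path 'HKLM:\SOFTWARE\Microsoft' 'MSDTC'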

With the files and directories in place, we now need to host the service in IIS. To do this we need to use the PowerShell module for IIS:

import-module WebAdministration # requires admin-level privileges

Next…

$found = Get-ChildItem IIS:\AppPools | Where-Object {$_.Name -eq "NewAppPool"}
if(-not $found){
    New-WebAppPool 'NewAppPool'
}

We want to isolate our service into its own pool, so we check to see if NewAppPool exists and if not we create it. We are using the IIS: provider to treat the web server as if it were a file system; again we just use standard commands to query the path.

Set-ItemProperty IIS:\AppPools\NewAppPool -Name ProcessModel -Value @{IdentityType=3;Username="MyServer\Service.EightBall";Password="p@ssw0rd"} # 3 = Custom

Set-ItemProperty IIS:\AppPools\NewAppPool -Name ManagedRuntimeVersion -Value v4.0

Write-Host 'Created application pool NewAppPool'

Having created the application pool we set some properties. In particular we ensure that .NET v4 is used and that a custom identity is used. The @{} syntax creates a hashtable; here its entries set the properties of the new process model object for the pool.

New-WebApplication -Site 'Default Web Site' -Name 'MagicEightBall' -PhysicalPath $installPath -ApplicationPool 'NewAppPool' -Force

With the application pool in place and configured, we next set-up the web application itself. The New-WebApplication command is all we need, giving it the site, application name, physical file system path and application pool.

Set-ItemProperty 'IIS:/Sites/Default Web Site/MagicEightBall' -Name EnabledProtocols 'http,net.tcp' # do not include spaces in the list!

Write-Host 'Created web application MagicEightBall'

To enable both HTTP and net.tcp endpoints, we simply update the EnabledProtocols property of the web application. Thanks to default endpoints in WCF4, this is all we need to do to get both protocols supported. Note: do not put spaces in the list of protocols.

Configuring AppFabric Monitoring

We now have enough script to create the service host, but we want to add AppFabric monitoring. Windows Server AppFabric has a rich PowerShell API; to access it we need to import the module:

import-module ApplicationServer

Next we need to create our monitoring database:

[Reflection.Assembly]::LoadWithPartialName("System.Data")

$monitoringDatabase = 'MagicEightBallMonitoring'
$monitoringConnection = New-Object System.Data.SqlClient.SqlConnectionStringBuilder -argumentList "Server=localhost;Database=$monitoringDatabase;Integrated Security=true"
$monitoringConnection.Pooling = $true

We need a couple of variables: a database name and a connection string. We use the SqlConnectionStringBuilder out of the System.Data assembly to get our connection string. This demonstrates the deep integration between PowerShell and .NET.

Add-WebConfiguration -Filter connectionStrings -PSPath "MACHINE/WEBROOT/APPHOST/Default Web Site/MagicEightBall" -Value @{name="MagicEightBallMonitoringConnection"; connectionString=$monitoringConnection.ToString()}

We add the connection string to our web application configuration.

Initialize-ASMonitoringSqlDatabase -Admins 'Domain\AS_Admins' -Readers 'DOMAIN\AS_Observers' -Writers 'DOMAIN\AS_MonitoringWriters' -ConnectionString $monitoringConnection.ToString() -Force

And then we create the actual database, passing in the security groups. While local machine groups can be used, in this case I’m mocking a domain group which is more appropriate for load balanced scenarios.

Set-ASAppMonitoring -SiteName 'Default Web Site' -VirtualPath 'MagicEightBall' -MonitoringLevel 'HealthMonitoring' -ConnectionStringName 'MagicEightBallMonitoringConnection'

The last step is to enable monitoring for the web application; above, we are setting a ‘health monitoring’ level which is enough to populate the AppFabric dashboard inside the IIS manager.

Set-ASAppServiceMetadata -SiteName 'Default Web Site' -VirtualPath 'MagicEightBall' -HttpGetEnabled $True

Last of all we ensure that metadata publishing is available for our service. This allows us to test the service using the WCFTestClient application.

PowerShell Part 1 – Getting Started

As part of ‘DEV306: Taming SOA Deployments using Windows Server AppFabric’ I showed a couple of PowerShell scripts that can be used to deploy a simple WCF service. The demo was pretty quick due to the 60 minute session length and the fact that reading PowerShell is not the most exciting presentation. Over the next couple of posts I’m going to walk through the scripts, which are available from http://public.me.com/stefsewell. This first post is just to whet the appetite and introduce some PowerShell basics and concepts.

To dig into PowerShell I’ve been using the MEAP edition of Windows PowerShell in Action, 2nd Edition by Bruce Payette and I definitely recommend it.

The Basics

To run PowerShell commands you can use either the PowerShell console or the PowerShell ISE (integrated scripting environment). The ISE has some neat features such as breakpoints and allows you to easily build up scripts rather than issuing single commands.

Note: On a 64-bit system there are 32-bit and 64-bit versions of the PowerShell console and ISE. Confusingly, the 64-bit version runs out of the C:\Windows\System32 directory while the 32-bit version runs out of C:\Windows\SysWOW64. You want to be using the 64-bit version; we’ve seen some strange behavior and errors when trying to use the 32-bit version on a 64-bit OS.

Getting Help…

The first script, 1-PowerShell basics, is just an introduction to some of the PowerShell goodness. The first useful thing is knowing how to get help, and as with all PowerShell commands this takes the form of a verb-noun pairing:

> Get-Help

This gets you into the first page of the help system and from here you’ll want to drill down into specific commands:

> Get-Help invoke-command

You’ll get a description of the command, including the supported parameters. One of the very useful standard parameters for get-help is the -examples:

> Get-Help Invoke-Command -examples

This returns you a number of usage examples. Not only is help provided for specific commands but there is also help on a number of more general topics:

> Get-Help about_remoting

This will give you a good overview of the PowerShell remoting features.

Wildcards are supported so to see all the ‘about’ topics:

> Get-Help about*

Using Aliases…

Next up is navigating around using a familiar set of commands. The standard PowerShell commands can take a little getting used to, especially after years of either UNIX or DOS. To make you feel at home, there is the concept of an alias. An alias is simply another name for a command; for example Get-ChildItem will be more familiar as ls or dir to most people. To see the list of mapped aliases:

> Get-Alias

You can use cd to change directory which is an alias for Set-Location.
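To go the other way and find the aliases defined for a particular command, Get-Alias takes a -Definition parameter:

> Get-Alias -Definition Set-Location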

Variables…

PowerShell supports variables and uses a $ prefix:

> $foo = "TechEd"

To display the contents of $foo:

> Write-Host "The value of foo is $foo"
The value of foo is TechEd

A double-quoted string is evaluated prior to printing. If you use single quotes then a literal string is created:

> Write-Host 'The value of foo is $foo'
The value of foo is $foo
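Arbitrary expressions can also be embedded in a double-quoted string using the $() subexpression syntax:

> Write-Host "2 + 2 = $(2 + 2)"
2 + 2 = 4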

Conditionals…

To check to see if a variable is not null:

if(-not $foo) {
    # do something
} else {
    # do something else
}

Slightly odd syntax, but you check for -not of the variable; ! can be used as shorthand for -not. The comment character in PowerShell is # and it runs to the end of the line, which is why the branches above sit on their own lines.

Loops…

Within a script, foreach and while loops are supported:

foreach ($file in Get-ChildItem C:\) {
    $file.name
}

$count = 0;

While($count -lt 10) {
    $count++
    "$count"
}

To get access to environment variables you use $env:, for example:

> Write-Host $env:ComputerName
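The environment variables are exposed through a provider too, so the full set can be listed just like a directory:

> dir env: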

Using Pipes…

Both DOS and UNIX support piping the output from one command into another, allowing complex chains of commands to be linked together. PowerShell also supports this:

> get-service | where-object {$_.Status -eq "Stopped"}

This returns all of the installed Windows services with a status of stopped. The $_ variable represents the current object in the pipeline, allowing you to test each of the results returned from the get-service command. The equality operator is -eq, in the same style as -not.
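Pipelines can be chained as far as needed; for example, to sort those stopped services and keep only the first five:

> get-service | where-object {$_.Status -eq "Stopped"} | sort-object DisplayName | select-object -First 5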

Additional Modules…

Before going much further, we need to relax the default security setting slightly. Out of the box, a script execution policy of Restricted is set; this prevents the loading of configuration and the running of scripts. I find that changing this to RemoteSigned works well: it allows local scripts to run, and scripts downloaded from the internet if they are signed by a trusted publisher.

> Set-ExecutionPolicy RemoteSigned

A number of Microsoft technologies have an accompanying PowerShell module that contains commands allowing automation. For example IIS comes with WebAdministration and Windows Server AppFabric brings along ApplicationServer. To use these modules you first need to be running in an elevated PowerShell console (run as Administrator) then import the module:

> Import-Module WebAdministration
> Import-Module ApplicationServer

To see the commands available in a module:

> Get-Command -module WebAdministration
> Get-Command -module ApplicationServer

There are commands allowing you to manage web applications, virtual directories, application pools, the AppFabric monitoring and workflow stores, and much more. We’ll see examples of these in the WCF service installation script in the next post.

A great feature of PowerShell is the concept of the provider, which allows a hierarchical structure to be navigated as if it were a physical drive. Consider how we navigate and administer the file system: cd (Set-Location), dir (Get-ChildItem), mkdir (New-Item) etc. These same commands can be used to navigate any hierarchy that has a provider, such as:

cert: the certificate store
wsman: WinRM settings
HKLM: registry HKEY_LOCAL_MACHINE
IIS: Internet Information Server

This allows you to do the following:

> dir 'IIS:\Sites\Default Web Site\'
> dir HKLM:\SOFTWARE\Microsoft\MSDTC

To change to the IIS ‘drive’:

> IIS:

Your PowerShell prompt will now show you an IIS path rather than a file system path. You navigate around using the standard commands. Note that typing WSMAN: on its own doesn’t work; you need to cd WSMAN: explicitly.

.NET Integration

PowerShell is tightly integrated with .NET allowing objects to be constructed and consumed directly. For example:

> Write-Host ([System.DateTime]::Now)

The () indicates the expression is to be evaluated, the [] indicates a .NET type, and the :: denotes access to a static member of the type.

> [Reflection.Assembly]::LoadWithPartialName("System.Messaging")
> [System.Messaging.MessageQueue]::Create(".\Private$\MyNewQueue")

This second example shows how to create a private message queue in MSMQ. The System.Messaging assembly is loaded via the Reflection API.

This is really only just scratching the surface, however it gives us enough to be able to read through the installation script and understand what is going on. That’s for the next post…

PS: The canonical Hello, World! in PowerShell is simply:

> 'Hello, World!'

Not tremendously useful but we’ve now ticked that box.

DEV306: PowerShell Scripts Available

Thanks to everyone who attended the sessions at TechEd New Zealand. The PowerShell deployment files demonstrated are now available from http://public.me.com/stefsewell

Have a look in the TechEd2010/DEV306-WindowsServerAppFabric folder, it contains a simple VS2010 project showing how to call PowerShell from C#. It also contains the PowerShell scripts that deploy, validate and remove a simple WCF service.

Pete has updated his blog ( http://blog.petegoo.com ) with the demo code from his workflow services.

Feedback from the sessions has been mixed. The workflow introduction seems to have worked for a high percentage of those who attended. For a 200 level session, I thought the content was pretty technical but sorry to the few who thought it was too lightweight. All I can say is that it was an introduction to workflow and the goal was to get the basics across. I would recommend the additional resources included in the slide decks to drill down further.

The Windows Server AppFabric session was not as successful as the workflow introduction. Windows Server AppFabric is a great addition to the service hosting capabilities of Windows Server 2008 – if you have WCF services in IIS/WAS, you should be using it if possible. The convenience of monitoring is a little difficult to get across in a demo; it has made our lives so much easier in support and in development. The workflow service host opens up many scenarios that were previously very hard to implement: Microsoft has taken on the heavy lifting (persistence, tracking, failover, scale-out) and we have a very simple model to work with. The PowerShell demo was quick, but I didn’t want to spend 30 minutes walking through a page of commands. The scripts are available for download and commented; please take the time to explore the commands and experiment with your own services. The remote shell capabilities of PowerShell make large scale deployments much simpler than previously. The DSL demo at the end was a taste of what is possible with a model driven approach; I’m leaving you to connect the dots and transform from model to PowerShell.

Thanks again to all that attended, I hope there is something useful for you either in the session or in this blog.

Configuration options for Remote PowerShell and WS-Management

Here’s the want list:
• to be able to run WCF and workflow services in IIS that use a basicHttpBinding.
• to scale out services in an application farm using the network load balancing service in Windows Server 2008.
• to authenticate users using Kerberos to flow the Windows Identity.
• to administer servers remotely using PowerShell.

It’s not exactly an exotic or out-there set of needs, however it has been over three weeks now that I’ve been working through various attempts to get this up and running reliably.

The crux of the issue is around the use of HTTP and Kerberos. To get the services to work in a load balanced environment with Kerberos, a set of SPNs needed to be added to the Active Directory for the domain. The web applications hosting the services need to run under a domain identity (e.g. MyDomain\service.expert), so they are mapped to an application pool with this identity. SPNs are then added to map the HTTP protocol to this user, rather than the machine account. In our case, four SPNs are added to the service.expert user – one for the network load balancer's virtual host name and one for each server in the application farm:

HTTP/SVNLB301.ap.aderant.com
HTTP/SVEXPGG302.ap.aderant.com
HTTP/SVEXPGG303.ap.aderant.com
HTTP/SVEXPGG304.ap.aderant.com

Doing this breaks the default WinRM service configuration: the WinRM HTTP listener runs under a machine account, not service.expert, so the SPN no longer matches and Kerberos negotiation fails. This is pretty much where we left off in the last posting; since then I have been looking at using HTTPS as the transport for the PowerShell remoting calls, and at other authentication mechanisms.

There are two options for hosting the WinRM service:

1. as a Windows Service (this is the default)
2. in IIS using a WinRM v2 feature called ‘WinRM IIS Extensions’. This is an optional install on Windows Server 2008 to support the ‘fan-in’ model for PowerShell remoting, which is targeted at the cloud.

Hosting the WinRM service using HTTPS is meant to be simple so long as you have an appropriate certificate installed on the server for SSL. The command is:

> winrm quickconfig -transport:HTTPS

I have never been able to get this to work. Before explaining how I did get a WinRM HTTPS endpoint working, let’s cover off the certificate.

Windows Server 2008 has a role which allows a server to act as a certificate authority (CA) for a domain. This role includes a self-service website from which any machine on the domain can request a certificate. I used this to request certificates created using the web server template, with the common name (CN) set to the fully qualified domain name of each server in my application farm. The self-service website is pretty straightforward, but note that the certificate is installed into the current user store, not the local machine store, so you need to move it. The easiest way to see this is to use the certificate provider within PowerShell:

> cd cert:\CurrentUser\My
> ls
> cd cert:\LocalMachine\My
> ls

This will show you all of the certificates installed in the current user\my and the local machine\my stores. You can also use the management console (MMC) and add in the certificate plug-in for both the current user and local computer.

The WSMAN provider allows you to configure the WinRM service from within PowerShell.

> cd WSMAN:\localhost\Listener
> new-item . -Address * -Transport HTTPS -CertificateThumbprint "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

You need the 40 character certificate thumbprint which can be easily found by listing the certificates in cert:\LocalMachine\My. With the real thumbprint replacing the Xs, the above command will create an HTTPS listener that is hosted in the WinRM service.
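The WSMAN: drive can then be used to verify the result – listing the container shows the new listener and its properties:

> ls WSMAN:\localhost\Listener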

To connect to the machine from a remote client, using kerberos to authenticate as the current user:

> icm -ComputerName targetServer -UseSSL -Authentication NegotiateWithImplicitCredential -ScriptBlock {get-host}

The script block is executed on the remote machine. If a test certificate has been used to set up the HTTPS channel, then the remote call will fail: the certificate must have been issued by the domain CA, the CN must match the machine name, and the revocation list is checked. It is possible to switch off these checks by adding the following parameter to the call:

> icm ... -SessionOption (New-PSSessionOption -SkipCNCheck -SkipCACheck -SkipRevocationCheck)

Any combination of the three skips can be used.

This again proved somewhat unreliable for me, due to the use of Kerberos over HTTPS to authorize the user. There are other authentication options available such as basic, which is secure over an HTTPS channel since the channel is encrypted.

The change in identity of the HTTP SPN just seemed to keep tripping me up, which made me wonder: why not host the management service in IIS and set it to run in an application pool with the same identity as our other services? Finding out how to do this took me some time and led me to the fan-in model for PowerShell mentioned earlier.

Fan-In Model
Within WinRM v2 there comes a plug-in model to allow ISVs to supply a module that allows their software to be managed via WS-Management. The PowerShell team ships such a module, pwrshplugin.dll, which can be found in %windir%\system32. To be able to host such a module in IIS, you need to ensure that you have the WinRM IIS Extensions option installed; I have only seen it available on Windows Server 2008 and not Windows 7.

[ On Windows Server 2008 R2, you can use the ServerManager module to check the installed features:

> Import-Module ServerManager
> Get-WindowsFeature
]

With this option enabled, you can create a new web application and drop in a web.config file similar to the following which is discussed here:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <system.management.wsmanagement.config>
      <PluginModules>
        <OperationsPlugins>
          <Plugin Name="PowerShellplugin" Filename="%windir%\system32\pwrshplugin.dll" SDKVersion="1" XmlRenderingType="text">
           <InitializationParameters>
                <Param Name="PSVersion" Value="2.0" />
            </InitializationParameters>
            <Resources>
                <Resource ResourceUri="http://schemas.microsoft.com/powershell/Microsoft.PowerShell" SupportsOptions="true">
                    <Capability Type="Shell" />
                </Resource>
            </Resources>
          </Plugin>
        </OperationsPlugins>
      </PluginModules>
    </system.management.wsmanagement.config>
        <security>
            <access sslFlags="Ssl" />
            <authentication>
                <anonymousAuthentication enabled="false" />
                <basicAuthentication enabled="true" />
                <windowsAuthentication enabled="true" />
            </authentication>
        </security>
        <modules>
            <add name="WSMan" />
        </modules>
  </system.webServer>
</configuration>

The web application is configured to use SSL, and either Basic or Windows authentication is accepted. You might need to edit your applicationhost.config file to unlock the relevant security sections. The web application can be mapped to an application pool that has the same identity as the other services, in our case MyDomain\service.expert, so the SPNs should work.

[Note: do not set up an HTTPS listener in both IIS and WinRM at the same time on the same certificate; if you do, recycling the app pool will drop the HTTPS binding from IIS – the WinRM Windows service takes precedence.]

To connect to the machine from a remote client (using basic authentication), the following is required:

> $secpasswd = ConvertTo-SecureString "myPassword" -AsPlainText -Force
> $mycreds = New-Object System.Management.Automation.PSCredential ("MyDomain\MyUsername", $secpasswd)
> icm -ConnectionUri https://svexpgg303.ap.aderant.com/Powershell -Authentication Basic -Credential $mycreds -ScriptBlock {get-host}

The password is captured in a secure string and then a new PSCredential object is created to contain the username and password. This is passed to the invoke-command cmdlet using the -Credential parameter. Note that we are also using the -ConnectionUri parameter.
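If you would rather not embed the password in a script, Get-Credential prompts for it securely and returns the same kind of PSCredential object:

> $mycreds = Get-Credential 'MyDomain\MyUsername'
> icm -ConnectionUri https://svexpgg303.ap.aderant.com/Powershell -Authentication Basic -Credential $mycreds -ScriptBlock {get-host}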


UPDATE [2nd October 2010]: I finally got to the bottom of the 1300 error I saw in the Windows Remote Management event log thanks to this post: http://blogs.msdn.com/b/wmi/archive/2010/02/25/winrm-hosted-in-iis-fails-to-start-with-error-1300-in-event-log.aspx

The account that the application pool is using must have the ‘Generate security audits’ right granted. Also when testing, it is important to reset IIS after each change to ensure that you are running against the correct set-up.
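Incidentally, the LSA wrapper from the ‘Accessing the LSA from managed code’ post can grant this right from PowerShell – SeAuditPrivilege is exposed as GenerateSecurityAudits in the convenience class (the assembly path and account name are illustrative):

> [void][Reflection.Assembly]::LoadFile('C:\Samples\LSAController.dll')
> $LsaController = New-Object -TypeName 'LSAController.LocalSecurityAuthorityController'
> $LsaController.SetRight('MyDomain\service.expert', [LSAController.LocalSecurityAuthorityRights]::GenerateSecurityAudits)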

Retesting with security set-up correctly proved that any app pool can be used and the web application path could contain subfolders.

Having managed to establish a secure connection for remote PowerShell via IIS using basic auth and HTTPS, I’ve pretty much given up on getting it to work over Kerberos. I might try just once more to do Kerberos over HTTP when the management service is hosted in IIS, but I’ve already been fighting with this for way too long. I hope the above saves someone the pain I went through…