PowerShell Part 2 – Installing a new service

Following on from the brief introduction to PowerShell, let’s walk through the installation script…

The script installs a simple Magic Eight Ball service that returns a pseudo-random answer to any question it’s given. The service is written as a WCF service in C#. The files to deploy are available from http://public.me.com/stefsewell/ ; have a look in TechEd2010/DEV306-WindowsServerAppFabric/InstallationSource. The folder contains a web.config to set up the service activation and a bin folder with the service implementation. The PowerShell scripts are also available from the file share, in the Powershell folder in DEV306…

Pre-requisite Checking

The script begins by checking a couple of pre-requisites. If any of these checks fail then we do not attempt to install the service; instead the installing admin is told of the failed checks. There are a number of different checks we can make; in this script we check the OS version, that dependent services are installed, and that the correct version of the .NET framework is available.

First we need a variable to hold whether or not we have a failure:

$failedPrereqs = $false

Next we move on to our first check: that the correct version of Windows is being used:

$OSVersion = Get-WmiObject Win32_OperatingSystem
if(-not $OSVersion.Version.StartsWith('6.1')) {
    Write-Host "The operating system version is not supported, Windows 7 or Windows Server 2008 required."
    $failedPrereqs = $true
    # See http://msdn.microsoft.com/en-us/library/aa394239(v=VS.85).aspx for other properties of Win32_OperatingSystem
    # See http://msdn.microsoft.com/en-us/library/aa394084(VS.85).aspx for additional WMI classes
}

The script fetches the Win32_OperatingSystem WMI object for interrogation using Get-WmiObject. This object contains a good deal of useful information; links are provided above to let you drill down into other properties. The script checks the Version to ensure that we are working with either Windows 7 or Windows Server 2008 R2, in which case the version starts with “6.1”.
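Since Version is just a string, you can inspect it directly from the console; on Windows 7 / Windows Server 2008 R2 RTM, for example, this returns 6.1.7600:

(Get-WmiObject Win32_OperatingSystem).Version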

Next we look for a couple of installed services:

# IIS is installed
$IISService = Get-Service -Name 'W3SVC' -ErrorAction SilentlyContinue
if(-not $IISService) {
    Write-Host "IIS is not installed on" $env:computername
    $failedPrereqs = $true
}

# AppFabric is installed
$AppFabricMonitoringService = Get-Service -Name 'AppFabricEventCollectionService' -ErrorAction SilentlyContinue
if(-not $AppFabricMonitoringService) {
    Write-Host "AppFabric Monitoring Service is not installed on" $env:computername
    $failedPrereqs = $true
}

$AppFabricWorkflowService = Get-Service -Name 'AppFabricWorkflowManagementService' -ErrorAction SilentlyContinue
if(-not $AppFabricWorkflowService) {
    Write-Host "AppFabric Workflow Management Service is not installed on" $env:computername
    $failedPrereqs = $true
}

A basic pattern is repeated here using the Get-Service command to determine if a particular Windows Service is installed on the machine.

With the service requirements checked, we look to see if we have the correct version of the .NET framework installed. In our case we want the RTM of version 4 and go to the registry to validate this.

$frameworkVersion = get-itemProperty -Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -ErrorAction SilentlyContinue
if(-not($frameworkVersion) -or ($frameworkVersion.Version -ne '4.0.30319')){
    Write-Host "The RTM version of the full .NET 4 framework is not installed."
    $failedPrereqs = $true
}

The registry provider, HKLM: [HKEY_LOCAL_MACHINE], is used to look up a path in the registry that should contain the version. If the key is not found or the value is incorrect, we fail the test.

Those are all the checks made in the original script from the DEV306 session; however, there is a great feature in Windows Server 2008 R2 that allows very simple querying of the installed Windows features. I found this by accident:

>Get-Module -ListAvailable

This command lists all of the available modules on a system; the ServerManager module looked interesting:

>Get-Command -Module ServerManager

CommandType Name                  Definition
----------- ----                  ----------
Cmdlet      Add-WindowsFeature    Add-WindowsFeature [-Name] <string[]> [-IncludeAllSubFeature] [-LogPath <string>] [-...
Cmdlet      Get-WindowsFeature    Get-WindowsFeature [[-Name] <string[]>] [-LogPath <string>] [-Verbose] [-Debug] [-Err...
Cmdlet      Remove-WindowsFeature Remove-WindowsFeature [-Name] <string[]> [-LogPath <string>] [-Concurrent] [-Restart...

A simple add/remove/get interface which allows you to easily determine which Windows roles and features are installed, and then add or remove them as required. This is ideal for pre-requisite checking as we can now explicitly check whether, for example, the WinRM IIS Extensions are installed:

import-module ServerManager

if(-not (Get-WindowsFeature 'WinRM-IIS-Ext').Installed) {
    Write-Host "The WinRM IIS Extension is not installed"
}

Simply calling Get-WindowsFeature lists all features and marks those that are installed with [X]:

PS C:\Windows\system32> Get-WindowsFeature

Display Name Name
------------ ----
[ ] Active Directory Certificate Services AD-Certificate
[ ] Certification Authority ADCS-Cert-Authority
[ ] Certification Authority Web Enrollment ADCS-Web-Enrollment
[ ] Certificate Enrollment Web Service ADCS-Enroll-Web-Svc
[ ] Certificate Enrollment Policy Web Service ADCS-Enroll-Web-Pol
[ ] Active Directory Domain Services AD-Domain-Services
[ ] Active Directory Domain Controller ADDS-Domain-Controller
[ ] Identity Management for UNIX ADDS-Identity-Mgmt
[ ] Server for Network Information Services ADDS-NIS
[ ] Password Synchronization ADDS-Password-Sync
[ ] Administration Tools ADDS-IDMU-Tools
[ ] Active Directory Federation Services AD-Federation-Services
[ ] Federation Service ADFS-Federation
[ ] Federation Service Proxy ADFS-Proxy
[ ] AD FS Web Agents ADFS-Web-Agents
[ ] Claims-aware Agent ADFS-Claims
[ ] Windows Token-based Agent ADFS-Windows-Token
[ ] Active Directory Lightweight Directory Services ADLDS
[ ] Active Directory Rights Management Services ADRMS
[ ] Active Directory Rights Management Server ADRMS-Server
[ ] Identity Federation Support ADRMS-Identity
[X] Application Server Application-Server
[X] .NET Framework 3.5.1 AS-NET-Framework
[X] AppFabric AS-AppServer-Ext
[X] Web Server (IIS) Support AS-Web-Support
[X] COM+ Network Access AS-Ent-Services
[X] TCP Port Sharing AS-TCP-Port-Sharing
[X] Windows Process Activation Service Support AS-WAS-Support
[X] HTTP Activation AS-HTTP-Activation
[X] Message Queuing Activation AS-MSMQ-Activation
[X] TCP Activation AS-TCP-Activation
...

The right-hand column contains the feature name to use with these cmdlets.

I ended up writing a simple function to check for a list of features:

<#
.SYNOPSIS
Checks to see if a given set of Windows features are installed.    

.DESCRIPTION
Checks to see if a given set of Windows features are installed.

.PARAMETER featureSetArray
An array of strings containing the Windows features to check for.

.PARAMETER featuresName
A description of the feature set being tested for.

.EXAMPLE
Check that a couple of web server features are installed.

Check-FeatureSet -featureSetArray @('Web-Server','Web-WebServer','Web-Common-Http') -featuresName 'Required Web Features'

#>
function Check-FeatureSet{
    param(
        [Parameter(Mandatory=$true)]
        [array] $featureSetArray,
        [Parameter(Mandatory=$true)]
        [string]$featuresName
    )
    Write-Host "Checking $featuresName for missing features..."

    foreach($feature in $featureSetArray){
        if(-not (Get-WindowsFeature $feature).Installed){
            Write-Host "The feature $feature is not installed"
        }
    }
}

The function introduces a number of PowerShell features such as comment documentation, functions, parameters and parameter attributes. I don’t intend to dwell on any as I hope the code is quite readable.

Then to use this:

# array of strings containing .NET related features
$dotNetFeatureSet = @('NET-Framework','NET-Framework-Core','NET-Win-CFAC','NET-HTTP-Activation','NET-Non-HTTP-Activ')

# array of strings containing MSMQ related features
$messageQueueFeatureSet = @('MSMQ','MSMQ-Services','MSMQ-Server')

Check-FeatureSet $dotNetFeatureSet '.NET'
Check-FeatureSet $messageQueueFeatureSet 'Message Queuing'

To complete the pre-requisite check, after making each individual test the failure variable is evaluated. If true then the script ends with a suitable message; otherwise we go ahead with the install.
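A minimal sketch of that final gate, using the failure variable from above (the exact message in the original script may differ):

if($failedPrereqs) {
    Write-Host "One or more pre-requisites are missing, aborting the installation."
    return
}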

Installing the Service

The first step in the installation is to copy the required files from a known location. This is a pull model – the target server pulls the files across the network, rather than having the files pushed on to the server via an administration share or such like [e.g. \\myMachine\c$\Services\].

$sourcePath = '\\SomeMachine\MagicEightBallInstaller\'
$installPath = 'C:\Services\MagicEightBall'

if(-not (Test-Path $sourcePath)) {
    Write-Host 'Cannot find the source path ' $sourcePath
    Throw (New-Object System.IO.FileNotFoundException)
}

if(-not (Test-Path $installPath)) {
    New-Item -type directory -path $installPath
    Write-Host 'Created service directory at ' $installPath
}

Copy-Item -Path (Join-Path $sourcePath "*") -Destination $installPath -Recurse

Write-Host 'Copied the required service files to ' $installPath

The file structure is copied from a network share onto the machine the script is running on. The Test-Path command determines whether a path exists and allows appropriate action to be taken. To perform a recursive copy the Copy-Item command is called, using the Join-Path command to establish the source path. These path commands can be used with any provider, not just the file system.

With the files and directories in place, we now need to host the service in IIS. To do this we need to use the PowerShell module for IIS:

import-module WebAdministration # requires admin-level privileges

Next…

$found = Get-ChildItem IIS:\AppPools | Where-Object {$_.Name -eq "NewAppPool"}
if(-not $found){
    New-WebAppPool 'NewAppPool'
}

We want to isolate our service into its own pool, so we check to see if NewAppPool exists and if not we create it. We are using the IIS: provider to treat the web server as if it were a file system; again we just use standard commands to query the path.

Set-ItemProperty IIS:\AppPools\NewAppPool -Name ProcessModel -Value @{IdentityType=3;Username="MyServer\Service.EightBall";Password="p@ssw0rd"} # 3 = SpecificUser, i.e. a custom account

Set-ItemProperty IIS:\AppPools\NewAppPool -Name ManagedRuntimeVersion -Value v4.0

Write-Host 'Created application pool NewAppPool'

Having created the application pool we set some properties. In particular we ensure that .NET v4 is used and that a custom identity is used. The @{} syntax creates a hashtable, which the provider translates into a new process model object for us.

New-WebApplication -Site 'Default Web Site' -Name 'MagicEightBall' -PhysicalPath $installPath -ApplicationPool 'NewAppPool' -Force

With the application pool in place and configured, we next set-up the web application itself. The New-WebApplication command is all we need, giving it the site, application name, physical file system path and application pool.

Set-ItemProperty 'IIS:/Sites/Default Web Site/MagicEightBall' -Name EnabledProtocols -Value 'http,net.tcp' # do not include spaces in the list!

Write-Host 'Created web application MagicEightBall'

To enable both HTTP and net.tcp endpoints, we simply update the EnabledProtocols property of the web application. Thanks to default endpoints in WCF4, this is all we need to do to get both protocols supported. Note: do not put spaces into the list of protocols.
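If you want to confirm the change took, the same provider path can be read back; a quick sanity check rather than part of the original script:

Get-ItemProperty 'IIS:\Sites\Default Web Site\MagicEightBall' -Name EnabledProtocols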

Configuring AppFabric Monitoring

We now have enough script to create the service host, but we want to add AppFabric monitoring. Windows Server AppFabric has a rich PowerShell API; to access it we need to import the module:

import-module ApplicationServer

Next we need to create our monitoring database:

[Reflection.Assembly]::LoadWithPartialName("System.Data") # load System.Data so the SqlConnectionStringBuilder type is available

$monitoringDatabase = 'MagicEightBallMonitoring'
$monitoringConnection = New-Object System.Data.SqlClient.SqlConnectionStringBuilder -argumentList "Server=localhost;Database=$monitoringDatabase;Integrated Security=true"
$monitoringConnection.Pooling = $true

We need a couple of variables: a database name and a connection string. We use the SqlConnectionStringBuilder out of the System.Data assembly to get our connection string. This demonstrates the deep integration between PowerShell and .NET.
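For illustration, writing the builder back out shows the normalized connection string; the builder maps the Server and Database keys onto Data Source and Initial Catalog, so the output looks similar to:

$monitoringConnection.ToString()
# Data Source=localhost;Initial Catalog=MagicEightBallMonitoring;Integrated Security=True;Pooling=True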

Add-WebConfiguration -Filter connectionStrings -PSPath "MACHINE/WEBROOT/APPHOST/Default Web Site/MagicEightBall" -Value @{name="MagicEightBallMonitoringConnection"; connectionString=$monitoringConnection.ToString()}

We add the connection string to our web application configuration.

Initialize-ASMonitoringSqlDatabase -Admins 'Domain\AS_Admins' -Readers 'DOMAIN\AS_Observers' -Writers 'DOMAIN\AS_MonitoringWriters' -ConnectionString $monitoringConnection.ToString() -Force

And then we create the actual database, passing in the security groups. While local machine groups can be used, in this case I’m mocking a domain group which is more appropriate for load balanced scenarios.

Set-ASAppMonitoring -SiteName 'Default Web Site' -VirtualPath 'MagicEightBall' -MonitoringLevel 'HealthMonitoring' -ConnectionStringName 'MagicEightBallMonitoringConnection'

The last step is to enable monitoring for the web application; above we are setting a ‘health monitoring’ level, which is enough to populate the AppFabric dashboard inside the IIS manager.
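The module also has a matching query cmdlet that should report back the monitoring level and connection string name we just set, handy as a sanity check:

Get-ASAppMonitoring -SiteName 'Default Web Site' -VirtualPath 'MagicEightBall'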

Set-ASAppServiceMetadata -SiteName 'Default Web Site' -VirtualPath 'MagicEightBall' -HttpGetEnabled $True

Last of all we ensure that metadata publishing is available for our service. This allows us to test the service using the WcfTestClient application.

DEV306: PowerShell Scripts Available

Thanks to everyone who attended the sessions at TechEd New Zealand. The PowerShell deployment files demonstrated are now available from http://public.me.com/stefsewell

Have a look in the TechEd2010/DEV306-WindowsServerAppFabric folder, it contains a simple VS2010 project showing how to call PowerShell from C#. It also contains the PowerShell scripts that deploy, validate and remove a simple WCF service.

Pete has updated his blog ( http://blog.petegoo.com ) with the demo code from his workflow services.

Feedback from the sessions has been mixed. The workflow introduction seems to have worked for a high percentage of those who attended. For a 200-level session I thought the content was pretty technical, but apologies to the few who thought it was too lightweight. All I can say is that it was an introduction to workflow and the goal was to get the basics across. I would recommend the additional resources included in the slide decks if you want to drill down further.

The Windows Server AppFabric session was not as successful as the workflow introduction. Windows Server AppFabric is a great addition to the service hosting capabilities of Windows Server 2008: if you have WCF services in IIS/WAS, you should be using it if possible. The convenience of monitoring is a little difficult to get across in a demo, but it has made our lives so much easier in support and in development. The workflow service host opens up many scenarios that were previously very hard to implement; Microsoft has taken on the heavy lifting (persistence, tracking, failover, scale-out) and we have a very simple model to work with. The PowerShell demo was quick, but I didn’t want to spend 30 minutes walking through a page of commands. The scripts are available for download and are commented; please take the time to explore the commands and experiment with your own services. The remote shell capabilities of PowerShell make large-scale deployments much simpler than before. The DSL demo at the end was a taste of what is possible with a model-driven approach; I’m leaving you to connect the dots and transform from model to PowerShell.

Thanks again to all that attended, I hope there is something useful for you either in the session or in this blog.

Two new TechEd sessions

The New Zealand TechEd conference will be running from August 30th through to the 1st of September. Last week I was confirmed to present a couple of sessions:

DEV208: Getting Started with Workflow in .NET 4
DEV306: Taming SOA Deployments using Windows Server AppFabric

I’ve managed to talk one of ADERANT’s lead developers, Peter Goodman, into sharing the stage and the workload. Pete is our workflow & DSL ninja; it’s going to be fun presenting with him.

The workflow session is an introduction, and rather than focusing purely on the technical aspects we are going to also talk about when using workflow makes sense and which problem types it is particularly suited for. There will be demos too and we plan to build a workflow and a workflow service on stage during the talk.

The Windows Server AppFabric session will be a more technical session and I want to cover three areas: deployment (using the PowerShell API), monitoring and scale-out of workflow services. Most of the content of the session can be found on this blog already, but I do have plans for a new code sample if I can squeeze it into my schedule.

Microsoft is now posting TechEd sessions online, into the public domain, after the conference.

Update: Sessions are now available online…
• Getting Started with Workflow in .NET 4: http://www.msteched.com/2010/NewZealand/DEV208
• Taming SOA Deployments using Windows Server AppFabric: http://www.msteched.com/2010/NewZealand/DEV306

Configuration for Kerberos

This is a summary of the voodoo required to get WCF services hosted in IIS to work with a load balancer and Kerberos. This took me far longer than I had hoped to figure out, so I hope I can save someone else that pain.

We have recently been running some load and stress tests against our latest Golden Gate SP1 product, which supports the horizontal scale-out of workflow services. This scale-out capability is one of the core features of Windows Server AppFabric. Our software is designed to run in an ‘on premise’ scenario and leverages Windows integrated security for authorization of users. A major performance improvement we discovered during our original Golden Gate testing was to ensure Kerberos was used rather than NTLM when performing Windows authentication. We wanted to ensure that our new services were using Kerberos for Windows authentication, since we had moved some of our services, in particular the workflow services, from being hosted as a Windows Service to being hosted in IIS.

Note: in addition to the performance advantages, you need to use Kerberos if you want to achieve multi-hop delegation of credentials; NTLM does not support this. The resources at the end of this post discuss this further.

In this post I’m going to walk through a worked example and give a checklist to follow. In a later post I may drill down into a little more of the background, in the meantime I’ll include some additional resources at the end.

Scenario
The scenario involves three application servers that are configured into a network load balanced (NLB) cluster using NLB in Windows Server 2008. The machine names are:
• svexpgg310.ap.aderant.com
• svexpgg311.ap.aderant.com
• svexpgg312.ap.aderant.com

The virtual host name for the NLB is svnlb301.ap.aderant.com.

The NLB is set up to load balance traffic on port 80 for our HTTP based services, and the port range 18180-18199 for our Windows Services. Each of the servers runs all of the services that we support horizontal scale-out for, and one of the servers (310) runs the services that only support a single instance. In a typical installation we have around 15 services; rather than list out all of these I’ll concentrate on two types:
• services hosted in IIS that expose HTTP endpoints
• services hosted as Windows Services that expose net.tcp endpoints

Alongside the three application servers is a database server that hosts the ADERANT Expert database, the AppFabric monitoring database and the AppFabric workflow persistence database.

The basicHttpBinding configuration used to enable Windows authentication is as follows:

      <basicHttpBinding>
        <binding name="expertBasicHttpBinding" maxReceivedMessageSize="2147483647">
          <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" />
          <security mode="TransportCredentialOnly">
            <transport clientCredentialType="Windows" proxyCredentialType="Windows">
              <extendedProtectionPolicy policyEnforcement="Never" />
            </transport>
          </security>
        </binding>
      </basicHttpBinding>

1. The servers must be in the local intranet zone of any calling machines.
As of Windows Server 2003, by default only the local intranet zone supports the passing of credentials for Windows Integrated authentication between machines. This makes sense, as you rarely want to pass your Windows credentials beyond your own domain. At ADERANT we have group policy set up so that on all machines, any machine with a name matching *.aderant.com is registered in the local intranet zone.

You can also explicitly name the servers for the zone; in addition, ensure that the servers are not listed in the Trusted Sites zone.
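If you need to script the zone assignment rather than rely on group policy, the mapping lives under the Internet Settings ZoneMap key. A hedged sketch, using a hypothetical domain, with zone 1 being Local intranet:

# Register *.mydomain.com in the Local Intranet zone (1) for the current user.
$zoneMap = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\mydomain.com'
New-Item -Path $zoneMap -Force | Out-Null
Set-ItemProperty -Path $zoneMap -Name '*' -Value 1 -Type DWord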

2. Windows Services exposing WCF net.tcp endpoints must have SPNs registered for both the application server and the network load balancer addresses.

When a non-basicHttpBinding is used, such as net.tcp, the WCF infrastructure checks to ensure that the service is running under the identity that the client expects. This prevents ‘man-in-the-middle’ attacks where someone spoofs the service you want to call with their own for some nefarious purpose. When you generate a service proxy against a net.tcp endpoint you’ll see something similar to the following configuration snippet in the app.config:

<client>
  <endpoint
    address="net.tcp://myserver.mydomain.com:8003/servicemodelsamples/service/spnIdentity"
    binding="netTcpBinding"
    bindingConfiguration="netTcpBinding_ICalculator_Windows"
    contract="ICalculator"
    name="netTcpBinding_ICalculator">
    <identity>
      <servicePrincipalName value="CalculatorSvc/myServer.myDomain.com:8003" />
    </identity>
  </endpoint>
</client>

There is an identity element that specifies the expected identity of the service host. Two different options are supported: <userPrincipalName> and <servicePrincipalName>. If your service is published on a domain and you always expect the client calling the service to be online, then the userPrincipalName is easiest to configure. The value attribute contains the identity that the service is running as, e.g. value="ADERANT_AP\service.expert".

Alternatively you can set a servicePrincipalName, as above. The service principal name (SPN) is broken down into three parts:

serviceClassName / address [: portNumber]

The service class name is a token that uniquely represents the service. Common service classes are HTTP and HOST; the example above is using CalculatorSvc to uniquely identify a calculation service. At ADERANT we use class names such as ExpertConfigurationSvc. After the service class name comes the machine name, e.g. SVEXPGG310. Note that the NetBIOS name and the fully qualified domain name are considered to be different; it is common practice to register both. For example:

ExpertConfigurationSvc/SVEXPGG310.ap.aderant.com:18180
ExpertConfigurationSvc/SVEXPGG310:18180

Once we have an SPN, it must be registered in Active Directory (AD) against the user account used to run the service. We recommend a service account along the lines of myDomain\service.expert to run the ADERANT services. To register an SPN against this account there is a command line tool, setspn:

setspn -A ExpertConfigurationSvc/SVEXPGG310.ap.aderant.com:18180 service.expert

As part of our deployment tooling we automatically generate a batch file containing all the SPNs that need to be registered in AD for a given environment. An SPN must not be registered twice; that will cause errors. To see the SPNs currently registered against a user you can use the setspn tool with the -L option, passing the account name:

setspn -L service.expert
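On Windows Server 2008 and later, setspn can also scan for duplicate SPNs across the domain, which is a quick way to catch the double-registration problem mentioned above:

setspn -X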

If we take our configuration service as an example, we need the following SPNs registered in AD for the scenario environment:

ExpertConfigurationSvc/SVNLB301.ap.aderant.com:18180
ExpertConfigurationSvc/SVNLB301:18180
ExpertConfigurationSvc/SVEXPGG310.ap.aderant.com:18180
ExpertConfigurationSvc/SVEXPGG310:18180
ExpertConfigurationSvc/SVEXPGG311.ap.aderant.com:18180
ExpertConfigurationSvc/SVEXPGG311:18180
ExpertConfigurationSvc/SVEXPGG312.ap.aderant.com:18180
ExpertConfigurationSvc/SVEXPGG312:18180

If you are running a development workstation, you will often see HOST/localhost as the SPN generated by the svcutil for locally hosted WCF services. This indicates that the service is expected to be running on the local machine.

If the service needs to support delegation then the AD account used to run the service must have this enabled; in Active Directory Users and Computers this is the ‘Trust this user for delegation’ setting on the account’s Delegation tab.

The account must also be granted ‘Log on as a service’ rights on the application server hosting the service. This can be set-up using the local machine policies admin tool or pushed out via group policy.

3. Load balanced WCF Services hosted in IIS, using HTTP bindings, must have HTTP SPNs added for the account of the application pool.

By default an SPN is created in AD for the machine account of a server running IIS, for example HTTP/SVEXPGG310. In a load balanced scenario the machine account SPN cannot be used to issue a Kerberos ticket because it is different for each machine in the application farm. Instead the Kerberos ticket needs to be issued using the identity of the application pool that the web service is running under. If you have multiple application pools, these must all run under the same account. The application pool account must have SPNs registered for the HTTP service as follows:

setspn -A HTTP/svnlb301.ap.aderant.com service.expert
setspn -A HTTP/svnlb301 service.expert
setspn -A HTTP/svexpgg310.ap.aderant.com service.expert
setspn -A HTTP/svexpgg310 service.expert
setspn -A HTTP/svexpgg311.ap.aderant.com service.expert
setspn -A HTTP/svexpgg311 service.expert
setspn -A HTTP/svexpgg312.ap.aderant.com service.expert
setspn -A HTTP/svexpgg312 service.expert

Here we have both the NetBIOS and FQDNs for the servers and the load balancer.

4. Load balanced WCF services hosted in IIS, using HTTP bindings, must use the application pool credentials to issue Kerberos tickets.

In addition to adding the SPNs in step 3, now change IIS so that it uses the app pool credentials for the Kerberos ticket. This can be done either through the configuration editor in IIS or from the command line.

The section path is system.webServer/security/authentication/windowsAuthentication.
From a command line:
appcmd set config /section:windowsAuthentication /useAppPoolCredentials:true

This has to be set on all of the application servers within the application farm.

While you are in the IIS configuration, it is also worth setting authPersistNonNTLM to true; see http://support.microsoft.com/kb/954873 for details.
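The KB article gives a command along these lines; the exact path to appcmd may vary (typically %windir%\system32\inetsrv):

appcmd set config /section:windowsAuthentication /authPersistNonNTLM:true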

5. Enable Windows Authentication on the required web applications in IIS.
There are two parts to this, the first of which is to ensure that the Windows Authentication provider for IIS is installed. This can be checked in the Windows features control panel.

The next step is to enable Windows Authentication on the website itself. From the dashboard for the site, open the Authentication manager and then ensure that Windows Authentication is enabled:

While you are here, it’s worth checking the advanced properties of the Windows Authentication (available from the context menu) to ensure that Kernel-mode authentication is set.

This can also be set programmatically:

appcmd set config "Default Web Site/MyWebService" -section:system.webServer/security/authentication/windowsAuthentication /enabled:true /commit:apphost

Wrap up & Testing
Those are the key steps required to get Kerberos working in a load balanced environment:
1. ensure the servers are in the local intranet zone.
2. create and register SPNs for net.tcp services for all app servers and the load balancer.
3. create and register HTTP SPNs for all app servers and the load balancer.
4. take care to avoid duplicate SPNs.
5. understand that NetBIOS and FQDNs require separate SPNs.
6. set useAppPoolCredentials to true on all IIS servers in the app farm.
7. run all application pools using a common domain service account, give this account permission to delegate and log on as a service.
8. ensure the web applications for the services have Windows authentication enabled.

It’s mostly straightforward once you’ve been through the steps once.

The easiest tools to test with are a browser and Fiddler. From within Fiddler you can look at the authorization headers for the HTTP requests, which will show you whether Kerberos or NTLM is used. We expose an OData service which requires Windows authentication; it was very easy to trace the authentication negotiation going on for this site within Fiddler.
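As a rough guide when reading the trace (illustrative, truncated values): a Kerberos ticket appears in a Negotiate header whose base64 payload starts with ‘YII’, while an NTLM token starts with ‘TlRMTVNT’, which is base64 for ‘NTLMSSP’:

Authorization: Negotiate YIIg8gYGKwYBBQUCoIIg...    <- Kerberos
Authorization: Negotiate TlRMTVNTUAABAAAA...        <- NTLM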

Resources
Security in WCF (MSDN Magazine): http://msdn.microsoft.com/en-us/magazine/cc163570.aspx

Patterns & Practices Kerberos Overview: http://msdn.microsoft.com/en-us/library/ff649429.aspx

Patterns & Practices WCF Security Guide: http://msdn.microsoft.com/en-us/library/ff650794.aspx

A Tale of Two Services

Now back in New Zealand after two weeks in the US: the first week at TechEd and then a week in our US development centre. I finally feel free of jet lag, so it’s time to make good on a promise to write up a couple of samples I didn’t show at TechEd. The first is a quick introduction to authoring services…

The source code to accompany this post can be downloaded from http://public.me.com/stefsewell/ from the TechEd2010 folder. The sample code is in the archive ServiceAuthoringSample.zip.


A service is simply a piece of software that provides some functionality; access to this functionality is formalized into a contract. A service is often hosted in a separate process and utilized by a number of different consumers. The service does not know anything about the consumer; it just performs some work on request. Between the consumer and service there is most likely a process, machine and possibly a network boundary, therefore any data to be exchanged must be serializable. For the consumer to call the service, it must know where the service lives, so the service has an address. The consumer must also be able to understand and be understood by the service; the supported communication protocols are captured as bindings. So there we have the ABC of Windows Communication Foundation: the Address, the Binding and the Contract.

Services in Code

With each release of Visual Studio, the key use cases that Microsoft is targeting with its tooling become easier to perform. In VS2010 the ease of service authoring and hosting has taken a leap forward, and the line count required to implement a service has dropped. Let’s look at a very simple service that provides a random answer to a question: a Magic Eight Ball service. The contract for the magic eight ball is very simple and is captured as the following interface:

using System.ServiceModel;

namespace MagicEightBall.CodedService {
    [ServiceContract]
    public interface MagicEightBallContract {
        [OperationContract]
        string AskQuestion(string question);
    }
}

There is a single method that takes a string containing a question and returns a string containing the answer. The System.ServiceModel namespace is the hint that we are going to use WCF to take care of our service. To provide an implementation of the service we have the following code.

using System;

namespace MagicEightBall.CodedService {
    public class MagicEightBallService : MagicEightBallContract {
        public string AskQuestion(string question) {
            return EightBall.Shake();
        }
    }

    internal sealed class EightBall {
        private readonly static Random random = new Random();
        private readonly static string[] answers = { "Yes", "No", "Ask again", "Definitely", "Bad idea", "Perhaps", "Unsure" };

        public static string Shake(){
            return answers[random.Next(0, answers.Length)];
        }
    }
}

The eight ball is captured as a simple class with a Shake method; to keep things simple the service does not enforce any validation, such as ensuring a question is actually asked. Note that there is no System.ServiceModel using statement: this is vanilla .NET. We have a service contract and an implementation, so our coding is complete. The next step is to host the service and allow our consumers to call it. The service host can be implemented in a number of ways; for this example we are going to use WAS (Windows Process Activation Service), which uses the IIS infrastructure to host the service, so we don’t need to write a host, we’ll just use one that Microsoft provides. To access the service, the host exposes an endpoint and the endpoint is composed of the address, binding and contract. One of the criticisms of WCF in .NET 3 was the steep initial learning curve required to get a service hosted and configured. In .NET 4 the idea of defaults has been introduced, which greatly reduces the amount of WCF configuration required to get up and running (to the point where it is possible to have no explicit configuration at all). In the example below we have a little configuration due to a slightly non-standard approach.

<?xml ="1.0"?>
<configuration>
  <system.serviceModel>
    <serviceHostingEnvironment>
      <serviceActivations>
        <add relativeAddress="MagicEightBall.svc" service="MagicEightBall.CodedService.MagicEightBallService"/>
      </serviceActivations>
    </serviceHostingEnvironment>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <serviceMetadata httpGetEnabled="True"/>
          <serviceDebug includeExceptionDetailInFaults="False"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

Here we are using the <serviceActivations> element to specify the last part of the address of the service rather than having a separate .svc file. Personally I think this is quite a tidy approach, rather than having separate .config and .svc files. The <serviceBehaviors> section states that we want to publish metadata about this service and that we want to hide any exception details from consumers of our service. By publishing metadata about our service we allow tooling to generate a proxy class for us that allows our service to be easily called. Visual Studio provides such tooling; from within a project you can add a Service Reference:

The service reference needs to know the address of the service; from the metadata it then creates a class, the proxy, that allows the project to make use of the service. After clicking on OK, the service reference is listed as part of the project. In the sample below, the MagicEightBall client is making use of two separate services.

I’m jumping a little bit ahead though, since we haven’t got the service host set up yet. We want to publish the service which we can do from within VS2010 by choosing Publish… from the context menu for the project:

A dialog pops up asking for a location to publish to; I used http://localhost/MagicEightBall, which set up a new web application in IIS. By default the web application is set up to support the http protocol. If you want to change this you need to alter the ‘Enabled Protocols’ in the Advanced Settings dialog, which is available from the web application context menu in IIS Manager [Manage application | Advanced Settings…].

In the example above I added the net.tcp protocol in addition to http. Note that there is no space between the comma and net.tcp; putting a space in here will break the enabled protocols! Now that we have created and published a WCF service, to test it point your browser to http://localhost/MagicEightBall/MagicEightBall.svc. You should see the standard metadata page for your service, explaining how to create a proxy class and consume it.
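That metadata page includes the svcutil command line to generate the proxy, something along the lines of:

svcutil.exe http://localhost/MagicEightBall/MagicEightBall.svc?wsdl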

[Note that I have .NET 4 registered as the default framework version for IIS and so the default app pool uses .NET 4. The command C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i registers .NET 4 as the default for IIS.]

To test the service, create a console application and add a service reference called MagicEightBallService using the HTTP URL. Code to call the service is as follows:

using System;

using MagicEightBall.Client.MagicEightBallService;

namespace MagicEightBall.Client {
    class Program {
        private const string CodeEndpointNameHttp = "BasicHttpBinding_MagicEightBallContract";

        static void Main() {
            string question = "Will you answer my questions?";
            string answer = string.Empty;

            // The generated proxy is constructed with the endpoint name from the app.config
            using (MagicEightBallContractClient client = new MagicEightBallContractClient(CodeEndpointNameHttp)) {
                answer = client.AskQuestion(question);
            }

            Console.WriteLine(answer);
        }
    }
}

In total there are fewer than 30 lines of code required to define, implement, host and consume a WCF service.

Services as Workflows
There is an alternative way to author services which uses a workflow to define the service implementation. A functionally equivalent Magic Eight Ball service can be developed as a workflow service as follows…

First create a new project in VS2010 using the ‘WCF Workflow Service Application’ template, which sets up the basic send/receive service. We need to set up a couple of variables within our workflow, so click on the variables button at the bottom left having selected the outer scope:

The handle is created by the template, so we need to add in the question and answer strings. The variables are used to pass data into and out of activities; an activity is the equivalent of a program statement and acts on the data. In workflow it is possible to author new activities, such as the EightBall used in this example. The code for the activity is as follows:

using System;
using System.Activities;

namespace MagicEightBall.WorkflowService {
    public sealed class EightBall : CodeActivity<string> {
        private static Random random = new Random();
        private static string[] answers = { "Yes", "No", "Ask again", "Definitely", "Bad idea", "Perhaps", "Unsure" };

        public InArgument<string> Question { get; set; }

        protected override string Execute(CodeActivityContext context) {
            string question = context.GetValue(this.Question);
            string answer = answers[random.Next(0, answers.Length)];

            return answer;
        }
    }
}

This activity is essentially the same code as the EightBall class in the original service. The question is captured as an InArgument<string> to the activity and the result is a string, the activity deriving from CodeActivity<string>. Note the use of the CodeActivityContext to get the value of the question from the workflow runtime at execution time.

After compiling the project we get an EightBall activity in our toolbox and this can be dragged into the service workflow. The completed implementation looks as follows with the addition of the EightBall activity:

The EightBall activity needs to have its arguments mapped to variables. The properties of the activity are defined as follows:

In the receive activity, the operation name is changed to AskQuestion and the content is changed to:

Here the receive activity expects to get a string parameter called question which is mapped to the question variable we created earlier. The receive/send activity pairing is analogous to the AskQuestion method in our coded service.

The send activity returns a string and is paired with the ‘Receive Question’ activity, as shown in the Request field.

Here we are returning the answer that we got from the EightBall activity. This workflow is now functionally equivalent to our original coded example: a string containing a question is passed in, a string containing an answer is returned.

To host the workflow service, the same steps are taken as before: you simply choose to publish the service from Visual Studio into IIS. The service exposes metadata in the same way as the coded service, therefore you can ask Visual Studio to generate a service reference for you and then consume the service in the same way as we did for the coded service.

So we have two ways to solve a problem; which is better? It depends on the work that the service is performing. If the service is coordinating work across multiple services then a workflow makes sense, as it can be easier to visualize the intended flow of control. If the service coordination is long running and needs to be persisted then again a workflow makes sense, as this long-running, durable capability is built right into the workflow service host that Microsoft ships out of the box.

The sample code contains some additional concepts not discussed such as a separate activity library and instrumentation options for service code. The code is small and so hopefully this does not clutter the examples too much.

Migration from .NET 2/3/3.5 to .NET 4

During the TechEd session, the question was asked:

“How do I migrate my services from WCF3 to WCF4?”

The simple answer is that you recompile your source under .NET 4 and you should be done. .NET 4 is backwards compatible with .NET 2/3.x, but you need to recompile for the new CLR (common language runtime).
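For an IIS-hosted service this usually amounts to retargeting the project to .NET 4 and making sure the application pool runs the v4 runtime; the ASP.NET 4 web.config then declares the target framework with an element along these lines:

<system.web>
  <compilation targetFramework="4.0" />
</system.web>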

TechEd NZ 2009 Sessions

This year Microsoft has opened up the TechEd sessions to the public, so you no longer have to be a TechEd attendee to watch the sessions online. This includes sessions from previous years, which means the sessions I co-presented at New Zealand TechEd last year are now available.

A first look at WCF and WF in .NET 4.0
http://www.msteched.com/2009/NewZealand/SOA206

This session covered the new features in .NET 4 for WCF and WF. The slide deck was prepared and originally presented by Aaron Skonnard from Pluralsight. Mark, a colleague at ADERANT, and I were asked to present in New Zealand due to our involvement in the .NET 4.0 TAP (Technology Adoption Program). The demos were our own, so the content is slightly different from the original presentation.

Building declarative apps in .NET 4.0
http://www.msteched.com/2009/NewZealand/SOA306

In this session we wanted to show how Microsoft is choosing a declarative approach for much of its new technology, freeing the developer from the how and letting them concentrate on the what. Using the Visual Studio DSL toolkit it is possible to build your own visual DSLs and designers. From these models you can then use T4 to transform the model into code. This approach is at the heart of a software factory we use internally at ADERANT and has saved us from technology churn as well as speeding up product development.

Note: The DSL toolkit has been renamed for VS2010 and is now the Visual Studio Visualization and Modeling SDK.

Data over the Web

It’s been a little while since the last post, in no small part due to my broadband usage exceeding the monthly quota. Dial-up speed is just painful, and made me realize just how much I use the internet for media: music, movies, podcasts, blogs… It was also a great reminder of just how sensitive applications are when you have a constrained network connection.

One of the most significant changes made to the Expert architecture with SP1 is the introduction of a query service. Prior to SP1, the architectural layering required that data transfer objects (DTOs) were used to move data from a service boundary to the client. The domain model was mapped to whatever shape was required by the client requesting the entities.

Writing the DTO and mapper classes is very repetitive and quite dull, so it was automated using the Visual DSL Toolkit for Visual Studio (now renamed the Visual Studio Visualization and Modeling SDK). A key component in the Expert framework is our software factory, which builds code from three models: the relational model, the domain model and the view model. The view model provides a model and tooling to generate use-case-specific views of the domain model, along with the mappers required to transform from domain model to view model and back. An optimization we made when sending data back to the service to update the domain model was to send back just the changes. This required the view model to track any updates made to the model between the time it was fetched from a service and the time it was sent back to the service. The mechanism we wrote to achieve that is worth a few blog entries on its own, and I’m going to skip over the details here.

One of the primary clients within this architecture is our workflow service, which allows data from the business services to be managed within a long-running workflow process before being updated back into the main line-of-business system. In the original Golden Gate release, the data associated with a workflow instance is sent out with every task within the workflow (a task is a workflow activity that requires human interaction). For very large workflow processes this can be an issue, particularly over restricted network connections such as VPN or very remote sites. For SP1 we took a look at this particular area and addressed it in the following ways:

• Tasks now have a data contract so that only the required data is sent.
• The way we fetch data is now via a dedicated query service, rather than combining read and write operations in the same service contract. The query service is HTTP based and can therefore take advantage of out-of-the-box optimizations such as caching and compression.

This separation of query from command at the architectural level is currently being explored by a number of people, most vocally by Greg Young and Udi Dahan. The architectural pattern is Command Query Responsibility Segregation (CQRS) and is similar in spirit to the Command-Query Separation (CQS) concept first discussed by Bertrand Meyer in Object-Oriented Software Construction back in 1988. This is another topic worthy of blog posts, and InfoQ has a great presentation from Greg Young.

Our query service implementation is a WCF Data Service which takes an Entity Framework 4 model and exposes it as a RESTful service.


All data required by a client is fetched via the query service and this is delivered over an http channel. The use of IIS and HTTP gives us the following:

• monitoring via AppFabric
• compression via the dynamic compression in IIS7
• caching using standard HTTP based caches
• cross platform capable data feed

The lifecycle for data is now:

There are a number of interesting aspects to this, not least of which is that we now use two different ORM technologies: NHibernate and Entity Framework. This is based on both historical use and feature set. Given the data model that we map to, we need the rich extension points available in NHibernate to support the desired object model. The WCF Data Services and EF4 features in .NET 4 / Visual Studio 2010 take most of the heavy lifting out of exposing a domain model via REST. Microsoft is now promoting the Open Data Protocol, built on top of HTTP/Atom/JSON, as a cross-platform mechanism for interoperating with data over the web; an ODBC for the web, perhaps. The Mix10 keynote included a chapter on OData and how Microsoft is tooling it.

Given that we have a software factory that already contains a domain model which includes the persistence mapping, we just needed to add additional T4 transforms to the software factory so that we could generate a query service and view model implementation from the existing domain model. Along the way we also simplified the change tracking approach, as the explicit task data contract reduced the potential for merge conflicts.

Now that we have a query service, and Microsoft is doing all it can to promote OData as a cross-platform solution, an interesting number of options are opening up. One of these is the iPhone/iPad platform from Apple. As part of the OData SDK, Microsoft has released a library and tooling to make consuming OData feeds from the Apple platform straightforward. This includes a tool that generates Objective-C classes from the metadata available from an OData stream (via the $metadata directive).
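For example, against a hypothetical service address, the metadata document and a filtered feed are just URIs (the address and the Clients entity set are illustrative, not the actual Expert service names):

http://myserver/ExpertQuery.svc/$metadata
http://myserver/ExpertQuery.svc/Clients?$top=10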

How to diagnose errors in AppFabric monitoring configuration

It wasn’t the best Friday, my external hard drive died taking my work iTunes library with it and I wasn’t having much fun with AppFabric either. The dashboard showed no data and the Windows application event log kept filling up with login errors. Looking back, the afternoon was useful since I learned that little bit more about AppFabric though I didn’t get any ‘real’ work done.

I started off reading this: http://social.technet.microsoft.com/wiki/contents/articles/appfabric-items-to-check-when-configuring-appfabric-monitoring.aspx before getting stuck in.

AppFabric has two data stores: a monitoring store and a workflow persistence store. These stores are paired with two Windows services: an event collection service for the monitoring store, and a workflow management service for the workflow persistence store.

Let’s start with the event collection service and monitoring store. This service is responsible for capturing the WF and WCF events emitted by services hosted in IIS/WAS and storing them in the monitoring store. These events are used to populate the dashboard that is integrated into IIS Manager. To enable capture of events you can use the ‘Manage WF and WCF Services | Configure…’ option in the web application context menu, or the PowerShell commands Set-ASAppMonitoring and Start-ASAppMonitoring. For help on these commands call get-help, e.g. ‘get-help Set-ASAppMonitoring’, from a PowerShell command line.

When you set up monitoring you need to provide a connection string name and set the monitoring level. As a minimum, the level needs to be set to Health Monitoring to populate the AppFabric dashboard. Below this are the levels Off and Errors Only, which are self explanatory. Above it are End-to-End Monitoring and Troubleshooting, both of which capture additional information. End-to-End Monitoring adds a header into WCF traffic to allow a logical call sequence to be followed: when a WCF service calls another WCF service the header is flowed across the call, providing a correlation token to query by. Note that the capture levels are cumulative; each level includes all of the events from the levels below it. The higher the setting, the greater the impact on the performance of the system, as more resources are required to capture and log the monitored events. For day-to-day operations Health Monitoring is recommended, with the more verbose options used as required to aid troubleshooting. The connection string is a named connection string value, set as a property of the web application (or one of its ancestors). The Connection Strings page is available from the ASP.NET section of the Features View for the web application.

Clicking on the Connection Strings option brings up the following:


Note that IIS configuration is hierarchical; the connection strings available to the Magic8Ball web application are both inherited, which means they are defined at a higher node in the tree. In this case the strings are defined in the machine web.config found at %SystemDrive%\Windows\Microsoft.NET\Framework64\v4.0.30128\Config (I’m using 64-bit Windows and .NET 4.0 RC). When installing AppFabric the default connection strings are written into the machine-level web.config. In my case, both connection strings are set up to use integrated security.

The event collection service is a Windows service and so is managed through the services administration snap-in, services.msc. To help set up integrated security from Windows through to SQL Server, I run the services under a domain account. Note that if you plan to use a machine that is not always on a domain, you need to use a local machine account.


This account needs to have login rights to the SQL Server and should be mapped to the ASMonitoringDbWriter role. In my case I’ve mapped the user to all three roles set up in the monitoring store.

There are four jobs managed by the SQL Agent that are used to populate and manage the tables in the monitoring database: three Import*Events jobs and an AutoPurge job.

The SQL Server Agent must be running for the tables to be populated. The Import*Events jobs run every 10 seconds by default; if they are not correctly set up, your application event log soon fills up with errors and warnings (as I found). These jobs call stored procedures defined in the monitoring database (ASImportTransferEvents, ASImportWcfEvents, ASImportWFEvents) and run as the AS_MonitoringDbJobsAdmin. The AutoPurge job is scheduled to run once every minute and calls the ASAutoPurge stored procedure. These stored procedures in turn call ASInternal_* versions of themselves and you can drill into the SQL to see exactly what they do. To housekeep the monitoring database you can use the Clear-ASMonitoringSqlDatabase command. Another option is to move the events to an archive database so that the queries feeding the dashboard remain responsive; see Set-ASMonitoringSqlDatabaseArchiveConfiguration. The archive database can then be managed as per any audit requirements you may have.

To monitor the SQL Agent jobs, you can use the Job Activity Monitor:

The Windows Event Viewer is a great help tracking down the cause of issues and AppFabric sets up a couple of customs logs.

To see the Debug and Analytic logs you need to enable ‘Show Analytic and Debug Logs’ from the View menu in Event Viewer.

Right click on a debug or analytic log to enable it. Make sure you disable it again when you are finished, to prevent performance degradation due to high-volume event capture.

From these logs I could determine that my IIS configuration had invalid entries, the SQL Server login was failing for the Event Collector and so on. I’ll talk more about diagnosing IIS configuration issues and the workflow persistence store in the next post…