Workflow Services & MSMQ Revisited

I recently dusted off a WCF sample I’d written and blogged about a year or two ago. During the process of getting it to work again, I discovered the blog posting was incorrect, so I’m reposting it with corrections and additional explanation.

Tom Hollander published a great set of posts on this topic which I needed…

http://blogs.msdn.com/b/tomholl/archive/2008/07/12/msmq-wcf-and-iis-getting-them-to-play-nice-part-1.aspx

http://blogs.msdn.com/b/tomholl/archive/2008/07/12/msmq-wcf-and-iis-getting-them-to-play-nice-part-2.aspx

http://blogs.msdn.com/b/tomholl/archive/2008/07/12/msmq-wcf-and-iis-getting-them-to-play-nice-part-3.aspx

We needed a quick proof of concept to show that a workflow service could be activated via a message sent over MSMQ. The first part, workflow design and coding, was the easy part. All I wanted to do was accept a custom type, in this case a TimeEntry, from a SubmitTime service operation belonging to an ITimeEntryContract. On receiving the time entry I would simply log its arrival in the event log. This is pretty much the “Hello, World!” of service demos. The second part was getting the configuration correct…

One of the promises of WCF is that it gives us a unified communication model regardless of the protocol: net.tcp, http, msmq, net.pipe – and it does. The best description I’ve heard of WCF is that it is a channel factory, and you configure the channels declaratively in the .config file (you can of course use code too if you prefer). The key benefit is that the service contract and implementation can, for the most part, be channel agnostic. There are of course exceptions to prove the rule, such as an MSMQ channel requiring one-way operations with a void return, but for the most part it holds. As it turned out, I needed to make no code changes to move from a default http endpoint to an MSMQ endpoint. I did need to do a lot of configuration and setup though, which is not that well documented. This post hopes to correct that in some small way.
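To make the shape of the service concrete, here is roughly what the contract looks like expressed in C#. This is a sketch: the workflow service defines the contract in the designer rather than in code, and the names simply restate the description above. The IsOneWay modifier reflects the MSMQ constraint just mentioned: queued operations cannot return a value.

using System.ServiceModel;

namespace QueuedWorkflowService.Service {
    [ServiceContract]
    public interface ITimeEntryContract {
        // MSMQ is a one-way transport: no reply message, void return.
        [OperationContract(IsOneWay = true)]
        void SubmitTime(TimeEntry timeEntry);
    }
}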

First up the easy part, writing the code.

In Visual Studio 2010 I started a new ‘WCF Workflow Service Application’ project. First I define my TimeEntry model class:

using System;

namespace QueuedWorkflowService.Service {
    public class TimeEntry {
        public Guid TimekeeperId { get; set; }
        public Guid MatterId { get; set; }
        public TimeSpan Duration { get; set; }

        public override string ToString() {
            return string.Format("Timekeeper: {0}, Matter: {1}, Duration: {2}", TimekeeperId, MatterId, Duration);
        }
    }
}

Then I defined a code activity to write to the time entry provided into the event log:

using System;
using System.Diagnostics;
using System.Activities;

namespace QueuedWorkflowService.Service {
    public sealed class DebugLog : CodeActivity {
        public InArgument<string> Text { get; set; }

        protected override void Execute(CodeActivityContext context) {
            string message = string.Format("Server [{0}] - Queued Workflow Service - debug: {1}", DateTime.Now, context.GetValue(this.Text));
            Debug.WriteLine(message);
            EventLog.WriteEntry("Queued Service Example", message, EventLogEntryType.Information);
        }
    }
}

All the C# code is now written and I create my workflow:

I need a variable to hold my time entry so I define one at the scope of the service:

The project template creates the CorrelationHandle for me but we won’t be using it.

The receive activity is configured as follows:

With the Content specified as:

This is such a simple service that I don’t need any correlation between messages; it just receives and processes the message without communicating back to the sender. Therefore I also cleared out the CorrelatesOn and CorrelationInitializer properties.

Finally I set up the Debug activity to write the time entry to the event log:

That’s it! I’m done. This now runs using the default endpoints introduced in WCF 4 (http and net.tcp). Starting the project launches my service, and the WCF Test Client opens pointing at the new service. The service is running in Cassini, the local web server built into the Visual Studio debugging environment.

15 minutes, or thereabouts, to build a workflow service. What follows is a summary of the steps discovered over the next 4 hours trying to convert this sample from using an http endpoint to an msmq endpoint.

Default Behaviour
One of the key messages Microsoft heard from the WCF 3 community was that configuration was too hard. To even get started using WCF you had to understand a mountain of new terms and concepts: channels, addresses, bindings, contracts, behaviours… The response to this in .NET 4 is defaults: if you don’t specify an endpoint, binding and so on, WCF creates a default one for you based upon your machine configuration settings. This makes getting a service up and running a very straightforward experience. BUT as soon as you want to step outside of the defaults, you need the same knowledge that you needed in the WCF 3 world.

Here’s the web.config I ended up with after a couple of hours; the MSMQ settings were a voyage of personal discovery… (http://msdn.microsoft.com/en-us/library/ms731380.aspx)

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
  <system.serviceModel>
    <services>
      <service name="TimeEntryService">
        <endpoint
            binding="netMsmqBinding"
            bindingConfiguration="nonTxnMsmqBinding"
            address="net.msmq://localhost/private/QueuedWorkflowService/TimeEntryService.xamlx"
            contract="ITimeEntryContract" />

        <endpoint
            binding="netMsmqBinding"
            bindingConfiguration="txnMsmqBinding"
            address="net.msmq://localhost/private/QueuedWorkflowServiceTxn/TimeEntryService.xamlx"
            contract="ITimeEntryContract" />
        <endpoint
            address="mex"
            binding="mexHttpBinding"
            contract="IMetadataExchange" />
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <serviceMetadata httpGetEnabled="true" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <bindings>
      <netMsmqBinding>
        <binding
            name="nonTxnMsmqBinding"
            durable="false"
            exactlyOnce="false"
            useActiveDirectory="false"
            queueTransferProtocol="Native">
          <security mode="None">
            <message clientCredentialType="None" />
            <transport
                msmqAuthenticationMode="None"
                msmqProtectionLevel="None" />
          </security>
        </binding>

        <binding
            name="txnMsmqBinding"
            durable="true"
            exactlyOnce="true"
            useActiveDirectory="false"
            queueTransferProtocol="Native">
          <security mode="None">
            <message clientCredentialType="None" />
            <transport
                msmqAuthenticationMode="None"
                msmqProtectionLevel="None" />
          </security>
        </binding>
      </netMsmqBinding>
    </bindings>
  </system.serviceModel>
  <microsoft.applicationServer>
    <hosting>
      <serviceAutoStart>
        <add relativeVirtualPath="TimeEntryService.xamlx" />
      </serviceAutoStart>
    </hosting>
  </microsoft.applicationServer>
</configuration>

The two important sections are the endpoint definitions and the netMsmqBinding configuration. A single service is defined that exposes two MSMQ endpoints: a transactional endpoint and a non-transactional one. This was done to demonstrate the changes required in the netMsmqBinding to support a transactional queue versus a non-transactional queue, namely the durable and exactlyOnce attributes. In both cases no security is enabled; I had to do this to get the simplest example to work. Note that the WCF address for the queue does not include a $ suffix on the private queue name and matches the Uri of the service.

We still have some way to go to get this to work; we need a number of services to be installed and running on the workstation:

Services
• Message Queuing (MSMQ)
• Net.Msmq Listener Adapter (NetMsmqActivator)
• Windows Process Activation Service (WAS)

I also ensured that AppFabric was running as this is the easiest way to start the debugging process:
• AppFabric Event Collection Service (AppFabricEventCollectionService)

If you don’t have these services registered on your workstation you will need to go into the ‘Programs and Features’ control panel, then ‘Turn Windows features on or off’ to enable them (Windows 7).

With the services installed and started you need to create a private message queue to map the endpoint to (see: http://msdn.microsoft.com/en-us/library/ms789025.aspx). The queue name must match the Uri of the service.
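The queues can be created through the Computer Management snap-in or in code. Here is a minimal sketch using System.Messaging, assuming the two endpoint addresses from the config above; note that the native MSMQ path includes the private$ segment that the WCF address omits.

using System.Messaging;

class CreateQueues {
    static void Main() {
        // The queue name matches the Uri of the service, including the .xamlx suffix.
        string nonTxnPath = @".\private$\QueuedWorkflowService/TimeEntryService.xamlx";
        string txnPath = @".\private$\QueuedWorkflowServiceTxn/TimeEntryService.xamlx";

        if (!MessageQueue.Exists(nonTxnPath)) {
            MessageQueue.Create(nonTxnPath, false);   // non-transactional queue
        }
        if (!MessageQueue.Exists(txnPath)) {
            MessageQueue.Create(txnPath, true);       // transactional queue
        }
    }
}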


The sample is configured to run without security on the queues, i.e. the queues are not authenticated. You must allow the anonymous logon ‘send’ rights on the queues. If you don’t, the messages will be delivered but the WAS listener will not be able to pick up the messages from the queue.
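Granting the send right can also be scripted rather than set through the queue properties dialog; a sketch, again using System.Messaging (‘ANONYMOUS LOGON’ is the well-known account name):

using System.Messaging;

class GrantSendRights {
    static void Main() {
        using (MessageQueue queue = new MessageQueue(@".\private$\QueuedWorkflowService/TimeEntryService.xamlx")) {
            // Allow anonymous senders to write messages to the queue.
            queue.SetPermissions("ANONYMOUS LOGON", MessageQueueAccessRights.WriteMessage, AccessControlEntryType.Allow);
        }
    }
}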


If you have problems and do not see the message delivered to the correct queue, have a look in the system Dead Letter queues.


You also need to change your VS2010 project to use IIS as the host rather than Cassini. On the project properties dialog, open the Web tab:

As I wanted events from this service to be added to my AppFabric monitoring store, I also added a connection string to the mapped web application and then configured AppFabric monitoring to use that connection.


And in AppFabric configuration:

Finally you also need to enable the correct protocols on the web application (Manage Application… | Advanced Settings):

I’ve added in net.msmq for queuing support and also net.pipe for the workflow control endpoint.
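If you prefer the command line, the same protocols can be enabled with appcmd; a sketch, assuming the application sits under the default web site at /QueuedWorkflowService:

>%windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/QueuedWorkflowService" /enabledProtocols:http,net.msmq,net.pipe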

Make sure that the user the application pool is running as has access to read and write to the queue.

With the server configured, I then wrote a simple WPF test application that used a service reference generated by VS2010; this creates the appropriate client-side WCF configuration. The button click handler called the service proxy directly:

private void submitTimeEntryButton_Click(object sender, RoutedEventArgs e) {
    using (TimeEntryContractClient proxy = new TimeEntryContractClient("QueuedTimeEntryContract")) {
        TimeEntry timeEntry = new TimeEntry {
                                            TimekeeperId = Guid.NewGuid(),
                                            MatterId = Guid.NewGuid(),
                                            Duration = new TimeSpan(0, 4, 0, 0)
                                            };

        string message = string.Format("Client [{3}] - TimekeeperId: {0}, MatterId: {1}, Duration: {2}",
            timeEntry.TimekeeperId,
            timeEntry.MatterId,
            timeEntry.Duration,
            DateTime.Now);

        proxy.SubmitTimeEntry(timeEntry);
        EventLog.WriteEntry("Queued Service Example", message, EventLogEntryType.Information);
    }
}
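The endpoint name passed to the proxy constructor (“QueuedTimeEntryContract”) refers to a named client endpoint in the generated app.config. It looks roughly like the following sketch; the binding configuration must mirror the service side (no security, non-transactional), and the contract namespace depends on what you called the service reference, so treat these names as illustrative:

<system.serviceModel>
  <bindings>
    <netMsmqBinding>
      <!-- Must match the service binding: no security on the queue. -->
      <binding name="clientMsmqBinding" durable="false" exactlyOnce="false">
        <security mode="None" />
      </binding>
    </netMsmqBinding>
  </bindings>
  <client>
    <endpoint name="QueuedTimeEntryContract"
        address="net.msmq://localhost/private/QueuedWorkflowService/TimeEntryService.xamlx"
        binding="netMsmqBinding"
        bindingConfiguration="clientMsmqBinding"
        contract="TimeEntryService.ITimeEntryContract" />
  </client>
</system.serviceModel>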

And the awesome UI:

Click the button and you get entries in the event log, a client event and the server event:


I made no changes to the code to move from an http endpoint to an MSMQ endpoint, but it’s not as simple as tweaking the config and you’re good to go. I’d love to see some tooling in VS2010 or VS vNext to take some of the pain away from WCF configuration, similar to the tooling AppFabric adds into IIS. Until that happens, there are plenty of angle brackets to deal with.

Securing WF & WCF Services using Windows Authentication

To finish off the DEV404 session Pete and I presented at TechEd NZ, I gave a brief run-through of the steps required to get Windows Authentication working in a load balanced environment using Kerberos. Given the number of camera phones that appeared for snaps, I’m going to assume this is a common problem with a non-intuitive solution…

The product I work on is an on-premise enterprise solution that uses the Windows identity to provide an authenticated credential against which to authorize user requests. We host our services in IIS/Windows Server AppFabric and take advantage of the Windows Authentication provided by IIS. This allows one of two protocols to be used, Kerberos or NTLM, which have quite different characteristics.

Why Use Kerberos?
There are two main reasons we want to use Kerberos rather than NTLM:

1. Performance: NTLM uses a challenge/response pattern for authentication which leads to high network utilization. During performance testing we saw a high volume of NTLM challenges which ultimately throttled our ability to serve requests. Kerberos uses tickets which can be cached, permitting a better performing protocol.

2. Double hops: NTLM does not flow credentials – the canonical example is a user requesting serviceA on server1 to access a secured resource on server2. Server1 cannot flow the user’s identity to server2.

Kerberos and Load Balancing
We want to run our services within a load balanced cluster to avoid single points of failure and to be able to grow resources to meet demand as required, without having to adopt bigger tin. The default configuration of IIS does not encourage this… the application pools run as a local machine account. This is a significant issue for Kerberos because of the manner in which the protocol encrypts the tickets passed between the client, the TGS and the target server. The password of the account running the service is used to encrypt tickets so that only a process running under that account can decrypt the message. The default use of a machine-specific account prevents a ticket granting access to serviceX on server A from also being used to access serviceX on server B.

The following steps are required to fix this:

1. Use a common domain account for the application pools.

We use a DOMAIN\service.expert account to run our services. This domain account is granted ‘log on as a service’ and ‘log on as a batch job’ rights on each of the application servers.

2. Register an SPN mapping the service class to the account.

We run our services on HTTP and so register the load balancer address with the domain account used to run the services:

>setspn -a HTTP/clusteraddress serviceAccount
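Clients may address the cluster by either its NetBIOS name or its fully qualified domain name, and the SPN must match whichever form the client requests, so it is common practice to register both; the FQDN below is illustrative:

>setspn -a HTTP/clusteraddress serviceAccount
>setspn -a HTTP/clusteraddress.mydomain.com serviceAccount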

We are using the WCF BasicHttpBinding, which does not require the client to ensure the service is running as a particular user (to prevent man-in-the-middle attacks). If you are using any other type of binding then the client needs to state who it expects the service to be running as.

3. Configure IIS to use the application pool account rather than a machine account

system.webServer/security/authentication/windowsAuthentication useAppPoolCredentials must be set to true.

4. Configure IIS to allow kerberos authentication tokens to be cached

system.webServer/security/authentication/windowsAuthentication authPersistNonNTLM must be set to true.

See also http://support.microsoft.com/kb/954873
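Steps 3 and 4 expressed in configuration terms: a sketch of the relevant section in applicationHost.config (enabled="true" is assumed, since Windows Authentication is already in use):

<system.webServer>
  <security>
    <authentication>
      <windowsAuthentication enabled="true" useAppPoolCredentials="true" authPersistNonNTLM="true" />
    </authentication>
  </security>
</system.webServer>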

5. Ensure the cluster address is considered to be in the Local Intranet zone


Kerberos tokens are not supported in the Internet zone, therefore the URL for your services must be considered trusted. The standard way to implement this is to roll out a group policy that adds your domain to the Local Intranet zone settings.

The slide deck for the talk is available from http://public.me.com/stefsewell/

DEV404 – Hardcore Workflow 4

Thanks to everyone who attended the DEV404 session at TechEd NZ. We wanted to cover some new material that we hadn’t seen elsewhere, so Pete concentrated on the extensibility of the WorkflowServiceHost and the WorkflowServiceHostFactory. Before we got there I felt we needed a lead-in, so I gave a brief overview of the workflow runtime; much of the material was covered in depth at PDC09 in the session Workflow 4 Inside Out.

The key point was the single-threaded nature of the workflow scheduler. There is a single thread responsible for scheduling the execution of the activities in the activity tree, and you really do not want to block this thread. This is the thread that runs the Execute method of synchronous activities. To show this in action I built the following workflow:

There’s a collection of strings populated with URLs of a few well known websites.

Then there is a ParallelForEach that iterates over the collection and fetches the contents of each web page. The FetchUrl activity was written as follows:

using System.Activities;
using System.IO;
using System.Net;

namespace WorkflowRuntime.Activities {
    /// <summary>
    /// Fetch HTTP resource synchronously 
    /// </summary>
    public sealed class FetchUrlSync : CodeActivity<string> {
        public InArgument<string> Address { get; set; }
        protected override string Execute(CodeActivityContext context) {
            string address = context.GetValue(Address);
            string content = string.Empty;
            WebRequest request = HttpWebRequest.Create(address); 

            using(HttpWebResponse response = request.GetResponse() as HttpWebResponse) {
                if(response != null) {
                    using (Stream stream = response.GetResponseStream()) {
                        if (stream != null) {
                            StreamReader reader = new StreamReader(stream);
                            content = reader.ReadToEnd();
                        }
                    }
                }
            }
            return content;
        }
    }
}

The HttpWebRequest class is used to fetch the page contents. Running the workflow gives the following results:

The URLs are fetched one at a time, the same behavior that you would see if the activities were scheduled in a sequence rather than in parallel. Why? This is the single-threaded scheduler: it must wait for the Execute() method of each activity to complete before it can schedule the next activity.

What we want to see is:

So how can we achieve this? We have to rewrite the FetchUrl activity to perform its work asynchronously. The HttpWebRequest already has async support via the BeginGetResponse and EndGetResponse method pair; this is the standard pattern in .NET for async programming. The FetchUrl activity becomes:

using System;
using System.Activities;
using System.IO;
using System.Net;

namespace WorkflowRuntime.Activities {
    /// <summary>
    /// Fetch HTTP resource asynchronously
    /// </summary>
    public sealed class FetchUrlAsync : AsyncCodeActivity<string> {
        public InArgument<string> Address { get; set; }

        protected override IAsyncResult BeginExecute(AsyncCodeActivityContext context, AsyncCallback callback, object state) {
            string address = context.GetValue(Address);
            WebRequest request = HttpWebRequest.Create(address);
            context.UserState = request;
            return request.BeginGetResponse(callback, state);
        }

        protected override string EndExecute(AsyncCodeActivityContext context, IAsyncResult result) {
            string content = string.Empty;
            WebRequest request = (WebRequest)context.UserState;
            using (HttpWebResponse response = request.EndGetResponse(result) as HttpWebResponse) {
                if (response != null) {
                    using (Stream stream = response.GetResponseStream()) {
                        if (stream != null) {
                            StreamReader reader = new StreamReader(stream);
                            content = reader.ReadToEnd();
                        }
                    }
                }
            }
            return content;
        }
    }
}

We call the BeginGetResponse method, passing in the callback and state object given to us by the workflow runtime as part of the AsyncCodeActivity.BeginExecute method. When the fetch is completed by a separate worker thread, the workflow runtime is called back and the EndExecute method is invoked. In this method we take the resultant stream and read the contents into a string that we return. The workflow scheduler thread is no longer responsible for fetching the content, therefore it can schedule the fetch of the next URL and we get the parallel behavior we expect. All fetches are scheduled, and then the workflow runtime waits to be called back by each worker thread as it completes.

The time taken for the synchronous fetches to complete is the sum total of all fetches. For the asynchronous fetches, it is the time of the longest fetch plus a little overhead.

A basic rule of workflow is to perform I/O asynchronously and not to block the scheduler thread.

The sample code and PPT deck is available from https://public.me.com/stefsewell

A Tale of Two Services

Now back in New Zealand after two weeks in the US: the first week at TechEd and then a week in our US development centre. I finally feel free of jet lag, so it’s time to make good on a promise to write up a couple of samples I didn’t show at TechEd. The first is a quick introduction to authoring services…

The source code to accompany this post can be downloaded from http://public.me.com/stefsewell/ from the TechEd2010 folder. The sample code is in the archive ServiceAuthoringSample.zip.


A service is simply a piece of software that provides some functionality; access to this functionality is formalized into a contract. A service is often hosted in a separate process and utilized by a number of different consumers. The service does not know anything about the consumer; it just performs some work on their request. Between the consumer and service there is most likely a process, machine and possibly a network boundary, therefore any data to be exchanged must be serializable. For the consumer to call the service, it must know where it lives, therefore the service has an address. The consumer must also be able to understand and be understood by the service; the supported communication protocols are captured as bindings. So there we have the ABC of Windows Communication Foundation: the Address, the Binding and the Contract.
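The ABC maps directly onto a WCF endpoint definition; a minimal sketch, with an illustrative address and names borrowed from the sample that follows:

<!-- Address (where), Binding (how), Contract (what) -->
<endpoint
    address="http://localhost/MagicEightBall/MagicEightBall.svc"
    binding="basicHttpBinding"
    contract="MagicEightBall.CodedService.MagicEightBallContract" />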

Services in Code

With each release of Visual Studio, the key use cases that Microsoft is targeting with its tooling become easier to perform. In VS2010 the ease of service authoring and hosting has taken a leap forward, and the line count required to implement a service has dropped. Let’s look at a very simple service that provides a random answer to a question: a Magic Eight Ball service. The contract for the magic eight ball is very simple and is captured as the following interface:

using System.ServiceModel;

namespace MagicEightBall.CodedService {
    [ServiceContract]
    public interface MagicEightBallContract {
        [OperationContract]
        string AskQuestion(string question);
    }
}

There is a single method that takes a string containing a question and returns a string containing the answer. The System.ServiceModel namespace is the hint that we are going to use WCF to take care of our service. To provide an implementation of the service we have the following code.

using System;

namespace MagicEightBall.CodedService {
    public class MagicEightBallService : MagicEightBallContract {
        public string AskQuestion(string question) {
            return EightBall.Shake();
        }
    }

    internal sealed class EightBall {
        private readonly static Random random = new Random();
        private readonly static string[] answers = { "Yes", "No", "Ask again", "Definitely", "Bad idea", "Perhaps", "Unsure" };

        public static string Shake(){
            return answers[random.Next(0, answers.Length)];
        }
    }
}

The eight ball is captured as a simple class with a Shake method; to keep things simple the service does not enforce any validation, such as ensuring a question is actually asked. Note that there is no System.ServiceModel using statement: this is vanilla .NET. We have a service contract and an implementation, so our coding is complete. The next step is to host the service and allow our consumers to call it. The service host can be implemented in a number of ways; for this example we are going to use WAS (Windows Process Activation Service), which uses the IIS infrastructure to host the service – we don’t need to write a host, we’ll just use one that Microsoft provides. To access the service, the host exposes an endpoint; the endpoint is composed of the address, binding and contract. One of the criticisms of WCF in .NET 3 was the steep initial learning curve required to get a service hosted and configured. In .NET 4 the idea of defaults has been introduced, which greatly reduces the amount of WCF configuration required to get up and running (to the point where it is possible to have no explicit configuration). In the example below we have a little configuration due to a slightly non-standard approach.

<?xml ="1.0"?>
<configuration>
  <system.serviceModel>
    <serviceHostingEnvironment>
      <serviceActivations>
        <add relativeAddress="MagicEightBall.svc" service="MagicEightBall.CodedService.MagicEightBallService"/>
      </serviceActivations>
    </serviceHostingEnvironment>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <serviceMetadata httpGetEnabled="True"/>
          <serviceDebug includeExceptionDetailInFaults="False"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

Here we are using the serviceActivations element to specify the last part of the address of the service rather than having a separate .svc file. Personally I think this is quite a tidy approach, rather than having separate .config and .svc files. The serviceBehaviors section states that we want to publish metadata about this service and that we want to hide any exception details from consumers of our service. By publishing metadata about our service we allow tooling to generate a proxy class for us that allows our service to be easily called. Visual Studio provides such tooling: from within a project you can add a Service Reference:

The service reference needs to know the address of the service, and from the metadata it creates a class, the proxy, that allows the project to make use of the service. After clicking OK, the service reference is listed as part of the project; in the sample below the MagicEightBall client is making use of two separate services.

I’m jumping a little ahead though, since we haven’t got the service host set up yet. We want to publish the service, which we can do from within VS2010 by choosing Publish… from the context menu for the project:

A dialog pops up asking for a location to publish to; I used http://localhost/MagicEightBall, which set up a new web application in IIS. By default the web application is set up to support the http protocol. If you want to change this you need to alter the ‘Enabled Protocols’ in the Advanced Settings dialog, which is available from the web application context menu in IIS Manager [Manage application | Advanced Settings…].

In the example above I added the net.tcp protocol in addition to http. Note that there is no space between the comma and net.tcp; putting a space in here will break the enabled protocols! Now we have created and published a WCF service. To test it, point your browser to http://localhost/MagicEightBall/MagicEightBall.svc. You should see the standard metadata page for your service, instructing you how to create a proxy class and consume it.
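That metadata page includes a svcutil command line for generating a proxy class outside of Visual Studio; it looks something like this:

>svcutil.exe http://localhost/MagicEightBall/MagicEightBall.svc?wsdl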

[Note that I have .NET 4 registered as the default framework version for IIS and so the default app pool uses .NET 4. The command C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i registers .NET 4 as the default for IIS.]

To test the service, create a console application and add a service reference called MagicEightBallService using the http URL. Code to call the service is as follows:

using System;

using MagicEightBall.Client.MagicEightBallService;

namespace MagicEightBall.Client {
    class Program {
        private const string CodeEndpointNameHttp = "BasicHttpBinding_MagicEightBallContract";

        static void Main(string[] args) {
            string question = "Will you answer my questions?";
            string answer = string.Empty;

            using (MagicEightBallContractClient client = new MagicEightBallContractClient(CodeEndpointNameHttp)) {
                answer = client.AskQuestion(question);
            }

            Console.WriteLine(answer);
        }
    }
}

In total there are fewer than 30 lines of code required to define, implement, host and consume a WCF service.

Services as Workflows
There is an alternative way to author services, using a workflow to define the service implementation. A functionally equivalent Magic Eight Ball service can be developed as a workflow service as follows…

First create a new project in VS2010 from the ‘WCF Workflow Service Application’ template, which sets up the basic send/receive service. We need to set up a couple of variables within our workflow, so click on the Variables button at the bottom left having selected the outer scope:

The handle is created by the template, so we need to add the question and answer strings. Variables are used to pass data into and out of activities; an activity is the equivalent of a program statement and acts on the data. In workflow it is possible to author new activities, such as an EightBall activity equivalent to the EightBall class in the coded example above. The code for the activity is as follows:

using System;
using System.Activities;

namespace MagicEightBall.WorkflowService {
    public sealed class EightBall : CodeActivity<string> {
        private static Random random = new Random();
        private static string[] answers = { "Yes", "No", "Ask again", "Definitely", "Bad idea", "Perhaps", "Unsure" };

        public InArgument<string> Question { get; set; }

        protected override string Execute(CodeActivityContext context) {
            string question = context.GetValue(this.Question);
            // Next's upper bound is exclusive, so use answers.Length to include every answer.
            string answer = answers[random.Next(0, answers.Length)];

            return answer;
        }
    }
}

This activity is essentially the same code as the EightBall class in the original service. The question is captured as an InArgument<string> to the activity, and the string result is declared by deriving from CodeActivity<string>; the value returned from Execute is bound to the activity’s Result argument. Note the use of the CodeActivityContext to get the value of the question from the workflow runtime at execution time.

After compiling the project we get an EightBall activity in our toolbox and this can be dragged into the service workflow. The completed implementation looks as follows with the addition of the EightBall activity:

The EightBall activity needs to have its arguments mapped to variables. The properties of the activity are defined as follows:

In the receive activity, the operation name is changed to AskQuestion and the content is changed to:

Here the receive activity expects to get a string parameter called question which is mapped to the question variable we created earlier. The receive/send activity pairing is analogous to the AskQuestion method in our coded service.

The send activity returns a string and is paired with the Receive Question receive activity, as shown in the Request field.

Here we are returning the answer that we got from the EightBall activity. This workflow is now functionally equivalent to our original coded example: a string containing a question is passed in, a string containing an answer is returned.

To host the workflow service, the same steps are taken as before: you simply choose to publish the service from Visual Studio into IIS. The service exposes metadata in the same way as the coded service, therefore you can ask Visual Studio to generate a service reference for you and then consume the service in the same way as we did for the coded service.

So we have two ways to solve a problem – which is better? It depends on the work that the service is performing. If the service is co-ordinating work across multiple services then a workflow makes sense, as it can be easier to visualize the intended flow of control. If the service co-ordination is long running and needs to be persisted then again a workflow makes sense, as this long running, durable capability is built right into the workflow service host that Microsoft ships out of the box.

The sample code contains some additional concepts not discussed such as a separate activity library and instrumentation options for service code. The code is small and so hopefully this does not clutter the examples too much.

Migration from .NET 2/3/3.5 to .NET 4

During the TechEd session, the question was asked:

“How do I migrate my services from WCF3 to WCF4?”

The simple answer is that you recompile your source under .NET 4 and you should be done. .NET 4 is backwards compatible with .NET 2/3.x, but you need to recompile for the new CLR (common language runtime).
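For a web-hosted service, the recompilation amounts to retargeting the project at .NET 4 and letting the configuration state the new runtime. A sketch of the relevant web.config element, the same attribute that appears in the MSMQ sample’s config earlier:

<system.web>
  <compilation debug="false" targetFramework="4.0" />
</system.web>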

TechEd NZ 2009 Sessions

This year Microsoft has opened up the TechEd sessions to the public, so you no longer have to be a TechEd attendee to watch the sessions online. This includes sessions from previous years, which means the sessions I co-presented at New Zealand TechEd last year are now available.

A first look at WCF and WF in .NET 4.0
http://www.msteched.com/2009/NewZealand/SOA206

This session covered the new features in .NET 4 for WCF and WF. The slide deck was prepared and originally presented by Aaron Skonnard from Pluralsight. Mark, a colleague at ADERANT, and I were asked to present in New Zealand due to our .NET 4.0 TAP involvement (Technology Adoption Program). The demos were our own and so the content is slightly different to the original presentation.

Building declarative apps in .NET 4.0
http://www.msteched.com/2009/NewZealand/SOA306

In this session we wanted to show how Microsoft is choosing a declarative approach for much of its new technology, freeing the developer from the how and letting them concentrate on the what. Using the Visual Studio DSL toolkit it is possible to build your own visual DSLs and designers. From these models you can then use T4 to transform the model into code. This approach is at the heart of a software factory we use internally at ADERANT and has saved us from technology churn as well as speeding up product development.

Note: The DSL toolkit has been renamed for VS2010 and is now the Visual Studio Visualization and Modeling SDK.

How to diagnose errors in AppFabric monitoring configuration

It wasn’t the best Friday: my external hard drive died, taking my work iTunes library with it, and I wasn’t having much fun with AppFabric either. The dashboard showed no data and the Windows application event log kept filling up with login errors. Looking back, the afternoon was useful since I learned that little bit more about AppFabric, though I didn’t get any ‘real’ work done.

I started off reading this: http://social.technet.microsoft.com/wiki/contents/articles/appfabric-items-to-check-when-configuring-appfabric-monitoring.aspx before getting stuck in.

AppFabric has two data stores: a monitoring store and a workflow persistence store. These stores are paired with two Windows services: an event collection service paired with the monitoring store, and a workflow management service paired with the workflow persistence store.

Let’s start with the event collection service and monitoring store. This service is responsible for capturing the WF and WCF events emitted by services hosted in IIS/WAS and storing them in the monitoring store. These events are used to populate the dashboard that is integrated into IIS Manager. To enable the capture of events you can use the ‘Manage WF and WCF Services | Configure…’ option in the web application context menu or the PowerShell commands Set-ASAppMonitoring and Start-ASAppMonitoring. For help on these commands call get-help, e.g. ‘get-help Set-ASAppMonitoring’, from a PowerShell command line.
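As a sketch, enabling monitoring from PowerShell looks something like the following. The site name, virtual path and connection string name are assumptions for this example, and the parameter names are from memory, so verify them with get-help Set-ASAppMonitoring:

Import-Module ApplicationServer
Set-ASAppMonitoring -SiteName "Default Web Site" -VirtualPath "/MagicEightBall" -MonitoringLevel HealthMonitoring -ConnectionStringName "ApplicationServerMonitoringConnectionString"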

When you set up monitoring you need to provide a connection string name and set the monitoring level. As a minimum, the level needs to be set to Health Monitoring to populate the AppFabric dashboard. Below this are the levels Off and Errors Only, which are self-explanatory. Above it are End-to-End Monitoring and Troubleshooting, both of which capture additional information. End-to-End Monitoring adds a header into WCF traffic to allow a logical call sequence to be followed: when a WCF service calls another WCF service the header is flowed across the call, providing a correlation token to query by. Note that the capture levels are cumulative; each level includes all of the events from the levels below. The higher the setting, the greater the impact on the performance of the system, as more resources are required to capture and log the monitored events. For day-to-day operations, Health Monitoring is recommended, with the more verbose options used when required to aid troubleshooting. The connection string is a named connection string value, set as a property of the web application (or one of its ancestors). The Connection Strings page is available from the ASP.NET section of the Features View for the web application.

Clicking on the Connection Strings option brings up the following:


Note that IIS configuration is hierarchical; the connection strings available to the Magic8Ball web application are both inherited, which means they are defined at a higher node in the tree. In this case the strings are defined in the machine web.config found at %SystemDrive%\Windows\Microsoft.NET\Framework64\v4.0.30128\Config (I’m using 64-bit Windows and .NET 4.0 RC). When installing AppFabric the default connection strings are written into the machine-level web.config. In my case, both connection strings are set up to use integrated security.

The event collection service is a Windows service and so is managed through the services administration snap-in, services.msc. To help set up integrated security from Windows through to SQL Server, I run the service under a domain account. Note that if you plan to use a machine that is not always on a domain, you need to use a local machine account.


This account needs to have login rights to the SQL Server and should be mapped to the ASMonitoringDbWriter role. In my case I’ve mapped the user to all three roles set up in the monitoring store.

There are four Jobs managed by the SQL Agent that are used to populate and manage the tables in the monitoring database. These are:

The SQL Server Agent must be running for the tables to be populated. The Import*Events jobs run every 10 seconds by default; if they are not correctly set up, your application event log soon fills up with errors and warnings (as I found). These jobs call stored procedures defined in the monitoring database (ASImportTransferEvents, ASImportWcfEvents, ASImportWFEvents) and run as the AS_MonitoringDbJobsAdmin. The AutoPurge job is scheduled to run once every minute and calls the ASAutoPurge stored procedure. These stored procedures in turn call ASInternal_* versions of themselves, and you can drill into the SQL to see exactly what they do. To housekeep the monitoring database you can use the Clear-ASMonitoringSqlDatabase command. Another option is to move the events to an archive database so that the queries feeding the dashboard remain responsive; see Set-ASMonitoringSqlDatabaseArchiveConfiguration. The archive database can then be managed as per any audit requirements you may have.

To monitor the SQL Agent jobs, you can use the Job Activity Monitor:

The Windows Event Viewer is a great help in tracking down the cause of issues, and AppFabric sets up a couple of custom logs.

To see the Debug and Analytic logs you need to set the following:

Right click on a debug or analytic log and enable it. Make sure you disable it when you are finished to prevent performance degradation due to high volume event capture.

From these logs I could determine that my IIS configuration had invalid entries, the SQL Server login was failing for the Event Collector and so on. I’ll talk more about diagnosing IIS configuration issues and the workflow persistence store in the next post…