Remote Powershell & calling Powershell cmdlets from C#

Powershell rocks. Why it’s taken the Windows platform so long to get a decent shell I can’t answer, but with Powershell V2 it has arrived. For me, the single most useful feature of Powershell is its remoting capability.

A while back I was looking at the rather thorny issue of managing the deployment of an ADERANT Expert solution. Being a service-oriented architecture, there are a significant number of services to deploy, most likely across a number of servers. Feedback from our consultants who were trying to install all the moving parts was that it was too hard to diagnose what had gone wrong, and often the problem was a missing prerequisite. Examples were:
* a component was not configured correctly, e.g. MSDTC
* a required service was not running, e.g. message queueing
* a component was missing, e.g. the .NET 3.5 framework

We needed a way to check that the target machine we were deploying onto satisfied a set of prerequisites. At the time Powershell V1 was available but it had no remoting capabilities, so I had to look elsewhere. After a couple of days of wading through various approaches such as the remote registry and WMI, and hitting roadblocks with security, performance and compatibility across different OS versions, I changed tack. Everything I wanted to do was relatively easy if I was running on the target server, so I wrote a Windows Service that could be deployed onto the server and sent a list of prerequisites to check. While this wasn’t rocket science, it was far more work than it should have been. As it turns out, installations were also proving troublesome for the Microsoft Developer & Platform Evangelism group, who were trying to show off the features of .NET 3.5 through a sample app: DinnerNow.Net. In the end they implemented an installer that performed prerequisite checking and offered resolutions to any issues found. I drew a lot of confidence from that sample: maybe I was on to something.

A couple of years later, Powershell V2 is out, it’s part of the OS and it comes with a comprehensive remoting capability. I no longer need to jump through hoops to run commands remotely; my prerequisite-checking service can be replaced with a few lines of script. From the ISE (Integrated Scripting Environment) and the console you can open sessions to any machine that has been enabled for remoting (Enable-PSRemoting) and is therefore running the Windows Remote Management (WinRM) service. Now it is very easy to invoke commands on a remote machine:

PS> invoke-command -ComputerName RemoteMachine -ScriptBlock { powershell command }

Note: if the target machine is not part of a domain, you need to add it to the trusted hosts collection on the client machine you are remoting from.

PS> set-item -path WSMan:\localhost\Client\TrustedHosts -Value "trustedMachine" -Force
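If you are going to run a series of remote commands, it can be cheaper to hold a session open rather than pay the connection cost on every call. A minimal sketch, assuming remoting is already enabled on the target (the machine name is a placeholder):

```powershell
# Create a persistent session to the remote machine.
$session = New-PSSession -ComputerName RemoteMachine

# Reuse the session across several commands; state is preserved between calls.
Invoke-Command -Session $session -ScriptBlock { Get-Service WinRM }
Invoke-Command -Session $session -ScriptBlock { Get-ChildItem C:\ }

# Enter-PSSession -Session $session   # or drop into an interactive remote prompt

# Clean up when finished.
Remove-PSSession $session
```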

A common forum post on MSMQ is ‘How do I create private queues on a remote computer?’. Well, here’s how:

invoke-command -ComputerName RemoteServerName -ScriptBlock {
    [Reflection.Assembly]::LoadWithPartialName("System.Messaging")
    [System.Messaging.MessageQueue]::Create(".\Private$\MyNewQueue")
}

Powershell is capable of calling .NET classes directly; above I’m using the System.Messaging assembly to create a new MSMQ queue. The code inside the script block runs on the target machine, so it is written as if running locally.
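Going back to the prerequisite-checking problem that started all this, the same pattern covers most of the checks listed at the top of this post. A sketch, with placeholder machine and service names (the .NET 3.5 registry key is the standard detection location, but verify it for your OS):

```powershell
Invoke-Command -ComputerName RemoteMachine -ScriptBlock {
    # Check a required service, e.g. message queuing.
    $msmq = Get-Service -Name MSMQ -ErrorAction SilentlyContinue
    if ($msmq -eq $null -or $msmq.Status -ne 'Running') {
        Write-Output 'Message queuing is not running'
    }

    # Check for a missing component, e.g. the .NET 3.5 framework.
    if (-not (Test-Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5')) {
        Write-Output '.NET 3.5 is not installed'
    }
}
```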

The administration API for AppFabric is a Powershell API, therefore it is a simple task to remotely configure AppFabric. We have extended our deployment engine to support automated AppFabric deployment using Powershell.
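Because the AppFabric administration API is just Powershell, it composes naturally with remoting. A sketch, assuming AppFabric is installed on the target server (the server name is a placeholder):

```powershell
Invoke-Command -ComputerName RemoteServerName -ScriptBlock {
    # Load the AppFabric administration cmdlets on the remote machine.
    Import-Module ApplicationServer

    # e.g. query the hosted service instances on that server.
    Get-ASAppServiceInstance
}
```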

Calling Powershell from C#
At ADERANT, we have designed and built a declarative deployment engine that we use to install all of our server-side components. One of the techniques we use is to call Powershell cmdlets from C#. The basic pattern for doing this is as follows:

using System.Collections.ObjectModel;
using System.Management.Automation;
using System.Management.Automation.Runspaces;
using System.Text;

Collection<PSObject> results;
string script = "get-help"; // whatever you want to execute

using (Runspace runspace = RunspaceFactory.CreateRunspace()) {
    runspace.Open();
    using (Pipeline pipeline = runspace.CreatePipeline()) {
        pipeline.Commands.AddScript(script);
        results = pipeline.Invoke();
    }
    // disposing the runspace via the using block also closes it
}

// convert the script results into a single string
StringBuilder stringBuilder = new StringBuilder();
foreach (PSObject obj in results) {
    stringBuilder.AppendLine(obj.ToString());
}

return stringBuilder.ToString();

The Runspace represents a session and a Pipeline is a sequence of commands to execute. Invoking the pipeline executes the commands, and the results are returned as a collection of Powershell objects. In the example above we simply map the results to a string.

Combine the two techniques together and you can build cmdlets in C# and execute them via Powershell on any machine with Powershell remoting enabled.

Hunting Zombies (orphaned IIS Web Applications)

Following on from the previous post, it’s time to look at one of the more sensitive areas of AppFabric… the IIS configuration.

When you run many of the AppFabric configuration commands via Powershell or the IIS Manager, the result is a change to a web.config file. IIS configuration is hierarchical, with settings inherited from parent nodes, as we saw with connection strings. The implication is that when determining the correct settings for a web application, a series of configuration files is parsed. An error in any one of these configuration files can lead to a broken system. The event logs mentioned in the previous post are a good place to look for these errors; the offending configuration files will often be named in the log entry.

[Update: AppFabric has a one-time inheritance model for its configuration: if you choose to provide a configuration setting at a node then this overrides the configuration set at a parent node. The scope/granularity of this is all AppFabric config. Microsoft tried to provide a merged inheritance model but it is a non-trivial problem and did not make v1.]

A common issue on a development workstation is configuration getting left behind due to poor housekeeping. For example, you map a folder into IIS as a web application, and this folder contains subfolders which in turn are also mapped as web applications. If you remove the parent web application without first removing the child applications, then the child configuration remains. It cannot be seen via IIS Manager, as there is no way to reach it, but you can easily see it through Powershell. One of the many awesome features in Powershell is the provider model, which allows any hierarchical system to be navigated in a consistent way. The canonical example is the file system: we are all used to cd, dir, etc. to navigate around. These same commands (which are actually aliases for standard verb-noun Powershell commands) can be used to navigate other hierarchies, for example IIS.

From a Powershell console running with elevated status (run as Admin), you can do the following:

First you need to add the IIS Management module to the session:

PS> import-module WebAdministration

You can then navigate the IIS structure by changing the ‘drive’ to IIS:

> IIS:
> ls

Both the dir and ls commands are mapped to the get-childitem command via aliases, providing a standard Windows console or UNIX console experience. Listing the children at the root level gives us access to the application pools, web sites and SSL bindings. Following through the example above, we navigate to the default web site and then list all of its children. In my case this maps exactly to what is shown in IIS:
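The navigation described above looks something like this as a transcript (the site contents will of course differ per machine):

```powershell
# Load the IIS provider and switch to the IIS: drive.
Import-Module WebAdministration
cd IIS:

# Application pools, sites and SSL bindings live at the root.
ls

# Drill into the default web site and list its children.
cd '.\Sites\Default Web Site'
ls
```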

Hunting Zombies
So, let’s make some zombies…

I created a new folder C:\ZombieParent and added two subfolders, ZombieChild1 and ZombieChild2. I then mapped the parent folder to a web application called Zombies and also converted the two subfolders to web applications. Re-running the get-childitem command now shows:

You can see the three web applications at the end of the list, in IIS Manager we have:

Let’s now remove the parent Zombies web application:


In IIS Manager we no longer see the ZombieChild1 or ZombieChild2 web applications that we can still see via Powershell.

This can be the source of many weird and wonderful errors when working with AppFabric, as it tries to parse configuration for zombie web applications. If you are getting strange behavior it is well worth launching a Powershell console and going on a zombie hunt. The web applications left behind can be removed via the console:
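A sketch of the clean-up, using the orphaned applications from my walkthrough above (adjust the site and application names for your own machine):

```powershell
Import-Module WebAdministration

# List everything under the site; the orphans show up here even though
# IIS Manager can no longer reach them.
Get-ChildItem 'IIS:\Sites\Default Web Site'

# Remove the orphaned child applications.
Remove-WebApplication -Site 'Default Web Site' -Name 'ZombieChild1'
Remove-WebApplication -Site 'Default Web Site' -Name 'ZombieChild2'
```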


Powershell can be a sensitive soul…

I’ll mention another gotcha that tripped me up… case sensitivity. IIS allows you to promote a physical path to a virtual directory, and a virtual directory to a web application. E.g.

> cd \inetpub\wwwroot\
> mkdir test
> IIS:
> cd '\Sites\Default Web Site'
> dir

directory test C:\inetpub\wwwroot\test

> new-webvirtualdirectory test -physicalpath 'c:\inetpub\wwwroot\test'
> dir

virtualDirectory test C:\inetpub\wwwroot\test

> remove-webvirtualdirectory test
> dir

directory test C:\inetpub\wwwroot\test

However if the case of the directory/virtual directory/web application does not match exactly then you get the following behavior:

> import-module WebAdministration
> cd \inetpub\wwwroot\
> mkdir test
> IIS:
> cd '\Sites\Default Web Site'
> dir

directory test C:\inetpub\wwwroot\test

> new-webvirtualdirectory Test -physicalpath 'c:\inetpub\wwwroot\test'
> dir

directory test C:\inetpub\wwwroot\test
virtualDirectory Test C:\inetpub\wwwroot\test

> remove-webvirtualdirectory Test
> dir

directory test C:\inetpub\wwwroot\test
virtualDirectory Test C:\inetpub\wwwroot\test

Here we created a new physical directory under the wwwroot folder and then mapped a virtual directory to this location, but used a name of Test rather than test. When we run get-childitem we see two entries: ‘test’ for the physical path and ‘Test’ for the virtual directory. When we then remove the virtual directory it is not deleted, and no error is reported.

This caused a heap of confusion for me when automating our deployments, so beware of case! This has been raised with Microsoft as an issue. I found that the ConvertTo-WebApplication cmdlet worked for my needs without the case issues.
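For reference, that workaround looks something like this, using the paths from the example above:

```powershell
Import-Module WebAdministration

# Promote the physical folder directly to a web application,
# side-stepping the case-sensitive virtual directory step.
ConvertTo-WebApplication 'IIS:\Sites\Default Web Site\test'
```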

How to diagnose errors in AppFabric monitoring configuration

It wasn’t the best Friday, my external hard drive died taking my work iTunes library with it and I wasn’t having much fun with AppFabric either. The dashboard showed no data and the Windows application event log kept filling up with login errors. Looking back, the afternoon was useful since I learned that little bit more about AppFabric though I didn’t get any ‘real’ work done.

I started off reading this: http://social.technet.microsoft.com/wiki/contents/articles/appfabric-items-to-check-when-configuring-appfabric-monitoring.aspx before getting stuck in.

AppFabric has two data stores: a monitoring store and a workflow persistence store. These stores are paired with two Windows services, an event collection service paired with the monitoring store and a workflow management service paired with the workflow persistence store.

Let’s start with the event collection service and monitoring store. This service is responsible for capturing the WF and WCF events emitted by services hosted in IIS/WAS and storing them in the monitoring store. These events are used to populate the dashboard that is integrated into IIS Manager. To enable capture of events you can use the ‘Manage WF and WCF Services | Configure…’ option in the web application context menu, or the Powershell commands Set-ASAppMonitoring and Start-ASAppMonitoring. For help on these commands call get-help, e.g. ‘get-help Set-ASAppMonitoring’, from a Powershell command line.

When you set up monitoring you need to provide a connection string name and set the monitoring level. As a minimum, the level needs to be set to Health Monitoring to populate the AppFabric dashboard. Below this are the levels Off and Errors Only, which are self-explanatory. Above this level are End-to-End Monitoring and Troubleshooting, both of which capture additional information. End-to-End Monitoring adds a header into WCF traffic to allow a logical call sequence to be followed: when a WCF service calls another WCF service the header is flowed across the call, providing a correlation token to query by. Note that the capture levels are cumulative; each level includes all of the events from the levels below it. The higher the setting, the greater the impact on the performance of the system, as more resources are required to capture and log the monitored events. For day-to-day operations health monitoring is recommended, with the more verbose options used when required to aid troubleshooting. The connection string is a named connection string value, set as a property of the web application (or one of its ancestors). The connection strings page is available from the ASP.NET section of the Features View for the web application.
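As a sketch, enabling health monitoring for a service from Powershell looks roughly like this. The site name, virtual path and connection string name below are placeholders from my environment; check ‘get-help Set-ASAppMonitoring’ for the exact parameter set:

```powershell
Import-Module ApplicationServer

# Turn on health-level monitoring for a hosted service (placeholder names).
Set-ASAppMonitoring -SiteName 'Default Web Site' `
                    -VirtualPath '/Magic8Ball' `
                    -MonitoringLevel HealthMonitoring `
                    -ConnectionStringName 'ApplicationServerMonitoringConnectionString'
```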

Clicking on the Connection Strings option brings up the following:


Note that IIS configuration is hierarchical: the connection strings available to the Magic8Ball web application are both inherited, which means they are defined at a higher node in the tree. In this case the strings are defined in the machine-level web.config found at %SystemDrive%\Windows\Microsoft.NET\Framework64\v4.0.30128\Config (I’m using 64-bit Windows and .NET 4.0 RC). When installing AppFabric, the default connection strings are written into the machine-level web.config. In my case, both connection strings are set up to use integrated security.

The event collection service is a Windows Service and so is managed through the services administration snap-in, services.msc. To help set up integrated security from Windows through to SQL Server, I run the service under a domain account. Note that if you plan to use a machine that is not always on a domain, you need to use a local machine account.


This account needs to have login rights to the SQL Server and should be mapped to the ASMonitoringDbWriter role. In my case I’ve mapped the user to all three roles set up in the monitoring store.

There are four jobs managed by the SQL Server Agent that are used to populate and manage the tables in the monitoring database. These are:

The SQL Server Agent must be running for the tables to be populated. The Import*Events jobs run every 10 seconds by default; if they are not correctly set up, your application event log soon fills up with errors and warnings (as I found). These jobs call stored procedures defined in the monitoring database: ASImportTransferEvents, ASImportWcfEvents and ASImportWFEvents, and run as the AS_MonitoringDbJobsAdmin. The AutoPurge job is scheduled to run once every minute and calls the ASAutoPurge stored procedure. These stored procedures in turn call ASInternal_* versions of themselves, and you can drill into the SQL to see exactly what they do. To housekeep the monitoring database you can use the Clear-ASMonitoringSqlDatabase command. Another option is to move the events to an archive database so that the queries feeding the dashboard remain responsive; see Set-ASMonitoringSqlDatabaseArchiveConfiguration. The archive database can then be managed as per any audit requirements you may have.
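The housekeeping cmdlets compose like any other Powershell commands. A sketch, with placeholder server and database names; I’d check the built-in help for the exact parameters before running this against a live store:

```powershell
Import-Module ApplicationServer

# See what the housekeeping cmdlets can do before touching a live store.
Get-Help Clear-ASMonitoringSqlDatabase -Detailed
Get-Help Set-ASMonitoringSqlDatabaseArchiveConfiguration -Detailed

# Purge the monitoring store (placeholder server and database names).
Clear-ASMonitoringSqlDatabase -Server 'MySqlServer' -Database 'AppFabricMonitoring'
```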

To monitor the SQL Agent jobs, you can use the Job Activity Monitor:

The Windows Event Viewer is a great help in tracking down the cause of issues, and AppFabric sets up a couple of custom logs.

To see the Debug and Analytic logs you need to set the following:

Right-click on a debug or analytic log to enable it. Make sure you disable it when you are finished, to prevent performance degradation due to high-volume event capture.

From these logs I could determine that my IIS configuration had invalid entries, the SQL Server login was failing for the Event Collector and so on. I’ll talk more about diagnosing IIS configuration issues and the workflow persistence store in the next post…

Planning a TechEd session.

For the last 18 months, I’ve had the privilege of contributing to the Microsoft TAP programs for Visual Studio 2010 & .NET 4 and AppFabric (previously code-named Dublin). These TAP programs are coming to a close, with Visual Studio 2010 and .NET 4 now shipping, and AppFabric shipping in H1 2010.

As part of the TAP engagement, I’ll be in New Orleans in June to present an interactive session discussing Windows Workflow Foundation 4, Windows Communication Foundation 4 and AppFabric (in particular the ‘Dublin’ components). The goal of this blog is to capture the thought process that goes into preparing for the session and to provide a detailed reference for the attendees. I hope the usefulness of the content extends beyond this primary audience and that the .NET community as a whole can find something of interest.

The first, and possibly most difficult, question is: what shall I talk about? What do I think people will find interesting? The real challenge is condensing 12 months of hands-on experience into a single hour of relevant and approachable material. I’ve attended a number of TechEd conferences in the past, in Europe and New Zealand, so I have an idea of the sessions I found interesting and I’ll start there. I’ll post a link to this blog in the session description so that you can tell me what you’d like to see covered; just email me: stefan.sewell at aderant.com.

Right now I have the following high level breakdown for the talk:

1. Setting the Context

The basics: I work for ADERANT as a software architect. ADERANT is an Independent Software Vendor (ISV) producing enterprise solutions for the legal and professional services market. At the very core of the software is the ability to track the work completed and expenses incurred for a project (aka matter); this is billed out to the client the work was performed for. From this grossly simplified view of the world, we can then add in client management, resource planning, budgeting, time capture, expense management, eBilling, profitability projection and much more. We basically provide software to run a law firm, and some of the world’s largest law firms are running on ADERANT Expert. In a subsequent post I’ll expand on the challenges we face writing software for global companies.

2. Overview of our Software and Approach

Having covered off what we do, I’ll next talk about how we do it from a 10,000 ft view. ADERANT Expert is a suite of products built on the Microsoft stack. Its origins go back over 30 years, but in the last 10 years we’ve undertaken a major architectural overhaul, moving from a client-server architecture predominantly written in unmanaged C++ to a service architecture built on the .NET platform. The current version of the software, Expert Golden Gate, is written on .NET 3.5, and shortly we’ll be releasing an enhanced .NET 4 based version.

In moving from .NET 3.5 to .NET 4, we made several refinements to our architecture based on two factors: firstly, lessons learned from the field, and secondly, new features shipping in .NET 4 that allowed us to replace infrastructure we had written in-house with out-of-the-box functionality from Microsoft. We are a products company selling to the legal market; we don’t want to have to develop the infrastructure to support our products, we want that from our chosen platform. I’ll compare our .NET 3.5 approach with our .NET 4.0 approach, highlighting the changes we made and why.

3. Examples

The next section is a drill-down into some examples of how we use .NET 4 and AppFabric. The first example will be our task concept, which is a human-based workflow activity. This is a non-trivial example that will show:
• Using our DesignStudio add-in for VS2010 which allows firms to create their own custom workflow processes. These processes can include human tasks such as data entry and approval.
• How a process is published as a workflow service and hosted in IIS under AppFabric management.
• The flow of service calls made as part of a task’s lifecycle; this will include a discussion on correlation of service calls to workflow instances.

Having demonstrated a running workflow instance, we can then have a look at the tracking data captured by AppFabric and review it through the dashboard.

A second worked example I’d like to cover is deployment. An SOA brings with it significant complexity around deployment and management of services. AppFabric goes some way to addressing this by providing a centralized monitoring store for WF and WCF events, as well as a Powershell administration API. At ADERANT we’ve gone a step further and created a deployment runtime and declarative deployment model. The runtime uses the AppFabric Powershell API to provide a ClickOnce-style deployment mechanism for servers, including automated deployment into an application farm. At this point I can discuss the horizontal scale-out options that AppFabric provides for Workflow Services.

With two significant examples there will be plenty of potential for discussion as this is billed as an interactive session.

4. Futures and Wrap Up

By now I should have covered what we are leveraging in .NET 4 and AppFabric today, and there are a couple of future directions to mention that we are interested in, most notably Azure & AppFabric.

So there we have my initial plan, let me know what you think…