Data over the Web

It’s been a little while since the last post, in no small part because my broadband usage exceeded the monthly quota. Dial-up speed is just painful, and it made me realize just how much I use the internet for media: music, movies, podcasts, blogs, … It was also a great reminder of just how sensitive applications are to a constrained network connection.

One of the most significant changes made to the Expert architecture with SP1 is the introduction of a query service. Prior to SP1, the architectural layering required that data transfer objects (DTOs) be used to move data from a service boundary to the client. The domain model was mapped to whatever shape was required by the client requesting the entities.

Writing the DTO and mapper classes is very repetitive and quite dull, so we automated it using the Visual DSL Toolkit for Visual Studio (since renamed to the Visual Studio Visualization and Modeling SDK). A key component in the Expert framework is our software factory, which builds code from three models: the relational model, the domain model and the view model. The view model provides a model and tooling to generate use-case-specific views of the domain model, along with the mappers required to transform from domain model to view model and back. An optimization we made when sending data back to the service to update the domain model was to send back only the changes. This required the view model to track any updates made between the time the data was fetched from a service and the time it was sent back. The mechanism we wrote to achieve that is worth a few blog entries on its own, so I’m going to skip over the details here.

One of the primary clients within this architecture is our workflow service, which allows data from the business services to be managed within a long-running workflow process before being updated back into the main line-of-business system. In the original Golden Gate release, the data associated with a workflow instance was sent out with every task within the workflow (a task is a workflow activity that requires human interaction). For very large workflow processes this can be an issue, particularly over restricted network connections such as VPN links or connections to very remote sites. For SP1 we took a look at this particular area and addressed it in the following ways:

• Tasks now have a data contract so that only the required data is sent.
• The way we fetch data is now via a dedicated query service rather than combining read and write operations in the same service contract. The query service is HTTP-based and can therefore take advantage of out-of-the-box optimizations such as caching and compression (a client-side sketch follows this list).
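
As a sketch of what a read now looks like from the client side, the WCF Data Services client turns a LINQ query into an OData HTTP GET, so only the entities asked for travel over the wire. The service URI, entity set and property names below are purely illustrative, not our actual contracts; in practice these classes come out of the software factory or the metadata-driven client generators.

using System;
using System.Data.Services.Client;
using System.Data.Services.Common;
using System.Linq;

// Illustrative client-side type; normally generated from the service metadata
// rather than written by hand.
[DataServiceKey("TaskId")]
public class TaskData
{
    public Guid TaskId { get; set; }
    public string Description { get; set; }
}

class QueryServiceClientSketch
{
    static void Main()
    {
        // Placeholder address for the query service.
        var context = new DataServiceContext(new Uri("http://server/Expert/QueryService.svc"));

        // The LINQ expression is translated into an OData URI such as
        // /QueryService.svc/TaskData()?$filter=Description ne null
        // and executed as a plain HTTP GET.
        var query = (DataServiceQuery<TaskData>)context.CreateQuery<TaskData>("TaskData")
                        .Where(t => t.Description != null);

        Console.WriteLine(query.RequestUri);   // the URI that will be requested
        foreach (var task in query)            // enumerating the query issues the GET
        {
            Console.WriteLine(task.Description);
        }
    }
}

Because this is plain HTTP, everything between the client and the service (compression, proxies, caches) can work on the response without the application code knowing about it.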

This separation of query from command at the architectural level is currently being explored by a number of people, most vocally Greg Young and Udi Dahan. The architectural pattern is Command Query Responsibility Segregation (CQRS), and it is similar in spirit to the Command-Query Separation (CQS) concept first discussed by Bertrand Meyer in Object-Oriented Software Construction back in 1988. This is another topic worthy of its own blog posts, and InfoQ has a great presentation from Greg Young.
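
To make the distinction concrete, here is a minimal sketch of what the split looks like in C#. The interface and type names are illustrative only, not our actual contracts: reads live on a side-effect-free query contract, while state changes live on a separate command contract.

using System;
using System.Collections.Generic;

// Illustrative only: queries return data and change nothing...
public interface ITaskQueries
{
    IEnumerable<TaskSummary> GetOpenTasks(Guid workflowInstanceId);
}

// ...while commands change state and return nothing beyond success or failure.
public interface ITaskCommands
{
    void CompleteTask(Guid taskId, string comment);
}

public class TaskSummary
{
    public Guid TaskId { get; set; }
    public string Title { get; set; }
}

Once the two sides are separated like this, the query side can be hosted, scaled and optimized independently of the command side, which is exactly what the HTTP query service gives us.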

Our query service implementation is a WCF Data Service which takes an Entity Framework 4 model and exposes it as a RESTful service.
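
The skeleton of such a service is small. The sketch below assumes an EF4 ObjectContext called ExpertEntities (a placeholder name; the real context and service are generated by the software factory) and exposes it read-only, since all writes go back through the existing services.

using System.Data.Services;
using System.Data.Services.Common;

// ExpertEntities stands in for the generated EF4 ObjectContext.
public class QueryService : DataService<ExpertEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Read-only access: updates flow back through the command side.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

Because the access rule grants AllRead only, any attempt to POST, PUT or DELETE against the feed is rejected, keeping the query service strictly on the read side of the command/query split.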


All data required by a client is fetched via the query service and delivered over an HTTP channel. Hosting it in IIS and using HTTP gives us the following:

• monitoring via AppFabric
• compression via the dynamic compression support in IIS7 (a configuration sketch follows this list)
• caching using standard HTTP-based caches
• a cross-platform-capable data feed
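
The compression and caching side of this is driven from IIS configuration rather than application code. The fragment below is only a sketch, assuming IIS7 with the dynamic compression module installed and that a short public cache lifetime is acceptable for the feed; the exact settings would be tuned per deployment.

<system.webServer>
  <!-- gzip/deflate responses, including the dynamically generated OData output -->
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  <httpProtocol>
    <customHeaders>
      <!-- let standard HTTP caches hold responses for a short time -->
      <add name="Cache-Control" value="public, max-age=60" />
    </customHeaders>
  </httpProtocol>
</system.webServer>

Because the service is just HTTP, any intermediary that understands the Cache-Control header can serve repeat requests without a round trip back to the application server.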

The lifecycle for data is now: read it via the query service over HTTP, work on it within the client or workflow instance, then send only the changes back through the existing write services to be persisted into the line-of-business system.

There are a number of interesting aspects to this, not least of which is that we now use two different ORM technologies: NHibernate and Entity Framework. This is based on both historical use and feature set. Given the data model that we map to, we need the rich extension points available in NHibernate to support the desired object model. The WCF Data Services and EF4 features in .NET 4 / Visual Studio 2010 take most of the heavy lifting out of exposing a domain model via REST. Microsoft is now promoting the Open Data Protocol (OData), built on top of HTTP/Atom/JSON, as a cross-platform mechanism for interoperating with data over the web; an ODBC for the web, perhaps. The Mix10 keynote included a segment on OData and how Microsoft is tooling it.

Given that the software factory already contains a domain model which includes the persistence mapping, we just needed to add T4 transforms to the factory so that we could generate the query service and view model implementation from the existing domain model. Along the way we also simplified the change-tracking approach, as the explicit task data contracts reduced the potential for merge conflicts.

Now that we have a query service, and Microsoft is doing all it can to promote OData as a cross-platform solution, a number of interesting options are opening up. One of these is the iPhone/iPad platform from Apple. As part of the OData SDK, Microsoft has released a library and tooling to make consuming OData feeds from the Apple platform straightforward. This includes a tool that generates Objective-C classes from the metadata available from an OData stream (via the $metadata directive).
