11 December 2011

Meanwhile... on the command side of my architecture

This article describes how a single interface can make the design of your application much cleaner and more flexible than you ever thought possible.

If you find this article interesting, you should also read my follow-up: Meanwhile... on the query side of my architecture.

Ever since I started writing applications in .NET, I've been separating operations that mutate state (mostly of the database) from operations that return data. Basically, this is what the Command-query separation principle is all about. Over the years, the design I used has evolved. Triggered by a former colleague of mine, I started using the Command Pattern about four years ago as the design around state mutations in my business layer. We called them business commands, since a single command would represent an atomic business operation, or use case.

This model has worked well for the last couple of years and, in a sense, I'm still using it today. However, the design was based around command classes that contained both the properties to hold the data and an Execute() method that would start the operation. In recent years, the projects I participated in increased in complexity, and I started doing things like Test Driven Development and Dependency Injection (DI). I soon began to notice the flaws in this design (DI has a tendency to expose violations of the SOLID principles) and how it hindered the maintainability of these applications.

The design was always based around some abstract Command base class that contained all the logic for handling transactions, re-executing commands after a deadlock occurred, measuring performance, security checks, and so on. This base class was a big code smell, because it acted as some sort of God Object with many responsibilities. Furthermore, the design, in which data and behavior were mixed, made it very hard to fake that logic during testing, since a consumer would typically new up a command instance and call Execute() directly on it, as shown in the following example:

var command = new MoveCustomerCommand
{
    CustomerId = customerId,
    NewAddress = address
};

command.Execute();

I tried to solve this problem by injecting commands into the constructor of a consumer (constructor injection), but this was awkward to say the least. In that case, the consumer had to set all the properties of an object it got from the outside, and it still didn't really solve the problem of abstracting the command away elegantly: for every command I would have to define a fake command in the test suite, and I was still left with a big, complicated base class that was hard to extend.

This finally led to a design that I'd seen others use but never saw the benefits of. In this design, data and behavior are separated. For each business operation I now use a simple data container (a DTO) whose name ends with 'Command', such as the following:

public class MoveCustomerCommand
{
    public int CustomerId { get; set; }

    public Address NewAddress { get; set; }
}

The actual logic gets its own class, whose name ends with 'CommandHandler':

public class MoveCustomerCommandHandler
{
    private readonly UnitOfWork db;

    public MoveCustomerCommandHandler(UnitOfWork db,
        [Other dependencies here])
    {
        this.db = db;
    }

    public virtual void Handle(MoveCustomerCommand command)
    {
        // TODO: Logic here
    }
}

This design immediately wins us a lot, since a command handler can be injected into a consumer, which can simply new up a command. Because the command only contains data, there is no longer any reason to fake the command itself during testing. Here's an example of how a consumer can use the command and handler:

public class CustomerController : Controller
{
    private readonly MoveCustomerCommandHandler handler;

    public CustomerController(
        MoveCustomerCommandHandler handler)
    {
        this.handler = handler;
    }

    public void MoveCustomer(int customerId,
        Address newAddress)
    {
        var command = new MoveCustomerCommand
        {
            CustomerId = customerId,
            NewAddress = newAddress
        };

        this.handler.Handle(command);
    }
}

There is, however, still a problem with this design. Although every handler class has a single (public) method (and therefore adheres to the Interface Segregation Principle), each handler defines its own interface (there is no common interface), which makes it hard to extend the handlers with new features and to add cross-cutting concerns. Say, for instance, that we would like to measure the time it takes to execute every command and log it to the database. How would we do this? It would either involve changing each and every command handler, or moving that logic to a base class. Moving it to a base class is not ideal, because in no time the base class would contain lots and lots of such features and grow out of control (which I've seen happen). Besides, it would make it hard to enable or disable such behavior for certain types (or instances) of command handlers, since that would involve adding conditional logic to the base class, making it even more complicated.

All these problems can be solved elegantly by letting all handlers inherit from a single generic interface:

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

Using this interface, the MoveCustomerCommandHandler would now look like this:

// Exactly the same as before, but now with the interface.
public class MoveCustomerCommandHandler
    : ICommandHandler<MoveCustomerCommand>
{
    private readonly UnitOfWork db;

    public MoveCustomerCommandHandler(UnitOfWork db,
        [Other dependencies here])
    {
        this.db = db;
    }

    public void Handle(MoveCustomerCommand command)
    {
        // TODO: Logic here
    }
}

An important benefit of this interface is that it allows consumers to depend on that single abstraction instead of on a concrete command handler implementation:

// Again, same implementation as before, but now we depend
// upon the ICommandHandler abstraction.
public class CustomerController : Controller
{
    private readonly ICommandHandler<MoveCustomerCommand> handler;

    public CustomerController(
        ICommandHandler<MoveCustomerCommand> handler)
    {
        this.handler = handler;
    }

    public void MoveCustomer(int customerId,
        Address newAddress)
    {
        var command = new MoveCustomerCommand
        {
            CustomerId = customerId,
            NewAddress = newAddress
        };

        this.handler.Handle(command);
    }
}
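Because the controller now depends only on the abstraction, faking the handler in a unit test becomes trivial. The following is a minimal sketch of such a test double; the FakeCommandHandler<TCommand> name and the simplified Address class are mine, invented for this example (the interface and command are repeated to keep the sketch self-contained):

```csharp
// ICommandHandler<TCommand> and MoveCustomerCommand repeated here
// to keep the sketch self-contained.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class Address
{
    public string City { get; set; }
}

public class MoveCustomerCommand
{
    public int CustomerId { get; set; }
    public Address NewAddress { get; set; }
}

// Hypothetical test double: records the last command it received,
// so a test can inspect it.
public class FakeCommandHandler<TCommand> : ICommandHandler<TCommand>
{
    public TCommand HandledCommand { get; private set; }

    public void Handle(TCommand command)
    {
        this.HandledCommand = command;
    }
}
```

A test would inject this fake into the CustomerController, call MoveCustomer, and assert that HandledCommand contains the expected CustomerId and NewAddress; no real handler (and no database) is involved.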

How does adding an interface help us in this case? Well, frankly, a lot! Since nobody depends directly on the implementation, we can now replace those handlers with something else (as long as it implements that interface). Setting aside the usual argument of testability, look for instance at this generic class:

public class TransactionCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;

    public TransactionCommandHandlerDecorator(
        ICommandHandler<TCommand> decorated)
    {
        this.decorated = decorated;
    }

    public void Handle(TCommand command)
    {
        using (var scope = new TransactionScope())
        {
            this.decorated.Handle(command);

            scope.Complete();
        }
    }
}

This class wraps an ICommandHandler<TCommand> instance, but also implements ICommandHandler<TCommand>. It is an implementation of the Decorator pattern. This very simple class allows us to add transaction support to all command handlers. For instance, instead of injecting a MoveCustomerCommandHandler directly into the CustomerController, we can inject the following:

var handler =
    new TransactionCommandHandlerDecorator<MoveCustomerCommand>(
        new MoveCustomerCommandHandler(
            new EntityFrameworkUnitOfWork(connectionString),
            // Inject other dependencies for the handler here
        )
    );

// Inject the handler into the controller's constructor.
var controller = new CustomerController(handler);

The nice thing about this is that this single decorator class (containing just 5 lines of code) can be reused for all command handlers in the system. The downside, of course, is that it takes a lot of boilerplate code to wire this up for every class that depends on a command handler, but at least the rest of the application is oblivious to this change. In reality, when you have more than a couple of consumers of such handlers, you should start using a Dependency Injection framework, since such a framework can automate this wiring for you and helps keep even this part of your application maintainable.

If you're still not convinced, let's define another decorator:

public class DeadlockRetryCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;

    public DeadlockRetryCommandHandlerDecorator(
        ICommandHandler<TCommand> decorated)
    {
        this.decorated = decorated;
    }

    public void Handle(TCommand command)
    {
        this.HandleWithCountDown(command, 5);
    }

    private void HandleWithCountDown(TCommand command,
        int count)
    {
        try
        {
            this.decorated.Handle(command);
        }
        catch (Exception ex)
        {
            if (count <= 0 || !IsDeadlockException(ex))
                throw;

            Thread.Sleep(300);

            this.HandleWithCountDown(command, count - 1);
        }
    }

    private static bool IsDeadlockException(Exception ex)
    {
        while (ex != null)
        {
            if (ex is DbException &&
                ex.Message.Contains("deadlock"))
            {
                return true;
            }

            ex = ex.InnerException;
        }

        return false;
    }
}

I think this class speaks for itself. Although it contains more code than the previous one, it's still only about 14 lines. It retries the execution of a command up to 5 times when a database deadlock occurs, before letting the thrown exception bubble up. Again, we can use this class by wrapping the previous decorator, as follows:

var handler =
    new DeadlockRetryCommandHandlerDecorator<MoveCustomerCommand>(
        new TransactionCommandHandlerDecorator<MoveCustomerCommand>(
            new MoveCustomerCommandHandler(
                new EntityFrameworkUnitOfWork(connectionString),
                // Inject other dependencies for the handler here
            )
        )
    );

var controller = new CustomerController(handler);

By the way, did you notice how focused both decorators are? They each have just a single responsibility. This makes them easy to understand and easy to change, which is exactly what the Single Responsibility Principle is about.

Of course, the correctness of the system now depends on the correct wiring of these dependencies: wrapping the deadlock retry behavior INSIDE the transaction behavior would lead to unexpected results, since a database deadlock typically causes the database to roll back the transaction while leaving the connection open. But this is solely a problem for the part of the application that wires everything together. Again, the rest of the application is oblivious.

Both the transaction logic and the deadlock retry logic are examples of cross-cutting concerns. Using decorators to add cross-cutting concerns is the cleanest and most effective way I know of; it is a form of Aspect Oriented Programming. Besides these two examples, there are many other cross-cutting concerns that can be added fairly easily using decorators. Think about:

  • checking the authorization of the current user before commands get executed,
  • validating commands before commands get executed, 
  • measuring the duration of executing commands, 
  • logging and audit trailing,
  • executing commands in the background, or
  • queueing commands to be processed in a different process.
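To make the validation bullet concrete, here is a sketch of what such a decorator could look like, built on the DataAnnotations Validator. The ValidationCommandHandlerDecorator name, the sample CreateCustomerCommand, and the NullCommandHandler are all invented for this example; the interface is repeated so the sketch stands on its own:

```csharp
using System.ComponentModel.DataAnnotations;

// Repeated here to keep the sketch self-contained.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// Hypothetical decorator that validates any command before execution.
public class ValidationCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;

    public ValidationCommandHandlerDecorator(
        ICommandHandler<TCommand> decorated)
    {
        this.decorated = decorated;
    }

    public void Handle(TCommand command)
    {
        // Throws a ValidationException when any DataAnnotations
        // attribute on the command's properties is violated.
        Validator.ValidateObject(command,
            new ValidationContext(command),
            validateAllProperties: true);

        this.decorated.Handle(command);
    }
}

// Sample command, annotated with a validation attribute.
public class CreateCustomerCommand
{
    [Required]
    public string Name { get; set; }
}

// No-op handler, useful for demonstrating the decorator in isolation.
public class NullCommandHandler<TCommand> : ICommandHandler<TCommand>
{
    public void Handle(TCommand command) { }
}
```

Like the other decorators, this one can be registered once for all ICommandHandler<TCommand> implementations, so an invalid command never reaches the real handler.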

<BackgroundStory>This last one is a very interesting one. Years back, I worked on an application that queued commands in the database. We wrote business processes (commands themselves) that sometimes queued dozens of other (sub)commands, which allowed them to be processed in parallel by different processes (multiple Windows services on different machines). These commands did things like sending mails, or heavy work such as generating PDF documents (which would be merged later by another command, and sent to a printer by yet another command). The queue was transactional, which allowed us to, in a sense, send mails and upload files to FTP in a transactional manner. However, we didn't use dependency injection back then, which made everything so much harder (if only we had known).</BackgroundStory>

Because commands are simple data containers without behavior, it is very easy to serialize them (using the XmlSerializer, for instance) or to send them over the wire (using WCF, for instance). This makes it not only easy to queue them for later processing, but also very easy to log them in an audit trail: yet another reason to separate data and behavior. All these features can be added without changing a single line of code in the application (except perhaps a few lines in its start-up path).
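As a small sketch of that idea, the snippet below round-trips a command through the XmlSerializer. The command is a simplified stand-in (the address is flattened to a string, and the CommandSerialization helper is invented for this example) to keep things short:

```csharp
using System.IO;
using System.Xml.Serialization;

// Simplified stand-in for a command. XmlSerializer needs public
// properties and a public parameterless constructor, which command
// DTOs naturally have.
public class MoveCustomerCommand
{
    public int CustomerId { get; set; }
    public string NewAddressCity { get; set; }
}

public static class CommandSerialization
{
    // Serializes a command to XML, e.g. to queue or audit it.
    public static string ToXml(MoveCustomerCommand command)
    {
        var serializer = new XmlSerializer(typeof(MoveCustomerCommand));
        var writer = new StringWriter();
        serializer.Serialize(writer, command);
        return writer.ToString();
    }

    // Deserializes the command again, possibly in another process.
    public static MoveCustomerCommand FromXml(string xml)
    {
        var serializer = new XmlSerializer(typeof(MoveCustomerCommand));
        return (MoveCustomerCommand)serializer.Deserialize(
            new StringReader(xml));
    }
}
```

A queueing decorator could call ToXml and store the result in a database table, while a background process later calls FromXml and dispatches the command to its real handler.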

Yet another reason to design your application this way is that it makes maintaining web services so much easier. Your (WCF) web service can consist of just one 'handle' method that takes any arbitrary command (only commands that you explicitly expose, of course) and executes it (after doing the usual authentication, authorization, and validation). Since you will be defining commands and their handlers anyway, your web service project won't have to change. If you're interested, take a look at my article about Writing Highly Maintainable WCF Services.

And it was just one simple ICommandHandler<TCommand> interface that made all of this possible. While it may seem complex at first, once you get the hang of it (together with dependency injection), the possibilities are endless. Even when you think you don't need all of this, the design allows you to make many unforeseen changes to the system without much trouble. Still, we can hardly argue that a system with this design is over-engineered, since we just put every operation in its own class and put a generic interface on top. It's hard to over-engineer that, because even really small systems benefit from separating concerns.

Still, this doesn't mean things can't get complicated. Wiring all those dependencies correctly, and applying the decorators in the right order, can sometimes be challenging. But this complexity is concentrated in a single part of the application (the start-up path, a.k.a. the Composition Root), which leaves the rest of the application unaware of and unaffected by it. You hardly ever have to change the rest of the application for this, which is what the Open/Closed Principle is all about.

By the way, you probably think the way I created all those decorators around a single command handler is rather awkward, and you can imagine the big ball of mud this would become once there are a few dozen command handlers. You would probably be right that this doesn't scale well. But as I said, this is where DI containers come in. For instance, when using the Simple Injector, registering all command handlers in the system can be done with a single line of code. After that, registering a decorator is also a single line. Here is an example of the configuration when using the Simple Injector:

using SimpleInjector;
using SimpleInjector.Extensions;

var container = new Container();

// Go look in all assemblies and register all implementations
// of ICommandHandler<T> by their closed interface:
container.RegisterManyForOpenGeneric(
    typeof(ICommandHandler<>),
    AppDomain.CurrentDomain.GetAssemblies());

// Decorate each returned ICommandHandler<T> object with
// a TransactionCommandHandlerDecorator<T>.
container.RegisterDecorator(typeof(ICommandHandler<>),
    typeof(TransactionCommandHandlerDecorator<>));

// Decorate each returned ICommandHandler<T> object with
// a DeadlockRetryCommandHandlerDecorator<T>.
container.RegisterDecorator(typeof(ICommandHandler<>),
    typeof(DeadlockRetryCommandHandlerDecorator<>));

No matter how many handlers you add to the system, these few lines of code won't change, which underlines the true power of what a DI container can do for you. After you have made your application maintainable by applying the SOLID principles, a good DI container will ensure that the start-up path of your application stays maintainable as well.

This is how I roll on the command side of my architecture.

If you found this article interesting, you should also read my follow-up: Meanwhile... on the query side of my architecture, which takes a look at how to return data from command handlers. In Writing Highly Maintainable WCF Services I talk about sending commands over the wire.

.NET General, Architecture, C#, Dependency injection


nine comments:

SOLID article! thanks! :)
Evaldas Dauksevičius - 23 12 11 - 12:24

I had been playing with a few similar concepts, reading your article really helped me get to grip on what it was I was trying to achieve, a great help thanks!
Ian - 27 08 12 - 12:15

Great article !
turbot - 09 11 12 - 15:16

Nice article, thanks! Interesting ideas and well explained.

I have a question about how you handle cases (or how you manage not to have them) where a consumer is interested in the result of the command handling? For example, when the command is entity creation and the consumer is interested in the entity id that is generated during command handling.
Alexey Zuev - 11 11 12 - 14:25

Hi Alexey,

Returning data from command handlers is something I explain in one of my later articles: www.cuttingedge.it/blogs/steven/p..
Steven (URL) - 11 11 12 - 15:09

The AutoFac registration part equivalent is:

var assemblies = AppDomain.CurrentDomain.GetAssemblies();

builder.RegisterAssemblyTypes(assemblies)
    .As(t => t.GetInterfaces()
    .Where(a => a.IsClosedTypeOf(typeof(ICommandHandler<>)))
    .Select(a => new KeyedService("commandHandler", a)));

builder.RegisterGenericDecorator(
    typeof(TransactionCommandHandlerDecorator<>),
    typeof(ICommandHandler<>),
    fromKey: "commandHandler");
Graeme - 23 11 12 - 17:10

Great series of articles.

Should there only ever be one command handler for a command? My assumption is yes and that the handler can raise domain events for further participation. This makes sense if you consider a command & handler as corresponding to use cases.
Rick - 28 11 12 - 21:55

If you have more than one command handler per command, there is something wrong. A command handler is the implementation of a use case and it should be atomic, so it makes little sense to split it up into multiple handlers. For event handlers, on the other hand, it would be very likely to have multiple.
Steven - 29 11 12 - 06:15

In larger applications, how do you organize your commands & command handlers? Commands in some "Contract" project, and handlers in another? Both having nested folders & namespaces separating the classes by functional groups?

I'm toying with the idea of defining the command and the handler in the same file, but it feels wrong. I like it because I can hit F12 on the command, and immediately see the handler, which is useful when working in the code that is firing off commands.
Rick - 05 12 12 - 18:03

