Optimistic concurrency control using ETag

This is the scenario: you have a CRM system where the editors can change customer details. The CRM user interface is a web application which will be used by several editors. There is a chance that multiple editors will edit the same customer simultaneously.

Since the HTTP protocol is stateless, there is a risk that an editor overwrites changes made by someone else after the editor loaded the “edit customer” web page.

To solve this you can make use of an ETag containing a value representation of the customer data, preferably a changed date. By sending that value when the page is initially served to the web client, and then posting it back along with the new customer details, the two values can be compared. The comparison results in either accepting or rejecting the changed customer information.

The HTTP specification (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.24) states that if the If-Match HTTP header value does not match the current entity, the server must not perform the update and should return status code 412 (Precondition Failed). Otherwise it returns 200 (OK).

When loading the page you submit the ETag either in the header or in the body. When the customer details are sent back to the server using a PUT request you pass the ETag value in the If-Match HTTP header.
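If you choose the header route, the Web API GET action can attach the ETag itself. This is a minimal sketch of my own; the `GetCustomerFromDatabase` helper and the customer's `ChangedDate` property are assumptions chosen to match the controller code further down:

```csharp
public HttpResponseMessage Get(int id)
{
    // Hypothetical data access helper, mirroring the PUT action
    var customer = GetCustomerFromDatabase(id);

    var response = Request.CreateResponse(HttpStatusCode.OK, customer);

    // Use the ticks of the change date as an opaque ETag value (quoted, as HTTP requires)
    var etag = customer.ChangedDate.Ticks.ToString(CultureInfo.InvariantCulture);
    response.Headers.ETag = new EntityTagHeaderValue("\"" + etag + "\"");

    return response;
}
```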

If you are using an ASP.NET MVC solution with AngularJS (without going full SPA) and ASP.NET Web API, you can solve this as follows.

GET request – when loading the page with the customer information

Pass a representation of the ETag through the MVC model from the MVC controller and make it accessible from your Angular controller. I use a sort of initial data collection which will populate an AngularJS scope variable when the page is loaded.
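For example, the MVC controller could place the ETag representation in the view model that feeds that initial data collection. The controller, view model and helper names here are assumptions, not the actual sample code:

```csharp
// Hypothetical MVC controller; the view renders Etag into the initial
// data collection that populates $scope.etag when the page loads.
public class CustomerController : Controller
{
    public ActionResult Edit(int id)
    {
        var customer = GetCustomerFromDatabase(id); // assumed data access helper

        var model = new EditCustomerViewModel
        {
            Customer = customer,
            // The ticks of the last change serve as the ETag representation
            Etag = customer.ChangedDate.Ticks.ToString(CultureInfo.InvariantCulture)
        };

        return View(model);
    }
}
```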

PUT request – when passing the changed data back to the server

The data is passed from the UI through an AngularJS $http request with method PUT:

var config = {
    method: 'PUT',
    url: '/customer',
    data: { },
    headers: { 'If-Match': $scope.etag } // $scope.etag is initialized when the page loads
};

$http(config)
    .success(function (response) {
        // notify the user that the update succeeded
    })
    .error(function (data, status) {
        if (status === 412) {
            // notify the editor that the customer has already been updated by someone
            // else and that the page must be reloaded to get the new customer data
        }
    });

The receiving end is the Web API controller:

public HttpResponseMessage Put(CustomerData customerData)
{
    var customer = GetCustomerFromDatabase(customerData.Id);

    if (IsAlreadyModified(customer))
    {
        // return status code 412 if the customer has been changed during the editing
        return Request.CreateErrorResponse(HttpStatusCode.PreconditionFailed,
            "Customer has already been modified. Please reload the page and redo your changes.");
    }

    // ...update and persist the customer here...

    return Request.CreateResponse(HttpStatusCode.OK);
}

private bool IsAlreadyModified(Customer customer)
{
    // using the ticks of the change date as the ETag
    var ourEtag = customer != null
        ? customer.ChangedDate.Ticks.ToString(CultureInfo.InvariantCulture)
        : string.Empty;

    var theirEtag = Request.Headers.IfMatch.ToString();

    return !ourEtag.Equals(theirEtag, StringComparison.InvariantCultureIgnoreCase);
}
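If the precondition holds, the update can also hand a fresh ETag back in the response, so the editor can continue editing without reloading the page. A sketch under the same assumptions as above; `SaveCustomerToDatabase` is a hypothetical persistence helper:

```csharp
// Stamp a new change date, persist, and return it as the next ETag
customer.ChangedDate = DateTime.UtcNow;
SaveCustomerToDatabase(customer); // hypothetical helper

var response = Request.CreateResponse(HttpStatusCode.OK);
response.Headers.ETag = new EntityTagHeaderValue(
    "\"" + customer.ChangedDate.Ticks.ToString(CultureInfo.InvariantCulture) + "\"");
return response;
```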

Command and Query-based Entity Framework architecture PART 2

In my previous post I described a different take on an Entity Framework architecture with commands and queries. I probably confused those who know every angle of CQRS, since my commands contained more than just new state. However, the intention was not to build a CQRS solution. It is only meant to be an alternative to repositories, with the concept of “one business rule equals one command or query” in mind.

PSST! The code can be found here: https://github.com/tobiasnilsson/CommandQuerySample/.
PSST2! Sorry for the badly formatted code. I will try to find a better way to paste code from Visual Studio into WordPress…

PSST3! I refer to the different projects in the solution in the following text. Core contains the definition of the application; Infrastructure provides the implementation. See the previous post for details.

The command classes are responsible for three things:

  • validating the data,
  • handling the state, and
  • persisting the state in the database.

Like this:

public class AddUserToDepartmentsCommand : CommandBase, IAddUserToDepartmentsCommand
{
    public AddUserToDepartmentsCommand(ISampleDbContext context) : base(context)
    {
    }

    public void Add(User user, IEnumerable<Department> departments)
    {
        if (user == null)
            throw new ArgumentNullException("user");

        if (departments == null)
            throw new ArgumentNullException("departments");

        if (!departments.Any())
            throw new ArgumentException("departments");

        foreach (var department in departments)
        {
            department.Users.Add(user);
        }

        Context.SaveChanges();
    }
}

So refactoring the command concept seems to be a good idea! I will borrow some concepts from CQRS when doing this refactoring, such as command and command handler. Commands will now only hold state, i.e. become POCO classes.

 

First step: add class to hold new state

A class called NewAddUserToDepartmentsCommand is added to the Core project. It looks like this:

public class NewAddUserToDepartmentsCommand : ICommand
{
    public User User { get; set; }
    public IEnumerable<Department> Departments { get; set; }
}

Also, the empty interface ICommand is added to the Core project. This interface only acts as a marker for commands.

public interface ICommand
{
}

Second step: add command handling

A class called NewAddUserToDepartmentsCommandHandler is added to the Infrastructure project since this will be specific to the implementation of the persistence stuff (in this case an Entity Framework based persistence). The handler will act on the data and add it to the EF context.

public class NewAddUserToDepartmentsCommandHandler : ICommandHandler<NewAddUserToDepartmentsCommand>
{
    private readonly ISampleDbContext _context;

    public NewAddUserToDepartmentsCommandHandler(ISampleDbContext context)
    {
        _context = context;
    }

    public void Handle(NewAddUserToDepartmentsCommand command)
    {
        foreach (var department in command.Departments)
        {
            department.Users.Add(command.User);
        }
    }
}

 

Also, the generic interface ICommandHandler is added to the Core project. It serves two purposes: it defines the Handle method and it ties a command class to its corresponding command handler.

public interface ICommandHandler<in TCommand> where TCommand : ICommand
{
    void Handle(TCommand command);
}

 

Third step: add command persistence

A class called CommandExecutor is added to the Infrastructure project. This class receives the commands that should be handled and persisted in the database through Entity Framework.

Side note: in many cases you want to persist data in a transaction. If one of the commands fails to execute or persist data, the other commands should not persist data either. In the previous solution, each command was responsible for persistence (each command called Context.SaveChanges()). Moving the SaveChanges call out of the commands means all of them can be persisted as one unit of work.

public class CommandExecutor : ICommandExecutor
{
    private readonly ISampleDbContext _context;
    private readonly ICommandDispatcher _dispatcher;

    public CommandExecutor(ISampleDbContext context, ICommandDispatcher dispatcher)
    {
        _context = context;
        _dispatcher = dispatcher;
    }

    public void Execute(IEnumerable<ICommand> commands)
    {
        foreach (var command in commands)
        {
            var validator = _dispatcher.GetValidator(command);
            var validationResult = validator.Validate(command);

            if (!validationResult.IsValid)
                throw new CommandValidationException(validationResult.ErrorMessages);

            var handler = _dispatcher.GetHandler(command);
            handler.Handle(command);
        }

        _context.SaveChanges();
    }
}

 

However, calling SaveChanges on the context at this point would not persist anything by itself. The context injected into the CommandExecutor is a different instance than the ones injected into the command handlers, so SaveChanges in the executor acts on a different context than the one the handlers modified. StructureMap to the rescue!

Since the running application is a web application, we can specify that ONE instance of SampleDbContext should be used throughout the web request:

public static class IoC
{
    public static IContainer Initialize()
    {
        ObjectFactory.Initialize(x =>
        {
            x.Scan(scan =>
            {
                scan.AssembliesFromApplicationBaseDirectory();
                scan.WithDefaultConventions();

                // Add the assemblies that contain the handlers and validators to the scanning
                scan.AssemblyContainingType<NewAddUserCommandHandler>();
                scan.IncludeNamespaceContainingType<NewAddUserCommandHandler>();
                scan.AssemblyContainingType<NewAddUserCommandValidator>();
                scan.IncludeNamespaceContainingType<NewAddUserCommandValidator>();

                // Register all types of command validators and handlers
                scan.AddAllTypesOf(typeof(ICommandHandler<>));
                scan.AddAllTypesOf(typeof(ICommandValidator<>));
            });

            // The context needs to be one instance per HTTP request
            x.For<ISampleDbContext>().HttpContextScoped().Use<SampleDbContext>();
        });

        return ObjectFactory.Container;
    }
}

 

Ok, back to the executor. The command executor implements the following interface located in the Core project:

public interface ICommandExecutor
{
    void Execute(IEnumerable<ICommand> commands);
}

 

Step four: get command handler for a command

A class named CommandDispatcher is added to the Infrastructure project. This class provides a way of matching a command with its handler and validator objects. This is where the magic happens! The command is passed to each of the methods GetHandler and GetValidator, which return the corresponding command handler and command validator.

 

public class CommandDispatcher : ICommandDispatcher
{
    public ICommandHandler GetHandler(ICommand command)
    {
        var commandType = command.GetType();

        Type handlerType = typeof(ICommandHandler<>);
        Type constructedClass = handlerType.MakeGenericType(commandType);

        var handler = ObjectFactory.GetInstance(constructedClass);

        return handler as ICommandHandler;
    }

    public ICommandValidator GetValidator(ICommand command)
    {
        var commandType = command.GetType();

        Type validatorType = typeof(ICommandValidator<>);
        Type constructedClass = validatorType.MakeGenericType(commandType);

        var validator = ObjectFactory.GetInstance(constructedClass);

        return validator as ICommandValidator;
    }
}

 

…and the ICommandDispatcher interface in the Core project:

public interface ICommandDispatcher
{
    ICommandHandler GetHandler(ICommand command);
    ICommandValidator GetValidator(ICommand command);
}

 

The matching between a command and its handler can be done since each command handler implements the generic interface ICommandHandler<T>, which in turn inherits from the non-generic interface ICommandHandler:

 

public interface ICommandHandler
{
    void Handle(object commandObj);
}

public interface ICommandHandler<TCommand> : ICommandHandler where TCommand : ICommand
{
}
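With this shape, each concrete handler has to implement the non-generic Handle(object) and cast the argument. One way to avoid repeating that cast everywhere is a small abstract base class. This is a sketch of my own, not part of the sample code:

```csharp
// Hypothetical base class bridging the untyped Handle(object) call
// to a strongly typed overload.
public abstract class CommandHandlerBase<TCommand> : ICommandHandler<TCommand>
    where TCommand : ICommand
{
    public void Handle(object commandObj)
    {
        // The dispatcher resolved this handler from the command's runtime
        // type, so the cast is expected to succeed
        Handle((TCommand)commandObj);
    }

    public abstract void Handle(TCommand command);
}
```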

 

 

Step five: break out validation

The original command contained some ”validation” of the data before acting on it and adding it to the context.

if (user == null)
    throw new ArgumentNullException("user");

if (departments == null)
    throw new ArgumentNullException("departments");

if (!departments.Any())
    throw new ArgumentException("departments");

 

The validation will now take place in a new class:

public class NewAddUserToDepartmentsValidator : ICommandValidator<NewAddUserToDepartmentsCommand>
{
    public ValidationResult Validate(NewAddUserToDepartmentsCommand command)
    {
        var result = new ValidationResult();

        if (command.User == null)
        {
            result.IsValid = false;
            result.ErrorMessages.Add("Must contain user");
        }

        if (command.Departments == null || !command.Departments.Any())
        {
            result.IsValid = false;
            result.ErrorMessages.Add("Must contain departments");
        }

        return result;
    }
}

The validation messages are meant to be user friendly and can be passed to the user interface or an error log if validation fails. However, this is not a replacement for the model validation that should take place in the UI or in the MVC controller action before creating the command and passing it to the CommandExecutor.

The validation class implements the following interface:

public interface ICommandValidator<in TCommand> where TCommand : ICommand
{
    ValidationResult Validate(TCommand command);
}

 

…and ValidationResult:

public class ValidationResult
{
    public ValidationResult()
    {
        // valid until a validator reports otherwise
        IsValid = true;
        ErrorMessages = new List<string>();
    }

    public bool IsValid { get; set; }
    public IList<string> ErrorMessages { get; set; }
}

 

The idea of the generic ICommandValidator interface is to define the relationship between a validator and its command, in the same way as between command handlers and commands.
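Note that the dispatcher's GetValidator returns a non-generic ICommandValidator, which is not listed in the post. Presumably it mirrors the handler pattern; a sketch of what it could look like, with the generic interface inheriting it (this is my assumption, not the sample code):

```csharp
// Assumed non-generic counterpart, mirroring ICommandHandler
public interface ICommandValidator
{
    ValidationResult Validate(object commandObj);
}

// The generic interface would then inherit the non-generic one, just as
// ICommandHandler<TCommand> inherits ICommandHandler
public interface ICommandValidator<in TCommand> : ICommandValidator where TCommand : ICommand
{
    ValidationResult Validate(TCommand command);
}
```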

 

The validation takes place inside the CommandExecutor like this:

public class CommandExecutor : ICommandExecutor
{
    private readonly ISampleDbContext _context;
    private readonly ICommandDispatcher _dispatcher;

    public CommandExecutor(ISampleDbContext context, ICommandDispatcher dispatcher)
    {
        _context = context;
        _dispatcher = dispatcher;
    }

    public void Execute(IEnumerable<ICommand> commands)
    {
        foreach (var command in commands)
        {
            var validator = _dispatcher.GetValidator(command);
            var validationResult = validator.Validate(command);

            if (!validationResult.IsValid)
                throw new CommandValidationException(validationResult.ErrorMessages);

            var handler = _dispatcher.GetHandler(command);
            handler.Handle(command);
        }

        _context.SaveChanges();
    }
}

 

For lack of a better solution, I use the command dispatcher to look up the correct validator for a given command.

 

This is a work in progress. Most likely there will be more refactoring done in the near future.

Command and Query-based Entity Framework architecture

I have participated in various .NET projects where we've created an n-tier architecture with repositories as the lowest tier next to the database. Then we've added a service layer on top of the repositories. The services are in turn consumed by a web application, Web API or WCF services. The repositories handle the CRUD operations towards the DbContext in Entity Framework, while the services contain the business logic.

However, the service classes (and sometimes the repositories) can grow quite large and end up handling a lot of unrelated operations as time goes by. Single responsibility – not so much. The thing is that it can be quite hard for new developers to know which service class to extend when adding functionality for new business requirements.

For my current project, I thought I would address this issue and try a different approach. The services and repositories are replaced by smaller queries and commands – sort of a lighter take on CQRS (without the event sourcing). The commands handle operations such as Delete, Update and Insert; the queries handle reads. Since the commands and queries are separated, you can use different read and write models if needed. In this example I will use the same model for both.

Project structure

The project structure is set up somewhat according to the Onion architecture. This means that we have a Core project that holds the “definition” of the application and forms the center of the onion. More precisely, Core contains the domain entities (User, Department) and the interfaces that make up the queries and commands. A project called Infrastructure contains the implementations of the C & Q interfaces. A web application utilizes the C & Q objects, and uses StructureMap to inject the correct implementation based on the interfaces.

The Core entities are POCO classes. Nothing in the Core project depends on other projects; hence, Core is not dependent on the implementation of the “definition”. Entity Framework 5 lets you define primary keys, relationships etc. in the fluent API of EF, which means we can put these configurations in classes in the Infrastructure project. EF also lets you use data annotations, but those would tightly couple the entities to EF. The configuration classes are found in Infrastructure\DbConfigurations and are called from the SampleDbContext like this:

public class SampleDbContext : DbContext, ISampleDbContext
{
    public IDbSet<User> Users { get; set; }
    public IDbSet<Department> Departments { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new UserConfiguration());
        modelBuilder.Configurations.Add(new DepartmentConfiguration());
    }
}

The DbContext in this case covers all tables for the application, so there is just one DbContext in this application: SampleDbContext.

[Figure: the onion architecture project structure]

In this simple application each command and query is responsible for telling the EF DbContext to save or execute on the context. This means that each command or query requires a roundtrip to the database. (To minimize the roundtrips you can share the DbContext between commands and move the call to SaveChanges to the calling method – in this case the MVC controller.)
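The parenthetical above could be sketched like this. It assumes an injected shared context and commands whose Add methods do not call SaveChanges themselves; all names here are hypothetical:

```csharp
// Hypothetical MVC controller action: two commands act on the same
// injected ISampleDbContext, and the caller commits once.
public ActionResult CreateUserInDepartments(User user, IEnumerable<Department> departments)
{
    _addUserCommand.Add(user);                           // no SaveChanges inside
    _addUserToDepartmentsCommand.Add(user, departments); // no SaveChanges inside

    // One roundtrip: both changes are persisted in a single call
    _context.SaveChanges();

    return RedirectToAction("Index");
}
```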

Queries and commands

The queries and commands in the Infrastructure project all have a corresponding interface in the Core project. I have created two abstract base classes for the commands and queries, called QueryBase and CommandBase. The base classes both specify that the derived classes should have a constructor that takes an ISampleDbContext. The QueryBase should also take an ICacheManager, but since I need to get some sleep pretty soon I won't be adding that to this sample code. :)
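The base classes themselves are not listed in the post. A minimal sketch of what they might look like, given how the derived classes use Context (this is my assumption of their shape):

```csharp
// Assumed shape of the abstract base classes
public abstract class CommandBase
{
    protected CommandBase(ISampleDbContext context)
    {
        Context = context;
    }

    protected ISampleDbContext Context { get; private set; }
}

public abstract class QueryBase
{
    // An ICacheManager parameter could be added here later
    protected QueryBase(ISampleDbContext context)
    {
        Context = context;
    }

    protected ISampleDbContext Context { get; private set; }
}
```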

The queries and commands have expressive names that declare exactly what they do (AddUserCommand, GetUserByIdQuery). They typically consist of one method, like Add or Get, so it should be pretty clear to a new developer how to maintain the correct structure when extending the application with more queries and commands.

 

You: Hey, gief codez plz!

A query example:

public class GetUsersInDepartmentQuery : QueryBase, IGetUsersInDepartmentQuery
{
    public GetUsersInDepartmentQuery(ISampleDbContext context) //TODO: Should take ICacheManager as well
        : base(context)
    {
    }

    public IEnumerable<User> GetUsers(int departmentId)
    {
        return Context.Users.Where(u => u.DepartmentId == departmentId);
    }
}

A command example:

public class AddUserCommand : CommandBase, IAddUserCommand
{
    public AddUserCommand(ISampleDbContext context) : base(context)
    {
    }

    public void Add(User user)
    {
        Context.Users.Add(user);
        Context.SaveChanges();
    }
}
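Consuming a query and a command from an MVC controller could then look like this; constructor injection is wired up by StructureMap. The controller, action and property names are assumptions, not part of the sample:

```csharp
public class UserController : Controller
{
    private readonly IAddUserCommand _addUserCommand;
    private readonly IGetUsersInDepartmentQuery _getUsersInDepartmentQuery;

    // StructureMap injects the Infrastructure implementations here
    public UserController(IAddUserCommand addUserCommand,
                          IGetUsersInDepartmentQuery getUsersInDepartmentQuery)
    {
        _addUserCommand = addUserCommand;
        _getUsersInDepartmentQuery = getUsersInDepartmentQuery;
    }

    public ActionResult Department(int departmentId)
    {
        var users = _getUsersInDepartmentQuery.GetUsers(departmentId);
        return View(users);
    }

    [HttpPost]
    public ActionResult Add(User user)
    {
        _addUserCommand.Add(user);
        return RedirectToAction("Department", new { departmentId = user.DepartmentId });
    }
}
```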

The sample project featured in this post can be found here.

PSST! The solution has been refactored. See newer post at http://wp.me/p2WWp3-29 for more info.

List all local SQL Server 2012 instances

Note to future me: when forgetting the name of a newly installed Microsoft SQL Server instance, it's good to remember that the local instances (or servers) can be listed by running sqlcmd -L from the command prompt. If the instance is nowhere to be found in the listing, be sure to check whether the local service SQL Server Browser is running. By default, it is stopped.

[Figure: the SQL Server Browser service in the Windows Services list]

 

The reason I needed the instance name was to connect to it using SQL Server Management Studio. Sure, one could connect to the local database using “.” as the server name, which worked like a charm, but for the sake of it I wanted the instance name. The error given was:

An error has occurred while establishing a connection to the server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 – Error Locating Server/Instance Specified) (Microsoft SQL Server)

There is a bunch of other stuff besides the name itself that can prevent you from connecting with the server name. This guide is pretty helpful even though it covers older versions of Microsoft SQL Server and Windows.

CQRS

I am starting a new assignment soon. Prior to starting I thought I should read up on CQRS, which I haven't really come to use yet. CQRS has been one of those buzzwords that “everyone” in the ASP.NET community seems to be adopting (alongside BDD and DDD etc.). So I am eager to learn what the fuss is all about and hopefully find a new and useful architectural pattern for my next assignment. It turns out that Microsoft offers a thorough tutorial (plus a free PDF book and video) on the subject at http://msdn.microsoft.com/en-us/library/jj591573.aspx.

However, some articles on CQRS state that one should be careful about when to use it. The reason for the hesitancy is that it sometimes leads to overcomplicated software; see http://www.udidahan.com/2011/04/22/when-to-avoid-cqrs/.

For those of you about to segregate command and query responsibilities, I salute you!

Microsoft exam 70-487 study guide

I have put together a study guide for the Microsoft exam 70-487 (Developing Windows Azure and Web Services) since there are no books available from Microsoft Press yet. This is the material I am using right now to study for the exam. Hopefully it covers most of the content on the exam.

The exam covers the following sections according to the exam site:

  • Accessing Data (24%)
  • Querying and Manipulating Data by Using the Entity Framework (20%)
  • Designing and Implementing WCF Services (19%)
  • Creating and Consuming Web API-based services (18%)
  • Deploying Web Applications and Services (19%)

The objectives are narrowed down to keywords in the following sections of this post. The keywords are paired with links, preferably links to webcasts. Some of the links refer to older .NET technologies but will hopefully be applicable even in .NET 4.5. Most of the webcast links require a Pluralsight account, so I suggest you visit pluralsight.com and get yourself a subscription. They have a free subscription of up to 200 minutes for new customers.

Happy studying!

Accessing Data

Entity Framework

WCF Services

Web API services

Deploying Web Applications and Services

Other study guides

For the 70-480 and 70-486 exams I followed Chris Myers' excellent study guides at www.bloggedbychris.com. He just recently posted the guide for 70-487 here. I really recommend going through that one as well, as it contains some lessons learned from the actual exam.

Pointing fingers at ugly code

I watched Alex Papadimoulis' talk from Öredev on ugly code. We as developers tend to point fingers at whoever wrote what we, at first glance, think is ugly code. However, as Alex rightfully states, the beauty of code is in the eye of the beholder. What defines ugly code is that it is hard to maintain; everything else is just a matter of opinion, at least in my mind.

The video can be found here: http://oredev.org/2012/sessions/ugly-code