SMASHINGCONF 2015 DAY 2 – CONFERENCE

Today's focus was less engineering and more philosophy regarding design. The designers were really eager to manifest their position and take a stand against automated processes such as Bootstrap design templates. It was all about how good it is to “think about the box” every time you need to design something new. At least that was my impression. I guess the movement of Bootstrap templates is now meeting resistance from designers wanting to change the world one web design at a time.

To sum up the whole conference in a few buzzwords: HTTP/2, SPOF, progressive enhancement and flexbox.

Over and out from Smashing Conf 2015!

SMASHINGCONF 2015 DAY 1 – Conference

I have the pleasure of being in Barcelona for the Smashing Conf Oct 19-21. These are my impressions from the second day, or actually the first day of the conference.

The common denominator throughout the sessions today was craftsmanship and an eye for detail. I was really impressed both by @seblester and his calligraphy and by all the performance boosting techniques used at FT.com, described by @patrickhamann.

A key takeaway from the latter talk was analysing SPOFs (single points of failure) from a UI standpoint. What if your webpage makes 80 requests to different internal and external sources to load different assets (fonts, JS, CSS, etc.) and your user is on a really crappy connection with a high level of packet loss, or one of the assets simply will not load in time? If the page stops loading and the browser freezes for more than 1000 ms, chances are good that the user will switch context, making it hard for her to complete the task she was meant to do.

To find these SPOFs you can use http://www.webpagetest.org. Check Advanced settings –> SPOF. To make the analysis real-time, JavaScript has a Resource Timing API. This means that you can collect, client side, the loading times for different assets on the web page and beacon them back to your analysis server. See https://speakerdeck.com/patrickhamann/embracing-the-network-coldfront-september-2015?slide=26. Pretty neat if you want to be proactive!

Also, the silver bullet when it comes to web performance will be HTTP/2. But that’s a whole different story!

Smashingconf 2015 day 0 – workshop

I have the pleasure of being in Barcelona for the Smashing Conf Oct 19-21. These are my impressions from the first day. This was the big workshop day and I chose Andy Clarke's (@Malarkey) session about CSS 3.

Being mostly a backend web developer dabbling with frontend stuff, I found this workshop really good. There were smaller bits and pieces that I didn't follow, coming from a server side point of view. Andy took us through the concepts of flexbox and some image processing done with CSS. What struck me the most is that there are several parts of HTML5 + CSS 3 that can be helpful when trying out a new design together with the customer or another stakeholder. Suddenly, many parts of Photoshop are redundant since the designer or developer can pretty easily mock up the web application in HTML and try out new stuff.

Time to market

CMS developers should really make use of the contenteditable attribute in HTML. This makes an element editable so the customer can put in text on the fly to try out a new design. This makes the implementation and feedback loop really short. Secondly, when structuring a web application layout using flexbox you have complete control of the layout. Together with the customer or stakeholder, you have the possibility to rearrange elements, resize columns and make elements change position depending on different browser layouts etc., just by using the developer tools in your favourite web browser. No need to wait for new PSD files from the designer!

The specific parts of flexbox that caught my eye were:

  • The width of each box or column using flexbox is now relative to the others in the same group. Previously, you had to give the width in a measurement relative to the full width of the surrounding container.
  • Ability to make use of a correct semantic structure within the HTML document even though the design requires different placements between different browser widths or devices. This way screen readers and older browsers can still understand the page even though the menu is sometimes located at the top and sometimes at the bottom, visually.
  • Pagination
  • Equal heights in parallel columns – sweet when implementing blocks of information where the content can be of varying length.

A word of caution – flexbox is not supported in IE8. Display: table is in that case a good fallback.

Image processing

So Adobe has apparently had a huge influence on the image processing parts of CSS. The standard filters from Photoshop such as saturation, hue, sepia and so forth are now part of the CSS spec. I think they are also called blends in CSS…?

Another cool thing is how text floats around images. For instance, an image depicting a complex shape can now have text floating closely around the object itself. This can be done in two ways as far as I understand. The first is to manually type in the coordinates in CSS for how the text should float, using CSS shapes. The other is much more dynamic, as you specify two image URLs. The first is the image that the user should see. The second points to a black and white mask image from which the browser reads the alpha channel. From that information the browser knows where the boundaries are and can float the text around them. There are some limitations regarding CORS that you may want to look up beforehand though.

Optimistic concurrency control using ETag

This is the scenario: you have a CRM system where the editors can change customer details. The CRM user interface is a web application which will be used by several editors. There is a chance that multiple editors will edit the same customer simultaneously.

Since the HTTP protocol is stateless, there is a chance that an editor can overwrite changes made by someone else after the editor loaded the “edit customer” web page.

To solve this you can make use of an ETag containing a value representation of the customer data, preferably a changed date. By submitting that value when initially sending the page to the web client, and then posting the value back along with the new customer details, the values can be compared. The comparison will result in either accepting or rejecting the changed customer information.

The HTTP specification (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.24) states that if the If-Match HTTP header value is not a representation of the current entity, the server should return status code 412 (Precondition Failed) and not persist the data. Otherwise, return 200 (OK).

When loading the page you submit the ETag either in the header or in the body. When the customer details are sent back to the server using a PUT request you pass the ETag value in the If-Match HTTP header.

If you are utilizing an ASP.NET MVC solution with AngularJS (without it being a SPA) and ASP.NET Web API, you can solve this by doing the following.

GET request – when loading the page with the customer information

Pass a representation of the ETag through the MVC model from the MVC controller and make it accessible from your Angular controller. I use a sort of initial data collection which will populate an AngularJS scope variable when the page is loaded.
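For illustration, here is a minimal sketch of what the MVC controller side of the GET request could look like. The controller, view model and property names are assumptions and not the exact code from the project:

public class CustomerController : Controller
{
    public ActionResult Edit(int id)
    {
        var customer = GetCustomerFromDatabase(id);

        var model = new EditCustomerViewModel
        {
            Customer = customer,
            // the ticks of the last changed date act as the ETag value
            ETag = customer.ChangedDate.Ticks.ToString(CultureInfo.InvariantCulture)
        };

        // the view renders the ETag into the initial data collection
        // that populates the AngularJS scope variable ($scope.etag)
        return View(model);
    }
}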

PUT request – when passing the changed data back to the server

The data is passed from the UI through an AngularJS $http PUT request:

var config = {
    method: 'PUT',
    url: '/customer',
    data: {  }, // the changed customer details
    headers: { 'If-Match': $scope.etag } // $scope.etag is initiated during loading of the page
};

$http(config)
    .success(function (response) {
        // notify the user that the update was ok
    })
    .error(function (data, status) {
        if (status === 412) {
            // notify the editor that the customer has already been updated by someone else
            // and that the page should be reloaded to get the new customer data
        }
    });

The receiving end is the Web API controller:

public HttpResponseMessage Put(CustomerData customerData)
{
    var customer = GetCustomerFromDatabase(customerData.Id);
    var isAlreadyModified = IsAlreadyModified(customer);

    if (isAlreadyModified)
    {
        // return status code 412 if the customer has already been changed during the editing
        return Request.CreateErrorResponse(HttpStatusCode.PreconditionFailed, "Customer has already been modified. Please reload the page and redo your changes.");
    }

    // map the changes from customerData onto the customer and persist them here
    return Request.CreateResponse(HttpStatusCode.OK);
}

private bool IsAlreadyModified(Customer customer)
{
    // using the ticks of the changed date as the ETag value
    var ourEtag = customer != null
        ? customer.ChangedDate.Ticks.ToString(CultureInfo.InvariantCulture)
        : string.Empty;

    var theirEtag = Request.Headers.IfMatch.ToString();

    return ourEtag.Equals(theirEtag, StringComparison.InvariantCultureIgnoreCase) == false;
}

Command and Query-based Entity Framework architecture PART 2

In my previous post I described a different take on an Entity Framework architecture with commands and queries. I probably confused those who know every angle of CQRS, since my commands did more than just hold new state. However, the intention was not to do a CQRS solution. It is only meant to be an alternative to repositories, with the concept of “one business rule equals one command or query” in mind.

PSST! The code can be found here: https://github.com/tobiasnilsson/CommandQuerySample/.
PSST2! Sorry for the badly formatted code. I will try to find a better way to paste code from Visual Studio into WordPress…

PSST3! I refer to the different projects in the solution in the following text. Core contains the definition of the application. Infrastructure provides the implementation. See the previous post for details.

The command classes are responsible for three things:

  • validating the data,
  • handling the state and
  • persisting the state in the database

Like this:

public class AddUserToDepartmentsCommand : CommandBase, IAddUserToDepartmentsCommand
{
    public AddUserToDepartmentsCommand(ISampleDbContext context) : base(context)
    {
    }

    public void Add(User user, IEnumerable<Department> departments)
    {
        if (user == null)
            throw new ArgumentNullException("user");

        if (departments == null)
            throw new ArgumentNullException("departments");

        if (!departments.Any())
            throw new ArgumentException("departments");

        foreach (var department in departments)
        {
            department.Users.Add(user);
        }

        Context.SaveChanges();
    }
}

So refactoring the command concept seems to be a good idea! I will borrow some of the concepts from CQRS when doing this refactoring, such as command and command handler. Commands will now only hold state, i.e. become POCO classes.

 

First step: add class to hold new state

A class called NewAddUserToDepartmentsCommand is added to the Core project. It looks like this:

public class NewAddUserToDepartmentsCommand : ICommand
{
    public User User { get; set; }
    public IEnumerable<Department> Departments { get; set; }
}

Also, the empty interface ICommand is added to the Core project. This interface only acts as a marker for commands.

public interface ICommand
{
}

Second step: add command handling

A class called NewAddUserToDepartmentsCommandHandler is added to the Infrastructure project since this will be specific to the implementation of the persistence stuff (in this case an Entity Framework based persistence). The handler will act on the data and add it to the EF context.

public class NewAddUserToDepartmentsCommandHandler : ICommandHandler<NewAddUserToDepartmentsCommand>
{
    private readonly ISampleDbContext _context;

    public NewAddUserToDepartmentsCommandHandler(ISampleDbContext context)
    {
        _context = context;
    }

    public void Handle(NewAddUserToDepartmentsCommand command)
    {
        foreach (var department in command.Departments)
        {
            department.Users.Add(command.User);
        }
    }
}

 

Also, the generic interface ICommandHandler is added to the Core project. This provides two features: it defines the Handle method and provides a relationship between the Command class and the corresponding CommandHandler.

public interface ICommandHandler<in TCommand> where TCommand : ICommand
{
    void Handle(TCommand command);
}

 

Third step: add command persistence

A class called CommandExecutor is added to the Infrastructure project. This class receives the commands that should be handled and persisted in the database through Entity Framework.

Side note: in many cases you want to persist data in a transaction. If one of the commands fails to execute or persist data, the other commands should not persist data either. In the previous solution, each command was responsible for persistence (each command called Context.SaveChanges()).

public class CommandExecutor : ICommandExecutor
{
    private readonly ISampleDbContext _context;
    private readonly ICommandDispatcher _dispatcher;

    public CommandExecutor(ISampleDbContext context, ICommandDispatcher dispatcher)
    {
        _context = context;
        _dispatcher = dispatcher;
    }

    public void Execute(IEnumerable<ICommand> commands)
    {
        foreach (var command in commands)
        {
            var validator = _dispatcher.GetValidator(command);
            var validationResult = validator.Validate(command);

            if (!validationResult.IsValid)
                throw new CommandValidationException(validationResult.ErrorMessages);

            var handler = _dispatcher.GetHandler(command);
            handler.Handle(command);
        }

        _context.SaveChanges();
    }
}

 

However, when calling SaveChanges on the context in this case, nothing is persisted. The context used in this CommandExecutor is a different instance than the one used in each of the CommandHandlers. The context is injected in the constructors of the command handlers and the executor, so calling SaveChanges in the executor class will act on a different context than the one the handlers acted on. StructureMap to the rescue!

Since the running application is a web application, we can specify that ONE instance of SampleDbContext should be used throughout the web request:

public static class IoC
{
    public static IContainer Initialize()
    {
        ObjectFactory.Initialize(x =>
        {
            x.Scan(scan =>
            {
                scan.AssembliesFromApplicationBaseDirectory();
                scan.WithDefaultConventions();

                //Add the assemblies that contain the handlers and validators to the scanning
                scan.AssemblyContainingType<NewAddUserCommandHandler>();
                scan.IncludeNamespaceContainingType<NewAddUserCommandHandler>();
                scan.AssemblyContainingType<NewAddUserCommandValidator>();
                scan.IncludeNamespaceContainingType<NewAddUserCommandValidator>();

                //Register all types of command validators and handlers
                scan.AddAllTypesOf(typeof(ICommandHandler<>));
                scan.AddAllTypesOf(typeof(ICommandValidator<>));
            });

            //The context needs to be one instance per http request
            x.For<ISampleDbContext>().HttpContextScoped().Use<SampleDbContext>();
        });

        return ObjectFactory.Container;
    }
}

 

Ok, back to the executor. The command executor implements the following interface located in the Core project:

public interface ICommandExecutor
{
    void Execute(IEnumerable<ICommand> commands);
}
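For illustration, a minimal sketch of how a caller – for example an MVC controller action – might hand commands to the executor. The controller and its action are hypothetical and not part of the sample project:

public class UserController : Controller
{
    private readonly ICommandExecutor _commandExecutor;

    public UserController(ICommandExecutor commandExecutor)
    {
        _commandExecutor = commandExecutor;
    }

    [HttpPost]
    public ActionResult AddUserToDepartments(User user, IEnumerable<Department> departments)
    {
        // the command is a plain POCO holding the new state
        var command = new NewAddUserToDepartmentsCommand
        {
            User = user,
            Departments = departments
        };

        // validation, handling and the single SaveChanges call all happen inside the executor
        _commandExecutor.Execute(new ICommand[] { command });

        return RedirectToAction("Index");
    }
}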

 

Step four: get command handler for a command

A class named CommandDispatcher is added to the Infrastructure project. This class provides a way of matching a command with its handler and validator objects. This is where the magic happens! The command is passed to each of the methods GetHandler and GetValidator, which return the corresponding command handler and command validator.

 

public class CommandDispatcher : ICommandDispatcher
{
    public ICommandHandler GetHandler(ICommand command)
    {
        var commandType = command.GetType();

        Type handlerType = typeof(ICommandHandler<>);
        Type constructedClass = handlerType.MakeGenericType(commandType);

        var handler = ObjectFactory.GetInstance(constructedClass);

        return handler as ICommandHandler;
    }

    public ICommandValidator GetValidator(ICommand command)
    {
        var commandType = command.GetType();

        Type validatorType = typeof(ICommandValidator<>);
        Type constructedClass = validatorType.MakeGenericType(commandType);

        var validator = ObjectFactory.GetInstance(constructedClass);

        return validator as ICommandValidator;
    }
}

 

…and the ICommandDispatcher interface in the Core project:

public interface ICommandDispatcher
{
    ICommandHandler GetHandler(ICommand command);
    ICommandValidator GetValidator(ICommand command);
}

 

The matching between a command and its handler can be done since each command handler implements the generic interface ICommandHandler<TCommand>, which in turn inherits from the non-generic interface ICommandHandler:

 

public interface ICommandHandler
{
    void Handle(object commandObj);
}

public interface ICommandHandler<TCommand> : ICommandHandler where TCommand : ICommand
{
}

 

 

Step five: break out validation

The original command contained some “validation” of the data before acting on it and adding it to the context.

if (user == null)
    throw new ArgumentNullException("user");

if (departments == null)
    throw new ArgumentNullException("departments");

if (!departments.Any())
    throw new ArgumentException("departments");

 

The validation will now take place in a new class:

public class NewAddUserToDepartmentsValidator : ICommandValidator<NewAddUserToDepartmentsCommand>
{
    public ValidationResult Validate(NewAddUserToDepartmentsCommand command)
    {
        var result = new ValidationResult();

        if (command.User == null)
        {
            result.IsValid = false;
            result.ErrorMessages.Add("Must contain user");
        }

        if (command.Departments == null || !command.Departments.Any())
        {
            result.IsValid = false;
            result.ErrorMessages.Add("Must contain departments");
        }

        return result;
    }
}

The validation messages are meant to be user friendly and could be passed to the user interface or to some error log if validation fails. This is not a replacement, though, for the model validation that should take place in the UI or the MVC controller action prior to creating the command and passing it to the CommandExecutor.
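As a rough sketch of how that could look in the calling MVC controller action (building on the hypothetical controller from the executor example above, and assuming CommandValidationException exposes the messages it was constructed with):

[HttpPost]
public ActionResult AddUserToDepartments(User user, IEnumerable<Department> departments)
{
    var command = new NewAddUserToDepartmentsCommand { User = user, Departments = departments };

    try
    {
        _commandExecutor.Execute(new ICommand[] { command });
    }
    catch (CommandValidationException exception)
    {
        // surface the user friendly validation messages to the editor
        foreach (var message in exception.ErrorMessages)
        {
            ModelState.AddModelError(string.Empty, message);
        }

        return View();
    }

    return RedirectToAction("Index");
}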

The validation class implements the following interface:

public interface ICommandValidator<in TCommand> where TCommand : ICommand
{
    ValidationResult Validate(TCommand command);
}

 

…and ValidationResult:

public class ValidationResult
{
    public ValidationResult()
    {
        IsValid = true; // valid until a validator reports otherwise
        ErrorMessages = new List<string>();
    }
    public bool IsValid { get; set; }
    public IList<string> ErrorMessages { get; set; }
}

 

The idea of the generic interface ICommandValidator is to be able to define the relationship between the validator and the command itself, in the same way as between command handlers and commands.

 

The validation takes place inside the CommandExecutor like this:

public class CommandExecutor : ICommandExecutor
{
    private readonly ISampleDbContext _context;
    private readonly ICommandDispatcher _dispatcher;

    public CommandExecutor(ISampleDbContext context, ICommandDispatcher dispatcher)
    {
        _context = context;
        _dispatcher = dispatcher;
    }

    public void Execute(IEnumerable<ICommand> commands)
    {
        foreach (var command in commands)
        {
            var validator = _dispatcher.GetValidator(command);
            var validationResult = validator.Validate(command);

            if (!validationResult.IsValid)
                throw new CommandValidationException(validationResult.ErrorMessages);

            var handler = _dispatcher.GetHandler(command);
            handler.Handle(command);
        }

        _context.SaveChanges();
    }
}

 

For lack of a better solution, I use the command dispatcher to load the correct validator given the command.

 

This is a work in progress. Most likely there will be more refactoring done in the near future.

Command and Query-based Entity Framework architecture

I have participated in various .NET projects where we've created an n-tier architecture with repositories as the lowest tier next to the database. Then, we've added a service layer on top of the repositories. The services are in turn consumed by a web application, Web API or WCF services. The repositories handle the CRUD operations towards the DbContext in Entity Framework. The services contain the business logic.

However, the service classes (and sometimes the repositories) can grow quite large and end up handling a lot of different operations as time goes by. Single responsibility – not so much. The thing is that it can be quite hard for new developers to know which service class to extend when adding functionality according to new business requirements etc.

For my current project, I thought I would address this issue and try a different approach. The services and repositories are replaced by smaller queries and commands. Sort of a lighter take on CQRS (without the event sourcing). The commands handle operations such as Delete, Update and Insert. Queries handle reads. If needed, you can use different read and write models since the commands and queries are separated. In this example I will use the same read and write model.

Project structure

The project structure is set up somewhat according to the Onion architecture. This means that we have a Core project which contains the “definition” of the application and is the center of the onion. More precisely, Core contains the domain entities (User, Department) and the interfaces that make up the queries and commands. A project called Infrastructure contains the implementations of the C & Q interfaces. A web application utilizes the C & Q objects. The web app uses StructureMap to inject the correct implementation based on the interfaces.

The Core entities are POCO classes. Nothing in the Core project depends on other projects. Hence, the Core project is not dependent on the implementation of the “definition”. Entity Framework 5 lets you define primary keys, relationships etc. in the fluent API of EF. This means that we can define these configurations in classes in the Infrastructure project. EF also lets you use data annotations, but this would tightly couple the entities to EF. The configuration classes are found in Infrastructure\DbConfigurations and called from the SampleDbContext like this:

public class SampleDbContext : DbContext, ISampleDbContext
{
    public IDbSet<User> Users { get; set; }
    public IDbSet<Department> Departments { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new UserConfiguration());
        modelBuilder.Configurations.Add(new DepartmentConfiguration());
    }
}

The DbContext in this case covers all tables for the application. So there is just one DbContext in this application. The context is called SampleDbContext.
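As an illustration of such a configuration class, a UserConfiguration along these lines could live in Infrastructure\DbConfigurations. The exact key and property names (Id, Name, Department) are assumptions rather than taken from the sample project:

public class UserConfiguration : EntityTypeConfiguration<User>
{
    public UserConfiguration()
    {
        // primary key and column rules live here instead of as data
        // annotations, keeping the Core entity free from EF concerns
        HasKey(u => u.Id);
        Property(u => u.Name).IsRequired().HasMaxLength(100);

        // User.DepartmentId and Department.Users are used elsewhere in the sample
        HasRequired(u => u.Department)
            .WithMany(d => d.Users)
            .HasForeignKey(u => u.DepartmentId);
    }
}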

[Image: onion-project – the solution structure]

In this simple application each command and query is responsible for executing against the EF DbContext itself. This means that each command or query will require a roundtrip to the database. (To minimize the roundtrips you can share the DbContext between commands and move the call to SaveChanges to the calling method – in this case the MVC controller.)

Queries and commands

The queries and commands in the Infrastructure project all have a corresponding interface in the Core project. I have created two abstract base classes for the commands and queries, called QueryBase and CommandBase. The base classes both specify that the derived classes should have a constructor which takes an ISampleDbContext. The QueryBase should also take an ICacheManager, but since I need to go to sleep pretty soon I won't be adding that to this sample code. :)
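The base classes themselves are not shown in this post, but a minimal sketch of what they might look like could be something along these lines (the exact shape is an assumption):

public abstract class CommandBase
{
    protected CommandBase(ISampleDbContext context)
    {
        Context = context;
    }

    // derived commands act on this context and call SaveChanges themselves
    protected ISampleDbContext Context { get; private set; }
}

public abstract class QueryBase
{
    protected QueryBase(ISampleDbContext context)
    {
        Context = context;
    }

    protected ISampleDbContext Context { get; private set; }
}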

The queries and commands have expressive names that declare exactly what they do (AddUserCommand or GetUserByIdQuery). They typically consist of one method like Add or Get. So it should be pretty clear to a new developer how to maintain the correct structure when extending the application with more queries and commands.

 

You: Hey, gief codez plz!

A query example:

public class GetUsersInDepartmentQuery : QueryBase, IGetUsersInDepartmentQuery
{
    public GetUsersInDepartmentQuery(ISampleDbContext context) //TODO: Should take ICacheManager as well
        : base(context)
    {
    }

    public IEnumerable<User> GetUsers(int departmentId)
    {
        return Context.Users.Where(u => u.DepartmentId == departmentId);
    }
}

A command example:

public class AddUserCommand : CommandBase, IAddUserCommand
{
    public AddUserCommand(ISampleDbContext context) : base(context)
    {
    }

    public void Add(User user)
    {
        Context.Users.Add(user);
        Context.SaveChanges();
    }
}
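To round things off, a hedged sketch of how a consumer – say an ASP.NET MVC controller – might use a query and a command through constructor injection. The controller and its actions are hypothetical and not part of the sample project:

public class DepartmentController : Controller
{
    private readonly IGetUsersInDepartmentQuery _getUsersInDepartmentQuery;
    private readonly IAddUserCommand _addUserCommand;

    // StructureMap resolves the Infrastructure implementations from the Core interfaces
    public DepartmentController(IGetUsersInDepartmentQuery getUsersInDepartmentQuery, IAddUserCommand addUserCommand)
    {
        _getUsersInDepartmentQuery = getUsersInDepartmentQuery;
        _addUserCommand = addUserCommand;
    }

    public ActionResult Users(int departmentId)
    {
        var users = _getUsersInDepartmentQuery.GetUsers(departmentId);
        return View(users);
    }

    [HttpPost]
    public ActionResult AddUser(User user)
    {
        _addUserCommand.Add(user);
        return RedirectToAction("Users", new { departmentId = user.DepartmentId });
    }
}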

The sample project featured in this post can be found here.

PSST! The solution has been refactored. See newer post at http://wp.me/p2WWp3-29 for more info.

List all local SQL Server 2012 instances

Note to future me: when forgetting the newly installed Microsoft SQL Server instance name, it's a good thing to remember that the local instances (or servers) can be listed using sqlcmd -L from the command prompt. If the instance is nowhere to be found in the listing, be sure to check that the local service SQL Server Browser is running. By default, it is stopped.

[Image: SQL Server Browser in the local Services list]

 

The reason why I needed the instance name was to be able to connect to it using SQL Server Management Studio. Sure, one could connect to the local db using “.” as server name, which worked like a charm. But for the sake of it I wanted the instance name. The error given was:

An error has occurred while establishing a connection to the server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 – Error Locating Server/Instance Specified) (Microsoft SQL Server)

There is a bunch of other stuff, besides the name itself, that can be the reason why you cannot connect using the server name. This guide is pretty helpful even though it covers older versions of Microsoft SQL Server and Windows.