
Specification pattern – why use it?

The specification pattern is a simple design pattern that basically says:

The specification of which objects satisfy certain business rules should be reusable (DRY). In order to do that, we create a class with the sole responsibility of determining whether an object satisfies the rules or not.

If you are interested in how and why you should use it, there is an excellent article covering the details of the specification pattern at Enterprise Craftsmanship.

While it is a great article, there are some points about the pattern that are worth emphasizing.

Specifications have a name

Mundane, right? But it is a very, very important aspect. Specifications have a name in the source code, and that makes your code readable. Compare

var payments = repository.Query(new OverduePaymentsSpecification(TimeSpan.FromDays(10)));

with

var now = Clock.UtcNow;
var payments = repository.Query(p => now.Subtract(p.PaymentDate) >= TimeSpan.FromDays(10));

In one case the purpose of the code is obvious; in the other, it is not.

The business rule should be important enough to have its own name and type.
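For reference, here is a minimal sketch of what such a named specification might look like. The shape of the Specification<T> base class is my assumption (real implementations vary), and it assumes a Payment entity with a DateTime PaymentDate plus the Clock helper from the snippet above.

// A minimal sketch; requires System and System.Linq.Expressions.
public abstract class Specification<TEntity>
{
    public abstract Expression<Func<TEntity, bool>> AsExpression();

    // Allows in-memory checks with the same rule.
    public bool IsSatisfiedBy(TEntity candidate)
    {
        return AsExpression().Compile()(candidate);
    }
}

public class OverduePaymentsSpecification : Specification<Payment>
{
    private readonly TimeSpan overdueBy;

    public OverduePaymentsSpecification(TimeSpan overdueBy)
    {
        this.overdueBy = overdueBy;
    }

    public override Expression<Func<Payment, bool>> AsExpression()
    {
        var threshold = Clock.UtcNow.Subtract(overdueBy);
        return payment => payment.PaymentDate <= threshold;
    }
}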

Specification pattern keeps business rules in business layer

If you are using onion architecture, hexagonal architecture or basically any kind of architecture where business is at the center, the specification pattern lives in your business project.

The business layer contains interfaces, and the implementations of those interfaces live outside, probably in some kind of infrastructure layer.
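A sketch of the split, assuming the Specification<T> shape above: the interface lives in the business layer, while the ORM-backed implementation lives in the infrastructure layer.

// Business layer: only the contract.
public interface IRepository<TEntity>
{
    IList<TEntity> Query(Specification<TEntity> specification);
}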

Compare using a method on your data access object

var payments = repository.GetOverduePayments(TimeSpan.FromDays(10));

with the specification pattern

var payments = repository.Query(new OverduePaymentsSpecification(TimeSpan.FromDays(10)));

The first approach ends up with your business rules in your infrastructure layer.


That is just wrong. What if you need to run one instance of your software on an on-premise PostgreSQL (or some other database) and another instance for cloud customers on Azure SQL (a very common licensing model)? Do you duplicate all your business rules? If you do, I predict a plethora of bugs and a maintenance nightmare.

Specification Transformations

ORMs have advanced quite a bit, but even EF Core still can't query a NodaTime property (or any kind of custom property, e.g. a CustomerId type instead of int/guid), which means that the object-relational impedance mismatch is still alive and kicking.

In my projects, I don't use EF entities (or NHibernate or any other ORM's entities) directly; I use a T4 template to translate business classes into partial classes with sensible properties and types.

This has some advantages:

  • ORM entities can have more properties than business objects, e.g. the transaction entity (code-first) has all the properties received from the import file, but the business object only the ~10 that are useful.
  • It eliminates the impedance mismatch.
  • It keeps infrastructure in the infrastructure layer.

and disadvantages:

  • mapping logic is necessary

The specification should be written using business classes, because it is a business rule. However, in order to query your data store, you need to translate the specification expression into an expression over ORM entities.

I recommend AutoMapper, specifically its queryable extensions and expression translation. It helps keep the mapping logic to a minimum.
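A sketch of what the translation can look like; it assumes the AutoMapper expression mapping package (AutoMapper.Extensions.ExpressionMapping in current versions), a hypothetical PaymentEntity ORM class and a configured CreateMap<PaymentEntity, Payment>().

// Translate the business-level predicate into one over the ORM entity,
// so the data store can execute it.
var spec = new OverduePaymentsSpecification(TimeSpan.FromDays(10));
var entityPredicate = mapper
    .MapExpression<Expression<Func<PaymentEntity, bool>>>(spec.AsExpression());

// dbContext.Payments is a hypothetical DbSet<PaymentEntity>.
var payments = dbContext.Payments.Where(entityPredicate).ToList();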

Specification projections

Most examples of the pattern end with a specification of an object, but don't show you how to get a related object. That is especially troubling in the case of DDD aggregates, where there are no navigation properties to the outside of the aggregate, only ids of other aggregates.

I have a specification of an overdue payment… how do I get the clients with overdue payments?

Fear not, it is solvable. Most examples show an AndSpecification (or some other logical operation on two specifications) to demonstrate the versatility of the pattern.

But there is more! Although in this case you have to add a method to the repository, voilà: thanks to the magic of projection, you can use a specification of one aggregate to select a completely different aggregate.

public class Entity {
   public int Id { get; set; }
}
public class Client : Entity {
   public string Name { get; set; }
}
// Payment has no navigation property to Client, only the id of the payer.
// Without a navigation property, you can't just write an expression to check.
public class Payment : Entity {
   public LocalDate PaymentDate { get; set; }
   public int PayerId { get; set; }
}
public class OverduePaymentSpecification : Specification<Payment> {
    public override Expression<Func<Payment, bool>> AsExpression() { /* implementation */ }
}

public class Repository<TEntity> where TEntity : Entity
{
    public IList<TEntity> ProjectQuery<TSpecifiedEntity>(
          Specification<TSpecifiedEntity> specification, 
          Expression<Func<TSpecifiedEntity, int>> idSelection)
    {
        // GetDbSet<T>() is assumed to be a helper returning the IQueryable<T> for an entity type.
        var projectedEntities = GetDbSet<TEntity>();
        var specifiedEntities = GetDbSet<TSpecifiedEntity>();
        return specifiedEntities
            .Where(specification.AsExpression())
            .Join(projectedEntities, idSelection, 
                  projectedEntity => projectedEntity.Id, 
                  (specifiedEntity, projectedEntity) => projectedEntity)
            .ToList();
    }
}

// Example of usage
var clientRepository = new Repository<Client>(context);
var clientsWithOverduePayments = clientRepository
     .ProjectQuery(new OverduePaymentSpecification(), payment => payment.PayerId);

That's all. I like the specification pattern because, thanks to the magic of EF and AutoMapper, I am mostly left with a standard AutoMapperRepository and I don't have to program business logic into the infrastructure.

Sidenote: Personally, I am not too fond of using "chains" of specifications, but it is quite a useful pattern. I am working with systems that have a lot of weird business rules, and having a separate class for each rule helps keep them manageable.

.NET Standard, .NET Core and .NET Framework with xUnit.net

A few years ago, MS released .NET Core, then it released .NET Standard. I have avoided them and kept using .NET Framework to keep out of trouble.

Until now.

I made a simple library and wondered what kind of project I should use and what the ramifications of my selection are. And I ran into trouble… I didn't really understand what was going on.

There are many .NET frameworks; the most common ones are:

  • .NET Framework – monolithic framework, basically a set of libraries we all know and love, including WPF, ASP.NET and so on.
  • .NET Core – a fork of .NET framework reworked to be more modular.

There are several others (Compact, Xamarin, Mono…) and basically all of them somehow implement .NET. Your library used to target one of those and relied on its (slightly) incompatible API.

So MS tried to make it easier to develop applications that can target multiple platforms (applications that can run on, e.g., both Mono and .NET Framework) and created Portable Class Library profiles, where you say which platforms you want to target and as a result get the list of APIs available on all platforms of the profile.

That went OK for a while, but it had problems. I won't go into detail (see this lengthy blog post: Introducing .NET Standard).

So MS decided to create an API specification; libraries are written against this specification and then run on a platform that implements the standard.

Basically .NET Standard is an interface and each platform is an implementor (see Developer Metaphor for .NET Standard).

Ok, so far so good. We have .NET Standard 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6 and 2.0. The .NET Core platform implements them, and so do recent versions of .NET Framework 4 (4.6.1 or later for .NET Standard 2.0).

While this was going on, MS did some other things: first it got rid of the .csproj project file and went with project.json, then a few months later it replaced project.json with a significantly simplified .csproj again (to keep the build toolchain the same for all things, e.g. WPF applications and so on). It seems that MSBuild is too entrenched and MS wants one unified build chain instead of two.

Library project = .NET Standard

So when you develop a library now, you can develop against .NET Core, .NET Framework, or you can choose to develop against .NET Standard. Obviously, I chose .NET Standard to have the biggest audience and the least hassle with different platforms.

If I need something that is missing from .NET Standard, I will just use NuGet.

Unit tests

I chose xUnit.net for my unit testing framework and got to work. There is a nice tutorial over at xUnit.net: Getting started with xUnit.net (.NET Core / ASP.NET Core).

You might notice that there are

  • Getting Started with xUnit.net (desktop)
  • Getting started with xUnit.net (.NET Core / ASP.NET Core)
  • Getting started with Devices

but no getting started with .NET Standard. The reason is kind of obvious: the test runner needs to know the actual platform it will run the tests on. See this comment by the xUnit BDFL. If you want, you can easily run xUnit against multiple targets (= platforms) to make sure there are no kinks anywhere, as sketched below.
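A minimal sketch of such a multi-targeted test project; the monikers are examples, pick the platforms you actually care about. dotnet test then builds and runs the tests once per target.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Note the plural TargetFrameworks instead of TargetFramework -->
    <TargetFrameworks>netcoreapp2.0;net461</TargetFrameworks>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.3.0" />
    <PackageReference Include="xunit" Version="2.2.0" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" />
  </ItemGroup>
</Project>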

dotnet

With the introduction of the .NET Core platform, MS added a CLI tool for common tasks. The program itself is called dotnet.exe. It is basically a CLI tool to create, scaffold, build, pack or test .NET Core projects.

This tool is only for the .NET Core platform, not for .NET Framework or others.

By using dotnet.exe, you can easily do common tasks, such as:

C:\Users\jahav\source\repos>mkdir app
C:\Users\jahav\source\repos>cd app
C:\Users\jahav\source\repos\app>dotnet new sln --name myapp
The template "Solution File" was created successfully.
C:\Users\jahav\source\repos\app>dotnet new xunit --name mytest
The template "xUnit Test Project" was created successfully.
Processing post-creation actions...
Running 'dotnet restore' on mytest\mytest.csproj...
 Restoring packages for C:\Users\jahav\source\repos\app\mytest\mytest.csproj...
 Generating MSBuild file C:\Users\jahav\source\repos\app\mytest\obj\mytest.csproj.nuget.g.props.
 Generating MSBuild file C:\Users\jahav\source\repos\app\mytest\obj\mytest.csproj.nuget.g.targets.
 Restore completed in 1,11 sec for C:\Users\jahav\source\repos\app\mytest\mytest.csproj.
Restore succeeded.
C:\Users\jahav\source\repos\app>dotnet sln add mytest\mytest.csproj
Project `mytest\mytest.csproj` added to the solution.
C:\Users\jahav\source\repos\app>dotnet test
Build started, please wait...
Build completed.
Test run for C:\Users\jahav\source\repos\app\mytest\bin\Debug\netcoreapp2.0\mytest.dll(.NETCoreApp,Version=v2.0)
Microsoft (R) Test Execution Command Line Tool Version 15.3.0-preview-20170628-02
Copyright (c) Microsoft Corporation. All rights reserved.
Test are starting to run, please wait...
[xUnit.net 00:00:00.4928198] Discovering: mytest
[xUnit.net 00:00:00.5726261] Discovered: mytest
[xUnit.net 00:00:00.6093754] Starting: mytest
[xUnit.net 00:00:00.7880631] Finished: mytest
Tests total: 1. Success: 1. Fail: 0. Ignored: 0
Test run was successful.
Time of test run: 1,7359 sec

dotnet is also quite extensible; e.g. xUnit adds its own runner (dotnet xunit) that has some nicer features than the standard dotnet test. See 'dotnet test' vs. 'dotnet xunit'.


Let's look at the xUnit test project's .csproj:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <IsPackable>false</IsPackable>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.3.0" />
    <PackageReference Include="xunit" Version="2.2.0" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" />
  </ItemGroup>
</Project>
  • TargetFramework: netcoreapp2.0 – the platform the test cases will run on. Multiple platforms can be specified (remember to change TargetFramework to TargetFrameworks).
  • Microsoft.NET.Test.Sdk – turns the project from a class library into a console application when building for .NET Core, so that all assets needed to "run" are generated. This package does a lot of things and is needed if you are using dotnet test. It is used for all testing frameworks (MSTest, xUnit, NUnit…).
  • xunit – the package that adds the annotations for tests in the assembly. It doesn't do anything really useful by itself; it is mostly metadata.
  • xunit.runner.visualstudio – the xUnit test runner that connects xUnit with the .NET Core test running platform. Thanks to this, you can discover and run tests in Visual Studio or by using dotnet test.

The xUnit tutorial has a slightly different csproj:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="xunit" Version="2.3.1" />
    <DotNetCliToolReference Include="dotnet-xunit" Version="2.3.1" />
  </ItemGroup>
</Project>

This csproj is missing Microsoft.NET.Test.Sdk, so it's not possible to use dotnet test, but it uses a dotnet extension point: DotNetCliToolReference. By adding this NuGet package to the project (yes, really a NuGet package that is automatically downloaded for the project, no need to install it separately), you can use the dotnet xunit command.

BTW you can't use xunit.console.x86.exe for .NET Core projects; that runner is for .NET Framework only.

Custom MVC Validation Message

MVC has a commonly used extension method, @Html.ValidationMessageFor(model => model.Property), that generates markup similar to

<span class="field-validation-error" data-valmsg-replace="true" data-valmsg-for="Name">
  The Name field is required.
</span>

We wanted to hide the error message in a tooltip on a red asterisk (yes, I know that it is not user/mobile friendly). Something like this:

[Screenshot: the error message displayed as a tooltip on a red asterisk]

We have @Html.EditorFor(), so perhaps there will be an easy way… Nope, there isn't. We have to do some work, either through JavaScript and CSS or by creating a separate helper.

JavaScript & CSS way

This is the easier way and it works.

C# way

Changing the generated markup to something like

<span data-error-message="Error message" class="field-validation-error">*</span>

would work too.

Well, it seems that there is no easy way to get the validation error message for a specified property of a model. The code bubbles through several calls and ends up in the ValidationMessageHelper method in the file ValidationExtensions.cs.

I can change either the tag or the CSS class of the generated element, and I can of course add other attributes through the htmlAttributes dictionary, but that is about it. I can't get the message itself (unless I do the work myself).
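Here is a sketch of doing the work myself: a helper that reads the first error for a property from ModelState and renders the asterisk markup from above. ValidationAsteriskFor is my name, not part of MVC, and it only renders once server-side validation has populated ModelState.

// Requires System, System.Linq.Expressions and System.Web.Mvc.
public static class ValidationAsteriskExtensions
{
    public static MvcHtmlString ValidationAsteriskFor<TModel, TProperty>(
        this HtmlHelper<TModel> html,
        Expression<Func<TModel, TProperty>> expression)
    {
        string name = ExpressionHelper.GetExpressionText(expression);
        string fullName = html.ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldName(name);

        ModelState modelState;
        if (!html.ViewData.ModelState.TryGetValue(fullName, out modelState)
            || modelState.Errors.Count == 0)
        {
            return MvcHtmlString.Empty;
        }

        // Render <span data-error-message="..." class="field-validation-error">*</span>
        var span = new TagBuilder("span");
        span.AddCssClass("field-validation-error");
        span.MergeAttribute("data-error-message", modelState.Errors[0].ErrorMessage);
        span.SetInnerText("*");
        return MvcHtmlString.Create(span.ToString());
    }
}

Usage in a view would then be @Html.ValidationAsteriskFor(model => model.Name), with the tooltip shown from the data-error-message attribute via CSS.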

I have found some nice info about how to generate a custom summary, and someone who needed to create a custom generated validation message with an icon (using @Html.ValidationIconFor(model => model.property)). It is a considerable amount of basically copy and paste code.

Conclusion

AFAICT there is no easy way to have a template or something similar for the validation message. As long as screen real estate is not an issue, try to stick with the default.


NHibernate using .NET 4 ISet

NHibernate 4.0 (released 2014-08-17) has brought us support for the .NET 4 ISet<> collections, thus freeing us from the tyranny of the Iesi package. But long before that, there was a way to use the .NET 4 ISet in your NHibernate 3 projects:

NHibernate.SetForNet4

SetForNet4 is a NuGet package you can install into your application. Once you install it, a new file appears in your project with an implementation of ICollectionTypeFactory that supports System.Collections.Generic.ISet<> instead of Iesi.Collections.Generic.ISet<>. Other than that, it is basically a copy of the original DefaultCollectionTypeFactory.

All you need to do after installation is set the configuration property "collectiontype.factory_class" to the assembly qualified name of the created factory and make sure the dll with the created collection factory can be loaded (if you have it in a separate project), and all will be peachy.

I had slight trouble with configuration. My configuration is spread over XML and code, and the comment at the top of the generated file said to add this line to my configuration:

//add to your configuration:
//configuration.Properties[Environment.CollectionTypeFactoryClass]
//        = typeof(Net4CollectionTypeFactory).AssemblyQualifiedName

Since it was an integral part of the configuration (not the db dialect or something), I put it in the code:

var configuration = new Configuration().Configure();
configuration.Properties[Environment.CollectionTypeFactoryClass]
     = typeof(Net4CollectionTypeFactory).AssemblyQualifiedName;

… and it didn't work. I had to set the factory before I configured from XML.

var configuration = new Configuration()
    .SetProperty(Environment.CollectionTypeFactoryClass, typeof(Net4CollectionTypeFactory).AssemblyQualifiedName)
    .Configure();

I am not exactly sure why, but I think it is because I also have assemblies with *.hbm.xml mapping files specified in the XML using <mapping assembly="EntityAssembly" /> tags.

The configuration code has to set the collection factory of the bytecode provider before the mapped assemblies are processed.

Downside

It should be noted that when using the SetForNet4 package, you can't use both the Iesi and the .NET 4 collections at the same time; you need to replace Iesi in the whole application. Also, we have NH 4.0 now, so doing this is kind of pointless, but I did it before 4.0 and I don't have time to upgrade to 4.0 and check that my app still works as advertised.


AutoMapper queryable extensions

How to generate a LINQ query for your DTOs

AutoMapper is a really cool library that allows us to map one object to another, e.g. when passing objects through the layers of our application: we work with different objects in different layers and have to map them from one layer to another, e.g. from a business object to a viewmodel.

All is well and good for POCOs, not so much for entity objects. AutoMapper maps everything using reflection, so a property like Project.Code can turn into ProjectCode, but that is troublesome with an ORM, where querying such a property means loading another entity from the database.

I am using the NHibernate Linq provider, which only fetches the columns we actually ask for from the database, so it would be nice to take a DTO type and an entity type and magically create a Linq mapping expression from one to the other that can be used by the NHibernate Linq provider.

// Blog entity
public class Blog {
  public virtual int Id {get;set;}
  public virtual string Name {get;set;}
}

// Post entity
public class Post {
  public virtual int Id {get;set;}
  public virtual string Title {get;set;}
  public virtual DateTime Created {get;set;}
  public virtual string Body {get;set;}
  public virtual Blog Blog {get;set;} 
}

public class PostDto {
  public string BlogName {get;set;} 
  public string Title {get;set;}
  public string Body {get;set;}
}

public class BlogRepository {
  private readonly ISession session;

  public PostDto GetPost(int id)
  {
    return (
      from post in session.Query<Post>()
      where post.Id == id
      select 
        new PostDto                  // This is an expression I want to generate
        {                            // in this case, I have 3 properties,
          BlogName = post.Blog.Name, // in my project, I have 5-30 in each entity
          Title = post.Title,        // and many entities. Repeatable code that
          Body = post.Body           // should be generated.
        }                            //
      ).Single();
  }
}

Remember, such an expression will query only the necessary fields, so Id or Created won't be part of the SQL query (see NHibernate Linq query evaluation process for more info).


Queryable Extensions

AutoMapper provides a solution to this problem: queryable extensions (QE). They allow us to create such expressions and they even solve the SELECT N+1 problem. It is no panacea, but it solves most of my trouble.

Notice the key difference: normal automapping traverses the object graph and returns a mapped object, while QE only generates a mapping expression.

Example

I will provide an example using the entities above:

  1. Install the NuGet package for AutoMapper; the QueryableExtensions are part of the package, in the AutoMapper.QueryableExtensions namespace.
  2. Create a test
  3. Create a mapping
    Mapper.CreateMap<Post, PostDto>();
  4. Query using the projection (compare with the hand-written query above)
    var postDto = 
       session.Query<Post>().Where(post => post.Id == id)
       .Project().To<PostDto>()
       .Single();
  5. Observe the generated SQL:
    select
        blog1_.Name as col_0_0_,
        post0_.Title as col_1_0_,
        post0_.Body as col_2_0_ 
    from
        Post post0_ 
    left outer join
        Blog blog1_ 
            on post0_.Blog=blog1_.Id 
    where
        post0_.Id=@p0;
    @p0 = 1 [Type: Int32 (0)]
    

    It is no different from the SQL generated by the hand-made query. It only queries what is necessary, without the boilerplate code.

  6. Remove boilerplate code from your app.

You can also do more difficult transformations, although QE is slightly more limited than the in-memory AutoMapper capabilities; go and read the wiki.
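For example, a custom member mapping is translated into the SQL projection as well. This sketch uses the same classic static API as above; BlogName would actually be mapped by the flattening convention already, the explicit MapFrom just shows the mechanism.

Mapper.CreateMap<Post, PostDto>()
    .ForMember(dto => dto.BlogName, opt => opt.MapFrom(post => post.Blog.Name));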

This is a really cool extension that will remove quite a lot of boilerplate code, so give it a try!


NHibernate Linq query evaluation process

I like the Linq provider of NHibernate, but I have encountered weird behavior and had to dive into how NHibernate creates SQL queries from Linq queries and how Linq provider extensions are created. If you are only interested in how to write an NHibernate Linq function (e.g. how to check whether a text column matches some regular expression in the database), this is not the post to read – read Michael's post instead. This one is about how NHibernate turns a Linq query into the result of the query.

I will mostly be dealing with the select clause of Linq, but the others are quite similar.

Problem

I had a query that checked an entity for nullness before creating a derived object using some properties of the null-checked entity:

from post in session.Query<Post>()
select new PostModel
{
  Body = post.Body,
  Blog = post.Blog != null ? new EntityReference
    {
      Id = post.Blog.Id,
      Name = post.Blog.Name
    } : null
}

However, the generated SQL had more fields than necessary; in fact, it had all the fields of the Blog entity:

-- The last four columns (Id0_, Name0_, Subtitle0_, Created0_) are the properties of Blog that are not needed
select
    post0_.Body as col_0_0_,
    blog1_.Id as col_1_0_,
    post0_.Blog as col_2_0_,
    blog1_.Name as col_3_0_,
    blog1_.Id as Id0_,
    blog1_.Name as Name0_,
    blog1_.Subtitle as Subtitle0_,
    blog1_.Created as Created0_ 
from
    Post post0_ 
left outer join
    Blog blog1_ 
        on post0_.Blog=blog1_.Id

It doesn't look like a big problem, but that is only because this is a demonstration. In reality, I was creating several references, and for each entity the query required all its properties. The result was a query with 100+ columns (4 entities * 20-30 columns per entity) and quite slow execution.

I had no idea why that happened, because a similar query without the null check worked fine and NHibernate generated SQL without unnecessary fields:

from post in session.Query<Post>()
select new PostModel
{
    Body = post.Body,
    Blog = new EntityReference
        {
            Id = post.Blog.Id,
            Name = post.Blog.Name
        }
}
-- 
select
    post0_.Body as col_0_0_,
    post0_.Blog as col_1_0_,
    blog1_.Name as col_2_0_ 
from
    Post post0_ 
left outer join
    Blog blog1_ 
        on post0_.Blog=blog1_.Id

Explanation

I had to dive into the NHibernate source code, because Google turned up nothing and all books on NHibernate are either old (pre 3.0, i.e. without Linq) or only go through some simple Linq queries. The official NHibernate documentation doesn't even mention how to create Linq extensions, much less how the process works. Yay me.

Well, here is a short overview of how NH turns a Linq query into a result:

1. Rewriting query to be more HQL friendly

We have a Linq expression (= Linq query) object supplied by the user. Its representation is identical to the representation in the code, which can be hard to process, so NHibernate takes the query and rewrites it a little so it is easier to process. Here is an example of how a query looks before and after rewriting:

/* Source query */
from Post post in value(NHibernate.Linq.NhQueryable[Nerula.Data.Post])
select new PostModel() {
  Body = [post].Body,
  Blog =
    IIF(([post].Blog != null),
    new EntityReference() 
    {
      Id = [post].Blog.Id,   
      Name = [post].Blog.Name    
    },   
    null)
}

/* Rewritten query */
from Post post in value(NHibernate.Linq.NhQueryable[Nerula.Data.Post]) 
from Blog _0 in [post].Blog 
select new PostModel() 
{
  Body = [post].Body,
  Blog = 
    IIF(([_0] != null),
      new EntityReference() 
      {
        Id = [post].Blog.Id, 
        Name = [_0].Name 
      }, 
      null)
}

2. Find sub-expressions representable in HQL

The Linq query is represented as a tree of subexpressions.

For example: new PostModel() {Body = [100001].Body, Blog = IIF(([100001] != null), new EntityReference() {Id = [100001].Blog.Id, Name = [100001].Name}, null)} is a Linq expression of type ExpressionType.MemberInit. It consists of several subexpressions:

  • new PostModel() is an expression of type ExpressionType.New
  • [100001].Body is an expression of type ExpressionType.MemberAccess
  • {IIF(([100001] != null), new EntityReference() {Id = [100001].Blog.Id, Name = [100001].Name}, null)} is an expression of type ExpressionType.Conditional. This expression has its own subexpressions.

It is kind of obvious that expressions form a tree.

NHibernate recursively visits all Linq sub-expressions and gets a list of all expressions that can be evaluated in the database using an HQL select statement – e.g. registered methods, entities and their properties, but not unregistered function calls or constants (e.g. null, 1, "Hello World", post.Body.Contains("Hello")).

Here is a screenshot of all the sub-expressions from the query above. NHibernate extracts the following expressions (no method calls here, but there could be):

[Screenshot: NHibernate Linq candidate sub-expressions]

BTW, this is where the magic that allows users to add their own Linq provider extensions happens.

3. Replace database subexpressions

Now we have a list of sub-expressions that can be queried directly in the database. We take our Linq query and rewrite it a little more: NHibernate goes recursively through the Linq expression and rewrites the HQL-representable expressions as items of an array. The reason is simple: once it performs the DB query, it will have the results in an array, and once it evaluates the Linq query, the result of the original subexpression (e.g. post.Body.Contains("Subexpression")) will already have been computed by the database and will be an item in the array (e.g. true or false). That is the whole point – doing stuff in the database.

The select clause will now have the stuff queryable in the db replaced with items of the array:

new PostModel() {
  Body = Convert(input[0]), 
  Blog = IIF((Convert(input[1]) != null), 
         new EntityReference() 
         {
           Id = Convert(input[2]), 
           Name = Convert(input[3])
         }, 
         null)
}

4. Perform database query

We have the list of sub-expressions we are interested in from step 2. NHibernate does its magic, queries the database and gets the results into the input array.

5. Evaluate Linq query

We have the rewritten Linq query from step 3 and we have the data used in it from step 4. We can actually evaluate the query! NHibernate does so and returns the result.

Back to the problem

Well, that is about it. Where is the problem? Why did I get all these extra columns in my query?

Easy: NHibernate doesn't recognize the == operator in a select clause, and therefore it has to load the whole Blog entity into memory (that is why it loads all the properties), where it is compared with null.

I thought about why NHibernate behaves like this, and it seems reasonable after a while. What if the comparison were between an entity and some in-memory instance of an entity? What about an overloaded == operator? The database would have no idea what to do, so we can't have a blanket == in the NHibernate Linq provider.

Important: This is valid for the Linq select clause; the where clause doesn't work like that. The where clause works as you would expect, unlike select.

  • If you use post.Blog != null in the Linq where clause, NHibernate will correctly translate it to SQL where post0_.Blog is not null.
  • If you use post.Blog == memoryBlog, you get where post0_.Blog=@p0.

Solution

Nope, we are not lost. We can create a custom Linq provider extension for checking null. NHibernate will recognize our method EntityState.Exists as something that can be performed in the database and replace the load of the whole entity with an extra column in the query:

public static class EntityState {
    public static bool Exists(EntityBase entity) {
        return !ReferenceEquals(entity, null);
    }
}

public class EntityStateGenerator : BaseHqlGeneratorForMethod
{
    public EntityStateGenerator()
    {
        SupportedMethods = new[] { 
            ReflectionHelper.GetMethod(() => EntityState.Exists(null)) 
        };
    }

    public override HqlTreeNode BuildHql(MethodInfo method, Expression targetObject,
        ReadOnlyCollection<Expression> arguments, HqlTreeBuilder treeBuilder, 
        IHqlExpressionVisitor visitor)
    {
        return treeBuilder.IsNotNull(visitor.Visit(arguments[0]).AsExpression());
    }
}

public class MyLinqToHqlGeneratorsRegistry : DefaultLinqToHqlGeneratorsRegistry
{
    public MyLinqToHqlGeneratorsRegistry()
    {
        var generator = new EntityStateGenerator();
        foreach (var method in generator.SupportedMethods)
        {
            RegisterGenerator(method, generator);
        }
    }
}

We also have to register the generator (for XML, the property name is linqtohql.generatorsregistry), or you can simply add it to the configuration:

configuration.SetProperty(NHibernate.Cfg.Environment.LinqToHqlGeneratorsRegistry, 
  typeof(MyLinqToHqlGeneratorsRegistry).AssemblyQualifiedName);

Now, the query

from post in session.Query<Post>()
select new PostModel
{
    Body = post.Body,
    Blog = EntityState.Exists(post.Blog)
        ? new EntityReference
        {
            Id = post.Blog.Id,
            Name = post.Blog.Name
        }
        : null
}

will result in the following SQL, because NHibernate replaces the subexpression EntityState.Exists(post.Blog) with the result of an SQL case statement per steps 3, 4 and 5:

select
    post0_.Body as col_0_0_,
    case 
        when blog1_.Id is not null then 1 
        else 0 
    end as col_1_0_,
    post0_.Blog as col_2_0_,
    blog1_.Name as col_3_0_ 
from
    Post post0_ 
left outer join
    Blog blog1_ 
        on post0_.Blog=blog1_.Id

The reason for the weird query was quite simple and pretty understandable. One extra column is OK with me. I just wish it were documented.

Using NHibernate readonly property accessor

Recently I needed to combine several columns from the database into a single string field. More precisely, I had a User entity with FirstName and Surname properties, and I needed a FullName property for filling data into a ViewModel. The entity itself has many properties and its hydration is quite slow.

public class User : EntityBase
{
    public virtual string FirstName {get;set;}
    public virtual string Surname {get;set;}
    public virtual string FullName
    {
        get { return string.Format("{0} {1}", FirstName, Surname); }
    }
    // many other properties
}

Querying a view model

I wanted to create the view model in a single Linq query. The query gets only the necessary fields from the database, not whole entities (e.g. only the Title and Body).

var viewModelQuery = 
    from post in session.Query<Post>()
    where post.Id == postId
    select new PostViewModel
    {
        Title = post.Title,
        Text = post.Body,
        CreatedBy = string.Format("{0} {1}", 
            post.CreatedBy.FirstName, post.CreatedBy.Surname)
    };

The corresponding SQL query looks like this – notice it is a single query that gets only what is needed; no unnecessary properties of the User or the Post are fetched. This is the reason why I really like the NHibernate Linq provider as opposed to the Criteria API or QueryOver.

select
    post0_.Title as col_0_0_,
    post0_.Body as col_1_0_,
    user1_.first_name as col_2_0_,
    user1_.surname as col_3_0_ 
from
    Post post0_ 
left outer join
    User user1_ 
        on post0_.CreatedBy=user1_.Id 
where
    post0_.Id=@p0;

However, there is a flaw: the string.Format construction of CreatedBy. I would like to use the FullName property, but that is not possible, because it is not mapped, and using CreatedBy = post.CreatedBy.FullName would throw a mapping exception.

Querying full name

We can get around that by using a formula in the mapping that has the same result as the property in the entity class:

<class name="User">
    <id name="Id">
      <generator class="increment" />
    </id>
    <property name="FirstName" column="first_name" />
    <property name="Surname" column="surname" />
    <property name="FullName" access="readonly" formula="(first_name || ' ' || surname)" />
</class>

Notice the use of the readonly access. It was implemented in NH-1621. It is an accessor used for querying the database, and thanks to it we can use the property directly in the query. The prerequisites are an existing property with a getter (no need for a setter) that is semantically the same as the column in the database. In our case, the property is mapped not to a column but to a formula, but the result is the same: we can use it directly in the query:

var viewModelQuery = 
    from post in session.Query<Post>()
    where post.Id == postId
    select new PostViewModel
    {
        Title = post.Title,
        Text = post.Body,
        CreatedBy = post.CreatedBy.FullName
    };
var viewModel = viewModelQuery.Single();
Assert.AreEqual("Post title", viewModel.Title);
Assert.AreEqual("Text of the post", viewModel.Text);
Assert.AreEqual("John Smith", viewModel.CreatedBy);

The SQL query uses the formula instead of two separate columns and, as before, it does so without fetching unnecessary properties:

select
    post0_.Title as col_0_0_,
    post0_.Body as col_1_0_,
    (user1_.first_name || ' ' || user1_.surname) as col_2_0_ 
from
    Post post0_ 
left outer join
    User user1_ 
        on post0_.CreatedBy=user1_.Id 
where
    post0_.Id=@p0

Now we can safely add a middle name or reverse the order of the full name simply by modifying the formula in the mapping file and the getter property, and the change will appear everywhere, as opposed to modifying every string.Format call.

Testing NHibernate queries using IEnumerable

NHibernate has several ways to query a database; the easiest one to use is the Linq provider. I don't like the other ways very much:

  • HQL (Hibernate Query Language) – You have to write a string with no type checking, e.g. "select Name from Cat".
  • Criteria API – Uses magic strings; rather awkward for more complex queries.
  • QueryOver – It doesn't use magic strings like the Criteria API, but I find the alias variables disgusting, plus it makes more complex queries (e.g. multiple sub-queries) rather difficult and unfriendly.
  • SQL query – Just plain SQL; IMO the best choice when Linq can't do the job.

NHibernate Linq

The NHibernate Linq provider is great: you can search through entities using Linq, everything is statically checked and the intent is clear. NHibernate uses IQueryable, not IEnumerable. The difference is that IQueryable stores the info about the expressions used for the search, and these expressions are later used to create an SQL query that hits the database. IEnumerable always pulls objects from the previous method in the chain, thus any filtering is done not in the database, but in memory.

var oldCatNames = 
  from cat in session.Query<Cat>()
  where cat.Age >= 12
  select cat.Name;
return View(oldCatNames.ToArray());

This is an example of an NHibernate Linq query getting all cats that are old. NHibernate generates an SQL statement, executes it and transforms the result into an array of names. The key question is: how do we test such queries?

We can

  1. Use our production DBMS; each test has to fill in the data and run the query against the database. I am doing this in my project with Oracle and TBH it is rather slow (you have to connect to the db for each test – several seconds), you have to clear/fill a lot of data as required by constraints (most of the time the referenced data are not required by the test), and although it has merit (e.g. when testing stored procedures, more complex queries and so on), for simple queries (= most queries) it seems like overkill.
  2. Use an in-memory DBMS (e.g. SQLite) and run the tests against it. I am doing this for my playground project, but IMO it is even worse than the first proposition; the only benefit is speed and the drawbacks are significant. You still have to fill the database, and the engine is different from the production one. For example, sequences are not supported by SQLite. I am using them in my mapping files, so now what? What about stored procedures? SQLite has a lousy implementation of time functions, and so on.
  3. Use IEnumerable instead of IQueryable and run the tests in memory, without any DBMS at all.

I am going to explore the third option, because it will correctly test most Linq queries for very little code.

SQL semantics vs IEnumerable semantics

Before we dive into how to actually do it, there is an important thing to remember:

The result of an SQL query and an IEnumerable query may be different, although the code looks exactly the same.

NHibernate (and Entity Framework) use SQL semantics that sometimes differ from IEnumerable semantics; the most obvious case is aggregation methods such as Sum. Let us consider the following query that gets the total amount of all conjectures:

int sum = session.Query<Conjecture>()
  .Sum(conjecture => conjecture.Amount);

What is the result when the table for the Conjecture entity is empty? No, it is not 0, it is a GenericADOException. The reason is SQL semantics. NHibernate infers from conjecture.Amount that the result of the query should be an int. It constructs the query and tries to cast the result to an int. But the result of the SQL query (select cast(sum(conjecture0_.Amount) as INT) as col_0_0_ from Conjecture conjecture0_) on an empty table is not 0, but null, per the definition of SUM in SQL. Thus the exception.

This is the intended result, per bug NH-3113. In order to get zero, we have to change the type of the inferred result and return 0 when the result is null:

int sum = session.Query<Conjecture>()
  .Sum(conjecture => (int?)conjecture.Amount) ?? 0;

When using an IEnumerable masquerading as an IQueryable for tests, we must be aware of this difference.
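To illustrate: the same Sum evaluated in memory returns 0 where the SQL SUM yields null, so a test written against AsQueryable() would happily pass while the production query throws.

// In-memory semantics: Sum over an empty sequence returns 0, no exception.
var emptyTable = new List<Conjecture>().AsQueryable();
int sum = emptyTable.Sum(conjecture => conjecture.Amount); // 0 here, GenericADOException against the DB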

Testing query

Query is not a method of ISession, but an extension method from the NHibernate.Linq.LinqExtensionMethods class, and testing extension methods in C# is painful – they are basically static methods called on an instance. The obvious solution is to use your own facade that hides the NHibernate ISession, so you are using your own interfaces that isolate you from quirks such as this one.

If you are using a facade, it is really simple to mock the result of a query: just take any IEnumerable and use the AsQueryable extension method from the Queryable class (and use a better name than ISessionFacade):

Conjecture[] conjectures = new[] 
{
  new Conjecture("Minor work", 10),
  new Conjecture("Bug fix", 50),
  new Conjecture("Simple feature", 100),
  new Conjecture("Complicated feature", 500),
};
var sessionFacade = new Mock<ISessionFacade>();
sessionFacade.Setup(x => x.Query<Conjecture>())
  .Returns(conjectures.AsQueryable());
// Here would be the tested method, I am inlining it
var largeConjectureNames =
                from conjecture in sessionFacade.Object.Query<Conjecture>()
                where conjecture.Amount >= 100
                select conjecture.Name;
var expected = new[] { "Simple feature", "Complicated feature" };
CollectionAssert.AreEqual(expected, largeConjectureNames.ToArray());

If you are using ISession from NHibernate and the Query extension method from NHibernate.Linq for your queries, you either have to replace the ISession with a facade or mock the Query extension method. I am mocking the extension method, because our project is not using a sane DI system (my next task).

Mocking Query method

Let me start by saying this: Mocking extension method is horrible.

Extension methods have their place, e.g. the string class doesn't have a Truncate method, and you can't just use Substring(0, length), because it will throw an ArgumentOutOfRangeException if length is greater than the length of the string. A sketch of such a method follows.
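// A sketch of that Truncate, a legitimate extension method, since string
// is not our class to change.
public static class StringExtensions
{
    public static string Truncate(this string value, int maxLength)
    {
        if (value == null)
            throw new ArgumentNullException("value");
        return value.Length <= maxLength ? value : value.Substring(0, maxLength);
    }
}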

But! You should never ever use an extension method for anything that has the potential to be mocked. I have no idea what the NHibernate developers were thinking when they used one for the method that returns the result of a query.

So, how to mock the Query method?

1. Use wrapper

The Query method is from the NHibernate.Linq namespace, so if the namespace is not included, the method is not found and the code does not compile. So include your own:

namespace Nerula.Linq
{
  public static class NHibernateLinqExtension {
    public static IQueryable<TEntity> Query<TEntity>(this ISession session)
    {
      return NHibernate.Linq.LinqExtensionMethods.Query<TEntity>(session);
    }
  }
}

Replacing the using NHibernate.Linq with using Nerula.Linq won’t change anything, except the app is now calling the NHibernate Query through our wrapper.

2. Call mockable interface from wrapper

Instead of just calling another static method, create an interface that performs the query and a default implementation of the interface that calls the original extension method:

namespace Nerula.Linq
{
  public interface ISessionLinq
  {
    IQueryable<TEntity> Query<TEntity>(ISession session);
  }
  public static class NHibernateLinqExtension {
    internal static ISessionLinq SessionLinq {get;set;}
        
    static NHibernateLinqExtension()
    {
      SessionLinq = new NHibernateSessionLinq();
    }

    private class NHibernateSessionLinq : ISessionLinq
    {
      public IQueryable<TEntity> Query<TEntity>(ISession session)
      {
        return NHibernate.Linq.LinqExtensionMethods.Query<TEntity>(session);
      }
    }

    public static IQueryable<TEntity> Query<TEntity>(this ISession session)
    {
      return SessionLinq.Query<TEntity>(session);
    }
  }
}

Notice that SessionLinq has internal access; you can configure your test projects to access the internal properties (InternalsVisibleTo) or simply change the property to public. Now we have a default implementation that calls the static methods for the program, but we can also change the implementation during tests and return whatever we want.

3. Mock your queries

Now we can replace the default implementation of ISessionLinq with a mocked one and finally use in-memory lists and other IEnumerable goodies to mock the queries.

ISession session = Mock.Of<ISession>();
Mock<ISessionLinq> sessionLinq = new Mock<ISessionLinq>(MockBehavior.Strict);

Conjecture[] conjectures = new[] 
{
  new Conjecture("Minor work", 10),
  new Conjecture("Bug fix", 50),
  new Conjecture("Simple feature", 100),
  new Conjecture("Complicated feature", 500),
};

sessionLinq.Setup(x => x.Query<Conjecture>(session))
  .Returns(conjectures.AsQueryable());
// Here is the change of the query provider
NHibernateLinqExtension.SessionLinq = sessionLinq.Object;

var largeConjectureName =
  from conjecture in session.Query<Conjecture>()
  where conjecture.Amount >= 100
  select conjecture.Name;

CollectionAssert.AreEqual(new[] { "Simple feature", "Complicated feature" }, largeConjectureName.ToArray());

4. Restore default property

Since we are changing a static property, we must make sure to change it back after the test has run; otherwise all tests would have to make sure to set the correct implementation of ISessionLinq. E.g. NUnit reuses the instance of a test fixture for all tests, so if one test mocks the Query method while another uses the NHibernate.Linq Query method, they would be order dependent. NUnit has action attributes that make this very simple; a sketch follows.
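A sketch of such an action attribute, assuming NUnit 3 (ITestAction from NUnit.Framework, ITest from NUnit.Framework.Interfaces); it saves the implementation before each test and restores it afterwards. It also assumes the test project can see the internal SessionLinq property (InternalsVisibleTo).

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class RestoreSessionLinqAttribute : Attribute, ITestAction
{
    private ISessionLinq original;

    public ActionTargets Targets
    {
        get { return ActionTargets.Test; }
    }

    public void BeforeTest(ITest test)
    {
        // Remember the implementation active before the test...
        original = NHibernateLinqExtension.SessionLinq;
    }

    public void AfterTest(ITest test)
    {
        // ...and put it back, even if the test replaced it.
        NHibernateLinqExtension.SessionLinq = original;
    }
}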

Conclusion

I have found that using IEnumerable to test NHibernate Linq queries makes writing tests much easier and faster. You can't use it for testing the other NHibernate APIs used to access the database, and you have to be careful about the SQL vs IEnumerable semantics.

The Query extension method is a horrible design; if you are using NHibernate, you should consider rolling a facade. Not only for mocking queries: Entity Framework is getting better and better, and a possible switch would be much smoother. NHibernate has recently released version 4.0, but except for the support of BCL collections, I don't find the release notes very newsworthy.

Limit your abstractions

While trying to find a better alternative to our "pass-the-ball" architecture (webform -> code behind -> presenter -> controller) for my app, I stumbled upon the interesting bite-sized series Limit your abstractions by Ayende.

It basically starts with code from ndddsample and shows what is wrong with it (in his opinion): events, too much useless abstraction.

Series

  1. Analyzing a DDD application – The abstraction is a non-abstracted abstraction. Basically only extracted interfaces.
  2. Application Events–the wrong way
  3. Application Events–what about change? – What if we have a new state, e.g. lost cargo
  4. Application Events–Proposed Solution #1
  5. Reflections on the Interface Segregation Principle
  6. Application Events–Proposed Solution #2–Cohesion
  7. Application Events–event processing and RX
  8. You only get six to a dozen in the entire app
  9. Commands vs. Tasks, did you forget the workflow?
  10. All cookies looks the same to the cookie cutter
  11. So what is the whole big deal about?
  12. Refactoring toward reduced abstractions
  13. The key is in the infrastructure…
  14. And how do you handle testing?

Events

public override void InspectCargo(TrackingId trackingId)
{
  Validate.NotNull(trackingId, "Tracking ID is required");

  Cargo cargo = cargoRepository.Find(trackingId);
  if (cargo == null)
  {
    logger.Warn("Can't inspect non-existing cargo " + trackingId);
    return;
  }

  HandlingHistory handlingHistory = handlingEventRepository.LookupHandlingHistoryOfCargo(trackingId);

  cargo.DeriveDeliveryProgress(handlingHistory);

  if (cargo.Delivery.Misdirected)
  {
    applicationEvents.CargoWasMisdirected(cargo);
  }

  if (cargo.Delivery.UnloadedAtDestination)
  {
    applicationEvents.CargoHasArrived(cargo);
  }
  cargoRepository.Store(cargo);
}

This is an actual business method that does business logic. It violates the Single Responsibility Principle (it looks up the delivery history and dispatches events) and the Open Closed Principle (if we add or change a cargo state, e.g. cargo is lost, we have to modify the class).

There are of course many possible solutions to event handling and dispatching, some of which are discussed. I didn't know about Reactive Extensions; rather nice.

Non-abstracted abstraction

According to Ayende, the code should have a very limited number (<10) of abstractions; he proposes that the following abstractions are good enough for most projects:

  1. Controllers
  2. Views
  3. Entities
  4. Commands
  5. Tasks
  6. Events
  7. Queries

Creating an abstraction always has a cost, sometimes small, sometimes large, see Abstract Factory Factory Façade Factory. Use your abstractions carefully.

My notes

Definitely worth reading, but I wonder how the proposed reduced solution works in a real project with more complex operations and larger teams.

Basically, he puts the code into a self-contained Command class that contains all the logic and calls it from the MVC action. The queries are also self-contained classes that get their result using the Query method of the Command.

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Register(string originUnlocode, string destinationUnlocode, DateTime arrivalDeadline)
{
    var trackingId = ExecuteCommand(new RegisterCargo
    {
        OriginCode = originUnlocode,
        DestinationCode = destinationUnlocode,
        ArrivalDeadline = arrivalDeadline
    });

    return RedirectToAction(ShowActionName, new RouteValueDictionary(new { trackingId }));
}
public abstract class Command
{
    public IDocumentSession Session { get; set; }
    public abstract void Execute();

    protected TResult Query<TResult>(Query<TResult> query); // implementation omitted
}

public abstract class Command<T> : Command
{
    public T Result { get; protected set; }
}

public class RegisterCargo : Command<string>
{
    public override void Execute()
    {
        var origin = Session.Load<Location>(OriginCode);
        var destination = Session.Load<Location>(DestinationCode);

        var trackingId = Query(new NextTrackingIdQuery());

        var routeSpecification = new RouteSpecification(origin, destination, ArrivalDeadline);
        var cargo = new Cargo(trackingId, routeSpecification);
        Session.Save(cargo);

        Result = trackingId;
    }

    public string OriginCode { get; set; }
    public string DestinationCode { get; set; }
    public DateTime ArrivalDeadline { get; set; }
}

In the end, he uses hand-coded mocking, which I find rather distasteful:

public void ExecuteCommand(Command cmd)
{
  if (AlternativeExecuteCommand != null)
    AlternativeExecuteCommand(cmd);
  else
    Default_ExecuteCommand(cmd);
}

It seems much easier and more maintainable just to create an ICommandExecutor, or even a virtual method that can be overridden.
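A sketch of the ICommandExecutor alternative, an abstraction a test can replace with a fake instead of the hand-rolled delegate above:

public interface ICommandExecutor
{
    void Execute(Command cmd);
}

public class DefaultCommandExecutor : ICommandExecutor
{
    public void Execute(Command cmd)
    {
        // Session setup etc. elided; mirrors Default_ExecuteCommand above.
        cmd.Execute();
    }
}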


CyanogenMod for HTC Desire C

I have been pondering for a while whether I should install CyanogenMod (a custom Android ROM) on my phone. In the end, I decided to give it a shot:

  • My phone is an HTC Desire C (HTCDC) – really old and slow. The stock Android worked fine… for a time. After that, it really slowed down.
  • HTC won't release a new version of Android – It is an old device that is not even sold anymore. It makes no sense for them to invest in a new version of Android (the installed one is 4.0.3) and push it to the customers.
  • HTC uses HTC Sense – a modified Android with a lot of value-added software bloatware, like DropBox and Facebook. Because the bloatware is installed on the system partition, I can't uninstall it without root. I would also like a stock version of Android.
  • Privacy – The Android permission system is terrible. You can only approve permissions during installation, even if the app requires them once in a blue moon (e.g. sending SMS for two-step verification).
  • The recent "simplification" of permissions – All apps can now access the internet, and you can only grant permissions per category.

I get it, Google is an advertising company; giving users an option to block the ads is completely at odds with their business model. On the other hand, they could at least try to have some balance. Also, most users don't care. I kind of do, so I decided to root my phone and install CyanogenMod.

CyanogenMod

[Screenshot: Tiny CyanogenMod (TinyCM) home screen along with a few apps]

Android is open source, and that means there are geeks out there working hard to create custom versions of it. Of them, CyanogenMod is the most popular and best known. It was the obvious choice, but unfortunately the HTC Desire C is not on the official list of supported devices. It is, however, on the list of unofficially supported devices, but don't waste your time – the ROM in the referenced forum thread doesn't work (it works for someone, but not for me).

I had success with the MiniCM 10 – V8; in order to install it, you have to follow a rather complicated process.

This is a really high-level guide; it explains more of the why than the how. If you want to really install it, you should read How To Install A ROM Or App From Zip File To Android Device From Recovery.

Understanding the partitions

Android is Linux based; it is a normal operating system and it uses several partitions for different tasks. Replacing the stock Android is a process of replacing the content of those partitions. It is well explained at addictivetips. You really should read it in order to understand the process.

Unlock bootloader

First, you have to unlock the bootloader. HTC gives an official way to do it, but you have to get a key from HTC. The key is different for each phone. There is a great step-by-step video for the HTCDC on YouTube.

Unlocking the bootloader will allow you to upload the custom recovery OS to the /recovery partition.

Installing recovery

A recovery is basically a self-contained OS on a separate partition that is used to update/backup/restore the main Android OS, plus a few other things. The recovery supplied with the phone is usually very limited, so there are other recoveries out there; the best known are Team Win Recovery Project (TWRP) and ClockworkMod (CWM). Although the HTCDC is among the supported phones for TWRP, it didn't work for me. I could install it, but version 2.7 didn't even boot, while later versions booted but the screen was corrupted and I couldn't swipe (TWRP is touch based) – e.g. backup requires swiping the screen.

CWM officially doesn't support the HTCDC, but I have found a version that worked for me (forum thread, recovery image). It has a no-frills interface, but it does the job.

Backup the stock Android

Yes, it is not an option, it is a necessity. I went through several ROMs before finding one that works.

CWM will back up all partitions (see the headline above) from the internal memory to the external SD card:

  • /boot partition (as boot.img)
  • /recovery partition (as recovery.img)
  • /system partition – it saves the files on the partition as blobs and adds system.ext4.dup with info on how they fit on the partition
  • /data partition – user data of apps, e.g. your preferences etc.
  • /cache partition – cache of Dalvik bytecode compiled to ARM native code, or something like that. The cache partition can be deleted.

Do a full wipe

You can find this in most threads with custom ROMs: do a FULL WIPE first. A full wipe means formatting /system, /data and /cache.

That basically means: go to recovery mode and format the /system, /data and /cache partitions. I also wiped the Dalvik cache (CWM > Advanced > Wipe Dalvik Cache), but I believe it is redundant, because it is stored on one of the formatted partitions.

This step is necessary because old files can interfere with the new ones.

Installing the custom ROM

The custom ROM (at least MiniCM V8 and a few others) consists of these parts:

  • files that will be copied to the /system partition
  • boot.img with the new kernel and other stuff

You have to have the ROM file on the SD card beforehand. Just choose "install from zip" in the CWM menu, select the zip file on your SD card, and it will install the files to /system.

After that, boot into the bootloader and flash the ROM's boot.img to the /boot partition.

Conclusion

So far the ROM mostly works. It seems faster, and I could install the Xposed framework and the XPrivacy module, giving me more control over which app can do what (e.g. I can deny an app downloads from the internet – mostly ads).


There are some small errors:

  • When downloading from the Google Play store, I get error 941 the first time I try to download an app. It works the second time. There are quite a few people on the net with the same problem, and it should be resolvable.
  • The panorama mode of the photo app is broken – the image shows green horizontal lines.

It should also be noted that the apps in CyanogenMod are from the Android Open Source Project (the vanilla Android, without any Google stuff like Gmail or Google Play), and because of Google's tightening grip, some features are missing; for more info read Google's iron grip on Android: Controlling open source by any means necessary. It is a great corporate strategy, but it shows the reality of open source vs money. Money wins all the time – every Google stockholder approves.

Overall I am satisfied, although I had hoped for a much easier process. The reading and the process itself took me at least 6 hours.
