Custom MVC Validation Message

ASP.NET MVC has a commonly used extension method @Html.ValidationMessageFor(model => model.Property) that generates markup similar to

<span class="field-validation-error" data-valmsg-replace="true" data-valmsg-for="Name">
  The Name field is required.
</span>

We wanted to hide the error message in a tooltip on a red asterisk (yes, I know that it is not user/mobile friendly). Something like this:

[Screenshot: red asterisk with the error message shown in a tooltip]

We have @Html.EditorFor(), so perhaps there will be an easy way… Nope, there isn’t. We have to do some work, either through JavaScript and CSS or by creating a separate helper.

JavaScript & CSS way

This is the easier way and it works: hide the message text with CSS, show an asterisk instead, and move the message into a tooltip (e.g. the title attribute) with a few lines of JavaScript.

C# way

Changing the generated markup to something like

<span data-error-message="Error message" class="field-validation-error">*</span>

would work too.

Well, it seems that there is no easy way to get the validation error message for a specified property of a model. The code bubbles through several calls and ends up in the ValidationMessageHelper method in the file ValidationExtensions.cs.

I can change either the tag or the CSS class of the generated element, and I can of course add other attributes through the htmlAttributes dictionary, but that is about it. I can’t get the message itself (unless I do the work myself).
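If you do the work yourself, the markup-building part is simple; here is a minimal sketch. ValidationMarkup and AsteriskSpan are hypothetical names of my own, and a real helper would first pull the message out of ModelState inside an HtmlHelper extension:

```csharp
using System;
using System.Net;

// Hypothetical helper: given the validation message you extracted yourself,
// build the asterisk span with the message tucked into an attribute.
public static class ValidationMarkup
{
    public static string AsteriskSpan(string errorMessage)
    {
        if (string.IsNullOrEmpty(errorMessage))
        {
            return string.Empty; // valid field, render nothing
        }
        return string.Format(
            "<span data-error-message=\"{0}\" class=\"field-validation-error\">*</span>",
            WebUtility.HtmlEncode(errorMessage));
    }
}
```

The CSS/JavaScript side then only has to turn data-error-message into a tooltip.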

I have found some nice info about how to generate a custom summary, and someone who needed to create a custom generated validation message with an icon (using @Html.ValidationIconFor(model => model.property)). It is a considerable amount of basically copy and paste code.

Conclusion

AFAICT there is no easy way to have a template or something similar for the validation message. As long as screen real estate is not an issue, try to stick with the default.

 

NHibernate using .NET 4 ISet

NHibernate 4.0 (released on 2014-08-17) has brought us support for the .NET 4 ISet<> collections, thus freeing us from the tyranny of the Iesi package. But long before that, there was a way to use the .NET 4 ISet in your NHibernate 3 projects:

NHibernate.SetForNet4

SetForNet4 is a NuGet package you can install into your application. Once you install it, there will be a new file in your project with an implementation of an ICollectionTypeFactory that supports System.Collections.Generic.ISet<> instead of Iesi.Collections.Generic.ISet<>. Other than that, it is basically a copy of the original DefaultCollectionTypeFactory.

All you need to do after installation is to set the configuration property “collectiontype.factory_class” to the assembly qualified name of the created factory, make sure the dll with the created collection factory can be loaded (if you have it in a separate project) and all will be peachy.

I had slight trouble with the configuration. I have the configuration spread over XML and code, and the comment at the top of the created file said to add this line to my configuration:

//add to your configuration:
//configuration.Properties[Environment.CollectionTypeFactoryClass]
//        = typeof(Net4CollectionTypeFactory).AssemblyQualifiedName

Since it was an integral part of the configuration (not the db dialect or something), I put it in the code:

var configuration = new Configuration().Configure();
configuration.Properties[Environment.CollectionTypeFactoryClass]
     = typeof(Net4CollectionTypeFactory).AssemblyQualifiedName;

… and it didn’t work. I had to set the factory before I configured from the XML:

var configuration = new Configuration()
    .SetProperty(Environment.CollectionTypeFactoryClass, typeof(Net4CollectionTypeFactory).AssemblyQualifiedName)
    .Configure();

I am not exactly sure why, but I think it is because I also have assemblies with *.hbm.xml mapping files specified in the XML using <mapping assembly="EntityAssembly" /> tags.

The configuration code has to set the collection factory of the bytecode provider before the mapped assemblies are processed.

Downside

It should be noted that with the SetForNet4 package you can’t use both the Iesi and the .NET 4 collections at the same time, thus you need to replace Iesi in the whole application. Also, we have NH 4.0 now, so doing this is kind of pointless, but I did it before 4.0 and I don’t have time to upgrade to 4.0 and check that my app still works as advertised.


AutoMapper queryable extensions

How to generate a LINQ query for your DTOs

AutoMapper is a really cool library that allows us to map one object to another, e.g. when passing objects through the layers of our application: we work with different objects in different layers and have to map them from one layer to another, e.g. from a business object to a viewmodel.

All is good and well for POCOs, not so much for entity objects. AutoMapper tries to map everything using reflection, so a property path like Project.Code can turn into ProjectCode, but that is troublesome with an ORM, where querying such a property means loading another entity from the database.

I am using the NHibernate Linq provider, which fetches only the columns we actually ask for from the database, so it would be nice to take a DTO type and an entity type and magically create a Linq mapping expression from one to the other that can be used by the NHibernate Linq provider.

// Blog entity
public class Blog {
  public virtual int Id {get;set;}
  public virtual string Name {get;set;}
}

// Post entity
public class Post {
  public virtual int Id {get;set;}
  public virtual string Title {get;set;}
  public virtual DateTime Created {get;set;}
  public virtual string Body {get;set;}
  public virtual Blog Blog {get;set;} 
}

public class PostDto {
  public string BlogName {get;set;} 
  public string Title {get;set;}
  public string Body {get;set;}
}

public class BlogRepository {
  private readonly ISession session;

  public PostDto GetPost(int id)
  {
    return (
      from post in session.Query<Post>()
      where post.Id == id
      select 
        new PostDto                  // This is an expression I want to generate
        {                            // in this case, I have 3 properties,
          BlogName = post.Blog.Name, // in my project, I have 5-30 in each entity
          Title = post.Title,        // and many entities. Repeatable code that
          Body = post.Body           // should be generated.
        }                            //
      ).Single();
  }
}

Remember, such an expression will require only the necessary fields, so Id or Created won’t be part of the SQL query (see NHibernate Linq query evaluation process for more info).

 

Queryable Extensions

AutoMapper provides a solution to this problem: queryable extensions (QE). They allow us to create such an expression and they even solve the SELECT N+1 problem. It is no panacea, but it solves most of my troubles.

Notice the key difference: normal automapping will traverse the object graph and return a mapped object, while QE will only generate a mapping expression.
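Under the hood, what QE generates is essentially the projection expression below. This is a hand-written sketch with local stand-ins for the Blog/Post/PostDto classes, so it is self-contained and runnable without AutoMapper or NHibernate:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

// Local stand-ins for the entities and DTO from the example above.
public static class ProjectionSketch
{
    public class Blog { public string Name { get; set; } }
    public class Post
    {
        public string Title { get; set; }
        public string Body { get; set; }
        public Blog Blog { get; set; }
    }
    public class PostDto
    {
        public string BlogName { get; set; }
        public string Title { get; set; }
        public string Body { get; set; }
    }

    // The kind of expression QE builds for us. Against a real IQueryable a
    // Linq provider translates it to SQL; here we compile and run it in memory.
    public static readonly Expression<Func<Post, PostDto>> Projection =
        post => new PostDto
        {
            BlogName = post.Blog.Name,
            Title = post.Title,
            Body = post.Body
        };

    public static PostDto Project(Post post)
    {
        return new[] { post }.AsQueryable().Select(Projection).Single();
    }
}
```

The whole point of QE is that nobody has to write (or maintain) this expression by hand for every entity/DTO pair.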

Example

I will provide an example using the entities above:

  1. Install the NuGet package for AutoMapper; the QueryableExtensions are part of the package and live in the AutoMapper.QueryableExtensions namespace
  2. Create a test
  3. Create a mapping
    Mapper.CreateMap<Post, PostDto>();
  4. Query using the QE projection (compare with the hand-made query above)
    var postDto = 
       session.Query<Post>().Where(post => post.Id == id)
       .Project().To<PostDto>()
       .Single();
  5. Observe the generated SQL:
    select
        blog1_.Name as col_0_0_,
        post0_.Title as col_1_0_,
        post0_.Body as col_2_0_ 
    from
        Post post0_ 
    left outer join
        Blog blog1_ 
            on post0_.Blog=blog1_.Id 
    where
        post0_.Id=@p0;
    @p0 = 1 [Type: Int32 (0)]
    

    It is no different from the SQL generated by the hand-made query. It only queries what is necessary, without the boilerplate code.

  6. Remove boilerplate code from your app.

You can also do more complex transformations, although QE are slightly more limited than the in-memory AutoMapper capabilities – go and read the wiki.

This is a really cool extension that will remove quite a lot of boilerplate code, so give it a try!


NHibernate Linq query evaluation process

I like the Linq provider of NHibernate, but I have encountered weird behavior and had to dive into how NHibernate creates SQL queries from Linq queries and how Linq provider extensions are created. If you are only interested in how to write an NHibernate Linq function (e.g. how to check whether a text column matches some regular expression in the database), this is not the post to read – read Michael’s post instead. This one is about how NHibernate turns a Linq query into the result of the query.

I will be mostly dealing with the select clause of Linq, but the others are quite similar.

Problem

I had a query that checked an entity for nullness before creating a derived object from some properties of the null-checked entity:

from post in session.Query<Post>()
select new PostModel
{
  Body = post.Body,
  Blog = post.Blog != null ? new EntityReference
    {
      Id = post.Blog.Id,
      Name = post.Blog.Name
    } : null
}

However, the generated SQL had more fields than necessary; in fact, it had all the fields of the Blog entity:

-- The last four columns are properties of Blog that are not needed
select
    post0_.Body as col_0_0_,
    blog1_.Id as col_1_0_,
    post0_.Blog as col_2_0_,
    blog1_.Name as col_3_0_,
    blog1_.Id as Id0_,
    blog1_.Name as Name0_,
    blog1_.Subtitle as Subtitle0_,
    blog1_.Created as Created0_ 
from
    Post post0_ 
left outer join
    Blog blog1_ 
        on post0_.Blog=blog1_.Id

It doesn’t look like a big problem, but that is only because this is a demonstration. In reality I was creating several references, and for each entity the query required all its properties. The result was a query with 100+ columns (4 entities * 20-30 columns per entity) and quite slow execution.

I had no idea why that happened, because a similar query without the null check worked fine and NHibernate generated SQL without unnecessary fields:

from post in session.Query<Post>()
select new PostModel
{
    Body = post.Body,
    Blog = new EntityReference
        {
            Id = post.Blog.Id,
            Name = post.Blog.Name
        }
}
-- 
select
    post0_.Body as col_0_0_,
    post0_.Blog as col_1_0_,
    blog1_.Name as col_2_0_ 
from
    Post post0_ 
left outer join
    Blog blog1_ 
        on post0_.Blog=blog1_.Id

Explanation

I had to dive into the NHibernate source code, because Google turned up nothing and all books on NHibernate are either old (pre 3.0 = without Linq) or only go through some simple Linq queries. The official NHibernate documentation doesn’t even mention how to create Linq extensions, much less how the process works. Yay me.

Well, here is a short overview of how NH turns a Linq query into a result:

1. Rewriting query to be more HQL friendly

We have a Linq expression (= Linq query) object supplied by the user. Its representation is identical to the representation in the code, which can be hard to process, so NHibernate takes the query and rewrites it a little to make it easier to process. Here is an example of how a query looks before and after rewriting:

/* Source query */
from Post post in value(NHibernate.Linq.NhQueryable[Nerula.Data.Post])
select new PostModel() {
  Body = [post].Body,
  Blog =
    IIF(([post].Blog != null),
    new EntityReference() 
    {
      Id = [post].Blog.Id,   
      Name = [post].Blog.Name    
    },   
    null)
}

/* Rewritten query */
from Post post in value(NHibernate.Linq.NhQueryable[Nerula.Data.Post]) 
from Blog _0 in [post].Blog 
select new PostModel() 
{
  Body = [post].Body,
  Blog = 
    IIF(([_0] != null),
      new EntityReference() 
      {
        Id = [post].Blog.Id, 
        Name = [_0].Name 
      }, 
      null)
}

2. Find sub-expressions representable in HQL

The Linq query is represented as a tree of sub-expressions.

For example: new PostModel() {Body = [100001].Body, Blog = IIF(([100001] != null), new EntityReference() {Id = [100001].Blog.Id, Name = [100001].Name}, null)} is a Linq expression of type ExpressionType.MemberInit. It consists of several sub-expressions:

  • new PostModel() is an expression of type ExpressionType.New
  • [100001].Body is an expression of type ExpressionType.MemberAccess
  • IIF(([100001] != null), new EntityReference() {Id = [100001].Blog.Id, Name = [100001].Name}, null) is an expression of type ExpressionType.Conditional. This expression has its own sub-expressions.

It is kind of obvious that expressions form a tree.

NHibernate will recursively visit all Linq sub-expressions and get a list of all expressions that can be evaluated in the database using an HQL select statement – e.g. registered methods, entities and their properties, but not unregistered function calls or constants (e.g. null, 1, "Hello World", post.Body.Contains("Hello")).
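That recursive visit can be sketched with the BCL ExpressionVisitor. This is a toy illustration, not NHibernate’s actual code (which also decides HQL representability at each node):

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// Collects the ExpressionType of every node visited, demonstrating that a
// select projection really is a tree of sub-expressions.
public class NodeLister : ExpressionVisitor
{
    public readonly List<ExpressionType> Nodes = new List<ExpressionType>();

    public override Expression Visit(Expression node)
    {
        if (node != null)
        {
            Nodes.Add(node.NodeType);
        }
        return base.Visit(node);
    }
}
```

Visiting the body of a projection like `p => new PostModel { Body = p.Body }` yields a MemberInit root with New and MemberAccess children – exactly the node kinds listed above.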

Here is a screenshot of all sub-expressions from the query above. NHibernate extracts the following expressions (no method calls here, but there could be):

[Screenshot: the extracted candidate sub-expressions]

BTW, this is where the magic that allows users to add their own Linq provider extensions happens.

3. Replace database subexpressions

Now we have a list of sub-expressions that can be queried directly in the database. We take our Linq query and rewrite it a little more: NHibernate recursively goes through the Linq expression and replaces each HQL-representable sub-expression with an item from an array. The reason is simple: once NHibernate performs the DB query, it will have the results in an array, and when it evaluates the Linq query, the result of the original sub-expression (e.g. post.Body.Contains("Subexpression")) will already have been computed by the database and will be an item in that array (e.g. true or false). That is the whole point – doing the work in the database.

The select clause will now have everything queryable in the db replaced with an item in an array:

new PostModel() {
  Body = Convert(input[0]), 
  Blog = IIF((Convert(input[1]) != null), 
         new EntityReference() 
         {
           Id = Convert(input[2]), 
           Name = Convert(input[3])
         }, 
         null)
}

4. Perform database query

We have the list of sub-expressions we are interested in from step 2; NHibernate does its magic, queries the database and gets the results into the input array.

5. Evaluate Linq query

We have the rewritten Linq query from step 3 and the data used in it from step 4. We can actually evaluate the query! NHibernate does so and returns the result.

Back to the problem

Well, that is about it. Where is the problem? Why did I get all these extra columns in my query?

Easy: NHibernate doesn’t recognize the == operator in a select clause and therefore has to load the whole Blog entity into memory (that is why it loads all the properties), where it is compared with null.

I thought about why NHibernate behaves like this, and it seems reasonable after a while. What if the comparison were between an entity and some in-memory instance of an entity? What about an overloaded operator ==? The database would have no idea what to do, so we can’t have a blanket == in the NHibernate Linq provider.

Important: This is valid for the Linq select clause; the where clause doesn’t work like that. The where clause works as you would expect, unlike select.

  • If you use post.Blog != null in the Linq where clause, NHibernate will correctly translate it to SQL where post0_.Blog is not null.
  • If you use post.Blog == memoryBlog, you get where post0_.Blog=@p0.

Solution

Nope, we are not lost. We can create a custom Linq provider extension for checking null. NHibernate will recognize our method EntityState.Exists as something that can be performed in the database and replace the load of the whole entity with an extra column in the query:

public static class EntityState {
    public static bool Exists(EntityBase entity) {
        return !ReferenceEquals(entity, null);
    }
}

public class EntityStateGenerator : BaseHqlGeneratorForMethod
{
    public EntityStateGenerator()
    {
        SupportedMethods = new[] { 
            ReflectionHelper.GetMethod(() => EntityState.Exists(null)) 
        };
    }

    public override HqlTreeNode BuildHql(MethodInfo method, Expression targetObject,
        ReadOnlyCollection<Expression> arguments, HqlTreeBuilder treeBuilder, 
        IHqlExpressionVisitor visitor)
    {
        return treeBuilder.IsNotNull(visitor.Visit(arguments[0]).AsExpression());
    }
}

public class MyLinqToHqlGeneratorsRegistry : DefaultLinqToHqlGeneratorsRegistry
{
    public MyLinqToHqlGeneratorsRegistry()
    {
        var generator = new EntityStateGenerator();
        foreach (var method in generator.SupportedMethods)
        {
            RegisterGenerator(method, generator);
        }
    }
}

We also have to register the generator (for XML configuration, the property name is linqtohql.generatorsregistry), or you can simply add it to the configuration in code:

configuration.SetProperty(NHibernate.Cfg.Environment.LinqToHqlGeneratorsRegistry, 
  typeof(MyLinqToHqlGeneratorsRegistry).AssemblyQualifiedName);

Now, the query

from post in session.Query<Post>()
select new PostModel
{
    Body = post.Body,
    Blog = EntityState.Exists(post.Blog)
        ? new EntityReference
        {
            Id = post.Blog.Id,
            Name = post.Blog.Name
        }
        : null
}

Will result in the following SQL, because NHibernate replaces the sub-expression EntityState.Exists(post.Blog) with the result of a SQL case statement per steps 3, 4 and 5:

select
    post0_.Body as col_0_0_,
    case 
        when blog1_.Id is not null then 1 
        else 0 
    end as col_1_0_,
    post0_.Blog as col_2_0_,
    blog1_.Name as col_3_0_ 
from
    Post post0_ 
left outer join
    Blog blog1_ 
        on post0_.Blog=blog1_.Id

The reason for the weird query was quite simple and, in hindsight, understandable. One extra column is OK with me. I just wish it was documented.

Using NHibernate readonly property accessor

Recently I needed to combine several columns from the database into a single string field. More precisely, I had a User entity with FirstName and Surname properties and I needed a FullName property when filling the data into a ViewModel. The entity itself has many properties and its hydration is quite slow.

public class User : EntityBase
{
    public virtual string FirstName {get;set;}
    public virtual string Surname {get;set;}
    public virtual string FullName
    {
        get { return string.Format("{0} {1}", FirstName, Surname); }
    }
    // many other properties
}

Querying a view model

I wanted to create the view model in a single Linq query. The query should get from the database only the fields that are necessary, not whole entities (e.g. only the Title and Body).

var viewModelQuery = 
    from post in session.Query<Post>()
    where post.Id == postId
    select new PostViewModel
    {
        Title = post.Title,
        Text = post.Body,
        CreatedBy = string.Format("{0} {1}", 
            post.CreatedBy.FirstName, post.CreatedBy.Surname)
    };

The corresponding query looks like this – notice it is a single query that gets only what is needed; no unnecessary properties of the User or the Post are fetched. This is the reason why I really like the NHibernate Linq provider, as opposed to the Criteria API or QueryOver.

select
    post0_.Title as col_0_0_,
    post0_.Body as col_1_0_,
    user1_.first_name as col_2_0_,
    user1_.surname as col_3_0_ 
from
    Post post0_ 
left outer join
    User user1_ 
        on post0_.CreatedBy=user1_.Id 
where
    post0_.Id=@p0;

However, there is a flaw: the string.Format construction of CreatedBy. I would like to use the FullName property, but that is not possible, because it is not mapped, and using CreatedBy = post.CreatedBy.FullName would throw a mapping exception.

Querying full name

We can get around that using a formula in the mapping that has the same result as the property on the entity class:

<class name="User">
    <id name="Id">
      <generator class="increment" />
    </id>
    <property name="FirstName" column="first_name" />
    <property name="Surname" column="surname" />
    <property name="FullName" access="readonly" formula="(first_name || ' ' || surname)" />
</class>

Notice the use of readonly access, implemented in NH-1621. It is an accessor intended for querying the database, and thanks to it we can use the property directly in a query. The prerequisites are an existing property with a getter (no need for a setter) that is semantically the same as the mapped column. In our case the property is mapped not to a column but to a formula, yet the result is the same: we can use it directly in the query:

var viewModelQuery = 
    from post in session.Query<Post>()
    where post.Id == postId
    select new PostViewModel
    {
        Title = post.Title,
        Text = post.Body,
        CreatedBy = post.CreatedBy.FullName
    };
var viewModel = viewModelQuery.Single();
Assert.AreEqual("Post title", viewModel.Title);
Assert.AreEqual("Text of the post", viewModel.Text);
Assert.AreEqual("John Smith", viewModel.CreatedBy);

The SQL query uses the formula instead of two separate columns, and as before it does so without fetching unnecessary properties:

select
    post0_.Title as col_0_0_,
    post0_.Body as col_1_0_,
    (user1_.first_name || ' ' || user1_.surname) as col_2_0_ 
from
    Post post0_ 
left outer join
    User user1_ 
        on post0_.CreatedBy=user1_.Id 
where
    post0_.Id=@p0

Now we can safely add a middle name or reverse the order of the full name simply by modifying the formula in the mapping file and the getter property, and the change will appear everywhere, as opposed to hunting down every string.Format call.

Testing NHibernate queries using IEnumerable

NHibernate has several ways to query a database; the easiest one to use is the Linq provider. I don’t like the other ways very much:

  • HQL (Hibernate Query Language) – You have to write a string with no type checking, e.g. "select Name from Cat".
  • Criteria API – Uses magic strings, rather awkward for more complex queries.
  • QueryOver – It doesn’t use magic strings like the Criteria API, but I find the alias variables disgusting, plus it makes more complex queries (e.g. multiple sub-queries) rather difficult and unfriendly.
  • SQL query – Just plain SQL; IMO the best choice when Linq can’t do the job.

NHibernate Linq

The NHibernate Linq provider is great: you can search through entities using Linq, everything is statically checked, and the intent is clear. NHibernate uses IQueryable, not IEnumerable. The difference is that IQueryable stores the expressions used in the query, and these expressions are later used to create the SQL query that hits the database. IEnumerable always pulls objects from the previous method in the chain, thus any filtering is done not in the database, but in memory.

var oldCatNames = 
  from cat in session.Query<Cat>()
  where cat.Age >= 12
  select cat.Name;
return View(oldCatNames.ToArray());

This is an example of an NHibernate Linq query getting all cats that are old. NHibernate generates a SQL statement, executes it and transforms the result into an array of names. The key question is: how do we test such queries?

We can

  1. Use our production DBMS; each test has to fill in the data and run the query against the database. I am doing this in my project with Oracle, and TBH it is rather slow (you have to connect to the db for each test – several seconds), you have to clear/fill in a lot of data as required by constraints (most of the time the referenced data is not required by the test), and although it has merit (e.g. when testing stored procedures, more complex queries and so on), for simple queries (= most queries) it seems like overkill.
  2. Use an in-memory DBMS (e.g. SQLite) and run tests against it. I am doing this for my playground project, but IMO it is even worse than the first option; the only benefit is speed, and the drawbacks are significant. You still have to fill the database, and the engine is different from the production one. For example, sequences are not supported by SQLite – I am using them in my mapping files, so now what? What about stored procedures? SQLite has a lousy implementation of time functions, and so on.
  3. Use IEnumerable instead of IQueryable and run tests in memory, without any DBMS at all.

I am going to explore the third option, because it will correctly test most Linq queries with quite little code.

SQL semantic vs IEnumerable semantic

Before we dive into how to actually do it, there is an important thing to remember:

The result of a SQL query and of an IEnumerable query may differ, although the query looks exactly the same in the code.

NHibernate (and Entity Framework) use SQL semantics, which sometimes differ from IEnumerable semantics; the most obvious case is aggregation methods such as Sum. Let us consider the following query that gets the total amount of all conjectures:

int sum = session.Query<Conjecture>()
  .Sum(conjecture => conjecture.Amount);

What is the result when the table for the Conjecture entity is empty? No, it is not 0, it is a GenericADOException. The reason is SQL semantics. NHibernate will infer from conjecture.Amount that the result of the query should be an int. It constructs the query and tries to cast the result to an int. But the result of the SQL query (select cast(sum(conjecture0_.Amount) as INT) as col_0_0_ from Conjecture conjecture0_) on an empty table is not 0, but null, per the definition of SUM in SQL. Thus the exception.

This is the intended behavior per bug NH-3113. In order to get zero, we have to change the type of the inferred result and return 0 when the result is null:

int sum = session.Query<Conjecture>()
  .Sum(conjecture => (int?)conjecture.Amount) ?? 0;

When using an IEnumerable masquerading as an IQueryable for tests, we must be aware of this difference.
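The Sum difference is easy to demonstrate: LINQ to Objects (which our in-memory tests will exercise) happily returns 0 for an empty sequence, so a test over an empty in-memory list would pass where the real SQL-backed query throws:

```csharp
using System;
using System.Linq;

public static class SumSemantics
{
    // In-memory semantics: Sum over an empty int sequence is 0, no exception.
    // The SQL SUM over an empty table is NULL, which is why the real query
    // above blows up with a GenericADOException instead.
    public static int EmptyInMemorySum()
    {
        var empty = Enumerable.Empty<int>().AsQueryable();
        return empty.Sum();
    }
}
```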

Testing query

Query is not a method of ISession, but an extension method in the NHibernate.Linq.LinqExtensionMethods class, and testing extension methods in C# is painful – they are basically static methods called on an instance. The obvious solution is to use your own facade that hides the NHibernate ISession, so you are using your own interfaces that isolate you from quirks such as this one.

If you are using a facade, it is really simple to mock the result of a query: just take any IEnumerable and use the AsQueryable extension method from the Queryable class (and use a better name than ISessionFacade):

Conjecture[] conjectures = new[] 
{
  new Conjecture("Minor work", 10),
  new Conjecture("Bug fix", 50),
  new Conjecture("Simple feature", 100),
  new Conjecture("Complicated feature", 500),
};
var sessionFacade = new Mock<ISessionFacade>();
sessionFacade.Setup(x => x.Query<Conjecture>())
  .Returns(conjectures.AsQueryable());
// Here would be the tested method, I am inlining it
var largeConjectureNames =
  from conjecture in sessionFacade.Object.Query<Conjecture>()
  where conjecture.Amount >= 100
  select conjecture.Name;
var expected = new[] { "Simple feature", "Complicated feature" };
CollectionAssert.AreEqual(expected, largeConjectureNames.ToArray());

If you are using ISession from NHibernate and the Query extension method from NHibernate.Linq for your queries, you either have to replace the ISession with a facade or mock the Query extension method. I am mocking the extension method, because our project is not using a sane DI system (my next task).

Mocking Query method

Let me start by saying this: Mocking extension method is horrible.

Extension methods have their place, e.g. the string class doesn’t have a Truncate method and you can’t just use Substring(0, length), because it will throw an ArgumentOutOfRangeException if length is greater than the length of the string.
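A sketch of such a Truncate extension (my own illustration, not a BCL method):

```csharp
using System;

public static class StringExtensions
{
    // Unlike Substring(0, maxLength), Truncate never throws when maxLength
    // exceeds the string's length; it just returns the string unchanged.
    public static string Truncate(this string value, int maxLength)
    {
        if (string.IsNullOrEmpty(value) || value.Length <= maxLength)
        {
            return value;
        }
        return value.Substring(0, maxLength);
    }
}
```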

But! You should never ever use an extension method for anything that has a potential to be mocked. I have no idea what the NHibernate developers were thinking when they used one for the method that returns the result of a query.

So, how to mock the Query method?

1. Use wrapper

The Query method is in the NHibernate.Linq namespace, so if the namespace is not imported, the method is not found and the code does not compile. Include your own:

namespace Nerula.Linq
{
  public static class NHibernateLinqExtension {
    public static IQueryable<TEntity> Query<TEntity>(this ISession session)
    {
      return NHibernate.Linq.LinqExtensionMethods.Query<TEntity>(session);
    }
  }
}

Replacing using NHibernate.Linq with using Nerula.Linq won’t change anything, except that the app is now calling the NHibernate Query through our wrapper.

2. Call mockable interface from wrapper

Instead of just calling another static method, create an interface that performs the static calls, and create a default implementation of the interface that calls the original extension methods:

namespace Nerula.Linq
{
  public interface ISessionLinq
  {
    IQueryable<TEntity> Query<TEntity>(ISession session);
  }
  public static class NHibernateLinqExtension {
    internal static ISessionLinq SessionLinq {get;set;}
        
    static NHibernateLinqExtension()
    {
      SessionLinq = new NHibernateSessionLinq();
    }

    private class NHibernateSessionLinq : ISessionLinq
    {
      public IQueryable<TEntity> Query<TEntity>(ISession session)
      {
        return NHibernate.Linq.LinqExtensionMethods.Query<TEntity>(session);
      }
    }

    public static IQueryable<TEntity> Query<TEntity>(this ISession session)
    {
      return SessionLinq.Query<TEntity>(session);
    }
  }
}

Notice that SessionLinq has internal access; you can configure your test project to have access to internal members, or simply change the property to public. Now we have a default implementation that calls the static methods for the program, but we can also change the implementation during tests and return whatever we want.
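Granting the test project access to the internal property is a single assembly-level attribute in the main project (the test assembly name below is just a placeholder):

```csharp
using System.Runtime.CompilerServices;

// Put this in AssemblyInfo.cs (or any file) of the project that contains
// NHibernateLinqExtension; "Nerula.Tests" is a placeholder assembly name.
[assembly: InternalsVisibleTo("Nerula.Tests")]
```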

3. Mock your queries

Now we can replace the default implementation of ISessionLinq with a mocked one and finally use in-memory lists and other IEnumerable goodies to mock the queries.

ISession session = Mock.Of<ISession>();
Mock<ISessionLinq> sessionLinq = new Mock<ISessionLinq>(MockBehavior.Strict);

Conjecture[] conjectures = new[] 
{
  new Conjecture("Minor work", 10),
  new Conjecture("Bug fix", 50),
  new Conjecture("Simple feature", 100),
  new Conjecture("Complicated feature", 500),
};

sessionLinq.Setup(x => x.Query<Conjecture>(session))
  .Returns(conjectures.AsQueryable());
// Here is the change of the query provider
NHibernateLinqExtension.SessionLinq = sessionLinq.Object;

var largeConjectureName =
  from conjecture in session.Query<Conjecture>()
  where conjecture.Amount >= 100
  select conjecture.Name;

CollectionAssert.AreEqual(new[] { "Simple feature", "Complicated feature" }, largeConjectureName.ToArray());

4. Restore default property

Since we are changing a static property, we must make sure to change it back after the test has run; otherwise all tests would have to set the correct implementation of ISessionLinq themselves. E.g. NUnit reuses the instance of a test fixture for all tests, so if one test mocks the Query method while another uses the NHibernate.Linq Query method, they would be order dependent. NUnit has action attributes that make this very simple.
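Whether you use an action attribute or a plain [TearDown], the save-and-restore logic is the same; here is a tiny generic sketch of it (SwapScope is my own name, not an NUnit type):

```csharp
using System;

// Swaps a statically held value for the duration of a test and restores the
// original on Dispose, so a test cannot leave a mocked implementation behind.
public sealed class SwapScope<T> : IDisposable
{
    private readonly Action<T> setter;
    private readonly T original;

    public SwapScope(Func<T> getter, Action<T> setter, T replacement)
    {
        this.setter = setter;
        original = getter();
        setter(replacement);
    }

    public void Dispose()
    {
        setter(original); // restore the default implementation
    }
}
```

In a test: `using (new SwapScope<ISessionLinq>(() => NHibernateLinqExtension.SessionLinq, v => NHibernateLinqExtension.SessionLinq = v, sessionLinq.Object)) { … }`.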

Conclusion

I have found that using IEnumerable to test NHibernate Linq queries makes writing tests much easier and faster. You can’t use it to test the other NHibernate APIs used to access the database, and you have to be careful about SQL vs IEnumerable semantics.

The Query extension method is a horrible design, and if you are using NHibernate, you should consider rolling your own facade. Not only for mocking queries: Entity Framework is getting better and better, and a possible switch would be much smoother. NHibernate has recently released version 4.0, but except for the support of BCL collections, I don’t find the release notes very newsworthy.

Limit your abstractions

While trying to find a better alternative to our “pass-the-ball” architecture (webform → code behind → presenter → controller) for my app, I have stumbled upon an interesting bite-sized series, Limit your abstractions, by Ayende.

It basically starts with code from the ndddsample and shows what is wrong (in his opinion) with it (events, too much useless abstraction).

Series

  1. Analyzing a DDD application – The abstraction is a non-abstracted abstraction. Basically only extracted interfaces.
  2. Application Events–the wrong way
  3. Application Events–what about change? – What if we have new state, e.g. lost cargo
  4. Application Events–Proposed Solution #1
  5. Reflections on the Interface Segregation Principle
  6. Application Events–Proposed Solution #2–Cohesion
  7. Application Events–event processing and RX
  8. You only get six to a dozen in the entire app
  9. Commands vs. Tasks, did you forget the workflow?
  10. All cookies looks the same to the cookie cutter
  11. So what is the whole big deal about?
  12. Refactoring toward reduced abstractions
  13. The key is in the infrastructure…
  14. And how do you handle testing?

Events

public override void InspectCargo(TrackingId trackingId)
{
  Validate.NotNull(trackingId, "Tracking ID is required");

  Cargo cargo = cargoRepository.Find(trackingId);
  if (cargo == null)
  {
    logger.Warn("Can't inspect non-existing cargo " + trackingId);
    return;
  }

  HandlingHistory handlingHistory = handlingEventRepository.LookupHandlingHistoryOfCargo(trackingId);

  cargo.DeriveDeliveryProgress(handlingHistory);

  if (cargo.Delivery.Misdirected)
  {
    applicationEvents.CargoWasMisdirected(cargo);
  }

  if (cargo.Delivery.UnloadedAtDestination)
  {
    applicationEvents.CargoHasArrived(cargo);
  }
  cargoRepository.Store(cargo);
}

This is an actual business method that implements the business logic. It violates the Single Responsibility Principle (it looks up the delivery history and dispatches events) and the Open/Closed Principle (if we add or change a cargo state, e.g. cargo is lost, we have to modify the class).

There are of course many possible solutions to event handling and dispatching, some are discussed. I didn’t know about Reactive Extensions, rather nice.

Non-abstracted abstraction

According to Ayende, the code should have a very limited number (<10) of abstractions; he proposes that the following abstractions are good enough for most projects.

  1. Controllers
  2. Views
  3. Entities
  4. Commands
  5. Tasks
  6. Events
  7. Queries

Creating an abstraction always has a cost, sometimes small, sometimes large, see Abstract Factory Factory Façade Factory. Use your abstractions carefully.

My notes

Definitely worth reading, but I wonder how the proposed reduced solution works in a real project with more complex operations and larger teams.

Basically, he puts the code into a self-contained Command class that contains all the logic and calls it from the MVC action. The queries are also self-contained classes that get their result using the Query method of the Command.

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Register(string originUnlocode, string destinationUnlocode, DateTime arrivalDeadline)
{
    var trackingId = ExecuteCommand(new RegisterCargo
    {
        OriginCode = originUnlocode,
        DestinationCode = destinationUnlocode,
        ArrivalDeadline = arrivalDeadline
    });

    return RedirectToAction(ShowActionName, new RouteValueDictionary(new { trackingId }));
}
public abstract class Command
{
    public IDocumentSession Session { get; set; }
    public abstract void Execute();

    protected abstract TResult Query<TResult>(Query<TResult> query);
}

public abstract class Command<T> : Command
{
    public T Result { get; protected set; }
}

public class RegisterCargo : Command<string>
{
    public override void Execute()
    {
        var origin = Session.Load<Location>(OriginCode);
        var destination = Session.Load<Location>(DestinationCode);

        var trackingId = Query(new NextTrackingIdQuery());

        var routeSpecification = new RouteSpecification(origin, destination, ArrivalDeadline);
        var cargo = new Cargo(trackingId, routeSpecification);
        Session.Save(cargo);

        Result = trackingId;
    }

    public string OriginCode { get; set; }
    public string DestinationCode { get; set; }
    public DateTime ArrivalDeadline { get; set; }
}

In the end, he uses hand-coded mocking, which I find rather distasteful:

public void ExecuteCommand(Command cmd)
{
  if (AlternativeExecuteCommand != null)
    AlternativeExecuteCommand(cmd);
  else
    Default_ExecuteCommand(cmd);
}

It seems much easier and more maintainable to just create an ICommandExecutor or even a virtual method that can be overridden.
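For comparison, such a seam might look like this (a sketch; the names are mine, and Command is the abstract class from the snippet above):

```csharp
// The controller depends on the interface; tests supply a stub,
// production uses the default implementation.
public interface ICommandExecutor
{
    void Execute(Command cmd);
}

public class DefaultCommandExecutor : ICommandExecutor
{
    public void Execute(Command cmd)
    {
        // open a session/transaction, run the command, commit, etc.
        cmd.Execute();
    }
}
```

Replacing the executor in a test is then ordinary constructor injection instead of mutating a delegate field on the controller.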

 

CyanogenMod for HTC Desire C

I have been pondering for a while if I should install CyanogenMod (custom Android ROM) to my phone. In the end, I have decided to give it a shot:

  • My phone is HTC Desire C (HTCDC) – really old and slow. The stock Android worked fine… for a time. After that, it really slowed down.
  • HTC won’t release a new version of Android – It is an old device that is not even sold anymore. It makes no sense for them to invest into a new version of Android (the installed one is 4.0.3) and push it to the customers.
  • HTC is using HTC Sense – Modified Android with a lot of value-added software bloatware, like DropBox and Facebook. Because the bloatware is installed on the system partition, I can’t uninstall it without root. I would also like a stock version of Android.
  • Privacy – The Android permission system is terrible. You can only approve permissions during installation, even if an app needs them once in a blue moon (e.g. sending SMS for two-step verification).
  • The recent “simplification” of permissions – All apps can now access the internet, and you can only grant permissions per category.

I get it, Google is an advertising company – giving users an option to block the ads is completely at odds with their business model. On the other hand, they could at least try to have some balance. Also, most users don’t care. I kind of do, so I decided to root my phone and install CyanogenMod.

CyanogenMod

TinyCM Android

Tiny CyanogenMod (TinyCM) home screen along with a few apps

Android is open source, and that means there are geeks out there working hard to create custom versions of it. Out of them, CyanogenMod is the most popular and best known. It was the obvious choice, but unfortunately, the HTC Desire C is not on the official list of supported devices. It is on the list of unofficially supported devices, but don’t waste time – the ROM in the referenced forum thread doesn’t work (it works for someone, but not for me).

I had success with the MiniCM 10 – V8; in order to install it, you have to follow a rather complicated process.

This is a really high-level guide; it explains more the why than the how. If you want to really install it, you should read How To Install A ROM Or App From Zip File To Android Device From Recovery.

Understanding the partitions

Android is Linux based; it is a normal operating system and it uses several partitions for different tasks. Replacing the stock Android is a process of replacing the content of those partitions. It is well explained on addictivetips. You really should read it in order to understand the process.

Unlock bootloader

First, you have to unlock the bootloader. HTC provides an official way to do it, but you have to get a key from HTC. The key is different for each phone. There is a great step-by-step video for the HTCDC on Youtube.

Unlocking the bootloader will allow you to upload the custom recovery OS to the /recovery partition.
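From what I remember, the official HTCDev unlock boils down to two fastboot commands (the Unlock_code.bin file is e-mailed to you by HTC after you submit the token on htcdev.com):

```shell
# print the identifier token that you submit to HTC on htcdev.com
fastboot oem get_identifier_token

# flash the unlock code received from HTC by e-mail
fastboot flash unlocktoken Unlock_code.bin
```

The phone then shows an on-screen confirmation and factory-resets itself as part of the unlock.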

Installing recovery

Recovery is basically a self-contained OS on a separate partition that is used to update/backup/restore the main Android OS, plus a few other things. The recovery supplied with the phone is usually very limited, so there are other recoveries out there; the best known are Team Win Recovery Project (TWRP) and ClockWorkMod (CWM). Although the HTCDC is among the supported phones for TWRP, it didn’t work for me: version 2.7 didn’t even boot, while later versions booted, but the screen was corrupted and I couldn’t swipe (TWRP is touch based) – e.g. a backup required swiping the screen.

CWM officially doesn’t support the HTCDC, but I have found a version that worked for me (forum thread, recovery image). It has a no-frills interface, but it does the job.

Backup the stock Android

Yes, it is not an option, it is a necessity. I have gone through several ROMs before finding one that works.

A CWM backup will save all partitions (see the section above) from the internal memory to the external SD card:

  • /boot partition (as in boot.img),
  • /recovery partition (as recovery.img)
  • /system partition – it saves the files on the partition as blobs and adds the system.ext4.dup with info on how they fit on the partition
  • /data partition – user data of apps, e.g. your preferences etc.
  • /cache partition – cache of Dalvik bytecode compiled to ARM native code or something like that. The cache partition can be deleted.

Do a full wipe

You can find this in most threads with custom ROMS: Do a FULL WIPE first. Full Wipe means format /system, /data, /cache.

That basically means go to recovery mode and format the /system, /data and /cache partitions. I have also wiped the Dalvik cache (CWM – Advanced – Wipe Dalvik Cache), but I believe it is redundant, because it is stored on one of the formatted partitions.

This step is necessary because old files can interfere with the new ones.

Installing the custom ROM

The custom ROM (at least MiniCM V8 and a few others) consists of two parts:

  • Files for /system partition that will be copied to the /system partition
  • boot.img with new kernel and other stuff

You have to have the ROM file on the SD card beforehand. Just choose “install from zip” from the CWM menu, select zip file on your SD card and it will install the files to the /system.

After that, boot into the bootloader and flash the boot.img from the ROM to the /boot partition.
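On the HTCDC this last step looked roughly like the following (commands from memory; the image name depends on the ROM):

```shell
# reboot the phone into the bootloader (or hold Volume Down + Power)
adb reboot bootloader

# flash the kernel image shipped with the ROM to the /boot partition
fastboot flash boot boot.img

# reboot into the new system
fastboot reboot
```

The /system files were already installed from the zip in the previous step, so after this reboot the phone should come up in the new ROM.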

Conclusion

So far the ROM mostly works; it seems faster, and I could install the Xposed framework and the XPrivacy module, giving me more control over which app can do what (e.g. I can deny an app access to the internet – mostly ads).

 

There are some small errors:

  • When downloading from the Google store, I get error 941 the first time I try to download an app. It works the second time. There are quite a few people on the net with the same problem, and it should be resolvable.
  • The panorama mode of the photo app is broken – The image shows green horizontal lines.

It should also be noted that the apps in CyanogenMod are from the Android Open Source Project (the vanilla Android, without any Google stuff like Gmail or Google Play), and because of the tightening grip of Google, some features are missing; for more info read Google’s iron grip on Android: Controlling open source by any means necessary. It is a great corporate strategy, but it shows the reality of open source vs. money. Money wins all the time – every Google stockholder approves.

Overall I am satisfied, although I had hoped for a much easier process. The reading and the process itself took me at least 6 hours.

Circular Dependency in Unity

I have been toying around with the idea of replacing the ObjectBuilder in the WCSF with a proper dependency injector. Unity seemed like the obvious choice (someone even tried porting WCSF to Unity), so I have been reading Dependency Injection with Unity.

During my research I stumbled upon the ugly side of Unity: it can’t detect circular dependencies. I thought that this was only true in some old version (it would be hell trying to find a circular dependency with only a StackOverflowException), so I tested it out. After all, “One good test is worth a thousand expert opinions”:

public interface IA {}
public interface IB {}

public class A : IA
{
  public A(IB ib) {}
}

public class B : IB
{
  public B(IA ia) {}
}

[TestMethod]
public void UnityContainer()
{
  using (var container = new UnityContainer())
  {
    container.RegisterType<IA, A>();
    container.RegisterType<IB, B>();
    container.Resolve<IA>();
  }
}

It crashed, hard. The test neither passed nor failed. I just got:

—— Run test started ——
The active Test Run was aborted because the execution process exited unexpectedly. To investigate further, enable local crash dumps either at the machine level or for process vstest.executionengine.x86.exe. Go to more details: http://go.microsoft.com/fwlink/?linkid=232477
========== Run test finished: 0 run (0:00:05,2708735) ==========

Unity_CircularDependency_StackOverflowException

While debugging the test, I got the dreaded StackOverflowException, so there it is: no detection of circular dependencies in Unity, and that is the reason why I won’t be using it. There are other fish in the sea.

I have tried Ninject and Castle Windsor, thankfully, both detect circular dependencies and throw exceptions with meaningful messages. Ninject has this error message:

Ninject.ActivationException: Error activating IA using binding from IA to A
A cyclical dependency was detected between the constructors of two services.
Activation path:
3) Injection of dependency IA into parameter ia of constructor of type B
2) Injection of dependency IB into parameter ib of constructor of type A
1) Request for IA

Suggestions:
1) Ensure that you have not declared a dependency for IA on any implementations of the service.
2) Consider combining the services into a single one to remove the cycle.
3) Use property injection instead of constructor injection, and implement IInitializable
if you need initialization logic to be run after property values have been injected.

While Castle exception has this message:

Castle.MicroKernel.CircularDependencyException: Dependency cycle has been detected when trying to resolve component ‘UnityTest.A’.
The resolution tree that resulted in the cycle is the following:
Component ‘UnityTest.A’ resolved as dependency of
component ‘UnityTest.B’ resolved as dependency of
component ‘UnityTest.A’ which is the root component being resolved.

I am not impressed with the simplicity of Castle nor with the documentation of Ninject, but I don’t want a nightmare of circular dependencies without meaningful error messages in my project.
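For completeness, the detection is easy to assert in a test. A sketch using Ninject and the same IA/IB/A/B types from above (Castle’s WindsorContainer throws its CircularDependencyException analogously):

```csharp
using Ninject;

[TestMethod]
public void NinjectContainer_DetectsCircularDependency()
{
    using (var kernel = new StandardKernel())
    {
        kernel.Bind<IA>().To<A>();
        kernel.Bind<IB>().To<B>();

        try
        {
            kernel.Get<IA>();
            Assert.Fail("Expected ActivationException");
        }
        catch (ActivationException)
        {
            // expected: the IA -> IB -> IA cycle was detected
        }
    }
}
```

Unlike the Unity test, this one actually finishes and fails cleanly when the detection is missing.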

Chained web.config transformation

ASP.NET apps have a global configuration file called Web.config, and because apps are published in several configurations (e.g. Debug and Release), there is a simple way to transform the Web.config using transformation files instead of keeping two nearly identical Web.config files. It is a great feature (it also works for App.config).

It works like this: you have a base Web.config and transformation files named Web.$(configuration).config (e.g. Web.Release.config) that transform the original Web.config (e.g. specifying smtp server) during deploy or publish.

The transformation files make it very easy to change the Web.config; e.g. the following snippet adds a key to the appSettings section.

<?xml version="1.0" encoding="utf-8"?>
<!-- For more information on using web.config transformation 
visit http://go.microsoft.com/fwlink/?LinkId=125889 -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="Version" value="3.15" xdt:Transform="Insert"/>
  </appSettings>
</configuration>

You can look up info about possible transformations in the official documentation or for quick review go to the Scott Hanselman blog.
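Besides Insert, a frequent pattern is overwriting an attribute on an existing element matched by name; the connection string below is a made-up illustration:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Locator finds the element whose name attribute equals "Main";
         Transform overwrites only the listed attribute. -->
    <add name="Main"
         connectionString="Server=prod;Database=App;Integrated Security=True"
         xdt:Transform="SetAttributes(connectionString)"
         xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```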

This allows one transformation, but you can have two chained transformations if you use publish profiles.

It is simple to do, in the context menu of the profile click on the Add Config Transform and Visual Studio will create a new transformation file that is applied after the build configuration transformation.

add-config-transform

In the following image you can see the result of staged transformation (Web.Debug.config and then Web.Integration.config, see red rectangle). You can see the diff between the original Web.config and transformed one using the “Preview Transform” item of the Web.$(configuration).config context menu.

 

web.config-chained-transformation

I thought it would be great: we have 5 different environments (local, CI, INT, ACC, PROD) and for each one two web.configs (i.e. 10 configs in total) that have to be kept in sync. It is a lot of rather error-prone work. While testing it out, I encountered the following problems:

Web.config is transformed only on publish, not build

We use Visual Studio to program and debug our application, so we simply choose Debug -> Start without debugging. That is troublesome, because the app will use the original, untransformed Web.config: the transformation is done only on Publish or Deploy, and the resulting Web.config is stored somewhere else.

I often debug the app, change the Web.config, and the IIS automatically detects that the Web.config has changed and reloads the site. When I configured logging, I changed the web.config a lot of times.

I am of course not the first person to encounter this, so there are some solutions. The *.csproj project file is only an MSBuild script, so it can be modified:

  • Rename the base Web.config to Web.generic.config
  • Open your *.csproj file in text editor
  • Uncomment the <Target Name="AfterBuild"> target in the *.csproj. It is commented out along with the <Target Name="BeforeBuild"> target.
  • Add the TransformXml for Web.Config into the target
    <Target Name="AfterBuild">
      <TransformXml Source="Web.Generic.Config"
                    Transform="$(ProjectConfigTransformFileName)"
                    Destination="Web.Config" />  
    </Target>
    

Now, every time the project is built, the Web.config will be changed. Note that this doesn’t support chained transformations, so only the build configuration transformation is applied.

For details see the Making Visual Studio 2010 Web.config Transformations Apply on Every Build. You can also look at this forum thread that provides some other suggestions.

Sensitive information must not be in transformation file

Some information, e.g. the password to our production database, is not available to developers, so it can’t be in the transformation files, yet I want a simple and reliable deploy to production that I (as a developer) can do without our PM present. I am hoping to use Web Deploy parameters to do that.
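Web Deploy picks up a parameters.xml file from the project root; a declared parameter is then supplied at deploy time (e.g. via a setParameters file kept away from source control) instead of living in the transforms. A sketch with illustrative names:

```xml
<parameters>
  <!-- The value is prompted for, or supplied via -setParamFile, at deploy time. -->
  <parameter name="ProductionDbConnectionString"
             description="Connection string including the production password">
    <parameterEntry kind="XmlFile"
                    scope="\\web\.config$"
                    match="//connectionStrings/add[@name='Main']/@connectionString" />
  </parameter>
</parameters>
```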
