Circular Dependency in Unity

I have been toying around with the idea of replacing the ObjectBuilder in the WCSF with a proper dependency injector. Unity seemed like the obvious choice (someone even tried porting WCSF to Unity), so I have been reading Dependency Injection with Unity.

During my research I stumbled upon the ugly side of Unity: it can’t detect circular dependencies. I thought that was only true for some old version (it would be hell trying to find a circular dependency with nothing but a StackOverflowException), so I tested it out. After all, “One good test is worth a thousand expert opinions”:

public interface IA {}
public interface IB {}

public class A : IA
{
  public A(IB ib) {}
}

public class B : IB
{
  public B(IA ia) {}
}

[TestMethod]
public void UnityContainer_CircularDependency()
{
  using (var container = new UnityContainer())
  {
    container.RegisterType<IA, A>();
    container.RegisterType<IB, B>();
    container.Resolve<IA>(); // this is where the stack overflows
  }
}

It crashed, hard. The test neither passed nor failed. I just got

—— Run test started ——
The active Test Run was aborted because the execution process exited unexpectedly. To investigate further, enable local crash dumps either at the machine level or for process vstest.executionengine.x86.exe. Go to more details:
========== Run test finished: 0 run (0:00:05,2708735) ==========

While debugging the test, I got the dreaded StackOverflowException. So there it is: no detection of circular dependencies in Unity, and that is the reason why I won’t be using it. There are other fish in the sea.

I have tried Ninject and Castle Windsor; thankfully, both detect circular dependencies and throw exceptions with meaningful messages. Ninject has this error message:

Ninject.ActivationException: Error activating IA using binding from IA to A
A cyclical dependency was detected between the constructors of two services.
Activation path:
3) Injection of dependency IA into parameter ia of constructor of type B
2) Injection of dependency IB into parameter ib of constructor of type A
1) Request for IA

1) Ensure that you have not declared a dependency for IA on any implementations of the service.
2) Consider combining the services into a single one to remove the cycle.
3) Use property injection instead of constructor injection, and implement IInitializable
if you need initialization logic to be run after property values have been injected.

While the Castle exception has this message:

Castle.MicroKernel.CircularDependencyException: Dependency cycle has been detected when trying to resolve component ‘UnityTest.A’.
The resolution tree that resulted in the cycle is the following:
Component ‘UnityTest.A’ resolved as dependency of
component ‘UnityTest.B’ resolved as dependency of
component ‘UnityTest.A’ which is the root component being resolved.

I am not impressed with the simplicity of Castle nor with the documentation of Ninject, but I don’t want the nightmare of circular dependencies without a meaningful error message in my project.
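To see what the cycle detection amounts to, here is a minimal sketch (a Python toy, not tied to any of the containers above; all names are made up) of a resolver that keeps a stack of in-progress resolutions and fails fast when a type reappears on it:

```python
class CircularDependencyError(Exception):
    pass

class Container:
    def __init__(self):
        self._factories = {}   # name -> function(container) building the instance
        self._resolving = []   # stack of resolutions currently in progress

    def register(self, name, factory):
        self._factories[name] = factory

    def resolve(self, name):
        if name in self._resolving:
            # the type is already being built further up the stack -> cycle
            path = " -> ".join(self._resolving + [name])
            raise CircularDependencyError("Cycle detected: " + path)
        self._resolving.append(name)
        try:
            return self._factories[name](self)
        finally:
            self._resolving.pop()

# The IA/IB pair from the test above, expressed with factories:
c = Container()
c.register("IA", lambda c: ("A", c.resolve("IB")))
c.register("IB", lambda c: ("B", c.resolve("IA")))
```

Resolving "IA" now raises CircularDependencyError with the path IA -> IB -> IA instead of overflowing the stack, which is essentially what the Ninject and Windsor messages show.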

Chained web.config transformation

ASP.NET apps have a global configuration file called Web.config, and because apps are published in several configurations (e.g. Debug and Release), there is a simple way to transform the Web.config using transformation files instead of keeping two nearly identical Web.config files at once. It is a great feature (it also works for App.config).

It works like this: you have a base Web.config and transformation files named Web.$(configuration).config (e.g. Web.Release.config) that transform the original Web.config (e.g. by specifying the SMTP server) during deploy or publish.

The transformation files make it very easy to change Web.config; e.g. the following snippet adds a key to the appSettings section.

<?xml version="1.0" encoding="utf-8"?>
<!-- For more information on using web.config transformation see the official documentation -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="Version" value="3.15" xdt:Transform="Insert"/>
  </appSettings>
</configuration>

You can look up the possible transformations in the official documentation, or for a quick review go to Scott Hanselman’s blog.

This allows one transformation, but you can have two chained transformations if you use publish profiles.

It is simple to do: in the context menu of the profile, click Add Config Transform and Visual Studio will create a new transformation file that is applied after the build-configuration transformation.


In the following image you can see the result of a staged transformation (Web.Debug.config and then Web.Integration.config, see the red rectangle). You can see the diff between the original Web.config and the transformed one using the “Preview Transform” item of the Web.$(configuration).config context menu.



I thought it would be great: we have 5 different environments (local, CI, INT, ACC, PROD) and for each one two web.configs (i.e. 10 configs in total) that have to be kept in sync. That is a lot of rather error-prone work. While testing it out, I encountered the following problems:

Web.config is transformed only on publish, not on build

We use Visual Studio to program and debug our application, so we simply choose Debug -> Start Without Debugging. That is troublesome, because the app will use the original, untransformed Web.config: the transformation is done only on Publish or Deploy, and the resulting Web.config is stored somewhere else.

I often debug the app and change Web.config; IIS automatically detects that Web.config has changed and reloads the site. When I was configuring logging, I changed Web.config a lot of times.

I am of course not the first person to encounter this, so there are some solutions. The *.csproj project file is just an MSBuild script, so it can be modified:

  • Rename the base Web.config to Web.generic.config
  • Open your *.csproj file in a text editor
  • Uncomment the <Target Name="AfterBuild"> target in the *.csproj. It is commented out along with the <Target Name="BeforeBuild"> target.
  • Add the TransformXml task for Web.config into the target:
    <Target Name="AfterBuild">
      <TransformXml Source="Web.generic.config"
                    Transform="Web.$(Configuration).config"
                    Destination="Web.config" />
    </Target>

Now, every time the project is built, the Web.config will be regenerated. Note that this doesn’t support chained transformation, so only the build-configuration transform is applied.

For details see Making Visual Studio 2010 Web.config Transformations Apply on Every Build. You can also look at this forum thread, which provides some other suggestions.

Sensitive information must not be in transformation file

Some information, e.g. the password to our production database, is not available to developers, so it can’t be in the transformation files; yet I want a simple and reliable deploy to production that I (as a developer) can do without our PM present. I am hoping to use Web Deploy parameters for that.





Builders in WCSF

Expect more details about the mini-DI in WCSF. Be sure to read the previous posts before this one.

WCSF has two builders (i.e. builders that instantiate the requested objects); they are identical objects (and use identical type and service mappings) but have one crucial difference – the singleton policy:

  • ApplicationBuilder – the builder used by the modules and thus in Module Initializers (MI). Its singleton policy is such that created singletons are stored in the ILocator. When an MI adds a service to the module (e.g. using Container.Services.AddNew<SomeService, ISomeService>()), the application builder is used and a real singleton service is created. The service is available to all objects in the module and all child modules (unless they override the service mapping).
  • PageBuilder – used by the pages and web controls. WCSF has a lot of slightly tailored WebControls in Microsoft.Practices.CompositeWeb.Web.UI that subclass the ones from System.Web.UI, so with WCSF you use Microsoft.Practices.CompositeWeb.Web.UI.Page instead of System.Web.UI.Page. The singleton policy is such that objects created by this builder are never added to the ILocator and thus are never singletons.

Why is there a PageBuilder? The reason is simple: the PageBuilder is used only by the WCSF WebControls to build up their properties. The WCSF WebControls themselves are not instantiated by the ObjectBuilder, but by ASP.NET. The ObjectBuilder comes into play through code in the event methods of the WCSF WebControls (that is the reason why they exist). The WCSF uses the PageBuilder to populate the properties of a page, e.g. in the OnPreInit method of the Page object, the OnInit method of the MasterPage and so on.

The WebControls themselves are never singletons; thus the singleton policies of the PageBuilder and ApplicationBuilder differ.

Crucial difference

Just because the PageBuilder doesn’t store singletons doesn’t mean that it always creates a new instance of a service. Thanks to the default order of the strategies and the common ILocator, if it finds a [ServiceDependency], it will locate the service in the ILocator (populated in the ModuleInitializer) and use the already existing instance!

The difference in singleton policy is only whether a created instance is stored in the ILocator or not. If there already is an instance, the WCSF will use it.

Basically they tried to work around the problem of how to build up a page they haven’t instantiated. They build it up (= fill the public [CreateNew]/[ServiceDependency] properties) in the OnPreInit/OnInit methods of the WebControls.
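The behaviour of the two builders can be sketched like this (a Python toy model, not the real OB API; all names are made up):

```python
class Builder:
    """Toy model of a WCSF builder: a shared locator plus a singleton policy."""
    def __init__(self, locator, store_singletons):
        self.locator = locator                    # shared (type -> instance) map, the "ILocator"
        self.store_singletons = store_singletons  # True for ApplicationBuilder, False for PageBuilder

    def build_up(self, service_type):
        # Both builders first look into the common locator...
        if service_type in self.locator:
            return self.locator[service_type]
        instance = service_type()                 # ...and only build when nothing is found
        if self.store_singletons:                 # the PageBuilder never stores what it built
            self.locator[service_type] = instance
        return instance

class TimeService:
    pass

locator = {}
application_builder = Builder(locator, store_singletons=True)
page_builder = Builder(locator, store_singletons=False)

service = application_builder.build_up(TimeService)   # stored in the locator
assert page_builder.build_up(TimeService) is service  # PageBuilder reuses it
```

Note that the shared locator is what makes the PageBuilder hand out the module’s singletons even though it never creates any itself.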

How to use

You use the PageBuilder automatically when you use a WCSF WebControl. If you really need to invoke it yourself, call the static method WebClientApplication.BuildItemWithCurrentContext(objectToBeBuild).

The ApplicationBuilder property is in Global.asax (the application class is derived from WebClientApplication). To use the application builder, follow the code of BuildItemWithCurrentContext. Basically you need:

IModuleContainerLocatorService – a WCSF service that locates the module from the URL of the page. Use the current URL to get the CompositionContainer of a module.

From the CompositionContainer get the ILocator and call:

webApp.ApplicationBuilder.BuildUp<TypeToBeBuild>(locator, idToBuild, nullOrExistingObject);

For more info, just dive into the source (or not.. I would rather not).

Changing precision of a column in Oracle

Sometimes we want to change the format of a column in the database, say from NUMBER(12,2) to NUMBER(15,5). The first number is precision (the total number of digits) and the second is scale (the number of decimal places); e.g. 15,5 means 10 integer digits and 5 decimal places. For more info, see the Oracle documentation.
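To make the precision/scale arithmetic concrete, here is a small Python check (my own helper, not anything Oracle ships) of whether a value survives a given NUMBER(precision, scale) unchanged:

```python
from decimal import Decimal

def fits(value, precision, scale):
    """True if `value` fits NUMBER(precision, scale) without being changed.
    Oracle raises ORA-01438 when the integer part needs more than
    precision - scale digits; extra decimal places are silently rounded."""
    v = Decimal(str(value))
    t = v.as_tuple()
    int_digits = len(t.digits) + t.exponent   # digits before the decimal point
    if int_digits > precision - scale:
        return False                          # integer part too long (ORA-01438)
    return v == v.quantize(Decimal(1).scaleb(-scale))  # rounding would alter it

assert fits("12.45", 15, 5)        # plenty of room
assert not fits("12.456", 5, 2)    # would be rounded to 12.46
assert not fits("1234", 5, 2)      # 4 integer digits, only 3 allowed
```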

The precision is limited to 38 (or use * as a synonym) and in our db I have seen a disturbing number of declarations like WORKLOAD(38,2). Oracle makes it easy to increase the scale, but only if you can also increase the precision:

-- The column is NUMBER(12,2)
alter table T_TEST modify (WORKLOAD NUMBER(15,5))

If you use 38 or * as precision, it gets more complicated and you have to do the heavy lifting on your own, because there is data that could potentially be lost. The most common way is to create a second column, copy the data, drop the original column and rename the new column. The other way is to set the original column to null and then use the alter table from above (see Stack Overflow for more details).

That works, but there may be constraints and you can lose data, not to mention it is a lot of error-prone work… it would be nice to have a procedure that allows the user to change the format of a column.


The procedure expects that:

  • the column is a NUMBER
  • the PK is named id

and must satisfy these requirements:

  • in-place change that won’t change the column ordering
  • must work even if the column has some constraints
  • must not lose data, e.g. changing from NUMBER(7,3) to NUMBER(5,2) will work, but only if there is no row with a number like 12.456 or 1234
  • must work across schemas

set serveroutput on
declare
  p_owner varchar2(64 char) := 'APPLICATION';
  p_table varchar2(64 char) := 'T_TEST';
  p_column varchar2(64 char) := 'NUM';
  p_format varchar2(64 char) := 'NUMBER(15,5)';
  l_sql varchar2(512 char);
  l_temp_col varchar2(64 char);
  l_diff_ids clob; -- there can be quite a lot of different ids

  procedure disable_constraints(p_owner varchar2, p_table varchar2, p_column varchar2) is
    l_sql varchar2(512 char);
  begin
    for l_constraint in (select owner, table_name, constraint_name from ALL_CONS_COLUMNS
                                where owner = p_owner
                                  and table_name = p_table
                                  and column_name = p_column)
    loop
      -- disable constraint
      l_sql := 'alter table ' || l_constraint.owner || '.' || l_constraint.table_name || ' disable constraint ' || l_constraint.constraint_name;
      execute immediate l_sql;
    end loop;
  end;

  procedure enable_constraints(p_owner varchar2, p_table varchar2, p_column varchar2) is
    l_sql varchar2(512 char);
  begin
    for l_constraint in (select owner, table_name, constraint_name from ALL_CONS_COLUMNS
                                where owner = p_owner
                                  and table_name = p_table
                                  and column_name = p_column)
    loop
      -- enable constraint
      l_sql := 'alter table ' || l_constraint.owner || '.' || l_constraint.table_name || ' enable constraint ' || l_constraint.constraint_name;
      execute immediate l_sql;
    end loop;
  end;
begin
  dbms_output.put_line('Create temp column');
  -- beware the oracle 30 char identifier limit
  select 'TEMP_COLUMN_' || dbms_random.string('U', 10) into l_temp_col from dual;
  l_sql := 'alter table ' || p_owner || '.' || p_table || ' add ' || l_temp_col || ' ' || p_format;
  execute immediate l_sql;
  -- copy values to the new column; if the integer part doesn't fit the new format,
  -- I get ORA-01438, but extra decimal places are silently rounded
  l_sql := 'update ' || p_owner || '.' || p_table || ' set ' || l_temp_col || ' = ' || p_column;
  execute immediate l_sql;

  -- validate that the values are the same in the new format as they were in the old one
  l_sql := 'select listagg(id, '','') within group (order by id) from ' || p_owner || '.' || p_table || ' where ' || p_column || ' - ' || l_temp_col || ' != 0';
  execute immediate l_sql into l_diff_ids;
  if l_diff_ids is not null then
    -- drop temp column
    execute immediate 'alter table ' || p_owner || '.' || p_table || ' drop column ' || l_temp_col;
    raise_application_error(-20010, 'Values are different in the new format ' || l_diff_ids);
  end if;

  dbms_output.put_line('Disable constraints');
  disable_constraints(p_owner, p_table, p_column);
  -- set the old column to null so its format can be changed
  l_sql := 'update ' || p_owner || '.' || p_table || ' set ' || p_column || ' = null';
  execute immediate l_sql;

  dbms_output.put_line('Change format of the column ' || p_column);
  l_sql := 'alter table ' || p_owner || '.' || p_table  || ' modify (' || p_column || ' ' || p_format || ')';
  execute immediate l_sql;

  -- copy the values back and drop the temp column
  l_sql := 'update ' || p_owner || '.' || p_table || ' set ' || p_column || ' = ' || l_temp_col;
  execute immediate l_sql;
  l_sql := 'alter table ' || p_owner || '.' || p_table || ' drop column ' || l_temp_col;
  execute immediate l_sql;
  dbms_output.put_line('Enable constraints');
  enable_constraints(p_owner, p_table, p_column);
end;
/

It is not the prettiest code in the world, but it works.

Moving ASP.NET from WebSite project to Web Application Project

We have an intranet app that uses WebForms and is built as a WebSite project. I am converting it to a Web Application Project using a great walkthrough from Microsoft.

WebSite vs Web Application

What are the major differences between the two project types and what advantages do I expect? You can find a talkative comparison on MSDN or a more succinct one on StackOverflow. Basically, a WebSite is a bunch of aspx files compiled on the fly on the server, while a Web Application is precompiled into dlls.

  • Faster build. I am not sure why, but when I build the website to check for static errors, I have to rebuild everything and the rebuild takes a few minutes, while the converted Web Application builds in seconds.
  • Easier dependencies. A WebSite requires the user to manually specify the dlls it depends on. A Web Application is a real project and NuGet is available.

There are a few others, but the Web Application Project (WAP) is simply better, while the WebSite looks like a technically inferior solution from 2005.

How does it work?

Great! Simply follow the walkthrough and you will be OK; I had it converted in several hours:

  1. Create an empty Web Application Project
  2. Copy all files (except bin with the *.refresh files and the publish file) from the WebSite to the WAP
  3. Include all copied files in the WAP
  4. Right-click the project in Solution Explorer, select Convert to Web Application, OK.
  5. The automatic conversion moves App_Code to Old_App_Code, because App_Code is the only place where a WAP dynamically compiles code and it should be empty. Rename it to something suitable, e.g. App_Start.
  6. Try to build the WAP; you will get a lot of errors because of missing dependencies. Add them, preferably using NuGet.
  7. In 2 cases, I couldn’t access the aspx controls from code-behind and had to resort to …FormView.FindControl("ControlId").Method() instead of ControlId.Method().
  8. You are done!

Tracker – fulltext search from CLI

I have downloaded a rather large site full of HTML and a few PDF files and stored it on my Raspberry Pi (my constantly running Linux toy). It is not too large (a few GB and tens of thousands of files), but it is rather annoying to wait for a Midnight Commander content search.

Since they are mostly HTML and PDF files, I thought that a search engine would be nice. My requirements were:

  • Must have a CLI interface; I don’t have a monitor attached and no desire to run remote desktop.
  • Efficient and small; the Raspberry Pi has something like 512 MB of memory.

Quick googling revealed a few contestants: Sphinx search (its CLI is only for debugging purposes – nope), Lucene and Tracker. Lucene is Java-based, but with quite a small memory footprint (1 MB memory heap) and a lucli CLI interface. I kind of regret not choosing it. Anyway, I chose Tracker, a poorly documented search engine with issues (mostly lack of documentation). It is supposed to be

Designed and built to run well on lower-memory systems with typically 128MB or 256MB memory. Typical RAM usage is 4-6 MB.


Installation

apt-get install --no-install-recommends tracker-utils tracker-miner-fs libglib2.0-bin

Everything is in packages; simply install it. The most important program is tracker-control, which can start the miners, reset them or give you the status of the indexing. You need libglib2.0-bin for the gsettings utility, which lets you change the GSettings configuration from the CLI.


Running without X

If you try to run tracker-control without X11, you get an error:

honza@pina ~ $ tracker-control -s
Starting miners…
Could not start miners, manager could not be created, Command line `dbus-launch --autolaunch=3b0e4b712f60d6b9547b25ae51c194dd --binary-syntax --close-stderr' exited with non-zero exit status 1: Autolaunch error: X11 initialization failed.

Someone else already encountered the problem; the solution is:

eval `dbus-launch --auto-syntax`

It is not pleasant, because you would have to run this manually each time you log in, so you should put it into your login scripts.

You can see all configuration options of Tracker using

gsettings list-recursively | grep -i org.freedesktop.Tracker | sort | uniq

I was interested in searching only one directory, so I changed index-recursive-directories:

gsettings set org.freedesktop.Tracker.Miner.Files index-recursive-directories "['/home/pi/website-mirror']"

Starting the miners

You can start the miners using tracker-control

honza@pina ~ $ tracker-control -s
Starting miners…
  ✓ Applications
  ✓ File System

And after that check the progress using the status option

honza@pina ~ $ tracker-control -S
27 Jul 2014, 12:44:29:  ✓     Store                 - Idle

27 Jul 2014, 12:44:31:  ✓     Applications          - Idle
27 Jul 2014, 12:44:33:   32%  File System           - Processing… 01h 03m 32s remaining

Once the indexing is done, you can easily search for a term using tracker-search:

honza@pina ~ $ tracker-search ping

There is also an application miner that should find applications, but with the default settings it is probably limited to the Gnome desktop, not programs in /bin.


Logging

You can set logging either through gsettings (for each component separately) or using tracker-control for all components at once. The default level is errors. Possible values are [debug|detailed|minimal|errors].

gsettings set org.freedesktop.Tracker.Miner.Files verbosity 'detailed'
# Or for all
tracker-control --set-log-verbosity=detailed

The logs are stored in $HOME/.local/share/tracker directory.

End notes

Someone else has also written up his 5 minutes with Tracker; my observations are similar:

  • The name is horrible; it is hard to google anything so generic. Even Tracker itself gives a warning when you search for common words (Search term ‘index’ is a stop word. Stop words are common words which may be ignored during the indexing process.)
  • It does not run without X out of the box. Rather annoying.
  • Search works, but I would probably choose Lucene next time.

I got my own domain!

This is the very first time I have my own domain. I have never bought one before, mostly because I don’t feel comfortable with all the info in the WHOIS register. I’ve bought one from NameSilo with a WHOIS privacy guard included – the WHOIS registry has the privacy service as its registrant/admin/billing/tech contact, but if you really want, you can send an email… that will probably end up in a black hole of spam. I have sent an email to the address in the WHOIS registry and I have also used a form at the PrivacyGuardian webpage; not sure where it will turn up.

Update: I’ve got the email from the form on the PrivacyGuardian page (to my contact email from NameSilo), but not from the WHOIS db.

I have no illusions about the level of privacy granted by this arrangement, but a relative told me about one of her clients who was impersonated by someone else (= a case of identity theft). It was not pretty and the bureaucracy took a while to stop it. It required some time, effort, money and a lawyer. The less info freely lying around, the better.

The privacy guard has some side effects; the most glaring one is that whoever is in the WHOIS register is formally the domain owner. When one registrar went under, there was serious trouble with domain ownership.

I am running WordPress, and while I am only scratching the surface, it is quite troublesome to write code snippets in the editor.

I have installed the following plugins that alleviated my troubles:

  • Prettify GC Syntax Highlighter (instead of Prettify Code Syntax) – the GC Syntax Highlighter has a noquote class that makes switching between text and visual mode much easier (the PCS always escapes < and >, thus when the user switches from text to visual, elements like <some-tag> disappear).
  • Tab Override
  • Visual Editor Custom Buttons


Synchronize tag in sql-query of NHibernate

I have been trying to write a rather complex query using LINQ to NHibernate; I gave up and wrote it in SQL instead. While writing the query in an <sql-query> tag, Visual Studio offered me a <synchronize> tag. I had no idea what the tag was for.

The tag itself has one required attribute, table, so it is used like <synchronize table="SOME_TABLE" />.

NHibernate is infamous for the quality of its official documentation, so I obviously had no luck there. Not even Hibernate’s was much more forthcoming. Open source to the rescue: in the Beings.hbm.xml of one NH test was the following comment:

(2) use of <synchronize/> to ensure that auto-flush happens
correctly, and that queries against the derived entity
do not return stale data

The test itself was for a read-only entity created from a select (an interesting feature I didn’t know about), not a SQL query, but I hoped it would behave similarly.

I opened my project for testing NHibernate (Nerula) and started writing code and observing how the tag behaves.


The purpose of the <synchronize> tag is to flush entities that use the specified table before running the query, while leaving entities unrelated to the table unflushed. The secondary purpose is notifying the second-level cache about update/delete SQL queries, but that is for another post.

To demonstrate the concept, I have created three identical sql-queries, each with a different synchronize tag:

<sql-query name="DeleteProjectSync">
  <![CDATA[ delete from Project where Code = :Code ]]>
  <query-param name="Code" type="String" />
  <synchronize table="Project"/>
</sql-query>

<sql-query name="DeleteProject">
  <![CDATA[ delete from Project where Code = :Code ]]>
  <query-param name="Code" type="String" />
</sql-query>

<sql-query name="DeleteProjectSyncWrongTable">
  <![CDATA[ delete from Project where Code = :Code ]]>
  <query-param name="Code" type="String" />
  <!-- We are flushing the wrong table -->
  <synchronize table="Blog"/>
</sql-query>

I have loaded and modified the Code property of a Project entity; my queries will then try to delete the modified project row from the database.

var project = session.Query<Project>().First();
var originalCode = project.Code;
project.Code = "Test";

So what happened when I tried to delete a Project with Code “Test”? Let’s go over each query.

var deletedCount = session.GetNamedQuery("DeleteProject").SetString("Code", "Test").ExecuteUpdate();
Assert.AreEqual(0, deletedCount);

In the query without any synchronization, the project is not deleted (ExecuteUpdate returns the number of modified/deleted rows), because the project is modified only in the session; it is not synced with the database, so the SQL query can’t find a project with the specified code. If I manually flush the session (session.Flush()) before running the query, the record is deleted.

var deletedCount = session.GetNamedQuery("DeleteProjectSync").SetString("Code", "Test").ExecuteUpdate();
Assert.AreEqual(1, deletedCount);

In the second query, I use the synchronize tag for the table from which we are deleting. All entities using the table Project are flushed to the database and we actually delete the one record with our specified Code.

var deletedCount = session.GetNamedQuery("DeleteProjectSyncWrongTable").SetString("Code", "Test").ExecuteUpdate();
Assert.AreEqual(0, deletedCount);

Just to make sure, we try a third query that has a synchronize tag, but for the wrong table. This demonstrates that we don’t flush all entities in the session: in this case we again delete zero rows, because the updated code is still not in the db. If I call session.Flush(), the row is deleted.

I hope this sufficiently demonstrates the concept.

This is all nice and well, but what about the 2nd level cache? When and how is it updated? I am quite sure (courtesy of BulkOperationCleanupAction) that entities affected by the tables in the synchronize tag are also evicted from the 2nd level cache, but I haven’t tested it yet.


The <synchronize> tag respects the FlushMode of the session. If you set FlushMode to Always, all three queries will perform the delete; if you set FlushMode to Commit, all will fail, including DeleteProjectSync.

ObjectBuilder in WCSF

I have described how the ObjectBuilder works in the previous post. The reason why I even started to investigate the internals of a dead project is the WCSF – another dead project.

Since OB is a framework for building DI containers, the WCSF has created its own simple DI with two ways to build up objects: either as singletons or as new objects. The dependencies can be injected either through the constructor or through properties.

The WCSF has its own object builder – the WCSFBuilder class derived from WCSFBuilderBase. It should be noted that when you diff WCSFBuilderBase from the WCSF against BuilderBase from OB, they are quite similar; there was no reason to copy & edit the base builder class instead of reusing it.

The gist of the WCSF builder is four strategies:


Policies.SetDefault<ICreationPolicy>(new DefaultCreationPolicy());
Policies.SetDefault<IBuildPlanPolicy>(new BuildPlanPolicy());

private static IPlanBuilderPolicy CreatePlanBuilder()
{
  BuilderStrategyChain chain = new BuilderStrategyChain();
  chain.Add(new CallConstructorStrategy());
  chain.Add(new SetPropertiesStrategy());
  chain.Add(new CallMethodsStrategy());

  PolicyList policies = new PolicyList();
  policies.SetDefault<IConstructorChooserPolicy>(new AttributeBasedConstructorChooser());
  policies.SetDefault<IPropertyChooserPolicy>(new AttributeBasedPropertyChooser());
  policies.SetDefault<IMethodChooserPolicy>(new AttributeBasedMethodChooser());

  return new DynamicMethodPlanBuilderPolicy(chain, policies);
}

As you can see, the strategies are chained. The OB uses four of them, described below.



TypeMappingStrategy

This strategy preprocesses the build-up request before passing it to the rest of the chain; it doesn’t actually build the object. Its task is to change the requested type to the type we actually want to build up; in most cases it maps interfaces to concrete classes, e.g. ITimeService to NetworkTimeProtocolService. The precise mapping is defined by ITypeMappingPolicy.


SingletonStrategy

This strategy is quite simple:

  • If the locator contains an instance of the requested object with the given id & type -> return the instance.
  • Otherwise build up the object using the rest of the chain, insert it into the locator and return it.

The strategy checks and respects the ISingletonPolicy of the builder. If the policy says no to singletons, the new instance is not inserted into the locator.


BuildPlanStrategy

This is a candidate for The Daily WTF. When I went through all the stuff and dependencies, it turned out to create a special method using ILGenerator just for creation. We actually have things like il.Emit(OpCodes.Ldarg_2);. Generating assembler at runtime… in 2008.

This strategy uses DynamicMethodPlanBuilderPolicy, which for each type creates a dynamically generated method (using ILGenerator and opcodes) for building an object of that type. The strategy then calls the method to create the object and passes the created object to the next link of the chain.

The interesting part is DynamicMethodPlanBuilderPolicy.CreatePlan – the method returns a dynamic method “BuildUp_” + typeToBuild that will be executed on the typeToBuild class.

The code of the method is generated sequentially by the chain from the CreatePlanBuilder snippet above:

// Code used to create a method that will build up the typeToBuild in the DynamicMethodPlanBuilderPolicy used by BuildPlanStrategy
ILGenerator il = buildMethod.GetILGenerator();
// In this chain is the CallConstructorStrategy, SetPropertiesStrategy and CallMethodsStrategy
context.HeadOfChain.BuildUp(context, typeToBuild, il, idToBuild);

  • The CallConstructorStrategy checks whether the existing object is null; if it is, it builds up the parameters of the constructor and calls it.
  • The SetPropertiesStrategy builds up and sets objects for all marked properties ([CreateNew]/[ServiceDependency]).
  • The CallMethodsStrategy calls all methods of the object that have the [InjectionMethod] attribute, with built-up parameters.
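Stripped of the IL generation, the idea behind a build plan is just caching: inspect a type once and keep a per-type function that builds instances. A rough Python analogue (my own names, not OB’s API; the closure stands in for the emitted IL):

```python
import inspect

class PlanBuilder:
    """Toy analogue of per-type build plans: inspect a type once,
    cache a function that builds instances of it."""
    def __init__(self):
        self._plans = {}   # type -> cached build function ("BuildUp_<type>")

    def get_plan(self, cls):
        if cls not in self._plans:
            # One-time inspection of the constructor signature...
            params = list(inspect.signature(cls).parameters)
            # ...baked into a closure; OB emits IL here instead.
            self._plans[cls] = lambda deps: cls(*(deps[p] for p in params))
        return self._plans[cls]

class Service:
    def __init__(self, url, timeout):
        self.url, self.timeout = url, timeout

pb = PlanBuilder()
plan = pb.get_plan(Service)
service = plan({"url": "http://example", "timeout": 5})
```

The cached plan avoids repeating the reflection on every build-up; whether that justifies generating IL at runtime is another question.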

Aaaargh! Is there any reason to do this instead of three links of a build strategy chain, where

  • the first link creates an instance using the constructor and passes it to the second one,
  • the second link builds up and assigns instances to the [CreateNew]/[ServiceDependency] properties of the object,
  • the third calls all [InjectionMethod] methods of the existing object?

UPDATE: Somebody probably thought so too, because there are three unused strategies that do exactly that: ConstructorReflectionStrategy, PropertyReflectionStrategy and MethodReflectionStrategy.

Once the method creates the existing object, it is passed to the last strategy:


BuilderAwareStrategy

A post-initialization step: this strategy checks whether the passed existing object is an instance of IBuilderAware and if it is, the OB calls the OnBuiltUp method of the existing object.

I think this strategy is used only in tests of WCSF, e.g. to check whether WCSF did build up a WCSF UserControl.


ObjectBuilder

ObjectBuilder is a C# dependency injection framework; more precisely, it is a framework for building dependency injectors. There were two versions; ObjectBuilder 2 was later integrated into the Unity Application Block.

The earlier version is used by the Web Client Software Factory (WCSF), a library for building web applications in WebForms. It is the tool we are using for our internal system.

The first thing I noticed about ObjectBuilder is that it is woefully undocumented (official MSDN documentation): the source code is available, but there is no quick start or anything. I googled a little and found a very helpful post about how to actually use it to create objects – a sort of quick-start tutorial.

Good, but not enough. I am trying to move our C# app from a Web Site Project in Web Forms with WCSF to MVC, but since WCSF auto-magically injects all dependencies, I had to dive into the source code of OB in order to understand it and later integrate it into MVC controller creation (I want to inject already existing services and other stuff into MVC controllers).

First, it is good to have an idea of what ObjectBuilder functionality I have to replicate:

  • Services – Singletons that are alive for the whole life of the web app. When a constructor parameter or a property of an object has the [ServiceDependency] attribute, the OB will put there a singleton instance of some object.
  • New objects – Whenever asked (through the [CreateNew] attribute), the OB creates a new object.
  • There is also some registration of services and type mappings, but I am only interested in how to create a new object / get a service just like the WCSF would.

Since the OB is not a dependency injection framework, but rather a framework for building one, it ships with only a very simple DI implementation – one that can either create a new object or use a singleton instance.

The WCSF can use DI in two ways, either through the constructor or by filling properties (see the official documentation):

public class MyClass {
  public MyClass(
    [CreateNew] IStoreInventory storeInventory,
    [ServiceDependency] ITimeService timeService) {
    // ... object initialization
  }

  [ServiceDependency]
  public IMyService MyService { get; set; }
}

Core concepts of the ObjectBuilder:

  • IBuilder – The builder that builds up or tears down the objects. The BuilderBase class is easy to understand. You can create an object like this:
builder.BuildUp<MyClass>(locator, idToBuild, existing)
// In reality all parameters can be null when not used

The builder has a chain of building strategies (the various ways to create an object).

  • IBuilderStrategy – A strategy for how the objects will be built.

The strategies can be varied, e.g. a singleton strategy can look through the locator and, if there already is an object, return it; if the object is not in the locator, create it, add it to the locator and return it. Or the object can be built by some factory method.

  • IBuilderPolicy – A policy tailoring concrete implementations of IBuilderStrategy, e.g. IMethodPolicy can tailor which method will be called by MethodExecutionStrategy.
  • IBuilderContext – The context used for one request to build up an object. It consists of the locator, the chain of building strategies and a list of policies. It basically only holds tailored data passed to the chain of strategies of the IBuilder.
  • ILocator – Basically a dictionary of id–object pairs, used mostly for singletons: when someone asks for an object with a specified id, the OB will use the locator to find it.
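The locator idea can be sketched as a plain dictionary keyed by id. This is an illustration of the concept, not OB's actual ILocator API:

```csharp
using System;
using System.Collections.Generic;

// Illustrative locator: a dictionary of id -> object, used mainly for singletons.
// This is a sketch of the idea, not OB's actual ILocator API.
public class SimpleLocator
{
    private readonly Dictionary<object, object> _entries =
        new Dictionary<object, object>();

    // Singleton lookup: return the located instance,
    // or create it, store it and return it.
    public object GetOrCreate(object id, Func<object> create)
    {
        object instance;
        if (_entries.TryGetValue(id, out instance))
            return instance;

        instance = create();
        _entries[id] = instance;
        return instance;
    }
}
```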

When we request an object from the IBuilder, it

  1. takes the passed parameters,
  2. creates a new IBuilderContext from the passed parameters and other internal data (strategies, policies),
  3. asks the head of the builder strategy chain to build up an object.
  4. The IBuilderStrategy will look at the IBuilderContext and other data and determine whether it can build up the object. If it can, it returns the object. If it can’t, it asks the next strategy in the builder chain to try to build up the object. Note that each link of the chain must call the next link.
  5. The base implementation BuilderStrategy of IBuilderStrategy will return the existing object (the one passed as a parameter into IBuilder.BuildUp) if all strategies in the chain fail.
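The steps above are a classic chain of responsibility and could be sketched like this (a simplified sketch; the real BuilderStrategy works against a full IBuilderContext, not a bare object):

```csharp
// Simplified sketch of the strategy chain described above;
// the real BuilderStrategy works against a full IBuilderContext.
public abstract class StrategySketch
{
    public StrategySketch Next { get; set; }

    // Base behavior: when no strategy handled the request,
    // just return the existing object.
    public virtual object BuildUp(object existing)
    {
        return Next != null ? Next.BuildUp(existing) : existing;
    }
}

public class ExampleStrategy : StrategySketch
{
    public override object BuildUp(object existing)
    {
        if (CanHandle(existing))
            return Handle(existing);

        // cannot build it here: pass the request to the next link of the chain
        return base.BuildUp(existing);
    }

    private bool CanHandle(object existing) { return false; } // illustrative
    private object Handle(object existing) { return existing; }
}
```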

This is a rather high-level description of OB; for a concrete example of how to set it up, look at David Hayden’s blog.