IoC with SisoDb in ASP.Net MVC

I just put together a short screencast (about 4 min) showing how to configure SisoDb with an IoC-container in ASP.Net MVC using “One session per HttpRequest”. For this demo I will use Ninject.

The screencast is hosted in the SisoDb channel at Vimeo.

Updated: Mike Paterson has a GitHub repository with code for the episode, found here: https://github.com/devlife/Sandbox/tree/master/SisoDb

Summarized

After having installed the “Ninject.MVC3 NuGet package”, I added a NinjectModule named “DbConfig” under a new folder/namespace “IoCConfig”.

public class DbConfig : NinjectModule
{
    public override void Load()
    {
        var db = "CoreConcepts".CreateSql2012Db();
        Kernel.Bind<ISisoDatabase>()
            .ToMethod(ctx => db)
            .InSingletonScope();

        db.CreateIfNotExists();

        Kernel.Bind<ISession>()
            .ToMethod(ctx => db.BeginSession())
            .InRequestScope();
    }
}

After that we just need to ensure the module is loaded by Ninject in the bootstrapper, which is installed by the Ninject.MVC3 NuGet under “App_Start”. The only change that is needed is adding one row to the “CreateKernel” member, so that it looks like this:

private static IKernel CreateKernel()
{
    var kernel = new StandardKernel();
    kernel.Bind<Func<IKernel>>().ToMethod(ctx => () => new Bootstrapper().Kernel);
    kernel.Bind<IHttpModule>().To<HttpApplicationInitializationHttpModule>();

    //ADD THIS TO LOAD OUR MODULE(s)
    kernel.Load(typeof(MvcApplication).Assembly);

    RegisterServices(kernel);
    return kernel;
}

We are now all set and can take a dependency on “ISession” in our controllers, either via the constructor or by resolving it using a static class/method/service-locator concept against the Ninject IoC-container. Since my sample uses Db-access in each action, I used constructor injection.

public class CustomerController : Controller
{
    private readonly ISession _dbSession;

    public CustomerController(ISession dbSession)
    {
        _dbSession = dbSession;
    }

    //...
}
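
With the session injected, the actions can use it directly. Below is a rough usage sketch; the Customer class is hypothetical and the exact SisoDb query members may differ slightly from what is shown here.

public ActionResult Index()
{
    //Rough sketch: query the store for customers via the request-scoped session.
    var customers = _dbSession.Query<Customer>().ToList();

    return View(customers);
}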

That’s it.

//Daniel

Let business scenarios be reflected in your code

I’m working on a project where I use something I call “scenarios”. The idea is that each scenario in the business model is represented by three classes.

Command – Contains the input data needed to perform the scenario. It is optional, since the operation might not need any input parameters.

Scenario – Validates and executes the Command and is where the logic is kept. It is not meant to cross boundaries. It also provides an event model, publishing events for Validated, Executed etc.

Result – Contains the command (input data), any exception and violations generated by the validation or execution step in the scenario, as well as scenario-specific output data.

The reason I want this is that I want each scenario to be easily detected in my code model. I want to be able to quickly look in the Solution Explorer under the Scenarios namespace and directly see what the system targets in the domain. By looking at the Command and Result classes I can also see what the scenario accomplishes. By isolating each scenario’s flow logic in a single class, I hopefully achieve a more cohesive and readable class.

An example

The example is not drawn from the business model, since it’s about signing in to the system. Hence it’s sort of an indirect business scenario, since you can’t run the “Place order scenario” if you’re not an authorized user.

The shown example is about logging in to the system and uses OpenId in an ASP.Net MVC 2 application. This involves two steps. First, a request has to be initiated to an OpenId provider. Second, we need to handle the callback request from the provider, hence the enum LogOnSteps. (I guess it would have been clearer to put these in two different scenarios.)

[Serializable]
public enum LogOnSteps
{
    Initiate,
    Finalize
}

The Command contains properties that are required for step 1, as well as a flag indicating where we are in the process, so that the validation and execution in the scenario class knows what to do.

[Serializable]
public class LogOnCommand
{
    public LogOnSteps Step { get; set; }

    public string OpenId { get; set; }

    public string ReturnToUrl { get; set; }

    public LogOnCommand(LogOnSteps step)
    {
        Step = step;
    }
}

The implemented scenario extends a base class so that boilerplate code for catching exceptions etc. is kept away. The scenario is defined in an interface and is accessed via an IoC-container.
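
The base class itself is not shown in this post; here is a rough sketch of what it could look like, where member names and details are assumptions based on how it is used below.

//A rough sketch, not the actual base class: it validates, executes and
//catches exceptions, returning everything through the result object.
//The real base class also publishes events such as Validated and Executed (not sketched here).
public abstract class Scenario<TCommand, TResult>
    where TResult : ScenarioResult, new()
{
    protected virtual string CommandName
    {
        get { return typeof(TCommand).Name; }
    }

    public TResult Execute(TCommand command)
    {
        var result = new TResult();

        try
        {
            var violations = OnValidate(command);
            if (violations.IsNotEmpty)
            {
                result.Violations.Add(violations);
                return result;
            }

            result = OnExecute(command);
        }
        catch (Exception ex)
        {
            result.Exception = ex;
        }

        return result;
    }

    protected abstract IViolations OnValidate(TCommand command);

    protected abstract TResult OnExecute(TCommand command);
}

The concrete LogOnScenario implementation then looks like this: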

public class LogOnScenario : Scenario<LogOnCommand, LogOnResult>, ILogOnScenario
{
    public IOpenIdAuthenticationService AuthenticationService { protected get; set; }

    public IMembershipService MembershipService { protected get; set; }

    public LogOnScenario(
        IOpenIdAuthenticationService authenticationService,
        IMembershipService membershipService)
    {
        AuthenticationService = authenticationService;
        MembershipService = membershipService;
    }

    protected override IViolations OnValidate(LogOnCommand command)
    {
        var violations = new Violations();

        if (command.Step == LogOnSteps.Initiate)
        {
            violations.AddIf(command.OpenId.IsNullOrEmpty(), new Violation(
                ResourceStrings.LogOnCommand_OpenId_IsRequired, CommandName, "OpenId"));
            
            violations.AddIf(command.ReturnToUrl.IsNullOrEmpty(), new Violation(
                ResourceStrings.LogOnCommand_ReturnToUrl_IsRequired, CommandName, "ReturnToUrl"));
        }

        return violations;
    }

    protected override LogOnResult OnExecute(LogOnCommand command)
    {
        return command.Step == LogOnSteps.Initiate
            ? HandleInitiateStep(command)
            : HandleFinalizeStep(command);
    }

    private LogOnResult HandleInitiateStep(LogOnCommand command)
    {
        var logOnResult = new LogOnResult
        {
            Command = command,
            MvcActionResult = AuthenticationService
                                .InitiateMvcAuthentication(command.OpenId, command.ReturnToUrl)
        };

        return logOnResult;
    }

    private LogOnResult HandleFinalizeStep(LogOnCommand command)
    {
        var logOnResult = new LogOnResult { Command = command };
        var openIdAuthentication = AuthenticationService.AuthenticateRequest();

        if (openIdAuthentication.Violations.IsNotEmpty)
        {
            logOnResult.Violations.Add(openIdAuthentication.Violations);

            return logOnResult;
        }

        var mapper = IoCContainer.Instance.GetInstance<IObjectMapper>();
        var identity = mapper.Map<OpenIdAuthentication, Identity>(openIdAuthentication);
            
        logOnResult.UserSession = new UserSession(identity);
        logOnResult.Member = MembershipService.GetByOpenId(identity.OpenId);

        return logOnResult;
    }
}

Since we want every scenario to return any caught exception and violations, we have a base-class that the ScenarioResults can extend, which contains members for this.

[Serializable]
public class LogOnResult : ScenarioResult
{
    public LogOnCommand Command { get; set; }

    public ActionResult MvcActionResult { get; set; }

    public IUserSession UserSession { get; set; }

    public Member Member { get; set; }
}
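
The ScenarioResult base class is not shown in this post either; roughly, and with member names assumed from how it is used, it could look something like this:

//A speculative sketch of the ScenarioResult base class; it simply carries
//the violations and any caught exception, and derives Succeeded from them.
[Serializable]
public abstract class ScenarioResult
{
    public IViolations Violations { get; private set; }

    public Exception Exception { get; set; }

    public bool Succeeded
    {
        get { return Exception == null && !Violations.IsNotEmpty; }
    }

    protected ScenarioResult()
    {
        Violations = new Violations();
    }
}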

Now if we take a look at some consumer code, I think it gets a bit easier to understand, and the application controller only handles logic for controlling which view and view data should be presented.

Step 1 – Initiate the scenario

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult LogOn(LogOnViewModel viewModel)
{
    if (!ModelState.IsValid)
        return View(viewModel);

    var returnToUrl = Url.AbsoluteUrlFromAction("AuthenticateOpenIdRequest");
    var command = new LogOnCommand(LogOnSteps.Initiate)
                      {
                          OpenId = viewModel.OpenId, 
                          ReturnToUrl = returnToUrl
                      };

    var scenario = IoCContainer.Instance.GetInstance<ILogOnScenario>();
    var scenarioResult = scenario.Execute(command);

    if (!scenarioResult.Succeeded)
    {
        this.HandleScenarioResult(scenarioResult);
        return View(viewModel);
    }

    return scenarioResult.MvcActionResult;
}
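
The Url.AbsoluteUrlFromAction call above is a small helper extension and not part of ASP.Net MVC itself; a hedged sketch of how it could be implemented on top of UrlHelper.Action:

public static class UrlHelperExtensions
{
    //Builds an absolute URL for an action by passing the current request's
    //scheme to the protocol-aware overload of UrlHelper.Action.
    public static string AbsoluteUrlFromAction(this UrlHelper url, string actionName)
    {
        var scheme = url.RequestContext.HttpContext.Request.Url.Scheme;

        return url.Action(actionName, null, (object)null, scheme);
    }
}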

Step 2 – Finalize the scenario

public ActionResult Authenticate()
{
    WebUserSession.SetGuest();

    var viewModel = new LogOnViewModel(OpenIdProviders);
    var command = new LogOnCommand(LogOnSteps.Finalize);
    var scenario = IoCContainer.Instance.GetInstance<ILogOnScenario>();

    var scenarioResult = scenario.Execute(command);
    if (!scenarioResult.Succeeded)
    {
        this.HandleScenarioResult(scenarioResult);

        return View("LogOn", viewModel);
    }

    if (scenarioResult.Member == null)
        return View("SetupNewMembership");

    if (!scenarioResult.Member.IsActivated)
        return View("ConfirmMembership");

    WebUserSession.Set(scenarioResult.UserSession);

    if (WebUserSession.Current.IsAuthenticated)
        return Redirect("~/");

    return View("LogOn", viewModel);
}

That’s it.

//Daniel

Use extension methods to let your enums hold the logic.

I often stumble upon code checking values on enumerations to determine the state of some object or rule. At best, this code is extracted and put in a helper or utils class of some sort. Don’t! Use extension methods instead. It lets you provide a name for the condition (rule) that you are checking, and it puts it together with the enumeration.

Example
Let’s say I have an enumeration with storage providers. Each provider might be able to store data virtually, hence I need a way of determining whether it’s capable of this.

[Serializable]
public enum StorageProviders
{
    LuceneIo = 0,
    LuceneVirtual = 1
}

public static class StorageProvidersExtensions
{
    public static bool IsVirtual(this StorageProviders provider)
    {
        return provider == StorageProviders.LuceneVirtual;
    }
}

I can now act on the enumeration value itself.

...connectionInfo.ProviderType.IsVirtual()...
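
For contrast, this is roughly what the helper-class variant that I argue against tends to look like (hypothetical code); the rule is detached from the enumeration and the call site gets noisier:

//Hypothetical helper-class variant; the rule is no longer discoverable
//from the enumeration value itself.
public static class StorageProviderHelper
{
    public static bool IsVirtual(StorageProviders provider)
    {
        return provider == StorageProviders.LuceneVirtual;
    }
}

//...StorageProviderHelper.IsVirtual(connectionInfo.ProviderType)...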

//Daniel

Tip – When writing custom assertion methods – Keep your assertion stack trace clean

I have bumped into a couple of custom assertion methods which let the assertion stack trace contain the assertion method itself. Hence, if I use e.g. Testdriven.Net, I might double-click on the wrong line and end up in the assertion method instead of the test. It’s really easy to prevent this: just add the DebuggerHiddenAttribute (read more) to the assertion method.

Example

First, let’s be clear: the example is not realistic and there’s already support for array assertions, but let’s pretend I want an assertion method that asserts string arrays and ensures that:

  • the number of elements is the same
  • the order of the elements is the same

I’m using Gallio, MbUnit and Testdriven.Net.

The test

[TestFixture]
public class DummyTests
{
    [Test]
    public void GetAsSortedStringArray_WhenManyParams_ReturnsSortedStringArray()
    {
        var values = new int[] {0, 4, 5, 1, 3, 2};
        var expectedResult = new[] {"0", "-1", "2", "3", "4", "5"};

        var dummy = new Dummy();
        var sortedStringArray = dummy.GetAsSortedStringArray(values);

        ArrayAssert.AreMatches(expectedResult, sortedStringArray);
    }
}

The test will fail (which is expected in this blog post), since expected element number "2" should have the value "1" and not "-1".

What is wrong with the test

The implementation of Dummy

public class Dummy
{
    public string[] GetAsSortedStringArray(params int[] values)
    {
        return values.Select(v => v.ToString()).OrderBy(v => v).ToArray();
    }
}

The Assertion method

public static class ArrayAssert
{
    public static void AreMatches(string[] expectedValues, string[] actualValues)
    {
        var numOfExpected = expectedValues.Count();
        var numOfActual = actualValues.Count();

        if (numOfExpected != numOfActual)
            throw new AssertionException(string.Format(
                @"The expected values and the actual values have 
                different number of elements.
                Expected: '{0}'; Actual: '{1}'.", numOfExpected, numOfActual));

        for(var c = 0; c < numOfExpected; c++)
        {
            var expected = expectedValues[c];
            var actual = actualValues[c];

            if(!expected.Equals(actual))
                throw new AssertionException(string.Format(
                    @"The values in element-position '{0}' are not the same.
                    Expected: '{1}'; Actual: '{2}'.", c, expected ?? "", actual ?? ""));
        }
    }
}

What’s wrong?

The result of the assertion will be:

Testdriven.Net – Result
Testdriven.Net - Result - Not friendly

Gallio – Result
Gallio - Result - Not friendly

I have tried to mark the lines that I think pollute the result. It’s the lines stating something like:

Class1.cs(44,0): at ClassLibrary1.ArrayAssert.AreMatches(String[] expectedValues, String[] actualValues)

This is not of interest. By adding the DebuggerHidden attribute I get another assertion result, which in my opinion is slightly more friendly.

The corrected assertion method

public static class ArrayAssert
{
    [DebuggerHidden]
    public static void AreMatches(string[] expectedValues, string[] actualValues)
    {
        ...
    }
}

More friendly assertion result

Testdriven.Net – Result
Testdriven.Net - Results - Friendly

Gallio – Result
Gallio - Results - Friendly

Nothing more to say.

//Daniel

Some thoughts about O/RMs.

1. The Mappings:
Whether it’s rolled via XML, attributes, or some fluent API, it’s something you as a developer have to maintain, so even if you let the conceptual object-model drive the design of your database, you are implicitly writing DDL through the mappings.

2. HQL, EQL, XQL:
It is just a DSL over SQL, which means that I still write SQL, just in another dialect. So if I cannot use IQueryable, I will still write SQL, which I probably could have done more conveniently in Management Studio against my SQL DB.

3. POCO:
We are so concentrated on getting “pure” classes and have trouble accepting a generated code base created either by a designer or by T4 templates. We feel that these kinds of solutions are highly dangerous. Why? If you handcraft your entities yourself, you usually end up with “POCO” classes that have to extend a certain base-entity and implement value-compare and INotify, etc. Isn’t it clearer and easier for future maintenance staff to open a designer, get an overview, and be able to easily maintain references, attributes etc? What’s so terrifying about getting the properties/state generated for you?

Thoughts?

//Daniel

Where are your Scenarios in your domain?

If you design your entities before you design your scenarios, you are data-driven! The Scenarios are primary – The entities are secondary!

– Daniel Wertheim

Those are my words on what I think is wrong in many projects, and that’s what this post is about.

Definition of Scenario

a postulated sequence of possible events

http://wordnetweb.princeton.edu/perl/webwn?s=scenario

In computing, a scenario is a narrative describing foreseeable interactions of types of users (characters) and the system. Scenarios include information about goals, expectations, motivations, actions and reactions. Scenarios are neither predictions nor forecasts, but rather attempts to reflect on or portray the way in which a system is used in the context of daily activity.

http://en.wikipedia.org/wiki/Scenario_(computing)

I tend to bump into situations where I meet people who proclaim that they are adopting DDD in their projects and that they are writing object-oriented systems. As I’m always interested in learning, I fire off a couple of questions about what they mean by DDD and OO. I will not bother you with the dialog, but will rather skip to a summary of the conversation and present five of the artifacts that they think signify that they are conforming to DDD, which I think are well worth noticing and discussing:

  • They are object-oriented
    Meaning “that they are not using datasets anymore to keep track of the state and simple validation.”
  • They have POCO entities
    Meaning “that they have an OR/M that to some extent hides some of the artifacts of dealing with persisting the entities.”
  • They have repositories
    Meaning “there’s at least one repository per aggregate root (most often more), dealing with CRUDs.”
  • They have domain-services
    Meaning “they have some objects named [Entity][Service/Manager] which contains logic and calls to repositories”.
  • They have a model
    Meaning “they have a bunch of the above objects collaborating together”.
Ok, so I have designed systems like these too. I will probably do it again. But let’s look at what I mean by “systems like these” and what I think is wrong.

What’s wrong?
So I think we tend to create a model with an emphasis on entities, which more or less becomes a representation of tables in a traditional database. I like to define these entities as “an intelligent data-row”, and by intelligent I mean that it might have simple validation of state and some logic for aggregating values, e.g. “Sum of the contained order lines in the order”. Furthermore we tend to create a bunch of services/managers named “[Entity][Service/Manager]”, which to some extent hold together the flow – which most often means they do some simple validation/rule-evaluation and interact with a repository. I think these Service/Manager classes arise from a developer mindset that goes something like this: “The entity must be POCO and can’t have anything to do with persisting entities. I’ll just create a service that calls the repository, and since I’m working with my order entity, I’ll name it OrderService.”

Again, we end up here because we are thinking objects and state and how the objects correlate to each other. And when sitting there, performing TDD with a shallow understanding of the domain, it’s probably the easiest way to get started. Classic example: “Customer places a new order” => “OrderService.PlaceOrder”, which receives an order entity with some contained lines and a customer. So my question to you then is: “What exactly is OrderService in the business?” If it is the customer that makes the request, why can’t it be “Customer.PlaceNewOrder”? Or a domain object named something that mimics the scenario/process, e.g. “PlaceNewOrder” or “OrderRegistration”.
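
As a purely hypothetical illustration of the difference, the same behaviour could live in a class named after the scenario instead of after the entity (the repository and entity types here are made up):

//Hypothetical sketch: the class name mirrors the business scenario,
//not the entity it happens to touch.
public class PlaceNewOrder
{
    private readonly IOrderRepository _orderRepository;

    public PlaceNewOrder(IOrderRepository orderRepository)
    {
        _orderRepository = orderRepository;
    }

    public void Execute(Customer customer, Order order)
    {
        //Scenario-specific validation and rules go here...
        _orderRepository.Add(order);
    }
}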

To sum it all up: the scenarios/processes in the domain are the core of the business. They are what makes the business unique and what it competes with against its competitors. This is what should drive your design. Without a fair understanding of the domain, you risk becoming data-centric. The Domain should drive the design of your model, not TDD. Get a deep knowledge of the business and its processes, not only the binary rules and the state. These are the aspects that should be reflected in your code.

//Daniel

    Use StructureMap and Castle DynamicProxy to easily change the semantics of your code

    In this posting my intention is to show you how to use Inversion of Control (IoC), with the help of StructureMap, to be able to easily switch the semantics of e.g. entities in a model with the change of a few lines in the StructureMap configuration. I will use it to go from plain classes to dynamically generated proxies, using Castle DynamicProxy. The proxied version will add the attributes Serializable and DataContract. I will not implement any interception, but I will set up an empty interceptor which, if you want, you can extend and play with yourself.

    Ok, even if this might not be the most useful scenario (it was something I fiddled around with), it shows you how easy it is to control your dependencies (when the infrastructure code is in place). E.g., let’s say that I’m using Entity Framework 4 (EF4) and the new capabilities of Code-only and POCO entities, and I let EF4 generate proxies to keep track of changes etc. I could then instruct StructureMap to let EF4 create the instances for me at all times, and not just when I fetch already persisted entities.

    The solution – Pls.ProxyLab

    Pls.Core

    Contains some simple “reusable” infrastructure code that can be referenced by all other assemblies.

    Pls.ProxyLab.Client

    Simple console application that consumes and outputs some metadata information about the entities.

    Pls.ProxyLab.Entities

    Contains the entities of the model

    Pls.ProxyLab.IoC

    Contains the StructureMap-based IoC-container.

    Pls.ProxyLab.IoC.Configuration

    Contains the configurations for the StructureMap-based IoC-container. In this demo I have also put the configuration of the ProxyBuilder in this assembly.

    The model

    To keep things simple I have not included anything other than simple automatic properties and constructors in my entities.

    The entity model of the Pls.ProxyLab

    A little note about the constructors. As you will see, they are only used for initialization of default values and for resolving dependencies between entities. With the use of StructureMap, I actually wouldn’t need to resolve the other entities, since I could let StructureMap handle this by autowiring dependencies using property setter injection. The reason why I’m resolving them in the model is that it shows the intent more clearly, and I also couldn’t get StructureMap to handle the dependencies when I was generating proxies.

    All members are made virtual so that Castle DynamicProxy can create proxies and intercept the members.

    Entity

    A simple base class that (when I start incorporating persistence) will contain implementations for identity and concurrency tokens etc. As of right now it’s an empty shell.

    public abstract class Entity
    {
    }
    

    Person, Customer and Room

    Simple classes that extend the Entity base class with some automatic properties.

    public class Person
        : Entity
    {
        public virtual string Firstname { get; set; }
        public virtual string Lastname { get; set; }
    }
    
    public class Customer
        : Person
    {
        public virtual string CustomerNo { get; set; }
    }
    
    public class Room
        : Entity
    {
        public virtual string RoomNo { get; set; }
    }
    

    BookingRequest

    Just a little bit more complicated, as it sets some default values for its members.

    public class BookingRequest
        : Entity
    {
        public virtual int NoOfAdultBeds { get; set; }
        public virtual int NoOfChildBeds { get; set; }
        public virtual bool SmokingAllowed { get; set; }
        public virtual bool WantsWindow { get; set; }
    
        public BookingRequest()
        {
            NoOfAdultBeds = 2;
            NoOfChildBeds = 0;
            SmokingAllowed = false;
            WantsWindow = true;
        }
    }
    

    Booking

    Not so complicated. The only thing that is new is that it resolves some dependencies on other entities.

    public class Booking
        : Entity
    {
        public virtual BookingRequest Request { get; set; }
        public virtual string BookingNo { get; set; }
        public virtual Customer Customer { get; set; }
        public virtual Room Room { get; set; }
    
        public Booking()
        {
            Request = EntityFactory.Instance.GetInstance<BookingRequest>();
            Customer = EntityFactory.Instance.GetInstance<Customer>();
            Room = EntityFactory.Instance.GetInstance<Room>();
        }
    }
    

    EntityFactory

    I have chosen not to talk directly to my IoC-container, but via a simple singleton-based EntityFactory, which in turn communicates with the IoC-container (ProxyLabObjectContainer). I think it gives a cleaner naming and understanding of what’s actually being resolved, and I can add a generic constraint so that only entities can be resolved.

    public class EntityFactory
    {
        public static EntityFactory Instance
        {
            get { return Singleton<EntityFactory>.Instance; }
        }
    
        public virtual T GetInstance<T>()
            where T : Entity
        {
            return ProxyLabObjectContainer.Instance.GetInstance<T>();
        }
    }
    

    Consuming the model

    The code for consuming the model is as simple as:

    var booking = EntityFactory.Instance.GetInstance<Booking>();
    
    booking.Customer.Firstname = "Daniel";
    booking.Customer.Lastname = "Wertheim";
    booking.Request.NoOfAdultBeds = 1;
    booking.Request.WantsWindow = false;
    

    The StructureMap based IoC-container

    The EntityFactory consumes something called ProxyLabObjectContainer, which is an IoC-container that uses StructureMap for resolving the objects. The project (Pls.ProxyLab.IoC) only contains one class, the IoC-container, which extends a base class from Pls.Core. The only thing it does is tell StructureMap to look for configurations (in the form of specific StructureMap Registry implementations) in the assembly Pls.ProxyLab.IoC.Configuration, or rather the assembly name of the assembly containing the IoC-container + “.Configuration”.

    Configuring the IoC-container – Step 1

    public class ProxyLabObjectContainer
        : StructureMapObjectContainer
    {
        private static string _configurationNamespace 
            = typeof (ProxyLabObjectContainer).Namespace + ".Configuration";
    
        public static IObjectContainer Instance
        {
            get
            {
                return Singleton<ProxyLabObjectContainer>.Instance;
            }
        }
    
        protected override void BootstrapContainer()
        {
            Container.Configure(x => x
                .Scan(scanner =>
                          {
                              scanner.Assembly(_configurationNamespace);
                              scanner.LookForRegistries();
                          })
                );
        }
    }
    

    Configuring the IoC-container – Step 2

    Create a StructureMap Registry implementation that tells the IoC-container how to resolve the entities. I do this by telling it to scan a specific assembly for all types that extend Entity. I also explicitly exclude the type Entity, since it is abstract and shall not be resolved.

    [Serializable]
    public class IoCRegistry
        : Registry
    {
        public IoCRegistry()
        {
            Scan(s =>
                    {
                        s.AssemblyContainingType<Entity>();
                        s.AddAllTypesOf<Entity>();
                        s.ExcludeType<Entity>();
                    });
        }
    }
    

    Take it for a testride

    If we run the application now (which will output information to the console about extended base classes, implemented interfaces and attributes) it will not output that much. No interfaces and no attributes, just the base class Entity and ultimately System.Object.
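
    The console code that produces this output is not central to the post; roughly, it amounts to a reflection dump like the following sketch (assumed, not the actual client code), which is also what makes the difference visible in the second test ride later on.

    var booking = EntityFactory.Instance.GetInstance<Booking>();
    var type = booking.GetType();

    //Dump the metadata we are interested in: base class, interfaces and attributes.
    Console.WriteLine("Type: {0}", type.FullName);
    Console.WriteLine("Base: {0}", type.BaseType);

    foreach (var contract in type.GetInterfaces())
        Console.WriteLine("Interface: {0}", contract.Name);

    foreach (var attribute in type.GetCustomAttributes(false))
        Console.WriteLine("Attribute: {0}", attribute.GetType().Name);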

    Start using proxies instead

    Ok, let’s pretend I want to use my entities in a WCF scenario, and I want to apply the attributes Serializable and DataContract. Instead of adding these manually to my classes I will generate proxies for the classes and inject the attributes.

    To get acquainted with Castle’s DynamicProxy (CDP) I have built my own custom API that wraps the functionality of CDP. All that code is placed in the Pls.Core assembly. I have an interface that defines my custom IProxyBuilder.

    ProxyBuilder

    public interface IProxyBuilder
    {
        IProxyConfig Config { get; set; }
    
        object ProxyFromClass(Type proxiedClassType, params IInterceptor[] interceptors);
    
        T ProxyFromClass<T>(params IInterceptor[] interceptors)
            where T : class;
    
        object ProxyFromInterface(Type proxiedInterfaceType, params IInterceptor[] interceptors);
    
        T ProxyFromInterface<T>(params IInterceptor[] interceptors)
            where T : class;
    }
    

    The intentions are quite clear: a proxy builder can create proxies using either classes or interfaces as templates, either using generics or by passing Types. This is only a fraction of all the overloads and possibilities that Castle’s DynamicProxy (CDP) offers, so my wrapping interface also gives me a cleaner API and therefore, hopefully, causes less confusion.

    The implementation that’s included in Pls.Core is really simple and just forwards the calls to CDP’s ProxyGenerator.

    public class ProxyBuilder
        : IProxyBuilder
    {
        protected virtual ProxyGenerator ProxyGenerator { get; set; }
        public virtual IProxyConfig Config { get; set;}
    
        public ProxyBuilder(IProxyConfig config)
        {
            ProxyGenerator = new ProxyGenerator();
            Config = config;
        }
    
        public virtual object ProxyFromClass(Type proxiedClassType, params IInterceptor[] interceptors)
        {
            return ProxyGenerator.CreateClassProxy(
                proxiedClassType, Config.GenerationOptions, interceptors);
        }
    
        public virtual T ProxyFromClass<T>(params IInterceptor[] interceptors)
            where T : class
        {
            return ProxyGenerator.CreateClassProxy<T>(
                Config.GenerationOptions, interceptors);
        }
    
        public virtual object ProxyFromInterface(Type proxiedInterfaceType, params IInterceptor[] interceptors)
        {
            return ProxyGenerator.CreateInterfaceProxyWithoutTarget(
                proxiedInterfaceType, Config.GenerationOptions, interceptors);
        }
    
        public virtual T ProxyFromInterface<T>(params IInterceptor[] interceptors)
            where T : class
        {
            return ProxyGenerator.CreateInterfaceProxyWithoutTarget<T>(
                Config.GenerationOptions, interceptors);
        }
    }
    

    Default ProxyConfig

    The ProxyBuilder needs some configuration, which is provided by injecting an implementation of my custom IProxyConfig interface. I have made a simple default implementation that I will extend when I set up the configuration to use when creating proxies for my entities.

    [Serializable]
    public class ProxyConfig
        : IProxyConfig
    {
        public virtual ProxyGenerationOptions GenerationOptions { get; private set; }
    
        public ProxyConfig()
        {
            GenerationOptions = new ProxyGenerationOptions();
        }
    }
    

    The EntityProxyConfig

    The next step is to create a custom ProxyConfig that I will use to instruct an instance of ProxyBuilder how to construct proxies for my entities.

      Instructions:

    • Add Serializable attribute
    • Add DataContract attribute

    To achieve this I need to create instances of System.Reflection.Emit.CustomAttributeBuilder and inject them into Castle DynamicProxy’s GenerationOptions.

    Serializable – CustomAttributeBuilder

    Since this attribute doesn’t have any properties or arguments to provide values for, the cTor overload I need to use is:

    public CustomAttributeBuilder(
        System.Reflection.ConstructorInfo con,
        object[] constructorArgs)
    

    Which is done in CreateSerializableAttributeBuilder in EntityProxyConfig.

    protected virtual CustomAttributeBuilder CreateSerializableAttributeBuilder()
    {
        var attributeType = typeof(SerializableAttribute);
        var ctor = attributeType.GetDefaultCtor();
    
        return new CustomAttributeBuilder(ctor, new object[0]);
    }
    

    GetDefaultCtor is an extension method that you will find in Pls.Core.
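
    A plausible sketch of it (an assumption; the actual implementation in Pls.Core is not shown in this post):

    public static class TypeExtensions
    {
        //Returns the public parameterless constructor of the given type.
        public static ConstructorInfo GetDefaultCtor(this Type type)
        {
            return type.GetConstructor(Type.EmptyTypes);
        }
    }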

    DataContract – CustomAttributeBuilder

    This attribute has some properties that I want to pass values to, so I need to use another overload of the cTor of CustomAttributeBuilder:

    public CustomAttributeBuilder(
        System.Reflection.ConstructorInfo con,
        object[] constructorArgs,
        System.Reflection.PropertyInfo[] namedProperties,
        object[] propertyValues)
    

    So I need to extract PropertyInfos for the properties I want to set values for (namedProperties) and I need to provide values for them (propertyValues).

    protected virtual CustomAttributeBuilder CreateDataContractAttributeBuilder()
    {
        var attributeType = typeof(DataContractAttribute);
        var ctor = attributeType.GetDefaultCtor();
        var props = GetPropertiesAndValues(attributeType,
                                  new Tuple<string, object>(
                                      "IsReference", DataContractTemplate.IsReference),
                                  new Tuple<string, object>(
                                      "Namespace", DataContractTemplate.Namespace));
    
        return new CustomAttributeBuilder(
            ctor, new object[0],
            props.Select(p => p.Item1).ToArray(),
            props.Select(p => p.Item2).ToArray());
    }
    
    protected virtual IEnumerable<Tuple<PropertyInfo, object>> GetPropertiesAndValues(
        Type type, params Tuple<string, object>[] nameValues)
    {
        return nameValues.Select(
            nameValue => new Tuple<PropertyInfo, object>(
                type.GetProperty(nameValue.Item1), nameValue.Item2)).ToList();
    }
    

    A short explanation. I pass simple Tuples with the name of the property I want and the value I want to provide for it:

    • new Tuple<string, object>(“IsReference”, DataContractTemplate.IsReference)
    • new Tuple<string, object>(“Namespace”, DataContractTemplate.Namespace)

    So I’m obviously interested in specifying two properties: IsReference and Namespace.

    I pass these tuples to a simple helper: GetPropertiesAndValues which returns Tuples where the name of the property is replaced with a PropertyInfo. Hence you will get back:

    • Tuple(xxx, true)
    • Tuple(xxx, “MyNamespace.Org”)

    Ok, so where do the values “true” and “MyNamespace.Org” come from? To be able to affect the configuration I have provided some properties on the EntityProxyConfig. One property is the “DataContractTemplate”, which is a simple instance of the DataContractAttribute. I have also provided some other properties that can be configured. The values are set in the cTor, which probably would live somewhere else, but this is just a demo so…

    public virtual bool ApplySerializable { get; set; }
    public virtual bool ApplyDataContract { get; set; }
    public virtual DataContractAttribute DataContractTemplate { get; set; }
    
    public EntityProxyConfig()
    {
        ApplySerializable = true;
        ApplyDataContract = true;
        DataContractTemplate = new DataContractAttribute { IsReference = true, Namespace = "MyNamespace.org" };
    
        InitializeProxyGenerationOptions();
    }
    

    That is the important code of the EntityProxyConfig class; the rest is just code that forms a process (a chain of method calls and simple instructions) to initialize the GenerationOptions. The complete code for the class looks like this:

    [Serializable]
    internal class EntityProxyConfig
        : ProxyConfig
    {
        public virtual bool ApplySerializable { get; set; }
        public virtual bool ApplyDataContract { get; set; }
        public virtual DataContractAttribute DataContractTemplate { get; set; }
    
        public EntityProxyConfig()
        {
            ApplySerializable = true;
            ApplyDataContract = true;
            DataContractTemplate = new DataContractAttribute { IsReference = true, Namespace = "MyNamespace.org" };
    
            InitializeProxyGenerationOptions();
        }
    
        protected virtual void InitializeProxyGenerationOptions()
        {
            var attributeBuilders = CreateAttributeBuilders();
    
            foreach (var attributeBuilder in attributeBuilders)
                GenerationOptions.AdditionalAttributes.Add(attributeBuilder);
        }
    
        protected virtual IEnumerable<CustomAttributeBuilder> CreateAttributeBuilders()
        {
            return new[] { CreateSerializableAttributeBuilder(), CreateDataContractAttributeBuilder() };
        }
    
        protected virtual CustomAttributeBuilder CreateSerializableAttributeBuilder()
        {
            var attributeType = typeof(SerializableAttribute);
            var ctor = attributeType.GetDefaultCtor();
    
            return new CustomAttributeBuilder(ctor, new object[0]);
        }
    
        protected virtual CustomAttributeBuilder CreateDataContractAttributeBuilder()
        {
            var attributeType = typeof(DataContractAttribute);
            var ctor = attributeType.GetDefaultCtor();
            var props = GetPropertiesAndValues(attributeType,
                                      new Tuple<string, object>(
                                          "IsReference", DataContractTemplate.IsReference),
                                      new Tuple<string, object>(
                                          "Namespace", DataContractTemplate.Namespace));
    
            return new CustomAttributeBuilder(
                ctor, new object[0],
                props.Select(p => p.Item1).ToArray(),
                props.Select(p => p.Item2).ToArray());
        }
    
        protected virtual IEnumerable<Tuple<PropertyInfo, object>> GetPropertiesAndValues(
            Type type, params Tuple<string, object>[] nameValues)
        {
            return nameValues.Select(
                nameValue => new Tuple<PropertyInfo, object>(
                    type.GetProperty(nameValue.Item1), nameValue.Item2)).ToList();
        }
    }
    

    Configure StructureMap to use the ProxyBuilder

    If I would like to consume the ProxyBuilder without using StructureMap I would use it like this:

    var proxyBuilder = new ProxyBuilder(new EntityProxyConfig());
    var bookingItem = proxyBuilder.ProxyFromClass<Booking>(new EntityInterceptor());
    

    I will then get a proxied version of my Booking entity and all calls to its virtual members will be intercepted in my custom EntityInterceptor (more about this later).

    Remember the IoCRegistry class that scanned the Pls.ProxyLab.Entities assembly for Entity implementations? We need to add one single line to it, so that we don’t use StructureMap’s default convention but instead our custom EntityConvention, which in turn will specify that the ProxyBuilder should be used.

    The line to be added is:

    s.Convention<EntityConvention>();
    

    I will also let the IoC-container be responsible for creating my IProxyBuilder implementation as a singleton:

    For<IProxyBuilder>()
        .LifecycleIs(Lifecycles.GetLifecycle(InstanceScope.Singleton))
        .Use<ProxyBuilder>()
        .Named("EntityProxyBuilder")
        .Ctor<IProxyConfig>().Is<EntityProxyConfig>();
    

    So the complete code now looks like this:

    [Serializable]
    public class IoCRegistry
        : Registry
    {
        public IoCRegistry()
        {
            //Add proxybuilder that are used for creation of my entities
            For<IProxyBuilder>()
                .LifecycleIs(Lifecycles.GetLifecycle(InstanceScope.Singleton))
                .Use<ProxyBuilder>()
                .Named("EntityProxyBuilder")
                .Ctor<IProxyConfig>().Is<EntityProxyConfig>();
    
            //Add all entities
            Scan(s =>
                    {
                        s.Convention<EntityConvention>();
                        s.AssemblyContainingType<Entity>();
                        s.AddAllTypesOf<Entity>();
                        s.ExcludeType<Entity>();
                    });
        }
    }
    

    EntityConvention

    Only one thing is left to implement: the EntityConvention that will be used to register all the found Entity implementations.

    [Serializable]
    internal class EntityConvention
        : IRegistrationConvention
    {
        public void Process(Type type, Registry registry)
        {
            registry.For(type).Use(
                ctx =>
                {
                    var proxyBuilder = ctx.GetInstance<IProxyBuilder>("EntityProxyBuilder");
    
                    return proxyBuilder.ProxyFromClass(type, new EntityInterceptor());
                });
        }
    }
    

    I simply use the previously registered IProxyBuilder for entities and use it to create a proxied representation of the specific Entity implementation. I provide my custom EntityInterceptor so that I can intercept interactions made to the virtual members of my entities.

    The interceptor is currently empty and just lets all calls pass through.

    [Serializable]
    internal class EntityInterceptor 
        : IInterceptor
    {
        public void Intercept(IInvocation invocation)
        {
            invocation.Proceed();
        }
    }
    

    Take it for a testride

    Ok, this time the IoC-container will return proxied entities, hence we will get another output.

    Proxied entities

    As you can see we are now getting proxies of our entities and they have all gained the injected attributes.

    That’s it for now. As always you can download the complete code here.

    //Daniel

    The Moq mocking framework and the tangling of Stubs and Mocks

    When I need the powers of a Faking-framework I use the Moq-framework, which I turn to when I feel that it is too much work to create a manual Fake (which I only do with stubs). There’s one thing that I really don’t like with Moq: the tangling of Mocks and Stubs. I like to see them as follows:

    Fakes

  • Stubs (used to make interaction work and to return values or throw exceptions etc.)
  • Mocks (used to assert against by putting demands/expectations on the interaction)

    This is why I wrote “Faking-framework” above. A stub is a fake and a mock is a fake, but a mock is not a stub and a stub is not a mock. Glad that we sorted that out!

    Don’t get me wrong here, I like the Moq-framework, but what I would like to see in it is a separation of stubs and mocks. The API should let you clearly create a stub or a mock. You shouldn’t be able to turn a stub into a mock. When creating a stub, a class named Mock is confusing and misleading and hence should not be used.

    Look at the following simple test. I have a shopping cart for a customer, and the cart uses a price locator to look up the prices of the products that I’m adding. The price is determined by looking at the customer, the product and the quantity. In the test I create a stub for my price locator, so that I’m sure of the prices it will return. I stub the price locator so that the test can query it for two different products for the same customer. Then I add two items to the shopping cart, which will use the price locator when I invoke the GetTotal function on my shopping cart.

    The test

    [TestClass]
    public class ShoppingCartTests
    {
        [TestMethod]
        public void GetTotal_TwoValidShoppingCartItems_GivesTotal()
        {
            const string customerNo = "2010-1";
            var item1 = new { CustomerNo = customerNo, ProductNo = "P01-01-00001", Quantity = 2, Result = 101.50M };
            var item2 = new { CustomerNo = customerNo, ProductNo = "P01-01-00002", Quantity = 3, Result = 99.75M };
    
            var expectedTotal = item1.Result + item2.Result;
            var priceLocatorStub = GetPriceLocatorStub(item1, item2);
    
            var shoppingCart = new ShoppingCart(customerNo) { PriceLocator = priceLocatorStub };
            shoppingCart.AddProduct(item1.ProductNo, item1.Quantity);
            shoppingCart.AddProduct(item2.ProductNo, item2.Quantity);
            var actualTotal = shoppingCart.GetTotal();
    
            Assert.AreEqual(expectedTotal, actualTotal);
        }
    
        private static IPriceLocator GetPriceLocatorStub(params object[] items)
        {
            var priceLocatorStub = new Moq.Mock<IPriceLocator>();
    
            foreach (var item in items)
            {
                var tmp = TypeCaster.CastAnonymous(item, new { CustomerNo = "", ProductNo = "", Quantity = 0, Result = 0M });
    
                priceLocatorStub
                    .Setup(pl => pl.LookupPrice(
                                     tmp.CustomerNo,
                                     tmp.ProductNo,
                                     tmp.Quantity)).Returns(tmp.Result);
            }
    
            return priceLocatorStub.Object;
        }
    }
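
    The TypeCaster.CastAnonymous call above is a small “cast by example” helper for getting a typed handle back to an anonymous type; a minimal sketch of how such a helper can be implemented (an assumption, not the post’s actual code):

    public static class TypeCaster
    {
        //Casts the value to the type of the supplied anonymous "template" instance.
        public static T CastAnonymous<T>(object value, T template)
        {
            return (T)value;
        }
    }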
    

    What if I add an expectation to my stub? It is possible, but should it be? With the little change of adding “.AtMost(0)” below, I have created a mock and totally changed the semantics; hence the Assert should change too, but that’s not the point of this blog post. The point is that I think the Moq guys should keep the mock and stub APIs separated, so that when I create a stub I can’t add expectations.

    priceLocatorStub
        .Setup(pl => pl.LookupPrice(
                         tmp.CustomerNo,
                         tmp.ProductNo,
                         tmp.Quantity)).Returns(tmp.Result).AtMost(0);
    

    The changes I want are:

    Instead of:

    var stub = new Mock();
    

    I want:

    var stub = new Stub();
    

    and not being able to add e.g. AtMost-expectations to the Stub.

    Even if this change will not come, I will continue to use the Moq-framework.

    To finish off, I would like to point out that I think this misuse of the terms mocks and mocking is common amongst developers, who gladly use variable names like “*Mock” when they actually are creating a stub, and who frequently speak in terms of mocks when they actually are using stubs.

    The complete source code can be downloaded from here.

    //Daniel

    Updates to – Putting Entity framework 4 to use in a business architecture

    This post only reflects some updates to my previous article: Putting Entity framework 4 to use in a business architecture.

    I made a “minor” error in my EfDynamicProxyAssemblies that is consumed by my EfDataContractSerializer. The error, which I now have corrected, arose when I added an example to the client application where I intended to read back the created wishlist(s) for a certain username. What happened was that the service used the overload of the EfEntityStore’s Query method that lets you specify strings for members in the object graph to include in the select. The generated proxy then contained three entities: Wishlist, UserAccount and Wish, but my EfDynamicProxyAssemblies cleared the cached assemblies once the types were extracted. And the first time the dynamic proxy assembly was executed, the only contained type was UserAccount, hence WishList and Wish didn’t get extracted and registered as known types.

    I have corrected the code and added some information about the custom EfDataContractSerializer attribute in the article.

    Download the pdf

    The complete code example can be found here.

    I have also added some information about “eager loading”. So there is some new code and two new sub chapters.

    Enjoy!

    //Daniel

    Putting Entity framework 4 to use in a business architecture

    Updates have been made since version 1 of this article!

    Read more about it here.

    Finally I’m finished. I have been struggling with an intended sum-up of my latest post using Entity Framework 4. It ended up as a document of about 40 pages. There is a lot of code in there, so the number of pages could be a bit misleading (which of course depends on what you are seeking).

    Download the pdf

    The complete code example can be found here.

    As always….. Have fun with it!

    //Daniel