Use extension methods to let your enums hold the logic.

I often stumble upon code checking values on enumerations to determine the state of some object or rule. At best, this code is extracted and put in a helper or utils class of some sort. Don’t! Use extension methods instead. They let you provide a name for the condition (rule) that you are checking, and they put it together with the enumeration.

Example
Let’s say I have an enumeration of storage providers. Each provider might be able to store data virtually, hence I need a way to determine whether a provider is capable of this.

[Serializable]
public enum StorageProviders
{
    LuceneIo = 0,
    LuceneVirtual = 1
}

public static class StorageProvidersExtensions
{
    public static bool IsVirtual(this StorageProviders provider)
    {
        return provider == StorageProviders.LuceneVirtual;
    }
}

I can now act on the enumeration value itself.

...connectionInfo.ProviderType.IsVirtual()...
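For completeness, here is a minimal sketch of how such a check might read at a call site. The ConnectionInfo class and its ProviderType property are hypothetical, used only to illustrate the usage above:

public class ConnectionInfo
{
    public StorageProviders ProviderType { get; set; }
}

public static class StorageDescriber
{
    public static string Describe(ConnectionInfo connectionInfo)
    {
        // The rule reads as a named condition on the enum value itself,
        // instead of a comparison hidden away in some helper/utils class.
        return connectionInfo.ProviderType.IsVirtual()
            ? "Virtual storage"
            : "Physical storage";
    }
}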

//Daniel

Where are your Scenarios in your domain?

If you design your entities before you design your scenarios, you are data-driven! The Scenarios are primary – The entities are secondary!

– Daniel Wertheim

Those are my own words about what I think is wrong in many projects, and that’s what this post is about.

Definition of Scenario

a postulated sequence of possible events

http://wordnetweb.princeton.edu/perl/webwn?s=scenario

In computing, a scenario is a narrative describing foreseeable interactions of types of users (characters) and the system. Scenarios include information about goals, expectations, motivations, actions and reactions. Scenarios are neither predictions nor forecasts, but rather attempts to reflect on or portray the way in which a system is used in the context of daily activity.

http://en.wikipedia.org/wiki/Scenario_(computing)

I tend to bump into situations where I meet people who proclaim that they are adopting DDD in their projects and that they are writing object-oriented systems. As I’m always interested in learning, I fire off a couple of questions about what they mean by DDD and OO. I will not bother you with the dialogue, but will rather skip to a summary of the conversation and present five of the artifacts that they think signify that they are conforming to DDD, which I think are well worth noticing and discussing:

  • They are object-oriented
    Meaning “that they are not using datasets anymore to keep track of the state and simple validation.”
  • They have POCO entities
    Meaning “that they have an OR/M that to some extent hides some of the artifacts of dealing with persisting the entities.”
  • They have repositories
    Meaning “there’s at least one repository per aggregate root (most often more), dealing with CRUDs.”
  • They have domain-services
    Meaning “they have some objects named [Entity][Service/Manager] which contains logic and calls to repositories”.
  • They have a model
    Meaning “they have a bunch of the above objects collaborating together”.
    Ok, so I have designed systems like these too. I will probably do it again. But let’s look at what I mean by “systems like these” and what I think is wrong.

    What’s wrong?
    I think we tend to create a model with an emphasis on entities, which more or less becomes a representation of tables in a traditional database. I like to define these entities as “an intelligent data-row”, and by intelligent I mean that it might have simple validation of state and some logic for aggregating values, e.g. “sum of the contained order lines in the order”. Furthermore, we tend to create a bunch of services/managers named “[Entity][Service/Manager]”, which to some extent hold the flow together – which most often means they do some simple validation/rule-evaluation and they interact with a repository. I think these Service/Manager classes arise from a mental model in the developer that looks like this: “The entity must be POCO and can’t have anything to do with persisting entities. I’ll just create a service that calls the repository, and since I’m working with my order entity, I’ll name it OrderService.”

    Again, we end up here because we are thinking about objects and state and how the objects correlate to each other. And when sitting there, performing TDD with a shallow understanding of the domain, it’s probably the easiest way to get started. Classic example: “Customer places a new order” => “OrderService.PlaceOrder”, which receives an order entity with some contained lines and a customer. So my question to you is: “What exactly is OrderService in the business?” If it is the customer that makes the request, why can’t it be “Customer.PlaceNewOrder”? Or a domain object named something that mimics the scenario/process, e.g. “PlaceNewOrder” or “OrderRegistration”?
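    A minimal sketch of what the latter could look like – the Customer, Order and OrderLine types and their members are hypothetical here, used only to illustrate letting the scenario, rather than the entity, name the object:

    public class OrderRegistration
    {
        private readonly Customer _customer;

        public OrderRegistration(Customer customer)
        {
            _customer = customer;
        }

        // The scenario “Customer places a new order” gets a name of its own,
        // instead of being routed through a generic OrderService.
        public Order PlaceNewOrder(IEnumerable<OrderLine> orderLines)
        {
            var order = new Order(_customer);

            foreach (var orderLine in orderLines)
                order.AddLine(orderLine);

            return order;
        }
    }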

    To sum it all up: the scenarios/processes in the domain are the core of the business. They are what make the business unique and what it competes with against other competitors. This is what should drive your design. Without a fair understanding of the domain you risk becoming data-centric. The domain should drive the design of your model, not TDD. Get a deep knowledge of the business and its processes, not only of the binary rules and the state. These are the aspects that should be reflected in your code.

    //Daniel

    The Moq mocking framework and the tangling of Stubs and Mocks

    When I need the powers of a faking framework I use the Moq framework, which I turn to when I feel that it is too much work to create a manual fake (which I only do with stubs). There’s one thing that I really don’t like with Moq: the tangling of mocks and stubs. I like to see them as follows:

    Fakes

  • Stubs (used to make interaction work and to return values or throw exceptions etc.)
  • Mocks (used to assert against, by putting demands/expectations on the interaction)

    This is why I wrote “faking framework” above. A stub is a fake and a mock is a fake, but a mock is not a stub and a stub is not a mock. Glad that we sorted that out!

    Don’t get me wrong here, I like the Moq framework, but what I would like to see in it is a separation of stubs and mocks. The API should let you clearly create either a stub or a mock. You shouldn’t be able to turn a stub into a mock. When creating a stub, a class named Mock is confusing and misleading, and hence should not be used.

    Look at the following simple test. I have a shopping cart for a customer, and the cart uses a price locator to look up the prices of the products that I’m adding. The price is determined by looking at the customer, the product and the quantity. In the test I create a stub for my price locator, so that I’m sure of the prices it will return. I stub the price locator so that the test can query it for two different products for the same customer. Then I add two items to the shopping cart, which will use the price locator when I invoke the GetTotal function on my shopping cart.

    The test

    [TestClass]
    public class ShoppingCartTests
    {
        [TestMethod]
        public void GetTotal_TwoValidShoppingCartItems_GivesTotal()
        {
            const string customerNo = "2010-1";
            var item1 = new { CustomerNo = customerNo, ProductNo = "P01-01-00001", Quantity = 2, Result = 101.50M };
            var item2 = new { CustomerNo = customerNo, ProductNo = "P01-01-00002", Quantity = 3, Result = 99.75M };
    
            var expectedTotal = item1.Result + item2.Result;
            var priceLocatorStub = GetPriceLocatorStub(item1, item2);
    
            var shoppingCart = new ShoppingCart(customerNo) { PriceLocator = priceLocatorStub };
            shoppingCart.AddProduct(item1.ProductNo, item1.Quantity);
            shoppingCart.AddProduct(item2.ProductNo, item2.Quantity);
            var actualTotal = shoppingCart.GetTotal();
    
            Assert.AreEqual(expectedTotal, actualTotal);
        }
    
        private static IPriceLocator GetPriceLocatorStub(params object[] items)
        {
            var priceLocatorStub = new Moq.Mock<IPriceLocator>();
    
            foreach (var item in items)
            {
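                // CastAnonymous (a helper from the downloadable sample, not shown here) casts the
                // item back to the anonymous-type shape given by the second argument, so that its
                // properties can be read in a typed way below.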
                var tmp = TypeCaster.CastAnonymous(item, new { CustomerNo = "", ProductNo = "", Quantity = 0, Result = 0M });
    
                priceLocatorStub
                    .Setup(pl => pl.LookupPrice(
                                     tmp.CustomerNo,
                                     tmp.ProductNo,
                                     tmp.Quantity)).Returns(tmp.Result);
            }
    
            return priceLocatorStub.Object;
        }
    }
    

    What if I add an expectation to my stub? It is possible, but should it be? Yes, it’s possible. With the little change of adding “.AtMost(0)” below, I have created a mock and totally changed the semantics, hence the Assert should change too, but that’s not the point of this blog post. The point is that I think the Moq guys should keep the mock and stub APIs separated, so that when I create a stub I can’t add expectations.

    priceLocatorStub
        .Setup(pl => pl.LookupPrice(
                         tmp.CustomerNo,
                         tmp.ProductNo,
                         tmp.Quantity)).Returns(tmp.Result).AtMost(0);
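
    If the fake really is a mock, the expectation would instead be asserted explicitly at the end of the test with Verify, called on the Moq.Mock<IPriceLocator> instance itself (which the test would then need to keep, rather than only its .Object) – a small sketch, not part of the original sample:

    // Sketch: mock-style assertion after acting on the shopping cart.
    // priceLocatorMock is assumed to be the Moq.Mock<IPriceLocator> instance, not its .Object.
    priceLocatorMock.Verify(
        pl => pl.LookupPrice(item1.CustomerNo, item1.ProductNo, item1.Quantity),
        Moq.Times.Once());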
    

    The changes I want are:

    Instead of:

    var stub = new Mock();
    

    I want:

    var stub = new Stub();
    

    and not being able to add e.g. AtMost-expectations to the Stub.
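
    Until such a separation exists, a thin wrapper can enforce it at home. A rough sketch on top of Moq follows – the Stub<TInterface> type and its Returns method are my own invention, not part of Moq:

    public class Stub<TInterface> where TInterface : class
    {
        private readonly Moq.Mock<TInterface> _inner = new Moq.Mock<TInterface>();

        public TInterface Object
        {
            get { return _inner.Object; }
        }

        // Only stub-style behaviour is exposed; there is no way to reach
        // expectations such as AtMost or Verify from here.
        public void Returns<TResult>(
            System.Linq.Expressions.Expression<Func<TInterface, TResult>> call,
            TResult result)
        {
            _inner.Setup(call).Returns(result);
        }
    }

    The price locator stub in the test above could then be created as a Stub<IPriceLocator> and configured with Returns only, with no way to reach the mock-only parts of the API.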

    Even if this change never comes, I will continue to use the Moq framework.

    To finish off, I would like to point out that I think this misuse of the terms mocks and mocking is common among developers, who gladly use variable names like “*Mock” when they are actually creating a stub, and who frequently speak in terms of mocks when they are actually using stubs.

    The complete source code can be downloaded from here.

    //Daniel

    Extend IQueryable instead of a certain data provider – more decoupled code

    This is going to be a really short post and is more of an update to my last post (Entity validation using Custom Data Annotation attributes) than a new one. I have made two small technical changes, and with these small changes I have gained more decoupled code.

    Change One
    Instead of having a helper method in my service base class, I have put it in the entity store. This is because the validation is merely simple data validation – pure validation of the entity’s state. Custom validation logic is still in my services, but this logic is injected into the validation method in the entity store via a Func<>.

    Change Two
    I don’t make use of repositories anymore. Instead I have a generic entity store that can handle whichever entity you pass to it, as long as you have provided mapping information about the entity. But what if I want “named queries”, e.g. GetAllUsersThatAreBlocked? In this case, I don’t create a custom implementation of my entity store by inheriting the generic implementation and adding the function “GetAllUsersThatAreBlocked”. No, I use my generic implementation, which lets me get an IQueryable<T>, which I then extend. In my previous post I extended the entity store, which was really ugly. This new solution decouples my queries from the entity store and gives me the opportunity to execute the queries against any source that can provide me with an IQueryable<T>.

    The validation
    The code below shows a Service method and how it calls the ValidateEntity method (which is now placed in the EntityStore).

    public ServiceResponse<UserAccount> SetupNewUserAccount(UserAccount userAccount)
    {
        var validationResult = EntityStore.ValidateEntity<UserAccount>(userAccount, CustomValidationForSettingUpNewAccount);
        var serviceResponse = new ServiceResponse<UserAccount>(userAccount, validationResult);
        
        if (!serviceResponse.ValidationResult.HasViolations)
        {
            EntityStore.AddEntity(userAccount);
            EntityStore.SaveChanges();
        }
    
        return serviceResponse;
    }
    
    private IEnumerable<ValidationResult> CustomValidationForSettingUpNewAccount(UserAccount userAccount)
    {
        var violations = new List<ValidationResult>();
        var emailIsTaken = EntityStore.Query<UserAccount>().EmailIsTakenByOther(userAccount.Username, userAccount.Email);
    
        if (emailIsTaken)
            violations.Add(new ValidationResult("Email is allready taken."));
    
        return violations;
    }
    

    The ValidateEntity method is as before, where it uses an EntityValidator that I have written about before. To perform custom validation you can pass in a Func<>, which is done above, where I pass a pointer to the method “CustomValidationForSettingUpNewAccount” in my service. This function only ensures that the email isn’t already in use.

    public EntityValidationResult ValidateEntity<T>(T entity, Func<T, IEnumerable<ValidationResult>> customValidation = null)
        where T : IEntity
    {
        Func<T, bool, IEnumerable<ValidationResult>> customValidationProxy = null;
    
        if (customValidation != null)
            customValidationProxy = (e, isValid) => isValid ? customValidation(e) : null;
    
        return new EntityValidator<T>().Validate(entity, customValidationProxy);
    }
    

    Named queries as extensions to IQueryable<T>
    The extension code is the same as in the earlier post, except that I now extend IQueryable instead of EfEntityStore and can hook the where clause directly onto the injected queryable. A very small syntax change, but a tremendous architectural change, since the query extension now has no dependency on a certain implementation technique.

    namespace Sds.Christmas.Storage.Queries.UserAccounts
    {
        public static class UserAccountQueries
        {
            public static bool EmailIsTakenByOther(this IQueryable<UserAccount> userAccounts, string username, string email)
            {
                // True if some other account (a different username) already uses the email address.
                return
                    userAccounts.Any(
                        u =>
                            !u.Username.Equals(username, StringComparison.InvariantCultureIgnoreCase) &&
                            u.Email.Equals(email, StringComparison.InvariantCultureIgnoreCase));
            }
        }
    }
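
    Since the extension only depends on IQueryable<UserAccount>, it can just as well be run against an in-memory source, for example in a test – a small sketch, assuming UserAccount has a parameterless constructor and settable Username and Email properties as in the samples above:

    var existingAccounts = new List<UserAccount>
    {
        new UserAccount { Username = "daniel", Email = "daniel@example.com" }
    }.AsQueryable();

    // The same named query, with no entity store or OR/M involved.
    var emailIsTaken = existingAccounts.EmailIsTakenByOther("anna", "daniel@example.com");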
    

    As always, there’s a complete sample project available for download.

    Enjoy!

    //Daniel

    Does code have to be Localized for achieving a “Ubiquitous Language”?

    Recently I have encountered C# code containing names found in the business that the application maps to. So far so good, except that the names of objects, methods, properties, functions etc. were written in Swedish, and of course with special characters like “Å, Ä, Ö”.

    My question is: “Does code have to be localized to achieve a Ubiquitous Language?”

    In my eyes the answer is “No”. Even if the corporate language of the business is Swedish, I think that the code should be written in English. Why? Well, for starters: you lose internationalization. What if developers who don’t understand Swedish have to work with and understand the code? Another downside is that English and Swedish will get tangled up, and even if it is a Swedish developer reading the code, he or she will probably feel confused, since the “normal” language used when writing software is English. So finding e.g. Swedish verbs for functions will be confusing. And for me, “confusion is the result of unclear intent in the code”.

    Personally I would go with 100% English, even if we get terms that don’t match the Swedish counterparts 100%. Yes, it will require more of the developers, who will have to keep two versions of the domain model in their minds, but rather that than unclear code. And finally, to answer my question: I don’t think that the localization of the code affects the degree of Ubiquitous Language you have achieved. What affects this is how well the structure of the domain model maps to the business.

    //Daniel