C#, Visual Studio 2010, No more Client profile in 5 minutes.

I guess I'm not alone in being tired of running into the "Client Profile" framework version used when creating a Console application in Visual Studio 2010. I don't know how things will look in coming versions of Visual Studio, but until then, let me show you a solution that takes no more than 5 minutes.

Step 1 – Locate the Project template shipped with Visual Studio

For me the installation set this up here: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\ProjectTemplates\CSharp\Windows\1033

Step 2 – Edit the VS-Template file

Unzip the content of ConsoleApplication.zip to a temp location. Rename csConsoleApplication.vstemplate to csConsoleApplication-NoClientProfile.vstemplate. Open the file with a plain old text editor and make the following change. More info: http://msdn.microsoft.com/en-us/library/ms185291.aspx

Original

<?xml version="1.0" encoding="utf-8"?>
<VSTemplate Version="3.0.0" Type="Project" xmlns="http://schemas.microsoft.com/developer/vstemplate/2005">
  <TemplateData>
    <Name Package="{FAE04EC1-301F-11d3-BF4B-00C04F79EFBC}" ID="2320" />
    <Description Package="{FAE04EC1-301F-11d3-BF4B-00C04F79EFBC}" ID="2321" />
    <Icon Package="{FAE04EC1-301F-11d3-BF4B-00C04F79EFBC}" ID="4548" />
    <TemplateID>Microsoft.CSharp.ConsoleApplication</TemplateID>
    <ProjectType>CSharp</ProjectType>
    <RequiredFrameworkVersion>2.0</RequiredFrameworkVersion>
    <SortOrder>12</SortOrder>
    <NumberOfParentCategoriesToRollUp>1</NumberOfParentCategoriesToRollUp>
    <CreateNewFolder>true</CreateNewFolder>
    <DefaultName>ConsoleApplication</DefaultName>
    <ProvideDefaultName>true</ProvideDefaultName>
  </TemplateData>
  <TemplateContent>
    <Project File="ConsoleApplication.csproj" ReplaceParameters="true">
      <ProjectItem ReplaceParameters="true" TargetFileName="Properties\AssemblyInfo.cs">AssemblyInfo.cs</ProjectItem>
      <ProjectItem ReplaceParameters="true" OpenInEditor="true">Program.cs</ProjectItem>
      <ProjectItem ReplaceParameters="true">App.config</ProjectItem>
    </Project>
  </TemplateContent>
</VSTemplate>

Changes to make
Provide new, literal tags (instead of the package-resource references) for:

<Name>
<Description>

Remove the tag:

<TemplateID>

Updated

<?xml version="1.0" encoding="utf-8"?>
<VSTemplate Version="3.0.0" Type="Project" xmlns="http://schemas.microsoft.com/developer/vstemplate/2005">
  <TemplateData>
    <Name>ConsoleApplication-NoClientProfile</Name>
    <Description>Ordinary Console application but NO CLIENT PROFILE</Description>
    <Icon Package="{FAE04EC1-301F-11d3-BF4B-00C04F79EFBC}" ID="4548" />
    <ProjectType>CSharp</ProjectType>
    <RequiredFrameworkVersion>2.0</RequiredFrameworkVersion>
    <SortOrder>12</SortOrder>
    <NumberOfParentCategoriesToRollUp>1</NumberOfParentCategoriesToRollUp>
    <CreateNewFolder>true</CreateNewFolder>
    <DefaultName>ConsoleApplication</DefaultName>
    <ProvideDefaultName>true</ProvideDefaultName>
  </TemplateData>
  <TemplateContent>
    <Project File="ConsoleApplication.csproj" ReplaceParameters="true">
      <ProjectItem ReplaceParameters="true" TargetFileName="Properties\AssemblyInfo.cs">AssemblyInfo.cs</ProjectItem>
      <ProjectItem ReplaceParameters="true" OpenInEditor="true">Program.cs</ProjectItem>
      <ProjectItem ReplaceParameters="true">App.config</ProjectItem>
    </Project>
  </TemplateContent>
</VSTemplate>

Step 3 – Edit the CS-project file

Locate this section:

$if$ ($targetframeworkversion$ >= 4.0)
  <TargetFrameworkProfile>Client</TargetFrameworkProfile>
$endif$

and remove Client.

$if$ ($targetframeworkversion$ >= 4.0)
  <TargetFrameworkProfile></TargetFrameworkProfile>
$endif$

Step 4 – Zip and “install” the new template

Rezip the extracted files from Step 2 into a new file ConsoleApplication-NoClientProfile.zip and drop it into the folder that holds custom project templates for Visual Studio. For me this was: C:\Users\sedanwer\Documents\Visual Studio 2010\Templates\ProjectTemplates\Visual C#

More info: http://msdn.microsoft.com/en-us/library/y3kkate1.aspx

Step 5 – Consume it

The next time you start Visual Studio 2010 and do File – New Project, you will find your new project template under CSharp – Windows.

That’s it. Have fun!

//Daniel

MongoDB in C# – Extensions to support Json-mapping or Proxy generation

Note! This is an update to my previous post. For history and an explanation of the consuming code, see that post.

I have made some quick changes to make the JSON API and the Proxy API that I wrote about in the earlier post, easier to use.

I have put the code in an external lib (yes it has a rather bad name). I have ensured that the consumer doesn’t have to reference Castle’s libs etc. unless he/she wants more control (more about this later). The consumer also doesn’t have to reference Json.Net.

First, I have reconstructed my JSON API to use extensions on the Document class, so that you can see how I think the JSON interaction could look. The method that needs to be updated (and tested) to handle more types etc. is MongoJsonMapper.ConvertToMongoDbValue.

var firstNoteDocument = new Document().UpdateFrom(
    "{ Title : \"First note using Json.\", " +
    "Body : \"Some nice text.\", " +
    "Tags : [\"MongoDB\", \"Getting started\"], " +
    "Votes : 3 }");
notes.Insert(firstNoteDocument);

...

var secondNote = new Note {
        Title = "Second note using Serialization.",
        Body = "Some nice text.",
        Votes = 3 };
var secondNoteDocument = new Document().UpdateFrom(secondNote);

notes.Insert(secondNoteDocument);
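The extension point mentioned above, MongoJsonMapper.ConvertToMongoDbValue, could conceptually look something like this. This is only a sketch of the idea (the names and the cases handled are assumptions, not the actual code from the lib):

```csharp
// Hypothetical sketch of a ConvertToMongoDbValue-style method: map a value
// produced by Json.Net onto something the MongoDB driver can store.
private static object ConvertToMongoDbValue(object jsonValue)
{
    if (jsonValue == null)
        return MongoDBNull.Value;

    // Json.Net materializes JSON arrays as JArray; turn them into plain arrays.
    var arrayValue = jsonValue as JArray;
    if (arrayValue != null)
        return arrayValue.Select(token => (string)token).ToArray();

    // Primitives (string, long, double, bool) can pass straight through.
    return jsonValue;
}
```

This is the place to extend when more types (dates, nested documents etc.) need support.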

For the proxy API I have created a simple MongoDbProxyBuilder, a default implementation of a proxy builder that offers the opportunity to create proxies of entities that implement IMongoDbEntityProxy. If you want more control you can use the code under ProxyGeneration or roll your own implementation that uses Castle. The MongoDbEntityInterceptor can then be used as a starting point for you to extend.

var proxyBuilder = new MongoDbProxyBuilder();

var firstNote = proxyBuilder.ProxyFromClass<Note>();
firstNote.Title = "First note using Proxies.";
firstNote.Tags = new [] { "MongoDB", "Getting started" };
firstNote.Body = "Some nice text.";

notes.Insert(firstNote.GetAsDocument());

What do you think? Anything useful?

I will try to get the time to ensure that DbRef, embedded documents etc. work.

You can download the changes from here.

//Daniel

Getting started with MongoDB – Using Json.Net and Castle Dynamic proxy

This post is divided in three blocks:
– Getting MongoDB to run on your machine
– Consume it manually via the MongoDB – Console
– Consume it from C# code.

Updates!

If you are just interested in the C# implementation, the section on how to consume it manually using the console can be skipped.

The section where I show you how to use it from C# code contains examples of how to get it to work using either Json.Net or Castle Dynamic proxy.

Note! The compiled libs of the MongoDB driver (mongodb-csharp) have one tweak that I applied: I updated the ToString implementation of the class "Oid" so that it returns a correct Json format.

Getting MongoDB to run on your machine

Step 1 – Download the binaries

Go to MongoDB – downloads

I selected the 64-bit version for Windows. There is a limitation of a maximum size of 2 GB per database if you use the 32-bit version.

Unzip and put the binaries where you want them.

My selection: “C:\MongoDB\Binaries”

Step 2 – Create the data folder

We need to manually create the folder which MongoDB will use as its storage area. By default this is "c:\data\db", and you must create it yourself. If you want to customize the location, which I do, you have to execute the binary "mongod.exe" and provide it the switch "-dbpath". I would also like to have one directory per database that is created, hence I use the switch "-directoryperdb".

More about switches could be found here.

The account that is executing the mongod.exe, must have read and write access to the data folder.

My selection: “C:\MongoDB\Data”

Step 3 – Start “mongod.exe”

Start the "server" so that the core is up and running and you can connect to it and start storing data.

So my exe is located under: C:\MongoDB\Binaries\, hence my command looks like
C:\MongoDB\Binaries>mongod -dbpath c:\MongoDB\Data -directoryperdb

Step 4 – Verify that MongoDB is up and running

I will do this manually, using the console client "mongo.exe". It's located with the other binaries. Just fire it up from the command line and you will see output confirming that the server is up. Initially some default databases are created, and one of them is "test".

Consume MongoDB manually via the MongoDB – Console

Step 1 – Store some data using the console

Ok, let's just create a simple object with three properties: Title, Body and Tags; and let's store it in the collection "Notes", which in turn is stored in the database "SimpleNotes":

The conceptual model will look something like this:

SimpleNotes
	Notes
		Title
		Body
		Tags
			Tag#1
			Tag#2
			...		

Start the console (mongo.exe) and type in the following commands:


use SimpleNotes
db.Notes.save({ Title: "The first document", Body: "My first document stored in MongoDB.", Tags: ["MongoDB", "Getting started"]})
db.Notes.save({ Title: "The second document", Body: "My second document stored in MongoDB."})

Read more about inserting.

Step 2 – Reconnect and query the SimpleNotes-database

Let's pretend that we have opened the console and forgotten the database name etc. If you are still connected, type exit so that we can simulate a clean session. Then start mongo.exe again and execute the following commands:

show dbs
use SimpleNotes
show collections

You will find our created database “SimpleNotes” (which is created automatically when first used) and you will also find the “Notes” collection.

Step 3 – List the stored items

Let's query our two stored notes.

db.Notes.find()

You should now be presented with two stored documents. One that has a property “Tags” and one that doesn’t.

Step 4 – Query for a specific attribute

Let's find the document that has the tag "Getting started".


db.Notes.find({Tags : "Getting started"})

Read more about querying.

Consume MongoDB from C# code

Step 1 – Get C# drivers for building a custom client

First you need to download a driver for C#. Go to http://github.com/samus/mongodb-csharp

I downloaded the ZIP (click “Download source”).

Unzip, open the Visual Studio solution and compile it. After updating the ToString implementation in the "Oid" class, I took the two dlls:
MongoDB.Driver.dll
MongoDB.Linq.dll

and put them under "C:\MongoDB\Drivers\csharp". You can put them anywhere; you are just going to use "Add reference" from within Visual Studio.

Step 2 – Get Json.Net

I’m using Json.Net for serialization/deserialization. You can download it from here.

Step 3 – Build the C# client

For simplicity I just created a simple console application using Visual Studio 2010. I have provided three different ways, showing you how to consume MongoDB via the C# driver:
– Using Json
– Using Serialization/Deserialization in Json.Net
– Using Castle Dynamic proxy

All three cases will do the same thing: store two notes, one with Tags and one without. The one without will then be refetched and updated with a Tag.

Fix Oid – ToString

To get things to work I had to do some tweaking. First I had to ensure that ToString in the class "Oid" returned a Json representation that I could consume with Json.Net.

public override string ToString()
{
    //Old: return string.Format(@"ObjectId(""{0}"")", BitConverter.ToString(value).Replace("-","").ToLower());
    return string.Format("\"{0}\"", BitConverter.ToString(value).Replace("-","").ToLower());
}

Using Json

Let's look at the consuming code. Do you remember the notions in MongoDB? Databases, Collections and Documents. The Document is what contains the data structure, which isn't the same as a traditional row, since the schema doesn't have to be equal for all documents in the same collection. Each document can also contain other documents or references to other documents (this will be covered in future writings).

I have built some helper methods to map between the “Document” and the C# entity class “Note”. These methods are placed in the helper class: “MongoJson”.

var json = new MongoJson();

//Connect to server
var mongo = new Mongo();
mongo.Connect();

//Create clean database
var db = mongo["SimpleNotes"];
db.SendCommand("dropDatabase");

//Get collection "Notes" to hold our Note-documents
var notes = db["Notes"];

//Dump to console to see that the database is empty.
PrintNotes("Initial notes", notes);

//Create and Insert first note with properties:
//Title, Body, Tags-array
var firstNoteDocument = json.DocumentFrom(
    "{ Title : \"First note using Json.\", " +
    "Body : \"Some nice text.\", " +
    "Tags : [\"MongoDB\", \"Getting started\"] }");
		
notes.Insert(firstNoteDocument);

PrintNotes("After first insert using Json", notes);

//Create and Insert a second note with no Tags
//This note will not have Tags-represented in the schema.
var secondNoteDocument = json.DocumentFrom(
    "{ Title : \"Second note using Json.\", " +
    "Body : \"Some nice text.\" }");
		
notes.Insert(secondNoteDocument);

PrintNotes("After second insert using Json", notes);

//Read back the second note that lacked tags and provide one
var noteDocument = notes.FindOne(new Document { { "Tags", MongoDBNull.Value } });

//Update the fetch object with values from another document
//(merge of members/values)
noteDocument.Update(json.DocumentFrom("{Tags : [\"The tag\"]}"));

//Update in Db
notes.Update(noteDocument);

PrintNotes("After update of post with empty tags, using Json", notes);

mongo.Disconnect();

Using Serialization/Deserialization in Json.Net

Now I will have a static C# representation of my Note-entity.

public interface IMongoEntity
{
    string _id { get; set; }
    Oid GetOid();

    Document GetAsDocument();
    void UpdateFromDocument(Document document);
}

[Serializable]
public class Note
    : IMongoEntity
{
    public virtual string _id { get; set; }
    public virtual string Title { get; set; }
    public virtual string Body { get; set; }
    public virtual string[] Tags { get; set; }

    public virtual Oid GetOid()
    {
        return new Oid(_id);
    }

    public virtual Document GetAsDocument()
    {
        throw new NotImplementedException();
    }

    public virtual void UpdateFromDocument(Document document)
    {
        throw new NotImplementedException();
    }
}

The members GetAsDocument() and UpdateFromDocument() aren't used in this example. They are used when we use Castle Dynamic proxy.

The consuming code will now use Document when communicating with MongoDB and will use Note in the application/domain. To map between them I will make use of Json.Net. I will only show the parts that are different this time. The consuming code looks like this:

//Create new C# Note and convert it to a document using JSON serialization/deserialization
var firstNote = new Note {
	Title = "First note using Serialization",
	Tags = new string[] { "MongoDB", "Getting started" },
	Body = "Some nice text." };
	
//Convert Note to Document and insert it
var firstNoteDocument = json.DocumentFrom(firstNote);
notes.Insert(firstNoteDocument);

...

//Create and Insert second note
var secondNote = new Note { 
	Title = "Second note using Serialization.", 
	Body = "Some nice text." };
	
var secondNoteDocument = json.DocumentFrom(secondNote);
notes.Insert(secondNoteDocument);

...

//Read back the second note that lacked tags.
var noteDocument = notes.FindOne(new Document { { "Tags", MongoDBNull.Value } });
var note = json.ObjectFrom<Note>(noteDocument);

note.Tags = new[] { "The tag" };

//Populate the document with the changed C# object, and update in MongoDB.
json.PopulateDocumentFrom(noteDocument, note);
notes.Update(noteDocument);

Ok, time to look at the helper class. There’s no actual magic there. Just using Json.Net for serialization and deserialization.

To go from a Document to a Note I just deserialize the JSON representation of the document, which now works since I fixed the Oid-class (read about it above).

To go from a Note to a Document I need to get the JSON representation of the Note and then deserialize it into a Dictionary of key-value pairs, which are then looped over and assigned to the Document.

public class MongoJson
{
    private const string _oidContainerName = "_id";

    public T ObjectFrom<T>(Document document)
        where T : class, IMongoEntity
    {
        if (document == null)
            return null;

        return JsonConvert.DeserializeObject<T>(document.ToString());
    }

    public Document DocumentFrom(string json)
    {
        return PopulateDocumentFrom(new Document(), json);
    }

    public Document DocumentFrom<T>(T item)
        where T : class, IMongoEntity
    {
        return PopulateDocumentFrom(new Document(), item);
    }

    public Document PopulateDocumentFrom<T>(Document document, T item)
        where T : class, IMongoEntity
    {
        if (item == null)
            return document;

        var json = JsonConvert.SerializeObject(item, Formatting.None);

        return PopulateDocumentFrom(document, json);
    }

    private Document PopulateDocumentFrom(Document document, string json)
    {
        var keyValues = JsonConvert.DeserializeObject<Dictionary<string, object>>(json);

        foreach (var keyValue in keyValues)
        {
            // Don't overwrite an already assigned _id on the document.
            var skipIdField = (
                                  keyValue.Key == _oidContainerName && document[_oidContainerName] != MongoDBNull.Value);

            if (skipIdField)
                continue;

            var value = keyValue.Value ?? MongoDBNull.Value;

            if (value != MongoDBNull.Value)
            {
                var arrayValue = (keyValue.Value as JArray);
                if (arrayValue != null)
                    value = arrayValue.Select(j => (string)j).ToArray();
            }

            if (document.Contains(keyValue.Key))
                document[keyValue.Key] = value;
            else
            {
                if (value != MongoDBNull.Value)
                    document.Add(keyValue.Key, value);
            }
        }

        return document;
    }
}

Using Castle Dynamic proxy

One implementation left. This time I will use Castle Dynamic proxy to map between Document and Note. I do this by intercepting the properties of Note and storing them in a simple state bag. The state bag is then used when converting between Notes and Documents.

First, the consuming code that is different from before.

var proxyBuilder = new ProxyBuilder(new ProxyConfig());

//Create and Insert first note
var firstNote = proxyBuilder.ProxyFromClass<Note>(new EntityInterceptor());
firstNote.Title = "First note using Proxies.";
firstNote.Tags = new string[] { "MongoDB", "Getting started" };
firstNote.Body = "Some nice text.";

notes.Insert(firstNote.GetAsDocument());

...

//Create and Insert second note
var secondNote = proxyBuilder.ProxyFromClass<Note>(new EntityInterceptor());
secondNote.Title = "Second note using Proxies.";
secondNote.Body = "Some nice text.";

notes.Insert(secondNote.GetAsDocument());

...

//Read back the second note that lacked tags.
var noteDocument = notes.FindOne(new Document { { "Tags", MongoDBNull.Value } });
var note = proxyBuilder.ProxyFromClass<Note>(new EntityInterceptor());

note.UpdateFromDocument(noteDocument);

note.Tags = new[] { "The tag" };

//Populate the document with the changed C# object, and update in MongoDB.
notes.Update(note.GetAsDocument());

The interceptor that does the work, looks like this.

public class EntityInterceptor
    : IInterceptor
{
    private Dictionary<string, object> _stateBag = new Dictionary<string, object>();
    private static readonly Type _documentType = typeof (Document);

    public void Intercept(IInvocation invocation)
    {
        var name = invocation.MethodInvocationTarget.Name;

        if(IsProperty(name))
        {
            var key = invocation.MethodInvocationTarget.Name.Remove(0, 4);

            if (IsSetter(name))
                SetValue(key, invocation.Arguments[0]);
            else if (IsGetter(name))
            {
                var value = GetValue(key);
                if (value != null)
                    invocation.ReturnValue = value;
            }
        }
        else if(name == "GetAsDocument")
        {
            var document = new Document();

            foreach (var keyValue in _stateBag)
            {
                document.Add(keyValue.Key, keyValue.Value);
            }

            invocation.ReturnValue = document;
        }
        else if (name == "UpdateFromDocument")
            SetValuesFrom((Document)invocation.Arguments[0]);
        else
            invocation.Proceed();
    }

    private void SetValuesFrom(Document document)
    {
        foreach (DictionaryEntry keyValue in document)
            SetValue((string)keyValue.Key, keyValue.Value);
    }

    private void SetValue(string key, object value)
    {
        if(!_stateBag.ContainsKey(key))
            _stateBag.Add(key, value);
        else
            _stateBag[key] = value;
    }

    private object GetValue(string key)
    {
        if (!_stateBag.ContainsKey(key))
            _stateBag.Add(key, null);

        return _stateBag[key];
    }

    private bool IsProperty(string name)
    {
        return (name.StartsWith("set_") || name.StartsWith("get_"));
    }

    private bool IsGetter(string name)
    {
        return name.StartsWith("get_");
    }

    private bool IsSetter(string name)
    {
        return name.StartsWith("set_");
    }
}

That’s it. As always all the code can be downloaded from here.

//Daniel

How-to test a project that is focused on SSIS

Recently I have been working on a project where SSIS is the primary technology used to solve a business case focused on data processing. Normally I work with plain C# code; I like test-driven development and I like the by-product that comes with TDD – the unit tests and, in the longer run, the integration tests. In this little article I will go through the steps needed to set up a fairly simple but effective testing solution for SSIS development.

All C# code shall be placed in a separate assembly and hosted in the GAC
First of all: "All C# code that is written in script components and script tasks etc. should lie in a separate C# assembly and should be consumed via the global assembly cache (GAC)." The C# code contained in the SSIS packages should only place calls to C# code in the external assembly/assemblies. By taking this design decision, you can easily write tests for your code.
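To make the idea concrete, here is a sketch (the namespace, class and method names are made up for illustration): all the interesting logic lives in the external assembly as ordinary, unit-testable C#.

```csharp
// In the external, GAC:ed assembly - plain C# that is trivial to unit test.
namespace ProjectX.Etl
{
    public static class Cleaners
    {
        public static string CleanCustomerName(string raw)
        {
            // Normalize incoming names; null-safe.
            return (raw ?? string.Empty).Trim();
        }
    }
}
```

The script task inside the package then stays a one-liner along the lines of `Dts.Variables["CustomerName"].Value = Cleaners.CleanCustomerName((string)Dts.Variables["CustomerName"].Value);`, and the logic itself is covered by ordinary unit tests.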

Load and run your packages via code
Ok, that was the obvious decision; now what can be done to test our packages? We need a way to execute them and test the outcome. The first step is to configure the packages to be deployed to a certain directory whenever the Integration Services project is built. This is achieved by tweaking the "Deployment utility settings" under the properties of the SSIS project: activate it by setting "CreateDeploymentUtility" to true and specify a path for "DeploymentOutputPath".

In our integration-test project we kicked off the packages using the simple "load and run" technique, which you can read up on here (http://msdn.microsoft.com/en-us/library/ms403355.aspx).
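In essence, the load-and-run technique from that article amounts to a few lines (a sketch; the paths are examples):

```csharp
using Microsoft.SqlServer.Dts.Runtime;

// Load the package from disk and execute it in-process.
var app = new Application();
Package package = app.LoadPackage(@"C:\ProjectX\Packages\PackageOne.dtsx", null);

// Optionally inject test-specific configuration (connection strings etc.).
package.ImportConfigurationFile(@"C:\ProjectX\SSIS-Config\CommonConfig-Test.dtsConfig");

DTSExecResult result = package.Execute();
bool succeeded = (result == DTSExecResult.Success);
```

The returned DTSExecResult is what an assert helper can build on.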

Make the package-execution configurable
For ease of setting up different environments (like different developer machines or build servers) I made a package executor that is configurable from the app config.

<ssisPackageExecutor
  xmlConfigFile="C:\ProjectX\SSIS-Config\CommonConfig-Test.dtsConfig">
  <packages>
    <package name="PackageOne"
             locationType="Filesystem"
             packagelocation="C:\ProjectX\Packages\PackageOne.dtsx" />
    <package name="PackageTwo"
             locationType="Filesystem"
             packagelocation="C:\ProjectX\Packages\PackageTwo.dtsx" />
  </packages>
</ssisPackageExecutor>

The attribute "xmlConfigFile" above is used by the package executor to inject configurations that are valid for testing purposes, e.g. pointing a connection string to a specific test database.

All that was left was to make a simple SSIS assert class (which uses the package executor) that you can call, e.g.:

SSISAssert.SuccessfulPackage("PackageOne");
SSISAssert.FailedPackage("PackageTwo");

After the execution of the package you "just" have to write tests that check the database, file system etc. for the correct changes.
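Such a test could look something like this (the Customers table and the TestDb helper are hypothetical; SSISAssert is the class from above):

```csharp
[TestMethod]
public void PackageOne_ImportsCustomers()
{
    // Run the package via the package executor and assert that it succeeded.
    SSISAssert.SuccessfulPackage("PackageOne");

    // Then verify the outcome in the test database.
    using (var connection = new SqlConnection(TestDb.ConnectionString))
    {
        connection.Open();
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
        {
            var customerCount = (int)command.ExecuteScalar();
            Assert.IsTrue(customerCount > 0, "Expected imported customers.");
        }
    }
}
```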

Setting up a database automatically
To make the tests run smoothly you need to have the test database set up automatically. One way to achieve this is to make use of script files that are executed against a certain database; all tables, views, functions, stored procedures etc. need to be scripted and placed on disk.

I made a simple database installer that checks for version folders in a certain script-folder repository. Whether a certain folder should be installed/executed, and in what order, is configurable in the app config (see the example below). The script folder contained subfolders where each subfolder represented a certain version of the database. For testing purposes I added a test-version folder that contained the scripts necessary for setting up the test database with specific test tables. These specific test scripts were not executed/installed in the normal database. The custom database installer that I built was configurable so that you could install several databases at the same time, which made it easy for each developer to have a test database and a normal database on their machine.

The database installer made use of SQL Server Management Objects (SMO) to execute the scripts, and the test database was initialized once per assembly. If a certain test required certain test data in the database, initialize and cleanup methods (using the attributes [TestInitialize] and [TestCleanup]) in the test class called stored procedures (which were installed/created by a script in the test-version folder): one for initialization of test data for the specific test and one for cleanup.

The configuration looked like this:

<databaseInstaller>
  <setups>
    <setupInfo name="Normal"
               connectionStringName="NormalDb"
               scriptFolderPath="C:\ProjectX\Scripts"
               versions="initialize;v1"/>
    <setupInfo name="Test"
               connectionStringName="TestDb"
               scriptFolderPath="C:\ProjectX\Scripts"
               versions="initialize;v1;vTest"/>
    </setups>
</databaseInstaller>

The script-version folders are applied in the order that they are listed in the "versions" attribute. If a folder is present on disk but not in the attribute, it will not be executed/installed.
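The core of such an installer can stay small. A sketch using SMO (the folder layout and method name are assumptions):

```csharp
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

public void Install(string connectionString, string scriptFolderPath, string[] versions)
{
    var server = new Server(new ServerConnection(new SqlConnection(connectionString)));

    // Apply the version folders in the configured order.
    foreach (var version in versions)
    {
        var versionFolder = Path.Combine(scriptFolderPath, version);

        foreach (var scriptFile in Directory.GetFiles(versionFolder, "*.sql").OrderBy(f => f))
        {
            // SMO's ExecuteNonQuery understands the GO batch separator in script files.
            server.ConnectionContext.ExecuteNonQuery(File.ReadAllText(scriptFile));
        }
    }
}
```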

That’s it. Happy testing.

//Daniel