Number of Unit test projects in Visual Studio solution

Some time ago I had a discussion with a co-worker about how to organize test projects.
Should we have a single test project that does all sorts of things and references every project?
It is good to have one integration test DLL, but for unit tests, what is the point of merging everything into one?

In an ideal world I agree that small independent projects are better. Unfortunately, Visual Studio performance quickly degrades as the number of projects increases. Around the 40-project mark, compilation becomes an obstacle to building and running the tests, so larger solutions may benefit from consolidating test projects. The usual recommendations are:
  • Single solution. If you work on a small system, create a single solution and place all of your projects within it.
  • Partitioned solution. If you work on a large system, use multiple solutions to group related projects together. Create solutions to logically group subsets of projects that a developer would be most likely to modify as a set, and then create one master solution to contain all of your projects. This approach reduces the amount of data that needs to be pulled from source control when you only need to work on specific projects.
  • Multiple solutions. If you are working on a very large system that requires dozens of projects or more, use multiple solutions to work on sub-systems but for dependency mapping and performance reasons do not create a master solution that contains all projects.
At the moment we decided to go with one huge integration test project and one huge unit test project.
And we constantly try to keep the number of projects in the main solution reasonable (not too many). Unfortunately this number is quite big: 70+.

Override ToString() using Json serialization or something else.

When creating a new data class, it’s a good idea to override the ToString() method to output most of the data.
It will help to see details in logs.

The only exception is if the class has sensitive data, like a credit card number or password.
For DataContract classes just use:

public override string ToString()
{
    // Use JSON as the simplest serializer
    string sRet = this.ToExtJsJsonItem();
    return sRet;
}
Sometimes it is useful to create extensions to standard .NET classes.
E.g. in a CookieHelper class I’ve created:
public static string ToJsonString(this HttpCookie httpCookie)
{
    string sRet = httpCookie.ToExtJsJsonItem();
    return sRet;
}
The actual extension we are using was written by Nikita Pinchuk
public static string ToExtJsJsonItem(this object item)
{
    DataContractJsonSerializer serializer = new DataContractJsonSerializer(item.GetType());
    using (MemoryStream ms = new MemoryStream())
    {
        serializer.WriteObject(ms, item);
        StringBuilder sb = new StringBuilder();
        sb.Append(Encoding.UTF8.GetString(ms.ToArray()));
        return sb.ToString();
    }
}
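As a usage sketch (Payment is a hypothetical class; the extension is repeated so the sample compiles standalone):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;

public static class JsonExtensions
{
    // Same DataContractJsonSerializer-based extension as above,
    // repeated here so the sample is self-contained
    public static string ToExtJsJsonItem(this object item)
    {
        DataContractJsonSerializer serializer = new DataContractJsonSerializer(item.GetType());
        using (MemoryStream ms = new MemoryStream())
        {
            serializer.WriteObject(ms, item);
            return Encoding.UTF8.GetString(ms.ToArray());
        }
    }
}

// Hypothetical DataContract class for illustration
[DataContract]
public class Payment
{
    [DataMember]
    public decimal Amount { get; set; }

    [DataMember]
    public string Currency { get; set; }

    // ToString() now shows all serializable data in logs
    public override string ToString()
    {
        return this.ToExtJsJsonItem();
    }
}
```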
For non-DataContract classes the extension method may not work; it could cause an error:
A first chance exception of type ‘System.Runtime.Serialization.InvalidDataContractException’ occurred in System.Runtime.Serialization.dll

In this case you can try JSON.NET or the JsonValue types (NuGet package JsonValue).
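For example, a minimal sketch with JSON.NET (assuming the Newtonsoft.Json NuGet package; the class is hypothetical):

```csharp
using Newtonsoft.Json;

// Plain POCO without [DataContract] attributes
public class ServerInfo
{
    public string Name { get; set; }
    public int RequestCount { get; set; }

    // JsonConvert works via reflection, so no data-contract attributes are required
    public override string ToString()
    {
        return JsonConvert.SerializeObject(this);
    }
}
```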

Sometimes I am using
 string sRet = this.XmlSerializeSafe();
But it also doesn’t work for all types, e.g.:
MyClass cannot be serialized because it does not have a parameterless constructor.
In some places we are using LinqPad’s Dump (an arbitrary object to an HTML string), but we found it is too heavy for logs; plain text is easier to read than HTML.
I haven’t tried the ServiceStack.Text C# .NET extension method T.Dump() yet.

Deploy PDB files into production to ease exception report investigation.


For a long time I believed that PDBs, as a part of debugging information, should not be included in production deployments. Recently my colleague suggested copying them to simplify exception investigations.


The following SO discussion convinced us that it is a good idea (at least for web sites):

    These files will not be exposed to the public if kept in the right places (website\bin).

 So we decided to deploy the PDBs into production.

BTW, if you include PDBs with your deployments, you don’t need to store them in a symbol server, as is sometimes suggested.

However we found that PDBs were not generated for all DLLs. After some analysis we believe that MS changed the default settings starting from VS 2008 (or maybe VS 2005) to generate PDB-only files even in release mode. This is why older projects had PDB generation disabled.

To change the setting in Visual Studio, use the option under “Project Properties”, “Build”, “Advanced…”.

Change “Debug Info:” to PDB-only.
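Equivalently, the setting can be checked directly in the .csproj file; a sketch of the relevant MSBuild fragment (paths and configuration names are the common defaults, not from the original post):

```xml
<!-- Release configuration with PDB generation enabled -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <DebugType>pdbonly</DebugType>
  <Optimize>true</Optimize>
  <OutputPath>bin\Release\</OutputPath>
</PropertyGroup>
```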

The screenshots are available in the post


Related links:
The article compares different options for debug and release and confirms that in 2007 pdbonly was the default for the release configuration of Visual Studio:

/optimize+ /debug:pdbonly (release configuration of visual studio)

The article Include .pdb files in Web Application Publish for Release mode (VS2012) wasn’t applicable for us, but may be useful for someone else.

Default Enum initialisation by MS Create Unit Tests wizard.

VS 2012 doesn’t show the Create Unit Tests wizard. However it can still be used – see

Where is the “Create Unit Tests” selection?

I’ve found the wizard creates a quite funny default for an enum: it calls the constructor.
PaymentType paymentType = new PaymentType (); // TODO: Initialize to an appropriate value
I would prefer the first/default enum value, e.g.
PaymentType paymentType = PaymentType.None;
I should suggest it to the ALM Rangers, who started a new test generation project.
Another funny default is for numeric primitives:

 Decimal totalAmount = new Decimal(); // TODO: Initialize to an appropriate value
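In fairness, for an enum both forms produce the same value, since new on an enum type yields its zero member; the explicit member just states the intent better. A sketch (PaymentType here is a hypothetical enum, not the one from our code base):

```csharp
using System;

// Hypothetical enum for illustration; None maps to the default (zero) value
public enum PaymentType
{
    None = 0,
    CreditCard = 1,
    DirectDebit = 2
}

public static class EnumDefaults
{
    public static void Demo()
    {
        PaymentType generated = new PaymentType();   // what the wizard emits
        PaymentType preferred = PaymentType.None;    // clearer intent
        Console.WriteLine(generated == preferred);   // True: both are the zero value
    }
}
```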


EntLib editor corrupts config files

I’ve tried to use the Microsoft Enterprise Library (EntLib) editor, as was suggested, but after the changes all comments in the config files were removed.
Always consider moving any Enterprise Library configuration to a separate file before editing.

Static methods are not always bad for testability

Some time ago I posted a few links about What is testable code?
Reading the links, someone could get the impression that any static method is bad for testability. However, that is a wrong impression: static methods without external dependencies are good for testing.

There is nothing wrong with static methods and they are easy to test (so long as they don’t change any static data). For instance, think of a Maths library, which is a good candidate for a static class with static methods.

Static methods which hold no state and cause no side effects should be easily unit testable. In fact, I consider such methods a “poor-man’s” form of functional programming; you hand the method an object or value, and it returns an object or value. Nothing more. I don’t see how such methods would negatively affect unit testing at all.
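As a sketch of that point (names are hypothetical): a static method with no static state needs no mocks at all, you pass a value in and assert on the value out.

```csharp
using System;

public static class TaxHelper
{
    // Pure static method: no external dependencies, no static state, no side effects
    public static decimal AddGst(decimal amount, decimal rate)
    {
        if (amount < 0)
            throw new ArgumentOutOfRangeException("amount");
        return amount * (1 + rate);
    }
}

public static class TaxHelperTests
{
    // The whole "arrange" step is just picking input values
    public static void AddGst_AddsTenPercent()
    {
        decimal result = TaxHelper.AddGst(100m, 0.10m);
        if (result != 110m)
            throw new Exception("Expected 110");
    }
}
```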
Alternatively, you can mock anything using MS Fakes, TypeMock, JustMock or Moles. They rely on .NET’s Profiling API, which can intercept any of your CIL instructions.
See related links  

Enterprise Instrumentation: The ‘sessionName’ parameter of value ‘TraceSession’ is not valid

We are still using Enterprise Instrumentation (created back in the .NET 1.1 days).
In the new Server 2008 environment with IIS 7 we had the following errors:
The ‘sessionName’ parameter of value ‘TraceSession’ is not valid. A trace session of this name does not exist in the TraceSessions configuration file for Windows Trace Session Manager service. Ensure that a session of this name exists in the TraceSessions configuration file and that the Windows Trace Session Manager service is started.
   at Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink..ctor(IDictionary parameters, EventSource eventSource)
   — End of inner exception stack trace —
   at System.RuntimeMethodHandle._InvokeConstructor(IRuntimeMethodInfo method, Object[] args, SignatureStruct& signature, RuntimeType declaringType)
   at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes)
   at Microsoft.EnterpriseInstrumentation.EventSinks.EventSink.CreateNewEventSinks(DataRow[] eventSinkRows, EventSource eventSource)

I’ve seen the same errors on development Win7 machines when using IIS. It doesn’t seem to be a problem on Cassini.

I’ve checked that the Windows Trace Session Manager service has started and
the file C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\TraceSessions.config has the corresponding entry:
<?xml version="1.0" encoding="utf-8" ?>
<configuration xmlns="">
    <defaultParameters minBuffers="4" maxFileSize="10" maxBuffers="25" bufferSize="20" logFileMode="sequential" flushTimer="3" />
    <session name="TraceSession" enabled="false" fileName="C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\Logs\TraceLog.log" />
</configuration>
The errors still continued, but I was able to disable the parameter in the eventSink configuration:

   <eventSink name="traceSink" description="Outputs events to the Windows Event Trace." type="Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink">
       <!-- MNF disabled parameter to avoid error "The 'sessionName' parameter of value 'TraceSession' is not valid"
       <parameter name="sessionName" value="TraceSession" />
       -->
   </eventSink>

One day I wish to replace all Enterprise Instrumentation calls with NLog.