Application structure

Whenever we start a new application from scratch, especially a big one, we’re drawn to the basic questions of how to structure our code so it will be easy to maintain, easy to extend and easy on the eyes (i.e. it makes sense). This post is meant for teams of more than 4 programmers working on the source of a 2+ (human) year project. If you work alone and the client doesn’t really care, heck, you can do it in one big assembly and name it [your_name]Rules.


I’ve discovered over the years that it really bothers me to see unorganized solutions or bad naming. I call it “structure smell” and, as you might have guessed, I’m a sensitive guy. I’ve structured my thoughts about the way I see things so I can use them later on as a reference for myself and for my team. Before I continue, keep in mind that most of these questions are philosophical, so there is no one holy answer; it’s just a matter of point of view. I’ve tried to point out best practices based on my experience. In addition, instead of writing user-story\feature\requirement\bug fix\UI change\you-name-it, I’ll use the term “task”. I’ll even go one step further and say that a given task should be limited to 0.5-1.5 days, so it’s easy to see progress over time (if you’re on the agile boat as I am) and it helps us focus on the domain\context we’re working in during the task.


Enough said, let’s get going:


“Should we build one big solution?”


The immediate answer on this one is: absolutely not.
The quick reason is that no matter what you do, while working on a task you usually don’t need all of the projects at the same time. I see no reason to compile so many projects if you’re working on only 2-3 (or even 5-6) of them at a time. I know that Visual Studio .NET is smart enough to avoid needless compilation of projects that were not changed, but keep in mind that John, your teammate, is working on different tasks than you are, which means he can make some changes, check them in, and your next “Get Latest Version” might cause unnecessary compilation on your side. If you haven’t noticed (who am I kidding), VS.NET can become a heavy memory consumer for big solutions; add to it our beloved ReSharper (which must analyze all of the projects in the solution to give you smart ideas), and it can get quite messy.


The second reason is simplicity. Why look at 40 projects when you need only a few? Sure, you can collapse them or even organize them in Solution Folders (in VS.NET 2005), but it’s much easier to keep the noise out.


“So How should we split our solutions and projects?”


On this scale of project, it would not be a great idea to create projects based on layers (DataAccess project, Business layer project, UI project, etc.). This way, each layer (= class library) would be filled with too many classes, and in time it will be hard to find your way in one project with more than 200 files in it. Another bad side effect of splitting the projects by layers is that it narrows the way you think about solutions (to problems). Instead of trying to create pure OO components, you’ll immediately start breaking each piece into “this is UI, this is BL, this is DAL” and possibly branch your code into the wrong assemblies by cold 0-or-1 decisions. Life is one big gray CLR.


So I’ll try to define the way I see solutions, projects and namespaces and how we should use them:


1). A solution represents a domain in your application.

A domain is a complete sub-system in the application. It’s much bigger than a single component; it usually binds a list of components into one large sub-system that we can address as one big black box. The sub-system exposes interfaces to other sub-systems in the application.

If I had to develop Lnbogen’s Calendar, for example, I would consider these sub-systems: Common, Engine, DataStorage, Site, Widgets. Each one of these sub-systems deserves its own solution.

2). A project is a component in that domain or a mini-sub-system in the application.

A component is an all-around solution in a specific domain. The consumer of the component expects it to perform its task from A to Z, even if that requires some interaction with other objects; that interaction should be transparent to the component’s consumer. Let’s say we have a Calendar component: I would like to be able to call myCalendar.CreateNewMeeting(user, [meeting details]…) without taking care of inserting it into the database, updating some sort of cache (if one exists) or triggering alerts manually in case of collision. I expect the component to provide a full solution to my problem. Obviously, we don’t expect the Calendar to save the meeting to the data storage on its own, but rather to receive some sort of IDataSource that will take care of it; still, that should happen behind the scenes, as the purpose is to expose complete functionality.

In addition, a project might be “Entities” or “Utilities”; in these scenarios, the project represents a mini-sub-system.
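To make the idea concrete, here is a minimal sketch of how such a Calendar component might look. The names (IDataSource, Meeting, SaveMeeting) are my own illustration, not a real API; the point is only that persistence is injected and hidden behind one complete operation:

```csharp
// Hypothetical sketch of a component that exposes complete functionality.
public interface IDataSource
{
    void SaveMeeting(Meeting meeting);
}

public class Meeting
{
    // meeting details go here...
}

public class Calendar
{
    private readonly IDataSource m_dataSource;

    // The data source is injected, so persistence stays behind the scenes.
    public Calendar(IDataSource dataSource)
    {
        m_dataSource = dataSource;
    }

    public void CreateNewMeeting(string user, Meeting details)
    {
        // Persist, update the cache, trigger collision alerts -
        // all of it hidden from the component's consumer.
        m_dataSource.SaveMeeting(details);
    }
}
```

The consumer just calls myCalendar.CreateNewMeeting(…) and never sees the IDataSource behind it.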


3). Namespaces group components and types under the same domain or “logic context”.

Namespaces allow us to group types that logically belong to the same domain and create a proper hierarchy, so the programmer can easily find his way around the available types.
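For example (a sketch reusing the Calendar names from above; the class names are illustrative), the hierarchy might look like this:

```csharp
// Types grouped by the domain they belong to.
namespace Lnbogen.Calendar.Entities
{
    public class Meeting { }
}

namespace Lnbogen.Calendar.Engine
{
    public class RulesEngine { }
}
```

A programmer looking for the rules logic goes straight to Lnbogen.Calendar.Engine without scanning unrelated types.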


“What about naming?”


Naming is crucial for a few reasons: (A) it enables us, as its consumers, to quickly understand the purpose of an assembly\class\method; (B) good naming of classes\methods => less documentation => more 1:1 between your docs & your code; and (C) it helps you keep the most important principle of coding – be proud of your (and your team’s) code. It’s a beautiful thing to see Semingo.[…]. I’m loving it!

Naming rules:
1). Name your solutions by the domain they represent.
2). Name your projects by the components or mini-sub-system they represent. Template: [CompanyName].[Application].[ComponentName\MiniSubSystem]
3). Name your namespaces by the domain they group (the types) by.


Example (Lnbogen’s Calendar):


Directories tree:

– Lnbogen
 – Calendar (root Directory)
   – build
      – v1.0
      – v1.1
      (etc)
   – tools
      (list of assemblies, exe or other 3rd party components you might use)
   – documents
   – db
      (maybe backup of database files for easy deployment)
   – src
      – Common
         | Common.sln
         – Lnbogen.Calendar.Entities 
         – Lnbogen.Calendar.Entities.Tests 
         – Lnbogen.Calendar.Utilities            
         – Lnbogen.Calendar.Utilities.Tests 
      – Engine
         | Engine.sln
         – Lnbogen.Calendar.Framework
         – Lnbogen.Calendar.Framework.Tests
         – Lnbogen.Calendar.TimeCoordinator
         – Lnbogen.Calendar.TimeCoordinator.Tests
         – Lnbogen.Calendar.RulesEngine
         – Lnbogen.Calendar.RulesEngine.Tests
         – Lnbogen.Calendar.Service (*1)
         – Lnbogen.Calendar.Service.Tests
      – DataStorage
         | DataStorage.sln
         – Lnbogen.Calendar.DataStorage.Framework
         – Lnbogen.Calendar.DataStorage.Framework.Tests
         – Lnbogen.Calendar.DataStorage.HardDiskPersistenceManager
         – Lnbogen.Calendar.DataStorage.HardDiskPersistenceManager.Tests
         – Lnbogen.Calendar.DataStorage.WebPersistenceManager
         – Lnbogen.Calendar.DataStorage.WebPersistenceManager.Tests
         – Lnbogen.Calendar.DataStorage.DatabasePersistenceManager
         – Lnbogen.Calendar.DataStorage.DatabasePersistenceManager.Tests
         – Lnbogen.Calendar.DataStorage.Service (*1)
         – Lnbogen.Calendar.DataStorage.Service.Tests
      – Site
         – Lnbogen.Calendar.UI
         – Lnbogen.Calendar.UI.AdminSite
         – Lnbogen.Calendar.UI.UserSite
      – Widgets
         – Lnbogen.Calendar.Widgets.Framework
         – Lnbogen.Calendar.Widgets.Interfaces (for plug-ins support)
         – Lnbogen.Calendar.Widgets.Service
         (more directories per widget)
      – Integration
         – Lnbogen.Calendar.Integration.InternalWorkflow.Tests 
         – Lnbogen.Calendar.Integration.ExternalWorkflow.Tests (test that the services we expose to the world work as expected)
      – References
         (here you should put all the dlls that you use as “file reference” in the various solutions)
         
*1: for example, this could be a WCF wrapper of the underlying engine that enables other internal components to talk with the CalendarEngine\DataStorage as one complete component.


You may notice that I’ve chosen to drop “Engine” or “Common” when selecting the names of the directories. “Common” is not really a domain but rather a logical separation of things that belong to many domains (usually all of them). “Engine” is the real deal; there is no Calendar without the engine, right? So in this case I feel comfortable dropping the obvious (Lnbogen.Calendar.Framework won’t sound better as Lnbogen.Calendar.Engine.Framework).


Solution structure:


In VS.NET 2005 there is a nice feature named “Solution Folders” (right-click on the solution -> Add -> New Solution Folder), which is a lovely way to group projects. A Solution Folder is a virtual folder (you won’t see it on your HD), so you don’t have to worry about too much nesting.

Here is the pattern I love to use, demonstrated on the Engine.sln:

Engine (Solution)
   _Core (Solution Folder) (*2)
      – Lnbogen.Calendar.Framework
      – Lnbogen.Calendar.TimeCoordinator
      – Lnbogen.Calendar.RulesEngine
      – Lnbogen.Calendar.Service
   Tests (Solution Folder)
      – Lnbogen.Calendar.Framework.Tests
      – Lnbogen.Calendar.TimeCoordinator.Tests
      – Lnbogen.Calendar.RulesEngine.Tests
      – Lnbogen.Calendar.Service.Tests
   ExternalComponents (Solution Folder)
      – Lnbogen.Calendar.Entities (via “Add existing project”)
      – Lnbogen.Calendar.Utilities (via “Add existing project”)
   3rdPartyComponents (Solution Folder)
      – (Open Source projects that I might use will go here)
   Solution Items
      (add any dll that you use as file reference in this solution)


*2: The reason I’m using “_” is to make sure it’s the first Solution Folder. I just think it’s a more productive way of looking at your projects. I do the same thing for my interfaces and call the file that contains them _Interfaces.cs.



On the next post, I’ll try to focus on strong naming and versioning of assemblies.

 

Templating with Generics & Delegates

It took me some time to understand the real power behind .NET 2.0 generics. I think that my lack of experience with delegates held me back from developing some elegant template-based code. After practicing a little, I managed to pull something off and thought I’d share it with you.


My goal:


Create a template for executing an IDbCommand object (as a Reader, NonQuery, Scalar – you name it).


Why?


There is a common pattern for querying the database and that is:


try
{
   using (IDbConnection conn = ProviderFactory.CreateConnection())
   {
      cmd.Connection = conn;

      conn.Open();

      // use the cmd object. 

   } // "using" will close the connection even in case of exception.
}
catch (Exception e)
{
   // 1. Trace ?
   // 2. Rollback transaction ?
   // 3. Throw a wrapper exception with some more information ?
}


I want to make this pattern a template so I can trace the entire traffic (SQL queries) between my application and my database in one single place!
I can even change the type of the exception it throws, or the information it adds to it, in just one single place!
You get the idea…


Solution:


Step 1 – delegates as generic handlers:


I declare these two delegates:


public delegate T ReaderHandler<T>(IDataReader reader);
public delegate T CommandHandler<T>(IDbCommand cmd);


I believe they are pretty simple to grasp; let’s see the usage of these delegates in the generic template.


Step 2 – The generic “command executer” template:


/// <summary>
/// Simple command executer "design pattern".
/// </summary>
/// <typeparam name="T">The type to return</typeparam>
/// <param name="cmd">The command</param>
/// <param name="handler">The handler which will receive the open command and handle it (as required)</param>
/// <returns>A generic defined result, according to the handler choice</returns>
public static T ExecuteCommand<T>(IDbCommand cmd, CommandHandler<T> handler) //*1
{
   try
   {
      using (IDbConnection conn = ProviderFactory.CreateConnection()) //*2
      {
         cmd.Connection = conn;

         // Trace the query & parameters.
         DatabaseTracer.WriteToTrace(TraceLevel.Verbose, cmd, "Data Access Layer - Query profiler"); //*3

         conn.Open();

         return handler(cmd); //*4
      } // "using" will close the connection even in case of exception.
   }
   catch (Exception e)
   {
      // Trace the exception into the same log.
      Tracer.WriteToTrace(TraceLevel.Error, e, "Data Access Layer - Exception"); //*5

      throw WrapException(e); //*6
   }
}


I know, this is hard to read at first glance, but I’ll walk you through it:


*1: Notice the generic type “T” – this is necessary for returning different types depending on the programmer’s choice (see step 4 for further details).
*2: Create a connection – I’m using a factory I’ve built in order to return a strongly typed connection depending on the selected provider in the application configuration (*.config) file.
*3: Trace the command (CommandText, Parameters etc.) – the DatabaseTracer class checks the “switch” I’ve declared in the .config file and traces the query only if it was requested. This gives me the ability to trace all the queries later on in the production environment (good for production debugging).
*4: Send the live command (the connection was opened) to the handler so it can use the command for its needs.
*5: Trace the exception, again, only if requested.
*6: Wrap the exception in DALException – I, as the architect, believe that the Data Access Layer should throw only DALException exceptions.


Step 3 – A first usage of the template:


/// <summary>
/// Execute the db command as reader and parse it via the given handler.
/// </summary>
/// <typeparam name="T">The type to return after parsing the reader.</typeparam>
/// <param name="cmd">The command to execute</param>
/// <param name="handler">The handler which will parse the reader</param>
/// <returns>A generic defined result, according to the handler choice</returns>
public static T ExecuteReader<T>(IDbCommand cmd, ReaderHandler<T> handler)
{
   return DalServices.ExecuteCommand<T>(cmd,
      delegate(IDbCommand liveCommand) //*1
      {
         IDataReader r = liveCommand.ExecuteReader();
         return handler(r);
      });
}


This one is even harder to follow, but relax, it’s not as bad as you might think.
*1: You can see that I’m using an anonymous delegate for the CommandHandler<T>, so the delegate gets the live command object from the ExecuteCommand method and calls ExecuteReader() on it. Afterwards, it sends the reader to the ReaderHandler<T> handler (given as a parameter).


Step 4 – Real life example, using ExecuteReader<T> to parse a reader into List<Person> and string:


/// <summary>
/// Retrieve the persons according to the specified command.
/// </summary>
/// <param name="cmd">The command to run.</param>
/// <returns>Typed collection of persons.</returns>
protected List<Person> ExecuteReader(IDbCommand cmd)
{
   return DalServices.ExecuteReader<List<Person>>(cmd,
      delegate(IDataReader r)
      {
         List<Person> persons = new List<Person>();

         while (r.Read())
         {
            // Create a Person object, fill it from the reader and add it to the "persons" list.
         }

         return persons;
      });
}


/// <summary>
/// Retrieve the persons xml according to the specified command.
/// </summary>
/// <param name="cmd">The command to run.</param>
/// <returns>Xml representation of the persons.</returns>
protected string ExecuteReaderAsXml(IDbCommand cmd) // renamed - C# can't overload by return type alone
{
   return DalServices.ExecuteReader<string>(cmd,
      delegate(IDataReader r)
      {
         StringBuilder res = new StringBuilder();

         while (r.Read())
         {
            // Build the person object xml and append it to "res".
         }

         return res.ToString();
      });
}


Step 5 – Leveraging the command executer template:


Now that we understand the template, let’s wrap some more execution “modes”. You can add to it later on, according to your needs.


/// <summary>
/// Execute the db command in "NonQuery mode".
/// </summary>
/// <param name="cmd">The command to execute</param>
/// <returns>Number of affected rows</returns>
public static int ExecuteNonQuery(IDbCommand cmd)
{
   return DalServices.ExecuteCommand<int>(cmd,
      delegate(IDbCommand liveCommand)
      {
         return liveCommand.ExecuteNonQuery();
      });
}

/// <summary>
/// Execute the db command in "Scalar mode".
/// </summary>
/// <typeparam name="T">The type of the scalar value to return.</typeparam>
/// <param name="cmd">The command to execute</param>
/// <returns>The scalar result, cast to T</returns>
public static T ExecuteScalar<T>(IDbCommand cmd) // static, like the other wrappers
{
   return DalServices.ExecuteCommand<T>(cmd,
      delegate(IDbCommand liveCommand)
      {
         return (T)liveCommand.ExecuteScalar();
      });
}



Conclusions:


Now that every command runs via the ExecuteCommand<T> method, I can capture the entire communication between my application and my database and do some enhanced production debugging if I need to. In addition, I can decide what to do in case of an exception, or add some generic validation checks later on.


All in just one place.


This is a “real life” usage of the power generics & delegates give us. Using them may be harder at first, but playing with them for a little while gives you the ability to “patternize” your repetitive code, and thus a tremendous power to control and manipulate it.
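As another (hypothetical) illustration of the same “patternize” idea, the identical shape works for any repetitive wrap-and-execute code; for example, timing an arbitrary operation in one single place. The OperationHandler<T> delegate and TimedServices class here are my own sketch, not part of the article’s DAL:

```csharp
using System;
using System.Diagnostics;

public delegate T OperationHandler<T>();

public static class TimedServices
{
    // One place that wraps every operation with timing & tracing.
    public static T ExecuteTimed<T>(string name, OperationHandler<T> handler)
    {
        Stopwatch watch = Stopwatch.StartNew();
        try
        {
            return handler();
        }
        finally
        {
            watch.Stop();
            Trace.WriteLine(name + " took " + watch.ElapsedMilliseconds + "ms");
        }
    }
}

// Usage (anonymous delegate, .NET 2.0 style):
// int sum = TimedServices.ExecuteTimed<int>("Sum", delegate() { return 1 + 2; });
```

Exactly like ExecuteCommand<T>, changing the tracing or the error policy touches one method instead of every call site.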


I hope I was clear enough,
Any feedback would be great!

 

Best Practice: verify the safe cleanup of your unmanaged code

After elaborating on the topic in my first post (see comments), it’s time to wrap the whole thing up.
As I’ve noted, the purpose was to bring up some points of interest about developing classes which wrap unmanaged code inside them. When I’m reviewing my teammates’ code, any usage of unmanaged code triggers me, as I start thinking about all the scenarios in which the unmanaged code stays unhandled and therefore causes memory leaks in the application. One of the scenarios which can cause a memory leak was the one we’ve talked about (again, the code in the first post was only for teaching purposes, so this post will be more relevant), but it could be better handled by implementing the “Dispose pattern” on the (wrapper) class:


public class CustomWriter : IDisposable
{
    private bool disposed = false; // Track whether Dispose has been called.
    private StreamWriter m_sw;

    public CustomWriter(string filePath, int numberOfLinesToWrite)
    {
        m_sw = new StreamWriter(filePath);

        // If we know that an exception may occur after initializing unmanaged code – try-catch is preferable.
        try
        {
            if (numberOfLinesToWrite < 0)
            {
                throw new ArgumentException("What's wrong with you ?!!? numberOfLinesToWrite can't be less than 0", "numberOfLinesToWrite");
            }

            // Assume that I’m using numberOfLinesToWrite here…
        }
        catch // equals to catch(Exception)
        {
            // Explicit disposal of the object – deterministic.
            Dispose();

            throw; // rethrow the exception without losing the stack trace.
        }
    }

    #region IDisposable Members

    // Implement IDisposable.
    // Do not make this method virtual !
    // A derived class should not be able to override this method.
    public void Dispose()
    {
        Dispose(true);
        // This object will be cleaned up by the Dispose method.
        // Therefore, you should call GC.SupressFinalize to
        // take this object off the finalization queue
        // and prevent finalization code for this object
        // from executing a second time.
        GC.SuppressFinalize(this);
    }

    private void Dispose(bool disposing)
    {
        // Check to see if Dispose has already been called.
        if(!disposed)
        {
            // If disposing equals true, Dispose was called explicitly,
            // so it's safe to touch other managed objects.
            if(disposing)
            {
                // StreamWriter is a managed resource, so close it only here;
                // the finalizer must never touch managed objects (they may
                // already be finalized themselves).
                m_sw.Close();
            }

            // Release raw unmanaged resources (native handles etc.) here,
            // on both the explicit and the finalizer path.

            disposed = true;
        }
    }

    ~CustomWriter()
    {
        // nondeterministic
        // Do not re-create Dispose clean-up code here.
        // Calling Dispose(false) is optimal in terms of
        // readability and maintainability.
        Dispose(false);
    }


    #endregion
}


Summing up:


If you’re developing a wrapper class for unmanaged code, make sure you implement the “Dispose pattern” properly.
Now, if you’re a consumer of a wrapper class, make sure to call the Dispose method (explicit dispose) via using or try-finally{Dispose}.
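For example, a consumer of the CustomWriter class above would do one of the following (the file path and line count are of course just placeholders):

```csharp
// Option 1: the "using" statement (preferred) -
// Dispose is called even if an exception is thrown inside the block.
using (CustomWriter writer = new CustomWriter(@"c:\temp\log.txt", 10))
{
    // use the writer...
}

// Option 2: explicit try-finally.
CustomWriter writer2 = new CustomWriter(@"c:\temp\log.txt", 10);
try
{
    // use writer2...
}
finally
{
    writer2.Dispose();
}
```

Either way, the cleanup is deterministic instead of waiting for the GC.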

 

Annotations:




  • Remember that the destructor\finalizer (~CustomWriter()) in C# is nondeterministic. This is an important one, as the unmanaged code will stay unhandled until the GC (garbage collector) decides otherwise.
  • “Explicit Dispose” is your goal while using wrapper classes, especially if you’re developing an ASP.NET application, which is hosted by a single process (aspnet_wp.exe\w3wp.exe) that hosts many applications (AppDomains), due to the nondeterministic nature of the (C#) class destructor\finalizer. In a console\Windows Forms application, each application has its own process, and shutting it down will call the objects’ destructors (if any), thus making it harder to detect memory leaks.
  • Implementing a destructor\finalizer will hurt your performance, as the GC has to handle instances of such a class in a special manner, but this is still much better than memory leaks in your application – more about it in Joe’s article (link at the end of my post).
  • If you’re a developer – while writing wrapper classes or using them, make sure that you implement\call the Dispose method as needed.
  • If you’re a code reviewer – make sure to add the appropriate checks to your code review list, so you won’t forget to verify them.


More on the subject:


You should read this article by Joe Duffy, a technical program manager on the CLR team, about “Dispose, Finalization, and Resource Management”.
* The article is quite long and hard to follow, but I’m sure that a quick scan can teach you a couple of things.

 

Exception handling – be smart about it !

I’ve encountered numerous bad usages of try, catch and throw statements in my last 3 years in .NET, so I thought I’d write up my “best practices” on this subject.


Before I begin, just to remind you about the “using” keyword:


” The using statement defines a scope at the end of which an object will be disposed. ” (MSDN)


Meaning, this code:


using (MyDisposableObject o = new MyDisposableObject())
{
   // use o…
}


Is equal to:


MyDisposableObject o = new MyDisposableObject();
try
{
   // use o…
}
finally
{
   // Don't forget, MyDisposableObject must implement IDisposable
   if (o != null)
      o.Dispose();
}


The using statement is much “cleaner” than the try-finally (-> call Dispose) block. Of course, in order to use the using statement, MyDisposableObject must implement the IDisposable interface, but most of the .NET Framework’s classes which use external resources do, so no problem here.


When do I use the using keyword instead of “try-catch(-finally)” ?


In any case where your code block doesn’t require any exception handling (catch) and you’re using a disposable object.

 

When do I catch an exception ?

 

You should use the catch statement only if you can REALLY handle the exception – meaning you want\need to “eat” the exception (catch it and do nothing about it), or you want\can try to “fix” the application’s flow according to the exception type (call transactionInstance.Rollback() in my data-access-layer if an error occurred, for example).

 

Do NOT catch exceptions as a “default” behavior in your code !

The following code is a BAD practice in exception handling:


try
{
   //some code
}
catch(Exception e)
{
   throw new Exception("X operation error: " + e.Message);
}
finally // if exists.
{
   //some code
}


Why is it bad? The catch statement doesn’t handle the original exception; it creates a (bad) new one, which means:


  1. The stack trace of the original exception will be LOST, which means I lose the ability to view the entire “process” (the who-called-whom flow).
  2. In the demonstrated code, I catch an exception and re-throw a pointless new exception. Throwing exceptions is an expensive task, so you should avoid (at any cost) throwing them as long as you don’t really need to!
  3. If you wrap an exception, at least save the original exception in the InnerException property (I’ll elaborate later on).

When do I wrap an exception, and when do I rethrow it?


  1. You should catch and wrap the exception with a new one only if you can add INFORMATIVE data to the original exception which WILL be used later on. Writing this type of code (in my DAL) will usually be a smart idea:

    SqlTransaction trans = null;
    SqlConnection conn = null;
    try
    {
        // open the connection, begin the transaction,
        // then use them to do some DB operation

        trans.Commit();
    }
    catch(Exception e)
    {
        if (trans != null)
            trans.Rollback();

        // Wrap the exception with DALException
        // I can check if e is SqlException and, by the e.Number,
        // set a "clean" (show to user) message on the DALException.
        // I can add the full "sql" exception in some custom property,
        // I can determine which stored procedure went wrong,
        // I can determine the line number (and so on).

        throw new DALException("clean message to the user", e);
    }
    finally
    {
        if ( conn != null && conn.State == ConnectionState.Open )
            conn.Close();
    }


    Why is this code smart? Because I call Rollback() in case of an error, which ensures a “clean” database. Because it “hides” the original SqlException, which allows me, in my Business Layer, to catch a generic DALException and thus abstracts the Business Layer from the Data Access Layer. And because I CAN add more informative data to the exception, which the Business Layer can pass on to the GUI (to show the user).


  2. You should rethrow the exception if you caught it but “found out” that you can’t really handle it:
    try
    {
        // do some code…
    }
    catch(Exception e)
    {
        if (e is SqlException)
        {
            // Add more information about the exception and
            // throw a new, more informative, exception:
            throw new DALException("more data", e);
        }
        else
        {
            // I can't handle it really, so I'm not even going to try
            throw; // <-- look at this syntax! I'll explain later on
        }
    }

     

    Calling throw; will bubble up the original exception (including its stack trace) – this actually “cancels” the catch statement.
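To see the difference between the two forms, here is a small sketch (DoSomething and Log are placeholder methods of my own, not from the article):

```csharp
try
{
    DoSomething();
}
catch (Exception e)
{
    Log(e);
    throw;      // rethrows the ORIGINAL exception - stack trace preserved
    // throw e; // would reset the stack trace to THIS line - avoid it
}
```

The bare throw; is the only form that keeps the full who-called-whom picture for later debugging.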

When you wrap an exception, you should *almost* always use the “InnerException” property

 

When you wrap an exception, you should save the original exception as the InnerException:


try
{
   // some code…
}
catch(Exception e)
{
   // The InnerException property is read-only, so the original
   // exception must be passed through the constructor:
   throw new MyCustomException("custom data", e);
}


This will preserve the original stack trace, which will be important for later debugging.
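When you log such a wrapped exception later on, you can walk the InnerException chain to recover the whole story. A minimal sketch (DumpException is a helper of my own invention):

```csharp
using System;

public static class ExceptionDumper
{
    // Dump an exception and every inner exception beneath it, indented by depth.
    public static void DumpException(Exception e)
    {
        int depth = 0;
        for (Exception current = e; current != null; current = current.InnerException)
        {
            Console.WriteLine(new string(' ', depth * 2) + current.GetType().Name + ": " + current.Message);
            depth++;
        }
    }
}
```

For the DAL example above, this would print the DALException first, with the original SqlException right under it.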


 


Any insights you want to share with me?

 

“Hello World” to a mini enterprise application… sounds familiar ?

Hey happy coders,

 

I’m currently developing a “2-weeks-max” application for an Israeli bank (if you see “Israeli bank” instead of the name of the bank, you don’t have sufficient privileges ;-)). The characterization was written and approved on the fly, without a deep understanding of the client’s domain, i.e. what are the other applications the user uses in his everyday work? Do they all look the same? Are there features he MUST have because he’s already used to them in his other applications?

Now I’m facing the harsh results.

 

I’ll give you an example of a “little-MUST-feature-that-can-come-back-and-bite-you”. The user requested a screen which will have the following fields in it (and some others, but they’re irrelevant now):


  1. Requester drop down list: shows all the users from a specific group in the AD (Active Directory).
  2. A->B->C linked drop down lists: 3 DDLs (drop down lists) which are connected, meaning that when you select a different value in DDL A, DDLs B & C must show the children of A (and changing B will change C’s values, of course).

This looks like a trivial request, right? Well, you’re right; if you look at this skeleton request it may appear trivial, but the key is to understand how the user expects to choose a value from those DDLs. In my case, the user likes to work with auto-complete drop down lists. Combining his auto-complete mechanism into this screen was NOT a trivial task at all. The client gave me his code, so it seemed I just had to integrate it into my code and voila; BUT the code was written for ASP, so I had to wrap it in a custom web control, handle the viewstate of the control, and worse – his code didn’t work so well (client-side behavior), so I was required to adjust and fix it as well (black box component in my a$$).

 

The 2-week application is now estimated at 3 weeks, 50% more than the first estimation!

I’m expecting good nights of 24:00-06:00 sleep at the office for the next 2 weeks, wish me luck!

 

Some lessons I’ve learned:


  1. Don’t give a fast reply for an hours estimation just to move along and get the development process running; it WILL hurt you later on.
  2. Always try to understand the solution domain as well as the client’s domain. Ask him to show you other applications that he uses for his everyday work; ask him if he wants the same GUI in the new application you’re going to develop for him. If he does, think about the amount of time you’ll need in order to do that: do you need to use a specific “template” so the new application will look the same as his old ones (how are you going to integrate it with ease?), do you need to provide special features he’s already used to (an auto-complete drop down list)? Do you need to support key combinations (Ctrl+S will save his document)? In a “clean” (without technical considerations) characterization it probably won’t show up, but you must consider it in the technical characterization (if you have one) and in your total hours estimation! Don’t forget that the user thinks adding a new feature to the application is an easy task, because he has already seen those features in other applications he uses every day (“can’t you simply copy it from here???”) – you must remind him that it’s not a trivial task, and allow him to choose whether he wants to spend time on these features.
  3. “Milk” the client for information; write down all the possible scenarios that the client expects the system to handle. What will screen X show when I log in as Administrator? What will it show when I log in as Technician, or as Supervisor? Don’t start programming unless you have the answers for those “trivial” client scenarios; refactoring will be a bitch!
  4. Update: just to clarify, I didn’t write the characterization and neither did my PM (project manager); it was written by another co-worker a long time ago. This is a huge mistake IMHO: you must ask the implementors, if you’re not one of them, for their opinion on the development process before you give your hours estimation.

Any tips you can share with me?

Generate your way to the solution via CodeSmith.

Why do we (my bad, I) need a code generator?
I really don’t want to get into the philosophical argument of whether code generation is good or not; just think of your code generator as a silent programmer who REALLY loves to do your dirty repetitive work. Please read the following with an open mind – you’ve got nothing to lose.

Background

Way back, when I was still serving my country, I was “introduced” to the world of code generation. The concept was familiar to me, but I hadn’t seen a tangible implementation until my team leader at the time, Pavel Bitz, showed us his (great) implementation, which later on became the first code generator (aka “CG”) I ever used. This CG was written in PL/SQL and ran, obviously, on our Oracle (9i) database. I wasn’t so happy about the extendability of this code, but we were under tremendous pressure to develop & deploy our projects, so there was no time to develop something “cleaner” (in my opinion, anyway). A few weeks later, in my final days in the army, I started building my own code generator just to understand more of the .NET framework’s power – i.e. usage of templates, custom config files, database reverse-engineering, reflection, practicing my OOD\OOP, etc. After all, nothing teaches you to “exploit” the framework better than writing an application that tests its capabilities. After 2 weeks I had a nice CG which ran quite well and did everything the PL/SQL CG did and then some.
My time in the army was done and I was back on the free market.

Two days after I was hired by the SQLink company, I talked with Amir Engel about the idea of code generation. He told me he had managed to write some templates with CodeSmith, and he showed them to me. It looked great, but I was hoping to use my own CG for that purpose – I still wanted to protect my “baby”. That desire was gone when I read about CodeSmith’s SchemaExplorer. My god, I was shocked at how easy it is to write a template that connects to my database with zero (0!) effort. In addition, it has its own Studio which lets me write, test and debug my templates with ease.

BUT the biggest advantage was that there were many open source templates out there – just for me to use!

Code Generation – safety first

Before you start to generate code, you must think about how to integrate code generation into your everyday work without overwriting your “custom” code. I’ll give you an example – let’s say we generate the Data Access Object for the Users table in our database. This object does the CRUD (Create, Read, Update, Delete) operations for that table. Now I need to add a “specific” method named “GetUsersFromCity” which returns all the users from a given city id. So I’ll add the method to my UsersDAL class and I’m a happy programmer, no?   (think here…)  NO!

Why not? Because the next time I want to generate this class again – because one of the fields in the table was changed or a new field was added (whatever) – I don’t want to overwrite the class and lose my custom changes!

The “Base” principle:
This is where the “Base” principle kicks in and helps us protect our custom changes. My correct “DAL” object class structure will be:


// File: UsersDALBase.cs
public class UsersDALBase
{
    // our "generated" code here – all the CRUD operations, for example.
}

// in another file!
// File: UsersDAL.cs
public class UsersDAL : UsersDALBase
{
    // My custom behaviors here.
}


Every time I regenerate the code, I overwrite the “Base” file and no harm is done – my custom changes are safe!
In my “upper” layers, I always use UsersDAL (or [table]DAL, for that matter) and never UsersDALBase.
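
To make the principle concrete, here is a minimal sketch of how the custom GetUsersFromCity method from the earlier example could live safely in the derived class. This is my own illustration, not generated output: the User entity, GetConnection() and MapUser() are hypothetical stand-ins for whatever your generated base class actually provides.

```csharp
// File: UsersDAL.cs – hand-written, never overwritten by the generator.
// Sketch only: User, GetConnection() and MapUser() are hypothetical helpers.
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class UsersDAL : UsersDALBase
{
    // A custom query the generator knows nothing about; regenerating
    // UsersDALBase leaves this method untouched.
    public IList<User> GetUsersFromCity(int cityId)
    {
        IList<User> users = new List<User>();
        using (SqlConnection conn = GetConnection())
        using (SqlCommand cmd = new SqlCommand(
            "SELECT * FROM Users WHERE CityId = @CityId", conn))
        {
            cmd.Parameters.Add("@CityId", SqlDbType.Int).Value = cityId;
            conn.Open();
            using (IDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    users.Add(MapUser(reader)); // hypothetical row-to-entity mapper
            }
        }
        return users;
    }
}
```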


Now that we’ve got the background, it’s time to introduce you to my amigo –


CodeSmith – harness its power for your own need


I’m not going to write here about how to use CodeSmith – there is plenty of information on the subject all over the web; just google a little and I’m sure you’ll do just fine. What I am going to talk about is what makes CodeSmith great, plus my tips for easy development and debugging while using this tool.

  • SchemaExplorer – stop doing all the reverse engineering for Oracle\SqlServer (via “system tables”); it’s already written for you. The only thing you need to specify is the connection string and… that’s it! You want to see how easy it is? OK, OK, relax:

      <%@ CodeTemplate Language="C#" TargetLanguage="C#"
        Src="../OeCodeTemplate.cs" Inherits="OrenEllenbogen.Templates.OeCodeTemplate"
        Debug="False" Description="Generate an entity class for a given table." %>

      <%@ Property Name="SourceTable" Type="SchemaExplorer.TableSchema" 
         Category="Connection" Description="Table Object should be based on." %>

      <%@ Assembly Name="SchemaExplorer" %>
      <%@ Import Namespace="SchemaExplorer" %>


      <%
      // Collection of all columns in the table.
      ColumnSchemaCollection Columns = new ColumnSchemaCollection(SourceTable.Columns); 
      %>


      <% 
      // Variables by table columns.
      for (int i=0; i < Columns.Count; i++) { %>
      /// <summary>
      /// <%=Columns[i].Name%><%=(Columns[i].Description.Length>0) ? " : " + Columns[i].Description : ""%>
      /// </summary>
      protected <%= GetCSType(Columns[i]) %> <%=VariableStyle(Columns[i].Name)%> = <%= GetCSDefaultByType(Columns[i]) %>;
      <% 
      } //end for 
      %>


         Some Explaining:



    • You can use a “code behind” file (*.cs) which holds the common methods for all your template files (mine is OeCodeTemplate.cs).
    • I use the methods GetCSType, VariableStyle and GetCSDefaultByType from my “code behind” file, but these are simple methods which you can copy\write by yourself.

         As you can see, it’s so easy to start dealing with the template itself instead of trying to remember 
         how to get all the indexes\columns\primary keys\(etc)  from a given table.
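
For reference, here is a rough sketch of what those three helpers might look like. These are my own guesses at minimal implementations, not the actual contents of OeCodeTemplate.cs – a real mapping would cover many more database types:

```csharp
// Sketch only – hypothetical minimal versions of the "code behind" helpers.
using System;
using System.Data;
using SchemaExplorer;

public static class TemplateHelpers
{
    // Map a database column to a C# type (only a few types shown).
    public static string GetCSType(ColumnSchema column)
    {
        switch (column.DataType)
        {
            case DbType.Int32:    return "int";
            case DbType.DateTime: return "DateTime";
            case DbType.Boolean:  return "bool";
            default:              return "string";
        }
    }

    // "UserName" -> "_userName" (member-variable naming style).
    public static string VariableStyle(string name)
    {
        return "_" + Char.ToLower(name[0]) + name.Substring(1);
    }

    // A reasonable default-value literal for the mapped type.
    public static string GetCSDefaultByType(ColumnSchema column)
    {
        switch (GetCSType(column))
        {
            case "int":      return "0";
            case "bool":     return "false";
            case "DateTime": return "DateTime.MinValue";
            default:         return "String.Empty";
        }
    }
}
```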


  • Templates GUI – you’ve seen that I’m using the SourceTable property in the previous section; the great thing about it is that when I open this template with CodeSmith Explorer, I can pick the table directly from my data source. So easy and comfortable.

   CodeSmithExplorer.JPG   



  • Huge community – you can find a lot of examples in the CodeSmith forums. Don’t try to write something that’s already been written; you can always find a template and take some of its code for your specific need.
  • It’s easy – believe me, my 4-year-old cousin could write a template with CodeSmith, it’s that easy!
  • It’s (was) free! – version 2.6 is free for use, but the new one (3.0) costs money (nothing you can’t afford, though).

Developing with CodeSmith


  • Using Lutz’s Reflector with CodeSmith 2.6 – the main problem with this version (and earlier ones) is the lack of IntelliSense. This is a problem for the simple reason that you can’t remember every method\member in, for example, the SchemaExplorer objects, which makes development harder. The simple solution is to point Reflector at SchemaExplorer.dll, which “sits” in the CodeSmith directory, and explore your way to the required member\method.
  • Debugging – debugging with CodeSmith 2.6 (and earlier) isn’t so comfortable, but it’s doable – that’s enough for me. All you need to do is put Debug="True" in the page header directive and call Debugger.Break(); before the lines you want to debug.
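
Put together, the two debugging switches look something like this – a sketch of a template header plus a break point, not a complete template:

```
<%@ CodeTemplate Language="C#" TargetLanguage="C#" Debug="True"
    Description="Template compiled with debug symbols." %>
<%
// Execution stops here when the template runs under a debugger.
System.Diagnostics.Debugger.Break();
// ... the template code you want to step through ...
%>
```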

NOTE: in the new version (3.0), Eric J. Smith, the creator of this great tool, spoiled us with brand new features like built-in IntelliSense, easier debugging options and much more.



To summarize:
Code generation, in my humble opinion, is a MONEY SAVER, simple as that – you can cut development time in half!

I’ve built full-spectrum templates for my N-Tier applications – entities (objects which represent tables), Data Access objects, Business objects, Web forms, a Utilities layer, a Cache layer and a stored-procedure script generator. I’m now able to build 50%-60% of the project “foundation” before I need to write my first line of code! In addition, I’m using my CodeSmith templates in my everyday work – I wrote some helper templates, like one that generates a new “SqlParameter” with the parameter type\size\scale according to the column type. This is great for adding a new SqlParameter to a SqlCommand with “full information” about the parameter in a single click.
And finally, I’m not afraid to change the database, because I know exactly what I need to change – and now it’s even EASY to do.
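
The output of such a SqlParameter helper template might look like the snippet below. This is a sketch of plausible generated code, assuming a hypothetical varchar(50) column named City – the real template output depends on your schema:

```csharp
// Sketch of "full information" SqlParameter output for a hypothetical
// varchar(50) column named City (cmd is an existing SqlCommand).
SqlParameter cityParam = new SqlParameter("@City", SqlDbType.VarChar, 50);
cityParam.Value = city;
cmd.Parameters.Add(cityParam);
```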


Automated build for my ASP.NET project using NAnt

As I promised, I’ll share with you my experience with NAnt while attempting to automate the build process for my ASP.NET project. After reading a lot on the subject, I must admit that the documentation is quite poor and the “open source” examples are not exactly what I was looking for. Before I start: if you don’t know what NAnt or NAnt contrib is, now would be a good time to do some reading – I’ll wait…


Requirements (What do you need to install before you can start):



  1. Download and install NAnt.
  2. Add the nant\bin directory path to your PATH variable (“System Variables”).
  3. Download and install NAnt contrib.
  4. Integrate between NAnt and NAnt contrib using these steps:

    1. Copy nant-contrib\bin\*.dll to nant\bin\ directory.
    2. Create the folder nant\bin\tasks\net (manually).
    3. Copy nant-contrib\bin\*.dll to the nant\bin\tasks\net directory.

  5. Download NAnt.UtilityTasks.dll and put it in nant\bin directory.
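
Assuming NAnt was unpacked to C:\nant and NAnt contrib to C:\nant-contrib (hypothetical paths – adjust them to your own install), steps 4.1–4.3 boil down to these commands in a cmd window:

```
copy C:\nant-contrib\bin\*.dll C:\nant\bin\
mkdir C:\nant\bin\tasks\net
copy C:\nant-contrib\bin\*.dll C:\nant\bin\tasks\net\
```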

Let’s Start:

OK, now we’re ready to start the build process, so here is my solution structure:


MySolution
   – WebProject (ASP.NET project)
      – *AssemblyInfo.cs
      – Other files and directories (& porno, of course)
   – BusinessLayer (Class Library)
   – DataAccessLayer (Class Library)
   – EntitiesLayer (Class Library)
   – *SolutionInfo.cs
  – etc..


* Before I continue with the process itself, I must mention that I’m using the “Link File” option in order to share my AssemblyVersion attribute across my Class Library projects. In a perfect world, ASP.NET projects would be able to “Link File” as well, but (surprisingly) they can’t; so I have 2 places which hold the AssemblyVersion:
1. MySolution\SolutionInfo.cs (this file is linked into all of my class libraries).
2. MySolution\WebProject\AssemblyInfo.cs.
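
The shared file is nothing more than assembly-level attributes. A minimal sketch (the attribute values here are made-up examples – your own company/product/version go in their place):

```csharp
// File: SolutionInfo.cs – linked ("Add As Link") into every class library,
// so one edit bumps the version everywhere. Sketch only; values are examples.
using System.Reflection;

[assembly: AssemblyVersion("1.0.42.0")]
[assembly: AssemblyCompany("MyCompany")]
[assembly: AssemblyProduct("MyProduct")]
```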


Great, we can carry on –
I discussed the proper build process with my friends A&A (the Amirs, aka “The Markowitz” and “The Engel”) and we agreed that we need 2 processes: “Build” (a complete new version) and “Rebuild” (rebuild the current version – required when the Build process fails for some reason).

The desired process work flow:
Note: you don’t need to copycat my process – you’re a free human being; consider me your Build mentor :-o

1. “Build” process –
   1.1 Increment my build version (SolutionInfo.cs & AssemblyInfo.cs).
         In order to do that, I use a great code snippet I found during my last weekend’s searches to 
         create the new version number. Afterwards, I check out the SolutionInfo.cs file and imprint the 
         new AssemblyVersion using the <asminfo> task.
         Eventually I check the SolutionInfo.cs file back in.

         * I use some properties – ${property_name} – for easy maintenance; you can see them in the 
            file I’ve uploaded (further down).


   <vsscheckout username="${vss.username}" password="${vss.password}" 
      localpath="${slndir}" recursive="false" writable="true" dbpath="${vss.dbpath}" 
      path="${vss.slnpath}/SolutionInfo.cs" failonerror="true" />

   <asminfo output="${slndir}\SolutionInfo.cs" language="CSharp">
      <imports>
         <import namespace="System" />
         <import namespace="System.Reflection"/>
         <import namespace="System.Runtime.CompilerServices"/>
      </imports>
      <attributes>
         <attribute type="AssemblyVersionAttribute" value="${build.version}" />
         <attribute type="AssemblyCompanyAttribute" value="${company}" />
         <attribute type="AssemblyProductAttribute" value="${product}" />
         <attribute type="AssemblyCopyrightAttribute" value="Copyright (c) 2005, ${company}." />
         <attribute type="AssemblyTrademarkAttribute" value="Trademark by ${company}" />
         <attribute type="AssemblyDelaySignAttribute" value="false" />
         <attribute type="AssemblyKeyFileAttribute" value="..//..//..//key.snk" />
         <attribute type="AssemblyKeyNameAttribute" value="" />
      </attributes>
   </asminfo>

   <vsscheckin username="${vss.username}" password="${vss.password}" 
      localpath="${slndir}\SolutionInfo.cs" recursive="false" writable="true" 
      dbpath="${vss.dbpath}" path="${vss.slnpath}/SolutionInfo.cs"
      comment="change build version: ${build.version}" />


         I use the same code (with minor changes) to update my AssemblyInfo.cs file as well.

   1.2 Put a label on the VSS with the current new version.
         This is even easier, I just use <vsslabel>(NAnt contrib) task:


      <vsslabel
           username="${vss.username}"
           password="${vss.password}"
           dbpath="${vss.dbpath}"
           path="${vss.slnpath}"
           comment="New build version: ${build.version}"
           label="${build.version}"
      />


   1.3 Get the files from the VSS to my local directory recursively using <vssget>(NAnt contrib) task.
      <vssget
           username="${vss.username}"
           password="${vss.password}"
           localpath="${vss.outdir}"
           recursive="true"
           replace="true"
           writable="false"
           dbpath="${vss.dbpath}"
           path="${vss.slnpath}"
      />


   1.4 If required (controlled by a property) – generate the projects’ PDBs.
        This is great for release mode: I can pull the PDBs for the version running in the Production 
        environment and do some extreme production debugging using the PDBs as symbols – 
        I must give the credit for the idea to “The Markowitz”!
      


        The code is quite simple – build the solution in Debug mode and copy the *.pdb files
        to my “version_directory”\pdb\WebProject.
        You can see the code in the attached file.


   1.5 Build the solution.
         I’m using <solution> task and <webmap> for easy build.
         Again, nothing fancy, look in the attached file.

   1.6 Copy the web project output to my MySolution\Builds\[version_number] directory for 
        an easy XCOPY deployment. I’m using <copywebproject> task in order to do 
        this magic, it’s working like a charm.  
   <copywebproject project="${slndir}\WebApp\WebApp.csproj" 
      todir="${outdir}\WebApp" configuration="${config}" />


   
2. “Rebuild” process – 
      Call steps 1.3 to 1.6.


TIPS:
I recommend that you download VSTweak and let VS.NET treat your .build file as an .xml file.


TODO:
I’m going to add a NAnt task which will take all the *.js\*.htc files from the web project (.csproj) and run them
through Jazmin – this will cut down the size of the files (by removing comments and other unneeded characters) for a performance improvement (the smaller the file, the faster the client downloads it).


CREDIT:
If you’re using my file as your “template”, I would appreciate it if you’d add a comment and let me know about it, including new features you’ve added or are planning to add. I’ve dedicated my time to sharing this with you – please do me the same grace. Thanks!


My (template) build file:
default.zip (3 KB)

How to run the build file?
You’ll need to configure the default.build file I’ve attached (5 minutes max).
Copy the “default.build” file into your solution directory.
Now open “cmd” and navigate to that directory.
Type “nant build” and voila!