Fluent Interfaces – Let the API tell the story

In my last post, Creating a decent API for client side script registration, Eran raised a few great comments about the readability and proper usage of this style of coding. I decided to answer his questions with a post, as my comment started to fill enough paper to clear a Brazilian forest or two.

Introduced by Martin Fowler, Fluent Interfaces enable the programmer to supply an API that can be used to build a genuine use-case in the system, or a complete logical query\request from a service. This coding style is quite different from the traditional 101 lessons of OOP school. The biggest benefit of a Fluent Interface, in my opinion, is that you can read the code out loud as if the customer were talking to you, instead of the programmer who wrote it. Sometimes it gets even better: you can read someone else's code as if she\he were sitting next to you, explaining what she\he meant to do. My take is that using a method to describe a use-case\action\query\request will (almost) always be better, in terms of readability, than using parameter(s), as you'll need IntelliSense to understand the latter. Here is a simple API; the first version is traditional OOP while the second applies Fluent Interfaces. Please bear in mind that these samples were written just to set the ground for the difference between the two coding techniques:

// take 1 – traditional style
public class ClientSideExtender
{
    public void CallMethod(string methodName, RunAt runScriptAt, bool ignoreExceptions, params object[] parameters);
}

// take 2 – Fluent Interfaces
public class ClientSideExtender
{
    public ScriptCommand CallMethod(string methodName);
}

public class ScriptCommand
{
    public ScriptCommand WithParameters(params object[] parameters);
    public ScriptCommand When(RunAt runScriptAt);
    public ScriptCommand IgnoreExceptions();
}

Assuming we have a JavaScript method with the signature "markRow(rowId, shouldDisableOtherRows)", here is how one can use these APIs to register a client-side method call (respectively):

clientSideExtender.CallMethod("markRow", RunAt.AfterPageLoaded, true, "5", true);

clientSideExtender.CallMethod("markRow").WithParameters("5", true).When(RunAt.AfterPageLoaded).IgnoreExceptions();

Obviously, both APIs will eventually emit the same code: <script …>markRow("5", true);</script>.
What I really love about Fluent Interfaces is that I don't need the freakin' IntelliSense in order to understand what "true" means as a parameter. It enables me to read it out loud – I want to call a client-side method named "markRow" with 2 parameters, execute it after the page is loaded and wrap the entire thing to swallow exceptions (assuming someone else will take care of them). If you want to call a method that doesn't take any parameters, simply don't call WithParameters. You can always change the order of the calls if you see fit (maybe calling IgnoreExceptions before When).
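The chaining (and the freedom to reorder the calls) works because each builder method records its setting and returns the command itself. A minimal sketch of how such a ScriptCommand might look behind the scenes – the fields and defaults here are my own assumptions, not the actual implementation:

```csharp
public class ScriptCommand
{
    private readonly string methodName;
    private object[] parameters = new object[0];   // assumed default: no parameters
    private RunAt runAt = RunAt.AfterPageLoaded;   // assumed default timing
    private bool ignoreExceptions;

    public ScriptCommand(string methodName)
    {
        this.methodName = methodName;
    }

    // Every call stores its setting and returns 'this',
    // which is exactly what makes the fluent chaining possible.
    public ScriptCommand WithParameters(params object[] parameters)
    {
        this.parameters = parameters;
        return this;
    }

    public ScriptCommand When(RunAt runAt)
    {
        this.runAt = runAt;
        return this;
    }

    public ScriptCommand IgnoreExceptions()
    {
        this.ignoreExceptions = true;
        return this;
    }
}
```

Since nothing is emitted until the whole chain has run, the order of the calls genuinely doesn't matter.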

One of the complaints I hear (again and again) about Fluent Interfaces is that it "allows" programmers to abuse the code. "You can change the order of the calls or forget to call one and make a big mess" is a common response to the concept. To be totally honest, I don't buy it, as programmers can make a mess of pretty much anything. We've all been there, right? I agree that it requires a different way of thinking about creating & using an API, but then again, so does learning any new programming principle, design pattern or coding technique. It took several years until people started to chew on TDD and accept the advantages of using it. My guess is that in ~1-2 years, Fluent Interfaces will be much more common in the way we write and use code (LINQ rings a bell? well OK, leaving the "sql-like" syntactic sugar aside).

This leads me to my belief about designing a Fluent Interface. I say – when appropriate, why not allow the programmer to choose?
I would create two overloads for CallMethod, as shown above, and let the programmer decide which one she\he would like to use.

I would use Fluent Interface.


Creating a decent API for client side script registration

I’ll start my post with a question:
What’s the difference between ScriptManager.RegisterClientScriptBlock and ScriptManager.RegisterStartupScript methods?

Well, the only way to find out is not by looking at the method names but rather by looking in MSDN. According to MSDN, the former registers your script right after the opening <form> element while the latter registers your script just before the closing </form>. Now, let me ask you this – how can the word "Startup" be interpreted as "end of page"?

So OK, the naming is really bad but what’s even worse are the arguments of these methods:

public static void RegisterClientScriptBlock(Control\Page control\page, Type type, string key, string script, bool addScriptTags);
public static void RegisterStartupScript(Control\Page control\page, Type type, string key, string script, bool addScriptTags);

Now, most of us write this code 95%(+) of the time:

ScriptManager.RegisterClientScriptBlock(this, this.GetType(), "some stupid key", "the script here, finally…", true); // like someone is stupid enough to pass false – if you have a full script, why not put it inside myFile.js and add it to the header?

I don't understand the real need behind creating a "unique" key from the type+key pair given to this method. Why not create a unique key each and every time? You need a simple API for the common (90%) tasks. I have almost never actually asked about IsClientScriptBlockRegistered. But enough complaining – time to write a few bits & bytes.
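To illustrate the "unique key every time" idea, here is a small wrapper sketch over the existing ScriptManager call. The class and method names are my own invention, not part of the framework:

```csharp
public static class ScriptRegistrar
{
    // Registers a script block with an auto-generated unique key,
    // so the caller never has to invent "some stupid key" himself.
    public static void RegisterScript(Page page, string script)
    {
        ScriptManager.RegisterClientScriptBlock(
            page,
            page.GetType(),
            Guid.NewGuid().ToString(), // a fresh, unique key every time
            script,
            true); // addScriptTags – what we pass 95%(+) of the time anyway
    }
}
```

Note that a fresh GUID means the same script can be registered twice; for the common 90% case that trade-off seems acceptable to me.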

I tried to play with the API a little and here is what I came up with (it's merely the beginning; I'll update on my progress during the week):

PageClientSideExtender clientSide = new PageClientSideExtender(Page);

// A better approach, IMHO, to the ScriptManager API
clientSide.RegisterScript("alert('run at the beginning of the page');");

clientSide.RegisterScript("<script>alert('run at the end of the page');</script>");

// let's register something like:
// var width = 300;

// let's register something like:
// var data = 'width:300;height:500';
clientSide.RegisterLocalVariable<string>("data").SetValueFormatted("width:{0};height:{1}", 300, 500);

// Let's register to the onload of the <body> and trigger a nice alert
clientSide.Body.Load += ClientSideScriptHelper.CreateHandler("alert('run on body onload! cool ah??');");

clientSide.Body.Load += delegate { return "alert('another message shown after the page onload event was raised. sweet!');"; };

The Fluent Interface gives it a nice "read-the-code-like-a-story" look & feel which makes things really easy to understand. There is no guessing here; the code says it all.

Another rant: all of the API examples I've demonstrated so far are implemented, although not fully tested, as abstracting the Microsoft classes requires a lot of work. The funny thing is that they (Microsoft) have decoupled things in the new Microsoft ASP.NET AJAX library (System.Web.Extensions.dll) but made everything internal! You have IPage, which is a really useful abstraction of the Page class, sitting there as an internal member. I had to come up with some heavy abstraction to make things play nicely together.

To sum up, I would really appreciate YOUR feedback about the API and any suggestions or things you would like to see in future APIs. I'll release the code later this week with a short demo to get you going.


Extension Methods change the game

One of my favorite C# 3.0 features is Extension Methods. In short, it enables you to extend an existing type with additional behaviors. For example, you can consider extending the type System.Int32 (aka "int") with a new method named IsEven.

The syntax is quite trivial:

namespace Lnbogen.Extensions
{
    public static class Extensions
    {
        public static bool IsEven(this int i)
        {
            return ((i % 2) == 0);
        }
    }
}

Notice the "this int", which means "I want to extend the type int".
Now I can write the following code (fully IntelliSense-d and compile time checking):

using Lnbogen.Extensions; // <- this brings in the extensions; the code below won't compile without it!

void TestIsEven()
{
    int num = 5;
    bool isEven = num.IsEven(); // false – called just like a regular instance method
}
What's going on behind the scenes is quite simple: the compiler got a little "smarter" and now knows to look for Extension Methods in the imported namespaces. I don't want to elaborate too much about the syntax and requirements, as Anders Hejlsberg makes it all clear in his MSDN video: C# 3.0: Future Directions in Language Innovation.

Let me direct you to the real meat:

Extension Methods are going to change the way we enhance "already used" code. Up to now, to address that need (backward compatibility), Microsoft suggested creating an interface plus an abstract class that implements it. This allowed us to extend the abstract class simply by providing new virtual members without breaking the code. The problem with this solution is that things get out of sync really fast, leaving our interfaces shy & dry (= out of date) just to avoid breaking changes. Even worse, since multiple inheritance is forbidden, inheriting from the abstract class pretty much slammed the door on most of our design choices.

Let's face it: interfaces were a programmer's nightmare (before you hack my blog and delete this post, bear with me). On one hand, we're reluctant to implement an interface with 10 members, as we prefer to finish our tasks this century; on the other hand, we want our interface to expose as much API as it can, to allow easy usage for its customers (= programmers) and to avoid casting it (the interface) to "specialist" interfaces.

Well, Extension Methods allow us to extend any type. Do you get my drift here?
You can now enhance any interface you've ever exposed, thus making old interfaces incredibly strong. You can define a lightweight interface (easy on the implementor of the interface) and extend its abilities without breaking any existing code (enriching the customers of the interface). This is exactly what Microsoft did with LINQ. They extended IEnumerable<T> with many query methods such as "Select", "Where", "GroupBy", "Join" (and many more…) while still keeping the original interface untouched. All we have to do is import the System.Query namespace.
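Here is a small sketch of that idea: a lightweight interface, enriched after the fact. The ILogger interface and its members below are made up for illustration:

```csharp
// The lightweight interface – cheap for implementors:
// one member, done in minutes, not this century.
public interface ILogger
{
    void Log(string message);
}

// The extension enriches every ILogger ever written,
// without touching the interface or breaking a single implementor.
public static class LoggerExtensions
{
    public static void LogFormat(this ILogger logger, string format, params object[] args)
    {
        logger.Log(string.Format(format, args));
    }
}

// Usage: any ILogger now "has" LogFormat:
//   myLogger.LogFormat("user {0} logged in at {1}", userName, DateTime.Now);
```

The implementor still writes one method; the customers of the interface get a richer API for free.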



Usable BitArray, take 2

After digging into BitArray with Reflector, I saw that this class, in its current state, is simply unusable. Here is why:

1.  The class does not override Equals (or ==, !=, GetHashCode). We use BitArray because bitwise comparison is relatively very fast. I don't want to perform reference comparison on a bitwise array; I use a bitwise structure to perform bitwise operations and bitwise comparisons.

2.  The class is sealed. Reading the first item should give you an idea why this is a mistake built on top of another mistake.

3.  The exposed bitwise operations change the state of the object, meaning that bitArrayInstance.And(anotherBitArray) actually changes the state of bitArrayInstance. I really don't understand why this was made by design. It is very common to perform several bitwise operations on a BitArray object and perform some comparison afterwards.
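A quick sketch of the third point – And hands you back the result, but it also mutates its receiver in place:

```csharp
using System;
using System.Collections;

BitArray a = new BitArray(8);
BitArray b = new BitArray(8);
a[0] = true; // a = 10000000
b[1] = true; // b = 01000000

// And() returns the result, but ALSO overwrites 'a' with it.
a.And(b);

Console.WriteLine(a[0]); // False – the original value of 'a' is gone
```

If you wanted to keep the original, you had to remember to Clone() first – exactly the kind of trap BitArray2 is meant to remove.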

We decided to disassemble the code into a new class named BitArray2, and I refactored the code so it would fit the "normal" (in my book) usage of BitArray.
The code:

BitArray2.zip (4.19 KB)


BitArray and Equals riddle

I have the following code:

BitArray arr1 = new BitArray(200);
BitArray arr2 = new BitArray(200);

arr1[2] = true;
arr2[2] = true;

if (arr1.And(arr2) == arr2)
    Console.WriteLine("good");
else
    Console.WriteLine("bad");

It returns "bad" for some unknown reason.
My guess is that Equals is implemented as a reference comparison and not a bitwise comparison.
This seems like strange behavior, as BitArray is the classic candidate for bitwise operations.
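Until Equals behaves, a bit-by-bit comparison is one way around it. This is a helper sketch of mine, not something the framework provides:

```csharp
using System.Collections;

public static class BitArrayComparer
{
    // Compares two BitArrays by value instead of by reference.
    public static bool AreEqual(BitArray first, BitArray second)
    {
        if (first.Length != second.Length)
            return false;

        for (int i = 0; i < first.Length; i++)
        {
            if (first[i] != second[i])
                return false;
        }
        return true;
    }
}
```

With this helper, BitArrayComparer.AreEqual(arr1.And(arr2), arr2) returns true for the snippet above (keeping in mind that And also mutates arr1).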

Am I missing something?

update: I've refactored Microsoft's original BitArray and named it BitArray2. You can find it here.


How to activate xml documentation in Visual Studio .Net 2005

(Before you start – give a look at C# and XML Source Code Documentation)

So you built a nice API that everyone can use, but you want to provide nice, IntelliSense-enabled comments along with your magical code, right?
Let's say I have the class Logger in my "Infrastructure" dll:

/// <summary>
/// Our logger….
/// </summary>
public class Logger
{
    /// <summary>
    /// Log the given message…
    /// </summary>
    /// <param name="messageToLog">The message to log.</param>
    public void Log(string messageToLog)
    {
        // ...
    }
}

Now, my clients (e.g. other teammates) want to use this logger. They add a quick "File Reference" to the dll and they're good to go. This is great _but_ they also want to see the comments I provided as they type (IntelliSense in action, baby). Surprisingly enough, they will _not_ see them by default:


As you can see (well, not see, but that’s my case) – we don’t see the class comment “Our logger…”. The same goes for our Log method:


Where are my comments ?!!?

Well, it turns out you need to make one small change to set things right. Say hello to the "XML Documentation file" option: open the Project Properties, and under the "Build" tab there is a little checkbox named "XML Documentation file" – make sure it's checked and you're done!
All you have to do is recompile the dll and add the reference again (i.e. remove & add – dumb, but it works), or manually copy the generated xml file to your bin directory (if CopyLocal = true).
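If you prefer editing the project file directly, that checkbox simply writes a DocumentationFile property into the active configuration. The exact file name and path below are just an example:

```xml
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <!-- Checking "XML Documentation file" adds this property: -->
  <DocumentationFile>bin\Debug\Infrastructure.xml</DocumentationFile>
</PropertyGroup>
```

The generated xml file sits next to the dll, which is why copying it to the client's bin directory is enough to light up IntelliSense.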





You should always make sure that "XML Documentation file" is checked!
Why provide comments if no one can see them (unless they have access to the code and make a "Project Reference")?

I'm not sure why Microsoft left this checkbox unchecked by default.
Am I missing something here?


Static Events Notifier

A few days ago a teammate asked me to help her with a little some-some. This some-some was an event delegation problem (some-some sounds better) that she wanted to implement and wasn't really sure how. The scenario is quite simple: we have a few classes and one of them is a little "deep" (a deep object graph), meaning:

object a1 of type A
  object b1 of type B
    object c1 of type C
      inner member d1 of type D
        inner member e1 of type E
          inner member f1 of type F
Now, the value of f1 can change, and when it does, we need to notify the rest of the instances (a1, b1, c1, d1, e1) of that change and provide them some extra details about the change itself. One solution is to add an event to every class and register each one to its inner member's event; then f1 can trigger the change to e1, which will trigger the event to d1, and then on to c1->b1->a1. In short – delegate the call all the way up and around. That seems like a hard job to me – too many places to change, too many events to declare that aren't really necessary. I came up with a static EventsNotifier solution. Think of it as a repository for registering and triggering events. Here it goes:

public static class EventsNotifier
{
    public static event EventHandler<Status> StatusChanged = delegate { };

    public static void TriggerChangeStatus(object sender, Status s)
    {
        StatusChanged(sender, s);
    }
}

Now each class, in its constructor, can register to the EventsNotifier.StatusChanged event, and my f1 can call EventsNotifier.TriggerChangeStatus(this, new Status(…)); which will notify the rest of the objects. I know it's not a perfect solution, but it has its pros. What would you do?
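To make the registration side concrete, here is how one of the classes in the graph could hook in. The Status payload and the handler body are my own assumptions:

```csharp
public class A
{
    public A()
    {
        // Subscribe once, in the constructor – no event delegation
        // chain through B, C, D and E needed.
        EventsNotifier.StatusChanged += OnStatusChanged;
    }

    private void OnStatusChanged(object sender, Status s)
    {
        // React to f1's change here, using the extra details in 's'.
    }
}
```

One caveat worth noting: since the event is static, long-lived subscriptions keep instances alive, so objects with shorter lifetimes should unsubscribe when they are done.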


COM objects are from Jupiter, .Net assemblies are from Mars

COM objects think they understand .Net assemblies via a proxy (the tlb file), but as a matter of fact this proxy is a mediator (girlfriend\gay-friend) to the real assembly, and it is the one that actually makes the connection work. .Net assemblies, in turn, think they understand COM objects, again via a proxy (the Interop file), but as a matter of fact this proxy is only a mediator (friend, lesbian-friend) that makes the connection work.

The hard migration process my team is going through these days (and will keep dealing with for the next few iterations) makes you (well, me, but a sorrow shared is a fool's comfort) appreciate one-platform systems. Integration between different platforms can be a female dog (translation: bitch!) if you are not familiar with the tips & tricks of the subject. Working with the registry is a complete disaster. I don't think MS's initial idea was to abuse the registry so much and literally write every reference as a long GUID that points to some class\interface\dll. Heck, registering a simple dll (via RegSvr32.exe) and unregistering it can leave garbage in the registry, not to mention that migrating VB6 code into VB.NET\C# requires RegAsm.exe for "old" clients. So much garbage to clean up. And you thought that taking out the garbage at home, once every your-wife-is-nagging-again, was hard. Think again.

Yes, they (COM, .Net) know how to communicate and live together, but just like men and women – you can't really understand how it actually works.

I wonder if I should start writing a book on the topic…

p.s – don't get me wrong, women are hard to understand, but that only makes the game more fun. So does the migration challenge.


Changing the Output Path in your Web Applications is a bad idea

<rational thinking>

Let's assume we have a WebSite (the same issue applies to a WebService, btw) named WebApplication1. Now, we want to put its (the website's) output files into some other directory (!= the "bin" directory) for development reasons (working as part of a team with some sophisticated SourceSafe setup). What's the first thing you (and I) do? We use our "rational" programmer nature: Right-Click on the project -> Properties -> Build tab -> and change the Output path to whatever we need.


(Instead of “bin\” we can write here “..\..\infrastructure” for example)

We then build the whole thing and, surprise surprise, the new output path contains all the dlls as expected. Awesome!
Satisfied with the greatness of Visual Studio .Net 2005, we now want to Publish the WebSite so we (or QA) can play with it. "Think as a developer, think as a developer", I say to myself, and Right-Click the WebSite project -> Publish… A few really easy "decisions" and ~10 seconds later, VS.NET tells me (it speaks to me, I swear) that my site was published successfully.

Happy as a little girl with a new puppy, I enter my site: http://localhost/webapp1/Default.aspx and – oops!


The page can't find its "code behind" (the class it inherits from)! What the heck is going on here!?

Well, it turns out that the Publish process is not as smart as you may think it should be. Changing our Output path to another directory (!= "bin") caused this whole mess, as the Publish process simply copies all the files from the bin directory into the new (published) bin directory. No questions asked. The Publish algorithm does not check whether you actually compile your dlls into another directory via the Output path, and does not take it into account.

</rational thinking>

<effective thinking>

Fortunately for us, the solution is pretty easy: leave your Output path at the original location ("bin\") and use the Build Events (post-build, in this scenario) to copy the output files into your "infrastructure" (or whatever) directory, like this:


(The command: xcopy /Y /S "$(TargetDir)*.*" ..\..\Infrastructure)

</effective thinking>

May it save you the 15 minutes it took me and my teammate Hagay to solve this one.


Missing Invoke button while trying to activate WebMethod from the explorer

The solution is pretty simple: just add support for the HttpPost protocol in your web.config file:

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
  <system.web>
    <webServices>
      <protocols>
        <add name="HttpPost"/>
      </protocols>
    </webServices>
  </system.web>
</configuration>

Publishing the WebService does not automatically add these lines, so you'll have to do it manually.