Visual Studio .Net 2005 JIT debugger isn’t available

For some strange reason, I can't seem to debug my .NET 2.0 application via the VS.NET 2005 debugger. I call Debug.Fail("…") in my code and run it via Ctrl+F5 – this is the screen I get:


[Screenshot: no2005debugger.jpg – the debugger-selection dialog, with no VS.NET 2005 debugger in the list]


Where the heck is my VS.NET 2005 debugger?!
If anyone has found my lost debugger, please send it back to this address (via a comment).


Thanks!


btw – debugging with F5 works fine. That makes me wonder how I can control the items in the box (in the picture); maybe some registry magic is required. I'll update if I manage to find my debugger…
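For reference, here's the minimal repro (a sketch – and from what I've read, the debugger list in that dialog is driven by registry settings such as HKLM\SOFTWARE\Microsoft\.NETFramework\DbgManagedDebugger, so "registry magic" is probably the right direction):

using System.Diagnostics;

class Program
{
   static void Main()
   {
      // Debug.Fail only fires in builds compiled with the DEBUG symbol.
      // Running this via Ctrl+F5 (no debugger attached) pops up the
      // assertion dialog; choosing Retry brings up the debugger list
      // from the screenshot above.
      Debug.Fail("Where is my VS.NET 2005 debugger?");
   }
}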


 

 

Overriding right, right ??

Do I need to call base.METHOD after overriding it, and if so – when is the right time?



Well, I'm glad you asked; I had a chat with my co-worker about the subject when he overrode some Form (WinForms) methods and wasn't sure when it's smart to call the base method, and if so, whether he should call base.METHOD before his own code or after. To clarify, a quick example:


public class MyForm : Form
{
   protected override void OnLoad(EventArgs e)
   {
      //OK, what here ???!!


      // Custom code here ?
      //base.OnLoad(e); // should I ?
      // Or maybe – Custom code here… ?

   }

}


After thinking about it for a few seconds, I came up with this explanation – sort of my "best practice" for the issue:


When do I call base.METHOD:



  1. If you don't know the implementation of the method you've just overridden: assume it does some "magic" stuff which is important to the correct flow of the application; but this doesn't mean you must call the base method.

    • If you want to stop the logic flow, it makes sense to override the appropriate method and write your code to change the flow according to your needs. In this scenario there is no need to call the base method.
    • If you don't want to change the logic flow, or you're not sure – call the base method.

  2. If you know the implementation of the method you've just overridden: no "magic" stuff, so this is an easy call – if you need the code in the base method, call it; otherwise, leave it out.

Do I call base.METHOD after my custom code, or before?


This is a tricky one and there is no straight answer – my way of tackling this question is simply an Explore, Run & Learn "process". Meaning: Explore the base method's original purpose and try to figure out its logic; this will give you some direction on whether you need to call the base method at all. For example – if you override the Render method, you can tell (from MSDN or from Reflector) that it renders the entire object graph, so you probably should call it (eventually). Now that you have some feeling about whether your code should run before or after the base.METHOD call, Run it and examine the results:



  • Does the page behave as you expected?
  • Can you mess it up so it won't work in some scenarios? How can you deal with that?
  • Grab the nearest programmer available – can he think of something that will mess it up? If so, how can you deal with it?

Based on the results and the answers to those questions, you should now have a clear vision of your decision. Make the required adjustments, place some important remarks (to describe your thinking at the time), and feel good about your code – you did your best to get a good result. And most important – Learn from the process so it will come more naturally the next time you face this decision.
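To close the loop on the OnLoad example above, here's the pattern I'd default to (a sketch; in WinForms, base.OnLoad(e) is what raises the Load event, so skipping it silently drops anyone subscribed to Load – InitializeMyControls is a hypothetical helper):

using System;
using System.Windows.Forms;

public class MyForm : Form
{
   protected override void OnLoad(EventArgs e)
   {
      // Call the base method first: base.OnLoad raises the Load event,
      // so Load subscribers run before my custom logic.
      base.OnLoad(e);

      // My custom initialization goes here (hypothetical helper).
      InitializeMyControls();
   }

   private void InitializeMyControls()
   {
      // ...
   }
}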


 

WebService unit tests with extensive code generation.

The title got your attention, right? Good, because this task – unit testing the WebService in one of our projects – seemed like a daunting mission at first, but it's amazing how good tools and healthy thinking enable you to find new ways to "assign" the redundant work to the one (and only) dude that never complains (though it makes noises here and there) – the computer!


Let me take a step back and explain a little bit about our WebService and the unit tests it was required:
We currently finished develop another application in our department and this application was required to expose an interface for “CRUD”ing(Create,Read,Update,Delete) several “System Tables”(“cities”,”countries”,”languages” etc) in the application. Eran(my teammate) was assigned for the job and he did a great job by well designing a general (XML)protocols for “talking” with the WS and by implementing the WS itself. After building the WebService and manually testing it he told me that his work was done and he’s ready for his code review. I still have the image burned in my mind about how happy he was thinking about the required protocols, learning about WSE 2.0 (for WS security), implementing the WS itself to be extendable and maintainable, that it was almost tragic to see his face after I’ve told him that we should write an extensive unit tests for this WS due to his importance and it’s extensive usage by our 3rd party softwares. The WebService had 5 methods and it handled 18 tables in our application so I thought about testing 3 different cases for each method, for each table:



  1. One valid XML, which should return the expected valid data from the WS.
  2. One XML with an invalid structure, which should return an error from the WS.
  3. One XML with invalid data, which should return an error from the WS.

The math was simple:


3 XML files * 5 methods * 18 tables = 270 unit tests!


Eran started to plan the following 2 months around this task, but I had something else in mind – let's generate those unit tests, and heck, while we're in the middle of it, let's generate the XMLs as well!


I started writing the templates (while Eran watched and learned the required basics) via CodeSmith (did I mention that this tool rocks? I'm sure I did, but again – Eric, great job), and after 2 hours we had all the XML files generated. Eran continued the job, wrote the classes (which hold the unit test methods) and checked that everything integrated properly. After a total of about 6 hours we had 270 unit tests; but more importantly, we avoided a lot of dirty work, and if we ever need to support a new table, its unit tests will be a 2-click job. Quite extensible, isn't it?
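To give you a feeling for the output, here's roughly the shape of one generated test (a sketch – the NUnit attributes are real, but SystemTablesService, Execute, the file layout and the <Error> element convention are hypothetical stand-ins for our actual protocol):

using System.Xml;
using NUnit.Framework;

[TestFixture]
public class CitiesWebServiceTests
{
   [Test]
   public void Read_ValidXml_ReturnsExpectedData()
   {
      // Load the generated request XML for the "cities" table.
      XmlDocument request = new XmlDocument();
      request.Load(@"TestData\Cities_Read_Valid.xml");

      // Call the WebService and assert no error element came back.
      SystemTablesService ws = new SystemTablesService();
      string response = ws.Execute(request.OuterXml);

      Assert.IsFalse(response.Contains("<Error>"),
         "A valid request should not return an error element.");
   }
}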

 

Exception handling – be smart about it!

I've encountered numerous bad usages of try, catch and throw statements during my last 3 years in .NET, so I thought I'd write up my "best practices" on the subject.


Before I begin, a quick reminder about the "using" keyword:


"The using statement defines a scope at the end of which an object will be disposed." (MSDN)


Meaning, this code:


using (MyDisposableObject o = new MyDisposableObject())
{
   // use o…
}


is equivalent to:


MyDisposableObject o = new MyDisposableObject();
try
{
   // use o…
}
finally
{
   // Don't forget, MyDisposableObject must implement IDisposable;
   // note the declaration sits outside the try, so o is definitely
   // assigned by the time the finally block runs.
   if (o != null)
      o.Dispose();
}


The using statement is much cleaner than the try-finally (-> call Dispose) block. Of course, in order to use the using statement, MyDisposableObject must implement the IDisposable interface, but most of the .NET Framework classes that use external resources do, so no problem there.


When do I use the using keyword instead of "try-catch(-finally)"?


Whenever your code block doesn't require any exception handling (no catch) and you're using a disposable object.

 

When do I catch an exception?

 

You should use the catch statement only if you can REALLY handle the exception – meaning you want/need to "eat" the exception (catch it and do nothing about it), or you can try to "fix" the application's flow according to the exception type (calling transactionInstance.Rollback() in my data-access layer if an error occurred, for example).
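A legitimate "eat" looks something like this (a sketch – tempFilePath is hypothetical and System.IO is assumed); the point is that I know exactly why I'm swallowing the exception, and the application flow doesn't care:

try
{
   // Best-effort cleanup; failing to delete a temp file
   // shouldn't break the main flow.
   File.Delete(tempFilePath);
}
catch (IOException)
{
   // "Eating" the exception on purpose – there's nothing useful
   // to do here, and the flow must go on.
}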

 

Do NOT catch exceptions as a "default" behavior in your code!

The following code is a BAD practice in exception handling:


try
{
   //some code
}
catch(Exception e)
{
   throw new Exception("X operation error: " + e.Message);
}
finally // if exists.
{
   //some code
}


Why is it bad? The catch statement doesn't handle the original exception – it creates a (bad) new one, which means:


  1. The Stack Trace of the original exception will be LOST, which means I lose the ability to view the entire "process" (the who-called-whom flow).
  2. In the demonstrated code, I catch an exception and re-throw a pointless new one. Throwing exceptions is an expensive operation, so you should avoid throwing them unless you really need to!
  3. If you wrap an exception, at least save the original exception in the InnerException property (I'll elaborate later on).

When do I wrap an exception, and when do I rethrow it?


  1. You should catch and wrap the exception with a new one only if you can add INFORMATIVE data to the original exception which WILL be used later on. Writing this type of code (in my DAL) is usually a smart idea:

    SqlTransaction trans = null;
    SqlConnection conn = null;
    try
    {
        // connectionString is assumed to be defined elsewhere.
        conn = new SqlConnection(connectionString);
        conn.Open();
        trans = conn.BeginTransaction();

        // use the connection to do some DB operation

        trans.Commit();
    }
    catch(Exception e)
    {
        if (trans != null)
            trans.Rollback();

        // Wrap the exception with DALException (my custom exception type).
        // I can check if e is SqlException and, by e.Number,
        // set a "clean" (shown to the user) message on the DALException.
        // I can add the full "sql" exception in some custom property,
        // I can determine which stored procedure went wrong,
        // I can determine the line number (and so on).

        throw new DALException("clean message to the user", e);
    }
    finally
    {
        if ( conn != null && conn.State == ConnectionState.Open )
            conn.Close();
    }


    Why is this code smart? Because I call Rollback() in case of an error, which ensures a "clean" database. Because it "hides" the original SqlException, which allows me, at my Business Layer, to catch a generic DALException – abstracting the Business Layer from the Data Access Layer. And because I CAN add more informative data to the exception, which the Business Layer can pass on to the GUI (to show the user).


  2. You should rethrow the exception if you caught it but "found out" that you can't really handle it:
    try
    {
        // do some code…
    }
    catch(Exception e)
    {
        if (e is SqlException)
        {
            // Add more information about the exception and
            // throw a new, more informative, exception:
            throw new DALException("more data", e);
        }
        else
        {
            // I can’t handle it really, so I’m not even going to try
            throw; // <– look at this syntax ! I’ll explain later on
        }    
    }

     

    Calling throw; will bubble up the original exception (including its Stack Trace) – this effectively "cancels" the catch statement.

When you wrap an exception, you should *almost* always use the “InnerException” property

 

When you wrap an exception, you should save the original exception as InnerException:


try
{
   // some code…
}
catch(Exception e)
{
   // Exception.InnerException is read-only, so the ONLY way to set it
   // is through the exception's constructor:
   throw new MyCustomException("custom data", e);
}


This preserves the original stack trace, which will be important for later debugging.
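For completeness, here's a minimal sketch of a custom exception like the DALException used above (the real one would carry more custom properties – stored procedure name, error number and so on):

using System;

[Serializable]
public class DALException : Exception
{
   public DALException(string message)
      : base(message)
   {
   }

   // The only way to set InnerException is through this constructor.
   public DALException(string message, Exception innerException)
      : base(message, innerException)
   {
   }
}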


 


Any insights you want to share with me?

 

Recording your meetings with the client – does it make sense?

It struck me today on the train (on my way home) while I was reading a book named "Object Thinking" (by David West); I'm at chapter 7, which talks about discovering the client's domain, i.e. what the client expects from the application. In short, although it's hard (especially if you're a technical person by nature), you must understand the client's requirements without casting them into your own world ("I'll implement it with a web service" or "This is a classic multi-threading application"). Your job in this initial step is just to analyze the client's needs and try to depict his world as a simple world of "objects": what are my primary entities (for example: "Employee", "Employer", "Agreement" etc.), what are the relationships between them, and how does the client expect to "activate" those entities (screen functionality)?


I remembered my meeting with our last client, and reading this stuff made me think about how well (?) I managed to handle this task. While the client was depicting his world to me, I was trying to understand and write down his requirements and his special needs from the application. I noticed that writing down the client's needs\remarks\requests sometimes stopped the conversation flow, caused needless repetition of questions & answers, and sometimes even got me out of focus.


So, maybe tape-recording the entire meeting (well, the important stuff anyhow), concentrating on asking the right questions and analyzing the answers later can produce a better characterization? A better understanding of the client's domain? Shorter and more thorough meetings?
It seems like a good idea; I think I'll give it a try at my next meeting.


What do you think?

 

"Hello World" to a mini enterprise application… sounds familiar?

Hey happy coders,

 

I'm currently developing a "2-weeks-max" application for an Israeli bank (if you see "Israeli bank" instead of the bank's name, you don't have sufficient privileges ;-)). The characterization was written and approved on the fly, without a deep understanding of the client's domain, i.e. what other applications do the users work with every day? Do they all look the same? Are there features the user simply MUST have because he's already used to them in his other applications?

Now I’m facing the harsh results.

 

I'll give you an example of a "little-MUST-feature-that-can-come-back-and-bite-you". The user requested a screen which had to contain the following fields (and some others, but they're irrelevant now):


  1. Requester drop-down list: shows all the users from a specific group in AD (Active Directory).
  2. A->B->C linked drop-down lists: 3 DDLs (drop-down lists) which are connected, meaning that when you select a different value in DDL A, DDLs B & C must show all the children of A (and changing B will change C's values, of course).

These look like trivial requests, right? Well, you're right – if you look at the skeleton request it may appear trivial, but the key is to understand how the user expects to choose a value from those DDLs. In my case, the user likes to work with auto-complete drop-down lists. Combining his auto-complete mechanism into this screen was NOT a trivial task at all. The client gave me his code, so it seemed that I just had to integrate it into my code and voila; BUT the code was written for classic ASP, so I had to wrap it in a custom web control (see the sketch below), handle the control's viewstate, and worse – his code didn't work so well (client-side behavior), so I had to adjust and fix it as well (black-box component my a$$).
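For the curious, the wrapping boiled down to something like this sketch (all names hypothetical; the real control also emits the legacy auto-complete script and wires up the A->B->C postbacks):

using System.Web.UI;
using System.Web.UI.WebControls;

// A custom web control that persists its selected value in ViewState,
// so the legacy client-side auto-complete survives postbacks.
public class AutoCompleteDropDown : WebControl
{
   public string SelectedValue
   {
      get
      {
         object value = ViewState["SelectedValue"];
         return (value == null) ? string.Empty : (string)value;
      }
      set { ViewState["SelectedValue"] = value; }
   }

   protected override void Render(HtmlTextWriter writer)
   {
      // Emit the legacy markup and auto-complete script around
      // a plain <select> element here.
      base.Render(writer);
   }
}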

 

The 2-week application is now estimated at 3 weeks – 50% more than the original estimation!

I'm expecting good nights of 24:00-06:00 sleep at the office for the next 2 weeks, wish me luck!

 

Some lessons I’ve learned:


  1. Don't give a fast reply on an hours estimation just to move along and get the development process running; it WILL hurt you later on.
  2. Always try to understand the solution domain as well as the client's domain. Ask him to show you the other applications he uses for his everyday work; ask him if he wants the same GUI in the new application you're going to develop for him. If he does, think about the amount of time you'll need in order to do that: do you need to use a specific "template" so the new application will look the same as his old ones (and how are you going to integrate it with ease)? Do you need to provide special features he's already used to (auto-complete drop-down lists)? Do you need to support key combinations (Ctrl+S to save his document)? In a "clean" (no technical considerations) characterization this probably won't show up, but you must consider it in the technical characterization (if you have one) and in your total hours estimation! Don't forget that the user thinks adding a new feature is an easy task, because he has already seen those features in other applications he uses every day ("can't you simply copy it from here???") – you must remind him that it's not trivial, and let him choose whether he wants to spend time on these features.
  3. "Milk" the client for information; write down all the possible scenarios the client expects the system to handle. What will screen X show when I log in as Administrator? What will it show when I log in as Technician, or as Supervisor? Don't start programming until you have the answers to those "trivial" client scenarios – refactoring will be a bitch!
  4. Update: just to clarify, I didn't write the characterization, and neither did my PM (project manager); it was written by another co-worker a long time ago. This is a huge mistake IMHO – you must ask the implementors (if you're not one of them) for their opinion on the development process before you give your hours estimation.

Any tips you can share with me?

 

 

 

Freeze! Put your hands on the keyboard!

I'm one of those guys who likes to use shortcuts, saving myself the trouble of remembering the path of every single program I use…

 

 

Problem

 

So here is my everyday scenario (tell me if it sounds familiar) –

I'm looking for "Internet Information Services" in order to customize my Virtual Directory.

I start looking at my "Programs" menu, but damn, I have only 3 programs there!

Oh yes, I need to click that ugly little arrow button to view all the programs.

OK, done that; now I'm searching for my Administrative Tools menu, but I can't seem to find it.

Yes, I remember – I need to check "Display Administrative Tools" in order to see it, SHI$!

Finally, I'm able to view this menu and here I go, mission completed…

 

I bet this scenario is quite familiar to you as well, so I've added the IIS shortcut to my taskbar to shorten the process a little.

 

The main problem is that my taskbar needs space – space I'd rather "waste" on my VS.NET instead of on 3-4 rows of program shortcuts. In addition, I don't like leaving the keyboard and messing around with the mouse (sounds dirty, I know, let it go)!

 

 

Solution

 

I'm glad to introduce to you – SlickRun!!

This devil has a "magic keywords" mechanism which is absolutely brilliant!

In short, every magic keyword is a shortcut – to a site, to a document, to a program, ANYTHING!

 


Here are some shortcuts I use in my everyday work:

ggl [your search sentence here] – search Google.
iis – open Internet Information Services.
vss – well, need I say more?
cs\vsnet – open VS.NET.
events – open the Event Viewer.
codesmith – open CodeSmith Studio.
n2 – open Notepad2.
msdn – open my latest version of MSDN.
regulator – open Roy Osherove's Regulator.
ie – open an Internet Explorer window.
mssql – open SQL Server 2000 Enterprise Manager.
reflector – open Lutz Roeder's Reflector tool, which I love (a MUST tool when working with CodeSmith 2.6).
lnbogen – opens my site! COOL!
[myprojectname] – open my project's directory (much faster than Run -> c:\path-to-my-project\ & Enter).
[company documentation] – the main directory holding your company's (per-application) customer requests, application structure, code guidelines and any other "everyday" directory\document you search for.
[projecttodo] – my personal TODO.xls file (one per project).

 

Here is my taskbar (some of it, anyway):

[Screenshot: taskbar.gif]

 

All you have to do is create a magic keyword; then a single Ctrl+Q plus your magic keyword, and your shortcut is running.

 

The greatest thing is that my hands don't leave the keyboard, which I find a lot faster than moving the mouse, going back to the keyboard, moving the mouse again, and so on.

 

I can hear you from here – "this is a nice feature but certainly not a big time saver". Let me refresh your memory: count the number of times a day your hands leave the keyboard and you waste time searching for this or that application\document, and multiply that number by 15. You'll get a good estimate of the number of seconds you waste every day on SEARCHING instead of DOING, on trying to REMEMBER paths instead of staying FOCUSED on the "real" work.

 

Even more – you can export\import your shortcuts and share the definitions between your home\work computers, or even between your teammates (assuming the installation paths are the same, of course). This will keep your "easygoing" work environment on every computer you use.

 

I'm hooked – tell me if you are too (and share your everyday magic keywords with us!).