Generic Constraints: Use Interfaces over Base Classes

Prologue:


I'm a big fan of pairing an interface with a base class that implements that interface with default behavior. Providing these two together lets the programmer decide whether he wants to inherit from the base class and override only the things he really cares about, or to implement the interface from scratch.


Here is an example from our code:


1). The Interface:


public interface IPersistentEntity
{
   string ToXml();
   string GetKey();
   // … snippet …
}


2). The Base Class that implements the interface:


/// <summary>
/// Base class for a Business Entity.
/// </summary>
public abstract class EntityBase : IPersistentEntity
{
   public string GetKey()
   {
      return GetKeyCore();   
   }

   protected abstract string GetKeyCore();

   public virtual string ToXml()
   {
      // Build the xml via Reflection over the current type.
   }

   // … snippet …
}



I constrain my EntityCollection object to EntityBase:


/// <summary>
/// Represents a strongly typed list of Business Entities.
/// </summary>
/// <typeparam name="ENT">Entity type which inherits from the EntityBase class.</typeparam>
[Serializable]
public class EntityCollection<ENT> : List<ENT>, ISerializable
   where ENT : EntityBase
{
   // … snippet …
}


As you can tell, I'm constraining the ENT generic type to be any object that inherits from EntityBase.


The Story:


My teammate, Moran (yes, him again, gosh… he really challenges me to think deeply about our infrastructure), wanted to create a new EntityCollection of ISpecificEntity (i.e. EntityCollection<ISpecificEntity>). That means he wanted to extend the IPersistentEntity interface I showed above.

The problem was that my EntityCollection constrains the generic type to inherit from EntityBase.

That means the interface would have to inherit from the EntityBase class, which is obviously impossible. So instead of an interface, he would have needed some sort of SpecificEntity class that inherits from EntityBase. Well, that's no good, as some of our existing entities already inherit from EntityBase directly and Moran wanted to use them in the collection. He would have had to change the signature of this class (among others):


public class Zone : SpecificEntity // was: EntityBase
{
   // … snippet …
}


The problem was that Zone was also part of another inheritance relationship that we couldn't change. To make a long story short, our headaches were the result of one simple fact:
One of the biggest drawbacks of inheriting from a base class is [your answer here].
You're right – the biggest drawback is that you can do it only *once*.


Using abstract classes with interfaces behind them is a very good practice, in my book anyway, but using the abstract class as a constraint on a generic type can cause you problems later on. All we did was change the signature of EntityCollection to:


public class EntityCollection<ENT> : List<ENT>, ISerializable
   where ENT : IPersistentEntity
{
   // …
}


That made our lives easier, as a class can always implement additional interfaces.
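For example, here is a hypothetical sketch (ISpecificEntity's member and the SomeExistingBase class are made-up names for illustration) of how a class can now keep its existing inheritance chain and still qualify for the collection:


public interface ISpecificEntity : IPersistentEntity
{
   int SpecificId { get; }
}

// Zone keeps its existing base class and simply implements the interfaces on top:
public class Zone : SomeExistingBase, ISpecificEntity
{
   public int SpecificId
   {
      get { return 42; }
   }

   public string GetKey()
   {
      return SpecificId.ToString();
   }

   public string ToXml()
   {
      return "<Zone/>";
   }
}

// Both of these now compile against the interface constraint:
EntityCollection<Zone> zones = new EntityCollection<Zone>();
EntityCollection<ISpecificEntity> mixed = new EntityCollection<ISpecificEntity>();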


Epilogue:


The single-inheritance limitation of classes is a dangerous pitfall you should avoid when working with generic constraints, so my tip for you is:

Constrain your generic type(s) by interface, not by base class.

 

LINQ, IQueryable and in between

I'm doing some thinking about our GUI, Data Access Layer, Entities and the way we should combine them. Looking a little further ahead, it's just a matter of time and effort until we use LINQ's great infrastructure. I've downloaded a few movies about LINQ from Channel 9 and started to glance at my future. If you're an architect, you should really take a look at Chatting about LINQ and ADO.NET Entities. It will give you better insight into LINQ's architecture and where you should intervene if needed.


Our department uses a product we developed in-house named Code-Agent, which allows us to generate our application based on our ERD. At the current stage we have our own infrastructure for moving data between our tiers, but it lacks some abilities I really want to see in the near future.


This movie made me *think*.


But before I carry on with my insights, let me talk about LINQ, IQueryable<T> and lambda expressions. Let's start with a little example:


var results = from order in orders
              where order.Amount > 100
              select order.Name;


results will now contain the names of all the orders with Amount > 100.
The compiler transforms this query into method calls:


var results = orders.
                  Where(delegate(Order order) { return order.Amount>100; }).
                  Select(delegate(Order order) { return order.Name; });


Or via lambda expressions (sort of a version 2 of the anonymous method):


var results = orders.
               Where(order => order.Amount>100).
               Select(order => order.Name);


Seeing this kind of solution immediately made me think about writing our own LINQ wrapper for the meanwhile. I still can't use LINQ in production code, as it's really premature at its current build, but I can definitely build some sort of engine which implements the core principles of LINQ and, in time, remove my implementation bit by bit. The next step was to look at IQueryable<T>: this baby inherits from IEnumerable<T> and lets you do the magic I demonstrated above (Where, Select, GroupBy, OrderBy… you name it). I found a great post on this matter at the Wayward Blog.


What's the next step, you might ask?
I guess the answer is to emulate IQueryable<T> in our current infrastructure and aim for compatibility and consistency with LINQ.


Here is a quick implementation off the top of my head:


// A quick sketch - hand-waving a hypothetical static gateway that queries all orders:
EntityCollection<Order> orders =
                  EntityCollection<Order>.
                     Where(delegate(Order o) { return o.Amount > 100; });

foreach (Order o in orders)
{
    Console.WriteLine("Order id: " + o.Id + " name: " + o.Name);
}


EntityCollection<T> will implement IQueryable<T> and the basics.
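Here is a minimal sketch of how those basics could look on EntityCollection<ENT> itself, using plain C# 2.0 delegates for now (the Projection delegate below is my made-up stand-in for what LINQ calls Func<T, TResult>; the ISerializable plumbing is omitted):


public delegate TOut Projection<TIn, TOut>(TIn input);

public class EntityCollection<ENT> : List<ENT>
   where ENT : IPersistentEntity
{
   // LINQ-like filtering; Predicate<T> already ships with .NET 2.0.
   public EntityCollection<ENT> Where(Predicate<ENT> predicate)
   {
      EntityCollection<ENT> result = new EntityCollection<ENT>();
      foreach (ENT entity in this)
      {
         if (predicate(entity))
            result.Add(entity);
      }
      return result;
   }

   // LINQ-like projection of each entity into another shape.
   public List<TOut> Select<TOut>(Projection<ENT, TOut> projection)
   {
      List<TOut> result = new List<TOut>();
      foreach (ENT entity in this)
      {
         result.Add(projection(entity));
      }
      return result;
   }
}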


It's time to get into code-till-I-bleed mode…

 

Overload or different naming?

I need two methods which will bring me some data.


1st scenario: bring the data from the file system.


   Method name: GetData(string directory, string[] files)


2nd scenario: bring the data from a DLL (assembly) file. The data is an embedded resource inside the assembly.


   Method name: GetData(string assemblyName, string prefixPath, string[] files)



They both sit in some static Helper class.


Now, I thought about changing the method names to GetDataFromFS and GetDataFromAssembly. The fact that they both serve the same purpose makes me feel this should be a simple overload, yet I also feel it's somehow wrong because their domains are quite different. If tomorrow I add GetData(string webServiceUrl, string[] files), should it be an overload as well? I must say I'm not so sure…


What do you think? Overload, or naming by domain[1]?




[1] – No, I don't want some factory with concrete classes or a strategy pattern implementation; this should be a simple utils class.

 

A different approach for saving programmer changes while using Code Generator

Before I talk about a different solution, let's go over the default solution so we can compare the two.


Demonstrating the default (missionary) approach – inheritance:


Let’s view a very simple piece of code to make things a bit more tangible:


UsersDal.generated.cs file:


This file is auto-generated based on the Users table in our database:


// This file is auto-generated.
// Any changes to this file can be lost in the next code generation.
// If you need to change something – check UsersDal.cs

public abstract class UsersDalBase
{
   public virtual void Add(User u)
   {
      Console.WriteLine("Doing \"base\" behavior for UsersDalBase.Add(User u)");
   }
}



You can imagine that in reality, UsersDalBase.Add will actually open a connection to the database and insert the received user object. As you can tell, this file should be changed only by the code generator (if you change the table structure or relations, for example).



Let's look at our stub class that inherits from the "Base" class.

UsersDal.cs file:


// This file is safe to edit!

public class UsersDal : UsersDalBase
{
}


This file is also generated, but only as an empty stub class which can be used if we (the programmers) need to change the default behavior of UsersDalBase. Any upper layer (Business Layer, WS, whatever) in our application will use only UsersDal, so any change to UsersDal or UsersDalBase will be reflected automatically.


Now, let's say the generated code isn't good enough: I want to check some conditions on the user object before actually adding it to our database. All I have to do is override the Add method in our UsersDal class.


public class UsersDal : UsersDalBase
{
   public override void Add(User u)
   {
      if (Validator.IsValid(u))
      {
         base.Add(u);
      }
      else
      {
         throw new ArgumentException("The given object is invalid.");
      }
   }
}


Now, if I want to re-generate my code due to some changes in my ERD, all I have to do is replace the UsersDal.generated.cs file, and my custom changes remain untouched.


The only problem with inheritance is its limitations:


(1) You can inherit from one class only, so by using inheritance here I can't also inherit from GenericDal (unless I add another layer of inheritance – not ideal).

(2) I need to give a "high" access modifier (public) to UsersDalBase. We use UsersDalBase only to allow custom changes; in a perfect world, UsersDalBase would be internal to the Data Access Layer project.



Let's look at a different solution – event-based separation:


We want to keep our original ability to make changes to our code without losing them on the next code generation. First, let's look at a very raw UserEventArgs class I'll use later on:


public class UserEventArgs : EventArgs
{
   private User m_user;
   private bool m_cancel;

   public UserEventArgs(User u)
   {
      m_user = u;
   }

   public User User
   {
      get { return m_user; }
      set { m_user = value; }
   }   

   public bool Cancel
   {
      get { return m_cancel; }
      set { m_cancel = value; }
   }
}


Notice the "Cancel" property – we'll use it later on.


UsersDal.generated.cs (version 2)


public partial class UsersDal
{
   protected event EventHandler<UserEventArgs> Adding;
   protected event EventHandler<UserEventArgs> Added;

   public void Add(User u)
   {
      UserEventArgs ea = new UserEventArgs(u);

      RaiseAdding(ea);

      if (!ea.Cancel)
      {
         // default logic
         AddUser(u);

         RaiseAdded(ea);
      }
   }

   protected virtual void AddUser(User u)
   {
      // generated code
      Console.WriteLine("Doing \"base\" behavior for UsersDal.Add");
   }

   protected void RaiseAdding(UserEventArgs e)
   {
      EventHandler<UserEventArgs> handler = Adding;
      if (handler != null)
         handler(this, e);
   }

   protected void RaiseAdded(UserEventArgs e)
   {
      EventHandler<UserEventArgs> handler = Added;
      if (handler != null)
         handler(this, e);
   }
}


(1) As you can see, I'm raising an event just before adding the user and another event just after adding the user.
(2) We are using a partial class (UsersDal).


UsersDal.cs (version 2):


public partial class UsersDal
{
   public UsersDal()
   {
      // Register to (self) events:
      this.Adding += new EventHandler<UserEventArgs>(UsersDal_OnAdding);
      this.Added += new EventHandler<UserEventArgs>(UsersDal_Added);
   }

   protected void UsersDal_OnAdding(object sender, UserEventArgs e)
   {
      if (!Validator.IsValid(e.User))
      {
         e.Cancel = true;
         // I can also throw a new exception, that will work as well.
      }

      // If e.Cancel remains false (default) the basic (generated)behavior will be executed.
   }

   void UsersDal_Added(object sender, UserEventArgs e)
   {
      // Console.WriteLine("Doing custom UsersDal.Add - on added");
   }
}


Calling e.Cancel = true will actually stop the flow in the generated file. You can obviously add more code, according to your needs.


This separation will enable you to do just about everything you could via inheritance, and then some (see the sketch after the list):


(1) The ability to add another partial class in order to separate the class by "topics".
(2) The ability to add more listeners (event handlers).
(3) The ability to inherit from other classes, if you need to.
(4) No need for an extra class in our namespace, which means less confusion for the programmer – "Should I use UsersDalBase? Oh… wait… I remember something about it… Ya! I need to use UsersDal…".
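To make points (1) and (2) concrete, here is a hypothetical third file that separates logging into its own "topic" and hooks an extra listener (assuming the constructor in UsersDal.cs also calls RegisterLoggingHandlers):


// UsersDal.Logging.cs - a made-up file, safe to edit:
public partial class UsersDal
{
   internal void RegisterLoggingHandlers()
   {
      // A second listener on the same event; it runs alongside the validation handler.
      this.Added += delegate(object sender, UserEventArgs e)
      {
         Console.WriteLine("Audit: user added - " + e.User);
      };
   }
}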


Hope it helps.

 

Delegates & anonymous methods – can they beat traditional OOP?

I want to elaborate on the main ideas from my presentation about Code Templating – abstracting your code via delegates & anonymous methods. During the presentation I talked about solutions to repetitive, every-day tasks. One of the examples I presented was something we're all familiar with* – querying our database. Let's take a look, shall we?


public static List<User> GetUsersList()
{
   using (SqlConnection conn = new SqlConnection(ConnectionString))
   {
      SqlCommand cmd = new SqlCommand("Select Id, Name From Users", conn);

      conn.Open();

      List<User> users = new List<User>();

      SqlDataReader r = cmd.ExecuteReader();
      while (r.Read())
      {
         User u = new User();
         u.Id = Convert.ToInt32(r["Id"]);
         u.Name = Convert.ToString(r["Name"]);

         users.Add(u);
      }

      return users;
   }
}


Now let's imagine that we also have Get*List methods for Products, Orders, Items and Sales. That's 5 Get*List methods already. Let's look at the code that is common among those potential methods (originally highlighted in red; marked below with "// common" comments):


public static List<User> GetUsersList()
{
   using (SqlConnection conn = new SqlConnection(ConnectionString))        // common
   {
      SqlCommand cmd = new SqlCommand("Select Id, Name From Users", conn);

      conn.Open();                                                         // common

      List<User> users = new List<User>();

      SqlDataReader r = cmd.ExecuteReader();                               // common
      while (r.Read())
      {
         User u = new User();
         u.Id = Convert.ToInt32(r["Id"]);
         u.Name = Convert.ToString(r["Name"]);

         users.Add(u);
      }

      return users;
   }
}



So the first solution for refactoring those repetitive lines out will be, obviously, via OOP.
Here is a quick solution I came up with just to make the point clear:


interface IDataReaderParser<TRet>
{
   TRet Parse(IDataReader liveReader);
}


static class DbServices
{
   public static TRet ExecuteReader<TRet>(
      IDbCommand cmd, 
      IDataReaderParser<TRet> parser)
   {
      using (SqlConnection conn = new SqlConnection(ConnectionString))
      {
         cmd.Connection = conn;

         conn.Open();

         IDataReader reader = cmd.ExecuteReader();

         return parser.Parse(reader);
      }
   }
}


So far we have some "parser" interface which will get a live reader and return some generic type based on the required parsing. I don't know if you've noticed, but DbServices.ExecuteReader holds all the lines I marked as "// common" above.


Let’s look at our GetUsers parser:


class GetUserListParser : IDataReaderParser<List<User>>
{
   public List<User> Parse(IDataReader liveReader)
   {
      List<User> users = new List<User>();

      while (liveReader.Read())
      {
         User u = new User();
         u.Id = Convert.ToInt32(liveReader["Id"]);
         u.Name = Convert.ToString(liveReader["Name"]);

         users.Add(u);
      }

      return users;
   }
}


Our parser gets the live IDataReader and returns a list of users. Simple.
GetUserListParser.Parse contains the rest of the code (everything except the common lines) from our original GetUsersList method.


Finally, our GetUsersList method looks like this:


public static List<User> GetUsersList()
{
   SqlCommand cmd = new SqlCommand("Select Id, Name From Users");

   GetUserListParser parser = new GetUserListParser();

   return DbServices.ExecuteReader<List<User>>(cmd, parser);
}


That's nice, but is it good enough?
For every Get*List method we'll have to build a separate class which implements IDataReaderParser<TRet>. Let's pause here.


* breath…. good …. *



Let's rewind. When I started writing the original GetUsersList method, my main goal was these lines:


// (1) Create command
SqlCommand cmd = new SqlCommand("Select Id, Name From Users");

// (2) In some magical way, execute the command and return a live reader so I can parse it into objects.
List<User> users = new List<User>();

while (liveReader.Read())
{
   User u = new User();
   u.Id = Convert.ToInt32(liveReader["Id"]);
   u.Name = Convert.ToString(liveReader["Name"]);

   users.Add(u);
}

return users;



Opening a connection to the database, executing the command as a reader and disposing of the connection were irrelevant at the time. I knew I would have to write those lines down, but they were just a means to get to my real goal (my real code). So I refactored my code via some sort of OOP solution.

Now I have pieces of code all over the place and, even worse, in a month or two I will have to "Go To Definition" just to remember what the heck the GetUserListParser class is.



My main code was refactored out of my method.



Life shouldn't be so hard. Like a very wise (and old, they are always old) programmer once said: "If you code it, it will come."
Let's look at a different solution – let's abstract our code via delegates & anonymous methods.

So our DbServices.ExecuteReader<TRet> will look like:


public delegate TRet ReaderHandler<TRet>(IDataReader liveReader);


static class DbServices
{
   public static TRet ExecuteReader<TRet>(
      IDbCommand cmd, 
      ReaderHandler<TRet> handler)
   {
      using (SqlConnection conn = new SqlConnection(ConnectionString))
      {
         cmd.Connection = conn;

         conn.Open();

         IDataReader reader = cmd.ExecuteReader();

         return handler(reader);
      }
   }
}


DbServices.ExecuteReader receives a (Sql)command to execute and a "handler" – a method with the same signature as the ReaderHandler<TRet> delegate. The ExecuteReader method will execute the command as a reader and pass it (the reader) to the handler. So who is this "handler" I keep talking about?! The anonymous method!!

Let’s look at the new version of GetUsersList:


public static List<User> GetUsersList()
{
   SqlCommand cmd = new SqlCommand("Select Id, Name From Users");

   return DbServices.ExecuteReader<List<User>>(cmd,
      delegate(IDataReader liveReader)   // <-- our handler, inline
      {
         List<User> users = new List<User>();

         while (liveReader.Read())
         {
            User u = new User();
            u.Id = Convert.ToInt32(liveReader["Id"]);
            u.Name = Convert.ToString(liveReader["Name"]);

            users.Add(u);
         }

         return users;
   });
}



The entire logic is right in front of me now – no need to go sniffing around for it!


Just like in the first OOP solution, I don't need to handle the connection or call ADO.NET's ExecuteReader – nothing.


To sum up:


Delegates & anonymous methods       1 : 0       OOP




Think about solutions you've implemented via OOP and start thinking about delegates as an alternative abstraction technique. For many straightforward architectural problems, delegates will be a far better solution than traditional OOP.
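To make that concrete, here is one more hypothetical application of the same pattern – adding a method to DbServices that wraps transaction plumbing exactly the way we wrapped connection plumbing above (ConnectionString is assumed to exist, as in the previous samples):


public delegate void TransactionalWork(SqlTransaction transaction);

static class DbServices
{
   public static void ExecuteInTransaction(TransactionalWork work)
   {
      using (SqlConnection conn = new SqlConnection(ConnectionString))
      {
         conn.Open();

         using (SqlTransaction tx = conn.BeginTransaction())
         {
            try
            {
               // The caller's code runs here, free of any Begin/Commit/Rollback noise.
               work(tx);
               tx.Commit();
            }
            catch
            {
               tx.Rollback();
               throw;
            }
         }
      }
   }
}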


My next post on this matter will cover the expected question – when is the delegates solution too complex?


* Well, most of us anyway. For the rest of you – be cool and play along.

 

Return operation "status" from a method

I sat with Mario today to figure out the best way to receive a "status" message from one of our business layer methods.


The task was clear:

We want to activate a rules-validation method which will query the data and return the missing operation(s) the user must perform before he can save his data. It's possible that the user has only one operation to perform, but it's also possible that he has N operations, so we needed some sort of status combination.
In addition, if the validation passes, we want some sort of "Everything is OK" status.

Our options:



  1. Use a ref/out parameter and fill a string variable: concat the error messages into that string and send it back to the caller. The method returns true if everything is good and false otherwise.
  2. Return the error message(s) as a string. Again, we'd have to concat the error messages.
  3. Return a bitwise enum which allows us to "add" and "remove" statuses.

Elimination process:


The main reason we quickly dropped options (1) and (2) is the orientation of the error message(s). Sometimes, in an act of pure laziness, we print the error we got from the BL method directly to the user's screen. Some time goes by, the user gives you a call and says: "Hey, I'm trying to save a user but it throws me an error like [Rule 1] is invalid… What the heck is [Rule 1]?? I don't remember seeing it in the user manual!", and you reply: "Wait a second, let me see… Hmm… if… else… not… Oh yeah!! The bank account number you filled in doesn't match the bank address!". See the problem here?


Think about it – your business layer should return a "programmer oriented" message rather than a "client oriented" message. If you return some sort of client oriented message from your BL, how would that work in a multilingual application? Would your BL talk to some resource manager just to return the "correct" message to the user? No. That's why you have a Graphic User Interface layer. So we'll send a programmer oriented message from the BL back to our caller (for logging purposes) – but wait, that requires our GUI to parse the string it got from the BL method and format it for the client. Something like:


if (errorMessage.IndexOf("Rule 1 is invalid") != -1 && errorMessage.IndexOf("Rule 2 is invalid") != -1)
{
    // show "1). You must fill the user details. 2). You must fill the user paycheck"
}
else if (errorMessage.IndexOf("Rule 1 is invalid") != -1)
{
    // show "You must fill the user details"
}
else if (errorMessage.IndexOf("Rule 2 is invalid") != -1)
{
    // show "You must fill the user paycheck"
}
else
{
    // show "Save complete"
}


Yes, I can do some refactoring (constants, ToLower(), Split) but no matter what I do – this code smells (terribly)!


Solution:


This leaves me with option 3. It would be great to be able to ask something *like*:


if (status == ActionStatuses.InvalidRule1 && status == ActionStatuses.InvalidRule2)
{
   // show "You must fill the user details. You must fill the user paycheck"
}
else if (status == ActionStatuses.InvalidRule1)
{
   // show "You must fill the user details"
}
else if (status == ActionStatuses.InvalidRule2)
{
   // show "You must fill the user paycheck"
}
else
{
   // show "Save complete"
}


It’s time to write some code, shall we ?


1). Let's define a bitwise enum, so we'll be able to create status combinations:


[Flags]
public enum SaveUserStatuses
{
   OK = 1,
   BankAccountMissing = 2,
   WrongEmailFormat = 4,
   BankAccountMismatchBankAddress = 8,
   SaveFailed = 16
}


The only rule: if you need to add another value, just double the previous one.
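Doubling keeps every member on its own bit, which is what makes combinations unambiguous. A quick sanity check:


// 2 | 4 == 6 - a value no single member occupies, so both flags are recoverable:
SaveUserStatuses combined =
   SaveUserStatuses.BankAccountMissing | SaveUserStatuses.WrongEmailFormat;

Console.WriteLine(combined);   // prints "BankAccountMissing, WrongEmailFormat"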


2). In our business layer method, we’ll do something like:


public static SaveUserStatuses Save(User user)
{
   SaveUserStatuses status = SaveUserStatuses.OK;


   if (!CheckIfBankAccountExists(user.BankAccount))
      status = (status | SaveUserStatuses.BankAccountMissing) & ~SaveUserStatuses.OK;

   if (!CheckEmailFormat(user.Email))
      status = (status | SaveUserStatuses.WrongEmailFormat) & ~SaveUserStatuses.OK;


   // you get the idea…

   if (status == SaveUserStatuses.OK)
   {
      if (!UsersDal.Save(user))
      {
         status = SaveUserStatuses.SaveFailed;
      }
   }

   return status;
}



The trick here: for every failed validation you add ("|" – bitwise OR) the required status and remove the "OK" status by using the "~" complement operator.
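A worked example, bit by bit, for a failed bank-account check:


// status                               = OK             = 00001
// status | BankAccountMissing          = 00001 | 00010  = 00011
// (status | BankAccountMissing) & ~OK  = 00011 & 11110  = 00010
// => the result is exactly BankAccountMissing, with OK cleared.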


3). Finally, in our GUI:


SaveUserStatuses status = UserBL.Save(someUser);

if ( (status & SaveUserStatuses.BankAccountMissing) == SaveUserStatuses.BankAccountMissing &&
     (status & SaveUserStatuses.WrongEmailFormat) == SaveUserStatuses.WrongEmailFormat )
{
    // show the required message – client/user oriented!
}
// like the above – parse the status and show the required client oriented message(s).
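The repeated (status & flag) == flag test gets noisy quickly, so a small helper (my own addition, not part of the original code) can keep the GUI readable:


public static class StatusHelper
{
   // True if every bit of 'flag' is set in 'status'.
   public static bool Has(SaveUserStatuses status, SaveUserStatuses flag)
   {
      return (status & flag) == flag;
   }
}

// GUI usage:
if (StatusHelper.Has(status, SaveUserStatuses.BankAccountMissing) &&
    StatusHelper.Has(status, SaveUserStatuses.WrongEmailFormat))
{
    // show the combined, client oriented message
}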



This code smells a lot better:


a). We have no problem extending this code: (1) add another member to the enum, (2) add another check in our BL, (3) handle the new status at the GUI level.


b). We don't have to parse strings in order to build our errors – parsing strings is error prone. Using enum values sets a contract between the BL and the GUI; if someone breaks it, we get a compile-time error (fail fast).


Back to code…

 

Refactoring our architecture

After a long meeting with Ken and Roee about our (SQLink R&D department) architecture, I decided to put it out on the table – maybe you can give us some better insights into the questions we brought up during our session. I'll start from the end of the session – this is the architecture design we came up with:


[Diagram: the new layered architecture – layers_small.jpg]


Before I start:

We generate our code (as much as we can, anyway), which means we work in dual mode – "Data/ERD driven" development for trivial data-access work and pure OOP development for our infrastructures and services. The generated code uses our infrastructures/services according to its needs, of course.
That was important to spell out, as some of our architecture decisions were made according to this development model.


So what's the difference between this architecture and the one we had before?
The old architecture:



  • The GUI (Graphic User Interface) talked only to the BL (Business Layer), even for data operations that were not business (or logic, for that matter) operations at all.
  • The Business Layer was responsible for wrapping any call to the Data Access Layer and publishing any exception (if one occurred).
  • The Business Layer was auto-generated, but it was "table based" instead of "context based", i.e. there was a directory for each generated table and, in each directory, the relevant classes for wrapping the DAL (Data Access Layer) calls. This caused our business layer to become a "Data Gateway" instead of the clean business logic layer it should have been.
  • The business layer was pretty massive – think about generating 80 tables. That means 80 directories in the BL project for 80 "Data Gateway" operations. Any real logic was hard to detect.

What we’ve changed:


  • It was crucial, for us, that the GUI be "aware of" one layer only. The application shouldn't "know" whether the given data is logic based or not. Its only purpose is to present data, that's it.
  • With this in mind, we wanted a clear separation between our "Data Gateway" and our "Business Logic" services. The Application Gateway layer was born.
  • The Application Gateway layer is now responsible for wrapping any call to the DAL, the BL or any other required service. In addition, it is responsible for publishing any kind of exception that occurs in any of the underlying services (including the DAL & BL). The directory structure is now:

    • ApplicationGateway.Data.[Table] -> under each table, the required classes for clean data operations (CRUD).
    • ApplicationGateway.Logic.[Context] -> the required classes for each logic context.

Open note:

With the new architecture in mind, we had some open questions which were answered, but maybe could have been handled better.


The Application Gateway is now a manager/wrapper/sometimes-facade class for our services and is responsible for communicating between the layers – but is that entirely true?


A scenario:

The Business Layer needs some data in order to do some logic work. It makes sense that the Application Gateway queries the DAL and then sends the results to the BL method, so it can do the logic work with ease. But what happens if the BL method needs different data, depending on its inner logic?

Solutions:

1. We know the entire flow of the BL method, so we can send all the data up front, even if some of it will not be used. BAD.
2. We can pass delegate(s) to the BL method, so that the Application Gateway gets called back whenever more data is required; the AG (Application Gateway) is then responsible for communicating with the other services and returning the required data to the BL method (see the sketch after this list). This is good decoupling, but it makes maintenance and development harder, as the BL method could potentially require different delegates (signatures) – it calls for experienced programmers.
3. We can add a reference from our BL to the rest of the required services. In that case, if the BL method needs data from the DB, we add a reference to the DAL, create the required object and fetch the data. This solution is easier for maintenance and development (it requires less knowledge from the programmer), but it creates some coupling between the services.
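
To clarify what solution #2 would have looked like, here is a hypothetical sketch (UserLogic, DataRequestHandler, the IsManager property and the query name are all invented for illustration):


// The callback the Application Gateway hands to the BL method:
public delegate DataTable DataRequestHandler(string queryName);

public static class UserLogic
{
   public static bool Approve(User user, DataRequestHandler requestData)
   {
      if (user.IsManager)   // hypothetical property
      {
         // The extra data is fetched only when the inner logic actually needs it;
         // the BL never references the DAL directly.
         DataTable permissions = requestData("ManagerPermissions");

         // ... inspect permissions ...
      }

      return true;
   }
}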

Our decision:

We decided to go with solution #3 – the better trade-off.

 

What do you think?