Using a transaction doesn’t mean you save roundtrips to the database.

I saw a post in one of the forums I'm active in about "how to insert multiple rows into a table in the database".

The main goal in this scenario, and that's a rule of thumb for almost any database operation, is to send the requests in bulk so they'll be sent for execution in ONE roundtrip to the database.

I immediately thought of concatenating the insert statements with ";" (or using the "Bulk Insert" command) for SQL Server, or wrapping them in "BEGIN" + "END" for Oracle.

[SQL Server]

SqlConnection conn = new SqlConnection("your-connection-string");
conn.Open();
string query = "insert into t1 values ('value'); insert into t1 values ('value2');";
SqlCommand cmd = new SqlCommand(query, conn);
cmd.ExecuteNonQuery();
conn.Close();

[Oracle]

OracleConnection conn = new OracleConnection("your-connection-string");
conn.Open();
string query = "BEGIN insert into t1 values ('value'); insert into t1 values ('value2'); END;";
OracleCommand cmd = new OracleCommand(query, conn);
cmd.ExecuteNonQuery();
conn.Close();


Someone from the forum suggested that all we need to do is create the required command objects (with the queries inside) and wrap them in a transaction. His code looked like this:


using (SqlConnection oCon = new SqlConnection("ConnectionString"))
{
   oCon.Open();
   SqlTransaction transaction = oCon.BeginTransaction();

   try
   {
         string[] commands = new string[] {"insert…", "insert…"};
         foreach (string sqlCommand in commands)
         {
            // each command is enlisted in the same transaction
            SqlCommand oCom = new SqlCommand(sqlCommand, oCon, transaction);
            oCom.ExecuteNonQuery();
         }

         transaction.Commit();
   }
   catch
   {
      transaction.Rollback();
   }
}


He claimed that the transaction object simply collects the commands, and that only on transaction.Commit() is the whole batch sent to the DB, so only one roundtrip is performed.

Well, it didn't sound right to me. I knew that the transaction wraps the commands, but I thought that every call to ExecuteNonQuery() causes a roundtrip to the DB. I wasn't absolutely sure, though, so I decided to check it out and built a simple tester:


// I've created a Cities table (SQL Server) with the fields –
// 1. ID – int – PK & identity
// 2. Name – varchar(50) – Unique!
using (SqlConnection oCon = new SqlConnection("connString"))
{
   oCon.Open();
   SqlTransaction transaction = oCon.BeginTransaction();

   SqlCommand c1 = new SqlCommand("Insert into cities(name) values('oren_test1')", oCon, transaction);
   SqlCommand c2 = new SqlCommand("Insert into cities(name) values('oren_test1')", oCon, transaction); // this will throw an exception for a duplicate row (Cities.Name is unique)
   SqlCommand[] commands = new SqlCommand[] {c1, c2};

   try
   {
      foreach (SqlCommand command in commands)
      {
         command.ExecuteNonQuery();
      }

      transaction.Commit();
   }
   catch (Exception err)
   {
      Console.WriteLine("error: {0}, rollback.", err.Message);

      transaction.Rollback();
   }
} // the using block will close the connection even in case of an exception


As you can see for yourself (try it), the second ExecuteNonQuery (for the second insert) threw an exception about the duplicate row (unique constraint violation), so now I'm sure that the roundtrips are NOT "saved" by wrapping the commands in a transaction. It makes a lot of sense: the DB opens a transaction when we call the BeginTransaction() method and keeps it open until we call the Commit() or Rollback() method. In between, each call to ExecuteNonQuery() is sent to the database immediately; the new rows are simply held as uncommitted (logged and locked) until the transaction ends (again, via Commit() or Rollback()), and only then do they become permanent in the "real" table (i.e. the Cities table).


Conclusion


One of your main concerns while developing a web application is to prevent redundant roundtrips from your application server to the database server.


While the transaction object is vital for data integrity when inserting/updating/deleting many (more than one) rows, and it can boost performance (trying to insert 1000 rows into a table, each insert with its own private transaction, will be much slower than running the same inserts under a single transaction for all 1000 rows), it doesn't wrap your calls to the DB and save those redundant roundtrips.
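
To make the performance point concrete, here is a minimal sketch of the 1000-row case under a single transaction (t1 and c1 are placeholder names; the slow variant would simply move BeginTransaction()/Commit() inside the loop, forcing a log flush per row):

// needs: using System.Data; using System.Data.SqlClient;
using (SqlConnection conn = new SqlConnection("connString"))
{
   conn.Open();
   SqlTransaction tran = conn.BeginTransaction();
   SqlCommand cmd = new SqlCommand("insert into t1 (c1) values (@v)", conn, tran);
   cmd.Parameters.Add("@v", SqlDbType.VarChar, 50);

   try
   {
      for (int i = 0; i < 1000; i++)
      {
         cmd.Parameters["@v"].Value = "row" + i;
         cmd.ExecuteNonQuery(); // still one roundtrip per row...
      }
      tran.Commit(); // ...but only one commit, so only one log flush
   }
   catch
   {
      tran.Rollback();
      throw;
   }
}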


And if I wasn't clear so far: you should use the first approach shown in this post.
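
And if you also want the data-integrity guarantees, you can combine the two: put the transaction inside the batch itself, so everything still goes out in one roundtrip. A minimal sketch, reusing my Cities table (SET XACT_ABORT ON tells SQL Server to roll back the whole transaction if any statement fails):

// needs: using System.Data.SqlClient;
using (SqlConnection conn = new SqlConnection("connString"))
{
   conn.Open();
   string batch =
      "SET XACT_ABORT ON; " +                          // any runtime error rolls everything back
      "BEGIN TRANSACTION; " +
      "insert into cities(name) values('oren_test1'); " +
      "insert into cities(name) values('oren_test2'); " +
      "COMMIT TRANSACTION;";
   new SqlCommand(batch, conn).ExecuteNonQuery();      // one roundtrip, all-or-nothing
}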


P.S.
General tip (need I say it?): always check your theories before you run off and use them in your applications.

Implementing a Dynamic WHERE Clause in SQL Server 2000.

I'm working on my Code Generator so it will be able to generate stored procedures for me. Until now I generated the SQL queries in my C# code, but now, due to a development request (read: demand) from my client, I must work with stored procedures.


The first thing I thought about is how to keep my search pages easy to upgrade and maintain if I'm going to use SPs (stored procedures). Right now, when I need to add another field to my dynamic WHERE clause, it's quite simple: I build the query in my data access object according to the parameters and everything is OK. I feared that by using an SP for my search pages, I'd need to call something like EXEC, which looks like a joke: if you must call EXEC in the SP, you'd better put the SQL in your C# code and get it over with.


After doing some searching, I found this article, which presents a way to build and execute dynamic WHERE clauses with the COALESCE function. After searching a little more, I found a similar way to do the same thing with better performance:


"I check the value against NULL and achieve good performance and flexibility. No problem with LIKE values either:

WHERE
(@title IS NULL OR title LIKE @title)
AND
(@min_release_date IS NULL OR release_date >= @min_release_date)
AND
(@document_id IS NULL OR document_id = @document_id)

Great performance and flexibility! All indexes are used as expected." (Zelk)


Looks great!
If the parameter passed in is null, don't filter the results by it; otherwise, use it…
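
On the C# side, calling such a stored procedure just means passing DBNull.Value for every filter the user left empty. A minimal sketch, assuming a hypothetical SearchDocuments procedure that takes the three parameters from the quote:

// needs: using System; using System.Data; using System.Data.SqlClient;
string titleFilter = "report"; // hypothetical user input; null/empty means "no filter"

using (SqlConnection conn = new SqlConnection("connString"))
{
   conn.Open();
   SqlCommand cmd = new SqlCommand("SearchDocuments", conn);
   cmd.CommandType = CommandType.StoredProcedure;

   cmd.Parameters.Add("@title", SqlDbType.VarChar, 100).Value =
      (titleFilter == null || titleFilter.Length == 0)
         ? (object)DBNull.Value
         : (object)("%" + titleFilter + "%"); // LIKE pattern, as in the quoted WHERE clause
   cmd.Parameters.Add("@min_release_date", SqlDbType.DateTime).Value = DBNull.Value; // not filtering by date
   cmd.Parameters.Add("@document_id", SqlDbType.Int).Value = DBNull.Value;           // not filtering by id

   using (SqlDataReader reader = cmd.ExecuteReader())
   {
      while (reader.Read())
         Console.WriteLine(reader["title"]);
   }
}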


OK, I must dig into my generator and start implementing this…