1. Pinning Data Tips While Debugging
Visual Studio 2010 also includes some nice new “DataTip pinning” features that enable you to better see and track variable and expression values when in the debugger.
Simply hover over a variable or expression within the debugger to expose its DataTip (which is a tooltip that displays its value) – and then click the new “pin” button on it to make the DataTip always visible:
You can also enter comments on a pinned DataTip, so that you can view them later.
You can “pin” any number of DataTips you want onto the screen. In addition to pinning top-level variables, you can also drill into the sub-properties on variables and pin them as well.
Another great thing is that pinned DataTips are usable across both debug sessions and Visual Studio sessions:
Pinned DataTips can be used across multiple debugger sessions. This means that if you stop the debugger, make a code change, and then recompile and start a new debug session - any pinned DataTips will still be there, along with any comments you associate with them.
Pinned DataTips can also be used across multiple Visual Studio sessions. This means that if you close your project, shutdown Visual Studio, and then later open the project up again – any pinned DataTips will still be there, along with any comments you associate with them.
See the Value from Last Debug Session:
DataTips are hidden by default when you are in the code editor and the debugger isn't running. On the left-hand margin of the code editor, though, you'll find a push-pin for each pinned DataTip that you've previously set up.
Hovering your mouse over a pinned DataTip will cause it to display on the screen. Below you can see what happens when I hover over the first pin in the editor - it displays the last debug session's values for the "Request" object DataTip, along with the comment we associated with it.
Importing/Exporting Pinned DataTips:
Pinned DataTips are by default saved across Visual Studio sessions (you don’t need to do anything to enable this).
VS 2010 also now supports importing/exporting pinned DataTips to XML files – which you can then pass off to other developers, attach to a bug report, or simply re-load later.
VS 2010 also has a ton of other debugger enhancements; read about them on Scott Gu's blog if you are interested.
2. Breakpoint Labels:
VS 2010 also lets you assign labels to breakpoints, which makes it easier to search, group, and filter them in the Breakpoints window.
Sunday, November 14, 2010
Visual Studio 2010 Ultimate : Generating Sequence Diagrams
Another cool feature in Visual Studio 2010 is the ability to generate sequence diagrams. You can right-click inside a method and select "Generate Sequence Diagram…" from the pop-up menu to generate the diagram on the fly.
To keep the example illustrative and simple I will use simple code. Let's suppose we have an ASP.NET MVC application with the following controller:
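The controller listing itself did not survive in this copy of the post, so here is a minimal stand-in sketch (the class name, the list field, and the Validate helper are my assumptions); it simply gives the AddToProducer() method mentioned below a small call chain for the diagram to show:
using System;
using System.Collections.Generic;
using System.Web.Mvc;

namespace MvcSequenceDemo.Controllers
{
    // Hypothetical controller used only to illustrate the walkthrough.
    public class ProducerController : Controller
    {
        private readonly List<string> _producers = new List<string>();

        public ActionResult AddToProducer(string name)
        {
            Validate(name);
            _producers.Add(name);
            return RedirectToAction("Index");
        }

        private void Validate(string name)
        {
            if (string.IsNullOrEmpty(name))
                throw new ArgumentException("A producer name is required.", "name");
        }

        public ActionResult Index()
        {
            return View(_producers);
        }
    }
}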
Let's say we want to generate a sequence diagram for the AddToProducer() method. All we have to do is right-click on the method, select "Generate Sequence Diagram…", and adjust a few options.
You can choose various options like the Call Depth and Call scope as shown below:
On hitting the OK button, Visual Studio 2010 analyzes the code and builds a sequence diagram based on its analysis results, as shown below:
Monday, May 10, 2010
Generic Delegate
Fun with Func : Generic Delegate
Func & Action: these generic delegates are a cool feature introduced with .NET 3.5. We will look at Func & Action in this short article, but let's start with .NET 1.1.
For instance, take a look at a typical delegate tutorial from the C# 1.1 or 2.0 days. In that kind of tutorial, we would write a method that takes a delegate and folds it across the rest of the arguments to the function. It looked like this:
public delegate int FoldIntDelegate(int a, int b);
public int Fold(FoldIntDelegate fid, params int[] list)
{
    int result = 1;
    foreach (int i in list)
        result = fid(result, i);
    return result;
}
That first line is the line that I'm complaining about - the line I'd like to get rid of. Well, guess what? We can! There is a whole bunch of basic delegate declarations in the System namespace that cover almost all of the common delegate declaration cases by using generics. For instance, that Fold method could have been written like this:
public int Fold(Func<int, int, int> fid, params int[] list)
{
    int result = 1;
    foreach (int i in list)
        result = fid(result, i);
    return result;
}
The Funcs
The Funcs are the delegate declarations for delegates that return a value, and they were first introduced in .NET 3.5. We have already seen one, the two-argument Func. But there are Func declarations for everything from zero arguments to four arguments:
Func<TResult>
Func<T1, TResult>
Func<T1, T2, TResult>
Func<T1, T2, T3, TResult>
Func<T1, T2, T3, T4, TResult>
So if you need more than 4 arguments, you have to make your own declaration. But most of the time, you don't, and so this covers many of the common signatures.
What if I don't want to return a value? Well, don't worry, that is covered too:
The Actions
The Actions are the delegate declarations to use when your delegate doesn't return a value. Again, there are Actions for everything from zero to four arguments. And just as a note, the zero-argument Action is not actually a generic delegate, because there is no need for it to be (there are no type parameters to deal with). The single-argument Action was introduced in .NET 2.0, and the rest of them were added in .NET 3.5. A quick usage sketch follows the list below.
Action
Action<T1>
Action<T1, T2>
Action<T1, T2, T3>
Action<T1, T2, T3, T4>
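As a quick illustration (this snippet is my own sketch, not from the original post; the delegate names and messages are made up), Action<T> and the non-generic Action can be used like this:
using System;

class ActionDemo
{
    static void Main()
    {
        // Action<T> wraps a method that takes one argument and returns nothing.
        Action<string> greet = name => Console.WriteLine("Hello, " + name);

        // The zero-argument Action is the plain, non-generic delegate.
        Action done = () => Console.WriteLine("Done.");

        greet("World");
        done();
    }
}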
So, I think we now have some idea about the Func & Action generic delegates. Now see how I am using the Fold function above.
using System;

class Program
{
    static void Main(string[] args)
    {
        var aa = new int[] { 1, 2, 3, 4 };
        // Folds multiplication across the array: 1 * 1 * 2 * 3 * 4 = 24.
        var a12 = Fold(ABC, aa);
    }
    public static int Fold(Func<int, int, int> fid, params int[] list)
    {
        int result = 1;
        foreach (int i in list)
            result = fid(result, i);
        return result;
    }
    static int ABC(int i1, int i2)
    {
        return i1 * i2;
    }
}
Enjoy:)...
Monday, March 29, 2010
ASP.NET MVC Partial Views
In ASP.NET WebForms, UserControls were used to break the application into smaller pieces, each piece representing an important part of the application. In an ASP.NET MVC application you can achieve the same effect using the RenderPartial and RenderAction methods. In this article we are going to demonstrate how to use RenderPartial to construct and use partial views.
Partial Render:
RenderPartial serves the purpose of rendering a UserControl in an ASP.NET MVC application. The views rendered this way are called partial views. In order to use a partial view in your application, add an MVC View UserControl to your application. The screenshot below shows which template to select when adding a user control to your MVC application.
You can add the MVC View UserControl to your current view folder or to the Shared folder. The current view folder is searched first for the specified user control, followed by the Shared folder.
We have added a simple "ViewUserControl.ascx" to our Views folder. Now, let's see how we can load the partial view on the page. Inside the view page you can load the partial view using the RenderPartial HTML helper method, as sketched below.
User Control:
Parent Page:
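The snippets behind the "User Control" and "Parent Page" labels are missing from this copy of the post; the markup below is a minimal sketch of what they might look like (file names and content are assumptions, not the original listings):
<%-- ViewUserControl.ascx - the partial view --%>
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
<div>
    <p>This content comes from the partial view.</p>
</div>

<%-- Parent view page - renders the partial by name --%>
<% Html.RenderPartial("ViewUserControl"); %>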
Now, if you run the page you will see that the partial view is loaded inside the view page as shown in the screenshot below:
Passing Data to Partial View:
In the previous example our partial view was independent, meaning it did not depend on any data populated by the view page. In real-world applications this might not be the case; we may need to pass additional data to our partial view. The good news is that the partial view has access to the same ViewData dictionary that is used by the view page.
Let's add a class named Category, which I will pass to the partial view:
namespace MvcPV.Models
{
public class Category
{
public string Name {get;set;}
}
}
1. Untyped ViewPages:
There are two ways to hand data to a partial view. The first is the ViewDataDictionary, an untyped collection consisting of a string and an object: the string represents the key, and the object holds the actual data. Like any untyped collection, the object can be of any type, but it must be cast to the correct type before it is used. Items are referenced via their key in the ViewData property of the ViewUserControl (in the case of a partial view). The second is a strongly typed model, where items are properties of the ViewUserControl.Model property.
Now modify the 'Index' action of the Home controller, which is responsible for rendering the partial view:
public ActionResult Index()
{
ViewData["Message"] = "Welcome to ASP.NET MVC!";
var categories = new List<Category>()
{
new Category() {Name = "Beverages"},
new Category() {Name = "Condiments"},
new Category() {Name = "Meat"}
};
ViewData["Categories"] = categories;
return View();
}
Now, the Categories.ascx partial view can easily access the ViewData["Categories"] as shown below:
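The Categories.ascx listing is also missing here; a sketch of how the partial view might read the untyped entry (the cast and markup are my assumptions, not the original listing):
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
<%@ Import Namespace="System.Collections.Generic" %>
<%@ Import Namespace="MvcPV.Models" %>
<ul>
    <%-- The untyped ViewData entry must be cast back to its real type before use. --%>
    <% foreach (var category in (IEnumerable<Category>)ViewData["Categories"]) { %>
        <li><%= Html.Encode(category.Name) %></li>
    <% } %>
</ul>
The view page would then render it with <% Html.RenderPartial("Categories"); %>.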
Now, if you run the page you will see that the partial view is loaded inside the view page as shown in the screenshot below:
2. Strongly Typed Views
So far we have shown that the ViewData dictionary is shared between the view page and the view user control (partial view). We can make this even better by strongly typing the partial view and passing the model as a second parameter to the RenderPartial method. The code below shows how to make the ViewUserControl a strongly typed view that can handle an IEnumerable<Category> collection.
Now modify the user control:
Also modify the parent page to pass strongly typed data to the partial view, as sketched below:
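The modified listings are not reproduced in this copy; here is a sketch of what the strongly typed control and the parent call might look like (again, assumptions rather than the post's exact code):
<%-- Categories.ascx, now strongly typed --%>
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<IEnumerable<MvcPV.Models.Category>>" %>
<ul>
    <% foreach (var category in Model) { %>
        <li><%= Html.Encode(category.Name) %></li>
    <% } %>
</ul>

<%-- Parent view page: pass the model as the second argument to RenderPartial --%>
<% Html.RenderPartial("Categories", (IEnumerable<MvcPV.Models.Category>)ViewData["Categories"]); %>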
Now Run the application and check how it looks:
Since the ViewData dictionary is shared between the ViewUserControl and the ViewPage, you can easily make changes to the object in the ViewUserControl and the changes will be reflected in the ViewPage.
Sunday, March 28, 2010
ASP.NET MVC - Displaying Multi-line String with line break
Today I am going to explain how to display a multi-line string with line breaks in ASP.NET MVC, an issue I came across recently while working on a blog site.
Problem:
I have seen lots of web developers have problems writing the text from a multi-line textbox to SQL Server and displaying it back in an HTML table. Let's say I am developing a blog site where users can post blogs and view them, and the blogs are displayed in an HTML table. The problem is that multi-line text loses all of its line breaks while rendering. To get rid of this I found a simple way to do it.
Solution:
My input text from a multi-line text box was like this:
a
b
c
d
If I inspect the above string in C# before it is rendered to the web browser, it looks like "a\r\nb\r\nc\r\nd", but when it is displayed in the web page it collapses to "a b c d" without line breaks, which I don't want. To solve this problem I did the following.
InputString.Replace("\r\n", "<br />")
One thing: don't run the result through Html.Encode() for this kind of output (it would encode the <br /> tag itself), even though skipping the encoding is not strictly right. Happy programming...
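For completeness, here is a minimal sketch of the idea wrapped in a helper (the class and method names are mine, and encoding the text before inserting the <br /> tags is a common hardening of the post's approach rather than exactly what the post does):
using System.Web;

public static class TextFormatting
{
    // Converts a multi-line string into HTML that keeps its line breaks:
    // encode the raw text first, then turn each newline into a <br /> tag.
    public static string ToHtmlWithLineBreaks(string input)
    {
        if (string.IsNullOrEmpty(input))
            return string.Empty;

        string encoded = HttpUtility.HtmlEncode(input);
        return encoded.Replace("\r\n", "<br />").Replace("\n", "<br />");
    }
}
The view would then emit the result without encoding it again, for example with <%= TextFormatting.ToHtmlWithLineBreaks(blogText) %>, where blogText is whatever string holds the post body.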
Saturday, March 27, 2010
General SQL Server Performance Tips
• Try to restrict the query's result set by returning only the columns you need from the table, not all of the table's columns.
This can result in good performance benefits, because SQL Server will return to the client only those columns, not all of the table's columns. This can reduce network traffic and boost the overall performance of the query.
• Try to avoid using SQL Server cursors, whenever possible.
SQL Server cursors can result in some performance degradation in comparison with select statements. Try to use correlated subquery or derived tables, if you need to perform row-by-row operations.
• If you need to return a table's total row count, you can use an alternative to the SELECT COUNT(*) statement.
Because the SELECT COUNT(*) statement performs a full table scan to return the total row count, it can take a very long time on a large table. There is another way to determine the total row count in a table: you can use the sysindexes system table. The ROWS column in sysindexes contains the total row count for each table in your database. So, you can use the following SELECT statement instead of SELECT COUNT(*): SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2. This can improve the speed of such queries several times over.
• Use table variables instead of temporary tables.
Table variables require less locking and logging resources than temporary tables, so table variables should be used whenever possible. Table variables are available only in SQL Server 2000 and later.
• Try to avoid using the DISTINCT clause, whenever possible.
Because using the DISTINCT clause will result in some performance degradation, you should use this clause only when it is necessary.
• Include the SET NOCOUNT ON statement in your stored procedures to stop the message indicating the number of rows affected by a T-SQL statement.
This can reduce network traffic, because your client will not receive the message indicating the number of rows affected by a T-SQL statement.
• Use SELECT statements with the TOP keyword or the SET ROWCOUNT statement if you need to return only the first n rows.
This can improve performance of your queries, because the smaller result set will be returned. This can also reduce the traffic between the server and the clients.
• Try to use UNION ALL statement instead of UNION, whenever possible.
The UNION ALL statement is much faster than UNION, because UNION ALL does not check for duplicate rows, while UNION looks for and removes duplicate rows whether or not any exist.
• Try to use constraints instead of triggers, whenever possible.
Constraints are much more efficient than triggers and can boost performance. So, you should use constraints instead of triggers, whenever possible.
• Use user-defined functions to encapsulate code for reuse.
The user-defined functions (UDFs) contain one or more Transact-SQL statements that can be used to encapsulate code for reuse. Using UDFs can reduce network traffic.
• You can specify whether the index keys are stored in ascending or descending order.
For example, using the CREATE INDEX statement with the DESC option (descending order) can increase the speed of queries that return rows in descending order. By default, ascending order is used.
• If you need to delete all of a table's rows, consider using TRUNCATE TABLE instead of the DELETE command.
TRUNCATE TABLE is a much faster way to delete all of a table's rows, because it removes all rows from a table without logging the individual row deletes.
• Don't use Enterprise Manager to access remote servers over a slow link or to maintain very large databases.
Because using Enterprise Manager is very resource-intensive, use stored procedures and T-SQL statements instead in this case.
• Use SQL Server cursors to allow your application to fetch a small subset of rows instead of fetching all of a table's rows.
SQL Server cursors allow an application to fetch any block of rows from the result set, including the next n rows, the previous n rows, or n rows starting at a certain row number in the result set. Using SQL Server cursors can reduce network traffic because a smaller result set is returned.
Optimization tips for designing tables in SQL Server
- Normalize your tables to the third normal form.
A table is in third normal form (3NF) if it is in second normal form (2NF) and does not contain transitive dependencies. In most cases, you should normalize your tables to the third normal form. Normalization is used to reduce the total amount of redundant data in the database. The less data there is, the less work SQL Server has to perform, speeding its performance.
- Consider denormalizing your tables from the fourth or fifth normal forms to the third normal form.
Normalization to the fourth and fifth normal forms can result in some performance degradation, especially when you need to perform many joins against several tables. It may be necessary to denormalize your tables to prevent performance degradation.
- Consider horizontal partitioning of very large tables into current and archive versions.
The less space used and the smaller the table, the less work SQL Server has to perform to evaluate your queries. For example, if you need to query only data for the current year in your daily work, and you need all the data only once per month for the monthly report, you can create two tables: one with the current year's data and one with the old data.
- Create the table's columns as narrow as possible.
This can reduce the table's size and improve the performance of your queries as well as some maintenance tasks (such as backup, restore and so on).
- Try to reduce the number of columns in a table.
The fewer columns a table has, the less space it will use, since more rows will fit on a single data page and less I/O overhead will be required to access the table's data.
- Try to use constraints instead of triggers, rules, and defaults whenever possible.
Constraints are much more efficient than triggers and can boost performance. Constraints are also more consistent and reliable than triggers, rules and defaults, because you can make errors when you write your own code to perform the same actions as the constraints.
- If you need to store integer data from 0 through 255, use the tinyint data type.
Columns with the tinyint data type use only one byte to store their values, in comparison with the two bytes, four bytes and eight bytes used by smallint, int and bigint columns respectively. For example, if you design tables for a small company with 5-7 departments, you can create the departments table with a DepartmentID tinyint column to store the unique number of each department.
- If you need to store integer data from -32,768 through 32,767, use the smallint data type.
Columns with the smallint data type use only two bytes to store their values, in comparison with the four bytes and eight bytes used by int and bigint columns respectively. For example, if you design tables for a company with several hundred employees, you can create an employee table with an EmployeeID smallint column to store the unique number of each employee.
- If you need to store integer data from -2,147,483,648 through 2,147,483,647, use the int data type.
Columns with the int data type use only four bytes to store their values, in comparison with the eight bytes used by bigint columns. For example, to design tables for a library with more than 32,767 books, create a books table with a BookID int column to store the unique number of each book.
- Use the smallmoney data type instead of the money data type if you need to store monetary values from -214,748.3648 through 214,748.3647.
Columns with the smallmoney data type use only four bytes to store their values, in comparison with the eight bytes used by money columns. For example, if you need to store the monthly employee payments, it might be possible to use a column with the smallmoney data type instead of the money data type.
- Use the smalldatetime data type instead of the datetime data type if you need to store date and time data from January 1, 1900 through June 6, 2079, with accuracy to the minute.
Columns with the smalldatetime data type use only four bytes to store their values, in comparison with the eight bytes used by datetime columns. For example, if you need to store an employee's hire date, you can use a column with the smalldatetime data type instead of the datetime data type.
- Use varchar/nvarchar columns instead of text/ntext columns whenever possible.
Because SQL Server stores text/ntext columns on Text/Image pages, separately from the other data stored on Data pages, it can take more time to retrieve the text/ntext values.
- Use char/varchar columns instead of nchar/nvarchar if you do not need to store Unicode data.
A char/varchar value uses only one byte to store one character, while an nchar/nvarchar value uses two bytes per character, so char/varchar columns use half the space to store data in comparison with nchar/nvarchar columns.
- Consider setting the 'text in row' table option in SQL Server 2000.
The text, ntext, and image values are stored on Text/Image pages by default. This option specifies that small text, ntext, and image values will be placed on the Data pages with the other data values in a data row. This can increase the speed of read and write operations and reduce the amount of space used to store small text, ntext, and image data values. You can set the 'text in row' table option by using the sp_tableoption stored procedure.
- If you work with SQL Server 2000, use cascading referential integrity constraints instead of triggers whenever possible.
For example, if you need to make cascading deletes or updates, specify the ON DELETE or ON UPDATE clause in the REFERENCES clause of the CREATE TABLE or ALTER TABLE statements. Cascading referential integrity constraints are much more efficient than triggers and can boost performance.
Few Tips on Bulk Copy Optimization
- Use nonlogged bulk copy whenever possible.
Nonlogged bulk copy is much faster than logged bulk copy, but to use it you must meet all of the following conditions:
1. The database option 'select into/bulkcopy' is set to true.
2. The target table is not being replicated.
3. The TABLOCK hint is specified.
4. The target table has no indexes, or if the table has indexes, it is empty when the bulk copy starts.
- Use native mode bulk copy whenever possible.
This can improve performance in comparison with character mode.
- Try to use the BULK INSERT command instead of bcp or DTS to load data into SQL Server.
The BULK INSERT command is much faster than bcp or the data pump at text file import operations; however, the BULK INSERT statement cannot bulk copy data from SQL Server to a data file.
- Use the bcp utility instead of DTS when you need to export data from a SQL Server table into a text file.
The bcp utility is much faster than DTS, so try to use it whenever possible.
- Specify the numbers of the first and last rows to bulk copy if you do not need to copy all the rows from the specified data file.
This can result in good performance benefits, because the total amount of data copied will be smaller.
- Specify the number of rows per batch of data copied if the transaction log fills up before the bulk copy is complete.
Because each batch is copied to the server as one transaction, SQL Server commits or rolls back the transaction for every batch. When you bulk copy large data files, the transaction log can be filled before the bulk copy is complete. In this case, enlarge the transaction log, allow it to grow automatically, or specify the number of rows per batch of data copied.
- Try to increase the packet_size option.
The packet_size option specifies the number of bytes, per network packet, sent to and from the server. The packet_size can be from 4096 to 65535 bytes, with a default of 4096. An increased packet size can enhance the performance of bulk copy operations. Try setting the packet_size option to 8192 bytes and continue monitoring.
- Use the ORDER hint if a clustered index exists on the table and the data file is sorted according to the clustered index.
This can significantly improve the performance of the bulk copy operation, because SQL Server will load data in clustered index order without any reordering operations.
- If you create a new table and bulk copy data into it, try to bulk load the data first and only then create any indexes.
This can significantly improve the performance of the bulk copy operation, because data will be loaded into the SQL Server table without any index pages being created during the bulk copy.
- If you load data into an empty table with existing nonclustered indexes, try to drop the nonclustered indexes, bulk load the data, and only then re-create the nonclustered indexes.
This can significantly improve the performance of the bulk copy operation, because data will be loaded into the SQL Server table without any index pages being created during the bulk copy.
- If you load data into a nonempty table with existing clustered and/or nonclustered indexes, and the amount of data added is large, it can be faster to drop all indexes on the table, perform the bulk copy operation, and then re-create the indexes after the data is loaded.
Check the time needed to load data with and without dropping/re-creating indexes on your test server before running the bulk copy operation on the production server.
- If your SQL Server box has multiple CPUs, try to divide the loaded data into two or more sources and run multiple instances of bcp on separate clients to load data in parallel.
Because SQL Server allows data to be bulk copied into a single table from multiple clients in parallel using the bcp utility or the BULK INSERT statement, try to use parallel data loads whenever possible. To bulk copy data into SQL Server in parallel, you must meet all of the following conditions:
1. The database option 'select into/bulkcopy' is set to true.
2. The TABLOCK hint is specified.
3. The target table does not have any indexes.
- Specify the TABLOCK hint if you bulk copy data into an empty table from a single client.
This can improve the performance of the bulk copy operation, because it causes a table-level lock to be taken for the duration of the bulk copy operation.
- Try to avoid using the CHECK_CONSTRAINTS and FIRE_TRIGGERS hints.
Using these hints can significantly degrade the performance of the bulk copy operation, because for each row loaded the constraints and insert triggers defined on the destination table will be executed.
Few Tips on Index Optimization
- Consider creating indexes on column(s) frequently used in WHERE, ORDER BY, and GROUP BY clauses.
These columns are the best candidates for index creation. You should analyze your queries very attentively to avoid creating indexes that are not useful.
- Keep your indexes as narrow as possible.
Because each index takes up disk space, try to minimize the index key's size to avoid using superfluous disk space. This reduces the number of reads required to read the index and boosts overall index performance.
- Drop indexes that are not used.
Because each index takes up disk space and slows the adding, deleting, and updating of rows, you should drop indexes that are not used. You can use the Index Tuning Wizard to identify indexes that are not used in your queries.
- Try to create indexes on columns that have integer values rather than character values.
Because integer values are usually smaller than character values (the int data type is 4 bytes, the bigint data type is 8 bytes), you can reduce the number of index pages used to store the index keys. This reduces the number of reads required to read the index and boosts overall index performance.
- Limit the number of indexes if your application updates data very frequently.
Because each index takes up disk space and slows the adding, deleting, and updating of rows, you should create new indexes only after analyzing how the data is used, the types and frequencies of the queries performed, and how your queries will use the new indexes. In many cases, the speed advantages of creating new indexes outweigh the disadvantages of the additional space used and slower row modification. However, avoid redundant indexes; create them only when necessary. For a read-only table, the number of indexes can be increased.
- Check that the index you are about to create does not already exist.
Keep in mind that when you create a primary key constraint or unique key constraint, SQL Server automatically creates an index on the column(s) participating in the constraint. If you specify another index name, you can end up creating indexes on the same column(s) again and again.
- Create a clustered index instead of a nonclustered index to increase the performance of queries that return a range of values and of queries that contain GROUP BY or ORDER BY clauses and return sorted results.
Because every table can have only one clustered index, you should choose the column(s) for this index very carefully. Try to analyze all your queries, choose the most frequently used ones, and include in the clustered index only those column(s) that provide the most performance benefit from the clustered index.
- Create nonclustered indexes to increase the performance of queries that return few rows and where the index has good selectivity.
In comparison with a clustered index, of which there can be only one per table, each table can have as many as 249 nonclustered indexes. However, you should consider nonclustered index creation as carefully as clustered index creation, because each index takes up disk space and is a drag on data modification.
- Create the clustered index on column(s) that are not updated very frequently.
Because the leaf node of a nonclustered index contains the clustered index key if the table has a clustered index, every time a column used for the clustered index is modified, all of the nonclustered indexes must also be modified.
- Create the clustered index on a single column that is as narrow as possible.
Because nonclustered indexes contain the clustered index key within their leaf nodes and use the clustered index to locate data rows, creating the clustered index on a single, narrow column will reduce not only the size of the clustered index but also that of all nonclustered indexes on the table.
- Avoid creating a clustered index based on an incrementing key.
For example, if a table has a surrogate integer primary key declared as IDENTITY and the clustered index was created on this column, then every time data is inserted into this table, the rows will be added to the end of the table. When many rows are added, a "hot spot" can occur. A "hot spot" occurs when many queries try to read or write data in the same area at the same time, and it results in an I/O bottleneck.
Note: by default, SQL Server creates a clustered index for the primary key constraint. So, in this case, you should explicitly specify the NONCLUSTERED keyword to indicate that a nonclustered index should be created for the primary key constraint.
- Create a clustered index for each table.
If you create a table without a clustered index, the data rows will not be stored in any particular order. This structure is called a heap. Every time data is inserted into this table, the row will be added to the end of the table. When many rows are added, a "hot spot" can occur. To avoid "hot spots" and improve concurrency, you should create a clustered index for each table.
- Don't create indexes on column(s) whose values have low selectivity.
For example, don't create an index for columns with many duplicate values, such as a "Sex" column (which has only "Male" and "Female" values), because in this case the disadvantages of the additional space used and slower row modification outweigh the speed advantages of creating a new index.
- If you create a composite (multi-column) index, try to order the columns in the key to enhance selectivity, with the most selective columns leftmost in the key.
The order of the columns in a composite (multi-column) index is very important. This can increase the chance that the index will be used.
- If you create a composite (multi-column) index, try to order the columns in the key so that the WHERE clauses of the frequently used queries match the column(s) that are leftmost in the index.
The order of the columns in a composite (multi-column) index is very important. The index will be used to evaluate a query only if the leftmost index key column is specified in the WHERE clause of the query. For example, if you create a composite index such as "Name, Age", then a query with a WHERE clause such as "WHERE Name = 'Alex'" will use the index, but a query with a WHERE clause such as "WHERE Age = 28" will not.
- If you need to join several tables very frequently, consider creating indexes on the joined columns.
This can significantly improve the performance of queries against the joined tables.
- Consider creating a surrogate integer primary key (identity, for example).
Every table must have a primary key (a unique identifier for a row within a database table). A surrogate primary key is a field that has a unique value but no actual meaning to the record itself, so users should never see or change a surrogate primary key. Some developers use surrogate primary keys, others use data fields themselves as the primary key. If a primary key consists of many data fields and has a large size, consider creating a surrogate integer primary key. This can improve the performance of your queries.
- Consider creating indexes on all the columns referenced in the most frequently used queries whose WHERE clause contains the OR operator.
If the WHERE clause in a query contains an OR operator and any of the columns referenced in the OR clause are not indexed, then a table or clustered index scan will be made. In this case, creating indexes on all such columns can significantly improve your query performance.
- If your application performs the same query over and over on the same table, consider creating a covering index that includes the columns from this query.
A covering index is an index that includes all of the columns referenced in the query. Creating a covering index can improve performance because all the data for the query is contained within the index itself, and only the index pages, not the data pages, will be used to retrieve the data. Covering indexes can bring a lot of performance to a query because they can save a huge amount of I/O operations.
- Use the DBCC DBREINDEX statement to rebuild all the indexes on all the tables in your database periodically (for example, once per week on Sunday) to reduce fragmentation.
Because fragmented data can cause SQL Server to perform unnecessary data reads, and query performance against a heavily fragmented table can be very bad, you should periodically rebuild all indexes to reduce fragmentation. Try to schedule the DBCC DBREINDEX statement during CPU idle time and slow production periods.
- Use the DBCC INDEXDEFRAG statement to defragment the clustered and secondary indexes of a specified table or view.
The DBCC INDEXDEFRAG statement is a new SQL Server 2000 command, which was not supported in previous versions. Unlike DBCC DBREINDEX, DBCC INDEXDEFRAG does not hold locks long term and thus will not block running queries or updates. So, try to use the DBCC INDEXDEFRAG command instead of DBCC DBREINDEX whenever possible.
- Consider using the SORT_IN_TEMPDB option when you create an index and tempdb is on a different set of disks than the user database.
The SORT_IN_TEMPDB option is a new SQL Server 2000 feature, which was not supported in previous versions. When you create an index with the SORT_IN_TEMPDB option, SQL Server uses the tempdb database, instead of the current database, to sort data during the index creation. Using this option can reduce the time it takes to create an index, but it increases the amount of disk space used to create the index.
- Use the SQL Server Profiler Create Trace Wizard with the "Identify Scans of Large Tables" trace to determine which tables in your database may need indexes.
This trace will show which tables are being scanned by queries instead of using an index.
Few Tips on Stored Procedures Optimization
- Use stored procedures instead of heavy-duty queries.
This can reduce network traffic, because your client will send to the server only the stored procedure name (perhaps with some parameters) instead of the text of a large heavy-duty query. Stored procedures can also be used to enhance security and conceal underlying data objects. For example, you can give users permission to execute a stored procedure that works with a restricted set of columns and data.
- Include the SET NOCOUNT ON statement in your stored procedures to stop the message indicating the number of rows affected by a Transact-SQL statement.
This can reduce network traffic, because your client will not receive the message indicating the number of rows affected by a Transact-SQL statement.
- Call stored procedures using their fully qualified names (if possible).
The complete name of an object consists of four identifiers: the server name, database name, owner name, and object name. An object name that specifies all four parts is known as a fully qualified name. Using fully qualified names eliminates any confusion about which stored procedure you want to run and can boost performance, because SQL Server has a better chance of reusing stored procedure execution plans if they were executed using fully qualified names.
- Consider returning an integer value via the RETURN statement instead of as part of a recordset.
The RETURN statement exits unconditionally from a stored procedure, so the statements following RETURN are not executed. Though the RETURN statement is generally used for error checking, you can use it to return an integer value for any other reason. Using the RETURN statement can boost performance because SQL Server will not create a recordset.
- Don't use the prefix "sp_" in the stored procedure name if you need to create a stored procedure to run in a database other than the master database.
The prefix "sp_" is used in system stored procedure names. Microsoft does not recommend using the prefix "sp_" in user-created stored procedure names, because SQL Server always looks for a stored procedure beginning with "sp_" in the following order: the master database, then the stored procedure based on the fully qualified name provided, then the stored procedure using dbo as the owner if one is not specified. So, when you have a stored procedure with the prefix "sp_" in a database other than master, the master database is always checked first, and if the user-created stored procedure has the same name as a system stored procedure, the user-created stored procedure will never be executed.
- Use the sp_executesql stored procedure instead of the EXECUTE statement.
The sp_executesql stored procedure supports parameters. So, using sp_executesql instead of the EXECUTE statement improves the readability of your code when many parameters are used. When you use sp_executesql to execute a Transact-SQL statement that will be reused many times, the SQL Server query optimizer will reuse the execution plan it generated for the first execution when the change in parameter values is the only variation.
- Use the sp_executesql stored procedure instead of temporary stored procedures.
Microsoft recommends using temporary stored procedures when connecting to earlier versions of SQL Server that do not support the reuse of execution plans. Applications connecting to SQL Server 7.0 or SQL Server 2000 should use the sp_executesql system stored procedure instead of temporary stored procedures to have a better chance of reusing the execution plans.
- If you have a very large stored procedure, try to break it down into several sub-procedures and call them from a controlling stored procedure.
A stored procedure will be recompiled when any structural changes are made to a table or view referenced by the stored procedure (for example, an ALTER TABLE statement), or when a large number of INSERTs, UPDATEs or DELETEs are made to a table referenced by the stored procedure. So, if you break down a very large stored procedure into several sub-procedures, there is a chance that only a single sub-procedure will be recompiled while the other sub-procedures will not.
- Try to avoid using temporary tables inside your stored procedure.
Using temporary tables inside a stored procedure reduces the chance of reusing the execution plan.
- Try to avoid using DDL (Data Definition Language) statements inside your stored procedure.
Using DDL statements inside a stored procedure reduces the chance of reusing the execution plan.
- Add the WITH RECOMPILE option to the CREATE PROCEDURE statement if you know that your query will vary each time it is run from the stored procedure.
The WITH RECOMPILE option prevents reusing the stored procedure execution plan, so SQL Server does not cache a plan for this procedure and the procedure is recompiled at run time. Using the WITH RECOMPILE option can boost performance if your query varies each time it is run from the stored procedure, because in this case the wrong execution plan will not be used.
- Use SQL Server Profiler to determine which stored procedures are being recompiled too often.
To check whether a stored procedure has been recompiled, run SQL Server Profiler and choose to trace the event in the "Stored Procedures" category called "SP:Recompile". You can also trace the event "SP:StmtStarting" to see at what point in the procedure it is being recompiled. When you identify these stored procedures, you can take corrective actions to reduce or eliminate the excessive recompilations.
Saturday, March 20, 2010
Mail merge from .Net (C#)
Here are two nice links from MSDN for Office developers:
How to automate Word with Visual Basic to create a Mail Merge Sample Link 1
How to automate Microsoft Word to perform Mail Merge from Visual C# Sample Link 2
The best sample code, which I like, is available here with step-by-step instructions: Sample Link 3
Below is the sample code I used for my application. My requirement was to send an eFax whose text body is loaded from another .doc file.
Step 1: Create a Word template file with two merge fields ('Header' and 'Footer') and a bookmark named 'DocumentContents'.
Step 2: Open the template file and set the header and footer.
Step 3: Insert the source file's contents at the bookmark.
Step 4: Save the modified template file as the output file.
Code:
public void Merge(string templateFile, string sourceFile, string destFile)
{
if(!File.Exists(templateFile))
{
throw new FileNotFoundException(string.Format("Template file '{0}' does not exist",templateFile));
}
if (!File.Exists(sourceFile))
{
throw new FileNotFoundException(string.Format("Source file '{0}' does not exist", sourceFile));
}
if (File.Exists(destFile))
{
File.Delete(destFile);
}
object oFalse = false;
object oMissing = Missing.Value;
Application oWord = new Application();
Document oWordDoc = new Document();
try
{
oWord.Visible = true;
object oTemplatePath = templateFile;
oWordDoc = oWord.Documents.Add(ref oTemplatePath, ref oMissing, ref oMissing, ref oMissing);
// Walk the fields in the template and fill in the mail merge fields.
foreach (Field myMergeField in oWordDoc.Fields)
{
Microsoft.Office.Interop.Word.Range rngFieldCode = myMergeField.Code;
String fieldText = rngFieldCode.Text;
// ONLY GETTING THE MAILMERGE FIELDS
if (fieldText.StartsWith(" MERGEFIELD"))
{
// THE TEXT COMES IN THE FORMAT OF
// MERGEFIELD MyFieldName \\* MERGEFORMAT
Int32 endMerge = fieldText.IndexOf("\\");
Int32 fieldNameLength = fieldText.Length - endMerge;
String fieldName = fieldText.Substring(11, endMerge - 11).Trim();
if (fieldName == "Header")
{
myMergeField.Select();
oWord.Selection.TypeText("Set Header of the File");
}
if (fieldName == "Footer")
{
myMergeField.Select();
oWord.Selection.TypeText("Set Footer of the File");
}
}
}
// Insert the source document's contents at the 'DocumentContents' bookmark.
object oBookMark = "DocumentContents";
String oFilePath = sourceFile;
oWordDoc.Bookmarks.get_Item(ref oBookMark).Range.InsertFile(oFilePath, ref oMissing, ref oFalse, ref oFalse, ref oFalse);
// Save the merged document as the output file.
Object oSaveAsFile = (Object)destFile;
oWordDoc.SaveAs(ref oSaveAsFile, ref oMissing, ref oMissing, ref oMissing,
ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing,
ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing,
ref oMissing, ref oMissing);
}
finally
{
oWordDoc.Close(ref oFalse, ref oMissing, ref oMissing);
oWord.Quit(ref oMissing, ref oMissing, ref oMissing);
Marshal.ReleaseComObject(oWordDoc);
}
}
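A quick note on wiring this up (everything below is my assumption, not part of the original post): the Merge method relies on using directives for System.IO, System.Reflection, System.Runtime.InteropServices and Microsoft.Office.Interop.Word, and it can then be called with three file paths, for example:
class Program
{
    static void Main()
    {
        // 'WordMailMerge' is a hypothetical name for the class that holds the
        // Merge method above; the post never shows the containing class.
        var merger = new WordMailMerge();

        // The paths are placeholders for illustration only.
        merger.Merge(
            @"C:\Temp\FaxTemplate.dot",   // template with the 'Header'/'Footer' merge fields and bookmark
            @"C:\Temp\FaxBody.doc",       // source document inserted at 'DocumentContents'
            @"C:\Temp\FaxOutput.doc");    // merged output file
    }
}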