What's the Best Practice for Naming Stored Procedures in T-SQL?

What is your naming convention for stored procedures?

For my last project I used usp_[Action][Object][Process], so for example usp_AddProduct, usp_GetProductList, or usp_GetProductDetail. However, now that the database is at 700-plus procedures, it has become a lot harder to find all the procedures for a specific object. For example, I now have to search through 50-odd Add procedures to find the Product add, another 50-odd for the Get, and so on.

Because of this, in my new application I'm planning to group procedure names by object. I'm also dropping the usp prefix, as I feel it is somewhat redundant: it only tells me it's a procedure, something I can deduce from the name of the procedure itself.

The new format is as follows:

[App]_[Object]_[Action][Process]

App_Tags_AddTag
App_Tags_AddTagRelations
App_Product_Add
App_Product_GetList
App_Product_GetSingle

It helps to group things for easier finding later, especially if there is a large number of sprocs.

Where more than one object is involved, I find that most instances have a primary and a secondary object, so the primary object is used as normal and the secondary is referred to in the process section, for example App_Product_AddAttribute.

What is the best practice to manage stored procedures in C# code?

You can have a class with string constants holding the names of the SPs, and keep this class in a separate class library (DLL). Also, it is not good to start procedure names with sp_; see http://msdn.microsoft.com/en-us/library/dd172115(v=vs.100).aspx

// Central place for stored procedure names used by the data layer
public class StoredProcedures
{
    public const string GetKeyTables = "usp_Get_Key_Tables";
}

Best practices of structuring stored procedures

I write a lot of complex stored procs. Some things I would consider best practices:

Don't use dynamic SQL in a stored proc unless you are writing a search proc with lots of optional parameters (for that case it is currently one of the best solutions). If you must use dynamic SQL in a proc, always include a debug input parameter, and when the debug parameter is set, print the SQL statement that was built rather than executing it. This will save hours of debugging time!
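
A minimal sketch of that debug pattern (the table, columns, and parameter names are invented for illustration):

CREATE PROCEDURE dbo.Product_Search
    @Name varchar(50) = NULL,
    @MinPrice money = NULL,
    @Debug bit = 0
AS
BEGIN
    DECLARE @sql nvarchar(max) =
        N'SELECT ProductID, Name, Price FROM dbo.Product WHERE 1 = 1';

    -- Append a clause only for the parameters the caller actually supplied
    IF @Name IS NOT NULL
        SET @sql = @sql + N' AND Name LIKE @Name + ''%''';
    IF @MinPrice IS NOT NULL
        SET @sql = @sql + N' AND Price >= @MinPrice';

    IF @Debug = 1
        PRINT @sql;   -- show the generated statement instead of running it
    ELSE
        EXEC sp_executesql @sql,
            N'@Name varchar(50), @MinPrice money',
            @Name = @Name, @MinPrice = @MinPrice;
END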

If you are performing more than one action query in a proc (insert/update/delete), use TRY...CATCH blocks and transaction processing. Add a test parameter to the input parameters and, when it is set to 1, always roll back the entire transaction. Before rolling back in test mode, I usually have a section that returns the values in the tables I'm affecting, to ensure that what I think I am doing to the database is in fact what I did do. Or you can add checks as you go, as shown below. That is as simple as putting the following code around your currently commented-out selects (and uncommenting them) once you have the @test parameter.

If @test = 1
Begin
    Select * from table1 where field1 = @myfirstparameter
End

Now you don't have to go through and comment and uncomment each time you test.

@test or @debug should always be given a default value of 0 and placed last in the parameter list. That way, adding them won't break existing calls of the proc.
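
Putting those pieces together, a minimal sketch of such a proc (the tables and the two action queries are hypothetical, and THROW requires SQL Server 2012 or later):

CREATE PROCEDURE dbo.Product_Discontinue
    @ProductID int,
    @test bit = 0   -- defaulted and last, so existing calls keep working
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE dbo.Product SET Discontinued = 1 WHERE ProductID = @ProductID;
        DELETE dbo.ProductPromotion WHERE ProductID = @ProductID;

        IF @test = 1
        BEGIN
            -- Inspect what the proc actually did, then throw it all away
            SELECT * FROM dbo.Product WHERE ProductID = @ProductID;
            ROLLBACK TRANSACTION;
        END
        ELSE
            COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;   -- use RAISERROR on versions before SQL Server 2012
    END CATCH
END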

Consider having logging and/or error-logging tables for procs doing inserts/updates/deletes. If you record the steps and/or errors in table variables as you go, they are still available after a rollback to be inserted into the logging table. Knowing what part of a complex proc failed, and what the error was, can be invaluable later on.
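
A sketch of that logging pattern, assuming a hypothetical dbo.ProcLog table:

DECLARE @Log table (StepNum int, StepName varchar(100), ErrorMsg varchar(4000) NULL);

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT @Log (StepNum, StepName) VALUES (1, 'update products');
    -- UPDATE dbo.Product ... ;

    INSERT @Log (StepNum, StepName) VALUES (2, 'delete promotions');
    -- DELETE dbo.ProductPromotion ... ;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    INSERT @Log (StepNum, StepName, ErrorMsg)
    VALUES (99, 'failed', ERROR_MESSAGE());

    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- The table variable keeps its rows through the rollback,
    -- so the log survives to be written out here:
    INSERT dbo.ProcLog (StepNum, StepName, ErrorMsg, LoggedAt)
    SELECT StepNum, StepName, ErrorMsg, GETDATE()
    FROM @Log;
END CATCH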

Where possible, do not nest stored procs. If you need to run multiple records in a loop, replace the stored proc with one that takes a table-valued parameter and set up the proc to run in a set-based rather than record-by-record fashion. This will work whether the table-valued parameter holds one record or many.
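
A sketch of the table-valued-parameter version (the type, table, and column names are invented):

CREATE TYPE dbo.ProductIdList AS TABLE (ProductID int NOT NULL PRIMARY KEY);
GO

CREATE PROCEDURE dbo.Product_Discontinue_Set
    @ProductIDs dbo.ProductIdList READONLY
AS
BEGIN
    -- One set-based statement handles one row or ten thousand
    UPDATE p
    SET p.Discontinued = 1
    FROM dbo.Product p
    JOIN @ProductIDs ids ON ids.ProductID = p.ProductID;
END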

If you have a complex select with a lot of subqueries or derived tables, consider using CTEs instead. Refactor any correlated subqueries or cursors into better-performing set-based code. Always think in terms of sets of data, not one record.
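
For example, a derived table rewritten as a CTE (hypothetical tables):

-- Derived-table version; these get hard to read once they nest:
SELECT p.Name, t.TotalQty
FROM dbo.Product p
JOIN (SELECT ProductID, SUM(Qty) AS TotalQty
      FROM dbo.OrderDetail
      GROUP BY ProductID) t ON t.ProductID = p.ProductID;

-- Same query with a CTE; each step is named and reads top to bottom:
WITH ProductTotals AS
(
    SELECT ProductID, SUM(Qty) AS TotalQty
    FROM dbo.OrderDetail
    GROUP BY ProductID
)
SELECT p.Name, pt.TotalQty
FROM dbo.Product p
JOIN ProductTotals pt ON pt.ProductID = p.ProductID;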

Do not, under any conceivable circumstance, nest views. The performance hit is much worse than any small amount of saved development time. And trust me, nested views do not save maintenance time when the change needs to be made to the view furthest down the chain.

All stored procs (and other database code) should be in source control.

Table variables are good for smaller data sets, but temp tables (real ones that start with # or ##, not staging tables) can perform better with large data sets. If you use temp tables, drop them as soon as you no longer need them. Try to avoid the use of global temp tables.
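
For example (hypothetical tables), building, indexing, and disposing of an intermediate set:

-- Materialize a large intermediate result once; temp tables get
-- statistics and can take ad-hoc indexes, which table variables do not
SELECT ProductID, SUM(Qty) AS TotalQty
INTO #ProductTotals
FROM dbo.OrderDetail
GROUP BY ProductID;

CREATE INDEX IX_ProductTotals_Qty ON #ProductTotals (TotalQty);

-- ... join #ProductTotals into further queries here ...

DROP TABLE #ProductTotals;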

Learn to write performant SQL. Once you know the techniques, it is usually just as easy to write SQL that performs well as SQL that does not. If you write complex stored procs, there is no excuse for not knowing which techniques work better than others. Learn how to make sure your query is sargable. Avoid cursors, correlated subqueries, scalar functions, and other constructs that run row-by-agonizing-row.
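
A classic sargability example (assuming a dbo.Orders table with an index on OrderDate):

-- Non-sargable: the function on the column blocks an index seek
SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2023;

-- Sargable rewrite: a range on the bare column can use the index
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20230101' AND OrderDate < '20240101';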

Stored Procedures Best Practices

What to return will depend on your business needs. In general, we only return errors, but other companies/processes have different needs.

Sometimes we use output parameters to send some values back that might be used later in a long complicated process that executes many stored procs. This is especially true if you have a lot of conditional logic to tell what other procs to run under the current set of circumstances.

Be clear though: RETURN should carry error or success information, while OUTPUT parameters should carry actual data (such as a person ID) that you intend to use later in the process or display in the app. If you have more than a scalar value to return to the calling process, use a SELECT instead of an output variable. Don't try to use RETURN for anything other than the status of the run, and don't try to use OUTPUT to return multiple records.
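
A small sketch of that division of labor (table and procedure names are illustrative):

CREATE PROCEDURE dbo.Person_Add
    @Name varchar(100),
    @PersonID int OUTPUT        -- data the caller needs later
AS
BEGIN
    INSERT dbo.Person (Name) VALUES (@Name);
    SET @PersonID = SCOPE_IDENTITY();
    RETURN 0;                   -- status only: 0 = success
END
GO

-- Caller:
DECLARE @id int, @status int;
EXEC @status = dbo.Person_Add @Name = 'Ann', @PersonID = @id OUTPUT;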

One thing to consider if you decide to handle rollbacks with TRY...CATCH and explicit steps: data you have put into a table variable is not rolled back, but any data inserted into a regular table or a temp table is. That means if you want to log your steps or errors, you should put them in a table variable so they can be inserted into the logging table after the commit or rollback.
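
A quick demonstration of the difference:

CREATE TABLE #steps (msg varchar(100));
DECLARE @steps table (msg varchar(100));

BEGIN TRANSACTION;
INSERT #steps VALUES ('step 1 done');
INSERT @steps VALUES ('step 1 done');
ROLLBACK TRANSACTION;

SELECT * FROM #steps;  -- 0 rows: the temp-table insert was rolled back
SELECT * FROM @steps;  -- 1 row: table variables ignore the rollback
DROP TABLE #steps;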

If you are going to write anything at all complex, with lots of steps and variables, do yourself a favor and add an @test input variable so that you can run it in a test mode (and roll back), checking at key points what your variable values are or what results a select feeding an insert statement would produce. This will help tremendously in finding out exactly what information you tried to insert when the insert step fails. If you spend an extra hour or so doing this up front, you will save hundreds of hours figuring out why the proc errored later on.

Database Naming Conventions by Microsoft?

The naming conventions used in SQL Server's AdventureWorks database demonstrate many best practices in terms of style.

To summarize:

  • Object names are easily understood
  • Table names are not pluralized ("User" table, not "Users")
  • Abbreviations are few, but allowed (e.g. Qty, Amt)
  • PascalCase is used exclusively, with the exception of certain column names (e.g. rowguid)
  • No underscores
  • Certain keywords are allowed (e.g. Name)
  • Stored procedures are prefaced with "usp"
  • Functions are prefaced with "ufn"

You can find more details here:

  • AdventureWorks Data Dictionary
  • Stored Procedures in AdventureWorks
  • Functions in AdventureWorks

One caveat: database naming conventions can be very controversial and most database developers I've met have a personal stake in their style. I've heard heated arguments over whether a table should be named "OrderHeader" or "OrderHeaders."

Best practices for stored procedure API?

It is not hard to provide informational error messages that a human can understand: just RAISERROR with a descriptive text. Slightly more difficult is raising localized texts, which implies proper use of sp_addmessage and family. The really hard problem is raising errors to which a program can react. This means properly documented error codes (and severity and state), and severe code discipline in using them in your API.
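
For example (message number 50010 and its text are invented for illustration):

-- Register a documented, localizable message once per server:
EXEC sp_addmessage
    @msgnum = 50010, @severity = 16,
    @msgtext = N'Product %d is discontinued and cannot be ordered.',
    @replace = 'replace';

-- Raise it where the rule is violated; callers can switch on ERROR_NUMBER():
DECLARE @ProductID int = 42;
RAISERROR (50010, 16, 1, @ProductID);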

And don't forget proper transaction nesting. I have a sample on my blog on how to properly handle transactions in combination with T-SQL exceptions: Exception handling and nested transactions.
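
The core of the pattern discussed there: open a transaction only at the outermost level, take a savepoint when already inside a transaction, and in the CATCH block roll back to the savepoint when the outer transaction is still committable. A hedged sketch:

CREATE PROCEDURE dbo.usp_DoWork
AS
BEGIN
    DECLARE @trancount int = @@TRANCOUNT;
    BEGIN TRY
        IF @trancount = 0
            BEGIN TRANSACTION;
        ELSE
            SAVE TRANSACTION usp_DoWork;

        -- ... the actual work goes here ...

        IF @trancount = 0
            COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() = -1
            ROLLBACK TRANSACTION;              -- doomed: roll back everything
        ELSE IF XACT_STATE() = 1 AND @trancount = 0
            ROLLBACK TRANSACTION;              -- we own it: undo everything
        ELSE IF XACT_STATE() = 1 AND @trancount > 0
            ROLLBACK TRANSACTION usp_DoWork;   -- undo only our savepoint
        THROW;                                 -- rethrow (SQL Server 2012+)
    END CATCH
END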

Unfortunately, the state of the art in the whole client/T-SQL stack vis-a-vis exceptions has some problems. Most notable is that if you catch a T-SQL exception, you cannot rethrow it (at least before SQL Server 2012 introduced THROW), so your client cannot expect the typical system error numbers. See SQL Server: Rethrow exception with the original exception number. This leaves you with little means to communicate proper error information, other than using your own error numbers in the over-50000 range, which becomes very cumbersome as the number of 'translated' error codes increases, and using the error message string as the exception information.


