Tuesday, August 24, 2010

Remove All Text Between Single Quotes

There's a zillion posts and articles about how to escape single quotes in T-SQL, but what I wanted to do was strip out the characters that were inside the single quotes. After a ridiculous amount of Googling turned up nothing, I spent an even more ridiculous amount of time figuring it out.

The code below is a complete, working example of removing the characters from between two single quotes for every row of a table. It also removes the single quotes; modifying the code to leave the single quotes in is left as an exercise for the reader. (Ok, just add 1 to PosBeg and subtract 1 from PosEnd.)

-- How to convert the rows' values to this: 
--      The cow says .          
--      The duck says .         
--      The clock says .        
-- In other words, remove the single quotes
-- and the characters *between* them. 



CREATE TABLE t (c VARCHAR(50))

INSERT INTO t (c)
          SELECT 'The cow says ''moo''.'
    UNION SELECT 'The duck says ''quack''.'
    UNION SELECT 'The clock says ''eleven-thirty''.'

-- Remove the first quoted substring, quotes included.
UPDATE t
   SET c = STUFF(c,
                 CHARINDEX('''', c),
                 CHARINDEX('''', c, CHARINDEX('''', c) + 1) - CHARINDEX('''', c) + 1,
                 '')

SELECT c FROM t

DROP TABLE t

If you insert the code below into the code above just before the UPDATE statement, it may make what's going on clearer:

SELECT CHARINDEX('''', c)                              AS PosBeg FROM t

SELECT CHARINDEX('''', c, CHARINDEX('''', c) + 1)      AS PosEnd FROM t

SELECT CHARINDEX('''', c, CHARINDEX('''', c) + 1) -
       CHARINDEX('''', c)                              AS Length FROM t 

SELECT STUFF(c,
                 CHARINDEX('''', c),
                 CHARINDEX('''', c, CHARINDEX('''', c) + 1) - CHARINDEX('''', c) + 1,
                 '')                                   AS Stuffed
  FROM t
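For comparison, here's the same inside-out arithmetic sketched in Python (an illustration only, not part of the T-SQL): `find` plays the role of CHARINDEX, and slicing plays the role of STUFF.

```python
def strip_quoted(s):
    """Remove the first single-quoted substring, quotes included."""
    pos_beg = s.find("'")                       # CHARINDEX('''', c)
    pos_end = s.find("'", pos_beg + 1)          # CHARINDEX('''', c, PosBeg + 1)
    length = pos_end - pos_beg + 1              # PosEnd - PosBeg + 1
    return s[:pos_beg] + s[pos_beg + length:]   # STUFF(c, PosBeg, Length, '')

print(strip_quoted("The cow says 'moo'."))      # → The cow says .
```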

The really important thing I learned from this is that (a) when people say T-SQL was not intended for string manipulation, they're not kidding, and (b) the way to do anything remotely fancy with strings is one tiny step at a time. As an old C/C++/C# developer, I underestimated the difficulty of this problem: playing with strings ceased to be interesting as soon as CString came out. I finally got it through my head that the only hope of implementing this in an intelligible way was to start from the inside, and work my way out. That's where the intermediate variables PosBeg, PosEnd, and Length came from - they were the baby steps I took along the way.

I haven't programmed using intermediate variables in a long, long time. I don't write C# like that, and I sure don't write other T-SQL that way, but string manipulation in T-SQL drove me back to it. I wonder if it's T-SQL's lack of powerful string operators, or just the familiarity of old habits, that are to blame?

Wednesday, August 18, 2010

ASCII Bar Chart of Job History

If you're like me, it's a lot easier to understand things with a picture. With that in mind, here's a script you can run as a Job which will print out a visual representation of the Jobs that ran in the last 24 hours. The results are emailed to whatever address you provide (near the end of the script).

-- Prints a primitive bar chart that shows all the jobs that ran during the    
-- last 24 hours.  Useful for visualizing overlap.                             
-- 2010-08-17 - Larry Leonard                                                  

USE YourDatabase

-- This is the only variable you can set.  It is the width you want the 
-- output to be, in characters.
DECLARE @nScreenDotWidth INT    SET @nScreenDotWidth = 80

-- Create the temp table that will hold all the Job History data.              

;WITH cte AS
(
SELECT jv.name          AS 'sJobName',
       jv.description   AS 'sJobDescription',

       -- run_date is an INT like 20100817: rearrange into MM/DD/YYYY.
       SUBSTRING(CAST(      jh.run_date     AS CHAR(8)), 5, 2) + '/' +
                 CAST(RIGHT(jh.run_date, 2) AS CHAR(2))        + '/' +
                 CAST(LEFT (jh.run_date, 4) AS CHAR(4))                               AS 'sRunStartDate', 

       -- run_time is an INT like 50500 (5:05:00 AM): zero-pad it first.
       CAST(LEFT(RIGHT(STUFF(jh.run_time, 1, 0, '0'), 6), 2) AS CHAR(2)) + ':' + 
       CAST(LEFT(RIGHT(      jh.run_time, 4), 2)             AS CHAR(2)) + ':' +
       CAST(RIGHT(           jh.run_time, 2)                 AS CHAR(2))              AS 'sRunStartTime',

       -- run_duration is an INT in HHMMSS form: convert it to whole minutes.
       ((60 * 60 * CAST(     LEFT(STUFF(jh.Run_Duration, 1, 0, REPLICATE('0', 6 - LEN(jh.Run_Duration))), 2)    AS INT)) +  
        (     60 * CAST(SUBSTRING(STUFF(jh.Run_Duration, 1, 0, REPLICATE('0', 6 - LEN(jh.Run_Duration))), 3, 2) AS INT)) +  
        (          CAST(    RIGHT(STUFF(jh.Run_Duration, 1, 0, REPLICATE('0', 6 - LEN(jh.Run_Duration))), 2)    AS INT)) 
       ) / 60                                                                         AS 'nRunDurationMins'

  FROM msdb.dbo.sysjobs_view    jv
  JOIN msdb.dbo.sysjobhistory   jh
    ON jv.job_id = jh.job_id
 WHERE jh.step_id = 0                               -- "Job is completed" step.
)
 SELECT   sJobName                               AS 'sJobName'
        , sRunStartDate + ' ' + sRunStartTime    AS 'sRunStart'
        , nRunDurationMins                       AS 'nRunDurationMins'
   INTO #Chart
   FROM cte

-- Add the runs' start and stop times, converted from the ridiculous format    
-- returned by sysjobhistory.  Use this info to only look at last 24 hours.    

ALTER TABLE #Chart ADD dtRunStart DATETIME, dtRunEnd DATETIME

UPDATE #Chart
   SET dtRunStart = CAST(sRunStart AS DATETIME)

UPDATE #Chart
   SET dtRunEnd   = DATEADD(MINUTE, nRunDurationMins, dtRunStart)

-- Zero-length runs would be invisible on the chart, so drop them, along with  
-- anything older than 24 hours.                                               
DELETE FROM #Chart
 WHERE dtRunEnd < DATEADD(HOUR, -24, (SELECT MAX(dtRunEnd) FROM #Chart))
    OR nRunDurationMins = 0  

-- Calculate each job run's start and stop points on the chart.                

DECLARE @dtLatest    DATETIME    SELECT @dtLatest    = MAX(dtRunEnd) FROM #Chart
DECLARE @dtEarliest  DATETIME    SELECT @dtEarliest  = DATEADD(hour, -24, @dtLatest)
DECLARE @nMinsPerDot INT         SET    @nMinsPerDot = DATEDIFF(MINUTE, @dtEarliest, @dtLatest) / @nScreenDotWidth


ALTER TABLE #Chart ADD StartDot INT, EndDot INT

UPDATE #Chart
   SET StartDot = DATEDIFF(MINUTE, @dtEarliest, dtRunStart) / @nMinsPerDot

UPDATE #Chart
   SET EndDot   = DATEDIFF(MINUTE, @dtEarliest, dtRunEnd)   / @nMinsPerDot

-- Iterate through the job runs, creating a line for each one.                 
DECLARE @nMaxJobNameLen INT             SELECT @nMaxJobNameLen = MAX(LEN(sJobName)) FROM #Chart
DECLARE @sJobName       NVARCHAR(4000)
DECLARE @StartDot       INT              
DECLARE @EndDot         INT 
DECLARE @sChart         NVARCHAR(MAX)   SET @sChart = ''
DECLARE @sLine          NVARCHAR(4000)

DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT sJobName, StartDot, EndDot
      FROM #Chart
  ORDER BY StartDot,
           EndDot DESC

OPEN cur
FETCH cur INTO @sJobName, @StartDot, @EndDot 

WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @sLine = @sJobName
              + REPLICATE(' ', @nMaxJobNameLen - LEN(@sJobName))
              + ' !'
              + REPLICATE(' ', @StartDot) 
              + '|' 
              + CASE 
                     WHEN @EndDot - @StartDot - LEN('||') > 0
                     THEN REPLICATE('-', @EndDot - @StartDot - LEN('||')) 
                     ELSE '-'
                END
              + '|'

    SET @sChart = @sChart + CHAR(10) + @sLine
    FETCH cur INTO @sJobName, @StartDot, @EndDot 
END

CLOSE cur
DEALLOCATE cur

-- Add top and bottom borders, and the date range, to the chart.               
SET @sChart = REPLICATE('=', LEN(@sLine) + 1) + @sChart

SET @sLine = CAST(@dtEarliest AS NVARCHAR) 
           + REPLICATE(' ', LEN(@sLine) - (2 * LEN('Aug 16 2010  5:05PM')) + 1)
           + CAST(@dtLatest AS NVARCHAR)

SET @sChart = @sChart + CHAR(10) + REPLICATE('=', LEN(@sLine) + 1) + CHAR(10) + @sLine

-- "PRINT" truncates NVARCHAR(MAX) variables at 8000 bytes.  So we do this.       

DECLARE @CurrentEnd  BIGINT             -- Track the length of the next substring.
DECLARE @offset      TINYINT            -- Tracks the amount of offset needed.

SET @sChartPiece = REPLACE(REPLACE(@sChart, CHAR(13) + CHAR(10), CHAR(10)), CHAR(13), CHAR(10))

WHILE LEN(@sChartPiece) > 1
    IF CHARINDEX(CHAR(10), @sChartPiece) BETWEEN 1 AND 4000
        SET @CurrentEnd =  CHARINDEX(CHAR(10), @sChartPiece) - 1
        SET @offset = 2
        SET @CurrentEnd = 4000
        SET @offset = 1

    PRINT SUBSTRING(@sChartPiece, 1, @CurrentEnd)
    SET @sChartPiece = SUBSTRING(@sChartPiece, @CurrentEnd+@offset, 1073741822)  -- 0x3FFFFFF 

-- Send me an email.                                                           

DECLARE @EmailImportance NVARCHAR(6)      SET @EmailImportance = 'Normal'
DECLARE @EMailSubject    NVARCHAR(255)    SET @EMailSubject    = 'Daily Job History Chart'
DECLARE @EmailBody       NVARCHAR(MAX)    SET @EmailBody       = @sChart
DECLARE @EmailItemID     INT              SET @EmailItemID     = 0
DECLARE @ProfileId       INT              SET @ProfileId       = -1
DECLARE @ProfileName     sysname          SET @ProfileName     = 'YourProfile'
DECLARE @nErr            INT              SET @nErr            = @@ERROR

EXEC msdb.dbo.sp_send_dbmail
    @recipients      = 'president@whitehouse.gov',
    @body            = @EmailBody,
    @body_format     = 'TEXT',
    @subject         = @EMailSubject,
    @importance      = @EmailImportance,
    @profile_name    = @ProfileName,
    @mailitem_id     = @EmailItemID OUTPUT

-- Check the results of trying to send the mail.
SET @nErr = @@ERROR

IF @EmailItemID != 0
    RAISERROR ('sp_send_dbmail - @EmailItemID %d', 10, 1, @EmailItemID) WITH NOWAIT, LOG

IF @nErr != 0
    RAISERROR ('sp_send_dbmail - @@ERROR %d', 16, 1, @nErr) WITH NOWAIT, LOG

Here are some example results. (This data is entirely made up - I don't really rebuild indexes that often.) Primitive, to be sure, but it still makes it easy to spot five jobs running at the same time!

CHECKDB         ! |--|
Reorg Indexes   !  |-|
CHECKDB         !            |---|
Reorg Indexes   !             |-|
CHECKDB         !                 |---|
Reorg Indexes   !                   |-|
Reorg Indexes   !                   |-|
Rebuild Indexes !                    |-----------------------------------------------------------|
CHECKDB         !                       |--|
Reorg Indexes   !                        |-|
CHECKDB         !                            |---|
Reorg Indexes   !                              |-|
CHECKDB         !                                  |--|
Reorg Indexes   !                                   |-|
CHECKDB         !                                      |------|
CHECKDB         !                                      |--|
Reorg Indexes   !                                        |----|
Daily Reboot    !                                           |--|
CHECKDB         !                                            |--------------|
Reorg Indexes   !                                             |-|
CHECKDB         !                                                 |---|
Reorg Indexes   !                                                  |-|
CHECKDB         !                                                       |---|
Reorg Indexes   !                                                        |-|
Reorg Indexes   !                                                             |-|
Load AR data    !                                                              |-|
CHECKDB         !                                                                 |-----|
Reorg Indexes   !                                                                   |-|
Run Reports     !                                                                   |-|
Load AR data    !                                                                     |-|
Reorg Indexes   !                                                                        |-|
CHECKDB         !                                                                            |---|
Reorg Indexes   !                                                                              |-|
Aug 16 2010 10:49PM                                                            Aug 17 2010 10:49PM
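If the nested STUFF/REPLICATE gymnastics in the duration math are hard to follow, here's the same decoding sketched in Python (an illustration only, not part of the job script): msdb stores run_duration as an integer in HHMMSS form, so 13542 means 1 hour, 35 minutes, 42 seconds.

```python
def run_duration_minutes(run_duration):
    """Decode msdb's run_duration (an INT in HHMMSS form) into whole minutes."""
    s = str(run_duration).zfill(6)   # same as the REPLICATE('0', 6 - LEN(...)) padding
    hours   = int(s[0:2])
    minutes = int(s[2:4])
    seconds = int(s[4:6])
    return (hours * 3600 + minutes * 60 + seconds) // 60

print(run_duration_minutes(13542))   # → 95
```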

Saturday, August 7, 2010

Scriptlet: Display Connections Per User & Database

A quick script to list out which users have how many connections with which databases.

-- Display users' connections per database.
SELECT DB_NAME(dbid) AS 'Database Name', 
       COUNT(dbid)   AS 'No Of Connections',
       loginame      AS 'Login Name'
  FROM sys.sysprocesses
 WHERE dbid > 0
 GROUP BY dbid, loginame

Sunday, July 25, 2010

Lost the "Database Tab" in Visio 2010?

I've been using Visio since before Microsoft bought them, and I love Visio 2010, but the Microsoft Office help is worthless! Every dialog has a "context-sensitive" help button with a little question mark on it, but - psyche! - it always opens to the main Visio help page.

Why not just search? Great idea! Try searching the Visio 2010 help for, oh, I don't know, something fundamental to Visio, like glue. Here's what you get:

Can you guess which of the six results actually explains what "gluing" is? If you said "Number 4", you cheated and looked.

Anyway, if suddenly the "Database" tab disappears (assuming you figured out that you must use the "Database Model Diagram" template to get the tab to show up in the first place), be sure that this checkbox in Visio Options is checked:

Hopefully this will help someone else out.

Thursday, June 24, 2010

Determining if a Folder Exists Using xp_cmdshell

The code below creates a stored proc that will report if a passed-in Windows folder exists or not. While it avoids using the undocumented xp_fileexist system proc, it uses xp_cmdshell, which in many places is the first thing that gets disabled because of the obvious security ramifications. Still, it's a nice hack if you're working in the file system a lot.

IF OBJECT_ID('dbo.usp_FolderExist') IS NOT NULL
    DROP PROCEDURE dbo.usp_FolderExist
GO

CREATE PROCEDURE dbo.usp_FolderExist (@FullPathName NVARCHAR(255),
                                      @Exists       TINYINT OUTPUT)
AS
BEGIN
    -- Remove any trailing slash.
    DECLARE @TrailingSlash NVARCHAR(255)
    SET @TrailingSlash = REVERSE(@FullPathName)
    IF '\' = LEFT(@TrailingSlash, 1) 
        SET @TrailingSlash = SUBSTRING(@TrailingSlash, 2, LEN(@FullPathName) + 1)
    SET @FullPathName = REVERSE(@TrailingSlash)

    -- Build and exec the command.
    DECLARE @Cmd NVARCHAR(300)
    SET @Cmd = 'dir ' + @FullPathName
    SET @Exists = 0

    CREATE TABLE #dir (output NVARCHAR(4000))
    INSERT INTO #dir
        EXEC master..xp_cmdshell @Cmd

    IF EXISTS (SELECT * FROM #dir WHERE output LIKE '%' + @FullPathName + '%')
        SET @Exists = 1

    -- Done.
    DROP TABLE #dir
END
GO

/* Test Code
set nocount on
declare @bExists int   set @bExists = 0
exec usp_FolderExist 'C:\Temp',  @bExists OUTPUT
print @bExists
exec usp_FolderExist 'C:\Temp\', @bExists OUTPUT
print @bExists
-- End Test Code. */
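For perspective, the trailing-slash normalization that takes the proc three REVERSE/SUBSTRING statements is a one-liner in most general-purpose languages. A Python sketch (hypothetical, and outside SQL Server entirely; on Windows the separator is the backslash shown here):

```python
import os

def folder_exists(full_path_name):
    """Strip any trailing backslash (the REVERSE/SUBSTRING step), then test the folder."""
    return os.path.isdir(full_path_name.rstrip("\\"))
```

Which is, of course, exactly why T-SQL string work feels like such heavy lifting.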

Sunday, June 20, 2010

Displaying Database and Server Principals

A quick example of using the system views to look at the server principals and the database principals. First, let's look at the database principals:

SELECT  dbp.class_desc                       AS GrantedObjectType,
        CASE dbp.class_desc
            WHEN 'DATABASE'         THEN DB_NAME    (dbp.major_id)
            WHEN 'OBJECT_OR_COLUMN' THEN OBJECT_NAME(dbp.major_id)
            WHEN 'SCHEMA'           THEN SCHEMA_NAME(dbp.major_id)
        END                                  AS GrantedObjectName,
        COL_NAME(dbp.major_id, dbp.minor_id) AS GrantedColumnName,
        dbp.state_desc                       AS State,
        dbp.permission_name                  AS Permission,
        dpGrantor.type_desc                  AS GrantorType,
        dpGrantor.name                       AS Grantor,
        dpGrantee.type_desc                  AS GranteeType,
        dpGrantee.name                       AS Grantee,
        dpRole.name                          AS GranteeRole
   FROM sys.database_permissions                                     dbp
   JOIN sys.database_principals                                      dpGrantor
     ON dbp.grantor_principal_id = dpGrantor.principal_id
   JOIN sys.database_principals                                      dpGrantee
     ON dbp.grantee_principal_id = dpGrantee.principal_id
   LEFT JOIN sys.database_role_members                               droGrantee
     ON dbp.grantee_principal_id = droGrantee.member_principal_id
   LEFT JOIN sys.database_principals                                 dpRole
     ON droGrantee.role_principal_id = dpRole.principal_id
   LEFT JOIN sys.objects                                             o
     ON dbp.major_id = o.object_id
  ORDER BY GrantedObjectType,
           GrantedObjectName

The server principals are done in a similar way:

SELECT  svp.class_desc            AS GrantedObjectType,
        SCHEMA_NAME(svp.major_id) AS GrantedObjectName,
        svp.state_desc            AS State,
        svp.permission_name       AS Permission,
        spGrantor.type_desc       AS GrantorType,
        spGrantor.name            AS Grantor,
        spGrantee.type_desc       AS GranteeType,
        spGrantee.name            AS Grantee,
        spRole.name               AS GranteeRole
   FROM sys.server_permissions                                     svp
   JOIN sys.server_principals                                      spGrantor
     ON svp.grantor_principal_id = spGrantor.principal_id
   LEFT JOIN sys.server_principals                                 spGrantee
     ON svp.grantee_principal_id = spGrantee.principal_id
   LEFT JOIN sys.server_role_members                               sroGrantee
     ON svp.grantee_principal_id = sroGrantee.member_principal_id
   LEFT JOIN sys.server_principals                                 spRole
     ON sroGrantee.role_principal_id = spRole.principal_id
   LEFT JOIN sys.objects                                           o
     ON svp.major_id = o.object_id
  ORDER BY GrantedObjectType,
           GrantedObjectName

Thursday, May 27, 2010

ENABLE/DISABLE TRIGGER must follow a semicolon

Can't get an ENABLE or DISABLE TRIGGER statement to even parse, much less execute? This might be the cause.

Due to a Microsoft glitch, ENABLE and DISABLE are not yet keywords. This means that for them to be recognized as the beginning of a statement, the preceding statement must end with a semicolon. Putting the semicolon at the front of the offending line, though, is clearer to me:

;DISABLE TRIGGER dbo.trDoSomething ON MyTable

For more info, see: https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=307937&wa=wsignin1.0

Tuesday, May 4, 2010

Using PIVOT - An (honest!) Simple Example

The concept of the PIVOT keyword is pretty simple: rotate the results so that the rows and columns are transposed. But the syntax is, well, kinda weird. This example uses PIVOT in the simplest way, to make it easier to pick through the code and see what's going on.

First let's create the example table, which contains one row for each girl. In our admittedly skewed sample set, all girls have cats, and there are only five girl names represented: Ann, Bea, Cat, Dot, and Eve. (If anyone knows of a three-letter girl's name beginning with "C", I'd be obliged.)

-- Drop and re-create the table. 
IF OBJECT_ID('dbo.Girl') IS NOT NULL
    DROP TABLE dbo.Girl

CREATE TABLE dbo.Girl (
    GirlID     INT IDENTITY(1,1), 
    Name       NVARCHAR(50),
    CatCount   TINYINT              
)

Now let's add some reasonable (except for one "crazy cat lady", apparently) data to the table.

-- Populate the table.  
INSERT INTO dbo.Girl
    (Name, CatCount)
VALUES
    ('DOT', 9), ('CAT', 3), ('CAT', 1), ('ANN', 3),
    ('ANN', 3), ('BEA', 3), ('EVE', 4), ('EVE', 3),
    ('EVE', 3), ('EVE', 4), ('EVE', 4), ('ANN', 3),
    ('EVE', 3), ('BEA', 5), ('EVE', 3), ('ANN', 1),
    ('EVE', 1), ('EVE', 3), ('EVE', 3), ('CAT', 103),
    ('BEA', 2), ('EVE', 3), ('CAT', 3), ('DOT', 2),
    ('DOT', 4), ('EVE', 3), ('DOT', 9), ('BEA', 2)

One of the common things we do with data like this is to create subtotals by value. In this case, we can find the total number of cats by girl-name.

-- Group by girls' name to see which name has the most cats. 
-- This displays the girls' names as rows.
SELECT Name           AS 'Girl''s Name',
       SUM(CatCount)  AS 'Total Cats'
  FROM dbo.Girl
 GROUP BY Name

The results would look like this:

-- Girl's Name Total Cats
   ANN          10
   BEA          12
   CAT          110
   DOT          24
   EVE          37

But by using the PIVOT keyword, we can also display the data with the girls' names as the columns. If you think about it, that's pretty cool: it turns data into metadata.

-- Get the same data, but with the girls' names as columns. 
SELECT  ANN  AS 'Total Cats: Ann',
        BEA  AS 'Total Cats: Bea',
        CAT  AS 'Total Cats: Cat',
        DOT  AS 'Total Cats: Dot',
        EVE  AS 'Total Cats: Eve'
  FROM
    (SELECT Name,
            CatCount
       FROM dbo.Girl) AS tSource
 PIVOT
    (SUM(CatCount)
     FOR Name IN (ANN, BEA, CAT, DOT, EVE)
) AS tPivoted

The results would look like this:

-- Total Cats: Ann    Total Cats: Bea    Total Cats: Cat    Total Cats: Dot    Total Cats: Eve
   10                 12                 110                24                 37

Of course, this capability comes at a price: you have to explicitly list the five girl names, which means you have to know what the values are in advance. Often, that's not a big problem: for example, the months of the year.
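The "data becomes metadata" idea is easier to see outside SQL. Here's a Python sketch of the same pivot (illustrative only, with a hypothetical miniature of the Girl table): first the GROUP BY view, where names are row data, then the pivoted view, where the very same names become column headers.

```python
# A hypothetical miniature of the Girl table: (Name, CatCount) rows.
rows = [("DOT", 9), ("CAT", 3), ("ANN", 3), ("BEA", 3)]

# The GROUP BY view: one output *row* per name.
totals = {}
for name, cats in rows:
    totals[name] = totals.get(name, 0) + cats

# The PIVOT view: the names stop being data and become *column headers*.
header = sorted(totals)
values = [totals[n] for n in header]
print(header)   # → ['ANN', 'BEA', 'CAT', 'DOT']
print(values)   # → [3, 3, 3, 9]
```

The Python version has the same limitation in reverse: you don't know the column names until you've seen the data, which is exactly why PIVOT makes you list them up front.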

Friday, April 9, 2010

Convert Any Integer Type to a Hex String

Here's a user-defined function to take any integer from a TINYINT to a BIGINT and return it as a hex number in a VARCHAR like '0x01033FA2':

-- Drop any existing function.

IF OBJECT_ID('dbo.Hexadecimal') IS NOT NULL
BEGIN
   RAISERROR('Dropping user-defined function Hexadecimal', 10, 1) WITH NOWAIT, LOG
   DROP FUNCTION dbo.Hexadecimal
END
GO

-- Create function to take any integer input and return hex as a string.

RAISERROR('Creating user-defined function Hexadecimal', 10, 1) WITH NOWAIT, LOG
GO

CREATE FUNCTION dbo.Hexadecimal(@vbInput VARBINARY(255))
RETURNS VARCHAR(18)
AS
BEGIN
 DECLARE @sResult   VARCHAR(18)  SET @sResult   = '0x'
 DECLARE @i         INT          SET @i         = 1
 DECLARE @nInputLen INT          SET @nInputLen = DATALENGTH(@vbInput)
 DECLARE @nChar     INT          SET @nChar     = 0
 DECLARE @nHiInt    INT          SET @nHiInt    = 0
 DECLARE @nLoInt    INT          SET @nLoInt    = 0

 WHILE (@i <= @nInputLen)
 BEGIN
  SET @nChar  = CONVERT(INT, SUBSTRING(@vbInput, @i, 1))
  SET @nHiInt = FLOOR(@nChar / 16)
  SET @nLoInt = @nChar - (@nHiInt * 16)

  -- Append the two hex digits for this byte: high nibble, then low nibble.
  SET @sResult = @sResult +
       SUBSTRING('0123456789ABCDEF', @nHiInt + 1, 1) +
       SUBSTRING('0123456789ABCDEF', @nLoInt + 1, 1)
  SET @i = @i + 1
 END

 RETURN @sResult
END
GO


RAISERROR('Created  user-defined function Hexadecimal', 10, 1) WITH NOWAIT, LOG

/* Testing.
declare @return char(18)  set @return = ''
declare @bit    bit       set @bit    = 1
declare @tiny   tinyint   set @tiny   = 255
declare @small  smallint  set @small  = 32767
declare @int    int       set @int    = 2147483647
declare @big    bigint    set @big    = 9223372036854775807

print '@bit   maximum is: ' + dbo.Hexadecimal(@bit)
print '@tiny  maximum is: ' + dbo.Hexadecimal(@tiny)
print '@small maximum is: ' + dbo.Hexadecimal(@small)
print '@int   maximum is: ' + dbo.Hexadecimal(@int)
print '@big   maximum is: ' + dbo.Hexadecimal(@big)
-- End testing. */
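The nibble arithmetic is the same in any language; here it is in Python for comparison (an illustration only, mirroring the T-SQL loop byte for byte):

```python
def hexadecimal(data: bytes) -> str:
    """Build '0x...' one byte at a time: high nibble, then low nibble."""
    digits = "0123456789ABCDEF"
    result = "0x"
    for byte in data:
        hi = byte // 16                # FLOOR(@nChar / 16)
        lo = byte - (hi * 16)          # @nChar - (@nHiInt * 16)
        result += digits[hi] + digits[lo]
    return result

print(hexadecimal((2147483647).to_bytes(4, "big")))   # → 0x7FFFFFFF
```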

Deleting User Objects in Master Database

This script erases everything in the master database that is not Microsoft-shipped. Be careful: some user objects belong in the master database (database triggers, security objects, for example). Backing up the master database first would be a prudent step.

-- Drops all user objects that have been accidentally created in the master database.
USE master

DECLARE @sName     sysname
DECLARE @sTypeDesc NVARCHAR(1024)
DECLARE @sType     NVARCHAR(1024)
DECLARE @nScore    INT
DECLARE @sSql      NVARCHAR(1024)

DECLARE TheCursor CURSOR FOR
 SELECT name, type_desc
   FROM sys.objects
  WHERE is_ms_shipped = 0

OPEN TheCursor

FETCH TheCursor
 INTO @sName, @sTypeDesc

WHILE @@FETCH_STATUS = 0
BEGIN
 -- The DROP keyword is the last underscore-delimited word of type_desc,
 -- e.g. 'SQL_STORED_PROCEDURE' -> 'PROCEDURE', 'USER_TABLE' -> 'TABLE'.
 SET @sType  = REVERSE(@sTypeDesc)
 SET @nScore = CHARINDEX('_', @sType)
 IF @nScore = 0
  SET @nScore = LEN(@sType) + 1     -- No underscore, e.g. 'VIEW': keep it all.
 SET @sType  = SUBSTRING(@sType, 0, @nScore)
 SET @sType  = REVERSE(@sType)

 SET @sSql = 'DROP ' + @sType + ' ' + @sName
 RAISERROR('%s', 10, 1, @sSql) WITH NOWAIT 
 EXEC sp_sqlexec @sSql

 FETCH TheCursor
  INTO @sName, @sTypeDesc
END

CLOSE TheCursor
DEALLOCATE TheCursor

PRINT 'Done.'
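The REVERSE/CHARINDEX/REVERSE dance above just extracts the last underscore-delimited word of type_desc. In Python terms (illustration only), it's one call:

```python
def drop_keyword(type_desc):
    """Last underscore-delimited word of sys.objects.type_desc."""
    return type_desc.rsplit("_", 1)[-1]

print(drop_keyword("SQL_STORED_PROCEDURE"))   # → PROCEDURE
print(drop_keyword("USER_TABLE"))             # → TABLE
print(drop_keyword("VIEW"))                   # → VIEW
```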

(Practically) Fool-Proof Backups

A script and batch file that practically anyone can run, no matter how non-technical. The batch file prompts you for backup path, server, and instance, and then calls the SQL script.

: -----------------------------------------------------------------------------
: Prompts user for info and calls the SQL script to do the backup.             
: -----------------------------------------------------------------------------
: Copyright 2010  Larry Leonard, Definitive Solutions Inc.                     
:                 http://www.DefinitiveSolutions.com                           
: Copying and distribution of this file, with or without modification, are     
: permitted in any medium without royalty provided the copyright notice and    
: this notice are preserved.  This file is offered as-is, without any          
: warranty.                                                                    
: -----------------------------------------------------------------------------

@echo off
prompt $g
@echo %date% %time%

: -----------------------------------------------------------------------------
: Display free disk space.
: -----------------------------------------------------------------------------

for /f "tokens=3" %%a in ('dir ^ find /i "bytes free"') do set freeMB=%%a
set freeMB=%freeMB:~0,-8%
@echo Disk Space Free: %freeMB% MB

: -----------------------------------------------------------------------------
: Prompt the user.
: -----------------------------------------------------------------------------

set /p TargetServerAndInstance=Server and Instance  ("Enter" for default)? 
if "%TargetServerAndInstance%"=="" set TargetServerAndInstance=UNICORN\SQL2008EXPRESS

set /p BackupFolder=Backup Folder        ("Enter" for default)? 
if "%BackupFolder%"=="" set BackupFolder=C:\TEMP

: -----------------------------------------------------------------------------
: Build command string.  For information on sqlcmd.exe, see:
:    http://msdn.microsoft.com/en-us/library/ms162773(SQL.90).aspx
:    -a    Requests a packet of a different size
:    -S    Server\instance
:    -i    Input script name
:    -o    Output script name
:    -w    Column width
:    -m    All headers are returned with messages
:    -b    sqlcmd exits and returns a DOS ERRORLEVEL value when an error occurs
:    -r    Redirects the error message output to the screen
:    -v    Declare $(variable_name)
:    -V    Lowest severity level sqlcmd reports
: -----------------------------------------------------------------------------

set SqlCmdCommand=sqlcmd.exe
if not "%TargetServerAndInstance%"=="" set SqlCmdCommand=sqlcmd.exe -S %TargetServerAndInstance%
set SqlCmdCommand=%SqlCmdCommand% -r 1 -b -m-1 -V 1 -a 32767 -w 2000
set SqlCmdCommand=%SqlCmdCommand% -i BackupAllDatabases.sql
set SqlCmdCommand=%SqlCmdCommand% -o BackupAllDatabases.txt
set SqlCmdCommand=%SqlCmdCommand% -v BackupFolder=%BackupFolder%

: -----------------------------------------------------------------------------
: Call the backup SQL script.
: -----------------------------------------------------------------------------

if not exist "@BackupFolder" mkdir "@BackupFolder"
if errorlevel 1 goto l_error

@echo on
%SqlCmdCommand%
@echo off
if errorlevel 1 goto l_error

: -----------------------------------------------------------------------------
: Done.
: -----------------------------------------------------------------------------

goto l_done

:l_error
@echo                             *** AN ERROR HAS OCCURRED! ***

:l_done
@echo %date% %time%

Notepad.exe BackupAllDatabases.txt


prompt $p$g

The SQL script backs up all user databases and their logs, and the master and msdb system databases.

-- Called by BackupAllDatabases.bat to backup all databases and their logs.
-- The path set in @sBackupFolder must exist - this script will not create it.
-- Copyright 2010  Larry Leonard, Definitive Solutions Inc.               
--                 http://www.DefinitiveSolutions.com                     
-- Copying and distribution of this file, with or without modification, are
-- permitted in any medium without royalty provided the copyright notice and
-- this notice are preserved. This file is offered as-is, without any warranty.

-- MUST be set as shown to support indexes on computed columns, indexed views.
SET ANSI_NULLS ON                                 -- Deprecated: leave set ON. 
SET ANSI_PADDING ON                               -- Deprecated: leave set ON. 
SET ANSI_WARNINGS ON                              -- No trailing blanks saved. 
SET ARITHABORT ON                                 -- Math failure not ignored. 
SET CONCAT_NULL_YIELDS_NULL ON                    -- NULL plus string is NULL. 
SET NUMERIC_ROUNDABORT OFF                        -- Allows loss of precision. 
SET QUOTED_IDENTIFIER ON                          -- Allows reserved keywords. 

-- These are not, strictly speaking, required, but are generally good practice.
SET NOCOUNT ON                                    -- Minimize network traffic. 
SET ROWCOUNT 0                                    -- Reset in case it got set. 
SET XACT_ABORT ON                                 -- Make transactions behave. 

-- This is the only variable you are required to provide a value for. 
DECLARE @sBackupFolder NVARCHAR(260)
SET     @sBackupFolder = 'C:\Temp'

-- Create a file-system friendly date. 
DECLARE @sDate NVARCHAR(40)
SET @sDate = CONVERT(NVARCHAR(40), GETDATE(), 126)   -- ISO 8601, e.g. 2010-08-17T05:05:00
SET @sDate = REPLACE(@sDate, 'T', '_')
SET @sDate = REPLACE(@sDate, ':', '-')

-- Create the commands. 
DECLARE @sSql NVARCHAR(MAX)

SET @sSql =''
 + 'IF  ''?'' NOT IN (''model'', ''tempdb'') '
 + 'AND ''?'' NOT LIKE ''ReportServer$%'' '
 + ''
 + 'BEGIN '
 + ' RAISERROR(''Starting backup of database ?.'', 10, 1) WITH NOWAIT '
 + ''
 + ' BACKUP DATABASE [?] '
 + '   TO DISK = ''' + @sBackupFolder + '\?_dat_' + @sDate + '.bak '''
 + '   WITH '
 + '   NOFORMAT, '
 + '   NOINIT, '
 + '   NAME = ''? - Full Database Backup - ' + @sDate + ''', '
 + '   SKIP, '
 + '   STATS = 10; '
 + ''
 + ' IF ''?'' NOT IN (''master'', ''msdb'') '
 + ' BEGIN '
 + ' BACKUP LOG [?] '
 + '    TO DISK = ''' + @sBackupFolder + '\?_log_' + @sDate + '.bak '''
 + '    WITH '
 + '    NOFORMAT, '
 + '    NOINIT, '
 + '    NAME = ''? - Full Log Backup - ' + @sDate + ''', '
 + '    SKIP, '
 + '    STATS = 10; '
 + ' END '
 + ''
 + 'END; '

-- Run the commands for each database. 
USE master
EXEC sp_MsForEachDb @sSql

-- Done.
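The file-name-friendly timestamp trick (assuming the CONVERT produces an ISO 8601 string with the 'T' separator) is worth stealing in any language; a Python equivalent for illustration:

```python
from datetime import datetime

def backup_stamp(dt):
    """ISO 8601, then made safe for file names: 'T' -> '_', ':' -> '-'."""
    return dt.isoformat(timespec="seconds").replace("T", "_").replace(":", "-")

print(backup_stamp(datetime(2010, 8, 17, 5, 5, 0)))   # → 2010-08-17_05-05-00
```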

Pretty much anyone should be able to run this.

Adding a Clustered Index - Carefully

This script is useful if you need to add a Primary Key / clustered index on a table, but cannot know if the table already has a clustered index on another column, or if the column you're trying to cluster already has a non-clustered index on it.

-- Constraint_Add_PK_Clustered.sql - Very careful way to add a Primary Key     
--                 backed by a clustered index.  Checks for an existing        
--                 clustered index that was created by a PRIMARY KEY or UNIQUE 
--                 constraint. Checks whether the column is already in use in  
--                 an existing index.                                          
-- Note: In SSMS, press Ctrl-Shift-M to pop a dialog box for entering values.  
-- Copyright 2010  Larry Leonard, Definitive Solutions Inc.                    
--                 http://www.DefinitiveSolutions.com                          
-- Copying and distribution of this file, with or without modification, are    
-- permitted in any medium without royalty provided the copyright notice and   
-- this notice are preserved. This file is offered as-is, without any warranty.

DECLARE @sIndexNameToDrop  sysname
DECLARE @sSql              NVARCHAR(4000)
DECLARE @sMsg              NVARCHAR(440)

-- Check for any existing clustered index that was created by a PRIMARY KEY
-- or UNIQUE constraint.  Must use ALTER TABLE to remove these - can't use
-- DROP INDEX on constraints.

SET @sIndexNameToDrop = NULL

SELECT @sIndexNameToDrop = name
  FROM sys.indexes
 WHERE object_id = OBJECT_ID('<schema_name, sysname, dbo>.<table_name_prefix, sysname, tbl><table_name, sysname, ThisTable>')
   AND type_desc = 'CLUSTERED'
   AND (is_unique_constraint = 1 OR is_primary_key = 1)

IF @sIndexNameToDrop IS NOT NULL
BEGIN
    RAISERROR('Dropping existing PRIMARY KEY or UNIQUE constraint %s on table <table_name_prefix, sysname, tbl><table_name, sysname, ThisTable>', 10, 1, @sIndexNameToDrop ) WITH NOWAIT, LOG

    SET @sSql = 'ALTER TABLE <schema_name, sysname, dbo>.<table_name_prefix, sysname, tbl><table_name, sysname, ThisTable> DROP CONSTRAINT ' + @sIndexNameToDrop
    PRINT @sSql
    EXEC sp_sqlexec @sSql
END

-- If there's a non-clustered index, or a clustered index that was created by 
-- an ADD INDEX, and the column we are wanting to cluster is the *only* column 
-- in the index, we want to drop it - *unless* it's a clustered one, and is 
-- *only* on the column we are wanting to cluster.  Since we're only looking 
-- at indexes with only one column, we don't have to check that the column is 
-- not 'included'.

SET @sIndexNameToDrop = NULL

SELECT @sIndexNameToDrop = si.name
  FROM sys.indexes                    AS si
  JOIN sys.index_columns              AS sic
    ON si.object_id  = sic.object_id
   AND si.index_id   = sic.index_id
  JOIN sys.columns                    AS sc
    ON sic.column_id = sc.column_id
   AND sic.object_id = sc.object_id
 WHERE si.object_id  = OBJECT_ID('<schema_name, sysname, dbo>.<table_name_prefix, sysname, tbl><table_name, sysname, ThisTable>')
   AND NOT (sc.name IN ('<column_name, sysname, Column>') AND si.type_desc = 'CLUSTERED')
   AND (si.is_unique_constraint = 0 AND si.is_primary_key = 0)
   AND 1 =
          (SELECT COUNT(*)
             FROM sys.index_columns
            WHERE object_id    = si.object_id
              AND index_id     = si.index_id
              AND key_ordinal <> 0)

IF @sIndexNameToDrop IS NOT NULL
BEGIN
    RAISERROR('Dropping existing index on table <schema_name, sysname, dbo>.<table_name_prefix, sysname, tbl><table_name, sysname, ThisTable>, column <column_name, sysname, Column>', 10, 1) WITH NOWAIT, LOG

    SET @sSql = 'DROP INDEX ' + @sIndexNameToDrop + ' ON <schema_name, sysname, dbo>.<table_name_prefix, sysname, tbl><table_name, sysname, ThisTable>'
    PRINT @sSql
    EXEC sp_executesql @sSql
END

-- Now add the clustered index.
RAISERROR('Adding <set_as_primary_key, NVARCHAR, PRIMARY KEY> clustered index to <database_name, sysname, MyDB>''s <table_name_prefix, sysname, tbl><table_name, sysname, ThisTable> table', 10, 1) WITH NOWAIT, LOG

ALTER TABLE <schema_name, sysname, dbo>.<table_name_prefix, sysname, tbl><table_name, sysname, ThisTable> WITH CHECK
  ADD CONSTRAINT PK_<table_name, sysname, ThisTable> <set_as_primary_key, NVARCHAR, PRIMARY KEY> CLUSTERED (<column_name, sysname, Column>)
 WITH (PAD_INDEX = ON, FILLFACTOR = <fill_factor, INT, 70>)

See BOL for info on how to create and use templates.

Wednesday, March 24, 2010

Fix for 'USE dbname' Failing Randomly

Have you ever gotten an error message that you just couldn't figure out? A while back, and for about a year, a certain T-SQL script was giving me this error message:

Cannot find database 'SALES' in system tables.

This error message occurred randomly every month or so, with no cause-and-effect pattern that I could see. I couldn't reproduce it, and it didn't fail when I ran the script manually - but every time it happened, it usually happened two or three times in a row. This gave me a series of tiny "windows of opportunity" in which to debug the problem.

After about a year (on and off), I had it narrowed down to one line of code - if I removed this line, the script ran correctly every time:

USE [Sales]

Now, the 'Sales' database was quite demonstrably there. Googling for the error message turned up one (unhelpful) hit. So, I started by dropping and re-creating the database. No joy. Next, I re-installed SQL Server. No joy. I re-imaged the machine, to re-install the operating system. I re-imaged the machine using a different operating system. I tried all of this again on a different machine. No joy. I was stymied.

I had long ago removed all the T-SQL code from this script that wasn't absolutely necessary, so now I started to re-write the script, line by line, using different keywords, constant values, and variable names. (Yes, by now I was on a mission.) Finally, I discovered the cause: it only happened when the script contained the SYSNAME datatype: changing it to NVARCHAR(128) fixed it! Hurray!

But wait: that doesn't make any sense. As BOL points out:

SQL Server 2005 Books Online (November 2008)
... sysname is used for table columns, variables, and stored procedure parameters that store object names. The exact definition of sysname is related to the rules for identifiers; therefore, it can vary between instances of SQL Server. sysname is functionally equivalent to nvarchar(128). (Emphasis added.)

If sysname and nvarchar(128) are "functionally equivalent", why would changing from one to the other fix the error message? Well, a little farther down in BOL, there's this:

In databases that are case-sensitive, or that have a binary collation, sysname is recognized as a SQL Server system data type only if it appears in lowercase.

So, that was the problem: I had used SYSNAME instead of sysname.

Actually, the root cause was having collation specifiers spread all higgledy-piggledy across our databases, tables, and columns - sometimes specified, sometimes not. Somewhere, one of them had a case-sensitive collation, and that's what caused the whole problem.
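A quick way to check whether a database is exposed to this is to look at its collation (case-sensitive collations have "CS" in their names). The second snippet below is the gotcha itself - in a case-sensitive database, the uppercase DECLARE fails with a "Cannot find data type" error, while the lowercase spelling always works:

SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS DatabaseCollation

-- Fails in a case-sensitive database ("Cannot find data type SYSNAME"):
-- DECLARE @ObjectName SYSNAME

-- Always works:
DECLARE @ObjectName sysname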

Tuesday, March 23, 2010

Rant: Table Names Should be Singular!

There are two main schools of thought on how tables should be named: as singular, or as plural. For me, the deciding factor is clear: English pluralizes nouns inconsistently!

Today, for example, I had two tables to create that were identical in all respects (don't ask). So, I created a fair amount of code to create the first table, with its assorted indexes, constraints, triggers, extended properties, etc. So far, so good.

Next, it's just a simple copy-and-paste, followed by a find-and-replace, to create the second table, right? No - not if the first table is named "Equipment", and the second is named "Tractors". Here's the problem.

This is a snippet of the "Equipment" table's code:

    EquipmentID INT,

So let's copy-and-paste and find-and-replace it, to create the "Tractors" table:

    TractorsID INT,

See the problem? "TractorsID"?? That ain't right - "TractorID" is what we want, obviously. But to get it, we have to edit each mistake by hand. So what should have been a five-second task is now an aggravating five-minute task.

The culprit in this example is the noun "equipment": what's the singular of "equipment"? "Equipment"? "Piece of equipment"? "Equip"? Although it's linguistically grotesque, this table would be better named "Equipments", just for the sake of clarity and consistency. Or, another noun, which pluralizes the same way as "tractor" does, should be used: "machine", maybe?

The root of this problem is that there are so many different ways to pluralize English nouns. According to this article, the most common way is to append an "s": one duck, two ducks. Or sometimes it's an "es": one potato, two potatoes. Or maybe an "ies": one pony, two ponies. Possibly a "ves": one calf, two calves.

And we haven't even gotten to the irregular plurals. Some nouns are the same when pluralized: one fish, two fish. Some add an "en": one ox, two oxen. Some add "ren": one child, two children - or, archaically, one brother, two brethren.

Sometimes there's a mid-vowel change: one goose, two geese, one louse, two lice. Also, one crisis, but two crises.

And English has borrowed a lot of words from Latin and Greek, so we have: alumna/alumnae; matrix/matrices; criterion/criteria; phalanx/phalanges; addendum/addenda; hippopotamus/hippopotami; and schema/schemata.

Not to mention loan-words from: Hebrew (seraph/seraphim); French (château/châteaux); and Italian (graffito/graffiti).

Also, my personal favorites, nouns that don't even have a singular form: equipment, clothes, amends, etc.

There's much more, but the point is, name your tables as singular, not plural. Then the whole issue goes away!

Monday, March 22, 2010

Using sp_MsForEachDb and sp_MsForEachTable Together

Ever wanted to run a T-SQL command on every user table in every user database? Here's how to nest the undocumented sp_MsForEachTable system stored procedure inside the undocumented sp_MsForEachDB system stored procedure.

DECLARE @Sql NVARCHAR(2000)

SET @Sql =
      'IF EXISTS (SELECT * FROM sys.databases WHERE name = ''''#'''' AND owner_sid = 0x01) RETURN '
    + 'RAISERROR(''''Reclaiming space on table ' + QUOTENAME('#') + '.?'''', 10, 1) WITH NOWAIT '
    + 'USE ' + QUOTENAME('#') + ' '
    + 'DBCC CLEANTABLE(''''#'''', ''''?'''', 10000) WITH NO_INFOMSGS '

SET @Sql = 'USE ' + QUOTENAME('#') + ' EXEC sp_MsForEachTable ''' + @Sql + ''''

EXEC sp_MsForEachDb @Sql, @replacechar = '#'

The trick is the @replacechar parameter to the sp_MsForEachDb stored procedure: it's how we keep the placeholders for the database and the table separate. If there's an easier way, I'd love to see it.
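If you've never used these undocumented procedures before, here are the simplest possible calls to each one, without any nesting - the default placeholder character is a question mark:

EXEC sp_MsForEachDb 'PRINT ''Database: ?'''

EXEC sp_MsForEachTable 'PRINT ''Table: ?'''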

Sunday, March 21, 2010

Do MDF Files Get Defragged?

When you run a disk defragmenter, such as the built-in Windows one, or a third-party product, do your MDF and LDF files get defragmented if SQL Server is running? I get asked this question fairly often by "reluctant DBAs".

(Be mindful that we're talking about disk-level fragmentation here, not database-level, such as "logical" or "external" index fragmentation - that's another thing altogether.)

First, defragmentation of the mdf and ldf files should - in an ideal world - rarely be an issue. If you remembered to defragment the disk first, and this gave you a large enough empty space to hold the entire ?df file; if you created the database files big enough to handle all the data you'll ever need to store; and if you made sure that "AutoShrink" is turned off - then you wouldn't need to worry about disk fragmentation. That's a lot of ifs, and anyway, the database may grow bigger than you predicted and be forced to auto-expand; that's when disk fragmentation begins. So, we can't really avoid the issue.

Second, will the disk defragmenter work on "open" files, like our ldf and mdf files? Yes. A disk defragger works at a lower level than the file system: it knows nothing of file locks, streams, "open" files, etc., so the fact that the MDF file is open is immaterial.

Third, is it safe? Well, Microsoft hardly ever uses words like "perfectly safe" and "CANNOT" - but in this case, they do:

  • "Given that file defragmentation is integrated in the operating system, defragmenting open files is perfectly safe." ( http://www.microsoft.com/technet/abouttn/flash/tips/tips_083104.mspx )
  • Microsoft's documentation for NtFsControlFile( ) states, "[NtFsControlFile( ) internals] are implemented such that you CANNOT corrupt data on your drive by using them."

Of course, defragmenting can slow any disk-intensive application, including SQL Server. Microsoft says, "I/O activity generated must be considered if continuous high performance is mandatory." ( http://www.microsoft.com/technet/abouttn/flash/tips/tips_083104.mspx ). Some defraggers may detect disk I/O activity and "get out of the way". Generally, though, you’d want to defrag the disk during maintenance hours or low usage times.

So, defragment your hard drive (to create a big empty space), and then expand your database (hopefully forcing the OS to move the mdf and ldf files to that big, empty space). Make sure "AutoShrink" is off. And remember that a big empty database will normally be faster than a small full one.
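Those last steps can be scripted, too. This is just a sketch - the database name (MyDB) and logical file name (MyDB_Data) are placeholders for your own, and the size is whatever your growth projections call for:

-- Pre-grow the data file into the big empty space the defrag just created.
ALTER DATABASE [MyDB]
    MODIFY FILE (NAME = MyDB_Data, SIZE = 10240MB)

-- And make sure the file never shrinks back into fragments.
ALTER DATABASE [MyDB] SET AUTO_SHRINK OFF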

Saturday, March 20, 2010

Is SQL Server's 2005's Exception Handling an Improvement?

At first, I was excited to read about the new TRY...CATCH exception handling in SQL Server 2005, but it feels like it has a serious flaw: only the last error message is returned in the Exception object!

This is really annoying, both during development and in production. For example, if you make a mistake while trying to create a CONSTRAINT:

-- Setup.
IF EXISTS (SELECT * FROM sys.tables WHERE name = 'tblInvoice')
   DROP TABLE tblInvoice

IF EXISTS (SELECT * FROM sys.tables WHERE name = 'tblCustomer')
   DROP TABLE tblCustomer

CREATE TABLE tblCustomer
(
   CustomerID INT
)

CREATE TABLE tblInvoice
(
   InvoiceID INT,
   CustomerID INT
)

Under SQL Server 2000, we get complete error information:

ALTER TABLE tblInvoice
   ADD CONSTRAINT FK_tblInvoice_tblCustomer FOREIGN KEY (CustomerID)
   REFERENCES dbo.tblCustomer(CustomerIDDDDD)    -- Error!

Msg 1770, Level 16, State 0, Line 18
Foreign key 'FK_tblInvoice_tblCustomer' references invalid column 'CustomerIDDDDD' in referenced table 'dbo.tblCustomer'.

Msg 1750, Level 16, State 0, Line 18
Could not create constraint. See previous errors.

But under SQL Server 2005's exception mechanism, the error message for this code is almost useless:

   ALTER TABLE tblInvoice
      ADD CONSTRAINT FK_tblInvoice_tblCustomer FOREIGN KEY (CustomerID)
      REFERENCES tblCustomer (CustomerIDDDDD)

Msg 50000, Level 16, State 42, Line 24
Could not create constraint. See previous errors.
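For completeness, here's a sketch of the TRY...CATCH wrapper that produces that truncated output. The key point is that ERROR_MESSAGE() inside the CATCH block only ever holds the last message - Msg 1770, the one that actually tells you what's wrong, is gone:

BEGIN TRY
   ALTER TABLE tblInvoice
      ADD CONSTRAINT FK_tblInvoice_tblCustomer FOREIGN KEY (CustomerID)
      REFERENCES tblCustomer (CustomerIDDDDD)    -- Error!
END TRY
BEGIN CATCH
   DECLARE @ErrMsg NVARCHAR(2048)
   SET @ErrMsg = ERROR_MESSAGE()    -- Only 'Could not create constraint...' survives.
   RAISERROR(@ErrMsg, 16, 42)
END CATCH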

Microsoft's NaveenP explains ( http://blogs.msdn.com/sqlprogrammability/archive/2006/04/03/567550.aspx ):

"When an error is raised in a some special context, the first message has information about the error, while subsequent messages provides information about the context. This becomes an issue inside tsql try-catch. In the absence of any notion about error collection, the catch block is activated with one of the error messages. As Sql Server 2000 would have set @@error to the last error message, we decided to set error intrinsics (error_message(), etc.) to the last error message inside the catch block. In other words Sql Server 2005 would ignore all but the last error message inside [the] tsql try-catch... Backward compatibility prevents us from mergeing old error messages."

Well, shoot.

Thursday, March 18, 2010

A Clustered Index Does Not Guarantee Order

Learned something the hard way recently: just because a table has a clustered index doesn't mean a SELECT without an ORDER BY will return the rows in order. For example, given a table like this:

CREATE TABLE tblOrder
(
    OrderID INT NOT NULL,
    CONSTRAINT PK_tblOrder PRIMARY KEY CLUSTERED (OrderID)
)

... if you say:

SELECT OrderID
  FROM tblOrder

... then the rows will be returned in undefined (that is, random) order.

"Clustering", I've learned, is a physical construct, not a logical one. The rows are laid out in physical order on the hard drive, but SQL Server is free (in the absence of an ORDER BY clause) to return them in whatever order it thinks most efficient.

As developers, we don't usually see this behavior, because we tend to have fresh, well-maintained tables. In fact, I only discovered it in a customer's database that hadn't had any maintenance done to it in years. So, the bottom line is, as always, "no ORDER BY, no ordering."

Wednesday, March 17, 2010

One TRUNCATE TABLE Can Use Up All Your Locks (in SQL Server 2000)

Here's an example of how you can learn more from bad schemas than good schemas. This happened to me under SQL Server 2000, and I'm wondering if anyone has seen it on later versions. (post hoc - Sankar has demonstrated that this behavior has changed as of SQL Server 2005.)

I was attempting to truncate a table with maybe 100 million rows (don't ask - that's the bad schema part). When I issued the TRUNCATE TABLE command, however, I got:

Error: 1204, Severity: 19, State: The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configuration.

What I've learned from http://support.microsoft.com/kb/257345 is that "almost all of the locks acquired for a TRUNCATE statement are for extent deallocations". I didn't even know TRUNCATE TABLE used locks, but it makes sense now, of course. I ended up dropping the table and re-creating it.

Tuesday, March 16, 2010

Transactions Don't Work the Way You Think

(Just a little note for myself; sometimes I want to explain XACT_ABORT, but I never have the example fresh in my mind.)

We were all trained that a transaction is an "all-or-nothing" thing: either all the statements succeed and are committed, or none are. By default, though, it doesn't work that way in SQL Server:

SET XACT_ABORT OFF     -- This is the default.
--SET XACT_ABORT ON    -- This is NOT the default.

CREATE TABLE DatabaseProperty
(
    sName  NVARCHAR(100),
    sValue NVARCHAR(100)
)

INSERT INTO DatabaseProperty
    (sName, sValue)
VALUES
    ('Fred', 'This is tooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo long!')    -- Fails: longer than the column.

INSERT INTO DatabaseProperty
    (sName, sValue)
VALUES
    ('Barney', 'Will be added if XACT_ABORT is OFF, which is the default!')


This is why XACT_ABORT is usually set to ON, instead of the default (OFF).
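For contrast, here's a sketch of the same idea with XACT_ABORT ON and an explicit transaction; it assumes the DatabaseProperty table's sValue column is too short to hold 500 characters. The first INSERT's error aborts the entire batch and rolls the transaction back, so neither row is inserted:

SET XACT_ABORT ON

BEGIN TRANSACTION

INSERT INTO DatabaseProperty (sName, sValue)
VALUES ('Fred', REPLICATE('o', 500))    -- Fails, aborts the batch, rolls back.

INSERT INTO DatabaseProperty (sName, sValue)
VALUES ('Barney', 'Never gets inserted when XACT_ABORT is ON.')

COMMIT TRANSACTION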

Monday, March 15, 2010

Using SSMS Templates for Fun and Profit!

The "best practices" concept is truly a great thing, especially when applied to T-SQL code. But does your best practices process amount to (a) carefully writing down the things you know you should do, and then (b) not having time to do them? If so, my next few posts will describe an easy way to capture and re-use your T-SQL best practices, saving time, improving code, and reducing bugs. No, really.

In this post, I'll walk through how to use one of the least-appreciated features of SQL Server Management Studio: the "Template Library." Future posts will teach you how to leverage SSMS Templates to create a flexible, personal "toolbox" of your debugged, error-checked, optimized, commented, and otherwise perfected T-SQL code snippets. (Post hoc: You can now download my library of templates from http://sqlsoundings.blogspot.com/p/sql-server-templates.html )

What's a Template?

Templates are simply Notepad-editable text files that have an "sql" extension: "SQL scripts", in other words. What makes them special is that they contain zero or more parameters, and reside in certain "well-known" folders.

Ok, What's a Parameter?

This is one of those things that's easier to demonstrate than explain. Here's a very simple line of code with a parameter in it:

DROP TABLE <table_name, sysname, your_table_name>

As you can see, a parameter is just a parameter_name, a data_type, and a default_value, surrounded by angle brackets. Actually, it's even easier than that: the parameter_name can be made readable by using upper- and lower-case letters, spaces, and (most) punctuation; the data_type doesn't actually do anything; and the default_value is optional. So this is also a legal parameter:

DROP TABLE <Table Name,,>

Of course, including a default_value is usually a good idea, if only to jog your memory. Even better, since data_type doesn't do anything (I call this "meekly"-typed), we could conceivably use it for documentation:

DROP TABLE <Table Name, Must begin with "tbl"!, tbl>

So far, using Templates might look like more work than it's worth. And it might be, except... Microsoft has written a bunch of them for you!

So How do I Use a Template?

To see the available Microsoft-authored Templates, we use the "Template Explorer" window, via the "View, Template Explorer" menu item. The familiar tree-of-folders-and-files control appears, with the Templates arranged into folders based on database object type. For example, here's the Create Unique Nonclustered Index Template from the Index folder:

-- ===================================
-- Create Unique Nonclustered Index
-- ===================================
USE <database_name, sysname, AdventureWorks>

CREATE UNIQUE NONCLUSTERED INDEX <index_name,sysname,AK_EmployeeAddress_rowguid> 
ON <schema_name,sysname,HumanResources>.<table_name,sysname,EmployeeAddress> 
(
 <column_name,sysname,rowguid> ASC
)
ON <file_group,,[PRIMARY]>

EXEC sys.sp_addextendedproperty 
 @value=N'<description_index,string,Description of index>' ,

As you can see, there are several parameters in this Template; if there's a limit, I've never run into it. It's important to understand that the parameters describe simple text-replacement: they know nothing of T-SQL. This means they can be used to parameterize anything: database object names, text inside quotes, text in comments, portions of object names, fragments of T-SQL code, etc.

Let's see how you use this Template to create a unique nonclustered index. First, open a new (or existing) SQL file. Now drag-and-drop the Create Unique Nonclustered Index Template from the Index folder to the desired location in the editor window; this pastes the Template's contents at the drop location.

Now comes the fun part: press the Ctrl-Shift-M chord, or use the Query, Specify Values for Template Parameters menu item. This will cause SSMS to pop this dialog:

This dialog is what makes Templates so useful. There's one row for each parameter, and the columns are the variable name, data_type, and default_value for each one. To replace all the parameters in your SQL document, you enter values (or accept the defaults) for each row. (Naturally, the left and center columns in this dialog are read-only.) Once you're happy with the values you've entered, click the "OK" button, and they will be substituted for the Template parameters, resulting in T-SQL code like:

-- ===================================
-- Create Unique Nonclustered Index
-- ===================================
USE AdventureWorks

CREATE UNIQUE NONCLUSTERED INDEX AK_EmployeeAddress_rowguid 
ON HumanResources.EmployeeAddress 
(
 rowguid ASC
)
ON [PRIMARY]

EXEC sys.sp_addextendedproperty 
 @value=N'Supports the Employee Address Report for Marketing - see Pam for details' ,

Some things to note about this dialog box:

  • Each variable-name appears once, no matter how many times it's used in the Template's T-SQL code.
  • Don't allow two parameters to have the same variable-name but different data-types, because it won't work: only the first parameter for a variable-name appears in the dialog.
  • You can use the tab key to navigate, but it's faster to use the down arrow after entering each value.
  • Drag-and-dropping copies the contents of the Template to the editor window.
  • Double-clicking opens a new window, and copies the contents of the Template to the editor window. However, this is not how you edit a Template.
  • To edit a Template, right-click on it, and select the "Edit" menu item.
  • This is, for some reason, a modal dialog, so be sure to have whatever text you need available to you (on the clipboard, in an open text file, etc.) before popping it.
  • Be sure not to use extra commas or angle brackets in a parameter, as this usually confuses the parser.

Where are the Templates?

This can be a little confusing because of the way SSMS behaves. There are two locations for the Microsoft-authored Templates: a "factory-installed" one for SSMS to use, and a user-specific one:

  • The original, "factory-installed" copies can be found in C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\SqlWorkbenchProjectItems\Sql for SQL Server 2008, or in C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\SqlWorkbenchProjectItems\Sql for SQL Server 2005. The Templates in these folders cannot be changed via SSMS; they will always be preserved in their original state.
  • The location of the local, user-specific copies depends on your version of SQL Server and your operating system. Using the %APPDATA% Windows environment variable (which makes it operating system independent), and knowing that SQL Server 2005 is "version 9", and "SQL Server 2008" is "version 10", we can represent the local Template folder's name compactly as:
    %APPDATA%\Microsoft\Microsoft SQL Server\{SQL Version}0\Tools\Shell\Templates\Sql

The difficult thing to remember (for me, at least) is that what you're seeing in the "Template Explorer" window is your local files, not the original ones. So, edits to these Templates will be saved to your local Template folder.

Every time it runs, SSMS makes sure that each user has a copy of every Microsoft-authored Template. It does this by comparing the files and folders in the original folder to each user-specific folder; if any are missing from the user-specific folder, it copies them over from the original folder.

The trick is that the comparison is on existence only: the actual contents of the files are not examined. This allows SSMS to install (and repair) from the original Templates, but also lets the user modify their local copies without SSMS overwriting them with the original files at the next program run.

So, if you edit a Template, your changes will remain forever. But, if you delete a Template, you'll find it's re-appeared at the next program run. (If you delete both the original and local Templates - which you should not do - SSMS won't be able to perform this copy, because it won't be able to find the file.)


Using SSMS Templates is less error prone than typing code by hand, faster than using the GUI, and makes it possible to add documentation while it's still fresh in your mind. The only thing better would be if you could create your own Templates. That will be the subject of a future post.

Sunday, February 28, 2010

Do your Clustered Indexes on IDENTITYs Have the Correct FILLFACTOR?

The great thing about blogging is that it forces you to be as sure as you can that you know what you're talking about. That alone makes it worthwhile...

I started this FILLFACTOR post a few days ago with a completely different goal in mind. After I did my due diligence, however - perusing Books OnLine, seeing what's in my "dead trees" library, and reading my favorite gurus' blogs - I realized that the "tip" I was preparing to demonstrate was completely wrongheaded.

In the process, I learned a lot about setting FILLFACTOR values. One thing I learned was that clustered indexes on IDENTITY columns should almost always have their FILLFACTOR set to 100% (completely full - no room for expansion). The reasoning has to do with page splits, which are expensive, especially on a clustered index.

Since IDENTITY columns are monotonically increasing, INSERTs will always be applied to the end of the index, so an INSERT will never cause a page split. UPDATE-ing a record can cause a page split, but only if a variable-length field (like an NVARCHAR) is updated with more data than it had before. Since the page is already 100% full, there's nowhere to put the extra data, so the page splits.

Can we avoid page splits like this by setting the FILLFACTOR to something less than 100% - that is, leave some empty space on each page? Sure, but, as always, there's a downside. Database "reads" typically outnumber "writes" by a factor of n to 1, right? Whether n is 5 or 50 depends on who you believe, and the unique characteristics of each database, and certainly changes over time; so it's a little vague, but in any case, the typical OLTP database does a lot more reads than writes. That's important, because setting the FILLFACTOR to less than 100% can make reads much slower: that's the downside.

To see why this is, imagine you have a table with 1000 rows, and 50 rows fit on an 8 KB page. With a FILLFACTOR of 100%, every page is full (by definition), so the table requires 1000 / 50 = 20 pages. So, a scan of this table will require, at most, 20 reads.

Now imagine you've set the FILLFACTOR to 80%, leaving each page 20% empty after you REBUILD it. This leaves lots of room for growth, so we'll reduce the number of page splits. And that's a good thing.

But now, only 40 rows (80% of 50 rows) will fit on each page, so the table now requires 1000 / 40 = 25 pages. A scan of the table will now require, at most, 25 reads. You've just decreased read performance by, in the worst case, 1 - (20 / 25) = 20%. Oops.
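The same arithmetic works for any FILLFACTOR. Here's a quick way to tabulate it, using SQL Server 2008's row constructors (the 1000-row, 50-rows-per-page figures are the assumptions from above):

SELECT f.FillFactorPct,
       CEILING(1000.0 / (50.0 * f.FillFactorPct / 100.0)) AS PagesNeeded
  FROM (VALUES (100), (90), (80), (70)) AS f(FillFactorPct)

-- Returns 20, 23, 25, and 29 pages, respectively.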

So, to be sure all the tables that are clustered on an IDENTITY column have a FILLFACTOR setting of 100%, I've written a script, Index_ClusteredOnIdentity_SetFillFactorTo100_All.sql . The script first finds the tables to be rebuilt, using a cursor on this statement:

SELECT OBJECT_SCHEMA_NAME(i.object_id)   AS 'Schema Name',
       OBJECT_NAME(i.object_id)          AS 'Table Name',
       i.name                            AS 'Index Name'
  FROM sys.indexes                  i
  JOIN sys.index_columns            ic
    ON ic.object_id = i.object_id
   AND ic.index_id  = i.index_id 
  JOIN sys.columns                  c
    ON c.object_id  = ic.object_id
   AND c.column_id  = ic.column_id
  JOIN sysindexes                   si
    ON si.id        = i.object_id
   AND si.indid     = i.index_id
 WHERE i.object_id                     > 100
   AND OBJECT_SCHEMA_NAME(i.object_id) != 'sys'
   AND i.is_disabled                   = 0 
   AND i.type_desc                     = 'CLUSTERED'
   AND c.is_identity                   = 1
   AND si.origfillfactor               NOT IN (0, 100)

This should be pretty self-explanatory. The system tables are joined in the usual way to get at the index's column(s). We restrict the result set to exclude system tables and objects in the "sys" schema that can be returned in certain cases. We don't REBUILD disabled indexes, because rebuilding also enables them, a side-effect you may not expect. We of course only want clustered indexes on IDENTITY columns. Finally, there's no point in rebuilding an index that already has the right FILLFACTOR.

As we iterate through the resultset, we build up a statement like:

ALTER INDEX PK_Product_ProductID
   ON Production.Product
   REBUILD WITH (FILLFACTOR = 100)
Rebuilding a clustered index can be a time-intensive process, but at least, as BooksOnLine says, "Rebuilding a clustered index does not rebuild associated nonclustered indexes unless the keyword ALL is specified" - and we're not specifying it.

Tuesday, February 16, 2010

Find and Fix NOT NULL Columns Without DEFAULT Constraints

The latest and greatest version of my FindAllNotNullColumnsWithNoDefaultConstraint.sql script (formerly, embarrassingly, FindAllNonNullableColumnsWithNoDefaultConstraint.sql) is now available. It returns the DML code to add DEFAULT constraints to all NOT NULL columns that don't already have them. Useful on legacy databases to head off problems before they happen.

This is a work in progress, so the usual caveats apply: do not run this on any database you want to keep, for example, a production database, or any database that isn't backed up. Comments welcome!

Thursday, February 11, 2010

Script Produces DML to Fix Index Problems

(Getting the hang of this blogging thing... Rather than post an entire SQL script as a blog post, I've created a page for all my scripts. I'll add a blog post whenever a script is added or changed.)

The latest and greatest version of my IndexDemon.sql script (formerly, Index_Analysis.sql) is now available. It examines all the indexes in the current database and emits DML code to correct any problems it finds. It makes no changes to your database itself - you have to copy and paste the emitted SQL - seemed safer.

This is a work in progress, so the usual caveats apply: do not run this on any database you want to keep, for example, a production database, or any database that isn't backed up. Comments welcome!

How Do You Begin a SQL Script?

Have you ever noticed that, regardless of where you work, there's always some kind of "standard header" for SQL scripts? Sometimes, it's a formal, documented set of T-SQL commands, with clear goals and a well thought out implementation.

More often, though, it's like that Star Trek episode where everyone's forgotten the meaning of the Preamble to the United States Constitution, but can still kinda grunt the words. In the same way, a few lines of T-SQL, copied from file to file to file, can devolve into, essentially, a religious ritual. ("I don't know what all this does, but our other scripts do it, so I'd better too!")

After getting bitten by a misuse of SET QUOTED_IDENTIFIER, I sat down to research the issue. Before discussing the code, here's what I came up with; this is the code I use, with occasional modifications, to start most of my scripts:

-- MUST be set as shown to support indexes on computed columns, indexed views. 
SET ANSI_NULLS ON                                 -- Deprecated: leave set ON. 
SET ANSI_PADDING ON                               -- Deprecated: leave set ON. 
SET ANSI_WARNINGS ON                              -- No trailing blanks saved. 
SET ARITHABORT ON                                 -- Math failure not ignored. 
SET CONCAT_NULL_YIELDS_NULL ON                    -- NULL plus string is NULL. 
SET NUMERIC_ROUNDABORT OFF                        -- Allows loss of precision. 
SET QUOTED_IDENTIFIER ON                          -- Allows reserved keywords. 

-- These aren't, strictly speaking, required, but are generally good practice. 
SET NOCOUNT ON                                    -- Minimize network traffic. 
SET ROWCOUNT 0                                    -- Reset in case it got set. 
SET XACT_ABORT ON                                 -- Make transactions behave. 

IF DB_NAME() IN ('master', 'tempdb', 'model', 'msdb')                  RAISERROR('   $$$ YOU ARE ATTACHED TO A SYSTEM DB $$$',   20, 1) WITH NOWAIT, LOG
IF @@MICROSOFTVERSION / 0x01000000 < 10                                RAISERROR('   $$$ SQL SERVER 2008 OR LATER REQUIRED $$$', 20, 1) WITH NOWAIT, LOG
IF @@TRANCOUNT <> 0                                                    RAISERROR('   $$$ OPEN TRANSACTION EXISTS $$$',           20, 1) WITH NOWAIT, LOG

Let's divide this code into three sections and examine them one at a time.

Standard SET commands

This section was easy to develop, because all of these settings are required if you want to have indexed views (or indexes on computed columns) in your database. I came to that conclusion by researching each command independently; only when I was finished did I notice this in BOL 2008:

ANSI_NULLS is one of seven SET options that must be set to required values when dealing with indexes on computed columns or indexed views. The options ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, QUOTED_IDENTIFIER, and CONCAT_NULL_YIELDS_NULL must also be set to ON, and NUMERIC_ROUNDABORT must be set to OFF.
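If you want to verify that a session actually has all seven values in effect, you can decode them from the @@OPTIONS bitmask. A quick sketch, using the bit values documented in BOL under @@OPTIONS:

```sql
-- Decode the session's current SET options from the @@OPTIONS bitmask.
-- Bit values per BOL: ANSI_WARNINGS = 8, ANSI_PADDING = 16, ANSI_NULLS = 32,
-- ARITHABORT = 64, QUOTED_IDENTIFIER = 256, CONCAT_NULL_YIELDS_NULL = 4096,
-- NUMERIC_ROUNDABORT = 8192.
SELECT    'ANSI_WARNINGS'                 AS SetOption,
          CASE WHEN @@OPTIONS & 8    <> 0 THEN 'ON' ELSE 'OFF' END AS Setting
UNION ALL SELECT 'ANSI_PADDING',            CASE WHEN @@OPTIONS & 16   <> 0 THEN 'ON' ELSE 'OFF' END
UNION ALL SELECT 'ANSI_NULLS',              CASE WHEN @@OPTIONS & 32   <> 0 THEN 'ON' ELSE 'OFF' END
UNION ALL SELECT 'ARITHABORT',              CASE WHEN @@OPTIONS & 64   <> 0 THEN 'ON' ELSE 'OFF' END
UNION ALL SELECT 'QUOTED_IDENTIFIER',       CASE WHEN @@OPTIONS & 256  <> 0 THEN 'ON' ELSE 'OFF' END
UNION ALL SELECT 'CONCAT_NULL_YIELDS_NULL', CASE WHEN @@OPTIONS & 4096 <> 0 THEN 'ON' ELSE 'OFF' END
UNION ALL SELECT 'NUMERIC_ROUNDABORT',      CASE WHEN @@OPTIONS & 8192 <> 0 THEN 'ON' ELSE 'OFF' END
```

Note that NUMERIC_ROUNDABORT should come back OFF; the other six should be ON.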

Optional SET commands

These three commands are "defensive" (read, "paranoid") programming; ideally, you'd never have to do things like this, but better safe than unemployed.
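As an example of why SET XACT_ABORT ON earns its keep: without it, many run-time errors abort only the current statement and leave the transaction open; with it, the whole transaction is rolled back. A minimal sketch:

```sql
-- With XACT_ABORT ON, a run-time error terminates the batch and rolls
-- back the open transaction instead of leaving it dangling.
SET XACT_ABORT ON
BEGIN TRANSACTION
SELECT CONVERT(INT, 'not a number')   -- run-time conversion error
COMMIT TRANSACTION                    -- never reached: the batch is aborted
GO
SELECT @@TRANCOUNT                    -- 0: the transaction was rolled back for us
```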

Critical error checks

I like to include these checks to prevent run-time errors; note that I'm raising error level 20, which severs the session's connection, because if any of these errors occurs, I want execution to stop right now.

If you've ever accidentally run your "create a bunch of tables" script against the master database, you'll appreciate the first line. It just checks the current database against the well-known names of the system databases. (Curiously, I couldn't find an "is_system" column in the system views for databases.)
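Since there's no "is_system" flag to test, another option is to rely on the fact that the four system databases always occupy database IDs 1 through 4:

```sql
-- Alternative to matching names: master, tempdb, model, and msdb are
-- always database IDs 1 through 4.
IF DB_ID() <= 4
    RAISERROR('   $$$ YOU ARE ATTACHED TO A SYSTEM DB $$$', 20, 1) WITH NOWAIT, LOG
```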

The second critical error check is a simple SQL Server version check - to me there's nothing worse than downloading a cool script and having it crash because it requires version n + 1.

The final check is just to be sure there's not a transaction left open. This is the line of code I often end up commenting out in production, but during development it's nice to have, as it prevents those annoying "deadlocks" (really, blocking) in SSMS when you forget to close a transaction.
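When you do hit one of those blocked sessions in SSMS, these two commands help track down the culprit:

```sql
SELECT @@TRANCOUNT   -- open transactions in *this* session
DBCC OPENTRAN        -- oldest active transaction in the current database, any session
```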

So, these are what I use - did I miss a favorite of yours?

Friday, February 5, 2010

Use QUOTENAME( )... [or else!]

Lately, I seem to be reading a lot of "bracket-slapping" code:

         + @IndexName  + '] ON [' 
         + @DBName     + '].[' 
         + @SchemaName + '].['
         + @ObjectName + '] REORGANIZE '

This is much easier to read (and easier to type!) than using the QUOTENAME() function:

         + QUOTENAME(@IndexName)  + ' ON ' 
         + QUOTENAME(@DBName)     + '.' 
         + QUOTENAME(@SchemaName) + '.'
         + QUOTENAME(@ObjectName) + ' REORGANIZE '

BOL says that QUOTENAME() "[r]eturns a Unicode string with the delimiters added to make the input string a valid Microsoft SQL Server delimited identifier." So these two code snippets are pretty much interchangeable, right?

But QUOTENAME() would be pretty lame if all it did was slap brackets around a string, wouldn't it? Fortunately (or unfortunately, if you've got a lot of "bracket-slapping" code lying around) it does more: it "escapes" embedded bracket characters. For example, let's take the table name 'Sales YTD [2009]', and try to create it.

-- Demonstrate that QUOTENAME() is safer than bracket-slapping.
DECLARE @Sql      NVARCHAR(4000) = ''
DECLARE @TableOne sysname        = 'Sales YTD [2009]'

-- Expected to fail: table name has spaces, brackets.
SET @Sql = 'CREATE TABLE '  + @TableOne            + ' (x INT)'
PRINT '(1)  ' + @Sql
EXEC sp_executesql @Sql

-- Expected to fail: bracket-slapping doesn't handle brackets.
SET @Sql = 'CREATE TABLE [' + @TableOne            + '] (x INT)'
PRINT '(2)  ' + @Sql
EXEC sp_executesql @Sql

-- Expected to succeed.
SET @Sql = 'CREATE TABLE '  + QUOTENAME(@TableOne) + ' (x INT) '
         + CHAR(10) + '     '
         + 'DROP   TABLE '  + QUOTENAME(@TableOne)
PRINT '(3)  ' + @Sql
EXEC sp_executesql @Sql

The first attempt uses the name with no "decoration"; it's not a legal identifier name, and so will fail. The second attempt uses "bracket-slapping"; the brackets in the table's name will cause this to fail. The third, and correct, method uses QUOTENAME(). Running the code above results in this output:

(1)  CREATE TABLE Sales YTD [2009] (x INT)
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near 'YTD'.
(2)  CREATE TABLE [Sales YTD [2009]] (x INT)
Msg 105, Level 15, State 1, Line 1
Unclosed quotation mark after the character string 'Sales YTD [2009] (x INT)'.
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near 'Sales YTD [2009] (x INT)'.
(3)  CREATE TABLE [Sales YTD [2009]]] (x INT) 
     DROP   TABLE [Sales YTD [2009]]]

Moral of the story: always use QUOTENAME() - unless you enjoy bug hunts!
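One more trick worth knowing: QUOTENAME() takes an optional second parameter specifying the delimiter (brackets, double quotes, or single quotes, per BOL), so it can also escape embedded single quotes when you're building string literals for dynamic SQL. A quick sketch:

```sql
SELECT QUOTENAME('Sales YTD [2009]')   -- [Sales YTD [2009]]]  (default: brackets)
SELECT QUOTENAME('O''Brien', '''')     -- 'O''Brien'           (doubles the embedded quote)
SELECT QUOTENAME('Sales YTD', '"')     -- "Sales YTD"
```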

Monday, January 11, 2010

Script Produces DML to Fix Index Problems (Old)

Here's the first version of my Index_Analysis.sql script, which examines all the indexes in the current database and emits DML code to correct any problems it finds.

This post has been moved to http://sqlsoundings.blogspot.com/2010/02/script-produces-dml-to-fix-index.html .

Thursday, January 7, 2010

My Reading Lists Have Moved!

My reading lists have moved: