Showing posts with label sys.columns. Show all posts

Friday, August 29, 2014

CMS: Find all Databases on all Servers with Table "A" without Column "B"

This CMS (Central Management Server) code will tell you, for all databases in all servers, which tables of a given name don't have a column of a given name.

-- For all databases in all servers that have a "Customer" table, which ones don't have a "ZipCode" column? 
set nocount on

if left(convert(nvarchar, serverproperty('productversion')), 1) <> '8'
begin
    declare @Schema sysname     set @Schema = 'dbo'
    declare @Table  sysname     set @Table  = 'Customer'
    declare @Column sysname     set @Column = 'ZipCode'

    declare @Sql nvarchar(4000)

    set @Sql = 'use [?]

                if exists (select *
                            from sys.tables
                           where name = ''' + @Table + '''
                             and schema_id(''' + @Schema + ''') = schema_id)
                begin
                    if not exists (select *
                                     from sys.columns
                                    where name = ''' + @Column + '''
                                      and object_name(object_id) = ''' + @Table + ''')
                    begin
                        print @@SERVERNAME + '': '' + ''?'' + '' - ' + @Table + ' table has no ' + @Column + ' column''
                    end
                end'

    exec sp_msforeachdb @Sql
end
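The script prints one line per offending database. With hypothetical server and database names, the output looks something like this:

SERVER01: SalesDB - Customer table has no ZipCode column
SERVER02: CrmDB - Customer table has no ZipCode column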

Thursday, May 30, 2013

Find All NOT NULL Columns Without a Default Constraint

Rarely, I'll run into a situation where it would be useful to know which columns declared NOT NULL don't also have a default constraint. For example, I've seen a table with two dozen columns, all but a few declared NOT NULL. That in itself isn't an issue, but on this occasion, for debugging purposes, we were constantly INSERTing a test row of data into the table, and having to supply a value for every NOT NULL column got to be annoying after a while.
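To illustrate, using a hypothetical dbo.Customer table: without DEFAULT constraints, every NOT NULL column must be supplied by hand, but once the defaults are in place, a test row needs only the columns you actually care about.

-- Hypothetical table: every column except LastName is NOT NULL with no DEFAULT.
INSERT dbo.Customer (FirstName, LastName, ZipCode, CreatedDate, IsActive)
VALUES ('Test', 'Test', '00000', GETDATE(), 0)

-- After adding DEFAULT constraints, only the interesting column is needed.
INSERT dbo.Customer (LastName)
VALUES ('Test')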

There are, of course, other ways to skin this cat. We could have created an SSMS template and used the old "Ctrl-Shift-M" key combo to type in just the values we needed. Or we could have written a script, or even a stored procedure, to supply the default values for us. For various reasons, those solutions wouldn't work in our environment, so I decided to attack the problem at its root.

The code below looks at all non-empty tables and prints out T-SQL code to add the default constraint to each column it finds. Note that I've made an attempt to select a reasonable default value for all the common data types, but (a) the list is not complete, and (b) your idea of a "reasonable" default value may be very different from mine. So don't just copy and paste the output from this script and hit F5 - take a moment to read each line and be sure it does what you want; if not, you can easily change it!

-------------------------------------------------------------------------------
-- For all tables in the current database that have rows, finds all "NOT NULL" 
-- columns that don't have a DEFAULT constraint, and emits the ALTER TABLE     
-- statements needed to add the DEFAULT constraints for them.                  
-------------------------------------------------------------------------------

SET NOCOUNT ON

DECLARE @SchemaName      sysname         SET @SchemaName     = ''
DECLARE @TableName       sysname         SET @TableName      = ''
DECLARE @ColumnName      sysname         SET @ColumnName     = ''
DECLARE @ColumnType      sysname         SET @ColumnType     = ''
DECLARE @ConstraintName  sysname         SET @ConstraintName = ''
DECLARE @Sql             NVARCHAR(MAX)   SET @Sql            = ''

DECLARE cur CURSOR FOR
    SELECT SCHEMA_NAME(t.schema_id)    AS 'Schema Name'
         , OBJECT_NAME(c.object_id)    AS 'Table Name'
         , c.name                      AS 'Column Name'
         , ty.name                     AS 'Column Type'
      FROM sys.columns   c
      JOIN sys.tables    t
        ON c.object_id = t.object_id
      JOIN sys.types     ty
        ON c.user_type_id = ty.user_type_id
     WHERE c.is_nullable                  = 0
       AND c.is_identity                  = 0
       AND c.is_computed                  = 0
       AND t.type_desc                    = 'USER_TABLE'
       AND t.is_ms_shipped                = 0
       AND ty.name                       <> 'timestamp' -- rowversion columns can't have DEFAULTs
       AND ISNULL(c.default_object_id, 0) = 0
       AND 0 < (SELECT SUM(row_count)
                  FROM sys.dm_db_partition_stats
                 WHERE object_id = t.object_id
                   AND (index_id = 0 OR index_id = 1))
  ORDER BY 'Schema Name'
         , 'Table Name'
         , 'Column Name'

OPEN cur

FETCH NEXT FROM cur
 INTO @SchemaName, @TableName, @ColumnName, @ColumnType
        
IF @@FETCH_STATUS = 0
BEGIN        
    PRINT 'USE ' + QUOTENAME(DB_NAME())
    PRINT ' ' 
END

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @ConstraintName = QUOTENAME('DF_'+ @TableName + '_' + @ColumnName)
    
    SET @Sql = ''
             + 'PRINT ''Processing: ' + QUOTENAME(@SchemaName) + '.' + @ConstraintName + ''''
             + ' ALTER TABLE ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName) 
             + ' ADD CONSTRAINT ' + @ConstraintName 
             + ' DEFAULT ('
             +
                CASE @ColumnType
                    WHEN 'char'             THEN ''''''
                    WHEN 'nchar'            THEN ''''''
                    WHEN 'varchar'          THEN ''''''
                    WHEN 'nvarchar'         THEN ''''''
                    WHEN 'sysname'          THEN ''''''
                    WHEN 'xml'              THEN ''''''
                    WHEN 'udtUserID'        THEN ''''''
                    WHEN 'text'             THEN ''
                    WHEN 'ntext'            THEN ''

                    WHEN 'bigint'           THEN '0'
                    WHEN 'int'              THEN '0'
                    WHEN 'smallint'         THEN '0'
                    WHEN 'tinyint'          THEN '0'
                    WHEN 'bit'              THEN '0'

                    WHEN 'real'             THEN '0.0'
                    WHEN 'money'            THEN '0.0'
                    WHEN 'smallmoney'       THEN '0.0'
                    WHEN 'float'            THEN '0.0'
                    WHEN 'decimal'          THEN '0.0'
                    WHEN 'numeric'          THEN '0.0'

                    WHEN 'image'            THEN '0x0'
                    WHEN 'binary'           THEN '0x0'
                    WHEN 'varbinary'        THEN '0x0'
                    
                    WHEN 'uniqueidentifier' THEN '0x0'
                    WHEN 'sql_variant'      THEN '0'
                    WHEN 'hierarchyid'      THEN '''/'''
                    WHEN 'geometry'         THEN '0'
                    WHEN 'geography'        THEN '0'

                    WHEN 'datetime'         THEN 'GETDATE()'
                    WHEN 'date'             THEN 'GETDATE()'
                    WHEN 'time'             THEN 'GETDATE()'
                    WHEN 'datetime2'        THEN 'GETDATE()'
                    WHEN 'datetimeoffset'   THEN 'GETDATE()'
                    WHEN 'smalldatetime'    THEN 'GETDATE()'
                    WHEN 'timestamp'        THEN 'GETDATE()'
                    ELSE                         '-1'
                END
                    
             + ')' 
             + ' FOR ' + QUOTENAME(@ColumnName)

    PRINT @Sql    
    
    FETCH NEXT FROM cur
     INTO @SchemaName, @TableName, @ColumnName, @ColumnType
END

CLOSE cur
DEALLOCATE cur
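For a hypothetical dbo.Customer table with a NOT NULL VARCHAR ZipCode column, each emitted line looks like this - it's a progress PRINT and the ALTER TABLE together, on one line:

PRINT 'Processing: [dbo].[DF_Customer_ZipCode]' ALTER TABLE [dbo].[Customer] ADD CONSTRAINT [DF_Customer_ZipCode] DEFAULT ('') FOR [ZipCode]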

Friday, November 4, 2011

How Much Data Is In That Column?

This is a special-purpose script, but it still may be useful to someone, someday.


Imagine you have many tables with similar names and the same column layout. This might happen if the tables are created automatically by a job on a daily basis, for example. For each column you specify, this script displays the declared width, an example value, and the average and maximum number of bytes actually stored, and it does this across all tables matching a LIKE pattern that you also specify.

This script might be useful if you wanted to see the average "fullness" of each column, or how big the widest value is for a given column. This might be good to know if you're planning to copy data from this column to another table, for example.


-------------------------------------------------------------------------------
/* Sample setup code:                                                          
                                                                               
 USE AdventureWorks                                                            
                                                                               
 IF OBJECT_ID('dbo.Invoice_2011_10') IS NOT NULL DROP TABLE dbo.Invoice_2011_10
 IF OBJECT_ID('dbo.Invoice_2011_11') IS NOT NULL DROP TABLE dbo.Invoice_2011_11
 IF OBJECT_ID('dbo.Invoice_2011_12') IS NOT NULL DROP TABLE dbo.Invoice_2011_12
                                                                               
 CREATE TABLE dbo.Invoice_2011_10 (AccountNumber INT, Comment NVARCHAR(50))    
 CREATE TABLE dbo.Invoice_2011_11 (AccountNumber INT, Comment NVARCHAR(50))    
 CREATE TABLE dbo.Invoice_2011_12 (AccountNumber INT, Comment NVARCHAR(50))    
                                                                               
 INSERT dbo.Invoice_2011_10 (AccountNumber, Comment) VALUES (  2, 'ABC')       
 INSERT dbo.Invoice_2011_11 (AccountNumber, Comment) VALUES (234, 'ABCDEF')    
 INSERT dbo.Invoice_2011_12 (AccountNumber, Comment) VALUES (234, 'ABCDEFGHI') 
                                                                               
 -- Now set the values as follows, and run this script.                        
 --     @SchemaName  is 'dbo'                                                  
 --     @TableFormat is 'Invoice_2011%'                                        
 --     @tColumns    should contain 'AccountNumber' and 'Comment' *only*       
                                                                               
                                                                             */
-------------------------------------------------------------------------------

SET NOCOUNT ON


-------------------------------------------------------------------------------
-- These are the variables you need to set.  There are no changes necessary    
-- after this section.                                                         
-------------------------------------------------------------------------------

-- The schema for the tables. 
DECLARE @SchemaName sysname = 'dbo'

-- The LIKE format for all the tables you want to examine. 
DECLARE @TableFormat sysname = 'Invoice_2011%'

-- The set of column names in the set of tables you want to report on. 
DECLARE @tColumns TABLE (ColumnName sysname)

INSERT @tColumns
       (ColumnName)
VALUES
       ('AccountNumber')
     , ('Comment')
        

-------------------------------------------------------------------------------
-- Variables (you don't need to change these).                                 
-------------------------------------------------------------------------------

DECLARE @TableName    sysname       = ''
DECLARE @ColumnName   sysname       = ''

DECLARE @Sql          NVARCHAR(MAX) = ' '                                 + CHAR(10)
                                    + 'SET NOCOUNT ON'                    + CHAR(10)
                                    + 'SET STATISTICS IO OFF'             + CHAR(10)
                                    + ' '                                 + CHAR(10)
                                    + 'DECLARE @tTotals TABLE '           + CHAR(10)
                                    + '    (ColumnName   sysname,'        + CHAR(10)
                                    + '     Example      NVARCHAR(MAX), ' + CHAR(10)
                                    + '     AverageLen   INT, '           + CHAR(10)
                                    + '     MaximumLen   INT, '           + CHAR(10)
                                    + '     DeclaredSize INT) '           + CHAR(10)

-- The *COLUMNAME* is just a placeholder; each column will call REPLACE with 
-- its name from the set you specified above.  Done this way to keep all the 
-- variables you need to change in one place, and at the top of the file. 
DECLARE @ColumnsSQL   NVARCHAR(MAX) = 'QUOTENAME(''*COLUMNNAME*''), '            + CHAR(10)
                                    + '       MIN([*COLUMNNAME*]), '             + CHAR(10)
                                    + '       AVG(DATALENGTH([*COLUMNNAME*])), ' + CHAR(10)
                                    + '       MAX(DATALENGTH([*COLUMNNAME*])), ' + CHAR(10)

DECLARE @ColLengthSQL NVARCHAR(MAX) = ''


-------------------------------------------------------------------------------
-- Process each table that matches the LIKE expression.                        
-------------------------------------------------------------------------------

RAISERROR('All column sizes are in bytes, not characters.', 10, 1) WITH NOWAIT

DECLARE curTables CURSOR FOR
 SELECT name
   FROM sys.tables
  WHERE name LIKE @TableFormat ESCAPE '$'
    AND type_desc = 'USER_TABLE'
    AND schema_id = SCHEMA_ID(@SchemaName)
  ORDER BY name

OPEN curTables
FETCH NEXT FROM curTables INTO @TableName

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Sql += ' '                                                                              + CHAR(10)
             +  'RAISERROR('' '', 10, 1) WITH NOWAIT'                                            + CHAR(10)
             +  'RAISERROR(''Table ' + @SchemaName + '.' + @TableName + ''', 10, 1) WITH NOWAIT' + CHAR(10)

    SET @ColLengthSQL = '       MIN(COL_LENGTH(''' 
                      + @SchemaName + '.' + @TableName 
                      + ''', ''*COLUMNNAME*'')) ' + CHAR(10)
    
    -- Process each column in the set. 
    DECLARE curColumns CURSOR FOR
        SELECT ColumnName
          FROM @tColumns
   
    OPEN curColumns
    FETCH NEXT FROM curColumns INTO @ColumnName
              
    WHILE @@FETCH_STATUS = 0
    BEGIN
    
        -- Some types don't support MIN, etc. 
        IF EXISTS (SELECT *
                     FROM sys.columns   c
                     JOIN sys.types     t
                       ON c.system_type_id = t.system_type_id
                    WHERE t.name IN ('uniqueidentifier', 'bit', 'xml')
                      AND c.object_id = OBJECT_ID(@SchemaName + '.' + @TableName)
                      AND c.name = @ColumnName)
        BEGIN
            RAISERROR('Skipping %s.%s.%s because type is unsupported', 10, 1, @SchemaName, @TableName, @ColumnName) WITH NOWAIT
            FETCH NEXT FROM curColumns INTO @ColumnName
            CONTINUE
        END

        -- Build the INSERT statement. 
        SET @Sql += ' '                               + CHAR(10)
                 +  'INSERT @tTotals (ColumnName, '   + CHAR(10)
                 +  '                 Example, '      + CHAR(10)
                 +  '                 AverageLen, '   + CHAR(10)
                 +  '                 MaximumLen, '   + CHAR(10)
                 +  '                 DeclaredSize) ' + CHAR(10)
                 +  'SELECT '
                 
        SET @Sql += REPLACE(@ColumnsSQL + @ColLengthSQL, '*COLUMNNAME*', @ColumnName)
                 +  '  FROM ' + @SchemaName + '.' + @TableName + CHAR(10)
                 +  ' WHERE ' + QUOTENAME(@ColumnName) + ' IS NOT NULL '  + CHAR(10)
                 
        SET @Sql += ' '                                                                + CHAR(10)
                 +  'RAISERROR(''    Column ' + @ColumnName + ''', 10, 1) WITH NOWAIT' + CHAR(10)
        
        FETCH NEXT FROM curColumns INTO @ColumnName
    END
    
    CLOSE curColumns
    DEALLOCATE curColumns

    FETCH NEXT FROM curTables INTO @TableName
END
    
CLOSE curTables
DEALLOCATE curTables


-------------------------------------------------------------------------------
-- Done with the tables.  Display the results.                                 
-------------------------------------------------------------------------------

SET @Sql += ' '                                                     + CHAR(10)
         +  'RAISERROR('' '', 10, 1) WITH NOWAIT'                   + CHAR(10)
         +  ' '                                                     + CHAR(10)
         +  '  SELECT ColumnName          AS [Column Name], '       + CHAR(10)
         +  '         MAX(Example)        AS [Example], '           + CHAR(10)
         +  '         AVG(AverageLen)     AS [Average Length], '    + CHAR(10)
         +  '         MAX(MaximumLen)     AS [Maximum Length], '    + CHAR(10)
         +  '         MIN(DeclaredSize)   AS [Declared Size] '      + CHAR(10)
         +  '    FROM @tTotals '                                    + CHAR(10)
         +  'GROUP BY ColumnName '                                  + CHAR(10)
         +  'ORDER BY ColumnName '                                  
     
EXEC (@Sql)

-- PRINT is limited to 8000 bytes.  There are better solutions out there. 
DECLARE @Idx INT = 1

WHILE @Idx < LEN(@Sql)
BEGIN
    PRINT SUBSTRING(@Sql, @Idx, 4000)
    SET @Idx += 4000
END
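Using the sample setup from the header comment (three Invoice tables, one row each), the final result set should look like this:

Column Name     Example     Average Length  Maximum Length  Declared Size
--------------  ----------  --------------  --------------  -------------
AccountNumber   234         4               4               4
Comment         ABCDEFGHI   12              18              100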

Friday, April 9, 2010

Adding a Clustered Index - Carefully

This script is useful if you need to add a Primary Key / clustered index on a table, but cannot know if the table already has a clustered index on another column, or if the column you're trying to cluster already has a non-clustered index on it.

-------------------------------------------------------------------------------
-- Constraint_Add_PK_Clustered.sql - Very careful way to add a Primary Key     
--                 backed by a clustered index.  Checks for an existing        
--                 clustered index that was created by a PRIMARY KEY or UNIQUE 
--                 constraint. Checks whether the column is already in use in  
--                 an existing index.                                          
--                                                                             
-- Note: In SSMS, press Ctrl-Shift-M to pop a dialog box for entering values.  
-------------------------------------------------------------------------------
-- Copyright 2010  Larry Leonard, Definitive Solutions Inc.                    
--                 http://www.DefinitiveSolutions.com                          
--                                                                             
-- Copying and distribution of this file, with or without modification, are    
-- permitted in any medium without royalty provided the copyright notice and   
-- this notice are preserved. This file is offered as-is, without any warranty.
-------------------------------------------------------------------------------

DECLARE @sIndexNameToDrop  sysname
DECLARE @sSql              NVARCHAR(4000)
DECLARE @sMsg              NVARCHAR(440)

-- Check for any existing clustered index that was created by a PRIMARY KEY
-- or UNIQUE constraint.  Must use ALTER TABLE to remove these - can't use
-- DROP INDEX on constraints.

SET @sIndexNameToDrop = NULL

SELECT @sIndexNameToDrop = name
  FROM sys.indexes
 WHERE object_id = OBJECT_ID('<schema_name, sysname, dbo>.<table_name_prefix, sysname, tbl><table_name, sysname, ThisTable>')
   AND type_desc = 'CLUSTERED'
   AND (is_unique_constraint = 1 OR is_primary_key = 1)

IF @sIndexNameToDrop IS NOT NULL
BEGIN
    RAISERROR('Dropping existing PRIMARY KEY or UNIQUE constraint %s on table <table_name_prefix, sysname, tbl><table_name, sysname, ThisTable>', 10, 1, @sIndexNameToDrop ) WITH NOWAIT, LOG

    SET @sSql = 'ALTER TABLE <schema_name, sysname, dbo>.<table_name_prefix, sysname, tbl><table_name, sysname, ThisTable> DROP CONSTRAINT ' + @sIndexNameToDrop
    PRINT @sSql
    EXEC(@sSql)
END

-- If there's a non-clustered index, or a clustered index that was created by 
-- an ADD INDEX, and the column we are wanting to cluster is the *only* column 
-- in the index, we want to drop it - *unless* it's a clustered one, and is 
-- *only* on the column we are wanting to cluster.  Since we're only looking 
-- at indexes with only one column, we don't have to check that the column is 
-- not 'included'.

SET @sIndexNameToDrop = NULL

SELECT @sIndexNameToDrop = si.name
  FROM sys.indexes                    AS si
  JOIN sys.index_columns              AS sic
    ON si.object_id  = sic.object_id
   AND si.index_id   = sic.index_id
  JOIN sys.columns                    AS sc
    ON sic.column_id = sc.column_id
   AND sic.object_id = sc.object_id
 WHERE si.object_id  = OBJECT_ID('<schema_name, sysname, dbo>.<table_name_prefix, sysname, tbl><table_name, sysname, ThisTable>')
   AND NOT (sc.name IN ('<column_name, sysname, Column>') AND si.type_desc = 'CLUSTERED')
   AND (si.is_unique_constraint = 0 AND si.is_primary_key = 0)
   AND 1 =
          (SELECT COUNT(*)
             FROM sys.index_columns
            WHERE object_id    = si.object_id
              AND index_id     = si.index_id
              AND key_ordinal <> 0
              AND column_id   <> 0)

IF @sIndexNameToDrop IS NOT NULL
BEGIN
    RAISERROR('Dropping existing index on table <schema_name, sysname, dbo>.<table_name_prefix, sysname, tbl><table_name, sysname, ThisTable>, column <column_name, sysname, Column>', 10, 1) WITH NOWAIT, LOG

    SET @sSql = 'DROP INDEX ' + @sIndexNameToDrop + ' ON <schema_name, sysname, dbo>.<table_name_prefix, sysname, tbl><table_name, sysname, ThisTable>'
    PRINT @sSql
    EXEC(@sSql)
END

-- Now add the clustered index.
RAISERROR('Adding <set_as_primary_key, NVARCHAR, PRIMARY KEY> clustered index to <database_name, sysname, MyDB>''s <table_name_prefix, sysname, tbl><table_name, sysname, ThisTable> table', 10, 1) WITH NOWAIT, LOG

ALTER TABLE <schema_name, sysname, dbo>.<table_name_prefix, sysname, tbl><table_name, sysname, ThisTable> WITH CHECK
  ADD CONSTRAINT PK_<table_name, sysname, ThisTable> <set_as_primary_key, NVARCHAR, PRIMARY KEY> CLUSTERED (<column_name, sysname, Column>)
 WITH (PAD_INDEX = ON, FILLFACTOR = <fill_factor, INT, 70>)

See BOL for info on how to create and use templates.

Sunday, February 28, 2010

Do your Clustered Indexes on IDENTITYs Have the Correct FILLFACTOR?

The great thing about blogging is that it forces you to be as sure as you can that you know what you're talking about. That alone makes it worthwhile...

I started this FILLFACTOR post a few days ago with a completely different goal in mind. After I did my due diligence, however - perusing Books OnLine, seeing what's in my "dead trees" library, and reading my favorite gurus' blogs - I realized that the "tip" I was preparing to demonstrate was completely wrongheaded.

In the process, I learned a lot about setting FILLFACTOR values. One thing I learned was that clustered indexes on IDENTITY columns should almost always have their FILLFACTOR set to 100% (completely full - no room for expansion). The reasoning has to do with page splits, which are expensive, especially on a clustered index.

Since IDENTITY columns are monotonically increasing, INSERTs will always be applied to the end of the index, so an INSERT will never cause a page split. UPDATE-ing a record can cause page splits, but only if a variable-length field (like, an NVARCHAR) is updated with more data than it had before. Since the page is already 100% full, there's nowhere to put the extra data: that causes a page split.

Can we avoid page splits like this by setting the FILLFACTOR to something less than 100% - that is, leave some empty space on each page? Sure, but, as always, there's a downside. Database "reads" typically outnumber "writes" by a factor of n to 1, right? Whether n is 5 or 50 depends on who you believe, and the unique characteristics of each database, and certainly changes over time; so it's a little vague, but in any case, the typical OLTP database does a lot more reads than writes. That's important, because setting the FILLFACTOR to less than 100% can make reads much slower: that's the downside.

To see why this is, imagine you have a table with 1000 rows, and 50 rows will fit on an 8 KB page. With a FILLFACTOR of 100%, every page is full (by definition), so the table requires 1000 / 50 = 20 pages. So, a scan of this table will require, at most, 20 reads.

Now imagine you've set the FILLFACTOR to 80%, leaving each page 20% empty after you REBUILD it. This leaves lots of room for growth, so we'll reduce the number of page splits. And that's a good thing.

But now, only 40 rows (80% of 50 rows) will fit on each page, so the table now requires 1000 / 40 = 25 pages. A scan of the table will now require, at most, 25 reads. You've just decreased read performance by, in the worst case, 1 - (20 / 25) = 20%. Oops.
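You can sanity-check the arithmetic in T-SQL (the numbers are the hypothetical ones from above):

DECLARE @Rows        INT = 1000
DECLARE @RowsPerPage INT = 50    -- rows per 8 KB page at 100% full

SELECT CEILING(1.0 * @Rows / @RowsPerPage)          AS [Pages at 100%]  -- 20
     , CEILING(1.0 * @Rows / (@RowsPerPage * 0.8))  AS [Pages at 80%]   -- 25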

So, to be sure all the tables that are clustered on an IDENTITY column have a FILLFACTOR setting of 100%, I've written a script, Index_ClusteredOnIdentity_SetFillFactorTo100_All.sql. The script first finds the tables to be rebuilt, using a cursor over this statement:

SELECT OBJECT_SCHEMA_NAME(i.object_id)   AS 'Schema Name',
       OBJECT_NAME(i.object_id)          AS 'Table Name',
       i.name                            AS 'Index Name'
  FROM sys.indexes                  i
  JOIN sys.index_columns            ic
    ON ic.object_id = i.object_id
   AND ic.index_id  = i.index_id 
  JOIN sys.columns                  c
    ON c.object_id  = ic.object_id
   AND c.column_id  = ic.column_id
  JOIN sysindexes                   si
    ON si.id        = i.object_id
   AND si.indid     = i.index_id
 WHERE i.object_id                     > 100
   AND OBJECT_SCHEMA_NAME(i.object_id) != 'sys'
   AND i.is_disabled                   = 0 
   AND i.type_desc                     = 'CLUSTERED'
   AND c.is_identity                   = 1
   AND si.origfillfactor               NOT IN (0, 100)

This should be pretty self-explanatory. The system tables are joined in the usual way to get at the index's column(s). We restrict the result set to exclude system objects and anything in the "sys" schema, which can be returned in certain cases. We don't REBUILD disabled indexes, because rebuilding an index also enables it - a side-effect you may not expect. We of course only want clustered indexes on IDENTITY columns. Finally, there's no point in rebuilding an index whose FILLFACTOR is already correct.

As we iterate through the resultset, we build up a statement like:

ALTER INDEX PK_Product_ProductID
   ON Production.Product
   REBUILD WITH (ONLINE = OFF, PAD_INDEX = ON, FILLFACTOR = 100)

Rebuilding a clustered index can be a time-intensive process, but at least, as Books Online says, "Rebuilding a clustered index does not rebuild associated nonclustered indexes unless the keyword ALL is specified" - and we're not specifying it.
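Once the script has run, a quick query against sys.indexes confirms the new setting (the table name is from the example above):

SELECT name, fill_factor
  FROM sys.indexes
 WHERE object_id = OBJECT_ID('Production.Product')
   AND type_desc = 'CLUSTERED'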

Tuesday, February 16, 2010

Find and Fix NOT NULL Columns Without DEFAULT Constraints

The latest and greatest version of my FindAllNotNullColumnsWithNoDefaultConstraint.sql script (formerly, embarrassingly, FindAllNonNullableColumnsWithNoDefaultConstraint.sql) is now available. It generates the DDL code to add DEFAULT constraints to all NOT NULL columns that don't already have them. Useful on legacy databases to head off problems before they happen.

This is a work in progress, so the usual caveats apply: do not run this on any database you want to keep, for example, a production database, or any database that isn't backed up. Comments welcome!

Saturday, December 5, 2009

Find NOT NULL Columns Without DEFAULT Constraints

This is really handy when you have a large legacy database where the previous owner forgot to add DEFAULT constraints to the NOT NULL columns.

This script has been moved to the Schema Analysis scripts page.