Friday, August 23, 2013

Part Time Database Administration

As many of you know, I started a new part-time database administration company a few months back, and it's been going great.  If you need a database administrator but can't justify hiring an FTE, please check us out.  http://www.yourparttimedba.com/p/part-time-sql-server-database.html

-Jeff

Thursday, April 18, 2013

How to tell what tables in a database are compressed

Run this query, then run the output it produces (remember to drop the trailing union from the last line).


select 'select ''' + name + ''' as dbname, st.name, st.object_id, sp.data_compression_desc
from [' + name + '].sys.partitions sp
inner join [' + name + '].sys.tables st on st.object_id = sp.object_id
where sp.data_compression <> 0 union'
from master.sys.databases
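
For a hypothetical database named SalesDB, each generated block of output will look roughly like this (the trailing union on the last one is the piece you drop):

select 'SalesDB' as dbname, st.name, st.object_id, sp.data_compression_desc
from [SalesDB].sys.partitions sp
inner join [SalesDB].sys.tables st on st.object_id = sp.object_id
where sp.data_compression <> 0 union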

Monday, February 11, 2013

Query to identify stored procedure table dependencies

If you're looking to identify which stored procedures depend on a table, the query below will help you find them.  If your stored procedures span multiple databases this query will still pick up the dependencies, but you will need to run it in each database individually.


SELECT d.referencing_id,
    QUOTENAME(s.name) AS referencing_schema,
    QUOTENAME(o.name) AS referencing_object,
    QUOTENAME(d.referenced_database_name) AS referenced_database,
    QUOTENAME(d.referenced_schema_name) AS referenced_schema,
    QUOTENAME(d.referenced_entity_name) AS referenced_entity,
    d.referenced_id,
    o.type_desc
FROM sys.sql_expression_dependencies AS d
INNER JOIN sys.objects AS o ON d.referencing_id = o.[object_id]
INNER JOIN sys.schemas AS s ON o.[schema_id] = s.[schema_id]
WHERE o.type = 'P' --only look at stored procedures
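
If you want to narrow the results to the procedures that touch one particular table, add a filter on the referenced entity name; the table name below is just a placeholder:

--hypothetical example: which stored procedures reference a table named Orders?
SELECT QUOTENAME(s.name) AS referencing_schema,
    QUOTENAME(o.name) AS referencing_procedure
FROM sys.sql_expression_dependencies AS d
INNER JOIN sys.objects AS o ON d.referencing_id = o.[object_id]
INNER JOIN sys.schemas AS s ON o.[schema_id] = s.[schema_id]
WHERE o.type = 'P'
AND d.referenced_entity_name = 'Orders'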

Tuesday, January 29, 2013

How to identify a lead blocker in SQL Server


If you’ve ever logged into your SQL Server instance only to be greeted by hundreds or thousands of active spids, all of which are being blocked, there’s an easy way to identify the lead blocker using the query below.  In some cases there may be multiple lead blockers; this indicates that several spids are running which aren’t being blocked by any other process.  Find the one causing the most harm by looking at last_batch, cpu, and physical_io.

--lead blockers: spids that are blocking others but are not blocked themselves
select distinct blocker.spid, blocker.last_batch, blocker.cpu, blocker.physical_io
from master.dbo.sysprocesses blocker
inner join master.dbo.sysprocesses victim on blocker.spid = victim.blocked
where blocker.blocked = 0
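
If you're on SQL Server 2005 or later, the same idea can also be expressed against the newer DMVs; this is just an alternative sketch, not a replacement for the query above:

--lead blockers via the DMVs: sessions that block someone but are not blocked themselves
select s.session_id, s.last_request_start_time, s.cpu_time, s.reads, s.writes
from sys.dm_exec_sessions s
where s.session_id in (select blocking_session_id from sys.dm_exec_requests where blocking_session_id <> 0)
and s.session_id not in (select session_id from sys.dm_exec_requests where blocking_session_id <> 0)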

Thursday, January 24, 2013

How fast is my SQL Server database growing?


Do you know how quickly your database is growing?  If not, you can get a pretty good idea from how the size of your full backups has grown over time. 

Run the script below, then run the output of that script, and you will see the size of your full backups (in GB) day by day.  Pasting the output into Excel makes it easy to generate a graph.  Overall it’s a quick and dirty way to get some idea of how quickly you’re growing.

use master
go
DECLARE @dbname as sysname
DECLARE @select_string as varchar(4000)
DECLARE @for_string as varchar(4000)
DECLARE @init as bit

DECLARE curs CURSOR FOR
SELECT name FROM sysdatabases

OPEN CURS

FETCH NEXT FROM curs INTO @dbname

SELECT      @select_string = 'SELECT DISTINCT backup_date',
      @for_string = 'FOR database_name IN (',
      @init = 0

WHILE @@FETCH_STATUS = 0
BEGIN
      IF @init = 0
      BEGIN
            SET @init = 1

            SELECT      @select_string = @select_string + ', [' + @dbname + '] as ' + '"' + @dbname +'"',
            @for_string = @for_string + '[' + @dbname + ']'
      END
      ELSE
      BEGIN
            SELECT      @select_string = @select_string + ', [' + @dbname + '] as ' + '"' + @dbname +'"',
                  @for_string = @for_string + ', [' + @dbname + ']'
      END
      FETCH NEXT FROM curs INTO @dbname
END

CLOSE curs
DEALLOCATE curs

PRINT @select_string
PRINT 'from
      (select isnull(backup_size / (1024 * 1024 * 1024), 0) as backup_size,
      database_name,
      cast(cast(backup_finish_date as varchar(12)) as datetime) as backup_date
      from msdb.dbo.backupset
      where type = ''d'') p
pivot
      (
      sum(backup_size)'
PRINT @for_string
PRINT ')) as pvt 
order by backup_date'
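
For two hypothetical databases named SalesDB and StageDB, the printed output would look roughly like the query below; running it returns one row per day with each database's full backup size in GB:

SELECT DISTINCT backup_date, [SalesDB] as "SalesDB", [StageDB] as "StageDB"
from
      (select isnull(backup_size / (1024 * 1024 * 1024), 0) as backup_size,
      database_name,
      cast(cast(backup_finish_date as varchar(12)) as datetime) as backup_date
      from msdb.dbo.backupset
      where type = 'd') p
pivot
      (
      sum(backup_size)
FOR database_name IN ([SalesDB], [StageDB])) as pvt 
order by backup_date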

Monday, January 21, 2013

Odd Behavior Using Partitioned Tables in a Replicated Database With SQL Server 2008

I ran into an odd problem today troubleshooting an error trying to switch a partition in SQL Server 2008.  The error read:
  
  Msg 137, Level 15, State 2, Line 1
  Must declare the scalar variable "@VARIABLE".

This was a particularly odd error, since the variable was indeed declared and the process worked properly in the non-production environment.  After troubleshooting for a while I found that the problem was that you cannot use a variable to specify the partition, as in the statement below, if your database is published for replication:

  ALTER TABLE PartitionTable SWITCH PARTITION @partition TO NonPartitionTable

Microsoft's workaround to this was to use dynamic SQL.  The full KB article on the problem can be found here: http://support.microsoft.com/kb/2002474
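
A minimal sketch of that workaround, using the same placeholder table names as the statement above, looks like this:

--build the SWITCH statement dynamically so the partition number ends up as a literal
DECLARE @partition int = 3  --hypothetical partition number
DECLARE @sql nvarchar(400)

SET @sql = N'ALTER TABLE PartitionTable SWITCH PARTITION '
      + CAST(@partition AS nvarchar(10))
      + N' TO NonPartitionTable'

EXEC sp_executesql @sql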

Monday, January 14, 2013

Create and attach script for a database prior to detaching it


If you are going to detach a database in SQL Server that you’re planning on attaching later, save yourself some trouble and generate the attach script first.  Running the code below will generate a script you can use to attach the database at a later time (assuming the database files stay in the same location they were in when you detached it).  Make sure you’re in the database you’re going to detach when you run this script.


declare @dbname as sysname
declare @n as integer
declare @filename as sysname

set @dbname = db_name()

declare curs cursor for
select rtrim(filename) from sysfiles

open curs

fetch next from curs into @filename
set @n = 0

print '--ATTACH SCRIPT FOR ' + @dbname
print 'exec sp_attach_db @dbname = N''' + @dbname + ''''

while @@fetch_status = 0
begin
      set @n = @n + 1
      --each database file becomes another @filenameN parameter; the leading comma
      --keeps the generated script from ending in a dangling comma
      print '      ,@filename' + cast(@n as varchar) + ' = N''' + @filename + ''''
      fetch next from curs into @filename
end

close curs
deallocate curs
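
As a side note, sp_attach_db is deprecated in newer versions of SQL Server, so if you prefer you can hand-write the equivalent CREATE DATABASE ... FOR ATTACH statement instead; here is a quick sketch with a hypothetical database name and file paths:

CREATE DATABASE MyDatabase
ON (FILENAME = N'D:\Data\MyDatabase.mdf'),
   (FILENAME = N'D:\Data\MyDatabase_log.ldf')
FOR ATTACH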

Saturday, January 12, 2013

Identify Duplicate Indexes

Indexes make things quicker in the database, but there’s not much use in having two copies of the same index.  If you have duplicate indexes they’re likely having a negative effect on your database, since they cause the following problems:
·         Maintaining duplicate indexes increases the overhead associated with inserts, updates, and deletes. 
·         Reindexes/reorgs of duplicate indexes take valuable time out of your maintenance window.
·         Storing two copies of the same thing increases your storage costs.

You can identify duplicate indexes in your database by running the following query (note: you must be in the database you wish to run it on).

with AllIndexes (objectname, indexname, keyno, columnname)
as
(     select so.name as objectname, si.name as indexname, sik.keyno, sc.name as columnname
      from sysindexes si
      inner join sysobjects so on so.id = si.id
      inner join sysindexkeys sik on sik.id = so.id and sik.indid = si.indid
      inner join syscolumns sc on sc.id = so.id and sc.colid = sik.colid
      where so.xtype = 'u')

select objectname, indexname, [1] fld1, [2] fld2, [3] fld3, [4] fld4, [5] fld5, [6] fld6, [7] fld7, [8] fld8, [9] fld9, [10] fld10
into #temp
from (select objectname, indexname, keyno, columnname from AllIndexes) t
pivot (max(columnname) for keyno in ([1], [2], [3], [4], [5], [6], [7], [8], [9], [10])) as pvt

select t1.objectname, t1.indexname as indexname1, t2.indexname as indexname2
from #temp t1
inner join #temp t2 on
      t1.objectname = t2.objectname and
      t1.indexname < t2.indexname and
      isnull(t1.fld1, '') = isnull(t2.fld1, '') and
      isnull(t1.fld2, '') = isnull(t2.fld2, '') and
      isnull(t1.fld3, '') = isnull(t2.fld3, '') and
      isnull(t1.fld4, '') = isnull(t2.fld4, '') and
      isnull(t1.fld5, '') = isnull(t2.fld5, '') and
      isnull(t1.fld6, '') = isnull(t2.fld6, '') and
      isnull(t1.fld7, '') = isnull(t2.fld7, '') and
      isnull(t1.fld8, '') = isnull(t2.fld8, '') and
      isnull(t1.fld9, '') = isnull(t2.fld9, '') and
      isnull(t1.fld10, '') = isnull(t2.fld10, '')


drop table #temp
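
Keep in mind the query only compares the first 10 key columns and doesn't look at included columns or uniqueness, so double-check a pair before dropping anything.  Once you've confirmed two indexes really are duplicates, dropping the redundant one is straightforward (the names below are hypothetical):

DROP INDEX IX_Orders_CustomerId_dup ON dbo.Orders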

Monday, January 7, 2013

Find Fragmented Indexes in SQL Server


You can identify the most fragmented indexes in a particular database by running the query below.  Only indexes in the active database will be analyzed, and the results will be ordered with the most fragmented index listed first.


declare @table_index table (table_name sysname, table_id int, index_name sysname, index_id int, fragmentation float, pagecount int)
declare @table_id as int
declare @index_id as int
declare @fragmentation float
declare @pagecount int

insert into @table_index (table_name, table_id, index_name, index_id)
select o.name as table_name, o.object_id as table_id, isnull(i.name, 'HEAP') as index_name, i.index_id
from sys.objects o
inner join sys.indexes i on o.object_id = i.object_id
where o.type = 'u'

declare curs cursor for
select table_id, index_id
from @table_index

open curs

fetch next from curs into @table_id, @index_id

while @@FETCH_STATUS = 0
begin
      select @fragmentation = avg_fragmentation_in_percent,
      @pagecount = page_count
      from sys.dm_db_index_physical_stats (DB_ID(), @table_id, @index_id,null,'limited')

      update @table_index
      set fragmentation = @fragmentation,
      pagecount = @pagecount
      where index_id = @index_id
      and table_id = @table_id

      fetch next from curs into @table_id, @index_id

end

close curs

deallocate curs

select *
from @table_index
order by fragmentation desc
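
If you'd rather skip the cursor, the same information can usually be pulled in a single set-based pass against sys.dm_db_index_physical_stats; here is a rough sketch:

--set-based alternative: fragmentation for every index (and heap) in the current database
select o.name as table_name,
      isnull(i.name, 'HEAP') as index_name,
      ps.avg_fragmentation_in_percent,
      ps.page_count
from sys.dm_db_index_physical_stats(DB_ID(), null, null, null, 'limited') ps
inner join sys.objects o on o.object_id = ps.object_id
inner join sys.indexes i on i.object_id = ps.object_id and i.index_id = ps.index_id
where o.type = 'u'
order by ps.avg_fragmentation_in_percent desc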

Sunday, January 6, 2013

Identifying the largest HEAPS in your database

Running the script below will list all of the HEAPs in each of your online databases (the largest HEAPs will be listed first).


DECLARE @dbname as sysname
DECLARE @sql as nvarchar(1000)
DECLARE @heaps as table (dbname sysname, objectname sysname, create_dte smalldatetime, size_mb int)

DECLARE curs CURSOR FOR
SELECT name FROM master.sys.databases
WHERE state = 0 --online
and name <> 'tempdb' --exclude tempdb

OPEN curs
FETCH NEXT FROM curs INTO @dbname

WHILE @@fetch_status = 0
BEGIN
      SET @sql = 'USE [' + @dbname + ']' + char(10) + 'SELECT db_name() as dbname, o.name, o.create_date, (ps.in_row_used_page_count * 8096) / (1024 * 1024) as size_mb
      FROM sys.indexes i
      INNER JOIN sys.partitions p on p.object_id = i.object_id and p.index_id = i.index_id
      INNER JOIN sys.dm_db_partition_stats ps on ps.partition_id = p.partition_id
      INNER JOIN sys.objects o on o.object_id = i.object_id
      WHERE i.type = 0
      and o.type = ''U'''

      INSERT INTO @heaps (dbname, objectname, create_dte, size_mb)
      exec sp_executesql @sql

      FETCH NEXT FROM curs INTO @dbname
END

CLOSE curs
DEALLOCATE curs

SELECT *
FROM @HEAPS
order by size_mb desc

Thursday, January 3, 2013

Monitoring Deadlock in SQL Server


SQL Server can passively write deadlock information to the error log.  To do this you can turn on two trace flags:

DBCC TRACEON (1222, -1)
DBCC TRACEON (1204, -1)

Trace flag 1204 returns the resources and types of locks participating in a deadlock and also the current command affected.
Trace flag 1222 returns the resources and types of locks that are participating in a deadlock and also the current command affected, in an XML format that does not comply with any XSD schema.
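
A quick way to turn both flags on at once and confirm they are active is shown below; note that trace flags enabled with DBCC TRACEON are lost when the instance restarts unless they are also added as -T startup parameters.

DBCC TRACEON (1204, 1222, -1)
DBCC TRACESTATUS (-1)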

For more information on trace flags refer to http://msdn.microsoft.com/en-us/library/ms188396.aspx

Wednesday, January 2, 2013

How to get the runtimes of individual steps of an SSIS package


Open up the SSIS package you want to add logging to and follow these steps:
·         Right-click anywhere in the design surface and click on “Logging”
·         In the “Configure SSIS Logs: PROJECT NAME” dialog, on the “Providers and Logs” tab, select “SSIS log provider for SQL Server” and click “Add”
·         For the newly added item in the list box, choose or add a new configuration for the log
·         On the “Details” tab select the “OnPostExecute” and “OnPreExecute” events
·         Click OK to save your changes

The next time you run your SSIS package, a new table named sysssislog will be created in the database that the log configuration points to.  Running the query below in that database will show you the start and end times of the different pieces of your SSIS package.

declare @start_of_run as smalldatetime
set @start_of_run = '01/02/2013 14:47'

select pre.source, pre.starttime, post.endtime, isnull(cast(DATEDIFF(second, pre.starttime, post.endtime) as varchar),'In Progress') as run_time_seconds
from sysssislog pre
left outer join sysssislog post on pre.source = post.source and pre.executionid = post.executionid and post.event ='onpostexecute'
where pre.starttime > @start_of_run
and pre.event = 'onpreexecute'
order by pre.starttime