When CHECK DB found Corruption!

Looking over a new SQL Server I noticed a failing job. It was the ‘Check DB’ part of a maintenance-plan and contained this error message …

Msg 2508, Level 16, State 3, Line 1
The In-row data RSVD page count for object “SomeTable”, index ID 0, partition ID 104586854531027, alloc unit ID 104586854531027 (type In-row data) is incorrect. Run DBCC UPDATEUSAGE.
CHECKDB found 0 allocation errors and 1 consistency errors in table ‘SomeTable’ (object ID 1595868725).

It transpired that this box was once upgraded from SQL-2000 and some steps had been missed.

It was trivial to fix (and not a sign of corruption) …

DBCC CHECKDB WITH DATA_PURITY; -- one-off column-value validation after an upgrade from SQL 2000
GO
EXEC sp_MSForEachDB 'DBCC UPDATEUSAGE(?);'; -- correct the stored page and row counts

Migrate SQL 2000 to SQL 2012

Luckily there was a shiny new (virtual) server waiting to be the backend for this customer’s myriad in-house applications.

The live production server was Standard edition SQL 2000, so I installed SQL Server 2008r2 on my laptop and SQL2012 on the new box – both ‘Standard editions’.

My plan …

– Tidy-up by removing dev and unused databases.
– Run SQL2008r2 upgrade-advisor against the SQL2k box & correct issues
– Backup all SQL2k user databases & drag to SQL2008 box
– Restore them (triggering an auto-upgrade)
– Change compatibility levels from 80 to 100

– Run SQL2012 upgrade-advisor against SQL2008r2 box & correct issues
– Backup all SQL2008 user databases & drag to SQL2012 box
– Restore them (triggering an auto-upgrade)
– Change compatibility level from 100 to 110 (example below the list)
– DBCC CHECKDB WITH DATA_PURITY;
– EXEC sp_MSForEachDB 'DBCC UPDATEUSAGE(?);';

– Capture and apply SQL2k logins to SQL2012
– Rewrite 2x DTS packages as SSIS
– Apply SQL 2012 (SP2)
– Backup all SQL2012 databases
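For the compatibility-level changes, here’s a minimal sketch (SomeDatabase is a placeholder; on SQL 2000 itself the older sp_dbcmptlevel procedure does the same job) …

ALTER DATABASE [SomeDatabase] SET COMPATIBILITY_LEVEL = 100; -- on the 2008r2 box
GO
ALTER DATABASE [SomeDatabase] SET COMPATIBILITY_LEVEL = 110; -- on the 2012 box
GO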

I was able to get one application (and Reports) re-pointed to the new backend immediately; however, the others took ages.

Some of the ‘fun’ issues moving individual applications: some had links to a remote Oracle database, necessitating a 64-bit Oracle client installation, and many had old reporting issues.

To keep the SQL2012 data up to date during the ‘fun’, I ran the SSIS wizard from SQL2012 and pulled data directly from SQL2k.

To Empty a Database

To stop the import of ‘already there’ data (twice) I needed to empty every table. It’s a very, very good idea to back up your databases before using this code. It empties a whole database in one go!

-- empty database
use SomeDatabaseName -- ** change to subject database first & *** CHECK SERVER NAME **
go
EXEC sp_MSForEachTable 'TRUNCATE TABLE [?]' -- note: fails on any table referenced by a foreign key (see later)

Untrusted Keys

Here’s my mod to Aaron Lowe’s @Vendoran script to get rid of untrusted foreign keys and constraints …

--fix_untrusted_keys.sql

SELECT 'ALTER TABLE [' + s.name + '].[' + o.name + '] WITH CHECK CHECK CONSTRAINT [' + i.name + ']' AS keyname
from sys.foreign_keys i
INNER JOIN sys.objects o ON i.parent_object_id = o.object_id
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE i.is_not_trusted = 1 AND i.is_not_for_replication = 0
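The output is one ‘ALTER TABLE … WITH CHECK CHECK CONSTRAINT’ statement per untrusted key – paste the resultset into a new query window and run it to re-validate them.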

Notification Test

Notification emails telling a DBA that something failed are all well and good, but if you don’t get any emails, how can you be sure the notification system is working?

The simplest solutions are often the best. I’ve taken to scheduling an 8am job (on every SQL Server that runs jobs) called ‘Notification Heartbeat’ that contains just an empty step. Its own notification configuration is set to email the ‘DBA’ operator every time it succeeds.

Of course, I then have an Outlook rule that moves them (all with ‘Heartbeat’ in the subject) to the ‘DBA’ folder, so I don’t have to read them every day.

Now when my morning inbox is suspiciously empty – I have a way to verify that the Notification system was tested end-to-end at 8am.

NOTE: I used to try combining the notification-test with actual working Jobs like Backups, but this is simpler and the message value is clearer.

Remove oci.dll

When trying to uninstall Oracle Client (from a Windows server) the standalone deinstall tool couldn’t remove the file “oci.dll”.

To remove it (without rebooting) I tried to manually delete the file. The error message indicated which Windows service I had to stop (first it was locked by ‘SQL Server’, then when I’d stopped that, ‘VMware Tools’, then DCOM, then COM+).

After stopping all the locking services in order – I was able to delete the file.
Lastly I restarted all the services I’d stopped 😉

Removing Oracle Client

Whilst setting up a Linked-Server connection from SQL-2012 to Oracle the installation of “Instant client for Windows x64” failed.

To clean up the mess left behind I downloaded the ‘Oracle Deinstall Tool’ from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html by clicking ‘see all’ next to Windows x64 within the Oracle 11g2 section.

To use the tool, I opened a command-prompt on my windows-server and changed directory to the folder containing ‘deinstall.bat’ from the above download. The command that worked for me was “deinstall -home D:\Oracle\Client_64”.

BTW: the path part of the deinstall command (above) was the ‘Oracle_Home’ location. There are a bunch of ways to find that. I looked in the Oracle Universal Installer.

Finally, I tidied up my local environment Path – cutting and pasting from/into the ‘path’ command – to remove all traces of Oracle.

Cannot truncate table xxx because it is being referenced by a FOREIGN KEY constraint.

When you want to empty a table, and truncate returns this not-very-helpful error message …

TRUNCATE TABLE dbo.SomeTable

Msg 4712, Level 16, State 1, Line 1
Cannot truncate table ‘dbo.SomeTable’ because it is being referenced by a FOREIGN KEY constraint.

Try using delete instead …

begin tran
DELETE dbo.SomeTable
rollback tran

Msg 547, Level 16, State 0, Line 2
The DELETE statement conflicted with the REFERENCE constraint “FK__SomeTable__7BFD7C01”. The conflict occurred in database “SomeDatabase”, table “dbo.SomeTable”, column ‘SomeColumn’.

… for a more helpful message.
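And to see every key doing the referencing up-front, here’s a quick sketch against the catalog views (dbo.SomeTable being the table you want to truncate) …

-- list foreign keys that reference dbo.SomeTable
select fk.name [constraint_name], object_name(fk.parent_object_id) [referencing_table]
from sys.foreign_keys fk
where fk.referenced_object_id = object_id('dbo.SomeTable')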

Dropping & (re)Creating Foreign-Keys

I noticed that when I script-out the dropping and re-creating of a foreign key constraint it would only work once – because the name-portion would be missing …

ALTER TABLE [dbo].[SomeTable] DROP CONSTRAINT [FK__SomeTable_SYS__FRSIn__1BFD2C07]
GO
ALTER TABLE [dbo].[SomeTable] WITH CHECK ADD FOREIGN KEY([SomeColumnName])
REFERENCES [dbo].[SomeOtherTable] ([SomeColumnName])
GO

The solution was to cut and paste the name portion from the DROP statement into the ADD statement …

ALTER TABLE [dbo].[SomeTable] WITH CHECK ADD CONSTRAINT [FK__SomeTable_SYS__FRSIn__1BFD2C07] FOREIGN KEY([SomeColumnName])
REFERENCES [dbo].[SomeOtherTable] ([SomeColumnName])
GO

Refreshing data (mid-migration)

After I migrated some databases from SQL 2000 to SQL 2012 (via SQL2008r2) I was unable to get the front-end applications switched to point to the new backend straight away.

This resulted in the unused SQL 2012 databases gradually becoming out-of-date.

To update them without the pain of repeating the migration I found that I could use the SQL 2012 Import Wizard.

There were three tricky bits to this …

1) to use [Microsoft OLE DB Provider for SQL Server] to connect to the SQL 2000 server
2) to select all Tables but not Views, and
3) to edit the mappings for each table, choosing the radio-buttons [Delete rows in destination Table] and [Enable identity insert]

After this it was simply a matter of saving the output SSIS packages and scheduling them as jobs.

** Update ** … or so I thought. This failed for one database that contained foreign keys. So I had to create a ‘pre update’ step to remove the constraints, then a ‘post update’ step to put them back (sketched below).
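A minimal sketch of those two job-steps – note this disables and re-validates the constraints rather than literally dropping them, which suits the wizard’s DELETE-based loads …

-- 'pre update' job-step: disable every foreign key
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'

-- 'post update' job-step: re-enable and re-validate every foreign key
EXEC sp_MSForEachTable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'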

Database Compare

Here’s a neat script I wrote to list every database with the total number of rows in it.

The idea was to do a quick-n-dirty post-migration checksum between the old sql2000 and new sql2012 databases – to see which ones needed updating.

--database_compare.sql

EXECUTE sp_MSforeachdb 'use ?; DBCC UPDATEUSAGE(0) WITH COUNT_ROWS;'

create table #temp (DBName varchar(100), TotalRows int)
exec sp_msforeachdb 'insert into #temp select ''?'', sum(rows) 
from [?].[dbo].sysindexes i join [?].[dbo].sysobjects o on o.id=i.id
where indid < 2 and type=''U'''

select * 
from #temp
where DBName not in ('master','model','msdb','tempdb')
order by 1--2

drop table #temp
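Incidentally, on the SQL 2012 side the same per-database counts can be had from sys.partitions instead of the legacy sysindexes view – an assumed drop-in replacement for the insert above (it reuses the same #temp table) …

exec sp_msforeachdb 'insert into #temp
select ''?'', sum(p.rows)
from [?].sys.partitions p
join [?].sys.tables t on p.object_id = t.object_id
where p.index_id < 2'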

Is that database used?

Before an upgrade I wanted to remove any unused databases. To help find them I created a generic sql-job that every 10 mins would log connections.

I say ‘generic’ as the job needed to work on SQL2000, SQL2005, and SQL2012. So I created it using SQL-2000 enterprise-manager and scripted it out as ‘createJob_DbaAudit.sql’.

The script in job-step-1 (‘Build Table – if needed’) was …

select	@@ServerName AS Server, 
	db.[name] [Database],
	getdate() [TimeStamp], 
	Hostname, 
	Program_name, 
	Net_library, 
	Loginame
into master.dbo.tbl_connections
from master.dbo.sysdatabases db
join master.dbo.sysprocesses pr on db.dbid = pr.dbid
where db.dbid > 4

… this step was set to continue on to the next step regardless of whether it succeeded or failed.

Step-2 (‘Record Connections’) was similar …

insert into master.dbo.tbl_connections
select	@@ServerName AS Server, 
	db.[name] [Database],
	getdate() [TimeStamp], 
	Hostname, 
	Program_name, 
	Net_library, 
	Loginame
from master.dbo.sysdatabases db
join master.dbo.sysprocesses pr on db.dbid = pr.dbid
where db.dbid > 4

Final notes: To be robust and generic, I did not set-up Notifications. Also, I added a few lines to the top of the ‘CreateJob_DbaAudit.sql’ script to delete the tbl_connections table from master if it already existed.
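Those delete lines at the top would have looked something like this (reconstructed; works on SQL 2000 upwards) …

-- remove any previous capture table before the job (re)creates it
if object_id('master.dbo.tbl_connections') is not null
	drop table master.dbo.tbl_connections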

Uninstall SQL Server without using Add/Remove

The Add/Remove option on my work PC (Windows XP) was restricted. So I removed SQL Server 2005 by finding and running ‘ARPWrapper.exe’. This brings up a maintenance-type box which, after the first few screens, has a screen with 2 big square buttons, the lower one labelled ‘Remove SQL’.

In fact you have to run this executable to completion twice (with a reboot in between) – once for the instance and once for the shared components.

Once complete I deleted the ‘Microsoft SQL Server’ folder from C:\Program Files\.

UPDATE: on SQL 2008r2 the executable is ‘SetupARP.exe’

Dropping all Temp tables

I wanted to drop all my temp tables at the top of a large SELECT statement and already had a query to drop a Named temp-table …

IF OBJECT_ID('tempdb..#Temp2') IS NOT NULL DROP TABLE #Temp2

… so just needed to make that dynamic. Looking at the temp tables …

select * from tempdb.sys.objects where name like '#%'

… I noticed all mine had a bunch of underscores and a GUID appended to the name. And the GUID always contained at least 5 zeros. So I was able to filter out just mine with

select name from tempdb.sys.objects where name like '#%00000%'

… and paste that into a while loop to drop them …

--first drop all temp tables
IF OBJECT_ID('temptables') IS NOT NULL DROP TABLE temptables
select name into temptables from tempdb.sys.objects where name like '#%00000%'

declare @table varchar(max), @cmd varchar(max)
while (select count(*) from temptables) > 0
begin
	set @table = (select top 1 name from temptables)
	set @cmd = 'IF OBJECT_ID(''tempdb..' + @table + ''') IS NOT NULL DROP TABLE ' + @table
	exec(@cmd)
	delete from temptables where name = @table
end

DROP TABLE temptables

By initially creating a list and then working through it deleting the temp-table and its entry in the list, I ensured there would be no endless looping should it try (and fail) to drop a temp-table I don’t have permissions on.

The power of SSIS

When you start SSIS development it seems rather easy to paste some tested t-sql into a box, QED! job done!

However, the real power of SSIS lies in the 2 areas where it outperforms raw t-sql.

1) Server linking: Because it uses a native format, grabbing data from a distant source and applying it somewhere else is quick and robust.

2) There are blinding-fast (in-memory) replacements for the t-sql operations … GROUP BY (called the ‘Aggregate’ transformation), CASE (‘Conditional Split’), CAST (‘Data Conversion’), UPDATE (‘Derived Column’), SOUNDEX (‘Fuzzy Lookup’), and SORT (er, called ‘Sort’).

Simplest Merge example

Here’s the simplest example of a working Merge statement I could make. It’s great to build on, to test the formatting of ‘real’ merges.

drop table d, s -- start clean (errors harmlessly on the very first run)
	
-- destination table containing old data
	create table d (id int, fruit varchar(50), tasty int)
	insert into d values (1,'apple', 5), (2,'orange', 5)
	select * from d

-- source table containing unchanged, updated, and new data
	create table s (id int, fruit varchar(50), tasty int)
	insert into s values (1, 'apple', 5), (2,'orange', 0), (3,'banana', 9) 
	select * from s

--merge statement
	merge d target
	using s source
	on target.id = source.id

	when matched then 
	update set tasty = source.tasty

	when not matched then 
	insert values (id, fruit, tasty);

-- show new destination table
	select * from d
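To build on it: as written the merge never deletes, so destination rows missing from the source survive. Adding the third (standard) clause, sketched here, turns it into a full synchronisation …

	merge d target
	using s source
	on target.id = source.id

	when matched then
	update set tasty = source.tasty

	when not matched then
	insert values (id, fruit, tasty)

	-- also delete destination rows absent from the source
	when not matched by source then
	delete;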

To remove an item from the ‘Recent Projects’ list on the Start-Page of SQL 2008r2 BIDS

– Hover over the ‘Recent Project’ you want to remove.
– Note the path shown at the bottom-left of the screen.
– Go to that path and delete the dtproj or sln file noted previously.
– Left-click the item in BIDS ‘Recent Projects’ and click YES to the message ‘Remove this item blah blah blah …’.

A severe error occurred on the current command. The results, if any, should be discarded.

This worrying error was fixed by naming the columns selected from a linked server, e.g. …

select * from openquery([someserver\someinstance],
'select * from [databasename].[schema].[tablename]')

Msg 0, Level 11, State 0, Line 4
A severe error occurred on the current command. The results, if any, should be discarded.

select * from openquery([someserver\someinstance],
'select [LifeID] from [databasename].[schema].[tablename]')

(111878 row(s) affected)

SSIS Error -1071607685

We had this error number written multiple times into an SSIS errors table. It seems to be one of those Microsoft generic codes.

In this instance it indicated that we were using ‘fast load’ to write to a table, and one (or more) of the rows was failing.

To find the erroneous row(s), and get specific error(s), we changed to ‘normal’ load (called “Table or View”), which is fully-logged.

Fixing SQL Server problems

SQL Server is just another Windows application, and like all Applications it ticks mechanically through its work within its environment.

If your data machine has stopped ticking, or seems about to, you’re going to have to do something about it. Machines don’t heal. So first take some time to think: what changed just before it started playing up?

Or perhaps you know exactly what the problem is – it’s just grinding to a crawl under too heavy a load. If so, resist the urge to simply “fix everything” by throwing every resource available at it and hoping it’ll keep going to the top of the hill. In this case it’s much wiser to find the squeaky wheel before doing anything else.

  1. Have you a good backup? take another.
  2. Categorize the issue as either …
    1. Something went wrong (undo it) or
    2. The database is struggling (tune it).

… and good luck 🙂

How to Start SQL Server Agent via Management Studio

I could not remote onto this customer’s database-server, so I looked at my local machine and found the ‘short name’ for the SQL Agent service was ‘SQLAgent$sql2014’.

I recognized the last bit (‘sql2014’) as my local instance, so was able to surmise this code would work for the customer (if their instance was called ‘SomeInstanceName’).

exec xp_cmdshell 'net start SQLAgent$SomeInstanceName'

** UPDATE **
When a colleague locked up a dev server I was able to free it by restarting the service like this …

xp_cmdshell 'net start' -- to get list of services

xp_cmdshell 'net stop "SQL Server Agent (DEV2008r2)"' -- stop the agent service in preparation for the next step
xp_cmdshell 'net stop "SQL Server (DEV2008r2)"' -- stop the SQL Server service NOTE: double-quotes

xp_cmdshell 'net start "SQL Server (DEV2008r2)"'
xp_cmdshell 'net start "SQL Server Agent (DEV2008r2)"'

Execution Plan – notes

A Clustered Index Scan is similar to a Table Scan. IE: The data is being read row-by-row

A SCAN happens when the optimizer determines that:
1) all/most rows need to be returned (so it would be a waste of time reading the index keys),
2) the index is not selective enough, and the optimizer thinks it needs to read all/most of it,
3) the index statistics are out-of-date, or
4) the query applies functions to a column’s data, obscuring the column from the optimizer.

An (index) SCAN suggests that an index which is normally sufficient is not sufficient at the moment – more data may be being returned than is normally needed – suggesting more filtering may be required, perhaps in the WHERE clause.

A CLUSTERED index seek is beneficial because the leaf level of a clustered index is the table data itself, so the data can be read from the index without a separate lookup against the table.
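Point 4 is worth a tiny illustration. Assuming an index on OrderDate (table and column names here are made up) …

-- likely to SCAN: the function hides the indexed column from the optimizer
select * from dbo.Orders where year(OrderDate) = 2014

-- likely to SEEK: the same filter rewritten as a sargable range
select * from dbo.Orders where OrderDate >= '20140101' and OrderDate < '20150101'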

Notepad++ and hidden characters

Notepad++ is great for finding patterns of text spread over several lines as you can include hidden characters in the search-string (like tab, new-line, etc).

But WATCH OUT!! If you leave a single new-line without a partner carriage-return, SQL scripts will misbehave intermittently.

The FIX – When you’ve completed whatever you’re doing – click Format / Convert to UNIX, followed by Format / Convert to Windows. Then save 🙂

Changing a Table Column to Identity

--AddIdentity.sql

/******* REMEMBER TO SCRIPT-OUT THE TABLE FIRST *******/

--1 delete the old column
Alter Table [dbo].[sometable] 
Drop Column [somecolumn]
Go

--2 if it fails ... drop the primary key (or whatever)
ALTER TABLE [dbo].[sometable]
DROP CONSTRAINT PK_sometable
GO

--3 create the new identity column
Alter Table [dbo].[sometable]
Add [somecolumn] Int Identity(1, 1)
Go

--4 restore the primary key (if dropped)
ALTER TABLE [dbo].[sometable]
ADD CONSTRAINT PK_sometable PRIMARY KEY NONCLUSTERED 
(
	[somecolumn] ASC
)

--5 lastly reset the identity seed
DECLARE @nextid INT;
SET @nextid = (SELECT isnull(MAX([somecolumn]),0) FROM [sometable]);
DBCC CHECKIDENT ('dbo.sometable', RESEED, @nextid);

Odd violation of Primary Key

This error didn’t seem to make sense as the Primary-Key was an identity column – and therefore the Insert that generated the error didn’t include it.

Msg 2627, Level 14, State 1, Line 5
Violation of PRIMARY KEY constraint ‘PK_somecolumn’. Cannot insert duplicate key in object ‘dbo.sometable’.

It transpired that the database had recently been restored in an inconsistent state, and the identity column had lost track of how many rows were in the table, and was trying to insert a Primary-Key value that already existed.

The fix was to refresh the identity column with the biggest value it contained, before the Insert …

DECLARE @nextid INT;
SET @nextid = (SELECT MAX([columnname]) FROM [tablename]);
DBCC CHECKIDENT ([tablename], RESEED, @nextid);

SQLCMD

I had my first play with SQLCMD on a SQL 2012 Development Server on which the evaluation period had expired. Incidentally this meant SSMS would not run locally (along with maint-plan backups), however the services and other jobs were still working, and remote SSMS could connect and interact as normal.

SQLCMD reminds me of Oracle’s SQL*Plus, with the added benefit that you can use the up & down arrow keys to step through history.

-- connect (as current windows user)
sqlcmd -S server\instance
sqlcmd -S .\instance

-- interact
select @@version
go

-- interact over multiple rows
select name
from sys.databases
go

-- clear the command cache (here, discarding a mistyped batch)
select @version
:reset
select @@version
go

-- redirect output to a file
:out c:\output.txt
select @@version
go

-- reset output back to the screen
:out stdout
select @@version
go

-- quit
quit
exit
ctrl-c
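SQLCMD also runs non-interactively, which is handy for scripting – a couple of typical examples (paths are placeholders) …

-- run a single query and exit
sqlcmd -S server\instance -Q "select @@version"

-- run a script file and send the results to a file
sqlcmd -S server\instance -i c:\script.sql -o c:\results.txt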

A little slice of my dba day

A little slice of my dba life 🙂 … during morning-checks I noticed the MSDB database was 2.9GB and growing (in utility explorer). I ran the built-in SSMS report ‘Disk Usage by Table’ and found the table sysjobhistory was by far the largest (at 2.6GB).

I checked for a sql-job that manages job history – there wasn’t one :). I created sql-job ‘Prune Backup History’ with the command ‘delete from msdb.dbo.sysjobhistory where run_date < convert(char(8), getdate()-30, 112)’, and scheduled it to run daily at 5am.

I ran it manually and was disappointed to find the table was still 2.4GB in size. I fired-up SSDT and created a new IS project. I dragged the 'Data profiling Task' to the design surface and configured it to output to \\server\share\msdbprofile. Within 'quickprofile' I connected to the server/database/table.

I saved the package, ran it (made a coffee), then opened the task again and clicked 'open profile viewer'.

Within 'Column Value Distribution Profiles' I found a disproportionate 74% of the run_date values were the same date (about 2 weeks ago). Double-clicking this result I could NOT see the name of the job (in the raw data) but recognised the step-name as belonging to the cdc job I had set-up … about two weeks ago, lol.

I concluded the sql-job was fine, and would dramatically shrink sysjobhistory – in a few weeks time. Cake time 🙂

Running Checktable in batches

To check for corruption on a large, busy database I was able to use my filegroup checking script on all but one filegroup.

To polish off the final (monster) filegroup, I resorted to creating a database-snapshot (in the morning) and running this (checktable) script during the day.

btw: Whilst I realise checking a database-snapshot is of limited use, I reasoned it was better than the only alternative (not checking).

Before leaving work I would review the results and drop the snapshot (for application optimization), in readiness to create a fresh one (and continue) the next day.

--CheckTablesInAFilegroup_vldb.sql

use TheSnapshotName
go

-- capture table-names from the named filegroup - once (re-runs fail into the CATCH block below)
begin try
     select o.name
     into master.dbo.fgtables
     from sys.indexes i
     join sys.filegroups f on i.data_space_id = f.data_space_id
     join sys.all_objects o on i.object_id = o.object_id
     where f.name = 'somefilegroupname'
     order by o.name
end try

-- check tables one-by-one
begin catch
     declare @sql varchar(200)
          while (select count(*) from master.dbo.fgtables) > 0
          begin
               select top 1 @sql = 'dbcc checktable ("dbo.' + name + '")' from master.dbo.fgtables order by name
               exec(@sql)
               delete from master.dbo.fgtables where name = (select top 1 name from master.dbo.fgtables order by name)
          end
end catch

Showing row counts with commas

I wanted to display a changing row-count (in a staging table) including commas (eg: 1,234 or 1,234,567).

I tried using STUFF() in a CASE statement to insert commas, then LEFT(), before settling on this, which is more succinct, and scales from 1 to 1,999,999,999

select replace(convert(varchar(20), convert(money, count(*)), 1), '.00', '')
from dbo.sometable
with(nolock)

To explain how it works …

– the ‘1’ is a CONVERT style that adds commas when converting MONEY to VARCHAR
– then REPLACE removes the ‘.00’ also added during the conversion from MONEY
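From SQL 2012 onwards there’s also FORMAT(), which is tidier still (though reportedly slower over large sets) …

-- SQL 2012+ alternative
select format(count(*), 'N0')
from dbo.sometable
with(nolock)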

Database corruption false alarm

To check for corruption on a very large production database I created a database-snapshot to run CHECKDB against.

This found a corrupt table (“CHECKDB found 0 allocation errors and 4191 consistency errors in table … repair_allow_data_loss is the minimum repair level for the errors found …”).

I tried running CHECKTABLE against the suspect table within the live database, but couldn’t due to lack of space.

Finally I used DBCC CHECKTABLE(sometablename) WITH TABLOCK, so DBCC wouldn’t create a hidden snapshot copy of the database to check against – and run out of space!

This confirmed the live table was fine (yay! ooo cake).

Arithmetic overflow error converting expression to data type int.

This error message is a sign of the times [sigh]. With databases getting bigger all the time ‘INT’ often needs to be replaced with ‘BIGINT’.

For example my ‘database_sizes’ script used to work just fine starting like this …

select round(sum(mf.size)*8/1024/1000,0) [SizeGB]

But now, with Terabyte-sized databases, when mf.size is multiplied by 8 the INT simply overflows! The (ugly) solution is to explicitly convert INT to BIGINT …

select round(sum(convert(bigint, mf.size))*8/1024/1000,0) [SizeGB]

ADDITIONAL

To change a table column from INT to BIGINT, paste any ‘create constraint’ code into a script, then above that create the update statement, then above that work out the drop-constraint code. Here’s an example on a primary-key …

--changeIntToBigint.sql

USE [Production]
GO

--1 remove constraint

ALTER TABLE [dbo].[sometablename] DROP CONSTRAINT [someconstraintname]
GO

--2 change int to bigint

ALTER TABLE  [dbo].[sometablename] ALTER COLUMN [somecolumnname] bigint not null

--3 re-create constraint

ALTER TABLE [dbo].[sometablename] ADD  CONSTRAINT [someconstraintname] PRIMARY KEY CLUSTERED 
(somecolumnname)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [Filegroup1]
GO

Resize Datafile

Here’s a simplified version of my previous datafile-resizing helper script. I tend to resize most datafiles to 30% free on Friday afternoons, for resilience over the weekend 😉

--datafile_resize.sql

create table #filestats 
(fileid int, fg int, AllocatedSpace int, UsedSpace int, dfile varchar(200), DataFile varchar(200))

insert into #filestats exec('dbcc showfilestats')
update #filestats set AllocatedSpace = AllocatedSpace * 0.0625, UsedSpace = UsedSpace * 0.0625 -- extents to MB (1 extent = 64KB)

select DataFile, AllocatedSpace, UsedSpace,
     cast(cast(AllocatedSpace - UsedSpace as numeric) / AllocatedSpace * 100 as int) [Free%],
     cast(UsedSpace * 1.449 as int) [Plus30%] -- target size leaving roughly 30% free
from #filestats 
-- where datafile like 'E%' -- e-drive only
order by [free%]

drop table #filestats
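The script just sizes the job up – the actual resize is then one command per file, something like this (names and size are placeholders) …

-- grow the datafile so roughly 30% is free
ALTER DATABASE [SomeDatabase]
MODIFY FILE (NAME = 'SomeLogicalFileName', SIZE = 10240MB);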

dbcc checkfilegroup – semi-automatic

On a very large database I wanted to check each filegroup for corruption with real-time feedback. That is, I did not want to just set a loop going and wait indefinitely for the results. Also, I did not want info-messages.

The semi-automatic query I wrote (below) runs CheckFileGroup against the smallest filegroup and stops. Then, when manually re-run (in the same SSMS session), it checks the next smallest filegroup, and so on.

Then at the end of the day I would know exactly how far it had got.

-- checkfilegroups_vldb.sql
-- use current (snapshot?)

-- create table of filegroup names and sizes
begin try
     select fg.name, sum(df.size/128) mb
     into #tables
     from sys.filegroups fg
     join sys.database_files df on fg.data_space_id = df.data_space_id
     group by fg.name
     order by mb, name
     select '[F5] to Start'
end try

--check each filegroup starting with smallest
begin catch
     select * from #tables
     declare @sql varchar(100)
     select top 1 @sql = 'dbcc checkfilegroup (''' + name + ''') with all_errormsgs, no_infomsgs;' from #tables order by mb, name
     select(@sql)
     exec(@sql)
    delete from #tables where name = (select top 1 name from #tables order by mb, name)
    if (select count(*) from #tables) = 0
    begin
        drop table #tables
        select 'Complete'
    end
end catch

cdc work-sheet

Here are the cdc commands I use …

--STATUS

--is cdc enabled on my current database?
select is_cdc_enabled, * from master.sys.databases

--is cdc enabled on this table?
use SomeDatabaseName
go
select is_tracked_by_cdc, * from sys.tables order by 1 desc
-- ENABLE

use SomeDatabaseName
go
exec sys.sp_cdc_enable_db          -- enable database
go
exec sys.sp_cdc_enable_table       -- enable table
   @source_schema = N'dbo',
   @source_name = N'SomeTableName', -- table to be monitored
   @role_name = NULL,               -- gating role used to restrict access (NULL = no gating)
   @filegroup_name = N'cdc_fg'      -- used to place the change-table on its own drive
go
-- MISC QUERIES

select * from cdc.dbo_SomeTableName_CT -- changes plus 5x cdc columns
where __$start_lsn = 0xSomeLsnNumber

     __$operation ... 1=delete
                      2=insert
                      3=value before update
                      4=value after update

     __$update_mask ... bit-map of the column-names updated (only meaningful for operations 3 & 4)

select * from cdc.captured_columns   -- list of logged column-names
select * from cdc.change_tables      -- table(s) monitored by cdc
select * from cdc.ddl_history        -- ddl changes
select * from cdc.index_columns      -- indexes on monitored tables
select * from cdc.lsn_time_mapping   -- lsn numbers with time-stamps

select min(tran_begin_time), max(tran_begin_time)
from cdc.lsn_time_mapping          -- check its running
--REMOVE

use SomeDatabase
go
exec sys.sp_cdc_help_change_data_capture -- get values for @capture_instance etc
go

use SomeDatabase
go
-- exec sys.sp_cdc_disable_table       -- disable table
--    @source_schema = N'dbo',
--    @source_name = N'SomeTableName',  -- table name
--    @capture_instance = N'dbo_SomeTableName'
go
use SomeDatabase
go
--exec sys.sp_cdc_disable_db      -- disable database ... and delete all data and objects
go
--MANAGEMENT

--show latency in seconds & hours
select latency, latency/3600 latency_hrs, *
from SomeDatabase.sys.dm_cdc_log_scan_sessions

--show current sqljob settings
select * from msdb.dbo.cdc_jobs -- note: 4320 mins = 3 days

--chanage 'capture' job settings
exec sys.sp_cdc_change_job @job_type = 'capture',
  @maxtrans = 600,       --500 default
  @maxscans = 200,       --10
  @continuous = 1,       --1
  @pollinginterval = 2   --5
--don't forget to stop/start the job to use these new settings

-- see if cdc (aka replication) is stopping the log-file shrinking
select name, log_reuse_wait_desc from sys.databases

-- see percentage log-file used
dbcc sqlperf(logspace)

-- show cdc error-log
select * from master.sys.dm_cdc_errors

-- emergency dump all pending transactions, allowing shrink
--sp_repldone @xactid = null, @xact_segno = null, @numtrans = 0, @reset = 1

CDC Update mask

To decode the update_mask 0x00007C00, I wrote out the last 4 hex digits (‘7C00’ – that is, those with values, plus those to their right) in binary: 0111 1100 0000 0000, which (counting from the right) indicated columns 11, 12, 13, 14, and 15 had been updated.

simples 😉

UPDATE
Ok ok. Until I get round to converting the above logic into a big dynamic CASE statement … here’s a script I adapted from dba.stackexchange.com (http://dba.stackexchange.com/questions/35936/change-data-capture-and-the-update-mask-binary)

select (select cc.column_name + ', '
     from cdc.captured_columns cc
     join cdc.change_tables ct on cc.[object_id] = ct.[object_id]
     where ct.capture_instance = 'dbo_MYTABLE'
     and sys.fn_cdc_is_bit_set(cc.column_ordinal, pd.__$update_mask) = 1
     for xml path('')) as changedcolumns, *
from cdc.dbo_MYTABLE_ct pd
where __$start_lsn = SOME_LSN_NUMBER

Converting Delphi datetime to SQL Server

--1of2 -- convert delphi datetime to sql

-- input a delphi float for conversion
declare @input float = 41492.9039269676 -- <<<<<<<<<<< user input

-- separate the date-part
declare @date int = @input

-- separate the time-part
declare @time float = (@input - @date)

-- display results
select @input 'delphi', cast(dateadd(dd, @date, '1899-12-30 00:00:00') + @time as datetime) [sql]


--2of2 -- convert sql datetime to delphi

-- input the sql datetime for conversion
declare @input2 datetime = '2013-08-06 21:41:39.290' -- <<<<<<<<<<<< user input

-- display results
select @input2 [sql], cast(@input2 as float)+2 'delphi'

Large Page Extensions

After a re-boot I noticed the message “Cannot use Large Page Extensions: lock memory privilege was not granted.”

To fix this I remoted onto the SQL2008r2 Enterprise Edition server and ran “gpedit.msc”.

Within the tree I navigated to / Computer Configuration / Windows Settings / Security Settings / Local Policies / User Rights Assignment / Lock Pages in Memory.

And added the account used to start the SQL Server Service.
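To confirm it took effect after the next restart, here’s an assumed quick check against the process-memory DMV (SQL 2008+) – a non-zero figure means locked pages are in use …

select locked_page_allocations_kb
from sys.dm_os_process_memory;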

CXPacket

The wait type CXPACKET is no biggie. It’s the time spent re-assembling the results of parallel queries.

You can change the setting ‘Max Degree of Parallelism’ to however many cores your big queries should be shared among. Best practice is the number of cores in a single NUMA node (max 8).

You can also change the ‘Cost Threshold for Parallelism’ to set the trigger level for big-queries. The default (‘5’) is usually a bit low.
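Both settings live in sp_configure (they’re advanced options); a sketch with example values rather than recommendations …

EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8; RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 25; RECONFIGURE;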

Removing unwanted datafiles

Many people are too precious about scripts. Here’s my simple GUI-based method for removing unwanted datafiles.

1a) fire up the shrink GUI (tasks / shrink / files)
1b) locate your target datafile
1c) run the 3 ‘shrink actions’ in order, one at a time (the 2nd one set to ‘1’)

2a) fire up the database properties ‘Files’ tab
2b) highlight the unwanted datafile, click ‘remove’, and then ‘ok’
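For the script-inclined, the equivalent two steps in T-SQL (names are placeholders) …

USE [SomeDatabase];
DBCC SHRINKFILE ('SomeLogicalFileName', EMPTYFILE); -- migrate the data to the other files in the filegroup
ALTER DATABASE [SomeDatabase] REMOVE FILE [SomeLogicalFileName];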

How to limit sql-job notifications by time

Rolling my own Notifications – I wanted a sql-job that ran every 5 minutes to Notify me when it failed – but not to “spam” me every 5 minutes until I fixed it!

Here’s the simplified last-job-step that would email me the first time the job failed, but then NOT again for half an hour.

declare @email varchar(500), @operator varchar(50)
set @operator = 'DBA'

if isnull((select datediff(mi, max(send_request_date), getdate())
 from [msdb].[dbo].[sysmail_mailitems]
 where subject like 'ALERT%'), 999) > 30 -- isnull: still send when no previous ALERT exists
begin
 set @email = 'execute msdb.dbo.sp_notify_operator @name=N''' + @operator + ''', @subject=N''ALERT: [XX] failed '',
 @body=N''The [XX] sql-job on server [YY] has failed!'''
 exec(@email)
 select 'email sent'
end