Splitting strings in practice.

Quite often on technical forums a request to extract part of a text string is tackled in splendid isolation. I’ve used the technique below with postcodes before, but this time wanted to split the text in an SSRS log file to isolate report names from paths.

For example the string …

/Finance/Monthly/Outgoing/Summary

… means the report called “Summary” is contained in the “Outgoing” folder within the “Monthly” folder inside the “Finance” folder in the root.

The tricky bit is that the nesting levels vary. There are reports in the root, and there are reports buried ten layers deep like Russian dolls.

Task one, getting the report names, is relatively easy. You can use the REVERSE function with CHARINDEX to grab the text up to the “first” slash.

Task two, getting the path, is where forums go deep, with strings of functions to extract all the text to the left of the last slash (where there are an unknown number of slashes).

Whereas it is often far simpler to use REPLACE to remove the already-found report name from the whole string, leaving just the path.

SELECT
       Report,
       REPLACE(ItemPath,Report,'') [Path],
       ...
FROM
(
    SELECT RIGHT(ItemPath, CHARINDEX ('/', REVERSE(ItemPath))-1) Report,
           ...
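
To make the idea concrete, here is a minimal, self-contained sketch of both steps against a single hard-coded path (the variable name and sample value are just for illustration) …

DECLARE @ItemPath VARCHAR(200) = '/Finance/Monthly/Outgoing/Summary';

SELECT Report,
       REPLACE(@ItemPath, Report, '') [Path]   -- remove the found report name, leaving the path
FROM
(
    SELECT RIGHT(@ItemPath, CHARINDEX('/', REVERSE(@ItemPath)) - 1) Report  -- text after the last slash
) x;

/* Report = 'Summary', Path = '/Finance/Monthly/Outgoing/' */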
 

Quick row count

When COUNT(*) was too slow to get total rows from a billion row heap …

SELECT MAX(REPLACE(CONVERT(VARCHAR(20), 
       CONVERT(MONEY, rowcnt), 1), '.00', '')) [rows]
FROM sys.sysindexes
WHERE OBJECT_NAME(id) = 'TableName';

Original code from here …

https://www.i-programmer.info/programming/database/6576-sql-server-quickly-get-row-counts-for-tables-heaps-indexes-and-partitions.html

Removing duplicate rows

No need to overdevelop this – once you realise that the DELETE command can include the TOP option.

More? Ok. Create a SELECT command that shows the issue. EG:-

SELECT ID
FROM dbo._op2
WHERE ID = 'X123456';

Add TOP to return just the unwanted rows. IE: if the above query should return 1 row but erroneously returns 2 rows use TOP(1), if it returns 5 rows use TOP(4) …

(NOTE: It does not matter which particular rows get deleted … they are duplicates remember)

SELECT TOP(1) ID
FROM dbo._op2
WHERE ID = 'X123456';

Change SELECT to DELETE and remove the column name …

DELETE TOP(1)
FROM dbo._op2
WHERE ID = 'X123456';

… and only run it once 😉

Stop notification spam

Although I have previously tackled this problem by rolling my own notifications, this time I realised that the SQL jobs that run every minute or two are of low importance – and I don’t really want emails when they fail. I will notice at some point during the working day.

Here is the code to list jobs that run frequently and are configured to send notification emails, along with the commands to remove those notifications.

/* JobNotificationEmailStop.sql */

SELECT S.[name] JobName,
    SS.freq_subday_interval [ScheduleFreq(mins)],
    'EXEC msdb.dbo.sp_update_job @job_name = N''' 
    + S.[name] 
    + ''', @notify_email_operator_name = N'''';' 
    CommandToDisableEmailNotification

  FROM msdb.dbo.sysjobs S
  JOIN msdb.dbo.sysjobschedules SJ
    ON S.job_id = SJ.job_id
  JOIN msdb.dbo.sysschedules SS
    ON SS.schedule_id = SJ.schedule_id

 WHERE SS.freq_subday_interval > 0
   AND S.notify_level_email > 0
 ORDER BY SS.freq_subday_interval;
 

Except and Intersect

Here is the simplest working example of EXCEPT and INTERSECT I can come up with …

/* Except.sql */

IF OBJECT_ID('tempdb..#t1') IS NOT NULL DROP TABLE #t1;
CREATE TABLE #t1 (c1 INT);
INSERT INTO #t1 VALUES (1), (2);

IF OBJECT_ID('tempdb..#t2') IS NOT NULL DROP TABLE #t2;
CREATE TABLE #t2 (c2 INT);
INSERT INTO #t2 VALUES (2), (3);

SELECT * FROM #t1
EXCEPT
SELECT * FROM #t2; /* = 1 */

SELECT * FROM #t1
INTERSECT
SELECT * FROM #t2; /* = 2 */

SELECT * FROM #t2
EXCEPT
SELECT * FROM #t1; /* = 3 */

SELECT * FROM #T1
EXCEPT
SELECT * FROM #T2
UNION
(SELECT * FROM #T2
EXCEPT
SELECT * FROM #T1); /* = 1 & 3 */

I use this frequently whilst refactoring to check the outputs are identical. And sometimes when syncing a MySQL table to MSSQL.

Capturing input parameters

Often when a stored-procedure is executed I want to know the parameters that were input, which is handy for performance tuning.

There is a mechanism to automatically save input parameters with the cached execution plans, but quite often this does not work well.

On this occasion I embedded the facility right into the procedure as a temporary measure (please, don’t talk to me about triggers brrr).

CREATE PROCEDURE [dbo].[sp_SomeName]
 @ID UNIQUEIDENTIFIER = NULL,    
 @Record VARCHAR(50) = NULL    
AS  
BEGIN

 /* log parameters for performance tuning 1 of 2 */

  IF OBJECT_ID('[SomeDatabase].[dbo].[tbl_SomeTable]') IS NULL
  	SELECT GETDATE() STIME, GETDATE() ETIME, @ID ID, @Record RC 
  	INTO [dbo].[tbl_SomeTable]
  ELSE
  	INSERT INTO [dbo].[tbl_SomeTable]
  	SELECT GETDATE(), GETDATE(), @ID, @Record

 /* log parameters for performance tuning 1 of 2 */

...

Overkill really, but at the end of the procedure I added …

 ...

/* log parameters for performance tuning 2 of 2 */
  
  UPDATE [dbo].[tbl_SomeTable]
  SET ETIME = getdate()
  WHERE ETIME = STIME;
  
 /* log parameters for performance tuning 2 of 2 */

END
GO

Note: the real procedure had many more input parameters, and I suspected they were all set to null, which would explain the poor performance.

Still, best to know what we’re optimizing for 🙂

Set every user’s default schema to DBO where it’s blank

In this quick script I am assuming the Windows domain is called ‘DOM’ …

-- ChangeDefaultSchemaToDboWhereNull.sql

DECLARE @cmd varchar(1000) 

SET @cmd = 
'USE ? IF DB_ID(''?'') > 4 SELECT ''USE ?; ALTER USER ['' + name + ''] WITH DEFAULT_SCHEMA = [dbo]''
 FROM sys.database_principals
 WHERE default_schema_name IS NULL
 AND [name] LIKE ''DOM\%'''

IF OBJECT_ID('tempdb..#output') IS NOT NULL DROP TABLE #output
CREATE TABLE #output
(command varchar(1000))

INSERT INTO #output
EXEC sp_MSforeachdb @cmd

SELECT * 
FROM #output


Transactional Replication causing High CPU

In Publication Properties the setting “Subscriptions never expire …” has a surprising effect on the job “Distribution clean up: distribution”.

This job removes orphaned and replicated transactions from the Distribution database once the retention period has expired.

However, “Subscriptions never expire …” stops this procedure from removing orphaned transactions, whether they were left by a deleted subscription or for any other reason.

This results in the Distribution database growing and high CPU.

To fix this, allow subscriptions to expire.

For example, set it so failed subscriptions may be deleted if not fixed within a year. This keeps most of the robustness of “Subscriptions never expire”, whilst allowing orphaned transactions to be cleaned up.
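
I made the change through the GUI, but a scripted equivalent would look something like the sketch below. This is an assumption on my part: the publication name is a placeholder, and you should check that the ‘retention’ property and its unit (hours) apply to your publication type before using it.

-- Hedged sketch only: allow subscriptions to expire after roughly a year
-- (run in the publication database; 0 means "subscriptions never expire")
EXEC sp_changepublication
     @publication = N'SomePublication',
     @property    = N'retention',
     @value       = N'8760';   -- ~1 year, in hours (assumed unit)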

Drop all tables that start with underscore

In an ironic twist of fate I adapted my ‘Drop all temp-tables’ script, to drop all tables beginning with an underscore.

Yes, it is ironic, because it uses a temp-table to store the working list of tables to be deleted, whilst my original script used a real table to store a list of temp-tables.

Well … ok then

-- DropAllTablesStartingWithAnUnderscore.sql

IF OBJECT_ID('tempdb..#tables') IS NOT NULL DROP TABLE #tables
SELECT [name] 
INTO #tables
FROM sys.tables
WHERE [name] LIKE '/_%' ESCAPE '/'

DECLARE @table VARCHAR(200), @cmd VARCHAR(500)
WHILE (SELECT COUNT(*) FROM #tables) > 0
BEGIN
	SET @table = (SELECT TOP(1) [name] FROM #tables)
	SET @cmd = 'drop table ' + @table
	EXEC(@cmd)
	DELETE FROM #tables WHERE [name] = @table
END

(In case you were wondering why I created these underscore-tables in the first place: I was refactoring a stored-procedure that took over an hour to run and had a large number of temp-tables. I wanted to persist those temp-tables for another day, and not have to start from scratch.)

Find the partner of a bracket in SSMS

Faced with a barrage of tabbed T-SQL in SSMS it can sometimes be quite difficult to see the close bracket that signifies – for example – the end of a CTE.

TIP: swipe from the inside of the bracket outwards to highlight (in grey) the bracket itself, and also its partner. EG: swipe right-to-left across an open bracket.

Running CHECKDB on TempDB

Normally I would not bother, but when CHECKDB runs on TempDB it cannot use a snapshot so has to lock all the temp tables. This script waits (for up to a minute) until there are no exclusive locks in TempDB before running CHECKDB.

DECLARE @outcome VARCHAR(50) = 'TempDB is currently too busy for CHECKDB', 
	@endtime DATETIME = DATEADD(mi,1,GETDATE())
WHILE GETDATE() < @endtime
BEGIN
  IF NOT EXISTS (SELECT 1 FROM sys.dm_tran_locks WHERE request_mode = 'X' AND resource_database_id = 2)
  BEGIN
    DBCC CheckDB([tempdb]) WITH NO_INFOMSGS;
    SET @outcome = 'success'
    BREAK;
  END
  WAITFOR DELAY '00:00:01';
END
SELECT @outcome

Searching every procedure for a string

To search every stored procedure in every database on every server (ranging from SQL Server 2005 to SQL Server 2016) for the string ‘QueryTraceOn’, I first registered every server within SSMS.

Right-clicking on the registered server folder, I chose ‘new query’ and ran ‘select 1’ to exclude from my list any server with issues.

Once I had an error free list, I ran this code (which took around 40 minutes) …

-- SearchProcs4String.sql

EXEC sp_MSforeachdb 'use ?
SELECT db_name() [Database], ROUTINE_SCHEMA + ''.'' 
+ ROUTINE_NAME [Proc]
FROM INFORMATION_SCHEMA.ROUTINES WITH (NOLOCK)
WHERE ROUTINE_DEFINITION LIKE ''%QUERYTRACEON%'';'

Migrating to SQL Server 2017 / 2019

(for JH) When migrating to SQL Server 2017 or SQL Server 2019 the name of the game is “Risk Mitigation”. Avoid the temptation to bundle other improvements into the project. Business continuity is first, last, and everything in between.

Server architecture should be duplicated exactly in the new environment.

Server Rationalization – nope. Before the migration would be best, because if unexpected issues arise you are on familiar ground. But you have unused logins? databases? jobs? – double nope (Before or after, not during).

SQL Server Role based security – nope. That has been best practice for the last 20 years. This is not the time to make up for lack of database administration. 21 years will be fine.

High Availability – Avoiding downtime is important and HA helps with that. But as long as you have a mature backup system in place, this can wait until the migration is complete. Nope.

Backups – are less clear-cut. But even if the old system is a quirky, high-maintenance millstone, it is still familiar, and you don’t want to add unfamiliar complexity at this time. This should be next on your list though.

Disaster Recovery – There are new toys, like clusterless AGs, but again – nope.

Compatibility Level – Should match the old system. A few months after the deed is done, you can quietly try switching less important databases, one at a time (database administration is a perpetual business).

SSIS Packages – Should be rationalized from Packages to Projects … after the migration. Oh and if a package fails after being moved and upgraded, change the environment NOT the package.

Encryption is much easier with modern versions of SQL Server. Schedule this for a few months down the line.

Cardinality Estimator – You may have picked up on a theme here 🙂 – nope. Set it to legacy for reliable performance, for now.
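
For reference, the two “match the old system” settings mentioned above can be scripted like this (a sketch; the database name and level are examples, not from the original post) …

-- Match the compatibility level of the old instance (e.g. 110 = SQL Server 2012)
ALTER DATABASE [SomeDatabase] SET COMPATIBILITY_LEVEL = 110;

-- Use the legacy cardinality estimator (SQL Server 2016+ syntax)
USE [SomeDatabase];
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;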

Start SSMS as another user

There are a few ways to open SQL Server Management Studio as another Windows user. You can simply hold down the Shift key and right-click SSMS. Alternatively, you can create a shortcut on the desktop that uses the built-in ‘Runas.exe’ command.

In notepad I assemble the 3 parts needed …

1. Full path to runas.exe
2. The Windows account I want to use
3. The full path to the SSMS executable.

Here is an example …

C:\Windows\System32\runas.exe /user:ZGROUP\rsmithadmin "C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\Ssms.exe"

Log Shipping for Migration

The trouble with backing up databases in ‘old production’ then restoring them to ‘new production’ is that it takes time.

And there may be some unforeseen delay switching the front-end apps over.

Resulting in ‘old production’ being updated with new data, and ‘new production’ becoming out of date.

Log-shipping is an ideal, built-in tool that can be used to keep ‘new production’ in sync with ‘old production’ during that phase between backup/restore and switching the front-end to ‘new production’.

This time around there was no need to script the setting up of log-shipping. There were only 13 databases, so using the GUI did not take long.

The idea is to complete the backup/restore a week or so before the switch-over and set-up log-shipping to keep the data in sync.

Then at the designated switch-over time, it takes only a moment to bring ‘new production’ on-line, as a fully up-to-date copy of ‘old production’.

Here is my crib-sheet …

 

Preparation

  • Primary and secondary servers should have as near as possible the same instance settings eg: max-memory, numa configuration, CLR, max dop, etc
  • Ensure user databases are using full recovery model
  • Create shared folder (on the target ideally)
  • Default backup compression is enabled (ideally)
  • Reduce VLF counts
  • Configure file share folder and connectivity
  • Disable tlog backups on primary

Preparation – Secondary Instance

  • Ensure enough space for databases
  • Matching drive letters for datafiles and logfiles (ideally)
  • Configure file share

Preparation – Monitor Instance

  • Ideally separate from primary and secondary

Security

  • The configuring login needs the sysadmin role
  • SQL server service account on primary needs read/write permission on backup directory (for the backup job)
  • SQL server service account on secondary needs read permission on backup share and read/write permission to secondary share (for copy / restore jobs)

Configuring log-shipping

  • Manually backup and restore database (with no recovery)
  • Use notepad to cut and paste connection strings and paths
  • Transfer logins, jobs and linked servers

The switch over

  • Manually execute the backup, copy, and restore jobs a final time
  • Manually restore each database “with recovery” (see the sketch after this list)
  • Detach old databases (so there is no chance of them being updated)
  • Point front-end-applications to new back-end server
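
As a sketch, the restore and detach steps above look something like this (the database name is a placeholder) …

-- On the NEW server: bring the log-shipped copy online
RESTORE DATABASE [SomeDatabase] WITH RECOVERY;

-- On the OLD server: detach the database so it cannot be updated
USE [master];
EXEC sp_detach_db @dbname = N'SomeDatabase';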

Post Migration

  • Full backups (the old ones cannot be restored now)
  • Update all statistics
  • Check compatibility level
  • Execute dbcc checkdb
  • Enable plan-store (read/write)
  • Monitor health and performance

Postcode search

The issue was that some postcodes were stored with spaces and some without.

This was further complicated by the fact that some user inputs (into the “Postcode Search” SSRS report) had a space and some did not.

The root cause of the slow report was that the 90 MILLION stored postcodes were being retrieved and manipulated (to remove spaces) before being compared with the ONE input.

--- OLD CODE -----------------

DECLARE @PostCode VARCHAR(8)
SELECT Forename,
       Surname,
       AccountNumber AS CustomerNo,
       AccountStartDate,
       AddressLine2 AS Address,
       PostCode,
       DateOfBirth
FROM [dbo].[SV_Customers]
WHERE (CountryCode = 'GB')
      AND (REPLACE(Postcode, ' ', '') = @PostCode);

My insight was to manipulate just the ONE input postcode before comparing it TWICE (with and without a space) to the un-manipulated postcodes stored in the database.

The first task then, was to split the input postcode into two parts. In all formats the last 3 characters were number, letter, letter.

So after saving the last part of the postcode separately, it was easy to deduce that the first part must be the whole thing minus the last part.

--- NEW CODE ------------------------

DECLARE @PostCode VARCHAR(8)
DECLARE @pc2 CHAR(3) = RIGHT(@PostCode, 3);
DECLARE @pc1 VARCHAR(4) = RTRIM(REPLACE(@PostCode, @pc2, ''));

SELECT Forename,
       Surname,
       AccountNumber AS CustomerNo,
       AccountStartDate,
       AddressLine2 AS Address,
       Postcode,
       DateOfBirth
FROM [dbo].[SV_Customers]
WHERE CountryCode = 'GB'
      AND (PostCode = @pc1 + @pc2         -- without space
        OR PostCode = @pc1 + ' ' + @pc2); -- or with space

The final task was to write the WHERE clause as simply as possible for long term maintenance. That’s the DBA in me 🙂

Caching result sets

(For Sam) I wanted to performance tune a stored-procedure that was just one big SELECT statement (used to return all current Orders).

The code was just about as optimum as it could get, and returned around 8,000 rows each time, taking about 35 seconds to do so.

I saved the output over a few consecutive days and noticed (crucially) that most of the rows were the same each day.

My big-idea then, was to pre-cache (and pre-format) the results on “Day One”, and just append new rows to that going forward.

The final working stored-procedure contained 5 labeled areas:-

 - (1. Create and fill a cache-table if there isn't one)
 - 2. Save a thin version of the current data to a temp-table
 - 3. Add only NEW data to the cache-table
 - 4. Remove DELETED data from the cache-table
 - 5. Output the cache-table

1. If the cache-table didn’t exist, run the original query, but saving INTO a cache-table. Mostly this step was not executed, but I wanted the stored-procedure to be complete.

There was a DateTime column in the results set that was guaranteed to be unique. I made this the primary-key of the cache-table.

2. In a separate window, I stripped back the original query until just the DateTime column was returned. Unnecessarily, I added code to the top to delete any temp-table called “#thin” if it already existed (my habit). Then I added code to save the stripped back query results INTO a temp-table … called “#thin”.

This step would run every time, and the output could be compared with the old data (in the cache-table) to add any new rows, and knock off any old ones.

3. The original query was executed but with a WHERE clause added, like WHERE prod.DateTime not in (SELECT DateTime FROM #thin). The 2 or 3 (fat) rows returned from this step were appended to the cache-table.

4. A simple DELETE removed any rows from the cache-table where the DateTime was not in the #thin table.

5. The cache-table was SELECT’ed in full as the stored-procedure’s output, which typically ran in around 7 seconds, despite the extra round-trip to the database.

Testing. After a day or two compare the old / new result sets in spreadsheet tabs and adjust indexing accordingly (As always, full responsibility lies with the implementer).

Addendum. To help performance I later changed Step-3 from …

WHERE prod.DateTime not in (SELECT DateTime FROM #thin)

… to …

LEFT JOIN cache.table cac ON cac.DateTime = prod.DateTime
WHERE cac.DateTime IS NULL
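
For anyone wanting the shape of the whole thing, here is a condensed sketch of the five steps. All object and column names are invented; in the real procedure the original Orders query stood in for the view used here …

/* 1. build the cache-table on first run */
IF OBJECT_ID('dbo.OrderCache') IS NULL
	SELECT * INTO dbo.OrderCache FROM dbo.vw_CurrentOrders;

/* 2. thin snapshot of the current keys */
IF OBJECT_ID('tempdb..#thin') IS NOT NULL DROP TABLE #thin;
SELECT [DateTime] INTO #thin FROM dbo.vw_CurrentOrders;

/* 3. append rows not yet cached */
INSERT INTO dbo.OrderCache
SELECT prod.*
FROM dbo.vw_CurrentOrders prod
LEFT JOIN dbo.OrderCache cac ON cac.[DateTime] = prod.[DateTime]
WHERE cac.[DateTime] IS NULL;

/* 4. remove cached rows that are no longer current */
DELETE cac
FROM dbo.OrderCache cac
LEFT JOIN #thin t ON t.[DateTime] = cac.[DateTime]
WHERE t.[DateTime] IS NULL;

/* 5. output the cache-table */
SELECT * FROM dbo.OrderCache;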

Column Max Length

From my “Spreadsheet sizer” script, this one helped me move sensibly away from pesky varchar(max) columns.

-- ColumnMaxLength.sql

DECLARE @TableName VARCHAR(255) = 'customers' --<< input
DECLARE @SchemaName VARCHAR(255) = 'dbo' 
DECLARE @sqlcmd varchar(max) 

select @sqlcmd = stuff((SELECT ' union all
select ' 
+ QUOTENAME(table_schema,'''') + ' [Schema], ' 
+ QUOTENAME(TABLE_NAME,'''') + ' [Table], ' 
+ quotename(column_name,'''') + ' [Column],
max(datalength(' + quotename(column_name) + ')) MaxLength 
from ' + quotename(table_schema) + '.' + quotename(table_name)
from information_schema.columns
where 1=1
AND table_name =  @TableName
AND table_schema = @SchemaName
order by column_name
for xml path(''),type).value('.','varchar(max)'),1,11,'')

exec(@sqlcmd)

Comparing Stored-Procedures

Had a bit of a problem today with the re-write project.

I had been checking new stored-procedures in the DEV database, and (if good) pasting them into the WEB database.

The issue was that some DEV stored-procedures that I had already checked-in to WEB had been modified again.

Rather than trying to enforce version-control (mmm), or download Redgate’s SQL Compare, I modified my ‘Whats New’ routine to compare the modify-dates between the DEV and WEB databases.

-- CompareSP.sql

SELECT [dev].[type_desc],
       (SELECT [name] FROM [companydev].[sys].[schemas] WHERE [schema_id] = [dev].[schema_id]) [schema],
       CASE [dev].[parent_object_id]
           WHEN '0' THEN [dev].[name]
           ELSE OBJECT_NAME([dev].[parent_object_id]) + '.' + [dev].[name]
       END [object_name],
       [dev].[create_date],
       [dev].[modify_date], -- or create-date if there isn't one
	   '' v,
	   [web].[modify_date] web_modify_date , 
	   DATEDIFF(MINUTE, [dev].[modify_date], [web].[modify_date]) mod_diff
FROM [companydev].[sys].[objects] dev
JOIN [companyweb].[sys].[objects] web
  ON [dev].[name] = [web].[name]
WHERE [dev].[is_ms_shipped] = 0 -- exclude system-objects
AND [dev].[type] = 'P' -- just stored-procedures
--AND [dev].[modify_date] > '21 nov 2018'
ORDER BY [dev].[modify_date] DESC;

Adding a NOT NULL column to an existing table

-- AddingNotNullColumnToExistingTable.sql

-- 1. Add new column to the old table, as NULL for now

	ALTER TABLE [dbo].[TableName] 
	ADD [ColumnName] INT NULL

-- 2. Set the default to zero for new rows

	ALTER TABLE [dbo].[TableName] 
	ADD CONSTRAINT [DF_TableName_ColumnName] 
	DEFAULT(0) FOR [ColumnName]

-- 3. Change all existing null values to zeros

	UPDATE [dbo].[TableName] 
	SET [ColumnName] = 0 
	WHERE [ColumnName] IS NULL

-- 4. Change column from NULL to NOT NULL

	ALTER TABLE [dbo].[TableName] 
	ALTER COLUMN [ColumnName] INT NOT NULL

-- Undo (while testing)

	ALTER TABLE [dbo].[TableName] 
	DROP CONSTRAINT [DF_TableName_ColumnName]

	ALTER TABLE [dbo].[TableName] 
	DROP COLUMN [ColumnName]

Calendar UK

Must be that time of year again :). Adapted from Aaron’s beautiful US calendar script …

-- CalendarUK.sql
use [Dev];

-- initialize period

	DECLARE @StartDate DATE = '20000101', @NumberOfYears INT = 30;

-- prevent set or regional settings from interfering with 
-- interpretation of dates / literals

	SET DATEFIRST 7; -- sunday is the first day of week
	SET DATEFORMAT mdy; -- thats month/day/year
	SET LANGUAGE US_ENGLISH;

	DECLARE @CutoffDate DATE = DATEADD(YEAR, @NumberOfYears, @StartDate);

-- 1. this is just a holding table for intermediate calculations:

	IF OBJECT_ID('tempdb..#cal') IS NOT NULL DROP TABLE #cal
	CREATE TABLE #cal
	(
	  [date]       DATE PRIMARY KEY, 
	  [day]        AS DATEPART(DAY,      [date]),
	  [month]      AS DATEPART(MONTH,    [date]),
	  FirstOfMonth AS CONVERT(DATE, DATEADD(MONTH, DATEDIFF(MONTH, 0, [date]), 0)),
	  [MonthName]  AS DATENAME(MONTH,    [date]),
	  [week]       AS DATEPART(WEEK,     [date]),
	  [ISOweek]    AS DATEPART(ISO_WEEK, [date]),
	  [DayOfWeek]  AS DATEPART(WEEKDAY,  [date]),
	  [quarter]    AS DATEPART(QUARTER,  [date]),
	  [year]       AS DATEPART(YEAR,     [date]),
	  FirstOfYear  AS CONVERT(DATE, DATEADD(YEAR,  DATEDIFF(YEAR,  0, [date]), 0)),
	  Style112     AS CONVERT(CHAR(8),   [date], 112),
	  Style101     AS CONVERT(CHAR(10),  [date], 101)
	);

-- use the catalog views to generate as many rows as we need

	INSERT #cal([date]) 
	SELECT d
	FROM
	(
	  SELECT d = DATEADD(DAY, rn - 1, @StartDate)
	  FROM 
	  (
		SELECT TOP (DATEDIFF(DAY, @StartDate, @CutoffDate)) 
		  rn = ROW_NUMBER() OVER (ORDER BY s1.[object_id])
		FROM sys.all_objects AS s1
		CROSS JOIN sys.all_objects AS s2
		ORDER BY s1.[object_id]
	  ) AS x
	) AS y;

-- 2. create the real table

	IF OBJECT_ID('dbo.CalendarUK') IS NOT NULL DROP TABLE dbo.CalendarUK
	CREATE TABLE [dbo].[CalendarUK]
	(
	  DateKey             INT         NOT NULL PRIMARY KEY,
	  [Date]              DATE        NOT NULL,
	  [Day]               TINYINT     NOT NULL,
	  DaySuffix           CHAR(2)     NOT NULL,
	  [Weekday]           TINYINT     NOT NULL,
	  WeekDayName         VARCHAR(10) NOT NULL,
	  IsWeekend           BIT         NOT NULL,
	  IsHoliday           BIT         NOT NULL,
	  HolidayText         VARCHAR(64) SPARSE,
	  DOWInMonth          TINYINT     NOT NULL,
	  [DayOfYear]         SMALLINT    NOT NULL,
	  WeekOfMonth         TINYINT     NOT NULL,
	  WeekOfYear          TINYINT     NOT NULL,
	  ISOWeekOfYear       TINYINT     NOT NULL,
	  [Month]             TINYINT     NOT NULL,
	  [MonthName]         VARCHAR(10) NOT NULL,
	  [Quarter]           TINYINT     NOT NULL,
	  QuarterName         VARCHAR(6)  NOT NULL,
	  [Year]              INT         NOT NULL,
	  MMYYYY              CHAR(6)     NOT NULL,
	  MonthYear           CHAR(7)     NOT NULL,
	  FirstDayOfMonth     DATE        NOT NULL,
	  LastDayOfMonth      DATE        NOT NULL,
	  FirstDayOfQuarter   DATE        NOT NULL,
	  LastDayOfQuarter    DATE        NOT NULL,
	  FirstDayOfYear      DATE        NOT NULL,
	  LastDayOfYear       DATE        NOT NULL,
	  FirstDayOfNextMonth DATE        NOT NULL,
	  FirstDayOfNextYear  DATE        NOT NULL
	);
	GO

-- 3 populate the real table from the temp table

	INSERT dbo.CalendarUK WITH (TABLOCKX)
	SELECT
	  DateKey     = CONVERT(INT, Style112),
	  [Date]        = [date],
	  [Day]         = CONVERT(TINYINT, [day]),
	  DaySuffix     = CONVERT(CHAR(2), CASE WHEN [day] / 10 = 1 THEN 'th' ELSE 
					  CASE RIGHT([day], 1) WHEN '1' THEN 'st' WHEN '2' THEN 'nd' 
					  WHEN '3' THEN 'rd' ELSE 'th' END END),
	  [Weekday]     = CONVERT(TINYINT, [DayOfWeek]),
	  [WeekDayName] = CONVERT(VARCHAR(10), DATENAME(WEEKDAY, [date])),
	  [IsWeekend]   = CONVERT(BIT, CASE WHEN [DayOfWeek] IN (1,7) THEN 1 ELSE 0 END),
	  [IsHoliday]   = CONVERT(BIT, 0),
	  HolidayText   = CONVERT(VARCHAR(64), NULL),
	  [DOWInMonth]  = CONVERT(TINYINT, ROW_NUMBER() OVER 
					  (PARTITION BY FirstOfMonth, [DayOfWeek] ORDER BY [date])),
	  [DayOfYear]   = CONVERT(SMALLINT, DATEPART(DAYOFYEAR, [date])),
	  WeekOfMonth   = CONVERT(TINYINT, DENSE_RANK() OVER 
					  (PARTITION BY [year], [month] ORDER BY [week])),
	  WeekOfYear    = CONVERT(TINYINT, [week]),
	  ISOWeekOfYear = CONVERT(TINYINT, ISOWeek),
	  [Month]       = CONVERT(TINYINT, [month]),
	  [MonthName]   = CONVERT(VARCHAR(10), [MonthName]),
	  [Quarter]     = CONVERT(TINYINT, [quarter]),
	  QuarterName   = CONVERT(VARCHAR(6), CASE [quarter] WHEN 1 THEN 'First' 
					  WHEN 2 THEN 'Second' WHEN 3 THEN 'Third' WHEN 4 THEN 'Fourth' END), 
	  [Year]        = [year],
	  MMYYYY        = CONVERT(CHAR(6), LEFT(Style101, 2)    + LEFT(Style112, 4)),
	  MonthYear     = CONVERT(CHAR(7), LEFT([MonthName], 3) + LEFT(Style112, 4)),
	  FirstDayOfMonth     = FirstOfMonth,
	  LastDayOfMonth      = MAX([date]) OVER (PARTITION BY [year], [month]),
	  FirstDayOfQuarter   = MIN([date]) OVER (PARTITION BY [year], [quarter]),
	  LastDayOfQuarter    = MAX([date]) OVER (PARTITION BY [year], [quarter]),
	  FirstDayOfYear      = FirstOfYear,
	  LastDayOfYear       = MAX([date]) OVER (PARTITION BY [year]),
	  FirstDayOfNextMonth = DATEADD(MONTH, 1, FirstOfMonth),
	  FirstDayOfNextYear  = DATEADD(YEAR,  1, FirstOfYear)
	FROM #cal
	OPTION (MAXDOP 1);

-- 4 add holidays

	;WITH x AS 
	(
	  SELECT DateKey, [Date], IsHoliday, HolidayText, FirstDayOfYear,
		DOWInMonth, [MonthName], [WeekDayName], [Day],
		LastDOWInMonth = ROW_NUMBER() OVER 
		(
		  PARTITION BY FirstDayOfMonth, [Weekday] 
		  ORDER BY [Date] DESC
		)
	  FROM dbo.CalendarUK
	)
	UPDATE x SET IsHoliday = 1, HolidayText = CASE
	  WHEN ([Date] = FirstDayOfYear) THEN 'New Years Day'
	  WHEN ([DOWInMonth] = 3 AND [MonthName] = 'April' AND [WeekDayName] = 'Friday') THEN 'Good Friday'                  -- (approximation; refined by the Easter function below)
	  WHEN ([DOWInMonth] = 1 AND [MonthName] = 'May' AND [WeekDayName] = 'Monday') THEN 'May Day'                        -- (first Monday in May)
	  WHEN ([LastDOWInMonth] = 1 AND [MonthName] = 'May' AND [WeekDayName] = 'Monday') THEN 'May Bank Holiday'           -- (last Monday in May)
	  WHEN ([LastDOWInMonth] = 1 AND [MonthName] = 'August' AND [WeekDayName] = 'Monday') THEN 'August Bank Holiday'    -- (last Monday in August)
	  WHEN ([MonthName] = 'December' AND [Day] = 25) THEN 'Christmas Day'
	  WHEN ([MonthName] = 'December' AND [Day] = 26) THEN 'Boxing Day'
	  END
	WHERE -- IsHoliday
	  ([Date] = FirstDayOfYear)
	  OR ([LastDOWInMonth] = 1 AND [MonthName] = 'May' AND [WeekDayName] = 'Monday')
	  OR ([DOWInMonth] = 1     AND [MonthName] = 'May' AND [WeekDayName] = 'Monday')
	  OR ([LastDOWInMonth] = 1 AND [MonthName] = 'August'    AND [WeekDayName] = 'Monday')
	  OR ([MonthName] = 'December' AND [Day] = 25)
	  OR ([MonthName] = 'December' AND [Day] = 26);


-- 5. create a function to calculate easter etc

	IF OBJECT_ID('dbo.GetEasterHolidays') IS NOT NULL DROP FUNCTION dbo.GetEasterHolidays
	GO

	CREATE FUNCTION dbo.GetEasterHolidays(@year INT) 
	RETURNS TABLE
	WITH SCHEMABINDING
	AS 
	RETURN 
	(
	  WITH x AS 
	  (
		SELECT [Date] = CONVERT(DATE, RTRIM(@year) + '0' + RTRIM([Month]) 
			+ RIGHT('0' + RTRIM([Day]),2))
		  FROM (SELECT [Month], [Day] = DaysToSunday + 28 - (31 * ([Month] / 4))
		  FROM (SELECT [Month] = 3 + (DaysToSunday + 40) / 44, DaysToSunday
		  FROM (SELECT DaysToSunday = paschal - ((@year + @year / 4 + paschal - 13) % 7)
		  FROM (SELECT paschal = epact - (epact / 28)
		  FROM (SELECT epact = (24 + 19 * (@year % 19)) % 30) 
			AS epact) AS paschal) AS dts) AS m) AS d
	  )
	  SELECT DATEADD(DAY,-2,[Date]) [Date], 'Good Friday' HolidayName FROM x
		UNION ALL SELECT DATEADD(DAY, 1,[Date]), 'Easter Monday' FROM x
	  );
	GO

-- 6. use the function to insert easter etc

	;WITH x AS 
	(
	  SELECT d.[Date], d.IsHoliday, d.HolidayText, h.HolidayName
		FROM dbo.CalendarUK AS d
		CROSS APPLY dbo.GetEasterHolidays(d.[Year]) AS h
		WHERE d.[Date] = h.[Date]
	)
	UPDATE x SET IsHoliday = 1, HolidayText = HolidayName;

-- 7. show results

	SELECT * 
	FROM dbo.CalendarUK
	WHERE [year] = '2019'
	--WHERE [year] in ('2019', '2020')
	AND (IsHoliday = 1
	OR HolidayText IS NOT NULL)
	--and DateKey = '20181231'

Whats New!

This very handy little script lists stored-procedures, tables, etc with the most recent at the top.

Great when you have been away, or even as the foundation of a migration tracking SSRS report.

-- WhatsNew.sql

SELECT [type_desc],
       (SELECT [name] FROM sys.schemas WHERE schema_id = ob.schema_id) [schema],
       CASE parent_object_id
           WHEN '0' THEN [name]
           ELSE OBJECT_NAME (parent_object_id) + '.' + [name]
       END [object_name],
       create_date,
       modify_date -- or create-date if there isn't one
FROM sys.objects ob
WHERE is_ms_shipped = 0 -- exclude system-objects
--AND [type] = 'P' -- just stored-procedures
-- ORDER BY [schema] DESC, modify_date DESC
ORDER BY modify_date DESC;

Copying all tables to a new database

As part of an e-commerce re-write project I was tasked with copying over 300 tables from one database to another, including all data, identity columns, indexes, constraints, primary and foreign keys.

I was unable to simply backup / restore due to space and security issues. Here is my solution …

1. Script Create statements for every table. On the ‘old’ database I expanded the database and clicked ‘Tables’ then clicked ‘View’ / ‘Object Explorer Details’ so all the tables were listed in the right-hand pane. Then I was able to highlight all the tables there, and right-click ‘Script Table as’ / ‘Create To’ / ‘New Query Editor Window’.

When finished I changed connection to the ‘new’ empty database and ran the script to create all the tables – without data.

2. Disable all foreign-key constraints. (from here https://stackoverflow.com/questions/11639868/temporarily-disable-all-foreign-key-constraints). I ran this script on the new database …

-- disable fks
use targetdb
go

DECLARE @sql NVARCHAR(MAX) = N'';

;WITH x AS 
(
  SELECT DISTINCT obj = 
      QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.' 
    + QUOTENAME(OBJECT_NAME(parent_object_id)) 
  FROM sys.foreign_keys
)
SELECT @sql += N'ALTER TABLE ' + obj + ' NOCHECK CONSTRAINT ALL;
' FROM x;

EXEC sp_executesql @sql;

3. Populate all the tables using the SSIS wizard. In SSMS I right-clicked the old database / ‘Tasks’, ‘Export Data…’. In the wizard I accepted the old database as the Source and typed in the new database details as the Target. I ticked all tables, and clicked ‘edit Mappings’ to tick ‘Enable identity insert’. I then deselected the views and executed the SSIS package.

4. To Re-enable all foreign keys – I ran this script (from the same web page as 2.) on the new database …

-- re-enable fks
use targetdb
go

DECLARE @sql NVARCHAR(MAX) = N'';

;WITH x AS 
(
  SELECT DISTINCT obj = 
      QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.' 
    + QUOTENAME(OBJECT_NAME(parent_object_id)) 
  FROM sys.foreign_keys
)
SELECT @sql += N'ALTER TABLE ' + obj + ' WITH CHECK CHECK CONSTRAINT ALL;
' FROM x;

EXEC sp_executesql @sql;

To check progress I used my old ‘database_compare’ script.

Deadlocks from Entity Framework

Entity Framework was squirting raw SELECT statements at the database and causing deadlocks.

To fix, I captured the query text with sp_BlitzLock and executed it in Plan Explorer.

Plan Explorer confirmed that the data was being retrieved using a non-clustered index combined with the clustered-index (ahah!)

The Plan Explorer / Index Analysis tab showed that the non-clustered index failed to cover over 15 of the columns the query needed.

Within the Index Analysis screen I was able to create a new index that covered 100% of the columns.

I executed the query again to confirm it was no longer using the clustered index, and was therefore quicker and less likely to cause a deadlock.
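
For illustration, the fix amounts to a covering index along these lines (the index, table, and column names are invented, not the real ones) …

-- A covering index: the key column plus all the extra columns the query needs live
-- in one index, so there is no lookup back into the clustered index.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, [Status], TotalValue);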

Reducing index count

Sometimes it’s hard to see the wood for the trees. With over 30 indexes on a table of 50 columns I searched for some graphical way to list the columns against each index, so I could easily see a) indexes that were totally encapsulated in a larger one, and b) almost identical indexes where a column (or two) could be added to one so the smaller one could be dropped.

Initially it was sp_BlitzIndex that named the tables with too many indexes. I then ran queries against those tables in SentryOne’s Plan Explorer, like … select * from dbo.order_items; … or whatever.

Some time later :), in the Index Analysis tab I was able to tick boxes to show every column and hey presto! The exact graphical tool I wanted 🙂 with the bonus of an easy way to manipulate them.

But watch out! You need another tool to rank the read/write ratio of each index before you start making changes (I use my old ‘indexmaint’ script).
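
If you don’t have such a script to hand, a rough ranking can be pulled from the index-usage DMV, something like this (a sketch, not the author’s ‘indexmaint’ script; note the counters reset when the instance restarts) …

SELECT OBJECT_NAME(s.[object_id]) [Table],
       i.[name] [Index],
       s.user_seeks + s.user_scans + s.user_lookups [Reads],
       s.user_updates [Writes]
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i
  ON i.[object_id] = s.[object_id]
 AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY [Writes] DESC, [Reads];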

Change Job notification Operators

I wanted to standardize job error notifications, so created a new operator called ‘DBA’ (with multiple email addresses).

This code semi-automates the process of updating all the jobs by listing them, along with the code needed to change them …

/* ChangeNotifications.sql */

SELECT 
   J.[name] JobName, 
   O.[name] OperatorName, 
   O.email_address Email, 
   'EXEC msdb.dbo.sp_update_job @job_name = N''' + J.[name] + ''', 
     @notify_level_email=2, 
     @notify_email_operator_name = N''DBA''' CommandToChangeIt 
FROM msdb..sysjobs J 
LEFT JOIN msdb..sysoperators O 
  ON O.id = J.notify_email_operator_id 
ORDER BY Email, JobName;

Bulk Email Sender

It is inherently a bad idea to use a database server for application processes, and the built-in procedure “sp_send_dbmail” can be a bit temperamental.

Nevertheless, the brief was to send emails this way, and individually (IE: no long strings of addresses).

The environment was mature, with a working Email Profile, a database and tables already in-place and holding HTML style emails ready to go.

-- SendEmailProcess.sql

USE [SomeDatabase]
GO

CREATE PROCEDURE [dbo].[SendEmailProcess]

		@Test varchar(100) = null

AS

/* clear out the sysmail tables */

	DELETE FROM [msdb].[dbo].[sysmail_allitems]
	
/* parameters */

	DECLARE @ID uniqueidentifier,
			@To varchar(100),
			@Subject varchar(255),
			@Html varchar(max),
			@Return int

/* start of loop */

	WHILE (SELECT COUNT(*)	
		   FROM [SomeDatabase].[dbo].[EmailMessage] EM
		   JOIN [SomeDatabase].[dbo].[Recipient] R 
			 ON EM.Id = R.EmailMessage_Id2
		   WHERE EM.[Status] = 'Submitted') > 0
	BEGIN

	/* get any one email message */

		SELECT TOP 1
			@ID = EM.ID, 
			@To = isnull(@Test, R.EmailAddress),
			@Subject = EM.[Subject],
			@Html = EM.HtmlContent
		FROM [SomeDatabase].[dbo].[EmailMessage] EM
		JOIN [SomeDatabase].[dbo].[Recipient] R 
		  ON EM.Id = R.EmailMessage_Id2
		WHERE EM.[Status] = 'Submitted';

	/* send it */

		EXEC [msdb].[dbo].[sp_send_dbmail]
			 @profile_name = 'BulkMail',
			 @recipients = @To,
			 @subject = @Subject,
			 @body = @Html,
			 @body_format = 'HTML';

	/* check it worked */

		SET @Return = @@error
		
	/* if it worked - mark it as Sent */
	
		IF @Return = 0
		BEGIN
			UPDATE [SomeDatabase].[dbo].[EmailMessage]
			SET [Status] = 'Sent'
			WHERE Id = @ID
		END

	/* if it failed - flag it and move on */

		IF @Return != 0 

	/* less-than greater-than does not work in WordPress */

		BEGIN
			UPDATE [SomeDatabase].[dbo].[EmailMessage]
			SET [Status] = 'Failed'
			WHERE Id = @ID
		END 
		
/* end of loop */

	END

GO

Splitting up a large MDF file

I wanted to break-up a single 500GB MDF file into 5 files of around 100GB each, so created 4 NDF files.

I set autogrowth to 1024 MB for the NDF files and OFF for the MDF file.
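
For reference, adding one of the NDF files with those settings looks something like this (a sketch; the database, file names, sizes and paths are made up) …

ALTER DATABASE [SomeDatabase]
ADD FILE (
	NAME = N'SomeDatabase_Data2',
	FILENAME = N'D:\SQLData\SomeDatabase_Data2.ndf',
	SIZE = 100GB,
	FILEGROWTH = 1024MB
);

-- and switching autogrowth off for the original MDF
ALTER DATABASE [SomeDatabase]
MODIFY FILE (NAME = N'SomeDatabase_Data', FILEGROWTH = 0);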

In a SQL Job I used code like …

DBCC SHRINKFILE (N'MDFLogicalFileName', EMPTYFILE);

After the second weekend this left about 88GB in the MDF file.

--datafiles.sql

select Physical_Name,
	ROUND(CAST((size) AS FLOAT)/128,2) Size_MB,
	ROUND(CAST((FILEPROPERTY([name],'SpaceUsed')) AS FLOAT)/128,2) Used_MB,
	convert(int, ROUND(CAST((FILEPROPERTY([name],'SpaceUsed')) AS float)/128,2) / ROUND(CAST((size) AS float)/128,2) * 100) Used_Pct
FROM sys.database_files
where physical_name not like '%ldf%'
order by physical_name;

I cancelled and deleted the job.

Over the next 3 week nights I reduced its physical size to 300GB, 200GB, then 100GB using a Job step like …

DBCC SHRINKFILE (N'MDFLogicalFileName', 100000);

I set MDF autogrowth to match the NDF files, so the five would naturally balance (size wise) over time.

Lastly I set up a nightly job to rebuild the most fragmented indexes (Thanks again Ola).

Update Reporting from Live (part 2 of 2)

… I created a SQL Job on the Reporting server called “Restore from Live”.

Step-1 “Kill any connections”

Before a database can be restored it needs to be unused. Removing connections in this way is more reliable than changing the database to “Single-user-mode”.

DECLARE @kill VARCHAR(8000) = '';

SELECT  @kill = @kill + 'kill ' + CONVERT(VARCHAR(5), spid) + ';'
	FROM [master].[dbo].[sysprocesses]
	WHERE dbid = DB_ID('LiveDatabaseName')
	AND spid > 50;

EXEC (@kill);

Step-2 “Full Restore”

This step restores a Full backup. Changing the database name and file locations as required.

When complete the database will be left in a “Restoring” state. *It can be brought online either by completing the next step or by manual recovery EG: “Restore Database [LiveDatabaseName] with Recovery;”.

RESTORE DATABASE [LiveDatabaseName] 

FROM DISK = N'\\LiveServerName\SQLBackups\LiveDatabaseName\LiveDatabaseName_FULL.bak' 

WITH NORECOVERY, REPLACE;

Step-3 “Diff Restore”

This step restores a Diff backup similar to the last step, however it brings the database back online after completion. If this step ever fails see * above.

RESTORE DATABASE [LiveDatabaseName] 

FROM DISK = N'\\LiveServerName\SQLBackups\LiveDatabaseName\LiveDatabaseName_DIFF.bak' 

WITH RECOVERY;

Step-4 “Switch to Simple Recovery Mode”

This step changes the database to Simple recovery mode where the log-files are re-used and do not require management (EG: regular log backups). This is appropriate for report servers where the data is already backed up from live “for recovery” (IE: outside of the backups detailed in these 2 posts).

ALTER DATABASE [LiveDatabaseName] set recovery SIMPLE;

Step-5 “Remove Orphans”

This deprecated command re-maps the orphaned “mssql” database user to the “mssql” login on this server. The mapping breaks after a restore from another server because the login SIDs differ.

sp_change_users_login 'auto_fix', 'mssql';
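
For reference, the non-deprecated way to do the same re-mapping would be something like this (a sketch using the same user and login names) …

ALTER USER [mssql] WITH LOGIN = [mssql];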

Footnote

I wrote these 2 jobs using minimal variables and dynamic SQL, and using a generous number of jobs and job-steps, in the hope that this will be robust and easy to manage. (And because I really like simplifying such things)

Removing unused databases

Here is my work-sheet for safely hiding databases (that I suspect are unused) from SSMS.

-- DetachDB.sql

-- 1. List all attached databases with file paths

	SELECT db_name(database_id) [Database], Physical_Name
	FROM sys.master_files
	order by [Database]

-- 2. Create Attach Script for chosen db (accumulate history here)

	USE [master]; -- on some servername
	CREATE DATABASE xxx ON
	(FILENAME = 'D:\SQLData\xxx.mdf'),
	(FILENAME = 'D:\SQLLogs\xxx.ldf')
	FOR ATTACH;

	USE [master]; -- on some servername
	CREATE DATABASE Test ON
	(FILENAME = 'D:\SQLData\Test.mdf'),
	(FILENAME = 'D:\SQLLogs\Test_log.ldf')
	FOR ATTACH;

-- 3. Detach Database

	USE [master];
	EXEC MASTER.dbo.sp_detach_db @dbname = N'xxx';

-- 4. To rollback, re-attach database (scripted in step-2)

Space Free

Central to my ‘Alert on low space’ job is this query, which is very handy by itself …

--spaceAlert.sql

select	volume_mount_point Drive, 
	cast(sum(available_bytes)*100 / sum(total_bytes) as int) as [Free%],
	avg(available_bytes/1024/1024/1024) FreeGB
from sys.master_files f
cross apply sys.dm_os_volume_stats(f.database_id, f.[file_id])
group by volume_mount_point
order by volume_mount_point;
 

TSQL Performance Rule #1

There’s no significance to this one being number one 🙂 it’s just the one I’ve been thinking about 🙂 I may now have built this up a bit more than is warranted, so I hope you’re not expecting too much from my number one performance rule. Oh, OK then, here goes …

“All numbers in stored-procedures should be in single quotes”.

Even though unquoted numbers are implicitly INT, they can force a VARCHAR column to be converted to INT.

Consider WHERE SomeColumn = 42. Data-type precedence means VARCHARs will always be converted to INTs, never the other way around. The one numeric value above (42) could cause a million rows in the column (“SomeColumn”) to have to be converted to INT before being tested, significantly affecting performance.

Consider WHERE SomeColumn = ’42’. “SomeColumn” is either numeric or non-numeric. If it’s INT then just one value (the ’42’ in the WHERE clause) has to be converted to INT (taking no time at all). If “SomeColumn” is VARCHAR then there is no conversion at all.
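
A quick way to see the rule in action (the temp-table and values are just for demonstration) …

-- The column is VARCHAR, the literals are not
CREATE TABLE #demo (SomeColumn VARCHAR(10));
INSERT INTO #demo VALUES ('42'), ('43');

-- Implicit conversion: every value in SomeColumn is converted to INT before the test
SELECT * FROM #demo WHERE SomeColumn = 42;

-- No conversion: the quoted literal already matches the column type
SELECT * FROM #demo WHERE SomeColumn = '42';

DROP TABLE #demo;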

Add a Column to a Report Model

I had the odd request to add an extra column to a SQL 2008r2 “Report Model”. I had never heard of one of those, but it turned out to be a type of amalgamated data-source that the users created their own ad-hoc reports from (via Report Builder 1).

To add the extra column I just added it to the SQL View (which was named in the amalgamated data-source definition). Then to refresh the Report Model I downloaded it and uploaded it with a new name.

Update Statistics on a whole database

Whilst performance tuning an SSRS report server I wanted to update all the statistics within the two databases ‘ReportServer’ and ‘ReportserverTempDB’.

I chose a simply-coded, two step method (for safety and to keep control).

First I generated the commands (per database) …

Select 'UPDATE STATISTICS [' + [name] + '] WITH FULLSCAN' 
from reportserver.sys.tables
Select 'UPDATE STATISTICS [' + [name] + '] WITH FULLSCAN' 
from reportservertempdb.sys.tables

… before executing them in a separate session.

Collecting Wait Stats

Based on the work of GS, here is my script to create a Job that collects wait stats every 15 minutes.

--CreateJob_DBA_CollectWaitStats.sql

USE [msdb]
GO

IF  EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view WHERE name = N'DBA_CollectWaitStats')
EXEC msdb.dbo.sp_delete_job @job_name=N'DBA_CollectWaitStats', @delete_unused_schedule=1
GO

USE [msdb]
GO

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'Database Maintenance' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'Database Maintenance'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

declare 
	@today varchar(50) = (select convert(varchar, getdate(), 112)),
	@nextweek varchar(50) = (select convert(varchar, getdate()+8, 112)),
	@dbname varchar(50) = 'master' --<< change this >>

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'DBA_CollectWaitStats', 
		@enabled=1, 
		@notify_level_eventlog=0, 
		@notify_level_email=0, 
		@notify_level_netsend=0, 
		@notify_level_page=0, 
		@delete_level=0, 
		@description=N'Collects wait stats for performance tuning.', 
		@category_name=N'Database Maintenance', 
		@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Create the table', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=3, 
		@on_success_step_id=0, 
		@on_fail_action=3, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'create table [dbo].[WaitStats] 
(
	WaitType nvarchar(60) not null,
	NumberOfWaits bigint not null,
	SignalWaitTime bigint not null,
	ResourceWaitTime bigint not null,
	SampleTime datetime not null
)', 
		@database_name=@dbname, 
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Collect current waits', 
		@step_id=2, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_success_step_id=0, 
		@on_fail_action=2, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'INSERT INTO [dbo].[WaitStats]
SELECT  wait_type as WaitType,
        waiting_tasks_count AS NumberOfWaits,
        signal_wait_time_ms AS SignalWaitTime,
        wait_time_ms - signal_wait_time_ms AS ResourceWaitTime,
        GETDATE() AS SampleTime
FROM    sys.dm_os_wait_stats
WHERE [wait_type] NOT IN (
	N''BROKER_EVENTHANDLER'', N''BROKER_RECEIVE_WAITFOR'',
	N''BROKER_TASK_STOP'', N''BROKER_TO_FLUSH'',
	N''BROKER_TRANSMITTER'', N''CHECKPOINT_QUEUE'',
	N''CHKPT'', N''CLR_AUTO_EVENT'',
	N''CLR_MANUAL_EVENT'', N''CLR_SEMAPHORE'',
	N''DBMIRROR_DBM_EVENT'', N''DBMIRROR_EVENTS_QUEUE'',
	N''DBMIRROR_WORKER_QUEUE'', N''DBMIRRORING_CMD'',
	N''DIRTY_PAGE_POLL'', N''DISPATCHER_QUEUE_SEMAPHORE'',
	N''EXECSYNC'', N''FSAGENT'',
	N''FT_IFTS_SCHEDULER_IDLE_WAIT'', N''FT_IFTSHC_MUTEX'',
	N''HADR_CLUSAPI_CALL'', N''HADR_FILESTREAM_IOMGR_IOCOMPLETION'',
	N''HADR_LOGCAPTURE_WAIT'', N''HADR_NOTIFICATION_DEQUEUE'',
	N''HADR_TIMER_TASK'', N''HADR_WORK_QUEUE'',
	N''KSOURCE_WAKEUP'', N''LAZYWRITER_SLEEP'',
	N''LOGMGR_QUEUE'', N''MEMORY_ALLOCATION_EXT'',
	N''ONDEMAND_TASK_QUEUE'',
	N''PREEMPTIVE_XE_GETTARGETSTATE'',
	N''PWAIT_ALL_COMPONENTS_INITIALIZED'',
	N''PWAIT_DIRECTLOGCONSUMER_GETNEXT'',
	N''QDS_PERSIST_TASK_MAIN_LOOP_SLEEP'', N''QDS_ASYNC_QUEUE'',
	N''QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP'',
	N''QDS_SHUTDOWN_QUEUE'',
	N''REQUEST_FOR_DEADLOCK_SEARCH'', N''RESOURCE_QUEUE'',
	N''SERVER_IDLE_CHECK'', N''SLEEP_BPOOL_FLUSH'',
	N''SLEEP_DBSTARTUP'', N''SLEEP_DCOMSTARTUP'',
	N''SLEEP_MASTERDBREADY'', N''SLEEP_MASTERMDREADY'',
	N''SLEEP_MASTERUPGRADED'', N''SLEEP_MSDBSTARTUP'',
	N''SLEEP_SYSTEMTASK'', N''SLEEP_TASK'',
	N''SLEEP_TEMPDBSTARTUP'', N''SNI_HTTP_ACCEPT'',
	N''SP_SERVER_DIAGNOSTICS_SLEEP'', N''SQLTRACE_BUFFER_FLUSH'',
	N''SQLTRACE_INCREMENTAL_FLUSH_SLEEP'',
	N''SQLTRACE_WAIT_ENTRIES'', N''WAIT_FOR_RESULTS'',
	N''WAITFOR'', N''WAITFOR_TASKSHUTDOWN'',
	N''WAIT_XTP_RECOVERY'',
	N''WAIT_XTP_HOST_WAIT'', N''WAIT_XTP_OFFLINE_CKPT_NEW_LOG'',
	N''WAIT_XTP_CKPT_CLOSE'', N''XE_DISPATCHER_JOIN'',
	N''XE_DISPATCHER_WAIT'', N''XE_TIMER_EVENT'')
AND	[waiting_tasks_count] > 0

', 
		@database_name=@dbname,
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Every 15 mins for a week', 
		@enabled=1, 
		@freq_type=4, 
		@freq_interval=1, 
		@freq_subday_type=4, 
		@freq_subday_interval=15, 
		@freq_relative_interval=0, 
		@freq_recurrence_factor=0, 
		@active_start_date=@today, 
		@active_end_date=@nextweek, 
		@active_start_time=100, 
		@active_end_time=235959, 
		@schedule_uid=N'5b0842fe-8f80-44e9-8a09-aac6ce5c2b2e'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO

More thoughts against Triggers

For Rob & Karl – Triggers run outside of transactions. An insert that fires a trigger may be rolled back, but the trigger rolls on.

Triggers introduce a long-term maintenance headache. You can read a stored-procedure from top to bottom and imagine you understand what it does. But unless you examine every table it touches – you don’t. Little bits of code may be running silently which augment or even reverse some of the logic within the stored-procedure.

Triggers are used by lazy developers to ‘bolt on’ new features to applications, rather than tracking down all the code that could insert/update/delete from a table and adding the code (or a link to it) there.

This would be forgivable if the application code were closed or proprietary, but never when the application is open to the application developer, who just cannot be bothered to integrate code changes properly, and cares not-a-jot about long-term maintenance headaches (slow breaths, slow breaths :))

SQL Deadlock graph – arrows

In a SQL deadlock graph the direction of the arrows is an interesting thing.

With my mechanistic head on, I am imagining it as …

  1. Spid-a requested a lock, and then got a lock (a two-way trip)
  2. Spid-b requested a lock, and then got a lock (arrow ends-up pointing at spid)
  3. Spid-a requested a lock, and is waiting (a one-way thing)
  4. Spid-b requested a lock, and is waiting (arrow pointing away from spid)

Capture Deadlock Graph using Profiler

To Capture a Deadlock Graph using Profiler (NB: with SQL 2008 and above you can also use an extended-event).

  • File / New trace
  • Connection details
  • Use Template / Blank
  • Events Selection / Locks ..1) DeadlockGraph 2) Lock:Deadlock 3) Lock:Deadlock Chain
  • Event Extraction Settings / Save Deadlock XML events separately / (somefilename)
  • Each deadlock in a distinct file
  • All Deadlocks
  • Run
  • (wait)
  • File / Export / Extract SQL Server Events / Extract deadlock Events / (somefilename2)

Log-shipping Restore error: empty file

I noticed a log-shipping RESTORE job had started failing. Looking back through the job history I found the last two “good” executions contained errors …

*** Error: Could not apply log backup file ‘SomePath\SomeFile.trn’ to secondary database. The volume is empty.

I looked at the path\file specified and found the file was zero size.

I looked on the network-share where the files are backed-up-to \ copied-from, and found the same file was NOT zero size.

I manually copied the file from the network-share to the destination folder (on the DR server), overwriting the empty file.

Log-shipping recovered over the next few hours.

Check every Linked Server

I was unable to cobble together some Powershell code that I could execute within a job-step to check our linked-servers were working.

So I resorted to making the best of the built-in, but flawed, “SP_testlinkedserver” (it’s flawed because if a link fails, it crashes, slowly).

The code below, when run in a job-step overnight, will dynamically create one job for each linked-server on the box. The job(s) will then run, email the “DBA” operator for every linked-server that fails, and then delete themselves.

-- testlinkedservers.sql

-- get list of all linked servers on this box

	CREATE TABLE #temp (
		srv_name varchar(MAX), 
		srv_providername varchar(MAX), 
		srv_product varchar(MAX), 
		srv_datasource varchar(MAX), 
		srv_providerstring varchar(MAX), 
		srv_location varchar(MAX), 
		srv_cat varchar(MAX))
	INSERT INTO #temp EXEC sp_linkedservers
	DELETE FROM #temp WHERE srv_name LIKE 'LOGSHIP%'
	DELETE FROM #temp WHERE srv_name = @@SERVERNAME

-- loop

	DECLARE @name VARCHAR(MAX), @cmd VARCHAR(MAX), @run VARCHAR(MAX)
	WHILE (SELECT COUNT(*) FROM #temp) > 0
	BEGIN
	SELECT TOP 1 @name = srv_name FROM #temp

	-- create the job code

	SET @cmd = 'BEGIN TRANSACTION
	DECLARE @jobId BINARY(16)
	SELECT @jobId = job_id FROM msdb.dbo.sysjobs WHERE (name = N''DBA - LinkedServerTest ' + @name + ''')
	IF (@jobId IS NULL)
	BEGIN
	EXEC msdb.dbo.sp_add_job @job_name=N''DBA - LinkedServerTest ' + @name + ''', 
		@enabled=1, 
		@notify_level_eventlog=2, 
		@notify_level_email=2, 
		@notify_level_netsend=0, 
		@notify_level_page=0, 
		@delete_level=3, 
		@description=N''No description available.'', 
		@category_name=N''[Uncategorized (Local)]'', 
		@owner_login_name=N''sa'', 
		@notify_email_operator_name=N''DBA'', 
		@job_id = @jobId OUTPUT
	END

	-- create the job-step code

	IF NOT EXISTS (SELECT * FROM msdb.dbo.sysjobsteps WHERE job_id = @jobId AND step_id = 1)
	EXEC msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N''one'', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_success_step_id=0,
		@on_fail_action=2, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N''TSQL'', 
		@command=N''sp_testlinkedserver [' + @name + ']'', 
		@database_name=N''master'', 
		@flags=0;

	-- create instantiation code

	EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N''(local)''

	COMMIT TRANSACTION'

	-- create the job

	EXEC(@cmd)

	-- run the job

	SET @run = 'EXECUTE msdb.dbo.sp_start_job ''DBA - LinkedServerTest ' + @name + ''''
	EXEC(@run)

        -- move to next row in loop

	DELETE FROM #temp WHERE srv_name = @name
	END

Log-Shipping Monitor incorrect after outage

After a virtualization issue caused an unscheduled reboot of production, I found the DR (log-shipping) monitor incorrectly reporting issues.

It seems the linked-server was no longer working, as @@servername returned NULL on Prod.

On Production SP_AddServer failed as the servername was in sys.servers – but not with server_id 0 (as needed for @@servername).

Removing the incorrect entry with SP_DropServer failed as there were remote connections using it, and SP_DropRemoteLogin failed as there was not a remote user called NULL.

The fix was to remove log-shipping first using the GUI, which was only partially successful. Then manually, by deleting jobs from prod and DR, and truncating system-tables in MSDB starting log_shipping~ (on both servers).

Once log-shipping was cleaned off both machines I could use … EXEC SP_DropServer ‘ProdServer’, ‘droplogins’ followed by EXEC SP_AddServer ‘ProdServer’, LOCAL successfully. Now that the server-name was correctly at the top of sys.servers, the only task left was to schedule a reboot so SELECT @@SERVERNAME would pick up the new value.

After which I could re-configure log-shipping.

Move Databases

I wanted to move about 50 databases on a Sharepoint server off the C-drive
(yes I know).

Sadly the only place I could move both datafiles and logfiles to was the D-Drive
(I know, I know).

Here’s the code I wrote to help me …

--move_db.sql

-- to move a user-database to different-drives on the same-server

USE [master]
GO

-- backup

	DECLARE @dbname VARCHAR(max) = 'SomeDatabaseName' -- database name

	DECLARE @backup_cmd VARCHAR(MAX) = 'BACKUP DATABASE ['+ @dbname + ']
	TO DISK = N''\\SomeNetworkShare\SomeServerName_' + @dbname + '.bak''
	WITH INIT, COMPRESSION, STATS = 1;'

	SELECT (@backup_cmd)
	--EXEC (@backup_cmd)


-- kill connections

	DECLARE @kill_cmd VARCHAR(MAX) = 'DECLARE @kill varchar(8000) = '''';
	SELECT @kill=@kill+''kill ''+convert(varchar(5),spid)+'';'' from master..sysprocesses 
	WHERE dbid=db_id(''' + @dbname + ''') and spid>50;
	EXEC (@kill);'

	SELECT (@kill_cmd)
	--EXEC (@kill_cmd)

-- restore

	DECLARE @restore_cmd VARCHAR(MAX) = 'RESTORE DATABASE [' + @dbname + ']
	FROM DISK = N''\\SomeNetworkShare\SomeServerName_' + @dbname + '.bak''
	WITH FILE = 1,  
	MOVE N''' + @dbname + ''' TO N''D:\SQL_Data\' + @dbname + '.mdf'',
	MOVE N''' + @dbname + '_log'' TO N''D:\SQL_Log\' + @dbname + '_log.ldf'',
	REPLACE,  STATS = 1;'

	SELECT (@restore_cmd)
	--EXEC (@restore_cmd)

sp_whoisactive

I tend to paste this (below) into SSMS keyboard shortcuts under Ctrl+F1 so I can instantly see what’s going on. 

EXEC [master].[dbo].[sp_WhoIsActive] @get_outer_command=1, @get_plans=1, @get_additional_info = 1, @get_task_info = 2, @output_column_list = '[dd%][session_id][block%][sql_text][login_name][CPU%][wait_info][tasks][tran_log%][database%][Program%][percent%][host%][reads%][writes%][sql_command][query_plan][locks][%]'

When it’s too slow 🙂 I use a “lite” version under Ctrl+3

EXEC [master].[dbo].[sp_WhoIsActive] @get_additional_info = 1, @output_column_list = '[dd%][session_id][block%][sql_text][sql_command][login_name][CPU%][wait_info][tasks][tran_log%][database%][Program%][percent%][host%][reads%][writes%][query_plan][locks][%]'

Who deleted that data?

Sadly I could not find out.

Going forward – to capture deletes on a table I set up a regular ‘after delete’ trigger, with some extra columns to hold the output of system functions.

This allowed me to capture the date/time, PC-Name and login that originated deletes. Here is my working lab …

--lab_trigger_deletes.sql

--create table to be monitored and add some data
	CREATE TABLE t1 (c1 INT, c2 int)
	INSERT INTO t1 VALUES (1,7), (2,8), (3,9)

-- create audit table
	CREATE TABLE t1_audit (c1 INT, c2 INT, c3 DATETIME, c4 SYSNAME, c5 SYSNAME, c6 SYSNAME)

-- check contents of both tables
	SELECT * from t1
	SELECT * FROM t1_audit

-- create trigger
	CREATE TRIGGER trg_ItemDelete 
	ON dbo.t1 
	AFTER DELETE 
	AS
	INSERT INTO dbo.t1_audit(c1, c2, c3, c4, c5, c6)
			SELECT d.c1, d.c2, GETDATE(), HOST_NAME(), SUSER_SNAME(), ORIGINAL_LOGIN()
			FROM Deleted d

-- delete a row (firing the trigger)
	DELETE FROM t1 WHERE c1 = 2

-- check contents of both tables again
	SELECT * from t1
	SELECT * FROM t1_audit

-- tidy up
	IF OBJECT_ID ('trg_ItemDelete', 'TR') IS NOT NULL DROP TRIGGER trg_ItemDelete;
   	drop TABLE t1
	drop TABLE t1_audit

Log Space

I noticed a jump in logfile size the other day and was wondering how to predict an autogrowth event.

I know old log records are cleared out after a log-backup, but that’s internal and not normally visible.

I came up with this to run across production …

--LogSpace.sql
-- To help find near-full logfiles that may autogrow soon.

-- create table to hold raw data
CREATE TABLE #temp (DBName varchar(100), SizeMB int, UsedPct float, [STATUS] bit)

-- populate table
INSERT #temp EXEC('DBCC SQLPERF(logspace)')

-- output
SELECT DBName, SizeMB, UsedPct FROM #temp --WHERE UsedPct > 90 -- 90% full

-- clean-up
DROP TABLE #temp