Deadlocks from Entity Framework

Entity Framework was squirting raw SELECT statements at the database and causing deadlocks.

To investigate, I captured the query text with sp_BlitzLock and executed it in Plan Explorer.

Plan Explorer confirmed that the data was being retrieved using a non-clustered index combined with a lookup on the clustered index (aha!)

The Plan Explorer / Index Analysis tab showed the non-clustered index was missing over 15 of the columns the query needed – hence the lookup.

Within the Index Analysis screen I was able to create a new index that covered 100% of the columns.

I executed the query again to confirm it was no longer using the clustered index, and was therefore quicker and less likely to cause a deadlock.
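
For reference, the same kind of covering index can be written in T-SQL. This is just a minimal sketch with hypothetical table, column and index names – the real one was generated from the Index Analysis screen …

-- Hypothetical covering index: the key is the column being filtered on,
-- and INCLUDE holds the other columns the query selects, so the
-- clustered-index lookup is no longer needed.
CREATE NONCLUSTERED INDEX IX_OrderItems_Covering
ON dbo.OrderItems (CustomerId)
INCLUDE (OrderDate, ProductId, Quantity, Price);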

Reducing index count

Sometimes it's hard to see the wood for the trees. With over 30 indexes on a table of 50 columns I searched for some graphical way to list the columns against each index, so I could easily see a) indexes that were totally encapsulated in a larger one, and b) almost-identical indexes where a column (or two) could be added to one so the smaller could be dropped.

Initially it was sp_BlitzIndex that named the tables with too many indexes. I then ran a simple query against each of those tables in SentryOne's Plan Explorer, like … select * from dbo.order_items;

Some time later :), in the Index Analysis tab I was able to tick boxes to show every column and hey presto! The exact graphical tool I wanted 🙂 With the bonus of an easy way to manipulate the indexes.

But watch out! You need another tool to rank the read/write ratio of each index before you start making changes (I use my old ‘indexmaint’ script).
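
For anyone without such a script, a minimal sketch of the same idea using sys.dm_db_index_usage_stats (NB: these counters reset when the service restarts) would be …

-- rank each index on a table by reads vs writes since the last restart
SELECT	i.[name] AS IndexName,
	ISNULL(s.user_seeks + s.user_scans + s.user_lookups, 0) AS Reads,
	ISNULL(s.user_updates, 0) AS Writes
FROM	sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
	ON s.[object_id] = i.[object_id]
	AND s.index_id = i.index_id
	AND s.database_id = DB_ID()
WHERE	i.[object_id] = OBJECT_ID('dbo.order_items')
ORDER BY Writes DESC;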

Removing all duplicate rows

Just recording here an update to my old ‘having’ way to remove duplicate rows …

WITH cte AS (
SELECT SomeColumnName,
ROW_NUMBER() OVER (PARTITION BY SomeColumnName ORDER BY SomeColumnName) AS [rn]
FROM [SomeDatabaseName].[dbo].[SomeTableName]
)
SELECT * FROM cte WHERE [rn] > 1 -- #1 test
-- DELETE cte WHERE [rn] > 1 -- #2 execute
-- NB: put every column that defines a 'duplicate' in the PARTITION BY

Removing unused databases

Here is my worksheet for safely hiding databases from SSMS that I suspect are unused …

-- DetachDB.sql

-- 1. List all attached databases with file paths

	SELECT db_name(database_id) [Database], Physical_Name
	FROM sys.master_files
	order by [Database]

-- 2. Create Attach Script for chosen db (accumulate history here)

	USE [master]; -- on some servername
	CREATE DATABASE xxx ON
	(FILENAME = 'D:\SQLData\xxx.mdf'),
	(FILENAME = 'D:\SQLLogs\xxx.ldf')
	FOR ATTACH;

	USE [master]; -- on some servername
	CREATE DATABASE Test ON
	(FILENAME = 'D:\SQLData\Test.mdf'),
	(FILENAME = 'D:\SQLLogs\Test_log.ldf')
	FOR ATTACH;

-- 3. Detach Database

	USE [master];
	EXEC master.dbo.sp_detach_db @dbname = N'xxx'; -- NB: fails if the database is in use

-- 4. To rollback, re-attach database (scripted in step-2)
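
(An alternative I sometimes consider – not part of this worksheet – is to take the database offline instead. It stays listed in SSMS, but greyed-out, and the rollback is a one-liner …)

-- hide without detaching (reversible)
ALTER DATABASE [xxx] SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- ALTER DATABASE [xxx] SET ONLINE; -- rollback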

Space Free

Central to my ‘Alert on low space’ job is this query, which is very handy by itself …

--spaceAlert.sql

select	volume_mount_point Drive, 
	cast(sum(available_bytes)*100 / sum(total_bytes) as int) as [Free%],
	avg(available_bytes/1024/1024/1024) FreeGB -- every file on a volume reports the same numbers, so AVG just collapses the duplicates
from sys.master_files f
cross apply sys.dm_os_volume_stats(f.database_id, f.[file_id])
group by volume_mount_point
order by volume_mount_point;

TSQL Performance Rule #1

There’s no significance to this one being number one 🙂 it’s just the one I’ve been thinking about lately 🙂 I may have built this up a bit more than is warranted, so I hope you’re not expecting too much from my number one performance rule. Oh, OK then, here goes …

“All numbers in stored-procedures should be in single quotes”.

Quoting is harmless when the column really is an INT, but a bare number compared against a VARCHAR column forces the whole column to be converted.

Consider WHERE SomeColumn = 42. Data-type precedence means VARCHARs will always be converted to INTs, never the other way around. The one numeric value above (42) could cause a million rows in the column (“SomeColumn”) to be converted to INT just to be tested – significantly affecting performance, and making any index on the column useless.

Consider WHERE SomeColumn = ’42’. “SomeColumn” is either numeric or non-numeric. If it’s INT then just one value (the ’42’ in the where clause) has to be converted to INT (taking no time at all). If “SomeColumn” is VARCHAR then there is no conversion.
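
A minimal repro sketch (hypothetical table and data) that shows both cases …

--conversion_demo.sql
CREATE TABLE dbo.ConvDemo (SomeColumn VARCHAR(10) NOT NULL);
CREATE INDEX IX_ConvDemo ON dbo.ConvDemo (SomeColumn);
INSERT INTO dbo.ConvDemo VALUES ('42'), ('99');

-- bad: 42 is INT, so every row of SomeColumn is converted to INT
-- (index scan, and a conversion error if any row is non-numeric)
SELECT * FROM dbo.ConvDemo WHERE SomeColumn = 42;

-- good: '42' matches the column type, so no conversion (index seek)
SELECT * FROM dbo.ConvDemo WHERE SomeColumn = '42';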

AlwaysOn Availability Groups

Terms (SQL Server 2012, 2014, 2016, 2017 Enterprise)

AG = Availability Group = AlwaysOn Availability Group = SQL Server technology that protects one or more databases on a SQL Server instance as a logical group, for HA or DR.

Node = A Windows Server participating in an AG.

SQL Server Instance = A complete installation of SQL Server. Including user databases, system databases, logins, linked servers, jobs etc. A single default-instance plus a number of named-instances could be installed on one node.

Replica = a copy of the user database(s) being protected by an AG, hosted on a SQL Server instance – one primary replica plus one or more secondary replicas.

List (high level)

Configure Always On (Operating System)
– Enable windows clustering on each node
– Add participating nodes to cluster
– Validate cluster
– Create cluster
– Multi-subnet?

Configure Availability Groups (SQL Server)
– Enable AG on each SQL Server instance through config manager
– Create endpoint on each replica
– Grant connect on each endpoint/replica
– Create an AG (primary replica)
– Join each secondary to the new AG (secondary replicas)

Configure Databases (Database)
– Join database to AG
– Configure synchronous / asynchronous replication
– Configure manual / auto failover
– Configure read only / non read only secondary

Configure the Listener (Availability Group)
– Select IP for each subnet
– Configure listener
– Test listener
– Test failover

Configure Advanced Options
– Read only routing
– Offloading backups
– Failover behavior
– set up monitoring
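
A minimal T-SQL sketch of the “Configure Availability Groups” steps above – server, endpoint and database names are all hypothetical …

-- on each replica: create and grant the mirroring endpoint
CREATE ENDPOINT [Hadr_endpoint]
	STATE = STARTED
	AS TCP (LISTENER_PORT = 5022)
	FOR DATABASE_MIRRORING (ROLE = ALL);
GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [DOMAIN\svc-sql];

-- on the primary: create the AG
CREATE AVAILABILITY GROUP [AG1]
FOR DATABASE [SomeDatabase]
REPLICA ON
	N'NODE1' WITH (ENDPOINT_URL = N'TCP://node1.domain.local:5022',
		AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC),
	N'NODE2' WITH (ENDPOINT_URL = N'TCP://node2.domain.local:5022',
		AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC);

-- on each secondary: join
ALTER AVAILABILITY GROUP [AG1] JOIN;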

Update Statistics on a whole database

Whilst performance tuning an SSRS report server I wanted to update all the statistics within the two databases ‘ReportServer’ and ‘ReportserverTempDB’.

I chose a simply-coded, two-step method (for safety, and to keep control).

First I generated the commands (per database) …

Select 'UPDATE STATISTICS [' + schema_name([schema_id]) + '].[' + [name] + '] WITH FULLSCAN' 
from reportserver.sys.tables
Select 'UPDATE STATISTICS [' + schema_name([schema_id]) + '].[' + [name] + '] WITH FULLSCAN' 
from reportservertempdb.sys.tables

… before executing them in a separate session, connected to the relevant database.
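
(As a one-step alternative – a sketch, not what I ran – the same commands can be built and executed in a single batch, once per database …)

--update_stats_one_step.sql
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql = @sql + N'UPDATE STATISTICS ['
	+ SCHEMA_NAME([schema_id]) + N'].[' + [name] + N'] WITH FULLSCAN;'
FROM sys.tables;
EXEC sp_executesql @sql; -- run while connected to each database in turn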

Adding ‘All’ to a report drop-down list

There is a problem when you configure a report parameter to ‘Accept Multiple Items’, in that it won’t work with a stored procedure (SP).

One work-around is to only Accept Single Items, but make one of them ‘All’, like this …

1) Create a separate SP to populate the parameter and add an ‘All’ option …

SELECT 'All' CustId -- NB: assumes CustomerId is a string type; CAST it if numeric
UNION
SELECT DISTINCT CustomerId
FROM SomeTable
ORDER BY CustId

2) Then amend the main SP by joining to SomeTable, and adding a CASE statement in the WHERE clause, like so …

WHERE SomeTable.CustomerId = CASE WHEN @CustId != 'All' THEN @CustId ELSE SomeTable.CustomerId END

Which translates as …

WHERE SomeTable.CustomerId = SomeTable.CustomerId

… when ‘All’ is selected (which will let everything through), or …

WHERE SomeTable.CustomerId = @CustId

… where ‘All’ is Not selected.

This allows the user to select All or a single value.

However, if you need to select more than one value but not all values, you will need another approach: a split-string function (see the sketch below).
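
For example, a minimal sketch using STRING_SPLIT (SQL Server 2016 onward – earlier versions need a user-defined split function). It assumes the report joins the multi-value parameter into one comma-separated string, e.g. with Join(Parameters!CustId.Value, ",") …

CREATE PROCEDURE dbo.SomeReportProc @CustIdList VARCHAR(MAX) -- e.g. '3,7,12'
AS
SELECT *
FROM SomeTable
WHERE SomeTable.CustomerId IN
	(SELECT LTRIM(value) FROM STRING_SPLIT(@CustIdList, ','));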

A Report that uses its location

A customer had many copies of the same report – one for each site. And the application database had multiple copies of each table – one for each site. The table names all began with the site’s name, EG: “WP London_SalesHeader”, “WP Barcelona_SalesHeader” etc.

In Report Manager there was a folder for each site, containing sub-folders for different categories of report (EG: /London/Sales, OR /London/Production).

This is a simplified account of how I created a Report that returned only information specific to its location.

In Report Builder I created a hidden parameter called @site of type Text with no “Available Values” and its “Default Values” using the global variable ReportFolder.

As the output from this built-in variable would be like “\Paris\Sales” I had to create an expression for the “Default Value” of @site searching through each site name in turn …

=IIf(Globals!ReportFolder.Contains("Barcelona"),"WP Barcelona",
IIf(Globals!ReportFolder.Contains("Paris"),"WP Paris", "WP London"))

Finally, in the report query I derived the table name using the @site parameter.

declare @cmd varchar(max) = 
'select	[SalesCode],
	[Description],
	[NetWeight],
	[SalesDate]
from	[Production].[dbo].[' + @site + '_SalesHeader]'

exec(@cmd)

(NB: As a best-practice I displayed the value of @site, along with the other parameter choices, in the report sub-title.)

Configuring SQL 2014 Clustering

Practical High Availability (HA) – my lists …

– HA is measured in uptime (EG: five nines), zero data loss, automatic failover
– DR is measured in recovery time, minimal data loss, point-in-time recovery

Cluster – (AKA Windows Cluster, Failover Cluster) is an HA solution
Node – a server, part of a cluster, that can support production by itself

Cluster prerequisites
– same architecture (32 bit / 64 bit)
– same SQL Server version
– same OS version (eg: Windows Server 2012/2016)
– (memory, cpu, patch-level, hardware, configuration)
– at least two nodes
– shared storage (eg’s: san, SCSI)
– common network (with no single point of failure)

Cluster setup click-by-click …
– node-1, 2
– server manager
– add feature
– failover clustering
– iSCSI virtual disks
– 1 data, 2 log files, 3 quorum
– rescan storage
– new volume wizard 1, 2, 3
– size, drive letter, label (ie: Data, log, quorum)
– File and Storage Services
– iSCSI initiator
– target vsan ip
– quick connect
– auto configure (1, 2, 3)
– failover cluster manager
– validate configuration
– node1, node2, run all tests
– warning: only one network found (single point of failure) ignore if vm
– create cluster
– just add node-1 (simpler to identify/fix issues, eg: security)
– cluster name
– warning: network single-point-of-failure (ignore if vm)
– actions: add node (node-2) …

Install SQL Server (2014)
– node-1 (individual features)
– advanced, advanced cluster preparation *
– node-2 (individual features)
– advanced, advanced cluster preparation **
– node-1 or 2 (common features)
– advanced cluster completion ***

* eg: – Database Engine Services (tic), Management tools – complete (tic)
– instance root directory etc – d-drive
– named instance
** repeat

*** – sql server network name (for dns / ad)
– cluster disk 1,3 (tic) not quorum
– cluster network configuration (type in) ip
– collation, mixed mode authentication, add self, add svc-account
– data directory (eg: g-drive), log directory (eg h-drive)

Test cluster #1
– failover cluster manager
– running / owner node (~1)
– move / pending
– running / owner node (~2)

Test cluster #2
– SSMS (3 queries)
– select @@servername
– select serverproperty('ComputerNamePhysicalNetBIOS')
– select * from sys.dm_os_cluster_nodes -- node-1 is up
– hard crash node-1
– repeat 3 queries (node-2 is up)

Collecting Wait Stats

Based on the work of GS, here is my script to create a Job that collects wait stats every 15 minutes.

--CreateJob_DBA_CollectWaitStats.sql

USE [msdb]
GO

IF  EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view WHERE name = N'DBA_CollectWaitStats')
EXEC msdb.dbo.sp_delete_job @job_name=N'DBA_CollectWaitStats', @delete_unused_schedule=1
GO

USE [msdb]
GO

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'Database Maintenance' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'Database Maintenance'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

declare 
	@today varchar(50) = (select convert(varchar, getdate(), 112)),
	@nextweek varchar(50) = (select convert(varchar, getdate()+8, 112)),
	@dbname varchar(50) = 'master' -- <<<<< change this to the database that should hold the WaitStats table >>>>>

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'DBA_CollectWaitStats', 
		@enabled=1, 
		@notify_level_eventlog=0, 
		@notify_level_email=0, 
		@notify_level_netsend=0, 
		@notify_level_page=0, 
		@delete_level=0, 
		@description=N'Collects wait stats for performance tuning.', 
		@category_name=N'Database Maintenance', 
		@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Create the table', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=3, 
		@on_success_step_id=0, 
		@on_fail_action=3, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'create table [dbo].[WaitStats] 
(
	WaitType nvarchar(60) not null,
	NumberOfWaits bigint not null,
	SignalWaitTime bigint not null,
	ResourceWaitTime bigint not null,
	SampleTime datetime not null
)', 
		@database_name=@dbname, 
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Collect current waits', 
		@step_id=2, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_success_step_id=0, 
		@on_fail_action=2, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'INSERT INTO [dbo].[WaitStats]
SELECT  wait_type as WaitType,
        waiting_tasks_count AS NumberOfWaits,
        signal_wait_time_ms AS SignalWaitTime,
        wait_time_ms - signal_wait_time_ms AS ResourceWaitTime,
        GETDATE() AS SampleTime
FROM    sys.dm_os_wait_stats
WHERE [wait_type] NOT IN (
	N''BROKER_EVENTHANDLER'', N''BROKER_RECEIVE_WAITFOR'',
	N''BROKER_TASK_STOP'', N''BROKER_TO_FLUSH'',
	N''BROKER_TRANSMITTER'', N''CHECKPOINT_QUEUE'',
	N''CHKPT'', N''CLR_AUTO_EVENT'',
	N''CLR_MANUAL_EVENT'', N''CLR_SEMAPHORE'',
	N''DBMIRROR_DBM_EVENT'', N''DBMIRROR_EVENTS_QUEUE'',
	N''DBMIRROR_WORKER_QUEUE'', N''DBMIRRORING_CMD'',
	N''DIRTY_PAGE_POLL'', N''DISPATCHER_QUEUE_SEMAPHORE'',
	N''EXECSYNC'', N''FSAGENT'',
	N''FT_IFTS_SCHEDULER_IDLE_WAIT'', N''FT_IFTSHC_MUTEX'',
	N''HADR_CLUSAPI_CALL'', N''HADR_FILESTREAM_IOMGR_IOCOMPLETION'',
	N''HADR_LOGCAPTURE_WAIT'', N''HADR_NOTIFICATION_DEQUEUE'',
	N''HADR_TIMER_TASK'', N''HADR_WORK_QUEUE'',
	N''KSOURCE_WAKEUP'', N''LAZYWRITER_SLEEP'',
	N''LOGMGR_QUEUE'', N''MEMORY_ALLOCATION_EXT'',
	N''ONDEMAND_TASK_QUEUE'',
	N''PREEMPTIVE_XE_GETTARGETSTATE'',
	N''PWAIT_ALL_COMPONENTS_INITIALIZED'',
	N''PWAIT_DIRECTLOGCONSUMER_GETNEXT'',
	N''QDS_PERSIST_TASK_MAIN_LOOP_SLEEP'', N''QDS_ASYNC_QUEUE'',
	N''QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP'',
	N''QDS_SHUTDOWN_QUEUE'',
	N''REQUEST_FOR_DEADLOCK_SEARCH'', N''RESOURCE_QUEUE'',
	N''SERVER_IDLE_CHECK'', N''SLEEP_BPOOL_FLUSH'',
	N''SLEEP_DBSTARTUP'', N''SLEEP_DCOMSTARTUP'',
	N''SLEEP_MASTERDBREADY'', N''SLEEP_MASTERMDREADY'',
	N''SLEEP_MASTERUPGRADED'', N''SLEEP_MSDBSTARTUP'',
	N''SLEEP_SYSTEMTASK'', N''SLEEP_TASK'',
	N''SLEEP_TEMPDBSTARTUP'', N''SNI_HTTP_ACCEPT'',
	N''SP_SERVER_DIAGNOSTICS_SLEEP'', N''SQLTRACE_BUFFER_FLUSH'',
	N''SQLTRACE_INCREMENTAL_FLUSH_SLEEP'',
	N''SQLTRACE_WAIT_ENTRIES'', N''WAIT_FOR_RESULTS'',
	N''WAITFOR'', N''WAITFOR_TASKSHUTDOWN'',
	N''WAIT_XTP_RECOVERY'',
	N''WAIT_XTP_HOST_WAIT'', N''WAIT_XTP_OFFLINE_CKPT_NEW_LOG'',
	N''WAIT_XTP_CKPT_CLOSE'', N''XE_DISPATCHER_JOIN'',
	N''XE_DISPATCHER_WAIT'', N''XE_TIMER_EVENT'')
AND	[waiting_tasks_count] > 0

', 
		@database_name=@dbname,
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Every 15 mins for a week', 
		@enabled=1, 
		@freq_type=4, 
		@freq_interval=1, 
		@freq_subday_type=4, 
		@freq_subday_interval=15, 
		@freq_relative_interval=0, 
		@freq_recurrence_factor=0, 
		@active_start_date=@today, 
		@active_end_date=@nextweek, 
		@active_start_time=100, 
		@active_end_time=235959, 
		@schedule_uid=N'5b0842fe-8f80-44e9-8a09-aac6ce5c2b2e'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO
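
Once it has been running for a while, something like this (my addition, not part of the job) turns the samples into a ranked list of waits …

--TopWaits.sql
SELECT TOP 10
	WaitType,
	NumberOfWaits,
	ResourceWaitTime,
	SignalWaitTime
FROM [dbo].[WaitStats]
WHERE SampleTime = (SELECT MAX(SampleTime) FROM [dbo].[WaitStats])
ORDER BY ResourceWaitTime DESC; -- cumulative since the last service restart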

My Hekaton template

--hek_template.sql

-- 1) add filegroup (if not already there)

USE [master]
GO
begin try
ALTER DATABASE [DemoDW] ADD FILEGROUP [MemoryOptimizedFG] CONTAINS MEMORY_OPTIMIZED_DATA
end try begin catch end catch

-- 2) add filestream file into filegroup (unless already done)

USE [master]
GO
begin try
ALTER DATABASE [DemoDW] ADD FILE ( NAME = N'DemoDB_hek', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\DemoDB_hek' ) TO FILEGROUP [MemoryOptimizedFG]
end try begin catch end catch

-- 3) remove table if it already exists

USE [DemoDW]
GO
begin try
drop table recentsales
end try begin catch end catch

-- 4) create memory optimized table (hekaton!)

USE [DemoDW]
GO
create table recentsales
(
ItemID int NOT NULL PRIMARY KEY NONCLUSTERED HASH (ItemID) with (BUCKET_COUNT=1024),
[Name] varchar(100) NOT NULL, -- NB: always give varchar a length (bare varchar means varchar(1))
Price decimal(18,2) NOT NULL -- NB: bare decimal defaults to decimal(18,0)
)
with (MEMORY_OPTIMIZED = ON,
DURABILITY = SCHEMA_AND_DATA
)
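
A quick smoke test for the table above – ordinary (interop) T-SQL works fine against memory-optimized tables …

USE [DemoDW]
GO
insert into recentsales values (1, 'Widget', 9.99), (2, 'Grommet', 4.50)
select * from recentsales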

More thoughts against Triggers

For Rob & Karl – to be fair, triggers do run inside the firing statement’s transaction, so an insert that is rolled back takes the trigger’s changes with it. The real trouble lies elsewhere …

Triggers introduce a long-term maintenance headache. You can read a stored-procedure from top to bottom and imagine you understand what it does. But unless you examine every table it touches – you don’t. Little bits of code may be running silently which augment or even reverse some of the logic within the stored-procedure.

Triggers are used by lazy developers to ‘bolt on’ new features to applications, rather than track-down all the code that could insert/update/delete from a table and add the code (or a link to it) there.

This would be forgivable if the application code was closed or proprietary, but never when the application code is open to the application developer, who just cannot be bothered to integrate code changes properly, and cares not-a-jot about long-term maintenance headaches (slow breaths, slow breaths :))

SQL Deadlock graph – arrows

In a SQL deadlock graph the direction of the arrows is an interesting thing.

[deadlock graph image]

With my mechanistic head on, I am imagining it as …

  1. Spid-a requested a lock, and then got a lock (a two-way trip)
  2. Spid-b requested a lock, and then got a lock (arrow ends-up pointing at spid)
  3. Spid-a requested a lock, and is waiting (a one-way thing)
  4. Spid-b requested a lock, and is waiting (arrow pointing away from spid)

Capture Deadlock Graph using Profiler

To capture a Deadlock Graph using Profiler (NB: with SQL 2008 and above you can also use an extended event – see the sketch after this list) …

  • File / New trace
  • Connection details
  • Use Template / Blank
  • Events Selection / Locks ..1) DeadlockGraph 2) Lock:Deadlock 3) Lock:Deadlock Chain
  • Event Extraction Settings / Save Deadlock XML events separately / (somefilename)
  • Each deadlock in a distinct file
  • All Deadlocks
  • Run
  • (wait)
  • File / Export / Extract SQL Server Events / Extract deadlock Events / (somefilename2)
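
In fact, on SQL 2008 and above the default system_health extended-events session has already been collecting deadlock reports – a sketch to read them back out of its ring buffer …

--system_health_deadlocks.sql
SELECT XEventData.XEvent.query('.') AS DeadlockReport
FROM (SELECT CAST(target_data AS XML) AS TargetData
      FROM sys.dm_xe_sessions s
      JOIN sys.dm_xe_session_targets st ON s.[address] = st.event_session_address
      WHERE s.[name] = 'system_health' AND st.target_name = 'ring_buffer') t
CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent);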

Check every Linked Server

I was unable to cobble together some Powershell code that I could execute within a job-step to check our linked-servers were working.

So I resorted to making the best of the built-in, but flawed, “sp_testlinkedserver” (it’s flawed because when a link fails it errors out – slowly).

The code below, when run in a job-step overnight, will dynamically create one job for each linked-server on the box. The job(s) will then run, email the “DBA” operator about every linked-server that fails, and delete themselves.

-- testlinkedservers.sql

-- get list of all linked servers on this box

	CREATE TABLE #temp (
		srv_name varchar(MAX), 
		srv_providername varchar(MAX), 
		srv_product varchar(MAX), 
		srv_datasource varchar(MAX), 
		srv_providerstring varchar(MAX), 
		srv_location varchar(MAX), 
		srv_cat varchar(MAX))
	INSERT INTO #temp EXEC sp_linkedservers
	DELETE FROM #temp WHERE srv_name LIKE 'LOGSHIP%'
	DELETE FROM #temp WHERE srv_name = @@SERVERNAME

-- loop

	DECLARE @name VARCHAR(MAX), @cmd VARCHAR(MAX), @run VARCHAR(MAX)
	WHILE (SELECT COUNT(*) FROM #temp) > 0
	BEGIN
	SELECT TOP 1 @name = srv_name FROM #temp

	-- create the job code

	SET @cmd = 'BEGIN TRANSACTION
	DECLARE @jobId BINARY(16)
	SELECT @jobId = job_id FROM msdb.dbo.sysjobs WHERE (name = N''DBA - LinkedServerTest ' + @name + ''')
	IF (@jobId IS NULL)
	BEGIN
	EXEC msdb.dbo.sp_add_job @job_name=N''DBA - LinkedServerTest ' + @name + ''', 
		@enabled=1, 
		@notify_level_eventlog=2, 
		@notify_level_email=2, 
		@notify_level_netsend=0, 
		@notify_level_page=0, 
		@delete_level=3, 
		@description=N''No description available.'', 
		@category_name=N''[Uncategorized (Local)]'', 
		@owner_login_name=N''sa'', 
		@notify_email_operator_name=N''DBA'', 
		@job_id = @jobId OUTPUT
	END

	-- create the job-step code

	IF NOT EXISTS (SELECT * FROM msdb.dbo.sysjobsteps WHERE job_id = @jobId AND step_id = 1)
	EXEC msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N''one'', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_success_step_id=0,
		@on_fail_action=2, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N''TSQL'', 
		@command=N''sp_testlinkedserver [' + @name + ']'', 
		@database_name=N''master'', 
		@flags=0;

	-- create instantiation code

	EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N''(local)''

	COMMIT TRANSACTION'

	-- create the job

	EXEC(@cmd)

	-- run the job

	SET @run = 'EXECUTE msdb.dbo.sp_start_job ''DBA - LinkedServerTest ' + @name + ''''
	EXEC(@run)

        -- move to next row in loop

	DELETE FROM #temp WHERE srv_name = @name
	END


Move Databases

I wanted to move about 50 databases on a Sharepoint server off the C-drive
(yes I know).

Sadly the only place I could move both datafiles and logfiles to was the D-Drive
(I know, I know).

Here’s the code I wrote to help me …

--move_db.sql

-- to move a user-database to different-drives on the same-server

USE [master]
GO

-- backup

	DECLARE @dbname VARCHAR(max) = 'SomeDatabaseName' -- database name

	DECLARE @backup_cmd VARCHAR(MAX) = 'BACKUP DATABASE ['+ @dbname + ']
	TO DISK = N''\\SomeNetworkShare\SomeServerName_' + @dbname + '.bak''
	WITH INIT, COMPRESSION, STATS = 1;'

	SELECT (@backup_cmd)
	--EXEC (@backup_cmd)


-- kill connections

	DECLARE @kill_cmd VARCHAR(MAX) = 'DECLARE @kill varchar(8000) = '''';
	SELECT @kill=@kill+''kill ''+convert(varchar(5),spid)+'';'' from master..sysprocesses 
	WHERE dbid=db_id(''' + @dbname + ''') and spid>50;
	EXEC (@kill);'

	SELECT (@kill_cmd)
	--EXEC (@kill_cmd)

-- restore

	DECLARE @restore_cmd VARCHAR(MAX) = 'RESTORE DATABASE [' + @dbname + ']
	FROM DISK = N''\\SomeNetworkShare\SomeServerName_' + @dbname + '.bak''
	WITH FILE = 1,  
	MOVE N''' + @dbname + ''' TO N''D:\SQL_Data\' + @dbname + '.mdf'',
	MOVE N''' + @dbname + '_log'' TO N''D:\SQL_Log\' + @dbname + '_log.ldf'',
	REPLACE,  STATS = 1;'

	SELECT (@restore_cmd)
	--EXEC (@restore_cmd)
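
(NB: The MOVE clauses assume the logical file names are ‘SomeDatabaseName’ and ‘SomeDatabaseName_log’. Worth checking first with something like …)

-- list logical file names before trusting the MOVE clauses
SELECT [name] LogicalName, physical_name, type_desc
FROM sys.master_files
WHERE database_id = DB_ID('SomeDatabaseName');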

sp_whoisactive

I tend to paste this (below) into SSMS keyboard shortcuts under Ctrl+F1 so I can instantly see what’s going on. 

EXEC [master].[dbo].[sp_WhoIsActive] @get_outer_command=1, @get_plans=1, @get_additional_info = 1, @get_task_info = 2, @output_column_list = '[dd%][session_id][block%][sql_text][login_name][CPU%][wait_info][tasks][tran_log%][database%][Program%][percent%][host%][reads%][writes%][sql_command][query_plan][locks][%]'

When it’s too slow 🙂 I use a “lite” version under Ctrl+3

EXEC [master].[dbo].[sp_WhoIsActive] @get_additional_info = 1, @output_column_list = '[dd%][session_id][block%][sql_text][sql_command][login_name][CPU%][wait_info][tasks][tran_log%][database%][Program%][percent%][host%][reads%][writes%][query_plan][locks][%]'

Who deleted that data?

Sadly I could not find out.

Going forward – to capture deletes on a table I set up a regular ‘after delete’ trigger with some extra columns to hold system functions.

This allowed me to capture the date/time, PC-Name and login that originated deletes. Here is my working lab …

--lab_trigger_deletes.sql

--create table to be monitored and add some data
	CREATE TABLE t1 (c1 INT, c2 int)
	INSERT INTO t1 VALUES (1,7), (2,8), (3,9)

-- create audit table
	CREATE TABLE t1_audit (c1 INT, c2 INT, c3 DATETIME, c4 SYSNAME, c5 SYSNAME, c6 SYSNAME)

-- check contents of both tables
	SELECT * from t1
	SELECT * FROM t1_audit

-- create trigger
	CREATE TRIGGER trg_ItemDelete 
	ON dbo.t1 
	AFTER DELETE 
	AS
	INSERT INTO dbo.t1_audit(c1, c2, c3, c4, c5, c6)
			SELECT d.c1, d.c2, GETDATE(), HOST_NAME(), SUSER_SNAME(), ORIGINAL_LOGIN()
			FROM Deleted d

-- delete a row (firing the trigger)
	DELETE FROM t1 WHERE c1 = 2

-- check contents of both tables again
	SELECT * from t1
	SELECT * FROM t1_audit

-- tidy up
	IF OBJECT_ID ('trg_ItemDelete', 'TR') IS NOT NULL DROP TRIGGER trg_ItemDelete;
   	drop TABLE t1
	drop TABLE t1_audit

Log Space

I noticed a jump in logfile size the other day and was wondering how to predict an autogrowth event.

I know old data is truncated after a log-backup, but that’s internal and not normally visible.

I came up with this to run across production …

--LogSpace.sql
-- To help find near-full logfiles that may autogrow soon.

-- create table to hold raw data
CREATE TABLE #temp (DBName varchar(100), SizeMB int, UsedPct float, [STATUS] bit)

-- populate table
INSERT #temp EXEC('DBCC SQLPERF(logspace)')

-- output
SELECT DBName, SizeMB, UsedPct FROM #temp --WHERE UsedPct > 90 -- 90% full

-- clean-up
DROP TABLE #temp
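
(On SQL 2012 and above there is also a DMV for this – per database, so run the sketch below while connected to the database of interest …)

-- log size and percent used for the current database
SELECT total_log_size_in_bytes/1024/1024 AS SizeMB,
       used_log_space_in_percent AS UsedPct
FROM sys.dm_db_log_space_usage;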

SQL Safe error “Cannot connect to SQL Server instance”

This was fixed by re-installing SQL Safe. Bonus: here is a working restore command with move …

EXEC [master].[dbo].[xp_ss_restore] 
	@database = 'SomeDatabase',
	@filename = 'J:\backups\SomeDatabase.BAK', 
	@backuptype = 'Full',
	@withmove = 'SomeDatabase_data "J:\sql_data\SomeDatabase_data.mdf"',
	@withmove = 'SomeDatabase_log "J:\sql_log\SomeDatabase_log.ldf"',
	@recoverymode = 'recovery',
	@replace = '1';

Diff Backup / Restore (part 1)

Although Ola’s Backup solution works great …
https://ola.hallengren.com/sql-server-backup.html

For this project I needed a corresponding Restore procedure, so I could set up nightly Diff Backup / Restores (from Prod to Reporting), without having to write too much code 🙂

I modified the code from here for our environment (see below).
http://jason-carter.net/professional/restore-script-from-backup-directory-modified.html

In my next posts I will detail the SQL Jobs for this project.

USE [master]
GO

CREATE PROCEDURE [dbo].[DatabaseRestore]
		@dbName sysname,
		@SourceServer NVARCHAR(500),
		@backupPath NVARCHAR(500)

AS

/* To restore backups created from ola.hallengren's backup solution (RS) */

	SET NOCOUNT ON
	DECLARE @cmd NVARCHAR(500),
			@lastFullBackup NVARCHAR(500),
			@lastDiffBackup NVARCHAR(500),
			@backupFile NVARCHAR(500)

	DECLARE @fileList TABLE (backupFile NVARCHAR(255))
	DECLARE @directoryList TABLE (backupFile NVARCHAR(255))

/* Kill any connections */

	DECLARE @kill VARCHAR(8000) = '';
	SELECT  @kill = @kill + 'kill ' + CONVERT(VARCHAR(5), spid) + ';'
		FROM [master].[dbo].[sysprocesses]
		WHERE dbid = DB_ID(@dbName)
		AND spid > 50;
	EXEC (@kill);

/* Match that of Olas output */

	SET @backupPath = @backupPath + '\' + @SourceServer + '\' + @dbName + '\'

/* Get List of Files */

	SET @cmd = 'DIR /s /b /O D ' + @backupPath
	IF (SELECT value_in_use
		FROM sys.configurations
		WHERE name = 'xp_cmdshell') = 0
	BEGIN /* cmd shell is disabled */
		EXEC sp_configure 'show advanced options', 1 RECONFIGURE
		EXEC sp_configure 'xp_cmdshell', 1 RECONFIGURE
		INSERT INTO @fileList(backupFile) EXEC master.sys.xp_cmdshell @cmd
		EXEC sp_configure 'xp_cmdshell', 0 RECONFIGURE
		EXEC sp_configure 'show advanced options', 0 RECONFIGURE
	END
	ELSE /* cmd shell is enabled */
		INSERT INTO @fileList(backupFile) EXEC master.sys.xp_cmdshell @cmd

/* Find latest full backup */

	SELECT @lastFullBackup = MAX(backupFile) 
	FROM @fileList 
	WHERE backupFile LIKE '%' + @SourceServer + '_' + @dbName + '_FULL_%.bak'

	SET @cmd = 'RESTORE DATABASE [' + @dbName + '] FROM DISK = ''' 
	   + @lastFullBackup + ''' WITH NORECOVERY, REPLACE'
	SELECT (@cmd); EXEC (@cmd)

/* Find latest diff backup */

	SELECT @lastDiffBackup = MAX(backupFile)
	FROM @fileList 
	WHERE backupFile  LIKE '%' + @SourceServer + '_' + @dbName + '_DIFF_%.bak'
	AND RIGHT(backupfile, 19) > RIGHT(@lastFullBackup, 19)

/* check to make sure there is a diff backup */

	IF @lastDiffBackup IS NOT NULL
		BEGIN
		SET @cmd = 'RESTORE DATABASE [' + @dbName + '] FROM DISK = ''' 
			   + @lastDiffBackup + ''' WITH NORECOVERY'
		SELECT (@cmd); EXEC (@cmd)
		SET @lastFullBackup = @lastDiffBackup
		END

--/* check for log backups */

--	DECLARE backupFiles CURSOR FOR 
--	SELECT backupFile 
--	FROM @fileList
--	WHERE backupFile LIKE  '%' + @SourceServer + '_' + @dbName + '_LOG_%.trn'
--	AND RIGHT(backupfile, 19) > RIGHT(@lastFullBackup, 19)

--	OPEN backupFiles 

--/* Loop through all the files for the database */

--	FETCH NEXT FROM backupFiles INTO @backupFile 

--	WHILE @@FETCH_STATUS = 0 
--		BEGIN 
--		   SET @cmd = 'RESTORE LOG [' + @dbName + '] FROM DISK = ''' 
--			   + @backupFile + ''' WITH NORECOVERY'
--		   SELECT (@cmd); EXEC (@cmd)
--		   FETCH NEXT FROM backupFiles INTO @backupFile 
--		END

--	CLOSE backupFiles 
--	DEALLOCATE backupFiles 

/* put database in a useable state */

	SET @cmd = 'RESTORE DATABASE [' + @dbName + '] WITH RECOVERY'
	SELECT (@cmd); EXEC (@cmd)

GO
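
Usage is then a one-liner per database – the names here are hypothetical …

EXEC [master].[dbo].[DatabaseRestore]
	@dbName = N'SomeDatabase',
	@SourceServer = N'PRODSERVER',
	@backupPath = N'\\SomeNetworkShare\SQLBackups';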

https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-2/

https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-3/

sysutility_get_views_data_into_cache_tables

Having uninstalled UCP and reinstalled MDW I found the MDW job “sysutility_get_views_data_into_cache_tables” failing at step-3 with an error message about invalid synonyms.

The fix was to re-create the MSDB synonyms, scripted from a clean SQL 2012 server.

Namely …

CREATE SYNONYM [dbo].[syn_sysutility_ucp_databases] 
FOR [msdb].[dbo].[sysutility_ucp_databases_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_filegroups] 
FOR [msdb].[dbo].[sysutility_ucp_filegroups_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_dacs] 
FOR [msdb].[dbo].[sysutility_ucp_dacs_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_smo_servers] 
FOR [msdb].[dbo].[sysutility_ucp_smo_servers_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_volumes] 
FOR [msdb].[dbo].[sysutility_ucp_volumes_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_computers] 
FOR [msdb].[dbo].[sysutility_ucp_computers_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_logfiles] 
FOR [msdb].[dbo].[sysutility_ucp_logfiles_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_datafiles] 
FOR [msdb].[dbo].[sysutility_ucp_datafiles_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_space_utilization] 
FOR [msdb].[dbo].[sysutility_ucp_space_utilization_stub]

CREATE SYNONYM [dbo].[syn_sysutility_ucp_cpu_utilization] 
FOR [msdb].[dbo].[sysutility_ucp_cpu_utilization_stub]

SSMS on Windows Server 2012

To start SSMS on Windows Server 2012 R2 Standard I …

1) Clicked in the bottom-left corner of the screen – which brought up a bunch of blue and green boxes.

2) Clicked on the little down-arrow in a circle near the bottom-left of the screen.

2b) If not visible I clicked a blank part of the screen – outside the boxes.

3) This is like an ‘All Programs’ screen.

4) Find and click “SQL Server Management Studio”.

Differential backup / restore using SQL Safe

I wanted to improve a backup / restore sql-job that populated a Reporting Server every night. I felt it would be quicker if it did a weekly Full backup and daily Diff backups.

The 3rd-party backup software was “SQL Safe Backup” from Idera.

I used this code on the Production Server …

--BackupDBA.sql

-- FULL Backup if Friday

	IF (SELECT DATEPART(dw, GETDATE())) = 6 -- NB: dw = 6 is Friday only with the default DATEFIRST of 7 (Sunday = 1)
	BEGIN
		EXEC [master].[dbo].[xp_ss_backup]
		@database = 'DBA',
		@filename = '\\ReportingServer\m$\DBA_Full.safe',
		@backuptype = 'Full',
		@overwrite = 1,
		@verify = 1;
	END;


-- DIFF Backup

	EXEC [master].[dbo].[xp_ss_backup]
	@database = 'DBA',
	@filename = '\\ReportingServer\m$\DBA_Diff.safe',
	@backuptype = 'Diff',
	@overwrite = 1,
	@verify = 1;

… and this code I scheduled a few hours later on the Reporting Server …

--RestoreDBA.sql

-- First, kill any connections

	DECLARE @kill VARCHAR(8000) = '';
	SELECT  @kill = @kill + 'kill ' + CONVERT(VARCHAR(5), spid) + ';'
	FROM    [master].[dbo].[sysprocesses]
	WHERE   dbid = DB_ID('DBA')
	AND spid > 50;
	EXEC (@kill);


-- Restore FULL

	EXEC [master].[dbo].[xp_ss_restore] 
	@database = 'DBA',
	@filename = 'M:\DBA_Full.safe', 
	@backuptype = 'Full',
	@recoverymode = 'norecovery', 
	@replace = '1';


-- Restore DIFF

	EXEC [master].[dbo].[xp_ss_restore] 
	@database = 'DBA',
	@filename = 'M:\DBA_Diff.safe', 
	@backuptype = 'Diff',
	@recoverymode = 'recovery', 
	@replace = '1';

Finally, I added a step on Reporting to repair the orphaned user-accounts …

USE [DBA]
GO

EXEC sp_change_users_login 'Auto_Fix', 'ReportingLogin';
EXEC sp_change_users_login 'Auto_Fix', 'LinkedServerLogin';
GO
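
(sp_change_users_login is deprecated – on SQL 2012 and above the same repair can be written as …)

USE [DBA]
GO
ALTER USER [ReportingLogin] WITH LOGIN = [ReportingLogin];
ALTER USER [LinkedServerLogin] WITH LOGIN = [LinkedServerLogin];
GO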

TempDB Autogrowth

I was shocked to find TempDB files were the most frequently autogrown in our environment – and changing them by hand soon got old.

Here’s my global script to set them to best-practice values. It fails safe if a specified size is less than the current size, and the try/catch swallows the error if a file doesn’t exist – though note that the first error skips any remaining statements in the block.

--tempdb_autogrowth.sql

-- report size and growth
select name, size*8/1024 [sizeMB], growth*8/1024 [growthMB]
from master.sys.master_files
where db_name(database_id) = 'tempdb'
order by physical_name


-- resize datafile(s) to 256MB & logfile to 128MB
USE [master]
GO
begin try
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev',  SIZE = 262144KB , FILEGROWTH = 262144KB );
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev2', SIZE = 262144KB , FILEGROWTH = 262144KB );
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev3', SIZE = 262144KB , FILEGROWTH = 262144KB );
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev4', SIZE = 262144KB , FILEGROWTH = 262144KB );
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'templog',  SIZE = 131072KB , FILEGROWTH = 131072KB );
end try begin catch end catch

Set Default Autogrowth

Here’s another one of my global changes. This one sets up ‘Model’, which is the template for all databases created in the future.

-- AutogrowthDefault.sql

-- first examine the current settings
select Name, size*8/1024 [SizeMB], case is_percent_growth 
	when 1 then convert(varchar(50), growth) + ' %' 
	when 0 then convert(varchar(50), growth*8/1024) + ' MB' end AutoGrowth
from master.sys.master_files
where db_name(database_id) = 'Model'
order by [type];

----set Initial-Size to best-practice
ALTER DATABASE [model] MODIFY FILE (NAME = N'modeldev', SIZE = 256MB);
ALTER DATABASE [model] MODIFY FILE (NAME = N'modellog', SIZE = 128MB);

---- set Autogrowth to best-practice
ALTER DATABASE [model] MODIFY FILE (NAME = N'modeldev', FILEGROWTH = 256MB);
ALTER DATABASE [model] MODIFY FILE (NAME = N'modellog', FILEGROWTH = 128MB);