Set every user's default schema to DBO where it's blank

In this quick script I am assuming the Windows domain is called ‘DOM’ …

-- ChangeDefaultSchemaToDboWhereNull.sql

DECLARE @cmd varchar(1000) 

SET @cmd = 
'USE ? IF DB_ID(''?'') > 4 SELECT ''USE ?; ALTER USER ['' + name + ''] WITH DEFAULT_SCHEMA = [dbo]''
 FROM sys.database_principals
 WHERE default_schema_name IS NULL
 AND [name] LIKE ''DOM\%'''

IF OBJECT_ID('tempdb..#output') IS NOT NULL DROP TABLE #output
CREATE TABLE #output
(command varchar(1000))

INSERT INTO #output
EXEC sp_MSforeachdb @cmd

SELECT * 
FROM #output
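
(The output is a list of generated ALTER USER commands – I paste them into a new query window, sanity-check them, and run them.)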


Searching every Procedure for a string

To search every stored procedure in every database on every server (ranging from SQL Server 2005 to SQL Server 2016) for the string ‘QueryTraceOn’, I first registered every server within SSMS.

Right-clicking on the registered server folder, I chose ‘new query’ and ran ‘select 1’ to exclude from my list any server with issues.

Once I had an error-free list, I ran this code (which took around 40 minutes) …

-- SearchProcs4String.sql

EXEC sp_MSforeachdb 'use ?
SELECT db_name() [Database], ROUTINE_SCHEMA + ''.'' 
+ ROUTINE_NAME [Proc]
FROM INFORMATION_SCHEMA.ROUTINES WITH (NOLOCK)
WHERE ROUTINE_DEFINITION LIKE ''%QUERYTRACEON%'';'
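
NB: INFORMATION_SCHEMA.ROUTINES only exposes the first 4000 characters of each definition, so a match buried deep in a very long procedure can be missed. Here's a hedged alternative (SQL 2005 SP2 and above) that searches the full definition via sys.sql_modules …

-- SearchProcs4String_alt.sql

EXEC sp_MSforeachdb 'use ?
SELECT db_name() [Database], OBJECT_SCHEMA_NAME(object_id) + ''.''
+ OBJECT_NAME(object_id) [Object]
FROM sys.sql_modules WITH (NOLOCK)
WHERE definition LIKE ''%QUERYTRACEON%'';'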

TSQL Performance Rule #1

There’s no significance to this one being number one 🙂 it’s just the one I’ve been thinking about 🙂 I may have built this up a bit more than is warranted, so I hope you’re not expecting too much from my number one performance rule. Oh, OK then, here goes …

“All numbers in stored-procedures should be in single quotes”.

Even when the number is destined for an INT column, quoting it costs nothing; but an unquoted number compared against a VARCHAR column can force the whole column to be converted to INT.

Consider WHERE SomeColumn = 42. Data-type precedence means VARCHARs are always converted to INTs, never the other way around. That one numeric literal (42) could force a million rows in the column (“SomeColumn”) to be converted to INT before they can be tested – significantly affecting performance.

Consider WHERE SomeColumn = ’42’. “SomeColumn” is either numeric or non-numeric. If it’s INT then just one value (the ’42’ in the WHERE clause) has to be converted to INT, taking no time at all. If “SomeColumn” is VARCHAR then there is no conversion at all.
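
Here's a minimal repro sketch (the table and data are made up) – compare the two execution plans …

-- ImplicitConversionRepro.sql

CREATE TABLE dbo.ConvTest (SomeColumn VARCHAR(10) NOT NULL)
INSERT INTO dbo.ConvTest VALUES ('42'), ('123')

-- unquoted: every row of SomeColumn is converted to INT (look for CONVERT_IMPLICIT in the plan)
SELECT * FROM dbo.ConvTest WHERE SomeColumn = 42

-- quoted: a straight VARCHAR comparison, no conversion
SELECT * FROM dbo.ConvTest WHERE SomeColumn = '42'

DROP TABLE dbo.ConvTest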

More thoughts against Triggers

For Rob & Karl – Triggers run outside of transactions. An insert that fires a trigger may be rolled back, but the trigger rolls on.

Triggers introduce a long-term maintenance headache. You can read a stored-procedure from top to bottom and imagine you understand what it does. But unless you examine every table it touches – you don’t. Little bits of code may be running silently which augment or even reverse some of the logic within the stored-procedure.

Triggers are used by lazy developers to ‘bolt on’ new features to applications, rather than track down all the code that could insert/update/delete from a table and add the code (or a link to it) there.

This would be forgivable if the application code was closed or proprietary, but never when the application is open to the application developer, who just cannot be bothered to integrate code changes properly, and cares not a jot about long-term maintenance headaches (slow breaths, slow breaths :))

SQL Deadlock graph – arrows

In a SQL deadlock graph the direction of the arrows is an interesting thing.

(deadlock graph screenshot)

With my mechanistic head on, I am imagining it as …

  1. Spid-a requested a lock, and then got a lock (a two-way trip)
  2. Spid-b requested a lock, and then got a lock (arrow ends-up pointing at spid)
  3. Spid-a requested a lock, and is waiting (a one-way thing)
  4. Spid-b requested a lock, and is waiting (arrow pointing away from spid)

Capture Deadlock Graph using Profiler

To capture a Deadlock Graph using Profiler (NB: with SQL 2008 and above you can also use an extended-event – see the sketch after this list) …

  • File / New trace
  • Connection details
  • Use Template / Blank
  • Events Selection / Locks ..1) DeadlockGraph 2) Lock:Deadlock 3) Lock:Deadlock Chain
  • Event Extraction Settings / Save Deadlock XML events separately / (somefilename)
  • Each deadlock in a distinct file
  • All Deadlocks
  • Run
  • (wait)
  • File / Export / Extract SQL Server Events / Extract deadlock Events / (somefilename2)
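
And for the extended-event route (SQL 2008 and above), here's a hedged sketch that pulls any captured deadlock graphs back out of the built-in system_health session's ring-buffer …

-- DeadlocksFromSystemHealth.sql

SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph -- on some versions the node is deadlock-list rather than deadlock
FROM (SELECT CAST(st.target_data AS xml) TargetData
      FROM sys.dm_xe_session_targets st
      JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
      WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer') tab
CROSS APPLY tab.TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') XEventData(XEvent)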

sp_whoisactive

I tend to paste this (below) into SSMS keyboard shortcuts under Ctrl+F1 so I can instantly see what’s going on. 

EXEC [master].[dbo].[sp_WhoIsActive] @get_outer_command=1, @get_plans=1, @get_additional_info = 1, @get_task_info = 2, @output_column_list = '[dd%][session_id][block%][sql_text][login_name][CPU%][wait_info][tasks][tran_log%][database%][Program%][percent%][host%][reads%][writes%][sql_command][query_plan][locks][%]'

When it’s too slow 🙂 I use a “lite” version under Ctrl+3

EXEC [master].[dbo].[sp_WhoIsActive] @get_additional_info = 1, @output_column_list = '[dd%][session_id][block%][sql_text][sql_command][login_name][CPU%][wait_info][tasks][tran_log%][database%][Program%][percent%][host%][reads%][writes%][query_plan][locks][%]'

Who deleted that data?

Sadly I could not find out.

Going forward – to capture deletes on a table I set up a regular ‘after delete’ trigger with some extra columns to hold system functions.

This allowed me to capture the date/time, PC-Name and login that originated deletes. Here is my working lab …

--lab_trigger_deletes.sql

--create table to be monitored and add some data
	CREATE TABLE t1 (c1 INT, c2 int)
	INSERT INTO t1 VALUES (1,7), (2,8), (3,9)

-- create audit table
	CREATE TABLE t1_audit (c1 INT, c2 INT, c3 DATETIME, c4 SYSNAME, c5 SYSNAME, c6 SYSNAME)

-- check contents of both tables
	SELECT * FROM t1
	SELECT * FROM t1_audit

-- create trigger
	CREATE TRIGGER trg_ItemDelete 
	ON dbo.t1 
	AFTER DELETE 
	AS
	INSERT INTO dbo.t1_audit(c1, c2, c3, c4, c5, c6)
			SELECT d.c1, d.c2, GETDATE(), HOST_NAME(), SUSER_SNAME(), ORIGINAL_LOGIN()
			FROM Deleted d

-- delete a row (firing the trigger)
	DELETE FROM t1 WHERE c1 = 2

-- check contents of both tables again
	SELECT * FROM t1
	SELECT * FROM t1_audit

-- tidy up
	IF OBJECT_ID ('trg_ItemDelete', 'TR') IS NOT NULL DROP TRIGGER trg_ItemDelete;
	DROP TABLE t1
	DROP TABLE t1_audit

Set Default Autogrowth

Here’s another one of my global changes. This one sets up ‘model’, which is the template for all databases created in the future.

-- AutogrowthDefault.sql

-- first examine the current settings
select Name, size*8/1024 [SizeMB], case is_percent_growth 
	when 1 then convert(varchar(50), growth) + ' %' 
	when 0 then convert(varchar(50), growth*8/1024) + ' MB' end AutoGrowth
from master.sys.master_files
where db_name(database_id) = 'Model'
order by [type];

----set Initial-Size to best-practice
ALTER DATABASE [model] MODIFY FILE (NAME = N'modeldev', SIZE = 256MB);
ALTER DATABASE [model] MODIFY FILE (NAME = N'modellog', SIZE = 128MB);

---- set Autogrowth to best-practice
ALTER DATABASE [model] MODIFY FILE (NAME = N'modeldev', FILEGROWTH = 256MB);
ALTER DATABASE [model] MODIFY FILE (NAME = N'modellog', FILEGROWTH = 128MB);

Bulk import into Arcserve UDP

To bulk import my SQL Servers into Arcserve I ran this query on all my production servers via Registered Servers

--arc_import.sql
select cast(serverproperty('ComputerNamePhysicalNetBIOS') as varchar(100)) + ',DOM\svc-arcserve,password'

I pasted the column into a text file saved to my desktop. Then in Arcserve’s main console I chose to Add a Node, then Import from a File. Lastly, I navigated to the text document and imported them all.

NOTE 1: You have to put the right domain, service-account, and password into the script.
NOTE 2: With clusters the script will only get the name of the current active-node.

POST SCRIPT
by the way: that query won’t get the server-name for SQL 2000 boxes. This one will get them all …

-- physical server names
exec master..xp_regread 'HKEY_LOCAL_Machine',
'SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName\','ComputerName'

Pruning SQL 2005 backup history

When trying to prune the backup history of a SQL 2005 server (SP3) I kept getting this error …

Msg 547, Level 16, State 0, Line 1
The DELETE statement conflicted with the REFERENCE constraint “FK__backupfil__backu__793DFFAF”.
The conflict occurred in database “msdb”, table “dbo.backupfilegroup”, column ‘backup_set_id’.

The FIX was to drop the constraints, do the dirty deed, then re-create the constraints.

Here’s the code I used …

--PruneBackupHistory_2005.sql

-- remove foreign keys
use msdb
go
--ALTER TABLE [restorefile] drop CONSTRAINT FK__restorefi__resto__00DF2177;
ALTER TABLE [restorefilegroup] drop CONSTRAINT FK__restorefi__resto__02C769E9;
ALTER TABLE [backupmediafamily] drop CONSTRAINT FK__backupmed__media__72910220;
ALTER TABLE [backupset] drop CONSTRAINT FK__backupset__media__76619304;
ALTER TABLE [backupfilegroup] drop CONSTRAINT FK__backupfil__backu__793DFFAF;
ALTER TABLE [backupfile] drop CONSTRAINT FK__backupfil__backu__7C1A6C5A;
ALTER TABLE [restorehistory] drop CONSTRAINT FK__restorehi__backu__7EF6D905;
go

-- empty tables
truncate table backupfile
truncate table backupfilegroup
truncate table backupmediafamily
truncate table backupmediaset
truncate table backupset
truncate table restorefile
truncate table restorefilegroup
truncate table restorehistory
go

-- replace foreign keys
--ALTER TABLE [restorefile] ADD CONSTRAINT FK__restorefi__resto__00DF2177 FOREIGN KEY (restore_history_id) REFERENCES restorehistory (restore_history_id);
ALTER TABLE [restorefilegroup] ADD CONSTRAINT FK__restorefi__resto__02C769E9 FOREIGN KEY (restore_history_id) REFERENCES restorehistory (restore_history_id);
ALTER TABLE [backupmediafamily] ADD CONSTRAINT FK__backupmed__media__72910220 FOREIGN KEY (media_set_id) REFERENCES backupmediaset (media_set_id);
ALTER TABLE [backupset] ADD CONSTRAINT FK__backupset__media__76619304 FOREIGN KEY (media_set_id) REFERENCES backupmediaset (media_set_id);
ALTER TABLE [backupfilegroup] ADD CONSTRAINT FK__backupfil__backu__793DFFAF FOREIGN KEY (backup_set_id) REFERENCES backupset (backup_set_id);
ALTER TABLE [backupfile] ADD CONSTRAINT FK__backupfil__backu__7C1A6C5A FOREIGN KEY (backup_set_id) REFERENCES backupset (backup_set_id);
ALTER TABLE [restorehistory] ADD CONSTRAINT FK__restorehi__backu__7EF6D905 FOREIGN KEY (backup_set_id) REFERENCES backupset (backup_set_id);
go
dbcc shrinkdatabase ('msdb')
go
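
For reference, the supported route is msdb’s sp_delete_backuphistory – though on an unpruned 2005 msdb it can run for hours, hence the above …

-- the supported alternative
DECLARE @cutoff datetime
SET @cutoff = GETDATE() - 30
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff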

It’s a bit of a sledge-hammer, but that’s just the sort of day I had.

Scripting out SQL Jobs

When I scripted out the “DROP and CREATE” options (on SQL2008R2) for a SQL job, I found it only worked once. The script drops the job by its GUID, and as the re-created job gets a new GUID, the next run’s drop fails.

To fix this I replaced [@job_id=N’blah blah blah’] in [sp_delete_job] with [@job_name=N’blah blah’], pasting in the actual job name from [sp_add_job].

I also modified the script to work with SQL 2000 and SQL 2005 as well, by …

– in [sp_delete_job], commenting out [–, @delete_unused_schedule=1]

– duplicating [sp_add_job], with the first copy preceded by [IF @@version LIKE ‘%2000%’; BEGIN] and without [@notify_email_operator_name]

– and the second copy preceded by [END; ELSE]

Which SQL Process is running which SQL Instance?

To easily see which SQL Process relates to which Instance of SQL Server – In Windows Task Manager / Processes
– right-click on one of the ‘sqlservr.exe’ names
– and select ‘Open file location’.

By examining the path to the file that opens I was able to distinguish the particular instance that this executable related to.
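
On SQL 2008 R2 SP1 and above there’s also a T-SQL route; a minimal sketch …

-- map each SQL Server service to its process-id and executable path
SELECT servicename, process_id, filename
FROM sys.dm_server_services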

To Empty a Database

To stop data that was ‘already there’ being imported twice, I needed to empty every table. It’s a very, very good idea to back up your databases before using this code. It empties a whole database in one go!

-- empty database
use SomeDatabaseName -- ** change to subject database first & *** CHECK SERVER NAME **
go
EXEC sp_MSForEachTable 'TRUNCATE TABLE [?]'
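
NB: TRUNCATE will fail on any table referenced by a foreign key. A hedged work-around sketch is to disable all constraints, DELETE (slower, fully logged), then re-enable them …

-- FK-safe version: disable constraints, delete, re-enable
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
EXEC sp_MSForEachTable 'DELETE FROM ?'
EXEC sp_MSForEachTable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'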

Untrusted Keys

Here’s my mod to Aaron Lowe’s @Vendoran script to get rid of untrusted foreign keys and constraints …

--fix_untrusted_keys.sql

SELECT 'ALTER TABLE [' + s.name + '].[' + o.name + '] WITH CHECK CHECK CONSTRAINT [' + i.name + ']' AS keyname
from sys.foreign_keys i
INNER JOIN sys.objects o ON i.parent_object_id = o.object_id
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE i.is_not_trusted = 1 AND i.is_not_for_replication = 0

Notification Test

Notification emails telling a DBA that something failed are all well and good, but if you don’t get any emails, how can you be sure the notification system is working?

The simplest solutions are often the best. I’ve taken to scheduling an 8am job (on every SQL Server that runs jobs) called ‘Notification Heartbeat’ that contains just an empty step. Its own notification configuration is set to email the ‘DBA’ operator every time it succeeds.
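
Here’s a minimal sketch of setting one up with msdb’s job procedures (the job, step, schedule, and operator names are my assumptions) …

-- notification_heartbeat.sql
USE msdb
EXEC dbo.sp_add_job
    @job_name = N'Notification Heartbeat',
    @notify_level_email = 1, -- 1 = email when the job succeeds
    @notify_email_operator_name = N'DBA'
EXEC dbo.sp_add_jobstep
    @job_name = N'Notification Heartbeat',
    @step_name = N'Empty step',
    @command = N'SELECT 1;'
EXEC dbo.sp_add_jobschedule
    @job_name = N'Notification Heartbeat',
    @name = N'Daily 8am',
    @freq_type = 4, -- daily
    @freq_interval = 1,
    @active_start_time = 80000 -- HHMMSS
EXEC dbo.sp_add_jobserver @job_name = N'Notification Heartbeat'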

Of course, I then have an Outlook rule that moves them (all with ‘Heartbeat’ in the subject) to the ‘DBA’ folder, so I don’t have to read them every day.

Now when my morning inbox is suspiciously empty – I have a way to verify that the Notification system was tested end-to-end at 8am.

NOTE: I used to try combining the notification-test with actual working Jobs like Backups, but this is simpler and the message value is clearer.

Is that database used?

Before an upgrade I wanted to remove any unused databases. To help find them I created a generic sql-job that every 10 mins would log connections.

I say ‘generic’ as the job needed to work on SQL2000, SQL2005, and SQL2012. So I created it using SQL-2000 enterprise-manager and scripted it out as ‘createJob_DbaAudit.sql’.

The script in job-step-1 (‘Build Table – if needed’) was …

select	@@ServerName AS Server, 
	db.[name] [Database],
	getdate() [TimeStamp], 
	Hostname, 
	Program_name, 
	Net_library, 
	Loginame
into master.dbo.tbl_connections
from master.dbo.sysdatabases db
join master.dbo.sysprocesses pr on db.dbid = pr.dbid
where db.dbid > 4

… this step was set to continue on to the next step regardless of whether it succeeded or failed.

Step-2 (‘Record Connections’) was similar …

insert into master.dbo.tbl_connections
select	@@ServerName AS Server, 
	db.[name] [Database],
	getdate() [TimeStamp], 
	Hostname, 
	Program_name, 
	Net_library, 
	Loginame
from master.dbo.sysdatabases db
join master.dbo.sysprocesses pr on db.dbid = pr.dbid
where db.dbid > 4

Final notes: To be robust and generic, I did not set up Notifications. Also, I added a few lines to the top of the ‘CreateJob_DbaAudit.sql’ script to delete the tbl_connections table from master if it already existed.
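
After the job had been running for a while, a simple summary query (a sketch) showed which databases never appeared …

-- summarise the captured connections
SELECT [Database], COUNT(*) [Connections], MAX([TimeStamp]) [LastSeen]
FROM master.dbo.tbl_connections
GROUP BY [Database]
ORDER BY [Connections]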

Uninstall SQL Server without using Add/Remove

The Add/Remove option on my work PC (Windows XP) was restricted. So I removed SQL Server 2005 by finding and running ‘ARPWrapper.exe’. This brings up a maintenance-type box which, after the first few screens, has a screen with two big square buttons; the lower one is labelled ‘Remove SQL’.

In fact you have to run this executable to completion twice (with a reboot in between) – once for the instance and once for the shared components.

Once complete I deleted the Microsoft SQL Server folder from c:\program files\

UPDATE: on SQL 2008r2 the executable is ‘SetupARP.exe’

How to Start SQL Server Agent via Management Studio

I could not remote onto this customer’s database-server, so I looked at my local machines and found the ‘short name’ for the SQLAgent service was ‘SQLAgent$sql2014’.

I recognized the last bit (‘sql2014’) as my local instance, so was able to surmise this code would work for the customer (if their instance was called ‘SomeInstanceName’).

exec xp_cmdshell 'net start SQLAgent$SomeInstanceName'

** UPDATE **
When a colleague locked up a dev server I was able to free it by restarting the service like this …

xp_cmdshell 'net start' -- to get a list of services

xp_cmdshell 'net stop "SQL Server Agent (DEV2008r2)"' -- to stop the agent service in preparation for the next step
xp_cmdshell 'net stop "SQL Server (DEV2008r2)"' -- to stop the SQL Server service NOTE: double-quotes

xp_cmdshell 'net start "SQL Server (DEV2008r2)"'
xp_cmdshell 'net start "SQL Server Agent (DEV2008r2)"'

Execution Plan – notes

A Clustered Index Scan is similar to a Table Scan, ie: the data is being read row-by-row.

A SCAN happens when the optimizer determines that …

  1. all/most rows need to be returned (so it would be a waste of time reading the index keys),
  2. the index is not selective enough, and the optimizer thinks it needs to read all/most of the index,
  3. the index stats are found to be out-of-date, or
  4. the query applies functions to a column’s data, obscuring the column from the optimizer.

An (index) SCAN suggests that a normally-sufficient index is not sufficient at the moment – more data may be being returned than is (normally) needed – suggesting more filtering may be needed, perhaps in the WHERE clause.

A Clustered Index Seek is beneficial because the leaf level of a clustered index is the table itself, so the data can be read directly from the index without a further lookup.

Notepad++ and hidden characters

Notepad++ is great for finding patterns of text spread over several lines as you can include hidden characters in the search-string (like tab, new-line, etc).

But WATCH OUT!! If you leave a single new-line without a partner carriage-return, SQL scripts will misbehave intermittently.

The FIX – When you’ve completed whatever you’re doing – click Format / Convert to UNIX, followed by Format / Convert to Windows. Then save 🙂

Changing a Table Column to Identity

--AddIdentity.sql

/******* REMEMBER TO SCRIPT-OUT THE TABLE FIRST *******/

--1 delete the old column
Alter Table [dbo].[sometable] 
Drop Column [somecolumn]
Go

--2 if it fails ... drop the primary key (or whatever)
ALTER TABLE [dbo].[sometable]
DROP CONSTRAINT PK_sometable
GO

--3 create the new identity column
Alter Table [dbo].[sometable]
Add [somecolumn] Int Identity(1, 1)
Go

--4 restore the primary key (if dropped)
ALTER TABLE [dbo].[sometable]
ADD CONSTRAINT PK_sometable PRIMARY KEY NONCLUSTERED 
(
	[somecolumn] ASC
)

--5 lastly reset the identity seed
DECLARE @nextid INT;
SET @nextid = (SELECT isnull(MAX([somecolumn]),0) FROM [sometable]);
DBCC CHECKIDENT ('dbo.sometable', RESEED, @nextid);

A little slice of my dba day

A little slice of my dba life 🙂 … during morning-checks I noticed the MSDB database was 2.9GB and growing (in utility explorer). I ran the built-in SSMS report ‘Disk Usage by Table’ and found the table sysjobhistory was by far the largest (at 2.6GB).

I checked the sql-job that manages job history – there wasn't one :). I created sql-job 'Prune Backup History' with the command 'delete from msdb.dbo.sysjobhistory where run_date < convert(char(8), getdate()-30,112)', and scheduled it to run daily at 5am.

I ran it manually and was disappointed to find the table was still 2.4GB in size. I fired-up SSDT and created a new IS project. I dragged the 'Data profiling Task' to the design surface and configured it to output to \\server\share\msdbprofile. Within 'quickprofile' I connected to the server/database/table.

I saved the package, ran it (made a coffee), then opened the task again and clicked 'open profile viewer'.

Within 'Column Value Distribution Profiles' I found a disproportionate 74% of the run_date values were the same date (about 2 weeks ago). Double-clicking this result I could NOT see the name of the job (in the raw data) but recognised the step-name as belonging to the cdc job I had set-up … about two weeks ago, lol.

I concluded the sql-job was fine, and would dramatically shrink sysjobhistory – in a few weeks time. Cake time 🙂

Running Checktable in batches

To check for corruption on a large, busy database I was able to use my filegroup checking script on all but one filegroup.

To polish off the final (monster) filegroup, I resorted to creating a database-snapshot (in the morning) and running this (checktable) script during the day.

btw: Whilst I realise checking a database-snapshot is of limited use, I reasoned it was better than the only alternative (not checking).

Before leaving work I would review the results and drop the snapshot (for application optimization), in readiness to create a fresh one (and continue) the next day.

--CheckTablesInAFilegroup_vldb.sql

use TheSnapshotName
go

-- capture table-names from the named filegroup - once
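-- (first run: the try-block below builds the list and the catch-block is skipped;
-- subsequent runs: the SELECT INTO errors because the table already exists,
-- dropping execution into the catch-block to do the actual checking)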
begin try
     select o.name
     into master.dbo.fgtables
     from sys.indexes i
     join sys.filegroups f on i.data_space_id = f.data_space_id
     join sys.all_objects o on i.object_id = o.object_id
     where f.name = 'somefilegroupname'
     order by o.name
end try

-- check tables one-by-one
begin catch
     declare @sql varchar(200)
          while (select count(*) from master.dbo.fgtables) > 0
          begin
               select top 1 @sql = 'dbcc checktable ("dbo.' + name + '")' from master.dbo.fgtables order by name
               exec(@sql)
               delete from master.dbo.fgtables where name = (select top 1 name from master.dbo.fgtables order by name)
          end
end catch

Showing row counts with commas

I wanted to display a changing row-count (in a staging table) including commas (eg: 1,234 or 1,234,567).

I tried using STUFF() in a CASE statement to insert commas, then LEFT(), before settling on this, which is more succinct and scales from 1 to 1,999,999,999 …

select replace(convert(varchar(20), convert(money, count(*)), 1), '.00', '')
from dbo.sometable
with(nolock)

To explain how it works …

– the ‘1’ is a style of VARCHAR that includes commas when converting from MONEY
– then REPLACE removes the ‘.00’ also added during the conversion from MONEY
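
On SQL 2012 and above, FORMAT() is a more readable alternative (at some CPU cost) …

-- SQL 2012+ alternative
select format(count(*), 'N0')
from dbo.sometable
with(nolock)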

Converting Delphi datetime to SQL Server

--1of2 -- convert delphi datetime to sql

-- input a delphi float for conversion
declare @input float = 41492.9039269676 -- <<<<<<<<<<< user input

-- separate the date-part
declare @date int = @input

-- separate the time-part
declare @time float = (@input - @date)

-- display results
select @input 'delphi', cast(dateadd(dd, @date, '1899-12-30 00:00:00') + @time as datetime) [sql]


--2of2 -- convert sql datetime to delphi

-- input the sql datetime for conversion
declare @input2 datetime = '2013-08-06 21:41:39.290' -- <<<<<<<<<<<< user input

-- display results
select @input2 [sql], cast(@input2 as float)+2 'delphi'

Removing unwanted datafiles

Many people are too precious about scripts. Here’s my simple GUI-based method for removing unwanted datafiles.

1a) fire up the shrink gui (tasks / shrink / files)
1b) locate your target datafile
1c) run the 3 ‘shrink actions’ in order, one at a time (2nd one to ‘1’)

2a) fire up the Database properties files tab
2b) highlight the unwanted datafile, click ‘remove’, and then ‘ok’

Backup Status

Here’s my script that estimates when a running backup will finish …

--backup_status.sql
SELECT r.session_id,r.command,CONVERT(NUMERIC(6,2),r.percent_complete) AS [Percent Complete],
CONVERT(VARCHAR(20),DATEADD(ms,r.estimated_completion_time,GetDate()),20) AS [ETA Completion Time],
CONVERT(NUMERIC(10,2),r.total_elapsed_time/1000.0/60.0) AS [Elapsed Min],
CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0) AS [ETA Min],
CONVERT(NUMERIC(10,2),r.estimated_completion_time/1000.0/60.0/60.0) AS [ETA Hours],
CONVERT(VARCHAR(1000),(SELECT SUBSTRING(text,r.statement_start_offset/2,
CASE WHEN r.statement_end_offset = -1 THEN 1000 ELSE (r.statement_end_offset-r.statement_start_offset)/2 END)
FROM sys.dm_exec_sql_text(sql_handle))) [CommandText]
FROM sys.dm_exec_requests r WHERE r.session_id > 50
--AND command IN ('RESTORE DATABASE','BACKUP DATABASE', 'KILLED/ROLLBACK')
--AND CONVERT(NUMERIC(6,2),r.percent_complete) > 0

Try/Catch and Transactions

Try/Catch-blocks and Transaction-blocks really don’t play nicely together. Within nests, for example, an outer ROLLBACK TRANSACTION will roll back both inner and outer transaction-blocks, regardless of already-executed COMMIT TRANSACTION statements.

This is the most robust template I could come up with for nested Try/Catch-blocks that contain Transactions.

BEGIN TRY
    BEGIN TRY
        (inner DDL statements)
        BEGIN TRANSACTION
        (inner DML statements)
        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        (inner error capture statements)
        ROLLBACK TRANSACTION
    END CATCH

    (outer DDL statements)
    BEGIN TRANSACTION
    (outer DML statements)
    WHILE @@TRANCOUNT > 0 COMMIT TRANSACTION
END TRY
BEGIN CATCH
    (outer error capture statements)
    WHILE @@TRANCOUNT > 0 ROLLBACK TRANSACTION
END CATCH

Configuring DAC

There is another definition of DAC in SQL Server – but this one is about the emergency DBA connection method, used when the server is too busy to get onto any other way.

DAC can be a life-saver but has to be configured in advance. Luckily it’s a two-minute job, and I do it on all Production servers. Here’s how …

– Right-click on the server-name (in SSMS Object Explorer), and choose Facets.
– When the Facets box pops up, select Surface Area Configuration in the top box.
– In the lower box set RemoteDacEnabled to True, and save with OK.

** TO USE IT **

– startup SSMS and ensure nothing is connected.
– click on “New Query”
– in the “Connect to Server” box prefix the Server Name with “admin: ”
– complete Authentication and click “Connect”

You should now be connected within a minimal safe-mode type environment. That is, with a blank Object Explorer pane.
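
By the way, sqlcmd can make the same emergency connection from a command prompt using its -A switch (eg: sqlcmd -S SomeServerName -A, where the server name is a placeholder).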

** UPDATE 05 Mar 2015 **

To enable DAC on multiple servers (2005 and above) programmatically (via Registered Servers) use this …

Use master
GO
sp_configure 'remote admin connections', 1
GO
RECONFIGURE
GO

Check a whole server for database corruption

To check every database on a sql-2005 server for corruption (before an upgrade) I created a job with one step containing this code …

exec sp_msforeachdb 'DBCC CHECKDB(''?'')  WITH NO_INFOMSGS'

… then in the advanced tab I created an output file d:\checkdb.log.

A few hours after I’d manually started the job I was happy to see the output file was empty.

Using OUTPUT to simplify the auditing of data changes.

Here’s an example of using OUTPUT to simplify the recording of updates etc …

-- create the audit log
create table audit_table (fname varchar(50), colour varchar(50), 
old_tasty int, changed smalldatetime, new_tasty int)

-- create and populate the fruit table
create table fruit (fname varchar(50),colour varchar(50),tasty int)
insert into fruit values ('banana','yellow',6),('kiwi','green',3),('gooseberry','green',2)

-- change score to 'yuk' for any green fruits
update fruit set tasty = 1
output deleted.*, getdate(), inserted.tasty into audit_table
where colour = 'green'

-- check audit log
select * from audit_table

… and another nail in the coffin of Triggers 🙂

Log Shipping is simply the best!

For robust High Availability and/or Disaster Recovery projects Log-Shipping is my ALL TIME FAVORITE option.

Log-Shipping is the idea that you take your (already there) backups and restore them on any number of other SQL Servers. This means it works with all versions and all editions of SQL Server.

There are no overheads on the production server that is being protected and the standby server(s) are fully open to use between restores. For example you could schedule restores to a reporting-server out of hours.

Compared to the complexity, restrictions, overhead, and manageability of other, more fashionable HA/DR options, it’s a “no-brainer” for me.

For example: Mirroring is only one prod-server to one (read-only) standby server; Replication is high-maintenance and flattens referential-integrity; and AlwaysOn Availability Groups are difficult to set up correctly and near-impossible to troubleshoot.


** UPDATE ** It is best to log ship between the same versions of SQL Server.

SQL 2005 to SQL 2008, for example, will look like it’s worked, but all restores will be skipped *unless you use no-recovery mode*.

What’s a Workspace Spill?

nothing to do with coffee 🙂

In SQL Server memory internals, when the estimated memory needed to store the results of SORT or JOIN operations proves too small during execution, the system database TempDB is used as additional storage.

These Workspace Spills can happen when stats are out-of-date and data cardinality is miscalculated.
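
So refreshing statistics is often the first fix; a minimal sketch (the table name is a placeholder) …

-- rebuild stats on a suspect table so cardinality estimates (and memory grants) improve
UPDATE STATISTICS dbo.sometable WITH FULLSCAN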

Importing CSV-file produces NULL values

I had an issue where importing data from a CSV file resulted in NULLs in a column that should have contained bank sort-codes.

The CSV data in question was in the format ‘nn-nn-nn’ or ‘nn/nn/nn’.

Here’s the Select part of the import command …

SELECT sortcode
FROM OPENROWSET ('Microsoft.ACE.OLEDB.12.0',
		 'Text;Database=\someserver\somefolder;HDR=Yes;',
		 'SELECT * FROM [somefile.csv]')

I found adding IMEX=1 didn’t help.

The answer was to change the registry value …

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\14.0\Access Connectivity Engine\Engines\Text\ImportMixedTypes

from ‘Majority Type’ to ‘Text’.

(the majority of numbers in nn/nn/nn were setting this to numeric. Then on import the slashes made it look like dates lol)

** BEWARE – This is of course a global server setting – be ready to undo – BEWARE **

CoreFTP: “Can’t find site profile”

Further to my post about using CoreFTP to automate FTPS transfers … I had an issue where the SQL Job-step kept failing with “cannot find profile!”.

The issue was caused by my configuring a ‘site-profile’ within CoreFTP’s site-manager whilst logged in as myself. This was then stored in my registry settings, so when the job-step executed as the SQL service-account it did not have access.

The fix (I chose) was to change the default-storage setting from the registry to a file (View / Options / Data / lock / admin / use default configuration file / lock / admin), then set up the site profile again.

Script to examine SQL Server 2005/08 logins

Adapting my previous “Script to examine SQL Server 2000 logins” for an upcoming SQL-2005 audit 🙂 …

-- sp_logins_report.sql
use dba
go

begin try drop table #rolemember, #dbnames, #report END TRY
BEGIN CATCH END CATCH -- ignore errors

--variables
declare @counter int
declare @dbname varchar(50)
declare @sqlstr nvarchar(4000)

--make table to hold database, user-defined roles & user-names
create table #rolemember
(
rm_servername varchar(50) default @@servername,
rm_dbname varchar(1000),
rm_rolename varchar(1000),
rm_username varchar(1000),
rm_userid varchar(1000)
)

--make table to hold database names
create table #dbnames
(dbid int identity(1,1),
db_dbname varchar(50))

--make table to accumulate report
create table #report
(
re_servername varchar(50),
re_dbname varchar(1000),
re_rolename varchar(1000),
re_username varchar(1000),
re_userid varchar(1000)
)

--get members of each server role
insert into #rolemember (rm_rolename, rm_username, rm_userid)
exec dbo.sp_helpsrvrolemember

--get database names
insert into #dbnames (db_dbname)
select '[' + name + ']' from master.dbo.sysdatabases
where version > 0  -- online
set @counter = @@rowcount

--loop through databases to get members of database roles and user-defined roles
while @counter > 0
	begin

	--get database name from #dbnames table
	set @dbname = (select db_dbname from #dbnames where dbid = @counter)

	--get members of each database and user-defined role
	set @sqlstr = 'insert into #rolemember (rm_rolename, rm_username, rm_userid)
	exec ' + @dbname + '.dbo.sp_helprolemember'
	exec sp_executesql @sqlstr

	--update database name in rolemember table
	update #rolemember
	set rm_dbname = @dbname
	where rm_dbname is null

	set @counter = @counter - 1
	end

--put data into report table
insert into #report
select rm.* from #rolemember rm
left join #report re
on rm.rm_username = re.re_username
and rm.rm_dbname = re.re_dbname
and rm.rm_rolename = re.re_rolename
and rm.rm_servername = re.re_servername
where re.re_servername is null

--display report
select re_username UserName, re_rolename RoleName, re_dbname DBName, re_servername ServerName
from #report
where re_username != 'dbo'
--order by re_username --display by user
order by re_rolename --display by role
--order by re_dbname --display by database

Error displaying Standard-Reports

I installed the latest SP and the Performance-Dashboard download on my SQL 2005 production server so I could see some of the reports I’m used to in SQL 2008. I’m aware these are just front-ends to the dynamic management views, so they wouldn’t increase the server-load.

Unfortunately the very first report I wanted “Server Dashboard” came up with an error about the compatibility-mode setting …

“sql_handle” is not a recognized table hints option. If it is intended as a parameter to a table-valued function, ensure that your database compatibility mode is set to 90.

After much head-scratching the fix was to simply set my default database back to ‘Master’. lol.

Manage SQL job owners

Whilst taking over from the outgoing DBA I wrote this handy query to list the SQL-Jobs he owned …

--list_job_owners.sql
SELECT name job_name, SUSER_SNAME(owner_sid) job_owner
FROM msdb.dbo.SysJobs
ORDER BY name;

… and here’s the code to change them to ‘sa’ …

--change_job_owners.sql
EXEC msdb.dbo.sp_manage_jobs_by_login
    @action = N'REASSIGN',
    @current_owner_login_name = N'SomeLoginName',
    @new_owner_login_name = N'sa';

Biggest Tables

Here’s a handy script to list the sizes of all tables in the current database …

-- biggest_tables.sql

CREATE TABLE #temp (
table_name sysname ,
row_count INT,
reserved_size VARCHAR(50),
data_size VARCHAR(50),
index_size VARCHAR(50),
unused_size VARCHAR(50))

SET NOCOUNT ON
INSERT #temp
EXEC sp_msforeachtable 'sp_spaceused ''?'''

SELECT a.table_name,
a.row_count,
COUNT(*) AS col_count,
a.data_size
FROM #temp a
INNER JOIN information_schema.columns b
ON a.table_name collate database_default
= b.table_name collate database_default
GROUP BY a.table_name, a.row_count, a.data_size
order by a.row_count desc --row_count
--order by cast(COUNT(*) as int) desc --col_count
--ORDER BY CAST(REPLACE(a.data_size, ' KB', '') AS integer) DESC --data_size

DROP TABLE #temp

Orphaned users

Here’s my crib-sheet for finding and dealing with orphaned users …

--orphaned_users.sql

-- SOURCE = 
-- TARGET = 

-- step-1: list orphaned users in current db
EXEC sp_change_users_login 'Report';

-- step-2: map user to login - if they both exist
EXEC sp_change_users_login 'update_one', 'some_user', 'some_login';

-- step-3: copy login, then step-2, then step-1
-- (NOTE: execute this on the source, then execute the output on the target)
Select
'Create Login ' + QUOTENAME(A.name) 
+ ' With Password=' + CONVERT(varchar(max), A.password_hash, 1) + ' hashed'		--script out the password
+ ', DEFAULT_DATABASE=' + quotename(A.default_database_Name) --if you apply the script to a mirrored or log-shipped server, use the master database, as the user database is not in a usable state
+ ', DEFAULT_LANGUAGE=' + quotename(A.default_language_Name)
+ ', CHECK_POLICY=' + Case A.is_policy_checked when 0 then 'OFF' When 1 Then 'On' End 
+ ', CHECK_EXPIRATION=' + Case A.is_expiration_checked when 0 then 'OFF' When 1 Then 'On' End
+ ', SID=' + CONVERT(varchar(max), A.SID, 1)		--script out the SIDs
 As SQLLogin
From 
sys.sql_logins A
Where A.name Not like '##%##'  --remove those system generated sql logins
And A.sid != 0x01 --SA sid is always same
and QUOTENAME(A.name) = '[' + 'some_login' + ']'


-- Extra

-- list USERS (current database)
SELECT * FROM sys.database_principals where type in ('S', 'U') order by 1

-- list LOGINS (current server)
SELECT * FROM sys.server_principals where type in ('S', 'U') order by 1

Database sizes

Although I already have one – here’s a neat script to show all database sizes …

--database_sizes.sql
select db.name, round(sum(convert(bigint, mf.size)) * 8 /1024 /1000 ,0) [SizeGB]
from master.sys.master_files mf
inner join master.sys.databases db
on db.database_id = mf.database_id
where db.database_id > 4
group by db.name
--order by db.name
order by [SizeGB] desc

Scheduling CHECKDB

I modified my backup-all script to schedule a DBCC CHECKDB run on ‘most’ databases. Then I augmented it further – to quit if still running at 18:20 …

--Checkdb_most.sql
DECLARE @dbname VARCHAR(100)
declare @return int
set @return = 0

DECLARE db_cursor CURSOR FOR
SELECT name FROM master.dbo.sysdatabases
where dbid > 4 -- exclude system databases
and name not in ('somedatabasename')

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @dbname

WHILE (@@FETCH_STATUS = 0) AND (@return = 0)
BEGIN
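       -- hard-coded cut-off for that night's run (update the date/time before re-use)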
       if getdate() > '01 feb 2012 18:20' set @return = 1
       DBCC CHECKDB (@dbname) with PHYSICAL_ONLY
       FETCH NEXT FROM db_cursor INTO @dbname
END

CLOSE db_cursor
DEALLOCATE db_cursor