1) Create service accounts & add to local admin
2) Optimize power settings
3) Optimize TempDB datafiles (number, size, & T1118)
4) Set max memory
5) Set compressed backups
6) Raise 'cost threshold for parallelism' to 80
7) Enable 'optimize for ad hoc workloads'
8) Resize & set autogrowth for system DBs
9) Transfer logins (sp_help_revlogin)
10) Install, customize, & schedule the Ola solution
11) Set up an operator & Database Mail
12) Configure notifications
13) Set up links & permissions
14) Install WhoIsActive
15) Check recovery models are FULL for log backups
16) Initial manual backup/restore
17) Update usage and purity
18) Configure remote DTC
19) Set cmdshell to match the old server
20) Add me to local admin
21) Make a catch-all backup job
22) Transfer jobs
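Several of the instance-level items above (4, 5, 6, and 7) can be scripted rather than clicked. A minimal sketch via sp_configure; the values shown are examples to adjust per server, not prescriptions:

```sql
-- sketch of steps 4, 5, 6, and 7 (example values only)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 28672;       -- e.g. leave ~4GB for the OS on a 32GB box
EXEC sp_configure 'backup compression default', 1;       -- step 5
EXEC sp_configure 'cost threshold for parallelism', 80;  -- step 6
EXEC sp_configure 'optimize for ad hoc workloads', 1;    -- step 7
RECONFIGURE;
```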
Orphaned Users
Here’s a quick script to fix orphaned users after a migration …
--OrphanedUsers.sql
-- create temp table
CREATE TABLE #orphans (oname VARCHAR(100), oSID VARCHAR(100) PRIMARY KEY)
DECLARE @cmd VARCHAR(MAX), @name VARCHAR(100)

-- populate temp table with orphaned logins
INSERT #orphans (oname, osid)
EXEC sp_change_users_login @Action = 'Report';

-- loop to fix / or else create login with default pw
WHILE (SELECT COUNT(*) FROM #orphans) > 0
BEGIN
    SELECT TOP 1 @name = oname FROM #orphans
    SET @cmd = 'EXEC sp_change_users_login ''Auto_Fix'', ''' + @name + ''', NULL, ''B3r12-3x$098f6'';'
    DELETE FROM #orphans WHERE oname = @name
    EXEC (@cmd)
END

-- tidy up
DROP TABLE #orphans
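Worth noting: sp_change_users_login is deprecated from SQL 2005 onwards. On newer servers the same de-orphaning can be done per user with ALTER USER; a sketch, with placeholder names:

```sql
-- remap an orphaned database user to the server login of the same name
ALTER USER [SomeUser] WITH LOGIN = [SomeUser];
```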
Making index changes to Production
These days I use a SQL Job called ‘DBA – index maint’.
Whenever I have an index change to make I paste the script into a new step, name that step with today’s date, change the ‘start step’ to that step, and schedule it to run once overnight.
This gives me a history of each change and its outcome, alongside the exact action taken.
SQL Safe error “Cannot connect to SQL Server instance”
This was fixed by re-installing SQL Safe. Bonus: here is a working restore command with move …
EXEC [master].[dbo].[xp_ss_restore]
    @database = 'SomeDatabase',
    @filename = 'J:\backups\SomeDatabase.BAK',
    @backuptype = 'Full',
    @withmove = 'SomeDatabase_data "J:\sql_data\SomeDatabase_data.mdf"',
    @withmove = 'SomeDatabase_log "J:\sql_log\SomeDatabase_log.ldf"',
    @recoverymode = 'recovery',
    @replace = '1';
Compress all tables
As part of my management of our MDW I wrote this to help compress the user tables in the database.
SELECT 'ALTER TABLE ' + s.name + '.' + t.name +
       ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
FROM sys.tables AS t
INNER JOIN sys.schemas AS s ON t.[schema_id] = s.[schema_id]
Diff Backup / Restore (part 3)
On my Reporting Server I created the SQL Job ‘DBA – Restore MyDatabase’ with two steps 1) Restore MyDatabase, and 2) Add Users.
Step 2 just de-orphaned some user accounts EG:-
EXEC sp_change_users_login 'Auto_Fix', 'uidServerLink';
Step 1 contained this code …
-- Restore
EXECUTE [master].[dbo].[DatabaseRestore]
    @dbName = 'MyDatabase',
    @SourceServer = 'MySourceServer',
    @backupPath = 'M:\Backups'

-- Change recovery model
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-1/
https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-2/
Diff Backup / Restore (part 2)
On the Production Server I created the Job ‘DBA – Backup/Restore MyDatabase’ with two steps 1) Backup MyDatabase, and 2) Restore MyDatabase.
Step 2 just started the Restore job on the Reporting server (detailed in “Part 3”)
Step 1 needed to check that no random backups had happened in the last 24 hours before starting a Diff Backup …
-- If it's Friday or the LSNs do not match - do a FULL backup
DECLARE @DBName VARCHAR(100) = 'MyDatabase' --<< Database Name

IF (SELECT DATEPART(dw, GETDATE())) = 6 --<< = Friday
   OR (SELECT MAX(differential_base_lsn)
       FROM [MyProdServer].[master].[sys].[master_files]
       WHERE [name] LIKE '%' + @DBName + '%')
      !=
      (SELECT MAX(differential_base_lsn)
       FROM [MyReportServer].[master].[sys].[master_files]
       WHERE [name] LIKE '%' + @DBName + '%')
BEGIN
    SELECT 'We can only do a FULL backup'
    EXECUTE [master].[dbo].[DatabaseBackup]
        @Databases = @DBName,
        @Directory = N'\\MyReportServer\backups',
        @BackupType = 'FULL',
        @CleanupTime = 1, --<< ONE HOUR
        @CleanupMode = 'BEFORE_BACKUP',
        @Compress = 'Y',
        @CheckSum = 'Y',
        @LogToTable = 'Y'
END
-- Else do a DIFF backup
ELSE
BEGIN
    SELECT 'We can do a DIFF backup'
    EXECUTE [master].[dbo].[DatabaseBackup]
        @Databases = @DBName,
        @Directory = N'\\MyReportServer\backups',
        @BackupType = 'DIFF',
        @CleanupTime = 168, --<< ONE WEEK
        @CleanupMode = 'BEFORE_BACKUP',
        @Compress = 'Y',
        @CheckSum = 'Y',
        @LogToTable = 'Y'
END
https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-3/
https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-1/
Diff Backup / Restore (part 1)
Although Ola’s Backup solution works great …
https://ola.hallengren.com/sql-server-backup.html
For this project I needed a corresponding Restore procedure, so I could set up nightly Diff Backup / Restores (from Prod to Reporting) without having to write too much code 🙂
I modified the code from here for our environment (see below).
http://jason-carter.net/professional/restore-script-from-backup-directory-modified.html
In my next posts I will detail the SQL Jobs for this project.
USE [master]
GO

CREATE PROCEDURE [dbo].[DatabaseRestore]
    @dbName sysname,
    @SourceServer NVARCHAR(500),
    @backupPath NVARCHAR(500)
AS
/* To restore backups created from ola.hallengren's backup solution (RS) */
SET NOCOUNT ON

DECLARE @cmd NVARCHAR(500),
        @lastFullBackup NVARCHAR(500),
        @lastDiffBackup NVARCHAR(500),
        @backupFile NVARCHAR(500)

DECLARE @fileList TABLE (backupFile NVARCHAR(255))
DECLARE @directoryList TABLE (backupFile NVARCHAR(255))

/* Kill any connections */
DECLARE @kill VARCHAR(8000) = '';
SELECT @kill = @kill + 'kill ' + CONVERT(VARCHAR(5), spid) + ';'
FROM [master].[dbo].[sysprocesses]
WHERE dbid = DB_ID(@dbName)
AND spid > 50;
EXEC (@kill);

/* Match that of Ola's output */
SET @backupPath = @backupPath + '\' + @SourceServer + '\' + @dbName + '\'

/* Get List of Files */
SET @cmd = 'DIR /s /b /O D ' + @backupPath
IF (SELECT value_in_use FROM sys.configurations WHERE name = 'xp_cmdshell') = 0
BEGIN /* cmd shell is disabled */
    EXEC sp_configure 'show advanced options', 1 RECONFIGURE
    EXEC sp_configure xp_cmdshell, 1 RECONFIGURE
    INSERT INTO @fileList(backupFile) EXEC master.sys.xp_cmdshell @cmd
    EXEC sp_configure 'xp_cmdshell', 0 RECONFIGURE
    EXEC sp_configure 'show advanced options', 0 RECONFIGURE
END
ELSE /* cmd shell is enabled */
    INSERT INTO @fileList(backupFile) EXEC master.sys.xp_cmdshell @cmd

/* Find latest full backup */
SELECT @lastFullBackup = MAX(backupFile)
FROM @fileList
WHERE backupFile LIKE '%' + @SourceServer + '_' + @dbName + '_FULL_%.bak'

SET @cmd = 'RESTORE DATABASE [' + @dbName + '] FROM DISK = '''
         + @lastFullBackup + ''' WITH NORECOVERY, REPLACE'
SELECT (@cmd); EXEC (@cmd)

/* Find latest diff backup */
SELECT @lastDiffBackup = MAX(backupFile)
FROM @fileList
WHERE backupFile LIKE '%' + @SourceServer + '_' + @dbName + '_DIFF_%.bak'
AND RIGHT(backupfile, 19) > RIGHT(@lastFullBackup, 19)

/* check to make sure there is a diff backup */
IF @lastDiffBackup IS NOT NULL
BEGIN
    SET @cmd = 'RESTORE DATABASE [' + @dbName + '] FROM DISK = '''
             + @lastDiffBackup + ''' WITH NORECOVERY'
    SELECT (@cmd); EXEC (@cmd)
    SET @lastFullBackup = @lastDiffBackup
END

--/* check for log backups */
-- DECLARE backupFiles CURSOR FOR
-- SELECT backupFile
-- FROM @fileList
-- WHERE backupFile LIKE '%' + @SourceServer + '_' + @dbName + '_LOG_%.trn'
-- AND RIGHT(backupfile, 19) > RIGHT(@lastFullBackup, 19)
-- OPEN backupFiles

--/* Loop through all the files for the database */
-- FETCH NEXT FROM backupFiles INTO @backupFile
-- WHILE @@FETCH_STATUS = 0
-- BEGIN
--     SET @cmd = 'RESTORE LOG [' + @dbName + '] FROM DISK = '''
--              + @backupFile + ''' WITH NORECOVERY'
--     SELECT (@cmd); EXEC (@cmd)
--     FETCH NEXT FROM backupFiles INTO @backupFile
-- END
-- CLOSE backupFiles
-- DEALLOCATE backupFiles

/* put database in a useable state */
SET @cmd = 'RESTORE DATABASE [' + @dbName + '] WITH RECOVERY'
SELECT (@cmd); EXEC (@cmd)
GO
https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-2/
https://richardbriansmith.wordpress.com/2015/10/02/diff-backup-restore-part-3/
DBA Rule #8
Avoid 3rd Party management software. Backup software doubly so.
Slave SQL Jobs
To run a SQL-Job on the same Server use …
EXECUTE msdb.dbo.sp_start_job 'JobName'
To run a SQL-Job on another Server use …
EXECUTE [ServerName].msdb.dbo.sp_start_job 'JobName'
(assuming LinkedServers etc are already set-up)
** Update ** I had an issue where the remote job-name could not be found. The cause (I saw in sys.servers) was that I had used the wrong version of SSMS to create the link. The fix was to amend a working scripted-out link.
Differential backup / restore using SQL Safe
I wanted to improve a backup / restore sql-job that populated a Reporting Server every night. I felt it would be quicker if it did a weekly Full backup and daily Diff backups.
The 3rd-party backup software was “SQL Safe Backup” from Idera.
I used this code on the Production Server …
--BackupDBA.sql
-- FULL Backup if Friday
IF (SELECT DATEPART(dw, GETDATE())) = 6
BEGIN
    EXEC [master].[dbo].[xp_ss_backup]
        @database = 'DBA',
        @filename = '\\ReportingServer\m$\DBA_Full.safe',
        @backuptype = 'Full',
        @overwrite = 1,
        @verify = 1;
END;

-- DIFF Backup
EXEC [master].[dbo].[xp_ss_backup]
    @database = 'DBA',
    @filename = '\\ReportingServer\m$\DBA_Diff.safe',
    @backuptype = 'Diff',
    @overwrite = 1,
    @verify = 1;
… and this code I scheduled a few hours later on the Reporting Server …
--RestoreDBA.sql
-- First, kill any connections
DECLARE @kill VARCHAR(8000) = '';
SELECT @kill = @kill + 'kill ' + CONVERT(VARCHAR(5), spid) + ';'
FROM [master].[dbo].[sysprocesses]
WHERE dbid = DB_ID('DBA')
AND spid > 50;
EXEC (@kill);

-- Restore FULL
EXEC [master].[dbo].[xp_ss_restore]
    @database = 'DBA',
    @filename = 'M:\DBA_Full.safe',
    @backuptype = 'Full',
    @recoverymode = 'norecovery',
    @replace = '1';

-- Restore DIFF
EXEC [master].[dbo].[xp_ss_restore]
    @database = 'DBA',
    @filename = 'M:\DBA_Diff.safe',
    @backuptype = 'Diff',
    @recoverymode = 'recovery',
    @replace = '1';
Finally, I added a step on Reporting to repair the orphaned user-accounts …
USE [DBA]
GO
EXEC sp_change_users_login 'Auto_Fix', 'ReportingLogin';
EXEC sp_change_users_login 'Auto_Fix', 'LinkedServerLogin';
GO
Recovery Pending
After exhausting all the usual methods to get a database out of ‘Recovery Pending’ mode, I deleted it then reattached the mdf file.
(NB: I made a copy of the mdf file first, but did not need it.)
Reset File size and Autogrowth settings
This is the logical conclusion of my reset_tempdb and reset_model scripts. It shows all of the file sizes and autogrowth settings in the current instance, along with the code to change them.
The suggested sizes (128 MB for logfiles and 256 MB for datafiles) are reasonable for Model, but should probably be amended for other databases depending on their current size and autogrowth history.
--autogrowth_all.sql
-- get current settings & create commands to change them
select db.Name,
       case mf.[Type] when 0 then 'DATA' else 'LOG' end [FileType],
       convert(varchar(50), size*8/1024) + ' MB' [CurrentSize],
       case mf.is_percent_growth
            when 1 then convert(varchar(50), growth) + ' %'
            when 0 then convert(varchar(50), growth*8/1024) + ' MB'
       end [AutoGrowth],
       'ALTER DATABASE [' + db.Name + '] MODIFY FILE (NAME = N''' + mf.Name
       + ''', SIZE = ' + case mf.[type] when 0 then '256' else '128' end + 'MB);' [ReSizeCommand],
       'ALTER DATABASE [' + db.Name + '] MODIFY FILE (NAME = N''' + mf.Name
       + ''', FILEGROWTH = ' + case mf.[type] when 0 then '256' else '128' end + 'MB);' [AutogrowthCommand]
from [master].[sys].[master_files] mf
join [master].[sys].[databases] db on mf.database_id = db.database_id
order by mf.database_id, mf.[type];
TempDB Autogrowth
I was shocked to find TempDB was the most frequently autogrown database in our environment – and changing the settings by hand soon got old.
Here's my global script to set them to best-practice values. It will only change files that exist, and will fail if the specified size is less than the current size (IE: it fails safe).
--tempdb_autogrowth.sql
-- report size and growth
select name, size*8/1024 [sizeMB], growth*8/1024 [growthMB]
from master.sys.master_files
where db_name(database_id) = 'tempdb'
order by physical_name

-- resize datafile(s) to 256MB & logfile to 128MB
USE [master]
GO
begin try
    ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'tempdev',  SIZE = 262144KB, FILEGROWTH = 262144KB);
    ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'tempdev2', SIZE = 262144KB, FILEGROWTH = 262144KB);
    ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'tempdev3', SIZE = 262144KB, FILEGROWTH = 262144KB);
    ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'tempdev4', SIZE = 262144KB, FILEGROWTH = 262144KB);
    ALTER DATABASE [tempdb] MODIFY FILE (NAME = N'templog',  SIZE = 131072KB, FILEGROWTH = 131072KB);
end try
begin catch
end catch
Set Default Autogrowth
Here’s another one of my global changes. This one sets-up ‘Model’, which is the template for all databases created in the future.
-- AutogrowthDefault.sql
-- first examine the current settings
select Name,
       size*8/1024 [SizeMB],
       case is_percent_growth
            when 1 then convert(varchar(50), growth) + ' %'
            when 0 then convert(varchar(50), growth*8/1024) + ' MB'
       end AutoGrowth
from master.sys.master_files
where db_name(database_id) = 'Model'
order by [type];

---- set Initial-Size to best-practice
ALTER DATABASE [model] MODIFY FILE (NAME = N'modeldev', SIZE = 256MB);
ALTER DATABASE [model] MODIFY FILE (NAME = N'modellog', SIZE = 128MB);

---- set Autogrowth to best-practice
ALTER DATABASE [model] MODIFY FILE (NAME = N'modeldev', FILEGROWTH = 256MB);
ALTER DATABASE [model] MODIFY FILE (NAME = N'modellog', FILEGROWTH = 128MB);
Bulk import into Arcserve UDP
To bulk import my SQL Servers into Arcserve I ran this query on all my production servers via Registered Servers
--arc_import.sql
select cast(serverproperty('ComputerNamePhysicalNetBIOS') as varchar(100))
       + ',DOM\svc-arcserve,password'
I pasted the column into a text file saved to my desktop. Then in Arcserve's main console I chose to Add a Node, then Import from a File. Lastly, I navigated to the text document and imported them all.
NOTE 1: You have to put the right domain, service-account, and password into the script.
NOTE 2: With clusters the script will only get the name of the current active-node.
POST SCRIPT
By the way: that query won't get the server-name for SQL 2000 boxes. This one will get them all …
-- physical server names
exec master..xp_regread
    'HKEY_LOCAL_Machine',
    'SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName\',
    'ComputerName'
Find a SQL Job from a JobStep ID
Looking through Waits in my MDW I often see things like …
Program Name: SQLAgent- TSQL JobStep (Job 0xD56…
… and can get the job name from …
SELECT * FROM msdb.dbo.sysjobs WHERE job_id = 0xD56...
Or even better! to go directly from the JobStep-ID to the Executable causing the issue …
SELECT * FROM msdb.dbo.sysjobsteps WHERE job_id = 0xD56...
Torn Page Detection
Here is another one of those best practice settings I run against new-to-me servers …
EXEC sp_MSForEachDB 'ALTER DATABASE [?] SET PAGE_VERIFY CHECKSUM WITH NO_WAIT';
On SQL 2005 it errors saying you cannot change tempdb, but that's fine.
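Before running it, a quick way to see which databases are not already on CHECKSUM:

```sql
-- list databases whose page-verify option is not yet CHECKSUM
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc != 'CHECKSUM';
```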
When SnapManager backup fails
When a SnapManager backup failed I …
– manually ran the SQL job – which failed again
– remoted onto the server and read the logs (start/ NetApp/ SnapManager for SQL Management) – there weren’t any for that day!
– fired up SnapDrive (start/ NetApp/ SnapDrive) and examined the Disks – there weren’t any (dum dum duuuum)
– restarted all Snap~ services (x3)
– found the disks had re-appeared in Snapdrive
– manually started the sql backup job – success!
SQL Server Statistics – my definition
STATISTICS are metadata that tell the query-optimizer how the values in a column are distributed, so it can estimate how many rows a given query will return.
If an index is estimated to return too large a share of the table's rows, the optimizer will not use it for the current query.
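To see what the optimizer actually knows, you can inspect a statistic directly; a sketch with hypothetical table and index names:

```sql
-- header, density vector, and histogram for one statistic
DBCC SHOW_STATISTICS ('dbo.SomeTable', 'IX_SomeIndex');

-- or (SQL 2008 R2 SP2 onwards) row counts and last-updated dates for all stats on a table
SELECT s.name, p.last_updated, p.rows, p.rows_sampled
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) p
WHERE s.[object_id] = OBJECT_ID('dbo.SomeTable');
```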
Who changed a stored-procedure?
To quickly find out who recently changed a stored procedure, first I found the date/time it was last changed …
SELECT name, create_date, modify_date FROM sys.objects WHERE type = 'P' order by modify_date desc
… then in SQL Server Management Studio I right-clicked the Server-Name, chose Reports \ Standard Reports \ Schema-Changes History, and scrolled down to that date/time.
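If you prefer a query, the same information comes from the default trace, which is where that report reads from. A sketch (assuming the default trace is enabled; event class 164 is Object:Altered):

```sql
-- read object-altered events from the default trace
SELECT t.StartTime, t.LoginName, t.HostName, t.ObjectName
FROM sys.traces st
CROSS APPLY fn_trace_gettable(st.[path], DEFAULT) t
WHERE st.is_default = 1
AND t.EventClass = 164 -- Object:Altered
ORDER BY t.StartTime DESC;
```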
Scripting out SQL Jobs
When I scripted out a SQL job with the "DROP and CREATE" option (on SQL 2008 R2) I found it only worked once, because [sp_delete_job] referenced a job_id GUID that would no longer exist the next time it ran.
To fix this I replaced [@job_id=N'blah blah blah'] in [sp_delete_job] with [@job_name=N'blah blah'], pasting in the actual job name from [sp_add_job].
I also modified the script to additionally work with SQL 2000 and SQL 2005 by …
– in [sp_delete_job], commenting out [–, @delete_unused_schedule=1]
– duplicating [sp_add_job], with the first copy preceded by [IF @@version LIKE '%2000%' BEGIN] and without [@notify_email_operator_name]
– and the second copy preceded by [END ELSE]
Database Restore Msg 3634 Error ’32
I kept getting an odd error message …
Msg 3634, Level 16, State 1, Line 3
The operating system returned the error ’32(The process cannot access the file because it is being used by another process.)’ while attempting ‘RestoreContainer::ValidateTargetForCreation’ on ‘E:\Website.mdf’.
blah blah blah
… while executing this code (simplified) …
RESTORE DATABASE [Website]
FROM DISK = N'E:\backups\Website.bak'
WITH FILE = 1,
     MOVE N'Website' TO N'D:\Website.mdf',
     MOVE N'Website_Indexes' TO N'D:\Website_1.ndf',
     MOVE N'Website_Customers' TO N'D:\Website_2.ndf',
     MOVE N'Website_log' TO N'D:\Website_3.ldf',
     NOUNLOAD, REPLACE, STATS = 1
GO
The solution was to use a double-backslash in the path like so …
RESTORE DATABASE [Website]
FROM DISK = N'E:\\backups\Website.bak'
WITH FILE = 1,
     MOVE N'Website' TO N'D:\Website.mdf',
     MOVE N'Website_Indexes' TO N'D:\Website_1.ndf',
     MOVE N'Website_Customers' TO N'D:\Website_2.ndf',
     MOVE N'Website_log' TO N'D:\Website_3.ldf',
     NOUNLOAD, REPLACE, STATS = 1
GO
Bids Crashing
To stop "Business Intelligence Development Studio" crashing when you click CALCULATIONS, just uninstall and re-install BIDS.
Empty Database
Further to my last post, here is my script to empty (using the very fast TRUNCATE command) all the tables that are not referenced by a foreign key. Step 2 then attempts to delete the contents of the remaining tables (this fails for any table still referenced by a non-empty table). Step 3 has another go at those that failed, hopefully more successfully now that the referencing tables are empty.
-- EmptyDatabase.sql

-- 1) TRUNCATE child & independent tables
-- capture non-parent table names
SELECT SCHEMA_name(schema_id) [schema], [name]
INTO #tables
FROM sys.tables
WHERE name NOT IN (SELECT OBJECT_NAME(referenced_object_id) FROM sys.foreign_keys)
ORDER BY name

-- loop through list truncating tables
DECLARE @sql VARCHAR(300)
WHILE (SELECT COUNT(*) FROM #tables) > 0
BEGIN
    SELECT TOP 1 @sql = [schema] + '.[' + [name] + ']' FROM #tables
    DELETE FROM #tables WHERE [schema] + '.[' + [name] + ']' = @sql
    SELECT @sql = 'truncate table ' + @sql
    EXEC(@sql)
END
DROP TABLE #tables

-- 2) DELETE parents of child tables
-- list all parent-tables
SELECT OBJECT_NAME(referenced_object_id) pname
INTO #ptables
FROM sys.foreign_keys
GO

-- loop through list DELETING tables
DECLARE @sql VARCHAR(300)
WHILE (SELECT COUNT(*) FROM #ptables) > 0
BEGIN
    SELECT TOP 1 @sql = [pname] FROM #ptables
    DELETE FROM #ptables WHERE [pname] = @sql
    SET @sql = 'delete ' + @sql
    EXEC(@sql)
END
DROP TABLE #ptables

-- 3) DELETE parents of (now-empty) parent tables
-- list all parent-tables
SELECT OBJECT_NAME(referenced_object_id) pname
INTO #ptables2
FROM sys.foreign_keys
GO

-- loop through list DELETING tables
DECLARE @sql VARCHAR(300)
WHILE (SELECT COUNT(*) FROM #ptables2) > 0
BEGIN
    SELECT TOP 1 @sql = [pname] FROM #ptables2
    DELETE FROM #ptables2 WHERE [pname] = @sql
    SET @sql = 'delete ' + @sql
    EXEC(@sql)
END
DROP TABLE #ptables2
In use, this reduced a database of 965,849,336 total rows to just 10,860 rows in less than two seconds. After shrinking (this was a copy of a production database on a dev server, set to Simple recovery), it went from 822,000.50 MB to 65,100.13 MB.
I think I deserve a beer!
Tables referenced by Foreign Keys
As part of my project to empty a database I wanted to TRUNCATE the tables without foreign keys. Here's a neat script I have not seen elsewhere, which lists the tables referenced by at least one foreign key (and that therefore could not be truncated) …
-- ParentTables.sql SELECT OBJECT_NAME(referenced_object_id) AS TablesRefdByFKs FROM sys.foreign_keys
And just for interest, here's the script to list tables that are not parents, and may or may not be children …
-- NonParentTables.sql SELECT name NonParentTables FROM sys.tables where name NOT IN (SELECT OBJECT_NAME(referenced_object_id) FROM sys.foreign_keys) ORDER BY name
Which SQL Process is running which SQL Instance?
To easily see which SQL Process relates to which Instance of SQL Server – In Windows Task Manager / Processes
– right-click on one of the ‘sqlservr.exe’ names
– and select ‘Open file location’.
The path of the folder that opens identifies which instance that executable relates to.
When CHECK DB found Corruption!
Looking over a new SQL Server I noticed a failing job. It was the ‘Check DB’ part of a maintenance-plan and contained this error message …
Msg 2508, Level 16, State 3, Line 1
The In-row data RSVD page count for object “SomeTable”, index ID 0, partition ID 104586854531027, alloc unit ID 104586854531027 (type In-row data) is incorrect. Run DBCC UPDATEUSAGE.
CHECKDB found 0 allocation errors and 1 consistency errors in table ‘SomeTable’ (object ID 1595868725).
It transpired that this box was once upgraded from SQL-2000 and some steps had been missed.
It was trivial to fix (and not a sign of corruption) …
DBCC CHECKDB WITH DATA_PURITY; GO EXEC sp_MSForEachDB 'DBCC UPDATEUSAGE(?);';
The AutoShrink code I use
--autoshrink.sql
-- list databases where autoshrink is ON
select * from sys.databases where is_auto_shrink_on != 0

-- make script to turn autoshrink OFF
SELECT 'ALTER DATABASE [' + name + '] SET AUTO_SHRINK OFF WITH NO_WAIT'
FROM sys.databases
WHERE database_id > 4
AND is_auto_shrink_on != 0
To Empty a Database
To stop the import of 'already there' data (twice) I needed to empty every table. It's a very, very good idea to back up your databases before using this code. It empties a whole database in one go!
-- empty database
use SomeDatabaseName -- ** change to subject database first & *** CHECK SERVER NAME ***
go
EXEC sp_MSForEachTable 'TRUNCATE TABLE [?]'
Untrusted Keys
Here’s my mod to Aaron Lowe’s @Vendoran script to get rid of untrusted foreign keys and constraints …
--fix_untrusted_keys.sql
SELECT 'ALTER TABLE [' + s.name + '].[' + o.name + '] WITH CHECK CHECK CONSTRAINT ['
       + i.name + ']' AS keyname
from sys.foreign_keys i
INNER JOIN sys.objects o ON i.parent_object_id = o.object_id
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE i.is_not_trusted = 1
AND i.is_not_for_replication = 0
Notification Test
Notification emails telling a DBA that something failed are all well and good, but if you don't get any emails, how can you be sure the notification system is working?
The simplest solutions are often the best. I've taken to scheduling an 8am job (on every SQL Server that runs jobs) called 'Notification Heartbeat' that contains just an empty step. Its notification configuration is set to email the 'DBA' operator every time it succeeds.
Of course, I then have an Outlook rule that moves them (all with 'Heartbeat' in the subject) to the 'DBA' folder, so I don't have to read them every day.
Now when my morning inbox is suspiciously empty – I have a way to verify that the Notification system was tested end-to-end at 8am.
NOTE: I used to try combining the notification-test with actual working Jobs like Backups, but this is simpler and the message value is clearer.
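The heartbeat job itself can be scripted; a rough sketch, assuming an operator named 'DBA' already exists (the schedule call is left as a comment):

```sql
-- create an empty job that emails the DBA operator on every success
USE msdb;
EXEC sp_add_job
    @job_name = N'Notification Heartbeat',
    @notify_level_email = 1, -- 1 = notify when the job succeeds
    @notify_email_operator_name = N'DBA';
EXEC sp_add_jobstep
    @job_name = N'Notification Heartbeat',
    @step_name = N'do nothing',
    @command = N'SELECT 1;';
EXEC sp_add_jobserver @job_name = N'Notification Heartbeat';
-- plus a daily 8am schedule via sp_add_jobschedule
```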
Remove oci.dll
When trying to uninstall Oracle Client (from a Windows server) the standalone deinstall tool couldn’t remove the file “oci.dll”.
To remove it (without rebooting) I tried to manually delete the file. Each error message indicated which Windows service had it locked (first it was "SQL Server", then once I'd stopped that, "VMware Tools", then DCOM, then COM+).
After stopping the locking services in turn, I was able to delete the file.
Lastly I restarted all the services I’d stopped 😉
Is that database used?
Before an upgrade I wanted to remove any unused databases. To help find them I created a generic sql-job that every 10 mins would log connections.
I say ‘generic’ as the job needed to work on SQL2000, SQL2005, and SQL2012. So I created it using SQL-2000 enterprise-manager and scripted it out as ‘createJob_DbaAudit.sql’.
The script in job-step-1 (‘Build Table – if needed’) was …
select @@ServerName AS Server,
       db.[name] [Database],
       getdate() [TimeStamp],
       Hostname,
       Program_name,
       Net_library,
       Loginame
into master.dbo.tbl_connections
from master.dbo.sysdatabases db
join master.dbo.sysprocesses pr on db.dbid = pr.dbid
where db.dbid > 4
… this step was set to continue on to the next step regardless of whether it succeeded or failed.
Step-2 (‘Record Connections’) was similar …
insert into master.dbo.tbl_connections
select @@ServerName AS Server,
       db.[name] [Database],
       getdate() [TimeStamp],
       Hostname,
       Program_name,
       Net_library,
       Loginame
from master.dbo.sysdatabases db
join master.dbo.sysprocesses pr on db.dbid = pr.dbid
where db.dbid > 4
Final notes: To be robust and generic, I did not set-up Notifications. Also, I added a few lines to the top of the ‘CreateJob_DbaAudit.sql’ script to delete the tbl_connections table from master if it already existed.
Dropping all Temp tables
I wanted to drop all my temp tables at the top of a large SELECT statement and already had a query to drop a Named temp-table …
IF OBJECT_ID('tempdb..#Temp2') IS NOT NULL DROP TABLE #Temp2
… so just needed to make that dynamic. Looking at the temp tables …
select * from tempdb.sys.objects where name like '#%'
… I noticed all mine had a bunch of underscores and a GUID appended to the name, and the GUID always contained at least five zeros. So I was able to filter out just mine with …
select name from tempdb.sys.objects where name like '#%00000%'
… and paste that into a while loop to drop them …
--first drop all temp tables
IF OBJECT_ID('temptables') IS NOT NULL DROP TABLE temptables

select name
into temptables
from tempdb.sys.objects
where name like '#%00000%'

declare @table varchar(max), @cmd varchar(max)
while (select count(*) from temptables) > 0
begin
    set @table = (select top 1 name from temptables)
    set @cmd = 'IF OBJECT_ID(''tempdb..' + @table + ''') IS NOT NULL DROP TABLE ' + @table
    exec(@cmd)
    delete from temptables where name = @table
end

DROP TABLE temptables
By initially creating a list and then working through it deleting the temp-table and its entry in the list, I ensured there would be no endless looping should it try (and fail) to drop a temp-table I don’t have permissions on.
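On SQL Server 2016 and later, the per-table existence check can be shortened with the newer syntax; a sketch:

```sql
-- SQL 2016+ one-liner equivalent of the OBJECT_ID existence check
DROP TABLE IF EXISTS #Temp2;
```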
Simplest CTE
Here’s my simplest working CTE, to play with …
drop table t1
create table t1 (c1 int)
insert into t1 values (10), (11), (12)

;with cte(x) as (select 11)
delete from t1 where c1 in (select x from cte)

select * from t1
The power of SSIS
When you start SSIS development it seems rather easy to paste some tested t-sql into a box, QED! job done!
However, the real power of SSIS lies in the 2 areas where it out-performs raw t-sql.
1) Server linking: Because it uses a native format, grabbing data from a distant source and applying it somewhere else is quick and robust.
2) There are blinding-fast (in memory) replacements for the t-sql operations … GROUP BY (called the 'Aggregate' transformation), CASE ('Conditional Split'), CAST ('Data Conversion'), UPDATE ('Derived Column'), SOUNDEX ('Fuzzy Lookup'), SORT (er, called 'Sort').
Simplest Merge example
Here's the simplest example of a working Merge statement I could make. It's great to build on when testing the formatting of 'real' merges.
drop table d, s

-- destination table containing old data
create table d (id int, fruit varchar(50), tasty int)
insert into d values (1, 'apple', 5), (2, 'orange', 5)
select * from d

-- source table containing unchanged, updated, and new data
create table s (id int, fruit varchar(50), tasty int)
insert into s values (1, 'apple', 5), (2, 'orange', 0), (3, 'banana', 9)
select * from s

-- merge statement
merge d target
using s source
on target.id = source.id
when matched then
    update set tasty = source.tasty
when not matched then
    insert values (id, fruit, tasty);

-- show new destination table
select * from d
To remove an item from the ‘Recent Projects’ list on the Start-Page of SQL 2008r2 BIDS
– Hover over the ‘Recent Project’ you want to remove.
– Note the path shown at the bottom-left of the screen.
– Go to that path and delete the dtproj or sln file noted previously.
– Left-click the item in BIDS ‘Recent Projects’ and click YES to the message ‘Remove this item blah blah blah …’.
A severe error occurred on the current command. The results, if any, should be discarded.
This worrying error was fixed by explicitly naming the columns being selected from a linked server. This query failed …

select * from openquery([someserver\someinstance], 'select * from [databasename].[schema].[tablename]')

Msg 0, Level 11, State 0, Line 4
A severe error occurred on the current command. The results, if any, should be discarded.

… but with a named column it worked …

select * from openquery([someserver\someinstance], 'select [LifeID] from [databasename].[schema].[tablename]')

(111878 row(s) affected)
SSIS Error -1071607685
We had this error number written multiple times into an SSIS errors table. It seems to be one of those Microsoft generic codes.
In this instance it indicated that we were using ‘fast load’ to write to a table, and one (or more) of the rows was failing.
To find the erroneous row(s), and get specific error(s), we changed to ‘normal’ load (called “Table or View”), which is fully-logged.
How to Start SQL Server Agent via Management Studio
I could not remote onto this customer's database server, so I looked at my local machine and found the 'short name' for the SQL Agent service was 'SQLAgent$sql2014'.
I recognized the last part ('sql2014') as my local instance name, so was able to surmise this code would work for the customer (if their instance was called 'SomeInstanceName') …
exec xp_cmdshell 'net start SQLAgent$SomeInstanceName'
** UPDATE **
When a colleague locked up a dev server I was able to free it by restarting the service like this …
xp_cmdshell 'net start' -- to get the list of services
xp_cmdshell 'net stop "SQL Server Agent (DEV2008r2)"' -- stop the Agent service in preparation for the next step
xp_cmdshell 'net stop "SQL Server (DEV2008r2)"' -- stop the SQL Server service NOTE: double-quotes
xp_cmdshell 'net start "SQL Server (DEV2008r2)"'
xp_cmdshell 'net start "SQL Server Agent (DEV2008r2)"'
Execution Plan – notes
A Clustered Index Scan is similar to a Table Scan. IE: The data is being read row-by-row
A SCAN happens when the optimizer determines that:
1) all/most rows need to be returned (so it would be a waste of time reading the index keys),
2) the index is not selective enough, and the optimizer thinks it needs to read all/most of the index,
3) the index statistics are found to be out of date, or
4) the query applies functions to a column's data, obscuring the column's data from the optimizer.
An (index) SCAN suggests that because a normally sufficient index is NOT sufficient at the moment – more data may be being returned than (normally) needed – suggesting more filtering may be needed – perhaps in the WHERE clause.
A CLUSTERED index seek is beneficial because the data can be read from the index, without having to look at the table.
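Point 4 above can be sketched like this (table and column names are hypothetical, not from the original note): wrapping the indexed column in a function forces a scan, while rewriting the predicate to leave the column bare lets the optimizer seek.

```sql
-- Assumes a hypothetical Orders table with an index on OrderDate.

-- SCAN: YEAR() hides OrderDate from the optimizer, so every
-- index row must be read and the function evaluated per row.
SELECT OrderID
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2014;

-- SEEK: the column is left bare, so the optimizer can navigate
-- the index keys directly to the matching range of rows.
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20140101'
  AND OrderDate <  '20150101';
```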
Kill connections
Here’s a handy piece of code to put before a backup or snapshot-restore, where you need to kill current connections …
--first remove any connections
USE master
GO
DECLARE @kill varchar(8000) = '';
SELECT @kill=@kill+'kill '+convert(varchar(5),spid)+';'
FROM master..sysprocesses
WHERE dbid=db_id('SomeDatabaseName')
AND spid>50;
EXEC (@kill);
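An alternative (my addition, not from the original note) is to let SQL Server do the killing itself: flipping the database to SINGLE_USER with ROLLBACK IMMEDIATE rolls back open transactions and drops all other connections in one statement.

```sql
-- Alternative sketch: kick out all other connections before a
-- restore ('SomeDatabaseName' is a placeholder).
USE master;
GO
ALTER DATABASE SomeDatabaseName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
-- ... perform the restore / snapshot revert here ...
ALTER DATABASE SomeDatabaseName SET MULTI_USER;
GO
```

Remember to set it back to MULTI_USER afterwards, or nobody else will be able to connect.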
Notepad++ and hidden characters
Notepad++ is great for finding patterns of text spread over several lines as you can include hidden characters in the search-string (like tab, new-line, etc).
But WATCH OUT!! If you leave a single line-feed without its partner carriage-return, SQL scripts will misbehave intermittently.
The FIX – When you’ve completed whatever you’re doing – click Format / Convert to UNIX, followed by Format / Convert to Windows. Then save 🙂
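As an aside (my addition, not from the original note), you can also hunt down lone line-feeds directly with a regular-expression search in Notepad++ (Search mode: “Regular expression”) — the lookbehind matches only line-feeds that are not preceded by a carriage-return:

Find what:    (?<!\r)\n
Replace with: \r\n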
Changing a Table Column to Identity
--AddIdentity.sql
/******* REMEMBER TO SCRIPT-OUT THE TABLE FIRST *******/
--1 delete the old column
ALTER TABLE [dbo].[sometable] DROP COLUMN [somecolumn]
GO
--2 if it fails ... drop the primary key (or whatever)
ALTER TABLE [dbo].[sometable] DROP CONSTRAINT PK_sometable
GO
--3 create the new identity column
ALTER TABLE [dbo].[sometable] ADD [somecolumn] INT IDENTITY(1, 1)
GO
--4 restore the primary key (if dropped)
ALTER TABLE [dbo].[sometable] ADD CONSTRAINT PK_sometable PRIMARY KEY NONCLUSTERED ([somecolumn] ASC)
--5 lastly reset the identity seed
DECLARE @nextid INT;
SET @nextid = (SELECT ISNULL(MAX([somecolumn]), 0) FROM [sometable]);
DBCC CHECKIDENT ('dbo.sometable', RESEED, @nextid);
Odd violation of Primary Key
This error didn’t seem to make sense as the Primary-Key was an identity column – and therefore the Insert that generated the error didn’t include it.
Msg 2627, Level 14, State 1, Line 5
Violation of PRIMARY KEY constraint ‘PK_somecolumn’. Cannot insert duplicate key in object ‘dbo.sometable’.
It transpired that the database had recently been restored in an inconsistent state: the identity column had lost track of how many rows were in the table, and was generating a Primary-Key value that already existed.
The fix was to refresh the identity column with the biggest value it contained, before the Insert …
DECLARE @nextid INT;
SET @nextid = (SELECT MAX([columnname]) FROM [tablename]);
DBCC CHECKIDENT ([tablename], RESEED, @nextid);
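To confirm this diagnosis before reseeding, you can compare the identity counter with the actual maximum value in the column — a sketch (table and column names are placeholders); NORESEED reports the current identity value without changing anything:

```sql
-- Report the current identity value without modifying it.
DBCC CHECKIDENT ('dbo.sometable', NORESEED);

-- Compare against the real maximum in the identity column.
SELECT MAX([somecolumn]) AS max_value
FROM dbo.sometable;

-- If the reported identity value is LOWER than max_value,
-- inserts will eventually collide with existing keys,
-- and a RESEED to max_value is the fix.
```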
A little slice of my dba day
A little slice of my dba life 🙂 … during morning-checks I noticed the MSDB database was 2.9GB and growing (in utility explorer). I ran the built-in SSMS report ‘Disk Usage by Table’ and found the table sysjobhistory was by far the largest (at 2.6GB).
I checked for a sql-job that manages job history – there wasn’t one :). I created sql-job ‘Prune Backup History’ with the command ‘delete from msdb.dbo.sysjobhistory where run_date < convert(char(8), getdate()-30,112)', and scheduled it to run daily at 5am.
I ran it manually and was disappointed to find the table was still 2.4GB in size. I fired-up SSDT and created a new IS project. I dragged the 'Data profiling Task' to the design surface and configured it to output to \\server\share\msdbprofile. Within 'quickprofile' I connected to the server/database/table.
I saved the package, ran it (made a coffee), then opened the task again and clicked 'open profile viewer'.
Within 'Column Value Distribution Profiles' I found a disproportionate 74% of the run_date values were the same date (about 2 weeks ago). Double-clicking this result I could NOT see the name of the job (in the raw data) but recognised the step-name as belonging to the cdc job I had set-up … about two weeks ago, lol.
I concluded the sql-job was fine, and would dramatically shrink sysjobhistory – in a few weeks time. Cake time 🙂
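As an aside (my addition, not from the original note): msdb also ships with a stored procedure for pruning job history, which is a tidier alternative to deleting from sysjobhistory directly. A sketch:

```sql
-- Alternative: let msdb prune its own job history.
-- Removes history rows older than 30 days for all jobs.
DECLARE @cutoff DATETIME = DATEADD(DAY, -30, GETDATE());
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;
```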
Running Checktable in batches
To check for corruption on a large, busy database I was able to use my filegroup checking script on all but one filegroup.
To polish off the final (monster) filegroup, I resorted to creating a database-snapshot (in the morning) and running this (checktable) script during the day.
btw: Whilst I realise checking a database-snapshot is of limited use, I reasoned it was better than the only alternative (not checking).
Before leaving work I would review the results and drop the snapshot (for application performance), in readiness to create a fresh one (and continue) the next day.
--CheckTablesInAFilegroup_vldb.sql
USE TheSnapshotName
GO
-- first run: capture table-names from the named filegroup into a work table
-- (on subsequent runs the SELECT INTO fails because the table already exists,
-- which drops execution into the CATCH block below)
BEGIN TRY
    SELECT o.name
    INTO master.dbo.fgtables
    FROM sys.indexes i
    JOIN sys.filegroups f ON i.data_space_id = f.data_space_id
    JOIN sys.all_objects o ON i.object_id = o.object_id
    WHERE f.name = 'somefilegroupname'
    ORDER BY o.name
END TRY
-- subsequent runs: check the remaining tables one-by-one
BEGIN CATCH
    DECLARE @sql VARCHAR(200)
    WHILE (SELECT COUNT(*) FROM master.dbo.fgtables) > 0
    BEGIN
        SELECT TOP 1 @sql = 'DBCC CHECKTABLE ("dbo.' + name + '")'
        FROM master.dbo.fgtables
        ORDER BY name
        EXEC (@sql)
        DELETE FROM master.dbo.fgtables
        WHERE name = (SELECT TOP 1 name FROM master.dbo.fgtables ORDER BY name)
    END
END CATCH
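For completeness, a database snapshot (as used above) can be created and dropped like this — a sketch with hypothetical names and paths; the logical file name and sparse-file path must match your own database:

```sql
-- Create a snapshot to run DBCC checks against
-- (SomeDatabase, SomeDatabase_data, and the path are placeholders).
CREATE DATABASE TheSnapshotName
ON (NAME = SomeDatabase_data,                  -- logical name of the source data file
    FILENAME = 'J:\snapshots\SomeDatabase.ss') -- sparse file holding changed pages
AS SNAPSHOT OF SomeDatabase;

-- ... run the checks during the day ...

-- Drop it before creating a fresh one the next morning.
DROP DATABASE TheSnapshotName;
```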
Showing row counts with commas
I wanted to display a changing row-count (in a staging table) including commas (eg: 1,234 or 1,234,567).
I tried using STUFF() in a CASE statement to insert commas, then LEFT(), before settling on this, which is more succinct, and scales from 1 to 1,999,999,999 …
select replace(convert(varchar(20), convert(money, count(*)), 1), '.00', '') from dbo.sometable with(nolock)
To explain how it works …
– the ‘1’ is the CONVERT style that includes commas (and two decimal places) when converting from MONEY to VARCHAR
– the REPLACE then removes the ‘.00’ that was added during the conversion from MONEY
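On SQL Server 2012 and later (this post is filed under 2008R2, so it wouldn’t have applied at the time), the built-in FORMAT() function does the same thing more directly:

```sql
-- SQL Server 2012+ alternative: 'N0' formats a number with
-- thousands separators and no decimal places.
SELECT FORMAT(COUNT(*), 'N0')
FROM dbo.sometable WITH (NOLOCK);
```

Worth noting that FORMAT() uses the CLR under the hood and can be slow over large result sets, though for a single aggregate like this it is fine.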