Logins with no default database

This can happen when databases are dropped or renamed.

In Azure, cross-database connections are disabled, so it's important to connect to the actual database you will be working on, rather than, say, master.

This will list all logins where the default database is missing:-

SELECT [name] Login,
       default_database_name MissingDefaultDB
FROM sys.server_principals
WHERE DATABASEPROPERTYEX(default_database_name, 'Status') IS NULL
AND default_database_name IS NOT NULL
ORDER BY [name];
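
Once found, re-pointing a login at a database that does exist is a one-liner (hypothetical names):

ALTER LOGIN [SomeLogin] WITH DEFAULT_DATABASE = [SomeDatabase];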

Removing duplicates from a large table before adding a clustered primary key

The challenge with this one was to remove over 2,000 duplicate rows from a 600,000,000-row heap, before a multi-column clustered primary key could be added.

(I am so happy to finally be able to document this, from a while ago. The phrase “uniquely challenging” was never so apt.)

My first idea was to create an empty copy of the source table, add an “ignore duplicates” index, then start an INSERT [dbo].[targettable] SELECT * FROM [dbo].[sourcetable]. However, I stopped it after an hour as it was going to take too long.
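
In case it is useful, here is a sketch of that first idea (hypothetical names, and assuming for brevity that the table has just the four key columns):

CREATE TABLE [dbo].[targettable]
(column1 BIGINT, column2 INT, column3 BIGINT, column4 VARCHAR(10));

CREATE UNIQUE CLUSTERED INDEX ix_dedupe
ON [dbo].[targettable] (column1, column2, column3, column4)
WITH (IGNORE_DUP_KEY = ON); -- duplicate inserts are discarded with a warning, not an error

INSERT INTO [dbo].[targettable]
SELECT column1, column2, column3, column4
FROM [dbo].[sourcetable];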

My second idea was to use SSIS to populate the target-table. Again this was just too slow.

My third idea was to just remove the duplicates (2,000 being a much smaller number than 600,000,000). First though, a disclaimer: I only considered this ** high risk ** solution because the data was in a QA environment (not Live), and people were waiting for me.

Step one then, was to populate a temp-table with the duplicates …

SELECT column1, column2, column3, column4
INTO #tmp1
FROM [dbo].[sometablename]
GROUP BY column1, column2, column3, column4
HAVING COUNT(*) > 1;

(“column1” etc were not the real column names, by-the-way 🙂 ). Step two was to loop through this list, removing one duplicate of each pair from the permanent table …

DECLARE @1 BIGINT, @2 INT, @3 BIGINT;
DECLARE @4 VARCHAR(10);

WHILE EXISTS (SELECT 1 FROM #tmp1)
BEGIN

	-- grab one duplicated key-set
	SELECT TOP(1)
		@1 = column1,
		@2 = column2,
		@3 = column3,
		@4 = column4
	FROM #tmp1;

	-- delete just ONE of the matching rows from the permanent table
	DELETE TOP(1)
	FROM [dbo].[sometablename]
	WHERE column1 = @1
	AND column2 = @2
	AND column3 = @3
	AND column4 = @4;

	-- mark this key-set as done
	DELETE
	FROM #tmp1
	WHERE column1 = @1
	AND column2 = @2
	AND column3 = @3
	AND column4 = @4;

END;

Step three was to implement the clustered primary key. (By-the-way, I was half expecting this to fail with a duplicates error, but happily the above loop had cleared a single duplicate from each pair, and that's all that was needed.)

ALTER TABLE [dbo].[sometablename] 
ADD CONSTRAINT pk_sometablename
 PRIMARY KEY CLUSTERED
 ([column1], [column2], [column3], [column4]);

** just to point out – I consider the loop “high risk” because an error message, a typo, accidentally running it twice, or faulty logic, could have resulted in disaster 🙂
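
For completeness: these days I would be tempted to do the whole thing set-based with ROW_NUMBER(). A sketch using the same hypothetical columns (not what I ran at the time) …

WITH dupes AS
(
	SELECT ROW_NUMBER() OVER
	       (PARTITION BY column1, column2, column3, column4
	        ORDER BY (SELECT NULL)) AS rn
	FROM [dbo].[sometablename]
)
DELETE FROM dupes
WHERE rn > 1;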

Resource Governor

The Resource Governor is one of those rarely used tools, as its impact is inherently ‘negative’. It restricts resources for one user-group, but cannot be seen to boost the performance of a more favoured group.

For example, you could restrict the SA login to just 5% of the CPU resource. The effect would be that SA can still use up to 100% of the CPU, until another user-group wants more CPU than is currently free. SA will then be throttled back, and back, until it is only using 5%. Under contention, five percent is the most SA can use.

It is not easy to see this throttling in action without firing up Perfmon. Also, the GUI within SSMS is not-so-good, showing only some of the configuration. Therefore, this is one of those things that I work with completely at the code level.

So, here then is my crib-sheet …

-- ResPool.sql

-- 1 create custom resource pool of available hardware
USE master;
GO
CREATE RESOURCE POOL MxResourcePool WITH
(
	MIN_CPU_PERCENT = 0,
	MAX_CPU_PERCENT = 5
);
GO

-- 2 create a group and link it to the resource pool
USE master;
GO
CREATE WORKLOAD GROUP MxWorkloadGroup
USING MxResourcePool;
GO

-- 3 populate the group
USE master;
GO
CREATE FUNCTION dbo.fn_MxClassifier()
RETURNS sysname WITH SCHEMABINDING
AS
BEGIN
DECLARE @wg AS sysname
IF SUSER_NAME() = 'sa'
	SET @wg = 'MxWorkloadGroup'
ELSE
	SET @wg = 'default'

RETURN @wg
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_MxClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- check config

SELECT s.session_id,
s.host_name,
s.program_name,
s.nt_user_name,
s.login_name, 
w.name WorkgroupAssignment,
r.name ResourcePoolAssignment

FROM sys.dm_exec_sessions s
JOIN sys.dm_resource_governor_workload_groups w
  ON s.group_id = w.group_id
JOIN sys.dm_resource_governor_resource_pools r
  ON w.pool_id = r.pool_id
WHERE s.host_name IS NOT NULL
ORDER BY nt_user_name; -- or s.session_id DESC
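
-- rough view of throttling without Perfmon: cumulative CPU per pool
-- (a sketch; this DMV's columns vary a little between versions)

SELECT [name],
       statistics_start_time,
       total_cpu_usage_ms
FROM sys.dm_resource_governor_resource_pools
ORDER BY [name];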

-- undo (the classifier must be detached before the function can be dropped)

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL);
ALTER RESOURCE GOVERNOR RECONFIGURE;
DROP FUNCTION dbo.fn_MxClassifier;
DROP WORKLOAD GROUP MxWorkloadGroup;
DROP RESOURCE POOL MxResourcePool;
ALTER RESOURCE GOVERNOR DISABLE;


To sync a materialized view

To manually keep a results cache updated, I recommend comparing concatenated multi-column primary keys rather than using full joins. If it looks simple, it should last. (If adjacent key parts could run together ambiguously, concatenate with a separator character.)

/* 1 Insert */

INSERT INTO 
	[mv].[sometablename]
SELECT 
	*
FROM 
	[vw].[vw_someviewname]
WHERE
	[pkcomposite1] + [pkcomposite2]
NOT IN
	(
	SELECT
		[pkcomposite1] + [pkcomposite2]
	FROM
		[mv].[sometablename]
	);


/* 2 Delete */

DELETE FROM
	[mv].[sometablename]
WHERE
	[pkcomposite1] + [pkcomposite2]
NOT IN
	(
	SELECT
		[pkcomposite1] + [pkcomposite2]
	FROM
		[vw].[vw_someviewname]
	);


/* 3 Update */

UPDATE
	MAT
SET
	MAT.[somecolumn1] = VW.[somecolumn1],
	MAT.[somecolumn2] = VW.[somecolumn2]
FROM
	[mv].[sometablename] MAT
JOIN
	[vw].[vw_someviewname] VW
ON
	MAT.[pkcomposite1] + MAT.[pkcomposite2]
	=
	VW.[pkcomposite1] + VW.[pkcomposite2]
WHERE
	MAT.[somecolumn1] <> VW.[somecolumn1]
OR
	MAT.[somecolumn2] <> VW.[somecolumn2];
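
For what it is worth, the three steps can also be collapsed into a single MERGE. A sketch using the same hypothetical names (and joining on the key columns separately, which side-steps any concatenation ambiguity):

MERGE [mv].[sometablename] AS MAT
USING [vw].[vw_someviewname] AS VW
   ON MAT.[pkcomposite1] = VW.[pkcomposite1]
  AND MAT.[pkcomposite2] = VW.[pkcomposite2]
WHEN MATCHED AND (MAT.[somecolumn1] <> VW.[somecolumn1]
               OR MAT.[somecolumn2] <> VW.[somecolumn2]) THEN
	UPDATE SET [somecolumn1] = VW.[somecolumn1],
	           [somecolumn2] = VW.[somecolumn2]
WHEN NOT MATCHED BY TARGET THEN
	INSERT ([pkcomposite1], [pkcomposite2], [somecolumn1], [somecolumn2])
	VALUES (VW.[pkcomposite1], VW.[pkcomposite2], VW.[somecolumn1], VW.[somecolumn2])
WHEN NOT MATCHED BY SOURCE THEN
	DELETE;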

Recovery Interval not optimal

The SQL Server “Target Recovery Time” setting used to be ‘0’ by default, which could have a performance impact during a checkpoint, by amplifying lazywriter contention.

Indirect checkpointing alleviates this and has been available since SQL Server 2012. Indeed, the default value for new databases was changed to ’60’ (seconds) from SQL Server 2016 onwards.
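
Switching an older database over is a one-liner (a minimal example, assuming a database called SomeDatabase):

ALTER DATABASE [SomeDatabase] SET TARGET_RECOVERY_TIME = 60 SECONDS;

-- current setting for every database
SELECT [name], target_recovery_time_in_seconds
FROM sys.databases;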

Cannot drop user as it owns a schema

Whilst removing orphaned user accounts, I came across this error …

Msg 15138, Level 16, State 1, Line 1
The database principal owns a schema in the database, and cannot be dropped.

Looking at the properties of the user within the database, I noticed that they owned the db_datawriter schema. Within the properties of that schema, I was able to type db_datawriter in as the new owner. After that, I was able to remove the user from the database.
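
The same fix in T-SQL looks like this (assuming the user is called SomeUser and the schema is db_datawriter):

-- list any schemas owned by the user
SELECT [name] AS owned_schema
FROM sys.schemas
WHERE principal_id = DATABASE_PRINCIPAL_ID('SomeUser');

-- hand the schema back to the role of the same name, then drop the user
ALTER AUTHORIZATION ON SCHEMA::[db_datawriter] TO [db_datawriter];
DROP USER [SomeUser];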

Here is a scripted way to do this …

https://www.mssqltips.com/sqlservertip/3439/script-to-drop-all-orphaned-sql-server-database-users/

Today's failed jobs

I cobbled this together to run on multiple servers via CMS. It is simpler and more trustworthy than alerts from monitoring software.

--JobFailures.sql

SELECT J.name,
       H.FailedAt,
       H.Message
FROM msdb.dbo.sysjobs AS J
CROSS APPLY (
    SELECT TOP(1)
        FailedAt = msdb.dbo.agent_datetime(T.run_date, T.run_time),
        Message = T.message
    FROM msdb.dbo.sysjobhistory AS T
    WHERE T.job_id = J.job_id
    AND T.run_status = 0 -- failed
    AND msdb.dbo.agent_datetime(T.run_date, T.run_time) > GETDATE()-1 -- in the last 24 hrs
--  AND msdb.dbo.agent_datetime(T.run_date, T.run_time) > GETDATE()-3 -- covering the weekend
    ORDER BY T.instance_id) H;

Original code from here …

https://stackoverflow.com/questions/54215008/sql-agent-job-last-run-status

Quick row count

When COUNT(*) was too slow to get total rows from a billion-row heap …

SELECT MAX(REPLACE(CONVERT(VARCHAR(20),
       CONVERT(MONEY, rowcnt), 1), '.00', '')) [rows] -- the MONEY convert adds thousands separators
FROM sys.sysindexes
WHERE OBJECT_NAME(id) = 'TableName';

Original code from here …

https://www.i-programmer.info/programming/database/6576-sql-server-quickly-get-row-counts-for-tables-heaps-indexes-and-partitions.html
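
On newer versions I would get the same number from sys.partitions, which is also metadata-only (index_id 0 is a heap, 1 a clustered index):

SELECT SUM(p.rows) AS [rows]
FROM sys.partitions p
WHERE p.object_id = OBJECT_ID('dbo.TableName')
AND p.index_id IN (0, 1);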

Adding a database to an Azure backup policy

Twice I had to ask how to do this – so it is probably time for me to write it down.

Setting the scene – this was a virtual SQL Server hosted on Azure where I had already configured a backup policy within the Portal, and then wanted to add another database to that policy.

Clicking around in the Backup center and Backup policy screens did not help. The trick was to go through the motions of taking a backup.

1. Within the Backup center click “+ Backup”

2. On the next screen make sure the Datasource type is “SQL in Azure VM” before selecting the right vault.

3. Choose “Configure Backup” …

4. Click “Add” …

5. Then – you can choose the Instance (or AlwaysOn AG), database, then Enable Backup. Simples 🙂

Removing Partitions

Remembering that it is indexes that are partitioned, not tables, I used this query to list them …

SELECT OBJECT_SCHEMA_NAME(i.object_id) AS [schema],
       OBJECT_NAME(i.object_id) AS [object],
       i.name AS [index],
       s.name AS [partition_scheme]
FROM sys.indexes i
JOIN sys.partition_schemes s
  ON i.data_space_id = s.data_space_id;

I found the quickest way to remove partitioning is :-

  • Make a copy of the partitioned table with all the data (EG: “tablename2”)
  • Recreate all indexes, keys, and constraints (specifying the “ON [PRIMARY]” option)
  • Drop the partitioned table
  • Rename the copy to the table that was just dropped
  • Drop any partition schemes and functions (see the sketch after the queries below)
SELECT * FROM sys.partition_schemes
--DROP PARTITION SCHEME somename

SELECT * FROM sys.partition_functions
--DROP PARTITION FUNCTION somename
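
And a sketch of the copy-and-swap itself (hypothetical names; the real work is scripting out every index, key, and constraint with the “ON [PRIMARY]” option):

SELECT *
INTO [dbo].[tablename2] -- lands on the default filegroup, not the partition scheme
FROM [dbo].[tablename];

-- recreate all indexes, keys, and constraints here, each specifying ON [PRIMARY]

DROP TABLE [dbo].[tablename];
EXEC sp_rename 'dbo.tablename2', 'tablename';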

Send an Email with a PDF attached

There are many posts on how to automatically generate a PDF receipt and email it by leveraging SSRS. Here is how it went for me.

  1. Create a report that shows the details required.
  2. Create a parameters table.
  3. Subscribe to the report using the parameters table.
  4. Create a stored-procedure to populate and fire the subscription.

On a version of SQL Server Reporting Services (SSRS) that supports data-driven-subscriptions (DDS) I created a report called ‘SingleInvoice’ with one input parameter ‘invoice number’.

Outside of this report, in preparation for the DDS, I created a data source pointing to the local [ReportServer] database.

Within the [ReportServer] database I created a table called [dbo].[InvoiceParams]

CREATE TABLE [dbo].[InvoiceParams](
	[InvoiceNumber] [VARCHAR](100) NULL,
	[ToEmailAddress] [VARCHAR](200) NULL,
	[CCEmailAddress] [VARCHAR](200) NULL,
	[BccEmailAddress] [VARCHAR](200) NULL,
	[ReplyToEmailAddress] [VARCHAR](200) NULL,
	[IncludeReport] [BIT] NULL,
	[RenderFormat] [VARCHAR](20) NULL,
	[Priority] [VARCHAR](15) NULL,
	[Subject] [VARCHAR](150) NULL,
	[Comment] [VARCHAR](150) NULL,
	[IncludeLink] [BIT] NULL,
	[Active] [BIT] NULL,
	[DateInserted] [DATETIME] NOT NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT (NULL) FOR [CCEmailAddress]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT (NULL) FOR [BccEmailAddress]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT (NULL) FOR [ReplyToEmailAddress]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT ((1)) FOR [IncludeReport]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT ('PDF') FOR [RenderFormat]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT ('Normal') FOR [Priority]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT ((0)) FOR [IncludeLink]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT ((1)) FOR [Active]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT (GETDATE()) FOR [DateInserted]
GO

To simplify the stored procedure I defined default values for all the columns I would not be dynamically populating.

Next I created a Data-driven subscription on the report with a schedule in the past – so it would never fire. For Destination I chose E-Mail.

Within the subscription I edited the dataset and chose the previously created shared data source [ReportServer].

I added this query before clicking ‘Apply’ …

SELECT * 
FROM dbo.InvoiceParams
WHERE Active = 1;

Back in the New Subscription form, I completed the Delivery options, mapping each one to the matching [InvoiceParams] column.

Within the user database I created this stored-procedure …

/*==================================================
  Author:		Richard Smith
  Create date:	10 Jul 2020
  Description:	To Email PDF receipts - demo version
  Test: Exec [dbo].[EmailReceipts] 'INV123456789',
            'Richard.Smith@company.com'
  =================================================*/

CREATE PROC [dbo].[EmailReceipts]
    @InvoiceNumber VARCHAR(100),
    @ToEmailAddress VARCHAR(200),
    @Subject VARCHAR(150) = 'test subject',
    @Comment VARCHAR(150) = 'test body',
    @SubscriptionID NVARCHAR(260) = '987654321' 
                       /* Report = "SingleInvoice" */
AS
BEGIN
    SET NOCOUNT ON;


/* 1 Save the inputs */

    INSERT INTO [ReportServer].[dbo].[InvoiceParams] 
            (InvoiceNumber, ToEmailAddress, [Subject], Comment)
    VALUES (@InvoiceNumber, @ToEmailAddress, @Subject, @Comment);


/* 2 Trigger subscription. Which will send the report (+ inputs) to the email-subsystem-queue */

    EXEC [ReportServer].[dbo].[AddEvent] @EventType = 'TimedSubscription', @EventData = @SubscriptionID;
    WAITFOR DELAY '00:00:10';


/* 3 If no longer in queue, flag as sent */

    IF NOT EXISTS (SELECT 1 FROM [ReportServer].[dbo].[Event] WHERE EventData = @SubscriptionID)
        UPDATE [ReportServer].[dbo].[InvoiceParams] 
        SET Active = 0
        WHERE InvoiceNumber = @InvoiceNumber
        AND ToEmailAddress = @ToEmailAddress;


/* 4 Manage the log */

	DELETE FROM [ReportServer].[dbo].[InvoiceParams] WHERE DateInserted < GETDATE()-30;
	SELECT * FROM [ReportServer].[dbo].[InvoiceParams] ORDER BY DateInserted DESC;

END;
GO

When executed with an email address and invoice number, this stored procedure sends an email to that address with the PDF invoice attached.

NOTE

To find @SubscriptionID I used this …

SELECT SCH.SubscriptionID
FROM [ReportServer].[dbo].[Catalog] CAT
JOIN [ReportServer].[dbo].[ReportSchedule] SCH
  ON CAT.ItemID = SCH.ReportID
WHERE CAT.[Path] = '/ReportPath/ReportName';

Removing duplicate rows

No need to over-develop this – once you realise that the DELETE command can include the TOP option.

More? Ok. Create a SELECT command that shows the issue. EG:-

SELECT ID
FROM dbo._op2
WHERE ID = 'X123456';

Add TOP to return just the unwanted rows. IE: if the above query should return 1 row but erroneously returns 2 rows, use TOP(1); if it returns 5 rows, use TOP(4) …

(NOTE: It does not matter which particular rows get deleted … they are duplicates remember)

SELECT TOP(1) ID
FROM dbo._op2
WHERE ID = 'X123456';

Change SELECT to DELETE and remove the column name …

DELETE TOP(1)
FROM dbo._op2
WHERE ID = 'X123456';

… and only run it once 😉

Except and Intersect

Here is the simplest working example of EXCEPT and INTERSECT I can come up with …

/* Except.sql */

IF OBJECT_ID('tempdb..#t1') IS NOT NULL DROP TABLE #t1;
CREATE TABLE #t1 (c1 INT);
INSERT INTO #t1 VALUES (1), (2);

IF OBJECT_ID('tempdb..#t2') IS NOT NULL DROP TABLE #t2;
CREATE TABLE #t2 (c2 INT);
INSERT INTO #t2 VALUES (2), (3);

SELECT * FROM #t1
EXCEPT
SELECT * FROM #t2; /* = 1 */

SELECT * FROM #t1
INTERSECT
SELECT * FROM #t2; /* = 2 */

SELECT * FROM #t2
EXCEPT
SELECT * FROM #t1; /* = 3 */

SELECT * FROM #t1
EXCEPT
SELECT * FROM #t2
UNION
(SELECT * FROM #t2
EXCEPT
SELECT * FROM #t1); /* = 1 & 3 */

I use this frequently whilst refactoring to check the outputs are identical. And sometimes when syncing a MySQL table to MSSQL.

How to check SQL Jobs are actually doing something

Looking through the Database Mail log today, I accidentally discovered a job that had been busy sending emails for I-don't-know-how-long, using an email profile that no longer worked. The output of the job was ‘success’, as the emails had been successfully queued with the Database Mail sub-system.

After finding the emails would have been empty anyway, I disabled the job. But it made me wonder if there might be other jobs that were busy doing nothing – hour after hour – day after day.

Knowing the dangers of weakening the system, I did not want to fail a job or job-step just to flag a maintenance issue.

The lowest-risk change I could think of making (to the many, legacy, unfamiliar jobs) was to leave pertinent messages in the job history log using the PRINT command. For example:-

IF EXISTS (SELECT 1 FROM SomeTable)
BEGIN
	PRINT 'YES: there is new data';
	-- (do meaningful stuff)
END
ELSE
	PRINT 'NO: there is no new data';

Then in the future I might notice that there is never any new data! (Note: for PRINT output to reach the job history, the job-step advanced option “Include step output in history” needs to be ticked.)

Expensive Spaghetti

Now that cloud computing has made the cost of things more visible, it is easier to see how much money is being wasted carrying around legacy spaghetti code.

Where once I.T. Managers postponed the task of cleaning up inefficient code that works to “one day”, now a cloud-compute-unit can be tied directly to cold hard cash. “Technical Debt” suddenly has an individual price sticker.

Subtree Cost = DTUs = Money

Set every user's default schema to DBO where it is blank

In this quick script I am assuming the Windows domain is called ‘DOM’ … Note that it makes no changes itself; it just generates the ALTER USER commands.

-- ChangeDefaultSchemaToDboWhereNull.sql

DECLARE @cmd varchar(1000) 

SET @cmd = 
'USE ? IF DB_ID(''?'') > 4 SELECT ''USE ?; ALTER USER ['' + name + ''] WITH DEFAULT_SCHEMA = [dbo]''
 FROM sys.database_principals
 WHERE default_schema_name IS NULL
 AND [name] LIKE ''DOM\%'''

IF OBJECT_ID('tempdb..#output') IS NOT NULL DROP TABLE #output
CREATE TABLE #output
(command varchar(1000))

INSERT INTO #output
EXEC sp_MSforeachdb @cmd

SELECT * 
FROM #output
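
The output is one ALTER USER command per affected user, per database, ready to be reviewed and run. For example (hypothetical):

USE SomeDatabase; ALTER USER [DOM\jsmith] WITH DEFAULT_SCHEMA = [dbo]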


Save emergency contact details to Azure Blob Storage

This was part of the business continuity plan, so that HR had a list of employees next-of-kin etc, off-site.

I created a Job with 2 steps (export using BCP, then upload using AzCopy).

-- bcp.sql


-- Create the command to be executed

	DECLARE @cmd VARCHAR(1000) = -- must be one unbroken line
		'bcp SomeDatabase..SomeView out E:\SomePath\SomeFileName.txt -c -t, -T -S' + @@servername;
		-- (-c character data, -t, comma field-terminator, -T trusted connection, -S server)

-- Run this if command-prompt is disabled

	IF (SELECT value_in_use FROM sys.configurations WHERE name = 'xp_cmdshell') = 0
	BEGIN
		EXEC sp_configure 'show advanced options', 1; RECONFIGURE; -- show advanced options
		EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE; -- enable command-shell
		EXEC xp_cmdshell @cmd; -- run the command
		EXEC sp_configure 'xp_cmdshell', 0; RECONFIGURE; -- disable command-shell
		EXEC sp_configure 'show advanced options', 0; RECONFIGURE; -- hide advanced options
	END

-- Run this if command-prompt is enabled

	ELSE
		EXEC xp_cmdshell @cmd;

Step-1 overwrites the file each time.

In preparation for the azcopy command I used Azure Explorer to connect and create a fileshare called “BCExtract”.

Right-clicking on this, I chose “Generate Shared Access Signature”, then “Ad hoc Shared Access Signature”. I left all the defaults as they were, except I increased the Expiry time a bit.

I pasted the “Shared Access Signature URI” into notepad, added a backslash, then the “Shared Access Signature Token” making one long string.

Before this string I added “azcopy copy ” then the local path to the CSV file in double quotes.

-- azcopy.sql


-- Create the command to be executed

	DECLARE @cmd VARCHAR(1000) = -- must be one unbroken line
		'azcopy copy "E:\SomePath\SomeFileName.txt" "https://SomeLocation/SomeSharedAccessToken"';

-- Run this if command-prompt is disabled

	IF (SELECT value_in_use FROM sys.configurations WHERE name = 'xp_cmdshell') = 0
	BEGIN
		EXEC sp_configure 'show advanced options', 1; RECONFIGURE; -- show advanced options
		EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE; -- enable command-shell
		EXEC xp_cmdshell @cmd; -- run the command
		EXEC sp_configure 'xp_cmdshell', 0; RECONFIGURE; -- disable command-shell
		EXEC sp_configure 'show advanced options', 0; RECONFIGURE; -- hide advanced options
	END

-- Run this if command-prompt is enabled

	ELSE
		EXEC xp_cmdshell @cmd;

Azure Backups and Restores

Backups

“Azure SQL Database” includes automatic (compressed) backups. The retention period is 7 days for Basic and 35 days for Standard and Premium. Backups are geo-replicated. The backup schedule is a weekly full, differentials every 12 hours, and tlog backups every 5 to 10 minutes.

All backups support point-in-time recovery with a 12 hour RTO, and 5 minute RPO for in-region backups.

Max backup storage size is twice the tier's database size limit (EG: Standard = 500GB of backup space). If exceeded, you can a) request a reduced retention period, or b) pay extra at the standard read rate.

Long term backup archiving can be done using a) a manual export to BACPAC on Azure Blob Storage, or b) Long-Term-Retention to an Azure Backup Vault for up to 10 years.

Restores

Databases can be restored (if within the retention period). Be aware that all databases and their backups are deleted if you drop a Server.

Local restores can be to a point-in-time, can be renamed, but cannot overwrite an existing database.

Install Azure SQL Server and Databases

Portal click-by-click

1. Create SQL Server
– Home / SQL Servers / Add
– Subscription: “FT”
– Resource Group: “(New) SD-resource”
– Server Name “SD-server”
– Region: “(Europe) UK South” (London)
– Admin login “SD-login” / pw
– Allow Azure Services to access this server: “No”
– Enable advanced security: “Not now”
(wait 2 minutes)

2. Create SQL Database
– Home / SQL Databases / Add
– Database name: “SD-db”
– Elastic pool?: No
– Resources: “5 DTU’s (Basic)”
– Data source: “None” (blank database)
(wait 2 minutes)
– Home / SQL Databases
– Database Features
– TDE: “Off”

3. Server level Connectivity
– Home / SQL Servers / Server name: “SD-server.database.windows.net” (cut)
– SSMS (paste) / SQL Server Authentication / SD-login / pw

3b. Messages
– “The requested name is valid, but no data of the requested type was found” =
server name was wrong.
– “New Firewall Rule” = your i/p is not on the server firewall list. Do you want to add it ? = Yes

4. (optional) database connectivity
a. exec sp_set_database_firewall_rule dbfwrule1, '86.168.15.119', '86.168.15.119';
b. select * from sys.database_firewall_rules;
c. exec sp_delete_database_firewall_rule dbfwrule1;

Notes
– Azure SQL DB offers 3 service tiers: Basic, Standard, and Premium.
– These tiers define performance and recovery.
– Performance is measured in DTUs.
– Multiple DBs can share resources through Elastic Database Pools.

Differential backup / restore using SQL Safe

I wanted to improve a backup / restore sql-job that populated a Reporting Server every night. I felt it would be quicker if it did a weekly Full backup and daily Diff backups.

The 3rd-party backup software was “SQL Safe Backup” from Idera.

I used this code on the Production Server …

--BackupDBA.sql

-- FULL Backup if Friday

	IF (SELECT DATEPART(dw, GETDATE())) = 6 -- Friday, assuming the default DATEFIRST of 7
	BEGIN
		EXEC [master].[dbo].[xp_ss_backup]
		@database = 'DBA',
		@filename = '\\ReportingServer\m$\DBA_Full.safe',
		@backuptype = 'Full',
		@overwrite = 1,
		@verify = 1;
	END;


-- DIFF Backup

	EXEC [master].[dbo].[xp_ss_backup]
	@database = 'DBA',
	@filename = '\\ReportingServer\m$\DBA_Diff.safe',
	@backuptype = 'Diff',
	@overwrite = 1,
	@verify = 1;

… and this code I scheduled a few hours later on the Reporting Server …

--RestoreDBA.sql

-- First, kill any connections

	DECLARE @kill VARCHAR(8000) = '';
	SELECT  @kill = @kill + 'kill ' + CONVERT(VARCHAR(5), spid) + ';'
	FROM    [master].[dbo].[sysprocesses]
	WHERE   dbid = DB_ID('DBA')
	AND spid > 50;
	EXEC (@kill);


-- Restore FULL

	EXEC [master].[dbo].[xp_ss_restore] 
	@database = 'DBA',
	@filename = 'M:\DBA_Full.safe', 
	@backuptype = 'Full',
	@recoverymode = 'norecovery', 
	@replace = '1';


-- Restore DIFF

	EXEC [master].[dbo].[xp_ss_restore] 
	@database = 'DBA',
	@filename = 'M:\DBA_Diff.safe', 
	@backuptype = 'Diff',
	@recoverymode = 'recovery', 
	@replace = '1';

Finally, I added a step on Reporting to repair the orphaned user-accounts …

USE [DBA]
GO

EXEC sp_change_users_login 'Auto_Fix', 'ReportingLogin';
EXEC sp_change_users_login 'Auto_Fix', 'LinkedServerLogin';
GO
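
(On newer versions, ALTER USER is the documented replacement for sp_change_users_login. For example …)

ALTER USER [ReportingLogin] WITH LOGIN = [ReportingLogin];
ALTER USER [LinkedServerLogin] WITH LOGIN = [LinkedServerLogin];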

TempDB Autogrowth

I was shocked to find TempDB was the most-often-autogrown database in our environment – and changing the files by hand soon got old.

Here’s my global script to set them to best-practice values. It only changes files that exist, and will fail for any file where the specified size is less than the current size (IE: it fails safe).

--tempdb_autogrowth.sql

-- report size and growth
SELECT name, size*8/1024 [sizeMB], growth*8/1024 [growthMB]
FROM master.sys.master_files
WHERE DB_NAME(database_id) = 'tempdb'
ORDER BY physical_name;


-- resize datafile(s) to 256MB & logfile to 128MB
USE [master]
GO
-- each file gets its own TRY/CATCH, so one missing file does not stop the rest
BEGIN TRY ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev',  SIZE = 262144KB , FILEGROWTH = 262144KB ); END TRY BEGIN CATCH END CATCH
BEGIN TRY ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev2', SIZE = 262144KB , FILEGROWTH = 262144KB ); END TRY BEGIN CATCH END CATCH
BEGIN TRY ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev3', SIZE = 262144KB , FILEGROWTH = 262144KB ); END TRY BEGIN CATCH END CATCH
BEGIN TRY ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev4', SIZE = 262144KB , FILEGROWTH = 262144KB ); END TRY BEGIN CATCH END CATCH
BEGIN TRY ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'templog',  SIZE = 131072KB , FILEGROWTH = 131072KB ); END TRY BEGIN CATCH END CATCH