Log shipping to SSRS

I like simplicity, and job schedules can provide a lot of it. I set up one job on the primary server to back up logs, and one job on the secondary server to restore them from the primary share.

Both schedules were set to run every 15 minutes between 18:00 and 08:00, so there was no overhead during the working day.

Incidentally, so I would never have to manually fix log shipping of a 2 TB database, I set up a 01:00 [full backup] job on the primary …

Which on completion would start a [full restore] job on the secondary (to “standby”)…

The first step of the [full backup] job was to check log shipping latency, and only proceed if it were over 2 hours.
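For reference, here is a minimal sketch of that first step. It assumes the latency check reads msdb.dbo.log_shipping_monitor_primary on the primary (the real job may well have checked the secondary instead), and the database name is illustrative. Raising an error fails the step, which stops the job before the [full backup] runs.

-- stop the job unless log shipping is more than 2 hours behind
IF EXISTS (SELECT 1
           FROM msdb.dbo.log_shipping_monitor_primary
           WHERE primary_database = N'SomeDatabaseName'  -- illustrative name
             AND last_backup_date > DATEADD(HOUR, -2, GETDATE()))
    RAISERROR('Log shipping is healthy - skipping the full backup.', 16, 1);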

Normally log shipping would run all night and the secondary server would be updated to 08:00. If there were an issue, then the [full backup] / [full restore] jobs would “fix” log shipping and again the secondary would be updated to 08:00. If there were a more fundamental failure, then the [full backup] / [full restore] jobs alone would update the secondary to 01:00.

The final touches were to set the [full restore] job to notify me if it were ever executed, and to remove all notifications from the [log restore] job 🙂

Moving datafiles

Just so I can find this script when I need it …

--MoveDataFile.sql

USE master;
GO

-- 1 alter the metadata to the new path

ALTER DATABASE [SomeDatabaseName] MODIFY FILE (NAME = SomeLogicalFileName, FILENAME = 'F:\SQLData\SomeFileName.mdf');

-- 2 take database offline

ALTER DATABASE [SomeDatabaseName] SET OFFLINE;

-- 3 move the datafile via windows explorer

-- 4 bring database online

ALTER DATABASE [SomeDatabaseName] SET ONLINE;

Recovery Interval not optimal

The SQL Server database setting “Target Recovery Time” used to default to ‘0’, which could have a performance impact during a checkpoint by amplifying lazywriter contention.

Indirect checkpointing alleviates this and has been available since SQL Server 2012. Indeed, the default value was changed to ’60’ (seconds) from SQL Server 2016 onwards.
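For example, the database-level target can be set and checked like this (the database name is illustrative):

-- set a 60-second indirect checkpoint target (SQL Server 2012 onwards)
ALTER DATABASE [SomeDatabaseName] SET TARGET_RECOVERY_TIME = 60 SECONDS;

-- check the current value for every database
SELECT [name], target_recovery_time_in_seconds
FROM sys.databases;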

Splitting strings in practice.

Quite often on technical forums a request to extract part of a text string is tackled in splendid isolation. I’ve used the technique below with postcodes before, but this time wanted to split the text in an SSRS log file to isolate report names from paths.

For example the string …

/Finance/Monthly/Outgoing/Summary

… means the report called “Summary” is contained in the “Outgoing” folder within the “Monthly” folder inside the “Finance” folder in the root.

The tricky bit being that the nesting levels vary. There are reports in the root. And there are reports buried ten layers deep like Russian dolls.

Task one, getting the report names is relatively easy. You could use the REVERSE function to help grab the text up to the “first” slash using CHARINDEX.

Task two, getting the path, is where forums go deep, with strings of functions to extract all the text to the left of the last slash (where there are an unknown number of slashes).

Whereas it is often far simpler to use REPLACE to remove the already found report name from the whole string, leaving just the path.

SELECT
       Report,
       REPLACE(ItemPath,Report,'') [Path],
       ...
FROM
(
    SELECT RIGHT(ItemPath, CHARINDEX ('/', REVERSE(ItemPath))-1) Report,
           ...
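Putting the two tasks together, here is a self-contained sketch of the technique (the path value is just an example):

-- splitting a report path into report name and folder path
DECLARE @ItemPath VARCHAR(500) = '/Finance/Monthly/Outgoing/Summary';

SELECT Report,
       REPLACE(@ItemPath, Report, '') AS [Path]
FROM  (SELECT RIGHT(@ItemPath, CHARINDEX('/', REVERSE(@ItemPath)) - 1) AS Report) x;
-- returns: Report = 'Summary', Path = '/Finance/Monthly/Outgoing/'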
 

Cannot drop user as it owns a schema

Whilst removing orphaned user accounts, I came across this error …

Msg 15138, Level 16, State 1, Line 1
The database principal owns a schema in the database, and cannot be dropped.

Looking at the properties of the user within the database I noticed that they owned the db_datawriter schema. Within the properties of that schema I was able to type db_datawriter as the new owner. After which I was able to remove the user from the database.
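In T-SQL, that manual change of ownership presumably boils down to something like this (assuming the schema involved was db_datawriter):

-- hand ownership of the db_datawriter schema back to the db_datawriter role
ALTER AUTHORIZATION ON SCHEMA::[db_datawriter] TO [db_datawriter];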

Here is a scripted way to handle all orphaned users …

https://www.mssqltips.com/sqlservertip/3439/script-to-drop-all-orphaned-sql-server-database-users/

Todays failed jobs

I cobbled this together to run on multiple servers via CMS. It is simpler and more trustworthy than alerts from monitoring software.

--JobFailures.sql

SELECT
    J.name,
    H.FailedAt,
    H.Message
FROM
    msdb.dbo.sysjobs AS J
    CROSS APPLY (
        SELECT TOP(1)
            FailedAt = msdb.dbo.agent_datetime(T.run_date, T.run_time),
            Message = T.message
        FROM
            msdb.dbo.sysjobhistory AS T
        WHERE
            T.job_id = J.job_id
            AND T.run_status = 0 -- failed
            AND msdb.dbo.agent_datetime(T.run_date, T.run_time) > getdate()-1 -- in the last 24 hrs
--          AND msdb.dbo.agent_datetime(T.run_date, T.run_time) > getdate()-3 -- covering the weekend
        ORDER BY
            T.instance_id) H

Original code from here …

https://stackoverflow.com/questions/54215008/sql-agent-job-last-run-status

Quick row count

When COUNT(*) was too slow to get the total rows from a billion-row heap …

SELECT MAX(REPLACE(CONVERT(VARCHAR(20), 
       CONVERT(MONEY, rowcnt), 1), '.00', '')) [rows]
FROM sys.sysindexes
WHERE OBJECT_NAME(id) = 'TableName';

Original code from here …

https://www.i-programmer.info/programming/database/6576-sql-server-quickly-get-row-counts-for-tables-heaps-indexes-and-partitions.html
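For reference, a similar figure can be read from sys.partitions on current versions (a swapped-in alternative to the sysindexes query above, not the original approach):

-- approximate row count without scanning the heap
SELECT SUM(p.[rows]) AS [rows]
FROM sys.partitions AS p
WHERE p.[object_id] = OBJECT_ID('dbo.TableName')  -- illustrative name
  AND p.index_id IN (0, 1); -- heap or clustered index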

Send an Email with a PDF attached

There are many posts on how to automatically generate a PDF receipt and email it by leveraging SSRS. Here is how it went for me.

  1. Create a report that shows the details required.
  2. Create a parameters table.
  3. Subscribe to the report using the parameters table.
  4. Create a stored-procedure to populate and fire the subscription.

On a version of SQL Server Reporting Services (SSRS) that supports data-driven-subscriptions (DDS) I created a report called ‘SingleInvoice’ with one input parameter ‘invoice number’.

Outside of this report, in preparation for the DDS, I created a data source pointing to the local [ReportServer] database.

Within the [ReportServer] database I created a table called [dbo].[InvoiceParams]

CREATE TABLE [dbo].[InvoiceParams](
	[InvoiceNumber] [VARCHAR](100) NULL,
	[ToEmailAddress] [VARCHAR](200) NULL,
	[CCEmailAddress] [VARCHAR](200) NULL,
	[BccEmailAddress] [VARCHAR](200) NULL,
	[ReplyToEmailAddress] [VARCHAR](200) NULL,
	[IncludeReport] [BIT] NULL,
	[RenderFormat] [VARCHAR](20) NULL,
	[Priority] [VARCHAR](15) NULL,
	[Subject] [VARCHAR](150) NULL,
	[Comment] [VARCHAR](150) NULL,
	[IncludeLink] [BIT] NULL,
	[Active] [BIT] NULL,
	[DateInserted] [DATETIME] NOT NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT (NULL) FOR [CCEmailAddress]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT (NULL) FOR [BccEmailAddress]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT (NULL) FOR [ReplyToEmailAddress]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT ((1)) FOR [IncludeReport]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT ('PDF') FOR [RenderFormat]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT ('Normal') FOR [Priority]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT ((0)) FOR [IncludeLink]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT ((1)) FOR [Active]
ALTER TABLE [dbo].[InvoiceParams] ADD  DEFAULT (GETDATE()) FOR [DateInserted]
GO

To simplify the stored procedure I defined default values for all the columns I would not be dynamically populating.

Next I created a Data-driven subscription on the report with a schedule in the past – so it would never fire. For Destination I chose E-Mail.

Within the subscription I edited the dataset and chose the previously created shared data source [ReportServer].

I added this query before clicking ‘Apply’….

SELECT * 
FROM dbo.InvoiceParams
WHERE Active = 1;

Back in the New Subscription form, I completed the Delivery options like this …

Within the user database I created this stored-procedure …

/*==================================================
  Author:		Richard Smith
  Create date:	10 Jul 2020
  Description:	To Email PDF receipts - demo version
  Test: Exec [dbo].[EmailReceipts] 'INV123456789',
            'Richard.Smith@company.com'
  =================================================*/

ALTER PROC [dbo].[EmailReceipts]
    @InvoiceNumber VARCHAR(100),
    @ToEmailAddress VARCHAR(200),
    @Subject VARCHAR(150) = 'test subject',
    @Comment VARCHAR(150) = 'test body',
    @SubscriptionID NVARCHAR(260) = '987654321' 
                       /* Report = "SingleInvoice" */
AS
BEGIN
    SET NOCOUNT ON;


/* 1 Save the inputs */

    INSERT INTO [ReportServer].[dbo].[InvoiceParams] 
            (InvoiceNumber, ToEmailAddress, [Subject], Comment)
    VALUES (@InvoiceNumber, @ToEmailAddress, @Subject, @Comment);


/* 2 Trigger subscription. Which will send the report (+ inputs) to the email-subsystem-queue */

    EXEC [ReportServer].[dbo].[AddEvent] @EventType = 'TimedSubscription', @EventData = @SubscriptionID;
    WAITFOR DELAY '00:00:10';


/* 3 If no longer in queue, flag as sent */

    IF NOT EXISTS (SELECT 1 FROM [ReportServer].[dbo].[Event] WHERE EventData = @SubscriptionID)
        UPDATE [ReportServer].[dbo].[InvoiceParams] 
        SET Active = 0
        WHERE InvoiceNumber = @InvoiceNumber
        AND ToEmailAddress = @ToEmailAddress;


/* 4 Manage the log */

	DELETE FROM [ReportServer].[dbo].[InvoiceParams] WHERE DateInserted < GETDATE()-30;
	SELECT * FROM [ReportServer].[dbo].[InvoiceParams] ORDER BY DateInserted DESC;

END;
GO

When executed with an email address and invoice number this stored procedure will send an email to the email address with the PDF invoice attached.

NOTE

To find @SubscriptionID I used this …

SELECT SCH.SubscriptionID
FROM [ReportServer].[dbo].[Catalog] CAT
JOIN [ReportServer].[dbo].[ReportSchedule] SCH
  ON CAT.ItemID = SCH.ReportID
WHERE CAT.Path= '/ReportPath/ReportName';
 

Removing duplicate rows

No need to over develop this – once you realise that the DELETE command can include the TOP option.

More? Ok. Create a SELECT command that shows the issue. EG:-

SELECT ID
FROM dbo._op2
WHERE ID = 'X123456';

Add TOP to return just the unwanted rows. IE: if the above query should return 1 row but erroneously returns 2 rows use TOP(1), if it returns 5 rows use TOP(4) …

(NOTE: It does not matter which particular rows get deleted … they are duplicates remember)

SELECT TOP(1) ID
FROM dbo._op2
WHERE ID = 'X123456';

Change SELECT to DELETE and remove the column name …

DELETE TOP(1)
FROM dbo._op2
WHERE ID = 'X123456';

… and only run it once 😉

Stop notification spam

Although I have previously tackled this problem by rolling my own notifications, this time I realised that the SQL jobs that run every minute or two are of low importance, and I don’t really want emails when they fail. I will notice at some point during the working day.

Here is the code to list jobs that run frequently and are configured to send notification emails, along with the commands to remove those notifications.

/* JobNotificationEmailStop.sql */

SELECT S.[name] JobName,
    SS.freq_subday_interval [ScheduleFreq(mins)],
    'EXEC msdb.dbo.sp_update_job @job_name = N''' 
    + S.[name] 
    + ''', @notify_email_operator_name = N'''';' 
    CommandToDisableEmailNotification

  FROM msdb.dbo.sysjobs S
  JOIN msdb.dbo.sysjobschedules SJ
    ON S.job_id = SJ.job_id
  JOIN msdb.dbo.sysschedules SS
    ON SS.schedule_id = SJ.schedule_id

 WHERE SS.freq_subday_interval > 0
   AND S.notify_level_email > 0
 ORDER BY SS.freq_subday_interval;
 

Except and Intersect

Here is the simplest working example of EXCEPT and INTERSECT I can come up with …

/* Except.sql */

IF OBJECT_ID('tempdb..#t1') IS NOT NULL DROP TABLE #t1;
CREATE TABLE #t1 (c1 INT);
INSERT INTO #t1 VALUES (1), (2);

IF OBJECT_ID('tempdb..#t2') IS NOT NULL DROP TABLE #t2;
CREATE TABLE #t2 (c2 INT);
INSERT INTO #t2 VALUES (2), (3);

SELECT * FROM #t1
EXCEPT
SELECT * FROM #t2; /* = 1 */

SELECT * FROM #t1
INTERSECT
SELECT * FROM #t2; /* = 2 */

SELECT * FROM #t2
EXCEPT
SELECT * FROM #t1; /* = 3 */

SELECT * FROM #T1
EXCEPT
SELECT * FROM #T2
UNION
(SELECT * FROM #T2
EXCEPT
SELECT * FROM #T1); /* = 1 & 3 */

I use this frequently whilst refactoring to check the outputs are identical. And sometimes when syncing a MySQL table to MSSQL.

How to check SQL Jobs are actually doing something.

Looking through the Database Mail log today, I accidentally discovered a job that had been busy sending emails for I-don’t-know-how-long using an email profile that no longer worked. The output of the job was ‘success’ as the emails had been successfully queued with the Database Mail sub-system.

After finding the emails would have been empty anyway, I disabled the job. But it made me wonder if there might be other jobs that were busy doing nothing – hour after hour – day after day.

Knowing the dangers of weakening the system, I did not want to fail a job or job-step just to flag a maintenance issue.

The lowest-risk change I could think of making (to the many, legacy, unfamiliar jobs) was to leave pertinent messages in the job history log using the PRINT command. For example:-

IF EXISTS (SELECT 1 FROM SomeTable)
   BEGIN
      PRINT 'YES: there is new data';
      -- (do the meaningful stuff here)
   END
   ELSE
      PRINT 'NO: there is no new data';

Then in the future I might notice that there is Never any new data!

Capturing input parameters

Often when a stored-procedure is executed I want to know the parameters that were input, which is handy for performance tuning.

There is a mechanism to automatically save input parameters with the cached execution plans, but quite often this does not work well.

On this occasion I embedded the facility right into the procedure as a temporary measure (please, don’t talk to me about triggers brrr).

CREATE PROCEDURE [dbo].[sp_SomeName]
 @ID UNIQUEIDENTIFIER = NULL,    
 @Record VARCHAR(50) = NULL    
AS  
BEGIN

 /* log parameters for performance tuning 1 of 2 */

  IF OBJECT_ID('[SomeDatabase].[dbo].[tbl_SomeTable]') IS NULL
  	SELECT GETDATE() STIME, GETDATE() ETIME, @ID ID, @Record RC 
  	INTO [dbo].[tbl_SomeTable]
  ELSE
  	INSERT INTO [dbo].[tbl_SomeTable]
  	SELECT GETDATE(), GETDATE(), @ID, @Record

 /* log parameters for performance tuning 1 of 2 */

...

Overkill really, but at the end of the procedure I added …

 ...

/* log parameters for performance tuning 2 of 2 */
  
  UPDATE [dbo].[tbl_SomeTable]
  SET ETIME = getdate()
  WHERE ETIME = STIME;
  
 /* log parameters for performance tuning 2 of 2 */

END
GO

Note: the real procedure had many more input parameters, and I suspected they were all being passed as NULL, which would explain the poor performance.

Still, best to know what we’re optimizing for 🙂

Orphaned users

Servers have Logins, and databases have Users.

A Login and a User account are linked, sharing the same name and the same SID. Naturally, for each Login there can be many User accounts – one in each database.

Now, if you back up a database on one server and restore it onto another server, it may contain Users that do not have a corresponding Login on the second server.

Execute this command (against each user database) to list any orphaned Users

SELECT 	DP.type_desc,
        DP.SID,
        DP.[name] UserName
FROM [sys].[database_principals] DP
LEFT JOIN [sys].[server_principals] SP
       ON DP.SID = SP.SID
WHERE SP.SID IS NULL
AND DP.authentication_type_desc = 'INSTANCE';

To fix this you can either remove the orphaned user account or create a matching login. Removing a user account is tricky to script as you need to remove individual attributes first.

Creating a login depends on the type, SQL or Windows. For SQL logins paste the name, SID, and password* into this command and execute it.

USE [master]
GO
CREATE LOGIN [SomeLogin]
WITH PASSWORD = 'SomePassword',  
SID = 0xSomeSid;

(*If you do not know the SQL login password that was used on the old server then create a new one. They are not linked)

To create a Windows login use this command …

USE [master]
GO
CREATE LOGIN [SomeDomain\SomeLogin] 
FROM WINDOWS 
WITH DEFAULT_DATABASE = [master];

Set every user’s default schema to DBO where it’s blank

In this quick script I am assuming the Windows domain is called ‘DOM’ …

-- ChangeDefaultSchemaToDboWhereNull.sql

DECLARE @cmd varchar(1000) 

SET @cmd = 
'USE ? IF DB_ID(''?'') > 4 SELECT ''USE ?; ALTER USER ['' + name + ''] WITH DEFAULT_SCHEMA = [dbo]''
 FROM sys.database_principals
 WHERE default_schema_name IS NULL
 AND [name] LIKE ''DOM\%'''

IF OBJECT_ID('tempdb..#output') IS NOT NULL DROP TABLE #output
CREATE TABLE #output
(command varchar(1000))

INSERT INTO #output
EXEC sp_MSforeachdb @cmd

SELECT * 
FROM #output


SQL Snapshot worksheet

-- snapshots.sql

--1. create a snapshot

USE master;
GO
CREATE DATABASE Credit_Snap
ON  
    (
    NAME = CreditData,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\MSSQL\DATA\CreditData.ss'
    ),
    (
    NAME = CreditCatalog,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL15.SQL2019\MSSQL\DATA\CreditCatalog.ss'
    )
AS SNAPSHOT OF Credit;

--2. restore database from a snapshot

USE master;
GO
DECLARE @kill VARCHAR(8000) = '';
SELECT @kill = @kill + 'kill ' + CONVERT(VARCHAR(5), spid) + ';'
  FROM master..sysprocesses
 WHERE dbid = DB_ID('Credit')
   AND spid > 50;
EXEC (@kill);
RESTORE DATABASE Credit FROM DATABASE_SNAPSHOT = 'Credit_Snap';

--3. delete snapshot

USE master;
GO
DROP DATABASE Credit_Snap;

-- testing

SELECT *
FROM [Credit].[dbo].[member]
WHERE member_no = 22

BEGIN TRAN
UPDATE [Credit].[dbo].[member]
SET Firstname = 'DRY'
WHERE Firstname = 'CRRY'
ROLLBACK TRAN

SELECT aa.*
FROM [Credit].[dbo].[member] aa
JOIN [Credit_Snap].[dbo].[member] bb
  ON aa.member_no = bb.member_no 
WHERE aa.Firstname <> bb.Firstname;

Transactional Replication causing High CPU

In Publication Properties the setting “Subscriptions never expire …” has a surprising effect on the job “Distribution clean up: distribution”.

This job removes orphaned and replicated transactions from the Distribution database once the retention period has expired.

However, “Subscriptions never expire …” stops this job from removing orphaned transactions, whether left by a deleted subscription or orphaned for any other reason.

This results in the Distribution database growing and high CPU.

To fix this, allow subscriptions to expire.

For example, failed subscriptions may be set to be deleted if not fixed within a year. This keeps most of the robustness of “Subscriptions never expire”, whilst allowing orphaned transactions to be cleaned up.
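I have not scripted the original change, but the setting presumably maps to the publication retention property, something along these lines (a sketch; run in the publication database, the publication name is illustrative, and retention is specified in hours):

-- allow subscriptions to expire after roughly a year (8760 hours)
EXEC sp_changepublication
    @publication = N'SomePublication',
    @property    = N'retention',
    @value       = N'8760';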

Change SQL Server Collation

To change the default collation of my SQL 2019 instance on my laptop to “SQL_Latin1_General_CP1_CI_AS” I …

– Found the location of sqlservr.exe and saved it into Notepad.

– Added [cd ] in front of the path (that’s cd and a space, without the square brackets)

– Added a second line in notepad [sqlservr -m -T4022 -T3659 -s"SQL2019"
-q"SQL_Latin1_General_CP1_CI_AS"] (without the square brackets)

– Stopped the SQL Server service

– Opened a Command Prompt as Administrator

– Executed the first command (cd …)

– Executed the second line (sqlservr …)

– Rebooted.

Dropping a user that owns a schema (Error: 15138)

Manually highlight and run #1. Paste the result into #2.

-- DropFailedForUser.sql

-- The statement (that caused the error)

   USE [master] -- in this case
   GO
   DROP USER [Dom\SomeUser]
   GO

/* The Error ...

   Drop failed for user 'Dom\SomeUser'
   The database principal owns a schema in the database
   and cannot be dropped. Error: 15138
   */

-- #1. find the name of the schema

   SELECT [name]
   FROM sys.schemas s
   WHERE s.principal_id = USER_ID('Dom\SomeUser');

-- #2. transfer ownership of the schema to 'dbo'

   ALTER AUTHORIZATION ON SCHEMA::[SomeSchemaName] TO dbo;

-- repeat "The Statement"

Redgate SQL Data Compare

I love this tool for refactoring. With a result set of over 3,000 rows across 60 columns, eyeballing similar outputs in a spreadsheet just would not do.

To use this tool, I modified the original query to output INTO a new table “_output” in the current database. Near the start of the query I put an IF EXISTS/DROP statement (more commonly used with temp tables), and at the bottom of the query I selected star from _output.

After improving the original query code (and saving it with a new name), I modified it similarly to the above – outputting results INTO table “_output” but in a DIFFERENT database.

I configured “SQL Data Compare” to use every column of the first “_output” table as a “comparison key”. And can now confirm the two tables called “_output” in different databases are identical.
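For reference, the “_output” pattern described above looks roughly like this (table and column names are illustrative):

-- capture the result set of the query under test
IF OBJECT_ID('dbo._output') IS NOT NULL DROP TABLE dbo._output;

SELECT SomeColumn1,
       SomeColumn2
INTO   dbo._output
FROM   dbo.SomeSourceTable;   -- the original (or refactored) query goes here

SELECT * FROM dbo._output;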

Drop all tables that start with underscore

In an ironic twist of fate I adapted my ‘Drop all temp-tables’ script, to drop all tables beginning with an underscore.

Yes, it is ironic, because it uses a temp-table to store the working list of tables to be deleted. Whilst my original script used a real table to store a list of temp-tables.

Well … ok then

-- DropAllTablesStartingWithAnUnderscore.sql

IF OBJECT_ID('tempdb..#tables') IS NOT NULL DROP TABLE #tables
SELECT [name] 
INTO #tables
FROM sys.tables
WHERE [name] LIKE '/_%' ESCAPE '/'

DECLARE @table VARCHAR(200), @cmd VARCHAR(500)
WHILE (SELECT COUNT(*) FROM #tables) > 0
BEGIN
	SET @table = (SELECT TOP(1) [name] FROM #tables)
	SET @cmd = 'drop table ' + @table
	EXEC(@cmd)
	DELETE FROM #tables WHERE [name] = @table
END

(In case you were wondering why I created these underscore-tables in the first place. I was refactoring a stored-procedure that took over an hour to run, and had a large number of temp-tables. I wanted to persist those temp-tables for another day, and not have to start from scratch.)

Find the partner of a bracket in SSMS

Faced with a barrage of tabbed T-SQL in SSMS it can sometimes be quite difficult to see the close bracket that signifies – for example – the end of a CTE.

TIP: swipe from the inside of the bracket outwards to highlight (in grey) the bracket itself, and also its partner. EG: swipe right-to-left across an open bracket.

Running CHECKDB on TempDB

Normally I would not bother, but when CHECKDB runs on TempDB it cannot use a snapshot so has to lock all the temp tables. This script will wait for exclusive access for up to a minute.

DECLARE @outcome VARCHAR(50) = 'TempDB is currently too busy for CHECKDB', 
	@endtime DATETIME = DATEADD(mi,1,GETDATE())
WHILE GETDATE() < @endtime
BEGIN
  IF NOT EXISTS (SELECT 1 FROM sys.dm_tran_locks WHERE request_mode = 'X' AND resource_database_id = 2)
  BEGIN
    DBCC CheckDB([tempdb]) WITH NO_INFOMSGS;
    SET @outcome = 'success'
    BREAK;
  END
  WAITFOR DELAY '00:00:01';
END
SELECT @outcome

A TDE test restore

Post migration, I wanted to make sure an encrypted database on a new SQL 2014 Enterprise edition server could be restored.

I installed SQL 2014 Developer edition on a second machine and initially got the expected error …

Msg 33111, Level 16, State 3, Line 2
Cannot find server certificate with thumbprint.
Msg 3013, Level 16, State 3, Line 2
RESTORE DATABASE is terminating abnormally

1. I checked if the target server was enabled for TDE …

SELECT * FROM sys.symmetric_keys

This returned one row, I checked against the source server and that returned two rows. I concluded that the target server was NOT yet TDE enabled.

2. To enable TDE on the target …

USE Master
GO
CREATE MASTER KEY ENCRYPTION
BY PASSWORD = '[SomePwIJustMadeUp]';
GO

3. Next I copied over the two files (*.cer and *.pvk) from the backup location to the target server and installed the certificate into Master …

USE Master
GO
CREATE CERTIFICATE [NameOfTheDatabase]_Cert2
FROM FILE = '[PathAndNameOfLocalCopyOfCertFile].cer'
WITH PRIVATE KEY (FILE = N'[PathAndNameOfLocalCopyOfPvkFile].pvk',
	  DECRYPTION BY PASSWORD ='[TheSourcePwIGotFromKeePass]');
GO

4. After which I was able to restore the database as normal.

Searching every procedure for a string.

To search every stored procedure in every database on every server (ranging from SQL Server 2005 to SQL Server 2016) for the string ‘QueryTraceOn’, I first registered every server within SSMS.

Right-clicking on the registered server folder, I chose ‘new query’ and ran ‘select 1’ to exclude from my list any server with issues.

Once I had an error free list, I ran this code (which took around 40 minutes) …

-- SearchProcs4String.sql

EXEC sp_MSforeachdb 'use ?
SELECT db_name() [Database], ROUTINE_SCHEMA + ''.'' 
+ ROUTINE_NAME [Proc]
FROM INFORMATION_SCHEMA.ROUTINES WITH (NOLOCK)
WHERE ROUTINE_DEFINITION LIKE ''%QUERYTRACEON%'';'

Getting away from Dedupe jobs

Duplicate data should ideally be stopped at the front end. However, if a table already contains duplicate data you may want to bash out some code to clean it up, and schedule a SQL job to run the code regularly.

However, there are mechanisms baked right into SQL Server to manage this more efficiently (step away from scripting everything – devops 😉 )

True enough, you need to run code (once) to clean out all the current duplicates, but going forward a unique filtered index can keep them out.

For this particular project “duplicate data” meant that a CustID column should not contain a number already in that column if the Country was ‘GB’ and the PackageID was ‘5’.

Here is a simplified example of my solution …

-- ix_BlockDupCustIDs.sql

IF OBJECT_ID('tempdb..#t1') IS NOT NULL DROP TABLE #t1;
CREATE TABLE #t1 (CustID INT, CountryCode CHAR(2), PackageID INT);

CREATE UNIQUE INDEX ix_BlockDupCustIDs 
ON #t1 (CustID) 
WHERE CountryCode = 'GB' AND PackageID = 5;

INSERT INTO #t1 VALUES (1, 'GB', 5) -- yes
INSERT INTO #t1 VALUES (2, 'GB', 5) -- yes
INSERT INTO #t1 VALUES (1, 'GB', 4) -- yes
INSERT INTO #t1 VALUES (1, 'US', 5) -- yes
--INSERT INTO #t1 VALUES (1, 'GB', 5) -- no, duplicate
--INSERT INTO #t1 VALUES (2, 'GB', 5), (3, 'IR', 1) -- no for both
--UPDATE #t1 SET PackageID = 5 WHERE PackageID = 4 -- nope

SELECT * FROM #t1;

How to run SQL Server within Docker

Here is my bare-bones walk-through for setting up SQL Server Developer Edition within Docker for those of us who don’t have time to learn docker right now.

1. I navigated to the Docker website and read about Docker Desktop before installing Docker Desktop for Windows (and creating an account) from my Windows 10 Pro laptop.

During installation I chose to use Windows containers by default, rather than Linux.

2. To find SQL Server Developer Edition, I reached out to the Docker Hub online library from my laptop with this command (typed into a Command-Prompt or Powershell window)

docker search mssql

… which brought back this list …

(screenshot: results of the docker search)

The second item down on the list is from Microsoft, is in a Windows container, and is the Developer edition.

I copied the name into Notepad and composed this command around it to download, extract, and run SQL Server within a docker container …

docker run -d -p 1433:1433 -e sa_password=Pa55word# -e ACCEPT_EULA=Y --restart always -v C:\temp:c:\temp -v sqldata:c:\var\opt\mssql microsoft/mssql-server-windows-developer

Breaking down that command …

  • Docker = invoke Docker
  • Run = download, extract, and execute something from Docker Hub.
  • -d = detached mode. That is, run independent of the command line.
  • -p 1433:1433 = map external port 1433 to internal port 1433.
  • -e sa_password=Pa55word# = create an environment variable called sa_password containing the text “Pa55word#” (without the double quotes).
  • -e ACCEPT_EULA=Y = create another variable called ACCEPT_EULA containing the value “Y”.
  • --restart always = automatically start when Windows starts
  • -v c:\temp:c:\temp = map my laptop folder c:\temp to the docker folder c:\temp
  • -v sqldata:c:\var\opt\mssql = persist user databases
  • microsoft/mssql-server-windows-developer = this is the thing to “Run” (see above).

When complete (after 30 minutes or so) I connected via my regular laptop SSMS (IE: outside of the container) …

(screenshot: SSMS connection dialog)

… and added it to my registered servers as “Docker SQL Server Developer edition”

(Incidentally, that server name “dot” implies the local default instance on the default port. I could just as correctly have typed “localhost” or “localhost,1433”. Or from another laptop I would have used the IP address of the host laptop.)

Next I changed my Windows setting so the Docker icon would always be visible …

(screenshot: Windows taskbar settings)

… so that on subsequent boots, I just need to wait for the icon to stop wobbling before connecting to the database via SSMS (and no need for any pesky Docker commands).

To import a backup I moved it to my laptop c:\temp and restored from there using SSMS.

“And finally” (as the late, great, Ronnie Corbett used to say), if after a while it all goes horribly wrong, as software tends to do, you can either uninstall Docker and start this article again.

Or venture into the docker command-line to list (docker ps), stop (docker stop ~), then remove the SQL container (docker rm ~), before using the long docker run command above again.

More on pesky Docker commands soon.

Migrating to SQL Server 2017 / 2019

(for JH) When migrating to SQL Server 2017 or SQL Server 2019 the name of the game is “Risk Mitigation”. Avoid the temptation to bundle other improvements into the project. Business continuity is first, last, and everything in between.

Server Architecture should be duplicated Exactly in the new environment.

Server Rationalization – nope. Before the migration would be best, because if unexpected issues arise you are on familiar ground. But you have unused logins? databases? jobs? – double nope (Before or after, not during).

SQL Server Role based security – nope. That has been best practice for the last 20 years. This is not the time to make up for lack of database administration. 21 years will be fine.

High Availability – Avoiding downtime is important and HA helps with that. But as long as you have a mature backup system in place, this can wait until the migration is complete. Nope.

Backups – Are less clear cut. But even if the old system is a quirky, high-maintenance millstone, it is still familiar, and you don’t want to add unfamiliar complexity at this time. This should be next on your list though.

Disaster Recovery – There are new toys, like clusterless AGs, but again – nope.

Compatibility Level – Should match the old system. A few months after the deed is done, you can quietly try switching less important databases, one at a time (database administration is a perpetual business).

SSIS Packages – Should be rationalized from Packages to Projects … after the migration. Oh and if a package fails after being moved and upgraded, change the environment NOT the package.

Encryption is much easier with modern versions of SQL Server. Schedule this for a few months down the line.

Cardinality Estimator – You may have picked up on a theme here 🙂 – nope. Set it to legacy for reliable performance, for now.
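For completeness, the compatibility level and legacy cardinality estimator settings mentioned above are one-liners (the database name is illustrative, and the scoped configuration must be run in that database’s context):

-- match the compatibility level of the old instance (e.g. SQL Server 2012 = 110)
ALTER DATABASE [SomeDatabaseName] SET COMPATIBILITY_LEVEL = 110;

-- per database, run in that database (SQL Server 2016 onwards)
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;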

Log Shipping for Migration

The trouble with backing up databases in ‘old production’ then restoring them to ‘new production’ is that it takes time.

And there may be some unforeseen delay switching the front-end apps over.

Resulting in ‘old production’ being updated with new data, and ‘new production’ becoming out of date.

Log-shipping is an ideal, built-in, tool that can be used to keep ‘new production’ in sync with ‘old production’ during that phase between backup/restore and switching the front-end to ‘new production’.

This time around there was no need to script the setting up of log-shipping. There were only 13 databases, so using the GUI did not take long.

The idea is to complete the backup/restore a week or so before the switch-over and set-up log-shipping to keep the data in sync.

Then at the designated switch-over time, it takes only a moment to bring ‘new production’ on-line, as a fully up-to-date copy of ‘old production’.

Here is my crib-sheet …


Preparation

  • Primary and secondary servers should have as near as possible the same instance settings eg: max-memory, numa configuration, CLR, max dop, etc
  • Ensure user databases are using full recovery model
  • Create shared folder (on the target ideally)
  • Default backup compression is enabled (ideally)
  • Reduce VLF counts
  • Configure file share folder and connectivity
  • Disable tlog backups on primary

Preparation – Secondary Instance

  • Ensure enough space for databases
  • Matching drive letters for datafiles and logfiles (ideally)
  • Configure file share

Preparation – Monitor Instance

  • Ideally separate from primary and secondary

Security

  • The configuring login is a member of the sysadmin role
  • SQL server service account on primary needs read/write permission on backup directory (for the backup job)
  • SQL server service account on secondary needs read permission on backup share and read/write permission to secondary share (for copy / restore jobs)

Configuring log-shipping

  • Manually backup and restore database (with no recovery)
  • Use notepad to cut and paste connection strings and paths
  • Transfer logins, jobs and linked servers

The switch over

  • Manually execute the backup, copy, and restore jobs a final time
  • Manually restore each database “with recovery”
  • Detach old databases (so there is no chance of them being updated)
  • Point front-end-applications to new back-end server

Post Migration

  • Full backups (the old ones cannot be restored now)
  • Update all statistics
  • Check compatibility level
  • Execute dbcc checkdb
  • Enable plan-store (read/write) – see the sketch after this list
  • Monitor health and performance
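A sketch of the statistics and plan-store (Query Store) items above (the database name is illustrative, and sp_updatestats runs in the current database):

-- update all statistics in the current database
EXEC sp_updatestats;

-- enable the Query Store in read/write mode (SQL Server 2016 onwards)
ALTER DATABASE [SomeDatabaseName] SET QUERY_STORE = ON;
ALTER DATABASE [SomeDatabaseName] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);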

Postcode search

The issue was that some postcodes were stored with spaces and some without.

This was further complicated by the fact that some user inputs (into the “Postcode Search” SSRS report) had a space and some did not.

The root cause of the slow report was that all 90 MILLION stored postcodes were being retrieved and manipulated (to remove spaces) before being compared with the ONE input.

--- OLD CODE -----------------

DECLARE @PostCode VARCHAR(8)
SELECT Forename,
       Surname,
       AccountNumber AS CustomerNo,
       AccountStartDate,
       AddressLine2 AS Address,
       PostCode,
       DateOfBirth
FROM [dbo].[SV_Customers]
WHERE (CountryCode = 'GB')
      AND (REPLACE(Postcode, ' ', '') = @PostCode);

My insight was to manipulate just the ONE input postcode before comparing it TWICE (with and without a space) to the un-manipulated postcodes stored in the database.

The first task then, was to split the input postcode into two parts. In all formats the last 3 characters were number, letter, letter.

So after saving the last part of the postcode separately, it was easy to deduce that the first part must be the whole thing minus the last part.

--- NEW CODE ------------------------

DECLARE @PostCode VARCHAR(8)
DECLARE @pc2 CHAR(3) = RIGHT(@PostCode, 3);
DECLARE @pc1 VARCHAR(4) = RTRIM(REPLACE(@PostCode, @pc2, ''));

SELECT Forename,
       Surname,
       AccountNumber AS CustomerNo,
       AccountStartDate,
       AddressLine2 AS Address,
       Postcode,
       DateOfBirth
FROM [dbo].[SV_Customers]
WHERE CountryCode = 'GB'
      AND (PostCode = @pc1 + @pc2         -- without space
        OR PostCode = @pc1 + ' ' + @pc2); -- or with space

The final task was to write the WHERE clause as simply as possible for long term maintenance. That’s the DBA in me 🙂

Managed Backups

Managed Backups are a great feature, available in SQL 2014 and above. They allow backups to the cloud and are managed from within SSMS.

There is a GUI but it’s just for initialization; configuration all happens through T-SQL. Here is my worksheet …

-- managedBackups.sql

-- view server config

USE msdb;
SELECT * FROM smart_admin.fn_backup_instance_config();

-- view server config details

USE msdb;
SELECT db_name, is_managed_backup_enabled, retention_days, storage_url, encryption_algorithm
FROM smart_admin.fn_backup_db_config(NULL);

-- disable individual log backup

USE msdb;
EXEC smart_admin.sp_set_db_backup
@database_name = [DISC_Green_Abbey_8230003_test],
@enable_backup = 0;
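And for symmetry, enabling managed backup for a database looks something like this (a sketch; the database name, retention, and credential are illustrative, and the credential must already exist):

-- enable managed backup to Azure for one database
USE msdb;
EXEC smart_admin.sp_set_db_backup
    @database_name        = N'SomeDatabase',
    @enable_backup        = 1,
    @retention_days       = 30,
    @credential_name      = N'SomeAzureCredential',
    @encryption_algorithm = 'NO_ENCRYPTION';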

RCSI testing

Here’s some code to create a large number of ghost records.

--rcsi_testing.sql

-- create and populate a test table

CREATE TABLE dbo.demo_table
  (
      ID    INT       NOT NULL    IDENTITY (1, 1),
      C1    CHAR(100) NOT NULL
  );
  GO
   
  INSERT INTO dbo.demo_table (C1)
  SELECT TOP (1000)
         CAST(TEXT AS CHAR(100)) AS C1
  FROM   sys.messages
  WHERE  language_id = 1031;
  GO
    
  CREATE UNIQUE CLUSTERED INDEX cuix_demo_table_Id
  ON dbo.demo_table (Id);
  GO

 
-- start a 1 minute workload
 
  SET NOCOUNT ON;
  GO
  BEGIN TRANSACTION; ---------**********KEY
  GO
  	-- Insert new record into dbo.demo_table
  	DECLARE	@finish_date DATETIME2(0) = DATEADD(MINUTE, 1, GETDATE());
  	WHILE @finish_date >= GETDATE()
  	BEGIN
  		-- copy the oldest row back into dbo.demo_table
  		INSERT INTO dbo.demo_table(C1)
  		SELECT C1
  		FROM   dbo.demo_table
  		WHERE  Id = (SELECT MIN(Id) FROM dbo.demo_table);
    
  		-- wait 10 ms
  		WAITFOR DELAY '00:00:00:010';
    
  		-- then delete the oldest row (leaving ghost records inside the open transaction)
  		DELETE dbo.demo_table WHERE Id = (SELECT MIN(Id) FROM dbo.demo_table);
  	END
  ROLLBACK TRAN;
  GO
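To see the ghost records left behind by the workload, something like this should do (DETAILED mode is needed for the ghost counts):

-- ghost and version-ghost record counts for the test table
SELECT ips.index_id,
       ips.ghost_record_count,
       ips.version_ghost_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.demo_table'), NULL, NULL, 'DETAILED') AS ips;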

Making index changes to Production

These days I use a SQL Job called ‘DBA – index maint’.

Whenever I have an index change to make I paste the script into a new step, name that step with today’s date, change the ‘start step’ to that step, and schedule it to run once overnight.

This gives me a history and outcome, alongside the exact action.

SQL Safe error “Cannot connect to SQL Server instance”

This was fixed by re-installing SQL Safe. Bonus: here is a working restore command with move …

EXEC [master].[dbo].[xp_ss_restore] 
	@database = 'SomeDatabase',
	@filename = 'J:\backups\SomeDatabase.BAK', 
	@backuptype = 'Full',
	@withmove = 'SomeDatabase_data "J:\sql_data\SomeDatabase_data.mdf"',
	@withmove = 'SomeDatabase_log "J:\sql_log\SomeDatabase_log.ldf"',
	@recoverymode = 'recovery',
	@replace = '1';

Differential backup / restore using SQL Safe

I wanted to improve a backup / restore sql-job that populated a Reporting Server every night. I felt it would be quicker if it did a weekly Full backup and daily Diff backups.

The 3rd-party backup software was “SQL Safe Backup” from Idera.

I used this code on the Production Server …

--BackupDBA.sql

-- FULL Backup if Friday

	IF (SELECT DATEPART(dw, GETDATE())) = 6
	BEGIN
		EXEC [master].[dbo].[xp_ss_backup]
		@database = 'DBA',
		@filename = '\\ReportingServer\m$\DBA_Full.safe',
		@backuptype = 'Full',
		@overwrite = 1,
		@verify = 1;
	END;


-- DIFF Backup

	EXEC [master].[dbo].[xp_ss_backup]
	@database = 'DBA',
	@filename = '\\ReportingServer\m$\DBA_Diff.safe',
	@backuptype = 'Diff',
	@overwrite = 1,
	@verify = 1;

… and this code I scheduled a few hours later on the Reporting Server …

--RestoreDBA.sql

-- First, kill any connections

	DECLARE @kill VARCHAR(8000) = '';
	SELECT  @kill = @kill + 'kill ' + CONVERT(VARCHAR(5), spid) + ';'
	FROM    [master].[dbo].[sysprocesses]
	WHERE   dbid = DB_ID('DBA')
	AND spid > 50;
	EXEC (@kill);


-- Restore FULL

	EXEC [master].[dbo].[xp_ss_restore] 
	@database = 'DBA',
	@filename = 'M:\DBA_Full.safe', 
	@backuptype = 'Full',
	@recoverymode = 'norecovery', 
	@replace = '1';


-- Restore DIFF

	EXEC [master].[dbo].[xp_ss_restore] 
	@database = 'DBA',
	@filename = 'M:\DBA_Diff.safe', 
	@backuptype = 'Diff',
	@recoverymode = 'recovery', 
	@replace = '1';

Finally, I added a step on Reporting to repair the orphaned user-accounts …

USE [DBA]
GO

EXEC sp_change_users_login 'Auto_Fix', 'ReportingLogin';
EXEC sp_change_users_login 'Auto_Fix', 'LinkedServerLogin';
GO
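sp_change_users_login is deprecated; on newer versions the same repair can be done with ALTER USER (shown here with the same illustrative names):

USE [DBA]
GO
ALTER USER [ReportingLogin] WITH LOGIN = [ReportingLogin];
ALTER USER [LinkedServerLogin] WITH LOGIN = [LinkedServerLogin];
GO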

TempDB Autogrowth

I was shocked to find TempDB was the most frequently autogrown database in our environment – and changing the settings by hand soon got old.

Here’s my global script to set them to best-practice values. It will only change files that exist, and it will fail if the specified size is less than the current size (i.e. it fails safe).

--tempdb_autogrowth.sql

-- report size and growth
select name, size*8/1024 [sizeMB], growth*8/1024 [growthMB]
from master.sys.master_files
where db_name(database_id) = 'tempdb'
order by physical_name


-- resize datafile(s) to 256MB & logfile to 128MB
USE [master]
GO
begin try
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev',  SIZE = 262144KB , FILEGROWTH = 262144KB );
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev2', SIZE = 262144KB , FILEGROWTH = 262144KB );
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev3', SIZE = 262144KB , FILEGROWTH = 262144KB );
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev4', SIZE = 262144KB , FILEGROWTH = 262144KB );
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'templog',  SIZE = 131072KB , FILEGROWTH = 131072KB );
end try begin catch end catch