Migration with Log-Shipping

I had a requirement to script a repeatable SQL Server 2014 Enterprise to SQL Server 2016 Standard migration. This was to be for up to 200 databases and therefore needed to be automated.

I chose to encapsulate a blend of T-SQL and PowerShell in a non-scheduled SQL Agent job. And as we were going UP a version but DOWN an edition, I felt log-shipping would be the best option.

The idea was to run the job a week or so beforehand. Then at the time of the migration (a weekend), just the last 15 minutes of data (per database) would need to traverse the network.

The SETUP job had 9 steps :-
1. Create a control table
2. Backup (because you never know)
3. Decrypt
4. Move Logins and Fix Orphans
5. Shrink the log-file
6. Log-Shipping: Initial Full backups to the remote-server
7. Log-Shipping: Initial Restores on the remote-server, left in a restoring (NORECOVERY) state.
8. Log-Shipping: Create BACKUP jobs locally
9. Log-Shipping: Create COPY and RESTORE jobs remotely.


-- 1.ControlTable.sql

USE msdb;

IF OBJECT_ID('[msdb].[dbo].[LSList]') IS NOT NULL
    DROP TABLE [msdb].[dbo].[LSList];

CREATE TABLE [msdb].[dbo].[LSList] ([database] NVARCHAR(255) NOT NULL,
                                    backup_directory NVARCHAR(255) NOT NULL,
                                    backup_share NVARCHAR(255) NOT NULL,
                                    backup_destination_directory NVARCHAR(255) NOT NULL,
                                    pre_mig_backup INT NOT NULL,
                                    is_encrypted INT NULL,
                                    LS_backup INT NULL,
                                    start_time_offset INT NOT NULL);

INSERT INTO [msdb].[dbo].[LSList] ([database], backup_directory, backup_share,
                                   backup_destination_directory, pre_mig_backup, start_time_offset)
VALUES (N'DatabaseName1', N'h:\shipping', N'\\LocalServerName\shipping', N'\\RemoteServerName\shipping', 0, 2);

INSERT INTO [msdb].[dbo].[LSList] ([database], backup_directory, backup_share,
                                   backup_destination_directory, pre_mig_backup, start_time_offset)
VALUES (N'DatabaseName2', N'h:\shipping', N'\\LocalServerName\shipping', N'\\RemoteServerName\shipping', 0, 4);

-- populate encryption flag

UPDATE [msdb].[dbo].[LSList]
    SET is_encrypted = 1 -- yes
    WHERE [database] IN (SELECT db.[name]
                                          FROM sys.databases db
                                          JOIN sys.dm_database_encryption_keys dm
                                          ON db.database_id = dm.database_id );

-- select * FROM [msdb].[dbo].[LSList]


-- 2.PreMigBackups.sql

-- select * from [msdb].[dbo].[LSList]
-- update [msdb].[dbo].[LSList] SET pre_mig_backup = 0

DECLARE @Query  NVARCHAR(MAX),
        @dbname VARCHAR(200);

WHILE (SELECT COUNT(*) FROM [msdb].[dbo].[LSList] WHERE pre_mig_backup = 0) > 0
BEGIN
    SET @dbname = (   SELECT TOP 1 [database]
                        FROM [msdb].[dbo].[LSList]
                       WHERE pre_mig_backup = 0);

    SET @Query = N'BACKUP DATABASE [' + @dbname + N']
    TO DISK = N''H:\SQL Backup\' + @dbname + N'_' + REPLACE(CONVERT(VARCHAR(16), GETDATE(), 126), ':', '') + N'.bak'';';

    EXEC sp_executesql @Query;

    UPDATE [msdb].[dbo].[LSList]
       SET pre_mig_backup = 1
     WHERE [database] = @dbname;
END


-- 3.decrypt.sql

DECLARE @Query  NVARCHAR(MAX), @dbname VARCHAR(200);

/* is_encrypted values:
   null = no
   1 = yes
   0 = not any more */

WHILE (SELECT COUNT(*) FROM [msdb].[dbo].[LSList] WHERE is_encrypted = 1) > 0
BEGIN
    SET @dbname = (SELECT TOP 1 [database] FROM [msdb].[dbo].[LSList] WHERE is_encrypted = 1);

  /* 1 set encryption off */

    SET @Query = N'ALTER DATABASE [' + @dbname + N'] SET ENCRYPTION OFF;';
    EXEC sp_executesql @Query;

  /* 2 pause until decrypted (encryption_state 1 = unencrypted) */

    WHILE (  SELECT dm.encryption_state
                    FROM sys.databases db
                    LEFT JOIN sys.dm_database_encryption_keys dm
                       ON db.database_id = dm.database_id
                    WHERE [name] = @dbname) <> 1
        WAITFOR DELAY '00:00:10';

  /* 3 drop key */

    SET @Query = N'USE [' + @dbname + N']; DROP DATABASE ENCRYPTION KEY;';
    EXEC sp_executesql @Query;

  /* 4 log changes then move on */

    UPDATE [msdb].[dbo].[LSList]
       SET is_encrypted = 0
     WHERE [database] = @dbname;
END


-- Stop MLB

DECLARE @Query  NVARCHAR(MAX),
        @dbname VARCHAR(200);

IF OBJECT_ID('tempdb..#tlist') IS NOT NULL
    DROP TABLE #tlist;
SELECT [database]
  INTO #tlist
  FROM [msdb].[dbo].[LSList];

WHILE (SELECT COUNT(*) FROM #tlist) > 0
BEGIN
    SET @dbname = (SELECT TOP 1 [database] FROM #tlist);

    SET @Query = N'EXEC [msdb].[smart_admin].[sp_set_db_backup]
                    @database_name = N''' + @dbname + N''',
                    @enable_backup = 0'; -- off

    EXEC sp_executesql @Query;

    DELETE FROM #tlist
     WHERE [database] = @dbname;
END
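
No script for Step 5 (shrink the log-file) is shown above, but it followed the same loop-over-the-control-table pattern. A minimal sketch, where the 1 MB target size and the use of dynamic SQL to look up the logical log-file name are my assumptions:

```sql
DECLARE @Query  NVARCHAR(MAX),
        @dbname VARCHAR(200);

IF OBJECT_ID('tempdb..#slist') IS NOT NULL
    DROP TABLE #slist;
SELECT [database]
  INTO #slist
  FROM [msdb].[dbo].[LSList];

WHILE (SELECT COUNT(*) FROM #slist) > 0
BEGIN
    SET @dbname = (SELECT TOP 1 [database] FROM #slist);

    -- look up the logical log-file name, then shrink it (1 MB target is an assumption)
    SET @Query = N'USE [' + @dbname + N'];
        DECLARE @logname SYSNAME = (SELECT TOP 1 [name] FROM sys.database_files WHERE [type] = 1);
        DBCC SHRINKFILE (@logname, 1);';

    EXEC sp_executesql @Query;

    DELETE FROM #slist
     WHERE [database] = @dbname;
END
```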



-- 4. Move Logins and Fix Orphans (using the dbatools PowerShell module)

Powershell.exe "Export-DbaLogin -SqlInstance LocalServerName -Append  -Path C:\temp\LocalServerName-logins.sql"

Powershell.exe "Export-DbaUser -SqlInstance LocalServerName -Append  -Path C:\temp\LocalServerName-users.sql"

Powershell.exe "Copy-DbaLogin -Source LocalServerName -Destination RemoteServerName -ExcludeSystemLogins"


Powershell.exe "Repair-DbaOrphanUser -SqlInstance RemoteServerName"
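
Steps 6 to 9 used the standard log-shipping stored-procedures, again driven from the control table. As a flavour, here is a much-reduced sketch of the Step 8 part (create the BACKUP jobs locally); the job name, retention period and threshold values are assumptions, and the real scripts passed more parameters:

```sql
DECLARE @db SYSNAME, @dir NVARCHAR(255), @share NVARCHAR(255);

DECLARE ls CURSOR FOR
    SELECT [database], backup_directory, backup_share
      FROM [msdb].[dbo].[LSList];
OPEN ls;
FETCH NEXT FROM ls INTO @db, @dir, @share;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC [msdb].dbo.sp_add_log_shipping_primary_database
         @database                = @db,
         @backup_directory        = @dir,
         @backup_share            = @share,
         @backup_job_name         = N'LSBackup',  -- assumed job name
         @backup_retention_period = 4320,         -- minutes (3 days), assumed
         @backup_threshold        = 60;           -- minutes, assumed
    FETCH NEXT FROM ls INTO @db, @dir, @share;
END
CLOSE ls;
DEALLOCATE ls;
```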

Caching result sets

(For Sam) I wanted to performance tune a stored-procedure that was just one big SELECT statement (used to return all current Orders).

The code was just about as optimal as it could get, and returned around 8,000 rows each time, taking about 35 seconds to do so.

I saved the output over a few consecutive days and noticed (crucially) that most of the rows were the same each day.

My big-idea then, was to pre-cache (and pre-format) the results on “Day One”, and just append new rows to that going forward.

The final working stored-procedure contained 5 labeled areas:-

 - (1. Create and fill a cache-table if there isn't one)
 - 2. Save a thin version of the current data to a temp-table
 - 3. Add only NEW data to the cache-table
 - 4. Remove DELETED data from the cache-table
 - 5. Output the cache-table

1. If the cache-table didn't exist, this step ran the original query, saving the results INTO a cache-table. Mostly this step was not executed, but I wanted the stored-procedure to be complete.

There was a DateTime column in the results set that was guaranteed to be unique. I made this the primary-key of the cache-table.

2. In a separate window, I stripped back the original query until just the DateTime column was returned. Unnecessarily, I added code to the top to delete any temp-table called “#thin” if it already existed (my habit). Then I added code to save the stripped back query results INTO a temp-table … called “#thin”.

This step would run every time, and the output could be compared with the old data (in the cache-table) to add any new rows, and knock off any old ones.

3. The original query was executed but with a WHERE clause added, like WHERE prod.DateTime NOT IN (SELECT DateTime FROM cache.table). The 2 or 3 (fat) rows returned from this step were appended to the cache-table.

4. A simple DELETE removed any rows from the cache-table where the DateTime was not in the #thin table.

5. The cache-table was SELECT'ed in full as the stored-procedure's output, which typically ran in around 7 seconds, despite the extra round-trip to the database.

Testing. After a day or two, compare the old and new result sets in spreadsheet tabs and adjust indexing accordingly. (As always, full responsibility lies with the implementer.)

Addendum. To help performance I later changed Step-3 from …

WHERE prod.DateTime not in (SELECT DateTime FROM cache.table)

… to …

LEFT JOIN cache.table cac ON cac.DateTime = prod.DateTime
WHERE cac.DateTime IS NULL
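
The five steps above could be sketched roughly like this. All object names here are hypothetical: dbo.OrderCache stands in for the cache-table, and dbo.CurrentOrders stands in for the original big SELECT:

```sql
CREATE PROCEDURE dbo.GetCurrentOrders
AS
BEGIN
    SET NOCOUNT ON;

    /* 1. Create and fill a cache-table if there isn't one */
    IF OBJECT_ID('dbo.OrderCache') IS NULL
    BEGIN
        SELECT *
          INTO dbo.OrderCache
          FROM dbo.CurrentOrders;      -- stands in for the original big SELECT

        -- the DateTime column was guaranteed unique, so it became the primary-key
        ALTER TABLE dbo.OrderCache
          ADD CONSTRAINT PK_OrderCache PRIMARY KEY ([DateTime]);
    END

    /* 2. Save a thin version of the current data to a temp-table */
    IF OBJECT_ID('tempdb..#thin') IS NOT NULL
        DROP TABLE #thin;
    SELECT [DateTime]
      INTO #thin
      FROM dbo.CurrentOrders;

    /* 3. Add only NEW data to the cache-table (the faster LEFT JOIN form) */
    INSERT INTO dbo.OrderCache
    SELECT prod.*
      FROM dbo.CurrentOrders prod
      LEFT JOIN dbo.OrderCache cac ON cac.[DateTime] = prod.[DateTime]
     WHERE cac.[DateTime] IS NULL;

    /* 4. Remove DELETED data from the cache-table */
    DELETE FROM dbo.OrderCache
     WHERE [DateTime] NOT IN (SELECT [DateTime] FROM #thin);

    /* 5. Output the cache-table */
    SELECT * FROM dbo.OrderCache;
END
```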

Log-shipping Restore error: empty file

I noticed a log-shipping RESTORE job had started failing. Looking back through the job history I found the last two “good” executions contained errors …

*** Error: Could not apply log backup file ‘SomePath\SomeFile.trn’ to secondary database. The volume is empty.

I looked at the path\file specified and found the file was zero size.

I looked on the network-share where the files are backed-up-to \ copied-from, and found the same file was NOT zero size.

I manually copied the file from the network-share to the destination folder (on the DR server), overwriting the empty file.

Log-shipping recovered over the next few hours.