Power BI is the new Excel. And, like Excel, few people will be paid to use it full-time, but most office workers will use it routinely.
Azure Backups and Restores
Backups
“Azure SQL Database” includes automatic (compressed) backups. The retention period is 7 days for Basic and 35 days for Standard and Premium. Backups are geo-replicated. The backup schedule is a weekly full, a differential every hour, and transaction-log backups every 5 minutes.
All backups support point-in-time recovery, with a 12-hour RTO and a 5-minute RPO for in-region backups.
The maximum backup storage size is twice the tier's database size limit (EG: Standard = 500GB of backup space). If exceeded, you can a) request a reduced retention period, or b) pay extra at the standard read rate.
Long-term backup archiving can be done using a) a manual export to a BACPAC on Azure Blob Storage, or b) Long-Term Retention to an Azure Backup Vault, for up to 10 years.
Restores
Databases can be restored to any point within the retention period. Note however that dropping a Server deletes all of its databases and their backups.
Local restores can be to a point-in-time, and the restored database can be renamed, but an existing database cannot be overwritten.
Install Azure SQL Server and Databases
Portal click-by-click
1. Create SQL Server
– Home / SQL Servers / Add
– Subscription: “FT”
– Resource Group: “(New) SD-resource”
– Server Name “SD-server”
– Region: “(Europe) UK South” (London)
– Admin login “SD-login” / pw
– Allow Azure Services to access this server: “No”
– Enable advanced security: “Not now”
(wait 2 minutes)
2. Create SQL Database
– Home / SQL Databases / Add
– Database name: “SD-db”
– Elastic pool?: No
– Resources: “5 DTU’s (Basic)”
– Data source: “None” (blank database)
(wait 2 minutes)
– Home / SQL Databases
– Database Features
– TDE: “Off”
3. Server level Connectivity
– Home / SQL Servers / Server name: “SD-server.database.windows.net” (cut)
– SSMS (paste) / SQL Server Authentication / SD-login / pw
3b. Messages
– “The requested name is valid, but no data of the requested type was found” = the server name was wrong.
– “New Firewall Rule” = your IP address is not on the server firewall list. Do you want to add it? = Yes
4. (optional) database connectivity
a. exec sp_set_database_firewall_rule dbfwrule1, '86.168.15.119', '86.168.15.119';
b. select * from sys.database_firewall_rules
c. exec sp_delete_database_firewall_rule dbfwrule1
Notes
– Azure SQL DB offers 3 service tiers: Basic, Standard, and Premium.
– These tiers define performance and recovery.
– Performance is measured in DTUs.
– Multiple DBs can share resources through Elastic Database Pools.
Coding style
Initially, code should be written for readability, so that whoever looks at it next can quickly see what you were trying to do 🙂
Subsequently, if it becomes an issue, code can be rewritten for better performance.
IE: Don’t initially code for performance.
My cloud definitions
IAAS – Renting a VM and outsourcing your sys-admin.
PAAS – Renting a loaded server and outsourcing your sys-admin and production-dba.
SAAS – Renting a loaded server and outsourcing your sys-admin, production-dba, and help-desk roles.
Postcode search
The issue was that some postcodes were stored with spaces and some without.
This was further complicated by the fact that some user inputs (into the “Postcode Search” SSRS Report) had a space and some did not.
The root cause of the slow report was that all 90 MILLION stored postcodes were being retrieved and manipulated (to remove spaces) before being compared with the ONE input.
--- OLD CODE -----------------
DECLARE @PostCode VARCHAR(8)

SELECT Forename,
       Surname,
       AccountNumber AS CustomerNo,
       AccountStartDate,
       AddressLine2 AS Address,
       PostCode,
       DateOfBirth
FROM [dbo].[SV_Customers]
WHERE (CountryCode = 'GB')
  AND (REPLACE(Postcode, ' ', '') = @PostCode);
My insight was to manipulate just the ONE input postcode before comparing it TWICE (with and without a space) to the un-manipulated postcodes stored in the database.
The first task then, was to split the input postcode into two parts. In all formats the last 3 characters were number, letter, letter.
So after saving the last part of the postcode separately, it was easy to deduce that the first part must be the whole thing minus the last part.
--- NEW CODE ------------------------
DECLARE @PostCode VARCHAR(8)
DECLARE @pc2 CHAR(3) = RIGHT(@PostCode, 3);
DECLARE @pc1 VARCHAR(4) = RTRIM(REPLACE(@PostCode, @pc2, ''));

SELECT Forename,
       Surname,
       AccountNumber AS CustomerNo,
       AccountStartDate,
       AddressLine2 AS Address,
       Postcode,
       DateOfBirth
FROM [dbo].[SV_Customers]
WHERE CountryCode = 'GB'
  AND (PostCode = @pc1 + @pc2            -- without space
       OR PostCode = @pc1 + ' ' + @pc2); -- or with space
The final task was to write the WHERE clause as simply as possible for long term maintenance. That’s the DBA in me 🙂
T-SQL Window Functions
“Window” sounds a bit like the singular of Microsoft’s Operating System, huh?
But no, imagine that each cell in a spreadsheet has two little glass “Windows”, one in the ceiling of its cell and one in the floor.
Then the occupant of cell C3 could look up at C2 and wave, or down at C4 and blow a raspberry.
But there’s more, C3 can now look up and down past C2 and C4 at ALL the values in the C column.
Now instead of cells in a spreadsheet imagine cells in a database table.
create table #t1 (c int)
insert into #t1 values (10), (20), (30), (40)

select * from #t1

select *,
       lag(c, 1) over(order by c) [Waving up],
       lead(c, 1) over(order by c) [Raspberrying down],
       SUM(c) OVER() [Sum of c]
from #t1

drop table #t1
Stored Procedure Template
I always try to adopt the local standards. But where I’m setting one, here’s my Stored Procedure starting template …
-- NewProcTemplate.sql
USE DemoDW
GO

/* ========================================================================
Author: Richard (RbS)
Date: 19 July 2019
Usage: To list SalesPeople by Store.
Example: Exec [DemoDW].[dbo].[SPU_DimSalespersonGetByStore] @Store = '1'
Safe4Prod: NO! {by default}
============================================================================ */

ALTER proc SPU_DimSalespersonGetByStore -- SPU_{Object}{Action}
    @Store NVARCHAR(50)
AS
BEGIN;
    SET NOCOUNT ON;

    SELECT StoreName, SalespersonName
    FROM [DemoDW].[dbo].[DimSalesperson]
    WHERE StoreName = @Store;

END
GO
NOTE: I do not develop within this template. To stay open-minded I always start development from a simple select-star statement. Then, when that's all good, it's pasted in here, parameterized, tested, and adjusted (thanks Doug).
Migration with Log-Shipping
I had a requirement to script a repeatable SQL 2014 Enterprise to SQL 2016 Standard migration. This was to be for up to 200 databases and therefore needed to be automated.
I chose to encapsulate a blend of TSQL and Powershell in a non-scheduled SQL Job. And as we were going UP a version but DOWN an edition, I felt log-shipping would be the best option.
The idea was to run the job a week or so beforehand. Then at the time of the migration (a weekend), just 15 minutes of data (per database) would need to traverse the network.
The SETUP job had 9 steps:
1. Create a control table
2. Backup (because you never know)
3. Decrypt
4. Move Logins and Fix Orphans
5. Shrink the log-file
6. Log-Ship: Initial Full backup to remote-server
7. Log-Shipping: Initial Restores on Remote in recovery mode.
8. Log-Shipping: Create BACKUP jobs locally
9. Log-Shipping: Create COPY and RESTORE jobs remotely.
Step-1
-- 1.ControlTable.sql
USE msdb;
GO

IF OBJECT_ID('[msdb].[dbo].[LSList]') IS NOT NULL
    DROP TABLE [msdb].[dbo].[LSList];
GO

CREATE TABLE [msdb].[dbo].[LSList] (
    [database] NVARCHAR(255) NOT NULL,
    backup_directory NVARCHAR(255) NOT NULL,
    backup_share NVARCHAR(255) NOT NULL,
    backup_destination_directory NVARCHAR(255) NOT NULL,
    pre_mig_backup INT NOT NULL,
    is_encrypted INT NULL,
    LS_backup INT NULL,
    start_time_offset INT NOT NULL);

INSERT INTO [msdb].[dbo].[LSList]
    ([database], backup_directory, backup_share, backup_destination_directory, pre_mig_backup, start_time_offset)
VALUES
    (N'DatabaseName1', N'h:\shipping', N'\\LocalServerName\shipping', N'\\RemoteServerName\shipping', 0, 2);

INSERT INTO [msdb].[dbo].[LSList]
    ([database], backup_directory, backup_share, backup_destination_directory, pre_mig_backup, start_time_offset)
VALUES
    (N'DatabaseName2', N'h:\shipping', N'\\LocalServerName\shipping', N'\\RemoteServerName\shipping', 0, 4);

-- populate encryption flag
UPDATE [msdb].[dbo].[LSList]
SET is_encrypted = 1 -- yes
WHERE [database] IN (SELECT db.[name]
                     FROM sys.databases db
                     JOIN sys.dm_database_encryption_keys dm
                       ON db.database_id = dm.database_id);

-- select * FROM [msdb].[dbo].[LSList]
Step-2
-- 2.PreMigBackups.sql
-- select * from [msdb].[dbo].[LSList]
-- update [msdb].[dbo].[LSList] SET pre_mig_backup = 0

DECLARE @Query NVARCHAR(MAX),
        @dbname VARCHAR(200);

WHILE (SELECT COUNT(*) FROM [msdb].[dbo].[LSList] WHERE pre_mig_backup = 0) > 0
BEGIN
    SET @dbname = (SELECT TOP 1 [database] FROM [msdb].[dbo].[LSList] WHERE pre_mig_backup = 0);

    SET @Query = N'BACKUP DATABASE [' + @dbname + '] TO DISK = N''H:\SQL Backup\' + @dbname + '_' +
                 replace(convert(varchar(16), getdate(), 126), ':', '') +
                 '.bak'' WITH COPY_ONLY, NOFORMAT, INIT, STATS = 10';
    EXEC sp_executesql @Query;

    UPDATE [msdb].[dbo].[LSList] SET pre_mig_backup = 1 WHERE [database] = @dbname;
END;
Step-3
-- 3.decrypt.sql
DECLARE @Query NVARCHAR(MAX),
        @dbname VARCHAR(200);

/* is_encrypted: null = no, 1 = yes, 0 = not any more */
WHILE (SELECT COUNT(*) FROM [msdb].[dbo].[LSList] WHERE is_encrypted = 1) > 0
BEGIN
    SET @dbname = (SELECT TOP 1 [database] FROM [msdb].[dbo].[LSList] WHERE is_encrypted = 1);

    /* 1 set encryption off */
    SET @Query = N'ALTER DATABASE [' + @dbname + N'] SET ENCRYPTION OFF;';
    EXEC sp_executesql @Query;

    /* 2 pause until decrypted */
    WHILE (SELECT dm.encryption_state
           FROM sys.databases db
           LEFT JOIN sys.dm_database_encryption_keys dm
             ON db.database_id = dm.database_id
           WHERE [name] = @dbname) <> 1
    BEGIN
        WAITFOR DELAY '00:00:10';
    END;

    /* 3 drop key */
    SET @Query = 'USE [' + @dbname + ']; DROP DATABASE ENCRYPTION KEY';
    EXEC sp_executesql @Query;

    /* 4 log changes then move on */
    UPDATE [msdb].[dbo].[LSList] SET is_encrypted = 0 WHERE [database] = @dbname;
END;

-- Stop MLB (managed backups)
-- DECLARE @Query NVARCHAR(MAX),
--         @dbname VARCHAR(200);
IF OBJECT_ID('tempdb..#tlist') IS NOT NULL DROP TABLE #tlist;

SELECT [database] INTO #tlist FROM [msdb].[dbo].[LSList];

WHILE (SELECT COUNT(*) FROM #tlist) > 0
BEGIN
    SET @dbname = (SELECT TOP 1 [database] FROM #tlist);

    SET @Query = N'EXEC [msdb].[smart_admin].[sp_set_db_backup]
                   @database_name = [' + @dbname + N'],
                   @enable_backup = 0'; -- off
    EXEC sp_executesql @Query;

    DELETE FROM #tlist WHERE [database] = @dbname;
END;
Step-4
Powershell.exe "Export-DbaLogin -SqlInstance LocalServerName -Append -Path C:\temp\LocalServerName-logins.sql"
Powershell.exe "Export-DbaUser -SqlInstance LocalServerName -Append -Path C:\temp\LocalServerName-users.sql"
Powershell.exe "Copy-DbaLogin -Source LocalServerName -Destination RemoteServerName -ExcludeSystemLogins"
Step-5
Powershell.exe "Repair-DbaOrphanUser -SqlInstance RemoteServerName"
Managed Backups
Managed Backups were a great new feature with SQL 2014 and above. They allow backups to the cloud and are managed from within SSMS.
There is a GUI but it's just for initialization. Configuration all happens through TSQL. Here is my worksheet …
-- managedBackups.sql

-- view server config
USE msdb;
SELECT * FROM smart_admin.fn_backup_instance_config();

-- view server config details
USE msdb;
SELECT db_name, is_managed_backup_enabled, retention_days, storage_url, encryption_algorithm
FROM smart_admin.fn_backup_db_config(NULL);

-- disable individual log backup
USE msdb;
EXEC smart_admin.sp_set_db_backup
     @database_name = [DISC_Green_Abbey_8230003_test],
     @enable_backup = 0;
Caching result sets
(For Sam) I wanted to performance tune a stored-procedure that was just one big SELECT statement (used to return all current Orders).
The code was just about as optimum as it could get, and returned around 8,000 rows each time, taking about 35 seconds to do so.
I saved the output over a few consecutive days and noticed (crucially) that most of the rows were the same each day.
My big-idea then, was to pre-cache (and pre-format) the results on “Day One”, and just append new rows to that going forward.
The final working stored-procedure contained 5 labelled areas:
- (1. Create and fill a cache-table if there isn't one)
- 2. Save a thin version of the current data to a temp-table
- 3. Add only NEW data to the cache-table
- 4. Remove DELETED data from the cache-table
- 5. Output the cache-table
1. If the cache-table didn’t exist, run the original query, but saving INTO a cache-table. Mostly this step was not executed, but I wanted the stored-procedure to be complete.
There was a DateTime column in the results set that was guaranteed to be unique. I made this the primary-key of the cache-table.
2. In a separate window, I stripped back the original query until just the DateTime column was returned. Out of habit, I added code to the top to delete any temp-table called “#thin” if it already existed. Then I added code to save the stripped-back query results INTO a temp-table … called “#thin”.
This step would run every time, and the output could be compared with the old data (in the cache-table) to add any new rows, and knock off any old ones.
3. The original query was executed but with a WHERE clause added, like WHERE prod.DateTime NOT IN (SELECT DateTime FROM cache.table). The 2 or 3 (fat) rows returned from this step were appended to the cache-table.
4. A simple DELETE removed any rows from the cache-table where the DateTime was not in the #thin table.
5. The cache-table was SELECTed in full as the stored-procedure's output, which typically ran in around 7 seconds, despite the extra round-trip to the database.
Testing. After a day or two compare the old / new result sets in spreadsheet tabs and adjust indexing accordingly (As always, full responsibility lies with the implementer).
Addendum. To help performance I later changed Step-3 from …
WHERE prod.DateTime NOT IN (SELECT DateTime FROM cache.table)
… to …
LEFT JOIN cache.table cac ON cac.DateTime = prod.DateTime WHERE cac.DateTime IS NULL
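Pulling the five steps together, here is a minimal sketch of the final shape. All object names (GetCurrentOrdersCached, CacheOrders, CurrentOrdersSource) are hypothetical stand-ins for the real ones, and the original big SELECT is represented by a view.

-- CachedOrders sketch (hypothetical names throughout)
CREATE PROCEDURE dbo.GetCurrentOrdersCached
AS
BEGIN
    SET NOCOUNT ON;

    /* 1. Create and fill the cache-table if there isn't one (DateTime was unique, so it became the key) */
    IF OBJECT_ID('dbo.CacheOrders') IS NULL
        SELECT * INTO dbo.CacheOrders FROM dbo.CurrentOrdersSource;

    /* 2. Save a thin version of the current data to a temp-table */
    IF OBJECT_ID('tempdb..#thin') IS NOT NULL DROP TABLE #thin;
    SELECT [DateTime] INTO #thin FROM dbo.CurrentOrdersSource;

    /* 3. Add only NEW data to the cache-table (the addendum's LEFT JOIN form) */
    INSERT INTO dbo.CacheOrders
    SELECT prod.*
    FROM dbo.CurrentOrdersSource prod
    LEFT JOIN dbo.CacheOrders cac ON cac.[DateTime] = prod.[DateTime]
    WHERE cac.[DateTime] IS NULL;

    /* 4. Remove DELETED data from the cache-table */
    DELETE FROM dbo.CacheOrders
    WHERE [DateTime] NOT IN (SELECT [DateTime] FROM #thin);

    /* 5. Output the cache-table */
    SELECT * FROM dbo.CacheOrders;
END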
ORDER BY CASE
In TSQL I recently discovered how to use a CASE expression in the ORDER BY clause to sort results in custom ways.
For example, to order countries with the UK and USA at the top then the rest alphabetically would in the past have caused me to either generate a calculated ‘CountrySort’ column or UNION two queries.
Now I can do this …
ORDER BY CASE WHEN countryid = 1 THEN 'AAA' WHEN countryid = 23 THEN 'AAB' ELSE countryname END
Which translates as …
‘Order by countryname,
having first replaced the countryname with “AAA” where the countryid is 1,
and with “AAB” where it is 23.’
Column Max Length
Adapted from my “Spreadsheet sizer” script, this one helped me move sensibly away from pesky varchar(max) columns.
-- ColumnMaxLength.sql
DECLARE @TableName VARCHAR(255) = 'customers' --<< input
DECLARE @SchemaName VARCHAR(255) = 'dbo'
DECLARE @sqlcmd VARCHAR(MAX)

select @sqlcmd = stuff((
    SELECT ' union all select ' + QUOTENAME(table_schema, '''') + ' [Schema], '
         + QUOTENAME(TABLE_NAME, '''') + ' [Table], '
         + quotename(column_name, '''') + ' [Column], max(datalength('
         + quotename(column_name) + ')) MaxLength from '
         + quotename(table_schema) + '.' + quotename(table_name)
    from information_schema.columns
    where 1=1
      AND table_name = @TableName
      AND table_schema = @SchemaName
    order by column_name
    for xml path(''), type).value('.', 'varchar(max)'), 1, 11, '')

exec(@sqlcmd)
Comparing Stored-Procedures
Had a bit of a problem today with the re-write project.
I had been checking new stored-procedures in the DEV database, and (if good) pasting them into the WEB database.
The issue was that some DEV stored-procedures that I had already checked-in to WEB had been modified again.
Rather than trying to enforce version-control (mmm), or downloading Redgate's SQL Compare, I modified my “Whats New” routine to compare the modify-dates between the DEV and WEB databases.
-- CompareSP.sql
SELECT [dev].[type_desc],
       (SELECT [name]
        FROM [companydev].[sys].[schemas]
        WHERE [schema_id] = [dev].[schema_id]) [schema],
       CASE [dev].[parent_object_id]
            WHEN '0' THEN [dev].[name]
            ELSE OBJECT_NAME([dev].[parent_object_id]) + '.' + [dev].[name]
       END [object_name],
       [dev].[create_date],
       [dev].[modify_date], -- or create-date if there isn't one
       '' v,
       [web].[modify_date] web_modify_date,
       DATEDIFF(MINUTE, [dev].[modify_date], [web].[modify_date]) mod_diff
FROM [companydev].[sys].[objects] dev
JOIN [companyweb].[sys].[objects] web
  ON [dev].[name] = [web].[name]
WHERE [dev].[is_ms_shipped] = 0 -- exclude system-objects
  AND [dev].[type] = 'P' -- just stored-procedures
  --AND [dev].[modify_date] > '21 nov 2018'
ORDER BY [dev].[modify_date] DESC;
Adding a NOT NULL column to an existing table
-- AddingNotNullColumnToExistingTable.sql

-- 1. Add new column to the old table, as NULL for now
ALTER TABLE [dbo].[TableName] ADD [ColumnName] INT NULL

-- 2. Set the default to zero for new rows
ALTER TABLE [dbo].[TableName] ADD CONSTRAINT [DF_TableName_ColumnName] DEFAULT(0) FOR [ColumnName]

-- 3. Change all existing null values to zeros
UPDATE [dbo].[TableName] SET [ColumnName] = 0 WHERE [ColumnName] IS NULL

-- 4. Change column from NULL to NOT NULL
ALTER TABLE [dbo].[TableName] ALTER COLUMN [ColumnName] INT NOT NULL

-- Undo (while testing)
ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [DF_TableName_ColumnName]
ALTER TABLE [dbo].[TableName] DROP COLUMN [ColumnName]
Calendar UK
Must be that time of year again :). Adapted from Aaron’s beautiful US calendar script …
-- CalendarUK.sql
use [Dev];

-- initialize period
DECLARE @StartDate DATE = '20000101',
        @NumberOfYears INT = 30;

-- prevent set or regional settings from interfering with
-- interpretation of dates / literals
SET DATEFIRST 7;    -- sunday is the first day of week
SET DATEFORMAT mdy; -- thats month/day/year
SET LANGUAGE US_ENGLISH;

DECLARE @CutoffDate DATE = DATEADD(YEAR, @NumberOfYears, @StartDate);

-- 1. this is just a holding table for intermediate calculations:
IF OBJECT_ID('tempdb..#cal') IS NOT NULL DROP TABLE #cal

CREATE TABLE #cal (
    [date] DATE PRIMARY KEY,
    [day] AS DATEPART(DAY, [date]),
    [month] AS DATEPART(MONTH, [date]),
    FirstOfMonth AS CONVERT(DATE, DATEADD(MONTH, DATEDIFF(MONTH, 0, [date]), 0)),
    [MonthName] AS DATENAME(MONTH, [date]),
    [week] AS DATEPART(WEEK, [date]),
    [ISOweek] AS DATEPART(ISO_WEEK, [date]),
    [DayOfWeek] AS DATEPART(WEEKDAY, [date]),
    [quarter] AS DATEPART(QUARTER, [date]),
    [year] AS DATEPART(YEAR, [date]),
    FirstOfYear AS CONVERT(DATE, DATEADD(YEAR, DATEDIFF(YEAR, 0, [date]), 0)),
    Style112 AS CONVERT(CHAR(8), [date], 112),
    Style101 AS CONVERT(CHAR(10), [date], 101)
);

-- use the catalog views to generate as many rows as we need
INSERT #cal([date])
SELECT d
FROM (
    SELECT d = DATEADD(DAY, rn - 1, @StartDate)
    FROM (
        SELECT TOP (DATEDIFF(DAY, @StartDate, @CutoffDate))
               rn = ROW_NUMBER() OVER (ORDER BY s1.[object_id])
        FROM sys.all_objects AS s1
        CROSS JOIN sys.all_objects AS s2
        ORDER BY s1.[object_id]
    ) AS x
) AS y;

-- 2. create the real table
IF OBJECT_ID('dbo.CalendarUK') IS NOT NULL DROP TABLE dbo.CalendarUK

CREATE TABLE [dbo].[CalendarUK] (
    DateKey INT NOT NULL PRIMARY KEY,
    [Date] DATE NOT NULL,
    [Day] TINYINT NOT NULL,
    DaySuffix CHAR(2) NOT NULL,
    [Weekday] TINYINT NOT NULL,
    WeekDayName VARCHAR(10) NOT NULL,
    IsWeekend BIT NOT NULL,
    IsHoliday BIT NOT NULL,
    HolidayText VARCHAR(64) SPARSE,
    DOWInMonth TINYINT NOT NULL,
    [DayOfYear] SMALLINT NOT NULL,
    WeekOfMonth TINYINT NOT NULL,
    WeekOfYear TINYINT NOT NULL,
    ISOWeekOfYear TINYINT NOT NULL,
    [Month] TINYINT NOT NULL,
    [MonthName] VARCHAR(10) NOT NULL,
    [Quarter] TINYINT NOT NULL,
    QuarterName VARCHAR(6) NOT NULL,
    [Year] INT NOT NULL,
    MMYYYY CHAR(6) NOT NULL,
    MonthYear CHAR(7) NOT NULL,
    FirstDayOfMonth DATE NOT NULL,
    LastDayOfMonth DATE NOT NULL,
    FirstDayOfQuarter DATE NOT NULL,
    LastDayOfQuarter DATE NOT NULL,
    FirstDayOfYear DATE NOT NULL,
    LastDayOfYear DATE NOT NULL,
    FirstDayOfNextMonth DATE NOT NULL,
    FirstDayOfNextYear DATE NOT NULL
);
GO

-- 3. populate the real table from the temp table
INSERT dbo.CalendarUK WITH (TABLOCKX)
SELECT DateKey = CONVERT(INT, Style112),
       [Date] = [date],
       [Day] = CONVERT(TINYINT, [day]),
       DaySuffix = CONVERT(CHAR(2), CASE WHEN [day] / 10 = 1 THEN 'th'
                                         ELSE CASE RIGHT([day], 1)
                                              WHEN '1' THEN 'st'
                                              WHEN '2' THEN 'nd'
                                              WHEN '3' THEN 'rd'
                                              ELSE 'th' END
                                    END),
       [Weekday] = CONVERT(TINYINT, [DayOfWeek]),
       [WeekDayName] = CONVERT(VARCHAR(10), DATENAME(WEEKDAY, [date])),
       [IsWeekend] = CONVERT(BIT, CASE WHEN [DayOfWeek] IN (1,7) THEN 1 ELSE 0 END),
       [IsHoliday] = CONVERT(BIT, 0),
       HolidayText = CONVERT(VARCHAR(64), NULL),
       [DOWInMonth] = CONVERT(TINYINT, ROW_NUMBER() OVER (PARTITION BY FirstOfMonth, [DayOfWeek] ORDER BY [date])),
       [DayOfYear] = CONVERT(SMALLINT, DATEPART(DAYOFYEAR, [date])),
       WeekOfMonth = CONVERT(TINYINT, DENSE_RANK() OVER (PARTITION BY [year], [month] ORDER BY [week])),
       WeekOfYear = CONVERT(TINYINT, [week]),
       ISOWeekOfYear = CONVERT(TINYINT, ISOWeek),
       [Month] = CONVERT(TINYINT, [month]),
       [MonthName] = CONVERT(VARCHAR(10), [MonthName]),
       [Quarter] = CONVERT(TINYINT, [quarter]),
       QuarterName = CONVERT(VARCHAR(6), CASE [quarter] WHEN 1 THEN 'First'
                                                        WHEN 2 THEN 'Second'
                                                        WHEN 3 THEN 'Third'
                                                        WHEN 4 THEN 'Fourth' END),
       [Year] = [year],
       MMYYYY = CONVERT(CHAR(6), LEFT(Style101, 2) + LEFT(Style112, 4)),
       MonthYear = CONVERT(CHAR(7), LEFT([MonthName], 3) + LEFT(Style112, 4)),
       FirstDayOfMonth = FirstOfMonth,
       LastDayOfMonth = MAX([date]) OVER (PARTITION BY [year], [month]),
       FirstDayOfQuarter = MIN([date]) OVER (PARTITION BY [year], [quarter]),
       LastDayOfQuarter = MAX([date]) OVER (PARTITION BY [year], [quarter]),
       FirstDayOfYear = FirstOfYear,
       LastDayOfYear = MAX([date]) OVER (PARTITION BY [year]),
       FirstDayOfNextMonth = DATEADD(MONTH, 1, FirstOfMonth),
       FirstDayOfNextYear = DATEADD(YEAR, 1, FirstOfYear)
FROM #cal
OPTION (MAXDOP 1);

-- 4. add holidays
;WITH x AS (
    SELECT DateKey, [Date], IsHoliday, HolidayText, FirstDayOfYear,
           DOWInMonth, [MonthName], [WeekDayName], [Day],
           LastDOWInMonth = ROW_NUMBER() OVER (PARTITION BY FirstDayOfMonth, [Weekday] ORDER BY [Date] DESC)
    FROM dbo.CalendarUK
)
UPDATE x
SET IsHoliday = 1,
    HolidayText = CASE
        WHEN ([Date] = FirstDayOfYear) THEN 'New Years Day'
        WHEN ([DOWInMonth] = 3 AND [MonthName] = 'April' AND [WeekDayName] = 'Friday') THEN 'Good Friday' -- (approximate; corrected below via the Easter function)
        WHEN ([DOWInMonth] = 1 AND [MonthName] = 'May' AND [WeekDayName] = 'Monday') THEN 'May Day' -- (first Monday in May)
        WHEN ([LastDOWInMonth] = 1 AND [MonthName] = 'May' AND [WeekDayName] = 'Monday') THEN 'May Bank Holiday' -- (last Monday in May)
        WHEN ([LastDOWInMonth] = 1 AND [MonthName] = 'August' AND [WeekDayName] = 'Monday') THEN 'August Bank Holiday' -- (last Monday in August)
        WHEN ([MonthName] = 'December' AND [Day] = 25) THEN 'Christmas Day'
        WHEN ([MonthName] = 'December' AND [Day] = 26) THEN 'Boxing Day'
    END
WHERE -- IsHoliday
    ([Date] = FirstDayOfYear)
    OR ([LastDOWInMonth] = 1 AND [MonthName] = 'May' AND [WeekDayName] = 'Monday')
    OR ([DOWInMonth] = 1 AND [MonthName] = 'May' AND [WeekDayName] = 'Monday')
    OR ([LastDOWInMonth] = 1 AND [MonthName] = 'August' AND [WeekDayName] = 'Monday')
    OR ([MonthName] = 'December' AND [Day] = 25)
    OR ([MonthName] = 'December' AND [Day] = 26);

-- 5. create a function to calculate easter etc
IF OBJECT_ID('dbo.GetEasterHolidays') IS NOT NULL DROP FUNCTION dbo.GetEasterHolidays
GO

CREATE FUNCTION dbo.GetEasterHolidays(@year INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN (
    WITH x AS (
        SELECT [Date] = CONVERT(DATE, RTRIM(@year) + '0' + RTRIM([Month]) + RIGHT('0' + RTRIM([Day]), 2))
        FROM (SELECT [Month], [Day] = DaysToSunday + 28 - (31 * ([Month] / 4))
              FROM (SELECT [Month] = 3 + (DaysToSunday + 40) / 44, DaysToSunday
                    FROM (SELECT DaysToSunday = paschal - ((@year + @year / 4 + paschal - 13) % 7)
                          FROM (SELECT paschal = epact - (epact / 28)
                                FROM (SELECT epact = (24 + 19 * (@year % 19)) % 30) AS epact) AS paschal) AS dts) AS m) AS d
    )
    SELECT DATEADD(DAY, -2, [Date]) [Date], 'Good Friday' HolidayName FROM x
    UNION ALL
    SELECT DATEADD(DAY, 1, [Date]), 'Easter Monday' FROM x
);
GO

-- 6. use the function to insert easter etc
;WITH x AS (
    SELECT d.[Date], d.IsHoliday, d.HolidayText, h.HolidayName
    FROM dbo.CalendarUK AS d
    CROSS APPLY dbo.GetEasterHolidays(d.[Year]) AS h
    WHERE d.[Date] = h.[Date]
)
UPDATE x SET IsHoliday = 1, HolidayText = HolidayName;

-- 7. show results
SELECT *
FROM dbo.CalendarUK
WHERE [year] = '2019'
--WHERE [year] in ('2019', '2020')
  AND (IsHoliday = 1 OR HolidayText IS NOT NULL)
--and DateKey = '20181231'
RCSI testing
Here’s some code to create a large number of ghost records.
-- rcsi_testing.sql

-- create and populate a test table
CREATE TABLE dbo.demo_table (
    ID INT NOT NULL IDENTITY (1, 1),
    C1 CHAR(100) NOT NULL
);
GO

INSERT INTO dbo.demo_table (C1)
SELECT TOP (1000) CAST(TEXT AS CHAR(100)) AS C1
FROM sys.messages
WHERE language_id = 1031;
GO

CREATE UNIQUE CLUSTERED INDEX cuix_demo_table_Id ON dbo.demo_table (Id);
GO

-- start a 1 minute workload
SET NOCOUNT ON;
GO

BEGIN TRANSACTION; ---------********** KEY
GO

-- insert a new record into dbo.demo_table, then delete the oldest, in a loop
DECLARE @finish_date DATETIME2(0) = DATEADD(MINUTE, 1, GETDATE());
WHILE @finish_date >= GETDATE()
BEGIN
    INSERT INTO dbo.demo_table (C1)
    SELECT C1 FROM dbo.demo_table
    WHERE Id = (SELECT MIN(Id) FROM dbo.demo_table);

    -- wait 10 ms before deleting the first record from the table
    WAITFOR DELAY '00:00:00:010';

    DELETE dbo.demo_table
    WHERE Id = (SELECT MIN(Id) FROM dbo.demo_table);
END

ROLLBACK TRAN;
GO
Monitoring RCSI
I created a sql-job to run every 10 minutes to a) save the current ghost count, and b) email me if it's a new high!
Step-1 create the table
CREATE TABLE [maint_db].[dbo].[rcsi_monitor] (
    date_time DATETIME,
    table_name VARCHAR(50),
    ghost_records BIGINT
);
If this step succeeded the job would end there. If the step failed (say, because the table already existed) the job would continue to step-2.
Step-2 save the current counts to the table
INSERT INTO [maint_db].[dbo].[rcsi_monitor]
SELECT GETDATE(), OBJECT_NAME(object_id), version_ghost_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), null, null, null, 'sampled')
WHERE version_ghost_record_count > 0;
The SELECT statement above is the only novel thing here, and perhaps the most useful take-away. (Note: the DB_ID() means the current database, so ensure it runs under the right one).
Step-3 send an alert – if the current count is the new HIGH SCORE!
IF (SELECT MAX(version_ghost_record_count)
    FROM sys.dm_db_index_physical_stats(DB_ID(), null, null, null, 'sampled'))
   >= (SELECT ISNULL(MAX(ghost_records), 0) FROM [maint_db].[dbo].[rcsi_monitor])
   AND (SELECT ISNULL(MAX(ghost_records), 0) FROM [maint_db].[dbo].[rcsi_monitor]) > 0
BEGIN
    RAISERROR ('Too many Ghosts! AAAAAaaarrrrrrggggghh!', 16, 1)
    RETURN
END
The RAISERROR and RETURN would force the job to fail, triggering an email via Notifications.
Fix sp_BlitzLock
I noticed that whenever there was corruption in a single extended-events deadlock report …
… sp_BlitzLock would not work at all …
Msg 9411, Level 16, State 1, Procedure sp_BlitzLock, Line 185 [Batch Start Line 12]
XML parsing: line 37, character 166, semicolon expected
My work-around was to replace line 196 …
AS ( SELECT CONVERT(XML, event_data) AS deadlock_xml
… with this …
AS ( SELECT CONVERT(XML, REPLACE(event_data,'&',';')) AS deadlock_xml
Stored-Proc to insert into PostgreSQL
It turned out not to be so straightforward, executing a SQL Server stored-procedure from Entity Framework to insert data into Postgres (yes, I know, I said that in the meeting).
(A stored-procedure with an ordinary INSERT executed perfectly from within SSMS, using a linked server with an ODBC driver.)
After a fruitless day tweaking RPC and Distributed Transactions, my pragmatic (dreary) solution was to manually create a Postgres table, then a Postgres function to do the insert, then a stored-procedure to pass parameters to that function.
Here’s the code …
-- in postgres
-- DROP TABLE public."ErrorLogs";
CREATE TABLE public."ErrorLogs" (
    "id" serial primary key,
    "edesc" char(500),
    "etype" int,
    "appid" int
)
WITH (OIDS = FALSE)
TABLESPACE pg_default;

ALTER TABLE public."ErrorLogs" OWNER to postgres;
Then to create the insert function …
-- in postgres
CREATE OR REPLACE FUNCTION pg_insert_new_error(edesc varchar(500), etype int, appid int)
RETURNS VOID AS $$
BEGIN
    INSERT INTO "ErrorLogs" (edesc, etype, appid)
    VALUES (edesc, etype, appid);
END
$$ LANGUAGE 'plpgsql';
And finally the stored-proc …
-- in sql server
ALTER PROCEDURE [PostgreSQL].[SPU_InsertNewErrorLog]
    @description VARCHAR(500),
    @messagetype INT,
    @appid INT
AS
BEGIN
    DECLARE @cmd VARCHAR(500) =
        'SELECT a.* FROM OPENQUERY(pgserver, ''select pg_insert_new_error(''''' +
        @description + ''''', ' +
        CONVERT(VARCHAR(5), @messagetype) + ', ' +
        CONVERT(VARCHAR(5), @appid) + ')'') a'
    EXEC (@cmd)
END
GO
(Notice that I had to encapsulate the whole query and its parameters to get it to work with OPENQUERY.)
Whats New!
This very handy little script lists stored-procedures, tables, etc with the most recent at the top.
Great when you have been away, or even as the foundation of a migration tracking SSRS report.
-- WhatsNew.sql
SELECT [type_desc],
       (SELECT [name] FROM sys.schemas WHERE schema_id = ob.schema_id) [schema],
       CASE parent_object_id
            WHEN '0' THEN [name]
            ELSE OBJECT_NAME(parent_object_id) + '.' + [name]
       END [object_name],
       create_date,
       modify_date -- or create-date if there isn't one
FROM sys.objects ob
WHERE is_ms_shipped = 0 -- exclude system-objects
--AND [type] = 'P' -- just stored-procedures
--ORDER BY [schema] DESC, modify_date DESC
ORDER BY modify_date DESC;
Audit Logins (light)
This is a partial update of my “DBA Audit” post, using code more suited to SQL 2014 and beyond.
Before a migration I created a job called “Audit Logins” scheduled to run every minute to help flag unused logins.
The first step ‘setup’ creates and populates a table with all enabled logins …
/* initial setup */

/* create table */
CREATE TABLE [master].[dbo].[LoginAudit] (
    LoginName VARCHAR(200),
    LastLoginDate DATETIME)

/* populate with logins */
INSERT INTO [master].[dbo].[LoginAudit] (LoginName, LastLoginDate)
SELECT [name], NULL
FROM [master].[sys].[server_principals]
WHERE [type] <> 'R'    /* is not a Role */
  AND is_disabled <> 1; /* is not Disabled */
Step-1 fails after the first run by design (as the table already exists) and continues onward with step-2 ‘update’ …
/* update logins */
SELECT MAX(login_time) LoginTime, login_name LoginName
INTO #LoginTempTable
FROM [sys].[dm_exec_sessions]
WHERE login_name <> '' /* exclude ef */
GROUP BY login_name;

UPDATE [master].[dbo].[LoginAudit]
SET LastLoginDate = tmp.LoginTime
FROM #LoginTempTable tmp
WHERE LoginAudit.LoginName = tmp.LoginName;
I called it “light” as it is designed to hold just one row per login. Therefore, even if it is forgotten and runs for years, the audit table will never grow.
Copying all tables to a new database
As part of an e-commerce re-write project I was tasked with copying over 300 tables from one database to another, including all data, identity columns, indexes, constraints, primary and foreign keys.
I was unable to simply backup / restore due to space and security issues. Here is my solution …
1. Script Create statements for every table. On the ‘old’ database I expanded the database and clicked ‘Tables’ then clicked ‘View’ / ‘Object Explorer Details’ so all the tables were listed in the right-hand pane. Then I was able to highlight all the tables there, and right-click ‘Script Table as’ / ‘Create To’ / ‘New Query Editor Window’.
When finished I changed connection to the ‘new’ empty database and ran the script to create all the tables – without data.
2. Disable all foreign-key constraints. (from here https://stackoverflow.com/questions/11639868/temporarily-disable-all-foreign-key-constraints). I ran this script on the new database …
-- disable fks
use targetdb
go

DECLARE @sql NVARCHAR(MAX) = N'';

;WITH x AS (
    SELECT DISTINCT obj = QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.' +
                          QUOTENAME(OBJECT_NAME(parent_object_id))
    FROM sys.foreign_keys
)
SELECT @sql += N'ALTER TABLE ' + obj + ' NOCHECK CONSTRAINT ALL; ' FROM x;

EXEC sp_executesql @sql;
3. Populate all the tables using the SSIS wizard. In SSMS I right-clicked the old database / ‘Tasks’ / ‘Export Data…’. In the wizard I accepted the old database as the Source and typed in the new database details as the Target. I ticked all tables, and clicked ‘Edit Mappings’ to tick ‘Enable identity insert’. I then deselected the Views, and executed the SSIS package.
4. To Re-enable all foreign keys – I ran this script (from the same web page as 2.) on the new database …
-- re-enable fks
use targetdb
go

DECLARE @sql NVARCHAR(MAX) = N'';

;WITH x AS (
    SELECT DISTINCT obj = QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.' +
                          QUOTENAME(OBJECT_NAME(parent_object_id))
    FROM sys.foreign_keys
)
SELECT @sql += N'ALTER TABLE ' + obj + ' WITH CHECK CHECK CONSTRAINT ALL; ' FROM x;

EXEC sp_executesql @sql;
To check progress I used my old ‘database_compare’ script.
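My ‘database_compare’ script isn't reproduced here, but the progress check can be as simple as comparing row counts per table on each side. A minimal sketch (run it in the old database, then the new, and diff the outputs):

-- rough progress check: row counts per table
SELECT OBJECT_SCHEMA_NAME(p.[object_id]) [schema],
       OBJECT_NAME(p.[object_id]) [table],
       SUM(p.[rows]) [rows]
FROM sys.partitions p
WHERE p.index_id IN (0, 1) -- heap or clustered index only (avoids double-counting)
GROUP BY p.[object_id]
ORDER BY 1, 2;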
Deadlocks from Entity Framework
Entity Framework was squirting raw SELECT statements at the database and causing deadlocks.
To fix, I captured the query text with sp_BlitzLock and executed it in Plan Explorer.
Plan Explorer confirmed that the data was being retrieved using a non-clustered index combined with the clustered-index (ahah!)
The Plan Explorer / Index Analysis tab showed the non-clustered index was missing over 15 of the columns the query needed (hence the trips to the clustered index).
I was able to create a new index that covered 100% of the columns from within the Index Analysis screen.
I executed the query again to confirm it was no longer using the clustered index, and was therefore quicker and less likely to cause a deadlock.
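As a sketch only (the table and column names here are made up), the fix was of this shape; the INCLUDE clause is what lets the non-clustered index satisfy the whole query without key lookups into the clustered index:

-- hypothetical covering index: the key matches the WHERE/JOIN columns,
-- and INCLUDE carries the remaining SELECT-list columns at the leaf level
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
ON dbo.Orders (CustomerId, OrderDate)
INCLUDE (OrderStatus, OrderTotal, DeliveryAddress);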
Improving sp_BlitzLock
I noticed the output from sp_BlitzLock surfaces object_ids in the query column.
It is a simple matter to decode object_id’s like …
SELECT OBJECT_NAME(SomeNumber);
… where SomeNumber is the object id.
Reducing index count
Sometimes it's hard to see the wood for the trees. With over 30 indexes on a table of 50 columns, I searched for some graphical way to list the columns against each index, so I could easily see a) indexes that were totally encapsulated within a larger one, and b) almost-identical indexes where a column (or two) could be added to one so the smaller could be dropped.
Initially it was sp_BlitzIndex that named the tables with too many indexes. I then ran queries against those tables in SentryOne's Plan Explorer, like … select * from dbo.order_items; … or whatever.
Some time later :), in the Index Analysis tab I was able to tick boxes to show every column and hey presto! The exact graphical tool I wanted 🙂 And a bonus of an easy way to manipulate them.
But watch out! You need another tool to rank the read/write ratio of each index before you start making changes (I use my old ‘indexmaint’ script).
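That ‘indexmaint’ script isn't shown here, but the usual raw material for a read/write ranking is sys.dm_db_index_usage_stats, along these lines:

-- rough read/write ratio per index (note: these stats reset when the instance restarts)
SELECT OBJECT_NAME(s.[object_id]) [table],
       i.[name] [index],
       s.user_seeks + s.user_scans + s.user_lookups [reads],
       s.user_updates [writes]
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i ON i.[object_id] = s.[object_id]
                  AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY [reads];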
Removing all duplicate rows
Just recording here an update to my old ‘HAVING’ way of removing duplicate rows …
WITH cte AS (
    SELECT SomeColumnName,
           row_number() OVER(PARTITION BY SomeColumnName ORDER BY SomeColumnName) AS [rn]
    from [SomeDatabaseName].[dbo].[SomeTableName]
)
select * from cte where [rn] > 1 -- #1 test
-- delete cte where [rn] > 1 -- #2 execute
Migrating SQL Server to PostgreSQL
- Download and install the latest PostgreSQL (eg: 10.4)
- Download and install the latest admin tool (eg: pgAdmin 4.3)
- Create a postgres “Server” that matches the actual server name, and a database
- Download the newest 64-bit PostgreSQL ODBC driver (eg: psqlodbc_x64.msi)
- Install the driver
- Run the SSIS export wizard
Where the wizard stops with an error use the following script to change the offending column to varchar(max). The idea at this stage is just to get the data in, in whatever form.
-- convertTablesForPg.sql

-- 1. to find tables with a named column, or
select so.[name] table_name, sc.[name] column_name, st.[name] data_type
from sysobjects so
inner join syscolumns sc on (so.id = sc.id)
inner join systypes st on (st.[type] = sc.[type])
where so.[type] = 'U'
and sc.[name] = 'Email'

-- 1b. to find named tables
select so.[name] table_name, sc.[name] column_name, st.[name] data_type,
       'ALTER TABLE [dbo].[' + so.[name] + '] ALTER COLUMN [' + sc.[name] + '] VARCHAR(MAX);'
from sysobjects so
inner join syscolumns sc on (so.id = sc.id)
inner join systypes st on (st.[type] = sc.[type])
where so.[type] = 'U'
and so.[name] = 'translations'
ORDER BY 2

-- 2. to create command to change date/time/bit columns to varchar(max)
select so.[name] table_name, sc.[name] column_name, st.[name] data_type,
       'ALTER TABLE [dbo].[' + so.[name] + '] ALTER COLUMN [' + sc.[name] + '] VARCHAR(MAX);'
from sysobjects so
inner join syscolumns sc on (so.id = sc.id)
inner join systypes st on (st.[type] = sc.[type])
where so.[type] = 'U'
and st.[name] IN ('DATE', 'TIME', 'BIT')
ORDER BY 1,2,3

/* missed tables
[dbo].[BuyersBySupplierData]
*/
See me for part 2 😉
Change Job notification Operators
I wanted to standardize job error notifications, so I created a new operator called ‘DBA’ (with multiple email addresses).
This code semi-automates the process of updating all the jobs by listing them, along with the code needed to change them …
/* ChangeNotifications.sql */
SELECT J.[name] JobName,
       O.[name] OperatorName,
       O.email_address Email,
       'EXEC msdb.dbo.sp_update_job @job_name = N''' + J.[name] +
       ''', @notify_level_email=2, @notify_email_operator_name = N''DBA''' CommandToChangeIt
FROM msdb..sysjobs J
LEFT JOIN msdb..sysoperators O ON O.id = J.notify_email_operator_id
ORDER BY Email, JobName;
Bulk Email Sender
It's inherently a bad idea to use a database server for application processes, and the built-in procedure “sp_send_dbmail” can be a bit temperamental.
Nevertheless, the brief was to send emails this way, and individually (IE: no long strings of addresses).
The environment was mature, with a working Email Profile, a database and tables already in-place and holding HTML style emails ready to go.
-- SendEmailProcess.sql
USE [SomeDatabase]
GO

CREATE PROCEDURE [dbo].[SendEmailProcess]
    @Test varchar(100) = null
AS

/* clear out the sysmail tables */
DELETE FROM [msdb].[dbo].[sysmail_allitems]

/* parameters */
DECLARE @ID uniqueidentifier,
        @To varchar(100),
        @Subject varchar(255),
        @Html varchar(max),
        @Return int

/* start of loop */
WHILE (SELECT COUNT(*)
       FROM [SomeDatabase].[dbo].[EmailMessage] EM
       JOIN [SomeDatabase].[dbo].[Recipient] R ON EM.Id = R.EmailMessage_Id2
       WHERE EM.[Status] = 'Submitted') > 0
BEGIN

    /* get any one email message */
    SELECT TOP 1 @ID = EM.ID,
                 @To = isnull(@Test, R.EmailAddress),
                 @Subject = EM.[Subject],
                 @Html = EM.HtmlContent
    FROM [SomeDatabase].[dbo].[EmailMessage] EM
    JOIN [SomeDatabase].[dbo].[Recipient] R ON EM.Id = R.EmailMessage_Id2
    WHERE EM.[Status] = 'Submitted';

    /* send it */
    EXEC [msdb].[dbo].[sp_send_dbmail]
         @profile_name = 'BulkMail',
         @recipients = @To,
         @subject = @Subject,
         @body = @Html,
         @body_format = 'HTML';

    /* check it worked */
    SET @Return = @@error

    /* if it worked - mark it as Sent */
    IF @Return = 0
    BEGIN
        UPDATE [SomeDatabase].[dbo].[EmailMessage]
        SET [Status] = 'Sent'
        WHERE Id = @ID
    END

    /* if it failed - flag it and move on */
    IF @Return <> 0
    BEGIN
        UPDATE [SomeDatabase].[dbo].[EmailMessage]
        SET [Status] = 'Failed'
        WHERE Id = @ID
    END

/* end of loop */
END
GO
Splitting up a large MDF file
I wanted to break-up a single 500GB MDF file into 5 files of around 100GB each, so created 4 NDF files.
I set autogrowth to 1024 MB for the NDF files and OFF for the MDF file.
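The four NDF files were added with statements of this shape (the database name, paths and sizes are illustrative):

-- add NDF files to the PRIMARY filegroup (illustrative names / paths / sizes)
ALTER DATABASE [SomeDatabase]
ADD FILE (NAME = N'SomeDatabase_2',
          FILENAME = N'F:\SQLData\SomeDatabase_2.ndf',
          SIZE = 100GB,
          FILEGROWTH = 1024MB);
-- ... repeated for files _3, _4 and _5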
In a SQL Job I used code like …
DBCC SHRINKFILE (N'MDFLogicalFileName', EMPTYFILE);
Which after the second weekend left about 88GB in the MDF file.
-- datafiles.sql
select Physical_Name,
       ROUND(CAST((size) AS FLOAT)/128, 2) Size_MB,
       ROUND(CAST((FILEPROPERTY([name], 'SpaceUsed')) AS FLOAT)/128, 2) Used_MB,
       convert(int, ROUND(CAST((FILEPROPERTY([name], 'SpaceUsed')) AS float)/128, 2) /
                    ROUND(CAST((size) AS float)/128, 2) * 100) Used_Pct
FROM sys.database_files
where physical_name not like '%ldf%'
order by physical_name;
I cancelled and deleted the job.
Over the next 3 week nights I reduced its physical size to 300GB, 200GB, then 100GB using a Job step like …
DBCC SHRINKFILE (N'MDFLogicalFileName', 100000);
I set MDF autogrowth to match the NDF files, so the five would naturally balance (size wise) over time.
Lastly I set up a nightly job to rebuild the most fragmented indexes (Thanks again Ola).
Update Reporting from Live (part 2 of 2)
… I created a SQL Job on the Reporting server called “Restore from Live”.
Step-1 “Kill any connections”
Before a database can be restored it needs to be unused. Removing connections in this way is more reliable than changing the database to “single-user mode”.
DECLARE @kill VARCHAR(8000) = '';

SELECT @kill = @kill + 'kill ' + CONVERT(VARCHAR(5), spid) + ';'
FROM [master].[dbo].[sysprocesses]
WHERE dbid = DB_ID('LiveDatabaseName')
  AND spid > 50;

EXEC (@kill);
Step-2 “Full Restore”
This step restores a Full backup, changing the database name and file locations as required.
When complete the database will be left in a “Restoring” state. *It can be brought online either by completing the next step or by manual recovery EG: “Restore Database [LiveDatabaseName] with Recovery;”.
RESTORE DATABASE [LiveDatabaseName]
FROM DISK = N'\\LiveServerName\SQLBackups\LiveDatabaseName\LiveDatabaseName_FULL.bak'
WITH NORECOVERY, REPLACE;
Step-3 “Diff Restore”
This step restores a Diff backup similar to the last step, however it brings the database back online after completion. If this step ever fails see * above.
RESTORE DATABASE [LiveDatabaseName]
FROM DISK = N'\\LiveServerName\SQLBackups\LiveDatabaseName\LiveDatabaseName_DIFF.bak'
WITH RECOVERY;
Step-4 “Switch to Simple Recovery Mode”
This step changes the database to Simple recovery mode, where the log-file is re-used and does not require management (EG: regular log backups). This is appropriate for report servers where the data is already backed up from live “for recovery” (IE: outside of the backups detailed in these 2 posts).
ALTER DATABASE [LiveDatabaseName] set recovery SIMPLE;
Step-5 “Remove Orphans”
This deprecated command re-links the “mssql” database user (which came across in the restore) to the “mssql” server login. Logins and their passwords are not captured by database backups, so after a restore the user is orphaned.
sp_change_users_login 'auto_fix', 'mssql';
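For reference, the modern (non-deprecated) equivalent is ALTER USER, which re-maps the database user to the login directly:

-- non-deprecated equivalent of sp_change_users_login 'auto_fix'
ALTER USER [mssql] WITH LOGIN = [mssql];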
Footnote
I wrote these 2 jobs using minimal variables and dynamic SQL, and a generous number of jobs and job-steps, in the hope that this will be robust and easy to manage (and because I really like simplifying such things).
Removing unused databases
Here is my worksheet for safely hiding databases from SSMS that I suspect are unused …
-- DetachDB.sql

-- 1. List all attached databases with file paths
SELECT db_name(database_id) [Database], Physical_Name
FROM sys.master_files
ORDER BY [Database]

-- 2. Create Attach script for the chosen db (accumulate history here)
USE [master]; -- on some servername
CREATE DATABASE xxx ON
    (FILENAME = 'D:\SQLData\xxx.mdf'),
    (FILENAME = 'D:\SQLLogs\xxx.ldf')
FOR ATTACH;

USE [master]; -- on some servername
CREATE DATABASE Test ON
    (FILENAME = 'D:\SQLData\Test.mdf'),
    (FILENAME = 'D:\SQLLogs\Test_log.ldf')
FOR ATTACH;

-- 3. Detach database
USE [master];
EXEC MASTER.dbo.sp_detach_db @dbname = N'xxx';

-- 4. To rollback, re-attach the database (scripted in step-2)
SSRS: User defined Subscriptions failing
I had an issue in SSRS where user-configured subscriptions would always fail to find the user's email address (an email-server configuration problem, probably).
My work-around was to allow users to type in (or paste in) their full email addresses.
To do this I first created a copy of the Reporting Services configuration file “C:\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.config”.
Then edited the original in two places …
1) I changed <SendEmailToUserAlias>True</SendEmailToUserAlias> to False.
2) Then I inserted the name of the SMTP Server into the middle of <DefaultHostName></DefaultHostName>.
NOTE: To find the name of the SMTP Server I opened Reporting Services Configuration Manager, and navigated to “E-Mail Settings”.
Space Free
Central to my ‘Alert on low space’ job is this query, which is very handy by itself …
--spaceAlert.sql
select volume_mount_point Drive,
cast(sum(available_bytes)*100 / sum(total_bytes) as int) as [Free%],
avg(available_bytes/1024/1024/1024) FreeGB
from sys.master_files f
cross apply sys.dm_os_volume_stats(f.database_id, f.[file_id])
group by volume_mount_point
order by volume_mount_point;
GDPR Data Mapping
There is a fab new tool in SSMS 17.5 that helps with the GDPR spadework.
That is, the right of EU citizens to have their personal data deleted on request, enforceable from 25 May 2018.
To start right-click on a database, choose Tasks / Classify Data.
The wizard then searches the current database and attempts to classify table-columns into categories. For example a column called ‘mobile’ containing telephone numbers would be categorized as ‘contact Info’.
Then the wizard adds a sensitivity label (contact-info would be “Confidential – GDPR”)
It's a good idea to look at the actual data in a second screen whilst working down the recommendations list (in the first screen).
For each table-column you can accept / change / delete the recommendation.
Then, when you are done, you can save your work by clicking on “Accept selected recommendations”.
This is then saved within each database's sys.extended_properties system view.
Be assured that all selections can be changed or removed indefinitely, and that the tables / columns / data are not directly changed in any way.
The result is a smart report which can be printed or emailed out, demonstrating that you have it all under control 😉
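To see what the wizard saved, you can query sys.extended_properties directly. A minimal sketch; the two property names are what SSMS 17.5 appeared to use, so verify them on your own build:

-- list the classifications the wizard saved
SELECT OBJECT_NAME(major_id) [table],
       COL_NAME(major_id, minor_id) [column],
       [name] [property],
       CONVERT(VARCHAR(100), [value]) [value]
FROM sys.extended_properties
WHERE [name] IN ('sys_information_type_name', 'sys_sensitivity_label_name')
ORDER BY 1, 2;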
TSQL Performance Rule #1
There's no significance to this one being number one 🙂 it's just the one I've been thinking about lately. I may now have built this up a bit more than is warranted, so I hope you're not expecting too much from my number-one performance rule. Oh, OK then, here goes …
“All numbers in stored-procedures should be in single quotes”.
Even if they are defined as INT they could potentially force a VARCHAR to be converted to INT.
Consider WHERE SomeColumn = 42. Conversion precedence means VARCHARs will always be converted to INTs, never the other way around. The one numeric value above (42) could cause a million rows in the column (“SomeColumn”) to have to be converted to INT to be tested, significantly affecting performance.
Consider WHERE SomeColumn = ’42’. “SomeColumn” is either numeric or non-numeric. If it's INT then just one value (the ’42’ in the where clause) has to be converted to INT (taking no time at all). If “SomeColumn” is VARCHAR then there is no conversion.
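A quick way to see the rule in action (dbo.SomeTable is hypothetical; compare the two execution plans and look for a CONVERT_IMPLICIT on the column side of the first one):

-- assume SomeColumn is VARCHAR(10)
SELECT * FROM dbo.SomeTable WHERE SomeColumn = 42;   -- the column is converted to INT row by row (scan)
SELECT * FROM dbo.SomeTable WHERE SomeColumn = '42'; -- no conversion, an index on SomeColumn can seek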
Shrink TempDB
To shrink the TempDB MDF file (before adding additional NDF files for example), whilst retaining the total size of 50GB …
DBCC FREEPROCCACHE
USE [tempdb]
GO
DBCC SHRINKFILE (N'tempdev', 8192)
GO
Exporting a Report to Excel
Finance wanted to export their reports into spreadsheets, but the company logo and report title were messing up the rendering.
To fix this I amended the SQL Server 2012 SSRS config file (called “rsreportserver.config“) after taking a copy.
The location of the config file was …
C:\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\ReportServer
I commented out the line …
<Extension Name="EXCELOPENXML" Type="Microsoft.ReportingServices.Rendering.ExcelOpenXmlRenderer.ExcelOpenXmlRenderer,Microsoft.ReportingServices.ExcelRendering"/>
… and replaced it with these 7 lines …
<Extension Name="EXCELOPENXML" Type="Microsoft.ReportingServices.Rendering.ExcelOpenXmlRenderer.ExcelOpenXmlRenderer,Microsoft.ReportingServices.ExcelRendering">
    <Configuration>
        <DeviceInfo>
            <SimplePageHeaders>True</SimplePageHeaders>
        </DeviceInfo>
    </Configuration>
</Extension>
To use this, I moved the report Logo and Title into a heading-block within the reports.
** UPDATE **
On another occasion I was unable to make this global change and resorted to making changes within individual reports.
The method was to right-click on the items to be hidden, choose properties, then Visibility. I pasted this expression into the appropriate box …
=IIF(Globals!RenderFormat.Name = "EXCELOPENXML", True, False)
AlwaysOn Availability Groups
Terms (SQL Server 2012, 2014, 2016, 2017 Enterprise)
AG = Availability Group = AlwaysOn Availability Group = SQL Server technology that protects one or more databases on a SQL Server instance as a logical group, for HA or DR.
Node = A Windows Server participating in an AG.
SQL Server Instance = A complete installation of SQL Server. Including user databases, system databases, logins, linked servers, jobs etc. A single default-instance plus a number of named-instances could be installed on one node.
Replica = the user database(s) being protected by an AG.
List (high level)
Configure Always On (Operating System)
– Enable windows clustering on each node
– Add participating nodes to cluster
– Validate cluster
– Create cluster
– Multi-subnet?
Configure Availability Groups (SQL Server) (a T-SQL sketch of these steps appears after these lists)
– Enable AG on each SQL Server instance through config manager
– Create endpoint on each replica
– Grant connect on each endpoint/replica
– Create an AG (primary replica)
– Join each secondary to the new AG (secondary replicas)
Configure Databases (Database)
– Join database to AG
– Configure synchronous / asynchronous replication
– Configure manual / auto failover
– Configure read only / non read only secondary
Configure the Listener (Availability Group)
– Select IP for each subnet
– Configure listener
– Test listener
– Test failover
Configure Advanced Options
– Read only routing
– Offloading backups
– Failover behavior
– setup monitoring
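As a rough T-SQL sketch of the ‘Configure Availability Groups’ steps above (the server names, port, service account and AG name are all illustrative, and this is far from a complete script):

-- on each replica: create and authorize the endpoint
CREATE ENDPOINT [Hadr_endpoint]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = ALL);
GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [DOMAIN\svc_sql];

-- on the primary: create the AG
CREATE AVAILABILITY GROUP [AG1]
FOR DATABASE [SomeDatabase]
REPLICA ON
    N'NODE1' WITH (ENDPOINT_URL = N'TCP://NODE1.domain.local:5022',
                   AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                   FAILOVER_MODE = AUTOMATIC),
    N'NODE2' WITH (ENDPOINT_URL = N'TCP://NODE2.domain.local:5022',
                   AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                   FAILOVER_MODE = AUTOMATIC);

-- on each secondary: join it
ALTER AVAILABILITY GROUP [AG1] JOIN;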
Add a Column to a Report Model
I had the odd request to add an extra column to a SQL 2008r2 “Report Model”. I had never heard of one of those, but it turned out to be a type of amalgamated data-source that the users created their own ad-hoc reports from (via Report Builder 1).
To add the extra column I just added it to the SQL View (which was named in the amalgamated data-source definition). Then to refresh the Report Model I downloaded it and uploaded it with a new name.
Report-Builder Cube Data-Source
Had a tricky situation connecting Report Builder 3 to a cube. I was able to copy the connection string from within SSDT, but it still would not work.
I will use “Adventure Works” for illustration.
The solution was in the error message “Either the user, does not have access to the AdventureWorksDW2012 database, or the database does not exist.”
It turned out the database did not exist … under its SSMS database-engine name (“AdventureWorksDW2012”).
Connecting SSMS to Analysis Services, however, showed a different name: “AdventureWorksDW2012Multidimensional-EE”.
Plugging this into my connection string (with Data Source Type being Analysis services, and Connect Using being Windows Integrated Security) worked eg:-
Provider=SQLNCLI11.1;
Data Source=(ServerName\InstanceName);
Integrated Security=SSPI;
Initial Catalog=AdventureWorksDW2012Multidimensional-EE
Annoyingly (grrr!), I found that just removing the Initial Catalog also worked (ah bah).
Update Statistics on a whole database
Whilst performance tuning an SSRS report server I wanted to update all the statistics within the two databases ‘ReportServer’ and ‘ReportserverTempDB’.
I chose a simply-coded, two-step method (for safety, and to keep control).
First I generated the commands (per database) …
Select 'UPDATE STATISTICS [' + [name] + '] WITH FULLSCAN' from reportserver.sys.tables
Select 'UPDATE STATISTICS [' + [name] + '] WITH FULLSCAN' from reportservertempdb.sys.tables
… before executing them in a separate session.
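(An alternative worth knowing: sp_updatestats does something similar in one command per database, though it samples rather than scans, and only touches statistics it considers out of date.)

-- alternative: one command per database (sampled, not FULLSCAN)
USE ReportServer;
EXEC sp_updatestats;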
Only emailing a spreadsheet when it contains data.
This was a new one to me! A user subscribed to an SSRS Report but only wanted to receive the email if the report contained data.
There seems to be loads of write-ups on the web about how to set this up. Here is how it went for me …
Firstly, I created the stored-procedure that would return rows from a live table that contained (business logic) errors (EG: “rep_exceptions”).
I created the report and subscribed to it (making sure to paste in my actual email address, not my login).
In the subscription form I specified both a start-date and an end-date in the past (ensuring the subscription would never actually fire), and OK'ed the form.
Within SSMS “Job Activity Monitor” I located the job that had been created by this subscription (I looked for jobs that had never run, then matched the start / end dates in the schedule with those I had used in the subscription form).
I copied the action statement from the job-step eg:
EXECUTE [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='5ec60080-b30c-4cfc-8571-19c4ba59aaac'
… into a new job. Then augmented it, to only run if there was data …
EXECUTE [ServerName].[dbo].[rep_exceptions]

if @@ROWCOUNT > 0
    EXECUTE [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='5ec60080-b30c-4cfc-8571-19c4ba59aaac'
I scheduled the new job to run every day at 8am.
Once this was all tested and working – I changed the SSRS subscription email address to the user (and left me as CC temporarily).
Adding ‘All’ to a report drop-down list
There is a problem when you configure a report parameter to ‘Accept Multiple Items’, in that it won't work with a stored-procedure (SP).
One work-around is to only accept single items, but make one of them ‘All’, like this …
1) Create a separate SP to populate the parameter and add an ‘All’ option …
SELECT 'All' CustId UNION SELECT DISTINCT CustomerId FROM SomeTable ORDER BY CustId
2) Then amend the main SP by joining to SomeTable, and adding a CASE statement in the WHERE clause, like so …
WHERE SomeTable.CustomerId = CASE WHEN @CustId != 'All' THEN @CustId ELSE SomeTable.CustomerId END
Which translates as …
WHERE SomeTable.CustomerId = SomeTable.CustomerId
… when ‘All’ is selected (which will let everything through), or …
WHERE SomeTable.CustomerId = @CustId
… where ‘All’ is Not selected.
This allows the user to select All or a single value.
However, if you need to select more than one value, but not all values, you will need another approach: a split-string function, as sketched below.
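On SQL 2016 and above the built-in STRING_SPLIT function will do, assuming the report passes the multi-value parameter down as a single comma-separated string (eg: via Join(Parameters!CustId.Value, ",") in the dataset parameter expression). A sketch with hypothetical names:

-- @CustIds arrives as a comma-separated string, eg: '3,7,42'
DECLARE @CustIds VARCHAR(1000) = '3,7,42';

SELECT *
FROM SomeTable
WHERE CustomerId IN (SELECT LTRIM([value]) FROM STRING_SPLIT(@CustIds, ','));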
Powershell: sort by a column name that contains a space
By trial and error I found the answer is to just remove the space. For example to sort by the column “Recovery Model” …
import-module SQLPS; cls
$smoserver = new-object microsoft.sqlserver.management.smo.server
$smoserver.databases | sort-object RecoveryModel
Preflight Checklist – Installing SQL 2016 Failover Cluster
How to Connect
- VPN: [SSL1.upcBroadband.com] (login using UIM account)
- [NL CSA P JUMP 001] (login using CSA account)
- 172.23.103.219 / 38 (login using CSA ADMIN account)
Nodes
- Node1 Full Name (rdc ip): LG-W-P-DBS00007.CSA.INTERNAL (172.23.103.219)
- Node2 Full Name (rdc ip): LG-W-P-DBS00008.CSA.INTERNAL (172.23.103.38)
IP’s
- Cluster Management (node1 / node2): 172.23.103.219 / 172.23.103.38
- SQL Server Instance: ??
- DTC: ??
Drive Allocation
- D. Media
- E. Sys & TempDB
- F. User ID
- G. Logs
- H. Backups etc
Collation
- [SQL Latin1 General CP1 AS]
Windows Checks
- Power saving NOT enabled
- Instant File Initialization Enabled (secpol.msc, Local ~, User ~, Perform ~)
- Service Account
MDX Commandments
1) Always use fully qualified names.
Selecting from ‘2016’ may work initially, as the engine returns the first object it finds with this name (which just so happens to be in the calendar dimension).
However, after a time there may be a customer-number ‘2016’ (in the customer dimension) that will be returned erroneously.
2) A Tuple marks the co-ordinates of the data to be returned.
A list of comma-separated dimension members that intersect on the required data, EG:
(customer 123, date 10/07/2016, sales-item 456).
3) A Set is a list of related objects
EG: {date 10/07/2016, date 11/07/2016, date 12/07/2016}. A Set can also be a list of Tuples.
Re-run Subscriptions
When an email server failed overnight I used this script to generate the commands to re-send reports to subscribers …
-- RerunFailedSubscriptions.sql
-- generate the commands to resend report-emails that failed for some reason
select s.LastStatus,
       LastRunTime,
       'exec sp_start_job @job_name = ''' + cast(j.name as varchar(40)) + '''' [Command To Re-run the Job]
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
join [ReportServer$REPORTS].[dbo].[Subscriptions] s
  on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%'
where s.LastStatus like 'Failure sending mail%';