I had my first play with SQLCMD on a SQL 2012 Development Server on which the evaluation period had expired. Incidentally this meant SSMS would not run locally (along with maint-plan backups); however, the services and other jobs were still working, and remote SSMS could connect and interact as normal.
SQLCMD reminds me of Oracle's SQL*Plus, with the added benefit that you can use the up and down arrow keys to step through history.
-- connect (as current windows user)
sqlcmd -S server\instance
sqlcmd -S .\instance

-- interact over multiple rows (input is buffered until you type GO)
select name from sys.databases
go

-- clear the command cache (discard buffered input)
:reset

-- redirect output to a file
:out c:\somefile.txt

-- reset output back to the screen
:out stdout
A little slice of my dba life 🙂 … during morning-checks I noticed the MSDB database was 2.9GB and growing (in utility explorer). I ran the built-in SSMS report ‘Disk Usage by Table’ and found the table sysjobhistory was by far the largest (at 2.6GB).
I checked for a sql-job that manages job history – there wasn’t one :). I created a sql-job ‘Prune Backup History’ with the command ‘delete from msdb.dbo.sysjobhistory where run_date < convert(char(8), getdate()-30, 112)’, and scheduled it to run daily at 5am.
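Formatted, the prune step looks like this (30 days retained; note that msdb also ships the system procedure sp_purge_jobhistory, which with its @oldest_date parameter could do much the same job):

-- delete job history rows older than 30 days
-- run_date is stored as an integer in yyyymmdd form, hence the style-112 convert
delete from msdb.dbo.sysjobhistory
where run_date < convert(char(8), getdate() - 30, 112)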
I ran it manually and was disappointed to find the table was still 2.4GB in size. I fired up SSDT and created a new Integration Services project. I dragged the 'Data Profiling Task' onto the design surface and configured it to output to \\server\share\msdbprofile. Within 'Quick Profile' I connected to the server/database/table.
I saved the package, ran it (made a coffee), then opened the task again and clicked 'open profile viewer'.
Within 'Column Value Distribution Profiles' I found a disproportionate 74% of the run_date values were the same date (about 2 weeks ago). Double-clicking this result I could NOT see the name of the job (in the raw data) but recognised the step-name as belonging to the cdc job I had set-up … about two weeks ago, lol.
I concluded the sql-job was fine, and would dramatically shrink sysjobhistory – in a few weeks’ time. Cake time 🙂
To check for corruption on a large, busy database I was able to use my filegroup checking script on all but one filegroup.
To polish off the final (monster) filegroup, I resorted to creating a database-snapshot (in the morning) and running this (checktable) script during the day.
btw: Whilst I realise checking a database-snapshot is of limited use, I reasoned it was better than the only alternative (not checking).
Before leaving work I would review the results and drop the snapshot (for application optimization), in readiness to create a fresh one (and continue) the next day.
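For reference, a minimal sketch of that daily create/drop cycle (database, logical file, and path names here are illustrative, not from the system above; the NAME clause must match the source database's logical data-file name):

-- morning: create the snapshot
create database BigDb_Snap
on ( name = BigDb_Data, filename = 'D:\Snapshots\BigDb_Data.ss' )
as snapshot of BigDb

-- evening: after reviewing the checktable results
drop database BigDb_Snap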
-- capture table-names from the named filegroup - once
select distinct o.name
into master.dbo.fgtables
from sys.indexes i
join sys.filegroups f on i.data_space_id = f.data_space_id
join sys.all_objects o on i.object_id = o.object_id
where f.name = 'somefilegroupname'
order by o.name
-- check tables one-by-one
declare @sql varchar(200)
while (select count(*) from master.dbo.fgtables) > 0
begin
	select top 1 @sql = 'dbcc checktable ("dbo.' + name + '")' from master.dbo.fgtables order by name
	exec (@sql)
	delete from master.dbo.fgtables where name = (select top 1 name from master.dbo.fgtables order by name)
end
I wanted to display a changing row-count (in a staging table) including commas (eg: 1,234 or 1,234,567).
I tried using STUFF() in a CASE statement to insert commas, then LEFT(), before settling on this, which is more succinct, and scales from 1 to 1,999,999,999
select replace(convert(varchar(20), convert(money, count(*)), 1), '.00', '')
To explain how it works …
– the ‘1’ is a CONVERT style that adds thousands separators (commas) when converting from MONEY to VARCHAR
– REPLACE then removes the ‘.00’ that the conversion from MONEY appends
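For example, substituting a literal value for count(*):

select replace(convert(varchar(20), convert(money, 1234567), 1), '.00', '')
-- returns 1,234,567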
To check for corruption on a very large production database I created a database-snapshot to run CHECKDB against.
This found a corrupt table (“CHECKDB found 0 allocation errors and 4191 consistency errors in table … repair_allow_data_loss is the minimum repair level for the errors found…”).
I tried running CHECKTABLE against the suspect table within the live database, but couldn’t due to lack of space.
Finally I used DBCC CHECKTABLE(sometablename) WITH TABLOCK, so DBCC would take locks rather than create an internal database snapshot to check – and run out of space!
This confirmed the live table was fine (yay! ooo cake).