Removal of SQL Server 2008 R1

The day after I installed two instances of SQL Server 2008 R1 on my old Windows 2003 test server, I decided to remove them. Here’s what I did …

– rebooted 😉
– start/ settings/ control panel/ add or remove programs/ Microsoft SQL Server 2008/ remove
– from the instance screen I chose the last instance installed
– from the component screen I ticked All Components
– rebooted and repeated the above steps for the other instance
– used the Add or Remove screen to remove the SQL 2008 browser then native client
– removed the base folders for the two instances from my c-drive
– used Start/Search to find any other folders modified yesterday (happily there were none)
– rebooted then fired up SQL Server 2005 to test it still worked – no problems 🙂

An example of a SQL Restore

An example of restoring database Liath using a SQL script.

If the database exists, put it into single-user mode

Alter database liath set single_user with rollback immediate
Go

If possible, back up the tail (i.e. any un-backed-up log records)

Backup log liath to disk = N'\\192.168.0.32\volume_1\SQLBackups_D2\liath\liathTail.bak' with init, norecovery
go

Restore the last full backup, e.g.

Restore database liath from disk = N'\\192.168.0.32\volume_1\SQLBackups_D2\liath\liath_backup_2011_10_13_174000_5625000.bac' with norecovery
go

Restore the latest differential backup, e.g.

Restore database liath from disk = N'\\192.168.0.32\volume_1\SQLBackups_D2\liath\liath_backup_2011_10_14_000001_1250000.dif' with norecovery
Go

Restore all the transaction logs in sequence since the last differential backup, e.g.

Restore log liath from disk = N'\\192.168.0.32\volume_1\SQLBackups_D2\liath\liath_backup_2011_10_14_003001_1718750.trn' with norecovery
Go

Restore log liath from disk = N'\\192.168.0.32\volume_1\SQLBackups_D2\liath\liath_backup_2011_10_14_013001_3437500.trn' with norecovery
Go

Restore the final tail backup

Restore log liath from disk = N'\\192.168.0.32\volume_1\SQLBackups_D2\liath\liathTail.bak' with recovery
Go

Change the database back to multi-user mode

Alter database liath set multi_user
Go
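Before running a chain like this, it can be worth peeking inside each file to check what it actually contains. A quick sketch (the path is the example one from above; substitute your own):

```sql
-- List the backups held in a file (type, dates, LSNs) before restoring it
restore headeronly from disk = N'\\192.168.0.32\volume_1\SQLBackups_D2\liath\liathTail.bak'

-- And the data/log files inside it (handy when a MOVE clause is needed)
restore filelistonly from disk = N'\\192.168.0.32\volume_1\SQLBackups_D2\liath\liathTail.bak'
```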

Remove Completely Unused Indexes

Here’s a great little script I found that lists indexes in the current database with no usage stats at all. That is, indexes that have never been read from or even written to (since the last service restart, note). I’m not sure how much overhead these have on the system, but it’s probably worth clearing them out anyway.

From the results I wouldn’t delete the ‘system’ indexes (that is ones starting with ‘MS~’ or ‘sys~’ and the like), or Clustered Indexes 🙂

I was about to work-up a script to delete them, then (phew) remembered … “dba-rule #23: never automate deletes”

SELECT object_name(i.object_id) as tableName, i.name as indexName
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s ON i.object_id = s.object_id AND i.index_id = s.index_id AND s.database_id = db_id()
WHERE objectproperty(i.object_id,'IsUserTable') = 1 and i.index_id > 0
AND s.object_id IS NULL
AND i.is_Primary_Key = 0
AND i.is_unique_constraint = 0
AND i.is_unique = 0
--DROP INDEX [index-name] ON [dbo].[table-name]
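In the spirit of rule #23, here’s a sketch that only prints candidate DROP statements rather than running them. Review the output (excluding system and clustered indexes, as above) before executing anything:

```sql
-- Generates (but does not run) DROP INDEX statements for never-used,
-- non-unique, non-clustered indexes in the current database.
SELECT 'DROP INDEX ' + quotename(i.name)
     + ' ON ' + quotename(schema_name(o.schema_id)) + '.' + quotename(o.name)
FROM sys.indexes i
JOIN sys.objects o ON o.object_id = i.object_id
LEFT JOIN sys.dm_db_index_usage_stats s
       ON i.object_id = s.object_id AND i.index_id = s.index_id AND s.database_id = db_id()
WHERE objectproperty(i.object_id, 'IsUserTable') = 1
  AND i.index_id > 1 -- skip heaps (0) and clustered indexes (1)
  AND s.object_id IS NULL
  AND i.is_primary_key = 0
  AND i.is_unique_constraint = 0
  AND i.is_unique = 0
```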

Record db Growth

To get my own metrics in place quickly at a new SQL 2008 R1 site I created a ‘dba’ database with a table called ‘dbsizes’. Then a job scheduled to run at 10pm every night, with the first step called ‘record sizes’ containing this code …

set nocount on

create table #sizes (
name varchar(50),
db_size varchar(50),
[owner] varchar(50),
[dbid] varchar(5), -- varchar(5), as dbid can exceed one digit
created smalldatetime,
[status] varchar(500),
compatibility_level varchar(3)) -- three chars, as compat levels reach 100

insert #sizes exec sp_helpdb

insert into dba.dbo.dbsizes
select name, getdate() dbdate, db_size
from #sizes
order by name desc

drop table #sizes
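For reference, the destination table would need to look something like this (my guess at the DDL, inferred from the insert above; adjust to taste):

```sql
-- Hypothetical DDL for the dba.dbo.dbsizes table the job writes to
create table dba.dbo.dbsizes (
    name varchar(50),
    dbdate smalldatetime,
    db_size varchar(50))
```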

Simple-Mode to Full-Recovery-Mode script

Leading on from my script to list databases set to Simple recovery-mode, I found changing them to Full was not as easy as just changing the sys.databases flag from ‘3’ to ‘1’ (not allowed).

I had a requirement to change a whole server full of databases but, thanks to our ‘smart’ backup software (which, on discovering new Full recovery-mode databases, would immediately want to take full backups of them), I had to stagger the changes to one an hour so the (production) box wouldn’t slow to a crawl.

Here’s the job-step …

--capture name of one simple database
declare @name varchar(50)
select @name = [name] from sys.databases where recovery_model = 3 and [name] not in ('master', 'msdb', 'tempdb')
-- 1 = Full, 3 = Simple

--change it to full (if one was found)
if @name is not null
begin
declare @cmd varchar(200)
set @cmd = 'alter database ' + quotename(@name) + ' set recovery full'
exec (@cmd)
end
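One caveat worth noting: a database switched to Full still behaves as if it were in Simple until its first full (or differential) backup, as the log chain only starts then. Here the backup software took care of that; without it, a follow-up step along these lines would be needed (the database name and path below are placeholders):

```sql
-- Hypothetical follow-up: seed the log chain of the newly-Full database
declare @name sysname, @bkp varchar(300)
set @name = 'SomeDatabase' -- placeholder: the database just switched to Full
set @bkp = 'backup database ' + quotename(@name)
         + ' to disk = ''X:\Backups\' + @name + '_baseline.bak'' with init'
exec (@bkp)
```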

When were the last Full, Diff, & Log Backups?

My all-time favorite script is this cross-tab script showing the latest backups of all databases on the current box …

--lastbackups.sql
select sdb.name,
max(case bst.[type] when 'D' then convert(varchar(16), bst.backup_finish_date, 120) end) last_full_backup,
max(case bst.[type] when 'I' then convert(varchar(16), bst.backup_finish_date, 120) end) last_diff_backup,
max(case bst.[type] when 'L' then convert(varchar(16), bst.backup_finish_date, 120) end) last_log_backup,
case sdb.[status]
when '0' then 'Full'
when '4' then 'Bulk-logged'
when '8' then 'Simple'
when '16' then 'Full'
when '24' then 'Simple'
when '32' then 'Restoring'
when '528' then 'Off_line'
when '536' then 'Off_line'
when '1048' then 'Read_only'
when '2072' then 'Restricted User'
when '65536' then 'Full'
when '65544' then 'Simple'
when '2098176' then 'Standby / Read-Only'
when '4194328' then 'Simple'
when '4259848' then 'Simple'
when '1073741840' then 'Full'
when '1073807360' then 'ShutDown'
else '' end db_status, sdb.[status]
from master..sysdatabases sdb
left join msdb..backupset bst on bst.database_name = sdb.name
where sdb.name != 'tempdb'
group by sdb.name, sdb.[status]
--order by sdb.name
order by max(bst.backup_finish_date) -- nulls at top
--order by sdb.[status]

Script to help manually resize datafiles.

Here’s a great script I use when manually resizing datafiles. It lists the percentage of free space, and suggests a new size (current size plus 60%) that would leave roughly 40% free.

--freespace.sql
SET NOCOUNT ON

IF OBJECT_ID('tempdb..#DBSTATS') IS NOT NULL
BEGIN
   DROP TABLE #DBSTATS
END

CREATE TABLE #DBSTATS (
   [dbname]   sysname,
   lname    sysname,
   usage    varchar(20),
   [size]   decimal(9, 2) NULL ,
   [used]   decimal(9, 2) NULL
)

IF OBJECT_ID('tempdb..#temp_log') IS NOT NULL
BEGIN
   DROP TABLE #temp_log
END

CREATE TABLE #temp_log
(
   DBName          sysname,
   LogSize         real,
   LogSpaceUsed    real,
   Status          int
)

IF OBJECT_ID('tempdb..#temp_sfs') IS NOT NULL
BEGIN
   DROP TABLE #temp_sfs
END

CREATE TABLE #temp_sfs
(
   fileid          int,
   filegroup       int,
   totalextents    int,
   usedextents     int,
   name            varchar(1024),
   filename        varchar(1024)
)

DECLARE @dbname sysname
       ,@sql varchar(8000)

IF OBJECT_ID('tempdb..#temp_db') IS NOT NULL
BEGIN
    DROP TABLE #temp_db
END

SELECT name INTO #temp_db
   FROM master.dbo.sysdatabases
   WHERE DATABASEPROPERTY(name,'IsOffline') = 0
   AND has_dbaccess(name) = 1
   ORDER BY name

WHILE (1 = 1)
BEGIN
   SET @dbname = NULL

   SELECT TOP 1 @dbname = name
   FROM #temp_db
   ORDER BY name

   IF @dbname IS NULL
      GOTO _NEXT

   SET @sql = ' USE [' + @dbname + '] 

      TRUNCATE TABLE #temp_sfs

      INSERT INTO #temp_sfs
         EXECUTE(''DBCC SHOWFILESTATS'')

      INSERT INTO #DBSTATS (dbname, lname, usage, [size], [used])
         SELECT db_name(), name, ''Data''
         , totalextents * 64.0 / 1024.0
         , usedextents * 64.0 / 1024.0
         FROM #temp_sfs

      INSERT INTO #DBSTATS (dbname, lname, usage, [size], [used])
         SELECT db_name(), name, ''Log'', null, null
         FROM sysfiles
         WHERE status & 0x40 = 0x40'

    EXEC(@sql)

    DELETE FROM #temp_db WHERE name = @dbname
END

_NEXT:

INSERT INTO #Temp_Log
   EXECUTE ('DBCC SQLPERF(LOGSPACE)')

--select * from #Temp_Log

UPDATE #DBSTATS
   SET SIZE = B.LogSize
   , USED = LogSize * LogSpaceUsed / 100
FROM #DBSTATS A
INNER JOIN #Temp_Log B
    ON (A.DBNAME = B.DBNAME)AND(A.Usage = 'LOG')

SELECT dbname AS [database name],
   lname AS [file name],
   usage,
   [size] AS [space allocated (MB)],
   [size]*1.6 as 'plus 60% (MB)',
-- used AS[space used (MB)],
-- [size] - used  AS [free space (MB)],
-- cast(used/[size]*100 AS numeric(9,2)) AS [space used %],
   cast(100-(used/[size]*100) AS numeric(9,2)) AS [free space %]
FROM #DBSTATS
ORDER BY [free space %] --dbname, usage

DROP TABLE #DBSTATS
DROP TABLE #temp_db
DROP TABLE #temp_sfs
DROP TABLE #temp_log

SET NOCOUNT OFF

Script to return the SQL Server service-account

--service_accounts.sql

-- return sql-server-service-account used by the default-instance
DECLARE @serviceaccount varchar(100)
EXECUTE master.dbo.xp_instance_regread
N'HKEY_LOCAL_MACHINE',
N'SYSTEM\CurrentControlSet\Services\MSSQLSERVER',
N'ObjectName',
@ServiceAccount OUTPUT,
N'no_output'
SELECT @Serviceaccount ServiceAccount_DefaultInstance

-- ditto for named-instance
DECLARE @serviceaccount2 varchar(100)
exec master.dbo.xp_instance_regread
N'HKEY_LOCAL_MACHINE',
N'System\CurrentControlSet\Services\MSSQL$sqlins01', -- edit
N'ObjectName',
@ServiceAccount2 OUTPUT,
'no_output'
SELECT @Serviceaccount2 ServiceAccount_NamedInstance

-- and for named-agent
DECLARE @agentaccount varchar(100)
EXECUTE master.dbo.xp_instance_regread
N'HKEY_LOCAL_MACHINE',
N'SYSTEM\CurrentControlSet\Services\sqlagent$sqlins01', --edit
N'ObjectName',
@agentaccount OUTPUT,
N'no_output'
select @agentaccount ServiceAccount_NamedAgent
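As an aside, on later builds (SQL 2008 R2 SP1 onwards, if memory serves) there’s a tidier way that avoids the registry altogether:

```sql
-- Requires SQL Server 2008 R2 SP1 or later
select servicename, service_account, status_desc, startup_type_desc
from sys.dm_server_services
```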

Search All Tables – script

--dba_SearchAllTables.sql
--Searches all columns of all tables for a given search string (4.47 mins)
--EG: exec dba_searchalltables 'patriqu%'

create proc dba_SearchAllTables
(
	@SearchStr nvarchar(100)
)
AS
begin

	create table #Results (ColumnName nvarchar(370), ColumnValue nvarchar(3630))

	set nocount on

	declare @TableName nvarchar(256), @ColumnName nvarchar(128), @SearchStr2 nvarchar(110)
	set  @TableName = ''
--	set @SearchStr2 = quotename('%' + @SearchStr + '%','''')
	SET @SearchStr2 = quotename('' + @SearchStr + '','''')

	while @TableName is not null
	begin
		set @ColumnName = ''
		set @TableName =
		(
			select min(quotename(table_schema) + '.' + quotename(table_name))
			from information_schema.tables
			where table_type = 'BASE TABLE'
				and	quotename(table_schema) + '.' + quotename(table_name) > @TableName
				and	objectproperty(
						object_id(
							quotename(table_schema) + '.' + quotename(table_name)
							 ), 'IsMSShipped'
						       ) = 0
		)

		while (@TableName is not null) and (@ColumnName is not null)
		begin
			set @ColumnName =
			(
				select min(quotename(column_name))
				from information_schema.columns
				where table_schema = parsename(@TableName, 2)
					and	table_name = parsename(@TableName, 1)
					and	data_type in ('char', 'varchar', 'nchar', 'nvarchar')
					and	quotename(column_name) > @ColumnName
			)

			if @ColumnName is not null
			begin
				insert into #Results
				exec
				(
					'select ''' + @TableName + '.' + @ColumnName + ''', left(' + @ColumnName + ', 3630)
					from ' + @TableName + ' (nolock) ' +
					' where ' + @ColumnName + ' like ' + @SearchStr2
				)
			end
		end
	end

	select ColumnName, ColumnValue from #Results
end

GO

Orphaned Indexes

Our ‘check indexes’ job was failing occasionally with an error like …

Msg 7999, Level 16, State 40, Line 1 Could not find any index named ‘index_name’ for table ‘table_name’.

I tried to just exclude the named index, but a few days later another one appeared. The fix was to dig into the SP and add a condition to the statement …

...
insert into #indexlist
            select name
	    from sysindexes
	    where name not like '_WA%'
	    and indid not in (0,255)
	    and [root] is not null
	    and id = @tab_id
...

The Oracle SCN issue

Oracle-database uses a large number (‘SCN’) as a time-stamp for referential-integrity, backup/restores, patch levels etc etc. When instances are linked the highest SCN number overwrites all others.

Trouble is … a ‘warning limit’ is being reached (three-quarters of the actual limit). When that’s exceeded, the Oracle database assumes it’s corrupt and becomes unstable.

But wait! There’s a ‘fix’, a patch which lets you manually set the SCN back below the soft limit. BUT this needs to happen on all linked instances within a short period, or they are just updated again from the one with the highest number (d’oh!).

And Finally … the fix is only available for the latest version of Oracle. So an obscure, forgotten, out-of-date, instance in the back-of-beyond can bring down hundreds of shiny new machines (‘patched’ or not) until upgraded.

‘Oracle SCN management’ then, not a disaster just another ongoing DBA task 🙂

Taste the freshness of a restore.

I struggled for ages to find the simplest way to handle a situation where a backup file may or may not have been uploaded to an FTP site for restoring (see my last post, which worked fine, but failed to produce the desired error message when the folder was empty).

As it only took a few minutes to restore this particular database, it was acceptable to blindly restore it and then decide if doing so had been useful. For this I made use of the backup_set_id column in the restorehistory table within MSDB.

This column records which backup-set each restore came from, so it was easy enough to see whether the last two restores had the same value. Here’s the full job-step code …

select top 2 backup_set_id
into #temp
from restorehistory
where destination_database_name = 'AdventureWorks'
order by restore_history_id desc

if
(select max(backup_set_id) from #temp)
 =
(select min(backup_set_id) from #temp)
exec sp_send_dbmail
 @recipients = 'somebody@somewhere.com',
 @subject = 'AdventureWorks was restored from an old backup';

Using ‘CoreFTP’ command line

To download a backup from an FTP site then delete the original …

C:\path\corecmd.exe -s -O -site site -d /path/latestbackup.bak -p c:\ftp -delsrc -log c:\logfile.log

-s = silent

-O = overwrite

-site = pre configured connection details within coreftp gui

-d = path to ftp site

-p = local destination

-delsrc = delete source after successful download

-log = record a logfile

******** UPDATE Dec 2012 ********

I found that this utility was the only way I could script/schedule FTPS uploads & downloads. Here are more working examples …

“C:\Program Files (x86)\CoreFTP\corecmd.exe” -s -OS -site thirdpartyname -u “E:\DBA\thirdpartyname\somefilename_2012*.txt” -p “/export files/” -log E:\DBA\thirdpartyname\logfile.log

“C:\Program Files (x86)\CoreFTP\corecmd.exe” -s -OS -site thirdpartyname -d “/Client imports/somefilename_*.txt” -p \\mylocalsqlserver\dba\thirdpartyname -log \\mylocalsqlserver\dba\thirdpartyname\logfile.log

Note:-
– The use of double quotes so the utility isn’t confused by spaces in the path.
– The -OS switch which means copy everything except what’s already copied.
– The use of wild-cards with -OS to simplify management of file-names containing dates.
– The use of -u for uploading, and -d for downloads.

Best Practice – Creating a Procedure that includes remote references

To create or alter a stored procedure that contains references to data on remote servers you need to connect using an account that can access all the data.

That is, to successfully alter an SP that includes links to 2 remote servers, connect (in SSMS) using an account that is common to both entries in the ‘Linked Servers’ configuration. Test this by drilling down through ‘Linked Servers’ to see both lots of tables.

Kill all users

In a technical sense of course 🙂

Here’s the code – very handy before a restore …

DECLARE processes CURSOR FOR
  select spid from sysprocesses
  where dbid = db_id('AdventureWorks')
  and status in ('runnable','sleeping')
  and spid != @@spid -- never kill your own connection
DECLARE @processid int   -- spid of active processes

OPEN processes --open the cursor
 FETCH FROM processes INTO @processid
 WHILE (@@FETCH_STATUS = 0)
  --cycle through the list and kill each spid
 BEGIN
  Exec ('kill ' + convert(varchar(10), @processid)) -- convert, as exec() can't concatenate an int
  Print @processid
  Print 'killed'
  FETCH FROM processes INTO @processid
 END -- END OF WHILE (@@FETCH_STATUS = 0)
CLOSE processes
DEALLOCATE processes

Find stored-procedure on current server

A SQL-Job was failing with Could not find stored procedure ‘RemovePushMsgs’.

I noticed the drop-down value ‘database’ was missing from the job-step, which was odd. To confirm my suspicion that the database had been removed but the job forgotten, I wrote the following script to search each and every database for the missing SP.

--find_sp.sql
-- to find stored-procedure names like '%RemoveM%' in all databases

create table #temp (name varchar(50), dbname varchar(50), xtype varchar(50))

exec sp_Msforeachdb "use [?];insert #temp select name, '?', xtype from sysobjects where name like '%RemoveM%'"

select name, dbname, case xtype
when 'P' then 'SP'
when 'S' then 'System Table'
when 'X' then 'XP'
when 'U' then 'Table'
when 'TF' then 'FunctionT'
when 'FN' then 'FunctionS'
when 'V' then 'View'
when 'IF' then 'FunctionI'
when 'D' then 'Default'
else xtype end [type]
from #temp
order by 1

drop table #temp

Nightly Restore from FTP

Similar to a previous project, a customer wanted a SQL 2005 database refreshed overnight from a backup on an FTP server. Using an already-in-place tool (‘CoreFTP’) via an already-in-place stored procedure (‘usr_FTPDownload’) I created a job …

Step-1 “Download backup(s) from ftp site”
exec dba.dbo.usr_FTPDownload
@ftpfile = '*.bak',
@DLdir = 'E:\FTP',
@ftpprofile = 'FTP',
@logfile = 'E:\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\Restore of AdventureWorks from FTP.log',
@email = 'someone@somewhere.com' -- on failure

Step-2 “Restore the latest backup”
-- GET NAME OF LATEST BACKUP
create table #temp (col1 varchar(max))
insert #temp(col1)
exec xp_cmdshell 'dir /b /o-d e:\ftp\*.bak'

declare @newest varchar(50)
set @newest = (select top 1 * from #temp)

--BUILD DYNAMIC COMMAND
declare @cmdstr varchar(200)
set @cmdstr = 'restore database AdventureWorks
from disk=''e:\ftp\' + @newest + '''
with file=1,
nounload,
replace,
stats=10'

--RUN
exec (@cmdstr)

Step-3 “grant access to user” (running on AdventureWorks)
create user an_other for login an_other
exec sp_addrolemember 'db_datareader', 'an_other'

Step-4 “rename the backup-file for housekeeping”
-- get name of latest backup
create table #temp (col1 varchar(max))
insert #temp(col1)
exec xp_cmdshell 'dir /b /o-d e:\sftp\*_*.bak'

declare @oldname varchar(50)
set @oldname = (select top 1 * from #temp)

--check if already renamed
if @oldname like '%File Not Found%' return

-- extract name-part
declare @namepart varchar(50)
set @namepart = left(@oldname, (patindex('%[_]%', @oldname) - 1)) -- text before the underscore

-- convert date-part into day-of-week
declare @weekday varchar(10)
set @weekday = datepart(dw, right(left(@oldname, (patindex('%.%', @oldname) - 1)), 10))

-- build up the new name
declare @newname varchar(50)
set @newname = @namepart + @weekday + '.bak'

-- run it
declare @cmdstr varchar(100)
set @cmdstr = 'ren e:\sftp\' + @oldname + ' ' + @newname
exec xp_cmdshell @cmdstr

Step-5 “email results – on success”

declare @datetime as varchar(20)
declare @email as varchar(100)

set @datetime = convert(varchar(17), (select top 1 restore_date
from msdb.dbo.restorehistory
where (destination_database_name = 'AdventureWorks')
order by restore_date desc), 113)

set @email = 'Database ''AdventureWorks'' Refreshed at ''' + @datetime + '''.'

exec msdb.dbo.sp_send_dbmail
@recipients = 'someone@somewhere.com',
@subject = @email ;

Script to list Simple-recovery-mode databases.

-- simple_mode.sql
-- script to list Simple-recovery-mode databases

declare @ver varchar(50)
set @ver = (select @@version)

-- SQL 2000
if @ver like '%2000%'
select name from sysdatabases where status & 8 = 8 and name != 'tempdb' -- 8 = 'trunc. log on chkpt.', i.e. Simple

-- SQL 2005/8
else
select name simple_mode from sys.databases where recovery_model = 3 -- (3=Simple, 1=Full)

Enabling the ‘Dedicated Administrator Connection’

You need to have pre-enabled DAC before you can use this feature (that’s emergency access to SQL Server Management Studio for administrators), and it only seems to work for default instances. Firstly, connect as normal to SSMS …

– right-click on the server/instance and choose ‘facets’
– choose the bottom facet ‘Surface Area Configuration’
– change ‘RemoteDacEnabled’ to ‘True’ and click OK
– close SSMS

To test
– open SSMS on the local machine
– Click the Cancel button
– click the ‘x’ to close Object Explorer
– choose File / New / Database Engine Query
– prefix the Server name with ‘admin:’ and click Connect.

You can now run commands to troubleshoot the problem, but without help from the Object Explorer window.

Manually Resizing SQL files

The easy way to calculate what size of datafile would have 40% free …
– using the GUI choose Tasks / Shrink / Files
– if the percentage free is less than 40%
– find the last numeric value on the page. EG: “MB (Minimum is 998 MB)”
– multiply this value by 1.6. EG: 1593
– resize the datafile to 1593 MB or better, i.e. a round 1600 MB

(btw, on this subject – my best-practice is to adjust the increment-size so there is about one ‘autogrowth’ each week).
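The same resize (and the autogrowth tweak) can be scripted; a sketch with illustrative database, file name and sizes:

```sql
-- Illustrative: database name, logical file name and sizes are placeholders
alter database SomeDatabase
modify file (name = 'SomeDatabase_Data', size = 1600MB)

-- set a fixed growth increment sized for roughly one autogrowth per week
alter database SomeDatabase
modify file (name = 'SomeDatabase_Data', filegrowth = 100MB)
```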

What physical box am I on?

To find out the Windows Server Name that is currently running the live node use

begin try
exec xp_cmdshell 'ping -a localhost'
end try

begin catch
exec sp_configure 'show advanced options', 1 -- to show the advanced options
reconfigure
exec sp_configure 'xp_cmdshell', 1 -- to enable the command-shell
reconfigure
exec xp_cmdshell 'ping -a localhost' -- run a dos-command
exec sp_configure 'xp_cmdshell', 0 -- to disable the command-shell
reconfigure
exec sp_configure 'show advanced options', 0 -- to hide the advanced options
reconfigure
end catch
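On SQL 2005 and later there’s a cleaner way that needs no xp_cmdshell at all:

```sql
-- Returns the Windows computer-name of the node currently hosting the instance
select serverproperty('ComputerNamePhysicalNetBIOS') as physical_node
```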

SQL Server Auditing

After running the logins-report script (below) before an external audit, I wanted to ‘lock’ or ‘disable’ unused accounts for a month – just to confirm that they really were unused before deleting them 🙂

Unfortunately SQL 2000 doesn’t have a facility to lock/disable SQL accounts, so I changed the passwords to a known value. This loses the original p/w forever, but gives us more options than merely deleting the accounts.

option-a inform the user of the new password.
option-b create a new account with the configuration of the ‘old’ account.

Windows logins (on the other hand) ARE lockable in SQL 2000. You use

exec sp_denylogin ‘domain\user’

and

exec sp_grantlogin ‘domain\user’

to unlock it (if required).

BTW: You need to use SQL 2000 Enterprise Manager to see locked Windows accounts, not SQL 2005/8 Management Studio.

UPDATE: watch out for the situation where you disable a user’s individual account (as above), then make them a member of a group for access. Remember ‘deny’ overrules ‘grant’.

Script to list dates of last Full backups



--last_full_backup.sql
select sdb.name,
convert(varchar, max(bst.backup_finish_date), 113) last_full_backup
from master..sysdatabases sdb
left join msdb..backupset bst
on bst.database_name = sdb.name
left join msdb..backupmediafamily bmf
on bmf.media_set_id = bst.media_set_id
where sdb.name <> 'tempdb'
and (bst.type is null -- never been backed up
or bst.type = 'D') -- or full
group by sdb.name
order by last_full_backup -- oldest dates at top
--order by sdb.name -- alphabetical by name

SSH Authentication

To log in to a customer’s remote Oracle/AIX server – without any passwords being exchanged – I downloaded PuTTYgen, entered a pass-phrase, and generated matching public and private keys.

On my desktop I had two text-files called ‘public’ and ‘private’. I emailed the public one to the customer for them to authorize, and saved the private one to a file on my local machine (called ‘id_rsa.ppk’ if I recall).

I configured putty with the username/host (eg: oracle@127.0.0.1), and in the SSH/auth box browsed to the ppk file. I saved my settings and exited.

When I started putty again I loaded the profile for this host and checked everything had been saved correctly, then connected.

I was logged in as ‘oracle’ and asked for my original pass-phrase. And thats all there was to it :-))

Nightly database restore

A customer had a third-party application (‘QlikView’) that needed a database backup downloading from the third party’s FTP site every night and restoring over the ‘old’ one.

Already supplied was a vbs script that would do the downloading and produce a log-file.

Actually, I improved the vbs a bit to add leading zeros to the date fields if needed. Here are the old then new lines …

' strFile = strDayOfMonth & strMonth & strYear & " " & strHour & strMinute

strFile = Right("0" & strDayOfMonth, 2) & Right("0" & strMonth, 2) & strYear & " " & strHour & strMinute

My task then was a) to read the log-file and b) if the download had been successful to restore the backup overwriting the ‘old’ database.

I looked at Bulk-Insert & SSIS, but they could only import the log-file not read it, before settling on the Operating-System command …

‘findstr “success” E:\somefolder\somefile.txt’

… as job-step-1. I ensured that if the step failed (IE: the string was not found), the whole job would end.

Then in step-2 pasted some transact I’d bashed-out to run the restore …

RESTORE DATABASE adventureworks
FROM DISK='E:\somefolder\adventureworks.bak'
WITH FILE=1,
NOUNLOAD,
REPLACE,
STATS = 10
GO

BTW: To quickly create the right ‘restore’ command I went through the configuration-screens of the restore-wizard (not forgetting to set the ‘overwrite’ option) then hit ‘script’ to generate typo-free code.

Job step-3 was to grant db_reader access to a user. Here’s the code …

create user [domain\QlikviewDevelopers] for login [domain\QlikviewDevelopers]
exec sp_addrolemember 'db_datareader', 'domain\QlikviewDevelopers'

Finally, job-step-4 was to email the results …

declare @datetime as varchar(20)
declare @email as varchar(100)

set @datetime = convert(varchar(17), (select top 1 restore_date
from msdb.dbo.restorehistory
where (destination_database_name = 'adventureworks')
order by restore_date desc), 113)

set @email = 'Database ''Adventureworks'' Refreshed ''' + @datetime + '''.'

exec msdb.dbo.sp_send_dbmail
@recipients = 'someone@somewhere.co.uk',
@subject = @email ;

As the vbs script was to run every 2am I created a sql-job to execute every 3am.

autocheck wip

-- autocheck.sql, RbS, to automate daily checks.

-- create (temp) table (during development) to store results
SET NOCOUNT ON
if object_id('tempdb..#ac') is not null drop table #ac
create table #ac(
cdate smalldatetime default getdate(),
c_free_gb int,
f_free_gb int,
up_since smalldatetime,
last_err smalldatetime,
last_jobf int)

-- start a new row with today's date
insert into #ac default values

-- temp-table to store raw drive-space
if object_id('tempdb..#fs') is not null drop table #fs
create table #fs(
drive char(1),
mb_free int)
insert #fs exec xp_fixeddrives

-- save drive-spaces
update #ac set c_free_gb = mb_free/1024 from #fs where drive = 'C'
update #ac set f_free_gb = mb_free/1024 from #fs where drive = 'F'

-- save up_since
update #ac set up_since = crdate from sys.sysdatabases where name = 'tempdb'

-- latest error in sql-error-log
if object_id('tempdb..#er') is not null drop table #er
create table #er(
logdate datetime,
processinfo varchar(20),
text varchar(500))

insert #er exec xp_ReadErrorLog 0, 1, 'error' -- errors in current sql-error-log
update #ac set last_err = (select max(logdate) from #er)

-- date of last job failure
update #ac set last_jobf = (select max(run_date) from msdb.dbo.sysjobhistory where run_status = 0) -- 0 = failed

select * from #ac

Database Sizes

The script below is very useful for listing database sizes. However sometimes it comes up with the following error…

Cannot insert the value NULL into column ‘owner’;

This is the result of a database not having an owner, which is quite a serious condition. Happily it is easily identified like this …

select name, suser_sname(sid) owner from master..sysdatabases

… and fixed like this …

alter authorization on database::[SomeDatabase] to sa;

Or this (which is deprecated from SQL 2012) …

exec sp_changedbowner ‘sa’;

Back on topic 🙂 … here’s the script …

-- dbsize.sql
set nocount on

create table #sizes (
name varchar(50),
db_size varchar(50),
owner varchar(50),
dbid varchar(5), -- varchar(5), as dbid can exceed one digit
created smalldatetime,
status varchar(500),
compatibility_level varchar(3)) -- three chars, as compat levels reach 100

insert #sizes exec sp_helpdb

select name, db_size
from #sizes
order by db_size desc -- nb: db_size is text, so this is an alphabetic sort

drop table #sizes

Dataguard gap check

Thanks to E for this manual check that dataguard is updating a standby machine.

On Primary

alter system switch logfile;
select * from v$log;
Record the current Sequence#

On Standby

select process, sequence#, status from v$managed_standby;
Check RFS & MRP0 values match, and are related to the primary Sequence#.

Disk Space Used (GB)

I cobbled together this script from bits on the internet. It works fine, but needs OLE Automation to be enabled on the server. Should you wish to do this, use the Surface-Area configuration tool.

-- "DiskSpaceUsed.sql" eg: C-Drive 65.3GB used
SET NOCOUNT ON
DECLARE @hr int
DECLARE @fso int
DECLARE @drive char(1)
DECLARE @odrive int
DECLARE @TotalSize varchar(20)
DECLARE @MB bigint

SET @MB = 1048576

CREATE TABLE #drives (drive char(1) PRIMARY KEY,
FreeSpace int NULL,
TotalSize int NULL)

INSERT #drives(drive,FreeSpace)
EXEC master.dbo.xp_fixeddrives

EXEC @hr=sp_OACreate 'Scripting.FileSystemObject',@fso OUT
DECLARE dcur CURSOR LOCAL FAST_FORWARD
FOR SELECT drive from #drives
ORDER by drive

OPEN dcur
FETCH NEXT FROM dcur INTO @drive

WHILE @@FETCH_STATUS=0
BEGIN

EXEC @hr = sp_OAMethod @fso,'GetDrive', @odrive OUT, @drive
EXEC @hr = sp_OAGetProperty @odrive,'TotalSize', @TotalSize OUT
UPDATE #drives

SET TotalSize=@TotalSize/@MB
WHERE drive=@drive
FETCH NEXT FROM dcur INTO @drive
END

CLOSE dcur
DEALLOCATE dcur
EXEC @hr=sp_OADestroy @fso

SELECT Drive,
cast((TotalSize - FreeSpace)/1024.0 as decimal(4,2)) as 'Used(GB)'
FROM #drives
ORDER BY drive
DROP TABLE #drives

A future proof backup plan.

Yup, I’m really getting to like SQL 2008. How’s this for two backup plans? The first takes a full backup of every database (system included) at 1am every night. The second backs up the transaction logs of every database every 4 hours, from 00:45.

The beauty is, plan-1 will automatically incorporate any new databases that the developers care to create. Similarly plan-2 backs up every database’s transaction log, ignoring those in Simple recovery mode.
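The plan does the work, but the same idea can be sketched in T-SQL, looping over every database not in Simple recovery and backing up its log (the backup path is a placeholder):

```sql
-- Sketch: back up the log of every Full/Bulk-logged database
declare @name sysname, @cmd varchar(500)
declare dbs cursor for
    select name from sys.databases
    where recovery_model_desc != 'SIMPLE'
    and state_desc = 'ONLINE'
    and name != 'tempdb'
open dbs
fetch next from dbs into @name
while @@fetch_status = 0
begin
    set @cmd = 'backup log ' + quotename(@name)
             + ' to disk = ''X:\Backups\' + @name + '_log.trn'''
    exec (@cmd)
    fetch next from dbs into @name
end
close dbs
deallocate dbs
```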

First look at SQL 2008

My first look at sql 2008 (standard edition, 64 bit, on Windows 2008) was … mixed.

First thing I wanted to do was set up a full daily on-disk backup. I found I was unable to back up to a remote server (i.e. ‘\\[ipaddress]\backup$’), as I would get ‘access denied’. Further, I was then unable to edit the maint-plan, as the Modify option on the context menu didn’t seem to work. For now I just deleted the plan and made a new one that backed up locally.

On the brighter side I like the ‘activity monitor’ with the row of graphs at the top. Influenced by Oracle, methinks.

Additionally I found the new object-explorer-details panel a joy, although it’s really just a re-working of the old SQL 2000 ‘Taskpad’ view.

So, my initial impression of SQL 2008 is that it’s fresh, reflective of current fashion, and a bit buggy.

RMAN-06059: expected archived log not found

The customer alerted us that the scripted RMAN backups for smstrain were failing. I confirmed this by looking in the log …

oracle> cd /mserv/oracle/logs/RMAN

oracle> ls -lrt *smstrain*

oracle> pg sms_rmanhot_smstrain_Thu.log

Starting backup at 04-JUN-09 05:10:28
current log archived
released channel: t1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup plus archivelog command at 06/04/2009 05:14:05
RMAN-06059: expected archived log not found, lost of archived log compromises recoverability
ORA-19625: error identifying file /dbsms/smstrain/oradata/smstrain/archive/arch_641734909_1_6050.arc
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3

… which seems to say that an expected archive-log was not found. I guess another backup deleted it, or perhaps it was ‘tidied away’ to save space.
The fix was to manually resync RMAN’s catalog for the instance …

oracle> . oraenv

oracle> smstrain

oracle> rman target /

rman> crosscheck archivelog all;

rman> exit

This prompted me to a) resync all instances on the server, and b) run a full backup on each.

Manual Removal of Oracle9i from Solaris

I was asked to remove an unused Oracle 9206 database from a Solaris server. The first thing was to make a backup – because you never know 🙂

The database needed to be mounted so I could list all the files.

Select name from v$datafile;
Select name from v$controlfile;
Select member from v$logfile;

After ‘shutdown abort’ the unix administrator backed up these files.

I then deleted the same files, and hashed out this database’s entries in listener.ora and tnsnames.ora.