A regular index maintenance Job we run started returning an error …
Msg 7999, Level 16, State 40, Line 1
Could not find any index named 'index_name' for table 'table_name'.
… the fix was to drop the statistics for the non-existent index …
drop statistics table_name.index_name
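If it helps, here's a quick way to list the statistics on the table first (SQL 2005 and later; 'table_name' is the placeholder from above) …
select s.name, s.auto_created -- auto_created = 1 are the _WA_Sys_ ones; the rest belong to indexes or were created manually
from sys.stats s
where s.object_id = object_id('table_name')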
Script to list Simple-recovery-mode databases.
-- simple_mode.sql
-- script to list Simple-recovery-mode databases
declare @ver varchar(50)
set @ver = (select @@version)
-- SQL 2000
if @ver like '%2000%'
select name from sysdatabases where status = 24 -- simple, excluding tempdb
-- SQL 2005/8
else
select name simple_mode from sys.databases where recovery_model = 3 -- (3=Simple, 1=Full)
Enabling the ‘Dedicated Administrator Connection’
You need to have pre-enabled DAC before you can use this feature (that's emergency access into SQL Server via Management Studio for administrators), and it only seems to work with default instances. First, connect to SSMS as normal
– right-click on the server/instance and choose 'Facets'
– choose the bottom facet 'Surface Area Configuration'
– change 'RemoteDacEnabled' to 'True' and click OK
– close SSMS
To test
– open SSMS on the local machine
– click Cancel on the initial 'Connect to Server' dialog
– click the 'x' to close Object Explorer
– choose File / New / Database Engine Query
– prefix the Server name with 'admin:' and click Connect.
You can now run commands to troubleshoot the problem, but without help from the Object Explorer window.
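The same setting can also be flipped from T-SQL (the facet appears to map to this sp_configure option), which saves a trip through the facets screens …
exec sp_configure 'remote admin connections', 1
reconfigure
And if you prefer the command line to SSMS, sqlcmd with the -A switch connects over the DAC too.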
Manually Resizing SQL files
The easy way to calculate what size of datafile would leave 40% free …
– using the GUI choose Tasks / Shrink / Files
– if the percentage free is less than 40%
– find the last numeric value on the page. EG: "MB (Minimum is 998 MB)"
– multiply this value by 1.6. EG: 998 x 1.6 = 1597
– resize the datafile to 1597 or better, IE: 1600 MB
(btw, on this subject, my best practice is to adjust the autogrowth increment so there is about one 'autogrowth' event each week).
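For the record, the resize itself can be done without the GUI too; a minimal sketch, with made-up database and logical file names …
alter database SomeDatabase
modify file (name = 'SomeDatabase_Data', size = 1600MB)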
What physical box am I on?
To find out the Windows server name that is currently running the live node, use …
begin try
    exec xp_cmdshell 'ping -a localhost'
end try
begin catch
    exec sp_configure 'show advanced options', 1
    reconfigure -- to show the advanced options
    exec sp_configure 'xp_cmdshell', 1
    reconfigure -- to enable the command-shell
    exec xp_cmdshell 'ping -a localhost' -- run a dos-command
    exec sp_configure 'xp_cmdshell', 0
    reconfigure -- to disable the command-shell
    exec sp_configure 'show advanced options', 0
    reconfigure -- to hide the advanced options
end catch
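On SQL 2005 and later the same answer is available without enabling xp_cmdshell at all …
select serverproperty('ComputerNamePhysicalNetBIOS') physical_node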
Reading text files in a query.
Occasionally with SQL 2008 jobs, I find I cannot open the job-step output file. A possible solution is to use xp_cmdshell, for example
exec xp_cmdshell 'type "C:\MSSQL.1\MSSQL\LOG\Check Backups.txt"'
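Another route that avoids xp_cmdshell altogether (SQL 2005 onwards, and it needs bulk-load permissions) is to pull the whole file back as one column, using the same path as above …
select BulkColumn
from openrowset(bulk 'C:\MSSQL.1\MSSQL\LOG\Check Backups.txt', single_clob) as job_output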
SQL Server Auditing
After running the logins-report script (below) before an external audit, I wanted to ‘lock’ or ‘disable’ unused accounts for a month – just to confirm that they really were unused before deleting them 🙂
Unfortunately SQL 2000 doesn't have a facility to lock/disable sql accounts, so I changed the passwords to a known value. This loses the original p/w forever, but gives us more options than merely deleting the accounts.
option-a: inform the user of the new password.
option-b: create a new account with the configuration of the 'old' account.
Windows logins (on the other hand) ARE lockable in SQL 2000. You use
exec sp_denylogin 'domain\user'
and
exec sp_grantlogin 'domain\user'
to unlock it (if required).
BTW: You need to use SQL 2000 Enterprise Manager to see locked Windows accounts, not SQL 2005/8 Management Studio.
UPDATE: watch out for the situation where you disable a user's individual account (as above), then make him a member of a group that has access. Remember 'deny' overrules 'grant'.
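For the record, from SQL 2005 onwards a sql login can be disabled properly instead of having its password scrambled ('SomeSqlLogin' below being whichever account is under suspicion) …
alter login [SomeSqlLogin] disable
-- and to re-enable later if it turns out to be needed after all
alter login [SomeSqlLogin] enable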
Script to list dates of last Full backups
--last_full_backup.sql
select sdb.name,
convert(varchar, max(bst.backup_finish_date), 113) last_full_backup
from master..sysdatabases sdb
left join msdb..backupset bst
on bst.database_name = sdb.name
left join msdb..backupmediafamily bmf
on bmf.media_set_id = bst.media_set_id
where sdb.name <> 'tempdb'
and (bst.type is null -- never been backed up
or bst.type = 'D') -- or full
group by sdb.name
order by last_full_backup -- oldest dates at top
--order by sdb.name -- alphabetical by name
Script to examine SQL Server 2000 logins
SSH Authentication
To log in to a customer's remote Oracle/AIX server, without any passwords being exchanged, I downloaded PuTTYgen, entered a pass-phrase, and generated matching public and private keys.
On my desktop I had two text files called 'public' and 'private'. I emailed the public one to the customer for them to authorise, and saved the private one locally as a .ppk file (called 'id_rsa.ppk' if I recall).
I configured putty with the username/host (eg: oracle@127.0.0.1), and in the SSH/auth box browsed to the ppk file. I saved my settings and exited.
When I started putty again I loaded the profile for this host and checked everything had been saved correctly, then connected.
I was logged in as 'oracle' and asked for my original pass-phrase. And that's all there was to it :-))
Nightly database restore
A customer had a third-party application ('QlikView') that needed a database backup downloading from the third party's ftp site every night and restoring over the 'old' one.
Already supplied was a vbs script that would do the downloading and produce a log-file.
Actually, I improved the vbs a bit to add leading zeros to the date fields if needed. Here are the old then new lines …
' strFile = strDayOfMonth & strMonth & strYear & " " & strHour & strMinute
strFile = Right("0" & strDayOfMonth, 2) & Right("0" & strMonth, 2) & strYear & " " & strHour & strMinute
My task then was a) to read the log-file and b) if the download had been successful to restore the backup overwriting the ‘old’ database.
I looked at Bulk-Insert and SSIS, but they could only import the log-file, not read it, so I settled on the operating-system command …
'findstr "success" E:\somefolder\somefile.txt'
… as job-step-1. I ensured that if the step failed (IE: the string was not found), the whole job would end.
Then in step-2 I pasted some T-SQL I'd bashed out to run the restore …
RESTORE DATABASE adventureworks
FROM DISK='E:\somefolder\adventureworks.bak'
WITH FILE=1,
NOUNLOAD,
REPLACE,
STATS = 10
GO
BTW: To quickly create the right ‘restore’ command I went through the configuration-screens of the restore-wizard (not forgetting to set the ‘overwrite’ option) then hit ‘script’ to generate typo-free code.
Job step-3 was to grant db_datareader access to a user. Here's the code …
create user [domain\QlikviewDevelopers] for login [domain\QlikviewDevelopers]
exec sp_addrolemember 'db_datareader', 'domain\QlikviewDevelopers'
Finally, job-step-4 was to email the results …
declare @datetime as varchar(20)
declare @email as varchar(100)
set @datetime = convert(varchar(17),(select top 1 restore_date
    from msdb.dbo.restorehistory
    where (destination_database_name = 'adventureworks')
    order by restore_date desc),113)
set @email = 'Database ''Adventureworks'' Refreshed ''' + @datetime + '''.'
exec msdb.dbo.sp_send_dbmail
    @recipients = 'someone@somewhere.co.uk',
    @subject = @email ;
As the vbs script was set to run at 2am every night, I created a sql-job to execute at 3am.
CPU Bound?
This query will show queued tasks in SQL 2005. Normally the last column should be all zeros.
select
scheduler_id,
current_tasks_count,
runnable_tasks_count
from
sys.dm_os_schedulers
where
scheduler_id < 255
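A companion check, for a second opinion: the share of wait time that is 'signal' wait (waiting for a CPU rather than for a resource); a persistently high percentage also points at CPU pressure …
select cast(100.0 * sum(signal_wait_time_ms) / sum(wait_time_ms) as decimal(5,2)) signal_wait_pct
from sys.dm_os_wait_stats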
autocheck wip
-- autocheck.sql, RbS, to automate daily checks.
-- create (temp) table (during development) to store results
SET NOCOUNT ON
drop table #ac
go
create table #ac(
cdate smalldatetime default getdate(),
c_free_gb int,
f_free_gb int,
up_since smalldatetime,
last_err smalldatetime,
last_jobf int)
-- start a new row with today's date
insert into #ac default values
-- temp-table to store raw drive-space
drop table #fs
go
create table #fs(
drive char(1),
mb_free int)
insert #fs exec xp_fixeddrives
-- save drive-spaces
update #ac set c_free_gb = mb_free/1024 from #fs where drive = 'C'
update #ac set f_free_gb = mb_free/1024 from #fs where drive = 'F'
-- save up_since
update #ac set up_since = crdate from sys.sysdatabases where name = ‘tempdb’
-- latest error in sql-error-log
drop table #er
go
create table #er(
logdate datetime,
processinfo varchar(20),
text varchar(500))
insert #er exec xp_ReadErrorLog 0, 1, 'error' -- errors in current sql-error-log
update #ac set last_err = (select max(logdate) from #er)
-- date of last job failure
update #ac set last_jobf = (select max(run_date) from msdb.dbo.sysjobhistory)
select * from #ac
Show Error Logs
--ShowErrorLogs.sql
set nocount on
exec xp_ReadErrorLog 0, 1, 'error' -- current sql error log
exec xp_ReadErrorLog 0, 1, 'fail'
exec xp_ReadErrorLog 0, 2, 'error' -- current sql agent log
exec xp_ReadErrorLog 0, 2, 'fail'
from source article
http://www.sqlteam.com/article/using-xp_readerrorlog-in-sql-server-2005
Database Sizes
The script below is very useful for listing database sizes. However sometimes it comes up with the following error…
Cannot insert the value NULL into column 'owner';
This is the result of a database not having an owner, which is quite a serious condition. Happily it is easily identified like this …
select name, suser_sname(sid) owner from master..sysdatabases
… and fixed like this …
alter authorization on database::[SomeDatabase] to sa;
Or this (which is deprecated from SQL 2012) …
exec sp_changedbowner 'sa';
Back on topic 🙂 … here’s the script …
-- dbsize.sql
set nocount on
create table #sizes (
name varchar(50),
db_size varchar(50),
owner varchar(50),
dbid char(1),
created smalldatetime,
status varchar(500),
compatibility_level char(3))
insert #sizes exec sp_helpdb
select name, db_size
from #sizes
order by db_size desc
drop table #sizes
Dataguard gap check
Thanks to E for this manual check that dataguard is updating a standby machine.
On Primary
alter system switch logfile;
select * from v$log;
Record the current Sequence#
On Standby
select process, sequence#, status from v$managed_standby;
Check RFS & MRP0 values match, and are related to the primary Sequence#.
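A second opinion I sometimes use on the standby (assuming v$archived_log is populated as normal): the highest sequence already applied should sit just behind the primary's current Sequence# …
select max(sequence#) from v$archived_log where applied = 'YES';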
Disk Space Used (GB)
I cobbled together this script from bits on the internet. It works fine, but needs OLE Automation to be enabled on the server. Should you wish to do this, use the Surface-Area configuration tool.
-- "DiskSpaceUsed.sql" eg: C-Drive 65.3GB used
SET NOCOUNT ON
DECLARE @hr int
DECLARE @fso int
DECLARE @drive char(1)
DECLARE @odrive int
DECLARE @TotalSize varchar(20)
DECLARE @MB bigint
SET @MB = 1048576
CREATE TABLE #drives (drive char(1) PRIMARY KEY,
FreeSpace int NULL,
TotalSize int NULL)
INSERT #drives(drive,FreeSpace)
EXEC master.dbo.xp_fixeddrives
EXEC @hr=sp_OACreate 'Scripting.FileSystemObject',@fso OUT
DECLARE dcur CURSOR LOCAL FAST_FORWARD
FOR SELECT drive from #drives
ORDER by drive
OPEN dcur
FETCH NEXT FROM dcur INTO @drive
WHILE @@FETCH_STATUS=0
BEGIN
EXEC @hr = sp_OAMethod @fso,'GetDrive', @odrive OUT, @drive
EXEC @hr = sp_OAGetProperty @odrive,'TotalSize', @TotalSize OUT
UPDATE #drives
SET TotalSize=@TotalSize/@MB
WHERE drive=@drive
FETCH NEXT FROM dcur INTO @drive
END
CLOSE dcur
DEALLOCATE dcur
EXEC @hr=sp_OADestroy @fso
SELECT Drive,
cast((TotalSize - FreeSpace)/1024.0 as decimal(10,2)) as 'Used(GB)'
FROM #drives
ORDER BY drive
DROP TABLE #drives
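Rather than the Surface-Area tool, OLE Automation can also be switched on from T-SQL on SQL 2005 and later (and switched back off afterwards, which is probably wise) …
exec sp_configure 'show advanced options', 1
reconfigure
exec sp_configure 'Ole Automation Procedures', 1
reconfigure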
Last Reboot
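A quick way to check this from T-SQL: tempdb is rebuilt every time the service starts, so its creation date is effectively the last SQL Server start (strictly the service start rather than the box reboot) …
select crdate last_sql_start from master..sysdatabases where name = 'tempdb'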
A future-proof backup plan.
Yup, I'm really getting to like SQL 2008. How's this for two backup plans? The first completes a full backup of every database (system included) at 1am every night. The second backs up the transaction-logs of every database every 4 hours, starting from 00:45.
The beauty is, plan-1 will automatically incorporate any new databases that the developer cares to create. Similarly, plan-2 backs up every database's transaction log, ignoring those in Simple recovery mode.
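For reference, the 'ignore Simple' part of plan-2 boils down to a filter on the recovery model; here's a rough sketch of the idea in T-SQL (not the maintenance plan itself, and E:\backups is a made-up path) …
declare @name sysname, @cmd varchar(500)
declare dbs cursor for
  select name from sys.databases
  where recovery_model_desc <> 'SIMPLE' and name <> 'tempdb'
open dbs
fetch next from dbs into @name
while @@fetch_status = 0
begin
  set @cmd = 'backup log [' + @name + '] to disk = ''E:\backups\' + @name + '.trn'''
  exec (@cmd)
  fetch next from dbs into @name
end
close dbs
deallocate dbs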
First look at SQL 2008
My first look at sql 2008 (standard edition, 64 bit, on Windows 2008) was … mixed.
First thing I wanted to do was set up a full daily on-disk backup. I found I was unable to back up to a remote server, IE: '\\[ipaddress]\backup$', as I would get 'access denied'. Further, I was then unable to edit the maint-plan, as the Modify option from the context menu didn't seem to work. For now I just deleted the plan and made a new one that backed up locally.
On the brighter side I like the 'activity monitor' with the row of graphs at the top. Influenced by Oracle, methinks.
Additionally I found the new object-explorer-details panel a joy. Although it's really just a re-working of the old sql2000 'Taskpad' view.
So, my initial impression of sql2008 is that it's fresh, reflective of current fashion, and a bit buggy.
Where’s tnsnames.ora ?
To locate tnsnames … echo $TNS_ADMIN.
RMAN-06059: expected archived log not found
The customer alerted us that the scripted RMAN backups for smstrain were failing. I confirmed this by looking in the log …
oracle> cd /mserv/oracle/logs/RMAN
oracle> ls -lrt *smstrain*
oracle> pg sms_rmanhot_smstrain_Thu.log
…
Starting backup at 04-JUN-09 05:10:28
current log archived
released channel: t1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup plus archivelog command at 06/04/2009 05:14:05
RMAN-06059: expected archived log not found, lost of archived log compromises recoverability
ORA-19625: error identifying file /dbsms/smstrain/oradata/smstrain/archive/arch_641734909_1_6050.arc
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information:
3
…
… which seems to say that an expected archive-log was not found. I guess another backup deleted it, or perhaps it was ‘tidied away’ to save space.
The fix was to manually resync RMAN's catalog for the instance …
oracle> . oraenv
oracle> smstrain
oracle> rman target /
rman> crosscheck archivelog all;
rman> exit
This prompted me to a) resync all instances on the server, and b) run a full backup on each.
Testing the ‘sys’ password.
An old gotcha 🙂
To test a 'sys' password without getting a false positive from your o/s credentials …
First check with a wrong password …
oracle> sqlplus "sys/wrongpw@sid as sysdba"
… then with the right password …
oracle> sqlplus "sys/rightpw@sid as sysdba"
Manual Removal of Oracle9i from Solaris
I was asked to remove an unused Oracle 9206 database from a Solaris server. The first thing was to make a backup – because you never know 🙂
The database needed to be mounted so I could list all the files.
Select name from v$datafile;
Select name from v$controlfile;
Select member from v$logfile;
After ‘shutdown abort’ the unix administrator backed up these files.
I then deleted the same files, and hashed out this database's entry in listener.ora and tnsnames.ora.
UNIX Deleting Directories
MSDE Crib Sheet
MSDE is the SQL2000 version of SQL Express
To interact with the server you should type osql -E at the command-prompt. This connects to the default MSDE local instance.
Upgrading SQL 2005 Express to SQL Server 2005 Standard
Having completed building a new application, a customer wanted to upgrade their back-end database from SQL Express to full SQL Server 2005 for licensing reasons.
Here’s my walk through …
Having already installed IIS, I navigated within a command-prompt to the media (cd "D:\Apps\SQL 2005\Disk 1") and ran "setup.exe SKUUPGRADE=1".
I accepted the licensing conditions, and continued installing components
1) SQL Server Database Services
2) Integration services
(resisting the temptation to re-assign the datafiles from the small C-drive to the larger D-drive), until getting to the 'Instance Name' screen, where I clicked the 'Installed Instances' button.
The next screen confirmed the SQL Express instance name; I clicked 'OK', then with the instance-name auto-completed clicked Next.
In the 'existing components' screen I ticked the one box to request an upgrade and clicked 'Next'.
--NOTES
The resulting upgrade is not obvious, however it can be confirmed by right-clicking on the server in Enterprise Manager and examining the version.
Following on from this, SP3 should be applied.
Export of GEMS database
To action the request to create a full daytime export of the GEMS database on LSC3, I first deleted the previous export from '/tmp/exports' to free the needed 7.7GB.
Then I navigated to '/ad00/orabase/admin/scripts/exports' and checked there was a directory called 'exp_gems_logs'. As GEMS is quite small, and the customer wants a 'daytime' export so as not to interfere with the evening backups, I ran the export at the command line with './exp_gems.sh'.
Thirty minutes later the request was complete.
Which Oracle instance am I connected to?
Within UNIX you can 'echo $ORACLE_SID', or from sqlplus use 'select instance_name from v$instance;'.
Export table and index DDL
As part of a project to reduce space within a cloned database it was decided that a large table could be excluded from the regular scripted copy.
I needed to extract the table's definition from the master, to create an empty version on the clone after the copy was complete.
Being Oracle version 9.2.0.9, I didn't have to use the 'exp' command (the output of which needs a lot of tidying), but could instead use the 'get_ddl' function of the 'dbms_metadata' package, like so …
-- Script to export the definition of table PROC_FIL and its index.
-- Remember to add semi-colons, amend 'storage (initial 1k next 1k minextents
-- 1 pctincrease 0 freelists 1 freelist groups 1 buffer_pool default)',
-- for example, and fix broken lines.
set heading off; set echo off;
set pages 999;
set long 90000;
spool ddl_pf.sql
select dbms_metadata.get_ddl('TABLE','PROC_FIL','KIS') from dual;
select dbms_metadata.get_ddl('INDEX','PF_PK','KIS') from dual;
spool off;
Reinstalling HPOV to stop it freezing.
After a while using HPOV over a slow network, the application may freeze when I try to perform any action that spawns a new window (eg: 'Closure Code').
The cure is to re-install, but it's not too difficult.
1) take copies of srv.DAT and Views.DAT from 'C:\Documents and Settings\richard smith\Application Data\Hewlett-Packard\OpenView\Service Desk'
2) delete the folder 'C:\Documents and Settings\richard smith\Application Data\Hewlett-Packard\OpenView\Service Desk'.
3) start up hpov and click ‘next’ within the connection wizard
4) paste the server name from srv.DAT (eg: 'nta-hpsd02.steria.co.uk:30999'), into the text-box.
5) type name and password (eg: 'Richard Smith', 'password').
6) type anything for the friendly name (eg: 'hpov'), then click finish.
7) before logging in, copy the Views.DAT back into the folder deleted in step-2
8) login as normal (there will be an unusually long delay first time)
How to change an Oracle users password temporarily?
In Oracle it is possible to change a password temporarily. This can be useful for DBAs who wish to act as a different user.
SQL> select username,password from dba_users where username='SCOTT';
USERNAME PASSWORD
-------- ----------------
SCOTT F894844C34402B67
Save the hash of the password somewhere safe. Then change it …
SQL> alter user scott identified by tmppassword;
Login with the credentials scott/tmppassword, then after doing your work change the password back by using an undocumented feature called "by values"
SQL> alter user scott identified by values 'F894844C34402B67';
Tablespace has only 1% free and cannot extend any datafiles
Similar to this page where I manually extended a tablespace that had only 1% free.
But this time I needed to create a new datafile, as all the current datafiles were at their max (2GB for this customer).
1) Confirmed tablespace 'I_LARGE5' has only 1% free.
column owner format a10
column "%free" format 999
select a.tsname, a.total "Tot/Kb", a.total - b.free "Used/Kb",
b.free "Free/K", (b.free * 100) / a.total "%free"
from
(select tablespace_name tsname, sum(bytes)/1024 total
from dba_data_files
group by tablespace_name ) a,
(select tablespace_name tsname, sum(bytes)/1024 free
from dba_free_space
group by tablespace_name ) b
where a.tsname = b.tsname(+)
/
TSNAME Tot/Kb Used/Kb Free/K %free
------------------------------ ---------- ---------- ---------- -----
D_LARGE1 58312704 56477696 1835008 3
I_LARGE5 56929280 56142848 786432 1
2) I listed the I_LARGE5 datafiles to see how large the current ones are, and to get the path and name of the latest one.
col tablespace_name for a20
col file_name for a40
select tablespace_name, file_name, bytes/1024/1024
from dba_data_files
where tablespace_name = 'I_LARGE5'
order by file_name;
3) looked in the path to make sure there was enough space for a new 1GB file.
df -k /db01/oradata/CIX/
4) Then created a new datafile, incrementing the name by one …
alter tablespace I_LARGE5 add datafile '/db01/oradata/CIX/i_large5_14.dbf' size 500m;
5) lastly, as the datafiles of this database are copied to a standby server every night I needed to amend the copy scripts to include this newly created datafile.
Oracle Financials ‘Active Users’ report.
To get a report of Active Users for a customer I
– logged in as 'sysadmin'.
– Started Systems Administrator
– Chose Requests / Run / Single Request
– Typed 'active' in the Name field, and clicked Submit.
– Chose Active Users, ok and ok again.
– Clicked Refresh Data until I found the 'Active Users' job complete …
– Chose View Output, then Tools / Copy File, File / Save as
– Chose the location as h-drive (a shared drive) / Sequent / ANN (in this case) and clicked Save.
– I exited Oracle Financials
– Typed 'net use' at a command prompt to find the path to the h-drive
– And emailed the file’s name and location to the customer (eg: \NW-GIR-TSTSSHARESequentANN)
Tablespace has only 1% free
My daily checks revealed that tablespace D_LARGE3 had only 1% free (NB: these are manually extended).
I listed the current datafiles to see the path and name, and to see how large the current ones were …
col tablespace_name for a20
col file_name for a40
select tablespace_name, file_name, bytes/1024/1024
from dba_data_files
where tablespace_name = 'D_LARGE3'
order by file_name;

TABLESPACE FILE_NAME                                BYTES/1024/1024
---------- ---------------------------------------- ---------------
D_LARGE3   /db010/oradata/CIX/d_large3_01.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_02.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_03.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_04.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_05.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_06.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_07.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_08.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_09.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_10.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_11.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_12.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_13.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_14.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_15.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_16.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_17.dbf                  1985
D_LARGE3   /db010/oradata/CIX/d_large3_18.dbf                  1985
D_LARGE3   /db013/oradata/CIX/d_large3_19.dbf                  1000
… and noticed the last one was only 1GB in size. Therefore I didn’t need to add a new datafile, just expand it …
alter database datafile '/db013/oradata/CIX/d_large3_19.dbf' resize 2000m;
I confirmed this had worked with the ts-free script (attached) …
column owner format a10
column "%free" format 999
select a.tsname, a.total "Tot/Kb", a.total - b.free "Used/Kb",
b.free "Free/K", (b.free * 100) / a.total "%free"
from
(select tablespace_name tsname, sum(bytes)/1024 total
from dba_data_files
group by tablespace_name ) a,
(select tablespace_name tsname, sum(bytes)/1024 free
from dba_free_space
group by tablespace_name ) b
where a.tsname = b.tsname(+)
/

TSNAME                             Tot/Kb    Used/Kb     Free/K %free
------------------------------ ---------- ---------- ---------- -----
D_LARGE1                         58312704   56477696    1835008     3
D_LARGE3                         56929280   56142848     786432     2
job done :-))
btw, here's a similar incident where I had to add a datafile (the 'cannot extend any datafiles' post above).
How big is the current Oracle instance?
Here's some code to report the size of the current Oracle instance …
col "Database Size" format a20
col "Free space" format a20
col "Used space" format a20
select round(sum(used.bytes) / 1024 / 1024 / 1024 ) || ' GB' "Database Size"
, round(sum(used.bytes) / 1024 / 1024 / 1024 ) -
round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space"
, round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
from (select bytes
from v$datafile
union all
select bytes
from v$tempfile
union all
select bytes
from v$log) used
, (select sum(bytes) as p
from dba_free_space) free
group by free.p
/
Useful Windows commands
This will be my repository for tried-and-tested useful Windows commands …
1) This is my perpetual alarm-clock whilst working at home during the week. Stops me clock-watching, and lets me know when I'm doing overtime. (Don't forget to check the messenger-service is started).
at 17:00 /every:M,T,W,Th,F net send "richard" Its Five O'Clock lol
2) In the path (EG: C:\Windows) I create a file 'remindme.cmd' containing the code …
at %1 net send "richard" %*
exit
… then at some time in my busy future I can set a simple, robust, reminder (robust as I don't need anything running, like an email client). I just open a command prompt and type something like 'remindme 13:58 call joe for lunch'.
(Oh, and don’t forget to check the messenger-service is working).
MySQL monitoring tools
"AjaxMyTop" is a really useful MySQL monitoring tool.
If you're on a web-server, and therefore have Apache and MySQL readily available, you can easily implement a live web-page version of the Unix 'top' command.
– Download and unpack the 'ajaxMyTop' folder into htdocs.
– Configure the config.php file with user (root?), and password
– Then browse to http://localhost/ajaxMyTop/ and wait.
After a few seconds you should at the least see a line for 'show processlist' generated by the tool.
Testing Apache, PHP, and MySQL.
Here’s the simplest test-page I can think of that will test …
– the supplied credentials
– PHP can connect to MySQL
– Apache can display the results
<?php
@mysql_connect("localhost","root","[password]");
$result = mysql_query("show databases");
echo "<h1>Databases this user can connect to …</h1>";
while ($row=mysql_fetch_array($result)) echo($row["Database"]."<br />");
?>
Paste it into a file called 'test.php' (with the correct password) in Apache's 'htdocs' folder, then browse to 'http://localhost/test.php'.