More thoughts against Triggers

For Rob & Karl – triggers can run outside the calling transaction: one declared with PRAGMA AUTONOMOUS_TRANSACTION commits independently, so the insert that fired it may be rolled back, but the trigger rolls on.

Triggers introduce a long-term maintenance headache. You can read a stored procedure from top to bottom and imagine you understand what it does. But unless you examine every table it touches – you don’t. Little bits of code may be running silently which augment, or even reverse, some of the logic within the stored procedure.
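One defence (a sketch – the owner and table names below are only examples) is to ask the data dictionary directly which triggers sit on each table the procedure touches:

```sql
-- List any triggers on a table before assuming you understand
-- the code that writes to it (owner/table names are examples).
select trigger_name, triggering_event, status
from   all_triggers
where  table_owner = 'KIS'
and    table_name  = 'PROC_FIL';
```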

Triggers are used by lazy developers to ‘bolt on’ new features to applications, rather than track down all the code that could insert/update/delete from a table and add the code (or a link to it) there.

This would be forgivable if the application code were closed or proprietary, but never when the application is open to the application developer, who just cannot be bothered to integrate code changes properly, and cares not a jot about long-term maintenance headaches (slow breaths, slow breaths :))

The Oracle SCN issue

The Oracle database uses a large number (the ‘SCN’, or System Change Number) as a time-stamp for referential integrity, backup/restores, patch levels and so on. When instances are linked, the highest SCN overwrites all the others.

Trouble is … a ‘warning limit’ is being reached (three quarters of the actual limit), and when that is exceeded the Oracle database assumes it is corrupt and becomes unstable.
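You can at least see how close an instance is getting by reading its current SCN – a sketch using the standard dbms_flashback package:

```sql
-- Read the instance's current SCN (dbms_flashback is available
-- from 9i; on later versions v$database.current_scn also works).
select dbms_flashback.get_system_change_number from dual;
```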

But wait! There’s a ‘fix’: a patch which lets you manually set the SCN back below the soft limit. BUT this needs to occur on all linked instances within a short period, or they are just updated from the one with the highest number (d’oh!).

And finally … the fix is only available for the latest version of Oracle. So an obscure, forgotten, out-of-date instance in the back of beyond can bring down hundreds of shiny new machines (‘patched’ or not) until it is upgraded.

‘Oracle SCN management’, then: not a disaster, just another ongoing DBA task 🙂

SSH Authentication

To log in to a customer’s remote Oracle/AIX server – without any passwords being exchanged – I downloaded PuTTYgen, entered a pass-phrase, and generated matching public and private keys.

On my desktop I had two text files called ‘public’ and ‘private’. I emailed the public one to the customer, who authorized it by appending it to the ‘oracle’ user’s ~/.ssh/authorized_keys on the remote machine. The private key stayed on my desktop, saved by PuTTYgen as a .ppk file (‘id_rsa.ppk’ if I recall).

I configured PuTTY with the username/host (eg: oracle@…), and in the SSH/Auth box browsed to the ppk file. I saved my settings and exited.

When I started PuTTY again I loaded the profile for this host, checked everything had been saved correctly, then connected.

I was asked for my original pass-phrase and logged in as ‘oracle’. And that’s all there was to it :-))
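For reference, the OpenSSH command-line equivalent of the PuTTY steps above, as a sketch – the file names are examples, and the pass-phrase is left empty only so the demo runs non-interactively (in real use give -N a proper pass-phrase):

```shell
# Generate a key pair (empty pass-phrase here purely for the demo).
ssh-keygen -t rsa -b 2048 -f ./demo_key -N '' -q

# Only the *public* half goes to the remote server, appended to the
# target user's ~/.ssh/authorized_keys (simulated locally here).
mkdir -p ./fake_remote/.ssh
cat ./demo_key.pub >> ./fake_remote/.ssh/authorized_keys
chmod 700 ./fake_remote/.ssh
chmod 600 ./fake_remote/.ssh/authorized_keys

# The private key (./demo_key) never leaves your desktop; PuTTY users
# convert it to .ppk with PuTTYgen and point SSH/Auth at that file.
```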

Export table and index DDL

As part of a project to reduce space within a cloned database it was decided that a large table could be excluded from the regular scripted copy.
I needed to extract the table’s definition from the master, to create an empty version on the clone after the copy was complete.

Being a recent Oracle version, I didn’t have to use the ‘exp’ command (the output of which needs a lot of tidying), but could instead use the ‘get_ddl’ function of the ‘dbms_metadata’ package – like so …

-- Script to export the definition of table PROC_FIL and its index.
-- Remember to add semi-colons, amend 'storage (initial 1k next 1k minextents
-- 1 pctincrease 0 freelists 1 freelist groups 1 buffer_pool default);'
-- for example, and fix broken lines.
set heading off;
set echo off;
set pages 999;
set long 90000;
spool ddl_pf.sql
select dbms_metadata.get_ddl('TABLE','PROC_FIL','KIS') from dual;
select dbms_metadata.get_ddl('INDEX','PF_PK','KIS') from dual;
spool off;
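Incidentally, much of the manual tidying mentioned in the comments can be avoided by asking dbms_metadata to do it – a sketch using two of its standard transform parameters, run before the selects:

```sql
-- Make dbms_metadata append semi-colons and omit storage clauses,
-- so the spooled DDL needs little or no hand-editing.
begin
  dbms_metadata.set_transform_param(dbms_metadata.session_transform,
                                    'SQLTERMINATOR', true);
  dbms_metadata.set_transform_param(dbms_metadata.session_transform,
                                    'STORAGE', false);
end;
/
```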

Tablespace has only 1% free and cannot extend any datafiles

This is similar to a previous page, where I manually extended a tablespace that had only 1% free.

But this time I needed to create a new datafile, as all the current datafiles were at their maximum size (2GB for this customer).
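Before adding a file it is worth confirming that none of the existing ones can still grow – a sketch against the standard dba_data_files view:

```sql
-- Check current sizes and whether any file can still autoextend.
select file_name,
       bytes/1024/1024     size_mb,
       autoextensible,
       maxbytes/1024/1024  max_mb
from   dba_data_files
where  tablespace_name = 'I_LARGE5'
order  by file_name;
```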

1) Confirmed tablespace ‘I_LARGE5’ has only 1% free.

column owner format a10
column "%free" format 999
select a.tsname, a.total "Tot/Kb", a.total - b.free "Used/Kb", b.free "Free/K",
       (b.free * 100) / a.total "%free"
from
  (select tablespace_name tsname, sum(bytes)/1024 total
   from dba_data_files
   group by tablespace_name ) a,
  (select tablespace_name tsname, sum(bytes)/1024 free
   from dba_free_space
   group by tablespace_name ) b
where a.tsname = b.tsname(+);

TSNAME Tot/Kb Used/Kb Free/K %free
------------------------------ ---------- ---------- ---------- -----
D_LARGE1 58312704 56477696 1835008 3
I_LARGE5 56929280 56142848 786432 1

2) I listed the I_LARGE5 datafiles to see how large the current ones are, and to get the path and name of the latest one.

col tablespace_name for a20
col file_name for a40
select tablespace_name, file_name, bytes/1024/1024
from dba_data_files
where tablespace_name = 'I_LARGE5'
order by file_name;

3) Looked in the path to make sure there was enough space for a new file.

df -k /db01/oradata/CIX/

4) Then created a new datafile, incrementing the name by one …

alter tablespace I_LARGE5 add datafile '/db01/oradata/CIX/i_large5_14.dbf' size 500m;

5) Lastly, as the datafiles of this database are copied to a standby server every night, I needed to amend the copy scripts to include the newly created datafile.