Oracle & PL/SQL Tips
What is a deadlock and how does Oracle handle it?
A deadlock is a condition where two or more users are each waiting for data
locked by the other. Oracle automatically detects a deadlock and resolves
it by rolling back one of the statements involved, thus releasing one set
of data locked by that statement. The statement rolled back is usually the
one which detected the deadlock.
Deadlocks are mostly caused by explicit locking, because Oracle does not do lock escalation and does not use read locks. Multi-table deadlocks can be avoided by locking the tables in the same order in all applications, thus precluding a deadlock.
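As a sketch of that last point, suppose two applications both need to lock the same two tables (the table names here are hypothetical); agreeing on a fixed locking order precludes the circular wait:

```sql
-- Both applications always lock ORDERS before ORDER_ITEMS.
-- If session 1 already holds ORDERS, session 2 simply waits for it,
-- instead of grabbing ORDER_ITEMS first and creating a cycle.
LOCK TABLE orders      IN EXCLUSIVE MODE;
LOCK TABLE order_items IN EXCLUSIVE MODE;
-- ... perform the updates ...
COMMIT;  -- releases both table locks
```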
What is the "snapshot too old" error and how to avoid it?
The first thing to understand here is how rollback segments work. They
work in a circular fashion, looping around to reuse the extents which
have been released by committed transactions. For a long-running query,
if data is changed while the query is running, the rollback entries
created for that change are needed to build a read-consistent image for
the long-running query. When those extents are reused while they are still
needed by the query, this error occurs, because Oracle can no longer provide
a read-consistent image of the data.
To avoid this error you need to postpone the reuse of extents. Increasing the size of the rollback segments and the value of the OPTIMAL parameter should do the trick. Committing less often is another solution: until a transaction is committed its rollback extents cannot be reused, which reduces the chances of a "snapshot too old" error.
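As a sketch, assuming a rollback segment named rbs01 (a hypothetical name), the extent size and the OPTIMAL value can be raised like this:

```sql
-- Enlarge the segment's extents and raise OPTIMAL so that released
-- extents are kept around longer before being reused.
ALTER ROLLBACK SEGMENT rbs01
  STORAGE (NEXT 10M OPTIMAL 100M);
```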
How to find the version of each Oracle product installed
The $ORACLE_HOME/orainst/inspdver utility provides a list of all the Oracle
products installed on the server with their version numbers.
Set up for calling external procedures from PL/SQL
Oracle8 provides a cool new feature called "external procedures",
which in simple terms gives you the ability to call any program on the operating
system from within a stored program unit. Prior to this, the only way to do
this was to use daemons & pipes or Pro*C. For this to work, the DBA needs
to do the following setup.
Create a separate listener for external procedures. Add the following entries
to your listener.ora:
EXTERNAL_PROC_LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(Host = machine)(Port = 1526))
  )
SID_LIST_EXTERNAL_PROC_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = external)
      (ORACLE_HOME = /oracle/app/oracle/product/8.0.5)
      (PROGRAM = /oracle/app/oracle/product/8.0.5/bin/extproc)
    )
  )
One thing to take care of is that this SID should not clash with a valid
database SID.
Then add an alias in tnsnames.ora for this connection
extproc_connection_data.world =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(Host=machine)(Port=1526))
(CONNECT_DATA = (SID = external))
)
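With the listener and tnsnames entries in place, an external routine can be published to PL/SQL. The library name, shared-object path, and C function below are hypothetical; Oracle 8.0 syntax is shown:

```sql
-- Point a PL/SQL library at the shared object containing the C code
-- (path and names are placeholders, not part of the setup above):
CREATE OR REPLACE LIBRARY shell_lib AS '/usr/local/lib/libshell.so';
/
-- Publish the C function run_cmd() as a PL/SQL function:
CREATE OR REPLACE FUNCTION run_cmd (cmd IN VARCHAR2)
  RETURN BINARY_INTEGER AS EXTERNAL
  LIBRARY shell_lib
  NAME "run_cmd"
  LANGUAGE C
  PARAMETERS (cmd STRING, RETURN INT);
/
```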
Passing values back to a Unix script from SQL*Plus
Sometimes, when calling SQL scripts from shell scripts, you need to find
out whether the script failed or not. This is more imperative
if you have dependencies between jobs. The following is an example.
Write the following code in the SQL script:
set echo on
whenever sqlerror exit 1
whenever oserror exit 2
--your code--
exit 0
Write the following code in the shell script to receive the value:
# call the sql script here
ret_val=$?
Now ret_val will have the value 0 if the SQL script did not fail, and 1 or 2
otherwise.
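The calling side can be sketched as below. A shell function stands in for the actual SQL*Plus invocation (something like `sqlplus -s user/password @myscript.sql`, where the credentials and script name are placeholders), so the exit-code handling is visible on its own:

```shell
#!/bin/sh
# Stand-in for the sqlplus call; here it returns 1, as the SQL script
# would after WHENEVER SQLERROR EXIT 1 fires.
run_sql() {
  return 1
}

run_sql
ret_val=$?          # capture the exit status immediately

case "$ret_val" in
  0) echo "script succeeded" ;;
  1) echo "SQL error in script" ;;
  2) echo "OS error in script" ;;
esac
```

Note that `$?` must be read before any other command runs, since every command overwrites it.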
Identifying culprit rows when enabling of a constraint fails
When you need to load a huge amount of data, everybody says it is
better to disable the constraints for better performance. But what
if enabling a constraint afterwards fails due to bad data? You can find
the culprit records by using the EXCEPTIONS INTO clause of the ALTER TABLE
statement. For example:
ALTER TABLE test ENABLE CONSTRAINT pk_test EXCEPTIONS INTO exceptions;
where the exceptions table can be created by running the $ORACLE_HOME/rdbms/admin/utlexcpt.sql
script.
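Putting it together (the table name test and constraint pk_test follow the example above; the EXCEPTIONS table is the one utlexcpt.sql creates):

```sql
-- Create the exceptions table once per schema
-- (the script defines a table named EXCEPTIONS by default):
@?/rdbms/admin/utlexcpt.sql

-- Try to enable the constraint; failing rows are logged by ROWID:
ALTER TABLE test ENABLE CONSTRAINT pk_test EXCEPTIONS INTO exceptions;

-- List the culprit rows:
SELECT t.*
FROM   test t, exceptions e
WHERE  t.rowid = e.row_id;
```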
A simpler way of finding the plan and statistics of a query
Executing EXPLAIN PLAN and then selecting from plan_table is one way
to get the execution plan of a SQL statement. But for this you need the syntax
of both statements and the patience to type all that stuff for each statement.
Instead, SQL*Plus offers a nifty command to enable and disable the display
of the execution plan of each statement executed in that session. In addition,
it can display the statistics for each statement:
SET AUTOT[RACE] {OFF | ON | TRACE[ONLY]} [EXP[LAIN]] [STAT[ISTICS]]
Thus SET AUTOTRACE ON will do the whole work for you.
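A quick sketch of a session using it (the table and predicate are hypothetical, and the session needs a PLAN_TABLE and the PLUSTRACE role):

```sql
SET AUTOTRACE ON           -- show rows, then plan and statistics
SELECT * FROM emp WHERE deptno = 10;

SET AUTOTRACE TRACEONLY    -- suppress the rows, keep plan and statistics
SELECT * FROM emp;

SET AUTOTRACE OFF
```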
How to trap errors from SQL*Plus in NT .bat scripts
There is a variable ERRORLEVEL in NT. When you exit from SQL*Plus with
a constant and check this variable in the NT script, you can trap the error.
For example, IF ERRORLEVEL 1 PLUS33 will start up SQL*Plus if you specified
EXIT 1 to get out of SQL*Plus.
How to find your log block size?
You can find your log block size in bytes with the following query
(connected as SYS or internal):
select distinct lebsz from x$kccle;
This is the unit in which the log_checkpoint_interval parameter is
specified; it is normally the same as the operating system block size.
Some sample sizes are:
O/S Log Block Size
======= ==============
Solaris 512 bytes
HP-UX 1024 bytes
NT 512 bytes
OpenVMS 512 bytes
Digital UNIX 1024 bytes
To get it from the operating system, try grep DEV_BSIZE /usr/include/sys/param.h
How to find the datafile which contains a particular table?
If you query DBA_EXTENTS with segment_name equal to your table
name, you will see that the FILE_ID column shows the database file
where each extent is located. You can then use FILE_ID to query DBA_DATA_FILES
to find the name of the datafile, or you could do it all in one query using
a join.
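The single-query version might look like this (the owner and table name are placeholders for your own):

```sql
SELECT DISTINCT f.file_name
FROM   dba_extents    e,
       dba_data_files f
WHERE  e.file_id      = f.file_id
AND    e.owner        = 'SCOTT'      -- hypothetical owner
AND    e.segment_name = 'MYTABLE';   -- hypothetical table
```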
How to measure less than a second of time interval?
It is possible to measure a time interval of up to 1/100th of a second
in Oracle. The DBMS_UTILITY.GET_TIME function returns a number which increments
every 100th of a second. Keep in mind that this number can be negative as
well; thus it can only be used for measuring time intervals. Also, in the
sys.v_$timer view the column HSECS contains a number which likewise increments
every 100th of a second. This number can also be negative.
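A minimal timing sketch using GET_TIME; subtracting two readings cancels out any negative starting value:

```sql
DECLARE
  t_start NUMBER;
  t_end   NUMBER;
BEGIN
  t_start := DBMS_UTILITY.GET_TIME;
  -- ... the work being timed goes here ...
  t_end := DBMS_UTILITY.GET_TIME;
  -- Each tick is 1/100th of a second:
  DBMS_OUTPUT.PUT_LINE('Elapsed: ' || (t_end - t_start) / 100 || ' seconds');
END;
/
```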
The remaining tips, answered in order below:
What does 'SNIPED' status in v$session mean?
How to roll back sequences?
Database crashed; server manager doesn't come up
How to use a date index while using date ranges
Recover database to a different host using Legato & EBU
Recover database to a different host using Legato & RMAN
Sessions become SNIPED when IDLE_TIME is set in the users' profiles or the
default profile. Exceeding the idle time kills the session in the database
(its status in v$session becomes SNIPED) and it will eventually disconnect.
This does not always clean up the Unix session (the LOCAL=NO sessions): at
that point all Oracle resources are released, but the shadow process remains,
OS resources are not released, and the process is still counted against the
instance's process limit.
This tip comes from Kevin Loney. Use a negative increment and select
from the sequence once, then reset the increment back to its correct setting.
For example, this should set the qcs_ranum sequence back by 100:
alter sequence qcs_ranum increment by -100;
select qcs_ranum.nextval from dual;
alter sequence qcs_ranum increment by 1;
This happened to me one fine morning. For one of my databases, DBWR
crashed and then all the other processes died. After that, neither svrmgrl
nor sqlplus would even start; they just returned to the Unix prompt. The
errors in the alert log were ORA-600 and ORA-7445 errors. I was running 7.3.4
on Digital UNIX 4.0D. Shared memory was still allocated for this instance,
and I was getting 6 trace files in the udump directory every 5 minutes.
I worked with Oracle Support on this. It seemed I had hit a bug resolved
in 7.3.4.1. The solution was to find the shared memory segment of the victim
database by a process of elimination, using "oradebug ipc" on all the other
databases and "ipcs -b" at the Unix prompt, and then to remove the shared
memory segments and semaphores using ipcrm (-m for shared memory, -s for
semaphores).
It becomes tricky to use an index on a date column for date ranges, because
as soon as you use trunc(), oops! there goes your index. So here is a way
to use your index on a date column for date ranges...
where datefield >= to_date(to_char(date1,'DD-MON-YYYY')||' 00:00:00','DD-MON-YYYY HH24:MI:SS')
and datefield <= to_date(to_char(date2,'DD-MON-YYYY')||' 23:59:59','DD-MON-YYYY HH24:MI:SS')
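The same idea in contrast form, on a hypothetical BOOKINGS table with a booking_date DATE column: wrapping the column in trunc() disables the index, while a plain range predicate keeps it usable:

```sql
-- Disables an index on booking_date (function applied to the column):
--   WHERE trunc(booking_date) = :d
-- Index-friendly equivalent for the same whole day:
SELECT *
FROM   bookings
WHERE  booking_date >= :d        -- midnight at the start of the day
AND    booking_date <  :d + 1;   -- strictly before the next midnight
```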
Legato steps: Install the Legato client only (the server is not needed)
on the new node and copy /usr/opt/networker/bin/libobk.so to /usr/shlib/libobk.so.
NSR_CLIENT should be set to the name of the host whose data is being restored.
For example, if the database is on HOST_A, it has been backed up to tapes on
HOST_B, and the restore is being done on HOST_C, then NSR_CLIENT should
be HOST_A (called the client in Legato jargon). NSR_SERVER should be set to
the name of the host that backed up the client's data. Thus in the above
example NSR_SERVER would be HOST_B (called the server in Legato jargon).
HOST_C must be defined in the remote-access list of client HOST_A on HOST_B.
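For the HOST_A / HOST_B / HOST_C example above, the environment of the restore session on HOST_C would be set like this (the hostnames are the placeholders from the text):

```shell
# client: the host whose data was backed up
export NSR_CLIENT=host_a
# server: the Legato host that wrote the backup to tape
export NSR_SERVER=host_b
echo "restoring backup of $NSR_CLIENT from $NSR_SERVER"
```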
EBU steps: Create the directories where the datafiles will
be restored. Set up the OFA structure on the new host and copy the init.ora
and config.ora files. Create the link in the ORACLE_HOME/dbs directory for
the init.ora file. Add an entry in /etc/oratab and /etc/ORACLE.PROFILE for
the database. Change init.ora and config.ora to reflect the new directories.
If the link is not created, then the startup_pfile parameter should be
specified to give the location of the parameter file to be used.
The EBU script should be as follows for a restore of the latest cold database
backup. Note that db_name is not used to determine the database to be restored;
ORACLE_SID determines that.
restore database
  db_name = "DTSPD"
  backup_host = host_b
  parallel = 3
  rename
  remap_path = "/dts/oradata/DTSPD" to "/disk3/oradata/DTSPD",
               "/its/oradata/DTSPD" to "/disk3/oradata/DTSPD"
  archivelog = "/disk3/oradata/DTSPD"
  log = "/users/oracle/dba/legato/log/restore_dtspd.log"
The remap_path parameter should be used to specify the directories
where files are to be restored. One target directory can be specified for
each source directory.
The rename parameter should be used for automatic renaming of the data
and log files in the controlfile. EBU renames the online redo logs first
and the datafiles next, so if dummy online redo logs (neither EBU nor RMAN
backs up online redo logs) are not created prior to the restore, the
automatic renaming will fail. In that case either create a new controlfile
using the backup create-controlfile script, or rename all the datafiles
manually using the 'alter database rename file' command.
The backup_host parameter should be used to specify the host which
performed the backup that is being restored. Thus in our example it would
be HOST_B. (Test this one.)
For RMAN, the preparation is the same as for EBU above: create the
directories where the datafiles will be restored, set up the OFA structure
on the new host, copy the init.ora and config.ora files, create the link in
the ORACLE_HOME/dbs directory for the init.ora file, add entries in
/etc/oratab and /etc/ORACLE.PROFILE for the database, and change init.ora
and config.ora to reflect the new directories (if the link is not created,
specify startup_pfile to give the location of the parameter file). In
addition, create a password file for this new instance and specify a
password for internal, and create a symbolic link for the old online redo
log directory pointing to the new online redo log directory.
The RMAN script should be as follows for a restore of the latest database backup.
Unix Shell Script:
#!/bin/ksh
today=`date +%y%m%d%H%M`
export ORACLE_SID=TEST804
export ORACLE_HOME=/oracle/app/oracle/product/8.0.4
export PATH=/users/oracle/bin:/oracle/app/oracle/bin:/oracle/app/oracle/product/8.0.4/bin:/usr/bin:/usr/bin/X11:/usr/dt/bin:/usr/local/bin:.
export NLS_LANG=american
export NLS_DATE_FORMAT='Mon DD YYYY HH24:MI:SS'
svrmgrl << EOF
connect internal
startup nomount restrict
EOF
rman target internal/sys rcvcat rman/rman@rcat1 cmdfile restore_test804.txt
svrmgrl << EOF
connect internal
shutdown
EOF
RMAN Script:
run {
set newname for datafile '/sales1/oradata/TEST804/TEST804_system01.dbf'
to '/sm1/oradata/TEST804/TEST804_system01.dbf';
set archivelog destination to '/sm1/oradata/TEST804';
allocate channel t1 type 'sbt_tape'
parms 'ENV=(NSR_SERVER=host_b,NSR_CLIENT=host_a)';
allocate channel d2 type disk;
set until time 'Mar 18 1999 18:40:00';
restore controlfile to '/sm1/oradata/TEST804/control01.ctl';
replicate controlfile from '/sm1/oradata/TEST804/control01.ctl';
sql 'alter database mount';
restore (database);
switch datafile all;
}
The following point should be noted in this script: if a datafile copy
recorded in the recovery catalog conflicts with the restore, identify it
with
list copy of 'filename';
or by using the following query on the catalog database:
select name, cdf_key
from rc_datafile_copy where db_name = upper('&database_name');
and then remove it from the catalog using
change datafilecopy primary_key uncatalog;