Oracle Tables Performance Tuning

Caching Tables

Before we start our discussion on table caching, a brief description of Oracle's Least Recently Used chain (LRU chain) is in order. When server processes read data blocks from files and place them into Oracle's data buffer cache, the block addresses are recorded in the Most Recently Used (MRU) side of the LRU chain.

As new blocks are read into the data buffer cache, their addresses are also placed on the MRU side of the LRU chain. Oracle makes room for the new addresses by moving existing addresses toward the Least Recently Used (LRU) side of the chain. When Oracle needs buffers for new blocks, it takes them from the LRU side of the chain; modified (dirty) blocks found there are placed on the dirty list so they can be written to disk before their buffers are reused.

If you have small lookup tables (some call them code or domain tables) that are being forced out of memory, you can keep them in memory longer by using the CACHE parameter of the CREATE TABLE and ALTER TABLE statements. You can also use the CACHE and NOCACHE hints on individual SQL statements to turn caching on and off.
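
For example, caching a small lookup table (the table name here is made up for illustration) looks like this:

CREATE TABLE state_codes
(code VARCHAR2(2), description VARCHAR2(30)) CACHE;

ALTER TABLE state_codes CACHE;
ALTER TABLE state_codes NOCACHE;

SELECT /*+ CACHE(s) */ * FROM state_codes s;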

A common misconception is that the CACHE option forces tables into memory and they are never aged out. When Oracle detects a full table scan, it moves the block addresses obtained by the full table scan to the LRU side of the LRU chain. Oracle wants blocks returned by a full table scan to be aged out more quickly than blocks returned by other access paths.

The problem is that smaller tables, like lookup tables, are most often read by full table scans (and they should be). The CACHE parameter forces Oracle to place the scanned blocks on the MRU side of the LRU chain instead. They will still age out, just not as quickly.

Clustering Data in Tables

Clusters are Oracle's only mechanism for storing table data in a sorted order that matches a particular index, but they can be difficult to tune and administer and have limited applications. If you can't, or don't want to, use clusters, try this option.

If the application always accesses the table by the same column, or the table is always joined to other tables on the same column, consider sorting the data on that column before loading it. Static tables are the easiest to sort, but I have seen administrators in OLTP systems regularly pump data out of tables into flat files, sort the records into the proper order and reload them with the load utility.
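
As a sketch, assuming a table that is always joined on CUST_ID and a release that accepts an ORDER BY in a CREATE TABLE ... AS SELECT (the table names are illustrative), the reorder can be done inside the database:

CREATE TABLE customers_sorted AS
SELECT * FROM customers ORDER BY cust_id;

DROP TABLE customers;
RENAME customers_sorted TO customers;

Remember that the new table will need its indexes and grants recreated.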

Don't be misled by Oracle experts who say that data order doesn't improve retrieval and join performance - it most definitely does.

Merging Tables to Increase Performance

If you constantly join two or more tables together in an application, merge them into one table. It's called "de-normalizing for performance" (your data administrator may call it, and you, other things). It's simple and it works.
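
A minimal sketch, using hypothetical ORDERS and ORDER_LINES tables that are always joined on ORDER_ID:

CREATE TABLE order_details AS
SELECT o.order_id, o.cust_id, o.order_date,
       l.line_no, l.item_id, l.qty
FROM   orders o, order_lines l
WHERE  o.order_id = l.order_id;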

Hints

Don't use hints unless absolutely necessary. A hint alters a SQL statement to influence the optimizer, and a hint that helps today may hurt when the data, the application or the database release changes - and you won't know why performance suddenly degraded.

In most cases, you probably aren't as smart as the optimizer. Let the optimizer make the choice unless you are absolutely certain the optimizer is choosing the incorrect access path. One exception to the rule is when you use Oracle's Parallel Query Option (see next recommendation).

Parallel Hints

Oracle allows you to set the degree of parallelism for queries at the database, table and statement levels. Statement hints allow you to fine tune individual statements without increasing or decreasing the degree of parallelism for other statements. Set the degree of parallelism at the database and table level carefully. You must balance the number of parallel processes being spawned with a finite amount of system resources.
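
For example, a statement-level hint (the table name and degree are illustrative) keeps the parallelism local to one query:

SELECT /*+ FULL(s) PARALLEL(s, 4) */ COUNT(*)
FROM   sales s;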

Performance Testing

Remember the buffer cache when benchmarking and testing SQL. Subsequent runs may be faster because the data has been cached from previous runs.

Parallel Query

Use profiles in a pre-Oracle8i parallel query environment to limit the number of simultaneous sessions to the database and thereby "throttle" the number of parallel query processes spawned. With parallel query, a single statement can consume a surprising number of processes even with a fairly low parallel degree on the tables. A two-table join with both tables having DEGREE=4 would use 9 sessions (4 per table + 1 for SQL*Plus). This may be fine for a few users, but not when you have dozens of users accessing the tables in question. If a user's profile limits the number of sessions allowed, Oracle will parallelize one table and single-thread through the other.
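
A sketch of the profile approach (the profile name, user and limit are illustrative); note that resource limits are only enforced when RESOURCE_LIMIT is enabled:

CREATE PROFILE pq_users LIMIT SESSIONS_PER_USER 5;
ALTER USER scott PROFILE pq_users;
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;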

Space Utilization for Parallel Table Creates

You must adjust the INITIAL extent parameter for a table when using the parallel clause of the CREATE TABLE ... AS SELECT statement, because every parallel process allocates its own extent sized by the table's INITIAL parameter. Here's an example:

CREATE TABLE test2
STORAGE (INITIAL 2047M NEXT 500M ...)
PARALLEL (DEGREE 4)
AS SELECT * FROM test1;

The above statement would create a table that has 4 extents with each extent being 2047M in size.
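
If the goal is roughly 2G of total initial space, divide INITIAL by the degree of parallelism instead:

CREATE TABLE test2
STORAGE (INITIAL 512M NEXT 500M)
PARALLEL (DEGREE 4)
AS SELECT * FROM test1;

The four parallel processes now allocate 512M apiece, or about 2G in total.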

Buffer Performance is Critical to Good Oracle Performance

Make sure enough memory is allocated to the data buffer cache, the shared pool and the log buffer. Monitor the hit ratios of each to verify that they are sized adequately.
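
As a rough health check, the classic buffer cache hit ratio calculation from V$SYSSTAT looks like this (treat the result as an indicator, not a target):

SELECT 1 - (phy.value / (db.value + con.value)) "Buffer Cache Hit Ratio"
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';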

Tuning Oracle on NT

A couple of quick NT-specific items are covered below.

Terminating Oracle Processes (threads) on NT

Oracle on NT provides the ORAKILL command to kill individual Oracle sessions, which run as threads rather than processes on NT. You need to specify the thread ID in hex. Try ORAKILL /? or ORAKILL /h for more details.
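
On NT, V$PROCESS.SPID holds the thread ID, so a query along these lines (a sketch) identifies the thread to pass to ORAKILL; the SID and thread ID below are made up, and the thread ID may need converting to hex per the ORAKILL help:

SELECT s.username, p.spid thread_id
FROM   v$session s, v$process p
WHERE  s.paddr = p.addr;

ORAKILL orcl 8f4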

Tuning Pack

In my very unscientific polling of numerous DBAs, the tuning pack was rated to be the most beneficial of the three packs available from Oracle. The expert advisor is pretty smart. If you don't have tuning expertise, use it until you get the experience you need.

Backup and Recovery

It's the little things that bite you. Most botched recoveries can be attributed to human error. Make sure all tapes have the proper retention periods, verify that all backups are executing correctly and run test recoveries on a regular basis. Don't let missing tapes or backups cause you to lose data. Prevent miscommunications from occurring by meeting with operations and systems personnel on a regular basis. You don't want to hear statements like "I didn't know you wanted that backed up too" or "I thought that the tape retention was 1 day not 7 days" during a recovery.

RELAX

When you are notified of a database failure, take a deep breath and relax. Don't immediately begin to paste the database back together without a plan. Create a recovery plan, put it on paper, have others review it if you can, and then execute it. You shouldn't be trying to determine what the next step is in the middle of the recovery process. Plan your attack and then execute it.

Exports and Recovery

A common misconception is that a database can be recovered up to the current point-in-time by recreating the database, reloading the data with a full database import and using the logs to roll forward. This recovery methodology will not work. An export is a logical copy of the data: the import simply re-inserts the rows, and the rebuilt data files carry file headers and SCNs that do not match the original redo stream, so the old logs cannot be applied. The database will only be consistent to the point in time when the export file was created.

V$ Tables

Remember that many V$ tables are cumulative in nature; they track activity since the instance was started.
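
Because the numbers are cumulative, measure an interval by snapshotting and diffing. A minimal sketch (the snapshot table name is made up):

CREATE TABLE sysstat_snap AS
SELECT name, value FROM v$sysstat;

-- run the workload to be measured, then:

SELECT c.name, c.value - s.value delta
FROM   v$sysstat c, sysstat_snap s
WHERE  c.name = s.name
AND    c.value - s.value > 0;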

V$RECOVER_FILE

Query the V$RECOVER_FILE view to determine how many files are missing or corrupted before you have them restored. Oracle lists only the first file that it can't find in the error message it generates during startup; it won't tell you how many other files are missing. The operators will like you much better, and the recovery will be much faster, if you restore all of the files at once. Identify all of the missing files through V$RECOVER_FILE, restore them to their original locations and recover them by executing the RECOVER DATABASE command.
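
A query along these lines lists every file that needs recovery, with file names pulled in from V$DATAFILE:

SELECT d.name, r.error, r.change#
FROM   v$recover_file r, v$datafile d
WHERE  r.file# = d.file#;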

Watch NOLOGGING

The NOLOGGING option is great for performance, but it complicates recoveries to the current point-in-time. If you load or insert data with NOLOGGING and don't immediately take a backup, you're asking for trouble: a recovery to the current point-in-time will leave the database out of synch, because the recovery process cannot recover data that was inserted without logging.

Transactions replayed after the NOLOGGING statement or utility execution may be working against a data file that does not contain all of the data it should. Take a backup after every NOLOGGING statement or utility execution.
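
On releases that track it, V$DATAFILE records the last unrecoverable change to each file, which helps identify the files exposed by NOLOGGING operations (a sketch):

SELECT name, unrecoverable_change#, unrecoverable_time
FROM   v$datafile
WHERE  unrecoverable_change# > 0;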

Striped File Systems

If mirroring is not used, exercise caution when placing redo logs and/or control files on the same disk stripes as data files. If you lose any disk contained in the stripe, you also lose your ability to recover the database to the current point-in-time.

Data Files and Redo Logs

If mirroring is not used, don't place redo logs and data files on the same disk. Once again, you may lose your ability to recover the database to the current point-in-time. Separate your redo logs from your data files. If you lose either type of file, your database will still be recoverable up to the current point-in-time.

Redo Log and Control File Multiplexing

Make sure your redo log groups have two or more members each, with each member on a different disk drive. Multiple copies of the control file should also be maintained on different disks. There should be very few reasons ever to recover a database because all members of a redo log group or all control file copies were lost. Oracle instructors hate teaching recovery from these failures because they are so easy to protect against and so complicated to recover from.
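
Adding a member on another drive is a one-line change (the path and group number are illustrative):

ALTER DATABASE ADD LOGFILE MEMBER 'e:\oradata\orcl\redo01b.log' TO GROUP 1;

Additional control file copies are created by shutting the database down, copying the control file to another disk and listing the new path in the CONTROL_FILES initialization parameter.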

OCOPY for Hot Backups on NT

Oracle on NT provides the OCOPY command to circumvent NT file locking problems. Use OCOPY instead of the COPY command when performing hot backups on NT. COPY commands will receive a file locking error if they attempt to copy data files while the database is running.

Automate Your Hot Backup Scripts

Don't hard code tablespaces and data files in your hot backup scripts. Every time you alter the structure of the database (add a data file, create a tablespace, drop a tablespace) you must change the backup scripts to reflect the changes. If you miss updating the backup scripts after a database structure change occurs, your backups become unusable. Write your scripts to access the database V$ tables to generate the backup statements.

The following scripts are examples of how to automate a complete hot backup of an Oracle8 database on NT and on UNIX. The traditional caveats apply (use at your own risk). The scripts are easily changed to fit your specific environment. Both scripts back up the entire database, so they may not be appropriate for large environments. In addition, be careful about hard-coding an account and password in a script, as these examples do.

NT Script
NTBACKUP.CMD sets the ORACLE_SID and calls GENERATE_BACK.SQL. GENERATE_BACK.SQL reads the data dictionary and writes the backup commands to RUNBACK.SQL. NTBACKUP.CMD then performs the actual backup by running RUNBACK.SQL.

NTBACKUP.CMD
set ORACLE_SID=orcl
plus80 system/manager @d:\chris\sql\generate_back.sql
plus80 system/manager @d:\chris\sql\runback.sql

GENERATE_BACK.SQL
set heading off
set feedback off
set termout off
set echo off
spool d:\chris\sql\runback.sql
select 'set heading off' from dual;
select 'set echo on' from dual;
select 'spool c:\backup\runback.out' from dual;
select 'select ''backup started on'',TO_CHAR(SYSDATE, ''fmMonth DD, YYYY:HH:MI:SS'') from dual;' from dual;
select 'alter tablespace '||tablespace_name||' begin backup;' from sys.dba_tablespaces;
select ' host ocopy80 '||file_name||' c:\backup' from sys.dba_data_files;
select 'alter tablespace '||tablespace_name||' end backup;' from sys.dba_tablespaces;
select 'alter system switch logfile;' from dual;
select 'alter database backup controlfile to ''c:\backup\cntrlfile.bck'' reuse;' from dual;
select 'alter database backup controlfile to trace;' from dual;
select 'select ''backup ended on'',TO_CHAR(SYSDATE, ''fmMonth DD, YYYY:HH:MI:SS'') from dual;' from dual;
select 'spool off' from dual;
select 'exit' from dual;
spool off
exit

UNIX Script
This script is a little different. It runs on UNIX, does some simple error checking and backs up the tablespace data files one at a time so it won't dominate the I/O channels. It is another quick example that shows how to read the data dictionary to generate backup commands.

UNIXBACKUP.KSH sets several variables that will be used throughout the two scripts.

Runtime messages are written to HOTBACKUP.LOG. UNIXBACKUP.KSH calls GENERATE_HOTBACKUP.KSH which reads the data dictionary and writes the backup commands to RUNHOTBACK.SQL. UNIXBACKUP.KSH then performs the actual backup by running RUNHOTBACK.SQL.

UNIXBACKUP.KSH
#!/bin/ksh
ORACLE_SID=cif
export ORACLE_SID
GEN_BACKUP=/home/oracle/cif/scripts/generate_hotbackup.ksh
export GEN_BACKUP
RUN_BACKUP=/home/oracle/cif/tmp/runhotback.sql
export RUN_BACKUP
BACKUP_LOG=/home/oracle/cif/logs/hotbackup.log
export BACKUP_LOG
ARCH_DEST=/orcarc/cif
export ARCH_DEST
BACKUP_ARCH=
export BACKUP_ARCH
HOTBACKUP=/orc10/cif/backup
export HOTBACKUP
PROG=`basename $0`;export PROG
echo "Starting the Database Hot Backup" > $BACKUP_LOG

if [ ! -f $GEN_BACKUP ]
then
   echo "Shell Script $GEN_BACKUP not found!" >> $BACKUP_LOG
   exit 1
fi

$GEN_BACKUP $HOTBACKUP $RUN_BACKUP
if [ $? -ne 0 ]
then
   echo "execution of $GEN_BACKUP failed." >> $BACKUP_LOG
   exit 1
fi

echo "
@$RUN_BACKUP
" | sqlplus / >> $BACKUP_LOG
if [ $? -ne 0 ]
then
   echo "execution of $RUN_BACKUP failed." >> $BACKUP_LOG
   exit 1
fi

egrep "DBA-|ORA-" $BACKUP_LOG > /dev/null
if [ $? -eq 0 ]
then
   echo "errors were detected while performing a hot backup." >> $BACKUP_LOG
   echo "hot backup failed." >> $BACKUP_LOG
else
   echo "hot backup completed successfully." >> $BACKUP_LOG
fi
exit 0

GENERATE_HOTBACKUP.KSH
#!/bin/ksh
HOT_BACKUP=$1
RUN_BACKUP=$2
OUT_BACKUP=${RUN_BACKUP%.*}.out
sqlplus -s / << eof
set heading off
set feedback off
set termout off
set echo off
spool $RUN_BACKUP
select 'set heading off' from dual;
select 'set echo on' from dual;
select 'alter session set NLS_DATE_FORMAT = ''DD-MON-YY, HH24:MI:SS'';' from dual;
select 'spool $OUT_BACKUP' from dual;
select 'select ''backup started on:'',SYSDATE from dual;' from dual;
select 'alter system switch logfile;' from dual;
-- one line per data file; the nawk step after the heredoc wraps each
-- copy in its own begin backup / end backup pair
select ''||tablespace_name||' ! cp '||file_name||' $HOT_BACKUP' from sys.dba_data_files;
select 'alter system switch logfile;' from dual;
select 'alter database backup controlfile to ''$HOT_BACKUP/cntrlfile.bck'' reuse;' from dual;
select 'alter database backup controlfile to trace;' from dual;
select 'select ''backup ended on:'',SYSDATE from dual;' from dual;
select 'spool off' from dual;
select 'exit' from dual;
spool off
exit
eof
# Wrap each copy in its own begin/end backup pair and compress the copy,
# so only one tablespace is in backup mode at a time. fname[5] assumes
# data file paths four directories deep (e.g. /orc10/cif/data/file.dbf).
nawk -v backdest="$HOT_BACKUP" '
$3 == "cp" && $5 == backdest {
    split($4, fname, "/")
    printf("alter tablespace %s begin backup;\n %s%s %s %s\n !compress %s/%s\nalter tablespace %s end backup;\n", $1, $2, $3, $4, $5, backdest, fname[5], $1)
    next
}
{ print }' $RUN_BACKUP > tmpfile

/usr/bin/mv tmpfile $RUN_BACKUP
