Tuesday, December 1, 2009

Oracle Networking Notes

1>--HOST NAMING METHOD--

--PRIMARY DATABASE--
1>LISTENER SHOULD BE RUNNING

--CLIENT--
1>SQLNET.ORA SHOULD BE CONFIGURED WITH
NAMES.DIRECTORY_PATH= (HOSTNAME)

--PATH FOR SQLNET.ORA FILE--
LINUX: $ORACLE_HOME/network/admin
WINDOWS: \oracle\product\10.1.0\db_1\NETWORK\ADMIN

2>THE HOSTS FILE SHOULD BE REGISTERED WITH THE
PRIMARY DATABASE IP AND GLOBAL_DB_NAME

IP            GLOBAL_DB_NAME
10.60.10.95   DKDB

--PATH FOR HOSTS FILE--
LINUX: /etc/hosts
WINDOWS: C:\WINDOWS\system32\drivers\etc\hosts


----CONNECTING FROM CLIENT---
eg:SQLPLUS SYSTEM/DKDB@DKDB
SQL>CONN SYSTEM/DKDB@DKDB
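Putting the two client-side pieces together, the files for this example would contain the following (a sketch; HOSTNAME is the host naming adapter keyword, and the IP/name pair is the one shown above):

```
# sqlnet.ora on the client
NAMES.DIRECTORY_PATH = (HOSTNAME)

# /etc/hosts entry on the client
10.60.10.95   DKDB
```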


2>--EZCONNECT NAMING METHOD--

--PRIMARY DATABASE--
1>LISTENER SHOULD BE RUNNING

--CLIENT--
1>SQLNET.ORA SHOULD BE CONFIGURED WITH
NAMES.DIRECTORY_PATH= (EZCONNECT)

--PATH FOR SQLNET.ORA FILE--
LINUX:$ORACLE_HOME/NETWORK/ADMIN
WINDOWS:\oracle\product\10.1.0\db_1\NETWORK\ADMIN


--CONNECTING FROM CLIENT--
eg:SQLPLUS SYSTEM/DKDB@10.60.10.95:1521/DKDB
SQL>CONN SYSTEM/DKDB@10.60.10.95:1521/DKDB



3>--TNS NAMING METHOD--

--PRIMARY DATABASE--
1>LISTENER SHOULD BE RUNNING

--CLIENT--
1>SQLNET.ORA SHOULD BE CONFIGURED WITH
NAMES.DIRECTORY_PATH= (TNSNAMES)

--PATH FOR SQLNET.ORA FILE--
LINUX:$ORACLE_HOME/NETWORK/ADMIN
WINDOWS:\oracle\product\10.1.0\db_1\NETWORK\ADMIN

2>TNSNAMES.ORA SHOULD BE CONFIGURED WITH
DKDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.60.10.95)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = dkdb)
)
)

---CONNECTING FROM CLIENT-----
eg:SQLPLUS SYSTEM/DKDB@DKDB
SQL> CONN SYSTEM/DKDB@DKDB

To increase DB process

You might also want to check whether the number of connections is approaching the database's PROCESSES parameter, using one of the following Unix commands:

ps -ef | grep oracleSID | grep -v grep | wc -l

or

ps aux | grep oracleSID | grep -v grep | wc -l

If the value found is close to the value of the PROCESSES parameter, you can increase it using the following command (PROCESSES is a static parameter, so scope=spfile is required rather than optional):

alter system set processes=<new value> scope=spfile;

A database bounce is required to let the new value become active.
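As a quick sketch, the check can be scripted (the SID DKDB is a placeholder from these notes; the ora[c]le bracket trick stops grep from matching its own command line, so the extra `grep -v grep` stage becomes unnecessary):

```shell
#!/bin/sh
# Count dedicated server processes for one instance.
SID=DKDB   # placeholder; set to your instance SID
# grep -c prints the match count; the [c] prevents grep matching itself.
count=$(ps -ef | grep -c "ora[c]le${SID}")
echo "server processes for ${SID}: ${count}"
```

Compare the printed count against the PROCESSES parameter before deciding to raise it.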

crontab clone db

#30 20 * * 5 /home/oracle/clone_scripts/clone_fopsfat1.sh

-------------------clone_fopsfat1.sh-----------------

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/data/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_HOME_LISTENER=$ORACLE_BASE; export ORACLE_HOME_LISTENER
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=${PATH}:$ORACLE_HOME/bin; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

export ORACLE_SID=fopsfat1
lsnrctl stop listenerfat1

cd /data/fopsfat1
rm *.*
cd /data/fops
cp *.* /data/fopsfat1/
cd /data/fopsfat1
rm *.ctl
rm temp01.dbf
cd /data/app/oracle/admin/fopsfat1
rm -rf udump bdump cdump adump
mkdir udump bdump cdump adump

sqlplus 'sys/fopsdb2008 as sysdba' @/home/oracle/clone_scripts/fopsfat1conttrc.sql

exit
---------------fopsfat1conttrc.sql--------------------------


SHUTDOWN ABORT;
-- Use a literal path here: SQL*Plus does not expand $ORACLE_HOME inside quotes
STARTUP NOMOUNT pfile='/data/app/oracle/product/10.2.0/db_1/dbs/initfopsfat1.ora'
CREATE CONTROLFILE SET DATABASE "fopsfat1" RESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 887
LOGFILE
GROUP 1 '/data/fopsfat1/redo01.log' SIZE 50M,
GROUP 2 '/data/fopsfat1/redo02.log' SIZE 50M,
GROUP 3 '/data/fopsfat1/redo03.log' SIZE 50M
-- STANDBY LOGFILE
DATAFILE
'/data/fopsfat1/system01.dbf',
'/data/fopsfat1/undotbs01.dbf',
'/data/fopsfat1/sysaux01.dbf',
'/data/fopsfat1/users01.dbf',
'/data/fopsfat1/admin_data_01.dbf',
'/data/fopsfat1/admin_data_02.dbf',
'/data/fopsfat1/load_data_01.dbf',
'/data/fopsfat1/navdata_data_01.dbf',
'/data/fopsfat1/spatial_data_01.dbf',
'/data/fopsfat1/statw_data_01.dbf',
'/data/fopsfat1/statw_data_02.dbf',
'/data/fopsfat1/tools_data_01.dbf',
'/data/fopsfat1/weather_data_01.dbf'
CHARACTER SET WE8ISO8859P15
;

alter database open resetlogs;

-- re-create incarnation records.
ALTER TABLESPACE TEMP ADD TEMPFILE '/data/fopsfat1/temp01.dbf'
SIZE 1097M REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

alter user system account unlock;

alter user system identified by fopsdb2008;

!lsnrctl start listenerfat1

exit
------------------------------

Note: 1. Add an entry for the clone in the oratab file.
2. Create a pfile and edit it with the new DB name and file locations.
3. Create a password file.
4. Edit the contents of the control file trace and rename it.
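For steps 1 and 3, the entries might look like the following (the Oracle home path comes from the script above; the password shown is the one the clone script uses, so treat both as examples to adapt):

```
# /etc/oratab entry for the clone (N = do not auto-start via dbstart)
fopsfat1:/data/app/oracle/product/10.2.0/db_1:N

# Password file, created as the oracle OS user
orapwd file=$ORACLE_HOME/dbs/orapwfopsfat1 password=fopsdb2008
```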

crontab monitor sessions blocking

#* * * * * /data/app/oracle/product/10.2.0/db_1/kill/monitor.sh > /dev/null 2>&1


-----------------------------------monitor.sh----------------------------------

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/data/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_HOME_LISTENER=$ORACLE_BASE; export ORACLE_HOME_LISTENER
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=${PATH}:$ORACLE_HOME/bin; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

sqlplus system/fopsdb2008@fopsfat1 @/data/app/oracle/product/10.2.0/db_1/kill/monitor.sql

-----------------monitor.sql-----------------------------
set linesize 120

set echo off

set heading off

spool /data/app/oracle/product/10.2.0/db_1/kill/monitor_lock.spool


select s1.username,s1.osuser || '@' || s1.machine
|| ' ( SID=' || s1.sid || ' ) is blocking '
|| s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2;


select sysdate from dual;

spool off

!cat /data/app/oracle/product/10.2.0/db_1/kill/monitor_lock.spool >> /data/app/oracle/product/10.2.0/db_1/kill/monitor.txt
exit

crontab kill blocking sessions using process id

How to kill locking session: -
1) Creating .sh file to automate the job through crontab: -
kill.sh: -
# .bash_profile
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
PATH=${PATH}:$ORACLE_HOME/bin; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

cd $ORACLE_HOME/kill
sqlplus system/fopsdb2008@fopsfat1 @/u01/app/oracle/product/10.2.0/db_1/kill/kill.sql
exit

2) Creating .sql file to get the locked session ID in OS level: -
kill.sql: -

set heading off
set echo off
spool /u01/app/oracle/product/10.2.0/db_1/kill/kill_in.sh
select 'kill -9 '|| vp.spid from v$session vs, v$process vp
where vs.paddr = vp.addr
and vs.sid in (select s1.sid from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1 and l1.id2 = l2.id2);
spool off

select name from v$database;
!chmod 777 /u01/app/oracle/product/10.2.0/db_1/kill/kill_in.sh
!export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
!cd /u01/app/oracle/product/10.2.0/db_1/kill
!pwd
!./kill_in.sh
exit

3) Making an entry to automate this job every 1 minute: -
crontab -e: -
* * * * * /u01/app/oracle/product/10.2.0/db_1/kill/kill.sh > /dev/null 2>&1

crontab dbshut and dbstart

00 18 * * 6 /data/app/oracle/product/10.2.0/db_1/all_db_shut.sh

---------------------------all_db_shut.sh-----------------------------------

[oracle@FOPSDB ~]$ cat /data/app/oracle/product/10.2.0/db_1/all_db_shut.sh


# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/data/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_HOME_LISTENER=$ORACLE_BASE; export ORACLE_HOME_LISTENER
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=${PATH}:$ORACLE_HOME/bin; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
lsnrctl stop listenerdev
lsnrctl stop listenerint
lsnrctl stop listenerfat1
lsnrctl stop listenerfat2
#lsnrctl stop listenerret
lsnrctl stop listenerdem1
lsnrctl stop listenersat1

#echo FOPSTEST db shutdown `date` >> $ORACLE_HOME/db.log
dbshut
#echo FOPSTEST db shutdown completed `date` >> $ORACLE_HOME/db.log


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


----------------------------all_db_start.sh------------------------------

30 4 * * 0 /data/app/oracle/product/10.2.0/db_1/all_db_start.sh

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/data/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_HOME_LISTENER=$ORACLE_BASE; export ORACLE_HOME_LISTENER
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=${PATH}:$ORACLE_HOME/bin; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

lsnrctl start listenerdev
lsnrctl start listenerint
lsnrctl start listenerfat1
lsnrctl start listenerfat2
#lsnrctl start listenerret
lsnrctl start listenerdem1
lsnrctl start listenersat1

#echo FOPSTEST db startup `date` >> $ORACLE_HOME/db.log
dbstart
#echo FOPSTEST db startup completed `date` >> $ORACLE_HOME/db.log


------------------------------------------------------------

Tuesday, November 17, 2009

Oracle SDU and TDU parameters in listener.ora and tnsnames.ora

The session data unit (SDU) specifies the size of the packets to send over the network. The maximum transmission unit (MTU) is a fixed value that depends on the actual network implementation used. Ideally, SDU should not surpass the size of the MTU. Oracle recommends that SDU be set equal to the MTU. The tnsnames.ora and listener.ora files house the SDU and TDU parameters.

The TDU value is the default packet size used within Oracle*Net to group data together. The default value for both SDU and TDU is 2,048 bytes, and the maximum value is 32,767 bytes. The TDU parameter should ideally be a multiple of the SDU parameter. For SDU and TDU, the following guidelines apply:

On fast network connections such as T1 or T3 lines, SDU and TDU should be set equal to the MTU for the network. On standard Ethernet networks, the default MTU size is 1,514 bytes. On standard token ring networks, the default MTU size is 4,202 bytes.

If the users are connecting via dial-up modem lines, it may be desirable to set SDU and TDU to smaller values in consideration of the frequent resends that occur over modem connections.

If a Multi-Threaded Server (MTS) is used, the mts_dispatchers parameter must also be set with the proper MTU/TDU configuration.

As a rule, SDU should not be set greater than the TDU because network resources will be wasted by shipping wasted space in each packet.

This brief review of network-related parameters was intended to provide an introduction to the scope and complexity of network tuning. It is important to understand that Oracle*Net is simply a layer in the OSI model above the network-specific protocol stack, and therefore virtually all network tuning is external to Oracle.

Ex: -

Doc ID: Note:71921.1
Subject: ORA-12151, ORA-12571 errors on Windows NT
Type: BULLETIN
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 09-JUN-1999
Last Revision Date: 07-FEB-2002

PURPOSE

Give an overview of what to adjust and verify in case of ORA-12151
and ORA-12571 errors

SCOPE AND APPLICATION

This note applies to anyone facing intermittent SQL*Net read and
write errors while working on NT.

==============================================================================


ORA-12151 and ORA-12571 errors on Windows NT

Intermittent SQL*Net TCP/IP read and write errors are sometimes encountered
on NT. The underlying reason for these errors is a synchronization error
in the TCP/IP layer on NT. To help prevent this kind of error, a
few things can be adjusted to help the synchronization:

1. TCP.NODELAY parameter
This parameter is to be added in the "PROTOCOL.ORA" file in the
"NETWORK\ADMIN" directory.

In most cases, TCP/IP information sent across the network is buffered
until at least a complete network packet can be sent. This means
that in certain cases commands are not issued immediately, but are kept
buffered until some other information can be sent as well.
This has the potential to generate timeouts and errors.
To avoid this, the delay can be switched off:

tcp.nodelay = yes

2. Disabling AUTOMATIC_IPC on Clients
On client PC's, checking for IPC connections is pointless as there
is never a database installed on them. So, in order to save some
time during the connections phase, set AUTOMATIC_IPC=OFF in the
"SQLNET.ORA" file.

3. "NAMES.DIRECTORY_PATH" to force using "TNSNAMES.ORA" or "ONAMES"
If you have a fixed environment, it's best to specify this in the
"SQLNET.ORA" file. The parameter "NAMES.DIRECTORY_PATH" specifies how
the TNS resolving takes place.

By default, if this parameter is not present - the SQL*Net layer
will first check if there is a Names server anywhere on the network,
and will afterwards start checking for a "TNSNAMES.ORA" file.

If you have only a "TNSNAMES.ORA" file, and no Oracle Names installed,
it is best to specify this parameter to skip the Names searching in
order to speed up the TNS resolving time.

The value of this parameter is a comma separated list, with as
possible values TNSNAMES (for "TNSNAMES.ORA" searching) and "ONAMES"
(for Oracle Names searching).
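A minimal client-side SQLNET.ORA combining points 2 and 3 above might look like this (a sketch for a TNSNAMES-only setup):

```
# SQLNET.ORA (client)
AUTOMATIC_IPC = OFF
NAMES.DIRECTORY_PATH = (TNSNAMES)
```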

4. TCP/IP timeouts on NT
The default retransmission count on NT is 5, before it detects that
the network is down. With the value of 5, the actual timeout is
approximately 15 seconds.

This default value can be easily increased to a higher value.
In the registry, change the following value:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters
TcpMaxDataRetransmissions REG_DWORD "number"

This parameter is not present in the registry by default, so the
first time it will need to be added to this registry key.

This parameter can be useful on both client and server. Suggested
course of action is to first add this parameter on the machine
generating the SQL*Net errors, and if the problem persists, also
include the parameter in the registry of the other machine.

5. TCP/IP keepalive on NT
KEEPALIVE is an extension to TCP/IP which enables the closing of
dead connections which are no longer being used.

Problems can occur when the server does not close a connection
after a PC client process has disappeared. This typically happens
when a PC user switches off or reboots their machine while still
connected to Oracle. Note that this is not an Oracle problem, but
a limitation of TCP/IP, which has no way of knowing for sure
whether a remote connection has disappeared.

This feature is enabled by default on NT. Problems can occur, however,
if the timeout period is too short for a heavily used or slow
network. In those conditions, the KeepAlive registry value can be
used to specify a KEEPALIVE interval before a connection gets cut.

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters
KeepAlive REG_DWORD "number"

A value of '10' is a common value specified for this variable.

Again, this parameter can be useful on both client and server.
Start with the machine generating the error, and if needed, also add
it on the machine on the other side.

6. TCP/IP timeouts on Windows 95
The same parameter can also be specified on Windows 95. It has the
same functionality, only the location of the parameter in the
registry is different.

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Winsock\Parameters
TcpMaxDataRetransmissions REG_DWORD "number"

This parameter is not present in the registry by default, so the
first time it will need to be added to this registry key.

The purpose and behavior of the parameter are the same on Windows 95
and Windows 98 as on the Windows NT platform.

7. SDU & TDU parameters
Part of this problem is the sequence of information that gets transmitted.
If there are disruptions in the sequence, the errors ORA-12151 and
ORA-12571 can also appear, alerting the application that not all information
has been sent across the network successfully.

The sequence of information is determined by the amount of data the program
is sending, and the actual size the protocol can send across the network
at a time.

The more data the program wants to send in one 'go', the more sequences and
transport packet split-ups will have to be made.

By default, SQL*Net is using an SDU (Session Data Unit) of 2048 bytes (2Kb)
and a TDU (Transport Data Unit) of 32768 (32Kb) bytes.
On standard Ethernet connections, with no special modifications made, the
SDU is 1500 bytes, while the TDU is 8760 bytes.

With these values, each data request made by SQL*Net will have to be split
up in several smaller packets to actually be able to transmit.

Therefore, to minimize the creation of additional packets, it is advised,
in case of these errors, to synchronize the SDU and TDU parameters at the
SQL*Net level with those of the actual network topology used.

These SDU and TDU parameters have to be specified at both the client and
the server configuration files:

TNSNAMES.ORA:
-------------
ORCL.WORLD =
(DESCRIPTION =
(SDU=1500)
(TDU=8760)
(ADDRESS_LIST =
(ADDRESS =(PROTOCOL=TCP)(Host=foobar)(Port=1521))
)
(CONNECT_DATA =
(SID = ORCL)
)
)

LISTENER.ORA:
-------------
SID_DESC_LISTENER =
(SID_LIST =
(SID_DESC =
(SDU = 1500)
(TDU = 8760)
(SID_NAME = ORCL)
)
)

For more information about the SDU and TDU parameter, see Note 44694.1,
Note 44694.1: SQL*Net Packet Sizes (SDU & TDU Parameters)

8. Setting a new TDU size on Windows NT
You can modify the TDU size on NT, via the TcpWindowSize parameter:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters
TcpWindowSize REG_DWORD "number"


Additional information about the network parameters on NT:
-------------------------------------------------------------
Q120642: TCP/IP & NBT Configuration Parameters for Windows NT
http://support.microsoft.com/support/kb/articles/Q120/6/42.asp

Q140375: Default MTU Size for Different Network Topology
http://support.microsoft.com/support/kb/articles/Q140/3/75.asp

Manually Physical STBY DG

Oracle 10g
Manually Create a Physical Standby Database Using Data Guard
• RDBMS Server
Step-by-step instructions on how to create a Physical Standby Database on UNIX server, and maintenance tips on the databases in a Data Guard Environment.
Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/Standby databases using both Grid control and SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical Standby Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.
In this example the database version is 10.2.0.3... The Primary database and Standby database are located on different machines at different sites. The Primary database is called PRIM and the Standby database is called STAN. I use Flash Recovery Area, and OMF.
I. Before you get started:
1. Make sure the operating system and platform architecture on the primary and standby systems are the same;
2. Install Oracle database software without the starter database on the standby server and patch it if necessary. Make sure the same Oracle software release is used on the Primary and Standby databases, and Oracle home paths are identical.
3. Test the Standby Database creation on a test environment first before working on the Production database.
II. On the Primary Database Side:
1. Enable forced logging on your primary database:
SQL> ALTER DATABASE FORCE LOGGING;
2. Create a password file if it doesn’t exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;
- On UNIX:
If it doesn’t exist, use the following command to create one:
$ orapwd file=orapwPRIM password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)
3. Configure a Standby Redo log.
1) The size of the standby redo log files should match the size of the current Primary database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;
BYTES
----------
52428800
52428800
52428800
2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;
3) Create standby Redo log groups.
My primary database had 3 log file groups originally and I created 3 standby redo log groups using the following commands:
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
4) To verify the results of the standby redo log groups creation, run the following query:
SQL>select * from v$standby_log;
4. Enable Archiving on Primary.
If your primary database is not already in Archive Log mode, enable the archive log mode:
SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database archivelog;
SQL>alter database open;
SQL>archive log list;
5. Set Primary Database Initialization Parameters
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE), to add the new primary role parameters.
1) Create pfile from spfile for the primary database:

- On UNIX:
SQL> create pfile='$ORACLE_HOME/dbs/initPRIM.ora' from spfile;
(Note: SQL*Plus does not expand environment variables here; substitute your actual Oracle home path for $ORACLE_HOME.)
2) Edit initPRIM.ora to add the new primary and standby role parameters:

PRIM.__db_cache_size=385875968
PRIM.__java_pool_size=4194304
PRIM.__large_pool_size=4194304
PRIM.__shared_pool_size=184549376
PRIM.__streams_pool_size=0
*.audit_file_dest='/oracle/app/admin/PRIM/adump'
*.background_dump_dest='/oracle/app/admin/PRIM/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/data/PRIM/controlfile/o1_mf_5g0wj3ro_.ctl','/oracle/flash_recovery_area/PRIM/ARCHIVELOG/PRIM/controlfile/o1_mf_5g0wj5qy_.ctl'
*.core_dump_dest='/oracle/app/admin/PRIM/cdump'
*.db_block_size=8192
*.db_create_file_dest='/data'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.DB_FILE_NAME_CONVERT='/data/STAN/datafile','/data/PRIM/datafile'
*.db_name='PRIM'
*.db_recovery_file_dest='/oracle/flash_recovery_area/PRIM/ARCHIVELOG'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='PRIM'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=PRIMXDB)'
*.FAL_CLIENT='PRIM'
*.FAL_SERVER='STAN'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/oracle/flash_recovery_area/PRIM/ARCHIVELOG
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=PRIM'
*.LOG_ARCHIVE_DEST_2='SERVICE=STAN LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=STAN'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.LOG_FILE_NAME_CONVERT='/data/STAN/onlinelog','/data/PRIM/onlinelog','/oracle/flash_recovery_area/STAN/ARCHIVELOG/STAN/onlinelog','/oracle/flash_recovery_area/PRIM/ARCHIVELOG/PRIM/onlinelog'
*.open_cursors=300
*.pga_aggregate_target=195035136
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=586153984
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/oracle/app/admin/PRIM/udump'


6. Create spfile from pfile, and restart primary database using the new spfile.
Data Guard must use SPFILE. Create the SPFILE and restart database.

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='$ORACLE_HOME/dbs/initPRIM.ora';
SQL> create spfile from pfile='$ORACLE_HOME/dbs/initPRIM.ora';
-- Restart the Primary database using the newly created SPFILE.
SQL> shutdown immediate;
SQL> startup;
(Note: SQL*Plus does not expand environment variables here; substitute your actual Oracle home path for $ORACLE_HOME.)
III. On the Standby Database Site:
1. Create a copy of Primary database data files on the Standby Server:
On Primary DB:
SQL>shutdown immediate;
On Standby Server (While the Primary database is shut down):
1) Create directory for data files.
/data/STAN/datafile
create the directory accordingly.
2) Copy the data files and temp files over, from PRIM to STAN.
3) Create directory (multiplexing) for online logs,
For example,
/data/STAN/onlinelog and
/oracle/flash_recovery_area/STAN/ARCHIVELOG/STAN/onlinelog
create the directories accordingly.
4) Copy the online logs over, from PRIM to STAN.
2. Create a Control File for the standby database:
On Primary DB, create a control file for the standby to use:
SQL>startup mount;
SQL> alter database create standby controlfile as 'STAN.ctl';
SQL>ALTER DATABASE OPEN;
3. Copy the Primary DB pfile to Standby server and rename/edit the file.
1) Copy initPRIM.ora from Primary server to Standby server, to dbs folder on UNIX under the Oracle home path.
2) Rename it to initSTAN.ora, and modify the file as follows.
db_cache_size=419430400
java_pool_size=4194304
large_pool_size=4194304
shared_pool_size=150994944
streams_pool_size=0
*.audit_file_dest='/oracle/app/admin/STAN/adump'
*.background_dump_dest='/oracle/app/admin/STAN/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/data/STAN/controlfile/STAN.ctl','/oracle/flash_recovery_area/STAN/ARCHIVELOG/STAN/controlfile/STAN.ctl'
*.core_dump_dest='/oracle/app/admin/STAN/cdump'
*.db_block_size=8192
*.db_create_file_dest='/data'
*.db_domain=''
*.db_file_multiblock_read_count=16
# Specify the location of the primary DB datafiles followed by the standby location
*.DB_FILE_NAME_CONVERT='/data/PRIM/datafile','/data/STAN/datafile'
*.db_name='PRIM'
*.db_recovery_file_dest='/oracle/flash_recovery_area/STAN/ARCHIVELOG'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='STAN'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=STANXDB)'
*.FAL_CLIENT='STAN'
*.FAL_SERVER='PRIM'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/oracle/flash_recovery_area/STAN/ARCHIVELOG
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=STAN'
*.LOG_ARCHIVE_DEST_2='SERVICE=PRIM LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=PRIM'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
*.LOG_ARCHIVE_MAX_PROCESSES=30
# Specify the location of the primary DB online redo log files followed by the standby location
*.LOG_FILE_NAME_CONVERT='/data/PRIM/onlinelog','/data/STAN/onlinelog','/oracle/flash_recovery_area/PRIM/ARCHIVELOG/PRIM/onlinelog','/oracle/flash_recovery_area/STAN/ARCHIVELOG/STAN/onlinelog'
*.open_cursors=300
*.pga_aggregate_target=195035136
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=586153984
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/oracle/app/admin/STAN/udump'

(Note: Not all the parameter entries are listed here.)
4. On Standby server, create all required directories for dump and archived log destination:
Create directories adump, bdump, cdump, udump, and archived log destinations for the standby database.
5. Copy the standby control file ‘STAN.ctl’ from primary to standby destination.
/oracle/app/product/10.2.0/db_1/dbs/STAN.ctl -- path of STAN.ctl on PRIM.
/data/STAN/controlfile/STAN.ctl -- path on STAN to which it should be copied.

6. Copy the Primary password file to the standby server and rename it to orapwSTAN.
On UNIX, copy it to the $ORACLE_HOME/dbs directory, then rename the password file.
7. Configure listeners for the primary and standby databases.
1) On Primary system: use Oracle Net Manager to configure a listener for PRIM and STAN. Then restart the listener.
$lsnrctl stop
$lsnrctl start
2) On Standby server: use Net Manager to configure a listener for PRIM and STAN. Then restart the listener.
$lsnrctl stop
$lsnrctl start
8. Create Oracle Net service names.
1) On Primary system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$tnsping PRIM
$tnsping STAN
2) On Standby system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$tnsping PRIM
$tnsping STAN
10. On Standby server, setup the environment variables to point to the Standby database.
Set up ORACLE_HOME and ORACLE_SID.
11. Start up nomount the standby database and generate a spfile.
- On UNIX:
SQL> startup nomount pfile='$ORACLE_HOME/dbs/initSTAN.ora';
SQL> create spfile from pfile='$ORACLE_HOME/dbs/initSTAN.ora';
-- Restart the Standby database using the newly created SPFILE.
SQL> shutdown immediate;
SQL> startup mount;
(Note: SQL*Plus does not expand environment variables here; substitute your actual Oracle home path for $ORACLE_HOME.)
12. Start Redo apply
1) On the standby database, to start redo apply:
SQL>alter database recover managed standby database disconnect from session;
If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;
13. Verify the standby database is performing properly:
1) On Standby perform a query:
SQL>select sequence#, first_time, next_time from v$archived_log;
2) On Primary, force a logfile switch:
SQL>alter system switch logfile;
3) On Standby, verify the archived redo log files were applied:
SQL>select sequence#, applied from v$archived_log order by sequence#;
14. If you want the redo data to be applied as it is received without waiting for the current standby redo log file to be archived, enable the real-time apply.
To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;
15. To create multiple standby databases, repeat this procedure.
IV. Maintenance:
1. Check the alert log files of Primary and Standby databases frequently to monitor the database operations in a Data Guard environment.
2. Cleanup the archive logs on Primary and Standby servers.
I scheduled a weekly hot whole-database backup against my primary database that also backs up and deletes the archived logs on the Primary.
For the standby database, I run RMAN to back up and delete the archive logs once per week.
$rman target /@STAN;
RMAN>backup archivelog all delete input;
To delete the archivelog backup files on the standby server, I run the following once a month:
RMAN>delete backupset;
3. Password management
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on Primary database, you will have to update the password file for Standby database accordingly, otherwise the logs won’t be shipped to the standby server.

Monday, November 16, 2009

Oracle Scripts 2

1.To get user_name and connect string as your SQL prompt:-

add below line to $ORACLE_HOME/sqlplus/admin/glogin.sql file...

set sqlprompt "&&_USER@&&_CONNECT_IDENTIFIER SQL>"

2.Instance has crashed and resources were not released:-

$ sysresv -i
IPC Resources for ORACLE_SID "orcl":
Shared Memory:
ID KEY
32768 0xdc70f4e4
Semaphores:
ID KEY
98304 0xb11a5934
Oracle Instance not alive for sid "orcl"
Remove ipc resources for sid "orcl" (y/n)? y
Done removing ipc resources for sid "orcl"

3.To find free space, datafile-wise: -

set linesize 150
column file_name format a50
column tablespace_name format a10
column free_space_in_mb format 9999.9999
select ddf.file_name,ddf.tablespace_name,
sum(dfs.bytes)/1024/1024 free_space_in_mb
from dba_data_files ddf, dba_free_space dfs
where ddf.file_id = dfs.file_id
and ddf.tablespace_name like '&tablespace_name_in_caps'
group by ddf.file_name,ddf.tablespace_name;


4.Displays allocated, used & free space within the datafiles: -

TTI "Allocated, Used & Free space within datafiles"

COLUMN free_space_mb format 999999.90
COLUMN allocated_mb format 999999.90
COLUMN used_mb format 999999.90

SELECT SUBSTR (df.NAME, 1, 40) file_name, df.bytes / 1024 / 1024 allocated_mb,
((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0))
used_mb,
NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

5.To kill inactive sessions: -
set heading off
set echo off
spool kill.sql
select 'alter system kill session '''||sid||','||serial#||''';' from v$session where STATUS='INACTIVE';
spool off
@kill.sql

6.To get the current date in the filename while spooling: -
set term OFF echo OFF doc OFF
set hea OFF feed OFF ver OFF
ttitle OFF
spool tst1.sql
select 'spool tst'||to_char(sysdate,'ddmmyy')||'.sql' from dual;
spool off;
@tst1.sql
select sysdate from dual;
spool off

7.To get the client address: -
select sys_context('USERENV','IP_ADDRESS') from dual;

8.To get the server address: -
select utl_inaddr.get_host_address from dual;

9.To get the IP address of another server: -
select utl_inaddr.get_host_address ('') from dual;

10.To get the server name from its IP: -
select utl_inaddr.get_host_name ('') from dual;

11.To get all client IP addresses: -
select sid, machine, utl_inaddr.get_host_address(machine) ip
from v$session
where type='USER' and username is not null
/

12.Script to calculate the average row length (approximately): -
1) As a sys user: -
select sum(de.bytes)/dt.num_rows "Avg Row Size"
from dba_extents de, dba_tables dt
where de.owner like 'OWNER_NAME' and de.segment_name like 'SEGMENT_NAME'
and de.segment_name=dt.table_name
group by dt.num_rows;

2) As a normal user: -
select sum(ue.bytes)/ut.num_rows "Avg Row Size"
from user_extents ue, user_tables ut
where ue.segment_name like 'SEGMENT_NAME' and ue.segment_name=ut.table_name
group by ut.num_rows;

Access your Alert Log via SQL with External Tables

Starting in Oracle9i you can map external flat files to Oracle tables.
This is especially useful for reading the Oracle alert log, making it accessible via SQL from within Oracle.
Mapping the Oracle alert log is easy, and once defined, all you have to do is query it with standard SQL syntax:


create directory BDUMP as '/u01/app/oracle/admin/orcl/bdump';

create table
alert_log ( msg varchar2(80) )
organization external (
type oracle_loader
default directory BDUMP
access parameters (
records delimited by newline
)
location('alert_orcl.log')
)
reject limit 1000;


Now we can easily extract important Oracle alert log information without leaving SQL*Plus:
select msg from alert_log where msg like '%ORA-00600%';

ORA-00600: internal error code, arguments: [17034],[2940981512],[0], [],[],[],[],[]
ORA-00600: internal error code, arguments: [18095],[0xC0000000210D8BF8],[],[],[],[]
ORA-00600: internal error code, arguments: [4400], [48], [], [], []

Db start, Db shut when server reboots.

---Some scripts which can be used to automate db start and db shut when the server reboots------
To allow Oracle to start on boot-up, create a file called oracledb (or whatever name you want) and put it in /etc/init.d with the contents below. This script was copied from a tutorial by Graham Williams. It reads /etc/oratab and fires up any instances it finds.
#!/bin/bash
#
# /etc/init.d/oracledb
#
# Run-level Startup script for the Oracle Instance, Listener, and Web Interface

export ORACLE_HOME=/oracle/app/product/10.2.0/db_1/
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=cjperf
export TNS_ADMIN=$ORACLE_HOME/network/admin
ORA_OWNR="oracle"

# if the executables do not exist -- display error

if [ ! -f $ORACLE_HOME/bin/dbstart -o ! -d $ORACLE_HOME ]
then
echo "Oracle startup: cannot start"
exit 1
fi

# depending on parameter -- startup, shutdown, restart
# of the instance and listener or usage display

case "$1" in
start)
# Oracle listener and instance startup
echo -n "Starting Oracle: "
su $ORA_OWNR -c "$ORACLE_HOME/bin/lsnrctl start"
su $ORA_OWNR -c $ORACLE_HOME/bin/dbstart
touch /var/lock/oracle

su $ORA_OWNR -c "$ORACLE_HOME/bin/emctl start dbconsole"
echo "OK"
;;
stop)
# Oracle listener and instance shutdown
echo -n "Shutdown Oracle: "
su $ORA_OWNR -c "$ORACLE_HOME/bin/lsnrctl stop"
su $ORA_OWNR -c $ORACLE_HOME/bin/dbshut
rm -f /var/lock/oracle

su $ORA_OWNR -c "$ORACLE_HOME/bin/emctl stop dbconsole"
echo "OK"
;;
reload|restart)
$0 stop
$0 start
;;
*)
echo "Usage: `basename $0` start|stop|restart|reload"
exit 1
esac

exit 0



After saving the file, make it executable
# chmod a+x /etc/init.d/oracledb
and if you want, make it run at every boot:
# update-rc.d oracledb defaults 99
Adding system startup for /etc/init.d/oracledb ...
ln -s /etc/init.d/oracledb /etc/rc.d/rc0.d/K01oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc0.d/S99oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc2.d/K01oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc2.d/S99oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc3.d/K01oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc3.d/S99oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc4.d/K01oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc4.d/S99oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc5.d/K01oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc5.d/S99oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc6.d/K01oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc6.d/S99oracledb

Modify /etc/oratab entries
vi /etc/oratab

database_sid:oracle_home_dir:Y|N

change entry from N to Y

eg: cjperf:/oracle/app/product/10.2.0/db_1:Y

Before finishing, add the following lines to the oracle user's ~/.bash_profile (or to /etc/profile if you want them system-wide). Be careful: make sure the paths are set according to your particular setup (if you have been following this text, you should be fine).
export ORACLE_BASE=/oracle/app/oracle
export ORACLE_HOME=/oracle/app/oracle/product/10.2.0/db_1
export ORACLE_SID=cjperf
export PATH=$PATH:/oracle/app/oracle/product/10.2.0/db_1/bin
Last operation: add yourself to the dba group. You can use usermod or just edit the /etc/group file and add your username at the end of the line that starts with dba (my username is ‘harsha’):
dba:x:1002:oracle,bott
If you chose not to create a starter database during your install, you’ll have to do two extra steps: create a listener (with netca) and then create the starter database (with dbca). If you had the installer create a database for you, you should be fine, since it asks for a password for the default accounts (SYS, SYSTEM, DBSNMP, and SYSMAN if you install with the Enterprise Manager option selected).
If everything has gone well, open a terminal window and, as the oracle user, type:
$ sqlplus

SQL*Plus: Release 10.2.0.1.0 - Beta on Wed Jul 11 17:11:53 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Enter user-name:
If you see these messages (and I sincerely hope you do) you’re all set! That means that you have finished a quite long install of Oracle 10g and you are ready to begin destroying it.

Performance Tuning

Redo buffer Tune:-

set serveroutput on;
declare
val1 number(10,2);
val2 number(10,2);
val3 number(10,8);
begin
select value into val1 from v$sysstat Where name in ('redo entries');
select value into val2 from v$sysstat Where name in ('redo log space requests');
val3:=(val2/val1);
if val3<.0002 then
dbms_output.put_line('The ratio (redo log space requests/redo entries) is ok '||val3);
end if;
if val3>.0002 then
dbms_output.put_line('Increase the size of the redo log buffer '||val3);
end if;
end;
/
set serveroutput off;
/
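The threshold test in the block above is just a ratio check. As a minimal sketch in Python (the statistic values are hypothetical, not from a real v$sysstat):

```python
# Sketch of the redo-buffer check above: the ratio of
# 'redo log space requests' to 'redo entries' should stay under 0.0002.
def redo_buffer_ok(redo_entries, space_requests, threshold=0.0002):
    """Return True when the redo log buffer looks adequately sized."""
    return (space_requests / redo_entries) < threshold

# Hypothetical values pulled from v$sysstat:
print(redo_buffer_ok(1_000_000, 50))    # 50/1000000 = 0.00005 -> True
print(redo_buffer_ok(1_000_000, 500))   # 0.0005 -> False, grow the buffer
```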
___________________________________


Buffer Cache tune:-

set serveroutput on;
DECLARE
DB_BLK NUMBER(10,2);
CONS_GET NUMBER(10,2);
PHY_RDS NUMBER(10,2);
FINAL NUMBER(10,2);
begin
Select value INTO DB_BLK From v$sysstat Where name in ('db block gets');
Select value INTO CONS_GET From v$sysstat Where name in ('consistent gets');
Select value INTO PHY_RDS From v$sysstat Where name in ('physical reads');
FINAL:=(1-(PHY_RDS/(DB_BLK+CONS_GET)))*100;
if final>90 then
dbms_output.put_line('The BUFFER CACHE HIT RATIO IS OK '||final);
end if;
if final<90 then
dbms_output.put_line('The BUFFER CACHE HIT RATIO IS LOW. Increase DB_CACHE_SIZE '||final);
end if;
end;
/
set serveroutput off;
/
_________________________________

Tune SORT_AREA_SIZE:-
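The hit-ratio arithmetic used in the buffer cache block can be sketched as follows (values are hypothetical):

```python
# Buffer cache hit ratio, as computed in the PL/SQL block above:
# hit% = (1 - physical_reads / (db_block_gets + consistent_gets)) * 100
def buffer_cache_hit_pct(db_block_gets, consistent_gets, physical_reads):
    return (1 - physical_reads / (db_block_gets + consistent_gets)) * 100

# Hypothetical v$sysstat values:
pct = buffer_cache_hit_pct(db_block_gets=20_000, consistent_gets=80_000,
                           physical_reads=5_000)
print(round(pct, 2))  # 95.0 -> above the 90% threshold, so the cache is ok
```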

Select name, value from v$sysstat
where name in ('sorts (memory)', 'sorts (disk)');

--If a large number of sorts require I/O to disk, increase the initialisation parameter SORT_AREA_SIZE
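One way to judge "large number of disk sorts" is the disk-sort fraction; a ceiling of about 5% is a common rule of thumb, not an Oracle-documented limit. A sketch with made-up values:

```python
# Fraction of sorts that spilled to disk, from the two v$sysstat
# counters queried above ('sorts (memory)' and 'sorts (disk)').
def disk_sort_ratio(sorts_memory, sorts_disk):
    return sorts_disk / (sorts_memory + sorts_disk)

# Hypothetical counter values:
ratio = disk_sort_ratio(sorts_memory=95_000, sorts_disk=5_000)
print(ratio)  # 0.05 -> at the 5% rule-of-thumb ceiling
```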
_________________________________

Tune Library cache:-

set serveroutput on;
DECLARE
hit NUMBER(10,2);
reload number(10,2);
begin
Select ((sum(pinhits) / sum(pins))*100) into hit From v$librarycache Where namespace in ('SQL AREA','TABLE/PROCEDURE','BODY','TRIGGER');
select ((sum(reloads)/sum(pins))*100) into reload From v$librarycache Where namespace in ('SQL AREA','TABLE/PROCEDURE','BODY','TRIGGER');
IF hit<85 then
dbms_output.put_line('The hit ratio is <85%. Check the shared pool and open cursor parameters '||hit);
end if;
if hit>85 then
dbms_output.put_line('The hit ratio is >85%. This is ok '||hit);
end if;
dbms_output.put_line('*************************************************************************');
if reload<=.02 then
dbms_output.put_line('The reload percent is 2% (i.e. 0.02) or less. This is ok '||reload);
end if;
if reload>.02 then
dbms_output.put_line('The reload percent is >2% '||reload);
end if;
end;
/
set serveroutput off;
/
________________________________

Tune Dictionary cache:-

set serveroutput on;
DECLARE
ROWCACHE NUMBER(10,2);
BEGIN
Select ((sum(getmisses) / sum(gets))*100) INTO ROWCACHE From v$rowcache;
IF ROWCACHE<15 then
dbms_output.put_line('The dictionary cache miss ratio is ok '||rowcache);
end if;
if rowcache>15 then
dbms_output.put_line('increase the initialisation parameter SHARED_POOL_SIZE '||rowcache);
end if;
END;
/
set serveroutput off;
/
_________________________________

How to find the Chained rows in table:-

1.Run the utlchain.sql file in the schema where you want to find the chained rows,
to create the CHAINED_ROWS table.

SQL>@$ORACLE_HOME/rdbms/admin/utlchain.sql
Then it will create CHAINED_ROWS table.

2.Execute the below command :-
SQL>analyze table table_name list chained rows into CHAINED_ROWS;

3.Query the CHAINED_ROWS table to get the details of the chained rows.

(or)

SELECT 'Chained or Migrated Rows = '||value FROM v$sysstat
WHERE name = 'table fetch continued row';
____________________________________

Tuning of Indexes:-

grant execute on dbms_stats to &User_name;
grant select on index_stats to &User_name;
connect &Username/&Password
exec dbms_stats.gather_schema_stats(ownname=>'&User_Name');
(or)
analyze index owner.index_name validate structure;

SELECT NAME,(DEL_LF_ROWS_LEN/LF_ROWS_LEN)*100 "Wasted Space"
FROM Index_stats WHERE name IN (select index_name from dba_indexes where owner like 'USER_NAME');

If the value is more than 20% for any index, rebuild that index.
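The rebuild criterion can be sketched numerically (the byte counts are hypothetical):

```python
# The "Wasted Space" column above is the percentage of leaf-row space
# occupied by deleted rows: del_lf_rows_len / lf_rows_len * 100.
def index_wasted_pct(del_lf_rows_len, lf_rows_len):
    return del_lf_rows_len * 100 / lf_rows_len

# Hypothetical index_stats values:
pct = index_wasted_pct(del_lf_rows_len=300_000, lf_rows_len=1_000_000)
print(pct > 20)  # True -> candidate for ALTER INDEX ... REBUILD
```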
___________________________________

Compiling all views in a Schema: -

set lines 130 pages 9999
spool comp_all_views
select 'alter view '||owner||'.'||view_name||' compile;' from dba_views
where owner = 'user_name';
spool off
@comp_all_views.lst
_______________________________

To bring Table read_only:-

1. SQL>alter table (table_name) add constraint (constraint_name) check(1=1);

2. SQL>alter table (table_name) disable validate constraint (constraint_name);

To bring Table read_write:-

1. SQL>alter table (table_name) enable validate constraint (constraint_name);
______________________________

How to See the used & free space in Temporary Tablespace: -

SELECT A.tablespace_name tablespace, D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM v$sort_segment A,
(SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM v$tablespace B, v$tempfile C
WHERE B.ts#= C.ts# GROUP BY B.name, C.block_size) D
WHERE A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;

(or)

SELECT b.tablespace, b.segfile#, b.segblk#, b.blocks,
a.sid, a.serial#, a.username, a.osuser, a.status
FROM v$session a,v$sort_usage b
WHERE a.saddr = b.session_addr
ORDER BY b.tablespace, b.segfile#, b.segblk#, b.blocks;
_____________________________

How to calculate the size of a table containing CLOB and BLOB columns: -

select sum(bytes), s.owner, s.segment_name, s.segment_type
from dba_lobs l, dba_segments s
where s.segment_type = 'LOBSEGMENT'
and l.table_name = 'TABLE_NAME'
and s.segment_name = l.segment_name
and s.owner like 'USER_NAME'
group by s.owner,s.segment_name,s.segment_type;

-This gives the exact LOB segment size.
-To get the total size of the table (including the LOB segment), add the LOB segment size above to the table segment size:
select bytes from dba_segments where segment_name like 'TABLE_NAME'
and owner like 'USER_NAME';


___________________________________


Checking CPU utilization for all users: -

select ss.username,se.SID,VALUE/100 cpu_usage_seconds
from v$session ss, v$sesstat se, v$statname sn
where se.STATISTIC# = sn.STATISTIC#
and NAME like '%CPU used by this session%'
and se.SID = ss.SID
and ss.status='ACTIVE'
and ss.username is not null
order by VALUE desc;

Resumable Operations In Oracle 9i

In Oracle 9i, you can suspend and resume operations that run out of space, or that reach a space limitation after executing for a long time, by switching a transaction into 'resumable' mode. Any error arising from lack of space will then not cause the transaction to fail, but to be suspended. When the space problem is fixed, the operation resumes automatically as if there had been no problem.
Example:-
Suppose you create a table and insert 20,000 records, at which point a space limitation is reached: either MAXEXTENTS has been crossed or the tablespace is out of space. Normally this raises an error and the transaction is rolled back. Using the resumable option, the transaction is instead suspended for a set number of seconds; once the space problem is fixed, the transaction resumes automatically and proceeds further.
Suppose you have created a tablespace test with a datafile of size 1m:

SYS> CREATE TABLESPACE test
DATAFILE <'path/filename'> SIZE 1m;
Tablespace created

For a normal user to use resumable operations, the superuser should grant the permission:

SYS> GRANT RESUMABLE TO <username>;
eg:
SYS> GRANT RESUMABLE TO SCOTT;
Grant succeeded
SYS>conn scott/tiger
connected
Issue ALTER SESSION ENABLE RESUMABLE TIMEOUT <seconds>:
SCOTT> ALTER SESSION ENABLE RESUMABLE TIMEOUT 60;
session altered
SCOTT> CREATE TABLE test1
TABLESPACE test
AS SELECT * from user_objects;
Table created
SCOTT> insert into test1 select * from user_objects;
7 rows created
SCOTT>insert into test1 select * from test1;
14 rows created
SCOTT>/
28 rows created
SCOTT>
114688 rows created.
. . up to the 1m tablespace limit the transaction proceeds; after that it goes into a waiting state
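Each "insert into test1 select * from test1" doubles the row count, so the totals in the transcript follow powers of two: fourteen doublings of the initial 7 rows reach the 114688 figure shown.

```python
# Row counts from repeatedly self-inserting a 7-row table:
rows = 7
counts = []
for _ in range(14):      # fourteen self-inserts, each doubling the count
    rows *= 2
    counts.append(rows)
print(counts[:2], counts[-1])  # [14, 28] 114688
```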
SCOTT>/

The transaction will be suspended for up to 60 seconds; within those 60 seconds, increase the size of the tablespace. The error message is written to the alert log file as:
Tue Jan 29 13:50:26 2008
statement in resumable session 'User SCOTT(22), Session 9, Instance 1' was suspended due to
ORA-01653: unable to extend table SCOTT.TEST by 2048 in tablespace TEST1
Tue Jan 29 14:27:22 2008
statement in resumable session 'User SCOTT(22), Session 9, Instance 1' was suspended due to
ORA-30036: unable to extend segment by 256 in undo tablespace 'UNDO1'
If you do not increase the size of the tablespace, the transaction is rolled back and you receive an error message:
SCOTT>insert into test select * from test
*
ERROR at line 1:
ORA-30032: the suspended (resumable) statement has timed out
ORA-01653: unable to extend table SCOTT.TEST by 256 in tablespace TEST1
If you resolve the problem within 60 seconds, the transaction continues.

How to download patchset from metalink?

Filed under: Patching — jennyca @ 3:59 pm

Naming Convention of Oracle Patchset

p<patch#>_<version>_<platform>.zip

Oracle Patchset# list

9.2.0.4 = 3095277
9.2.0.5 = 3501955
9.2.0.6 = 3948480
9.2.0.7 = 4163445
9.2.0.8 = 4547809(9i Final)
10.1.0.3 = 3761843
10.1.0.4 = 4163362
10.1.0.5 = 4505133
10.2.0.2 = 4547817
10.2.0.3 = 5337014
10.2.0.4 = 6810189
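Following the naming convention above, a patchset filename is assembled from the patch number, the version written without dots, and a platform tag (the platform strings vary; LINUX is just one of those in the FTP listing below):

```python
# Build a patchset filename per the p<patch#>_<version>_<platform>.zip
# convention; version "10.2.0.3" is written without dots as "10203".
def patchset_filename(patch_no, version, platform):
    return "p{}_{}_{}.zip".format(patch_no, version.replace(".", ""), platform)

print(patchset_filename(5337014, "10.2.0.3", "LINUX"))
# p5337014_10203_LINUX.zip
```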

C:\Documents and Settings\Jenny>ftp updates.oracle.com
Connected to bigip-updates-test-adc.oracle.com.
220 FTP server ready.
User (bigip-updates-test-adc.oracle.com:(none)): user_name
331 Username OK, please send password.
Password:
230-
230- Welcome to the Oracle Patch Download FTP Server
230-
230- For detailed help, use command "quote site help".
230
ftp> cd 5337014
250 Changed directory OK.
ftp> ls -al
200 PORT command OK.
150 Opening data connection for file listing.
total 1
-r--r--r-- 1 root other 1604896973 Feb 22 2007 p5337014_10203_AIX64-5L.zip
-r--r--r-- 1 root other 1424057924 Feb 22 2007 p5337014_10203_HP64.zip
-r--r--r-- 1 root other 1641826103 May 2 2007 p5337014_10203_HPUX-IA64.zip
-r--r--r-- 1 root other 1443790237 Mar 15 2007 p5337014_10203_IBMPower.zip
-r--r--r-- 1 root other 1103539895 Jun 13 2007 p5337014_10203_LINUX-zSer.zip
-r--r--r-- 1 root other 944121770 Feb 21 2007 p5337014_10203_LINUX.zip
-r--r--r-- 1 root other 1209745348 Feb 22 2007 p5337014_10203_Linux-IA64.zip
-r--r--r-- 1 root other 1082086597 Feb 22 2007 p5337014_10203_Linux-x86-64.zip
-r--r--r-- 1 root other 937018969 Feb 23 2007 p5337014_10203_MSWIN-x86-64.zip
-r--r--r-- 1 root other 881471858 Feb 26 2007 p5337014_10203_MVS.zip
-r--r--r-- 1 root other 1254311329 Feb 22 2007 p5337014_10203_SOLARIS64.zip
-r--r--r-- 1 root other 1811431410 Aug 13 2007 p5337014_10203_Solaris86-64.zip
-r--r--r-- 1 root other 1393733954 Oct 15 04:50 p5337014_10203_TRU64.zip
-r--r--r-- 1 root other 893831865 Feb 21 2007 p5337014_10203_WINNT.zip
-r--r--r-- 1 root other 1122786231 Feb 23 2007 p5337014_10203_WINNT64.zip
226 Listing complete. Data connection has been closed.
ftp: 1294 bytes received in 0.00Seconds 1294000.00Kbytes/sec.
ftp>

Copy and paste
ftp://updates.oracle.com/5337014/p5337014_10203_LINUX.zip
to FlashGet to add a new job

In FlashGet, Right click “Site properties”
Check

Login to server:
Username:
Password:

Press “OK”

10g 10.2.0.4 PatchSet Windows Version Released

C:\Documents and Settings\Administrator>ftp updates.oracle.com
Connected to bigip-updates.oracle.com.
220 FTP server ready.
User (bigip-updates.oracle.com:(none)):
331 Username OK, please send password.
Password:
230-
230- Welcome to the Oracle Patch Download FTP Server
230-
230- For detailed help, use command "quote site help".
230
ftp> cd 6810189
250 Changed directory OK.
ftp> ls -al
200 PORT command OK.
150 Opening data connection for file listing.
total 1
-r--r--r-- 1 root other 1195551830 Mar 17 05:36 p6810189_10204_Linux-x86-64.zip
-r--r--r-- 1 root other 1053748381 Feb 22 18:58 p6810189_10204_Linux-x86.zip
-r--r--r-- 1 root other 1034079272 Mar 18 00:03 p6810189_10204_Win32.zip
226 Listing complete. Data connection has been closed.
ftp: 269 bytes received in 0.00Seconds 269000.00Kbytes/sec.
ftp>
_____________________________________________________________________________________

List of Patchset number in metalink

A)For Oracle9iR2, Base version 9.2.0.1
i)To upgrade to 9.2.0.2 patchset number 2632931.
ii)To upgrade to 9.2.0.3 patchset number 2761332
iii)To upgrade to 9.2.0.4 patchset number 3095277
iv)To upgrade to 9.2.0.5 patchset number 3501955
v)To upgrade to 9.2.0.6 patchset number 3948480
vi)To upgrade to 9.2.0.7 patchset number 4163445
vii)To upgrade to 9.2.0.8 patchset number 4547809

B)For Oracle10g, Base version 10.1.0.2
i)To upgrade to 10.1.0.3 patchset number 3761843
ii)To upgrade to 10.1.0.4 patchset number 4163362
iii)To upgrade to 10.1.0.5 patchset number 4505133

C)For Oracle10gR2 Base version 10.2.0.1
i)To upgrade to 10.2.0.2 patchset number 4547817
ii)To upgrade to 10.2.0.3 patchset number 5337014
iii)To upgrade to 10.2.0.4 patchset number 6810189
iv)For Warehouse build of 10.2.0.4 patchset number 7005587

D)For Oracle11gR1 Base version 11.1.0.6
i)To upgrade to 11.1.0.7.0 Patchset number 6890831

Oracle Scripts 1

Script for Deleting Duplicate rows in a table: -

Delete from table_name where rowid not in (select max(rowid) from table_name
group by duplicate_values_field_name);
_________________________________________________

Finding chained rows in a table: -

@$ORACLE_HOME/rdbms/admin/utlchain.sql;

analyze &object_type &object_name list chained rows into chained_rows;
___________________________________________________

Finding oracle Invalid Objects: -

break on c1 skip 2
set pages 999
col c1 heading 'owner' format a15
col c2 heading 'name' format a40
col c3 heading 'type' format a15
ttitle 'Invalid Objects'
select owner c1,object_type c3,object_name c2 from dba_objects
where status != 'VALID'
order by owner,object_type;


Compiling invalid Objects: -

Set heading off;
set feedback off;
set echo off;
Set lines 999;
Spool run_invalid.sql
select
'ALTER ' || OBJECT_TYPE || ' ' ||
OWNER || '.' || OBJECT_NAME || ' COMPILE;'
from
dba_objects where status = 'INVALID'
and object_type in ('PACKAGE','FUNCTION','PROCEDURE');
spool off;
set heading on;
set feedback on;
set echo on;

@run_invalid.sql

(or)

@$ORACLE_HOME/rdbms/admin/utlrp.sql
___________________________________________

Getting the query which is consuming the most CPU: -


select SQL_TEXT from V$SQLTEXT_WITH_NEWLINES
where HASH_VALUE = (select sql_hash_value from v$session
where SID = (select s.sid from v$process p, v$session s
where s.paddr = p.addr and p.spid = &top_comm_PID));

____________________________________________

Creating Tablespace Script:-

set verify off;
set termout off;
set feedback off;
set echo off;
set pagesize 0;
set termout on;
select 'Creating tablespace build script...' from dual;
set termout off;

create table ts_temp (lineno number, ts_name varchar2(30),
text varchar2(800));

DECLARE
CURSOR ts_cursor IS select tablespace_name,
initial_extent,
next_extent,
min_extents,
max_extents,
pct_increase,
status
from sys.dba_tablespaces
where tablespace_name != 'SYSTEM'
and status != 'INVALID'
order by tablespace_name;
CURSOR df_cursor (c_ts VARCHAR2) IS select file_name,
bytes,
autoextensible,
maxbytes,
increment_by
from sys.dba_data_files
where tablespace_name = c_ts
and tablespace_name != 'SYSTEM'
order by file_name;
lv_tablespace_name sys.dba_tablespaces.tablespace_name%TYPE;
lv_initial_extent sys.dba_tablespaces.initial_extent%TYPE;
lv_next_extent sys.dba_tablespaces.next_extent%TYPE;
lv_min_extents sys.dba_tablespaces.min_extents%TYPE;
lv_max_extents sys.dba_tablespaces.max_extents%TYPE;
lv_pct_increase sys.dba_tablespaces.pct_increase%TYPE;
lv_status sys.dba_tablespaces.status%TYPE;
lv_file_name sys.dba_data_files.file_name%TYPE;
lv_bytes sys.dba_data_files.bytes%TYPE;
lv_autoextensible sys.dba_data_files.autoextensible%TYPE;
lv_maxbytes sys.dba_data_files.maxbytes%TYPE;
lv_increment_by sys.dba_data_files.increment_by%TYPE;
lv_first_rec BOOLEAN;
lv_string VARCHAR2(800);
lv_lineno number := 0;

procedure write_out(p_line INTEGER, p_name VARCHAR2,
p_string VARCHAR2) is
begin
insert into ts_temp (lineno, ts_name, text) values
(p_line, p_name, p_string);
end;

BEGIN
OPEN ts_cursor;
LOOP
FETCH ts_cursor INTO lv_tablespace_name,
lv_initial_extent,
lv_next_extent,
lv_min_extents,
lv_max_extents,
lv_pct_increase,
lv_status;
EXIT WHEN ts_cursor%NOTFOUND;
lv_lineno := 1;
lv_string := ('CREATE TABLESPACE '||lower(lv_tablespace_name));
lv_first_rec := TRUE;
write_out(lv_lineno, lv_tablespace_name, lv_string);
OPEN df_cursor(lv_tablespace_name);
LOOP
FETCH df_cursor INTO lv_file_name,
lv_bytes,
lv_autoextensible,
lv_maxbytes,
lv_increment_by;
EXIT WHEN df_cursor%NOTFOUND;
if (lv_first_rec) then
lv_first_rec := FALSE;
lv_string := 'DATAFILE ';
else
lv_string := lv_string || ',';
end if;
lv_string:=lv_string||''''||lv_file_name||''''||
' SIZE '||to_char(lv_bytes) || ' REUSE';
if (lv_autoextensible = 'YES') then
lv_string:=lv_string||' AUTOEXTEND ON'||
' NEXT '||to_char(lv_increment_by)||
' MAXSIZE '||to_char(lv_maxbytes);
END IF;
END LOOP;
CLOSE df_cursor;
lv_lineno := lv_lineno + 1;
write_out(lv_lineno, lv_tablespace_name, lv_string);
lv_lineno := lv_lineno + 1;
lv_string := (' DEFAULT STORAGE (INITIAL ' ||
to_char(lv_initial_extent) ||
' NEXT ' || lv_next_extent);
write_out(lv_lineno, lv_tablespace_name, lv_string);
lv_lineno := lv_lineno + 1;
lv_string := (' MINEXTENTS ' ||
lv_min_extents ||
' MAXEXTENTS ' || lv_max_extents);
write_out(lv_lineno, lv_tablespace_name, lv_string);
lv_lineno := lv_lineno + 1;
lv_string := (' PCTINCREASE ' ||
lv_pct_increase || ')');
write_out(lv_lineno, lv_tablespace_name, lv_string);
lv_string := (' '||lv_status);
write_out(lv_lineno, lv_tablespace_name, lv_string);
lv_lineno := lv_lineno + 1;
lv_string:='/';
write_out(lv_lineno, lv_tablespace_name, lv_string);
lv_lineno := lv_lineno + 1;
lv_string:=' ';
write_out(lv_lineno, lv_tablespace_name, lv_string);
END LOOP;
CLOSE ts_cursor;
END;
/

spool create_tablespaces.sql
set heading off
set recsep off
col text format a80 word_wrap


select text
from ts_temp
order by ts_name, lineno;

spool off;

drop table ts_temp;

exit

Create Users Script:-

set pagesize 0
set escape on
spool create_user.sql
select 'create user ' || U.username || ' identified ' ||
DECODE(password,
NULL, 'EXTERNALLY',
' by values ' || '''' || password || ''''
)
|| chr(10) ||
'default tablespace ' || default_tablespace || chr(10) ||
'temporary tablespace ' || temporary_Tablespace || chr(10) ||
' profile ' || profile || chr(10) ||
'quota ' ||
decode ( Q.max_bytes, -1, 'UNLIMITED', NULL, 'UNLIMITED', Q.max_bytes) ||
' on ' || default_tablespace ||
decode (account_status,'LOCKED', ' account lock',
'EXPIRED', ' password expire',
'EXPIRED \& LOCKED', ' account lock password expire',
null)
||
';'
from dba_users U, dba_ts_quotas Q
-- Comment this clause out to include system & default users
where U.username not in ('SYS','SYSTEM',
'SCOTT','DBSNMP','OUTLN','WKPROXY','WMSYS','ORDSYS','ORDPLUGINS','MDSYS',
'CTXSYS','XDB','ANONYMOUS','OWNER','WKSYS','ODM_MTR','ODM','OLAPSYS',
'HR','OE','PM','SH','QS_ADM','QS','QS_WS','QS_ES','QS_OS','QS_CBADM',

'QS_CB','QS_CS','PERFSTAT')
and U.username=Q.username(+) and U.default_tablespace=Q.tablespace_name(+)
;
spool off
set pagesize 100
set escape off


To delete the non-default users:-

set pages 0
spool dropuser.sql
select 'drop user '||username||' cascade;' from dba_users
where username not in ('MGMT_VIEW','SYS','SYSTEM','DBSNMP','SYSMAN','OUTLN','MDSYS','ORDSYS', 'EXFSYS','DMSYS','WMSYS','CTXSYS','ANONYMOUS','XDB','ORDPLUGINS',
'SI_INFORMTN_SCHEMA','OLAPSYS','SCOTT','TSMSYS','BI','PM','MDDATA',
'IX','SH','DIP','OE','HR')
/


Log switches per hour, per day (24hr columns):-

SET LINESIZE 150
COLUMN day FORMAT A3
COLUMN Total FORMAT 99G990
COLUMN h00 FORMAT 999
COLUMN h01 FORMAT 999
COLUMN h02 FORMAT 999
COLUMN h03 FORMAT 999
COLUMN h04 FORMAT 999
COLUMN h05 FORMAT 999
COLUMN h06 FORMAT 999
COLUMN h07 FORMAT 999
COLUMN h08 FORMAT 999
COLUMN h09 FORMAT 999
COLUMN h10 FORMAT 999
COLUMN h11 FORMAT 999
COLUMN h12 FORMAT 999
COLUMN h13 FORMAT 999
COLUMN h14 FORMAT 999
COLUMN h15 FORMAT 999
COLUMN h16 FORMAT 999
COLUMN h17 FORMAT 999
COLUMN h18 FORMAT 999
COLUMN h19 FORMAT 999
COLUMN h20 FORMAT 999
COLUMN h21 FORMAT 999
COLUMN h22 FORMAT 999
COLUMN h23 FORMAT 999
COLUMN h24 FORMAT 999
BREAK ON REPORT
COMPUTE MAX OF "Total" ON REPORT
COMPUTE MAX OF "h00" ON REPORT
COMPUTE MAX OF "h01" ON REPORT
COMPUTE MAX OF "h02" ON REPORT
COMPUTE MAX OF "h03" ON REPORT
COMPUTE MAX OF "h04" ON REPORT
COMPUTE MAX OF "h05" ON REPORT
COMPUTE MAX OF "h06" ON REPORT
COMPUTE MAX OF "h07" ON REPORT
COMPUTE MAX OF "h08" ON REPORT
COMPUTE MAX OF "h09" ON REPORT
COMPUTE MAX OF "h10" ON REPORT
COMPUTE MAX OF "h11" ON REPORT
COMPUTE MAX OF "h12" ON REPORT
COMPUTE MAX OF "h13" ON REPORT
COMPUTE MAX OF "h14" ON REPORT
COMPUTE MAX OF "h15" ON REPORT
COMPUTE MAX OF "h16" ON REPORT
COMPUTE MAX OF "h17" ON REPORT
COMPUTE MAX OF "h18" ON REPORT
COMPUTE MAX OF "h19" ON REPORT
COMPUTE MAX OF "h20" ON REPORT
COMPUTE MAX OF "h21" ON REPORT
COMPUTE MAX OF "h22" ON REPORT
COMPUTE MAX OF "h23" ON REPORT

SELECT TRUNC(first_time) AS "Date",
TO_CHAR(first_time, 'Dy') AS "Day",
COUNT(1) AS "Total",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'00',1,0)) AS "h00",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'01',1,0)) AS "h01",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'02',1,0)) AS "h02",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'03',1,0)) AS "h03",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'04',1,0)) AS "h04",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'05',1,0)) AS "h05",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'06',1,0)) AS "h06",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'07',1,0)) AS "h07",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'08',1,0)) AS "h08",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'09',1,0)) AS "h09",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'10',1,0)) AS "h10",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'11',1,0)) AS "h11",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'12',1,0)) AS "h12",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'13',1,0)) AS "h13",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'14',1,0)) AS "h14",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'15',1,0)) AS "h15",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'16',1,0)) AS "h16",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'17',1,0)) AS "h17",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'18',1,0)) AS "h18",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'19',1,0)) AS "h19",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'20',1,0)) AS "h20",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'21',1,0)) AS "h21",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'22',1,0)) AS "h22",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'23',1,0)) AS "h23"
FROM V$log_history
GROUP BY TRUNC(first_time), TO_CHAR(first_time, 'Dy')
ORDER BY 1;
CLEAR BREAKS

____________________________________________

Log Miner: -

/* first we have to set utl_file_dir parameter */

@$ORACLE_HOME/rdbms/admin/dbmslm.sql;
@$ORACLE_HOME/rdbms/admin/dbmslmd.sql;

exec dbms_logmnr_d.build('&file_name','&path_of_logmnr',-
dbms_logmnr_d.store_in_flat_file);

exec dbms_logmnr.add_logfile(logfilename=>'&path_of_the_logfile',-
options=>dbms_logmnr.new);

exec dbms_logmnr.add_logfile(logfilename=>'&path_of_2nd_logfile_or_archive',-
options=>dbms_logmnr.addFILE);

exec dbms_logmnr_d.build(dictionary_filename=>'&file_name',-
dictionary_location=>'&path_of_the_logmnr');

exec dbms_logmnr.start_logmnr(DictFileName=>'&path_of_the_file_name');

select username,sql_redo,to_char(timestamp,'dd-mm-yyy hh24:mi:ss')
time_in_sec from v$logmnr_contents where username='&user_name';

____________________________________________

Monitor Import Speed: -

SELECT
SUBSTR(sql_text, INSTR(sql_text,'INTO "'),30) table_name
, rows_processed
, ROUND( (sysdate-TO_DATE(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60,1) minutes
, TRUNC(rows_processed/((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60)) rows_per_minute
FROM
sys.v_$sqlarea
WHERE
sql_text like 'INSERT %INTO "%'
AND command_type = 2
AND open_versions > 0;


--------------------------------------------------------------


script to calculate Tablespace Free Space:-

column dummy noprint
column pct_used format 999.9 heading "%|Used"
column name format a19 heading "Tablespace Name"
column Kbytes format 999,999,999 heading "Kbytes"
column used format 999,999,999 heading "Used"
column free format 999,999,999 heading "Free"
column largest format 999,999,999 heading "Largest"
column max_size format 999,999,999 heading "MaxPoss|Kbytes"
column pct_max_used format 999. heading "%|Max|Used"
break on report
compute sum of kbytes on report
compute sum of free on report
compute sum of used on report


select (select decode(extent_management,'LOCAL','*',' ') ||
decode(segment_space_management,'AUTO','a ','m ')
from dba_tablespaces where tablespace_name = b.tablespace_name) ||
nvl(b.tablespace_name,
nvl(a.tablespace_name,'UNKNOWN')) name,
kbytes_alloc kbytes,
kbytes_alloc-nvl(kbytes_free,0) used,
nvl(kbytes_free,0) free,
((kbytes_alloc-nvl(kbytes_free,0))/
kbytes_alloc)*100 pct_used,
nvl(largest,0) largest,
nvl(kbytes_max,kbytes_alloc) Max_Size,
decode( kbytes_max, 0, 0, (kbytes_alloc/kbytes_max)*100) pct_max_used
from ( select sum(bytes)/1024 Kbytes_free,
max(bytes)/1024 largest,
tablespace_name
from sys.dba_free_space
group by tablespace_name ) a,
( select sum(bytes)/1024 Kbytes_alloc,
sum(maxbytes)/1024 Kbytes_max,
tablespace_name
from sys.dba_data_files
group by tablespace_name
union all
select sum(bytes)/1024 Kbytes_alloc,
sum(maxbytes)/1024 Kbytes_max,
tablespace_name
from sys.dba_temp_files
group by tablespace_name )b
where a.tablespace_name (+) = b.tablespace_name
order by 1
/

Tuesday, November 10, 2009

Table partition

Table Partitioning
Nowadays enterprises run databases of hundreds of gigabytes in size. These databases are known as Very Large Databases (VLDBs). From Oracle 8.0 onwards, Oracle has provided the feature of table partitioning, i.e. you can partition a table according to some criterion. For example, suppose you have a SALES table with columns YEAR, PRODUCT and AMT.

Suppose this table contains millions of records, but all the records belong to four years only i.e. 1991, 1992, 1993
and 1994. And most of the time you are concerned about only one year i.e. you give queries like the following

select sum(amt) from sales where year=1991;

select product,sum(amt) from sales where year=1992 Group by product;

Whenever you give queries like this, Oracle has to search the whole table. If you partition this table by year, performance improves, since Oracle scans only a single partition instead of the whole table.

CREATING PARTITION TABLES

To create a partition table give the following statement

create table sales (year number(4),
product varchar2(10),
amt number(10,2))
partition by range (year)
(partition p1 values less than (1992) tablespace u1,
partition p2 values less than (1993) tablespace u2,
partition p3 values less than (1994) tablespace u3,
partition p4 values less than (1995) tablespace u4,
partition p5 values less than (MAXVALUE) tablespace u5);

In the above example sales table is created with 5 partitions. Partition p1 will contain rows of year 1991 and it will be stored in tablespace u1. Partition p2 will contain rows of year 1992 and it will be stored in tablespace u2. Similarly p3 and p4.
In the above example, if you don't specify partition p5 with VALUES LESS THAN (MAXVALUE), then you will not be able to insert any row with a year of 1995 or above.
Although not required, it is a good idea to place partitions in different tablespaces. If you do, you can isolate problems due to media failure: only the affected partition becomes unavailable, while the rest of the partitions are still available.

In the above example the table is partitioned by range.
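As a quick check, you can query an individual partition directly using the partition-extended name syntax; for a predicate on the partition key, the optimizer does the equivalent pruning automatically:

```sql
-- rows for 1991 live in partition p1 (year < 1992)
SELECT SUM(amt) FROM sales PARTITION (p1);

-- equivalent query; the optimizer prunes the scan down to p1
SELECT SUM(amt) FROM sales WHERE year = 1991;
```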

In Oracle you can partition a table by
*Range Partitioning
*Hash Partitioning
*List Partitioning
*Composite Partitioning

Range Partitioning: -
This type of partitioning is useful when dealing with data that has logical ranges into which it can be distributed; for example, the value of a year column. Performance is best when the data distributes evenly across the ranges.

Hash partitioning: -
Use hash partitioning if your data does not easily lend itself to range partitioning, but you would like to partition for performance and manageability reasons. Hash partitioning provides a method of evenly distributing data across a specified number of partitions. Rows are mapped into partitions based on a hash value of the partitioning key

The following example creates a hash-partitioned table. The partitioning column is partno; four partitions are created with system-generated names and placed in four named tablespaces (tab1, tab2, ...).

CREATE TABLE products
(partno NUMBER,
description VARCHAR2 (60))
PARTITION BY HASH (partno)
PARTITIONS 4
STORE IN (tab1, tab2, tab3, tab4);
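If you prefer to name the hash partitions yourself, the individual-partition form of the clause can be used instead of PARTITIONS n; the partition and tablespace names below are illustrative:

```sql
CREATE TABLE products
 (partno      NUMBER,
  description VARCHAR2(60))
PARTITION BY HASH (partno)
 (PARTITION p1 TABLESPACE tab1,   -- each hash partition named
  PARTITION p2 TABLESPACE tab2,   -- and placed explicitly
  PARTITION p3 TABLESPACE tab3,
  PARTITION p4 TABLESPACE tab4);
```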

List Partitioning: -
Use list partitioning when you require explicit control over how rows map to partitions. You can specify a list of discrete values for the partitioning column in the description for each partition. This is different from range partitioning, where a range of values is associated with a partition, and from hash partitioning, where the user has no control of the row to partition mapping.

List partitioning allows unordered and unrelated sets of data to be grouped and organized together very naturally

The following example creates a table with list partitioning

Create table customers (custcode number(5),
name varchar2(20),
addr varchar2(10),
city varchar2(20),
bal number(10,2))
partition by list (city)
(partition north_india values ('DELHI','CHANDIGARH'),
partition east_india values ('KOLKATA','PATNA'),
partition south_india values ('HYDERABAD','BANGALORE','CHENNAI'),
partition west_india values ('BOMBAY','GOA'));

When a row is inserted into the above table, Oracle examines the value of the city column and stores the row in the partition whose value list contains that city.

COMPOSITE PARTITIONING: -
Composite partitioning partitions data using the range method, and within each partition, sub-partitions it using the hash method. Composite partitions are ideal for both historical data and striping, and provide improved manageability of range partitioning and data placement, as well as the parallelism advantages of hash partitioning.

When creating composite partitions, you specify the following:

* Partitioning method: range
* Partitioning column(s)
* Partition descriptions identifying partition bounds
* Sub-partitioning method: hash
* Sub-partitioning column(s)
* Number of sub-partitions for each partition or descriptions of subpartitions

The following statement creates a composite-partitioned table. In this example, three range partitions are created, each containing eight subpartitions. Because the subpartitions are not named, system generated names are assigned, but the
STORE IN clause distributes them across the 4 specified tablespaces (tab1, ...,tab4).

CREATE TABLE PRODUCTS (partno NUMBER,
description VARCHAR(32),
costprice NUMBER)
PARTITION BY RANGE (partno)
SUBPARTITION BY HASH(description)
SUBPARTITIONS 8 STORE IN (tab1, tab2, tab3, tab4)
(PARTITION p1 VALUES LESS THAN (100),
PARTITION p2 VALUES LESS THAN (200),
PARTITION p3 VALUES LESS THAN (MAXVALUE));

ALTERING PARTITION TABLES: -

To add a partition
You can add a new partition only at the "high" end (the point after the last existing partition). To add a partition at the beginning or in the middle of a table, use the SPLIT PARTITION clause.

For example to add a partition to sales table give the following command.
alter table sales add partition p6 values less than (1996);

To add a partition to a Hash Partition table give the following command.
Alter table products add partition;

Oracle then adds a new partition with a system-generated name, created in the default tablespace. To add a partition with a user-defined name, in a tablespace of your choice, give the following command.

Alter table products add partition p5 tablespace u5;

To add a partition to a List partition table give the following command.
alter table customers add partition central_India values (‘BHOPAL’,’NAGPUR’);

Any value in the set of literal values that describe the partition(s) being added must not exist in any of the other partitions of the table.

Coalescing Partitions: -

Coalescing partitions is a way of reducing the number of partitions in a hash-partitioned table, or the number of sub-partitions in a composite-partitioned table. When a hash partition is coalesced, its contents are redistributed into one or more remaining partitions determined by the hash function. The specific partition that is coalesced is selected by Oracle, and is dropped after its contents have been redistributed.

To coalesce a hash partition give the following statement.
Alter table products coalesce partition;

This reduces by one the number of partitions in the table products.

DROPPING PARTITIONS

To drop a partition from Range Partition table, List Partition or Composite Partition table give the following command.
Alter table sales drop partition p5;

If you have created a global index on the table, dropping a partition that contains data marks the global index unusable, and you have to rebuild it by giving the following statement.

Alter index sales_ind rebuild;

To avoid rebuilding indexes after dropping a partition, you can first delete all the records in the partition and then drop it,
like this

Delete from sales where year=1994;

Alter table sales drop partition p4;

This method is most appropriate for small tables, or for large tables when the partition being dropped contains a small percentage of the total data in the table.

Another method of dropping partitions is to give the following statement.

ALTER TABLE sales DROP PARTITION p5 UPDATE GLOBAL INDEXES;

This causes the global index to be updated at the time the partition is dropped.

Exchanging a Range, Hash, or List Partition: -

To exchange a partition of a range, hash, or list-partitioned table with a nonpartitioned table, or the reverse, use the ALTER TABLE ... EXCHANGE PARTITION statement. An example of converting a partition into a nonpartitioned table follows. In this example, table stocks can be range, hash, or list partitioned.

ALTER TABLE stocks EXCHANGE PARTITION p3 WITH TABLE stock_table_3;
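A minimal end-to-end sketch of such an exchange, reusing the stocks example (the standalone table is created empty here purely for illustration):

```sql
-- build an empty standalone table with the same column layout
CREATE TABLE stock_table_3 AS
SELECT * FROM stocks WHERE 1 = 0;

-- swap segments: partition p3's data now belongs to stock_table_3
-- (and vice versa); this is a dictionary operation, no rows are copied
ALTER TABLE stocks EXCHANGE PARTITION p3 WITH TABLE stock_table_3;
```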

Merging Partitions: -
Use the ALTER TABLE ... MERGE PARTITIONS statement to merge the contents of two partitions into one partition. The two original partitions are dropped, as are any corresponding local indexes.
You cannot use this statement for a hash-partitioned table or for hash subpartitions of a composite-partitioned table.
You can only merge two adjacent partitions; you cannot merge non-adjacent partitions.
For example, to merge partitions p2 and p3 into one partition p23, give the following statement.

Alter table sales merge partitions p2, p3 into partition p23;

Modifying Partitions: Adding Values

Use the MODIFY PARTITION ... ADD VALUES clause of the ALTER TABLE statement to extend the value list of an existing partition. Literal values being added must not have been included in any other partition's value list. The partition value list for any corresponding local index partition is correspondingly extended, and any global index, or global or local index partitions, remain usable.

The following statement adds a new set of cities ('KOCHI', 'MANGALORE') to an existing partition list.

ALTER TABLE customers
MODIFY PARTITION south_india
ADD VALUES ('KOCHI', 'MANGALORE');

Modifying Partitions: Dropping Values

Use the MODIFY PARTITION ... DROP VALUES clause of the ALTER TABLE statement to remove literal values from the value list of an existing partition. The statement is always executed with validation, meaning that it checks to see if any rows exist in the partition that correspond to the set of values being dropped. If any such rows are found then Oracle returns an error message and the operation fails. When necessary, use a DELETE statement to delete corresponding rows from the table before attempting to drop values.
You cannot drop all literal values from the value list describing the partition. You must use the ALTER TABLE ... DROP PARTITION statement instead.
The partition value list for any corresponding local index partition reflects the new value list, and any global index, or global or local index partitions, remain usable.
The statement below drops a set of cities ('KOCHI' and 'MANGALORE') from an existing partition value list.

ALTER TABLE customers
MODIFY PARTITION south_india
DROP VALUES ('KOCHI','MANGALORE');

SPLITTING PARTITIONS: -
You can split a single partition into two partitions. For example to split the partition p5 of sales table into two partitions give the following command.

Alter table sales split partition p5 into
(Partition p6 values less than (1996),
Partition p7 values less than (MAXVALUE));

TRUNCATING PARTITIONS: -
Truncating a partition will delete all rows from the partition.
To truncate a partition give the following statement

Alter table sales truncate partition p5;

LISTING INFORMATION ABOUT PARTITION TABLES: -

To see how many partitioned tables are there in your schema give the following statement
Select * from user_part_tables;

To see on partition level partitioning information
Select * from user_tab_partitions;
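To drill down on a single table (SALES here, from the earlier example), the same view can be filtered and ordered by partition position:

```sql
-- high_value holds the upper bound of each range partition;
-- num_rows is populated once statistics have been gathered
SELECT partition_name, high_value, tablespace_name, num_rows
FROM user_tab_partitions
WHERE table_name = 'SALES'
ORDER BY partition_position;
```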

Thursday, November 5, 2009

Steps for installing statspack

Statspack
Statspack is a set of performance monitoring and reporting utilities provided by Oracle starting from Oracle 8i and above. Statspack provides improved BSTAT/ESTAT functionality, though the old BSTAT/ESTAT scripts are still available. For more information about STATSPACK, read the documentation in file $ORACLE_HOME/rdbms/admin/spdoc.txt.
Although AWR and ADDM (introduced in Oracle 10g) provide better statistics than STATSPACK, users that are not licensed to use the Enterprise Manager Diagnostic Pack should continue to use statspack.

Install statspack
cd $ORACLE_HOME/rdbms/admin
sqlplus "/ as sysdba" @spdrop.sql -- Drop and install statspack
sqlplus "/ as sysdba" @spcreate.sql -- Enter tablespace names when prompted

Take performance snapshots of the database
sqlplus perfstat/perfstat
exec statspack.snap; -- Take a performance snapshots



Adjusting the STATSPACK Collection Level

STATSPACK has two types of collection options, level and threshold. The level parameter controls the type of data collected from Oracle, while the threshold parameter acts as a filter for the collection of SQL statements into the stats$sql_summary table.

SQL> SELECT * FROM stats$level_description ORDER BY snap_level;

Level 0 This level captures general statistics, including rollback segment, row cache, SGA, system events, background events, session events, system statistics, wait statistics, lock statistics, and Latch information.
Level 5 This level includes capturing high resource usage SQL Statements, along with all data captured by lower levels.
Level 6 This level includes capturing SQL plan and SQL plan usage information for high resource usage SQL Statements, along with all data captured by lower levels.
Level 7 This level captures segment level statistics, including logical and physical reads, row lock, itl and buffer busy waits, along with all data captured by lower levels.
Level 10 This level includes capturing Child Latch statistics, along with all data captured by lower levels.
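The thresholds that filter SQL statements into stats$sql_summary can be adjusted as well, via the statspack.modify_statspack_parameter procedure; the values below are arbitrary examples:

```sql
-- capture a SQL statement only if it exceeds these totals between snapshots
exec statspack.modify_statspack_parameter(i_buffer_gets_th => 100000, -
i_disk_reads_th => 10000);
```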

You can change the default level of a snapshot with the statspack.snap function. Passing i_modify_parameter => 'true' makes the new level the default for all future snapshots.

SQL> exec statspack.snap(i_snap_level => 6, i_modify_parameter => 'true');


-- or:
exec perfstat.statspack.snap(i_snap_level=>10); -- instruct statspack to gather more detail in the snapshot
-- (look up which Oracle version supports which level).
The spauto.sql script can be customized and executed to schedule the collection of STATSPACK snapshots.
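At its core spauto.sql simply submits a dbms_job; a minimal sketch of an hourly snapshot schedule (the interval here is illustrative) looks like this:

```sql
variable jobno number;
begin
  -- run statspack.snap at the top of every hour
  dbms_job.submit(:jobno, 'statspack.snap;',
                  trunc(sysdate + 1/24, 'HH'),
                  'trunc(SYSDATE + 1/24, ''HH'')');
  commit;
end;
/
```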

Statspack reporting
-- Get a list of snapshots
select SNAP_ID, SNAP_TIME from STATS$SNAPSHOT;

select name,snap_id,to_char(snap_time,'DD.MM.YYYY:HH24:MI:SS')
"Date/Time" from stats$snapshot,v$database;

@spreport.sql -- Enter two snapshot id's for difference report

Other statspack scripts
Some of the other statspack scripts are:
sppurge.sql - Purge (delete) a range of Snapshot Id's between the
specified begin and end Snap Id's
spauto.sql - Schedule a dbms_job to automate the collection of
STATSPACK statistics
spcreate.sql - Installs the STATSPACK user, tables and package on a
database (Run as SYS).
spdrop.sql - Deinstall STATSPACK from database (Run as SYS)
spreport.sql - Report on differences between values recorded in two
snapshots
sptrunc.sql - Truncates all data in Statspack tables
Potential problems
Statspack reporting suffers from the following problems:
Some statistics may only be reported on COMPLETION of a query. For example, if a query runs for 12 hours, its processing won't be reported during any of the snapshots taken while the query was busy executing.
If queries are aged out of the shared pool, the stats from V$SQL are reset. This can throw off the delta calculations and even make it negative. For example, query A has 10,000 buffer_gets at snapshot 1, but at snapshot #2, it has been aged out of the pool and reloaded and now shows only 1,000 buffer_gets. So, when you run spreport.sql from snapshot 1 to 2, you'll get 1,000-10,000 = -9,000 for this query.

Thursday, October 15, 2009

Oracle Backup and Recovery on Windows, Part I - Database Backups using RMAN:

1. Introduction:

This article is the first in a series of three, introducing Oracle's RMAN (Recovery MANager) backup and recovery tool to novice DBAs. The articles focus on the Windows operating system, but can be easily adapted to other platforms. RMAN will work in the same manner on all operating systems (OS) that Oracle runs on. However, OS functions such as script scheduling etc. obviously vary from platform to platform. The present article describes the implementation of a simple Oracle backup strategy using RMAN. The next article will discuss some recovery scenarios within this backup framework, and the final article will deal with disaster recovery.

We begin with a statement of assumptions regarding the database environment and business requirements regarding data recoverability. This serves to anchor the subsequent discussion in a definite context. We then briefly discuss the files relevant to operation of Oracle. Finally, we move on to the main topic - a discussion of RMAN and how it can be used to backup a database.

Following are the assumptions pertaining to the database environment and business expectations regarding data recovery:
The database is hosted on a Windows NT / 2000 / 2003 server.
The database software is Oracle 9i.
Users expect the database to be available round the clock, so the database cannot be shutdown for backups. This makes an online (hot) backup mandatory.
In case of a disaster, where the server is destroyed, the business expects us to be able to recover all transactions up to the previous working day - i.e. a maximum of 24 hours data loss is acceptable. (We will discuss options for doing better than this in the third article of this series)
The database is relatively small - say 1 to 30 GB in size
Specified directories on the host server are backed up to tape every night using an OS backup utility.
We will configure RMAN to backup to disk. These backups will then be copied to tape by the OS backup utility. For completeness we note that RMAN can be configured to backup the database directly to tape, via an interface called MML (Media Management Library) which integrates third party backup products with RMAN. MML can be somewhat fiddly to setup, and the details depend on the third-party product used. We will not discuss RMAN backups to tape in the present article.

The server has three independent disks - named C:, D: and E:. The contents of the drives are as follows:
C: - The operating system and database software (Oracle home is c:\oracle\ora92. The oracle home directory is normally referred to as ORACLE_HOME). We will also keep a copy of the controlfile and a group of online redo logs on this disk. This isn't ideal, but will have to do because of the limited number of disks we have. Database files and their function are described in the next section.
D: - All datafiles, a copy of the controlfile and one group of online redo logs.
E: - A copy of the controlfile, all archived logs and database backups. RMAN will be configured to backup to this drive. Note that the backups could reside on D: instead.
All disks should be mirrored using some form of RAID technology. Note that the above layout isn't ideal - we're limited by the number of disks available. If you have more disks you could, and probably should, configure a better layout.


2. Oracle database files:

In order to perform backups effectively, it is necessary to understand a bit about the various files that comprise a database. This section describes the files that make up an Oracle database. The descriptions given here are very brief. Please check the Oracle Administrator's Guide for further details.

Oracle requires the following files for its operation:
Datafiles: These hold logical units (called blocks) that make up the database. The SYSTEM tablespace datafile, which holds the data dictionary and other system-owned database objects, is required for database operation. The database can operate without non-SYSTEM datafiles as long as no data is requested from or modified in blocks within those files. In our case all datafiles are located on D:.
Data files can be backed up by RMAN.
Online redo logs: These files hold transaction information that is necessary for transaction rollback, and for instance recovery in case of a database crash. Since these files hold critical information, it is wise to maintain two or more identical copies of them on independent disks (multiplexing). Each set of copies is known as a redo log group, and the members of a group should be on separate disks. In our case we maintain three groups with two members per group: each group has one member on D: and the other on E: (instead of E:, one could maintain the second set on C:). When writing to the logs, Oracle cycles through the groups in a circular manner, i.e. once all groups have been written to, the first one is reused, overwriting its previous contents. Hence the importance of archiving filled logs as the system switches from one log to the next (see item 4 below).
Note that online logs should NEVER be backed up in a hot backup. If you do back them up, there is a danger that you may overwrite your current (valid) logs with the backed-up ones, which are out of date and therefore invalid. Never ever back up your redo logs in a hot backup scenario.
Control file: This file holds, among other things, the physical structure of the database (location of all files), current database state (via a system change number, or SCN - which is a monotonically increasing number that is incremented on every commit), and information regarding backups taken by RMAN. The control files, being critical to database operation, should also be multiplexed. We will maintain three copies, one each on C:, D: and E: drive.
Controlfiles can be optionally backed up by RMAN.
Archived redo logs: These are archived copies of online redo logs, created after a switch occurs from one online log to the next one in the group. Archiving ensures that a complete transaction history is maintained. Once an online log is archived, it can be overwritten without any loss of transaction history. Archive logs are required for online backups, so they are mandatory in our scenario. We maintain a single copy of these on E: drive. This is a weak point in the present backup strategy (only one copy of archive logs). There are ways to do better - one can maintain up to ten independent sets of archive logs at different physical locations - check the documentation for details.
Note that online logs are archived only when the database operates in ARCHIVELOG mode. The default operation mode is NOARCHIVELOG - where online redo logs are not archived before being overwritten. It would take us too far afield to discuss this any further - please check the Oracle Administrator's Guide for instructions on configuring your database to archivelog mode.
Archive logs can be optionally backed up by RMAN.
Server parameter file (also known as Stored parameter file): This file contains initialization parameters that are used to configure the Oracle instance on startup. We maintain this file in its default location (ORACLE_HOME\database).
The server parameter file can be optionally backed up by RMAN.
Password file: This file stores credentials of users who are allowed to log on to Oracle as privileged (sysdba) users without having to supply an Oracle password. All users belonging to the Windows OS group ORA_DBA are included in the password file. Users with sysdba privileges are allowed to perform database startup, shutdown and backups (among other administrative operations). Please check the Oracle Administrator's guide for details on sysdba and sysoper privileges. We maintain the password file in its default location (ORACLE_HOME\database).
This file is not backed up by RMAN, and must be backed up via OS commands.
Networking files: These files (tnsnames.ora, listener.ora and sqlnet.ora) are Oracle network configuration files. They are maintained in their standard location - ORACLE_HOME\network\admin.
These files are not backed up by RMAN, and must be backed up via OS commands.
Additionally, it is a good idea to store a copy of all database creation scripts in the backup directory. This will help in determining the correct database file directory structure in case of a disaster. This, however, is not absolutely necessary as file placement information can be retrieved from the backup controlfile. More on this in the third article of this series.

3. OS backup utilities vs. RMAN- a brief discussion:

OS Backup utilities copy OS files from disk to tape. By themselves they are not useful for database backups, unless the database is closed. The reason they cannot be used to backup open databases is as follows: If the database is open, it is possible that contents of a datafile block are being modified at the very instant that the block is being copied by the utility. In such a situation the copy of the block would be inconsistent, and hence useless for recovery. The way to avoid this is to put a tablespace into a special "backup mode" before copying the datafiles associated with the tablespace. Such OS level backups, also called user managed backups, are the traditional (non-RMAN) way to backup Oracle databases. When a tablespace is in backup mode, the SCN, which is marked in the header of each datafile in the tablespace, is frozen until the tablespace is taken out of backup mode. Additionally, whenever a data block in the tablespace is modified, the entire block is copied to the online redo log (in contrast to only modified rows being copied when the tablespace is not in backup mode). This causes a huge increase in the redo generated, which is a major disadvantage of user managed backups.

One can perform user managed backups of a database using homegrown scripts. Such a script would cycle through all tablespaces in the database, putting each tablespace in backup mode, copying the associated datafiles and finally switching the tablespace out of backup mode. A fragment of a user managed hot backup script for Windows might read as follows:


--put USERS tablespace in backup mode
alter tablespace users begin backup;
--copy files belonging to USERS tablespace
host copy d:\oracle\ora92\orcl\users.dbf e:\backup;
--take USERS tablespace out of backup mode
alter tablespace users end backup;
--continue with other tablespaces and then copy other oracle files...


The above would be invoked from sqlplus, via an appropriately scripted batch file.

Most OS backup utility vendors provide optional add-ons that automate the process of user managed backups. These add-ons, which usually do no more than the script shown above, are sold as extra cost add-ons to the base backup software.

RMAN is a database backup utility that comes with the Oracle database, at no extra cost. As such, it is aware of the internal structure of Oracle datafiles and controlfiles, and knows how to take consistent copies of data blocks even as they are being written to. Furthermore, it does this without putting tablespaces in backup mode. Therefore RMAN does not cause a massive increase in generated redo. Another advantage of using RMAN is that it backs up only those blocks that have held or currently hold data. Hence RMAN backups of datafiles are generally smaller than the datafiles themselves. In contrast, OS copies of datafiles have the same size as the original datafiles. Finally, with RMAN backups it is possible to recover individual blocks in case of block corruption of datafiles. Considering the above, it makes good sense to use RMAN instead of vendor supplied add-ons or homegrown user managed backup scripts.

4. Configuring RMAN:

RMAN is a command line utility that is installed as a part of a standard database installation. Note that RMAN is only a command interface to the database - the actual backup is performed by a dedicated server process on the Oracle database.

RMAN can be invoked from the command line on the database host machine like so:


C:\>rman target /

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: ORCL (DBID=1036216947)

RMAN>


The first line is the one we type and the remaining ones are feedback from execution of the command. The net result is to leave us connected to the target database - the database we want to back up - with the RMAN> prompt, indicating that RMAN is ready for further work. Here we have invoked RMAN on the server, and have logged on to the server using an account that belongs to the ORA_DBA OS group. As described earlier, this enables us to connect to the target database (as sysdba - this is implicit) without having to supply a password. Note that on Windows, one must also set SQLNET.AUTHENTICATION_SERVICES=(NTS) in the sqlnet.ora file in order to connect using OS authentication as above.
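The corresponding entry in ORACLE_HOME\network\admin\sqlnet.ora is a single line:

```text
SQLNET.AUTHENTICATION_SERVICES = (NTS)
```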

At this point a digression is in order. RMAN can be run in two modes - catalog and nocatalog. In the former, backup information and RMAN scripts are stored in another database known as the RMAN catalog. In the latter, RMAN stores backup information in the target database controlfile. Catalog mode is more flexible, but requires the maintenance of a separate database on another machine (there's no point in creating the RMAN catalog on the database to be backed up!). Nocatalog mode has the advantage of not needing a separate database, but places more responsibility on the controlfile. We will use nocatalog mode in our discussion, as this is a perfectly valid choice for sites with a small number of databases.

RMAN can be configured through various persistent parameters. Note that persistent parameters can be configured only for Oracle versions 9i and later. The current configuration can be seen via the "show all" command:


RMAN> show all;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'e:\backup\ctl_sp_bak_%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak' MAXPIECESIZE 4G;
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak' MAXPIECESIZE 4G;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFORCL.ORA'; # default

RMAN>


The reader is referred to the RMAN documentation for a detailed explanation of the options attached to each of these parameters. Here we will discuss only those of relevance to our backup requirements.


Retention Policy: This tells RMAN which backups are eligible for deletion. For example, a retention policy with redundancy 2 means that two backups - the latest and the one prior to it - are retained; all other backups become candidates for deletion. The retention policy can also be configured based on time - check the docs for details on this option.
Default Device Type: This can be "disk" or "sbt" (system backup to tape). We will back up to disk and then have our OS backup utility copy the completed backup, and other supporting files, to tape.
Controlfile Autobackup: This can be set to "on" or "off". When set to "on", RMAN takes a backup of the controlfile AND server parameter file each time a backup is performed. Note that "off" is the default.
Controlfile Autobackup Format: This tells RMAN where the controlfile backup is to be stored. The "%F" in the file name instructs RMAN to append the database identifier and backup timestamp to the backup filename. The database identifier, or DBID, is a unique integer identifier for the database. We have configured RMAN to store controlfile backups in the directory e:\backup.
Parallelism: This tells RMAN how many channels (server processes) should be dedicated to performing the backup.
Device Type Format: This specifies the location and name of the backup files. We need to specify the format for each channel. The "%U" ensures that Oracle appends a unique identifier to the backup file name. The MAXPIECESIZE attribute sets a maximum file size for each file in the backup set. We have configured RMAN to store backups in the directory e:\backup.
Any of the above parameters can be changed using the commands displayed by the "show all" command. For example, one can turn off controlfile autobackups by issuing:


RMAN> configure controlfile autobackup off;

using target database controlfile instead of recovery catalog
old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP OFF;
new RMAN configuration parameters are successfully stored

RMAN>
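Similarly, a time-based retention policy - an alternative to the redundancy-based one configured above - could be set as follows (the 7-day window is just an illustrative value):

```
RMAN> configure retention policy to recovery window of 7 days;
```

With this setting, RMAN retains whatever backups are needed to recover the database to any point within the last 7 days.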


5. Scripting the backup:

With the background stuff out of the way, we now move on to the actual backup. We will write a simple script that will backup our database, verify that the backup can be restored and then delete all obsolete backups and archive logs (based on a redundancy of 2, as discussed above). The Windows scheduler will be used to run the script at a time of our choosing.

An Aside: Before we move on, it is worth stating that RMAN can perform full or incremental backups. Full backups, as their name suggests, are backups of every data block in the datafiles. In contrast, incremental backups back up only those database blocks that have changed since the last higher-level backup. It would take us too far afield to detail the intricacies of incremental backups - we refer you to the Oracle documentation for more details. For the case at hand, we can afford to perform full backups every night as the database is relatively small.
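Had we needed an incremental strategy, the RMAN commands would look like the sketch below (the weekly level 0 / nightly level 1 schedule is an illustrative assumption, not part of our setup):

```
# Run once a week: a level 0 backup of every block - the incremental baseline.
backup incremental level 0 database;

# Run on the other nights: backs up only blocks changed since the last
# level 0 or level 1 backup.
backup incremental level 1 database;
```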

The backup script, which we store in a file named "rman_backup.rcv", is very simple:


#contents of rman_backup.rcv. "#" denotes a comment line, and will be ignored by RMAN.
backup database plus archivelog;
restore database validate;
delete noprompt obsolete;
host 'copy C:\oracle\ora92\database\pwdorcl.ora e:\backup';
host 'copy C:\oracle\ora92\network\admin\tnsnames.ora e:\backup';
host 'copy C:\oracle\ora92\network\admin\listener.ora e:\backup';
exit;


The script backs up the database and all archive logs and then checks that the backup can be restored. After that it deletes backups according to the configured retention policy - the "noprompt" in the delete command instructs RMAN not to prompt us before deleting files. Finally it does an OS copy of the password file and the relevant network configuration files. The RMAN "host" command enables us to execute any operating system command (on Linux, for instance, we would use "cp" instead of "copy"). In the above script the database name is ORCL, hence the password file is pwdORCL.ora. You will need to adapt each of the "host 'copy..." commands in the script to your specific directory structure and filenames. As an aside, it is worth pointing out that SQL commands can be executed from RMAN. A couple of examples:


sql 'alter system archive log current';
sql "create pfile=''e:\backup\initORCL.ora'' from spfile";


The "sql" keyword tells RMAN that what follows is to be interpreted as an SQL command. The actual SQL should be enclosed in single or double quotes. The latter is useful if the command itself contains single quotes, as in the second example above. Note: in the second example, the quotes enclosing the pfile path are two single quotes each, while the quotes enclosing the entire command are double quotes.

The script, rman_backup.rcv, is invoked by the following one line batch file:


REM contents of rman_backup.bat
rman target / cmdfile rman_backup.rcv log rman_backup.log


The "target /" indicates that the script logs on to Oracle as sysdba via an OS account that belongs to the ORA_DBA group. The "cmdfile" option gives the name of the command file that RMAN should execute - in this case rman_backup.rcv. The "log" option tells RMAN that we want a transcript of the RMAN session stored in the file that follows the option - rman_backup.log in this case. Remember to check the log file after each backup for any errors that may have occurred. The log file is overwritten on each execution of the batch file, so it may be worth changing the name to include a unique identifier (such as a timestamp). The backup scripts could reside anywhere on the server, but it may be best to keep them in e:\backup so that they are archived off to tape along with the backups.
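Checking the log can itself be scripted. Here is a minimal sketch that scans a log for the RMAN- and ORA- error prefixes RMAN uses; the idea of scanning for those two prefixes is our own convention, so adapt it to whatever your logs actually contain:

```python
import re

# RMAN reports errors with RMAN-nnnnn and ORA-nnnnn prefixes; flag any
# line of the session log that carries either code.
ERROR_PATTERN = re.compile(r"\b(RMAN-\d+|ORA-\d+)")

def find_rman_errors(log_text: str) -> list[str]:
    """Return every line of an RMAN session log that contains an error code."""
    return [line for line in log_text.splitlines() if ERROR_PATTERN.search(line)]

# Example: a normal progress line produces no hits, a failure line does.
sample = ("channel ORA_DISK_1: starting backup\n"
          "RMAN-06059: expected archived log not found")
print(find_rman_errors(sample))  # -> ['RMAN-06059: expected archived log not found']
```

Such a script could be called from the batch file after RMAN finishes, and its output mailed to the DBA.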

The next step is to schedule our batch file (rman_backup.bat) to run at the desired interval. This is done by scheduling the batch file via the Windows Scheduler wizard, which is accessed through Control Panel > Scheduled Tasks > Add Scheduled Task.
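If you prefer the command line, the same nightly schedule can be created with the schtasks utility. The task name, script path and 2 a.m. start time below are illustrative, and the exact option syntax varies slightly between Windows versions - check "schtasks /create /?" on your server:

```
schtasks /create /tn "RMAN nightly backup" /tr e:\backup\rman_backup.bat /sc daily /st 02:00
```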

Finally, ensure that the entire backup directory (e:\backup) is copied to tape nightly, after the database backup has completed. There is no need to back up any other Oracle-related directory. The tapes must be stored offsite so that they aren't destroyed in case the site is struck by disaster. In a disaster situation, we can recreate the database and then restore and recover datafiles (with up to a 24 hour data loss), using the backups that are on tape. The procedure for recovering from a disaster will be covered in the third article of this series. If the database fails (due to datafile corruption, for example) but the host server remains available, we can recover right up to the instant of failure using the backup on disk together with all archive logs since the backup and the current online redo logs. Some of these scenarios will be covered in the next article of this series.

6. Summary and Further Reading:

This article describes how to set up automated RMAN-based backups of Oracle databases on Windows. As with all critical DBA tasks, the scripts and commands described above should be customised to your requirements and tested thoroughly before implementation on your production systems.

In the interest of brevity, we have had to rush through some of the detail that is relevant to backup and recovery. The reader is therefore urged to read the pertinent Oracle documentation for complete coverage. The books of interest are:
Oracle 9i Backup and Recovery Concepts - discusses the basics of Oracle backup and recovery.
Oracle 9i Recovery Manager User's Guide - discusses backup and recovery using RMAN.
Oracle 9i Administrator's Guide - discusses backup-related issues such as how to put the database in ARCHIVELOG mode.
These books can be downloaded, free of charge, from Oracle's documentation site. You will need to register with the Oracle Technology Network (OTN) to gain access to the documentation. Membership of OTN is free, and well worth it for Oracle professionals.