Tuesday, November 17, 2009

Oracle SDU and TDU parameters in listener.ora and tnsnames.ora

The session data unit (SDU) specifies the size of the packets to send over the network. The maximum transmission unit (MTU) is a fixed value that depends on the actual network implementation used. Ideally, SDU should not exceed the MTU; Oracle recommends setting SDU equal to the MTU. The tnsnames.ora and listener.ora files house the SDU and TDU parameters.

The transport data unit (TDU) is the default packet size used within Oracle*Net to group data together. The default value for both SDU and TDU is 2,048 bytes, and the maximum value is 32,767 bytes. The TDU parameter should ideally be a multiple of the SDU parameter. For SDU and TDU, the following guidelines apply:

On fast network connections such as T1 or T3 lines, SDU and TDU should be set equal to the MTU of the network. On standard Ethernet networks, the default MTU size is 1,514 bytes; on standard token ring networks, the default MTU size is 4,202 bytes.

If the users are connecting via dial-up modem lines, it may be desirable to set SDU and TDU to smaller values in consideration of the frequent resends that occur over modem connections.

If a Multi-Threaded Server (MTS) is used, the mts_dispatchers parameter must also be set with the proper MTU/TDU configuration.

As a rule, SDU should not be set greater than TDU, because network resources will be wasted shipping empty space in each packet.

This brief review of network-related parameters was intended to provide an introduction to the scope and complexity of network tuning. It is important to understand that Oracle*Net is simply a layer in the OSI model above the network-specific protocol stack; therefore, virtually all network tuning is external to Oracle.

Ex: -

Doc ID: Note:71921.1
Subject: ORA-12151, ORA-12571 errors on Windows NT
Type: BULLETIN
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 09-JUN-1999
Last Revision Date: 07-FEB-2002

PURPOSE

Give an overview of what to adjust and verify in case of ORA-12151
and ORA-12571 errors

SCOPE AND APPLICATION

This note applies to anyone facing intermittent SQL*Net read and
write errors while working on NT.

==============================================================================


ORA-12151 and ORA-12571 errors on Windows NT

Intermittent SQL*Net TCP/IP read and write errors are sometimes encountered
on NT. The underlying cause of these errors is a synchronization error
in the TCP/IP layer on NT. To help prevent these errors, a
few things can be adjusted to help the synchronization:

1. TCP.NODELAY parameter
This parameter is to be added in the "PROTOCOL.ORA" file in the
"NETWORK\\ADMIN" directory.

In most cases, TCP/IP information sent across the network is buffered
until at least a complete network packet can be sent. This means
that in certain cases commands are not issued directly, but are kept
buffered until some other information can be sent as well.
This has the potential to generate timeouts and errors.
To avoid this, the delay can be switched off.

tcp.nodelay = yes

2. Disabling AUTOMATIC_IPC on Clients
On client PCs, checking for IPC connections is pointless, as there
is never a database installed on them. So, to save some
time during the connection phase, set AUTOMATIC_IPC=OFF in the
"SQLNET.ORA" file.

3. "NAMES.DIRECTORY_PATH" to force using "TNSNAMES.ORA" or "ONAMES"
If you have a fixed environment, it's best to specify this in the
"SQLNET.ORA" file. The parameter "NAMES.DIRECTORY_PATH" specifies how
the TNS resolving takes place.

By default, if this parameter is not present, the SQL*Net layer
will first check whether there is a Names server anywhere on the network,
and only afterwards start checking for a "TNSNAMES.ORA" file.

If you have only a "TNSNAMES.ORA" file, and no Oracle Names installed,
it is best to specify this parameter to skip the Names searching in
order to speed up the TNS resolving time.

The value of this parameter is a comma-separated list, with
possible values TNSNAMES (for "TNSNAMES.ORA" searching) and ONAMES
(for Oracle Names searching).
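
Putting points 2 and 3 together, a minimal client-side "SQLNET.ORA" might look like the sketch below (this assumes plain "TNSNAMES.ORA" resolution and no Oracle Names server):

```
# SQLNET.ORA -- client side
AUTOMATIC_IPC = OFF
NAMES.DIRECTORY_PATH = (TNSNAMES)
```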

4. TCP/IP timeouts on NT
The default retransmission count on NT is 5 before it detects that
the network is down. With a value of 5, the actual timeout is
approximately 15 seconds.

This default value can be easily increased to a higher value.
In the registry, change the following value:

HKEY_LOCAL_MACHINE
System
CurrentControlSet
Services
Tcpip
Parameters
TcpMaxDataRetransmissions REG_DWORD "number"

This parameter is not present in the registry by default, so the
first time it will need to be added to this registry key.

This parameter can be useful on both client and server. Suggested
course of action is to first add this parameter on the machine
generating the SQL*Net errors, and if the problem persists, also
include the parameter in the registry of the other machine.
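
As a sketch, the same registry change can be captured in a .reg file (the value 8 shown here is only an example; pick a count appropriate for your network):

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"TcpMaxDataRetransmissions"=dword:00000008
```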

5. TCP/IP keepalive on NT
KEEPALIVE is an extension to TCP/IP which enables the closing of
dead connections which are no longer being used.

Problems can occur when the server does not close a connection
after a PC client process has disappeared. This typically happens
when a PC user switches off or reboots their machine while still
connected to Oracle. Note that this is not an Oracle problem, but
a limitation of TCP/IP, which has no way of knowing for sure
whether a remote connection has disappeared.

This feature is enabled by default on NT. Problems can occur, however,
if the timeout period is too short for a heavily used or slow
network. In those conditions, the KeepAlive registry value can be
used to specify a keepalive interval before a connection gets cut.

HKEY_LOCAL_MACHINE
System
CurrentControlSet
Services
Tcpip
Parameters
KeepAlive REG_DWORD "number"

A value of '10' is a common value specified for this variable.

Again, this parameter can be useful on both client and server.
Start with the machine generating the error, and if needed, also add
it on the machine on the other side.

6. TCP/IP timeouts on Windows 95
The same parameter can also be specified on Windows 95. It has the
same functionality, only the location of the parameter in the
registry is different.

HKEY_LOCAL_MACHINE
System
CurrentControlSet
Services
Winsock
Parameters
TcpMaxDataRetransmissions REG_DWORD "number"

This parameter is not present in the registry by default, so the
first time it will need to be added to this registry key.

The purpose and behavior of the parameter are the same on Windows 95
and Windows 98 as on the Windows NT platform.

7. SDU & TDU parameters
Part of this problem is the sequence of information that gets transmitted.
If there are disruptions in the sequence, the errors ORA-12151 and
ORA-12571 can also appear, alerting the application that not all information
has been sent across the network successfully.

The sequence of information is determined by the amount of data the program
is sending, and the actual size the protocol can send across the network
at a time.

The more data the program wants to send in one 'go', the more sequences and
transport packet split-ups will have to be made.

By default, SQL*Net is using an SDU (Session Data Unit) of 2048 bytes (2Kb)
and a TDU (Transport Data Unit) of 32768 (32Kb) bytes.
On standard Ethernet connections, with no special modifications made, the
SDU is 1500 bytes, while the TDU is 8760 bytes.

With these values, each data request made by SQL*Net will have to be split
up in several smaller packets to actually be able to transmit.

Therefore, to minimize the creation of additional packets, it is advisable, in
case of these errors, to synchronize the SDU and TDU parameters at the
SQL*Net level with those of the actual network topology used.
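
As a back-of-envelope sketch of why this alignment matters, the shell snippet below counts how many physical frames are needed to carry one SDU buffer. The payload figure of roughly 1460 bytes per standard Ethernet frame (1500-byte MTU minus TCP/IP headers) is an assumption; your network may differ.

```shell
#!/bin/sh
# How many network frames does one SDU buffer occupy?
# ceil(sdu / payload) via integer arithmetic.
frames_per_sdu() {
    sdu=$1
    payload=$2
    echo $(( (sdu + payload - 1) / payload ))
}

echo "SDU 2048 over Ethernet: $(frames_per_sdu 2048 1460) frames"
echo "SDU 1460 over Ethernet: $(frames_per_sdu 1460 1460) frame"
```

With SDU aligned to the frame payload, each SDU buffer travels in a single frame instead of being split across two.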

These SDU and TDU parameters have to be specified at both the client and
the server configuration files:

TNSNAMES.ORA:
-------------
ORCL.WORLD =
(DESCRIPTION =
(SDU=1500)
(TDU=8760)
(ADDRESS_LIST =
(ADDRESS =(PROTOCOL=TCP)(Host=foobar)(Port=1521))
)
(CONNECT_DATA =
(SID = ORCL)
)
)

LISTENER.ORA:
-------------
SID_DESC_LISTENER =
(SID_LIST =
(SID_DESC =
(SDU = 1500)
(TDU = 8760)
(SID_NAME = ORCL)
)
)

For more information about the SDU and TDU parameters, see
Note 44694.1: SQL*Net Packet Sizes (SDU & TDU Parameters)

8. Setting a new TDU size on Windows NT
You can modify the TDU size on NT, via the TcpWindowSize parameter:

HKEY_LOCAL_MACHINE
System
CurrentControlSet
Services
Tcpip
Parameters
TcpWindowSize REG_DWORD "number"


Additional information about the network parameters on NT:
-------------------------------------------------------------
Q120642: TCP/IP & NBT Configuration Parameters for Windows NT
http://support.microsoft.com/support/kb/articles/Q120/6/42.asp

Q140375: Default MTU Size for Different Network Topology
http://support.microsoft.com/support/kb/articles/Q140/3/75.asp

Manually Physical STBY DG

Oracle 10g
Manually Create a Physical Standby Database Using Data Guard
• RDBMS Server
Step-by-step instructions on how to create a Physical Standby Database on UNIX server, and maintenance tips on the databases in a Data Guard Environment.
Oracle 10g Data Guard is a great tool to ensure high availability, data protection and disaster recovery for enterprise data. I have been working on Data Guard/Standby databases using both Grid control and SQL command line for a couple of years, and my latest experience with Data Guard was manually creating a Physical Standby Database for a Laboratory Information Management System (LIMS) half a year ago. I maintain it daily and it works well. I would like to share my experience with the other DBAs.
In this example the database version is 10.2.0.3... The Primary database and Standby database are located on different machines at different sites. The Primary database is called PRIM and the Standby database is called STAN. I use Flash Recovery Area, and OMF.
I. Before you get started:
1. Make sure the operating system and platform architecture on the primary and standby systems are the same;
2. Install Oracle database software without the starter database on the standby server and patch it if necessary. Make sure the same Oracle software release is used on the Primary and Standby databases, and Oracle home paths are identical.
3. Test the Standby Database creation on a test environment first before working on the Production database.
II. On the Primary Database Side:
1. Enable forced logging on your primary database:
SQL> ALTER DATABASE FORCE LOGGING;
2. Create a password file if it doesn’t exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;
- On UNIX:
If it doesn’t exist, use the following command to create one:
$orapwd file=orapwPRIM password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)
3. Configure a Standby Redo log.
1) The size of the standby redo log files should match the size of the current Primary database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;
BYTES
----------
52428800
52428800
52428800
2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;
3) Create standby Redo log groups.
My primary database had 3 log file groups originally and I created 3 standby redo log groups using the following commands:
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
4) To verify the results of the standby redo log groups creation, run the following query:
SQL>select * from v$standby_log;
4. Enable Archiving on Primary.
If your primary database is not already in Archive Log mode, enable the archive log mode:
SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database archivelog;
SQL>alter database open;
SQL>archive log list;
5. Set Primary Database Initialization Parameters
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE), to add the new primary role parameters.
1) Create pfile from spfile for the primary database:

- On UNIX:
SQL>create pfile=’$ORACLE_HOME/dbs/initPRIM.ora’ from spfile;
(Note: replace $ORACLE_HOME with your actual Oracle home path.)
2) Edit initPRIM.ora to add the new primary and standby role parameters:

PRIM.__db_cache_size=385875968
PRIM.__java_pool_size=4194304
PRIM.__large_pool_size=4194304
PRIM.__shared_pool_size=184549376
PRIM.__streams_pool_size=0
*.audit_file_dest='/oracle/app/admin/PRIM/adump'
*.background_dump_dest='/oracle/app/admin/PRIM/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/data/PRIM/controlfile/o1_mf_5g0wj3ro_.ctl','/oracle/flash_recovery_area/PRIM/ARCHIVELOG/PRIM/controlfile/o1_mf_5g0wj5qy_.ctl'
*.core_dump_dest='/oracle/app/admin/PRIM/cdump'
*.db_block_size=8192
*.db_create_file_dest='/data'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.DB_FILE_NAME_CONVERT='/data/STAN/datafile','/data/PRIM/datafile'
*.db_name='PRIM'
*.db_recovery_file_dest='/oracle/flash_recovery_area/PRIM/ARCHIVELOG'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='PRIM'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=PRIMXDB)'
*.FAL_CLIENT='PRIM'
*.FAL_SERVER='STAN'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/oracle/flash_recovery_area/PRIM/ARCHIVELOG
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=PRIM'
*.LOG_ARCHIVE_DEST_2='SERVICE=STAN LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=STAN'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.LOG_FILE_NAME_CONVERT='/data/STAN/onlinelog','/data/PRIM/onlinelog','/oracle/flash_recovery_area/STAN/ARCHIVELOG/STAN/onlinelog','/oracle/flash_recovery_area/PRIM/ARCHIVELOG/PRIM/onlinelog'
*.open_cursors=300
*.pga_aggregate_target=195035136
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=586153984
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/oracle/app/admin/PRIM/udump'


6. Create spfile from pfile, and restart primary database using the new spfile.
Data Guard must use SPFILE. Create the SPFILE and restart database.

- On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile=’$ORACLE_HOME/dbs/initPRIM.ora’;
SQL>create spfile from pfile=’$ORACLE_HOME/dbs/initPRIM.ora’;
-- Restart the Primary database using the newly created SPFILE.
SQL>shutdown immediate;
SQL>Startup;
(Note: replace $ORACLE_HOME with your actual Oracle home path.)
III. On the Standby Database Site:
1. Create a copy of Primary database data files on the Standby Server:
On Primary DB:
SQL>shutdown immediate;
On Standby Server (While the Primary database is shut down):
1) Create directory for data files.
/data/STAN/datafile
create the directory accordingly.
2) Copy the data files and temp files over, from PRIM to STAN.
3) Create directory (multiplexing) for online logs,
For example,
/data/STAN/onlinelog and
/oracle/flash_recovery_area/STAN/ARCHIVELOG/STAN/onlinelog
create the directories accordingly.
4) Copy the online logs over, from PRIM to STAN.
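
Steps 1) through 4) can be sketched in shell. DEST_ROOT and the host name "prim-host" are assumptions introduced here for illustration; DEST_ROOT defaults to a scratch prefix so you can rehearse the layout first, and should be set empty to create the real directories from this example.

```shell
#!/bin/sh
# Stage the standby directory layout (paths from this article's example).
DEST_ROOT=${DEST_ROOT:-/tmp/stan_stage}

mkdir -p "$DEST_ROOT/data/STAN/datafile" \
         "$DEST_ROOT/data/STAN/onlinelog" \
         "$DEST_ROOT/oracle/flash_recovery_area/STAN/ARCHIVELOG/STAN/onlinelog"

# Then, while the primary is shut down, copy datafiles, tempfiles and
# online logs over from the primary host ("prim-host" is a placeholder):
# scp oracle@prim-host:/data/PRIM/datafile/*  "$DEST_ROOT/data/STAN/datafile/"
# scp oracle@prim-host:/data/PRIM/onlinelog/* "$DEST_ROOT/data/STAN/onlinelog/"
```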
2. Create a Control File for the standby database:
On Primary DB, create a control file for the standby to use:
SQL>startup mount;
SQL>alter database create standby controlfile as ‘STAN.ctl’;
SQL>ALTER DATABASE OPEN;
3. Copy the Primary DB pfile to Standby server and rename/edit the file.
1) Copy initPRIM.ora from Primary server to Standby server, to dbs folder on UNIX under the Oracle home path.
2) Rename it to initSTAN.ora, and modify the file as follows.
db_cache_size=419430400
java_pool_size=4194304
large_pool_size=4194304
shared_pool_size=150994944
streams_pool_size=0
*.audit_file_dest='/oracle/app/admin/STAN/adump'
*.background_dump_dest='/oracle/app/admin/STAN/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/data/STAN/controlfile/STAN.ctl','/oracle/flash_recovery_area/STAN/ARCHIVELOG/STAN/controlfile/STAN.ctl'
*.core_dump_dest='/oracle/app/admin/STAN/cdump'
*.db_block_size=8192
*.db_create_file_dest='/data'
*.db_domain=''
*.db_file_multiblock_read_count=16
# Specify the location of the primary DB datafiles followed by the standby location
*.DB_FILE_NAME_CONVERT='/data/PRIM/datafile','/data/STAN/datafile'
*.db_name='PRIM'
*.db_recovery_file_dest='/oracle/flash_recovery_area/STAN/ARCHIVELOG'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='STAN'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=STANXDB)'
*.FAL_CLIENT='STAN'
*.FAL_SERVER='PRIM'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/oracle/flash_recovery_area/STAN/ARCHIVELOG
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=STAN'
*.LOG_ARCHIVE_DEST_2='SERVICE=PRIM LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=PRIM'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
*.LOG_ARCHIVE_MAX_PROCESSES=30
# Specify the location of the primary DB online redo log files followed by the standby location
*.LOG_FILE_NAME_CONVERT='/data/PRIM/onlinelog','/data/STAN/onlinelog','/oracle/flash_recovery_area/PRIM/ARCHIVELOG/PRIM/onlinelog','/oracle/flash_recovery_area/STAN/ARCHIVELOG/STAN/onlinelog'
*.open_cursors=300
*.pga_aggregate_target=195035136
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=586153984
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/oracle/app/admin/STAN/udump'

(Note: Not all the parameter entries are listed here.)
4. On Standby server, create all required directories for dump and archived log destination:
Create directories adump, bdump, cdump, udump, and archived log destinations for the standby database.
5. Copy the standby control file ‘STAN.ctl’ from primary to standby destination.
/oracle/app/product/10.2.0/db_1/dbs/STAN.ctl -- path of STAN.ctl on PRIM.
/data/STAN/controlfile/STAN.ctl -- path on STAN to which it should be copied.

6. Copy the Primary password file to standby and rename it to orapwSTAN.ora.
On UNIX, copy it to the $ORACLE_HOME/dbs directory, and then rename the password file.
7. Configure listeners for the primary and standby databases.
1) On Primary system: use Oracle Net Manager to configure a listener for PRIM and STAN. Then restart the listener.
$lsnrctl stop
$lsnrctl start
2) On Standby server: use Net Manager to configure a listener for PRIM and STAN. Then restart the listener.
$lsnrctl stop
$lsnrctl start
8. Create Oracle Net service names.
1) On Primary system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$tnsping PRIM
$tnsping STAN
2) On Standby system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$tnsping PRIM
$tnsping STAN
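
For reference, the service-name entries behind those tnsping tests might look like the sketch below (the host names prim-host and stan-host are placeholders for your actual servers):

```
PRIM =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = prim-host)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = PRIM)
)
)

STAN =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = stan-host)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = STAN)
)
)
```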
10. On the Standby server, set up the environment variables to point to the Standby database.
Set up ORACLE_HOME and ORACLE_SID.
11. Start up nomount the standby database and generate a spfile.
- On UNIX:
SQL>startup nomount pfile=’$ORACLE_HOME/dbs/initSTAN.ora’;
SQL>create spfile from pfile=’$ORACLE_HOME/dbs/initSTAN.ora’;
-- Restart the Standby database using the newly created SPFILE.
SQL>shutdown immediate;
SQL>startup mount;
(Note: replace $ORACLE_HOME with your actual Oracle home path.)
12. Start Redo apply
1) On the standby database, to start redo apply:
SQL>alter database recover managed standby database disconnect from session;
If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;
13. Verify the standby database is performing properly:
1) On Standby perform a query:
SQL>select sequence#, first_time, next_time from v$archived_log;
2) On Primary, force a logfile switch:
SQL>alter system switch logfile;
3) On Standby, verify the archived redo log files were applied:
SQL>select sequence#, applied from v$archived_log order by sequence#;
14. If you want the redo data to be applied as it is received without waiting for the current standby redo log file to be archived, enable the real-time apply.
To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;
15. To create multiple standby databases, repeat this procedure.
IV. Maintenance:
1. Check the alert log files of Primary and Standby databases frequently to monitor the database operations in a Data Guard environment.
2. Cleanup the archive logs on Primary and Standby servers.
I scheduled a weekly hot whole-database backup against my primary database that also backs up and deletes the archived logs on the Primary.
For the standby database, I run RMAN to back up and delete the archive logs once per week.
$rman target /@STAN;
RMAN>backup archivelog all delete input;
To delete the archivelog backup files on the standby server, I run the following once a month:
RMAN>delete backupset;
3. Password management
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on Primary database, you will have to update the password file for Standby database accordingly, otherwise the logs won’t be shipped to the standby server.
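
For example, recreating the standby password file after a SYS password change might look like this on the standby host (file name from step III.6; replace xxxxxxxx with the new SYS password):

```
$ orapwd file=$ORACLE_HOME/dbs/orapwSTAN password=xxxxxxxx force=y
```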

Monday, November 16, 2009

Oracle Scripts 2

1.To get user_name and connect string as your SQL prompt:-

add below line to $ORACLE_HOME/sqlplus/admin/glogin.sql file...

set sqlprompt "&&_USER@&&_CONNECT_IDENTIFIER SQL>"

2.Instance has crashed and resources were not released:-

$ sysresv -i
IPC Resources for ORACLE_SID "orcl":
Shared Memory:
ID KEY
32768 0xdc70f4e4
Semaphores:
ID KEY
98304 0xb11a5934
Oracle Instance not alive for sid "orcl"
Remove ipc resources for sid "orcl" (y/n)? y
Done removing ipc resources for sid "orcl"

3.To find a free space in datafile wise: -

set linesize 150
column file_name format a50
column tablespace_name format a10
column free_space_in_mb format 9999.9999
select ddf.file_name,ddf.tablespace_name,
sum(dfs.bytes)/1024/1024 free_space_in_mb
from dba_data_files ddf, dba_free_space dfs
where ddf.file_id = dfs.file_id
and ddf.tablespace_name like '&tablespace_name_in_caps'
group by ddf.file_name,ddf.tablespace_name;


4.Displays Actual size, used & freespace within the datafiles: -

TTI "Allocated, Used & Free space within datafiles"

COLUMN free_space_mb format 999999.90
COLUMN allocated_mb format 999999.90
COLUMN used_mb format 999999.90

SELECT SUBSTR (df.NAME, 1, 40) file_name, df.bytes / 1024 / 1024 allocated_mb,
((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0))
used_mb,
NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

5.To kill inactive session: -
set heading off
set echo off
spool kill.sql
select 'alter system kill session '''||sid||','||serial#||''';' from v$session where STATUS='INACTIVE';
spool off
@kill.sql

6.To get Current date in filename while spooling: -
set term OFF echo OFF doc OFF
set hea OFF feed OFF ver OFF
ttitle OFF
spool tst1.sql
select 'spool tst'||to_char(sysdate,'ddmmyy')||'.sql' from dual;
spool off;
@tst1.sql
select sysdate from dual;
spool off

7.To get the client address: -
select sys_context('USERENV','IP_ADDRESS') from dual;

8.To get the server address: -
select utl_inaddr.get_host_address from dual;

9.To get the IP address of another server: -
select utl_inaddr.get_host_address ('') from dual;

10.To get the server name from its IP: -
select utl_inaddr.get_host_name ('') from dual;

11.So to get all clients ip: -
select sid, machine, utl_inaddr.get_host_address(machine) ip
from v$session
where type='USER' and username is not null
/

12.Script to calculate the row length (approximately): -
1) As a sys user: -
select sum(de.bytes)/dt.num_rows "Avg Row Size"
from dba_extents de, dba_tables dt
where de.owner like 'OWNER_NAME' and de.segment_name like 'SEGMENT_NAME'
and de.segment_name=dt.table_name
group by dt.num_rows;

2) As a normal user: -
select sum(ue.bytes)/ut.num_rows "Avg Row Size"
from user_extents ue, user_tables ut
where ue.segment_name like 'SEGMENT_NAME' and ue.segment_name=ut.table_name
group by ut.num_rows;

Access your Alert Log via SQL with External Tables

Starting in Oracle9i you can map external flat files to Oracle tables.
This is especially useful for reading the Oracle alert log, making it accessible via SQL from inside Oracle.
Mapping the Oracle alert log is easy, and once it is defined, all you have to do is query it with standard SQL syntax:


create directory BDUMP as '/u01/app/oracle/admin/orcl/bdump';

create table
alert_log ( msg varchar2(80) )
organization external (
type oracle_loader
default directory BDUMP
access parameters (
records delimited by newline
)
location('alert_orcl.log')
)
reject limit 1000;


Now we can easily extract important Oracle alert log information without leaving SQL*Plus:
select msg from alert_log where msg like '%ORA-00600%';

ORA-00600: internal error code, arguments: [17034],[2940981512],[0], [],[],[],[],[]
ORA-00600: internal error code, arguments: [18095],[0xC0000000210D8BF8],[],[],[],[]
ORA-00600: internal error code, arguments: [4400], [48], [], [], []

Db start, Db shut when server reboots.

---Some scripts which can be used in automation of db start and db shut when server reboots------
To allow Oracle start on boot-up, create a file called oracledb (or whatever name you want to call it) and put it in /etc/init.d with the contents below. This script was copied and pasted from a tutorial by Graham Williams. It will read the /etc/oratab and fire up any instances it finds.
#!/bin/bash
#
# /etc/init.d/oracledb
#
# Run-level Startup script for the Oracle Instance, Listener, and Web Interface

export ORACLE_HOME=/oracle/app/product/10.2.0/db_1/
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=cjperf
export TNS_ADMIN=$ORACLE_HOME/network/admin
ORA_OWNR="oracle"

# if the executables do not exist -- display error

if [ ! -f $ORACLE_HOME/bin/dbstart -o ! -d $ORACLE_HOME ]
then
echo "Oracle startup: cannot start"
exit 1
fi

# depending on parameter -- startup, shutdown, restart
# of the instance and listener or usage display

case "$1" in
start)
# Oracle listener and instance startup
echo -n "Starting Oracle: "
su $ORA_OWNR -c "$ORACLE_HOME/bin/lsnrctl start"
su $ORA_OWNR -c $ORACLE_HOME/bin/dbstart
touch /var/lock/oracle

su $ORA_OWNR -c "$ORACLE_HOME/bin/emctl start dbconsole"
echo "OK"
;;
stop)
# Oracle listener and instance shutdown
echo -n "Shutdown Oracle: "
su $ORA_OWNR -c "$ORACLE_HOME/bin/lsnrctl stop"
su $ORA_OWNR -c $ORACLE_HOME/bin/dbshut
rm -f /var/lock/oracle

su $ORA_OWNR -c "$ORACLE_HOME/bin/emctl stop dbconsole"
echo "OK"
;;
reload|restart)
$0 stop
$0 start
;;
*)
echo "Usage: `basename $0` start|stop|restart|reload"
exit 1
esac

exit 0



After saving the file, make it executable
# chmod a+x /etc/init.d/oracledb
and if you want, make it run at every boot:
# update-rc.d oracledb defaults 99
Adding system startup for /etc/init.d/oracledb ...
ln -s /etc/init.d/oracledb /etc/rc.d/rc0.d/K01oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc0.d/S99oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc2.d/K01oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc2.d/S99oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc3.d/K01oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc3.d/S99oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc4.d/K01oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc4.d/S99oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc5.d/K01oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc5.d/S99oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc6.d/K01oracledb
ln -s /etc/init.d/oracledb /etc/rc.d/rc6.d/S99oracledb

Modify /etc/oratab entries
vi /etc/oratab

database_sid:oracle_home_dir:Y|N

change entry from N to Y

eg: cjperf:/oracle/app/product/10.2.0/db_1:Y

Before finishing, add the following lines to the oracle user's ~/.bash_profile (or a system-wide profile if you prefer). Be careful to make sure the paths are set according to your particular setup (if you have been doing everything according to this text, you should be fine).
export ORACLE_BASE=/oracle/app/product/10.2.0/db_1/
export ORACLE_HOME=/oracle/app/oracle/product/10.2.0/db_1
export ORACLE_SID=cjperf
export PATH=$PATH:/oracle/app/oracle/product/10.2.0/db_1/bin
Last operation: add yourself to the dba group. You can use usermod or just edit the /etc/groups file and add your username at the end of the line that starts with dba (my username is ‘harsha’):
dba:x:1002:oracle,harsha
If you chose not to create a starter database during your install, you'll have to do two extra steps: create a listener (with netca) and, after that, create the starter database (with dbca). If you chose to have the installer create a database for you, then you should be fine, since in that case it asks for a password for the default accounts (SYS, SYSTEM, DBSNMP, and SYSMAN if you install with the Enterprise Manager option selected).
If everything has gone well, open a terminal window and, as the oracle user, type:
$ sqlplus

SQL*Plus: Release 10.2.0.1.0 - Beta on Wed Jul 11 17:11:53 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Enter user-name:
If you see these messages (and I sincerely hope you do) you’re all set! That means that you have finished a quite long install of Oracle 10g and you are ready to begin destroying it.

Performance Tuning

Redo buffer Tune:-

set serveroutput on;
declare
val1 number(10,2);
val2 number(10,2);
val3 number(10,8);
begin
select value into val1 from v$sysstat Where name in ('redo entries');
select value into val2 from v$sysstat Where name in ('redo log space requests');
val3:=(val2/val1);
if val3<.0002 then
dbms_output.put_line('The ratio (redo log space requests/redo entries) is ok '||val3);
end if;
if val3>.0002 then
dbms_output.put_line('Increase the size of the redo log buffer '||val3);
end if;
end;
/
set serveroutput off;
/
___________________________________


Buffer Cache tune:-

set serveroutput on;
DECLARE
DB_BLK NUMBER(10,2);
CONS_GET NUMBER(10,2);
PHY_RDS NUMBER(10,2);
FINAL NUMBER(10,2);
begin
Select value INTO DB_BLK From v$sysstat Where name in ('db block gets');
Select value INTO CONS_GET From v$sysstat Where name in ('consistent gets');
Select value INTO PHY_RDS From v$sysstat Where name in ('physical reads');
FINAL:=(1-(PHY_RDS/(DB_BLK+CONS_GET)))*100;
if final>90 then
dbms_output.put_line('The BUFFER CACHE HIT RATIO IS OK '||final);
end if;
if final<90 then
dbms_output.put_line('Increase the size of the buffer cache (DB_CACHE_SIZE) '||final);
end if;
end;
/
set serveroutput off;
/
___________________________________

Tune SORT_AREA_SIZE:-

Select name, value from v$sysstat
where name in ('sorts (memory)', 'sorts (disk)');

--If large number of sorts require I/O to disk,increase the initialisation parameter SORT_AREA_SIZE
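
One hedged way of acting on that advice (the sizes below are illustrative only; on 10g with automatic PGA management, PGA_AGGREGATE_TARGET is normally tuned instead of SORT_AREA_SIZE):

```
-- manual workarea management, session level:
alter session set sort_area_size = 1048576;
-- 10g automatic PGA management alternative:
alter system set pga_aggregate_target = 200M;
```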
_________________________________

Tune Library cache:-

set serveroutput on;
DECLARE
hit NUMBER(10,2);
reload number(10,2);
begin
Select ((sum(pinhits) / sum(pins))*100) into hit From v$librarycache Where namespace in ('SQL AREA','TABLE PROCEDURE','BODY','TRIGGER');
select ((sum(reloads)/sum(pins))*100) into reload From v$librarycache Where namespace in ('SQL AREA','TABLE PROCEDURE','BODY','TRIGGER');
IF hit<85 then
dbms_output.put_line('THE hit ratio is <85%. check shared pool and open cursor parameter '||hit);
end if;
if hit>=85 then
dbms_output.put_line('THE hit ratio is ok '||hit);
end if;
dbms_output.put_line('*************************************************************************');
if reload<=.02 then
dbms_output.put_line('The reload percent is 2% (i.e. 0.02) or less. this is ok '||reload);
end if;
if reload>.02 then
dbms_output.put_line('The reload percent is >2%. consider increasing SHARED_POOL_SIZE '||reload);
end if;
end;
/
set serveroutput off;
/
________________________________

Tune Dictionary cache:-

set serveroutput on;
DECLARE
ROWCACHE NUMBER(10,2);
BEGIN
Select ((sum(getmisses) / sum(gets))*100) INTO ROWCACHE From v$rowcache;
IF ROWCACHE<15 then
dbms_output.put_line('The dictionary cache miss percentage is ok '||rowcache);
end if;
IF ROWCACHE>15 then
dbms_output.put_line('increase the initialisation parameter SHARED_POOL_SIZE '||rowcache);
end if;
END;
/
set serveroutput off;
/
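The miss percentage is plain arithmetic over the v$rowcache sums; a minimal Python sketch with invented values:

```python
# Invented v$rowcache sums standing in for the query above.
getmisses, gets = 1_200, 50_000

rowcache = getmisses / gets * 100   # dictionary cache miss percentage
increase_shared_pool = rowcache > 15
print(rowcache, increase_shared_pool)
```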
_________________________________

How to find the Chained rows in table:-

1. Run the utlchain.sql script in the schema
(where we want to find the chained rows) to create the CHAINED_ROWS table.

SQL>@$ORACLE_HOME/rdbms/admin/utlchain.sql
Then it will create CHAINED_ROWS table.

2.Execute the below command :-
SQL>analyze table table_name list chained rows into CHAINED_ROWS;

3.Query the CHAINED_ROWS table to get the details of the chained rows.

(or)

SELECT 'Chained or Migrated Rows = '||value FROM v$sysstat
WHERE name = 'table fetch continued row';
____________________________________

Tuning of Indexes:-

grant execute on dbms_stats to &User_name;
grant select on index_stats to &User_name;
connect &Username/&Password
exec dbms_stats.gather_schema_stats(ownname=>'&User_Name');
(or)
analyze index owner.index_name validate structure;

SELECT NAME,(DEL_LF_ROWS_LEN/LF_ROWS_LEN)*100 "Wasted Space"
FROM Index_stats WHERE name IN (select index_name from dba_indexes where owner like 'USER_NAME');

If the wasted space of any index is more than 20%, then rebuild the index.
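The rebuild rule is a single percentage over the index_stats columns; a Python sketch with made-up figures:

```python
# Invented index_stats values for one index.
del_lf_rows_len, lf_rows_len = 300_000, 1_000_000

# Same formula as the query: deleted leaf-row bytes over total leaf-row bytes.
wasted_pct = del_lf_rows_len / lf_rows_len * 100
rebuild_needed = wasted_pct > 20   # the 20% rule of thumb above
print(wasted_pct, rebuild_needed)
```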
___________________________________

Compiling all views in a Schema: -

set lines 130 pages 9999
spool comp_all_views
select 'alter view '||owner||'.'||view_name||' compile;' from dba_views
where owner = 'user_name';
spool off
@comp_all_views.lst
_______________________________

To bring Table read_only:-

1. SQL>alter table (table_name) add constraint (constraint_name) check(1=1);

2. SQL>alter table (table_name) disable validate constraint (constraint_name);

To bring Table read_write:-

1. SQL>alter table (table_name) enable validate constraint (constraint_name);
______________________________

How to See the used & free space in Temporary Tablespace: -

SELECT A.tablespace_name tablespace, D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM v$sort_segment A,
(SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM v$tablespace B, v$tempfile C
WHERE B.ts#= C.ts# GROUP BY B.name, C.block_size) D
WHERE A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;

(or)

SELECT b.tablespace, b.segfile#, b.segblk#, b.blocks,
a.sid, a.serial#, a.username, a.osuser, a.status
FROM v$session a,v$sort_usage b
WHERE a.saddr = b.session_addr
ORDER BY b.tablespace, b.segfile#, b.segblk#, b.blocks;
_____________________________

How to calculate size of a table containing CLOBs and BLOBs datatype: -

select sum(bytes), s.owner, s.segment_name, s.segment_type
from dba_lobs l, dba_segments s
where s.segment_type = 'LOBSEGMENT'
and l.table_name = 'TABLE_NAME'
and s.segment_name = l.segment_name
and s.owner like 'USER_NAME'
group by s.owner,s.segment_name,s.segment_type;

-This gives the exact LOB segment size.
-To get the total size of the table (including the LOB segment), add the outputs of the following two queries: -

select sum(bytes), s.owner, s.segment_name, s.segment_type
from dba_lobs l, dba_segments s
where s.segment_type = 'LOBSEGMENT'
and l.table_name = 'TABLE_NAME'
and s.segment_name = l.segment_name
and s.owner like 'USER_NAME'
group by s.owner,s.segment_name,s.segment_type;
--------(+) add both outputs--------
select bytes from dba_segments where segment_name like 'TABLE_NAME'
and owner like 'USER_NAME';


___________________________________


Checking CPU utilization for all users: -

select ss.username,se.SID,VALUE/100 cpu_usage_seconds
from v$session ss, v$sesstat se, v$statname sn
where se.STATISTIC# = sn.STATISTIC#
and NAME like '%CPU used by this session%'
and se.SID = ss.SID
and ss.status='ACTIVE'
and ss.username is not null
order by VALUE desc;

Resumable Operations in Oracle 9i

In Oracle 9i, you can suspend and resume operations that run out of space or hit a space limitation after executing for a long time. A transaction can be switched into 'resumable' mode: any error arising from lack of space will not cause the transaction to fail, but will suspend it. When the space problem is fixed, the operation/transaction resumes automatically as if there had been no problem.
Example:-
Suppose you create a table and insert 20,000 records, and at that point a space limitation is reached: either MAXEXTENTS has been crossed or the tablespace has run out of free space. Under these circumstances we would normally get an error and the transaction would be rolled back. With the resumable option, the transaction can instead be suspended for a specified number of seconds; after the space problem is fixed, the transaction resumes automatically and proceeds further.
Consider you have created tablespace test of datafile size 1m,

SYS> CREATE TABLESPACE test
DATAFILE <'path/filename'> SIZE 1m;
Tablespace created

For a normal user to use resumable operations, a privileged user must first grant the RESUMABLE privilege:

SYS> GRANT RESUMABLE TO <user_name>
eg:
SYS> GRANT RESUMABLE TO SCOTT;
Grant succeeded
SYS>conn scott/tiger
connected
Issue ALTER SESSION ENABLE RESUMABLE TIMEOUT:
SCOTT> ALTER SESSION ENABLE RESUMABLE TIMEOUT 60;
session altered
SCOTT> CREATE TABLE test1
TABLESPACE test
AS SELECT * from user_objects;
Table created
SCOTT> insert into test1 select * from user_objects;
7 rows created
SCOTT>insert into test1 select * from test1;
14 rows created
SCOTT>/
28 rows created
SCOTT>
114688 rows created.
...the transaction proceeds up to the 1m tablespace limit; after that it goes into a waiting state
SCOTT>/

The transaction will be suspended for up to 60 seconds. Within those 60 seconds, increase the size of the tablespace; meanwhile an error message is written to the alert log file, such as (Tue Jan 29 13:50:26 2008
statement in resumable session 'User SCOTT(22), Session 9, Instance 1' was suspended due to
ORA-01653:
unable to extend table SCOTT.TEST by 2048 in tablespace TEST1
Tue Jan 29 14:27:22 2008
statement in resumable session 'User SCOTT(22), Session 9, Instance 1' was suspended due to
ORA-30036:
unable to extend segment by 256 in undo tablespace 'UNDO1'
If you do not increase the size of the tablespace, the transaction gets rolled back and you receive an error message:
SCOTT>insert into test select * from test
*
ERROR at line 1:
ORA-30032:
the suspended (resumable) statement has timed out
ORA-01653:
unable to extend table SCOTT.TEST by 256 in tablespace TEST1
If you resolve the problem within 60 seconds, the transaction continues.

How to download patchset from metalink?


Naming Convention of Oracle Patchset

p<patch#>_<version>_<platform>.zip

Oracle Patchset# list

9.2.0.4 = 3095277
9.2.0.5 = 3501955
9.2.0.6 = 3948480
9.2.0.7 = 4163445
9.2.0.8 = 4547809(9i Final)
10.1.0.3 = 3761843
10.1.0.4 = 4163362
10.1.0.5 = 4505133
10.2.0.2 = 4547817
10.2.0.3 = 5337014
10.2.0.4 = 6810189
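The naming convention can be sketched as a small helper; the function name is hypothetical, and the outputs match the file names in the FTP listings below:

```python
def patchset_filename(patch_no: int, version: str, platform: str) -> str:
    # Assembles a name following the p<patch#>_<version>_<platform>.zip convention.
    return f"p{patch_no}_{version}_{platform}.zip"

print(patchset_filename(5337014, "10203", "LINUX"))  # p5337014_10203_LINUX.zip
```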

C:\Documents and Settings\Jenny>ftp updates.oracle.com
Connected to bigip-updates-test-adc.oracle.com.
220 FTP server ready.
User (bigip-updates-test-adc.oracle.com:(none)): user_name
331 Username OK, please send password.
Password:
230-
230- Welcome to the Oracle Patch Download FTP Server
230-
230- For detailed help, use command "quote site help".
230
ftp> cd 5337014
250 Changed directory OK.
ftp> ls -al
200 PORT command OK.
150 Opening data connection for file listing.
total 1
-r--r--r-- 1 root other 1604896973 Feb 22 2007 p5337014_10203_AIX64-5L.zip
-r--r--r-- 1 root other 1424057924 Feb 22 2007 p5337014_10203_HP64.zip
-r--r--r-- 1 root other 1641826103 May 2 2007 p5337014_10203_HPUX-IA64.zip
-r--r--r-- 1 root other 1443790237 Mar 15 2007 p5337014_10203_IBMPower.zip
-r--r--r-- 1 root other 1103539895 Jun 13 2007 p5337014_10203_LINUX-zSer.zip
-r--r--r-- 1 root other 944121770 Feb 21 2007 p5337014_10203_LINUX.zip
-r--r--r-- 1 root other 1209745348 Feb 22 2007 p5337014_10203_Linux-IA64.zip
-r--r--r-- 1 root other 1082086597 Feb 22 2007 p5337014_10203_Linux-x86-64.zip
-r--r--r-- 1 root other 937018969 Feb 23 2007 p5337014_10203_MSWIN-x86-64.zip
-r--r--r-- 1 root other 881471858 Feb 26 2007 p5337014_10203_MVS.zip
-r--r--r-- 1 root other 1254311329 Feb 22 2007 p5337014_10203_SOLARIS64.zip
-r--r--r-- 1 root other 1811431410 Aug 13 2007 p5337014_10203_Solaris86-64.zip
-r--r--r-- 1 root other 1393733954 Oct 15 04:50 p5337014_10203_TRU64.zip
-r--r--r-- 1 root other 893831865 Feb 21 2007 p5337014_10203_WINNT.zip
-r--r--r-- 1 root other 1122786231 Feb 23 2007 p5337014_10203_WINNT64.zip
226 Listing complete. Data connection has been closed.
ftp: 1294 bytes received in 0.00Seconds 1294000.00Kbytes/sec.
ftp>

Copy and paste
ftp://updates.oracle.com/5337014/p5337014_10203_LINUX.zip
to FlashGet to add new job

In FlashGet, Right click “Site properties”
Check

Login to server:
Username:
Password:

Press “OK”

10g 10.2.0.4 PatchSet Windows Version Released

C:\Documents and Settings\Administrator>ftp updates.oracle.com
Connected to bigip-updates.oracle.com.
220 FTP server ready.
User (bigip-updates.oracle.com:(none)):
331 Username OK, please send password.
Password:
230-
230- Welcome to the Oracle Patch Download FTP Server
230-
230- For detailed help, use command "quote site help".
230
ftp> cd 6810189
250 Changed directory OK.
ftp> ls -al
200 PORT command OK.
150 Opening data connection for file listing.
total 1
-r--r--r-- 1 root other 1195551830 Mar 17 05:36 p6810189_10204_Linux-x86-64.zip
-r--r--r-- 1 root other 1053748381 Feb 22 18:58 p6810189_10204_Linux-x86.zip
-r--r--r-- 1 root other 1034079272 Mar 18 00:03 p6810189_10204_Win32.zip
226 Listing complete. Data connection has been closed.
ftp: 269 bytes received in 0.00Seconds 269000.00Kbytes/sec.
ftp>
_____________________________________________________________________________________

List of Patchset number in metalink

A)For Oracle9iR2, Base version 9.2.0.1
i)To upgrade to 9.2.0.2 patchset number 2632931.
ii)To upgrade to 9.2.0.3 patchset number 2761332
iii)To upgrade to 9.2.0.4 patchset number 3095277
iv)To upgrade to 9.2.0.5 patchset number 3501955
v)To upgrade to 9.2.0.6 patchset number 3948480
vi)To upgrade to 9.2.0.7 patchset number 4163445
vii)To upgrade to 9.2.0.8 patchset number 4547809

B)For Oracle10g, Base version 10.1.0.2
i)To upgrade to 10.1.0.3 patchset number 3761843
ii)To upgrade to 10.1.0.4 patchset number 4163362
iii)To upgrade to 10.1.0.5 patchset number 4505133

C)For Oracle10gR2 Base version 10.2.0.1
i)To upgrade to 10.2.0.2 patchset number 4547817
ii)To upgrade to 10.2.0.3 patchset number 5337014
iii)To upgrade to 10.2.0.4 patchset number 6810189
iv)For Warehouse build of 10.2.0.4 patchset number 7005587

D)For Oracle11gR1 Base version 11.1.0.6
i)To upgrade to 11.1.0.7.0 Patchset number 6890831

Oracle Scripts 1

Script for Deleting Duplicate rows in a table: -

Delete from table_name where rowid not in (select max(rowid) from table_name
group by duplicate_values_field_name);
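The keep-the-max-rowid logic of that DELETE can be mimicked in memory; the tuples below are invented sample data:

```python
# (rowid, duplicate_values_field) pairs; rows 1 and 2 share the same value.
rows = [(1, 'A'), (2, 'A'), (3, 'B')]

# Keep only the max "rowid" per duplicate value, as the subquery does.
keep = {}
for rowid, value in rows:
    keep[value] = max(keep.get(value, rowid), rowid)

survivors = sorted(keep.values())
print(survivors)  # [2, 3] -- rowid 1 is the duplicate the DELETE removes
```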
_________________________________________________

Finding chained rows in a table: -

@$ORACLE_HOME/rdbms/admin/utlchain.sql;

analyze &object_type &object_name list chained rows into chained_rows;
___________________________________________________

Finding oracle Invalid Objects: -

break on c1 skip 2
set pages 999
col c1 heading 'owner' format a15
col c2 heading 'name' format a40
col c3 heading 'type' format a15
ttitle 'Invalid Objects'
select owner c1,object_type c3,object_name c2 from dba_objects
where status != 'VALID'
order by owner,object_type;


Compiling invalid Objects: -

Set heading off;
set feedback off;
set echo off;
Set lines 999;
Spool run_invalid.sql
select
'ALTER ' || OBJECT_TYPE || ' ' ||
OWNER || '.' || OBJECT_NAME || ' COMPILE;'
from
dba_objects where status = 'INVALID'
and object_type in ('PACKAGE','FUNCTION','PROCEDURE');
spool off;
set heading on;
set feedback on;
set echo on;

@run_invalid.sql

(or)

@$ORACLE_HOME/rdbms/admin/utlrp.sql
___________________________________________

Getting Query which is taking more CPU: -


select SQL_TEXT from V$SQLTEXT_WITH_NEWLINES
where HASH_VALUE = (select sql_hash_value from v$session
    where SID = (select s.sid from v$process p, v$session s
        where s.paddr = p.addr and p.spid = &top_comm_PID));

____________________________________________

Creating Tablespace Script:-

set verify off;
set termout off;
set feedback off;
set echo off;
set pagesize 0;
set termout on;
select 'Creating tablespace build script...' from dual;
set termout off;

create table ts_temp (lineno number, ts_name varchar2(30),
text varchar2(800));

DECLARE
CURSOR ts_cursor IS select tablespace_name,
initial_extent,
next_extent,
min_extents,
max_extents,
pct_increase,
status
from sys.dba_tablespaces
where tablespace_name != 'SYSTEM'
and status != 'INVALID'
order by tablespace_name;
CURSOR df_cursor (c_ts VARCHAR2) IS select file_name,
bytes,
autoextensible,
maxbytes,
increment_by
from sys.dba_data_files
where tablespace_name = c_ts
and tablespace_name != 'SYSTEM'
order by file_name;
lv_tablespace_name sys.dba_tablespaces.tablespace_name%TYPE;
lv_initial_extent sys.dba_tablespaces.initial_extent%TYPE;
lv_next_extent sys.dba_tablespaces.next_extent%TYPE;
lv_min_extents sys.dba_tablespaces.min_extents%TYPE;
lv_max_extents sys.dba_tablespaces.max_extents%TYPE;
lv_pct_increase sys.dba_tablespaces.pct_increase%TYPE;
lv_status sys.dba_tablespaces.status%TYPE;
lv_file_name sys.dba_data_files.file_name%TYPE;
lv_bytes sys.dba_data_files.bytes%TYPE;
lv_autoextensible sys.dba_data_files.autoextensible%TYPE;
lv_maxbytes sys.dba_data_files.maxbytes%TYPE;
lv_increment_by sys.dba_data_files.increment_by%TYPE;
lv_first_rec BOOLEAN;
lv_string VARCHAR2(800);
lv_lineno number := 0;

procedure write_out(p_line INTEGER, p_name VARCHAR2,
p_string VARCHAR2) is
begin
insert into ts_temp (lineno, ts_name, text) values
(p_line, p_name, p_string);
end;

BEGIN
OPEN ts_cursor;
LOOP
FETCH ts_cursor INTO lv_tablespace_name,
lv_initial_extent,
lv_next_extent,
lv_min_extents,
lv_max_extents,
lv_pct_increase,
lv_status;
EXIT WHEN ts_cursor%NOTFOUND;
lv_lineno := 1;
lv_string := ('CREATE TABLESPACE '||lower(lv_tablespace_name));
lv_first_rec := TRUE;
write_out(lv_lineno, lv_tablespace_name, lv_string);
OPEN df_cursor(lv_tablespace_name);
LOOP
FETCH df_cursor INTO lv_file_name,
lv_bytes,
lv_autoextensible,
lv_maxbytes,
lv_increment_by;
EXIT WHEN df_cursor%NOTFOUND;
if (lv_first_rec) then
lv_first_rec := FALSE;
lv_string := 'DATAFILE ';
else
lv_string := lv_string || ',';
end if;
lv_string:=lv_string||''''||lv_file_name||''''||
' SIZE '||to_char(lv_bytes) || ' REUSE';
if (lv_autoextensible = 'YES') then
lv_string:=lv_string||' AUTOEXTEND ON'||
' NEXT '||to_char(lv_increment_by)||
' MAXSIZE '||to_char(lv_maxbytes);
END IF;
END LOOP;
CLOSE df_cursor;
lv_lineno := lv_lineno + 1;
write_out(lv_lineno, lv_tablespace_name, lv_string);
lv_lineno := lv_lineno + 1;
lv_string := (' DEFAULT STORAGE (INITIAL ' ||
to_char(lv_initial_extent) ||
' NEXT ' || lv_next_extent);
write_out(lv_lineno, lv_tablespace_name, lv_string);
lv_lineno := lv_lineno + 1;
lv_string := (' MINEXTENTS ' ||
lv_min_extents ||
' MAXEXTENTS ' || lv_max_extents);
write_out(lv_lineno, lv_tablespace_name, lv_string);
lv_lineno := lv_lineno + 1;
lv_string := (' PCTINCREASE ' ||
lv_pct_increase || ')');
write_out(lv_lineno, lv_tablespace_name, lv_string);
lv_string := (' '||lv_status);
write_out(lv_lineno, lv_tablespace_name, lv_string);
lv_lineno := lv_lineno + 1;
lv_string:='/';
write_out(lv_lineno, lv_tablespace_name, lv_string);
lv_lineno := lv_lineno + 1;
lv_string:=' ';
write_out(lv_lineno, lv_tablespace_name, lv_string);
END LOOP;
CLOSE ts_cursor;
END;
/

spool create_tablespaces.sql
set heading off
set recsep off
col text format a80 word_wrap


select text
from ts_temp
order by ts_name, lineno;

spool off;

drop table ts_temp;

exit

Create Users Script:-

set pagesize 0
set escape on
spool create_user.sql
select 'create user ' || U.username || ' identified ' ||
DECODE(password,
NULL, 'EXTERNALLY',
' by values ' || '''' || password || ''''
)
|| chr(10) ||
'default tablespace ' || default_tablespace || chr(10) ||
'temporary tablespace ' || temporary_Tablespace || chr(10) ||
' profile ' || profile || chr(10) ||
'quota ' ||
decode ( Q.max_bytes, -1, 'UNLIMITED', NULL, 'UNLIMITED', Q.max_bytes) ||
' on ' || default_tablespace ||
decode (account_status,'LOCKED', ' account lock',
'EXPIRED', ' password expire',
'EXPIRED \& LOCKED', ' account lock password expire',
null)
||
';'
from dba_users U, dba_ts_quotas Q
-- Comment this clause out to include system & default users
where U.username not in ('SYS','SYSTEM',
'SCOTT','DBSNMP','OUTLN','WKPROXY','WMSYS','ORDSYS','ORDPLUGINS','MDSYS',
'CTXSYS','XDB','ANONYMOUS','OWNER','WKSYS','ODM_MTR','ODM','OLAPSYS',
'HR','OE','PM','SH','QS_ADM','QS','QS_WS','QS_ES','QS_OS','QS_CBADM',

'QS_CB','QS_CS','PERFSTAT')
and U.username=Q.username(+) and U.default_tablespace=Q.tablespace_name(+)
;
spool off
set pagesize 100
set escape off


To delete the non-default users:-

set pages 0
spool dropuser.sql
select 'drop user '||username||' cascade;' from dba_users
where username not in ('MGMT_VIEW','SYS','SYSTEM','DBSNMP','SYSMAN','OUTLN','MDSYS','ORDSYS', 'EXFSYS','DMSYS','WMSYS','CTXSYS','ANONYMOUS','XDB','ORDPLUGINS',
'SI_INFORMTN_SCHEMA','OLAPSYS','SCOTT','TSMSYS','BI','PM','MDDATA',
'IX','SH','DIP','OE','HR')
/


24hr in 30 days logswitches:-

SET LINESIZE 150
COLUMN day FORMAT A3
COLUMN Total FORMAT 99G990
COLUMN h00 FORMAT 999
COLUMN h01 FORMAT 999
COLUMN h02 FORMAT 999
COLUMN h03 FORMAT 999
COLUMN h04 FORMAT 999
COLUMN h05 FORMAT 999
COLUMN h06 FORMAT 999
COLUMN h07 FORMAT 999
COLUMN h08 FORMAT 999
COLUMN h09 FORMAT 999
COLUMN h10 FORMAT 999
COLUMN h11 FORMAT 999
COLUMN h12 FORMAT 999
COLUMN h13 FORMAT 999
COLUMN h14 FORMAT 999
COLUMN h15 FORMAT 999
COLUMN h16 FORMAT 999
COLUMN h17 FORMAT 999
COLUMN h18 FORMAT 999
COLUMN h19 FORMAT 999
COLUMN h20 FORMAT 999
COLUMN h21 FORMAT 999
COLUMN h22 FORMAT 999
COLUMN h23 FORMAT 999
COLUMN h24 FORMAT 999
BREAK ON REPORT
COMPUTE MAX OF "Total" ON REPORT
COMPUTE MAX OF "h00" ON REPORT
COMPUTE MAX OF "h01" ON REPORT
COMPUTE MAX OF "h02" ON REPORT
COMPUTE MAX OF "h03" ON REPORT
COMPUTE MAX OF "h04" ON REPORT
COMPUTE MAX OF "h05" ON REPORT
COMPUTE MAX OF "h06" ON REPORT
COMPUTE MAX OF "h07" ON REPORT
COMPUTE MAX OF "h08" ON REPORT
COMPUTE MAX OF "h09" ON REPORT
COMPUTE MAX OF "h10" ON REPORT
COMPUTE MAX OF "h11" ON REPORT
COMPUTE MAX OF "h12" ON REPORT
COMPUTE MAX OF "h13" ON REPORT
COMPUTE MAX OF "h14" ON REPORT
COMPUTE MAX OF "h15" ON REPORT
COMPUTE MAX OF "h16" ON REPORT
COMPUTE MAX OF "h17" ON REPORT
COMPUTE MAX OF "h18" ON REPORT
COMPUTE MAX OF "h19" ON REPORT
COMPUTE MAX OF "h20" ON REPORT
COMPUTE MAX OF "h21" ON REPORT
COMPUTE MAX OF "h22" ON REPORT
COMPUTE MAX OF "h23" ON REPORT

SELECT TRUNC(first_time) AS "Date",
TO_CHAR(first_time, 'Dy') AS "Day",
COUNT(1) AS "Total",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'00',1,0)) AS "h00",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'01',1,0)) AS "h01",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'02',1,0)) AS "h02",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'03',1,0)) AS "h03",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'04',1,0)) AS "h04",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'05',1,0)) AS "h05",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'06',1,0)) AS "h06",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'07',1,0)) AS "h07",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'08',1,0)) AS "h08",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'09',1,0)) AS "h09",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'10',1,0)) AS "h10",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'11',1,0)) AS "h11",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'12',1,0)) AS "h12",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'13',1,0)) AS "h13",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'14',1,0)) AS "h14",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'15',1,0)) AS "h15",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'16',1,0)) AS "h16",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'17',1,0)) AS "h17",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'18',1,0)) AS "h18",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'19',1,0)) AS "h19",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'20',1,0)) AS "h20",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'21',1,0)) AS "h21",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'22',1,0)) AS "h22",
SUM(DECODE(TO_CHAR(first_time, 'HH24'),'23',1,0)) AS "h23"
FROM V$log_history
GROUP BY TRUNC(first_time), TO_CHAR(first_time, 'Dy')
ORDER BY 1;
CLEAR BREAKS
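The DECODE columns above pivot log switches into a count per hour of day; the same pivot in a few lines of Python, with invented timestamps standing in for v$log_history.first_time:

```python
from collections import Counter
from datetime import datetime

# Invented log-switch timestamps standing in for v$log_history.first_time.
switches = [
    datetime(2009, 11, 17, 0, 5),
    datetime(2009, 11, 17, 0, 40),
    datetime(2009, 11, 17, 5, 12),
    datetime(2009, 11, 17, 23, 59),
]

# Same pivot as the DECODE columns: count of switches per hour of day.
per_hour = Counter(t.strftime("%H") for t in switches)
print(per_hour["00"], per_hour["05"], per_hour["23"])
```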

____________________________________________

Log Miner: -

/* first we have to set utl_file_dir parameter */

@$ORACLE_HOME/rdbms/admin/dbmslm.sql;
@$ORACLE_HOME/rdbms/admin/dbmslmd.sql;

exec dbms_logmnr_d.build('&file_name','&path_of_logmnr',-
dbms_logmnr_d.store_in_flat_file);

exec dbms_logmnr.add_logfile(logfilename=>'&path_of_the_logfile',-
options=>dbms_logmnr.new);

exec dbms_logmnr.add_logfile(logfilename=>'&path_of_2nd_logfile_or_archive',-
options=>dbms_logmnr.addFILE);

exec dbms_logmnr_d.build(dictionary_filename=>'&file_name',-
dictionary_location=>'&path_of_the_logmnr');

exec dbms_logmnr.start_logmnr(DictFileName=>'&path_of_the_file_name');

select username,sql_redo,to_char(timestamp,'dd-mm-yyyy hh24:mi:ss')
time_in_sec from v$logmnr_contents where username='&user_name';

____________________________________________

Monitor Import Speed: -

SELECT
SUBSTR(sql_text, INSTR(sql_text,'INTO "'),30) table_name
, rows_processed
, ROUND( (sysdate-TO_DATE(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60,1) minutes
, TRUNC(rows_processed/((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60)) rows_per_minute
FROM
sys.v_$sqlarea
WHERE
sql_text like 'INSERT %INTO "%'
AND command_type = 2
AND open_versions > 0;
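The rows-per-minute column is simple arithmetic over two v$sqlarea figures; a Python sketch with invented numbers:

```python
# Invented v$sqlarea figures for one running INSERT statement.
rows_processed = 120_000
minutes = 8.0   # (sysdate - first_load_time) * 24 * 60 in the query

rows_per_minute = int(rows_processed / minutes)  # TRUNC in the query
print(rows_per_minute)
```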


--------------------------------------------------------------


script to calculate Tablespace Free Space:-

column dummy noprint
column pct_used format 999.9 heading "%|Used"
column name format a19 heading "Tablespace Name"
column Kbytes format 999,999,999 heading "Kbytes"
column used format 999,999,999 heading "Used"
column free format 999,999,999 heading "Free"
column largest format 999,999,999 heading "Largest"
column max_size format 999,999,999 heading "MaxPoss|Kbytes"
column pct_max_used format 999. heading "%|Max|Used"
break on report
compute sum of kbytes on report
compute sum of free on report
compute sum of used on report


select (select decode(extent_management,'LOCAL','*',' ') ||
decode(segment_space_management,'AUTO','a ','m ')
from dba_tablespaces where tablespace_name = b.tablespace_name) ||
nvl(b.tablespace_name,
nvl(a.tablespace_name,'UNKNOWN')) name,
kbytes_alloc kbytes,
kbytes_alloc-nvl(kbytes_free,0) used,
nvl(kbytes_free,0) free,
((kbytes_alloc-nvl(kbytes_free,0))/
kbytes_alloc)*100 pct_used,
nvl(largest,0) largest,
nvl(kbytes_max,kbytes_alloc) Max_Size,
decode( kbytes_max, 0, 0, (kbytes_alloc/kbytes_max)*100) pct_max_used
from ( select sum(bytes)/1024 Kbytes_free,
max(bytes)/1024 largest,
tablespace_name
from sys.dba_free_space
group by tablespace_name ) a,
( select sum(bytes)/1024 Kbytes_alloc,
sum(maxbytes)/1024 Kbytes_max,
tablespace_name
from sys.dba_data_files
group by tablespace_name
union all
select sum(bytes)/1024 Kbytes_alloc,
sum(maxbytes)/1024 Kbytes_max,
tablespace_name
from sys.dba_temp_files
group by tablespace_name )b
where a.tablespace_name (+) = b.tablespace_name
order by 1
/

Tuesday, November 10, 2009

Table partition

Table Partitioning
Nowadays enterprises run databases of hundreds of gigabytes in size. These databases are known as Very Large Databases (VLDB). From Oracle Ver. 8.0, Oracle has provided the feature of table partitioning, i.e. you can partition a table according to some criteria. For example, you have a SALES table with the following structure:

Suppose this table contains millions of records, but all the records belong to only four years, i.e. 1991, 1992, 1993
and 1994, and most of the time you are concerned with only one year, i.e. you give queries like the following:

select sum(amt) from sales where year=1991;

select product,sum(amt) from sales where year=1992 Group by product;

Now whenever you give queries like this, Oracle will search the whole table. If you partition this table by year, then performance improves, since Oracle will scan only a single partition instead of the whole table.

CREATING PARTITION TABLES

To create a partitioned table, give the following statement:

create table sales (year number(4),
product varchar2(10),
amt number(10,2))
partition by range (year)
(partition p1 values less than (1992) tablespace u1,
partition p2 values less than (1993) tablespace u2,
partition p3 values less than (1994) tablespace u3,
partition p4 values less than (1995) tablespace u4,
partition p5 values less than (MAXVALUE) tablespace u5);

In the above example the sales table is created with 5 partitions. Partition p1 will contain rows of year 1991 and it will be stored in tablespace u1. Partition p2 will contain rows of year 1992 and it will be stored in tablespace u2. Similarly for p3 and p4.
In the above example, if you don’t specify the partition p5 with values less than MAXVALUE, then you will not be able to insert any row with a year above 1994.
Although not required, you can place partitions in different tablespaces. Doing so isolates problems due to failures: only the affected partition becomes unavailable, while the rest of the partitions remain available.

In the above example the table is partitioned by range.

In Oracle you can partition a table by
*Range Partitioning
*Hash Partitioning
*List Partitioning
*Composite Partitioning

Range Partitioning: -
This type of partitioning is useful when dealing with data that has logical ranges into which it can be distributed; for example, a year value. Performance is best when the data is evenly distributed across the range.
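The year-to-partition mapping of the sales example below can be sketched in a few lines of Python (the bounds mirror the VALUES LESS THAN clauses; this is an illustration, not how Oracle implements it):

```python
import bisect

# Upper bounds from the sales DDL: VALUES LESS THAN (1992), (1993), ...
bounds = [1992, 1993, 1994, 1995]
partitions = ["p1", "p2", "p3", "p4", "p5"]  # p5 = MAXVALUE partition

def partition_for(year: int) -> str:
    # A year maps to the first partition whose upper bound exceeds it.
    return partitions[bisect.bisect_right(bounds, year)]

print(partition_for(1991), partition_for(1994), partition_for(2000))
```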

Hash partitioning: -
Use hash partitioning if your data does not easily lend itself to range partitioning, but you would like to partition for performance and manageability reasons. Hash partitioning provides a method of evenly distributing data across a specified number of partitions. Rows are mapped into partitions based on a hash value of the partitioning key

The following example creates a hash-partitioned table. The partitioning column is partno; four partitions are created, assigned system-generated names, and placed in four named tablespaces (tab1, tab2, ...).

CREATE TABLE products
(partno NUMBER,
description VARCHAR2 (60))
PARTITION BY HASH (partno)
PARTITIONS 4
STORE IN (tab1, tab2, tab3, tab4);
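The key-to-partition mapping can be illustrated with a toy hash; note that Oracle uses its own internal hash function, so modulo here is only a stand-in:

```python
def hash_partition(partno: int, n_partitions: int = 4) -> int:
    # Modulo is a stand-in for Oracle's internal hash function; it only
    # illustrates how a key deterministically maps to one of N partitions.
    return partno % n_partitions

# Consecutive keys spread evenly over the four partitions.
print([hash_partition(p) for p in range(8)])
```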

List Partitioning: -
Use list partitioning when you require explicit control over how rows map to partitions. You can specify a list of discrete values for the partitioning column in the description for each partition. This is different from range partitioning, where a range of values is associated with a partition, and from hash partitioning, where the user has no control of the row to partition mapping.

List partitioning allows unordered and unrelated sets of data to be grouped and organized together very naturally

The following example creates a table with list partitioning

Create table customers (custcode number(5),
Name varchar2(20),
Addr varchar2(10),
City varchar2(20),
Bal number(10,2))
Partition by list (city)
(Partition north_india values ('DELHI','CHANDIGARH'),
Partition east_india values ('KOLKATA','PATNA'),
Partition south_india values ('HYDERABAD','BANGALORE','CHENNAI'),
Partition west_india values ('BOMBAY','GOA'));
When a row is inserted into the above table, Oracle looks at the value of the city column and stores the row in whichever partition's value list contains that city.
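The value-list lookup amounts to a dictionary from city to partition; a Python sketch mirroring the example's value lists:

```python
# Explicit value list -> partition mapping, mirroring the customers DDL.
CITY_TO_PARTITION = {
    "DELHI": "north_india", "CHANDIGARH": "north_india",
    "KOLKATA": "east_india", "PATNA": "east_india",
    "HYDERABAD": "south_india", "BANGALORE": "south_india",
    "CHENNAI": "south_india",
    "BOMBAY": "west_india", "GOA": "west_india",
}

def partition_for(city: str) -> str:
    # A city outside every value list raises KeyError here, much as Oracle
    # raises ORA-14400 when no partition's list matches the row.
    return CITY_TO_PARTITION[city]

print(partition_for("PATNA"))
```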

COMPOSITE PARTITIONING: -
Composite partitioning partitions data using the range method, and within each partition, sub-partitions it using the hash method. Composite partitions are ideal for both historical data and striping, and provide improved manageability of range partitioning and data placement, as well as the parallelism advantages of hash partitioning.

When creating composite partitions, you specify the following:

* Partitioning method: range
* Partitioning column(s)
* Partition descriptions identifying partition bounds
* Sub-partitioning method: hash
* Sub-partitioning column(s)
* Number of sub-partitions for each partition or descriptions of subpartitions

The following statement creates a composite-partitioned table. In this example, three range partitions are created, each containing eight subpartitions. Because the subpartitions are not named, system generated names are assigned, but the
STORE IN clause distributes them across the 4 specified tablespaces (tab1, ...,tab4).

CREATE TABLE PRODUCTS (partno NUMBER,
description VARCHAR(32),
costprice NUMBER)
PARTITION BY RANGE (partno)
SUBPARTITION BY HASH(description)
SUBPARTITIONS 8 STORE IN (tab1, tab2, tab3, tab4)
(PARTITION p1 VALUES LESS THAN (100),
PARTITION p2 VALUES LESS THAN (200),
PARTITION p3 VALUES LESS THAN (MAXVALUE));

ALTERING PARTITION TABLES: -

To add a partition
You can add a new partition at the "high" end (the point after the last existing partition). To add a partition at the beginning or in the middle of a table, use the SPLIT PARTITION clause.

For example to add a partition to sales table give the following command.
alter table sales add partition p6 values less than (1996);

To add a partition to a Hash Partition table give the following command.
Alter table products add partition;

Then Oracle adds a new partition whose name is system generated, and it is created in the default tablespace. To add a partition with a user-defined name in a tablespace of your choice, give the following command.

Alter table products add partition p5 tablespace u5;

To add a partition to a List partition table give the following command.
alter table customers add partition central_India values (‘BHOPAL’,’NAGPUR’);

Any value in the set of literal values that describe the partition(s) being added must not exist in any of the other partitions of the table.

Coalescing Partitions: -

Coalescing partitions is a way of reducing the number of partitions in a hash-partitioned table, or the number of sub-partitions in a composite-partitioned table. When a hash partition is coalesced, its contents are redistributed into one or more remaining partitions determined by the hash function. The specific partition that is coalesced is selected by Oracle, and is dropped after its contents have been redistributed.

To coalesce a hash partition give the following statement.
Alter table products coalesce partition;

This reduces by one the number of partitions in the table products.

DROPPING PARTITIONS

To drop a partition from a range-, list-, or composite-partitioned table, give the following command.
Alter table sales drop partition p5;

Once you have dropped the partition, if you have created a global index on the table, then you have to rebuild the global index by giving the following statement.
Alter index sales_ind rebuild;

To avoid rebuilding indexes after dropping a partition, you can also first delete all the records and then drop
the partition, like this:

Delete from sales where year=1994;

Alter table sales drop partition p4;

This method is most appropriate for small tables, or for large tables when the partition being dropped contains a small percentage of the total data in the table.

Another method of dropping a partition is to give the following statement.

ALTER TABLE sales DROP PARTITION p5 UPDATE GLOBAL INDEXES;

This causes the global index to be updated at the time the partition is dropped.

Exchanging a Range, Hash, or List Partition: -

To exchange a partition of a range, hash, or list-partitioned table with a nonpartitioned table, or the reverse, use the ALTER TABLE ... EXCHANGE PARTITION statement. An example of converting a partition into a nonpartitioned table follows. In this example, table stocks can be range, hash, or list partitioned.

ALTER TABLE stocks EXCHANGE PARTITION p3 WITH stock_table_3;

Merging Partitions: -
Use the ALTER TABLE ... MERGE PARTITIONS statement to merge the contents of two partitions into one partition. The two original partitions are dropped, as are any corresponding local indexes.
You cannot use this statement for a hash-partitioned table or for hash subpartitions of a composite-partitioned table.
You can only merge two adjacent partitions; you cannot merge non-adjacent partitions.
For example, to merge partitions p2 and p3 into one partition p23, give the following statement.

Alter table sales merge partitions p2, p3 into partition p23;

Modifying Partitions: Adding Values

Use the MODIFY PARTITION ... ADD VALUES clause of the ALTER TABLE statement to extend the value list of an existing partition. Literal values being added must not have been included in any other partition's value list. The partition value list for any corresponding local index partition is correspondingly extended, and any global index, or global or local index partitions, remain usable.

The following statement adds a new set of cities ('KOCHI', 'MANGALORE') to an existing partition list.

ALTER TABLE customers
MODIFY PARTITION south_india
ADD VALUES ('KOCHI', 'MANGALORE');

Modifying Partitions: Dropping Values

Use the MODIFY PARTITION ... DROP VALUES clause of the ALTER TABLE statement to remove literal values from the value list of an existing partition. The statement is always executed with validation, meaning that it checks to see if any rows exist in the partition that correspond to the set of values being dropped. If any such rows are found then Oracle returns an error message and the operation fails. When necessary, use a DELETE statement to delete corresponding rows from the table before attempting to drop values.
You cannot drop all literal values from the value list describing the partition. You must use the ALTER TABLE ... DROP PARTITION statement instead.
The partition value list for any corresponding local index partition reflects the new value list, and any global index, or global or local index partitions, remain usable.
The statement below drops a set of cities ('KOCHI' and 'MANGALORE') from an existing partition value list.

ALTER TABLE customers
MODIFY PARTITION south_india
DROP VALUES ('KOCHI', 'MANGALORE');

SPLITTING PARTITIONS: -
You can split a single partition into two partitions. For example, to split partition p5 of the sales table into two partitions, give the following command.

Alter table sales split partition p5 into
(Partition p6 values less than (1996),
Partition p7 values less than (MAXVALUE));

TRUNCATING PARTITION: -
Truncating a partition will delete all rows from the partition.
To truncate a partition give the following statement

Alter table sales truncate partition p5;

LISTING INFORMATION ABOUT PARTITION TABLES: -

To see how many partitioned tables there are in your schema, give the following statement
Select * from user_part_tables;

To see partition-level partitioning information
Select * from user_tab_partitions;
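For example, a few of the more useful columns from user_tab_partitions can be selected to show each partition's position and upper bound (note that num_rows is populated only after statistics have been gathered):

```sql
SELECT table_name, partition_name, partition_position, high_value, num_rows
FROM user_tab_partitions
ORDER BY table_name, partition_position;
```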

Thursday, November 5, 2009

Steps for installing statspack

Statspack
Statspack is a set of performance monitoring and reporting utilities provided by Oracle starting with Oracle 8i. Statspack provides improved BSTAT/ESTAT functionality, though the old BSTAT/ESTAT scripts are still available. For more information about STATSPACK, read the documentation in the file $ORACLE_HOME/rdbms/admin/spdoc.txt.
Although AWR and ADDM (introduced in Oracle 10g) provide better statistics than STATSPACK, users who are not licensed to use the Enterprise Manager Diagnostic Pack should continue to use Statspack.

Install statspack
cd $ORACLE_HOME/rdbms/admin
sqlplus "/ as sysdba" @spdrop.sql -- Drop any existing statspack installation
sqlplus "/ as sysdba" @spcreate.sql -- Enter tablespace names when prompted

Take performance snapshots of the database
sqlplus perfstat/perfstat
exec statspack.snap; -- Take a performance snapshot



Adjusting the STATSPACK Collection Level

STATSPACK has two types of collection options, level and threshold. The level parameter controls the type of data collected from Oracle, while the threshold parameter acts as a filter for the collection of SQL statements into the stats$sql_summary table.

SQL> SELECT * FROM stats$level_description ORDER BY snap_level;

Level 0 This level captures general statistics, including rollback segment, row cache, SGA, system events, background events, session events, system statistics, wait statistics, lock statistics, and latch information.
Level 5 This level includes capturing high resource usage SQL statements, along with all data captured by lower levels.
Level 6 This level includes capturing SQL plan and SQL plan usage information for high resource usage SQL statements, along with all data captured by lower levels.
Level 7 This level captures segment level statistics, including logical and physical reads, row lock, ITL and buffer busy waits, along with all data captured by lower levels.
Level 10 This level includes capturing child latch statistics, along with all data captured by lower levels.

You can change the default level of a snapshot with the statspack.snap function. Passing i_modify_parameter => 'true' makes the new level permanent for all future snapshots.

SQL> exec statspack.snap(i_snap_level => 6, i_modify_parameter => 'true');


-- or, instruct statspack to gather more detail in a single snapshot
-- (look up which Oracle version supports which level):
exec perfstat.statspack.snap(i_snap_level=>10);
The spauto.sql script can be customized and executed to schedule the collection of STATSPACK snapshots.
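As a sketch of what spauto.sql sets up (the exact script text varies by Oracle version), it essentially submits a dbms_job that takes a snapshot at the top of every hour:

```sql
VARIABLE jobno NUMBER;
BEGIN
  DBMS_JOB.SUBMIT(:jobno,
                  'statspack.snap;',
                  TRUNC(SYSDATE + 1/24, 'HH'),      -- first run at the next full hour
                  'TRUNC(SYSDATE + 1/24, ''HH'')',  -- then repeat hourly
                  TRUE);
  COMMIT;
END;
/
```

Run it as the perfstat user, and adjust the interval expression if hourly snapshots are too frequent or too coarse for your workload.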

Statspack reporting
-- Get a list of snapshots
select SNAP_ID, SNAP_TIME from STATS$SNAPSHOT;

select name,snap_id,to_char(snap_time,'DD.MM.YYYY:HH24:MI:SS')
"Date/Time" from stats$snapshot,v$database;

@spreport.sql -- Enter two snapshot id's for difference report

Other statspack scripts
Some of the other statspack scripts are:
sppurge.sql - Purges (deletes) a range of snapshots between the specified begin and end snap ids
spauto.sql - Schedules a dbms_job to automate the collection of STATSPACK statistics
spcreate.sql - Installs the STATSPACK user, tables and package on a database (run as SYS)
spdrop.sql - Deinstalls STATSPACK from a database (run as SYS)
spreport.sql - Reports on differences between values recorded in two snapshots
sptrunc.sql - Truncates all data in STATSPACK tables
Potential problems
Statspack reporting suffers from the following problems:
Some statistics may only be reported on COMPLETION of a query. For example, if a query runs for 12 hours, its processing won't be reported during any of the snapshots taken while the query was busy executing.
If queries are aged out of the shared pool, the stats from V$SQL are reset. This can throw off the delta calculations and even make them negative. For example, query A has 10,000 buffer_gets at snapshot 1, but by snapshot 2 it has been aged out of the pool and reloaded, and now shows only 1,000 buffer_gets. So, when you run spreport.sql from snapshot 1 to 2, you'll get 1,000 - 10,000 = -9,000 for this query.