Monday, December 2, 2013

How to Improve RMAN backup performance

Backup and recovery tuning requires a good understanding of the hardware and software involved, such as disk speed, I/O, buffering, and the media management layer (MML) used, for example, for NetBackup.

Many factors can affect backup performance, and finding the cause of a slow backup is often a process of trial and error. To get the best performance from a backup, work through the following steps:

Step 1: Remove RATE Parameters from Configured and Allocated Channels
=========================================================
The RATE parameter on a channel is intended to reduce, rather than increase, backup throughput, so
that more disk bandwidth is available for other database operations.

If your backup is not streaming to tape, then make sure that the RATE parameter is not set on the
ALLOCATE CHANNEL or CONFIGURE CHANNEL commands.
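For example, the persistent channel configuration can be checked and any RATE limit cleared as follows (the sbt device type is illustrative; use your configured device type):

```sql
-- Show the current persistent channel settings
RMAN> SHOW CHANNEL;

-- A throttled channel would look like:
--   CONFIGURE CHANNEL DEVICE TYPE sbt RATE 10M;

-- Reset the channel configuration to its default, removing the RATE limit
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt CLEAR;
```

Note that CLEAR resets the entire channel configuration to its default; if the channel carries other options (such as PARMS), reissue CONFIGURE CHANNEL without the RATE clause instead.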


Step 2: Consider Using I/O Slaves
========================

 -  If You Use Synchronous Disk I/O, Set DBWR_IO_SLAVES

If and only if your disk does not support asynchronous I/O, then try setting the DBWR_IO_SLAVES initialization parameter to a nonzero value. Any nonzero value for DBWR_IO_SLAVES causes a fixed number (four) of disk I/O slaves to be used for backup and restore, which simulates asynchronous I/O.
If I/O slaves are used, I/O buffers are obtained from the SGA. The large pool is used, if configured. Otherwise, the shared pool is used.

Note: By setting DBWR_IO_SLAVES, the database writer processes will use slaves as well.
You may need to increase the value of the PROCESSES initialization parameter.

 -  Use tape I/O slaves to keep the tape streaming (continually moving) by simulating
asynchronous I/O

Set the "init.ora" parameter:
BACKUP_TAPE_IO_SLAVES = true

This causes one tape I/O slave to be assigned to each channel server process.

In 8i/9i/10g, if the DUPLEX option is specified, then tape I/O slaves must be enabled.
In this case, for DUPLEX=<n>, there are <n> tape slaves per channel. These <n> slaves
all operate on the same four output buffers, so a buffer is not freed
until all <n> slaves have finished writing it to tape.


Step 3: If You Fail to Allocate Shared Memory, Set LARGE_POOL_SIZE
=========================================================
Set this initialization parameter if the database reports an error in the alert.log stating that it does not have enough memory and that it will not start I/O slaves.

The message should resemble the following:
ksfqxcre: failure to allocate shared memory means sync I/O will be used whenever async I/O to file not supported natively


When attempting to get shared buffers for I/O slaves, the database does the following:

* If LARGE_POOL_SIZE is set, then the database attempts to get memory from the large pool. If this value is not large enough, then an error is recorded in the alert log, the database does not try to get buffers from the shared pool, and asynchronous I/O is not used.
* If LARGE_POOL_SIZE is not set, then the database attempts to get memory from the shared pool.
* If the database cannot get enough memory, then it obtains I/O buffer memory from the PGA and writes a message to the alert.log file indicating that synchronous I/O is used for this backup.

The memory from the large pool is used for many features, including the shared server (formerly called multi-threaded server), parallel query, and RMAN I/O slave buffers. Configuring the large pool prevents RMAN from competing with other subsystems for the same memory.

Requests for contiguous memory allocations from the shared pool are usually small (under 5 KB) in size. However, it is possible that a request for a large contiguous memory allocation can either fail or require significant memory housekeeping to release the required amount of contiguous memory. Although the shared pool may be unable to satisfy this memory request, the large pool is able to do so. The large pool does not have a least recently used (LRU) list; the database does not attempt to age memory out of the large pool.

Use the LARGE_POOL_SIZE initialization parameter to configure the large pool. To see in which pool (shared pool or large pool) the memory for an object resides, query V$SGASTAT.POOL.
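The pool placement can be checked with a query such as the following (entry names in V$SGASTAT vary slightly between versions):

```sql
-- List allocations in the shared and large pools, largest first
SELECT pool, name, bytes
FROM   v$sgastat
WHERE  pool IN ('shared pool', 'large pool')
ORDER  BY pool, bytes DESC;
```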

The formula for setting LARGE_POOL_SIZE is as follows:

LARGE_POOL_SIZE = number_of_allocated_channels *
(16 MB + ( 4 * size_of_tape_buffer ) )
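As a worked example (the channel count and tape buffer size below are assumptions): with 4 allocated channels and a 256 KB tape buffer, the formula gives 4 * (16 MB + 4 * 0.25 MB) = 68 MB, which could be set like this:

```sql
-- 4 channels * (16 MB + 4 * 256 KB) = 68 MB
ALTER SYSTEM SET large_pool_size = 68M SCOPE=SPFILE;
```

With SCOPE=SPFILE the new value takes effect at the next instance startup.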


Step 4: Tune RMAN Tape Streaming Performance Bottlenecks
================================================
There are several tasks you can perform to identify and remedy bottlenecks that affect RMAN's performance on tape backups:

Using BACKUP... VALIDATE To Distinguish Between Tape and Disk Bottlenecks

One reliable way to determine whether the tape streaming or disk I/O is the bottleneck in a given backup job is to compare the time required to run backup tasks with the time required to run BACKUP VALIDATE of the same tasks.
BACKUP VALIDATE of a backup to tape performs the same disk reads as a real backup but performs no tape I/O. If the time required for the BACKUP VALIDATE to tape is significantly less than the time required for a real backup to tape, then writing to tape is the likely bottleneck.
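For example, the elapsed times of these two commands can be compared (a whole-database backup is used for illustration):

```sql
-- Real backup: disk reads plus tape writes through the configured channel
RMAN> BACKUP DATABASE;

-- Same disk reads, no tape output; if this runs much faster,
-- tape streaming is the likely bottleneck
RMAN> BACKUP VALIDATE DATABASE;
```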

Using Multiplexing to Improve Tape Streaming with Disk Bottlenecks

In some situations when performing a backup to tape, RMAN may not be able to send data blocks to the tape drive fast enough to support streaming.

For example, during an incremental backup, RMAN only backs up blocks changed since a previous datafile backup as part of the same strategy. If you do not turn on change tracking, RMAN must scan entire datafiles for changed blocks, and fill output buffers as it finds such blocks. If there are not many changed blocks, RMAN may not fill output buffers fast enough to keep the tape drive streaming.
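If change tracking is off, it can be enabled so that incremental backups read only the changed blocks instead of scanning whole datafiles (the tracking-file location below is an example):

```sql
-- Enable block change tracking (available from 10g)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/opt/oracle/block_change_tracking.f';

-- Confirm it is enabled
SELECT status, filename FROM v$block_change_tracking;
```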

You can improve performance by increasing the degree of multiplexing used for backing up. This increases the rate at which RMAN fills tape buffers, which makes it more likely that buffers are sent to the media manager fast enough to maintain streaming.
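The degree of multiplexing is the lesser of the MAXOPENFILES and FILESPERSET settings; for example (the values are illustrative):

```sql
-- Allow each channel to read up to 8 datafiles concurrently
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt MAXOPENFILES 8;

-- Pack up to 8 datafiles into each backup set
RMAN> BACKUP DATABASE FILESPERSET 8;
```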

Using Incremental Backups to Improve Backup Performance With Tape Bottlenecks

If writing to tape is the source of a bottleneck for your backups, consider using incremental backups as part of your backup strategy. Incremental level 1 backups write only the changed blocks from datafiles to tape, so that any bottleneck on writing to tape has less impact on your overall backup strategy. In particular, if tape drives are not locally attached to the node running the database being backed up, then incremental backups can be faster.
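A simple incremental strategy along these lines might look as follows (tags and schedule are illustrative):

```sql
-- Weekly level 0 (baseline) backup
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'weekly_l0';

-- Daily level 1 backup: only blocks changed since the level 0
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'daily_l1';
```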

Step 5: Query V$ Views to Identify Bottlenecks
=====================================
If none of the previous steps improves backup performance, then try to determine the exact source of the bottleneck. Use the V$BACKUP_SYNC_IO and V$BACKUP_ASYNC_IO views to determine the source of backup or restore bottlenecks and to see detailed progress of backup jobs.

V$BACKUP_SYNC_IO contains rows when the I/O is synchronous to the process (or thread on some platforms) performing the backup.
V$BACKUP_ASYNC_IO contains rows when the I/O is asynchronous.
Asynchronous I/O is obtained either with I/O processes or because it is supported by the underlying operating system.


To determine whether your tape is streaming when the I/O is synchronous, query the EFFECTIVE_BYTES_PER_SECOND column in the V$BACKUP_SYNC_IO or V$BACKUP_ASYNC_IO view.
If EFFECTIVE_BYTES_PER_SECOND is less than the raw capacity of the hardware, then the tape is not streaming. If EFFECTIVE_BYTES_PER_SECOND is greater than the raw capacity of the hardware, the tape may or may not be streaming.

Compression may cause the EFFECTIVE_BYTES_PER_SECOND to be greater than the speed of real I/O.
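A query along these lines shows the effective rate per file so it can be compared with the device's rated speed (query V$BACKUP_SYNC_IO instead when the I/O is synchronous):

```sql
-- Effective throughput per file; compare with the tape drive's rating
SELECT device_type, type, filename, effective_bytes_per_second
FROM   v$backup_async_io
WHERE  effective_bytes_per_second IS NOT NULL;
```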

Identifying Bottlenecks with Synchronous I/O

With synchronous I/O, it is difficult to identify specific bottlenecks because all synchronous I/O is a bottleneck to the process. The only way to tune synchronous I/O is to compare the rate (in bytes/second) with the device's maximum throughput rate. If the rate is lower than the rate that the device specifies, then consider tuning this aspect of the backup and restore process. The DISCRETE_BYTES_PER_SECOND column in the V$BACKUP_SYNC_IO view displays the I/O rate. If you see data in V$BACKUP_SYNC_IO, then the problem is that you have not enabled asynchronous I/O or you are not using disk I/O slaves.

Identifying Bottlenecks with Asynchronous I/O

Long waits are the number of times the backup or restore process told the operating system to wait until an I/O was complete. Short waits are the number of times the backup or restore process made an operating system call to poll for I/O completion in nonblocking mode. Ready is the number of times the I/O was already complete, so no operating system call was needed to poll for completion.

The simplest way to identify the bottleneck is to query V$BACKUP_ASYNC_IO for the datafile that has the largest ratio for LONG_WAITS divided by IO_COUNT.
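For example:

```sql
-- Datafiles ranked by the fraction of I/Os that blocked;
-- the top row is the likely disk bottleneck
SELECT filename, long_waits, io_count,
       ROUND(long_waits / io_count, 3) AS wait_ratio
FROM   v$backup_async_io
WHERE  io_count > 0
ORDER  BY wait_ratio DESC;
```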

Note:
If you have synchronous I/O but you have set BACKUP_DISK_IO_SLAVES, then the I/O will be displayed in V$BACKUP_ASYNC_IO.


The following is also recommended for improving RMAN performance on AIX 5L-based systems.
===================================================================
IBM suggests the following AIX-related settings:

1. Set AIXTHREAD_SCOPE=S in /etc/environment.

2. Run "ioo -o maxpgahead=256" to set the maxpgahead parameter.
Initial settings were: Min/Maxpgahead 2 16

3. Run "vmo -o minfree=360 -o maxfree=1128" to set the minfree and maxfree parameters.
Initial settings were: Min/Maxfree 240 256

These settings yielded 15-20% improvements in RMAN backup performance on AIX 5L-based systems.

Note: Document source: Oracle Support.

Monday, October 7, 2013

Flashback database commands

SQL> SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;
SQL> SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;
SQL> SELECT CURRENT_SCN FROM V$DATABASE;

SQL> SELECT estimated_flashback_size/1024/1024,flashback_size/1024/1024 
FROM v$flashback_database_log;


prompt -- What Flashback options are currently enabled for this database?
TTITLE 'Flashback Options Currently Enabled:'
COL name FORMAT A32 HEADING 'Parameter'
COL value FORMAT A32 HEADING 'Setting'

SELECT
name
,value
FROM v$parameter
WHERE NAME LIKE '%flash%' OR NAME LIKE '%recovery%'
ORDER BY NAME;



prompt -- status of the Flash Recovery Area ?
TTITLE 'Flash Recovery Area Status'
COL name FORMAT A32 HEADING 'File Name'
COL spc_lmt_mb FORMAT 9999.99 HEADING 'Space|Limit|(MB)'
COL spc_usd_mb FORMAT 9999.99 HEADING 'Space|Used|(MB)'
COL spc_rcl_mb FORMAT 9999.99 HEADING 'Reclm|Space|(MB)'
COL number_of_files FORMAT 99999 HEADING 'Files'

SELECT
name
,space_limit /(1024*1024) spc_lmt_mb
,space_used /(1024*1024) spc_usd_mb
,space_reclaimable /(1024*1024) spc_rcl_mb
,number_of_files
FROM v$recovery_file_dest;

prompt -- Is Flashback Database On ?

TTITLE 'Is Flashback Database Enabled?'
COL name FORMAT A12 HEADING 'Database'
COL current_scn FORMAT 9999999 HEADING 'Current|SCN #'
COL flashback_on FORMAT A8 HEADING 'Flash|Back On?'

SELECT
name
,current_scn
,flashback_on
FROM v$database;


prompt -- What Flashback Logs are available?
TTITLE 'Current Flashback Logs Available'
COL log# FORMAT 9999 HEADING 'FLB|Log#'
COL bytes FORMAT 99999999 HEADING 'Flshbck|Log Size'
COL first_change# FORMAT 99999999 HEADING 'Flshbck|SCN #'
COL first_time FORMAT A24 HEADING 'Flashback Start Time'

SELECT
LOG#
,bytes
,first_change#
,first_time
FROM v$flashback_database_logfile;



PROMPT How Far Back Can We Flashback To (Time)?
PROMPT

select to_char(oldest_flashback_time,'dd-mon-yyyy hh24:mi:ss') "Oldest Flashback Time" from v$flashback_database_log;

PROMPT
PROMPT How Far Back Can We Flashback To (SCN)?
PROMPT

col oldest_flashback_scn format 99999999999999999999999999
select oldest_flashback_scn from v$flashback_database_log;



PROMPT
PROMPT Flashback Area Usage
SELECT * FROM V$FLASH_RECOVERY_AREA_USAGE;

PROMPT
col ROUND(SPACE_LIMIT/1048576) heading "Space Allocated (MB)" format 999999
col round(space_used/1048576) heading "Space Used (MB)" format 99999
col name Heading "Flashback Location" format a50

select name, round(space_limit/1048576),round(space_used/1048576) from v$RECOVERY_FILE_DEST;



=====



prompt

prompt -- ----------------------------------------------------------------------- ---

prompt -- Flash Recovery Area ---

prompt -- ----------------------------------------------------------------------- ---

prompt

set heading off

select ' db_recovery_file_dest '||value from V$PARAMETER where name='db_recovery_file_dest'
union
select ' db_recovery_file_dest_size (G) '||to_char(value/1024/1024/1024) from V$PARAMETER where name='db_recovery_file_dest_size'
union
select ' db_flashback_retention_target (minutes) '||value||' (hours : '||value/60||')' from V$PARAMETER where name='db_flashback_retention_target'
union
select ' Flashback size estimated (G) '||to_char(round(ESTIMATED_FLASHBACK_SIZE/1024/1024/1024)) From V$FLASHBACK_DATABASE_LOG
; 

set heading on


column name format a40 heading "Name"
column sl format 9999999 heading "Space Limit|(Gb)"
column su format 9999999 heading "Space Used|(Gb)"
column sr format 9999999 heading "Space|Reclaimable|(Gb)"
column nf format 9999999 heading "N. Files"

Select
Substr(Name,1,40) name
, Space_Limit/1024/1024/1024 sl
, Space_Used/1024/1024/1024 su
, Space_Reclaimable/1024/1024/1024 sr
, Number_Of_Files nf
From V$RECOVERY_FILE_DEST
;

column ft format a15 heading "File Type"
column psu format 9999999 heading "Space|Used %"
column psr format 9999999 heading "Space|Reclaimable %"

Select
File_Type ft
, Percent_Space_Used psu
, Percent_Space_Reclaimable psr
, Number_Of_Files nf
From V$FLASH_RECOVERY_AREA_USAGE
;

column os format 99999999999999 heading "Oldest Flashback SCN"
column ot format a25 heading "Oldest Flashback Time"

Select 
OLDEST_FLASHBACK_SCN os
, TO_CHAR(OLDEST_FLASHBACK_TIME, 'DD-MM-YYYY HH24:MI:SS') ot
From 
V$FLASHBACK_DATABASE_LOG
;

-- select * from V$RESTORE_POINT;
-- How much space is used by Flashback Logs for each GRP?
-- SQL> SELECT NAME, SCN, STORAGE_SIZE
-- FROM V$RESTORE_POINT
-- WHERE GUARANTEE_FLASHBACK_DATABASE = 'YES';
-- How much space do all GRP Flashback Logs use?
-- SQL> SELECT SUM(STORAGE_SIZE)
-- FROM V$RESTORE_POINT
-- WHERE GUARANTEE_FLASHBACK_DATABASE = 'YES';




How to flashback (the values given below are just examples)

With the database in MOUNT (exclusive) mode, issue one of the following commands:

FLASHBACK DATABASE TO SCN 600000

FLASHBACK DATABASE TO BEFORE SCN 66666666

FLASHBACK DATABASE TO TIMESTAMP (SYSDATE -2/24)

FLASHBACK DATABASE TO SEQUENCE 67777777

FLASHBACK TABLE persons TO SCN 566666666




Saturday, June 16, 2012

Relationship between sessions, processes and transaction in oracle 10g



When increasing the sessions parameter, you should consider increasing the processes and transactions parameters as well.
The formulas below can be used to determine their values.
processes=x
sessions=x*1.1+5
transactions=sessions*1.1

E.g.
processes=100
sessions=115
transactions=127
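The current settings can be compared against the formula, and adjusted, with statements such as the following (the values match the example above; all three are static parameters, so a restart is required):

```sql
-- Check the current values
SELECT name, value
FROM   v$parameter
WHERE  name IN ('processes', 'sessions', 'transactions');

-- Apply the example values; takes effect after a restart
ALTER SYSTEM SET processes    = 100 SCOPE=SPFILE;
ALTER SYSTEM SET sessions     = 115 SCOPE=SPFILE;
ALTER SYSTEM SET transactions = 127 SCOPE=SPFILE;
```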

Saturday, March 24, 2012

What happens in Hot Backup of oracle database


alter tablespace ts_name begin backup;

When this command is issued, a checkpoint is performed against the target tablespace and the datafile header is frozen, so no more updates are allowed on it (the datafile header only). This is how the database knows the last time the tablespace had a consistent image of its data.

The datafiles with the backup in progress will still allow read/write operations just as a regular datafile, I/O activity is not frozen.

While the tablespace is in backup mode, the first time a block is modified the complete block (not just the changed row) is recorded in the redo log file; subsequent transactions on the same block record only the change, just as normal.

During a user-managed backup, the "fractured block" phenomenon can occur. Remember that the Oracle block is the minimum I/O unit for the database, and an Oracle block is made up of several OS blocks; assuming an 8 KB database block and a 512-byte OS block, one database block spans 16 OS blocks. If a write occurs on a block while the OS is copying it for the backup, the backup copy can contain part of the before image and part of the after image of that Oracle block, so the block in the backup media is fractured (inconsistent). This is expected: consistency is not guaranteed in the backup itself, which is why the header is frozen to mark the point where recovery must start, and why Oracle records a complete block image in the redo log the first time a block changes.

At the time the alter tablespace ts_name end backup; command is issued then the backup process is finished and the datafile header resumes its regular IO activity.
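Which datafiles are still in backup mode can be checked in V$BACKUP before issuing the end backup command:

```sql
-- Datafiles in hot backup mode show STATUS = 'ACTIVE'
SELECT b.file#, d.name, b.status, b.time
FROM   v$backup b
JOIN   v$datafile d ON b.file# = d.file#
WHERE  b.status = 'ACTIVE';
```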

Tuesday, March 6, 2012

How to get/view nodes /server information in RAC



1) member nodes number/name
olsnodes -n

2)local node/server name
olsnodes -l

3)activates logging
olsnodes -g

RAC administration


Cluster Ready Services (CRS) is Oracle's clusterware software.

CRS starts automatically when the server starts or is rebooted (planned or unplanned). It can be stopped in the following situations:

Applying a patch set to CRS_HOME
O/S maintenance (requires reboot)
Troubleshooting CRS problems

1) Starting CRS in 10gR2
$CRS_HOME/bin/crsctl start crs


2)Stopping CRS using Oracle 10g R2
$CRS_HOME/bin/crsctl stop crs

## Stopping CRS using Oracle 10g R1
srvctl stop database -d <database>
srvctl stop asm -n <node>
srvctl stop nodeapps -n <node>
/etc/init.d/init.crs stop

3)disabling/enabling CRS in 10g R2
$CRS_HOME/bin/crsctl [disable|enable] crs

## Oracle 10g R1
/etc/init.d/init.crs [disable|enable]

4)CRS status
$CRS_HOME/bin/crsctl check crs
$CRS_HOME/bin/crsctl check evmd
$CRS_HOME/bin/crsctl check cssd
$CRS_HOME/bin/crsctl check crsd
$CRS_HOME/bin/crsctl check install -wait 600

$CRS_HOME/bin/crs_stat
$CRS_HOME/bin/crs_stat -t
$CRS_HOME/bin/crs_stat -ls
$CRS_HOME/bin/crs_stat -p

Note:
-t more readable display
-ls permission listing
-p parameters


5)create profile for CRS
$CRS_HOME/bin/crs_profile

6)register/unregister application

$CRS_HOME/bin/crs_register
$CRS_HOME/bin/crs_unregister

7)Start/Stop an application
$CRS_HOME/bin/crs_start
$CRS_HOME/bin/crs_stop


8)Resource permissions
$CRS_HOME/bin/crs_getparam
$CRS_HOME/bin/crs_setparam

9)Relocate a resource
$CRS_HOME/bin/crs_relocate

Tuesday, February 21, 2012

Manual Database creation in 10g



The steps to create a database manually on Linux.


Step 1:
First of all, create the required directories as below.
$ mkdir -p /opt/scripts/
mkdir -p /opt/prod/datafiles/
mkdir -p /opt/prod/datafiles/admin/adump
mkdir -p /opt/prod/datafiles/admin/bdump
mkdir -p /opt/prod/datafiles/admin/cdump
mkdir -p /opt/prod/datafiles/archive
mkdir -p /opt/prod/datafiles/admin/udump

Step 2:
Create the database creation script at /opt/scripts/db_cr.sql:
CREATE DATABASE "prod"
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/opt/prod/datafiles/redo1.log' SIZE 10M,
GROUP 2 '/opt/prod/datafiles/redo2.log' SIZE 10M,
GROUP 3 '/opt/prod/datafiles/redo3.log' SIZE 10M
DATAFILE
'/opt/prod/datafiles/system01.dbf' size 100m,
'/opt/prod/datafiles/usr04.dbf' size 10m
sysaux datafile '/opt/prod/datafiles/sysaux01.dbf' size 100m
undo tablespace undotbs
datafile '/opt/prod/datafiles/undo01.dbf' size 50m
CHARACTER SET US7ASCII
;
Step 3:
Prepare the init file.

 vi  $ORACLE_HOME/dbs/initprod.ora
*.audit_file_dest='/opt/prod/datafiles/admin/adump'
*.background_dump_dest='/opt/prod/datafiles/admin/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/opt/prod/datafiles/control01.ctl',
'/opt/prod/datafiles/control02.ctl','/opt/prod/datafiles/control03.ctl'
*.core_dump_dest='/opt/prod/datafiles/admin/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='prod'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=prodXDB)'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=/opt/prod/datafiles/archive'
*.log_archive_format='%t_%s_%r.arc'
*.open_cursors=300
*.pga_aggregate_target=200278016
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=601882624
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS'
*.user_dump_dest='/opt/prod/datafiles/admin/udump'

Step 4:
Now perform the following steps:
$ export ORACLE_SID=prod
$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.3.0 - Production on Thu May 22 17:35:28 2008
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> startup nomount pfile='$ORACLE_HOME/dbs/initprod.ora'
ORACLE instance started.
Total System Global Area 603979776 bytes
Fixed Size 1263176 bytes
Variable Size 167774648 bytes
Database Buffers 427819008 bytes
Redo Buffers 7122944 bytes
SQL> @/opt/scripts/db_cr.sql
Database created.
Step 5:
Run the catalog.sql and catproc.sql scripts from $ORACLE_HOME/rdbms/admin.
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> select name from v$database;
NAME
---------
prod

Now create the listener and TNS entries for your database.