Step 1: Check the alert log for errors.
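On older releases that still use udump (as this note does), the alert log is written to the directory pointed to by the background_dump_dest parameter, which you can find with:
select value
from v$parameter
where name='background_dump_dest';
(On 11g and later, query v$diag_info instead.)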
Step 2: Check with the end user how long the session used to take to complete.
Step 3: Check with the user whether there has been any increase in the volume of data being processed.
Step 4: Check the wait events for this session using the query given below:
select p1, p2, p3, event
from v$session_wait
where sid=&session_id;
Also check the following:
select count(*), event
from v$session_wait
group by event;
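Note that v$session_wait shows only the current or most recent wait. To see the cumulative waits for the session since it connected, you can also query v$session_event:
select event, total_waits, time_waited
from v$session_event
where sid=&session_id
order by time_waited desc;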
Step 5: Identify the tables being accessed by that session using the query below:
select owner, object, type
from v$access
where sid=&session_id
and owner not in ('SYS','SYSTEM');
Step 6: Check whether statistics have been generated for the tables identified,
using the following query:
select last_analyzed, num_rows
from dba_tables
where owner='&table_owner'
and table_name='&object_name';
Note: If the table is partitioned, also check dba_tab_partitions; for
subpartitions, check dba_tab_subpartitions.
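For example, a partition-level check might look like this:
select partition_name, last_analyzed, num_rows
from dba_tab_partitions
where table_owner='&table_owner'
and table_name='&table_name';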
Step 7: Check whether statistics have been generated for the indexes of the tables
identified, using the following query:
select last_analyzed, num_rows
from dba_indexes
where table_owner='&table_owner'
and table_name='&table_name';
Note: If the indexes are partitioned, check dba_ind_partitions; for
subpartitions, check dba_ind_subpartitions.
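Similarly, for index partitions:
select index_name, partition_name, last_analyzed, num_rows
from dba_ind_partitions
where index_owner='&index_owner'
and index_name='&index_name';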
Step 8: If statistics are not up to date, generate them using the dbms_stats package.
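A minimal sketch using dbms_stats.gather_table_stats (adjust parameters such as estimate_percent to suit your environment):
begin
  dbms_stats.gather_table_stats(
    ownname => '&table_owner',
    tabname => '&table_name',
    cascade => true); -- cascade => true also gathers statistics for the table's indexes
end;
/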
Step 9: Follow the approach given below for SQL tuning.
Avoid using the following, as they often prevent the optimizer from using indexes:
a. IS NULL and IS NOT NULL predicates
b. NOT IN and != operators
c. LIKE '%pattern' (leading wildcard) and NOT EXISTS
Do's:
a. Use table aliases to prefix all columns.
b. Use SQL joins instead of sub-queries where possible.
c. Make the table with the fewest rows the driving table by placing it first in the FROM clause.
d. Use concatenated (composite) indexes wherever appropriate.
e. Pick the best join method:
f. Nested loops joins are best for indexed joins of small subsets.
g. Hash joins are usually the best choice for "big" joins.
h. Pick the best "driving" table.
i. Use bind variables; they are key to application scalability (see the example after this list).
j. Use Oracle hints wherever appropriate.
k. Compare performance between alternative syntaxes for your SQL statement.
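As an illustration of point i, in SQL*Plus a bind variable is declared and used as follows (the emp table and dept_id variable here are hypothetical):
variable dept_id number
exec :dept_id := 10
select ename
from emp
where deptno = :dept_id;
Because the literal value is not embedded in the statement text, Oracle can reuse the same parsed cursor for different values instead of hard-parsing each one.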
Use EXPLAIN PLAN to identify the access path being used by the query. The syntax is:
explain plan for actual_sql_statement;
You can see the output of the explain plan by running the following script:
$ORACLE_HOME/rdbms/admin/utlxplp.sql
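For example, against the same hypothetical emp table:
explain plan for
select ename, sal
from emp
where deptno = 10;
@$ORACLE_HOME/rdbms/admin/utlxplp.sql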
Alternatively, you can trace the session using the following command:
alter session set events '10046 trace name context forever,level 12';
Run the SQL query. This generates a trace file in the udump (user_dump_dest)
directory. Use the tkprof utility to get a readable version of this trace file,
with the following syntax:
tkprof trace_file_name trace_file_name.out sys=no explain=userid/pwd
The tkprof output file trace_file_name.out will show the access path
used by the queries as well as various timed statistics such as
CPU time and elapsed time.
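Once the trace is complete, remember to switch tracing off in the same session:
alter session set events '10046 trace name context off';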