Steps to import data from 11g to 9i database

In the 11g database, change the definition of the view EXU9DEFPSWITCHES from:

CREATE OR REPLACE VIEW exu9defpswitches (
compflgs, nlslensem ) AS
SELECT a.value, b.value
FROM sys.v$parameter a, sys.v$parameter b
WHERE a.name = 'plsql_compiler_flags' AND
b.name = 'nls_length_semantics'

to

CREATE OR REPLACE VIEW exu9defpswitches (
compflgs, nlslensem ) AS
SELECT a.value, b.value
FROM sys.v$parameter a, sys.v$parameter b
WHERE a.name = 'plsql_code_type' AND
b.name = 'nls_length_semantics'

(see the new parameter PLSQL_CODE_TYPE)

This needs to be changed because of the following issue:
Export From 11g using EXP Utility Version 9iR2 Produces Corrupt Export Dump [ID 550740.1] 

1- Go to 9i database server

2- Add the TNS entry for the 11g database
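For reference, a minimal tnsnames.ora entry might look like the sketch below; the host, port, and service name are placeholders for your environment, and the alias matches the @11gDB connect string used later.

11gDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 11g-host.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = 11gdb.example.com))
  )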

(Export from 11.1.0.7 using the export utility of 9.2.0.6.)

(To minimize character set conversion, set NLS_LANG during the export to the character set of the exported database, in this case WE8ISO8859P1. This means no conversion takes place; all data is stored in the export file as it was stored in the database.)

3- export NLS_LANG=american_america.WE8ISO8859P1

4- Take the export of the 11g database from this server

exp system/manager@11gDB owner=scott file=test.dmp log=a.log

(Run the import in the 9.2.0.6 database to import the dump created from 11.1.0.7 with the 9.2.0.6 export utility.)

5- export the ORACLE_SID of the 9i database

6- export NLS_LANG=american_america.WE8ISO8859P1

7- Now import the 11g database export that was taken with the 9i export utility

imp system/manager fromuser=scott touser=new file=test.dmp log=b.log
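As a quick sanity check after the import (not part of the original steps; the schema name NEW is just the touser value above), you can compare object counts between the source and target schemas:

-- hedged example: count imported objects in the target schema
SELECT object_type, COUNT(*)
FROM   dba_objects
WHERE  owner = 'NEW'
GROUP  BY object_type;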

Oracle Database 12c New Features for Developers

Oracle Database 12c introduces a new multitenant architecture that makes it easy to deploy and manage database clouds. Oracle 12c is a pluggable database environment, where we can plug multiple databases into a single container database. All these databases then share the same background processes and memory, which helps reduce the overhead of managing multiple databases.

I have tried to compile some of the important new features of Oracle Database 12c. Below are the top 15 new features of Oracle Database 12c for Oracle developers and professionals.
1. Sequence as Default Value
With Oracle Database 12c, we can directly assign a sequence's NEXTVAL as the default value for a column, so you no longer need to create a trigger to populate the column with the next value of the sequence; you just declare it in the table definition.

Example:

create sequence test_seq start with 1 increment by 1 nocycle;

create table test_tab
(
    id number default test_seq.nextval primary key
);

2. Invisible columns
Oracle Database 12c provides the invisible column feature. A column defined as invisible will not appear in generic queries (SELECT * FROM). An invisible column must be explicitly referenced in the SQL statement or condition, and it must likewise be named explicitly in an INSERT statement to insert data into it.

Example:

SQL> create table my_table
  2  (
  3  id number,
  4  name varchar2(100),
  5  email varchar2(100),
  6  password varchar2(100) INVISIBLE
  7  );
 
SQL> ALTER TABLE my_table MODIFY (password visible); 
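While the column is still invisible, DML has to name it explicitly; a quick sketch against the my_table definition above (values are made up):

-- the invisible column must be listed explicitly in the INSERT
INSERT INTO my_table (id, name, email, password)
VALUES (1, 'Scott', 'scott@example.com', 'tiger');

SELECT * FROM my_table;             -- password does not appear here
SELECT id, password FROM my_table;  -- explicit reference works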

3. Multiple indexes on the same column
Before Oracle Database 12c, we could not create multiple indexes on the same column. In Oracle Database 12c a column may have multiple indexes, but they must be of different types; for example, a column may have both a B-tree and a bitmap index. Only one index will be used at a given time; a hedged sketch follows.
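For illustration, a minimal sketch (table and index names are made up); note that the second index must be of a different type and, in 12c, only one index on the column may be visible at a time:

CREATE TABLE idx_demo (status VARCHAR2(10));

-- a regular (visible) B-tree index
CREATE INDEX idx_demo_btree ON idx_demo (status);

-- a bitmap index on the same column; it is created INVISIBLE
-- because only one index on the column list may be visible
CREATE BITMAP INDEX idx_demo_bitmap ON idx_demo (status) INVISIBLE;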

4. VARCHAR2 length up to 32767
From Oracle Database 12c, a VARCHAR2 column can be sized up to 32,767 bytes, where the earlier limit was 4,000. The maximum size of the VARCHAR2, NVARCHAR2, and RAW data types has been increased from 4,000 to 32,767 bytes. Increasing the allotted size for these data types allows users to store more information in character data types before switching to large objects (LOBs).
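Note that the extended limit is not available out of the box: as far as I recall, the MAX_STRING_SIZE parameter must be set to EXTENDED first (a one-way change that also involves restarting in UPGRADE mode and running utl32k.sql). A sketch:

-- enable extended data types first (one-way change, done once):
-- ALTER SYSTEM SET max_string_size = EXTENDED SCOPE = SPFILE;

CREATE TABLE big_text_tab
(
    notes VARCHAR2(32767)
);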

5. Top-N feature
A Top-N query is used to retrieve the top or bottom N rows from an ordered set. Combining two Top-N queries gives you the ability to page through an ordered set; a paging sketch follows the example below.
Example:

SQL> SELECT value
  2  FROM   mytable
  3  ORDER BY value DESC
  4  FETCH FIRST 10 ROWS ONLY;
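
To page through the ordered set, OFFSET can be combined with FETCH; for example, rows 11-20 of the same result:

SQL> SELECT value
  2  FROM   mytable
  3  ORDER BY value DESC
  4  OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;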

6. IDENTITY Columns
In Oracle Database 12c, we can define table columns with the keyword IDENTITY, an American National Standards Institute (ANSI) SQL keyword. Such columns are auto-incremented at insertion time (as in MySQL).
Example:

SQL> create table my_table
  2  (
  3  id number generated as identity,
  4  name varchar2(100),
  5  email varchar2(100),
  6  password varchar2(100) INVISIBLE
  7  );
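
A quick usage sketch (values are made up): the id column is populated automatically when it is omitted from the insert list.

INSERT INTO my_table (name, email) VALUES ('Scott', 'scott@example.com');

SELECT id, name FROM my_table;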

7. With Clause improvement
In Oracle 12c, we can declare PL/SQL functions in the WITH clause of a SELECT statement and use them as ordinary functions. Using this construct can result in better performance compared with schema-level functions.
Example:

SQL> WITH
  2    FUNCTION f_test(n IN NUMBER) RETURN NUMBER IS
  3    BEGIN
  4      RETURN n+1;
  5    END;
  6  SELECT f_test(1)
  7  FROM   dual
  8  ;

8. CASCADE for TRUNCATE and EXCHANGE PARTITION
With Oracle Database 12c, TRUNCATE can be executed with the CASCADE option, which also removes the child records; see the sketch below.
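My understanding is that CASCADE works when the child table's foreign key is defined with ON DELETE CASCADE; a minimal sketch with made-up tables:

CREATE TABLE parent_tab (id NUMBER PRIMARY KEY);

CREATE TABLE child_tab
(
    id        NUMBER PRIMARY KEY,
    parent_id NUMBER REFERENCES parent_tab (id) ON DELETE CASCADE
);

-- truncates parent_tab and the dependent rows in child_tab in one statement
TRUNCATE TABLE parent_tab CASCADE;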

9. Online RENAME/MOVE of Datafiles
Oracle Database 12c provides a simple way to rename or move data files online with the ALTER DATABASE MOVE DATAFILE command. Data files can now also be migrated online from ASM to non-ASM and from non-ASM to ASM.

Examples:

Rename datafile: 
  SQL> ALTER DATABASE MOVE DATAFILE '/u01/oradata/indx.dbf' TO '/u01/oradata/indx_01.dbf';
Move Datafile:   
  SQL> ALTER DATABASE MOVE DATAFILE '/u01/oradata/indx.dbf' TO '/u01/oradata/orcl/indx.dbf';
NON-ASM to ASM:  
  SQL> ALTER DATABASE MOVE DATAFILE '/u01/oradata/indx.dbf' TO '+DISKGROUP_DATA01';

10. Move table partition to different Tablespace online
From Oracle 12c, it has become very easy to move a table partition to a different tablespace; no complex steps are required.
Example:

  SQL> ALTER TABLE MY_LARGE_TABLE MOVE PARTITION MY_LARGE_TABLE_PART1 TO TABLESPACE USERS_NEW;


11. Temporary Undo
Before Oracle Database 12c, undo records for temporary tables were stored in the undo tablespace. With the temporary undo feature in Oracle Database 12c, the undo records of temporary tables can now be stored in the temporary tablespace instead of the undo tablespace. The main benefits of temporary undo are 1) lower undo tablespace usage and 2) less redo generation. To use this feature, the COMPATIBLE parameter must be set to 12.0.0 or higher and the TEMP_UNDO_ENABLED initialization parameter must be enabled, as shown below.
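Enabling it is a one-liner; it can be set instance-wide or per session:

-- requires COMPATIBLE >= 12.0.0
ALTER SYSTEM SET temp_undo_enabled = TRUE;

-- or, for a single session:
ALTER SESSION SET temp_undo_enabled = TRUE;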

12. DDL logging
By using the ENABLE_DDL_LOGGING initialization parameter in Oracle Database 12c, we can now log DDL actions into XML and log files, capturing when a DROP or CREATE command was executed and by whom, under the $ORACLE_BASE/diag/rdbms/DBNAME/log|ddl location. The parameter can be set at the database or session level.
Example:

  SQL> ALTER SYSTEM SET ENABLE_DDL_LOGGING=TRUE;


13. PGA_AGGREGATE_LIMIT parameter
Oracle Database 12c gives us a way to limit the PGA with the PGA_AGGREGATE_LIMIT parameter; before 12c there was no option to limit and control the PGA size. When total PGA usage exceeds the defined limit, Oracle automatically aborts the sessions holding the most untunable PGA memory. An example follows.
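For example (the 2G value below is only an illustration; by default Oracle derives the limit from PGA_AGGREGATE_TARGET):

ALTER SYSTEM SET pga_aggregate_limit = 2G;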

14. SQL statement in RMAN
From Oracle Database 12c, we can execute SQL and PL/SQL commands in RMAN without the SQL prefix.
Example:

  RMAN> SELECT username,machine FROM v$session;

15. Turning off redo for Data Pump import
The new TRANSFORM option DISABLE_ARCHIVE_LOGGING on the impdp command line causes Oracle Data Pump to disable redo logging when loading data into tables and when creating indexes. This feature provides great relief when importing large tables, reducing excessive redo generation and resulting in quicker imports. The attribute applies to tables and indexes.
Example:

  impdp directory=mydir dumpfile=mydmp.dmp logfile=mydmp.log TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y

Unix for the DBA

How to kill all similar processes with single command (in this case opmn)

ps -ef | grep opmn | grep -v grep | awk '{print $2}' | xargs -i kill -9 {}
Locating Files under a particular directory
find . -print |grep -i test.sql
 Using AWK in UNIX
To remove a specific column of output from a UNIX command – for example to determine the UNIX process Ids for all Oracle processes on server (second column)
ps -ef |grep -i oracle |awk '{ print $2 }'
Changing the standard prompt for Oracle Users
Edit the .profile for the oracle user
PS1="`hostname`*$ORACLE_SID:$PWD>"
 Display top 10 CPU consumers using the ps command
/usr/ucb/ps auxgw | head -11
 Show number of active Oracle dedicated connection users for a particular ORACLE_SID
ps -ef | grep $ORACLE_SID|grep -v grep|grep -v ora_|wc -l
 Display the number of CPU’s in Solaris
psrinfo -v | grep "Status of processor"|wc -l
Display the number of CPU’s in AIX
lsdev -C | grep Process|wc -l
Display RAM Memory size on Solaris
prtconf |grep -i mem
Display RAM memory size on AIX
First determine name of memory device
lsdev -C |grep mem
then, assuming the name of the memory device is 'mem0'
lsattr -El mem0
Swap space allocation and usage
Solaris: swap -s or swap -l
AIX: lsps -a
 Total number of semaphores held by all instances on server
ipcs -as | awk '{sum += $9} END {print sum}'
View allocated RAM memory segments
ipcs -pmb
Manually deallocate shared memory segments
ipcrm -m '<ID>'
 Show mount points for a disk in AIX
lspv -l hdisk13
 Display amount of occupied space (in KB) for a file or collection of files in a directory or sub-directory
du -ks * | sort -n| tail
Display total file space in a directory
du -ks .
 Cleanup any unwanted trace files more than seven days old
find . -name "*.trc" -mtime +7 -exec rm {} \;
 Locate Oracle files that contain certain strings
find . -print | xargs grep rollback
 Locate recently created UNIX files (in the past one day)
find . -mtime -1 -print
 Finding large files on the server (more than 100MB in size)
find . -size +102400 -print
Crontab :
(Crontab fields are: minute, hour, day of month, month, day of week.)
To submit a task every Tuesday (day 2) at 2:45PM
45 14 * * 2 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1
To submit a task to run every 15 minutes on weekdays (days 1-5)
0,15,30,45 * * * 1-5 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1
To submit a task to run every hour at 15 minutes past the hour on weekends (days 6 and 0)
15 * * * 0,6 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1

Oracle library cache tuning – A Real Case Study.


Scenario:
Users started complaining about very poor application performance and hung, incomplete transactions. A severity-one ticket was raised for immediate action.
Environment:
Oracle 10g (10.2.0.4) two-node RAC, with about 15 different Java and .NET legacy applications.
Highly transactional.

11 GB shared pool
16 GB DB cache

Availability: 24x7; an outage is unacceptable to the business.
I made some observations and thought it would be a nice idea to share them with you all.
Here are citations from a one-hour AWR report taken during the problem period.
             Snap Id  Snap Time           Sessions  Cursors/Session
Begin Snap:    39718  07-Jun-11 05:00:13       438             66.3
End Snap:      39719  07-Jun-11 06:00:19       651             64.4
Elapsed:               60.09 (mins)
DB Time:              490.75 (mins)

  • Very poor response time is visible from the difference between DB time and elapsed time. No wonder users were complaining.
Load Profile (per second / per transaction):
Redo size:        698,307.96  9,249.30
Logical reads:    187,227.45  2,479.88
Block changes:      4,078.58     54.02
Physical reads:        29.63      0.39
Physical writes:       79.30      1.05
User calls:         2,638.29     34.94
Parses:             1,200.23      5.81
Hard parses:           52.84      0.70
Sorts:                230.95      3.06
Logons:                 0.91      0.01
Executes:             601.64      7.97
Transactions:          75.50

  • Parse count is nearly half of user calls, so for every two user calls there is a parse request. Very bad: this is going to be a huge CPU consumer.
  • Hard parses per second are 52.84. If I said very bad for the parse count, I should say very, very bad for hard parses; they are extremely resource intensive. Unacceptable!
Instance Efficiency:
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.99 In-memory Sort %: 100.00
Library Hit %: 91.75 Soft Parse %: 87.95
Execute to Parse %: 27.13 Latch Hit %: 99.97
Parse CPU to Parse Elapsd %: 2.17 % Non-Parse CPU: 97.35

  • The low execute-to-parse ratio denotes that the CPU is significantly busy parsing. Soft Parse % shows that most of the parses are soft, so we should focus on soft parsing activity.
  • Parse CPU to Parse Elapsed % is quite low, which indicates a bottleneck related to parsing. It could be a side effect of huge parsing pressure, i.e. CPU cycles not being available.
  • Library Hit % is very poor; it should be at least above 95%.
Top 5 Timed Events (Event, Waits, Time(s), Avg Wait(ms), % Total Call Time, Wait Class):
latch: library cache          6,470  16,152  2,496  54.9  Concurrency
CPU time                             11,675         39.7
log file sync               193,236   1,109      6   3.8  Commit
log file parallel write     224,479     795      4   2.7  System I/O
db file sequential read      38,110     386     10   1.3  User I/O

  • High waits on latch: library cache. It is very clear that the slow performance is due to contention in the library cache.
  • I also looked at SQL ordered by Parse Calls and SQL ordered by Version Count. These pinpointed the issue to the library cache.
What's next?
The observations above indicate a serious problem with sharing SQL. It could be the result of bad application code without bind variables: each call goes for a parse, either soft or hard, and nothing is cached. The obvious advantage of caching cursors by session is reduced parse time, which can lead to faster overall execution times.
Caching, caching... Let's look at the Oracle caching parameters now. The values were:
cursor_sharing= similar
open_cursors= 3000
session_cached_cursors= 100
The two things I played around with were cursor_sharing and session_cached_cursors. I could ignore open_cursors for the time being, as it was already high for my environment.
I chose SESSION_CACHED_CURSORS to help in this scenario, since the application was continually closing and reopening cursors. You can monitor its effectiveness via two statistics in v$sesstat. The statistic "session cursor cache hits" reflects the number of times that a statement the session sent for parsing was found in the session cursor cache, meaning it didn't have to be reparsed and the session didn't have to search through the library cache for it. You can compare this to the statistic "parse count (total)"; subtract "session cursor cache hits" from "parse count (total)" to see the number of parses that actually occurred.
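As a hedged sketch (not from the original post), the same comparison can be done instance-wide via v$sysstat:

-- compare session cursor cache hits with total parse calls, instance-wide
SELECT MAX(DECODE(name, 'session cursor cache hits', value)) AS cache_hits,
       MAX(DECODE(name, 'parse count (total)', value))       AS total_parses
FROM   v$sysstat
WHERE  name IN ('session cursor cache hits', 'parse count (total)');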
v$sesstat also provides a statistic to monitor the number of cursors each session has in its session cursor cache.
-- session cached cursors, by session
select a.value, s.username, s.sid, s.serial#
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic# and s.sid = a.sid
and b.name = 'session cursor cache count';

-- session cached cursor usage
select a.value curr_cached, p.value max_cached, s.username, s.sid, s.serial#
from v$sesstat a, v$statname b, v$session s, v$parameter2 p
where a.statistic# = b.statistic# and s.sid = a.sid
and p.name = 'session_cached_cursors'
and b.name = 'session cursor cache count';

The query above showed CURR_CACHED = MAX_CACHED. If the session cursor cache count is maxed out, "session cursor cache hits" is low compared to total parses, and you suspect that the application is re-submitting the same queries for parsing repeatedly, then increasing SESSION_CACHED_CURSORS will help with latch contention and can give a boost to performance.
One fix was ready: increase SESSION_CACHED_CURSORS from 100 to 300 (I decided to start with 300 and monitor the session cursor caching).
We had a high hard parse rate as well, so the second parameter to play with was CURSOR_SHARING. For poorly coded applications Oracle highly recommends CURSOR_SHARING=FORCE. There are also a few bugs reported with the CURSOR_SHARING=SIMILAR option, and it is deprecated from 11g onwards. [ANNOUNCEMENT: Deprecating the cursor_sharing = 'SIMILAR' setting [MOS Note ID 1169017.1]]
Conclusion:
Changing the initialization parameters for session cached cursors and cursor sharing reduced library cache contention by avoiding parse calls and freeing CPU cycles.
I changed them to:
CURSOR_SHARING=FORCE
SESSION_CACHED_CURSORS=300

Certainly you can set CURSOR_SHARING=FORCE dynamically:
ALTER SYSTEM SET CURSOR_SHARING=FORCE SCOPE=BOTH;
But SESSION_CACHED_CURSORS is not dynamic, and you need a bounce for it to take effect. This was a problem for me because it is a zero-tolerance system where an outage must be planned a month ahead, and I could not wait that long to fix this performance issue. There is a back-door method to make SESSION_CACHED_CURSORS take effect, but that's a topic for another blog; you can see it in another post here.
I mentioned at the very start that cursor parameters can introduce surprises. Very true: after these two changes, all the library cache contention vanished.
Hard parses dropped from around 58 per second to 0 or 1 per second, the library hit ratio became 98%, the soft parse ratio became 99%, and response time became jet fast.

Oracle Performance Tuning – AWR Queries


AWR query to find load spikes

select
to_char(round(sub1.sample_time, 'HH24'), 'YYYY-MM-DD HH24:MI') as sample_hour,
round(avg(sub1.on_cpu),1) as cpu_avg,
round(avg(sub1.waiting),1) as wait_avg,
round(avg(sub1.active_sessions),1) as act_avg,
round( (variance(sub1.active_sessions)/avg(sub1.active_sessions)),1) as act_var_mean
from
( -- sub1: one row per ASH/AWR sample observation
select
sample_id,
sample_time,
sum(decode(session_state, 'ON CPU', 1, 0))  as on_cpu,
sum(decode(session_state, 'WAITING', 1, 0)) as waiting,
count(*) as active_sessions
from
dba_hist_active_sess_history
where
sample_time > sysdate - (&hours/24)
group by
sample_id,
sample_time
) sub1
group by
round(sub1.sample_time, 'HH24')
order by
round(sub1.sample_time, 'HH24')
;
SAMPLE_HOUR          CPU_AVG   WAIT_AVG    ACT_AVG ACT_VAR_MEAN
----------------- ---------- ---------- ---------- ------------
2008-07-26 04:00         1.5          0        1.5           .4
2008-07-26 05:00         1.5         .1        1.5           .5
2008-07-26 06:00         1.5         .1        1.6           .5
2008-07-26 07:00         1.6          0        1.6           .6
2008-07-26 08:00         5.5         .1        5.6         14.2
2008-07-26 09:00         1.5         .1        1.6           .6
2008-07-26 10:00           2         .3        2.3            1
2008-07-26 11:00         1.8         .2        1.9           .6
2008-07-26 12:00         1.9         .3        2.1           .6
2008-07-26 13:00         1.5         .1        1.6           .5
2008-07-26 14:00         1.5         .1        1.6           .4
2008-07-26 15:00         1.5         .2        1.7           .5
2008-07-26 16:00         1.5         .1        1.7           .5
2008-07-26 17:00         2.1         .1        2.3          1.2
2008-07-26 18:00        16.4         .4       16.8         52.2
2008-07-26 19:00         1.7         .1        1.8           .6
2008-07-26 20:00         3.5         .1        3.6          4.1
2008-07-26 21:00         2.3         .1        2.4            1
2008-07-26 22:00         2.7         .4        3.1          1.3
2008-07-26 23:00           2         .4        2.4           .6
2008-07-27 00:00         2.1         .2        2.4           .7
2008-07-27 01:00           3         .1        3.1          1.9
2008-07-27 02:00           2         .1        2.1           .9
2008-07-27 03:00         1.5          0        1.6           .5
2008-07-27 04:00         1.6         .1        1.6           .5
2008-07-27 05:00         1.3         .1        1.4           .4
2008-07-27 06:00           2          0        2.1          1.3
2008-07-27 07:00         1.6          0        1.7           .7
Here we found two spikes (around 08:00 and 18:00). Takeaways:
  • Short spikes might not generate user complaints and might be ignored by monitors, but they are in fact problems.
  • Very high AAS values are not always a problem.
  • Using variance to find skew minimizes the limits of aggregated statistics.
  • There is enough detail in AWR/ASH to find bottlenecks.
  • Perhaps you should go fishing sometimes! A month's worth of AWR makes this easy.

Aggregate statistics

select
sub.sql_id,
sub.seconds_since_date,
sub.execs_since_date,
sub.gets_since_date
from
( — sub to sort before rownum
select
sql_id,
round(sum(elapsed_time_delta)/1000000) as seconds_since_date,
sum(executions_delta) as execs_since_date,
sum(buffer_gets_delta) as gets_since_date
from
dba_hist_snapshot natural join dba_hist_sqlstat
where
begin_interval_time > to_date('&start_YYYYMMDD','YYYY-MM-DD')
group by
sql_id
order by
2 desc
) sub
where
rownum < 30
;

Enter value for start_yyyymmdd: 2010-03-01
old  16:     begin_interval_time > to_date('&&start_YYYYMMDD','YYYY-MM-DD')
new  16:     begin_interval_time > to_date('2010-03-01','YYYY-MM-DD')

SQL_ID        SECONDS_SINCE_DATE EXECS_SINCE_DATE GETS_SINCE_DATE
------------- ------------------ ---------------- ---------------
1wc4bx0qap8ph              30617            21563       284059357
6hudrj03d3w5g              23598         20551110       472673974
6tccf6u9tf891              18731            33666       457970700
2u874gr7qz2sk              15175            29014       370715705
fpth08dw8pyr6              14553             2565        36018228
1jt5kjbg7fs5p              11812            12451      2004271887
2f75gyksy99zn              10805            21529       567776447
ccp5w0adc6xx9               5222             6167       222949142
gn26ddjqk93wc               3568        114084711       248687700
b6usrg82hwsa3               2888                2       165621244
ctaajfgak033z               2391                4        66644336
7zwhhjv42qmfq               2197           592377        31495833
96v8tzx8zb99s               2152             6167       117875813
cxjsw79bprkm4               1526           396277       137413869
f2awk3951dcxv               1500             3462        35853709
fzmzt8mf2sw16               1421              311        44067742
01bqmm3gcy9yj               1329           299778        23504806

Non-uniform statistics

select
sub1.sql_id,
round(avg(sub1.seconds_per_hour)) as avg_seconds_per_hour,
round(variance(sub1.seconds_per_hour)/avg(sub1.seconds_per_hour)) as var_over_mean,
count(*) as ct
from
( — sub1
select
snap_id,
sql_id,
elapsed_time_delta/1000000 as seconds_per_hour
from
dba_hist_snapshot natural join dba_hist_sqlstat
where
-- look at recent history only
begin_interval_time > sysdate - &&days_back
and
executions_delta > 0
) sub1
group by
sub1.sql_id
having
-- only queries that consume 10 seconds per hour on the average
avg(sub1.seconds_per_hour) > 10
and
-- only queries that run 50% of the time, assumes hourly snapshots too
count(*) > ( &&days_back * 24) * 0.50
order by
3;

Example (sqlstat, high variance): obvious outlier with high variance, but not the most elapsed time
SQL_ID        AVG_SECONDS_PER_HOUR VAR_OVER_MEAN         CT
------------- -------------------- ------------- ----------
72wuyy9sxdmpx                   41             7        167
bgpag6tkxt34h                   29            12        167
crxfkabz8atgn                   14            14        167
66uc7dydx131a                   16            16        167
334d2t692js2z                   36            19        167
6y7mxycfs7afs                   23            20        167
36vs0kyfmr0qa                   17            21        129
fp10bju9zh6qn                   45            22        167
fas56fsc7j9u5                   10            22        167
61dyrn8rjqva2                   17            22        129
4f8wgv0d5hgua                   31            23        167
7wvy5xpy0c6k5                   15            23        151
8v59g9tn46y3p                   17            24        132
9pw7ucw4n113r                   59            27        167
41n1dhb0r3dhv                   32            32        120
8mqxjr571bath                   35            38        117
8jp67hs2296v3                   46           154        128
afdjq1cf8dpwx                   34           184        150
6n3h2sgxpr78g                  454           198        145
g3176qdxahvv9                   42           383         92
b72dmps6rp8z8                  209          1116        167
6qv7az2048hk4                 3409         50219        167

Behavior of a specific SQL over time

select
snap_id,
to_char(begin_interval_time,'YYYY-MM-DD HH24:MI') as begin_hour,
executions_delta as execs_per_hour,
buffer_gets_delta as gets_per_hour,
round(buffer_gets_delta/executions_delta) as gets_per_exec,
round(elapsed_time_delta/1000000) as seconds_per_hour
from
dba_hist_snapshot natural join dba_hist_sqlstat
where
begin_interval_time between to_date('&start_hour', 'YYYY-MM-DD HH24:MI')
and to_date('&end_hour',   'YYYY-MM-DD HH24:MI')
and
sql_id = '&sql_id'
and
executions_delta > 0
order by
snap_id
;

Example (sqlstat, one sql_id): sustained high execution rates, occasional wait pile-ups
SNAP_ID BEGIN_HOUR       EXECS_PER_HOUR GETS_PER_HOUR GETS_PER_EXEC SECONDS_PER_HOUR
------- ---------------- -------------- ------------- ------------- ----------------
   1978 2008-04-07 20:00         140449        540639             4               11
   1979 2008-04-07 21:00         124142        477807             4               17
   1980 2008-04-07 22:00          90568        347286             4               20
   1981 2008-04-07 23:00          83287        323100             4               30
   1982 2008-04-08 00:00          57094        221166             4               49
   1983 2008-04-08 01:00          43925        170594             4                7
   1984 2008-04-08 02:00          38596        150277             4                4
   1985 2008-04-08 03:00          35710        139576             4                4
   1986 2008-04-08 04:00          29700        115429             4                4
   1987 2008-04-08 05:00          43666        170520             4                5
   1988 2008-04-08 06:00          50755        197116             4                6
   1989 2008-04-08 07:00          80371        310652             4                9
   1990 2008-04-08 08:00         111924        431470             4               11
   1991 2008-04-08 09:00         127154        489649             4               27
   1992 2008-04-08 10:00         139270        536962             4               25
   1993 2008-04-08 11:00         128697        496013             4               18
   1994 2008-04-08 12:00         158739        613554             4            45287
   1995 2008-04-08 13:00         152515        587605             4               40
   1996 2008-04-08 14:00         144389        555770             4            37589
   1997 2008-04-08 15:00         149278        575827             4               26
   1998 2008-04-08 16:00         140632        542580             4               12
   1999 2008-04-08 17:00         120113        462665             4               11
   2000 2008-04-08 18:00         121394        468684             4               12
   2001 2008-04-08 19:00         127948        493084             4               13


System statistic history

select
stat_start.snap_id,
to_char(snap.begin_interval_time,'YYYY-MM-DD HH24:MI') as begin_hour,
stat_end.value – stat_start.value as delta_value
from
dba_hist_sysstat stat_start,
dba_hist_sysstat stat_end,
dba_hist_snapshot snap
where
-- assumes the snap_id at the end of the interval
-- is one greater than the snap_id at the start of the interval
stat_end.snap_id = stat_start.snap_id + 1
and
-- otherwise, we join stat_end and stat_start on exact matches of the remaining PK columns
(     stat_end.dbid = stat_start.dbid
and stat_end.instance_number = stat_start.instance_number
and stat_end.stat_name = stat_start.stat_name
)
and
-- filter for the statistic we are interested in (might want to add date range filter too)
stat_end.stat_name = '&stat_name'
and
-- join stat_start to snap on FK
(     stat_start.snap_id = snap.snap_id
and stat_start.dbid = snap.dbid
and stat_start.instance_number = snap.instance_number
)
order by
stat_start.snap_id
;

SQL> @sys-stat-hist.sql
Enter value for stat_name: DB time
old  27:    stat_end.stat_name = '&stat_name'
new  27:    stat_end.stat_name = 'DB time'

   SNAP_ID BEGIN_HOUR       DELTA_VALUE
---------- ---------------- -----------
4159 2010-07-07 17:00     2089225
4160 2010-07-07 18:00     1505607
4161 2010-07-07 19:00    31188489
4162 2010-07-07 20:00    24930866
4163 2010-07-07 21:00     1828924
4164 2010-07-07 22:00     1258286
4165 2010-07-07 23:00      396076
4166 2010-07-08 00:00      688963
4167 2010-07-08 01:00      354481
4168 2010-07-08 02:00      411555
4169 2010-07-08 03:00      325875
4170 2010-07-08 04:00      328739
4171 2010-07-08 05:00      447432
4172 2010-07-08 06:00      838585
4173 2010-07-08 07:00     1138196
4174 2010-07-08 08:00     1852437
4175 2010-07-08 09:00    68736587
4176 2010-07-08 10:00      739175
4177 2010-07-08 11:00      647451
4178 2010-07-08 12:00      702787
4179 2010-07-08 13:00     3722000
4180 2010-07-08 14:00      712481
4181 2010-07-08 15:00      475688
4182 2010-07-08 16:00      416013
