
    Unix for the DBA

How to kill all similar processes with a single command (in this case opmn)

ps -ef | grep opmn | grep -v grep | awk '{print $2}' | xargs -i kill -9 {}
    Locating Files under a particular directory
    find . -print |grep -i test.sql
     Using AWK in UNIX
To remove a specific column of output from a UNIX command – for example, to determine the UNIX process IDs for all Oracle processes on the server (second column)
    ps -ef |grep -i oracle |awk '{ print $2 }'
    Changing the standard prompt for Oracle Users
    Edit the .profile for the oracle user
    PS1="`hostname`*$ORACLE_SID:$PWD>"
     Display top 10 CPU consumers using the ps command
    /usr/ucb/ps auxgw | head -11
     Show number of active Oracle dedicated connection users for a particular ORACLE_SID
    ps -ef | grep $ORACLE_SID|grep -v grep|grep -v ora_|wc -l
Display the number of CPUs in Solaris
    psrinfo -v | grep "Status of processor"|wc -l
Display the number of CPUs in AIX
    lsdev -C | grep Process|wc -l
    Display RAM Memory size on Solaris
    prtconf |grep -i mem
    Display RAM memory size on AIX
    First determine name of memory device
    lsdev -C |grep mem
then, assuming the name of the memory device is 'mem0':
    lsattr -El mem0
    Swap space allocation and usage
Solaris: swap -s or swap -l
AIX: lsps -a
     Total number of semaphores held by all instances on server
    ipcs -as | awk '{sum += $9} END {print sum}'
    View allocated RAM memory segments
    ipcs -pmb
Manually deallocate shared memory segments
    ipcrm -m '<ID>'
     Show mount points for a disk in AIX
    lspv -l hdisk13
     Display amount of occupied space (in KB) for a file or collection of files in a directory or sub-directory
    du -ks * | sort -n| tail
    Display total file space in a directory
    du -ks .
     Cleanup any unwanted trace files more than seven days old
find . -name "*.trc" -mtime +7 -exec rm {} \;
     Locate Oracle files that contain certain strings
    find . -print | xargs grep rollback
     Locate recently created UNIX files (in the past one day)
    find . -mtime -1 -print
Finding large files on the server (more than 100MB in size)
find . -size +204800 -print   (size is in 512-byte blocks: 204800 blocks = 100MB)
    Crontab :
To submit a task every Tuesday (day 2) at 2:45PM
45 14 * * 2 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1
To submit a task to run every 15 minutes on weekdays (days 1-5)
0,15,30,45 * * * 1-5 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1
To submit a task to run every hour at 15 minutes past the hour on weekends (days 6 and 0)
15 * * * 0,6 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1

    Oracle library cache tuning – A Real Case Study.


    Scenario :
Users started complaining about very poor application performance and hung, incomplete transactions. A severity-one ticket was raised for immediate action.
    Environment :
Oracle 10g (10.2.0.4) two-node RAC, serving about 15 different Java and .NET legacy applications.
Highly transactional.

11 GB shared pool
16 GB DB cache

Availability: 24*7, outage unacceptable to the business.
I made some observations and thought it would be a nice idea to share them with you all.
Here are the citations from the AWR report (1 hr.) during the problem period.
    Snap Id Snap Time Sessions Cursors/Session
    Begin Snap: 39718 07-Jun-11 05:00:13 438 66.3
    End Snap: 39719 07-Jun-11 06:00:19 651 64.4
    Elapsed:   60.09 (mins)
    DB Time:   490.75 (mins)

• The very poor response time is visible from the difference between DB time and elapsed time; no wonder users were complaining.
Load Profile (per second / per transaction):
    Redo size: 698,307.96 9,249.30
    Logical reads: 187,227.45 2,479.88
    Block changes: 4,078.58 54.02
    Physical reads: 29.63 0.39
    Physical writes: 79.30 1.05
    User calls: 2,638.29 34.94
    Parses: 1200.23 5.81
    Hard parses: 52.84 0.70
    Sorts: 230.95 3.06
    Logons: 0.91 0.01
    Executes: 601.64 7.97
    Transactions: 75.50

• Parse count is about half of user calls, so for every two user calls there is a parse request. That is bad: parsing is going to be a huge CPU consumer.
• Hard parses per second is 52.84. If I called the parse count very bad, the hard-parse rate is far worse, since hard parsing is extremely resource intensive. Unacceptable!
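As a quick live check (outside the AWR report itself), the same cumulative parse figures can be pulled from v$sysstat; the statistic names below are standard:

select name, value
from   v$sysstat
where  name in ('parse count (total)', 'parse count (hard)', 'execute count', 'user calls');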
    Instance Efficiency :
    Buffer Nowait %: 100.00 Redo NoWait %: 100.00
    Buffer Hit %: 99.99 In-memory Sort %: 100.00
    Library Hit %: 91.75 Soft Parse %: 87.95
    Execute to Parse %: 27.13 Latch Hit %: 99.97
    Parse CPU to Parse Elapsd %: 2.17 % Non-Parse CPU: 97.35

• The low execute-to-parse ratio denotes that the CPU is significantly busy parsing, and Soft Parse % shows that most of the parses are soft. It means we should focus on soft parsing activity.
• Parse CPU to Parse Elapsed % is quite low, which indicates a bottleneck related to parsing. It could be a side effect of the huge parsing pressure, such as CPU cycles not being available.
• Library Hit % is very poor; it should be at least above 95%.
Top 5 Timed Events (Event / Waits / Time(s) / Avg Wait(ms) / % Total Call Time / Wait Class):
    latch: library cache 6,470 16,152 2,496 54.9 Concurrency
    CPU time   11,675   39.7
    log file sync 193,236 1,109 6 3.8 Commit
    log file parallel write 224,479 795 4 2.7 System I/O
    db file sequential read 38,110 386 10 1.3 User I/O

• High waits on latch: library cache. It is very clear that the slow performance is due to contention in the library cache.
• I also looked at SQL ordered by Parse Calls and SQL ordered by Version Count. Those pinpointed the issue to the library cache.
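For a live confirmation of the latch pressure outside AWR, a quick look at v$latch works too (illustrative only; the report already told the story):

select name, gets, misses, sleeps
from   v$latch
where  name = 'library cache';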
What's next?
The observations above indicate a serious problem with sharing SQL. It could be the result of bad application code without bind variables. Every call goes for a parse, either soft or hard, with no caching. The obvious advantage of caching cursors by session is reduced parse time, which can lead to faster overall execution times.
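One way to spot statements that differ only in literals (the kind CURSOR_SHARING=FORCE would collapse) is to group v$sql by FORCE_MATCHING_SIGNATURE; this is a sketch and the threshold of 20 is arbitrary:

-- statements that would share a cursor if literals were replaced by binds
select force_matching_signature,
       count(*)    as similar_statements,
       min(sql_id) as sample_sql_id
from   v$sql
where  force_matching_signature <> 0
group by force_matching_signature
having count(*) > 20
order by similar_statements desc;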
Caching, caching... Let's look at the Oracle caching parameters now. The values were:
    cursor_sharing= similar
    open_cursors= 3000
    session_cached_cursors= 100
Two parameters I played with were cursor_sharing and session_cached_cursors. I could ignore open_cursors for the time being, as it was already high for my environment.
I chose to use SESSION_CACHED_CURSORS to help out in this scenario because the application is continually closing and reopening cursors. You can monitor its effectiveness via two more statistics in v$sesstat. The statistic "session cursor cache hits" reflects the number of times that a statement the session sent for parsing was found in the session cursor cache, meaning it didn't have to be reparsed and the session didn't have to search through the library cache for it. You can compare this to the statistic "parse count (total)": subtract "session cursor cache hits" from "parse count (total)" to see the number of parses that actually occurred.
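As a sketch, that per-session comparison can be written like this (the column aliases are mine):

-- session cursor cache hits vs. total parses, per session
select s.sid,
       max(decode(n.name, 'session cursor cache hits', st.value)) as cache_hits,
       max(decode(n.name, 'parse count (total)', st.value))       as total_parses
from   v$sesstat st, v$statname n, v$session s
where  st.statistic# = n.statistic#
and    st.sid = s.sid
and    n.name in ('session cursor cache hits', 'parse count (total)')
group by s.sid
order by total_parses desc;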
    v$sesstat also provides a statistic to monitor the number of cursors each session has in its session cursor cache.
-- session cached cursors, by session
select a.value, s.username, s.sid, s.serial#
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic# and s.sid = a.sid
and b.name = 'session cursor cache count';

-- session cached cursor usage
select a.value curr_cached, p.value max_cached, s.username, s.sid, s.serial#
from v$sesstat a, v$statname b, v$session s, v$parameter2 p
where a.statistic# = b.statistic# and s.sid = a.sid
and p.name = 'session_cached_cursors'
and b.name = 'session cursor cache count';

The query above showed CURR_CACHED = MAX_CACHED. If the session cursor cache count is maxed out, "session cursor cache hits" is low compared to total parses, and you suspect that the application re-submits the same queries for parsing repeatedly, then increasing SESSION_CACHED_CURSORS will help with latch contention and can give a boost to performance.
One fix was ready: increase SESSION_CACHED_CURSORS from 100 to 300 (I decided to set 300 first and monitor the session cursor caching).
We had a high hard parse rate as well, so the second parameter to play with was CURSOR_SHARING. For poorly coded applications Oracle strongly recommends CURSOR_SHARING=FORCE. There are also a few bugs reported with the CURSOR_SHARING=SIMILAR option, and it is deprecated from 11g onward [ANNOUNCEMENT: Deprecating the cursor_sharing = 'SIMILAR' setting, MOS Note ID 1169017.1].
Conclusion :
Changing the initialization parameters for session cursor caching and cursor sharing reduced library cache contention by avoiding parse calls and freeing CPU cycles.
I changed them to:
CURSOR_SHARING=FORCE
SESSION_CACHED_CURSORS=300

Certainly you can set CURSOR_SHARING=FORCE dynamically:
ALTER SYSTEM SET CURSOR_SHARING=FORCE SCOPE=BOTH;
But SESSION_CACHED_CURSORS is not dynamic at the system level, and you need a bounce for it to take effect. I was in trouble here because this is a zero-tolerance system, and an outage has to be planned a month in advance; I could not wait that long to tune this performance issue. There is another back-door method to make SESSION_CACHED_CURSORS take effect, but that's a topic for another blog – you can see it in another post here ..
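(One commonly used back door, sketched here purely as an illustration and not necessarily the method referred to above, is to raise the value per session from an AFTER LOGON trigger, since the parameter is session-modifiable:

-- illustrative sketch only: raises the session cursor cache for every new session
create or replace trigger raise_session_cursor_cache
after logon on database
begin
  execute immediate 'alter session set session_cached_cursors = 300';
end;
/
)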
I mentioned at the very start that cursor parameters can introduce surprises. Yes, very true. After these two changes, all the library cache contention vanished.
Hard parses dropped from 58 per second to 0-1 per second, the library hit ratio became 98%, the soft parse ratio became 99%, and response time became jet fast.

    Oracle Performance Tuning – AWR Queries


    AWR query to find load spikes

select
to_char(round(sub1.sample_time, 'HH24'), 'YYYY-MM-DD HH24:MI') as sample_hour,
round(avg(sub1.on_cpu),1) as cpu_avg,
round(avg(sub1.waiting),1) as wait_avg,
round(avg(sub1.active_sessions),1) as act_avg,
round( (variance(sub1.active_sessions)/avg(sub1.active_sessions)),1) as act_var_mean
from
( -- sub1: one row per ASH/AWR sample observation
select
sample_id,
sample_time,
sum(decode(session_state, 'ON CPU', 1, 0))  as on_cpu,
sum(decode(session_state, 'WAITING', 1, 0)) as waiting,
count(*) as active_sessions
from
dba_hist_active_sess_history
where
sample_time > sysdate - (&hours/24)
group by
sample_id,
sample_time
) sub1
group by
round(sub1.sample_time, 'HH24')
order by
round(sub1.sample_time, 'HH24')
;
    SAMPLE_HOUR CPU_AVG   WAIT_AVG ACT_AVG ACT_VAR_MEAN
----------------- ---------- ---------- ---------- ------------
    2008-07-26 04:00         1.5          0        1.5           .4
    2008-07-26 05:00         1.5         .1        1.5           .5
    2008-07-26 06:00         1.5         .1        1.6           .5
    2008-07-26 07:00         1.6          0        1.6           .6
    2008-07-26 08:00         5.5         .1        5.6         14.2
    2008-07-26 09:00         1.5         .1        1.6           .6
    2008-07-26 10:00           2         .3        2.3            1
    2008-07-26 11:00         1.8         .2        1.9           .6
    2008-07-26 12:00         1.9         .3        2.1           .6
    2008-07-26 13:00         1.5         .1        1.6           .5
    2008-07-26 14:00         1.5         .1        1.6           .4
    2008-07-26 15:00         1.5         .2        1.7           .5
    2008-07-26 16:00         1.5         .1        1.7           .5
    2008-07-26 17:00         2.1         .1        2.3          1.2
    2008-07-26 18:00        16.4         .4       16.8         52.2
    2008-07-26 19:00         1.7         .1        1.8           .6
    2008-07-26 20:00         3.5         .1        3.6          4.1
    2008-07-26 21:00         2.3         .1        2.4            1
    2008-07-26 22:00         2.7         .4        3.1          1.3
    2008-07-26 23:00           2         .4        2.4           .6
    2008-07-27 00:00         2.1         .2        2.4           .7
    2008-07-27 01:00           3         .1        3.1          1.9
    2008-07-27 02:00           2         .1        2.1           .9
    2008-07-27 03:00         1.5          0        1.6           .5
    2008-07-27 04:00         1.6         .1        1.6           .5
    2008-07-27 05:00         1.3         .1        1.4           .4
    2008-07-27 06:00           2          0        2.1          1.3
    2008-07-27 07:00         1.6          0        1.7           .7
Here we found two spikes (at 08:00 and 18:00). Takeaways:
• Short spikes might not generate user complaints and might be ignored by monitors, but they are in fact problems.
• Very high AAS values are always a problem.
• Using variance to find skew minimizes the limitations of aggregated statistics.
• There is enough detail in AWR/ASH to find bottlenecks.
• Perhaps you should go fishing sometimes; a month's worth of AWR makes this easy.

    Aggregate statistics

    select
    sub.sql_id,
    sub.seconds_since_date,
    sub.execs_since_date,
    sub.gets_since_date
    from
( -- sub to sort before rownum
    select
    sql_id,
    round(sum(elapsed_time_delta)/1000000) as seconds_since_date,
    sum(executions_delta) as execs_since_date,
    sum(buffer_gets_delta) as gets_since_date
    from
    dba_hist_snapshot natural join dba_hist_sqlstat
    where
begin_interval_time > to_date('&start_YYYYMMDD','YYYY-MM-DD')
    group by
    sql_id
    order by
    2 desc
    ) sub
    where
    rownum < 30
    ;

    Enter value for start_yyyymmdd: 2010-03-01
old  16:     begin_interval_time > to_date('&&start_YYYYMMDD','YYYY-MM-DD')
new  16:     begin_interval_time > to_date('2010-03-01','YYYY-MM-DD')

    SQL_ID SECONDS_SINCE_DATE   EXECS_SINCE_DATE GETS_SINCE_DATE
------------- ------------------ ---------------- ---------------
    1wc4bx0qap8ph              30617            21563       284059357
    6hudrj03d3w5g              23598         20551110       472673974
    6tccf6u9tf891              18731            33666       457970700
    2u874gr7qz2sk              15175            29014       370715705
    fpth08dw8pyr6              14553             2565        36018228
    1jt5kjbg7fs5p              11812            12451      2004271887
    2f75gyksy99zn              10805            21529       567776447
    ccp5w0adc6xx9               5222             6167       222949142
    gn26ddjqk93wc               3568        114084711       248687700
    b6usrg82hwsa3               2888                2       165621244
    ctaajfgak033z               2391                4        66644336
    7zwhhjv42qmfq               2197           592377        31495833
    96v8tzx8zb99s               2152             6167       117875813
    cxjsw79bprkm4               1526           396277       137413869
    f2awk3951dcxv               1500             3462        35853709
    fzmzt8mf2sw16               1421              311        44067742
    01bqmm3gcy9yj               1329           299778        23504806

    Non-uniform statistics

    select
    sub1.sql_id,
    round(avg(sub1.seconds_per_hour)) as avg_seconds_per_hour,
    round(variance(sub1.seconds_per_hour)/avg(sub1.seconds_per_hour)) as var_over_mean,
    count(*) as ct
    from
( -- sub1
    select
    snap_id,
    sql_id,
    elapsed_time_delta/1000000 as seconds_per_hour
    from
    dba_hist_snapshot natural join dba_hist_sqlstat
    where
-- look at recent history only
begin_interval_time > sysdate - &&days_back
    and
    executions_delta > 0
    ) sub1
    group by
    sub1.sql_id
    having
-- only queries that consume 10 seconds per hour on the average
    avg(sub1.seconds_per_hour) > 10
    and
-- only queries that run 50% of the time, assumes hourly snapshots too
    count(*) > ( &&days_back * 24) * 0.50
    order by
    3;

    Example (sqlstat, high variance): obvious outlier with high variance, but not the most elapsed time
    SQL_ID AVG_SECONDS_PER_HOUR      VAR_OVER_MEAN CT
------------- -------------------- ------------- ----------
    72wuyy9sxdmpx                   41             7        167
    bgpag6tkxt34h                   29            12        167
    crxfkabz8atgn                   14            14        167
    66uc7dydx131a                   16            16        167
    334d2t692js2z                   36            19        167
    6y7mxycfs7afs                   23            20        167
    36vs0kyfmr0qa                   17            21        129
    fp10bju9zh6qn                   45            22        167
    fas56fsc7j9u5                   10            22        167
    61dyrn8rjqva2                   17            22        129
    4f8wgv0d5hgua                   31            23        167
    7wvy5xpy0c6k5                   15            23        151
    8v59g9tn46y3p                   17            24        132
    9pw7ucw4n113r                   59            27        167
    41n1dhb0r3dhv                   32            32        120
    8mqxjr571bath                   35            38        117
    8jp67hs2296v3                   46           154        128
    afdjq1cf8dpwx                   34           184        150
    6n3h2sgxpr78g                  454           198        145
    g3176qdxahvv9                   42           383         92
    b72dmps6rp8z8                  209          1116        167
    6qv7az2048hk4                 3409         50219        167

    Behavior of a specific SQL over time

    select
    snap_id,
to_char(begin_interval_time,'YYYY-MM-DD HH24:MI') as begin_hour,
    executions_delta as execs_per_hour,
    buffer_gets_delta as gets_per_hour,
    round(buffer_gets_delta/executions_delta) as gets_per_exec,
    round(elapsed_time_delta/1000000) as seconds_per_hour
    from
    dba_hist_snapshot natural join dba_hist_sqlstat
    where
begin_interval_time between to_date('&start_hour', 'YYYY-MM-DD HH24:MI')
and to_date('&end_hour',   'YYYY-MM-DD HH24:MI')
and
sql_id = '&sql_id'
    and
    executions_delta > 0
    order by
    snap_id
    ;

    Example (sqlstat, one sql_id): sustained high execution rates, occasional wait pile-ups
SNAP_ID BEGIN_HOUR       EXECS_PER_HOUR GETS_PER_HOUR GETS_PER_EXEC SECONDS_PER_HOUR
------- ---------------- -------------- ------------- ------------- ----------------
   1978 2008-04-07 20:00         140449        540639             4               11
   1979 2008-04-07 21:00         124142        477807             4               17
   1980 2008-04-07 22:00          90568        347286             4               20
   1981 2008-04-07 23:00          83287        323100             4               30
   1982 2008-04-08 00:00          57094        221166             4               49
   1983 2008-04-08 01:00          43925        170594             4                7
   1984 2008-04-08 02:00          38596        150277             4                4
   1985 2008-04-08 03:00          35710        139576             4                4
   1986 2008-04-08 04:00          29700        115429             4                4
   1987 2008-04-08 05:00          43666        170520             4                5
   1988 2008-04-08 06:00          50755        197116             4                6
   1989 2008-04-08 07:00          80371        310652             4                9
   1990 2008-04-08 08:00         111924        431470             4               11
   1991 2008-04-08 09:00         127154        489649             4               27
   1992 2008-04-08 10:00         139270        536962             4               25
   1993 2008-04-08 11:00         128697        496013             4               18
   1994 2008-04-08 12:00         158739        613554             4            45287
   1995 2008-04-08 13:00         152515        587605             4               40
   1996 2008-04-08 14:00         144389        555770             4            37589
   1997 2008-04-08 15:00         149278        575827             4               26
   1998 2008-04-08 16:00         140632        542580             4               12
   1999 2008-04-08 17:00         120113        462665             4               11
   2000 2008-04-08 18:00         121394        468684             4               12
   2001 2008-04-08 19:00         127948        493084             4               13


    System statistic history

select
stat_start.snap_id,
to_char(snap.begin_interval_time,'YYYY-MM-DD HH24:MI') as begin_hour,
stat_end.value - stat_start.value as delta_value
from
dba_hist_sysstat stat_start,
dba_hist_sysstat stat_end,
dba_hist_snapshot snap
where
-- assumes the snap_id at the end of the interval
-- is one greater than the snap_id at the start of the interval
stat_end.snap_id = stat_start.snap_id + 1
and
-- otherwise, we join stat_end and stat_start on exact matches of the remaining PK columns
(     stat_end.dbid = stat_start.dbid
and stat_end.instance_number = stat_start.instance_number
and stat_end.stat_name = stat_start.stat_name
)
and
-- filter for the statistic we are interested in (might want to add a date range filter too)
stat_end.stat_name = '&stat_name'
and
-- join stat_start to snap on FK
(     stat_start.snap_id = snap.snap_id
and stat_start.dbid = snap.dbid
and stat_start.instance_number = snap.instance_number
)
order by
stat_start.snap_id
;

    SQL> @sys-stat-hist.sql
    Enter value for stat_name: DB time
old  27:    stat_end.stat_name = '&stat_name'
new  27:    stat_end.stat_name = 'DB time'

    SNAP_ID BEGIN_HOUR       DELTA_VALUE
---------- ---------------- -----------
    4159 2010-07-07 17:00     2089225
    4160 2010-07-07 18:00     1505607
    4161 2010-07-07 19:00    31188489
    4162 2010-07-07 20:00    24930866
    4163 2010-07-07 21:00     1828924
    4164 2010-07-07 22:00     1258286
    4165 2010-07-07 23:00      396076
    4166 2010-07-08 00:00      688963
    4167 2010-07-08 01:00      354481
    4168 2010-07-08 02:00      411555
    4169 2010-07-08 03:00      325875
    4170 2010-07-08 04:00      328739
    4171 2010-07-08 05:00      447432
    4172 2010-07-08 06:00      838585
    4173 2010-07-08 07:00     1138196
    4174 2010-07-08 08:00     1852437
    4175 2010-07-08 09:00    68736587
    4176 2010-07-08 10:00      739175
    4177 2010-07-08 11:00      647451
    4178 2010-07-08 12:00      702787
    4179 2010-07-08 13:00     3722000
    4180 2010-07-08 14:00      712481
    4181 2010-07-08 15:00      475688
    4182 2010-07-08 16:00      416013

    Oracle DBA Silver Bullets – Performance Queries

Just thought I would share a few queries from my repository. They may be useful for all new DBAs. Of course these are quite old ones and you can do most of this through Enterprise Manager, but we DBAs always like the command line :-)

    Show sessions that are blocking each other :

    select 'SID ' || l1.sid ||' is blocking  ' || l2.sid blocking
    from v$lock l1, v$lock l2
    where l1.block =1 and l2.request > 0
    and l1.id1=l2.id1
    and l1.id2=l2.id2
    /

    Show locked objects :

    set lines 100 pages 999
    col username  format a20
    col sess_id  format a10
    col object format a25
    col mode_held format a10
    select oracle_username || ' (' || s.osuser || ')' username
    , s.sid || ',' || s.serial# sess_id
    , owner || '.' || object_name object
    , object_type
    , decode( l.block
     , 0, 'Not Blocking'
     , 1, 'Blocking'
     , 2, 'Global') status
    , decode(v.locked_mode
     , 0, 'None'
     , 1, 'Null'
     , 2, 'Row-S (SS)'
     , 3, 'Row-X (SX)'
     , 4, 'Share'
     , 5, 'S/Row-X (SSX)'
     , 6, 'Exclusive', TO_CHAR(lmode)) mode_held
    from v$locked_object v
    , dba_objects d
    , v$lock l
    , v$session s
    where  v.object_id = d.object_id
    and  v.object_id = l.id1
    and  v.session_id = s.sid
    order by oracle_username
    , session_id
    /

    Show which row is locked :

    select do.object_name
    , row_wait_obj#
    , row_wait_file#
    , row_wait_block#
    , row_wait_row#
    , dbms_rowid.rowid_create (1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#,
        ROW_WAIT_BLOCK#, ROW_WAIT_ROW#)
    from v$session s
    , dba_objects do
    where sid=&sid
    and  s.ROW_WAIT_OBJ# = do.OBJECT_ID
    /
    
    Then select the row with that rowid...
    
    select * from <table> where rowid=<rowid>;

    List locks :

    column lock_type format a12
    column mode_held format a10
    column mode_requested format a10
    column blocking_others format a20
    column username format a10
    SELECT session_id
    , lock_type
    , mode_held
    , mode_requested
    , blocking_others
    , lock_id1
    FROM dba_lock l
    WHERE  lock_type NOT IN ('Media Recovery', 'Redo Thread')
    /

    Show all ddl locks in the system :

    select ses.username
    , ddl.session_id
    , ses.serial#
    , owner || '.' || ddl.name object
    , ddl.type
    , ddl.mode_held
    from dba_ddl_locks ddl
    , v$session ses
    where owner like '%userid%'
    and ddl.session_id = ses.sid
    /

    Generate kill statement for ddl locking sessions :

    select    'alter system kill session ''' || ddl.session_id || ',' || ses.serial# || ''' immediate;'
    from    dba_ddl_locks ddl
    ,    v$session ses
    where    owner like '%userid%'
    and    ddl.session_id = ses.sid
    /

Show currently executing sql :

    select sql_text
    from v$sqlarea
    where users_executing > 0
    /

    Session statistics :

    select    sn.name
    ,    st.value
    from    v$sesstat st
    ,    v$statname sn
    where    st.STATISTIC# = sn.STATISTIC#
    and    st.VALUE > 0
    and    st.SID = &SID
    order     by value desc
    /

    Resource intensive sql :

    change 8192 to match block size
    
    select sql_text
    ,      executions
    ,      to_char((((disk_reads+buffer_gets)/executions) * 8192)/1048576, '9,999,999,990.00')
     as total_gets_per_exec_mb
    ,      to_char((( disk_reads             /executions) * 8192)/1048576, '9,999,999,990.00')
     as disk_reads_per_exec_mb
    ,      to_char((( buffer_gets            /executions) * 8192)/1048576, '9,999,999,990.00')
     as buffer_gets_per_exec_mb
    ,      parsing_user_id
    from   v$sqlarea
    where  executions > 10
order by 3 desc
    /
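If you are unsure of the block size, it can be read from the instance first (db_block_size is the standard parameter):

select value from v$parameter where name = 'db_block_size';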

    File io stats :

    Requires timed_statistics=true
    
    set lines 80 pages 999
    col fname heading "File Name" format a60
    col sizemb heading "Size(Mb)" format 99,999
    col phyrds heading "Reads" format 999,999,999
    col readtim heading "Time" format 99.999
    col phywrts heading "Writes" format 9,999,999
    col writetim heading "Time" format 99.999
    select     lower(name) fname
    ,          (bytes / 1048576) sizemb
    ,          phyrds
    ,    readtim
    ,          phywrts
    ,    writetim
    from       v$datafile df
    ,          v$filestat fs
    where      df.file# = fs.file#
    order      by 1
    /

    In session tracing :

    To switch it on:
    
    exec dbms_system.set_sql_trace_in_session (<sid>, <serial#>, true);
    
    To switch it off:
    
    exec dbms_system.set_sql_trace_in_session (<sid>, <serial#>, false);
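From 10g onward, DBMS_MONITOR is the documented alternative and can also capture waits and binds (sketch; substitute your own sid/serial#):

exec dbms_monitor.session_trace_enable(session_id => <sid>, serial_num => <serial#>, waits => true, binds => false);
exec dbms_monitor.session_trace_disable(session_id => <sid>, serial_num => <serial#>);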

    switch on event 10046 :

    To switch it on:
    
    alter session set events '10046 trace name context forever, level 8'; 
    
    To switch it off:
    
    alter session set events '10046 trace name context off';
    Note. use tkprof to interpret the results.

    Rows per block :

    select    avg(row_count) avg
    , max(row_count) max
    , min(row_count) min
    from      (
     select  count(*) row_count
     from    &table_name
     group   by substr(rowid, 1, 15)
     )
    /

    Show the buffer cache advisory :

    Note. The current setting is halfway down and has a read factor of one.
    
    set lines 100 pages 999
    col est_mb format 99,999
    col estd_physical_reads format 999,999,999,999,999
    select    size_for_estimate est_mb
    ,    estd_physical_read_factor
    ,    estd_physical_reads
    from    v$db_cache_advice
    where    name = 'DEFAULT'
    order by size_for_estimate
    /
    
    db_cache_advice needs to be on for the above to work
    
    alter system set db_cache_advice=on;

    Script to find out Backups taken and their duration in last 24 hours

    To get a picture of all the full or incremental backups taken along with the time it took, use the below script


    select decode(BACKUP_TYPE, 'L', 'ARCH', 'D', 'DB', 'I', 'INC',
                  'Unknown type='||BACKUP_TYPE) TYPE,
           to_char(a.start_time, 'DDMON HH24:MI') start_time,
           to_char(a.elapsed_seconds/60, '99.9')||' Min' DURATION,
           substr(handle, -35) handle,
           nvl(d.file#, l.sequence#) file#, nvl(d.blocks, l.blocks) blocks
    from   SYS.V_$BACKUP_SET a, SYS.V_$BACKUP_PIECE b,
           SYS.V_$BACKUP_DATAFILE d, SYS.V_$BACKUP_REDOLOG l
    where  a.start_time between sysdate-1 and sysdate
      and  a.SET_STAMP = b.SET_STAMP
      and  a.SET_STAMP = d.SET_STAMP(+)
      and  a.SET_STAMP = l.SET_STAMP(+)
order  by start_time, file#;



    This shows the backup of all datafiles and archive log files

    Script to Compile all invalid Materialized Views in a schema

select 'alter '||object_type||' '||owner||'."'||object_name||'" compile;' from dba_objects where owner = 'OWNER' and object_type = 'MATERIALIZED VIEW' and status <> 'VALID';

Execute the above statement, copy the output, and run it in SQL*Plus.

    OR

    SET SERVEROUTPUT ON 
    BEGIN
      FOR i IN (SELECT owner,object_name, object_type FROM   dba_objects
                      WHERE  object_type IN ('MATERIALIZED VIEW')
                      AND    status <> 'VALID'
                      AND OWNER='SCHEMA NAME'
                      ORDER BY 2)
      LOOP
        BEGIN
          IF i.object_type = 'MATERIALIZED VIEW' THEN
            EXECUTE IMMEDIATE 'ALTER ' || i.object_type ||' "' || i.owner || '"."' || i.object_name || '" COMPILE';
        
          END IF;
        EXCEPTION
          WHEN OTHERS THEN
            DBMS_OUTPUT.put_line(i.object_type || ' : ' || i.owner ||' : ' || i.object_name);
        END;
      END LOOP;
END;
/


You can use the following in SQL*Plus if you need to compile all objects in a schema:

    exec dbms_utility.compile_schema('SCHEMA NAME')

Script to find out if any new table/procedure was added in the schema in the past 24 hours

    set serveroutput on
declare
  v clob;
begin
  for i in (select object_name, object_type
            from dba_objects
            where created >= sysdate - 1
            and owner = 'USER')
  loop
    select dbms_metadata.get_ddl(i.object_type, i.object_name, 'USER') into v from dual;
    dbms_output.put_line(v);
  end loop;
end;
/

    Find duplicate columns from all tables in a schema

    The below script is used to find the same columns appearing in multiple tables of a particular schema


    SELECT column_name,data_type,data_length,nullable,table_name
    FROM dba_tab_columns
    WHERE column_name IN
    (
        SELECT column_name
        FROM dba_tab_columns
        GROUP BY
            column_name
        HAVING COUNT(1) > 1  -- more than one value
    )
    and owner = 'USER' -- provide the user name
    AND COLUMN_NAME LIKE '%TIME%'
ORDER BY column_name;

Difference between Oracle Restore and Recovery

Restore and recovery are two separate steps. Restore is the process of copying datafiles back from the backup files.
Recovery is the process of applying redo (transaction information) to the datafiles to bring them to the state they were in just before the failure occurred.

See the sample below for recovering a database after the loss of a datafile, assuming the control file, archived redo log files, and online redo log files are available.


     RMAN> connect target / 
     RMAN> startup mount; 
     RMAN> restore database; 

     You’ll see several lines of output as RMAN tells you what it is restoring.    

     Next recover your database as follows:

     RMAN> recover database; 

You can now open your database for use with the alter database open command:

      RMAN> alter database open; 
      database opened

       How It Works

RMAN uses information stored in the control file to determine where to retrieve backups and which files to restore and recover.



    When you issue the recover database command, RMAN will automatically apply redo to any datafiles that need recovery. The recovery process includes applying changes found in the following:

• Incremental backup pieces (applicable only if using incremental backups)
• Archived redo log files (generated since the last backup or last incremental backup that is applied)
• Online redo log files (current and unarchived)


    You can open your database after the restore and recovery process is complete. If you restore from a backup control file, you are required to open your database with the open resetlogs command.
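As an illustrative sketch only, a restore that starts from a lost control file would look roughly like this (and the open then requires resetlogs, as noted above):

RMAN> startup nomount;
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;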

    Enabling/Disabling Archivelog mode in Oracle

    Enabling Archivelog Mode
    SQL> connect sys/password as sysdba
    SQL> shutdown immediate;
    SQL> startup mount;
    SQL> alter database archivelog;
    SQL> alter database open;

    Disabling Archivelog Mode


    SQL> connect sys/password as sysdba
    SQL> shutdown immediate;
    SQL> startup mount;
    SQL> alter database noarchivelog;
    SQL> alter database open;

Note: If the flashback database feature is enabled, first disable it before you disable database archiving.

    Displaying Archive Information


The log mode of the database can be displayed in two ways:


    SQL> select log_mode from v$database;
              LOG_MODE
              --------------------
              ARCHIVELOG


    SQL> archive log list;
          Database log mode              Archive Mode
          Automatic archival             Enabled
          Archive destination            /u02/ora_archives/
          Oldest online log sequence     1
          Next log sequence to archive   2
          Current log sequence           2

Prior to Oracle Database 10g, it was also required to enable automatic archiving, which tells Oracle to automatically create an archived redo log file when an online redo log file becomes full, by setting the log_archive_start parameter. From Oracle Database 10g onward this is no longer required, and the log_archive_start parameter is deprecated.

    Where is my Alert log file?

Beginning with Release 11g, the alert log is written as an XML-formatted file and as a text file (as in previous releases). The default location of both files is the new ADR home (Automatic Diagnostic Repository).

The ADR location is set using the DIAGNOSTIC_DEST initialization parameter. If this parameter is omitted, the default location of the ADR is, for example, '/u01/oracle/product/ora11g/log', depending on your ORACLE_HOME setting.

    The location of an ADR home is given by the following path, which starts at the ADR base directory: ADR_BASE/diag/product_type/product_id/instance_id

    If environment variable ORACLE_BASE is not set, DIAGNOSTIC_DEST is set to ORACLE_HOME/log.


You can also use the query below to locate your alert file from SQL*Plus:

    SQL> select * from v$diag_info;
Within the ADR home directory are these subdirectories:
alert - the XML-formatted alert log
trace - trace files and the text alert.log file
cdump - core files

The XML-formatted alert log is named 'log.xml'.
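For example, the directory that holds the text alert.log can be read directly ('Diag Trace' is a standard name value in v$diag_info):

SQL> select value from v$diag_info where name = 'Diag Trace';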

    Script to find out if Archlog Files backed up in last 24 Hours

To see whether archived logs were backed up in the last 24 hours, and how many are still sitting on disk, use the script below:

    SELECT backedup||' out of  '||archived||' archive logs backed up'  "Archlog files backed up",
           ondisk "Archlog files on disk"
      FROM (select count(*) archived
              from v$archived_log where completion_time > sysdate - 1),
           (select count(*) backedup from v$archived_log
             where backup_count > 0
               and completion_time > sysdate - 1),
           (select count(*) ondisk from v$archived_log
             where archived = 'YES' and deleted  = 'NO');

    Keeping PL/SQL in Oracle Memory

If an application loads a big PL/SQL block into memory, it may kick out several other cached SQL statements because of the LRU (least recently used) algorithm. Subsequent calls will then increase 'reloads'. That's where the reserved area of the shared pool comes in. This area, the shared pool reserved area, is sized by the SHARED_POOL_RESERVED_SIZE parameter and can be up to 50% of the shared_pool_size.

We can also get help with setting shared_pool_reserved_size properly by querying the dynamic performance view V$SHARED_POOL_RESERVED, as below:

SQL> SELECT free_space, request_misses, request_failures
  2  FROM v$shared_pool_reserved;


FREE_SPACE REQUEST_MISSES REQUEST_FAILURES
---------- -------------- ----------------
  37181672              0                1

If you see the following, your reserved pool is probably over-configured:

1. REQUEST_MISSES is constantly 0 or static.
2. The value shown in FREE_SPACE is more than 50% of the shared_pool_size.

Conversely, a non-zero or steadily increasing REQUEST_FAILURES shows that the reserved area is too small.
We can use the script below to find the size of the shared pool and check whether FREE_SPACE is more than 50% of it.

    SQL> SELECT pool, sum(bytes) "SIZE"
      2  FROM v$sgastat
  3  WHERE pool = 'shared pool'
      4 GROUP BY pool;

POOL               SIZE
------------ ----------
shared pool   838860800

From that we know that our FREE_SPACE value in V$SHARED_POOL_RESERVED is less than 50% of the shared pool, so we are good here!

Now, the process of adding PL/SQL code permanently into memory is called pinning. Once pinned, the PL/SQL code will remain in the portion of the shared pool allocated by SHARED_POOL_RESERVED_SIZE until the instance is bounced.

Since we now have a basic understanding of the shared pool reserved area and the reasons for configuring it, let's configure it in two simple steps.

1. Build the DBMS_SHARED_POOL package, as it is not installed by default, by running the dbmspool.sql script:

    SQL> @$ORACLE_HOME/rdbms/admin/dbmspool.sql

2. Use the two DBMS_SHARED_POOL procedures (KEEP and UNKEEP) to pin the object in memory, as below:

SQL> EXECUTE DBMS_SHARED_POOL.KEEP('PROCEDURE_TO_BE_PINNED');

Once done, you can verify the steps above by querying the view below:


SQL> SELECT owner, name, type
  2  FROM v$db_object_cache
  3  WHERE kept = 'YES';
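To shortlist large, not-yet-pinned PL/SQL objects as pinning candidates, something like the query below can be used; the 100 KB threshold is arbitrary:

-- large unpinned objects currently in the library cache
select owner, name, type, sharable_mem
from   v$db_object_cache
where  type in ('PACKAGE', 'PACKAGE BODY', 'PROCEDURE', 'FUNCTION', 'TRIGGER')
and    kept = 'NO'
and    sharable_mem > 102400
order by sharable_mem desc;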

    Oracle RMAN: Industry Standards & best practices for Stable/Reliable backups

The following are some best practices we can adopt in order to have the most stable and reliable Oracle backups.
    1. Turn on block checking.

    Block checking is enabled as below
                                         
    SQL> show parameter db_block_checking

    NAME                                 TYPE VALUE
    ------------------------------------ ---- ---------
    db_block_checking                 string FALSE
                                         
    SQL> alter system set db_block_checking = true scope=both;

When set to TRUE, this allows Oracle to detect the presence of corrupt blocks in the database early. It carries a slight performance overhead but can detect corruption caused by underlying disk, storage system, or I/O system problems.


2. Block Change Tracking (incremental backups, 10g & higher)

The block change tracking file maintains information that allows the RMAN incremental backup process to avoid reading data that has not been modified since the last backup. When block change tracking is not used, all blocks must be read to determine whether they have been modified since the last backup.


    SQL> alter database enable block change tracking using file '/u01/oradata/chg_trcing/chg_tracking.f';
Once set, the file can be queried from SQL*Plus as below:

    SQL> SELECT filename, status, bytes  FROM v$block_change_tracking;
For further information on incremental backups and block change tracking, follow the link below:
    http://docs.oracle.com/cd/B19306_01/backup.102/b14192/bkup004.htm


    3.  Archive log destination.

It is very important to have more than one archivelog destination: if an archivelog is corrupted or lost, having multiple copies in multiple locations means the other copies will still be available and usable.

This is how another archivelog location can be added to the database:

    SQL> alter system set log_archive_dest_2='location=/new/location/archive2' scope=both;
    4. Duplex redo log groups and members

    If an online log is deleted or becomes corrupt, you will have another member that can be used to recover if required.

    SQL> alter database add logfile member '/new/location/redo21.log' to group 1;

The SQL below can be used to find the number of members in each group:


    SQL> SELECT a.group#, count(a.member) FROM v$logfile a, v$log b WHERE a.group# = b.group# group by a.group#  order by 1;

        GROUP# COUNT(A.MEMBER)
    ---------- ---------------
             1               2
             2               2
             3               2
             4               2
             5               2
             6               2
             7               2



    5.  RMAN  CHECK LOGICAL option.

While taking backups with RMAN, using the CHECK LOGICAL option checks for logical corruption within each block, in addition to the normal checksum verification. This is the best way to ensure that you will get a good backup.

    RMAN> backup check logical database plus archivelog delete input;

For further information, see the link below:

    http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmvalid.htm

    6. Test the backups.

Use the VALIDATE command to test your backups. This does everything except actually restore the database, and it is the best method to determine whether your backup is good and usable before being in a situation where it is critical and issues exist.
    RMAN> restore validate database;
    See below for more information
    http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmvalid.htm

    7. When using RMAN have each datafile in a single backup piece

When doing a partial restore, RMAN must read through the entire backup piece to get the requested datafile or archivelog. The smaller the backup piece, the quicker the restore can complete. This is especially relevant with tape backups of large databases, or where the restore covers only an individual file or a few files.

However, very small values for filesperset will also cause larger numbers of backup pieces to be created, which can reduce backup performance and increase processing time for maintenance operations. Those factors must be weighed against the desired restore performance.


    RMAN> backup database filesperset 1 plus archivelog delete input;

    8. Maintain your RMAN catalog/controlfile

Choose your retention policy carefully. Make sure it complements your tape subsystem retention policy and your backup and recovery strategy requirements. If you are not using a catalog, ensure that your CONTROL_FILE_RECORD_KEEP_TIME parameter matches your retention policy.
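For example, a disk retention policy can be set in RMAN like this (values are illustrative):

RMAN> configure retention policy to recovery window of 14 days;
or
RMAN> configure retention policy to redundancy 2;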


    SQL> alter system set control_file_record_keep_time=31 scope=both;
    This will keep 31 days of backup records in the control file.

    For more details :
    BASH-DBA: Relation between RMAN retention period and ...
    Note 461125.1 .
    9. Control file backup

Ensure the autobackup parameter in RMAN is always set to ON. This ensures you always have an up-to-date controlfile backup, taken at the end of the current backup rather than during the backup itself.


    RMAN> configure controlfile autobackup on;
Also, keep your backup logs. These logs contain the parameters for your tape access and the locations of controlfile backups, which can be used if a complete loss occurs.
    10. Test your recovery


During a recovery situation, this will let you know how the recovery will go without actually doing it, and it can avoid having to restore the source datafiles again.

    SQL> recover database test;

    11. In RMAN backups do not specify 'delete all input' when backing up archivelogs

Reason: 'delete all input' will back up the logs from one destination and then delete both copies of each archivelog, whereas 'delete input' will back up from one location and then delete only what has been backed up. The next backup will back up the logs from location 2 as well as the new logs from location 1, then delete all that were backed up. This means you will have the archivelogs generated since the last backup available on disk in location 2 (as well as backed up once), plus two backup copies of everything prior to the previous backup.

See Note 443814.1, Managing multiple archive log destinations with RMAN, for details.
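Illustrative forms of the two variants:

RMAN> backup archivelog all delete input;     # deletes only the copies that were backed up
RMAN> backup archivelog all delete all input; # deletes the logs from all destinations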

