Updates

Recovering a dropped table using RMAN

Hi,
The best method to recover a dropped table is given below:

Steps for Option 3 from my Yodlee interview questions:

1. To recover a dropped or truncated table, a dummy database (a copy of the primary) is restored and recovered to a point in time before the drop, so that the table can be exported.

2. Once the table export is complete, the table can be imported into the primary database. The dummy database can be a subset of the primary database; however, it must include the SYSTEM, UNDO (or ROLLBACK), and the tablespace(s) where the dropped/truncated table resides.

The simplest method to create this dummy database is to use the RMAN DUPLICATE command.

RMAN Duplicate Command

CONNECT TARGET SYS/oracle@trgt
CONNECT AUXILIARY SYS/oracle@dupdb

DUPLICATE TARGET DATABASE TO dupdb
NOFILENAMECHECK UNTIL TIME 'SYSDATE-7';

This assumes the following:

•The target database trgt and duplicate database dupdb are on different hosts but have exactly the same directory structure.

•You want to name the duplicate database files the same as the target files.

•You are not using a recovery catalog.

•You are using automatic channels for disk and sbt, which are already configured.

•You want to recover the duplicate database to one week ago in order to view the data in trgt as it appeared at that time (and you have the required backups and logs to recover the duplicate to that point in time).
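Once the duplicate is open, the export/import described in steps 1 and 2 might look like the following command sketch, using the classic exp/imp utilities (Data Pump expdp/impdp works similarly). The SCOTT.EMP table, passwords, and file names are placeholders; the commands are illustrative rather than runnable as-is:

```
# On the duplicate database host: export the recovered table
exp system/oracle@dupdb tables=SCOTT.EMP file=emp_recovered.dmp log=exp_emp.log

# On the primary: import the table back
imp system/oracle@trgt fromuser=SCOTT touser=SCOTT tables=EMP file=emp_recovered.dmp ignore=y log=imp_emp.log
```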



Hope it helps.

Best regards,

Writing DBA scripts:


Seven SQL and SQL*Plus tricks which work wonders



Whether you like it or not, and however strongly tool providers - Oracle included - try to sell their wares, SQL*Plus remains, and is likely to remain for a while, the tool of choice for DBAs. You obviously needn't be a SQL expert to be a good DBA - although it doesn't hurt - but, expert or not, when you want to write usable SQL*Plus scripts there are a number of tricks worth knowing.

Here are seven of them which, if you don't already know them, will probably save you a lot of hassle.


If, in order to be a good DBA, all you had to do was type SELECT * from the appropriate DBA_something view, check a few things, and then execute a single DROP, ALTER or whatever to fix them, life would be easy. Unfortunately, most of the time you need to run a sequence of statements, which moreover changes over time. Therefore, one of the favorite tricks of experienced DBAs is to write SQL scripts which generate other SQL scripts which do the job.

Let's take an example to illustrate most of the SQL*Plus tricks you need.

Say that you need a script to execute an on-line backup of your database (we are assuming your database is running in the required ARCHIVELOG mode).

Basically, what you need to do for each tablespace is to execute

ALTER TABLESPACE xxx BEGIN BACKUP;
then backup the file(s) associated with the tablespace, then execute

ALTER TABLESPACE xxx END BACKUP;

The brute force method would be to write a first SQL script in which the BEGIN BACKUP is hard-coded for each tablespace, some kind of operating-system script which deals with the file backup and a second SQL script for the END BACKUP part.

Very, very bad method indeed, chiefly because things evolve. It's highly likely that you will add files to your database and quite possibly you will have new tablespaces too, which means that you will have to maintain all your scripts. Are you ready to bet you (or, for that matter, the guy who looks after your database while you are on vacation) won't forget? And your initial backup script may run smoothly every day without telling you you're omitting some files. One of the irritating things with backup scripts is that it's only when you're in really dire straits and trying to recover your production database, that you realize that they have been out of date and therefore totally useless for months. This is typically a case where you MUST use a SQL script to generate what is needed; for the Oracle data dictionary contains all the information you need, and is ALWAYS up-to-date.

To generate the ALTER TABLESPACE statements, there is no better place to look than DBA_TABLESPACES. You can easily type:

select 'alter tablespace ' || tablespace_name || chr(10) ||
'begin backup;'
from dba_tablespaces
where status <> 'INVALID';

(INVALID refers to tablespaces which were dropped).


Trick #1 : chr(10) is quite a useful thing to concatenate in strings, as it inserts a 'newline' character.

It's not really required here but it's very useful with lines which would otherwise be much too long and generally speaking improves the legibility of the scripts you generate.


To get the up-to-date list of files which constitute your database, you can refer to DBA_DATA_FILES. Let's assume that the underlying system is Unix and that we want to copy files to the /backup directory; you can then write something such as:

select 'cp ' || f.file_name || ' /backup'
from dba_data_files f;
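The same generate-the-commands idea can be sketched outside the database with plain shell, reading datafile paths from a text list instead of DBA_DATA_FILES (the paths below are made-up placeholders for the demo):

```shell
# Given a list of datafile paths (one per line), emit one cp command per
# file, just as the SELECT above does from DBA_DATA_FILES.
files='/u01/oradata/system01.dbf
/u01/oradata/sysaux01.dbf'

cmds=$(printf '%s\n' "$files" | while IFS= read -r f; do
  printf 'cp %s /backup\n' "$f"
done)
printf '%s\n' "$cmds"
```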

Trick #2 : WATCH THE LINE LENGTH!

SQL*Plus has the bad habit of 'wrapping' long lines (slicing them, in effect), which can generate invalid operating system commands. Don't forget that a full file name can be up to 255 (give or take one) characters in length, and so can the name of the backup directory. If you really want to be safe, you will have to insert at the beginning of your script:

set linesize 600

This is the first but certainly not the last SQL*Plus command we meet.


We are beginning to see the generation of the required commands taking shape, but we're certainly not done yet. For one thing, backup procedures are among the most sensitive procedures in operations. You must handle errors, either SQL errors or operating system errors - what about some change of protections preventing you from copying where you want, or a full file system?


Trick #3 : Use WHENEVER OSERROR EXIT

and WHENEVER SQLERROR EXIT SQL.SQLCODE and end your SQL procedures with EXIT 0.

This will enable you to test return codes from SQL*Plus in an operating system script and will allow you to handle errors properly.
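Trick #3 only pays off if the calling operating-system script actually inspects SQL*Plus's exit status. Here is a minimal shell sketch of that wrapper logic; `run_backup_sql` is a stub standing in for the real `sqlplus @do_backup.sql` invocation, so the control flow can be shown without a database:

```shell
# run_backup_sql stands in for: sqlplus -s "/ as sysdba" @do_backup.sql
# With WHENEVER SQLERROR EXIT SQL.SQLCODE in the script, sqlplus exits
# non-zero on failure and 0 (from EXIT 0) on success. Stubbed here.
run_backup_sql() {
  return 0   # pretend the backup script ran cleanly
}

if run_backup_sql; then
  status="backup OK"
else
  status="backup FAILED with exit code $?"
fi
echo "$status"
```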


Now let's improve our script. We could of course generate and run BEGIN BACKUP on all the tablespaces, generate and run the copy of all the files, and then generate and run END BACKUP on all the tablespaces. This is fairly crude and (let's leave out the details) not very good in terms of database management. What should be done, for each tablespace, is:

step 1 : execute BEGIN BACKUP

step 2 : copy the files associated with the tablespace

step 3 : execute END BACKUP


The difficulty is in correctly ordering a set of unrelated commands. In that case we need two more tricks:

Trick #4 : Use UNIONs to fetch commands generated by independent queries

and

Trick #5 : Use dummy, unprinted columns for ordering.


Here's how it works:

We want to order by tablespace, so we return the tablespace name but use the COLUMN SQL*Plus command to make it invisible in the output. We also want the steps to be returned in a given order, so we add a constant column, also invisible, which we shall use as a minor sorting key (note that in a UNION you need to give aliases to the columns in the first select list only):


column tablespace_name noprint

column step noprint
column text format A550
select tablespace_name,
'step_1' step,
'alter tablespace ' || tablespace_name || chr(10) ||
'begin backup;' text
from dba_tablespaces
where status <> 'INVALID'
union
select tablespace_name,
'step_2',
'cp ' || file_name || ' /backup'
from dba_data_files
union
select tablespace_name,
'step_3',
'alter tablespace ' || tablespace_name || chr(10) ||
'end backup;'
from dba_tablespaces
where status <> 'INVALID'
order by 1, 2;
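The effect of tricks #4 and #5 can be rehearsed outside the database with plain shell: each generated line carries hidden sort keys (tablespace name, then step number) that order the output and are then cut away, just like the noprint columns. The tablespace and file names are made up for the demo:

```shell
# Emit "tablespace<TAB>step<TAB>command" lines in arbitrary order, sort on
# the hidden keys, then keep only the command column (the 'noprint' trick).
script=$( {
  printf 'USERS\t1\talter tablespace USERS begin backup;\n'
  printf 'USERS\t3\talter tablespace USERS end backup;\n'
  printf 'USERS\t2\tcp /u01/users01.dbf /backup\n'
  printf 'SYSTEM\t2\tcp /u01/system01.dbf /backup\n'
  printf 'SYSTEM\t1\talter tablespace SYSTEM begin backup;\n'
  printf 'SYSTEM\t3\talter tablespace SYSTEM end backup;\n'
} | LC_ALL=C sort | cut -f3 )
printf '%s\n' "$script"
```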
Trick #6 : Use the three SQL*Plus commands needed to remove unwanted output!

Those three commands are:

set pagesize 0 -- Removes breaks on pages AND headings
set feedback off -- Removes the message about how many lines
-- were selected
set recsep off -- Removes unwanted blank lines.
With these settings, you can write your script in the following way:

spool do_backup.sql

-- (the three-step SELECT ... UNION ... query shown above goes here)

spool off

set feedback on
@do_backup

to generate and immediately run a backup procedure which is always guaranteed to be up-to-date.


Well, we have what we want, so what about the seventh promised trick?

Don't forget that when you run something under SQL*Plus you may run several 'profile' files: glogin.sql, which Unix DBAs will find under $ORACLE_HOME/sqlplus/admin, and possibly a login.sql file in the current directory. You don't know what is or ever will be in these scripts. So:


Trick #7 : code defensively and unset any potentially troublesome setting, even if the default is just right.

For instance:

set pause off
set echo off
set verify off -- If you are using parameters
set scan off
You now know most of the techniques you need to write truly useful scripts; then it's a matter of imagination, knowledge of the data dictionary and, of course, SQL skills. Beware that the hot backup script as introduced here is still a prototype for tutorial purposes. If you are interested in the subject, a complete version named hotbackup.sql is freely available on this site, www.oriolecorp.com. The adventurous can also have a look at the other SQL scripts; some of them are in the 'North Face' category...

Oracle ASM Queries for DBA

The GV$ASM (Automatic Storage Management) views are very important for gathering details about the structure and components of Oracle ASM. I found some time to put together queries which help gather all these details.

Below are some dynamic views and queries for exploring Oracle ASM structure and component details:


SQL> SELECT * FROM V$VERSION;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Solaris: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> SELECT TABLE_NAME FROM DICT WHERE TABLE_NAME LIKE '%ASM%';

TABLE_NAME
------------------------------
V$ASM_ACFSSNAPSHOTS
V$ASM_ACFSVOLUMES
V$ASM_ALIAS
V$ASM_ATTRIBUTE
V$ASM_CLIENT
V$ASM_DISK
V$ASM_DISKGROUP
V$ASM_DISKGROUP_STAT
V$ASM_DISK_IOSTAT
V$ASM_DISK_STAT
V$ASM_FILE

TABLE_NAME
------------------------------
V$ASM_FILESYSTEM
V$ASM_OPERATION
V$ASM_TEMPLATE
V$ASM_USER
V$ASM_USERGROUP
V$ASM_USERGROUP_MEMBER
V$ASM_VOLUME
V$ASM_VOLUME_STAT
GV$ASM_ACFSSNAPSHOTS
GV$ASM_ACFSVOLUMES
GV$ASM_ALIAS

TABLE_NAME
------------------------------
GV$ASM_ATTRIBUTE
GV$ASM_CLIENT
GV$ASM_DISK
GV$ASM_DISKGROUP
GV$ASM_DISKGROUP_STAT
GV$ASM_DISK_IOSTAT
GV$ASM_DISK_STAT
GV$ASM_FILE
GV$ASM_FILESYSTEM
GV$ASM_OPERATION
GV$ASM_TEMPLATE

TABLE_NAME
------------------------------
GV$ASM_USER
GV$ASM_USERGROUP
GV$ASM_USERGROUP_MEMBER
GV$ASM_VOLUME
GV$ASM_VOLUME_STAT

38 rows selected.


1)GV$ASM_DISKGROUP:
This view displays ASM instance details, with one row for every ASM diskgroup discovered by the ASM instance on the node.

SQL> SELECT INST_ID||' '||NAME ||' '||STATE||' '||TOTAL_MB||' '||FREE_MB
FROM GV$ASM_DISKGROUP;

INST_ID||''||NAME||''||STATE||''||TOTAL_MB||''||FREE_MB
--------------------------------------------------------------------------------
2 GRID_DATA_01 MOUNTED 12084 11152
2 TESTASMDB_REDO_01 CONNECTED 65296 40504
2 TESTASMDB_REDO_02 CONNECTED 65296 40504
2 TESTASMDB_FRA_01 CONNECTED 261872 142831
2 TESTASMDB_DATA_01 CONNECTED 1048016 59453
1 GRID_DATA_01 MOUNTED 12084 11152
1 TESTASMDB_DATA_01 CONNECTED 1048016 59453
1 TESTASMDB_FRA_01 CONNECTED 261872 142831
1 TESTASMDB_REDO_01 CONNECTED 65296 40504
1 TESTASMDB_REDO_02 CONNECTED 65296 40504

2)GV$ASM_ALIAS:
This view displays all system and user-defined aliases.
There is one row for every alias present in every diskgroup mounted by the ASM instance. The RDBMS instance displays no rows in this view.


SQL> SELECT INST_ID||''||NAME||''||FILE_NUMBER||''||ALIAS_DIRECTORY
FROM GV$ASM_ALIAS;


INST_ID||''||NAME||''||FILE_NUMBER||''||ALIAS_DIRECTORY
--------------------------------------------------------------------------------
2 USERS.323.750185521 323 N
2 ASMWORK.320.750263763 320 N
2 ASMMASTER.319.750263765 319 N
2 UNDOTBS3.306.768750281 306 N
2 TEMPFILE 4294967295 Y
2 TEMP.325.750185505 325 N

3)GV$ASM_ATTRIBUTE:
This Oracle Database 11g view displays one row for each ASM attribute defined.
These attributes are listed when they are defined in CREATE DISKGROUP or ALTER DISKGROUP statements. DISK_REPAIR_TIMER is an example of an attribute.

SQL> SELECT INST_ID||' '||NAME||' '|| GROUP_NUMBER
FROM GV$ASM_ATTRIBUTE;

no rows selected

4)GV$ASM_CLIENT:
This view displays one row for each RDBMS instance that has an ASM diskgroup open.

SQL> SELECT INST_ID||' '||INSTANCE_NAME||' '||DB_NAME||' '||STATUS
FROM GV$ASM_CLIENT;

INST_ID||''||INSTANCE_NAME||''||DB_NAME||''||STATUS
--------------------------------------------------------------------------------
1 +ASM1 TESTASMDB CONNECTED
1 +ASM1 TESTASMDB CONNECTED
1 +ASM1 TESTASMDB CONNECTED
1 +ASM1 TESTASMDB CONNECTED
2 +ASM2 TESTASMDB CONNECTED
2 +ASM2 TESTASMDB CONNECTED
2 +ASM2 TESTASMDB CONNECTED
2 +ASM2 TESTASMDB CONNECTED



5)GV$ASM_DISK:
This view contains specifics about all disks discovered by the ASM instance,
including mount status, disk state, and size. There is one row for every disk discovered by the ASM instance.


SQL> SELECT INST_ID||' '||STATE||' '||TOTAL_MB||' '||FREE_MB||' '||NAME
FROM GV$ASM_DISK;

INST_ID||''||STATE||''||TOTAL_MB||''||FREE_MB||''||NAME
--------------------------------------------------------------------------------
2 NORMAL 2014 1843 GRID_DATA_01_0000
2 NORMAL 2014 1844 GRID_DATA_01_0001
2 NORMAL 2014 1843 GRID_DATA_01_0002
2 NORMAL 2014 1878 GRID_DATA_01_0003
2 NORMAL 2014 1875 GRID_DATA_01_0004
2 NORMAL 2014 1869 GRID_DATA_01_0005
2 NORMAL 8162 5063 TESTASMDB_REDO_01_0000
2 NORMAL 8162 5061 TESTASMDB_REDO_01_0001
2 NORMAL 8162 5065 TESTASMDB_REDO_01_0002

6)GV$ASM_DISK_IOSTAT:
This displays information about disk I/O statistics for each ASM Client. If this view is queried from the database instance, only the rows for that instance are shown.

SQL> SELECT INST_ID||' '||INSTNAME||' '||DBNAME||' '||GROUP_NUMBER||' '||DISK_NUMBER||' '||READS||' '||WRITES||' '||READ_TIME||' '||WRITE_TIME
FROM GV$ASM_DISK_IOSTAT;

7)GV$ASM_DISK_STAT:
This view has the same contents as V$ASM_DISK, except that V$ASM_DISK_STAT reads disk information from the cache and thus performs no disk discovery.
This view is primarily used for quick access to the disk information without the overhead of disk discovery.

8)GV$ASM_DISKGROUP_STAT:
This view has the same contents as V$ASM_DISKGROUP,
except that V$ASM_DISKGROUP_STAT reads diskgroup information from the cache and thus performs no disk discovery.
This view is primarily used for quick access to the diskgroup information without the overhead of disk discovery.


SQL> SELECT INST_ID||' '||NAME||' '||ALLOCATION_UNIT_SIZE||' '||STATE||' '||TOTAL_MB||' '||FREE_MB
FROM GV$ASM_DISKGROUP_STAT;

INST_ID||''||NAME||''||ALLOCATION_UNIT_SIZE||''||STATE||''||TOTAL_MB||''||FREE_M
--------------------------------------------------------------------------------
1 GRID_DATA_01 1048576 MOUNTED 12084 11152
1 TESTASMDB_DATA_01 1048576 CONNECTED 1048016 59453
1 TESTASMDB_FRA_01 1048576 CONNECTED 261872 142831
1 TESTASMDB_REDO_01 1048576 CONNECTED 65296 40504
1 TESTASMDB_REDO_02 1048576 CONNECTED 65296 40504
2 GRID_DATA_01 1048576 MOUNTED 12084 11152
2 TESTASMDB_REDO_01 1048576 CONNECTED 65296 40504
2 TESTASMDB_REDO_02 1048576 CONNECTED 65296 40504
2 TESTASMDB_FRA_01 1048576 CONNECTED 261872 142831
2 TESTASMDB_DATA_01 1048576 CONNECTED 1048016 59453

10 rows selected.

9)GV$ASM_FILE:
This view displays information about ASM files.
There is one row for every ASM file in every diskgroup mounted by the ASM instance. In an RDBMS instance, V$ASM_FILE displays no rows.


SQL> SELECT INST_ID||''||FILE_NUMBER||''||BLOCK_SIZE||''||CREATION_DATE||''||REDUNDANCY
FROM GV$ASM_FILE;

INST_ID||''||FILE_NUMBER||''||BLOCK_SIZE||''||CREATION_DATE||''||REDUNDANCY
--------------------------------------------------------------------------------
1 335 512 17-MAR-12 UNPROT
1 336 512 15-MAR-12 UNPROT
1 337 512 15-MAR-12 UNPROT
1 339 512 18-MAR-12 UNPROT
1 340 512 23-MAR-12 UNPROT

10)GV$ASM_OPERATION:
This view describes the progress of an in-flight ASM rebalance operation.
In an RDBMS instance, GV$ASM_OPERATION displays no rows.

SQL> SELECT INST_ID||' '||OPERATION||' '||STATE||' '||POWER||' '||ACTUAL||' '||SOFAR
FROM GV$ASM_OPERATION;

11)GV$ASM_TEMPLATE:
This view contains information on user- and system-defined templates.
GV$ASM_TEMPLATE displays one row for every template present in every diskgroup mounted by the ASM instance.
In a RDBMS instance,GV$ASM_TEMPLATE displays one row for every template present in every diskgroup mounted by the ASM instance with which the RDBMS instance communicates.

SQL> SELECT INST_ID||''||GROUP_NUMBER||''||STRIPE||''||SYSTEM
FROM GV$ASM_TEMPLATE;


SQL> SELECT FS_NAME||' '||VOL_DEVICE||' '||SNAP_NAME||' '||CREATE_TIME
FROM V$ASM_ACFSSNAPSHOTS;

no rows selected


Enjoy your Oracle ASM learning...

Best regards,

Reference summary of asmcmd commands

asmcmd is a very nice feature of ASM. It helps us understand and do things from the command line itself. I find this utility very useful. Thanks to Oracle again for making things easy.

Reference summary of asmcmd commands:

cd Changes the current directory to the specified directory.

du Displays the total disk space occupied by ASM files in the
specified ASM directory and all its subdirectories, recursively.

exit Exits ASMCMD.

find Lists the paths of all occurrences of the specified name (with
wildcards) under the specified directory.

help Displays the syntax and description of ASMCMD commands.

ls Lists the contents of an ASM directory, the attributes of the
specified file, or the names and attributes of all disk groups.

lsct Lists information about current ASM clients.

lsdg Lists all disk groups and their attributes.

mkalias Creates an alias for a system-generated filename.

mkdir Creates ASM directory.

pwd Displays the path of the current ASM directory.

rm Deletes the specified ASM files or directories.

rmalias Deletes the specified alias, retaining the file that the alias points to.


lsct: lists all connected Oracle instances.

ASMCMD> lsct
DB_Name Status Software_Version Compatible_version Instance_Name Disk_Group
+ASM CONNECTED 11.2.0.1.0 11.2.0.1.0 +ASM1 GRID_DATA_01
+ASM CONNECTED 11.2.0.1.0 11.2.0.1.0 +ASM1 TESTDB_DATA_01
TESTDB CONNECTED 11.2.0.1.0 11.2.0.0.0 TESTDB1 TESTDB_FRA_01
TESTDB CONNECTED 11.2.0.1.0 11.2.0.0.0 TESTDB1 TESTDB_DATA_01
ICMQB CONNECTED 11.2.0.1.0 11.2.0.0.0 ICMQB1 TESTDB_FRA_01
ICMQB CONNECTED 11.2.0.1.0 11.2.0.0.0 ICMQB1 TESTDB_REDO_01
ICMQB CONNECTED 11.2.0.1.0 11.2.0.0.0 ICMQB1 TESTDB_REDO_02

lsdg: lists diskgroup status and type.

ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL N 512 4096 1048576 5934 5002 173 2414 0 N GRID_DATA_01/
MOUNTED EXTERN N 512 4096 1048576 2161538 112220 0 112220 0 N TESTDB_DATA_01/
MOUNTED EXTERN N 512 4096 1048576 654684 560350 0 560350 0 N TESTDB_FRA_01/
MOUNTED EXTERN N 512 4096 1048576 32652 11922 0 11922 0 N TESTDB_REDO_01/
MOUNTED EXTERN N 512 4096 1048576 32652 11954 0 11954 0 N TESTDB_REDO_02/


iostat: lists disk I/O statistics.

ASMCMD> iostat
Group_Name Dsk_Name Reads Writes
GRID_DATA_01 GRID_DATA_01_0000 3121340416 0
GRID_DATA_01 GRID_DATA_01_0001 3616768 0
GRID_DATA_01 GRID_DATA_01_0002 4354048 4096
GRID_DATA_01 GRID_DATA_01_0003 2061918208 4096
GRID_DATA_01 GRID_DATA_01_0004 3616768 0
GRID_DATA_01 GRID_DATA_01_0005 3731456 4096
TESTDB_DATA_01 TESTDB_DATA_01_0000 1868196700160 205746507776
TESTDB_DATA_01 TESTDB_DATA_01_0001 1860077395968 203464622080
TESTDB_DATA_01 TESTDB_DATA_01_0002 1860528062464 204175556608

Reference: My Oracle Support and the Oracle documentation.

Hope it helps...

RAW Device and ASM for Oracle Database Creation on Linux & Solaris Platform

Hi,
We often have doubts related to raw devices and ASM when it comes to practice.
The walkthrough below can be useful for understanding raw devices and ASM, and how both can be used to create a database on the Linux and Solaris platforms.

1.A)DATABASE CREATION ON RAW PARTITION ON LINUX:

Step 1: Create logical volumes from a volume group:
#pvcreate /dev/sdc
#pvdisplay    # to display the information of the volume
#vgcreate rawvg /dev/sdc
#vgdisplay    # to display the volume groups
#lvcreate -L 500 -n rawvg1 rawvg    # for system (fixed)
#lvcreate -L 500 -n rawvg2 rawvg    # for sysaux (fixed)
#lvcreate -L 500 -n rawvg3 rawvg    # for undo
#lvcreate -L 250 -n rawvg4 rawvg    # for temp
#lvcreate -L 160 -n rawvg5 rawvg    # for example
#lvcreate -L 120 -n rawvg6 rawvg    # for users
#lvcreate -L 120 -n rawvg7 rawvg    # for redo logfiles
#lvcreate -L 120 -n rawvg8 rawvg    # for redo logfiles
#lvcreate -L 120 -n rawvg9 rawvg    # for redo logfiles
#lvcreate -L 110 -n rawvg10 rawvg   # for control file
#lvcreate -L 110 -n rawvg11 rawvg   # for control file
#lvcreate -L 110 -n rawvg12 rawvg   # for control file
#lvcreate -L 5 -n rawvg13 rawvg     # for spfile
#lvcreate -L 5 -n rawvg14 rawvg     # for password file

Step 2:Grant ownership and permissions to your Logical volume:
chown oracle:oinstall /dev/rawvg/rawvg*
chmod 775 /dev/rawvg/rawvg*


Step 3:Create the configuration file for RAW DEVICE
create a text file orcl_raw.conf in /export/home/oracle

vi /export/home/oracle/orcl_raw.conf

system=/dev/rawvg/rawvg1
sysaux=/dev/rawvg/rawvg2
undotbs1=/dev/rawvg/rawvg3
temp=/dev/rawvg/rawvg4
example=/dev/rawvg/rawvg5
users=/dev/rawvg/rawvg6
redo1_1=/dev/rawvg/rawvg7
redo1_2=/dev/rawvg/rawvg8
redo1_3=/dev/rawvg/rawvg9
control1=/dev/rawvg/rawvg10
control2=/dev/rawvg/rawvg11
control3=/dev/rawvg/rawvg12
spfile=/dev/rawvg/rawvg13
pwdfile=/dev/rawvg/rawvg14

Step 4: Create a mapping file so that Linux treats these as raw partitions

Linux does not support raw partitions directly, so we have to bind them explicitly to make Linux understand that these are raw partitions.

DBCA_RAW_CONFIG=/export/home/oracle/orcl_raw.conf    # creation of the mapping file
export DBCA_RAW_CONFIG


vi /etc/sysconfig/rawdevices

/dev/raw/raw1 /dev/rawvg/rawvg1
/dev/raw/raw2 /dev/rawvg/rawvg2
/dev/raw/raw3 /dev/rawvg/rawvg3
/dev/raw/raw4 /dev/rawvg/rawvg4
/dev/raw/raw5 /dev/rawvg/rawvg5
/dev/raw/raw6 /dev/rawvg/rawvg6
/dev/raw/raw7 /dev/rawvg/rawvg7
/dev/raw/raw8 /dev/rawvg/rawvg8
/dev/raw/raw9 /dev/rawvg/rawvg9
/dev/raw/raw10 /dev/rawvg/rawvg10
/dev/raw/raw11 /dev/rawvg/rawvg11
/dev/raw/raw12 /dev/rawvg/rawvg12
/dev/raw/raw13 /dev/rawvg/rawvg13
/dev/raw/raw14 /dev/rawvg/rawvg14

#/sbin/service rawdevices restart    # to restart the rawdevices service
#/usr/bin/raw -qa                    # to view the bound raw devices

chown oracle:oinstall /dev/raw/raw*
chmod 775 /dev/raw/raw*

Step 5: Run DBCA to create the database
Run DBCA and create the database; make sure to select the option to create the database on raw partitions.

1.B)CREATING THE DATABASE WITH ASM ON LINUX:

Step 1: Create four partitions using fdisk utility like
/dev/sdb6
/dev/sdb7
/dev/sdb8
/dev/sdb9

Step 2: Edit /etc/sysconfig/rawdevices and add the following lines:
/dev/raw/raw15 /dev/sdb6
/dev/raw/raw16 /dev/sdb7
/dev/raw/raw17 /dev/sdb8
/dev/raw/raw18 /dev/sdb9

Step 3: Now restart the rawdevices service:
/sbin/service rawdevices restart
To check it: /usr/bin/raw -qa

Step 4:Now we have to change the ownership to oracle user (software owner)
chown oracle:oinstall /dev/raw/raw*
chmod 775 /dev/raw/raw*

Step 5: Create the ASM parameter file (i.e. the parameter file for the ASM instance)
#cd /export/home/oracle
#vi asminit.ora
instance_name=+ASM
instance_type=ASM
asm_power_limit=1
asm_diskstring='/dev/raw/raw15','/dev/raw/raw16','/dev/raw/raw17','/dev/raw/raw18'
:wq

Step 6: Start our ASM instance
# cd $ORACLE_HOME/bin
# ./localconfig add    => this starts the services for the ASM instance

Now log in as the oracle user and start the listener
$ lsnrctl start
$export ORACLE_SID=+ASM
$orapwd file=$ORACLE_HOME/dbs/orapw+ASM password=oracle
$sqlplus "/as sysdba"
>create spfile from pfile='/export/home/oracle/asminit.ora'
>startup nomount
>create diskgroup dbgroup1 normal redundancy
failgroup controller1 disk
'/dev/raw/raw15',
'/dev/raw/raw16'
failgroup controller2 disk
'/dev/raw/raw17',
'/dev/raw/raw18';

Step 7:Create the ASM Database using DBCA
* Open another terminal and run the dbca utility.

[If, after configuring the listener through netmgr, the connection string does not work, then issue the following command at the SQL prompt:

>alter system register; ]

Cool, your ASM database is ready on Linux. Now let's try the Solaris platform.


2.A)DATABASE CREATION WITH RAW PARTITION ON SOLARIS:

Make a replica:
# metadb -a -f c1t1d0s6
#metadb -a -c 2 c1t1d0s7
#metadb -i    # displays the information of the replica
#metainit d4 1 4 c1t2d0s0 c1t2d0s1 c1t3d0s0 c1t4d0s1
#metainit d10 -p d4 500mb   # system
#metainit d11 -p d4 500mb   # sysaux
#metainit d12 -p d4 500mb   # undotbs1
#metainit d13 -p d4 250mb   # temp
#metainit d14 -p d4 160mb   # example
#metainit d15 -p d4 120mb   # users
#metainit d16 -p d4 120mb   # redo1_1
#metainit d17 -p d4 120mb   # redo1_2
#metainit d18 -p d4 120mb   # control1
#metainit d19 -p d4 120mb   # control2
#metainit d20 -p d4 5mb     # spfile
#metainit d21 -p d4 5mb     # pfile

#chown oracle:oinstall /dev/md/rdsk/d*
#chmod 775 /dev/md/rdsk/d*

Log in as the oracle user and create a text file in ORACLE_HOME named dbname_raw.conf (where dbname is the name of the database you want to create):
$vi $ORACLE_HOME/dbname_raw.conf
system=/dev/md/rdsk/d10
sysaux=/dev/md/rdsk/d11
undotbs1=/dev/md/rdsk/d12
temp=/dev/md/rdsk/d13
example=/dev/md/rdsk/d14
users=/dev/md/rdsk/d15
redo1_1=/dev/md/rdsk/d16
redo1_2=/dev/md/rdsk/d17
control1=/dev/md/rdsk/d18
control2=/dev/md/rdsk/d19
spfile=/dev/md/rdsk/d20
pfile=/dev/md/rdsk/d21
:wq

$export DBCA_RAW_CONFIG=$ORACLE_HOME/dbname_raw.conf
$dbca (enter)

2.B)DATABASE CREATION WITH ASM ON SOLARIS:

Create 4 slices named c1t2d0s3, c1t2d0s4, c1t2d0s5 and c1t2d0s6. We will use these slices for the ASM database creation.
Create the init.ora file for ASM:
# cd /export/home
vi init.ora
instance_name=+ASM
instance_type=ASM
ASM_POWER_LIMIT=1
ASM_DISKSTRING=/dev/rdsk/c1t2d0s[3456]
:wq

#chown oracle:oinstall /dev/rdsk/c1t2d0s3
#chown oracle:oinstall /dev/rdsk/c1t2d0s4
#chown oracle:oinstall /dev/rdsk/c1t2d0s5
#chown oracle:oinstall /dev/rdsk/c1t2d0s6
#chmod 775 /dev/rdsk/c1t2d0s3
#chmod 775 /dev/rdsk/c1t2d0s4
#chmod 775 /dev/rdsk/c1t2d0s5
#chmod 775 /dev/rdsk/c1t2d0s6

#cd $ORACLE_HOME/bin
#./localconfig add

Log out and log in as the oracle user
$export ORACLE_SID=+ASM
$sqlplus "/as sysdba"
>create spfile from pfile='/export/home/init.ora'
>startup nomount
> create diskgroup dbgroup1 normal redundancy
failgroup controller1 disk
'/dev/rdsk/c1t2d0s3',
'/dev/rdsk/c1t2d0s4'
failgroup controller2 disk
'/dev/rdsk/c1t2d0s5',
'/dev/rdsk/c1t2d0s6';
>exit
$dbca (enter)


Hope it helps in understanding RAW and ASM better.

Best regards,

Copying Files and Directories commands for DBAs:

Hi DBAs,
Many of our tasks deal with copying data within the same server or between different servers, from Windows to a Linux (Unix-based) OS, or from one Unix-based operating system to another. We can use different commands and tools for doing this. Below I'm sharing my experience.
       
1)tar: tar stands for tape archive. It is one of the favorite commands of Apps DBAs when performing cloning and refreshing of environments; we use this command for copying huge directories from one environment to another. tar combined with gzip is the best combination; tar is a very powerful command.

2)scp:
scp stands for secure copy. It is a fast command for copying, and again a handy tool for DBAs and Apps DBAs. It is used to copy files and directories from one server to another.

3)gzip and gunzip: gzip is used for compressing files on Unix-based systems, and gunzip is used for uncompressing files on Unix-based systems, i.e. Solaris, Linux, IBM-AIX, HP-UX.
Again, a very useful command for DBAs.

Eg 1: Let's say I want to tar and zip a directory 'bin' and copy it to a target server.

On Source System:
=================

Tarring and zipping:
===================

tar -cvf - bin | gzip >bin.tar.gz


On Target System:
=================

Copying to Target system:
==========================

scp -rp oracle@testpapp01cdp.comp.com:/product/app/TESTAPPSDB/apps/apps_st/appl/xxtwc/12.0.0/bin.tar.gz .

passwd:

Extracting and Untarring:
=========================

gunzip -c bin.tar.gz|tar -xvf -

Eg 2:


For Compressing:
=================

Let's say you are currently in the /product/obiee directory and want to archive and then compress the OracleBIData directory; it can be done as follows:

tar -cvf  /product/obiee/OracleBIData.tar OracleBIData/


>ls -altr
total 987816
drwxr-x---   9 obiee    psapps      1024 Jul  6  2010 OracleBIData
drwxr-xr-x   4 obiee    psapps        96 Jul  9  2010 java
drwxr-x---  20 obiee    psapps      1024 Dec  9  2010 OracleBI
drwxr-xr-x  16 applmgr  oinstall    1024 May 12  2011 ..
drwxrwxrwx   5 obiee    oinstall    1024 Aug  8 21:05 .
-rw-r--r--   1 obidev2  psapps   505448448 Aug  8 21:17 OracleBIData.tar


For Zipping the Directory:
-----------------------------------------------------

$gzip OracleBIData.tar


>ls -altr
total 78256
drwxr-x---   9 obiee    psapps      1024 Jul  6  2010 OracleBIData
drwxr-xr-x   4 obiee    psapps        96 Jul  9  2010 java
drwxr-x---  20 obiee    psapps      1024 Dec  9  2010 OracleBI
drwxr-xr-x  16 applmgr  oinstall    1024 May 12  2011 ..
-rw-r--r--   1 obidev2  psapps   40054038 Aug  8 21:17 OracleBIData.tar.gz


Eg 3:


For Unzipping the Directory:
------------------------------------

$gunzip OracleBIData.tar.gz


For untarring/extracting the Directory:
-------------------------------------------

$tar -xvf OracleBIData.tar


Note:
gzip => compression
c = create
v = verbose
f = archive file name

tar with gzip is a very good method for backing up a directory, since it is fast and saves disk space efficiently.

Note: tar command options are operating-system dependent, so please check the man page for the various options.
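The archive-and-extract round trip from Eg 1-3 can be rehearsed end to end on scratch files before trusting it with a real directory; every path below is a throwaway demo path:

```shell
# Build a scratch 'bin' directory, archive+compress it, restore it
# elsewhere, and check that the contents survived the round trip.
demo=$(mktemp -d)
mkdir -p "$demo/src/bin" "$demo/restore"
echo 'important script' > "$demo/src/bin/run.sh"

cd "$demo/src"
tar -cf - bin | gzip > "$demo/bin.tar.gz"    # tar to stdout, pipe into gzip

cd "$demo/restore"
gunzip -c "$demo/bin.tar.gz" | tar -xf -     # gunzip to stdout, pipe into tar

cat "$demo/restore/bin/run.sh"
```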


4)rsync: rsync is a great tool for backing up and restoring files from a source system to a target system. It is a useful tool for DBAs. Note that in my environment this command works on Linux but not on Solaris, where rsync is not installed (it doesn't even have a man entry there).

Eg 4: Copy all directories and the file structure, preserving ownership, permissions and symbolic links.


On Target System:
=================

rsync -arv oracle@testpapp01cdp.comp.com:/tmp/rafi/bin /tmp/rafi/backup_bin

(/tmp/rafi=>current working directory on Target Server)


5)sftp: sftp stands for secure file transfer protocol. Sometimes, if scp is disabled for security reasons, you can use sftp instead. It is a very secure way
of copying files and directories from a source to a target server. This is very useful for DBAs and Apps DBAs.

Eg 5:Copying files from one server to other server

Using sftp for copying from qb to prod:
------------------------------------------

On Target System:
=================

oracle@[testpapp01cdp]-> sftp oracle@testqapp02cdp.comp.com:/tmp/rafi
Connecting to testqapp02cdp.comp.com...
Password:
Changing to: /tmp/rafi
sftp> ls
sftp> get lookup_xxtwc_perf_level_planned_perc.ldt
Fetching /tmp/rafi/lookup_xxtwc_perf_level_planned_perc.ldt to lookup_xxtwc_perf_level_planned_perc.ldt

6)winscp: WinSCP is a GUI (graphical user interface) based tool for copying files between Windows and Linux (Unix environments) and vice versa. I like this tool very much, since it is very user friendly. But copying a large amount of data takes a long time with this tool. You can save the machine name, log in with a privileged user, and copy the files.

7)ftp: ftp stands for file transfer protocol and is used for copying files between Windows and Linux/Unix environments and vice versa. It is a fast method of copying files, but note that, unlike sftp, it sends data and passwords unencrypted.

Eg 7:

On Target System(Windows):
===========================

Go to the target folder in windows.

C:\Documents and Settings\rafiuddin_alvi>cd D:\TEST

C:\Documents and Settings\rafiuddin_alvi>d:

D:\TEST>ftp testdapp01cdc.comp.com
Connected to testdapp01cdc.comp.com.
220 testdapp01cdc.comp.com FTP server ready.
User (testdapp01cdc.comp.com:(none)): oracle
331 Password required for oracle.
Password:
230 User oracle logged in.
ftp> help
Commands may be abbreviated.  Commands are:

!               delete          literal         prompt          send
?               debug           ls              put             status
append          dir             mdelete         pwd             trace
ascii           disconnect      mdir            quit            type
bell            get             mget            quote           user
binary          glob            mkdir           recv            verbose
bye             hash            mls             remotehelp
cd              help            mput            rename
close           lcd             open            rmdir
ftp> cd /tmp/rafi
250 CWD command successful.
ftp> dir
200 PORT command successful.
150 Opening ASCII mode data connection for /bin/ls.
total 4640
-rwxrwxrwx   1 oracle  twcother 2272275 Aug  6 03:18 cn_reports.rpd
-rwxrwxrwx   1 oracle  twcother    8115 Jul  7 19:07 DisputeSrchPG.xml
-rwxrwxrwx   1 appldev2 oinstall     916 Jul 25 04:38 L6958530.log
-rwxrwxrwx   1 appldev2 oinstall     880 Aug  6 08:06 L6960409.log
-rwxr-xr-x   1 oracle  twcother      23 Aug  8 20:11 test1.sh
-rwxrwxrwx   1 appldev2 oinstall   22889 Jul 25 04:38 XX_QUINTILE_BATCH.ldt
-rwxrwxrwx   1 oracle  twcother    1802 Jul  5 06:48 XXTaskPerzSumCO.class
-rwxrwxrwx   1 oracle  twcother    1934 Jul  7 19:07 XXTaskSummaryCO.class
-rwxrwxrwx   1 appldev2 oinstall   20235 Aug  6 08:06 XXTWC_REPORT_METRIC_VAL
.ldt
226 Transfer complete.
ftp: 672 bytes received in 0.08Seconds 8.62Kbytes/sec.
ftp> get test1.sh
200 PORT command successful.
150 Opening ASCII mode data connection for test1.sh (23 bytes).
226 Transfer complete.
ftp: 24 bytes received in 0.00Seconds 24000.00Kbytes/sec.

ftp> get cn_reports.rpd
200 PORT command successful.
150 Opening ASCII mode data connection for cn_reports.rpd (2272275 bytes).
226 Transfer complete.
ftp: 2284457 bytes received in 113.08Seconds 20.20Kbytes/sec.

     To exit ftp:
  
ftp> bye
221-You have transferred 2284481 bytes in 2 files.
221-Total traffic for this session was 2285843 bytes in 3 transfers.
221-Thank you for using the FTP service on testdapp01cdc.comp.com.
221 Goodbye.

b)Putting files from source(Windows) to Target(Linux):

ftp> mput my_docs.txt
mput my_docs.txt? y
200 PORT command successful.
150 Opening ASCII mode data connection for my_docs.txt.
226 Transfer complete.
ftp> dir
200 PORT command successful.
150 Opening ASCII mode data connection for /bin/ls.
total 4640
-rwxrwxrwx   1 oracle  twcother 2272275 Aug  6 03:18 cn_reports.rpd
-rwxrwxrwx   1 oracle  twcother    8115 Jul  7 19:07 DisputeSrchPG.xml
-rwxrwxrwx   1 appldev2 oinstall     916 Jul 25 04:38 L6958530.log
-rwxrwxrwx   1 appldev2 oinstall     880 Aug  6 08:06 L6960409.log
-rw-r--r--   1 oracle  twcother       0 Aug  8 20:48 my_docs.txt
-rwxr-xr-x   1 oracle  twcother      23 Aug  8 20:11 test1.sh

8)cp: cp is the most basic command in Linux/Unix environments for copying files and directories on the same server.

Eg 8: To copy test1.sh to the target location /tmp/rafi, we can use the below command.

testdapp01cdc(/export/home/oracle) >cp test1.sh /tmp/rafi

testdapp01cdc(/tmp/rafi) >ls -altr *test*
-rwxr-xr-x   1 oracle  twcother      23 Aug  8 20:11 test1.sh
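When copying scripts around like this, cp -p is worth knowing: it preserves the file's mode and timestamps, so the copy in the target location still shows when the original was last changed. A small sketch (file names are made up):

```shell
#!/bin/sh
# Sketch: cp -p preserves mode and modification time of the source file.
# /tmp/cp_demo and its files are invented for this example.
set -e

mkdir -p /tmp/cp_demo
# give the source file a deliberately old timestamp
touch -t 202001010000 /tmp/cp_demo/test1.sh

# -p = preserve mode, ownership (where permitted) and timestamps
cp -p /tmp/cp_demo/test1.sh /tmp/cp_demo/test1_copy.sh

# both files should now show the same modification time
ls -l /tmp/cp_demo
```

Without -p, the copy gets the current time as its modification time, which can be misleading when you later try to work out which version of a script is newest.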



Hope it helps....


Enjoy DBA learning...

Oracle Database Health check scripts

0 comments
Hi All,
DBAs are often asked to check the health of a database. Database health can be checked in various ways, including:

OS level:
1. Physical memory / load: load normal, load averages: 0.35, 0.37, 0.36
2. OS space threshold (archive, ora_dump etc.): sufficient space available
3. Top 10 processes consuming memory: no process using exceptionally high memory
4. Free volumes available: sufficient disk space is available on the mount points
5. Filesystem space: under normal threshold

Database level:
6. Check extents / proactive space addition: space is being regularly added
7. Check alert log for ORA- and warning messages
8. Major wait events (latch/enqueue/lib cache pin): no major wait events
9. Max sessions
10. Long running jobs: 6 inactive sessions running for more than 8 hrs
11. Invalid objects: 185
12. Analyze jobs (once a week): done on 20-JAN-2008 at 06:00:06
13. Temp usage / rollback segment usage: normal
14. Nologging indexes
15. Hotbackup/coldbackup: went fine
16. Redo generation: normal
17. PQ processes: normal
18. I/O generation: under normal threshold
19. 2PC pending transactions: 0

DR / backup:
1. Sync arch: normal
2. Purge arch: normal
3. Recovery status: normal

20. DATABASE HEALTH CHECK SCRIPT: showing locks and archive generation details

In Detail DATABASE Health check:
OPERATING SYSTEM:

1)Physical memory / load:

a) free: the free command displays the amount of total, free and used physical memory (RAM) in the system, as well as information on shared memory, buffers, cached memory and swap space used by the Linux kernel.
Usage:
$ free -m

b) vmstat: vmstat reports virtual memory statistics, with information about processes, swap, free, buffer and cache memory, paging space, disk I/O activity, traps, interrupts, context switches and CPU activity.
Usage:
$ vmstat 5

c) top: the top command displays a dynamic real-time view of the running tasks managed by the kernel in a Linux system. Its memory usage stats include live total, used and free physical memory and swap memory, with their buffer and cached memory sizes respectively.
Usage:
$ top

d) ps: the ps command reports a snapshot of the currently active processes. ps shows the percentage of memory used by each process or task running in the system, so top memory-hogging processes can be identified.
Usage:
$ ps aux

2) OS space threshold ( archive, ora_dump etc.. ):
Check that OS space is available in all filesystems, especially the locations holding archive logs and Oracle database files. We can use the below OS commands:
$ df -h
$ du -csh *
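This space check can be scripted so that only filesystems over a given usage threshold are reported, which is handy for cron-driven alerting. A minimal sketch (the default 90% threshold is an arbitrary choice):

```shell
#!/bin/sh
# Sketch: report filesystems whose usage exceeds a threshold percentage.
# df -P gives stable POSIX column layout; column 5 is the Use% field,
# column 6 is the mount point.
THRESHOLD=${1:-90}

df -P | awk -v t="$THRESHOLD" 'NR > 1 {
    use = $5
    sub(/%/, "", use)          # strip the % sign for a numeric comparison
    if (use + 0 > t)
        printf "WARNING: %s is %s%% full (mounted on %s)\n", $1, use, $6
}'
```

Run it as, for example, sh check_fs.sh 80 to warn at 80%; silence means every filesystem is under the threshold.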
3) Top 10 processes consuming memory:
We can display the top memory-consuming processes as follows (sorting ps output numerically on the %MEM column; note that sort -m merges files and does not sort by memory):

ps aux | head -1; ps aux | sort -nrk 4 | head -10

We can also use the top command and press M, which orders the process list by memory usage.

4) Free volumes available:

We have to make sure sufficient disk space is available on the mount points of each OS server where the database is up and running.

$ df -h

5)Filesystem space:

Check on the OS side whether sufficient space is available at all mount points; usage should stay under the normal threshold.

DATABASE :

6)Check extents / proactive space addition:
Check each of the data, index and temporary tablespaces for extent and block allocation details.

SET LINES 1000
SELECT SEGMENT_NAME,TABLESPACE_NAME,EXTENTS,BLOCKS
FROM DBA_SEGMENTS;

SELECT SEGMENT_NAME,TABLESPACE_NAME,EXTENTS,BLOCKS
FROM DBA_SEGMENTS WHERE TABLESPACE_NAME='STAR01D';

7) Check alert log for ORA- and warning messages:

Checking the alert log file regularly is a vital task. In the alert log files we have to look for the following things:

1) Look for any Oracle-related errors.
Open the alert log file with the less or more command and search for ORA-.
This will give you the error details and time of occurrence.

2) Look for database-level or tablespace-level changes.
Monitor the alert log file for each day's activities happening in the database, such as database bounces, increases in the size of tablespaces, and changes to database parameters. In an 11g database we can also look for TNS errors in the alert log file.

8) Major wait events (latch/enqueue/Lib cache pin):

We can check the wait events details with the help of below queries:

SELECT s.saddr, s.SID, s.serial#, s.audsid, s.paddr, s.user#, s.username,
s.command, s.ownerid, s.taddr, s.lockwait, s.status, s.server,
s.schema#, s.schemaname, s.osuser, s.process, s.machine, s.terminal,
UPPER (s.program) program, s.TYPE, s.sql_address, s.sql_hash_value,
s.sql_id, s.sql_child_number, s.sql_exec_start, s.sql_exec_id,
s.prev_sql_addr, s.prev_hash_value, s.prev_sql_id,
s.prev_child_number, s.prev_exec_start, s.prev_exec_id,
s.plsql_entry_object_id, s.plsql_entry_subprogram_id,
s.plsql_object_id, s.plsql_subprogram_id, s.module, s.module_hash,
s.action, s.action_hash, s.client_info, s.fixed_table_sequence,
s.row_wait_obj#, s.row_wait_file#, s.row_wait_block#,
s.row_wait_row#, s.logon_time, s.last_call_et, s.pdml_enabled,
s.failover_type, s.failover_method, s.failed_over,
s.resource_consumer_group, s.pdml_status, s.pddl_status, s.pq_status,
s.current_queue_duration, s.client_identifier,
s.blocking_session_status, s.blocking_instance, s.blocking_session,
s.seq#, s.event#, s.event, s.p1text, s.p1, s.p1raw, s.p2text, s.p2,
s.p2raw, s.p3text, s.p3, s.p3raw, s.wait_class_id, s.wait_class#,
s.wait_class, s.wait_time, s.seconds_in_wait, s.state,
s.wait_time_micro, s.time_remaining_micro,
s.time_since_last_wait_micro, s.service_name, s.sql_trace,
s.sql_trace_waits, s.sql_trace_binds, s.sql_trace_plan_stats,
s.session_edition_id, s.creator_addr, s.creator_serial#
FROM v$session s
WHERE ( (s.username IS NOT NULL)
AND (NVL (s.osuser, 'x') <> 'SYSTEM')
AND (s.TYPE <> 'BACKGROUND') AND STATUS='ACTIVE'
)
ORDER BY "PROGRAM";

The following query provides clues about whether Oracle has been waiting for library cache activities:

Select sid, event, p1raw, seconds_in_wait, wait_time
From v$session_wait
Where event = 'library cache pin'
And state = 'WAITING';

The below query gives details of user sessions' wait time and state:

SELECT NVL (s.username, '(oracle)') AS username, s.SID, s.serial#, sw.event,
sw.wait_time, sw.seconds_in_wait, sw.state
FROM v$session_wait sw, v$session s
WHERE s.SID = sw.SID
ORDER BY sw.seconds_in_wait DESC;

9) Max Sessions:
There should not be more than 6 inactive sessions running for more than 8 hours in a Database in order to minimize the consumption of CPU and I/O resources.

a)Users and Sessions CPU consumption can be obtained by below query:

Set lines 1000
select ss.username, se.SID,VALUE/100 cpu_usage_seconds
from v$session ss, v$sesstat se, v$statname sn
where se.STATISTIC# = sn.STATISTIC#
and NAME like '%CPU used by this session%'
and se.SID = ss.SID and ss.status='ACTIVE'
and ss.username is not null
order by VALUE desc;


b) Users and Sessions CPU and I/O consumption can be obtained by below query:

-- shows Day wise,User wise,Process id of server wise- CPU and I/O consumption
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program ) program,
ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J')) days,
round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;

10) Long running Jobs:

We can find out long running jobs with the help of the below query:

col username for a20
col message for a50
col remaining for 9999
select username,to_char(start_time, 'hh24:mi:ss dd/mm/yy') started,
time_remaining remaining, message
from v$session_longops
where time_remaining > 0
order by time_remaining desc;

11) Invalid objects:

We can check the invalid objects with the help of the below query:

select owner||' '||object_name||' '||created||' '||status from dba_objects where status='INVALID';

12) Analyze Jobs ( once in a week ):

As a golden rule, we need to review the jobs that are running at least once a week.
The below steps can be considered for analyzing jobs.

Analyzing a Running Job
The status of a job or a task changes several times during its life cycle. A job can have the following as its status:
Scheduled: The job is created and will run at the specified time.
Running: The job is being executed and is in progress.
Initialization Error: The job or step could not be run successfully. If a step in a job fails initialization, the job status is Initialization Error.
Failed: The job was executed but failed.
Succeeded: The job was executed completely.
Stopped: The user canceled the job.
Stop Pending: The user has stopped the job. The already running steps are completing execution.
Suspended: This indicates that the execution of the job is deferred.
Inactive: This status indicates that the target has been deleted.
Reassigned: The owner of the job has changed.
Skipped: The job was not executed at the specified time and has been omitted.
The running jobs can be found with the help of the below query:

select sid, job,instance from dba_jobs_running;

We can find out the failed jobs and Broken jobs details with the help of the Below query:

select job||' '||schema_user||' '||Broken||' '||failures||' '||what||' '||last_date||' '||last_sec from dba_jobs;

13) Temp usage / Rollback segment/PGA usage:

We can get information of temporary tablespace usage details with the help of below query:
Set lines 1000
SELECT b.tablespace,
ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
a.sid||','||a.serial# SID_SERIAL,
a.username,
a.program
FROM sys.v_$session a,
sys.v_$sort_usage b,
sys.v_$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

We can get information of Undo tablespace usage details with the help of the below query:
set lines 1000
SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
NVL(s.username, 'None') orauser,
s.program,
r.name undoseg,
t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
FROM sys.v_$rollname r,
sys.v_$session s,
sys.v_$transaction t,
sys.v_$parameter x
WHERE s.taddr = t.addr
AND r.usn = t.xidusn(+)
AND x.name = 'db_block_size';

We can get the PGA usage details with the help of the below query:
select st.sid "SID", sn.name "TYPE",
ceil(st.value / 1024 / 1024/1024) "GB"
from v$sesstat st, v$statname sn
where st.statistic# = sn.statistic#
and sid in
(select sid from v$session where username like UPPER('&user'))
and upper(sn.name) like '%PGA%'
order by st.sid, st.value desc;
Enter value for user: STARTXNAPP
14)Validating the backup (hotbackup/coldbackup):

We have to verify that the hotbackup/coldbackup (or any physical or logical backup) of all production and non-production databases went fine. Make sure you have valid backups of all the databases, and check the backup locations to confirm that each backup completed on time with the required backup data, so it can be used for restore and recovery if needed.

15) Redo generation/Archive logs generation details:
We should make sure there are no overly frequent log switches in a database. If log switches are frequent, more archive logs are generated, which may decrease database performance. In a production database the acceptable rate varies with the server configuration, but is commonly in the range of 5 to 20 switches per hour.

We can check the log switch details with the help of the below query:

Redolog switch Datewise and hourwise:
-------------------------------
set lines 120;
set pages 999;
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/
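The pivot above is essentially a group-by on the date and hour components of FIRST_TIME. The same hourly bucketing can be sketched outside the database with awk on a list of timestamps (the sample timestamps below are invented), which is handy when you only have a text log to work from:

```shell
#!/bin/sh
# Sketch: count events per date+hour bucket, mimicking the hourly
# log-switch pivot. The timestamps are made-up sample data.
printf '%s\n' \
    "16-AUG-10 00:15" \
    "16-AUG-10 00:47" \
    "16-AUG-10 13:05" \
    "17-AUG-10 13:30" |
awk '{
    split($2, t, ":")              # t[1] is the hour, e.g. "00"
    count[$1 " " t[1]]++           # bucket by "date hour"
}
END {
    for (k in count) print k, count[k]
}' | sort
```

For the sample input this prints one line per date+hour bucket with its count (e.g. two switches in hour 00 on 16-AUG-10).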
Archive log generation is directly proportional to the number of log switches happening in a database. If there are frequent log switches, more archive logs are generated, which can affect database performance.


We can use the below queries for archive logs generation details:

a)Archive logs by dates:
set lines 1000
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/
b)Archive log generation details Day-wise :

select to_char(COMPLETION_TIME,'DD-MON-YYYY'),count(*)
from v$archived_log group by to_char(COMPLETION_TIME,'DD-MON-YYYY')
order by to_char(COMPLETION_TIME,'DD-MON-YYYY');

c) Archive log count of the day:

select count(*)
from v$archived_log
where trunc(completion_time)=trunc(sysdate);
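The same daily count can be cross-checked at the OS level by counting the files created today in the archive destination. A sketch, assuming GNU find and a hypothetical archive destination (simulated here under /tmp):

```shell
#!/bin/sh
# Sketch: count archive log files generated today in an archive destination.
# ARCH_DEST is a made-up path; the .arc files below simulate real logs.
set -e

ARCH_DEST=/tmp/arch_demo
mkdir -p "$ARCH_DEST"

# simulate two logs from today and one from three days ago
touch "$ARCH_DEST/arcr_1_101.arc" "$ARCH_DEST/arcr_1_102.arc"
touch -d "3 days ago" "$ARCH_DEST/arcr_1_100.arc"

# -daystart + -mtime 0 = modified since midnight today (GNU find)
find "$ARCH_DEST" -name '*.arc' -daystart -mtime 0 | wc -l
```

If the filesystem count and the v$archived_log count disagree, either logs are being written to another destination or something has been deleted outside your purge procedure, and either case is worth investigating.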

count of archived logs generated today on hourly basis:
-------------------------------------------------------
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
where to_char(first_time,'DD-MON-RR')='16-AUG-10'
group by to_char(first_time,'DD-MON-RR')
order by 1
/

16)I/O Generation:
We can find out CPU and I/O generation details for all the users in the Database with the help of the below query:
-- Show IO per session,CPU in seconds, sessionIOS.
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program ) program,
ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J')) days,
round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;
To know what the session is doing and what kind of sql it is using:

-- what kind of sql a session is using
set lines 9999
set pages 9999

select s.sid, q.sql_text from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid order by piece;

eg: sid=1853

17)Sync arch:
In a Data Guard environment we have to check that the primary is in sync with the standby database. This we can check as follows:
The V$MANAGED_STANDBY view on the standby database site shows you the activities performed by
both redo transport and Redo Apply processes in a Data Guard environment:
SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;
In some situations, automatic gap recovery may not take place and you will need to perform gap recovery manually. For example, you will need to perform gap recovery manually if you are using logical standby databases and the primary database is not available.
The following sections describe how to query the appropriate views to determine which log files are missing and perform manual recovery.
On a physical standby database
To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP view as shown in the following example:
SQL> SELECT * FROM V$ARCHIVE_GAP;

If it displays no rows, then the primary database is in sync with the standby database. If it displays any rows, then we have to apply the archive logs manually.

After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo log files on your primary database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):
Eg:
SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND SEQUENCE# BETWEEN 7 AND 10;
Copy these log files to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement on your physical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_8.arc';

After you register these log files on the physical standby database, you can restart Redo Apply. The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking Redo Apply from continuing. After resolving the gap and starting Redo Apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

On a logical standby database:
To determine if there is an archive gap, query the DBA_LOGSTDBY_LOG view on the logical standby database. For example, the following query indicates there is a gap in the sequence of archived redo log files because it displays two files for THREAD 1 on the logical standby database. (If there are no gaps, the query will show only one file for each thread.) The output shows that the highest registered file is sequence number 10, but there is a gap at the file shown as sequence number 6:
SQL> COLUMN FILE_NAME FORMAT a55
SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
2> WHERE NEXT_CHANGE# NOT IN
3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
4> ORDER BY THREAD#,SEQUENCE#;

THREAD# SEQUENCE# FILE_NAME
---------- ---------- -----------------------------------------------
1 6 /disk1/oracle/dbs/log-1292880008_6.arc
1 10 /disk1/oracle/dbs/log-1292880008_10.arc

Copy the missing log files, with sequence numbers 7, 8, and 9, to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_10.arc';

After you register these log files on the logical standby database, you can restart SQL Apply.

The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap that is currently blocking SQL Apply from continuing. After resolving the identified gap and starting SQL Apply, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.
Monitoring Log File Archival Information:
Step 1 Determine the current archived redo log file sequence numbers.
Enter the following query on the primary database to determine the current archived redo log file sequence numbers:
SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG
WHERE STATUS='CURRENT';
Step 2 Determine the most recent archived redo log file.
Enter the following query at the primary database to determine which archived redo log file contains the most recently transmitted redo data:
SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;
Step 3 Determine the most recent archived redo log file at each destination.
Enter the following query at the primary database to determine which archived redo log file was most recently transmitted to each of the archiving destinations:
SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
2> FROM V$ARCHIVE_DEST_STATUS
3> WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

DESTINATION STATUS ARCHIVED_THREAD# ARCHIVED_SEQ#
------------------ ------ ---------------- -------------
/private1/prmy/lad VALID 1 947
standby1 VALID 1 947
The most recently written archived redo log file should be the same for each archive destination listed. If it is not, a status other than VALID might identify an error encountered during the archival operation to that destination.
Step 4 Find out if archived redo log files have been received.
You can issue a query at the primary database to find out if an archived redo log file was not received at a particular site. Each destination has an ID number associated with it. You can query the DEST_ID column of the V$ARCHIVE_DEST fixed view on the primary database to identify each destination's ID number.
Assume the current local destination is 1, and one of the remote standby destination IDs is 2. To identify which log files are missing at the standby destination, issue the following query:
SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
2> (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1)
3> LOCAL WHERE
4> LOCAL.SEQUENCE# NOT IN
5> (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
6> THREAD# = LOCAL.THREAD#);

THREAD# SEQUENCE#
--------- ---------
1 12
1 13
1 14

18)Purge arch:
We have to make sure the archive log files are purged safely, or moved to a tape drive or another location, in order to make space for new archive log files in the archive log destination.
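A minimal sketch of such a purge, moving archive logs older than seven days to a backup area (both paths are hypothetical; in a real environment, first confirm the logs are backed up and already applied on any standby before moving them):

```shell
#!/bin/sh
# Sketch: move archive logs older than 7 days to a backup area.
# Paths are made up; verify backups before purging in real life.
set -e

ARCH_DEST=/tmp/purge_demo/arch
BACKUP_DIR=/tmp/purge_demo/arch_backup
mkdir -p "$ARCH_DEST" "$BACKUP_DIR"

# simulate one fresh log and one 10-day-old log
touch "$ARCH_DEST/arcr_1_200.arc"
touch -d "10 days ago" "$ARCH_DEST/arcr_1_150.arc"

# -mtime +7 = last modified more than 7 days ago (GNU find)
find "$ARCH_DEST" -name '*.arc' -mtime +7 -exec mv {} "$BACKUP_DIR"/ \;

ls "$BACKUP_DIR"
```

If you use RMAN for backups, prefer DELETE ARCHIVELOG through RMAN so the control file and catalog stay consistent; an OS-level move like this is for destinations RMAN does not manage.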

19)Recovery status:
To be able to recover, make sure you have the latest archive logs, so that you can restore and perform recovery if required.


20) MY DATABASE HEALTH CHECK SCRIPT:
/* SCRIPT FOR MONITORING AND CHECKING HEALTH OF DATABASE-USEFUL FOR PRODUCTION DATABASES */

-- SHOWS RUNNING JOBS
select 'RUNNING JOBS', sid, job,instance from dba_jobs_running;
set lines 1000
-- SHOWS ARCHIVE LOGS GENERAION DETAILS HOURLY AND DATE WISE BASIS
select 'ARCHIVE LOG REPORT',to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/
-- WHAT ALL THE SESSIONS ARE GETTING BLOCKED
select 'SESSIONS BLOCKED',process,sid, blocking_session from v$session where blocking_session is not null;
-- WHICH SESSION IS BLOCKING WHICH SESSION
set lines 9999
set pages 9999
select s1.username || '@' || s1.machine
|| ' ( SID=' || s1.sid || ' ) is blocking '
|| s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2;
-- SHOWS BLOCK CHANGES DETAILS AND PHYSICAL READS DETAIL
select a.sid,b.username,block_gets,consistent_gets,physical_reads,block_changes
from V$SESS_IO a,V$SESSION b
where a.sid=b.sid and block_changes > 10000 order by block_changes desc;
-- show IO per session / CPU in seconds. sessionIOS.sql
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid spid,
       to_char(s.logon_time,'DDMonYY HH24:MI') date_login,
       s.username,
       decode(nvl(p.background,0),1,bg.description,s.program) program,
       ss.value/100 cpu,
       physical_reads disk_io,
       (trunc(sysdate,'J')-trunc(logon_time,'J')) days,
       round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,
             (trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 -- statistic# 12 is 'CPU used by this session' (version dependent)
and si.sid=s.sid
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;
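The ss.statistic#=12 literal in the query above happens to match 'CPU used by this session' on this release, but statistic numbers can vary across Oracle versions. A version-independent sketch of the same idea resolves the statistic by name through V$STATNAME:

```sql
-- Resolve the CPU statistic by name rather than by hard-coded number.
select s.sid, ss.value/100 cpu_seconds
from v$session s, v$sesstat ss, v$statname sn
where ss.sid = s.sid
  and ss.statistic# = sn.statistic#
  and sn.name = 'CPU used by this session'
order by cpu_seconds desc;
```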
-- SCRIPT TO IDENTIFY LONG RUNNING STATEMENTS
rem LONGOPS.SQL
rem Long Running Statements
rem Helmut Pfau, Oracle Deutschland GmbH
set linesize 120
col opname format a20
col target format a15
col units format a10
col time_remaining format 99990 heading Remaining[s]
col bps format 9990.99 heading [Units/s]
col fertig format 90.99 heading "complete[%]"
select sid,
opname,
target,
sofar,
totalwork,
units,
(totalwork-sofar)/time_remaining bps,
time_remaining,
sofar/totalwork*100 fertig
from v$session_longops
where time_remaining > 0
/
-- ACTIVE SESSIONS IN DATABASE
select 'ACTIVE SESSION', sid, serial#, machine, osuser, username, status from v$session where username is not null and status='ACTIVE';
-- WHAT SQL A SESSION IS USING
set lines 9999
set pages 9999
select s.sid, q.sql_text from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid order by piece;

e.g. SID=1844

I would like to add one more script that reports the size of the database (total, used, and free space), tablespace usage details, and the hit ratios of various SGA components, all of which can be very helpful
for monitoring the performance of your databases.

Database_monitor.sql:

ttitle "1. :============== Tablespace Usage Information ==================:" skip 2
set linesize 140
col Total format 99999.99 heading "Total space(MB)"
col Used format 99999.99 heading "Used space(MB)"
col Free format 99999.99 heading "Free space(MB)"
break on report
compute sum of Total Used Free on report
select a.tablespace_name, round(a.bytes/1024/1024,2) Total,
round( nvl( b.bytes,0)/1024/1024,2) Used,
round(nvl(c.bytes, 0)/1024/1024,2) Free ,
round(nvl(b.bytes,0)*100/nvl(a.bytes,0),2) "% Used"
from sys.sm$ts_avail a, sys.sm$ts_used b, sys.sm$ts_free c
where a.tablespace_name=b.tablespace_name(+)
and b.tablespace_name=c.tablespace_name(+);

ttitle "2. :============== Hit Ratio Information ==================:" skip 2
set linesize 80
clear columns
clear breaks
set pagesize 60 heading off termout off echo off verify off
REM
col val1 new_val lib noprint
select 100*(1-(SUM(Reloads)/SUM(Pins))) val1
from V$LIBRARYCACHE;

ttitle off
col val2 new_val dict noprint
select 100*(1-(SUM(Getmisses)/SUM(Gets))) val2
from V$ROWCACHE;

ttitle off
col val3 new_val phys_reads noprint
select Value val3
from V$SYSSTAT
where Name = 'physical reads';

ttitle off
col val4 new_val log1_reads noprint
select Value val4
from V$SYSSTAT
where Name = 'db block gets';

ttitle off
col val5 new_val log2_reads noprint
select Value val5
from V$SYSSTAT
where Name = 'consistent gets';

ttitle off
col val6 new_val chr noprint
select 100*(1-(&phys_reads / (&log1_reads + &log2_reads))) val6
from DUAL;

ttitle off
col val7 new_val avg_users_cursor noprint
col val8 new_val avg_stmts_exe noprint
select SUM(Users_Opening)/COUNT(*) val7,
SUM(Executions)/COUNT(*) val8
from V$SQLAREA;
ttitle off
set termout on
set heading off
ttitle -
center 'SGA Cache Hit Ratios' skip 2

select 'Data Block Buffer Hit Ratio : '||&chr db_hit_ratio,
' Shared SQL Pool ',
' Dictionary Hit Ratio : '||&dict dict_hit,
' Shared SQL Buffers (Library Cache) ',
' Cache Hit Ratio : '||&lib lib_hit,
' Avg. Users/Stmt : '||
&avg_users_cursor||' ',
' Avg. Executes/Stmt : '||
&avg_stmts_exe||' '
from DUAL;

ttitle "3. :============== Sort Information ==================:" skip 2

select A.Value Disk_Sorts,
B.Value Memory_Sorts,
ROUND(100*A.Value/
DECODE((A.Value+B.Value),0,1,(A.Value+B.Value)),2)
Pct_Disk_Sorts
from V$SYSSTAT A, V$SYSSTAT B
where A.Name = 'sorts (disk)'
and B.Name = 'sorts (memory)';

ttitle "4. :============== Database Size Information ==================:" skip 2


select 'Avail (GB)' what, sum(bytes)/1024/1024/1024 gb from sm$ts_avail
union all
select 'Used (GB)', sum(bytes)/1024/1024/1024 from sm$ts_used
union all
select 'Free (GB)', sum(bytes)/1024/1024/1024 from sm$ts_free;


Hope this helps you in monitoring your Databases.

Best regards.

Using ORA_ROWSCN and SCN_TO_TIMESTAMP to find the last updated date or timestamp of a record.

0 comments
Using the ORA_ROWSCN pseudocolumn and the SCN_TO_TIMESTAMP function, you can find the approximate date or timestamp at which a table row was last updated.

ORA_ROWSCN

ORA_ROWSCN is a pseudocolumn, available on every table, that returns the system change number (SCN) of the most recent change to a given row. Note that by default the SCN is tracked at the block level, so rows in the same block share a value; it is tracked per row only for tables created with ROWDEPENDENCIES.

Here is an example that retrieves the ORA_ROWSCN value after a row is updated.

SQL> connect scott/tiger@mydb
Connected.

SQL> SELECT ename FROM emp WHERE empno=7839;

ENAME
----------
KING

SQL> UPDATE emp SET ename='SABDAR' WHERE empno=7839;

1 row updated.

SQL> COMMIT;

Commit complete.

SQL> SELECT ora_rowscn,ename FROM emp WHERE empno=7839;

ORA_ROWSCN ENAME
---------- ----------
353845494 SABDAR

Here we got the SCN value (353845494) recorded when the row with empno 7839 was updated. Now we use the SCN_TO_TIMESTAMP function to convert an SCN, such as ORA_ROWSCN 353845494, into the corresponding TIMESTAMP value.

SCN_TO_TIMESTAMP

SCN_TO_TIMESTAMP is a function, new in Oracle 10g, that converts the SCN value obtained from the ORA_ROWSCN column into a timestamp. SCN_TO_TIMESTAMP takes as an argument a number that evaluates to a system change number (SCN) and returns the approximate timestamp associated with that SCN. The returned value is of the TIMESTAMP datatype. This function is useful any time you want to know the timestamp associated with an SCN. Bear in mind that the SCN-to-timestamp mapping is retained for only a limited period, so passing a very old SCN raises an error.

Here we pass the scn value generated in the above query.

SQL> SELECT scn_to_timestamp(353845494) FROM emp WHERE empno=7839;

SCN_TO_TIMESTAMP(353845494)
-----------------------------------
02-SEP-08 03.20.20.000000000 PM

The SCN_TO_TIMESTAMP function can also be used in conjunction with the ORA_ROWSCN pseudocolumn to associate a timestamp with the most recent change to a row.

SQL> SELECT scn_to_timestamp(ORA_ROWSCN) FROM emp WHERE empno=7839;

SCN_TO_TIMESTAMP(ORA_ROWSCN)
------------------------------------
02-SEP-08 03.20.20.000000000 PM

How We Resolved the Account Locked (Timed) issue?

0 comments
An application user account in one of our Oracle 10g databases was repeatedly getting locked. Below are our findings and the solution we used to resolve the issue.

Details:
Oracle Database Version: 10g R2 (10.2.0.1)
Application User: APPUSR
Error: ORA-28000: the account is locked

Login as SYSDBA

SQL> conn /as sysdba

Check the APPUSR account status.

SQL> SELECT username, account_status, profile FROM dba_users WHERE username='APPUSR';

USERNAME             ACCOUNT_STATUS       PROFILE
-------------------- -------------------- ---------------
APPUSR               LOCKED(TIMED)        DEFAULT

Here we can see that the account status is LOCKED(TIMED) and that the user's profile is DEFAULT.

Check the resource limits of DEFAULT profile.

SQL> SELECT resource_name,resource_type,limit FROM dba_profiles WHERE profile='DEFAULT';

RESOURCE_NAME RESOURCE LIMIT
-------------------------------- -------- ----------
COMPOSITE_LIMIT KERNEL UNLIMITED
SESSIONS_PER_USER KERNEL UNLIMITED
CPU_PER_SESSION KERNEL UNLIMITED
CPU_PER_CALL KERNEL UNLIMITED
LOGICAL_READS_PER_SESSION KERNEL UNLIMITED
LOGICAL_READS_PER_CALL KERNEL UNLIMITED
IDLE_TIME KERNEL UNLIMITED
CONNECT_TIME KERNEL UNLIMITED
PRIVATE_SGA KERNEL UNLIMITED
FAILED_LOGIN_ATTEMPTS PASSWORD 10
PASSWORD_LIFE_TIME PASSWORD UNLIMITED
PASSWORD_REUSE_TIME PASSWORD UNLIMITED
PASSWORD_REUSE_MAX PASSWORD UNLIMITED
PASSWORD_VERIFY_FUNCTION PASSWORD NULL
PASSWORD_LOCK_TIME PASSWORD UNLIMITED
PASSWORD_GRACE_TIME PASSWORD UNLIMITED

All resource limits for the DEFAULT profile are set to UNLIMITED except the FAILED_LOGIN_ATTEMPTS attribute, which is set to 10. Because of this, the user account keeps getting LOCKED(TIMED). The Oracle documentation states that the default value of FAILED_LOGIN_ATTEMPTS in the DEFAULT profile changed in 10.2.0.1 from UNLIMITED to 10.

We can either change the FAILED_LOGIN_ATTEMPTS limit in the DEFAULT profile, or create a new profile for the user with FAILED_LOGIN_ATTEMPTS set to UNLIMITED. For security reasons we will not tamper with the DEFAULT profile (which is not recommended anyway), so let's create a new profile and assign it to the user.
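For completeness, the first option (changing the DEFAULT profile itself, which we decided against) would be a single statement affecting every user assigned to DEFAULT:

```sql
-- Not recommended: this loosens the lockout policy database-wide
-- for all users on the DEFAULT profile.
ALTER PROFILE default LIMIT failed_login_attempts UNLIMITED;
```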

Create a profile.

SQL> CREATE PROFILE APPUSR_DEFAULT LIMIT
2 COMPOSITE_LIMIT UNLIMITED
3 SESSIONS_PER_USER UNLIMITED
4 CPU_PER_SESSION UNLIMITED
5 CPU_PER_CALL UNLIMITED
6 LOGICAL_READS_PER_SESSION UNLIMITED
7 LOGICAL_READS_PER_CALL UNLIMITED
8 IDLE_TIME UNLIMITED
9 CONNECT_TIME UNLIMITED
10 PRIVATE_SGA UNLIMITED
11 FAILED_LOGIN_ATTEMPTS UNLIMITED
12 PASSWORD_LIFE_TIME UNLIMITED
13 PASSWORD_REUSE_TIME UNLIMITED
14 PASSWORD_REUSE_MAX UNLIMITED
15 PASSWORD_VERIFY_FUNCTION NULL
16 PASSWORD_LOCK_TIME UNLIMITED
17 PASSWORD_GRACE_TIME UNLIMITED;

Profile created.
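Since any limit not listed in CREATE PROFILE implicitly falls back to the DEFAULT profile's value, a shorter equivalent of the profile above would be:

```sql
-- Unspecified limits inherit from the DEFAULT profile,
-- so only the attribute we want to override needs to be listed.
CREATE PROFILE appusr_default LIMIT
  failed_login_attempts UNLIMITED;
```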

Assign the newly created profile to the user as default profile.

SQL> ALTER USER appusr PROFILE appusr_default;

User altered.

Unlock the user account:

SQL> ALTER USER appusr ACCOUNT UNLOCK;

User altered.

Now check again the status of APPUSR user.

SQL> SELECT username, account_status, profile FROM dba_users WHERE username='APPUSR';

USERNAME             ACCOUNT_STATUS       PROFILE
-------------------- -------------------- ---------------
APPUSR               OPEN                 APPUSR_DEFAULT

Download Oracle 11g New Features PDF.

0 comments
Below is the link for downloading Oracle 11g New Features pdf.

Oracle 11g New Features for Administrators !

Installing & Configuring Oracle ASM on Linux – Red Hat Enterprise Linux 4.0

0 comments
System Details:

Operating System: Linux – Red Hat Enterprise Linux 4.0
Kernel Version: 2.6.9-42.ELsmp
Architecture: 32 bit x86
Database: Oracle 10g
Version: Release 2 (10.2.0.1)

Following are the steps I have followed.

1) Download the ASMLib software packages for your Linux platform from the Oracle Technology Network.

2) You may see 4-6 packages for your Linux platform. The oracleasmlib package provides the actual ASM library, the oracleasm-support package provides the utilities used to get the ASM driver up and running, and the kernel-specific oracleasm package provides the driver itself. All three need to be installed.

oracleasm-support-2.0.3-1.i386.rpm
oracleasmlib-2.0.2-1.i386.rpm
oracleasm-2.6.9-42.ELsmp-2.0.3-1.i686.rpm

Note: Download as per your OS Kernel and Architecture
# uname -r
2.6.9-42.ELsmp
# uname -i
i386
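The driver package name embeds the exact kernel string, so a mismatch means rpm will not have a matching driver to install. Here is a small sketch of composing the expected file name (driver version 2.0.3-1 assumed from the list above):

```shell
# Compose the kernel-specific oracleasm driver package name.
# In practice, take the values from the running system:
#   kernel=$(uname -r); arch=$(uname -p)
kernel="2.6.9-42.ELsmp"
arch="i686"
pkg="oracleasm-${kernel}-2.0.3-1.${arch}.rpm"
echo "$pkg"
```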

3) Upload the above 3 packages to the Linux server using FTP, log on to the server as root, and install them as shown below.

$ su - root
Password:

Go to the rpms directory where rpms are downloaded.

# rpm -Uvh oracleasm-support-2.0.3-1.i386.rpm \
oracleasmlib-2.0.2-1.i386.rpm \
oracleasm-2.6.9-42.ELsmp-2.0.3-1.i686.rpm

Preparing...########################################## [100%]
1:oracleasm-support################################### [ 33%]
2:oracleasm-2.6.9-42.ELsm############################# [ 67%]
3:oracleasmlib######################################## [100%]

ASMLib software packages are now installed successfully.

4) Run the following commands to configure ASMLib; the command output is self-explanatory.

To Configure

# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]

To Disable and Enable the ASMLib Driver

# /etc/init.d/oracleasm disable
Writing Oracle ASM library driver configuration: [ OK ]
Unmounting ASMlib driver filesystem: [ OK ]
Unloading module "oracleasm": [ OK ]

# /etc/init.d/oracleasm enable
Writing Oracle ASM library driver configuration: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]

5) Making Disks Available to ASMLib

To mark a disk for ASM

# /etc/init.d/oracleasm createdisk VOL1 /dev/sda2
Marking disk "/dev/sda2" as an ASM disk: [ OK ]

To delete (unmark) an ASM disk

# /etc/init.d/oracleasm deletedisk VOL1
Deleting Oracle ASM disk "VOL1" [ OK ]

To query the disks

# /etc/init.d/oracleasm querydisk /dev/sda2
Disk "/dev/sda2" is marked an ASM disk with the label "VOL1"

# /etc/init.d/oracleasm querydisk /dev/sda1
Disk "/dev/sda1" is not marked an ASM disk

To list and query the existing disks

# /etc/init.d/oracleasm listdisks
VOL1

# /etc/init.d/oracleasm querydisk VOL1
Disk "VOL1" is a valid ASM disk on device [8, 2]

We are now done with installation and configuration of ASMLib on Linux platform.

Conclusion: This article is specific to one of our Linux boxes; please refer to the Oracle documentation for the equivalent steps on other operating systems such as Windows and UNIX.

Creating a Recovery Catalog – RMAN in Oracle 10g

0 comments
As we all know, RMAN maintains metadata about the target database and its backup and recovery operations in the RMAN repository. The RMAN repository data is always stored in the control file of the target database. The CONTROL_FILE_RECORD_KEEP_TIME initialization parameter controls how long backup records are kept in the control file before those records are re-used to hold information about more recent backups. By default this parameter is set to 7 days.
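To check the current setting, or to lengthen the retention window (statement syntax only; pick a value that suits your backup cycle):

```sql
-- Show the current retention, in days, of reusable control file records.
SHOW PARAMETER control_file_record_keep_time

-- Keep backup records for 30 days before they become reusable.
ALTER SYSTEM SET control_file_record_keep_time = 30 SCOPE = BOTH;
```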

Another copy of the RMAN repository data can also be saved in the recovery catalog.

Using a recovery catalog preserves RMAN repository information if the control file is lost, making it much easier to restore and recover following the loss of the control file. (A backup control file may not contain complete information about recent available backups.) The recovery catalog can also store a much more extensive history of your backups than the control file, due to limits on the number of control file records.

In addition to RMAN repository records, the recovery catalog can also hold RMAN stored scripts: sequences of RMAN commands for common backup tasks. Centralized storage of scripts in the recovery catalog can be more convenient than working with command files.
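As a quick illustration (the script name here is made up), once the catalog exists and you are connected to it, a stored script is created and executed like this:

```
RMAN> CREATE SCRIPT nightly_backup {
2>   BACKUP DATABASE PLUS ARCHIVELOG;
2> }

RMAN> RUN { EXECUTE SCRIPT nightly_backup; }
```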

Create a new database for RMAN – Recovery catalog database

Note: You can create a small database with minimal tablespace sizes, and name it CATDB by convention to avoid confusion between your production and RMAN catalog databases.

Create a new tablespace in the new database (CATDB)

$ sqlplus /nolog

CONNECT SYS/passwd@catdb AS SYSDBA;

CREATE TABLESPACE rman
DATAFILE '/u02/app/oradata/rman/rman01.dbf' size 100m;

Create the Recovery Catalog Owner in the new database (CATDB)

CREATE USER rman IDENTIFIED BY rman
DEFAULT TABLESPACE rman
QUOTA UNLIMITED ON rman;

Grant the necessary privileges to the schema owner

SQL> GRANT connect, resource, recovery_catalog_owner TO rman;

Here the role RECOVERY_CATALOG_OWNER provides the user with all the privileges required to maintain and query the recovery catalog.

Creating the Recovery Catalog

Connect to the database which will contain the catalog as the catalog owner. For example:

$ rman catalog rman/passwd@catdb

Recovery Manager: Release 10.2.0.3.0 - Production on Sun Apr 1 14:22:13 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to recovery catalog database
RMAN>

Run the CREATE CATALOG command to create the catalog

RMAN> CREATE CATALOG;

recovery catalog created

Registering a Database in the Recovery Catalog

Connect to the target database and recovery catalog database.

$ ORACLE_SID=prod; export ORACLE_SID

$ rman target / catalog rman/passwd@catdb

Recovery Manager: Release 10.2.0.3.0 - Production on Sun Apr 1 14:25:30 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: PROD (DBID=3677528376)
connected to recovery catalog database

RMAN> REGISTER DATABASE;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

Make sure that the registration was successful by running REPORT SCHEMA:

RMAN> REPORT SCHEMA;

Report of database schema

List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 500 SYSTEM YES /u02/app/oradata/prod/system01.dbf
2 200 UNDOTBS1 YES /u02/app/oradata/prod/undotbs01.dbf
3 325 SYSAUX NO /u02/app/oradata/prod/sysaux01.dbf
4 100 EXAMPLE NO /u02/app/oradata/prod/example01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 200 TEMP 200 /u02/app/oradata/prod/temp01.dbf
Copyright © ORACLE ONLINE DBA
Developed By Pavan Yennampelli