The following are some best practices you can adopt to achieve stable and reliable Oracle backups.
1. Turn on block checking.
Block checking is enabled as follows:
SQL> show parameter db_block_checking
NAME TYPE VALUE
------------------------------------ ---- ---------
db_block_checking string FALSE
SQL> alter system set db_block_checking = true scope=both;
When set to TRUE (equivalent to FULL), this allows Oracle to detect corrupt blocks early. Block checking has a slight performance overhead, but it can catch corruption caused by problems in the underlying disk, storage, or I/O subsystem. Valid values are OFF (FALSE), LOW, MEDIUM, and FULL (TRUE).
2. Turn on Block Change Tracking (incremental backups, 10g & higher)
The block change tracking file maintains information that allows the RMAN incremental backup process to skip data that has not been modified since the last backup. Without block change tracking, every block must be read to determine whether it has changed since the last backup.
SQL> alter database enable block change tracking using file '/u01/oradata/chg_trcing/chg_tracking.f';
Once enabled, the tracking file can be queried from SQL*Plus:
SQL> SELECT filename, status, bytes FROM v$block_change_tracking;
For further information on incremental backups and block change tracking, see:
http://docs.oracle.com/cd/B19306_01/backup.102/b14192/bkup004.htm
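To gauge how effective change tracking is, you can check the proportion of blocks RMAN actually read during past incremental backups (a sketch using the standard v$backup_datafile view; it assumes at least one incremental backup has run with change tracking enabled):
SQL> SELECT file#, ROUND(AVG(blocks_read/datafile_blocks)*100, 2) pct_read FROM v$backup_datafile WHERE used_change_tracking = 'YES' AND incremental_level > 0 GROUP BY file# ORDER BY file#;
A low percentage read indicates change tracking is saving substantial I/O on incremental backups.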
3. Archive log destination.
It is very important to have more than one archive log destination. If an archived log is corrupted or lost, having copies in multiple locations means another copy is still available and can be used for recovery.
An additional archive log destination can be added as follows:
SQL> alter system set log_archive_dest_2='location=/new/location/archive2' scope=both;
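The configured destinations and their current state can then be verified (a sketch using the standard v$archive_dest view):
SQL> SELECT dest_id, status, destination FROM v$archive_dest WHERE status <> 'INACTIVE';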
4. Duplex redo log groups and members
If an online log is deleted or becomes corrupt, you will have another member that can be used to recover if required.
SQL> alter database add logfile member '/new/location/redo21.log' to group 1;
The below SQL can be used to find the number of members in each group:
SQL> SELECT a.group#, count(a.member) FROM v$logfile a, v$log b WHERE a.group# = b.group# GROUP BY a.group# ORDER BY 1;
GROUP# COUNT(A.MEMBER)
---------- ---------------
1 2
2 2
3 2
4 2
5 2
6 2
7 2
5. RMAN CHECK LOGICAL option.
When taking backups with RMAN, the CHECK LOGICAL option checks for logical corruption within each block, in addition to the normal physical checksum verification. This is the best way to ensure that you get a usable backup.
RMAN> backup check logical database plus archivelog delete input;
For further information, see:
http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmvalid.htm
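Any corrupt blocks that a CHECK LOGICAL backup or validation encounters are recorded in the v$database_block_corruption view, which can be queried afterwards:
SQL> SELECT file#, block#, blocks, corruption_type FROM v$database_block_corruption;
No rows returned means no corruption was detected during the most recent backup or validation.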
6. Test the backups.
Use the VALIDATE command to test your backups. This does everything a real restore does except actually write the datafiles, making it the best way to determine whether a backup is good and usable before you are in a critical situation where issues would be costly.
RMAN> restore validate database;
See below for more information
http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmvalid.htm
7. When using RMAN have each datafile in a single backup piece
When doing a partial restore, RMAN must read through an entire backup piece to extract the requested datafile or archivelog. The smaller the backup piece, the quicker the restore can complete. This is especially relevant with tape backups of large databases, or when only one or a few individual files need to be restored.
However, very small values for filesperset will also cause larger numbers of backup pieces to be created, which can reduce backup performance and increase processing time for maintenance operations. So those factors must be weighed against the desired restore performance.
RMAN> backup database filesperset 1 plus archivelog delete input;
8. Maintain your RMAN catalog/controlfile
Choose your retention policy carefully. Make sure it complements your tape subsystem retention policy and your backup and recovery requirements. If you are not using a recovery catalog, ensure that the CONTROL_FILE_RECORD_KEEP_TIME parameter matches your retention policy.
SQL> alter system set control_file_record_keep_time=31 scope=both;
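The RMAN retention policy can be configured to match (the 31-day window here simply mirrors the CONTROL_FILE_RECORD_KEEP_TIME value above; adjust both to your own requirements, keeping CONTROL_FILE_RECORD_KEEP_TIME at least as large as the recovery window):
RMAN> configure retention policy to recovery window of 31 days;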
9. Control file backup
Ensure the autobackup parameter in RMAN is always set to ON. This ensures that you always have an up-to-date controlfile backup available, taken at the end of the current backup rather than during the backup itself.
RMAN> configure controlfile autobackup on;
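You can also control where the controlfile autobackup is written; the path below is illustrative, and the format string must contain the %F substitution variable:
RMAN> configure controlfile autobackup format for device type disk to '/u01/backup/%F';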
Also, keep your backup logs. They record the parameters used for tape access and the locations of controlfile backups, which can be invaluable if a complete loss occurs.
10. Test your recovery
Running a test recovery tells you how a real recovery will go without actually performing it, and can avoid having to restore the source datafiles again.
SQL> recover database test;
11. In RMAN backups do not specify 'delete all input' when backing up archivelogs
REASON: 'Delete all input' will back up from one destination and then delete both copies of the archivelog, whereas 'delete input' will back up from one location and then delete only what has been backed up. The next backup will back up the logs from location 2 as well as the new logs from location 1, then delete all that have been backed up. This means you will have the archivelogs since the last backup available on disk in location 2 (as well as backed up once), and two backed-up copies of those from before the previous backup.
See MOS Note 443814.1, 'Managing multiple archive log destinations with RMAN', for details.
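With multiple archive destinations configured, the recommended archivelog backup therefore looks like this (only the copies actually backed up are deleted from disk):
RMAN> backup archivelog all delete input;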