My name is Mark Tiger, creator of this blog. I am an Oracle Certified Professional (OCP DBA 11g).
Gathering information for some DBA tasks can be time-consuming, even though the commands that you eventually need to issue can be over quite quickly. I have gone through this process over and over again, and have decided to help other Oracle DBAs in the community.
In this blog, I will give you the details of how to carry out those tasks that typically need a lot of research before you can do them. I will try to present the information in an easy-to-understand way. My hope is that this will save you lots of research time, and help to make you more productive as an Oracle DBA. The illustrations are primarily meant for Linux, since this is the platform that Oracle favors. However, they are easily adaptable for versions of UNIX/AIX, Windows, etc.
Oracle recommends that you create the backup using RMAN. However, any backup copy of the primary database can be used, provided that you have the archived redo logs to completely recover the database.
If you are using RMAN, then you can create a standby control file with the RMAN utility. Otherwise, shut the database down and start it in mount mode:
SQL> startup mount;
Then you can create the control file for the standby database, and open the database again.
SQL> alter database create standby controlfile as '/u03/jhb.ctl';
SQL> alter database open;
You need a standby control file, because you can’t use the same control file for both the primary and standby databases.
Firstly, you need to create a parameter file from the server parameter file. Secondly, you need to modify the parameter file for the standby database.
SQL> create pfile from spfile;
Or to keep it specific for the standby database:
SQL> create pfile='/u03/initjhb.ora' from spfile;
This created pfile will then be modified and used as the parameter file for the standby database.
Most of the parameters in the parameter file will be appropriate for the physical standby database, but some parameters will need to be modified.
Make sure that the COMPATIBLE initialization parameter is set to the same value on both the primary and standby databases. If the values differ then the redo transport services may not be able to transmit redo data from the primary database to the standby database.
To examine and verify that all the parameters have the correct settings:
SQL> show parameters
Recommended Settings for the Standby Database
DB_UNIQUE_NAME: Specify a unique name for this database. This name stays with the database and does not change, even if the primary and standby databases reverse roles.
CONTROL_FILES: Specify the path names of the control files on the standby database. It is recommended that a second copy of the control file be available, so that an instance can be easily restarted after copying the good control file to the location of the bad control file.
DB_FILE_NAME_CONVERT: Specify the path name and filename location of the primary database datafiles, followed by the standby location. This parameter converts the path names of the primary database datafiles to the standby datafile path names. It is required if the standby database is on the same system as the primary database, or if the directory structure where the datafiles are located on the standby site is different from the primary site.
LOG_FILE_NAME_CONVERT: Specify the location of the primary database online redo log files, followed by the standby location. This parameter converts the path names of the primary database log files to the path names on the standby database. It is required if the standby database is on the same system as the primary database, or if the directory structure where the log files are located on the standby system is different from the primary system.
LOG_ARCHIVE_DEST_n: Specify where the redo data is to be archived. In this example:
· LOG_ARCHIVE_DEST_1 archives redo data received from the primary database to archived redo log files in /arch1/jhb/
· LOG_ARCHIVE_DEST_2 is currently ignored because this destination is valid only for the primary role. If a switchover occurs and this instance becomes the primary database, then it will transmit redo data to the remote 'pretoria' destination.
If a fast recovery area was configured with the DB_RECOVERY_FILE_DEST initialization parameter, and you have not explicitly configured a local archive destination with the LOCATION attribute, then Data Guard automatically uses the LOG_ARCHIVE_DEST_1 initialization parameter, as the default destination for local archiving.
FAL_SERVER: Specify the Oracle Net service name of the FAL server (typically this is the database running in the primary role). When the 'jhb' database is running in the standby role, it uses the 'pretoria' database as the FAL server from which to fetch (request) missing archived redo log files, if 'pretoria' is unable to automatically send the missing log files.
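Putting these recommendations together, the standby parameter file for the 'jhb'/'pretoria' example might contain entries like the following. This is only a sketch: the directory paths, the DB_NAME value, and the transport attributes are assumptions to be adapted to your own environment.

```sql
-- Illustrative initjhb.ora fragment for the standby database 'jhb'
-- (all paths and the DB_NAME value are examples only)
DB_NAME=pretoria                -- DB_NAME must match the primary
DB_UNIQUE_NAME=jhb              -- unique per database, survives role changes
CONTROL_FILES='/u02/oradata/jhb/jhb01.ctl','/u03/oradata/jhb/jhb02.ctl'
DB_FILE_NAME_CONVERT='/u01/oradata/pretoria/','/u01/oradata/jhb/'
LOG_FILE_NAME_CONVERT='/u01/oradata/pretoria/','/u01/oradata/jhb/'
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/jhb/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=jhb'
LOG_ARCHIVE_DEST_2='SERVICE=pretoria ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=pretoria'
FAL_SERVER=pretoria
```

Note that DB_NAME stays the same on both sides of a Data Guard configuration; only DB_UNIQUE_NAME differs.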
Always review all the initialization parameters to check whether there are any additional parameters that need to be modified. Check the dump destination parameters, etc.
Use operating system commands to copy all the needed files from the primary database system to the standby database system:
· Backup of the datafiles
· Standby control file
· Initialization parameter file
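As a sketch, the copy could be done with scp. The hostname 'jhbhost' and all the paths below are hypothetical; substitute your own backup location and standby directories.

```shell
# Hypothetical example: copy the needed files to the standby host 'jhbhost'
scp /u04/backup/*     oracle@jhbhost:/u04/backup/       # backup of the datafiles
scp /u03/jhb.ctl      oracle@jhbhost:/u02/oradata/jhb/  # standby control file
scp /u03/initjhb.ora  oracle@jhbhost:/u03/              # initialization parameter file
```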
You need to follow a number of steps to set up the Standby Database environment.
1. If you are on a Windows server, then you need to create a Windows-based service, using the ORADIM utility.
> oradim -NEW -SID jhb -STARTMODE manual
2. If the primary database has a remote login password file, copy it to the appropriate directory on the physical standby database system. Note that the password file must be re-copied each time the SYSDBA or SYSOPER privilege is granted or revoked and whenever the login password of a user with these privileges is changed. This step is optional if OS authentication is used for administrative users and if SSL is used for redo transport authentication.
3. Configure the listeners for the primary and standby databases. On both the primary and standby sites, use Oracle Net Manager to configure a listener for the databases. Then restart the listeners to pick up the new definitions:
% lsnrctl stop <LISTENER>
% lsnrctl start <LISTENER>
4. Create an Oracle Net service name, on both the primary and standby sites, that will be used by redo transport services. This Oracle Net service name must resolve to a connect descriptor that uses the same protocol, host address, port, and service that you specified when you configured the listeners for the primary and standby databases. A dedicated server must be used with this connect descriptor.
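For example, a tnsnames.ora entry on the primary site for the 'jhb' standby might look like this. The host name and port are assumptions; only the dedicated-server setting and the service name are dictated by the requirements above.

```
jhb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = jhbhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = jhb)
    )
  )
```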
5. The standby database needs to run with an SPFILE:
SQL> create spfile from pfile='/u03/initjhb.ora';
6. Copy the primary database encryption wallet to the standby database system. The standby database must then also be configured to use this wallet. Whenever the master encryption key is changed on the primary database, the database encryption wallet must be copied from the primary database system to the standby database system again.
Encrypted data in a standby database cannot be accessed unless the standby database is configured to point to a database encryption wallet or hardware security module, which must contain the current master encryption key from the primary database.
There are a few steps to carry out in order to start the physical standby database and begin applying redo data to it.
· We need to start the physical standby database in mounted mode. Remember that we used a standby control file to create the standby database, so the standby database should already be in standby mode.
SQL> startup mount
· The second step is to prepare the standby database to receive and archive data from the primary database.
o Creating and Managing a Standby Redo Log: The synchronous and asynchronous redo transport modes require that a redo transport destination have a standby redo log. Standby redo logs store redo data from another database, such as the primary database. Standby redo logs are managed in the same way as online redo logs. Redo received from the primary database is written to the current standby redo log group by an RFS process. When a log switch occurs on the source database, a corresponding switch occurs on the standby database, and incoming redo is then written to the next standby redo log group. The previously used standby redo log group is then archived by an ARCn process.
The sequential writing to redo log groups, and the archiving of redo log groups, at the source or primary database is mirrored at all the standby databases.
There must always be one more standby redo log group than the number of online redo log groups at the source or primary database. You can determine the size of each log file, and the number of log groups in the redo log:
SQL> select group#, bytes from v$log;
You can determine the size and number of standby redo log groups:
SQL> select group#, bytes from v$standby_log;
Oracle recommends that a standby redo log be created on the primary database in a Data Guard configuration. This way the database is immediately ready to receive redo data following a switchover to the standby role.
SQL> alter database add standby logfile ('/orac/logfiles/standby_log1.rdo') size 500M;
If the source database is an Oracle RAC or Oracle RAC One Node database, then query the v$log view to determine how many threads exist. You can then specify the thread numbers when adding the standby redo logs.
SQL> alter database add standby logfile thread 1 size 500m;
SQL> alter database add standby logfile thread 2 size 500m;
Remember that whenever you add an online redo log group to the primary database, you must also add a corresponding standby redo log group to the primary database, plus an online redo log group and a standby redo log group to each standby database. If you don't do this, then the standby database may become unsynchronized after a log switch. If the primary database is operating in maximum protection mode, this could even cause the primary database to shut down.
o Configuring Standby Redo Log Archival:
Firstly if archiving is not enabled, you need to put the database in ARCHIVELOG mode and enable archiving. The database has to be in ARCHIVELOG mode for standby redo log archival to be performed.
SQL> shutdown immediate
SQL> startup mount;
SQL> alter database archivelog;
To verify that you are in archivelog mode:
SQL> archive log list
Secondly, you can set up standby redo log archival to a fast recovery area.
Set the LOCATION attribute of a LOG_ARCHIVE_DEST_n parameter to USE_DB_RECOVERY_FILE_DEST.
Set the VALID_FOR attribute of the same LOG_ARCHIVE_DEST_n parameter to a value that allows standby redo log archival.
LOG_ARCHIVE_DEST_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)'
Oracle recommends the use of the fast recovery area, because it simplifies the management of archived redo log files.
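If a fast recovery area has not been configured yet, it can be created by setting the DB_RECOVERY_FILE_DEST parameters. The size and location below are illustrative values only; size the area to hold your archived logs (and flashback logs, if used).

```sql
-- Example only: choose a size and location appropriate for your system
SQL> alter system set db_recovery_file_dest_size = 20G;
SQL> alter system set db_recovery_file_dest = '/u05/fast_recovery_area';
```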
Thirdly, as an alternative, you can set up standby redo log archival to a local file system location.
Set the LOCATION attribute of a LOG_ARCHIVE_DEST_n parameter to a valid pathname.
Set the VALID_FOR attribute for the same LOG_ARCHIVE_DEST_n parameter to a value that allows standby redo log archival.
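For example, a destination archiving standby redo logs to a local directory could look like this. The destination number and the path are assumptions; LOG_ARCHIVE_DEST_3 is used here so as not to clash with the remote destination in LOG_ARCHIVE_DEST_2.

```sql
-- Illustrative: archive standby redo logs to a local directory
LOG_ARCHIVE_DEST_3 = 'LOCATION=/arch2/jhb/ VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_STATE_3 = ENABLE
```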
o Cases where Redo is written directly to an Archived Redo Log File:
This can occur if a standby redo log group is not available, or if the redo was sent to the standby database to resolve a redo gap.
When this situation occurs, the redo is written to the location specified by the LOCATION attribute of the LOG_ARCHIVE_DEST_n parameter that is valid for archiving redo received from the primary or source database. The value of the LOG_ARCHIVE_DEST_n parameter is determined when the standby database is mounted, and also every time a modification is made to the LOG_ARCHIVE_DEST_n parameter.
· Step 3 is to create online redo logs on the standby database.
This step is optional, because a standby database does not need to have online redo logs configured in order to operate. However, when it transitions to the primary role, it will need the online redo logs in order to operate. So it is best practice to create a set of online redo logs that map to the primary database's redo logs, so that the standby database is ready to transition.
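One way to do this is to clear the log groups named in the standby control file, which recreates the online redo log files on the standby. The group numbers below are examples; issue one command per group shown in v$log.

```sql
-- Recreate the online redo logs on the standby, one group at a time
SQL> alter database clear logfile group 1;
SQL> alter database clear logfile group 2;
SQL> alter database clear logfile group 3;
```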
· Step 4 is to start the redo apply on the standby database.
Once everything is in place you can start applying the redo data. It is good practice to include a clause to disconnect from the session, so that the redo apply can run in a background session. You can also include the USING CURRENT LOGFILE clause, so that the redo can start as soon as it is received.
SQL> alter database recover managed standby database using current logfile disconnect from session;
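To confirm that redo apply is actually running, you can query v$managed_standby on the standby. An MRP0 process with a status such as APPLYING_LOG indicates that managed recovery is active:

```sql
SQL> select process, status, sequence# from v$managed_standby;
```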
You have set up the physical standby database and configured the redo transport services. Now you can verify that the database modifications are being successfully transmitted from the primary database to the standby database.
To check that the redo data is being received on the standby database, you identify the existing archived redo log files on the standby database. Then you force a log switch, which will also cause an online redo log archive to occur. Then you can check the standby database again to validate the results. Here it is step by step:
Step 1: On the standby database, query the v$archived_log view.
SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#;
We may have, for example, sequence numbers 8, 9, and 10.
Step 2: On the primary database, we force a log switch, which will also result in the archival of the current online redo log file group.
SQL> alter system switch logfile;
Step 3: Now we can go across to the standby database side again and verify that the redo data was also archived on the standby side.
SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#;
Now we see, for example, the sequence# numbers 8, 9, 10, and 11.
This shows us that the archived redo log files are available to be applied to the physical standby database.
Step 4: Now we verify that the received redo has been applied to the standby database.
SQL> select sequence#, applied from v$archived_log order by sequence#;
The value of the "APPLIED" column will be "YES" if that log has already been applied, or "IN-MEMORY" if it has been applied in memory but the datafiles have not yet been updated.
Now that your Data Guard configuration is up and running you can look at some additional aspects.
· By default Data Guard is configured to be in maximum performance mode. You may want to change this to another mode, for example you may prefer it to be maximum protection mode.
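As a sketch, the protection mode is raised on the primary. Maximum protection requires SYNC redo transport to the standby, so the destination attributes must be adjusted first; the destination settings below are illustrative for the 'jhb' example.

```sql
-- Illustrative: switch the transport to SYNC, then raise the protection mode
SQL> alter system set log_archive_dest_2 =
  'SERVICE=jhb SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=jhb';
SQL> shutdown immediate
SQL> startup mount
SQL> alter database set standby database to maximize protection;
SQL> alter database open;
```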
· Enable Flashback Database: Flashback Database enables you to return the database to its state at any time in the recent past. This works much faster than a point-in-time recovery, and does not require restoring datafiles from a backup. You can flash the database back to a time just before a failover occurred, and avoid having to rebuild the primary database. In a Data Guard environment you can enable Flashback Database on the primary database, on the standby database, or on both.
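Enabling Flashback Database is straightforward once a fast recovery area is configured; on 11g R2 it can be done while the database is open. The retention target below is an example value.

```sql
-- Example: keep roughly 24 hours (1440 minutes) of flashback data
SQL> alter system set db_flashback_retention_target = 1440;
SQL> alter database flashback on;
```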
P.S. I am busy preparing nine books on preparing for the Oracle 11g R2 OCM DBA exam. Watch this spot for details on the books as they become available.