Archive

Archive for January, 2012

How to fix Target Database Incarnation Is Not Current In Recovery Catalog RMAN-20011

January 31, 2012

When you perform an incomplete recovery (i.e. the database is opened with RESETLOGS) and then run RMAN to back up that database, you may get the following error:

RMAN-03014: implicit resync of recovery catalog failed
RMAN-06004: ORACLE error from recovery catalog database: RMAN-20011: target database incarnation is not current in recovery catalog

To fix this issue, perform the following steps.


1) Connect to RMAN catalog via SQL*Plus
$ sqlplus @rmancatalog

SQL> SELECT name, DBID, RESETLOGS_TIME FROM rc_database WHERE dbid=9999999999;

NAME     DBID       RESETLOGS
-------- ---------- ---------
TESTDB   9999999999 29-JUL-09

Check the incarnations and identify the latest one:
SQL> SELECT dbid, name, dbinc_key, resetlogs_change#, resetlogs_time FROM rc_database_incarnation WHERE dbid=9999999999 ORDER BY resetlogs_time;

SQL> SELECT db_key, dbid, name, current_incarnation FROM rc_database_incarnation WHERE dbid = 9999999999 ORDER BY 1;

      DBID NAME      DBINC_KEY RESETLOGS_CHANGE# TO_CHAR(RESETLOGS_TI
---------- -------- ---------- ----------------- --------------------
9999999999 TESTDB       473490                 1 16-FEB-2006 08:59:37
9999999999 TESTDB       473484            565658 07-MAY-2007 10:15:58
9999999999 TESTDB       774712          52500357 29-JUL-2009 11:12:21

2) Connect to the RMAN catalog together with the target database and reset the database to the latest incarnation (774712 from the query above):
$ export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
$ rman catalog <user>/<pwd>@rmancatalog target /

RMAN> reset database to incarnation 774712;

After running this command you should be able to back up the database on which the incomplete recovery was done.

RMAN> resync catalog;

-- verify that the reset incarnation is now shown as CURRENT
RMAN> list incarnation;

List of Database Incarnations
DB Key Inc Key DB Name DB ID STATUS Reset SCN Reset Time
------- ------- -------- ---------------- ------- ---------- ----------
473483 473490 TESTDB 9999999999 PARENT 1 16-FEB-2006 08:59:37
473483 473484 TESTDB 9999999999 PARENT 565658 07-MAY-2007 10:15:58
473483 774712 TESTDB 9999999999 CURRENT 52500357 29-JUL-2009 11:12:21
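
With the catalog now pointing at the correct incarnation, the failing backup can be retried. A quick sketch of the cross-check and backup (the V$DATABASE_INCARNATION query simply confirms the target's own view of its incarnations; use whatever backup command your regular backup script normally runs):

SQL> SELECT incarnation#, resetlogs_change#, status FROM v$database_incarnation;

RMAN> backup database plus archivelog;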

 

Categories: Oracle

How To Determine The DBID When Using RMAN

January 31, 2012

1)  If your database is open you may issue the following query:
SQL>  SELECT dbid FROM v$database;

      DBID
----------
1411146558


2)  If you are using a recovery catalog then connect to the recovery catalog via
RMAN and issue the "list incarnation" command. You must first nomount the target
database.  For example:

D:\> rman target <user>/<pwd>@mydb rcvcat <user>/<pwd>@rcat

Recovery Manager: Release 8.1.7.4.1 - Production

RMAN-06193: connected to target database (not started)
RMAN-06008: connected to recovery catalog database

RMAN> startup nomount

RMAN-06196: Oracle instance started

Total System Global Area      94980124 bytes

Fixed Size                       75804 bytes
Variable Size                 57585664 bytes
Database Buffers              37240832 bytes
Redo Buffers                     77824 bytes

RMAN> list incarnation;

RMAN-03022: compiling command: list

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            CUR Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       2       ORCL817  1411146558       YES 282854     03-DEC-02
--------------------------------------------------------------------------------


3)  If you have a saved copy of the screen details from a previous RMAN session
you may refer to this output for the dbid.  For example:

D:\> rman target <user>/<pwd>@mydb rcvcat <user>/<pwd>@rcat

Recovery Manager: Release 8.1.7.4.1 - Production

RMAN-06005: connected to target database: ORCL817 (DBID=1411146558)
RMAN-06008: connected to recovery catalog database
--------------------------------------------------------------------------------


4)  If you are using RMAN with version 9i you have the ability to configure the
automatic backup of your control files.  If you have this feature on locate one
of your control file autobackups.  The name of this file will tell you the dbid
of your database. For example:

D:\ORACLE\ORA92\DATABASE> dir
Volume in drive D has no label.
Volume Serial Number is 3E3B-12FD

Directory of D:\ORACLE\ORA92\DATABASE

11/20/2002  02:06p      <DIR>          .
11/20/2002  02:06p      <DIR>          ..
01/23/2003  11:12a           2,064,384 C-2282329623-20030123-00
...

In this case 2282329623 is the dbid for this database (the autobackup file name follows the pattern c-<dbid>-<yyyymmdd>-<sequence>).


5)  If the four steps above are not available because you have lost all the files
for your database, you are not using a recovery catalog, you are not using
autobackup of your controlfile, etc., but you do have an old control file available,
then mount the database with the old control file and query v$database as in step 1
to obtain the dbid of your database.


6)  If the platform is UNIX and you have a datafile still on disk for the problem database, you may
be able to obtain the DBID using the strings command as in the following example:

$ strings undotbs01.dbf  | grep MAXVALUE

3587267724, MAXVALUE
... etc.

The output above shows the DBID in this example to be 3587267724.
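
Once the DBID is known, it is typically fed back to RMAN with SET DBID before restoring the control file or spfile when nothing is mounted. A rough sketch, using the DBID from the examples above (adjust the value and restore steps for your own situation):

RMAN> connect target /
RMAN> startup nomount;
RMAN> set dbid 1411146558;
RMAN> restore controlfile from autobackup;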

Categories: Oracle

Modifying VIP address or VIP hostname in Oracle RAC

January 26, 2012

Check the current settings:


# srvctl config nodeapps -a

Network exists: 1/10.1.223.0/255.255.255.0/lan0, type static
VIP exists: /cruz-vip/10.1.223.73/10.1.223.0/255.255.255.0/lan0, hosting node cruz
VIP exists: /davis-vip/10.1.223.72/10.1.223.0/255.255.255.0/lan0, hosting node davis

Stop the instance, ASM, and nodeapps resources:


$srvctl stop instance -d romeo -i romeo2
$srvctl stop asm -n romeo1
$srvctl stop nodeapps -n romeo1

Verify the VIP address is no longer running by using the command below:


# crsctl status resource -t

Modify the VIP address or VIP hostname on nodeapps by using the srvctl command (run as root):


#srvctl modify nodeapps -n <node_name> [-o <oracle_home>] [-A <new_vip_address>]
Where
-n <node_name> - Node name.
-o <oracle_home> - Oracle home for the cluster software (CRS home).
-A <new_vip_address> - The node-level VIP address (<name|ip>/netmask[/if1[|if2|...]]).

Example: modify the VIP address to 10.1.223.72

#srvctl modify nodeapps -n davis -A 10.1.223.72/255.255.255.0/eth0

Modify the VIP hostname from "davis-vip" to "davis-v":

#srvctl modify nodeapps -n davis -A davis-v/255.255.255.0/eth0

Verify the change by running the command below:

$srvctl config nodeapps -n davis -a

Start all resources


$srvctl start nodeapps -n davis
$srvctl start asm -n davis

$srvctl start instance -d romeo -i romeo1
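
Once everything has been restarted, the nodeapps and VIP status can be re-checked. A quick sketch of the verification (node and database names as used in the example above):

$ srvctl status nodeapps -n davis
$ srvctl config nodeapps -a
$ crsctl status resource -t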

 

 

Categories: Oracle Rac

Restoring OCR and Voting Disk on shared storage 11gr2

January 25, 2012

RESTORING OCR

1. List the nodes in your cluster by running the following command as root on one node:

#./olsnodes
davis
cruz

2. Stop Oracle Clusterware by running the following command as root on all of the nodes:

# crsctl stop crs

If the preceding command returns any error due to OCR corruption, stop Oracle Clusterware by running the following command as root on all of the nodes:

# crsctl stop crs -f

If you are using ASM, we need to start the Clusterware stack in exclusive mode on the node where we will recover the OCR, so that ASM is available:

# crsctl start crs -excl

After startup, verify that the diskgroups we need are mounted:

SQL> select name, state from v$asm_diskgroup;
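
If the diskgroup holding the OCR is not mounted, it can be mounted manually from this ASM instance. A minimal sketch, where the diskgroup name DATA is only a placeholder for your own diskgroup:

SQL> alter diskgroup DATA mount;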

Stop crsd if it is running:

# crsctl stop resource ora.crsd -init

3. Run the following command to list the backup files:

# ./ocrconfig -showbackup

cruz     2012/07/18 14:33:22     /u05/ocrbackup/backup00.ocr
cruz     2012/07/18 10:33:21     /u05/ocrbackup/backup01.ocr
cruz     2012/07/18 06:33:20     /u05/ocrbackup/backup02.ocr
cruz     2012/07/16 18:33:10     /u05/ocrbackup/day.ocr
cruz     2012/07/11 10:32:33     /u05/ocrbackup/week.ocr
davis     2012/01/26 11:13:52    /u05/ocrbackup/backup_20120126_111352.ocr

I have moved the OCR backups onto shared storage just in case we have a node failure and don't have access to the backups.

If you need to show the automatically or manually created backups separately:


# ./ocrconfig -showbackup manual
cruz     2012/07/18 16:39:44     /u05/ocrbackup/backup_20120718_163944.ocr
davis     2012/07/18 16:18:35     /u05/ocrbackup/backup_20120718_161835.ocr
davis     2012/01/26 11:13:52     /u05/ocrbackup/backup_20120126_111352.ocr
davis     2012/01/26 11:11:43     /u01/app/11.2.0.2/grid/cdata/ractestscan/backup_20120126_111143.ocr

# ./ocrconfig -showbackup auto
cruz     2012/07/18 14:33:22     /u05/ocrbackup/backup00.ocr
cruz     2012/07/18 10:33:21     /u05/ocrbackup/backup01.ocr
cruz     2012/07/18 06:33:20     /u05/ocrbackup/backup02.ocr
cruz     2012/07/16 18:33:10     /u05/ocrbackup/day.ocr
cruz     2012/07/11 10:32:33     /u05/ocrbackup/week.ocr

4. If you are restoring OCR to a cluster file system or network file system:

If you need to find the location of the OCR, check the ocr.loc file (on HP-UX):

/var/opt/oracle/ocr.loc

# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          3
Total space (kbytes)     :     262120
Used space (kbytes)      :       3316
Available space (kbytes) :     258804
ID                       :  579764850
Device/File Name         : /u02/ocr/ocr
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

If this command does not display the message Device/File integrity check succeeded for at least one copy of the OCR,
then both the primary OCR and the OCR mirrors have failed. You must restore the OCR from a backup.

Review the contents of the backup using the following ocrdump
command, where file_name is the name of the OCR backup file. The command generates an ASCII file OCRDUMPFILE:

#ocrdump -backupfile OCR_backup_file_name

Restore the OCR from the chosen backup as root:

# ocrconfig -restore /u05/ocrbackup/backup00.ocr

Restart Oracle Clusterware on all nodes of the cluster

#crsctl start crs

Use the Cluster Verification Utility (CVU) to verify the OCR integrity. Run the following command,
where the -n all argument retrieves a list of all the cluster nodes that are configured as part of your cluster:

grid# cluvfy comp ocr -n all -verbose

Verifying OCR integrity

Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations

Checking OCR config file "/var/opt/oracle/ocr.loc"...

OCR config file "/var/opt/oracle/ocr.loc" check successful

Checking OCR location "/u02/ocr/ocr"...

Check for OCR location "/u02/ocr/ocr" successful

Check for compatible storage device for OCR location "/u02/ocr/ocr"...

Check for compatible storage device for OCR location "/u02/ocr/ocr" is successful...

NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.

OCR integrity check passed

Verification of OCR integrity was successful.

The output above confirms that the OCR was successfully restored.

RESTORING VOTING DISK

1.  Make sure that the OCR is restored and valid using ocrcheck
2. Run the following command as root from only one node to start the Oracle Clusterware stack in exclusive mode, which does not require voting files to be present or usable:
# crsctl start crs -excl

3. Run the crsctl query css votedisk command to retrieve the list of voting files currently defined, similar to the following:

##  STATE    File Universal Id                File Name        Disk group
--  -----    -----------------                ---------        ----------
 1. ONLINE   a9f61832112b4f33bf98988caac790f6 (/u02/ocr/vdsk)   []
Located 1 voting disk(s).

4. Run the following command using the File Universal Identifier (FUID) obtained in the previous step (a combined example of steps 4 and 5, using the values shown above, is sketched after step 7):

$ crsctl delete css votedisk FUID

5.  Add a voting disk, as follows:

$ crsctl add css votedisk path_to_voting_disk

6. Stop the Oracle Clusterware stack as root:
# crsctl stop crs
If the Oracle Clusterware stack is running in exclusive mode, then use the -f option to force the shutdown of the stack.

7. Restart the Oracle Clusterware stack in normal mode as root:
# crsctl start crs
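
Putting steps 4 and 5 together with the values shown in step 3, the sequence would look roughly like this (the FUID and voting disk path are the ones from the example output; substitute your own):

$ crsctl delete css votedisk a9f61832112b4f33bf98988caac790f6
$ crsctl add css votedisk /u02/ocr/vdsk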

Oracle Local Registry

Each node in a cluster has a local registry for node-specific resources, called an Oracle Local Registry (OLR).

By default, OLR is located at Grid_home/cdata/host_name.olr on each node.

You can check the status of OLR on the local node using the OCRCHECK utility, as follows:

# ocrcheck -local

You can display the content of OLR on the local node to the text terminal that initiated the program using the OCRDUMP utility, as follows:

# ocrdump -local -stdout

You can perform administrative tasks on OLR on the local node using the OCRCONFIG utility.

To export OLR to a file

# ocrconfig -local -export file_name

Oracle recommends that you use the -manualbackup and -restore commands and not the -import and -export commands.

When exporting OLR, Oracle recommends including “olr“, the host name, and the timestamp in the name string. For example:

olr_myhost1_20090603_0130_export

To import a specified file to OLR:

# ocrconfig -local -import file_name

To manually back up OLR:

# ocrconfig -local -manualbackup

Note:

The OLR is backed up at the end of an installation or an upgrade. After that time, you can only manually back up the OLR. Automatic backups are not supported for the OLR. You should create a new backup when you migrate OCR from Oracle ASM to other storage, or when you migrate OCR from other storage to Oracle ASM.

The default backup location for the OLR is in the path Grid_home/cdata/host_name.

To view the contents of the OLR backup file:

ocrdump -local -backupfile olr_backup_file_name

To change the OLR backup location:

ocrconfig -local -backuploc new_olr_backup_path

To restore OLR:


# crsctl stop crs

# ocrconfig -local -restore file_name

# ocrcheck -local

# crsctl start crs

$ cluvfy comp olr

Categories: Oracle Rac

11gR2 Target Database or Repository Producing Several TNS-12599

January 25, 2012

***********************************************************************
NI cryptographic checksum mismatch error: 12599.

VERSION INFORMATION:
TNS for Linux: Version 11.2.0.1.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.1.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.1.0 - Production
Time: 14-JUL-2010 11:39:22
Tracing not turned on.
Tns error struct:
ns main err code: 12599

TNS-12599: TNS:cryptographic checksum mismatch
ns secondary err code: 2526
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Wed Jul 14 11:48:14 2010
***********************************************************************

Workaround:

On the target database side, set SQLNET.ENCRYPTION_TYPES_SERVER = (3DES168) in sqlnet.ora.
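
As a sketch, the corresponding entry in the target's sqlnet.ora is just the single parameter line below (the file location depends on your TNS_ADMIN setting):

# sqlnet.ora on the target database server
SQLNET.ENCRYPTION_TYPES_SERVER = (3DES168)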

Patches for bug 9953045 and bug 12360281 fix this problem.

- Bug 9953045 fixes the issue for repository connections made from the OMS
- Bug 12360281 fixes the issue for target database connections made from the OMS

Download patches from Oracle

p9953045_111010_Generic.zip

p12360281_111010_Generic.zip

Categories: Grid

11gR2 Rac fix the “ORACLE_BASE not set in environment” error in alertlog

January 25, 2012

The following error was reported in the alert log during instance startup after recovering from a pmon crash in a RAC environment

Starting background process RSMN
Fri Dec 16 16:55:40 2011
RSMN started with pid=28, OS id=24795
ORACLE_BASE not set in environment. It is recommended
that ORACLE_BASE be set in the environment
Fri Dec 16 16:55:40 2011
ALTER DATABASE MOUNT /* db agent *//* {4:8516:36317} */

To fix the error, run the following command using srvctl:

srvctl setenv database -d romeo -t "ORACLE_BASE=/u01/app/oracle"
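
The setting can be confirmed before restarting anything. A quick check using srvctl (database name as above):

$ srvctl getenv database -d romeo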

Once that was executed, I bounced this particular instance using:

srvctl stop instance -d romeo -i romeo2
srvctl start instance -d romeo -i romeo2

This resolved the issue, as the alert log showed on the next startup:

Starting background process RSMN
Wed Jan 25 12:39:47 2012
RSMN started with pid=28, OS id=16293
ORACLE_BASE from environment = /u01/app/oracle
Wed Jan 25 12:39:48 2012
ALTER DATABASE MOUNT /* db agent *//* {2:43873:47317} */
Categories: Oracle Rac

Duplicate RAC database to another new RAC system

January 20, 2012

Source

4 nodes Oracle 11.2.0.2 on HP-UX 11.31: hostA1 hostB1 hostC1 hostD1, database romeo

Target

4 nodes Oracle 11.2.0.2 on HP-UX 11.31: hostA2 hostB2 hostC2 hostD2, database juliet

1. On the source host, back up the source database using RMAN. The sample here is a disk backup that includes the current control file.

$ORACLE_HOME/bin/rman

oracle@sausalito:/u01/app/oracle/product/11.2.0/db_1/dbs > rman target / catalog rman/rman@rmancpr

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Jan 20 09:08:25 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ROMEO(DBID=498205723)

connected to recovery catalog database

run {
allocate channel d1 device type disk;
backup format '/u1/df_%t_%s_%p' database
include current controlfile
plus archivelog;
release channel d1;
}

2. Copy all the backupset files to one of the target hosts, into the same backup location as on the source:

$ scp /rmanbackup/oradata/romeo/df* hostA2:/rmanbackup/oradata/romeo/

 Create a pfile from the source database and move it to the target system

SQL> create pfile='/tmp/initjuliet.ora' from spfile;

Move initjuliet.ora to the target system $ORACLE_HOME/dbs

3. Edit initjuliet.ora and change:

*.cluster_database=false

Change any remaining references of romeo to juliet.

Add the file name conversion parameters if the file names or paths differ:

*.db_file_name_convert = ('/u02/oradata/romeo','/u02/oradata/juliet')

*.log_file_name_convert = ('/u02/oradata/romeo','/u02/oradata/juliet')

Create a password file using orapwd on the target host

orapwd file=orapwjuliet password=xxxxxxxx

4. On the target host, create a TNS alias pointing to the source database:

ROMEO =

  (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = maverick)(PORT = 1521))

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = romeo)

    )

  )
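
Before starting the duplicate, it is worth confirming that the alias resolves and the source listener is reachable from the target host, for example:

$ tnsping romeo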

5. Start the juliet instance in nomount mode.
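
A quick sketch of this step on hostA2, assuming initjuliet.ora was placed in $ORACLE_HOME/dbs as described above (the default pfile lookup then finds it):

$ export ORACLE_SID=juliet
$ sqlplus / as sysdba
SQL> startup nomount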

6. Start the RMAN duplicate:

$ rman target sys/xxxxxxx@romeo auxiliary / catalog rman/rman@rmancpr

 RMAN> run {

allocate channel D1 device type disk;

allocate auxiliary channel D2 device type disk;

DUPLICATE TARGET DATABASE TO juliet;

}

If successful, you should see the following extract at the end:

 

datafile 2 switched to datafile copy

input datafile copy RECID=1 STAMP=772991657 file name=/u03/oradata/dupdb/sysaux01.dbf

datafile 3 switched to datafile copy

input datafile copy RECID=2 STAMP=772991657 file name=/u03/oradata/dupdb/undotbs01.dbf

datafile 4 switched to datafile copy

input datafile copy RECID=3 STAMP=772991657 file name=/u03/oradata/dupdb/users01.dbf

datafile 5 switched to datafile copy

input datafile copy RECID=4 STAMP=772991657 file name=/u03/oradata/dupdb/undotbs02.dbf

datafile 6 switched to datafile copy

input datafile copy RECID=5 STAMP=772991657 file name=/u03/oradata/dupdb/undotbs03.dbf

datafile 7 switched to datafile copy

input datafile copy RECID=6 STAMP=772991657 file name=/u03/oradata/dupdb/undotbs04.dbf

 contents of Memory Script:

{

   Alter clone database open resetlogs;

}

executing Memory Script

 database opened

Finished Duplicate Db at 19-JAN-2012 15:54:40

released channel: D1

released channel: D2

7. Add a new temporary file to the temporary tablespace:

SQL> alter tablespace temp add tempfile '/u02/oradata/dupdb/temp01.dbf' size 2048m reuse;

8. Make the database RAC aware

Each node of the cluster has its own Oracle home in /u01 and shares the /u02 and /u03 filesystems, so the following changes were needed to enable RAC.

Shut down the database on hostA2 and move the spfile from $ORACLE_HOME/dbs to /u03/oradata/Juliet

This is shared storage which all the 4 hosts can access

srvctl modify database -d juliet -o /u01/app/oracle/product/11.2.0/db_1 -p /u03/oradata/dupdb/spfilejuliet.ora

srvctl add instance -d juliet -i juliet1 -n hostA2

srvctl add instance -d juliet -i juliet2 -n hostB2

srvctl add instance -d juliet -i juliet3 -n hostC2

srvctl add instance -d juliet -i juliet4 -n hostD2

Move the password file from hostA2 $ORACLE_HOME/dbs to /u03/oradata/Juliet.

Create links from each host to the password file in /u03/oradata/Juliet, named orapw<instance_name> so that each instance finds its own password file:

hostA2 : $ORACLE_HOME/dbs  ln -s /u03/oradata/Juliet/orapwjuliet  orapwjuliet1

hostB2 : $ORACLE_HOME/dbs  ln -s /u03/oradata/Juliet/orapwjuliet  orapwjuliet2

hostC2 : $ORACLE_HOME/dbs  ln -s /u03/oradata/Juliet/orapwjuliet  orapwjuliet3

hostD2 : $ORACLE_HOME/dbs  ln -s /u03/oradata/Juliet/orapwjuliet  orapwjuliet4

On each host create the following init.ora file:

hostA2  initjuliet1.ora   containing SPFILE='/u03/oradata/dupdb/spfilejuliet.ora'

hostB2  initjuliet2.ora   containing SPFILE='/u03/oradata/dupdb/spfilejuliet.ora'

hostC2  initjuliet3.ora   containing SPFILE='/u03/oradata/dupdb/spfilejuliet.ora'

hostD2  initjuliet4.ora   containing SPFILE='/u03/oradata/dupdb/spfilejuliet.ora'

Add the following entries to /etc/oratab on each host:

hostA2        

juliet1:/u01/app/oracle/product/11.2.0/db_1:N     

hostB2

juliet2:/u01/app/oracle/product/11.2.0/db_1:N     

hostC2

juliet3:/u01/app/oracle/product/11.2.0/db_1:N     

hostD2

juliet4:/u01/app/oracle/product/11.2.0/db_1:N     

On hostA2, where the database was created, mount the database:

$ export ORACLE_SID=juliet1

SQL> connect / as sysdba

SQL> startup mount

SQL> alter system set cluster_database=TRUE scope=spfile sid='*';

SQL> shutdown immediate

Start up the instances using srvctl:

srvctl start instance -d juliet -i juliet1

srvctl start instance -d juliet -i juliet2

srvctl start instance -d juliet -i juliet3

srvctl start instance -d juliet -i juliet4
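
At this point all four instances should be open. A quick cross-check from any node (gv$instance is the standard RAC view; names as used above):

SQL> select inst_id, instance_name, status from gv$instance;

$ srvctl status database -d juliet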

9. Add the database to OEM

You might get the following issue in OEM: "service is down" alerts in Grid Control for romeoXDB, romeo, and any other services that existed in the romeo database.

 Refer to the following article for a resolution and how to avoid the problem

 http://managingoracle.blogspot.com/2011/01/service-xdb-is-down-alert-in-grid.html

To resolve the problem, if the services are not relevant, delete them:

SQL> exec dbms_service.delete_service('romeoXDB');

SQL> exec dbms_service.delete_service('romeo');

Useful web links:

http://oraexplorer.com/2007/11/rman-duplicate/

http://www.colestock.com/blogs/2007/09/how-to-duplicate-rac-database-using.html

http://www.oracleracexpert.com/2009/12/duplicate-rac-database-using-rman.html

http://surachartopun.com/2009/11/duplicate-without-connection-to-target.html

Categories: Oracle Rac