Zero-Downtime Oracle Grid Infrastructure patching fails on node 2 when running root.sh

Patching Grid Infrastructure using ZDOGIP (Zero-Downtime Oracle Grid Infrastructure Patching)

When running /u01/app/19.16.0/grid/root.sh -transparent -nodriverupdate on node 2 of the RAC cluster, it fails with:


Can't locate strict.pm:   /root/perl5/lib/perl5/strict.pm: (null) at /u01/app/19.16.0/grid/sqlpatch/sqlpatch.pl line 108.
BEGIN failed--compilation aborted at /u01/app/19.16.0/grid/sqlpatch/sqlpatch.pl line 108.
2022/08/22 17:31:43 CLSRSC-488: Patching the Grid Infrastructure Management Repository database failed.
Died at /u01/app/19.16.0/grid/crs/install/crspatch.pm line 1916.
The command '/u01/app/19.16.0/grid/perl/bin/perl -I/u01/app/19.16.0/grid/perl/lib -I/u01/app/19.16.0/grid/crs/install /u01/app/19.16.0/grid/crs/install/rootcrs.pl  -transparent -nodriverupdate -dstcrshome /u01/app/19.16.0/grid -postpatch' execution failed
[root@node2 ~]#

Running the same command on node 1 completes successfully:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
LD_LIBRARY_PATH='/u01/app/19.15.0/grid/lib:/u01/app/19.16.0/grid/lib:'
Using configuration parameter file: /u01/app/19.16.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/e03572/crsconfig/rootcrs_e03572_2022-08-22_04-02-55PM.log
Using configuration parameter file: /u01/app/19.16.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/e03572/crsconfig/crs_prepatch_apply_oop_e03572_2022-08-22_04-02-56PM.log
2022/08/22 16:03:05 CLSRSC-347: Successfully unlock /u01/app/19.16.0/grid
2022/08/22 16:03:06 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Using configuration parameter file: /u01/app/19.16.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/e03572/crsconfig/crs_postpatch_apply_oop_e03572_2022-08-22_04-03-06PM.log
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [3063913975].
2022/08/22 16:03:29 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd_dummy.service'
2022/08/22 16:04:11 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [3063913975].
2022/08/22 16:05:17 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2022/08/22 16:05:18 CLSRSC-672: Post-patch steps for patching GI home successfully completed.
[root@node1 app]# 2022/08/22 16:07:58 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
To fix it, I did the following:

1. Locate strict.pm (both under the Grid home Perl and the system Perl):

ls -l /<GRID_HOME>/perl/lib/5.28.0/strict.pm
ls -l /usr/share/perl5/strict.pm

2. Set the PERL5LIB and PATH environment variables as below:

export PERL5LIB=<grid_home>/perl/lib/:/usr/share/perl5/
export PATH=$PATH:<grid_home>/perl/lib/:/usr/share/perl5/

e.g. with the Grid home at /u01/app/19.16.0/grid:

export PERL5LIB=/u01/app/19.16.0/grid/perl/lib/:/usr/share/perl5/
export PATH=$PATH:/u01/app/19.16.0/grid/perl/lib/:/usr/share/perl5/

3. Rerun /u01/app/19.16.0/grid/root.sh -transparent -nodriverupdate on node 2
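
Before rerunning, an optional sanity check (my own addition, not part of the original steps) is to confirm that the Grid home's perl can now load strict.pm:

/u01/app/19.16.0/grid/perl/bin/perl -Mstrict -e 'print "strict.pm found\n"'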

SQL Patching tool version 19.16.0.0.0 Production on Tue Aug 23 11:53:18 2022
Copyright (c) 2012, 2022, Oracle.  All rights reserved.
Log file for this invocation: /u01/app/grid/cfgtoollogs/sqlpatch/sqlpatch_35258_2022_08_23_11_53_18/sqlpatch_invocation.log
Connecting to database...OK
Gathering database info...done
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of interim SQL patches:
  No interim patches found
Current state of release update SQL patches:
  Binary registry:
    19.16.0.0.0 Release_Update 220703022223: Installed
  PDB CDB$ROOT:
    Applied 19.15.0.0.0 Release_Update 220331125408 successfully on 03-MAY-22 16.12.58.011967
  PDB GIMR_DSCREP_10:
    Applied 19.15.0.0.0 Release_Update 220331125408 successfully on 03-MAY-22 16.13.00.084820
  PDB PDB$SEED:
    Applied 19.15.0.0.0 Release_Update 220331125408 successfully on 03-MAY-22 16.12.59.018312
Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED GIMR_DSCREP_10
    No interim patches need to be rolled back
    Patch 34133642 (Database Release Update : 19.16.0.0.220719 (34133642)):
      Apply from 19.15.0.0.0 Release_Update 220331125408 to 19.16.0.0.0 Release_Update 220703022223
    No interim patches need to be applied
Installing patches...
Patch installation complete.  Total patches installed: 3
Validating logfiles...done
Patch 34133642 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/34133642/24865470/34133642_apply__MGMTDB_CDBROOT_2022Aug23_11_54_00.log (no errors)
Patch 34133642 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/34133642/24865470/34133642_apply__MGMTDB_PDBSEED_2022Aug23_11_55_26.log (no errors)
Patch 34133642 apply (pdb GIMR_DSCREP_10): SUCCESS
  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/34133642/24865470/34133642_apply__MGMTDB_GIMR_DSCREP_10_2022Aug23_11_55_26.log (no errors)
SQL Patching tool complete on Tue Aug 23 11:56:34 2022
2022/08/23 11:57:37 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2022/08/23 11:57:40 CLSRSC-672: Post-patch steps for patching GI home successfully completed.
[root@node2~]#
 

Restoring OMF and non-OMF datafiles via RMAN

Problem
When restoring a database that has a mixture of OMF and non-OMF datafiles, you may come across this error:

channel c1: ORA-19870: error while restoring backup piece bk_6559_1_906629460
ORA-19504: failed to create file "+ASM_DATA/HFMP/DATAFILE/oddevcontent.dbf"
ORA-17502: ksfdcre:3 Failed to create file +ASM_DATA/HFMP/DATAFILE/oddevcontent.dbf
ORA-15001: diskgroup "ASM_DATA" does not exist or is not mounted
ORA-15001: diskgroup "ASM_DATA" does not exist or is not mounted

failover to previous backup

Recovery catalog is down or not connected to catalog, trying to reconnect.
Reconnection with the recovery catalog is successful.
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03012: fatal error during compilation of command
RMAN-03028: fatal error code for command restore : 600
RMAN-00600: internal error, arguments [7530] [] [] [] []
[orahfm@rac1 (LMT:hfmp1) HFMP_20160329 ] $
[orahfm@rac1 (LMT:hfmp1) HFMP_20160329 ] $

Solution
The solution is to use "SET NEWNAME" when restoring the non-OMF files. If the database uses only OMF files, setting DB_CREATE_FILE_DEST is enough. But in a mixed environment you must use "SET NEWNAME" for every non-OMF datafile that will be restored.

E.g.
set newname for DATAFILE 7 to '+PRE_ASM_DATA/HFMP/DATAFILE/oddevcontent.dbf';
switch DATAFILE 7 to datafilecopy '+PRE_ASM_DATA/HFMP/DATAFILE/oddevcontent.dbf';
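
Putting it together, a minimal run-block sketch for this mixed layout, assuming DB_CREATE_FILE_DEST is already set for the OMF files (the diskgroup, datafile number and file name are taken from the example above; channels and the list of non-OMF files will differ in your environment):

run {
  allocate channel c1 device type disk;
  # one SET NEWNAME per non-OMF datafile being restored
  set newname for datafile 7 to '+PRE_ASM_DATA/HFMP/DATAFILE/oddevcontent.dbf';
  # OMF datafiles are created under DB_CREATE_FILE_DEST automatically
  restore database;
  switch datafile all;
  recover database;
}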

ORA-00845: MEMORY_TARGET not supported on this system

I encountered the Oracle error "ORA-00845: MEMORY_TARGET not supported on this system" on a 12c database recently. It occurs when an instance tries to use Automatic Memory Management (AMM) and the tmpfs mount point (/dev/shm) is smaller than the value specified in the MEMORY_MAX_TARGET parameter.

To solve this we need to unmount tmpfs and remount it with a larger size, and make the change permanent by adding an entry to /etc/fstab.
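
Before changing anything, it is worth comparing the current size of /dev/shm with the instance setting (standard OS and SQL*Plus checks, not part of the original steps):

# df -h /dev/shm

SQL> show parameter memory_max_target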


As root

# umount tmpfs

umount: /dev/shm: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))

# umount -l tmpfs

# mount -t tmpfs tmpfs -o size=10240m /dev/shm

# vi /etc/fstab

# cat /etc/fstab

tmpfs      /dev/shm        tmpfs    size=10240m    0    0

# exit

 

Change IP addresses of SCAN Name in Oracle 11gR2 RAC

Update the SCAN name with the new IP addresses in DNS first (srvctl modify scan picks them up by re-resolving the name), then stop the SCAN listener and the SCAN VIPs:

[root@rac1 ]# srvctl stop scan_listener
[root@rac1 ]# srvctl stop scan

Be sure that all the SCAN VIP services are down:

[root@rac1 ]# srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is not running
SCAN VIP scan2 is enabled
SCAN VIP scan2 is not running
SCAN VIP scan3 is enabled
SCAN VIP scan3 is not running

Reconfigure the SCAN with the new virtual IP addresses and start everything back up:

[root@rac1 ]# srvctl modify scan -n rac-scan.maverick.com
[root@rac1 ]# srvctl start scan
[root@rac1 ]# srvctl start scan_listener
[root@rac1 ]# srvctl config scan
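
The final srvctl config scan should now list the new IP addresses. As an extra check (the hostname below is the example SCAN name used above), confirm the DNS entries and that the listeners are back up:

# nslookup rac-scan.maverick.com
# srvctl status scan_listener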