--*****************************************
-- Verifying the Oracle RAC installation environment with runcluvfy
--*****************************************
As the saying goes, to do a good job one must first sharpen one's tools. Installing Oracle RAC is a sizable undertaking, and without
solid up-front planning and configuration, the installation becomes far more complex than expected. Fortunately, the runcluvfy tool
greatly simplifies the work. The demonstration below is based on installing Oracle 10g RAC on Linux.
1. Run runcluvfy from the installation media path for the pre-installation checks
[oracle@node1 cluvfy]$ pwd
/u01/Clusterware/clusterware/cluvfy
[oracle@node1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check : Node reachability from node "node1"
Destination Node Reachable?
------------------------------------ ------------------------
node1 yes
node2 yes
Result : Node reachability check passed from node "node1".
Checking user equivalence...
Check : User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
node2 passed
node1 passed
Result : User equivalence check passed for user "oracle".
Checking administrative privileges...
Check : Existence of user "oracle"
Node Name User Exists Comment
------------ ------------------------ ------------------------
node2 yes passed
node1 yes passed
Result : User existence check passed for "oracle".
Check : Existence of group "oinstall"
Node Name Status Group ID
------------ ------------------------ ------------------------
node2 exists 500
node1 exists 500
Result : Group existence check passed for "oinstall".
Check : Membership of user "oracle" in group "oinstall" [ as Primary ]
Node Name User Exists Group Exists User in Group Primary Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 yes yes yes yes passed
node1 yes yes yes yes passed
Result : Membership check for user "oracle" in group "oinstall" [ as Primary ] passed.
Administrative privileges check passed.
Checking node connectivity...
Interface information for node "node2"
Interface Name IP Address Subnet
------------------------------ ------------------------------ ----------------
eth0 192.168.0.12 192.168.0.0
eth1 10.101.0.12 10.101.0.0
Interface information for node "node1"
Interface Name IP Address Subnet
------------------------------ ------------------------------ ----------------
eth0 192.168.0.11 192.168.0.0
eth1 10.101.0.11 10.101.0.0
Check : Node connectivity of subnet "192.168.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2:eth0 node1:eth0 yes
Result : Node connectivity check passed for subnet "192.168.0.0" with node(s) node2,node1.
Check : Node connectivity of subnet "10.101.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2:eth1 node1:eth1 yes
Result : Node connectivity check passed for subnet "10.101.0.0" with node(s) node2,node1.
Suitable interfaces for the private interconnect on subnet "192.168.0.0":
node2 eth0: 192.168.0.12
node1 eth0: 192.168.0.11
Suitable interfaces for the private interconnect on subnet "10.101.0.0":
node2 eth1: 10.101.0.12
node1 eth1: 10.101.0.11
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result : Node connectivity check failed.
Checking system requirements for 'crs' ...
Check : Total memory
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 689.38 MB (705924 KB) 512 MB (524288 KB) passed
node1 689.38 MB (705924 KB) 512 MB (524288 KB) passed
Result : Total memory check passed.
Check : Free disk space in "/tmp" dir
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 4.22 GB (4428784 KB) 400 MB (409600 KB) passed
node1 4.22 GB (4426320 KB) 400 MB (409600 KB) passed
Result : Free disk space check passed.
Check : Swap space
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 2 GB (2096472 KB) 1 GB (1048576 KB) passed
node1 2 GB (2096472 KB) 1 GB (1048576 KB) passed
Result : Swap space check passed.
Check : System architecture
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 i686 i686 passed
node1 i686 i686 passed
Result : System architecture check passed.
Check : Kernel version
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 2.6.18-194.el5 2.4.21-15EL passed
node1 2.6.18-194.el5 2.4.21-15EL passed
Result : Kernel version check passed.
Check : Package existence for "make-3.79"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 make-3.81-3.el5 passed
node1 make-3.81-3.el5 passed
Result : Package existence check passed for "make-3.79".
Check : Package existence for "binutils-2.14"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 binutils-2.17.50.0.6-14.el5 passed
node1 binutils-2.17.50.0.6-14.el5 passed
Result : Package existence check passed for "binutils-2.14".
Check : Package existence for "gcc-3.2"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 gcc-4.1.2-48.el5 passed
node1 gcc-4.1.2-48.el5 passed
Result : Package existence check passed for "gcc-3.2".
Check : Package existence for "glibc-2.3.2-95.27"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 glibc-2.5-49 passed
node1 glibc-2.5-49 passed
Result : Package existence check passed for "glibc-2.3.2-95.27".
Check : Package existence for "compat-db-4.0.14-5"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 compat-db-4.2.52-5.1 passed
node1 compat-db-4.2.52-5.1 passed
Result : Package existence check passed for "compat-db-4.0.14-5".
Check : Package existence for "compat-gcc-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 missing failed
node1 missing failed
Result : Package existence check failed for "compat-gcc-7.3-2.96.128".
Check : Package existence for "compat-gcc-c++-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 missing failed
node1 missing failed
Result : Package existence check failed for "compat-gcc-c++-7.3-2.96.128".
Check : Package existence for "compat-libstdc++-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 missing failed
node1 missing failed
Result : Package existence check failed for "compat-libstdc++-7.3-2.96.128".
Check : Package existence for "compat-libstdc++-devel-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 missing failed
node1 missing failed
Result : Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".
Check : Package existence for "openmotif-2.2.3"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 openmotif-2.3.1-2.el5_4.1 passed
node1 openmotif-2.3.1-2.el5_4.1 passed
Result : Package existence check passed for "openmotif-2.2.3".
Check : Package existence for "setarch-1.3-1"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 setarch-2.0-1.1 passed
node1 setarch-2.0-1.1 passed
Result : Package existence check passed for "setarch-1.3-1".
Check : Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 exists passed
node1 exists passed
Result : Group existence check passed for "dba".
Check : Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 exists passed
node1 exists passed
Result : Group existence check passed for "oinstall".
Check : User existence for "nobody"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 exists passed
node1 exists passed
Result : User existence check passed for "nobody".
System requirement failed for 'crs'
Pre-check for cluster services setup was unsuccessful on all the nodes.
The error "Could not find a suitable set of interfaces for VIPs." can be ignored: it is caused by a bug, documented in detail
on Metalink (Doc ID: 338924.1) and reproduced at the end of this article.
As for the packages reported as failed above, install them on the system wherever possible.
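The missing-package advice above can be scripted. The sketch below is only an illustration: the package names are taken from the cluvfy output, the query helper is parameterized so it can be exercised without rpm, and the installation command in the trailing comment is an assumption that depends on where your OS media lives.

```shell
# Packages cluvfy flagged as missing in the output above.
PKGS="compat-gcc compat-gcc-c++ compat-libstdc++ compat-libstdc++-devel"

# Print each package that the given query command reports as absent.
# The query command exits non-zero for a missing package (as "rpm -q" does).
check_missing() {
    query="$1"; shift
    for p in "$@"; do
        $query "$p" >/dev/null 2>&1 || echo "$p"
    done
}

# On a real node you would run:
#   check_missing "rpm -q" $PKGS        # list what is still missing
#   rpm -ivh compat-gcc-*.rpm ...       # then install those from the OS media
```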
2. Checking after Clusterware is installed. Note that the cluvfy executed this time is the one under the installed path.
[oracle@node1 ~]$ pwd
/u01/app/oracle/product/10.2.0/crs_1/bin
[oracle@node1 ~]$ ./cluvfy stage -post crsinst -n node1,node2
Performing post-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "node1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking Cluster manager integrity...
Checking CSS daemon...
Daemon status check passed for "CSS daemon".
Cluster manager integrity check passed.
Checking cluster integrity...
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
Uniqueness check for OCR device passed.
Checking the version of OCR...
OCR of correct Version "2" exists.
Checking data integrity of OCR...
Data integrity check for OCR passed.
OCR integrity check passed.
Checking CRS integrity...
Checking daemon liveness...
Liveness check passed for "CRS daemon".
Checking daemon liveness...
Liveness check passed for "CSS daemon".
Checking daemon liveness...
Liveness check passed for "EVM daemon".
Checking CRS health...
CRS health check passed.
CRS integrity check passed.
Checking node application existence...
Checking existence of VIP node application (required)
Check passed.
Checking existence of ONS node application (optional)
Check passed.
Checking existence of GSD node application (optional)
Check passed.
Post-check for cluster services setup was successful.
The verification above shows that the Clusterware background processes, the nodeapps resources, the OCR, and so on are all in the passed state; in other words, Clusterware was installed successfully.
3. cluvfy usage
[oracle@node1 ~]$ cluvfy -help # run with the -help switch to get cluvfy's usage information
USAGE:
cluvfy [ -help ]
cluvfy stage { -list | -help }
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
cluvfy comp { -list | -help }
cluvfy comp <component-name> <component-specific options> [-verbose]
[oracle@node1 ~]$ cluvfy comp -list
USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]
Valid components are:
nodereach : checks reachability between nodes
nodecon : checks node connectivity
cfs : checks CFS integrity
ssa : checks shared storage accessibility
space : checks space availability
sys : checks minimum system requirements
clu : checks cluster integrity
clumgr : checks cluster manager integrity
ocr : checks OCR integrity
crs : checks CRS integrity
nodeapp : checks node applications existence
admprv : checks administrative privileges
peer : compares properties with peers
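The components listed above can each be checked individually. The invocations below are illustrative only: they assume the same two-node cluster as the sessions above (node1, node2), cluvfy on the PATH, and a hypothetical raw device path for the storage check.

```shell
# Check a single component instead of a whole stage.
cluvfy comp nodecon -n node1,node2 -verbose      # node connectivity only
cluvfy comp admprv -n node1,node2 -o crs_inst    # privileges needed for CRS install
cluvfy comp ssa -n node1,node2 -s /dev/raw/raw1  # shared storage accessibility
                                                 # (/dev/raw/raw1 is a placeholder)
```

Component checks are handy after a failed stage check: you can re-run just the failing area with -verbose instead of repeating the full pre/post stage.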
4. Doc ID 338924.1
CLUVFY Fails With Error: Could not find a suitable set of interfaces for VIPs [ID 338924.1]
________________________________________
Modified 29-JUL-2010 Type PROBLEM Status PUBLISHED
In this Document
Symptoms
Cause
Solution
References
________________________________________
Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.7 - Release: 10.2 to 11.1
Information in this document applies to any platform.
Symptoms
When running cluvfy to check network connectivity at various stages of the RAC/CRS installation process, cluvfy fails
with errors similar to the following:
=========================
Suitable interfaces for the private interconnect on subnet "10.0.0.0":
node1 eth0: 10.0.0.1
node2 eth0: 10.0.0.2
Suitable interfaces for the private interconnect on subnet "192.168.1.0":
node1_internal eth1: 192.168.1.2
node2_internal eth1: 192.168.1.1
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result : Node connectivity check failed.
========================
On Oracle 11g, you may still see a warning in some cases, such as:
========================
WARNING:
Could not find a suitable set of interfaces for VIPs.
========================
Output seen will be comparable to that noted above, but IP addresses and node names may be different - i.e. the node names
of 'node1', 'node2', 'node1_internal', 'node2_internal' will be substituted with your actual Public and Private node names.
A second problem that will be encountered in this situation is that at the end of the CRS installation for 10gR2, VIPCA
will be run automatically in silent mode, as one of the 'optional' configuration assistants. In this scenario, the VIPCA
will fail at the end of the CRS installation. The InstallActions log will show output such as:
>>> Oracle CRS stack installed and running under init(1M)
>>> Running vipca(silent) for configuring nodeapps
>>> The given interface(s), "eth0" is not public. Public interfaces should
>>> be used to configure virtual IPs.
Cause
This issue occurs due to incorrect assumptions made in cluvfy and vipca based on an Internet Best Practice document -
"RFC1918 - Address Allocation for Private Internets". This Internet Best Practice RFC can be viewed here:
http://www.faqs.org/rfcs/rfc1918.html
From an Oracle perspective, this issue is tracked in BUG: 4437727
Per BUG:4437727, cluvfy makes an incorrect assumption based on RFC 1918 that any IP address/subnet that begins with any
of the following octets is private and hence may not be fit for use as a VIP:
172.16.x.x through 172.31.x.x
192.168.x.x
10.x.x.x
However, this assumption does not take into account that it is possible to use these IPs as Public IPs on an internal
network (or intranet). Therefore, it is very common to use IP addresses in these ranges as Public IPs and as Virtual
IPs, and this is a supported configuration.
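The RFC 1918 test the note describes can be sketched as a small shell function. This is a simplification for illustration only — the real cluvfy inspects interface configuration, not just the dotted-quad prefix:

```shell
# Return success (0) when an IPv4 address falls in an RFC 1918 private range:
# 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16. This is the assumption that
# makes cluvfy reject such addresses for VIP use, even though they are
# routinely (and supportably) used as public IPs on an intranet.
is_rfc1918() {
    case "$1" in
        10.*)                                  return 0 ;;
        192.168.*)                             return 0 ;;
        172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;
    esac
    return 1
}

# Example: the cluster's public addresses above (192.168.0.x) are flagged too.
# is_rfc1918 192.168.0.11 && echo "would be rejected for VIP"
```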
Solution
The solution to the error above that is given when running 'cluvfy' is to simply ignore it if you intend to use an IP in
one of the above ranges for your VIP. The installation and configuration can continue with no corrective action necessary.
One result of this, as noted in the problem section, is that the silent VIPCA will fail at the end of the 10gR2 CRS
installation. This is because VIPCA is running in silent mode and is trying to notify that the IPs that were provided
may not be fit to be used as VIPs. To correct this, you can manually execute the VIPCA GUI after the CRS installation
is complete. VIPCA needs to be executed from the CRS_HOME/bin directory as the 'root' user (on Unix/Linux) or as a
Local Administrator (on Windows):
$ cd $ORA_CRS_HOME/bin
$ ./vipca
Follow the prompts for VIPCA to select the appropriate interface for the public network, and assign the VIPs for each node
when prompted. Manually running VIPCA in the GUI mode, using the same IP addresses, should complete successfully.
Note that if you patch to 10.2.0.3 or above, VIPCA will run correctly in silent mode. The command to re-run vipca
silently can be found in CRS_HOME/cfgtoollogs in the file 'configToolAllCommands' or
'configToolFailedCommands'. Thus, in the case of a new install, the silent mode VIPCA command will fail after the
10.2.0.1 base release install, but once the CRS Home is patched to 10.2.0.3 or above, vipca can be re-run silently,
without the need to invoke the GUI tool.
References
NOTE: 316583.1 - VIPCA FAILS COMPLAINING THAT INTERFACE IS NOT PUBLIC
Related
________________________________________
Products
________________________________________
Oracle Database Products > Oracle Database > Oracle Database > Oracle Server - Enterprise Edition
Keywords
________________________________________
INSTALLATION FAILS; INTERCONNECT; PRIVATE INTERCONNECT; PRIVATE NETWORKS
Errors
________________________________________
RFC-1918
The note above is lengthy, so here is the actual fix.
On the node where the error occurs, modify the vipca file:
[root@node2 ~]# vi $ORA_CRS_HOME/bin/vipca
Locate the following content:
#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
#End workaround
Add a new line immediately after the fi:
unset LD_ASSUME_KERNEL
Then do the same for the srvctl file:
[root@node2 ~]# vi $ORA_CRS_HOME/bin/srvctl
Locate the following content:
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
Likewise, add a new line right after it:
unset LD_ASSUME_KERNEL
Save and exit, then re-run root.sh on the failing node.
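The two manual edits above can also be scripted. The sketch below is an assumption-laden convenience, not a supported procedure: it relies on GNU sed (standard on RHEL) and on the files containing the exact export line shown above, and it keeps a backup before touching anything.

```shell
# Append "unset LD_ASSUME_KERNEL" after every "export LD_ASSUME_KERNEL" line
# in a file, saving a .bak copy first. Run it against $ORA_CRS_HOME/bin/vipca
# and $ORA_CRS_HOME/bin/srvctl. Requires GNU sed for "-i" and one-line "a".
fix_ld_assume() {
    f="$1"
    cp "$f" "$f.bak"    # keep the original before editing in place
    sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$f"
}

# fix_ld_assume $ORA_CRS_HOME/bin/vipca
# fix_ld_assume $ORA_CRS_HOME/bin/srvctl
```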
5. Quick reference
For performance tuning, see:
Shared pool tuning
Using Oracle table caching (caching table)
For Oracle architecture, see:
Oracle online redo log files (ONLINE LOG FILE)
Oracle rollback (ROLLBACK) and undo (UNDO)
Oracle instances and Oracle databases (Oracle architecture)
For Flashback features, see:
Oracle Flashback features (FLASHBACK DATABASE)
Oracle Flashback features (FLASHBACK DROP & RECYCLEBIN)
Oracle Flashback features (Flashback Query, Flashback Table)
Oracle Flashback features (Flashback Version, Flashback Transaction)
For the concepts of user-managed backup and recovery, see:
Handling Oracle user-managed recovery (a detailed description of media recovery and how to perform it)
For RMAN backup, recovery, and management, see:
RMAN backup path confusion (when using plus archivelog)
For Oracle troubleshooting, see:
Another case of an SPFILE misconfiguration preventing database startup
Misunderstanding and setting of the parameter FAST_START_MTTR_TARGET=0
An SPFILE error preventing database startup (ORA-01565)
For ASM, see:
For SQL/PLSQL, see:
SQL basics --> set operations (UNION and UNION ALL)
SQL basics --> hierarchical queries (START WITH ... CONNECT BY PRIOR)
SQL basics --> summarizing data with the ROLLUP and CUBE operators
PL/SQL --> exception handling (Exception)
On other Oracle features, see:
Managing Oracle instances with OEM, SQL*Plus, and iSQL*Plus
Logging modes (LOGGING, FORCE LOGGING, NOLOGGING)
Managing the Oracle alert log (ALERT_$SID.LOG) with external tables
Clustered tables and their management (index clustered tables)
The differences among system, sys, sysoper, and sysdba
ORACLE_SID, DB_NAME, INSTANCE_NAME, DB_DOMAIN, GLOBAL_NAME
The complete Oracle patch collection (Oracle 9i 10g 11g patches)
Upgrading Oracle 10.2.0.1 to 10.2.0.4