Recovering a Failed OCR Disk Group in Oracle 12.1 RAC

In an 11.2 or 12.1 RAC, the OCR and voting disk can live in ASM, and many installations place them in a dedicated ASM disk group. If that disk group fails while the disk groups holding the data remain intact, how do you recover? There are two cases: recovery with an OCR backup, and recovery without one. Since the OCR is automatically backed up every 4 hours by default, most systems will have a backup available. This post walks through recovering an Oracle 12c RAC when the ASM disk group holding the OCR, voting disk, and ASM spfile fails and an OCR backup exists.
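
Before doing anything else it is worth confirming that a usable OCR backup actually exists (a quick check; the auto filter argument is available in recent releases):

[grid@xifenfei ~]$ ocrconfig -showbackup auto
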
Confirm that the OCR, voting disk, and ASM spfile live in a single dedicated ASM disk group

[grid@xifenfei ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1360
         Available space (kbytes) :     408208
         ID                       : 2132096904
         Device/File Name         :    +DG_SYS
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
SQL> show parameter spfile;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DG_SYS/xff-cluster/ASMPARAMET
                                                 ERFILE/registry.253.825628977
[grid@xifenfei ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   3e20d13ae98a4fcfbffa489ab4df68a3 (/dev/sdc2) [DG_SYS]
Located 1 voting disk(s).
ASMCMD>  lsdsk -t -G dg_sys
Create_Date  Mount_Date  Repair_Timer  Path
08-SEP-13    08-SEP-13   0             /dev/sdc2

Check the current RAC status

[grid@xifenfei ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       xifenfei                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.net1.network
               ONLINE  ONLINE       xifenfei                 STABLE
ora.ons
               ONLINE  ONLINE       xifenfei                 STABLE
ora.proxy_advm
               ONLINE  OFFLINE      xifenfei                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       xifenfei                 169.254.196.108 10.1
                                                             0.30.22,STABLE
ora.asm
      1        ONLINE  ONLINE       xifenfei                 Started,STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cdb.db
      1        ONLINE  ONLINE       xifenfei                 Open,STABLE
ora.cvu
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.oc4j
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.xifenfei.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
--------------------------------------------------------------------------------

Inspect the disk header with kfed

[grid@xifenfei ~]$ kfed read /dev/sdc2
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  2879801080 ; 0x00c: 0xaba646f8
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                202375168 ; 0x020: 0x0c100000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:             DG_SYS_0000 ; 0x028: length=11
kfdhdb.grpname:                  DG_SYS ; 0x048: length=6
kfdhdb.fgname:              DG_SYS_0000 ; 0x068: length=11
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             32990483 ; 0x0a8: HOUR=0x13 DAYS=0x8 MNTH=0x9 YEAR=0x7dd
kfdhdb.crestmp.lo:            303455232 ; 0x0ac: USEC=0x0 MSEC=0x197 SECS=0x21 MINS=0x4
kfdhdb.mntstmp.hi:             32990485 ; 0x0b0: HOUR=0x15 DAYS=0x8 MNTH=0x9 YEAR=0x7dd
kfdhdb.mntstmp.lo:           1776845824 ; 0x0b4: USEC=0x0 MSEC=0x221 SECS=0x1e MINS=0x1a
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                    5451 ; 0x0c4: 0x0000154b
kfdhdb.pmcnt:                         3 ; 0x0c8: 0x00000003
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                     10 ; 0x0d4: 0x0000000a
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             32990483 ; 0x0e4: HOUR=0x13 DAYS=0x8 MNTH=0x9 YEAR=0x7dd
kfdhdb.grpstmp.lo:            301063168 ; 0x0e8: USEC=0x0 MSEC=0x77 SECS=0x1f MINS=0x4
kfdhdb.vfstart:                     224 ; 0x0ec: 0x000000e0
kfdhdb.vfend:                       256 ; 0x0f0: 0x00000100
kfdhdb.spfile:                      219 ; 0x0f4: 0x000000db   ---- starting AU of the ASM spfile
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001
kfdhdb.flags:                         1 ; 0x0fc: 0x00000001
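
With kfdhdb.spfile = 219, kfdhdb.spfflg = 1 (an spfile is present), and a 1 MB AU (kfdhdb.ausize above), the ASM spfile can be pulled straight off the disk with dd before doing anything destructive; a sketch (output path assumed):

[grid@xifenfei ~]$ dd if=/dev/sdc2 of=/tmp/asm_spfile.au bs=1048576 skip=219 count=1
[grid@xifenfei ~]$ strings /tmp/asm_spfile.au     # the parameter text is readable inside the AU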

Back up the OCR

[root@xifenfei xff-cluster]# ocrconfig  -manualbackup
xifenfei     2013/09/08 23:48:57     /u01/app/12.1/grid/product/cdata/xff-cluster/backup_20130908_234857.ocr
[root@xifenfei xff-cluster]# ocrconfig -showbackup
xifenfei     2013/08/08 21:11:00     /u01/app/12.1/grid/product/cdata/xifenfe-cluster/backup00.ocr
xifenfei     2013/08/08 17:10:56     /u01/app/12.1/grid/product/cdata/xifenfe-cluster/backup01.ocr
xifenfei     2013/07/08 20:23:18     /u01/app/12.1/grid/product/cdata/xifenfe-cluster/backup02.ocr
xifenfei     2013/08/08 17:10:56     /u01/app/12.1/grid/product/cdata/xifenfe-cluster/day.ocr
xifenfei     2013/08/08 17:10:56     /u01/app/12.1/grid/product/cdata/xifenfe-cluster/week.ocr
xifenfei     2013/09/08 23:48:57     /u01/app/12.1/grid/product/cdata/xff-cluster/backup_20130908_234857.ocr
xifenfei     2013/06/28 22:55:02     /u01/app/12.1/grid/product/cdata/xifenfe-cluster/backup_20130628_225502.ocr

Corrupt the ASM disk (to simulate the failure)

[grid@xifenfei ~]$ dd if=/dev/zero of=/dev/sdc2 bs=4096 count=1
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 6.6061e-05 seconds, 62.0 MB/s
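
Re-reading the header shows the effect of the wipe: with block 0 zeroed, kfed should now report KFBTYP_INVALID instead of KFBTYP_DISKHEAD (a verification sketch):

[grid@xifenfei ~]$ kfed read /dev/sdc2 | grep kfbh.type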

Stop CRS

[root@xifenfei xff-cluster]# crsctl stop crs

Start CRS

[root@xifenfei xff-cluster]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[grid@xifenfei admin]$ crsctl status res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  OFFLINE                               Instance Shutdown,ST
                                                             ABLE
ora.cluster_interconnect.haip
      1        ONLINE  OFFLINE                               STABLE
ora.crf
      1        ONLINE  OFFLINE                               STABLE
ora.crsd
      1        ONLINE  OFFLINE                               STABLE
ora.cssd
      1        ONLINE  OFFLINE      xifenfei                 STARTING
ora.cssdmonitor
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.ctssd
      1        ONLINE  OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.evmd
      1        ONLINE  INTERMEDIATE xifenfei                 STABLE
ora.gipcd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.gpnpd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.mdnsd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.storage
      1        ONLINE  OFFLINE                               STABLE
--------------------------------------------------------------------------------

Relevant GI logs

--alert log
2013-09-08 23:53:37.662:
[gpnpd(1507)]CRS-2328:GPNPD started on node xifenfei.
2013-09-08 23:54:10.244:
[cssd(1584)]CRS-1713:CSSD daemon is started in hub mode
2013-09-08 23:54:10.915:
[cssd(1584)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/12.1/grid/product/log/xifenfei/cssd/ocssd.log
2013-09-08 23:54:11.183:
[ohasd(1367)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2013-09-08 23:54:11.183:
[ohasd(1367)]CRS-2769:Unable to failover resource 'ora.diskmon'.
2013-09-08 23:54:26.044:
[cssd(1584)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/12.1/grid/product/log/xifenfei/cssd/ocssd.log
2013-09-08 23:54:41.146:
[cssd(1584)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/12.1/grid/product/log/xifenfei/cssd/ocssd.log
2013-09-08 23:54:56.195:
[cssd(1584)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/12.1/grid/product/log/xifenfei/cssd/ocssd.log
--ocssd log
2013-09-08 23:54:25.976: [    GPNP][1090226496]clsgpnp_profileCallUrlInt: [at clsgpnp.c:2360] Result: (0) CLSGPNP_OK. Successful get-profile CALL to remote "ipc://GPNPD_xifenfei" disco ""
2013-09-08 23:54:25.976: [    CSSD][1090226496]clssnmReadDiscoveryProfile: voting file discovery string(/dev/sd*)
2013-09-08 23:54:25.976: [    CSSD][1090226496]clssnmvDDiscThread: using discovery string /dev/sd* for initial discovery
2013-09-08 23:54:25.976: [   SKGFD][1090226496]Discovery with str:/dev/sd*:
2013-09-08 23:54:25.976: [   SKGFD][1090226496]UFS discovery with :/dev/sd*:
2013-09-08 23:54:26.032: [   SKGFD][1090226496]Fetching UFS disk :/dev/sdc:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sdc1:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sdb:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sdc2:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sdd:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sdd1:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sda:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sda1:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sda2:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]OSS discovery with :/dev/sd*:
2013-09-08 23:54:26.042: [   SKGFD][1090226496]Handle 0x1d65c10 from lib :UFS:: for disk :/dev/sdb:
2013-09-08 23:54:26.043: [   SKGFD][1090226496]Handle 0x20c95a0 from lib :UFS:: for disk :/dev/sdc2:
2013-09-08 23:54:26.043: [   SKGFD][1090226496]Handle 0x20c9dd0 from lib :UFS:: for disk :/dev/sdd:
2013-09-08 23:54:26.044: [   SKGFD][1090226496]Lib :UFS:: closing handle 0x1d65c10 for disk :/dev/sdb:
2013-09-08 23:54:26.044: [   SKGFD][1090226496]Lib :UFS:: closing handle 0x20c95a0 for disk :/dev/sdc2:
2013-09-08 23:54:26.044: [   SKGFD][1090226496]Lib :UFS:: closing handle 0x20c9dd0 for disk :/dev/sdd:
2013-09-08 23:54:26.044: [    CSSD][1090226496]clssnmvDiskVerify: Successful discovery of 0 disks
2013-09-08 23:54:26.044: [    CSSD][1090226496]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
2013-09-08 23:54:26.044: [    CSSD][1090226496]clssnmvFindInitialConfigs: No voting files found
2013-09-08 23:54:26.044: [    CSSD][1090226496](:CSSNM00070:)clssnmCompleteInitVFDiscovery: Voting file not found. Retrying discovery in 15 seconds

With the ASM disk that held the OCR destroyed, CRS startup clearly reports that it cannot find any voting files.

Force-stop CRS

[root@xifenfei xff-cluster]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'xifenfei'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'xifenfei'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'xifenfei'
CRS-2677: Stop of 'ora.drivers.acfs' on 'xifenfei' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'xifenfei' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'xifenfei'
CRS-2673: Attempting to stop 'ora.gipcd' on 'xifenfei'
CRS-2673: Attempting to stop 'ora.evmd' on 'xifenfei'
CRS-2677: Stop of 'ora.gpnpd' on 'xifenfei' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'xifenfei' succeeded
CRS-2677: Stop of 'ora.evmd' on 'xifenfei' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'xifenfei' has completed
CRS-4133: Oracle High Availability Services has been stopped.

Start CRS in exclusive mode

[root@xifenfei xff-cluster]# crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'xifenfei'
CRS-2677: Stop of 'ora.drivers.acfs' on 'xifenfei' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'xifenfei'
CRS-2672: Attempting to start 'ora.mdnsd' on 'xifenfei'
CRS-2676: Start of 'ora.evmd' on 'xifenfei' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'xifenfei' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'xifenfei'
CRS-2676: Start of 'ora.gpnpd' on 'xifenfei' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'xifenfei'
CRS-2672: Attempting to start 'ora.gipcd' on 'xifenfei'
CRS-2676: Start of 'ora.cssdmonitor' on 'xifenfei' succeeded
CRS-2676: Start of 'ora.gipcd' on 'xifenfei' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'xifenfei'
CRS-2672: Attempting to start 'ora.diskmon' on 'xifenfei'
CRS-2676: Start of 'ora.diskmon' on 'xifenfei' succeeded
CRS-2676: Start of 'ora.cssd' on 'xifenfei' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'xifenfei'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'xifenfei'
CRS-2672: Attempting to start 'ora.ctssd' on 'xifenfei'
CRS-2676: Start of 'ora.ctssd' on 'xifenfei' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'xifenfei' succeeded
CRS-2676: Start of 'ora.drivers.acfs' on 'xifenfei' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'xifenfei'
CRS-2676: Start of 'ora.asm' on 'xifenfei' succeeded
[grid@xifenfei xifenfei]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  INTERMEDIATE xifenfei                 OCR not started,STAB
                                                             LE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.crf
      1        OFFLINE OFFLINE                               STABLE
ora.crsd
      1        OFFLINE OFFLINE                               STABLE
ora.cssd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       xifenfei                 ACTIVE:0,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.evmd
      1        ONLINE  INTERMEDIATE xifenfei                 STABLE
ora.gipcd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.gpnpd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.mdnsd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.storage
      1        OFFLINE OFFLINE                               STABLE

Recreate the disk group

[grid@xifenfei xifenfei]$ sqlplus / as sysasm
SQL*Plus: Release 12.1.0.1.0 Production on Mon Sep 9 00:23:40 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> create diskgroup DG_OCR external redundancy disk '/dev/sdc2' attribute 'COMPATIBLE.ASM' = '12.1.0';
Diskgroup created.
[root@xifenfei xff-cluster]# ls -l
total 1472
-rw-r--r-- 1 root root 1503232 Sep  8 23:48 backup_20130908_234857.ocr
[root@xifenfei xff-cluster]# ocrconfig -restore backup_20130908_234857.ocr
PROT-35: The configured OCR locations are not accessible
SQL> conn / as sysasm
Connected.
SQL> drop diskgroup DG_OCR force including contents;
Diskgroup dropped.
SQL>  create diskgroup DG_SYS  external redundancy disk '/dev/sdc2' attribute 'COMPATIBLE.ASM' = '12.1.0';
Diskgroup created.

For simplicity, recreate the disk group with the same name as the failed disk group that held the OCR; as the PROT-35 error above shows, restoring into a disk group with a different name fails because the configured OCR location is still +DG_SYS.
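
The PROT-35 is expected behavior: ocrconfig restores into the location recorded in the local OCR registry file, which still points at +DG_SYS; a quick check (contents shown are what this environment should hold):

[root@xifenfei ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=+DG_SYS
local_only=FALSE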

Restore the OCR

[root@xifenfei xff-cluster]# ocrconfig -restore backup_20130908_234857.ocr
--alert log
2013-09-09 00:26:50.584:
[client(3015)]CRS-1002:The OCR was restored from file backup_20130908_234857.ocr.

Restore the voting disk

[root@xifenfei xff-cluster]# crsctl replace votedisk +DG_SYS
Successful addition of voting disk 60a037da30714f6bbfe5d90206ff27a7.
Successfully replaced voting disk group with +DG_SYS.
CRS-4266: Voting file(s) successfully replaced

Recreate the ASM spfile

[grid@xifenfei dbs]$ vi /tmp/asm.txt
instance_type='asm'
large_pool_size=12M
remote_login_passwordfile= "EXCLUSIVE"
asm_diskstring           = "/dev/sd*"
asm_power_limit          = 1
[grid@xifenfei dbs]$ sqlplus '/ as sysasm'
SQL*Plus: Release 12.1.0.1.0 Production on Mon Sep 9 00:34:18 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL>  create spfile='+DG_SYS' FROM pfile='/tmp/asm.txt';
File created.
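
The GPnP profile should now reference the new spfile; asmcmd spget reads the registered location (a quick check, and the file name should match what show parameter reports after the restart below):

[grid@xifenfei dbs]$ asmcmd spget
+DG_SYS/xff-cluster/ASMPARAMETERFILE/registry.253.825640465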

Restart CRS

[root@xifenfei xff-cluster]# crsctl stop crs -f
[root@xifenfei xff-cluster]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[grid@xifenfei dbs]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       xifenfei                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.net1.network
               ONLINE  ONLINE       xifenfei                 STABLE
ora.ons
               ONLINE  ONLINE       xifenfei                 STABLE
ora.proxy_advm
               ONLINE  OFFLINE      xifenfei                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       xifenfei                 169.254.196.108 10.1
                                                             0.30.22,STABLE
ora.asm
      1        ONLINE  ONLINE       xifenfei                 Started,STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cdb.db
      1        ONLINE  ONLINE       xifenfei                 Open,STABLE
ora.cvu
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.oc4j
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.xifenfei.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
--------------------------------------------------------------------------------

CRS is back to normal at this point; verify the OCR, voting disk, and ASM spfile further.

[grid@xifenfei ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1380
         Available space (kbytes) :     408188
         ID                       : 2132096904
         Device/File Name         :    +DG_SYS
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
[grid@xifenfei ~]$ sqlplus / as sysasm
SQL*Plus: Release 12.1.0.1.0 Production on Mon Sep 9 16:12:21 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> show parameter spfile;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DG_SYS/xff-cluster/ASMPARAMET
                                                 ERFILE/registry.253.825640465
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
[grid@xifenfei ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   60a037da30714f6bbfe5d90206ff27a7 (/dev/sdc2) [DG_SYS]
Located 1 voting disk(s).

This completes the recovery of a failed OCR disk group when an OCR backup exists. Without an OCR backup, the only option is to rebuild the OCR by deconfiguring and reconfiguring the clusterware, roughly as follows:

--deconfigure (as root)
remote node
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose
lastnode
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode
--rebuild the OCR and related configuration (as grid)
# $GRID_HOME/crs/config/config.sh
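
Note that config.sh only rebuilds the clusterware stack; database, instance, and service resources previously stored in the OCR then have to be registered again by hand, e.g. (a sketch for this environment, ORACLE_HOME path assumed):

srvctl add database -d cdb -o /u01/app/oracle/product/12.1/db_1
srvctl add instance -d cdb -i cdb1 -n xifenfei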

RAC Hang Caused by Accidentally Killed Processes

A customer reported that their system was hung and could not archive redo, and asked us to step in urgently for analysis.
Node 1 logs
Redo could not be archived and every online log group was full. After a manual ALTER SYSTEM ARCHIVE LOG CURRENT the database archived all pending redo, but newly generated redo again could not be archived; once all the groups filled up again, the database showed large numbers of log file switch (archiving needed) waits.
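
The redo state can be confirmed directly from the instance; when every group shows as unarchived, no further log switch is possible (a diagnostic sketch):

SQL> select group#, thread#, sequence#, archived, status from v$log order by thread#, sequence#;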

Tue Sep 24 22:05:37 2013
Thread 1 advanced to log sequence 47282 (LGWR switch)
  Current log# 6 seq# 47282 mem# 0: +DATA/q9db/onlinelog/group_6.1244.818697409
Tue Sep 24 22:07:31 2013
ORACLE Instance q9db1 - Can not allocate log, archival required
Thread 1 cannot allocate new log, sequence 47283
All online logs needed archiving
  Current log# 6 seq# 47282 mem# 0: +DATA/q9db/onlinelog/group_6.1244.818697409
Tue Sep 24 22:28:17 2013
ALTER SYSTEM ARCHIVE LOG
Archived Log entry 259646 added for thread 1 sequence 47266 ID 0x354620c2 dest 1:
Tue Sep 24 22:28:18 2013
Thread 1 advanced to log sequence 47283 (LGWR switch)
  Current log# 7 seq# 47283 mem# 0: +DATA/q9db/onlinelog/group_7.1243.818697415
Archived Log entry 259647 added for thread 1 sequence 47267 ID 0x354620c2 dest 1:
Archived Log entry 259648 added for thread 1 sequence 47268 ID 0x354620c2 dest 1:
Archived Log entry 259649 added for thread 1 sequence 47269 ID 0x354620c2 dest 1:
Archived Log entry 259650 added for thread 1 sequence 47270 ID 0x354620c2 dest 1:
Archived Log entry 259651 added for thread 1 sequence 47271 ID 0x354620c2 dest 1:
Archived Log entry 259652 added for thread 1 sequence 47272 ID 0x354620c2 dest 1:
Tue Sep 24 22:28:28 2013
Archived Log entry 259653 added for thread 1 sequence 47273 ID 0x354620c2 dest 1:
Archived Log entry 259654 added for thread 1 sequence 47274 ID 0x354620c2 dest 1:
Archived Log entry 259655 added for thread 1 sequence 47275 ID 0x354620c2 dest 1:
Archived Log entry 259656 added for thread 1 sequence 47276 ID 0x354620c2 dest 1:
Archived Log entry 259657 added for thread 1 sequence 47277 ID 0x354620c2 dest 1:
Archived Log entry 259658 added for thread 1 sequence 47278 ID 0x354620c2 dest 1:
Archived Log entry 259659 added for thread 1 sequence 47279 ID 0x354620c2 dest 1:
Tue Sep 24 22:28:39 2013
Archived Log entry 259660 added for thread 1 sequence 47280 ID 0x354620c2 dest 1:
Archived Log entry 259661 added for thread 1 sequence 47281 ID 0x354620c2 dest 1:
Archived Log entry 259662 added for thread 1 sequence 47282 ID 0x354620c2 dest 1:
Tue Sep 24 22:29:39 2013
Thread 1 advanced to log sequence 47284 (LGWR switch)
  Current log# 8 seq# 47284 mem# 0: +DATA/q9db/onlinelog/group_8.1242.818697417
Tue Sep 24 22:31:18 2013
Thread 1 advanced to log sequence 47285 (LGWR switch)
  Current log# 16 seq# 47285 mem# 0: +DATA/q9db/onlinelog/group_16.1884.827003545
Thread 1 advanced to log sequence 47286 (LGWR switch)
  Current log# 17 seq# 47286 mem# 0: +DATA/q9db/onlinelog/group_17.1885.827003587

Node 2 logs
Node 2 reported a large number of IPC Send timeout errors.

Tue Sep 24 15:22:19 2013
IPC Send timeout detected. Sender: ospid 4008 [oracle@q9db02.800best.com (PING)]
…………
Tue Sep 24 18:51:55 2013
IPC Send timeout detected. Sender: ospid 4008 [oracle@q9db02.800best.com (PING)]
Tue Sep 24 18:57:54 2013
IPC Send timeout detected. Sender: ospid 4008 [oracle@q9db02.800best.com (PING)]
Receiver: inst 1 binc 464003926 ospid 1566
Tue Sep 24 19:03:57 2013
IPC Send timeout detected. Sender: ospid 4008 [oracle@q9db02.800best.com (PING)]
Receiver: inst 1 binc 464003926 ospid 1566
Tue Sep 24 19:09:53 2013
IPC Send timeout detected. Sender: ospid 4008 [oracle@q9db02.800best.com (PING)]
…………
Tue Sep 24 20:22:00 2013
IPC Send timeout detected. Sender: ospid 4008 [oracle@q9db02.800best.com (PING)]

Node 1 hung because it could not archive, and node 2 hung right behind it. While node 1 was hung, systemstate dumps were taken on both nodes and analyzed with the ass.awk script; the summarized output for the two nodes is roughly as follows.
Node 1

393:waiting for 'log file switch (archiving needed)'
394:waiting for 'log file switch (archiving needed)'
     Cmd: Insert
395:waiting for 'log file switch (archiving needed)'
     Cmd: Insert
397:waiting for 'log file switch (archiving needed)'
     Cmd: Insert
398:waiting for 'log file switch (archiving needed)'
     Cmd: Insert
451:waiting for 'SQL*Net message from client'
469:waiting for 'log file switch (archiving needed)'
     Cmd: Insert
470:waiting for 'log file switch (archiving needed)'
     Cmd: Insert
471:waiting for 'log file switch (archiving needed)'
     Cmd: Insert
618:waiting for 'log file switch (archiving needed)'
     Cmd: Insert
626:waiting for 'log file switch (archiving needed)'
     Cmd: Insert
NO BLOCKING PROCESSES FOUND

Node 2

515:waiting for 'gc buffer busy acquire'
     Cmd: Insert
516:waiting for 'gc buffer busy acquire'
     Cmd: Insert
517:waiting for 'gc buffer busy acquire'
     Cmd: Insert
518:waiting for 'gc buffer busy acquire'
     Cmd: Insert
519:waiting for 'gc buffer busy acquire'
     Cmd: Insert
520:waiting for 'gc buffer busy acquire'
     Cmd: Select
521:waiting for 'gc current request'
     Cmd: Insert
522:waiting for 'enq: TX - row lock contention'[Enq TX-00BA0020-001E3E3C]
     Cmd: Select
523:waiting for 'gc buffer busy acquire'
     Cmd: Insert
524:waiting for 'SQL*Net message from client'
525:waiting for 'gc buffer busy acquire'
     Cmd: Insert
526:waiting for 'gc buffer busy acquire'
     Cmd: Insert
527:waiting for 'enq: TX - row lock contention'[Enq TX-00BA0020-001E3E3C]
     Cmd: Select
528:waiting for 'SQL*Net message from client'
529:waiting for 'gc buffer busy acquire'
     Cmd: Select
                    Resource Holder State
    Enq TX-0005001E-0022374F   223: waiting for 'gc current request'
    Enq TX-0047001B-002BCEB2   247: waiting for 'gc current request'
    Enq TX-015B001E-000041FF   330: waiting for 'gc current request'
    Enq TX-00010010-002EA7CD   179: waiting for 'gc current request'
    Enq TX-00BA0020-001E3E3C    ??? Blocker
Object Names
~~~~~~~~~~~~
Enq TX-0005001E-0022374F
Enq TX-0047001B-002BCEB2
Enq TX-015B001E-000041FF
Enq TX-00010010-002EA7CD
Enq TX-00BA0020-001E3E3C

From this it is clear why node 2 hung: many of its transactions were waiting on gc current request, and that wait never completed because node 1 could not archive, so some blocks could not be shipped over to node 2; that stall is also where the IPC Send timeouts came from. The blocked and hung transactions on node 1 were direct victims of the archiving failure. What remains is to work out why node 1 could not archive.
Continuing with node 1's alert log

Tue Sep 24 15:18:20 2013
opidrv aborting process O000 ospid (7332) as a result of ORA-28
Immediate Kill Session#: 1904, Serial#: 1065
Immediate Kill Session: sess: 0x24a2522a38  OS pid: 7338
Immediate Kill Session#: 3597, Serial#: 11107
Immediate Kill Session: sess: 0x24c27cf498  OS pid: 7320
Tue Sep 24 15:18:23 2013
opidrv aborting process W000 ospid (7980) as a result of ORA-28
Tue Sep 24 15:18:23 2013
opidrv aborting process W001 ospid (8560) as a result of ORA-28
Tue Sep 24 15:18:35 2013
LGWR: Detected ARCH process failure
LGWR: Detected ARCH process failure
LGWR: Detected ARCH process failure
LGWR: Detected ARCH process failure
LGWR: STARTING ARCH PROCESSES
Tue Sep 24 15:18:35 2013
ARC0 started with pid=66, OS id=10793
Tue Sep 24 15:18:35 2013
Errors in file /u01/app/oracle/diag/rdbms/q9db/q9db1/trace/q9db1_nsa2_12635.trc:
ORA-00028: your session has been killed
LNS: Failed to archive log 8 thread 1 sequence 47156 (28)
ARC0: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 advanced to log sequence 47157 (LGWR switch)
ARC0: STARTING ARCH PROCESSES
  Current log# 9 seq# 47157 mem# 0: +DATA/q9db/onlinelog/group_9.1241.818697421
Tue Sep 24 15:18:36 2013
ARC1 started with pid=81, OS id=10805
Tue Sep 24 15:18:36 2013
ARC2 started with pid=84, OS id=10807
Tue Sep 24 15:18:36 2013
ARC3 started with pid=87, OS id=10809
ARC1: Archival started
ARC2: Archival started
ARC2: Becoming the 'no FAL' ARCH
ARC2: Becoming the 'no SRL' ARCH
ARC1: Becoming the heartbeat ARCH
Error 1031 received logging on to the standby
PING[ARC1]: Heartbeat failed to connect to standby 'q9adgdg'. Error is 1031.
ARC3: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
Archived Log entry 259135 added for thread 1 sequence 47156 ID 0x354620c2 dest 1:
Error 1031 received logging on to the standby
FAL[server, ARC3]: Error 1031 creating remote archivelog file 'q9adgdg'
FAL[server, ARC3]: FAL archive failed, see trace file.
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance q9db1 - Archival Error. Archiver continuing.
Tue Sep 24 15:18:46 2013
opidrv aborting process O001 ospid (9605) as a result of ORA-28
Tue Sep 24 15:18:46 2013
opidrv aborting process O000 ospid (10813) as a result of ORA-28
Tue Sep 24 15:18:46 2013
Immediate Kill Session#: 2909, Serial#: 369
Immediate Kill Session: sess: 0x24226c7200  OS pid: 9091
Immediate Kill Session#: 3380, Serial#: 30271
Immediate Kill Session: sess: 0x2422782c58  OS pid: 10265
Immediate Kill Session#: 3597, Serial#: 11109
Immediate Kill Session: sess: 0x24c27cf498  OS pid: 10267
Tue Sep 24 15:20:14 2013
Restarting dead background process DIAG
Tue Sep 24 15:20:14 2013
DIAG started with pid=64, OS id=20568
Restarting dead background process PING
Tue Sep 24 15:20:14 2013
PING started with pid=68, OS id=20570
Restarting dead background process LMHB
Tue Sep 24 15:20:14 2013
LMHB started with pid=70, OS id=20572
Restarting dead background process SMCO
…………
Tue Sep 24 15:23:13 2013
ARC0: Detected ARCH process failure
Tue Sep 24 15:23:13 2013
Thread 1 advanced to log sequence 47158 (LGWR switch)
  Current log# 10 seq# 47158 mem# 0: +DATA/q9db/onlinelog/group_10.1240.818697423
ARC0: STARTING ARCH PROCESSES
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC0: Becoming the heartbeat ARCH
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance q9db1 - Archival Error
ORA-00028: your session has been killed

Check the ARCn processes

[oracle@q9db01 ~]$ ps -ef|grep ora_ar
oracle   20718 12870  0 22:07 pts/14   00:00:00 grep ora_ar
[oracle@q9db01 ~]$ ps -ef|grep ora_ar
oracle   25998 12870  0 22:07 pts/14   00:00:00 grep ora_ar
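
The same check can be made from inside the instance (a sketch; on a healthy instance the ARCn slots show ACTIVE here):

SQL> select inst_id, process, status from gv$archive_processes where status != 'STOPPED';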

Now the picture is essentially clear. Starting around 15:15, a middleware fault flooded the database with sessions. To keep other services from being affected, the DBA issued a large number of ALTER SYSTEM KILL SESSION commands and in the process killed several background processes by mistake, including the ARCn processes (0-3). For some reason the ARCn processes then failed to restart, so redo could not be archived, and once every redo group was full the system hung. The mass killing had already left the instance itself in an abnormal state (normally ARCn processes restart automatically after being killed). The workaround: add more redo log groups combined with scheduled manual archiving, then restart node 1 during a low-traffic window to resolve the problem. A friendly reminder: be careful when killing processes.

Installing a Single-Node ORACLE 12C RAC

Anyone who has installed ORACLE 12C RAC will have noticed that it is a monster, consuming a great deal of memory, I/O, and CPU. Without physical hardware, running 12C RAC in virtual machines needs a fairly well-resourced host (8 GB of memory, leaving aside what the host itself needs, is only just enough to run the two node VMs, so an 8 GB host essentially cannot run a two-node ORACLE 12C RAC). For anyone without such resources who still wants to play with 12C RAC features, this post shows a single-node RAC (a RAC with just one node), focusing on where the installation differs from a multi-node RAC. To be clear, this setup is purely for experimentation.
Memory requirements
(screenshot: 12c RAC memory requirements)


Prepare the environment

[oracle@xifenfei ~]$ more /etc/oracle-release
Oracle Linux Server release 5.8
[oracle@xifenfei ~]$ free -m
             total       used       free     shared    buffers     cached
Mem:          4350       4036        313          0          4       2805
-/+ buffers/cache:       1226       3124
Swap:         2047        853       1193
[oracle@xifenfei ~]$ /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:02:1B:A7
          inet addr:192.168.30.22  Bcast:192.168.30.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:66152 errors:0 dropped:0 overruns:0 frame:0
          TX packets:83647 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:21143877 (20.1 MiB)  TX bytes:44756747 (42.6 MiB)
eth1      Link encap:Ethernet  HWaddr 00:0C:29:02:1B:B1
          inet addr:10.10.30.22  Bcast:10.10.30.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8064 errors:0 dropped:0 overruns:0 frame:0
          TX packets:366 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:645158 (630.0 KiB)  TX bytes:64125 (62.6 KiB)
[oracle@xifenfei ~]$ more /etc/hosts
127.0.0.1       localhost.localdomain localhost
10.10.30.22     xifenfei-priv
192.168.30.22   xifenfei
192.168.30.32   xifenfei-vip
192.168.30.42   scan-ip
[root@xifenfei ~]# yum install oracle-validated
[root@xifenfei rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm

Install the GRID software
(screenshots: GRID software installation steps)


Install the ORACLE DB software
(screenshot: DB software installation)


Create the database
(screenshot: dbca database creation)


Installation result

[root@xifenfei ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       xifenfei                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.net1.network
               ONLINE  ONLINE       xifenfei                 STABLE
ora.ons
               ONLINE  ONLINE       xifenfei                 STABLE
ora.proxy_advm
               ONLINE  ONLINE       xifenfei                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       xifenfei                 169.254.243.16 10.10
                                                             .30.22,STABLE
ora.asm
      1        ONLINE  ONLINE       xifenfei                 STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cdb.db
      1        ONLINE  ONLINE       xifenfei                 Open,STABLE
ora.cvu
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       xifenfei                 Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.xifenfei.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
--------------------------------------------------------------------------------
[root@xifenfei ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:02:1B:A7
          inet addr:192.168.30.22  Bcast:192.168.30.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:66419 errors:0 dropped:0 overruns:0 frame:0
          TX packets:83849 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:21168865 (20.1 MiB)  TX bytes:44798962 (42.7 MiB)
eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:02:1B:A7
          inet addr:192.168.30.42  Bcast:192.168.30.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
eth0:2    Link encap:Ethernet  HWaddr 00:0C:29:02:1B:A7
          inet addr:192.168.30.32  Bcast:192.168.30.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
eth1      Link encap:Ethernet  HWaddr 00:0C:29:02:1B:B1
          inet addr:10.10.30.22  Bcast:10.10.30.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12244 errors:0 dropped:0 overruns:0 frame:0
          TX packets:368 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:994988 (971.6 KiB)  TX bytes:64536 (63.0 KiB)
eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:02:1B:B1
          inet addr:169.254.243.16  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:91800 errors:0 dropped:0 overruns:0 frame:0
          TX packets:91800 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:68999524 (65.8 MiB)  TX bytes:68999524 (65.8 MiB)

Configuring a Second Listener on 11gR2 GI

A customer generates a very large volume of archived logs every day and, to avoid impacting the business, wanted a dedicated 10-gigabit network purely for shipping archived logs to the DG standby. That means adding a listener in the 11g (11.2.0.3, Linux) RAC that uses the dedicated network. Below is the complete procedure for configuring the second listener on the primary: configure name resolution, add the network, add the VIPs, configure the listener, and set listener_networks.
网卡情况

[oracle@q9db01 admin]$ /sbin/ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 90:E2:BA:1E:14:34
          inet addr:192.168.5.60  Bcast:192.168.5.255  Mask:255.255.255.0
          inet6 addr: fe80::92e2:baff:fe1e:1434/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3932094203 errors:0 dropped:0 overruns:0 frame:0
          TX packets:176073749 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5752649063554 (5.2 TiB)  TX bytes:31298228144 (29.1 GiB)
[grid@q9db02 ~]$ /sbin/ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 90:E2:BA:1E:13:5C
          inet addr:192.168.5.61  Bcast:192.168.5.255  Mask:255.255.255.0
          inet6 addr: fe80::92e2:baff:fe1e:135c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:22861938 errors:0 dropped:0 overruns:0 frame:0
          TX packets:187447459 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2603356463 (2.4 GiB)  TX bytes:276672018723 (257.6 GiB)

Configure the hosts file

#public dg ip
192.168.5.60     q9db01-dg
192.168.5.61     q9db02-dg
192.168.5.64    q9db01-dg-vip
192.168.5.65    q9db02-dg-vip

Configure resources in CRS

--add the network resource
[root@q9db01 ~]# srvctl add network -k 2 -S 192.168.5.0/255.255.255.0/eth2 -w static -v
--start the network resource
[root@q9db01 ~]# crsctl start res ora.net2.network
--add the VIP resources
[root@q9db01 ~]# srvctl add vip -n q9db01 -A 192.168.5.64/255.255.255.0 -k 2
[root@q9db01 ~]# srvctl add vip -n q9db02 -A 192.168.5.65/255.255.255.0 -k 2
--start the VIP resources
[root@q9db01 ~]# srvctl start vip -i q9db01-dg-vip
[root@q9db01 ~]# srvctl start vip -i q9db02-dg-vip
--create the listener with netca
[root@q9db01 ~]# su - grid
[grid@q9db01 ~]$ export DISPLAY=172.18.50.150:0.0
[grid@q9db01 ~]$ netca
------select Subnet 2 (the 192.168.5.60 network segment) and a port other than 1521---------
--check resource status
[grid@q9db01 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       q9db01
               ONLINE  ONLINE       q9db02
ora.DATA.dg
               ONLINE  ONLINE       q9db01
               ONLINE  ONLINE       q9db02
ora.LISTENER.lsnr
               ONLINE  ONLINE       q9db01
               ONLINE  ONLINE       q9db02
ora.LISTENER_DG.lsnr
               ONLINE  ONLINE       q9db01
               ONLINE  ONLINE       q9db02
ora.OCR_VOTE.dg
               ONLINE  ONLINE       q9db01
               ONLINE  ONLINE       q9db02
ora.asm
               ONLINE  ONLINE       q9db01                   Started
               ONLINE  ONLINE       q9db02                   Started
ora.gsd
               OFFLINE OFFLINE      q9db01
               OFFLINE OFFLINE      q9db02
ora.net1.network
               ONLINE  ONLINE       q9db01
               ONLINE  ONLINE       q9db02
ora.net2.network
               ONLINE  ONLINE       q9db01
               ONLINE  ONLINE       q9db02
ora.ons
               ONLINE  ONLINE       q9db01
               ONLINE  ONLINE       q9db02
ora.registry.acfs
               ONLINE  ONLINE       q9db01
               ONLINE  ONLINE       q9db02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       q9db01
ora.cvu
      1        ONLINE  ONLINE       q9db01
ora.oc4j
      1        ONLINE  ONLINE       q9db01
ora.q9db.db
      1        ONLINE  ONLINE       q9db01                   Open
      2        ONLINE  ONLINE       q9db02                   Open
ora.q9db01-dg-vip.vip
      1        ONLINE  ONLINE       q9db01
ora.q9db01.vip
      1        ONLINE  ONLINE       q9db01
ora.q9db02-dg-vip.vip
      1        ONLINE  ONLINE       q9db02
ora.q9db02.vip
      1        ONLINE  ONLINE       q9db02
ora.scan1.vip
      1        ONLINE  ONLINE       q9db01
--check listener status
[grid@q9db01 ~]$ lsnrctl status listener_dg
LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 19-JUN-2013 14:40:24
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_DG)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_DG
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                19-JUN-2013 14:38:43
Uptime                    0 days 0 hr. 1 min. 42 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/q9db01/listener_dg/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_DG)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.5.64)(PORT=1522)))
The listener supports no services
The command completed successfully

Configure listener_networks

--configure in the RDBMS tnsnames.ora
Q9DB01_LOCAL_NET1 =(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = q9db01-vip )(PORT = 1521)))
Q9DB01_LOCAL_NET2 =(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = q9db01-dg-vip )(PORT = 1522)))
Q9DB02_LOCAL_NET1 =(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = q9db02-vip )(PORT = 1521)))
Q9DB02_LOCAL_NET2 =(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = q9db02-dg-vip )(PORT = 1522)))
Q9DB_REMOTE_NET2 =(DESCRIPTION_LIST =(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = q9db01-dg-vip )
(PORT = 1522)))(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = q9db02-dg-vip )(PORT = 1522))))
--set listener_networks
----node 1
SQL> ALTER SYSTEM SET listener_networks='((NAME=network1)(LOCAL_LISTENER=Q9DB01_LOCAL_NET1)
     (REMOTE_LISTENER=q9dbscan:1521))','((NAME=network2)(LOCAL_LISTENER=Q9DB01_LOCAL_NET2)
     (REMOTE_LISTENER=Q9DB_REMOTE_NET2))'SCOPE=BOTH SID='q9db1';
System altered.
----node 2
SQL> ALTER SYSTEM SET listener_networks='((NAME=network1)(LOCAL_LISTENER=Q9DB02_LOCAL_NET1)
     (REMOTE_LISTENER=q9db-scan:1521))','((NAME=network2)(LOCAL_LISTENER=Q9DB02_LOCAL_NET2)
     (REMOTE_LISTENER=Q9DB_REMOTE_NET2))'SCOPE=BOTH SID='q9db2';
System altered.
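
Verify that the parameter took effect on each instance:

SQL> show parameter listener_networks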

Check the listener status

--node 1
[grid@q9db01 ~]$ lsnrctl status listener_dg
LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 19-JUN-2013 17:12:45
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_DG)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_DG
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                19-JUN-2013 16:47:03
Uptime                    0 days 0 hr. 25 min. 42 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/q9db01/listener_dg/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_DG)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.5.64)(PORT=1522)))
Services Summary...
Service "q9db" has 2 instance(s).
  Instance "q9db1", status READY, has 2 handler(s) for this service...
  Instance "q9db2", status READY, has 1 handler(s) for this service...
The command completed successfully
--node 2
[grid@q9db02 ~]$ lsnrctl status listener_dg
LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 19-JUN-2013 17:12:02
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_DG)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_DG
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                19-JUN-2013 16:52:24
Uptime                    0 days 0 hr. 19 min. 37 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/q9db02/listener_dg/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_DG)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.5.65)(PORT=1522)))
Services Summary...
Service "q9db" has 2 instance(s).
  Instance "q9db1", status READY, has 1 handler(s) for this service...
  Instance "q9db2", status READY, has 2 handler(s) for this service...
The command completed successfully

Test the new listener

--TNS entries in the RDBMS home
q9db1dg=
(DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = q9db01-dg-vip)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = q9db1)
    )
 )
q9db2dg=
(DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = q9db02-dg-vip)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = q9db2)
    )
 )
--verify
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL> conn test/test@q9db1dg
Connected.
SQL> show parameter instance_name
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_name                        string      q9db1
SQL> conn test/test@q9db2dg
Connected.
SQL>  show parameter instance_name
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_name                        string      q9db2

At this point both instances are reachable through the new listener, the listener configuration is complete, and the databases can also talk to each other over the dedicated network.

Unreachable Gateway Causing vip/lsnr Resource Failures

crs_stat shows node 1's listener and VIP flapping (online one moment, offline the next)

rac1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.devdb.db   application    ONLINE    ONLINE    rac1
ora....b1.inst application    ONLINE    ONLINE    rac1
ora....b2.inst application    ONLINE    ONLINE    rac2
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    OFFLINE
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac2
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    OFFLINE
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac1
rac1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.devdb.db   application    ONLINE    ONLINE    rac1
ora....b1.inst application    ONLINE    ONLINE    rac1
ora....b2.inst application    ONLINE    ONLINE    rac2
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    OFFLINE
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac2
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2

Check the crsd.log

0Attempting to start `ora.rac1.vip` on member `rac2`
0Start of `ora.rac1.vip` on member `rac2` failed.
0startRunnable: setting CLI values
0Attempting to start `ora.rac1.vip` on member `rac1`
0Start of `ora.rac1.vip` on member `rac1` succeeded.
0startRunnable: setting CLI values
0Attempting to start `ora.rac1.LISTENER_RAC1.lsnr` on member `rac1`
0Start of `ora.rac1.LISTENER_RAC1.lsnr` on member `rac1` succeeded.
u_freem: mem passed is null
0CheckResource error for ora.rac1.vip error code = 1
0In stateChanged, ora.rac1.vip target is ONLINE
0ora.rac1.vip on rac1 went OFFLINE unexpectedly
0StopResource: setting CLI values
0Attempting to stop `ora.rac1.vip` on member `rac1`
0Stop of `ora.rac1.vip` on member `rac1` succeeded.
0ora.rac1.vip RESTART_COUNT=0 RESTART_ATTEMPTS=0
0ora.rac1.vip failed on rac1 relocating.
0StopResource: setting CLI values
0Attempting to stop `ora.rac1.LISTENER_RAC1.lsnr` on member `rac1`
0Stop of `ora.rac1.LISTENER_RAC1.lsnr` on member `rac1` succeeded.
0Attempting to start `ora.rac1.vip` on member `rac2`
0Start of `ora.rac1.vip` on member `rac2` failed.
0Attempting to start `ora.rac1.vip` on member `rac2`
0Start of `ora.rac1.vip` on member `rac2` succeeded.
0CRS-1002: Resource 'ora.rac1.vip' is already running on member 'rac2'

This shows that the VIP resource failure also brought down the listener resource, after which CRS started the VIP and then the listener again. That is why the two resources appear to be constantly flapping when watched with crs_stat -t.

Analyze the ora.rac1.vip.log

[ora.rac1.vip]: clsrcexecut:env ORACLE_CONFIG_HOME=/u01/app/oracle/product/10.2.0/crs_1
[ora.rac1.vip]: clsrcexecut:cmd=/u01/app/oracle/product/10.2.0/crs_1/bin/racgeut -e
_USR_ORA_DEBUG=0 54 /u01/app/oracle/product/10.2.0/crs_1/bin/racgvip check rac1
[ora.rac1.vip]: clsrcexecut: rc = 1, time = 6.430s
[ora.rac1.vip]: end for resource = ora.rac1.vip, action=check,status=1,time=6.450s
[ora.rac1.vip]: ping to 192.168.1.1 via eth0 failed, rc = 1 (host=rac1)
ping to 192.168.1.1 via eth0 failed, rc = 1 (host=rac1)
[ora.rac1.vip]: clsrcstartorp: Error with malloc
[ora.rac1.vip]: ping to 192.168.1.1 via eth0 failed, rc = 1 (host=rac1)
ping to 192.168.1.1 via eth0 failed, rc = 1 (host=rac1)
Interface eth0 checked failed (host=rac1)
Invalid parameters, or failed to bring up VIP (host=rac1)

This reveals the cause: pinging 192.168.1.1 (the gateway) via eth0 fails, so the VIP resource cannot work properly.

Verify the cause and fix
Pinging the gateway (192.168.1.1) manually from node 1 did indeed fail. Further checking found that a firewall had unexpectedly been enabled on the gateway server and was filtering some inbound packets; node 1 happened to be on the blocked list, so its pings to the gateway failed, producing this error. After the firewall was disabled (or its rules corrected), the RAC worked normally and the VIP and listener resources no longer flapped.
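
The manual verification amounts to the checks below (a sketch; racgvip judges interface health by pinging the default gateway through the public NIC):

[root@rac1 ~]# netstat -rn | grep ^0.0.0.0      # confirm 192.168.1.1 is the default gateway
[root@rac1 ~]# ping -c 3 -I eth0 192.168.1.1    # the same check racgvip performs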

OCR/Vote Disk Maintenance Operations

Database version

SQL>  select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - Prod
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

OCR operations (can be done online)

rac2-> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     160396
         Used space (kbytes)      :       4376
         Available space (kbytes) :     156020
         ID                       : 1302494786
         Device/File Name         : /dev/raw/raw11
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded
rac2-> more /etc/oracle/ocr.loc
ocrconfig_loc=/dev/raw/raw11
local_only=false
--add an OCR mirror
[root@rac2 bin]# ./ocrconfig -replace ocrmirror /dev/raw/raw12
rac2-> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     160396
         Used space (kbytes)      :       4376
         Available space (kbytes) :     156020
         ID                       : 1302494786
         Device/File Name         : /dev/raw/raw11
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw12
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
rac2-> more /etc/oracle/ocr.loc
#Device/file  getting replaced by device /dev/raw/raw12
ocrconfig_loc=/dev/raw/raw11
ocrmirrorconfig_loc=/dev/raw/raw12
local_only=false
--remove the original OCR (the mirror takes over)
[root@rac2 bin]# ./ocrconfig -replace ocr
rac2-> more /etc/oracle/ocr.loc
#Device/file /dev/raw/raw11 being deleted
ocrconfig_loc=/dev/raw/raw12
local_only=false
rac2-> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     160396
         Used space (kbytes)      :       4376
         Available space (kbytes) :     156020
         ID                       : 1302494786
         Device/File Name         : /dev/raw/raw12
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded
--for completeness: removing the OCR mirror
[root@rac2 bin]# ./ocrconfig -replace ocrmirror
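
Before and after replace operations like these, it is also worth checking the automatic OCR backups and taking a fresh logical export; a short sketch (the export file name is just an example):

[root@rac2 bin]# ./ocrconfig -showbackup
[root@rac2 bin]# ./ocrconfig -export /tmp/ocr_before_change.bak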

Vote disk test (offline in 10g / online in 11g)

--stop CRS on all nodes
[root@rac2 bin]# ./crsctl stop crs
[root@rac1 bin]# ./crsctl stop crs
--query the vote disks
rac2-> crsctl query css votedisk
 0.     0    /dev/raw/raw31
--add vote disks
[root@rac2 bin]# ./crsctl add css votedisk /dev/raw/raw23 -force
Now formatting voting disk: /dev/raw/raw23
successful addition of votedisk /dev/raw/raw23.
[root@rac2 bin]# ./crsctl add css votedisk /dev/raw/raw33 -force
Now formatting voting disk: /dev/raw/raw33
successful addition of votedisk /dev/raw/raw33.
[root@rac2 bin]# ./crsctl add css votedisk /dev/raw/raw32 -force
Now formatting voting disk: /dev/raw/raw32
successful addition of votedisk /dev/raw/raw32.
rac2-> crsctl query css votedisk
 0.     0    /dev/raw/raw31
 1.     0    /dev/raw/raw23
 2.     0    /dev/raw/raw33
 3.     0    /dev/raw/raw32
located 4 votedisk(s).
--delete a vote disk (leaving an odd number of vote disks, as recommended)
[root@rac2 bin]# ./crsctl delete css votedisk /dev/raw/raw33 -force
successful deletion of votedisk /dev/raw/raw33.
--start CRS
[root@rac2 bin]# ./crsctl start crs
[root@rac1 bin]# ./crsctl start crs

Supplementary official notes on these operations [ID 428681.1]
http://www.xifenfei.com/wp-content/uploads/2012/04/OCR_Vote_disk_Maintenance_Operations.pdf

Upgrading RAC 10g to 10.2.0.5

1. Back up the database
Normally an RMAN backup.
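
A minimal RMAN backup sketch, assuming a locally connected target and default channel configuration:

$ rman target /
RMAN> backup database plus archivelog;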

2. Back up the OCR and vote disk

[root@rac2 bin]# ./ocrconfig -export /tmp/ocr_export.bak
[root@rac2 bin]# more /etc/oracle/ocr.loc
ocrconfig_loc=/dev/raw/raw11
local_only=FALSE
[root@rac2 bin]# dd if=/dev/raw/raw11 of=/tmp/ocr_dd.bak
[root@rac2 bin]# dd if=/dev/raw/raw31 of=/tmp/vote_dd.bak
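
Should the upgrade fail, these backups can be put back; a hedged sketch of the reverse operations (run with CRS stopped on all nodes, device names as above):

--logical restore of the OCR from the export
[root@rac2 bin]# ./ocrconfig -import /tmp/ocr_export.bak
--physical restore of the raw devices from the dd copies
[root@rac2 bin]# dd if=/tmp/ocr_dd.bak of=/dev/raw/raw11
[root@rac2 bin]# dd if=/tmp/vote_dd.bak of=/dev/raw/raw31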

3.Update Oracle Time Zone Definitions
Actions for the DSTv4 update in the 10.2.0.5 patchset [ID 1086400.1]

4. Stopping all processes
For a rolling upgrade, stop all processes on one node only; for a non-rolling upgrade, stop them on all nodes.

$ isqlplusctl stop
$ emctl stop dbconsole
$ srvctl stop service -d db_name [-s service_name_list [-i inst_name]]
$ srvctl stop instance -d db_name -i inst_name
$ srvctl stop asm -n node
$ srvctl stop listener -n node [-l listenername]
$ srvctl stop nodeapps -n node
# CRS_home/bin/crsctl stop crs (run as root; not needed for a rolling upgrade)

5. Back up the system
Files under $ORACLE_BASE, mainly the db and crs installation files and the oraInventory files.

6. Upgrade the CRS software
Run ./runInstaller and select the CRS home

Then run the following commands:
# CRS_home/bin/crsctl stop crs
# CRS_home/install/root102.sh

7. Upgrade the DB software
Stop all CRS and DB processes (same as step 4)
Run ./runInstaller and select the DB home

Then run the following command:
# ORACLE_HOME/root.sh

8. Upgrade the database
8.1) Check the prerequisites for the database upgrade and fix anything that does not meet them
How to Download and Run Oracle’s Database Pre-Upgrade Utility [ID 884522.1]

SQL> STARTUP UPGRADE
SQL> SPOOL upgrade_info.log
SQL> @?/rdbms/admin/utlu102i.sql
SQL> SPOOL OFF
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=FALSE SCOPE=spfile;
--make further changes as prompted in upgrade_info.log
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE

8.2) Start the listener
srvctl start listener -n node

8.3) Upgrade the database

SQL> SPOOL patch.log
SQL> @?/rdbms/admin/catupgrd.sql
--check patch.log; if it contains errors, find the cause and rerun catupgrd.sql
SQL> SPOOL OFF
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> @?/rdbms/admin/utlrp.sql
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=spfile;
--along with any other parameter adjustments
SQL> SHUTDOWN IMMEDIATE
--use the RAC management commands to start the resources that need to run

9. Fix the directory permissions
# ORACLE_HOME/install/changePerm.sh

For the detailed steps, read README.html.

Differences between lsnrctl and srvctl listener operations in RAC

A friend asked today why managing a listener with srvctl and with lsnrctl gives different results in RAC; the experiments below illustrate the point.
0. Contents of listener.ora

LISTENER_RAC1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.11)(PORT = 1521)(IP = FIRST))
    )
  )
SID_LIST_LISTENER_RAC1 =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
  )

1. Starting the listener with srvctl

rac1-> srvctl start listener -n rac1
rac1-> lsnrctl status
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 11-MAR-2012 22:09:34
Copyright (c) 1991, 2005, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_RAC1
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                11-MAR-2012 22:07:21
Uptime                    0 days 0 hr. 2 min. 13 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/10.2.0/db_1/network/log/listener_rac1.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.21)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.11)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
rac1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.....XFF.cs application    ONLINE    ONLINE    rac1
ora....db1.srv application    ONLINE    ONLINE    rac2
ora.devdb.db   application    ONLINE    ONLINE    rac2
ora....b1.inst application    ONLINE    ONLINE    rac1
ora....b2.inst application    ONLINE    ONLINE    rac2
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2

When the listener is managed with srvctl, its state is automatically reflected in CRS.
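
One way to see exactly which CRS resource this maps to is to print the resource profile; a small sketch using the full resource name that appeared in the crsd log excerpt earlier:

rac1-> crs_stat -p ora.rac1.LISTENER_RAC1.lsnr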

2. Stopping the listener with srvctl

rac1-> srvctl stop listener -n rac1
rac1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.....XFF.cs application    ONLINE    ONLINE    rac1
ora....db1.srv application    ONLINE    ONLINE    rac2
ora.devdb.db   application    ONLINE    ONLINE    rac2
ora....b1.inst application    ONLINE    ONLINE    rac1
ora....b2.inst application    ONLINE    ONLINE    rac2
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    OFFLINE   OFFLINE
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2

3. Checking the listener status with lsnrctl

rac1-> lsnrctl status
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 11-MAR-2012 22:15:54
Copyright (c) 1991, 2005, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521)) <-- HOST is empty
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 111: Connection refused
rac1-> lsnrctl
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 11-MAR-2012 22:16:55
Copyright (c) 1991, 2005, Oracle.  All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> status listener_rac1
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1-vip)(PORT=1521)(IP=FIRST)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 111: Connection refused
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.11)(PORT=1521)(IP=FIRST)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 111: Connection refused

Two things stand out here:
1) If no listener is currently running, lsnrctl status checks the default listener named LISTENER; if that listener is not configured, the HOST field is left empty rather than filled with the hostname (note the default-listener startup below).
2) Querying the listener listener_rac1 explicitly with lsnrctl shows the same addresses as configured in listener.ora.

4. Stopping the listener with lsnrctl

rac1-> srvctl start listener -n rac1
rac1-> lsnrctl stop
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 11-MAR-2012 22:43:14
Copyright (c) 1991, 2005, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521)) <-- HOST is empty
The command completed successfully
rac1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.....XFF.cs application    ONLINE    ONLINE    rac1
ora....db1.srv application    ONLINE    ONLINE    rac2
ora.devdb.db   application    ONLINE    ONLINE    rac2
ora....b1.inst application    ONLINE    ONLINE    rac1
ora....b2.inst application    ONLINE    ONLINE    rac2
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    OFFLINE   OFFLINE
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2

This demonstrates:
1) Although lsnrctl stop nominally targets the default listener, here it also stopped the non-default one: with no listener name given, lsnrctl connects to (HOST=)(PORT=1521) and so reaches whatever listener is bound to port 1521 (here listener_rac1).
2) With lsnrctl stop, if the default listener is not configured, the HOST in the connect address is also empty.
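
To avoid that ambiguity, stop a specific listener by naming it explicitly:

rac1-> lsnrctl stop listener_rac1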

5. Starting the default listener with lsnrctl

rac1-> lsnrctl start
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 11-MAR-2012 22:17:37
Copyright (c) 1991, 2005, Oracle.  All rights reserved.
Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1)(PORT=1521)))
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                11-MAR-2012 22:17:37
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1)(PORT=1521))) <-- hostname
The listener supports no services
The command completed successfully
rac1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.....XFF.cs application    ONLINE    ONLINE    rac1
ora....db1.srv application    ONLINE    ONLINE    rac2
ora.devdb.db   application    ONLINE    ONLINE    rac2
ora....b1.inst application    ONLINE    ONLINE    rac1
ora....b2.inst application    ONLINE    ONLINE    rac2
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    OFFLINE   OFFLINE
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2

Problems observed here:
1) The listener listens only on the hostname address, unlike the listener started by srvctl.
2) Although a listener is now running, CRS still shows the listener resource as OFFLINE.

6. Starting the listener_rac1 listener with lsnrctl

LSNRCTL> start listener_rac1
Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener_rac1.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.21)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.11)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1-vip)(PORT=1521)(IP=FIRST)))
STATUS of the LISTENER
------------------------
Alias                     listener_rac1
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                11-MAR-2012 22:19:04
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/10.2.0/db_1/network/log/listener_rac1.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.21)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.11)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
rac1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.....XFF.cs application    ONLINE    ONLINE    rac1
ora....db1.srv application    ONLINE    ONLINE    rac2
ora.devdb.db   application    ONLINE    ONLINE    rac2
ora....b1.inst application    ONLINE    ONLINE    rac1
ora....b2.inst application    ONLINE    ONLINE    rac2
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2

This demonstrates two things:
1) Starting listener_rac1 with lsnrctl has the same effect as starting it with srvctl.
2) Once listener_rac1 is started, the listener resource in CRS changes to ONLINE.

7. Root-cause analysis

rac1-> srvctl config listener -n rac1
rac1 LISTENER_RAC1

This reveals the cause: the listener that srvctl manages is in fact LISTENER_RAC1. So when lsnrctl operates on LISTENER_RAC1, the CRS resource goes OFFLINE or ONLINE automatically, whereas when lsnrctl operates on the default listener, the CRS resource never comes ONLINE.

expdp changing Service_Name in RAC

While checking the database alert log, I noticed that service_names is modified both before and after every expdp run.
1. Database version information

SQL>  select instance_name from v$instance;
INSTANCE_NAME
----------------
ora9i2
SQL>  select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux IA64: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
There is no service_names setting in the spfile, which confirms that all these changes are made in MEMORY only.
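
One way to verify this, a sketch using the standard parameter views:

SQL> --current in-memory value
SQL> select value from v$parameter where name = 'service_names';
SQL> --value stored in the spfile (empty here)
SQL> select value from v$spparameter where name = 'service_names';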

2. Alert log contents

Thu Jan  5 01:10:06 2012
The value (30) of MAXTRANS parameter ignored.
Thu Jan  5 01:10:09 2012
ALTER SYSTEM SET service_names='ora9i','SYS$SYS.KUPC$C_2_20120105011007.ORA9I' SCOPE=MEMORY SID='ora9i2';
Thu Jan  5 01:10:09 2012
ALTER SYSTEM SET service_names='SYS$SYS.KUPC$C_2_20120105011007.ORA9I','ora9i','SYS$SYS.KUPC$S_2_20120105011007.ORA9I' SCOPE=MEMORY SID='ora9i2';
kupprdp: master process DM00 started with pid=305, OS id=9526
         to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_TABLE_05', 'VAS', 'KUPC$C_2_20120105011007', 'KUPC$S_2_20120105011007', 0);
kupprdp: worker process DW01 started with worker id=1, pid=307, OS id=9641
         to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_TABLE_05', 'VAS');
kupprdp: worker process DW02 started with worker id=2, pid=308, OS id=9964
         to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_TABLE_05', 'VAS');
kupprdp: worker process DW03 started with worker id=3, pid=309, OS id=9966
         to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_TABLE_05', 'VAS');
kupprdp: worker process DW04 started with worker id=4, pid=310, OS id=9968
         to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_TABLE_05', 'VAS');
Thu Jan  5 01:13:15 2012
ALTER SYSTEM SET service_names='SYS$SYS.KUPC$S_2_20120105011007.ORA9I','ora9i' SCOPE=MEMORY SID='ora9i2';
Thu Jan  5 01:13:16 2012
ALTER SYSTEM SET service_names='ora9i' SCOPE=MEMORY SID='ora9i2';

3. MOS resolution [ID 1269319.1]

Depending on the version of your database, Patch:8513146 may exist.
As of Nov. 25th 2010, this patch exists for:
- 10.2.0.4 / IBM AIX on POWER Systems (64-bit)
- 10.2.0.4.3 / Linux x86-64
- 10.2.0.5 / Linux x86 and Linux x86-64

Notes on the "Immediate Kill Session#" bug in RAC

Today I found a large number of "Immediate Kill Session#" messages on one node of a RAC; the analysis is recorded below.
1. Alert log contents

Sun Jan  1 02:12:28 2012
ALTER SYSTEM SET service_names='' SCOPE=MEMORY SID='ora9i1';
Sun Jan  1 02:12:28 2012
Immediate Kill Session#: 496, Serial#: 51199
Immediate Kill Session: sess: 0x406bfa26b78  OS pid: 12900
Immediate Kill Session#: 497, Serial#: 38504
Immediate Kill Session: sess: 0x406bfa280e0  OS pid: 12496
Immediate Kill Session#: 499, Serial#: 45296
Immediate Kill Session: sess: 0x406bfa2abb0  OS pid: 12467
Immediate Kill Session#: 502, Serial#: 18910
Immediate Kill Session: sess: 0x406bfa2ebe8  OS pid: 28887
Immediate Kill Session#: 503, Serial#: 26631
Immediate Kill Session: sess: 0x406bfa30150  OS pid: 20749
Immediate Kill Session#: 508, Serial#: 63586
Immediate Kill Session: sess: 0x406bfa36c58  OS pid: 27614
Immediate Kill Session#: 512, Serial#: 43388
Immediate Kill Session: sess: 0x406bfa3c1f8  OS pid: 4021
Immediate Kill Session#: 516, Serial#: 33975
Immediate Kill Session: sess: 0x406bfa41798  OS pid: 18481
Immediate Kill Session#: 517, Serial#: 24240
Immediate Kill Session: sess: 0x406bfa42d00  OS pid: 823
Immediate Kill Session#: 526, Serial#: 59767
Immediate Kill Session: sess: 0x406bfa4eda8  OS pid: 12529
Immediate Kill Session#: 527, Serial#: 45765
Immediate Kill Session: sess: 0x406bfa50310  OS pid: 6059
……………………
Sun Jan  1 02:22:29 2012
ALTER SYSTEM SET service_names='ora9i' SCOPE=MEMORY SID='ora9i1';

2. Database configuration
2.1) Node A configuration

SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
ora9i1
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux IA64: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
SQL> show parameter name;
NAME                                 TYPE       VALUE
------------------------------------ ---------- --------------------
db_file_name_convert                 string
db_name                              string     ora9i
db_unique_name                       string     ora9i
global_names                         boolean    FALSE
instance_name                        string     ora9i1
lock_name_space                      string
log_file_name_convert                string
service_names                        string     ora9i

2.2) Node B configuration

SQL>  select instance_name from v$instance;
INSTANCE_NAME
----------------
ora9i2
SQL>  select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux IA64: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
SQL> show parameter name;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      ora9i
db_unique_name                       string      ora9i
global_names                         boolean     FALSE
instance_name                        string      ora9i2
lock_name_space                      string
log_file_name_convert                string
service_names                        string      SYS$SYS.KUPC$C_2_2012010601100
                                                 6.ORA9I, ora9i, SYS$SYS.KUPC$S
                                                 _2_20120106011006.ORA9I

3. Searching MOS for a solution
3.1) Cause of the problem

This is caused by unpublished Bug 6955040 ALL THE SESSIONS LOST CONNECTION AFTER KILLING CRSD.BIN.
The problem is when CRSD is killed or crashed and restarted,
CRSD will run resource check action but CRS resource status will not be available at that time.
Then in instance check action,
it fails to get the preferred node VIP resource status and considered the preferred node VIP resource is not running.
Therefore, instance check action will remove the default database service name
and disconnect sessions connected using default database service name.
This causes messages "ALTER SYSTEM" and "Immediate Kill Session" printed in alert log.
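
To see which sessions would be affected, i.e. those connected through the default service, here is a sketch against v$session (service names are shown as registered, so check the distinct list first):

SQL> select distinct service_name from v$session;
SQL> --sessions using the default service (name matches db_name, here ora9i)
SQL> select sid, serial#, username from v$session where service_name = 'ora9i';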

3.2) Solution

1) The fix is included in the 10.2.0.5 and 11.1.0.7 patchsets.
    Apply the patchset once it is available.
OR
2) Configure a service name other than the default one (the default matches db_name),
and have users connect with the non-default service name.
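
A sketch of workaround 2), assuming a new service name ora9i_app (any non-default name will do) and the 10g srvctl syntax used elsewhere in this post:

$ srvctl add service -d ora9i -s ora9i_app -r ora9i1,ora9i2
$ srvctl start service -d ora9i -s ora9i_app

Clients then put SERVICE_NAME=ora9i_app in their connect descriptor instead of the default ora9i, so an instance-check glitch on the default service no longer disconnects their sessions.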