ORACLE 12C RAC hub and leaf role conversion

Thanks to Lunar for the guidance: this post walks through converting ORACLE 12C RAC nodes between the hub and leaf roles, with reference to the Oracle Flex Clusters section of the official documentation.
Current database status

--cluster status
[root@rac1 ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.DATA.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.SYSDB_NEW.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.SYSDG.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.proxy_advm
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       rac2                     169.254.177.226 10.1
                                                             .1.104,STABLE
ora.asm
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac2                     STABLE
ora.gns
      1        ONLINE  ONLINE       rac2                     STABLE
ora.gns.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       rac2                     Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       rac2                     STABLE
ora.ora12c.db
      1        ONLINE  ONLINE       rac1                     Open,STABLE
      2        ONLINE  ONLINE       rac2                     Open,STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
--RAC is running in flex mode
[root@rac1 ~]#  crsctl get cluster mode status
Cluster is running in "flex" mode
--ASM is running in flex mode
[grid@rac1 ~]$ asmcmd
ASMCMD> showclustermode
ASM cluster : Flex mode enabled
--node roles
[root@rac1 ~]# crsctl get node role config
Node 'rac1' configured role is 'hub'
[root@rac2 ~]# crsctl get node role config
Node 'rac2' configured role is 'hub'
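Besides the configured role, the active role of each node can also be checked; a sketch (the -all option is from the 12c crsctl syntax, and the output will vary by environment):
--check the active (running) role of every node
crsctl get node role status -all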

Converting hub to leaf

--change the node role from hub to leaf
[root@rac1 ~]# crsctl set node role leaf
CRS-4408: Node 'rac1' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.
--stop the clusterware stack
[root@rac1 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.SYSDB_NEW.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.SYSDG.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.ora12c.db' on 'rac1'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'
CRS-2677: Stop of 'ora.ora12c.db' on 'rac1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac2'
CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.SYSDB_NEW.dg' on 'rac1' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.SYSDG.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac1'
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2677: Stop of 'ora.storage' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
--start the clusterware stack
[root@rac1 ~]# crsctl start crs -wait
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rac1'
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac1'
CRS-2676: Start of 'ora.storage' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac1'
CRS-2676: Start of 'ora.crf' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-6017: Processing resource auto-start for servers: rac1
CRS-6016: Resource auto-start has completed for server rac1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
--status after the hub node was converted to leaf
[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac2                     STABLE
ora.DATA.dg
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac2                     STABLE
ora.SYSDB_NEW.dg
               ONLINE  ONLINE       rac2                     STABLE
ora.SYSDG.dg
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac2                     STABLE
ora.proxy_advm
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       rac2                     169.254.177.226 10.1
                                                             .1.104,STABLE
ora.asm
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac2                     STABLE
ora.gns
      1        ONLINE  ONLINE       rac2                     STABLE
ora.gns.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       rac2                     Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       rac2                     STABLE
ora.ora12c.db
      1        ONLINE  OFFLINE                               Instance Shutdown,ST
                                                             ABLE
      2        ONLINE  ONLINE       rac2                     Open,STABLE
ora.rac1.vip
      1        ONLINE  INTERMEDIATE rac2                     FAILED OVER,STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
--node roles
[root@rac1 ~]# crsctl get node role config
Node 'rac1' configured role is 'leaf'
[root@rac2 ~]# crsctl get node role config
Node 'rac2' configured role is 'hub'

Converting leaf back to hub

--change the node role from leaf to hub
[root@rac1 ~]# crsctl set node role hub
CRS-4408: Node 'rac1' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.
--stop the clusterware stack
[root@rac1 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2677: Stop of 'ora.storage' on 'rac1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
--start the clusterware stack
[root@rac1 ~]# crsctl start crs -wait
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rac1'
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac1'
CRS-2676: Start of 'ora.storage' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac1'
CRS-2676: Start of 'ora.crf' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-6017: Processing resource auto-start for servers: rac1
CRS-2672: Attempting to start 'ora.ons' on 'rac1'
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac2'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac2'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac1'
CRS-2676: Start of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rac1'
CRS-2676: Start of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1' succeeded
CRS-2676: Start of 'ora.ons' on 'rac1' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.proxy_advm' on 'rac1'
CRS-2676: Start of 'ora.proxy_advm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ora12c.db' on 'rac1'
CRS-2676: Start of 'ora.ora12c.db' on 'rac1' succeeded
CRS-6016: Resource auto-start has completed for server rac1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
--cluster status
[root@rac1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.DATA.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.SYSDB_NEW.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.SYSDG.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.proxy_advm
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       rac2                     169.254.177.226 10.1
                                                             .1.104,STABLE
ora.asm
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac2                     STABLE
ora.gns
      1        ONLINE  ONLINE       rac2                     STABLE
ora.gns.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       rac2                     Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       rac2                     STABLE
ora.ora12c.db
      1        ONLINE  ONLINE       rac1                     Open,STABLE
      2        ONLINE  ONLINE       rac2                     Open,STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
--node roles
[root@rac1 ~]# crsctl get node role config
Node 'rac1' configured role is 'hub'
[root@rac2 ~]# crsctl get node role config
Node 'rac2' configured role is 'hub'

This completes the conversion between the leaf and hub roles in ORACLE 12C RAC. Before converting, confirm that both the cluster and ASM are running in flex mode, then follow the relevant documentation to complete the conversion.
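Condensed into a command sequence, the role change looks like the following sketch (run as root on the node being converted; these are exactly the commands used above):
--confirm the cluster is running in flex mode (check ASM with asmcmd showclustermode)
crsctl get cluster mode status
--change the configured role (use 'hub' for the reverse direction)
crsctl set node role leaf
--restart Oracle High Availability Services for the new role to take effect
crsctl stop crs
crsctl start crs -wait
--verify the configured role
crsctl get node role config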

OLR maintenance

The official description of the OLR
OLR is a registry similar to OCR located on each node in a cluster, but contains information specific to each node. It contains manageability information about Oracle Clusterware, including dependencies between various services. Oracle High Availability Services uses this information. OLR is located on local storage on each node in a cluster. Its default location is in the path Grid_home/cdata/host_name.olr, where Grid_home is the Oracle Grid Infrastructure home, and host_name is the host name of the node.
In short, the OLR is a registry similar to the OCR, stored locally on each node of the cluster.

Checking the OLR location

[root@rac2 cdata]# cd /etc/oracle
[root@rac2 oracle]# ls -l
total 2868
drwxrwx--- 2 root oinstall    4096 Nov 24 20:00 lastgasp
drwxrwxrwt 2 root oinstall    4096 Dec 21 20:51 maps
-rw-r--r-- 1 root oinstall      96 Nov 25 18:38 ocr.loc
-rw-r--r-- 1 root root           0 Nov 24 19:58 ocr.loc.orig
-rw-r--r-- 1 root oinstall      80 Nov 24 19:58 olr.loc
-rw-r--r-- 1 root root           0 Nov 24 19:58 olr.loc.orig
drwxrwxr-x 5 root oinstall    4096 Nov 24 19:57 oprocd
drwxr-xr-x 3 root oinstall    4096 Nov 24 19:57 scls_scr
-rws--x--- 1 root oinstall 2904377 Nov 24 19:57 setasmgid
[root@rac2 oracle]# more olr.loc
olrconfig_loc=/u01/app/12.1.0/grid/cdata/rac2.olr
crs_home=/u01/app/12.1.0/grid
--on some platforms the olr.loc file may be located under /var/opt/oracle/
[root@rac2 oracle]#  ocrcheck -config -local
Oracle Local Registry configuration is :
         Device/File Name         : /u01/app/12.1.0/grid/cdata/rac2.olr
[root@rac2 oracle]# ocrcheck -local
Status of Oracle Local Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :        996
         Available space (kbytes) :     408572
         ID                       :  816087519
         Device/File Name         : /u01/app/12.1.0/grid/cdata/rac2.olr
                                    Device/File integrity check succeeded
         Local registry integrity check succeeded
         Logical corruption check succeeded
[root@rac2 oracle]# ls -l /u01/app/12.1.0/grid/cdata/rac2.olr
-rw------- 1 root oinstall 503484416 Dec 22 12:09 /u01/app/12.1.0/grid/cdata/rac2.olr

Viewing OLR backups

[root@rac2 oracle]# ocrconfig -local -showbackup
rac2     2013/11/24 20:02:38     /u01/app/12.1.0/grid/cdata/rac2/backup_20131124_200238.olr

Backing up the OLR

[root@rac2 oracle]# ocrconfig -local -manualbackup
rac2     2013/12/22 12:09:33     /u01/app/12.1.0/grid/cdata/rac2/backup_20131222_120933.olr
rac2     2013/11/24 20:02:38     /u01/app/12.1.0/grid/cdata/rac2/backup_20131124_200238.olr
[root@rac2 oracle]# ls -l /u01/app/12.1.0/grid/cdata/rac2/
total 1908
-rw-r--r-- 1 root root  860160 Nov 24 20:02 backup_20131124_200238.olr
-rw-r--r-- 1 root root 1085440 Dec 22 12:09 backup_20131222_120933.olr

Recovering from a damaged OLR

--damage the OLR (move it out of the way)
[root@rac2 oracle]# ls -l /u01/app/12.1.0/grid/cdata/rac2.olr
-rw------- 1 root oinstall 503484416 Dec 22 12:09 /u01/app/12.1.0/grid/cdata/rac2.olr
[root@rac2 oracle]# mv /u01/app/12.1.0/grid/cdata/rac2.olr /u01/app/12.1.0/grid/cdata/rac2.olr_bak
--stop CRS
[root@rac2 oracle]# crsctl stop crs
--starting CRS fails
[root@rac2 oracle]# crsctl start crs
PROCL-26: Error while accessing the physical storage Operating System error [No such file or directory] [2]
CRS-4000: Command Start failed, or completed with errors.
--trace the CRS startup
[root@rac2 oracle]# strace crsctl start crs
……
uname({sys="Linux", node="rac2", ...})  = 0
open("/etc/oracle/olr.loc", O_RDONLY)   = 14
fstat(14, {st_mode=S_IFREG|0644, st_size=80, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd8ac628000
read(14, "olrconfig_loc=/u01/app/12.1.0/gr"..., 4096) = 80
read(14, "", 4096)                      = 0
close(14)                               = 0
munmap(0x7fd8ac628000, 4096)            = 0
stat("/u01/app/12.1.0/grid/cdata/rac2.olr", 0x7fffa215a580) = -1 ENOENT (No such file or directory)
--here we can see that /etc/oracle/olr.loc is read first, and then the stat() of /u01/app/12.1.0/grid/cdata/rac2.olr fails
……
--confirm ohasd.bin is down
[root@rac2 cdata]# ps -ef|grep ohasd
root     15715 31578  0 14:34 pts/3    00:00:00 grep ohasd
--restore the OLR
[root@rac2 oracle]# ocrconfig -local -restore /u01/app/12.1.0/grid/cdata/rac2/backup_20131124_200238.olr
PROTL-35: The configured OLR location is not accessible
[root@rac2 oracle]# cd /u01/app/12.1.0/grid/cdata/
[root@rac2 cdata]# ls
localhost  rac12c-cluster  rac2  rac2.olr_bak
[root@rac2 cdata]# touch rac2.olr
[root@rac2 cdata]# chmod 600 rac2.olr
[root@rac2 cdata]# ocrconfig -local -restore /u01/app/12.1.0/grid/cdata/rac2/backup_20131124_200238.olr
--confirm the restore succeeded
[root@rac2 cdata]# ls -l
total 84200
drwxr-xr-x 2 grid oinstall      4096 Nov 24 19:37 localhost
drwxrwxr-x 2 grid oinstall      4096 Dec 22 09:07 rac12c-cluster
drwxr-xr-x 2 grid oinstall      4096 Dec 22 12:09 rac2
-rw------- 1 root root     503484416 Dec 22 14:29 rac2.olr
-rw------- 1 root oinstall 503484416 Dec 22 12:43 rac2.olr_bak
--start CRS
[root@rac2 cdata]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

Other OLR commands

To export OLR to a file:
# ocrconfig –local –export file_name
To import a specified file to OLR:
# ocrconfig –local –import file_name
To view the contents of the OLR file:
ocrdump -local file_name
To view the contents of the OLR backup file:
ocrdump -local -backupfile olr_backup_file_name
To change the OLR backup location:
ocrconfig -local -backuploc new_olr_backup_path

When the OLR is damaged, the RAC node cannot start normally. Unlike the OCR, the OLR is not automatically backed up on a regular schedule, so it is advisable to back it up manually at regular intervals.
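A minimal sketch of such a scheduled backup (assumptions: a root crontab entry, the grid home path of this environment, and a weekly schedule and log file chosen purely for illustration):
--root crontab entry: manual OLR backup every Sunday at 02:00
0 2 * * 0 /u01/app/12.1.0/grid/bin/ocrconfig -local -manualbackup >> /var/log/olr_backup.log 2>&1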

Replacing the bootstrap$ table with bbed

In 11G and 12C we can replace the bootstrap$ table with the DBMS_DDL_INTERNAL.SWAP_BOOTSTRAP procedure (see: Replacing the bootstrap$ table, below). In 10G and other versions Oracle provides no such procedure for swapping bootstrap$ for another table (as the DESC output below shows, DBMS_DDL_INTERNAL in 10.2 has no SWAP_BOOTSTRAP). Analysis shows that modifying the root rdba with bbed can accomplish the same task as DBMS_DDL_INTERNAL.SWAP_BOOTSTRAP.

SQL> SELECT * FROM V$VERSION;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
PL/SQL Release 10.2.0.3.0 - Production
CORE    10.2.0.3.0      Production
TNS for 32-bit Windows: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production
SQL> DESC DBMS_DDL_INTERNAL
PROCEDURE CHECK_TRIGGER_FIRING_PROPERTY
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
 TRIG_OWNER                     VARCHAR2                IN
 TRIG_NAME                      VARCHAR2                IN
 CANON_OWNER                    VARCHAR2                OUT
 CANON_ONAME                    VARCHAR2                OUT
 P_PROPERTY                     NUMBER                  IN/OUT
 UNSUPPORTED_TRIG               BOOLEAN                 OUT
FUNCTION HAS_ALTER_ANY_TRIGGER_PRIV RETURNS BOOLEAN
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
 P_USER                         VARCHAR2                IN
 P_TRIG_PROPERTY                NUMBER                  IN
FUNCTION HAS_EXP_IMP_PRIV RETURNS BOOLEAN
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
 P_UID                          NUMBER                  IN
 P_PRIVS_TO_CHECK               VARCHAR2                IN
FUNCTION IS_DDL_TRIGGER RETURNS BOOLEAN
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
 SYS_EVTS                       NUMBER                  IN

Tracing the database startup

SQL> startup mount
ORACLE instance started.
Total System Global Area  209715200 bytes
Fixed Size                  1289724 bytes
Variable Size             100663812 bytes
Database Buffers          100663296 bytes
Redo Buffers                7098368 bytes
Database mounted.
SQL> oradebug setmypid
Statement processed.
SQL> oradebug EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12
Statement processed.
SQL> alter session set db_file_multiblocK_read_count=1;
Session altered.
SQL> oradebug TRACEFILE_NAME
e:\oracle\product\10.2.0\admin\ora10g\udump\ora10g_ora_8360.trc
SQL> alter database Open;
Database altered.
SQL> oradebug EVENT 10046 trace name context off
Statement processed.

Reading the 10046 trace file

WAIT #1: nam='instance state change' ela= 28 layer=2 value=1 waited=1 obj#=-1 tim=377999209439
WAIT #1: nam='db file sequential read' ela= 94860 file#=1 block#=377 blocks=1 obj#=-1 tim=377999304467
=====================
PARSING IN CURSOR #2 len=188 dep=1 uid=0 oct=1 lid=0 tim=377999305344 hv=1365064427 ad='8baee680'
create table bootstrap$ ( line#         number not null,   obj#           number not null,   sql_text   varchar2(4000) not null)   storage (initial 50K objno 56 extents (file 1 block 377))
END OF STMT
PARSE #2:c=0,e=662,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=377999305341
BINDS #2:
EXEC #2:c=0,e=102,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=377999305545
=====================
PARSING IN CURSOR #2 len=55 dep=1 uid=0 oct=3 lid=0 tim=377999305925 hv=2111436465 ad='8baedf0c'
select line#, sql_text from bootstrap$ where obj# != :1
END OF STMT
PARSE #2:c=0,e=308,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=377999305922
BINDS #2:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=00288360  bln=22  avl=02  flg=05
  value=56
EXEC #2:c=0,e=580,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=377999306621
WAIT #2: nam='db file sequential read' ela= 328 file#=1 block#=377 blocks=1 obj#=-1 tim=377999307005
WAIT #2: nam='db file sequential read' ela= 345 file#=1 block#=378 blocks=1 obj#=-1 tim=377999307423

Here we can see that at startup the database reads file 1 block 377, issues create table bootstrap$ (note that this statement is in lowercase), and then queries bootstrap$ while filtering out obj# = 56.

Analyzing the bootstrap$ object

SQL> select header_file,header_block from dba_segments where segment_name='BOOTSTRAP$';
HEADER_FILE HEADER_BLOCK
----------- ------------
          1          377
SQL> SELECT OBJECT_ID FROM DBA_OBJECTS WHERE OBJECT_NAME='BOOTSTRAP$';
 OBJECT_ID
----------
        56
SQL> select line#, sql_text from bootstrap$ where obj# =56;
     LINE#
----------
SQL_TEXT
--------------------------------------------------------------------------------
        56
CREATE TABLE BOOTSTRAP$("LINE#" NUMBER NOT NULL,"OBJ#" NUMBER NOT NULL,"SQL_TEXT
" VARCHAR2(4000) NOT NULL) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE
 (  INITIAL 56K NEXT 1024K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 OBJN
O 56 EXTENTS (FILE 1 BLOCK 377))

Here we can see that the row with obj#=56 in bootstrap$ is CREATE TABLE BOOTSTRAP$ (note that the statement stored in the table is in uppercase).

Viewing the root rdba with bbed

C:\Windows\system32>e:\oracle\product\10.2.0\dbhome_1\bin\bbed password=blockedit blocksize=8192
BBED: Release 2.0.0.0.0 - Limited Production on Tue Dec 17 18:36:01 2013
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
************* !!! For Oracle Internal Use only !!! ***************
BBED> set block 1
BBED-00310: no datafile specified
BBED> set filename 'E:\APP\XIFENFEI\ORADATA\ORA10G\SYSTEM01.DBF'
        FILENAME        E:\APP\XIFENFEI\ORADATA\ORA10G\SYSTEM01.DBF
BBED> set block 2
        BLOCK#          2
BBED> map
 File: E:\APP\XIFENFEI\ORADATA\ORA10G\SYSTEM01.DBF (0)
 Block: 2                                     Dba:0x00000000
------------------------------------------------------------
 Data File Header
 struct kcvfh, 360 bytes                    @0
 ub4 tailchk                                @8188
BBED> p kcvfhrdb
ub4 kcvfhrdb                                @96       0x00400179
SQL> Select to_number('00400179','xxxxxxxxxxxxxxxxxx') from dual;
TO_NUMBER('00400179','XXXXXXXXXXXXXXXXXX')
------------------------------------------
                                   4194681
SQL> select dbms_utility.data_block_address_block(4194681) "block",
  2  dbms_utility.data_block_address_file(4194681) "file" from dual;
     block       file
---------- ----------
       377          1

The address that kcvfhrdb (the root rdba) points to, as seen with bbed, matches the block scanned at database startup (file 1 block 377).

Creating a replacement table for bootstrap$ (xifenfei)

SQL> create table xifenfei as select * from bootstrap$;
Table created.
SQL> select count(*) from bootstrap$;
  COUNT(*)
----------
        57
SQL> select count(*) from xifenfei;
  COUNT(*)
----------
        57
SQL> SELECT OBJECT_ID FROM DBA_OBJECTS WHERE OBJECT_NAME='XIFENFEI';
 OBJECT_ID
----------
     51736
SQL> select header_file,header_block from dba_segments where segment_name='XIFENFEI';
HEADER_FILE HEADER_BLOCK
----------- ------------
          1        60241
SQL> SELECT TO_CHAR(60241,'XXXX') FROM DUAL;
TO_CH
-----
 EB51
--the corresponding rdba is 0x0040EB51

This confirms that the newly created xifenfei table has a segment header rdba of 0x0040EB51 and an obj# of 51736.
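The manual hex arithmetic above can be cross-checked with the standard dbms_utility package; a sketch (file 1, block 60241 is the xifenfei segment header found above):
SQL> select to_char(dbms_utility.make_data_block_address(1,60241),'XXXXXXXX') rdba from dual;
--expected result: 40EB51, i.e. the rdba 0x0040EB51 used in the bbed edit below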

Deleting the rows in bootstrap$

SQL> DELETE FROM BOOTSTRAP$;
57 rows deleted.
SQL> COMMIT;
Commit complete.
SQL> SHUTDOWN IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> STARTUP MOUNT
ORACLE instance started.
Total System Global Area  209715200 bytes
Fixed Size                  1289724 bytes
Variable Size             104858116 bytes
Database Buffers           96468992 bytes
Redo Buffers                7098368 bytes
Database mounted.
SQL> oradebug setmypid
Statement processed.
SQL> oradebug EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12
Statement processed.
SQL> alter session set db_file_multiblocK_read_count=1;
Session altered.
SQL> oradebug TRACEFILE_NAME
e:\oracle\product\10.2.0\admin\ora10g\udump\ora10g_ora_7704.trc
SQL>
SQL> alter database Open;
alter database Open
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced
--trace file
PARSING IN CURSOR #2 len=55 dep=1 uid=0 oct=3 lid=0 tim=379061819061 hv=2111436465 ad='8baedf18'
select line#, sql_text from bootstrap$ where obj# != :1
END OF STMT
PARSE #2:c=0,e=346,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=379061819058
BINDS #2:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=00288360  bln=22  avl=02  flg=05
  value=56
EXEC #2:c=0,e=681,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=379061819868
WAIT #2: nam='db file sequential read' ela= 343 file#=1 block#=377 blocks=1 obj#=-1 tim=379061820273
WAIT #2: nam='db file sequential read' ela= 291 file#=1 block#=378 blocks=1 obj#=-1 tim=379061820651
WAIT #2: nam='db file sequential read' ela= 309 file#=1 block#=379 blocks=1 obj#=-1 tim=379061821012
WAIT #2: nam='db file sequential read' ela= 293 file#=1 block#=380 blocks=1 obj#=-1 tim=379061821416
FETCH #2:c=0,e=1542,p=4,cr=5,cu=0,mis=0,r=0,dep=1,og=4,tim=379061821450
ORA-00704: bootstrap process failure
ORA-00702: bootstrap version '' inconsistent with version '8.0.0.0.0'
*** 2013-12-17 18:50:07.325
EXEC #1:c=62400,e=4990345,p=10,cr=6,cu=0,mis=0,r=0,dep=0,og=1,tim=379065822300
ERROR #1:err=1092 tim=37915057

After the rows in bootstrap$ were deleted, the database can no longer start normally and raises ORA-00704/ORA-00702, because reading the records from bootstrap$ fails.

Modifying the root rdba with bbed

BBED> set mode edit
        MODE            Edit
BBED> set count 32
        COUNT           32
BBED> d
 File: E:\APP\XIFENFEI\ORADATA\ORA10G\SYSTEM01.DBF (0)
 Block: 2                Offsets:   96 to  127           Dba:0x00000000
------------------------------------------------------------------------
 79014000 0b000000 00000000 d10ff624 485dbc31 4bf60700 00000000 00000000
 <32 bytes per line>
BBED> m /x 51eb
 File: E:\APP\XIFENFEI\ORADATA\ORA10G\SYSTEM01.DBF (0)
 Block: 2                Offsets:   96 to  127           Dba:0x00000000
------------------------------------------------------------------------
 51eb4000 0b000000 00000000 d10ff624 485dbc31 4bf60700 00000000 00000000
 <32 bytes per line>
BBED> sum apply
Check value for File 0, Block 2:
current = 0xa3bd, required = 0xa3bd
BBED> p kcvfhrdb
ub4 kcvfhrdb                                @96       0x0040eb51

This changes the root rdba to the address of the xifenfei segment header. Note that kcvfhrdb is stored little-endian: the original bytes 79 01 40 00 represent 0x00400179, so overwriting the first two bytes with 51 eb yields 51 eb 40 00, i.e. 0x0040eb51.

Attempting to start the database

SQL> startup mount;
ORACLE instance started.
Total System Global Area  209715200 bytes
Fixed Size                  1289724 bytes
Variable Size             113246724 bytes
Database Buffers           88080384 bytes
Redo Buffers                7098368 bytes
Database mounted.
SQL> oradebug setmypid
Statement processed.
SQL> oradebug EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12
Statement processed.
SQL> alter session set db_file_multiblocK_read_count=1;
Session altered.
SQL> oradebug TRACEFILE_NAME
e:\oracle\product\10.2.0\admin\ora10g\udump\ora10g_ora_7356.trc
SQL> alter database Open;
alter database Open
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced
--trace file
WAIT #1: nam='db file sequential read' ela= 26895 file#=1 block#=60241 blocks=1 obj#=-1 tim=380397162424
=====================
PARSING IN CURSOR #2 len=193 dep=1 uid=0 oct=1 lid=0 tim=380397162916 hv=1250491271 ad='8baee6a0'
create table bootstrap$ ( line#         number not null,   obj#           number not null,   sql_text   varchar2(4000) not null)   storage (initial 50K objno 51736 extents (file 1 block 60241))
END OF STMT
PARSE #2:c=0,e=372,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=380397162912
BINDS #2:
EXEC #2:c=0,e=80,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=380397163083
=====================
PARSING IN CURSOR #2 len=55 dep=1 uid=0 oct=3 lid=0 tim=380397163449 hv=2111436465 ad='8baedf2c'
select line#, sql_text from bootstrap$ where obj# != :1
END OF STMT
PARSE #2:c=0,e=311,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=380397163447
BINDS #2:
kkscoacd
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=00288360  bln=22  avl=04  flg=05
  value=51736
EXEC #2:c=0,e=515,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=380397164052
WAIT #2: nam='db file sequential read' ela= 314 file#=1 block#=60241 blocks=1 obj#=-1 tim=380397164415
WAIT #2: nam='db file sequential read' ela= 396 file#=1 block#=60242 blocks=1 obj#=-1 tim=380397164902
…………
PARSING IN CURSOR #2 len=272 dep=1 uid=0 oct=1 lid=0 tim=380397203298 hv=2124945659 ad='8bacb620'
CREATE TABLE BOOTSTRAP$("LINE#" NUMBER NOT NULL,"OBJ#" NUMBER NOT NULL,"SQL_TEXT" VARCHAR2(4000) NOT NULL) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE (  INITIAL 56K NEXT 1024K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 OBJNO 56 EXTENTS (FILE 1 BLOCK 377))
END OF STMT
PARSE #2:c=0,e=239,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,tim=380397203295
BINDS #2:
EXEC #2:c=0,e=324,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=380397203701
ERROR #2:err=955 tim=38048197
ORA-00704: bootstrap process failure
ORA-00604: error occurred at recursive SQL level 1
ORA-00955: name is already used by an existing object
*** 2013-12-17 19:12:21.783
EXEC #1:c=93601,e=4199938,p=10,cr=60,cu=0,mis=0,r=0,dep=0,og=1,tim=380400250570
ERROR #1:err=1092 tim=38048501

The database startup now fails at the point where it tries to create bootstrap$. The cause: the row with obj#=56 in the xifenfei table is CREATE TABLE BOOTSTRAP$, while the bootstrap query now filters out obj# = 51736 (the objno of xifenfei itself, so only the bootstrap table's own definition is skipped). The obj#=56 row is therefore no longer excluded, the CREATE TABLE BOOTSTRAP$ statement gets executed, and it fails because that table already exists.

Starting the database in upgrade mode

SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area  209715200 bytes
Fixed Size                  1289724 bytes
Variable Size             117441028 bytes
Database Buffers           83886080 bytes
Redo Buffers                7098368 bytes
Database mounted.
SQL> alter database Open upgrade;
Database altered.

Although startup still reports the error on CREATE TABLE BOOTSTRAP$ (note the uppercase; this is not the lowercase create table bootstrap$ issued first during startup), the database can be opened normally in upgrade mode.

Change the obj# of the CREATE TABLE BOOTSTRAP$ row in xifenfei to the object_id of xifenfei

SQL> UPDATE XIFENFEI SET OBJ#=51736 WHERE OBJ#=56;
1 row updated.
SQL> commit;
Commit complete.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area  209715200 bytes
Fixed Size                  1289724 bytes
Variable Size             121635332 bytes
Database Buffers           79691776 bytes
Redo Buffers                7098368 bytes
Database mounted.
SQL> alter database open;
Database altered.

At this point, by combining bbed with changing the obj# of the CREATE TABLE BOOTSTRAP$ row, the database now boots by reading a table other than bootstrap$.

Further analysis of the relationship between xifenfei and bootstrap$

SQL> select count(*) from bootstrap$;
  COUNT(*)
----------
        57
SQL> select count(*) from xifenfei;
  COUNT(*)
----------
        57
SQL> select obj# from bootstrap$ where line#=56;
      OBJ#
----------
     51736
SQL> select obj# from xifenfei where line#=56;
      OBJ#
----------
     51736
SQL> select header_file,header_block from dba_segments where segment_name='BOOTSTRAP$';
HEADER_FILE HEADER_BLOCK
----------- ------------
          1          377
SQL> SELECT OBJECT_ID FROM DBA_OBJECTS WHERE OBJECT_NAME='BOOTSTRAP$';
 OBJECT_ID
----------
        56
SQL> truncate table xifenfei;
truncate table xifenfei
               *
ERROR at line 1:
ORA-00701: object necessary for warmstarting database cannot be altered

During startup the database reads xifenfei (the table the root rdba points to) and then loads the bootstrap$ table, and the rows in bootstrap$ are exactly the same as the rows in xifenfei.

Replacing the bootstrap$ table

Anyone with some knowledge of the database knows that startup is bootstrapped through the bootstrap$ table. That raises two questions:
1. Is the name of the bootstrap table fixed to bootstrap$?
2. Is the location of the bootstrap table fixed within the data file, e.g. file 1 block 520 in 11g/12c and file 1 block 377 in 10g?

Tracing database startup

[oracle@rac2 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Sun Dec 15 14:16:58 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Advanced Analytics
and Real Application Testing options
SQL> startup mount;
ORACLE instance started.
Total System Global Area  400846848 bytes
Fixed Size                  2288872 bytes
Variable Size             310379288 bytes
Database Buffers           79691776 bytes
Redo Buffers                8486912 bytes
Database mounted.
SQL> oradebug setmypid
Statement processed.
SQL> alter session set db_file_multiblocK_read_count=1;
Statement processed.
SQL> oradebug EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12
Session altered.
SQL> oradebug TRACEFILE_NAME
/u01/app/oracle/diag/rdbms/test/TEST/trace/TEST_ora_31364.trc
SQL> alter database Open;
Database altered.
SQL> oradebug EVENT 10046 trace name context off
Statement processed.
SQL> select HEADER_FILE,HEADER_BLOCK from dba_segments where segment_name='BOOTSTRAP$';
HEADER_FILE HEADER_BLOCK
----------- ------------
          1          520
SQL> select object_id from dba_objects where object_name='BOOTSTRAP$';
 OBJECT_ID
----------
        59

Analyzing the trace file

WAIT #140077386411120: nam='db file sequential read' ela= 56 file#=1 block#=520 blocks=1 obj#=-1 tim=1719385755334
=====================
PARSING IN CURSOR #140077386402760 len=188 dep=1 uid=0 oct=1 lid=0 tim=1719385757322 hv=4006182593 ad='6645d370' sqlid='32r4f1brckzq1'
create table bootstrap$ (
END OF STMT
PARSE #140077386402760:c=2000,e=1711,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=1719385757319
EXEC #140077386402760:c=0,e=430,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=0,tim=1719385757909
CLOSE #140077386402760:c=0,e=9,dep=1,type=0,tim=1719385758105
=====================
PARSING IN CURSOR #140077386402760 len=55 dep=1 uid=0 oct=3 lid=0 tim=1719385759507 hv=2111436465 ad='6645bc80' sqlid='6apq2rjyxmxpj'
select line#, sql_text from bootstrap$ where obj# != :1
END OF STMT
PARSE #140077386402760:c=1000,e=1365,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=1719385759505
BINDS #140077386402760:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=1000001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7f664edb8780  bln=22  avl=02  flg=05
  value=59
EXEC #140077386402760:c=3000,e=8859,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=867914364,tim=1719385768574
WAIT #140077386402760: nam='db file sequential read' ela= 27 file#=1 block#=520 blocks=1 obj#=59 tim=1719385768753
WAIT #140077386402760: nam='db file sequential read' ela= 23 file#=1 block#=521 blocks=1 obj#=59 tim=1719385769575

Here we can see that at startup the database reads file 1 block 520 with object_id 59, which is the bootstrap$ object.

Viewing the root rdba address with bbed

ub4 kcvfhrdb                             @96       0x00400208
SQL> select to_number('208','xxx') from dual;
TO_NUMBER('208','XXX')
----------------------
                   520

This shows that the root rdba in the file header used at startup matches what is seen in the trace file; it, too, points to bootstrap$.

Replacing the bootstrap$ table with xifenfei

SQL> create table xifenfei as select * from bootstrap$;
Table created.
SQL> desc xifenfei
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 LINE#                                     NOT NULL NUMBER
 OBJ#                                      NOT NULL NUMBER
 SQL_TEXT                                  NOT NULL VARCHAR2(4000)
SQL> select sql_text from xifenfei where line#=59;
SQL_TEXT
--------------------------------------------------------------------------------
CREATE TABLE BOOTSTRAP$("LINE#" NUMBER NOT NULL,"OBJ#" NUMBER NOT NULL,"SQL_TEXT
" VARCHAR2(4000) NOT NULL) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE
 (  INITIAL 56K NEXT 1024K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 OBJN
O 59 EXTENTS (FILE 1 BLOCK 520))
--remove the bootstrap$ record (the CREATE TABLE BOOTSTRAP$ row)
SQL> delete from xifenfei where line#=59;
1 row deleted.
SQL> commit;
Commit complete.
SQL> select objECT_ID from dba_objects where object_name='XIFENFEI';
 OBJECT_ID
----------
     20314
SQL> select HEADER_FILE,HEADER_BLOCK from dba_segments where segment_name='XIFENFEI';
HEADER_FILE HEADER_BLOCK
----------- ------------
          1        45712
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup upgrade;
ORACLE instance started.
Total System Global Area  400846848 bytes
Fixed Size                  2288872 bytes
Variable Size             310379288 bytes
Database Buffers           79691776 bytes
Redo Buffers                8486912 bytes
Database mounted.
Database opened.
--drop the bootstrap$ table
SQL>  drop table bootstrap$;
Table dropped.
SQL>  exec DBMS_DDL_INTERNAL.SWAP_BOOTSTRAP('XIFENFEI');
PL/SQL procedure successfully completed.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> EXIT
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@rac2 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Sun Dec 15 14:29:54 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> STARTUP MOUNT
ORACLE instance started.
Total System Global Area  400846848 bytes
Fixed Size                  2288872 bytes
Variable Size             310379288 bytes
Database Buffers           79691776 bytes
Redo Buffers                8486912 bytes
Database mounted.
SQL> oradebug setmypid
Statement processed.
SQL> oradebug EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12
Statement processed.
SQL> alter session set db_file_multiblocK_read_count=1;
Session altered.
SQL> oradebug TRACEFILE_NAME
/u01/app/oracle/diag/rdbms/test/TEST/trace/TEST_ora_32727.trc
SQL> alter database Open;
Database altered.
SQL> SELECT OBJECT_ID FROM DBA_OBJECTS WHERE OBJECT_NAME='BOOTSTRAP$';
no rows selected
SQL> SELECT OBJECT_ID FROM DBA_OBJECTS WHERE OBJECT_NAME='XIFENFEI';
 OBJECT_ID
----------
     20314

Analyzing the trace file

WAIT #139841534602352: nam='db file sequential read' ela= 23 file#=1 block#=45712 blocks=1 obj#=-1 tim=1720082484775
=====================
PARSING IN CURSOR #139841534593992 len=193 dep=1 uid=0 oct=1 lid=0 tim=1720082488552 hv=2096904950 ad='6645d650' sqlid='bs6v55xygsfrq'
create table bootstrap$ (
END OF STMT
PARSE #139841534593992:c=2000,e=2925,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=1720082488550
EXEC #139841534593992:c=0,e=466,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=0,tim=1720082489124
CLOSE #139841534593992:c=0,e=8,dep=1,type=0,tim=1720082489266
=====================
PARSING IN CURSOR #139841534593992 len=55 dep=1 uid=0 oct=3 lid=0 tim=1720082490510 hv=2111436465 ad='6645c050' sqlid='6apq2rjyxmxpj'
select line#, sql_text from bootstrap$ where obj# != :1
END OF STMT
PARSE #139841534593992:c=1999,e=1211,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=1720082490509
BINDS #139841534593992:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=08 fl2=1000001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7f2f64fe8780  bln=22  avl=04  flg=05
  value=20314
EXEC #139841534593992:c=1000,e=1789,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=867914364,tim=1720082492533
WAIT #139841534593992: nam='db file sequential read' ela= 20 file#=1 block#=45712 blocks=1 obj#=20314 tim=1720082492685
WAIT #139841534593992: nam='db file sequential read' ela= 25 file#=1 block#=45713 blocks=1 obj#=20314 tim=1720082492986

This shows that at startup the database now reads file 1 block 45712, with object_id 20314.

Viewing the root rdba address with bbed

ub4 kcvfhrdb                             @96       0x0040b290
SQL> select to_number('b290','xxxxxxx') from dual;
TO_NUMBER('B290','XXXXXXX')
---------------------------
                      45712

This proves that the root rdba in the file header has been changed to file 1 block 45712, consistent with what the trace shows; both now point to the XIFENFEI table.

Test conclusions
1. Using SWAP_BOOTSTRAP, the bootstrap$ table read at database startup can be swapped for another table; the procedure directly modifies the kcvfhrdb value in the file header.
2. The tests prove that Oracle does not read a hard-coded block of file 1 at startup; instead it reads kcvfhrdb and bootstraps the database from there.

A complete walkthrough of a database tuning exercise

I recently tuned a database for a customer. The work mainly involved the following:
1. Ensure the system has enough memory, handled by configuring HugePages and shrinking the SGA (see the sketch after this list).
2. Tune the recursive SQL against user$, con$ and cdef$ caused by frequent inserts into tables with primary keys.
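For point 1, a minimal HugePages sanity check on Linux looks like the following sketch (the kernel parameter value must be sized to fit the SGA and is omitted here):
--current HugePages allocation and usage
grep Huge /proc/meminfo
--kernel setting backing HugePages_Total; persist it in /etc/sysctl.conf as vm.nr_hugepages
sysctl vm.nr_hugepages
The recursive SQL from point 2 and its execution plan are shown below.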

SQL> select c.name, u.name from con$ c, cdef$ cd, user$ u where
   2 c.con# = cd.con# and cd.enabled = :1 and c.owner# = u.user#;
Execution Plan
----------------------------------------------------------
Plan hash value: 2409458995
-----------------------------------------------------------------------------
| Id  | Operation           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |       |  3820 |   164K|    38   (6)| 00:00:01 |
|*  1 |  HASH JOIN          |       |  3820 |   164K|    38   (6)| 00:00:01 |
|   2 |   TABLE ACCESS FULL | USER$ |    64 |   896 |     3   (0)| 00:00:01 |
|*  3 |   HASH JOIN         |       |  3820 |   111K|    34   (3)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| CDEF$ |  3820 | 34380 |    25   (0)| 00:00:01 |
|   5 |    TABLE ACCESS FULL| CON$  |  6368 |   130K|     8   (0)| 00:00:01 |
-----------------------------------------------------------------------------

For the details, see the full post: A complete walkthrough of a database tuning exercise.

ORACLE 12C RAC: renaming the disk group holding the OCR, voting disk and ASM spfile

Looking at my single-node 12C RAC today, it suddenly bothered me that the disk group holding the OCR is called +DG_SYS; I wanted to rename it to +SYS_DG. The approach is to first migrate the OCR, voting disk and ASM spfile into another existing ASM disk group, then rename the disk group, and finally migrate everything into the renamed disk group (the flow in this case: +DG_SYS -> +DATA -> +SYS_DG).
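The rename step itself (+DG_SYS to +SYS_DG) is not captured in the logs below; it would typically be done with the renamedg utility once the disk group is dismounted on all nodes. A sketch, assuming the discovery string of this environment:
--rename the dismounted disk group
renamedg phase=both dgname=DG_SYS newdgname=SYS_DG asm_diskstring='/dev/sd*' verbose=true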
Current configuration

[grid@xifenfei ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       xifenfei                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.net1.network
               ONLINE  ONLINE       xifenfei                 STABLE
ora.ons
               ONLINE  ONLINE       xifenfei                 STABLE
ora.proxy_advm
               ONLINE  OFFLINE      xifenfei                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       xifenfei                 169.254.196.108 10.1
                                                             0.30.22,STABLE
ora.asm
      1        ONLINE  ONLINE       xifenfei                 Started,STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cdb.db
      1        ONLINE  ONLINE       xifenfei                 Open,STABLE
ora.cvu
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.oc4j
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.xifenfei.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
--------------------------------------------------------------------------------
SQL> select * from v$version;
BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production              0
PL/SQL Release 12.1.0.1.0 - Production                                                    0
CORE    12.1.0.1.0      Production                                                                0
TNS for Linux: Version 12.1.0.1.0 - Production                                            0
NLSRTL Version 12.1.0.1.0 - Production                                                    0
SQL> select name,state from v$asm_diskgroup;
NAME                           STATE
------------------------------ -----------
DG_SYS                         MOUNTED
DATA                           MOUNTED
[grid@xifenfei ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   60a037da30714f6bbfe5d90206ff27a7 (/dev/sdc2) [DG_SYS]
Located 1 voting disk(s).
[grid@xifenfei ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1380
         Available space (kbytes) :     408188
         ID                       : 2132096904
         Device/File Name         :    +DG_SYS
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
SQL> show parameter spfile;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DG_SYS/xff-cluster/ASMPARAMET
                                                 ERFILE/registry.253.825640465

Moving the OCR
Use ocrconfig -add and ocrconfig -delete to move the OCR to another disk group; this can be done online.

[root@xifenfei ~]# ocrconfig -add +data
--alert log
2013-09-09 22:32:40.799:
[crsd(5064)]CRS-1007:The OCR/OCR mirror location was replaced by +data.
[root@xifenfei ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1380
         Available space (kbytes) :     408188
         ID                       : 2132096904
         Device/File Name         :    +DG_SYS
                                    Device/File integrity check succeeded
         Device/File Name         :      +data
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
[root@xifenfei ~]# ocrconfig -delete +DG_SYS
--alert log
2013-09-09 22:35:53.585:
[crsd(5064)]CRS-1010:The OCR mirror location +DG_SYS was removed.
[root@xifenfei ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1380
         Available space (kbytes) :     408188
         ID                       : 2132096904
         Device/File Name         :      +data
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

Moving the voting disk
Use the crsctl replace votedisk command.

[root@xifenfei ~]# crsctl replace votedisk +DATA
Successful addition of voting disk 161ddea0a5fe4f28bfb67536e6105122.
Successful deletion of voting disk 60a037da30714f6bbfe5d90206ff27a7.
Successfully replaced voting disk group with +DATA.
CRS-4266: Voting file(s) successfully replaced
--alert log
2013-09-09 22:38:15.259:
[cssd(4685)]CRS-1605:CSSD voting file is online: /dev/sdb; details in /u01/app/12.1/grid/product/log/xifenfei/cssd/ocssd.log.
2013-09-09 22:38:15.259:
[cssd(4685)]CRS-1626:A Configuration change request completed successfully
2013-09-09 22:38:15.285:
[cssd(4685)]CRS-1601:CSSD Reconfiguration complete. Active nodes are xifenfei .
[root@xifenfei ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   161ddea0a5fe4f28bfb67536e6105122 (/dev/sdb) [DATA]
Located 1 voting disk(s).

Modify the ASM spfile location

[grid@xifenfei ~]$  gpnptool get -o-
Success.
…………
<orcl:ASM-Profile id="asm" DiscoveryString="/dev/sd*" SPFile="+DG_SYS/xff-cluster/ASMPARAMETERFILE/registry.253.825640465" Mode="legacy"/>
…………
[grid@xifenfei ~]$ sqlplus / as sysasm
SQL*Plus: Release 12.1.0.1.0 Production on Mon Sep 9 22:42:05 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> create pfile='/tmp/pfile.asm' from spfile;
File created.
SQL> create spfile='+DATA' FROM PFILE='/tmp/pfile.asm';
File created.
[grid@xifenfei ~]$  gpnptool get -o-
Success.
…………
<orcl:ASM-Profile id="asm" DiscoveryString="/dev/sd*" SPFile="+DATA/xff-cluster/ASMPARAMETERFILE/registry.253.825720159" Mode="legacy"/>
…………

This shows that CREATE SPFILE for ASM automatically updates the SPFile entry in the corresponding GPnP profile; no manual intervention is needed.
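
The registered location can also be verified from ASMCMD, which additionally offers spcopy/spmove/spset for relocating the ASM spfile explicitly; a minimal check:

asmcmd spget     -- prints the ASM spfile path recorded in the GPnP profile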

Restart CRS
Restart CRS so that ASM picks up the spfile in the new disk group.

[root@xifenfei ~]# crsctl stop crs
[root@xifenfei ~]# crsctl start crs

Verify that the +DG_SYS disk group is no longer in use

[grid@xifenfei ~]$ sqlplus / as sysasm
SQL*Plus: Release 12.1.0.1.0 Production on Mon Sep 9 22:59:49 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> show parameter spfile;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DATA/xff-cluster/ASMPARAMETER
                                                 FILE/registry.253.825720159
ASMCMD> lsof
DB_Name  Instance_Name  Path
+ASM     +ASM1          +DATA.255.819326577
cdb      cdb1           +DATA/CDB/CONTROLFILE/current.274.819356503
cdb      cdb1           +DATA/CDB/DATAFILE/sysaux.278.819355829
cdb      cdb1           +DATA/CDB/DATAFILE/system.269.819356101
cdb      cdb1           +DATA/CDB/DATAFILE/undotbs1.276.819356317
cdb      cdb1           +DATA/CDB/DATAFILE/users.279.819356309
cdb      cdb1           +DATA/CDB/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/pdbseed_temp01.dbf
cdb      cdb1           +DATA/CDB/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.272.819356709
cdb      cdb1           +DATA/CDB/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.819356709
cdb      cdb1           +DATA/CDB/ONLINELOG/group_1.277.822736453
cdb      cdb1           +DATA/CDB/ONLINELOG/group_2.280.822736461
cdb      cdb1           +DATA/CDB/ONLINELOG/group_3.275.822736397
cdb      cdb1           +DATA/CDB/TEMPFILE/temp.273.819356649

Dismount the +DG_SYS disk group

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     20480     3369                0            3369              0             Y  DATA/
MOUNTED  EXTERN  N         512   4096  1048576      5451     5231                0            5231              0             N  DG_SYS/
ASMCMD> umount dg_sys
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     20480     3369                0            3369              0             Y  DATA/

Rename the ASM disk group
Rename disk group +DG_SYS to +SYS_DG (renamedg requires the disk group to be dismounted on all nodes, which was done above).

[grid@xifenfei ~]$ renamedg phase=both dgname=DG_SYS newdgname=SYS_DG verbose=true
Parsing parameters..
Parameters in effect:
         Old DG name       : DG_SYS
         New DG name          : SYS_DG
         Phases               :
                 Phase 1
                 Phase 2
         Discovery str        : (null)
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=both dgname=DG_SYS newdgname=SYS_DG verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk UFS:/dev/sdc2 with disk number:0 and timestamp (32990496 1727895552)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
Identified disk UFS:/dev/sdc2 with disk number:0 and timestamp (32990496 1727895552)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for /dev/sdc2
Modifying the header
Completed phase 2
Terminating kgfd context 0x7fceeb02a0a0

mount +SYS_DG

SQL> select name,state from v$asm_diskgroup;
NAME                           STATE
------------------------------ -----------
DATA                           MOUNTED
SYS_DG                         DISMOUNTED
SQL> alter diskgroup sys_dg mount;
Diskgroup altered.
SQL>  select name,state from v$asm_diskgroup;
NAME                           STATE
------------------------------ -----------
DATA                           MOUNTED
SYS_DG                         MOUNTED

Migrate the ASM spfile, OCR and voting disk from +DATA to +SYS_DG

SQL> create spfile='+SYS_DG' FROM pfile='/tmp/pfile.asm';
File created.
[root@xifenfei ~]# ocrconfig -add +SYS_DG
[root@xifenfei ~]# ocrconfig -DELETE +DATA
[root@xifenfei ~]# crsctl replace votedisk +SYS_DG
Successful addition of voting disk 9694a31053ea4ff4bfb57891461a1296.
Successful deletion of voting disk 161ddea0a5fe4f28bfb67536e6105122.
Successfully replaced voting disk group with +SYS_DG.
CRS-4266: Voting file(s) successfully replaced
[root@xifenfei ~]# crsctl stop crs
[root@xifenfei ~]# crsctl start crs

Remove the old disk group (+DG_SYS) resource from the OCR

[root@xifenfei ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       xifenfei                 STABLE
ora.DG_SYS.dg
               ONLINE  OFFLINE      xifenfei                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.SYS_DG.dg
               ONLINE  ONLINE       xifenfei                 STABLE
ora.net1.network
               ONLINE  ONLINE       xifenfei                 STABLE
ora.ons
               ONLINE  ONLINE       xifenfei                 STABLE
ora.proxy_advm
               ONLINE  OFFLINE      xifenfei                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       xifenfei                 169.254.196.108 10.1
                                                             0.30.22,STABLE
ora.asm
      1        ONLINE  ONLINE       xifenfei                 Started,STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cdb.db
      1        ONLINE  ONLINE       xifenfei                 Open,STABLE
ora.cvu
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.oc4j
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.xifenfei.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
--------------------------------------------------------------------------------
[root@xifenfei ~]# srvctl remove diskgroup -diskgroup dg_sys
[root@xifenfei ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       xifenfei                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.SYS_DG.dg
               ONLINE  ONLINE       xifenfei                 STABLE
ora.net1.network
               ONLINE  ONLINE       xifenfei                 STABLE
ora.ons
               ONLINE  ONLINE       xifenfei                 STABLE
ora.proxy_advm
               ONLINE  OFFLINE      xifenfei                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       xifenfei                 169.254.196.108 10.1
                                                             0.30.22,STABLE
ora.asm
      1        ONLINE  ONLINE       xifenfei                 Started,STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cdb.db
      1        ONLINE  ONLINE       xifenfei                 Open,STABLE
ora.cvu
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.oc4j
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.xifenfei.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
--------------------------------------------------------------------------------

At this point the renaming of the disk group holding the OCR, voting disk and ASM spfile is complete. Because this is a single-node RAC the cluster had to be bounced; with two or more nodes the same change can be carried out without an outage by restarting the nodes one at a time (a rolling restart, sketched below). The procedure is exactly the same as in 11.2 RAC; nothing has changed.
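
A minimal sketch of such a rolling restart; on each node in turn (as root), stop and start the stack and wait for it to come back before moving to the next node:

-- on the first node
crsctl stop crs
crsctl start crs
crsctl check crs          -- wait until CRS, CSS and EVM report online
crsctl status res -t      -- confirm the resources are back online
-- then repeat on the remaining nodes, one at a time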

Recovering a failed OCR disk group in Oracle 12.1 RAC

In 11.2 and 12.1 RAC the OCR and voting disk can be stored in ASM, and many installations place them in a dedicated ASM disk group. What happens if that disk group is damaged while the disk group holding the data is still intact? Recovery falls into two cases: with an OCR backup and without one. Since the OCR is backed up automatically every 4 hours by default, most systems will have a backup available. This post walks through recovering an Oracle 12c RAC whose ASM disk group holding the OCR and voting disk has failed, with an OCR backup present.
Confirm that the OCR, voting disk and ASM spfile are in a dedicated ASM disk group

[grid@xifenfei ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1360
         Available space (kbytes) :     408208
         ID                       : 2132096904
         Device/File Name         :    +DG_SYS
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
SQL> show parameter spfile;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DG_SYS/xff-cluster/ASMPARAMET
                                                 ERFILE/registry.253.825628977
[grid@xifenfei ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   3e20d13ae98a4fcfbffa489ab4df68a3 (/dev/sdc2) [DG_SYS]
Located 1 voting disk(s).
ASMCMD>  lsdsk -t -G dg_sys
Create_Date  Mount_Date  Repair_Timer  Path
08-SEP-13    08-SEP-13   0             /dev/sdc2

Check the current RAC status

[grid@xifenfei ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       xifenfei                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.net1.network
               ONLINE  ONLINE       xifenfei                 STABLE
ora.ons
               ONLINE  ONLINE       xifenfei                 STABLE
ora.proxy_advm
               ONLINE  OFFLINE      xifenfei                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       xifenfei                 169.254.196.108 10.1
                                                             0.30.22,STABLE
ora.asm
      1        ONLINE  ONLINE       xifenfei                 Started,STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cdb.db
      1        ONLINE  ONLINE       xifenfei                 Open,STABLE
ora.cvu
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.oc4j
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.xifenfei.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
--------------------------------------------------------------------------------

Inspect the disk header with kfed

[grid@xifenfei ~]$ kfed read /dev/sdc2
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  2879801080 ; 0x00c: 0xaba646f8
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                202375168 ; 0x020: 0x0c100000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:             DG_SYS_0000 ; 0x028: length=11
kfdhdb.grpname:                  DG_SYS ; 0x048: length=6
kfdhdb.fgname:              DG_SYS_0000 ; 0x068: length=11
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             32990483 ; 0x0a8: HOUR=0x13 DAYS=0x8 MNTH=0x9 YEAR=0x7dd
kfdhdb.crestmp.lo:            303455232 ; 0x0ac: USEC=0x0 MSEC=0x197 SECS=0x21 MINS=0x4
kfdhdb.mntstmp.hi:             32990485 ; 0x0b0: HOUR=0x15 DAYS=0x8 MNTH=0x9 YEAR=0x7dd
kfdhdb.mntstmp.lo:           1776845824 ; 0x0b4: USEC=0x0 MSEC=0x221 SECS=0x1e MINS=0x1a
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                    5451 ; 0x0c4: 0x0000154b
kfdhdb.pmcnt:                         3 ; 0x0c8: 0x00000003
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                     10 ; 0x0d4: 0x0000000a
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             32990483 ; 0x0e4: HOUR=0x13 DAYS=0x8 MNTH=0x9 YEAR=0x7dd
kfdhdb.grpstmp.lo:            301063168 ; 0x0e8: USEC=0x0 MSEC=0x77 SECS=0x1f MINS=0x4
kfdhdb.vfstart:                     224 ; 0x0ec: 0x000000e0
kfdhdb.vfend:                       256 ; 0x0f0: 0x00000100
kfdhdb.spfile:                      219 ; 0x0f4: 0x000000db   ----start of the ASM spfile
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001
kfdhdb.flags:                         1 ; 0x0fc: 0x00000001
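
kfdhdb.spfile together with kfdhdb.spfflg=1 indicates that a copy of the ASM spfile starts at allocation unit 219 of this disk. Purely as an illustration (it is not needed for the recovery below), with the 1MB AU size shown above that AU could be dumped and inspected roughly like this; a hedged sketch, and the dump contains the spfile text plus padding:

dd if=/dev/sdc2 of=/tmp/asm_spfile_au.dmp bs=1048576 skip=219 count=1
strings /tmp/asm_spfile_au.dmp | head     -- the ASM init parameters should be readable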

Back up the OCR

[root@xifenfei xff-cluster]# ocrconfig  -manualbackup
xifenfei     2013/09/08 23:48:57     /u01/app/12.1/grid/product/cdata/xff-cluster/backup_20130908_234857.ocr
[root@xifenfei xff-cluster]# ocrconfig -showbackup
xifenfei     2013/08/08 21:11:00     /u01/app/12.1/grid/product/cdata/xifenfe-cluster/backup00.ocr
xifenfei     2013/08/08 17:10:56     /u01/app/12.1/grid/product/cdata/xifenfe-cluster/backup01.ocr
xifenfei     2013/07/08 20:23:18     /u01/app/12.1/grid/product/cdata/xifenfe-cluster/backup02.ocr
xifenfei     2013/08/08 17:10:56     /u01/app/12.1/grid/product/cdata/xifenfe-cluster/day.ocr
xifenfei     2013/08/08 17:10:56     /u01/app/12.1/grid/product/cdata/xifenfe-cluster/week.ocr
xifenfei     2013/09/08 23:48:57     /u01/app/12.1/grid/product/cdata/xff-cluster/backup_20130908_234857.ocr
xifenfei     2013/06/28 22:55:02     /u01/app/12.1/grid/product/cdata/xifenfe-cluster/backup_20130628_225502.ocr

Corrupt the ASM disk

[grid@xifenfei ~]$ dd if=/dev/zero of=/dev/sdc2 bs=4096 count=1
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 6.6061e-05 seconds, 62.0 MB/s

Stop CRS

[root@xifenfei xff-cluster]# crsctl stop crs

Start CRS

[root@xifenfei xff-cluster]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[grid@xifenfei admin]$ crsctl status res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  OFFLINE                               Instance Shutdown,ST
                                                             ABLE
ora.cluster_interconnect.haip
      1        ONLINE  OFFLINE                               STABLE
ora.crf
      1        ONLINE  OFFLINE                               STABLE
ora.crsd
      1        ONLINE  OFFLINE                               STABLE
ora.cssd
      1        ONLINE  OFFLINE      xifenfei                 STARTING
ora.cssdmonitor
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.ctssd
      1        ONLINE  OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.evmd
      1        ONLINE  INTERMEDIATE xifenfei                 STABLE
ora.gipcd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.gpnpd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.mdnsd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.storage
      1        ONLINE  OFFLINE                               STABLE
--------------------------------------------------------------------------------

GI logs

--alert log
2013-09-08 23:53:37.662:
[gpnpd(1507)]CRS-2328:GPNPD started on node xifenfei.
2013-09-08 23:54:10.244:
[cssd(1584)]CRS-1713:CSSD daemon is started in hub mode
2013-09-08 23:54:10.915:
[cssd(1584)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/12.1/grid/product/log/xifenfei/cssd/ocssd.log
2013-09-08 23:54:11.183:
[ohasd(1367)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2013-09-08 23:54:11.183:
[ohasd(1367)]CRS-2769:Unable to failover resource 'ora.diskmon'.
2013-09-08 23:54:26.044:
[cssd(1584)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/12.1/grid/product/log/xifenfei/cssd/ocssd.log
2013-09-08 23:54:41.146:
[cssd(1584)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/12.1/grid/product/log/xifenfei/cssd/ocssd.log
2013-09-08 23:54:56.195:
[cssd(1584)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/12.1/grid/product/log/xifenfei/cssd/ocssd.log
--ocssd log
2013-09-08 23:54:25.976: [    GPNP][1090226496]clsgpnp_profileCallUrlInt: [at clsgpnp.c:2360] Result: (0) CLSGPNP_OK. Successful get-profile CALL to remote "ipc://GPNPD_xifenfei" disco ""
2013-09-08 23:54:25.976: [    CSSD][1090226496]clssnmReadDiscoveryProfile: voting file discovery string(/dev/sd*)
2013-09-08 23:54:25.976: [    CSSD][1090226496]clssnmvDDiscThread: using discovery string /dev/sd* for initial discovery
2013-09-08 23:54:25.976: [   SKGFD][1090226496]Discovery with str:/dev/sd*:
2013-09-08 23:54:25.976: [   SKGFD][1090226496]UFS discovery with :/dev/sd*:
2013-09-08 23:54:26.032: [   SKGFD][1090226496]Fetching UFS disk :/dev/sdc:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sdc1:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sdb:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sdc2:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sdd:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sdd1:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sda:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sda1:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]Fetching UFS disk :/dev/sda2:
2013-09-08 23:54:26.037: [   SKGFD][1090226496]OSS discovery with :/dev/sd*:
2013-09-08 23:54:26.042: [   SKGFD][1090226496]Handle 0x1d65c10 from lib :UFS:: for disk :/dev/sdb:
2013-09-08 23:54:26.043: [   SKGFD][1090226496]Handle 0x20c95a0 from lib :UFS:: for disk :/dev/sdc2:
2013-09-08 23:54:26.043: [   SKGFD][1090226496]Handle 0x20c9dd0 from lib :UFS:: for disk :/dev/sdd:
2013-09-08 23:54:26.044: [   SKGFD][1090226496]Lib :UFS:: closing handle 0x1d65c10 for disk :/dev/sdb:
2013-09-08 23:54:26.044: [   SKGFD][1090226496]Lib :UFS:: closing handle 0x20c95a0 for disk :/dev/sdc2:
2013-09-08 23:54:26.044: [   SKGFD][1090226496]Lib :UFS:: closing handle 0x20c9dd0 for disk :/dev/sdd:
2013-09-08 23:54:26.044: [    CSSD][1090226496]clssnmvDiskVerify: Successful discovery of 0 disks
2013-09-08 23:54:26.044: [    CSSD][1090226496]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
2013-09-08 23:54:26.044: [    CSSD][1090226496]clssnmvFindInitialConfigs: No voting files found
2013-09-08 23:54:26.044: [    CSSD][1090226496](:CSSNM00070:)clssnmCompleteInitVFDiscovery: Voting file not found. Retrying discovery in 15 seconds

After the disk behind the OCR disk group was corrupted, starting CRS clearly reports that no voting files can be discovered.

Force-stop CRS

[root@xifenfei xff-cluster]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'xifenfei'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'xifenfei'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'xifenfei'
CRS-2677: Stop of 'ora.drivers.acfs' on 'xifenfei' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'xifenfei' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'xifenfei'
CRS-2673: Attempting to stop 'ora.gipcd' on 'xifenfei'
CRS-2673: Attempting to stop 'ora.evmd' on 'xifenfei'
CRS-2677: Stop of 'ora.gpnpd' on 'xifenfei' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'xifenfei' succeeded
CRS-2677: Stop of 'ora.evmd' on 'xifenfei' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'xifenfei' has completed
CRS-4133: Oracle High Availability Services has been stopped.

Start CRS in exclusive mode

[root@xifenfei xff-cluster]# crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'xifenfei'
CRS-2677: Stop of 'ora.drivers.acfs' on 'xifenfei' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'xifenfei'
CRS-2672: Attempting to start 'ora.mdnsd' on 'xifenfei'
CRS-2676: Start of 'ora.evmd' on 'xifenfei' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'xifenfei' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'xifenfei'
CRS-2676: Start of 'ora.gpnpd' on 'xifenfei' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'xifenfei'
CRS-2672: Attempting to start 'ora.gipcd' on 'xifenfei'
CRS-2676: Start of 'ora.cssdmonitor' on 'xifenfei' succeeded
CRS-2676: Start of 'ora.gipcd' on 'xifenfei' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'xifenfei'
CRS-2672: Attempting to start 'ora.diskmon' on 'xifenfei'
CRS-2676: Start of 'ora.diskmon' on 'xifenfei' succeeded
CRS-2676: Start of 'ora.cssd' on 'xifenfei' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'xifenfei'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'xifenfei'
CRS-2672: Attempting to start 'ora.ctssd' on 'xifenfei'
CRS-2676: Start of 'ora.ctssd' on 'xifenfei' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'xifenfei' succeeded
CRS-2676: Start of 'ora.drivers.acfs' on 'xifenfei' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'xifenfei'
CRS-2676: Start of 'ora.asm' on 'xifenfei' succeeded
[grid@xifenfei xifenfei]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  INTERMEDIATE xifenfei                 OCR not started,STAB
                                                             LE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.crf
      1        OFFLINE OFFLINE                               STABLE
ora.crsd
      1        OFFLINE OFFLINE                               STABLE
ora.cssd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.ctssd
      1        ONLINE  ONLINE       xifenfei                 ACTIVE:0,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.evmd
      1        ONLINE  INTERMEDIATE xifenfei                 STABLE
ora.gipcd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.gpnpd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.mdnsd
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.storage
      1        OFFLINE OFFLINE                               STABLE

Create the disk group

[grid@xifenfei xifenfei]$ sqlplus / as sysasm
SQL*Plus: Release 12.1.0.1.0 Production on Mon Sep 9 00:23:40 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> create diskgroup DG_OCR external redundancy disk '/dev/sdc2' attribute 'COMPATIBLE.ASM' = '12.1.0';
Diskgroup created.
[root@xifenfei xff-cluster]# ls -l
total 1472
-rw-r--r-- 1 root root 1503232 Sep  8 23:48 backup_20130908_234857.ocr
[root@xifenfei xff-cluster]# ocrconfig -restore backup_20130908_234857.ocr
PROT-35: The configured OCR locations are not accessible
SQL> conn / as sysasm
Connected.
SQL> drop diskgroup DG_OCR force including contents;
Diskgroup dropped.
SQL>  create diskgroup DG_SYS  external redundancy disk '/dev/sdc2' attribute 'COMPATIBLE.ASM' = '12.1.0';
Diskgroup created.

The restore fails with PROT-35 because the configured OCR location still points to the original disk group; for simplicity, recreate the disk group with the same name (+DG_SYS) as the damaged one that used to hold the OCR.

Restore the OCR

[root@xifenfei xff-cluster]# ocrconfig -restore backup_20130908_234857.ocr
--alert log
2013-09-09 00:26:50.584:
[client(3015)]CRS-1002:The OCR was restored from file backup_20130908_234857.ocr.

Recreate the voting disk

[root@xifenfei xff-cluster]# crsctl replace votedisk +DG_SYS
Successful addition of voting disk 60a037da30714f6bbfe5d90206ff27a7.
Successfully replaced voting disk group with +DG_SYS.
CRS-4266: Voting file(s) successfully replaced

Create the ASM spfile

[grid@xifenfei dbs]$ vi /tmp/asm.txt
instance_type='asm'
large_pool_size=12M
remote_login_passwordfile= "EXCLUSIVE"
asm_diskstring           = "/dev/sd*"
asm_power_limit          = 1
[grid@xifenfei dbs]$ sqlplus '/ as sysasm'
SQL*Plus: Release 12.1.0.1.0 Production on Mon Sep 9 00:34:18 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL>  create spfile='+DG_SYS' FROM pfile='/tmp/asm.txt';
File created.

Restart CRS

[root@xifenfei xff-cluster]# crsctl stop crs -f
[root@xifenfei xff-cluster]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[grid@xifenfei dbs]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.DATA.dg
               ONLINE  ONLINE       xifenfei                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       xifenfei                 STABLE
ora.net1.network
               ONLINE  ONLINE       xifenfei                 STABLE
ora.ons
               ONLINE  ONLINE       xifenfei                 STABLE
ora.proxy_advm
               ONLINE  OFFLINE      xifenfei                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       xifenfei                 169.254.196.108 10.1
                                                             0.30.22,STABLE
ora.asm
      1        ONLINE  ONLINE       xifenfei                 Started,STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cdb.db
      1        ONLINE  ONLINE       xifenfei                 Open,STABLE
ora.cvu
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.oc4j
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
ora.xifenfei.vip
      1        ONLINE  ONLINE       xifenfei                 STABLE
--------------------------------------------------------------------------------

CRS is back to normal; check the OCR, voting disk and ASM spfile further.

[grid@xifenfei ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1380
         Available space (kbytes) :     408188
         ID                       : 2132096904
         Device/File Name         :    +DG_SYS
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
[grid@xifenfei ~]$ sqlplus / as sysasm
SQL*Plus: Release 12.1.0.1.0 Production on Mon Sep 9 16:12:21 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> show parameter spfile;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DG_SYS/xff-cluster/ASMPARAMET
                                                 ERFILE/registry.253.825640465
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
[grid@xifenfei ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   60a037da30714f6bbfe5d90206ff27a7 (/dev/sdc2) [DG_SYS]
Located 1 voting disk(s).

At this point the recovery of a failed OCR disk group, with an OCR backup available, is complete. Without an OCR backup the only option is to rebuild the OCR; the rough steps are:

--deconfigure(root)
remote node
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose
lastnode
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode
--re-run the configuration to rebuild the OCR and related data (grid)
# $GRID_HOME/crs/config/config.sh

Proving that recursive sessions exist, and why they do not appear in v$session

When using the database you sometimes run into this: the session is already logged in, yet a particular operation fails with a "sessions exceeded" error. That implies the SQL spawns additional sessions behind the scenes. The experiment below demonstrates that these recursive sessions exist.
Creating a table fails with maximum number of sessions exceeded

CDB_PDB@CHF>  create table t_xifenfei(id number) ;
create table t_xifenfei(id number)
ERROR at line 1:
ORA-00018: maximum number of sessions exceeded

Here is the puzzle: the current session logged in successfully, so a session slot was clearly available; why then does creating a table still raise ORA-00018? Let's dig deeper with a 10046 trace.

CDB_PDB@CHF> alter session set events '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12';
Session altered.
CDB_PDB@CHF> create table t_xifenfei as select * from dual;
Table created.
CDB_PDB@CHF> select value from v$diag_info where name='Default Trace File';
VALUE
--------------------------------------------------------------------------------
E:\APP\XIFENFEI\diag\rdbms\cdb\cdb\trace\cdb_ora_6596.trc

Analyze the trace file

CDB_PDB@CHF> host tkprof  E:\APP\XIFENFEI\diag\rdbms\cdb\cdb\trace\cdb_ora_6596.trc d:/1.txt
--The trace file shows many operations on data dictionary base tables; taking tab$ as an example, the CREATE TABLE issues the following insert
insert into tab$(obj#,ts#,file#,block#,bobj#,tab#,intcols,kernelcols,clucols,
  audit$,flags,pctfree$,pctused$,initrans,maxtrans,rowcnt,blkcnt,empcnt,
  avgspc,chncnt,avgrln,analyzetime,samplesize,cols,property,degree,instances,
  dataobj#,avgspc_flb,flbcnt,trigflag,spare1,spare6)
values
(:1,:2,:3,:4,decode(:5,0,null,:5),decode(:6,0,null,:6),:7,:8,decode(:9,0,null,
  :9),:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,
  decode(:26,1,null,:26),decode(:27,1,null,:27),:28,:29,:30,:31,:32,:33)

Try the insert manually

CDB_PDB@CHF> insert into sys.tab$ select * from sys.tab$ where rownum=1;
insert into sys.tab$ select * from sys.tab$ where rownum=1
                *
ERROR at line 1:
ORA-01031: insufficient privileges

This proves the session creating the table has no privilege to modify tab$ directly, so some other session must be doing it on its behalf.

The v$session base view definition
The underlying SQL can be obtained by querying the V$FIXED_VIEW_DEFINITION view; it may differ slightly between versions, but it is essentially the following.
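
For reference, a sketch of how to pull that definition out of the dictionary (the listing below is the GV$SESSION definition, which is where the inst_id column comes from):

select view_definition
  from v$fixed_view_definition
 where view_name = 'GV$SESSION';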

SELECT inst_id,
       addr,
       indx,
       ksuseser,
       ksuudses,
       ksusepro,
       ksuudlui,
       ksuudlna,
       ksuudoct,
       ksusesow,
       DECODE (ksusetrn, HEXTORAW ('00'), NULL, ksusetrn),
       DECODE (ksqpswat, HEXTORAW ('00'), NULL, ksqpswat),
       DECODE (BITAND (ksuseidl, 11),
               1, 'ACTIVE',
               0, DECODE (BITAND (ksuseflg, 4096), 0, 'INACTIVE', 'CACHED'),
               2, 'SNIPED',
               3, 'SNIPED',
               'KILLED'),
       DECODE (ksspatyp,
               1, 'DEDICATED',
               2, 'SHARED',
               3, 'PSEUDO',
               'NONE'),
       ksuudsid,
       ksuudsna,
       ksuseunm,
       ksusepid,
       ksusemnm,
       ksusetid,
       ksusepnm,
       DECODE (BITAND (ksuseflg, 19),
               17, 'BACKGROUND',
               1, 'USER',
               2, 'RECURSIVE',
               '?'),
       ksusesql,
       ksusesqh,
       ksusepsq,
       ksusepha,
       ksuseapp,
       ksuseaph,
       ksuseact,
       ksuseach,
       ksusecli,
       ksusefix,
       ksuseobj,
       ksusefil,
       ksuseblk,
       ksuseslt,
       ksuseltm,
       ksusectm,
       DECODE (BITAND (ksusepfl, 16), 0, 'NO', 'YES'),
       DECODE (ksuseft,
               2, 'SESSION',
               4, 'SELECT',
               8, 'TRANSACTIONAL',
               'NONE'),
       DECODE (ksusefm,
               1, 'BASIC',
               2, 'PRECONNECT',
               4, 'PREPARSE',
               'NONE'),
       DECODE (ksusefs, 1, 'YES', 'NO'),
       ksusegrp,
       DECODE (BITAND (ksusepfl, 16),
               16, 'ENABLED',
               DECODE (BITAND (ksusepfl, 32), 32, 'FORCED', 'DISABLED')),
       DECODE (BITAND (ksusepfl, 64),
               64, 'FORCED',
               DECODE (BITAND (ksusepfl, 128), 128, 'DISABLED', 'ENABLED')),
       DECODE (BITAND (ksusepfl, 512),
               512, 'FORCED',
               DECODE (BITAND (ksusepfl, 256), 256, 'DISABLED', 'ENABLED')),
       ksusecqd,
       ksuseclid
  FROM x$ksuse
 WHERE BITAND (ksspaflg, 1) != 0 AND BITAND (ksuseflg, 1) != 0

Note: v$session only ever returns rows with BITAND(ksuseflg, 1) != 0.

Test by locking the base table
CDB_PDB@SYS denotes the SYS user and CDB_PDB@CHF the CHF user; the test uses two sessions under different users.

CDB_PDB@SYS> show user;
USER is "SYS"
--SYS locks the base table
CDB_PDB@SYS> lock table tab$ IN exclusive MODE;
Table(s) Locked.
CDB_PDB@CHF> show user;
USER is "CHF"
CDB_PDB@CHF>  select sid from v$mystat where rownum=1;
       SID
----------
        57
CDB_PDB@CHF> select paddr from v$session where sid=57;
PADDR
----------------
000007FF1E10F228
--CHF creates a table (this hangs, blocked by the lock held on tab$)
CDB_PDB@CHF>  create table t_xifenfei_new as select * from dual;
--query from the SYS session
CDB_PDB@SYS> SELECT s.addr,
  2         s.indx sid,
  3         s.ksuseser SERIAL#,
  4         ksuudsna username,
  5         DECODE (BITAND (ksuseflg, 19),
  6                 17, 'BACKGROUND',
  7                 1, 'USER',
  8                 2, 'RECURSIVE',
  9                 '?')
 10            TYPE
 11    FROM x$ksuse s
 12   WHERE ksusepro = '000007FF1E10F228';
ADDR                    SID    SERIAL# USERNAME                       TYPE
---------------- ---------- ---------- ------------------------------ ----------
000007FF1E1EBEA0         57         23 CHF                            USER
000007FF1E1D7F90         67        183 SYS                            RECURSIVE
CDB_PDB@SYS> SELECT ksuudsna username,
  2         ksuseflg
  3    FROM x$ksuse s
  4   WHERE ksusepro = '000007FF1E10F228';
USERNAME                         KSUSEFLG
------------------------------ ----------
CHF                             135266369
SYS                                     2
--The recursive SYS session (ksuseflg=2) is filtered out by the v$session definition, which is why recursive sessions never show up in v$session
CDB_PDB@SYS> select bitand(2,1) from dual;
BITAND(2,1)
-----------
          0

So we have verified that while our session was creating the table, a recursive SYS session performed the operations on the dictionary base tables; but because the v$session view filters out part of the rows in x$ksuse, these recursive sessions cannot be seen in v$session.

A closer look at the BITAND filter
From the v$session definition we can see that a recursive session has BITAND(ksuseflg, 19) = 2. Does that value force BITAND(ksuseflg, 1) to be 0? BITAND simply converts both arguments to binary and ANDs them bit by bit, so a result bit is 1 only where both inputs have a 1 (bitand(3,1)=1, bitand(2,1)=0). 19 in binary is 10011, so for BITAND(ksuseflg, 19) to equal 2 the lowest bit of ksuseflg must be 0; in that case BITAND(ksuseflg, 1) is necessarily 0, and therefore a recursive session can never appear in v$session.
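
A quick demonstration with the two ksuseflg values observed above (135266369 for the ordinary user session, 2 for the recursive session); the query just makes the filtering arithmetic visible:

select bitand(135266369, 19) user_type_bits,    -- 1  -> decoded as 'USER'
       bitand(135266369, 1)  user_visible,      -- 1  -> passes the v$session filter
       bitand(2, 19)         recursive_bits,    -- 2  -> decoded as 'RECURSIVE'
       bitand(2, 1)          recursive_visible  -- 0  -> filtered out of v$session
  from dual;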

Database mount fails with ORA-01105 and ORA-01606 because of inconsistent undo_management parameters in RAC

Environment: Linux 5.8, 10.2.0.5 RAC. Only one of the two nodes can mount the database; attempting to mount it on the other node raises ORA-01105 and ORA-01606.
Database version

SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

CRS resource status

[oracle@node1 dbs]$ $ORA_CRS_HOME/bin/crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....D1.inst application    OFFLINE   OFFLINE
ora....D2.inst application    ONLINE    ONLINE    node2
ora.PROD.db    application    ONLINE    ONLINE    node2
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2

Mount error on node 1

SQL> startup
ORACLE instance started.
Total System Global Area  171966464 bytes
Fixed Size                  2094832 bytes
Variable Size             113248528 bytes
Database Buffers           50331648 bytes
Redo Buffers                6291456 bytes
ORA-01105: mount is incompatible with mounts by other instances
ORA-01606: gc_files_to_locks not identical to that of another mounted instance
Error:    ORA 1105
Text:     mount is incompatible with mounts by other instances
-------------------------------------------------------------------------------
Cause:  An attempt was made to mount the database, but another instance has already mounted
        a database by the same name, and the mounts are not compatible.
        Additional messages will accompany this message to report why the mounts are incompatible.
Action:  See the accompanying messages for the appropriate action to take.
Error:  ORA 1606
Text:   GC_FILES_TO_LOCKS not identical to that of another mounted instance
-------------------------------------------------------------------------------
Cause:  The initialization parameter GC_FILES_TO_LOCKS is not the same as
        another instance mounted in parallel mode.
        This parameter must be the same as that for all shared instances.
Action: Modify the parameter to be compatible with the other instances, then
        shut down and restart the instance.

Following the error messages, gc_files_to_locks was checked on both nodes: it is empty (the default) on both, i.e. the values are identical.

SQL> show parameter gc_files_to_locks;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
gc_files_to_locks                    string

The gc-related hidden parameters were checked on both nodes as well; every value matches.
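
A listing like the one below can be produced with the usual hidden-parameter query against x$ksppi and x$ksppcv; a sketch (run as SYS, column formatting omitted):

select i.ksppinm name, i.ksppdesc description, v.ksppstvl value
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm like '\_gc%' escape '\'
 order by i.ksppinm;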

NAME                           DESCRIPTION                                                           VALUE
------------------------------ --------------------------------------------------------------------- -------
_gc_affinity_limit             dynamic affinity limit                                                50
_gc_affinity_minimum           dynamic affinity minimum activity per minute                          6000
_gc_affinity_time              if non zero, enable dynamic object affinity                           10
_gc_async_memcpy               if TRUE, use async memcpy                                             FALSE
_gc_check_bscn                 if TRUE, check for stale blocks                                       TRUE
_gc_coalesce_recovery_reads    if TRUE, coalesce recovery reads                                      TRUE
_gc_defer_time                 how long to defer down converts for hot buffers                       3
_gc_dissolve_undo_affinity     if TRUE, dissolve undo affinity after an offline                      FALSE
_gc_dynamic_affinity_locks     if TRUE, get dynamic affinity locks                                   TRUE
_gc_element_percent            global cache element percent                                          103
_gc_global_lru                 turn global lru off, make it automatic, or turn it on                 AUTO
_gc_initiate_undo_affinity     if TRUE, initiate undo affinity after an online                       TRUE
_gc_integrity_checks           set the integrity check level                                         1
_gc_keep_recovery_buffers      if TRUE, make recovery buffers current                                TRUE
_gc_latches                    number of latches per LMS process                                     8
_gc_maximum_bids               maximum number of bids which can be prepared                          0
_gcs_fast_reconfig             if TRUE, enable fast reconfiguration for gcs locks                    TRUE
_gcs_latches                   number of gcs resource hash latches to be allocated per LMS process   64
_gcs_pkey_history              number of pkey remastering history                                    4000
_gcs_process_in_recovery       if TRUE, process gcs requests during instance recovery                TRUE
_gcs_resources                 number of gcs resources to be allocated
_gcs_shadow_locks              number of pcm shadow locks to be allocated
_gc_statistics                 if TRUE, kcl statistics are maintained                                TRUE
_gcs_testing                   GCS testing parameter                                                 0
_gc_tsn_undo_affinity          if TRUE, use TSN undo affinity                                        TRUE
_gc_undo_affinity              if TRUE, enable dynamic undo affinity                                 TRUE
_gc_undo_affinity_locks        if TRUE, get affinity locks for undo                                  TRUE
_gc_use_cr                     if TRUE, allow CR pins on PI and WRITING buffers                      TRUE
_gc_vector_read                if TRUE, vector read current buffers                                  TRUE

Comparing the instance parameters more carefully, the undo settings turn out to be the anomaly.

--node 1
SQL>  show parameter undo
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      MANUAL
undo_retention                       integer     900
undo_tablespace                      string      SYSTEM
--node 2
SQL>  show parameter undo
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1

Now the cause is clear: the undo_* parameters of the two nodes are inconsistent, which is why only one node can mount the database. Further investigation showed that the DBA had accidentally dropped the undo_* lines while editing node 1's parameter file, so that instance fell back to the defaults undo_management=MANUAL and undo_tablespace=SYSTEM.
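
A sketch of the fix, assuming node 1's instance is PROD1 (inferred from the resource names above) and that UNDOTBS2 is the undo tablespace originally created for it (a hypothetical name, check the actual tablespaces first): put the undo parameters back and restart that instance.

-- re-add the lost lines to node 1's pfile (values are assumptions, adjust to the environment)
PROD1.undo_management=AUTO
PROD1.undo_tablespace='UNDOTBS2'
-- or, if the instances share an spfile, from the surviving instance:
alter system set undo_management=AUTO scope=spfile sid='PROD1';
alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='PROD1';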

RMAN backup and Data Guard standby archive log deletion scripts for Windows

I have always felt that running Oracle on Windows is a dubious idea, yet plenty of people do exactly that. Two routine tasks, RMAN backups and deleting applied archive logs on a Data Guard standby, are easy to script with a shell on Linux/Unix, but on Windows they become a hassle, not because they are hard but because few people do it and it is not obvious where to start. Below are simple scripts that implement both on Windows; better approaches are very welcome (Windows really is not my strong suit).
RMAN backup script

--backup_oracle.bat
rman target / cmdfile=D:\backup\rman\backup_db.rman log=d:/backup/rman/logfile/rmanlog%date:~0,4%%date:~5,2%%date:~8,2%.log
--backup_db.rman
CONFIGURE RETENTION POLICY TO REDUNDANCY = 2;
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
backup filesperset = 5 as compressed backupset database format 'd:/backup/rman/full_%U.rman';
sql 'alter system archive log current';
backup  filesperset = 50 as compressed backupset archivelog  all format 'd:/backup/rman/arch_%U.rman'  delete input;
DELETE noprompt OBSOLETE;
crosscheck backup;
delete noprompt expired backup;
backup  format 'd:/backup/rman/ctl_%U.rman' current controlfile;
backup spfile format 'd:/backup/rman/spfile_%U.rman' ;
exit;

Add backup_oracle.bat to the Windows Task Scheduler and the backup runs automatically.
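
One way to register it from the command line, as a sketch (task name, schedule and account are arbitrary, adjust as needed):

schtasks /create /tn "rman_full_backup" /tr "D:\backup\rman\backup_oracle.bat" /sc daily /st 02:00 /ru SYSTEM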

Delete archived logs already applied on the Data Guard standby

--delete_dg_archivelog.bat
rem adjust the deployment directory as needed
cd D:\win_xifenfei
d:
rem make sure delete_archive.sql actually returns rows before relying on this
echo delete archivelog starting > delete_archivelog.bak
sqlplus / as sysdba @delete_archive.sql
echo rman target / cmdfile=rman_checkcross.rman>>delete_archivelog.bat
call delete_archivelog.bat >>delete_dg_archivelog_%DATE:~0,4%%DATE:~5,2%%DATE:~8,2%.log
exit
--delete_archive.sql
set lines 150
col name for a150
set pagesize 0 feedback off verify off heading off echo off
spool delete_archivelog.bat
select 'del '||name from v$archived_log where APPLIED='YES' AND NAME IS NOT NULL and DEST_ID=1;
spool off
exit;
--rman_checkcross.rman
crosscheck archivelog all;
delete noprompt expired archivelog all;
exit

Add delete_dg_archivelog.bat to the Windows Task Scheduler as well.