ORA-15130: diskgroup “ORADATA” is being dismounted


The diskgroup is dismounted again almost immediately after being mounted:

Sat Dec 25 17:48:45 2021
SQL> alter diskgroup ORADATA mount 
NOTE: cache registered group ORADATA number=5 incarn=0xd4b7ac6a
NOTE: cache began mount (first) of group ORADATA number=5 incarn=0xd4b7ac6a
NOTE: Assigning number (5,24) to disk (/dev/mapper/data31)
NOTE: Assigning number (5,26) to disk (/dev/mapper/data33)
NOTE: Assigning number (5,21) to disk (/dev/mapper/data29)
NOTE: Assigning number (5,23) to disk (/dev/mapper/data30)
NOTE: Assigning number (5,25) to disk (/dev/mapper/data32)
NOTE: Assigning number (5,19) to disk (/dev/mapper/data27)
NOTE: Assigning number (5,20) to disk (/dev/mapper/data28)
NOTE: Assigning number (5,18) to disk (/dev/mapper/data26)
NOTE: Assigning number (5,14) to disk (/dev/mapper/data22)
NOTE: Assigning number (5,17) to disk (/dev/mapper/data25)
NOTE: Assigning number (5,16) to disk (/dev/mapper/data24)
NOTE: Assigning number (5,15) to disk (/dev/mapper/data23)
NOTE: Assigning number (5,13) to disk (/dev/mapper/data21)
NOTE: Assigning number (5,12) to disk (/dev/mapper/data20)
NOTE: Assigning number (5,10) to disk (/dev/mapper/data19)
NOTE: Assigning number (5,9) to disk (/dev/mapper/data18)
NOTE: Assigning number (5,8) to disk (/dev/mapper/data17)
NOTE: Assigning number (5,3) to disk (/dev/mapper/data12)
NOTE: Assigning number (5,22) to disk (/dev/mapper/data3)
NOTE: Assigning number (5,2) to disk (/dev/mapper/data11)
NOTE: Assigning number (5,7) to disk (/dev/mapper/data16)
NOTE: Assigning number (5,28) to disk (/dev/mapper/data5)
NOTE: Assigning number (5,32) to disk (/dev/mapper/data9)
NOTE: Assigning number (5,6) to disk (/dev/mapper/data15)
NOTE: Assigning number (5,5) to disk (/dev/mapper/data14)
NOTE: Assigning number (5,4) to disk (/dev/mapper/data13)
NOTE: Assigning number (5,1) to disk (/dev/mapper/data10)
NOTE: Assigning number (5,30) to disk (/dev/mapper/data7)
NOTE: Assigning number (5,29) to disk (/dev/mapper/data6)
NOTE: Assigning number (5,31) to disk (/dev/mapper/data8)
NOTE: Assigning number (5,11) to disk (/dev/mapper/data2)
NOTE: Assigning number (5,27) to disk (/dev/mapper/data4)
NOTE: Assigning number (5,0) to disk (/dev/mapper/data1)
Sat Dec 25 17:48:52 2021
NOTE: GMON heartbeating for grp 5
GMON querying group 5 at 153 for pid 32, osid 68608
NOTE: cache opening disk 0 of grp 5: ORADATA_0000 path:/dev/mapper/data1
NOTE: F1X0 found on disk 0 au 2 fcn 0.0
NOTE: cache opening disk 1 of grp 5: ORADATA_0001 path:/dev/mapper/data10
NOTE: cache opening disk 2 of grp 5: ORADATA_0002 path:/dev/mapper/data11
NOTE: cache opening disk 3 of grp 5: ORADATA_0003 path:/dev/mapper/data12
NOTE: cache opening disk 4 of grp 5: ORADATA_0004 path:/dev/mapper/data13
NOTE: cache opening disk 5 of grp 5: ORADATA_0005 path:/dev/mapper/data14
NOTE: cache opening disk 6 of grp 5: ORADATA_0006 path:/dev/mapper/data15
NOTE: cache opening disk 7 of grp 5: ORADATA_0007 path:/dev/mapper/data16
NOTE: cache opening disk 8 of grp 5: ORADATA_0008 path:/dev/mapper/data17
NOTE: cache opening disk 9 of grp 5: ORADATA_0009 path:/dev/mapper/data18
NOTE: cache opening disk 10 of grp 5: ORADATA_0010 path:/dev/mapper/data19
NOTE: cache opening disk 11 of grp 5: ORADATA_0011 path:/dev/mapper/data2
NOTE: cache opening disk 12 of grp 5: ORADATA_0012 path:/dev/mapper/data20
NOTE: cache opening disk 13 of grp 5: ORADATA_0013 path:/dev/mapper/data21
NOTE: cache opening disk 14 of grp 5: ORADATA_0014 path:/dev/mapper/data22
NOTE: cache opening disk 15 of grp 5: ORADATA_0015 path:/dev/mapper/data23
NOTE: cache opening disk 16 of grp 5: ORADATA_0016 path:/dev/mapper/data24
NOTE: cache opening disk 17 of grp 5: ORADATA_0017 path:/dev/mapper/data25
NOTE: cache opening disk 18 of grp 5: ORADATA_0018 path:/dev/mapper/data26
NOTE: cache opening disk 19 of grp 5: ORADATA_0019 path:/dev/mapper/data27
NOTE: cache opening disk 20 of grp 5: ORADATA_0020 path:/dev/mapper/data28
NOTE: cache opening disk 21 of grp 5: ORADATA_0021 path:/dev/mapper/data29
NOTE: cache opening disk 22 of grp 5: ORADATA_0022 path:/dev/mapper/data3
NOTE: cache opening disk 23 of grp 5: ORADATA_0023 path:/dev/mapper/data30
NOTE: cache opening disk 24 of grp 5: ORADATA_0024 path:/dev/mapper/data31
NOTE: cache opening disk 25 of grp 5: ORADATA_0025 path:/dev/mapper/data32
NOTE: cache opening disk 26 of grp 5: ORADATA_0026 path:/dev/mapper/data33
NOTE: cache opening disk 27 of grp 5: ORADATA_0027 path:/dev/mapper/data4
NOTE: cache opening disk 28 of grp 5: ORADATA_0028 path:/dev/mapper/data5
NOTE: cache opening disk 29 of grp 5: ORADATA_0029 path:/dev/mapper/data6
NOTE: cache opening disk 30 of grp 5: ORADATA_0030 path:/dev/mapper/data7
NOTE: cache opening disk 31 of grp 5: ORADATA_0031 path:/dev/mapper/data8
NOTE: cache opening disk 32 of grp 5: ORADATA_0032 path:/dev/mapper/data9
NOTE: cache mounting (first) external redundancy group 5/0xD4B7AC6A (ORADATA)
Sat Dec 25 17:48:52 2021
* allocate domain 5, invalid = TRUE 
kjbdomatt send to inst 2
Sat Dec 25 17:48:52 2021
NOTE: attached to recovery domain 5
NOTE: starting recovery of thread=1 ckpt=92.6417 group=5 (ORADATA)
NOTE: advancing ckpt for group 5 (ORADATA) thread=1 ckpt=92.6418
NOTE: cache recovered group 5 to fcn 0.9502919
NOTE: redo buffer size is 256 blocks (1053184 bytes)
Sat Dec 25 17:48:52 2021
NOTE: LGWR attempting to mount thread 1 for diskgroup 5 (ORADATA)
NOTE: LGWR found thread 1 closed at ABA 92.6417
NOTE: LGWR mounted thread 1 for diskgroup 5 (ORADATA)
NOTE: LGWR opening thread 1 at fcn 0.9502919 ABA 93.6418
NOTE: cache mounting group 5/0xD4B7AC6A (ORADATA) succeeded
NOTE: cache ending mount (success) of group ORADATA number=5 incarn=0xd4b7ac6a
Sat Dec 25 17:48:53 2021
NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 5
SUCCESS: diskgroup ORADATA was mounted
SUCCESS: alter diskgroup ORADATA mount
Sat Dec 25 17:48:53 2021
NOTE: diskgroup resource ora.ORADATA.dg is online
WARNING:cache read  a corrupt block: group=5(ORADATA)dsk=5 blk=2 disk=5(ORADATA_0005)incarn=2406 au=0 blk=2 count=1
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_48956.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
NOTE: a corrupted block from group ORADATA was dumped to /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_48956.trc
WARNING:cache read(retry)a corrupt block:group=5(ORADATA)dsk=5 blk=2 disk=5(ORADATA_0005)incarn=2406 au=0 blk=2 count=1
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_48956.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
ERROR: cache failed to read group=5(ORADATA) dsk=5 blk=2 from disk(s): 5(ORADATA_0005)
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
NOTE: cache initiating offline of disk 5 group ORADATA
NOTE: process _rbal_+asm1 (48956) initiating offline of disk 5.240607694 (ORADATA_0005) with mask 0x7e in group 5
NOTE: initiating PST update: grp = 5, dsk = 5/0xe5761ce, mask = 0x6a, op = clear
GMON updating disk modes for group 5 at 155 for pid 18, osid 48956
ERROR: Disk 5 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 5)
Sat Dec 25 17:48:55 2021
NOTE: cache dismounting (not clean) group 5/0xD4B7AC6A (ORADATA) 
WARNING: Offline for disk ORADATA_0005 in mode 0x7f failed.
Sat Dec 25 17:48:55 2021
NOTE: halting all I/Os to diskgroup 5 (ORADATA)
NOTE: messaging CKPT to quiesce pins Unix process pid: 22744, image: oracle@wxzldb1 (B000)
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_48956.trc  (incident=1289754):
ORA-15335: ASM metadata corruption detected in disk group 'ORADATA'
ORA-15130: diskgroup "ORADATA" is being dismounted
ORA-15066: offlining disk "ORADATA_0005" in group "ORADATA" may result in a data loss
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
Incident details in: /u01/app/grid/diag/asm/+asm/+ASM1/incident/incdir_1289754/+ASM1_rbal_48956_i1289754.trc
NOTE: LGWR doing non-clean dismount of group 5 (ORADATA)
NOTE: LGWR sync ABA=93.6418 last written ABA 93.6418
kjbdomdet send to inst 2
detach from dom 5, sending detach message to inst 2
Sat Dec 25 17:48:56 2021
List of instances:
 1 2
Dirty detach reconfiguration started (new ddet inc 1, cluster inc 4)
Sat Dec 25 17:48:56 2021
Sweep [inc][1289754]: completed
 Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 5 invalid = TRUE 
 41 GCS resources traversed, 0 cancelled
Dirty Detach Reconfiguration complete
freeing rdom 5
System State dumped to trace file /u01/app/grid/diag/asm/+asm/+ASM1/incident/incdir_1289754/+ASM1_rbal_48956_i1289754.trc
WARNING: dirty detached from domain 5
NOTE: cache dismounted group 5/0xD4B7AC6A (ORADATA) 

The problem is clearly that block 2 of AU 0 on disk 5 is damaged, which is why the diskgroup goes abnormal right after mounting. Use kfed to analyze the block in question:

C:\Users\XFF>kfed read h:\temp\asmdisk\data14.dd|more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483653 ; 0x008: disk=5
kfbh.check:                   314993330 ; 0x00c: 0x12c66ab2
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        5 ; 0x024: 0x0005
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:            ORADATA_0005 ; 0x028: length=12
kfdhdb.grpname:                 ORADATA ; 0x048: length=7
kfdhdb.fgname:             ORADATA_0005 ; 0x068: length=12

C:\Users\XFF>kfed read h:\temp\asmdisk\data14.dd aun=0 blkn=2|more
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
0066D8200 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

The kfed analysis confirms that the block is indeed corrupt. This block records the AU allocation information for the disk; as long as the diskgroup's space usage does not change and no rebalance runs, the block is normally never read, and if it is never read the diskgroup will not be dismounted. Following that reasoning, the issue was worked around by patching so that the ORADATA diskgroup no longer performs rebalance or allocates/frees space, which keeps it stably mounted.
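
Before this kind of patching, or any other metadata work, it is worth preserving the metadata area of every disk in the group so that kfed can be rerun against copies instead of the live devices. A minimal sketch with dd and kfed, assuming a /backup staging directory with enough space and a 50 MB capture per disk (both are assumptions; the device names are the ones from the mount log above):

mkdir -p /backup/oradata_meta
for d in /dev/mapper/data{1..33}; do
  # save the first 50 MB of each disk, which covers the disk header and the fixed metadata area
  dd if=$d of=/backup/oradata_meta/$(basename $d).dd bs=1048576 count=50 conv=noerror,sync
done
# re-check the damaged block on the saved image of disk 5 (ORADATA_0005 = /dev/mapper/data14)
kfed read /backup/oradata_meta/data14.dd aun=0 blkn=2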


The database then opened directly, achieving zero data loss.

ORA-600 kfrValAcd30 recovery


A customer's storage controller failed. After the controller was repaired, ASM could not mount and reported ORA-600 [kfrValAcd30], so they asked us for technical support.


Errors in the ASM alert log:

Wed Apr 03 16:50:57 2019
SQL> alter diskgroup data mount
NOTE: cache registered group DATA number=1 incarn=0x14248741
NOTE: cache began mount (first) of group DATA number=1 incarn=0x14248741
NOTE: Assigning number (1,0) to disk (ORCL:DATAVOL1)
Wed Apr 03 16:51:03 2019
NOTE: start heartbeating (grp 1)
kfdp_query(DATA): 15
kfdp_queryBg(): 15
NOTE: cache opening disk 0 of grp 1: DATAVOL1 label:DATAVOL1
NOTE: F1X0 found on disk 0 au 2 fcn 0.0
NOTE: cache mounting (first) external redundancy group 1/0x14248741 (DATA)
Wed Apr 03 16:51:04 2019
* allocate domain 1, invalid = TRUE
Wed Apr 03 16:51:04 2019
NOTE: attached to recovery domain 1
NOTE: starting recovery of thread=1 ckpt=27.2697 group=1 (DATA)
Errors in file /u01/app/grid/diag/asm/+asm/+ASM2/trace/+ASM2_ora_15951.trc  (incident=23394):
ORA-00600: internal error code, arguments: [kfrValAcd30], [DATA], [1], [27], [2697], [28], [2697], [], [], [], [], []
Incident details in: /u01/app/grid/diag/asm/+asm/+ASM2/incident/incdir_23394/+ASM2_ora_15951_i23394.trc
Abort recovery for domain 1
NOTE: crash recovery signalled OER-600
ERROR: ORA-600 signalled during mount of diskgroup DATA
ORA-00600: internal error code, arguments: [kfrValAcd30], [DATA], [1], [27], [2697], [28], [2697], [], [], [], [], []
ERROR: alter diskgroup data mount
NOTE: cache dismounting (clean) group 1/0x14248741 (DATA)
NOTE: lgwr not being msg'd to dismount
freeing rdom 1
Wed Apr 03 16:51:05 2019
Sweep [inc][23394]: completed
Sweep [inc2][23394]: completed
Wed Apr 03 16:51:05 2019
Trace dumping is performing id=[cdmp_20190403165105]
NOTE: detached from domain 1
NOTE: cache dismounted group 1/0x14248741 (DATA)
NOTE: cache ending mount (fail) of group DATA number=1 incarn=0x14248741
kfdp_dismount(): 16
kfdp_dismountBg(): 16
NOTE: De-assigning number (1,0) from disk (ORCL:DATAVOL1)
ERROR: diskgroup DATA was not mounted
NOTE: cache deleting context for group DATA 1/337938241

Related MOS note: ORA-600 [KFRVALACD30] in ASM (Doc ID 2123013.1).


ORA-00600: internal error code, arguments: [kfrValAcd30] can be caused by a bug or by a hardware fault. Given this customer's situation, the most likely explanation is that the hardware failure left the diskgroup's ACD (Active Change Directory) in a state where recovery cannot proceed, so the mount fails. With a bit of luck the diskgroup can be brought back to a mountable state with kfed-based repairs; failing that, the datafiles can be extracted at a low level to recover as much data as possible.
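
If a kfed-based attempt is going to be made, a sensible first step is to locate the block device behind the ASMLib label from the alert log and snapshot its metadata before touching anything. A rough sketch, assuming the standard ASMLib device path and that the oracleasm tools are available:

# map the ASMLib label seen in the alert log (ORCL:DATAVOL1) to its backing block device
/usr/sbin/oracleasm querydisk -p DATAVOL1

# keep a copy of the metadata area before any repair attempt
dd if=/dev/oracleasm/disks/DATAVOL1 of=/tmp/DATAVOL1.meta.dd bs=1048576 count=50

# confirm the disk header itself is still readable
kfed read /dev/oracleasm/disks/DATAVOL1 | grep -E 'kfdhdb.dskname|kfdhdb.grpname|kfdhdb.hdrsts'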

ORA-15042 on ASM startup caused by disks missing from /dev/rdisk on HP-UX: recovery


An old friend contacted me about a customer's database problem: ASM could not mount a diskgroup and complained that two disks were missing, and he asked whether it could be recovered. The system sits on an isolated internal network, so the analysis below is pieced together from the bits of information he sent over.
Errors in the ASM alert log:
ERROR: diskgroup DGROUP1 was not mounted

Fri Aug 12 16:03:12 EAT 2016
SQL> alter diskgroup DGROUP1 mount
Fri Aug 12 16:03:12 EAT 2016
NOTE: cache registered group DGROUP1 number=1 incarn=0xf6781b5c
Fri Aug 12 16:03:12 EAT 2016
NOTE: Hbeat: instance first (grp 1)
Fri Aug 12 16:03:16 EAT 2016
NOTE: start heartbeating (grp 1)
Fri Aug 12 16:03:16 EAT 2016
NOTE: cache dismounting group 1/0xF6781B5C (DGROUP1)
NOTE: dbwr not being msg'd to dismount
ERROR: diskgroup DGROUP1 was not mounted

Attempting to mount the diskgroup from the foreground fails with ORA-15042:
[screenshot: ORA-15042 reporting ASM disks 15 and 16 missing from the group]


From this it is obvious that the diskgroup cannot mount because ASM disks 15 and 16 are missing. The best way to recover is to find those two disks. Analyzing the existing disks with kfed, we found that ASM disk 14 maps to disk160 and ASM disk 17 maps to disk163, so the natural first guess was that disk161 and disk162 were the problem disks; the data center checked the hardware, however, and found no alarms at all.
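
Mapping ASM disk numbers to OS device files can be done straight from the disk headers. A small sketch of the kind of check used here, reading each surviving /dev/rdisk entry with kfed (run as the ASM software owner; whether grep supports -E on this HP-UX release is an assumption):

for d in /dev/rdisk/disk*; do
  echo "== $d"
  # kfdhdb.dsknum is the ASM disk number; dskname/grpname identify the diskgroup member
  kfed read $d | grep -E 'kfdhdb.dsknum|kfdhdb.dskname|kfdhdb.grpname'
done

A loop like this is what shows disk160 holding ASM disk 14 and disk163 holding ASM disk 17, leaving disks 15 and 16 unaccounted for.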

OS-level analysis
(records irrelevant to this conclusion are omitted)

ls -l /dev/rdisk
crw-rw----   1 oracle     dba         13 0x000070 Jan  1  2016 disk160
crw-rw----   1 oracle     dba         13 0x000073 Jan  1  2016 disk163
ls -l /dev/disk
brw-r-----   1 bin        sys          1 0x000070 Jan 13  2015 disk160
brw-r-----   1 bin        sys          1 0x000071 Jan 13  2015 disk161
brw-r-----   1 bin        sys          1 0x000072 Jan 13  2015 disk162
brw-r-----   1 bin        sys          1 0x000073 Jan 13  2015 disk163

Here we see that on HP-UX all of these disks exist under /dev/disk, but some are missing under /dev/rdisk. Continue the analysis with the ioscan commands:

ioscan -fNnkC disk
disk    160  64000/0xfa00/0x70  esdisk   CLAIMED     DEVICE       HP      OPEN-V
                      /dev/disk/disk160   /dev/rdisk/disk160
disk    161  64000/0xfa00/0x71  esdisk   CLAIMED     DEVICE       HP      OPEN-V
                      /dev/disk/disk161
disk    162  64000/0xfa00/0x72  esdisk   CLAIMED     DEVICE       HP      OPEN-V
                      /dev/disk/disk162
disk    163  64000/0xfa00/0x73  esdisk   CLAIMED     DEVICE       HP      OPEN-V
                      /dev/disk/disk163   /dev/rdisk/disk163

This essentially confirms that it is the device files under /dev/rdisk that have been lost. Going a step further: /dev/rdisk holds the agile (aggregated multipath) device files, so let us check whether the pre-aggregation legacy device files are still normal:

ioscan -m dsf
/dev/rdisk/disk160       /dev/rdsk/c29t12d4
                         /dev/rdsk/c28t12d4
/dev/rdisk/disk163       /dev/rdsk/c29t12d7
                         /dev/rdsk/c28t12d7
ls -l /dev/rdsk
crw-r-----   1 bin        sys        188 0x1dc000 Apr 22  2014 c29t12d0
crw-r-----   1 bin        sys        188 0x1dc100 Apr 22  2014 c29t12d1
crw-r-----   1 bin        sys        188 0x1dc300 Jan 13  2015 c29t12d3
crw-r-----   1 bin        sys        188 0x1dc400 Jan 13  2015 c29t12d4
crw-r-----   1 bin        sys        188 0x1dc500 Jan 13  2015 c29t12d5
crw-r-----   1 bin        sys        188 0x1dc600 Jan 13  2015 c29t12d6
crw-r-----   1 bin        sys        188 0x1dc700 Jan 13  2015 c29t12d7
crw-r-----   1 bin        sys        188 0x1cc100 Apr 22  2014 c28t12d1
crw-r-----   1 bin        sys        188 0x1cc300 Jan 13  2015 c28t12d3
crw-r-----   1 bin        sys        188 0x1cc400 Jan 13  2015 c28t12d4
crw-r-----   1 bin        sys        188 0x1cc500 Jan 13  2015 c28t12d5
crw-r-----   1 bin        sys        188 0x1cc600 Jan 13  2015 c28t12d6
crw-r-----   1 bin        sys        188 0x1cc700 Jan 13  2015 c28t12d7

From this we can fairly safely conclude that /dev/rdsk/c28t12d5, /dev/rdsk/c28t12d6, /dev/rdsk/c29t12d5 and /dev/rdsk/c29t12d6 are the pre-aggregation legacy device files behind the missing /dev/rdisk/disk161 and disk162. In other words, only the character device files under /dev/rdisk are broken; everything else is normal.
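
Before recreating the /dev/rdisk entries, the candidate legacy device files can be verified the same way, to confirm that they really carry ASM disks 15 and 16. A sketch (each LUN appears under two legacy paths, one per controller, so each missing ASM disk should show up twice; the /tmp staging location is an assumption):

for d in c28t12d5 c28t12d6 c29t12d5 c29t12d6; do
  dd if=/dev/rdsk/$d of=/tmp/$d.au0 bs=1048576 count=1
  kfed read /tmp/$d.au0 | grep -E 'kfdhdb.dsknum|kfdhdb.dskname|kfdhdb.grpname'
done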

Repair the problem with system commands:

insf -e -H 64000/0xfa00/0x71
insf -e -H 64000/0xfa00/0x72



/dev/rdisk/disk161 and /dev/rdisk/disk162 are now visible again, so at the OS level the device files appear to be back to normal. Fix the disk permissions and ownership:

chmod 660 /dev/rdisk/disk161
chmod 660 /dev/rdisk/disk162
chown oracle:dba /dev/rdisk/disk161
chown oracle:dba /dev/rdisk/disk162
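
Before mounting, a quick look at v$asm_disk from the ASM instance can confirm that all the disks are visible again with the expected header status. A minimal sketch (connect as sysdba or sysasm depending on the version, with the ASM environment set):

sqlplus -S / as sysdba <<'EOF'
select group_number, disk_number, header_status, mount_status, path
  from v$asm_disk
 order by group_number, disk_number;
EOF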

ASM then started normally, the diskgroup mounted, and the database opened.


This recovery was driven mainly by diagnosing and fixing the problem at the operating-system level, which allowed a perfect recovery of the database with zero data loss. A similar recovery case: an unrecognized partition preventing an ASM diskgroup from mounting.

Oracle ASM article series roundup


To make the ASM-related content easier to find, the ASM articles were consolidated into the list below on July 28, 2016.
Accessing ASM remotely
Common ASMCMD commands
Basic ASM administration (1)
Basic ASM administration (2)
Retrieving datafiles from ASM
Modifying data inside ASM with bbed
Migrating from ASM to a filesystem
Migrating an ordinary database to ASM storage
Copying files out of ASM with dd
Configuring Oracle ASM disks
Diskgroup permission settings in ASM
Monitoring ASM disk performance
create spfile to asm
pvid=yes preventing ASM from mounting — ASM recovery case
ASM datafile migration (OS -> ASM)
ASM datafile migration (ASM -> ASM)
ASM datafile migration (ASM -> OS)
Copying files in ASM via FTP/HTTP
ASM disk header backup and recovery
Manually repairing a damaged ASM disk header
Recovery from a completely destroyed ASM disk header — ASM recovery case
Retrieving datafiles with dd when ASM will not start
Oracle 12c ASM new feature: shared password file
Backing up ASM metadata with md_backup and md_restore
Unrecognized partition preventing an ASM diskgroup from mounting — ASM recovery case
Using losetup to turn an ordinary Linux file into an ASM disk
Recovery from a diskgroup corrupted while adding a disk — ASM recovery case
Zero-data-loss recovery after all ASM disk headers were damaged — ASM recovery case
Recommendations for backing up and preserving the scene before Oracle recovery — ASM environments
root.sh failing on many-CPU systems with ASM reporting ORA-04031
ASMLib problem reporting ORA-00600 [kfklLibFetchNext00]
Thin-provisioned storage volumes causing ASM diskgroup problems
11gR2 RAC failing to start because of an inappropriate ASM sga_target setting
Recovery after a PVID was mistakenly set on an ASM disk, preventing the diskgroup from mounting — ASM recovery case
Recovering an Oracle ASM disk that was formatted as an NTFS filesystem — ASM recovery case
Recovering an Oracle ASM disk that was formatted as an ext4 filesystem — ASM recovery case
Using _asm_allow_only_raw_disks to use ordinary files as ASM disks
Renaming the diskgroup holding the OCR/votedisk/ASM spfile in Oracle 12c RAC
Repairing a damaged ASM disk header from its automatic backup copy
Zero-data-loss recovery after oracleasm createdisk was rerun on an existing ASM disk — ASM recovery case
Recovering datafiles lost through improper diskgroup operations — ASM recovery case
ADHU (ASM Disk Header Utility) — a tool for backing up and restoring ASM disk headers
How to Get the Contents of an Spfile on ASM when ASM/GRID is down
ORA-15042: ASM disk "N" is missing from group number "M" fault recovery — ASM recovery case

pvid=yes preventing ASM from mounting


Early this morning I received a recovery request from a customer running AIX RAC with two IBM storage arrays in a mirror. During a storage disaster-recovery drill they noticed that the disk names had changed and then set a PVID on one of the disks; unfortunately this left one ASM diskgroup unable to come up. They then deleted those devices on the AIX side and rescanned them, after which it was no longer just one diskgroup that would not mount — the whole GI stack would not start. They asked us for technical support.
Check the ASM log to identify the ASM disk information:
[screenshots: ASM alert log entries showing the member disks of each diskgroup]


From this we can determine that there are two ASM diskgroups, each with two disks: hdisk2 and hdisk3 belong to HISDATA, while hdisk4 and hdisk5 belong to EMRDATA.

Analyze the disk headers with kfed:

dd if=/dev/rhdisk2 of=/tmp/xifenfei/rhdisk2.dd bs=1024k count=10
dd if=/dev/rhdisk3 of=/tmp/xifenfei/rhdisk3.dd bs=1024k count=10
dd if=/dev/rhdisk4 of=/tmp/xifenfei/rhdisk4.dd bs=1024k count=10
dd if=/dev/rhdisk5 of=/tmp/xifenfei/rhdisk5.dd bs=1024k count=10
-- transferred to my PC for analysis
C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk2.dd|grep name
kfdhdb.dskname:            HISDATA_0000 ; 0x028: length=12
kfdhdb.grpname:                 HISDATA ; 0x048: length=7
kfdhdb.fgname:             HISDATA_0000 ; 0x068: length=12
kfdhdb.capname:                         ; 0x088: length=0
C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk3.dd|grep name
kfdhdb.dskname:            HISDATA_0001 ; 0x028: length=12
kfdhdb.grpname:                 HISDATA ; 0x048: length=7
kfdhdb.fgname:             HISDATA_0001 ; 0x068: length=12
kfdhdb.capname:                         ; 0x088: length=0
C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk4.dd|grep name
kfdhdb.dskname:            EMRDATA_0000 ; 0x028: length=12
kfdhdb.grpname:                 EMRDATA ; 0x048: length=7
kfdhdb.fgname:             EMRDATA_0000 ; 0x068: length=12
kfdhdb.capname:                         ; 0x088: length=0
C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk5.dd|grep name
C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk5.dd
kfbh.endian:                        201 ; 0x000: 0xc9
kfbh.hard:                          194 ; 0x001: 0xc2
kfbh.type:                          212 ; 0x002: *** Unknown Enum ***
kfbh.datfmt:                        193 ; 0x003: 0xc1
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
000000000 C1D4C2C9 00000000 00000000 00000000  [................]
000000010 00000000 00000000 00000000 00000000  [................]
  Repeat 254 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][212]
C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk5.dd blkn=2|grep kfbh
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            3 ; 0x002: KFBTYP_ALLOCTBL
kfbh.datfmt:                          2 ; 0x003: 0x02
kfbh.block.blk:                33554432 ; 0x004: blk=33554432
kfbh.block.obj:                16777344 ; 0x008: file=128
kfbh.check:                  2654889601 ; 0x00c: 0x9e3e6681
kfbh.fcn.base:               1696071680 ; 0x010: 0x65180000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk5.dd blkn=510|grep name
kfdhdb.dskname:            EMRDATA_0001 ; 0x028: length=12
kfdhdb.grpname:                 EMRDATA ; 0x048: length=7
kfdhdb.fgname:             EMRDATA_0001 ; 0x068: length=12
kfdhdb.capname:                         ; 0x088: length=0

From the analysis above it is essentially certain that setting a PVID on hdisk5 damaged the ASM disk header of that disk. This can be repaired directly with the ASM header repair facility (note that the PVID must be cleared first).
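
A rough sketch of that repair sequence on AIX (the device names follow this case; kfed repair rebuilds block 0 of the header from the automatically maintained backup copy, and if the diskgroup uses a non-default AU size the AU size has to be supplied to kfed — check its usage output):

# clear the PVID that was mistakenly written onto the ASM disk
chdev -l hdisk5 -a pv=clear

# restore the ASM disk header from its backup copy
kfed repair /dev/rhdisk5

After the repair, the header reads back with the expected names: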

C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk5.dd |grep name
kfdhdb.dskname:            EMRDATA_0001 ; 0x028: length=12
kfdhdb.grpname:                 EMRDATA ; 0x048: length=7
kfdhdb.fgname:             EMRDATA_0001 ; 0x068: length=12
kfdhdb.capname:                         ; 0x088: length=0

Analysis of the errors when CRS started up to the cssd stage:
1. Deleting and rescanning the devices left hdisk[2-5] with the wrong permissions and ownership.
2. Deleting and rescanning the disks also left the disk sharing (reservation) mode wrong.
After repairing the disk header and resolving these two issues, GI started normally, the diskgroups mounted, and the database started normally with zero data loss — a complete recovery.
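
What those two fixes typically look like on AIX (a sketch only; the exact owner/group and whether the devices use the MPIO reserve_policy attribute depend on the installation):

# 1. restore ownership and permissions on the raw devices used by ASM
chown oracle:dba /dev/rhdisk2 /dev/rhdisk3 /dev/rhdisk4 /dev/rhdisk5
chmod 660 /dev/rhdisk2 /dev/rhdisk3 /dev/rhdisk4 /dev/rhdisk5

# 2. put the disks back into a non-reserving (shared) access mode for RAC
for d in hdisk2 hdisk3 hdisk4 hdisk5; do
  chdev -l $d -a reserve_policy=no_reserve
done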


A similar customer recovery case: recovery after a PVID was mistakenly set on an ASM disk, preventing the diskgroup from mounting.

ORA-15042: ASM disk "N" is missing from group number "M" fault recovery


A friend asked for recovery help. The diskgroup consisted of 19 LUNs, and one of them had problems, so they added a new LUN and dropped the old one in a single operation; the operation hung halfway through, because the bad LUN was damaged at the storage level and the rebalance could not complete. The storage engineer then carried on repairing the faulty LUN and, very luckily, succeeded; in the excitement, however, he deleted the newly added LUN (which had already received part of the rebalanced data) directly from the storage array. At that point the diskgroup went down completely and could not be mounted, and they asked for recovery support. For various reasons the LUN could not be recovered at the storage level, so we could only recover at the database level.

Mon Sep 21 19:52:35 2015
SQL> alter diskgroup  dg_XFF add disk '/dev/rhdisk116' size 716800M drop disk dg_XFF_0012
NOTE: Assigning number (1,20) to disk (/dev/rhdisk116)
NOTE: requesting all-instance membership refresh for group=1
NOTE: initializing header on grp 1 disk DG_XFF_0020
NOTE: requesting all-instance disk validation for group=1
Mon Sep 21 19:52:44 2015
NOTE: skipping rediscovery for group 1/0xb94738f1 (DG_XFF) on local instance.
NOTE: requesting all-instance disk validation for group=1
NOTE: skipping rediscovery for group 1/0xb94738f1 (DG_XFF) on local instance.
NOTE: initiating PST update: grp = 1
Mon Sep 21 19:52:44 2015
GMON updating group 1 at 25 for pid 27, osid 12124486
NOTE: PST update grp = 1 completed successfully
NOTE: membership refresh pending for group 1/0xb94738f1 (DG_XFF)
GMON querying group 1 at 26 for pid 18, osid 10092734
NOTE: cache opening disk 20 of grp 1: DG_XFF_0020 path:/dev/rhdisk116
GMON querying group 1 at 27 for pid 18, osid 10092734
SUCCESS: refreshed membership for 1/0xb94738f1 (DG_XFF)
Mon Sep 21 19:52:47 2015
SUCCESS: alter diskgroup  dg_XFF add disk '/dev/rhdisk116' size 716800M drop disk dg_XFF_0012
NOTE: starting rebalance of group 1/0xb94738f1 (DG_XFF) at power 1
Starting background process ARB0
Mon Sep 21 19:52:47 2015
ARB0 started with pid=28, OS id=10944804
NOTE: assigning ARB0 to group 1/0xb94738f1 (DG_XFF) with 1 parallel I/O
NOTE: Attempting voting file refresh on diskgroup DG_XFF
Mon Sep 21 20:35:06 2015
SQL> ALTER DISKGROUP DG_XFF MOUNT  /* asm agent *//* {1:51107:7083} */
NOTE: cache registered group DG_XFF number=1 incarn=0xdd6f975a
NOTE: cache began mount (first) of group DG_XFF number=1 incarn=0xdd6f975a
NOTE: Assigning number (1,0) to disk (/dev/rhdisk10)
NOTE: Assigning number (1,1) to disk (/dev/rhdisk11)
NOTE: Assigning number (1,2) to disk (/dev/rhdisk16)
NOTE: Assigning number (1,3) to disk (/dev/rhdisk17)
NOTE: Assigning number (1,4) to disk (/dev/rhdisk22)
NOTE: Assigning number (1,5) to disk (/dev/rhdisk23)
NOTE: Assigning number (1,6) to disk (/dev/rhdisk28)
NOTE: Assigning number (1,7) to disk (/dev/rhdisk29)
NOTE: Assigning number (1,8) to disk (/dev/rhdisk33)
NOTE: Assigning number (1,9) to disk (/dev/rhdisk34)
NOTE: Assigning number (1,10) to disk (/dev/rhdisk4)
NOTE: Assigning number (1,11) to disk (/dev/rhdisk40)
NOTE: Assigning number (1,12) to disk (/dev/rhdisk41)
NOTE: Assigning number (1,13) to disk (/dev/rhdisk45)
NOTE: Assigning number (1,14) to disk (/dev/rhdisk46)
NOTE: Assigning number (1,15) to disk (/dev/rhdisk5)
NOTE: Assigning number (1,16) to disk (/dev/rhdisk52)
NOTE: Assigning number (1,17) to disk (/dev/rhdisk53)
NOTE: Assigning number (1,18) to disk (/dev/rhdisk57)
NOTE: Assigning number (1,19) to disk (/dev/rhdisk58)
Wed Sep 30 11:08:07 2015
NOTE: start heartbeating (grp 1)
GMON querying group 1 at 33 for pid 35, osid 4194488
NOTE: Assigning number (1,20) to disk ()
GMON querying group 1 at 34 for pid 35, osid 4194488
NOTE: cache dismounting (clean) group 1/0xDD6F975A (DG_XFF)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 1/0xDD6F975A (DG_XFF)
NOTE: cache ending mount (fail) of group DG_XFF number=1 incarn=0xdd6f975a
NOTE: cache deleting context for group DG_XFF 1/0xdd6f975a
GMON dismounting group 1 at 35 for pid 35, osid 4194488
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
ERROR: diskgroup DG_XFF was not mounted
ORA-15032: not all alterations performed
ORA-15040: diskgroup is incomplete
ORA-15042: ASM disk "20" is missing from group number "1"
ERROR: ALTER DISKGROUP DG_XFF MOUNT  /* asm agent *//* {1:51107:7083} */

The cause here is fairly obvious: the storage engineer deleted the LUN directly, so diskgroup DG_XFF is missing ASM disk 20 and cannot be mounted. Because the diskgroup had already spent quite a long time rebalancing, the lost disk already contained a large amount of data (including metadata), so even if the PST were patched to force the diskgroup to mount (which might not even succeed), a lot of data would be lost and the contents still might not be directly extractable. (If a disk had merely been added and, for some reason, no rebalance had run, then patching the PST to get the diskgroup mounted would be a workable shortcut.) In a situation like this, all we can do is scan the disks at a low level and reconstruct the datafiles — part of the file metadata lived on the lost LUN, so copying files out or unloading data based only on the surviving metadata would lose a great deal of data — and then unload the data from the reconstructed files. The disk information involved in the recovery:

grp# dsk# bsize ausize disksize diskname        groupname       path
---- ---- ----- ------ -------- --------------- --------------- -------------
   1    0  4096  4096K   179200 DG_XFF_0000     DG_XFF          /dev/rhdisk10
   1    1  4096  4096K   179200 DG_XFF_0001     DG_XFF          /dev/rhdisk11
   1    2  4096  4096K   179200 DG_XFF_0002     DG_XFF          /dev/rhdisk16
   1    3  4096  4096K   179200 DG_XFF_0003     DG_XFF          /dev/rhdisk17
   1    4  4096  4096K   179200 DG_XFF_0004     DG_XFF          /dev/rhdisk22
   1    5  4096  4096K   179200 DG_XFF_0005     DG_XFF          /dev/rhdisk23
   1    6  4096  4096K   179200 DG_XFF_0006     DG_XFF          /dev/rhdisk28
   1    7  4096  4096K   179200 DG_XFF_0007     DG_XFF          /dev/rhdisk29
   1    8  4096  4096K   179200 DG_XFF_0008     DG_XFF          /dev/rhdisk33
   1    9  4096  4096K   179200 DG_XFF_0009     DG_XFF          /dev/rhdisk34
   1   10  4096  4096K   179200 DG_XFF_0010     DG_XFF          /dev/rhdisk4
   1   11  4096  4096K   179200 DG_XFF_0011     DG_XFF          /dev/rhdisk40
   1   12  4096  4096K   179200 DG_XFF_0012     DG_XFF          /dev/rhdisk41
   1   13  4096  4096K   179200 DG_XFF_0013     DG_XFF          /dev/rhdisk45
   1   14  4096  4096K   179200 DG_XFF_0014     DG_XFF          /dev/rhdisk46
   1   15  4096  4096K   179200 DG_XFF_0015     DG_XFF          /dev/rhdisk5
   1   16  4096  4096K   179200 DG_XFF_0016     DG_XFF          /dev/rhdisk52
   1   17  4096  4096K   179200 DG_XFF_0017     DG_XFF          /dev/rhdisk53
   1   18  4096  4096K   179200 DG_XFF_0018     DG_XFF          /dev/rhdisk57
   1   19  4096  4096K   179200 DG_XFF_0019     DG_XFF          /dev/rhdisk58

This time we were fairly lucky: the lost diskgroup was only a business-data diskgroup containing just 19 tablespaces and 10 partitioned tables, so with the data dictionary intact we recovered the data of the 10 partitioned tables (6,443 partitions in total). The overall recovery result:
[screenshot: recovery result summary]


Measured by overall data volume, the recovery ratio is 6003.26953 / 6027.26935 * 100% = 99.6018127%. For a case where a LUN that had already absorbed much of the rebalance was lost, recovering this much data is a very good result.
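
For reference, the percentage checks out (same figures as above):

awk 'BEGIN { printf "%.7f%%\n", 6003.26953/6027.26935*100 }'
# prints 99.6018127%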