ORA-00600 kfrHtAdd01


After a storage power failure, the diskgroup could not be mounted, failing with ORA-15096: lost disk write detected.

Sun Dec 20 16:56:51 2020
SQL> alter diskgroup data mount 
NOTE: cache registered group DATA number=1 incarn=0x0c1a7a4e
NOTE: cache began mount (first) of group DATA number=1 incarn=0x0c1a7a4e
NOTE: Assigning number (1,2) to disk (/dev/mapper/multipath12)
NOTE: Assigning number (1,5) to disk (/dev/mapper/multipath15)
NOTE: Assigning number (1,3) to disk (/dev/mapper/multipath13)
NOTE: Assigning number (1,7) to disk (/dev/mapper/multipath17)
NOTE: Assigning number (1,1) to disk (/dev/mapper/multipath11)
NOTE: Assigning number (1,6) to disk (/dev/mapper/multipath16)
NOTE: Assigning number (1,0) to disk (/dev/mapper/multipath10)
NOTE: Assigning number (1,4) to disk (/dev/mapper/multipath14)
Sun Dec 20 16:56:57 2020
NOTE: GMON heartbeating for grp 1
GMON querying group 1 at 19 for pid 32, osid 130347
NOTE: cache opening disk 0 of grp 1: DATA_0000 path:/dev/mapper/multipath10
NOTE: F1X0 found on disk 0 au 2 fcn 0.14159360
NOTE: cache opening disk 1 of grp 1: DATA_0001 path:/dev/mapper/multipath11
NOTE: F1X0 found on disk 1 au 2 fcn 0.14159360
NOTE: cache opening disk 2 of grp 1: DATA_0002 path:/dev/mapper/multipath12
NOTE: F1X0 found on disk 2 au 2 fcn 0.14159360
NOTE: cache opening disk 3 of grp 1: DATA_0003 path:/dev/mapper/multipath13
NOTE: cache opening disk 4 of grp 1: DATA_0004 path:/dev/mapper/multipath14
NOTE: cache opening disk 5 of grp 1: DATA_0005 path:/dev/mapper/multipath15
NOTE: cache opening disk 6 of grp 1: DATA_0006 path:/dev/mapper/multipath16
NOTE: cache opening disk 7 of grp 1: DATA_0007 path:/dev/mapper/multipath17
NOTE: cache mounting (first) normal redundancy group 1/0x0C1A7A4E (DATA)
Sun Dec 20 16:56:57 2020
* allocate domain 1, invalid = TRUE 
Sun Dec 20 16:56:58 2020
NOTE: attached to recovery domain 1
NOTE: starting recovery of thread=1 ckpt=233.4189 group=1 (DATA)
NOTE: starting recovery of thread=2 ckpt=542.6409 group=1 (DATA)
lost disk write detected during recovery (apply)
NOTE: recovery (pass 2) of diskgroup 1 (DATA) caught error ORA-15096
Errors in file /grid/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_130347.trc:
ORA-15096: lost disk write detected
Abort recovery for domain 1
NOTE: crash recovery signalled OER-15096
ERROR: ORA-15096 signalled during mount of diskgroup DATA
NOTE: cache dismounting (clean) group 1/0x0C1A7A4E (DATA) 
NOTE: messaging CKPT to quiesce pins Unix process pid: 130347, image: oracle@db1.rac.com (TNS V1-V3)
NOTE: lgwr not being msg'd to dismount

After a series of repairs, the following error was reported:

Sun Dec 20 20:12:35 2020
NOTE: GMON heartbeating for grp 1
GMON querying group 1 at 23 for pid 26, osid 67538
Sun Dec 20 20:12:35 2020
NOTE: cache opening disk 0 of grp 1: DATA_0000 path:/dev/mapper/multipath10
NOTE: F1X0 found on disk 0 au 2 fcn 0.14159360
NOTE: cache opening disk 1 of grp 1: DATA_0001 path:/dev/mapper/multipath11
NOTE: F1X0 found on disk 1 au 2 fcn 0.14159360
NOTE: cache opening disk 2 of grp 1: DATA_0002 path:/dev/mapper/multipath12
NOTE: F1X0 found on disk 2 au 2 fcn 0.14159360
NOTE: cache opening disk 3 of grp 1: DATA_0003 path:/dev/mapper/multipath13
NOTE: cache opening disk 4 of grp 1: DATA_0004 path:/dev/mapper/multipath14
NOTE: cache opening disk 5 of grp 1: DATA_0005 path:/dev/mapper/multipath15
NOTE: cache opening disk 6 of grp 1: DATA_0006 path:/dev/mapper/multipath16
NOTE: cache opening disk 7 of grp 1: DATA_0007 path:/dev/mapper/multipath17
NOTE: cache mounting (first) normal redundancy group 1/0x64848829 (DATA)
Sun Dec 20 20:12:36 2020
* allocate domain 1, invalid = TRUE 
Sun Dec 20 20:12:36 2020
NOTE: attached to recovery domain 1
NOTE: Fallback recovery: thread 2 read 10751 blocks oldest redo found in ABA 540.6429
NOTE: Fallback recovery: thread 1 read 10751 blocks oldest redo found in ABA 232.4218
Errors in file /grid/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_67538.trc  (incident=1692689):
ORA-00600: internal error code, arguments: [kfrHtAdd01], [2147483651], [1025], [0], [38660545], [0],
 [38687990], [1], [2], [6429], [], []
Incident details in: /grid/app/grid/diag/asm/+asm/+ASM1/incident/incdir_1692689/+ASM1_ora_67538_i1692689.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Sun Dec 20 20:12:39 2020
Sweep [inc][1692689]: completed
Sweep [inc2][1692689]: completed
Errors in file /grid/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_67538.trc:
ORA-00600: internal error code, arguments: [kfrHtAdd01], [2147483651], [1025], [0], [38660545], [0],
 [38687990], [1], [2], [6429], [], []
NOTE: crash recovery signalled OER-600
ERROR: ORA-600 signalled during mount of diskgroup DATA
NOTE: cache dismounting (clean) group 1/0x64848829 (DATA) 
NOTE: messaging CKPT to quiesce pins Unix process pid: 67538, image: oracle@db1.rac.com (TNS V1-V3)
NOTE: lgwr not being msg'd to dismount
freeing rdom 1
NOTE: detached from domain 1
NOTE: cache dismounted group 1/0x64848829 (DATA) 
NOTE: cache ending mount (fail) of group DATA number=1 incarn=0x64848829
NOTE: cache deleting context for group DATA 1/0x64848829
GMON dismounting group 1 at 24 for pid 26, osid 67538
NOTE: Disk DATA_0000 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0001 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0002 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0003 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0004 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0005 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0006 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0007 in mode 0x7f marked for de-assignment
ERROR: diskgroup DATA was not mounted
ORA-00600: internal error code, arguments: [kfrHtAdd01], [2147483651], [1025], [0],
 [38660545], [0], [38687990], [1], [2], [6429], [], []
ERROR: alter diskgroup data mount

Analysis of the trace file:

*** 2020-12-20 20:11:54.956
kfdp_query(DATA): 19 
----- Abridged Call Stack Trace -----
ksedsts()+465<-kfdp_query()+530<-kfdPstSyncPriv()+585<-kfgFinalizeMount()+1630<-kfgscFinalize()+1433<
-kfgForEachKfgsc()+285<-kfgsoFinalize()+135<-kfgFinalize()+398<-kfxdrvMount()+5558<-kfxdrvEntry()
+2207<-opiexe()+20624<-opiosq0()+3932<-kpooprx()+274<-kpoal8()+842<-opiodr()+917<-ttcpip()
+2183<-opitsk()+1710<-opiino()+969<-opiodr()+917<-opidrv()+570<-sou2o()
+103<-opimai_real()+133<-ssthrdmain()+265<-main()+201<-__libc_start_main()+253 
----- End of Abridged Call Stack Trace -----
2020-12-20 20:11:55.393106 : Start recovery for domain=1, valid=0, flags=0x4
NOTE: starting recovery of thread=1 ckpt=233.4189 group=1 (DATA)
NOTE: starting recovery of thread=2 ckpt=542.6409 group=1 (DATA)
lost disk write detected during recovery (apply):
last written kfcn: 0.38747593 aba=233.4208 thd=1
kfcn_kfrbcd=0.38747593 flags_kfrbcd=0x001c aba=542.6410 thd=2
CE: (0x0x66edc798) group=1 (DATA) fn=4 blk=1
    hashFlags=0x0000 lid=0x0002 lruFlags=0x0000 bastCount=1
    mirror=0
    flags_kfcpba=0x38 copies=3 blockIndex=1 AUindex=0 AUcount=0 loctr fcn=0.0
    copy #0:  disk=6  au=35 flags=01
    copy #1:  disk=0  au=34 flags=01
    copy #2:  disk=4  au=52 flags=01
BH: (0x0x66e10d00) bnum=33 type=COD_RBO state=rcv chgSt=not modifying pageIn=rcvRead
    flags=0x00000000 pinmode=excl lockmode=null bf=0x66020000
    kfbh_kfcbh.fcn_kfbh = 0.38747538 lowAba=0.0 highAba=0.0
    modTime=0
    last kfcbInitSlot return code=null chgCount=0 cpkt lnk is null ralFlags=0x00000000
    PINS:
    (kfcbps) pin=91 get by kfr.c line 7879 mode=excl
             fn=4 blk=1 status=pinned
             flags=0x88000000 flags2=0x00000000
             class=0 type=INVALID stateWanted=rcvRead
             bastCount=1 waitStatus=0x00000000 relocCount=0
             scanBastCount=0 scanBxid=0 scanSkipCode=0
             last released by kfc.c 21183
NOTE: recovery (pass 2) of diskgroup 1 (DATA) caught error ORA-15096
last new 0.0
kfrPass2: dump of current log buffer for error 15096 follows
=======================
OSM metadata block dump:
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            8 ; 0x002: KFBTYP_CHNGDIR
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                   17162 ; 0x004: blk=17162
kfbh.block.obj:                       3 ; 0x008: file=3
kfbh.check:                  4226524538 ; 0x00c: 0xfbeba57a
kfbh.fcn.base:                 38747431 ; 0x010: 0x024f3d27
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfracdb.aba.seq:                    542 ; 0x000: 0x0000021e
kfracdb.aba.blk:                   6409 ; 0x004: 0x00001909
kfracdb.ents:                         1 ; 0x008: 0x0001
kfracdb.ub2spare:                     0 ; 0x00a: 0x0000
kfracdb.lge[0].valid:                 1 ; 0x00c: V=1 B=0 M=0
kfracdb.lge[0].chgCount:              1 ; 0x00d: 0x01
kfracdb.lge[0].len:                  68 ; 0x00e: 0x0044
kfracdb.lge[0].kfcn.base:      38747432 ; 0x010: 0x024f3d28
kfracdb.lge[0].kfcn.wrap:             0 ; 0x014: 0x00000000
kfracdb.lge[0].bcd[0].kfbl.blk:    1292 ; 0x018: blk=1292
kfracdb.lge[0].bcd[0].kfbl.obj:       1 ; 0x01c: file=1
kfracdb.lge[0].bcd[0].kfcn.base:38743102 ; 0x020: 0x024f2c3e
kfracdb.lge[0].bcd[0].kfcn.wrap:      0 ; 0x024: 0x00000000
kfracdb.lge[0].bcd[0].oplen:          8 ; 0x028: 0x0008
kfracdb.lge[0].bcd[0].blkIndex:      12 ; 0x02a: 0x000c
kfracdb.lge[0].bcd[0].flags:         28 ; 0x02c: F=0 N=0 F=1 L=1 V=1 A=0 C=0
kfracdb.lge[0].bcd[0].opcode:       135 ; 0x02e: 0x0087
kfracdb.lge[0].bcd[0].kfbtyp:         4 ; 0x030: KFBTYP_FILEDIR
kfracdb.lge[0].bcd[0].redund:        19 ; 0x031: SCHE=0x1 NUMB=0x3
kfracdb.lge[0].bcd[0].pad:        63903 ; 0x032: 0xf99f
kfracdb.lge[0].bcd[0].KFFFD_COMMIT.modts.hi:33108586 ; 0x034: HOUR=0xa DAYS=0x13 MNTH=0xc YEAR=0x7e4
kfracdb.lge[0].bcd[0].KFFFD_COMMIT.modts.lo:0 ; 0x038: USEC=0x0 MSEC=0x0 SECS=0x0 MINS=0x0
kfracdb.lge[0].bcd[0].au[0]:     292415 ; 0x03c: 0x0004763f
kfracdb.lge[0].bcd[0].au[1]:     292452 ; 0x040: 0x00047664
kfracdb.lge[0].bcd[0].au[2]:     292474 ; 0x044: 0x0004767a
kfracdb.lge[0].bcd[0].disks[0]:       2 ; 0x048: 0x0002
kfracdb.lge[0].bcd[0].disks[1]:       1 ; 0x04a: 0x0001
kfracdb.lge[0].bcd[0].disks[2]:       0 ; 0x04c: 0x0000

By completely suppressing ASM instance recovery, the diskgroup could be mounted, and the database was then started and recovered. If you run into this kind of un-mountable ASM diskgroup and cannot resolve it yourself, you can contact us (phone/WeChat: 17813235971, QQ: 107644445, E-Mail: dba@xifenfei.com).

Handling ASM disks with names like _DROPPED_0001_DATA


A disk was found to have dropped out of an ASM diskgroup in a customer database (log analysis confirmed it had already gone offline back in 2016, and no rebalance was running anymore).

Checking further:
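The query re-executed with "/" below is not shown in the post; a minimal sketch of a v$asm_disk query that would produce these columns (column list assumed from the output that follows):

-- sketch only: the kind of query whose output follows; "/" in SQL*Plus simply re-executes it
set linesize 200 pagesize 100
select name, path, group_number, disk_number, mount_status,
       header_status, mode_status, state, failgroup
  from v$asm_disk
 order by group_number, disk_number;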

SQL> /

NAME			       PATH		  GROUP_NUMBER DISK_NUMBER MOUNT_STATUS   HEADER_STATUS
------------------------------ --------------------- ------------ ----------- -------------- ------------------------
MODE_STATUS    STATE		FAILGROUP
-------------- ---------------- --------------------
			       ORCL:DATA2	  0		 0 CLOSED	  MEMBER
ONLINE	       NORMAL

			       ORCL:FLASH1	  0		 1 CLOSED	  MEMBER
ONLINE	       NORMAL

			       ORCL:GRID3	  0		 2 CLOSED	  MEMBER
ONLINE	       NORMAL

_DROPPED_0000_FLASH				  2		 0 MISSING	  UNKNOWN
OFFLINE        FORCING		FLASH1

_DROPPED_0001_DATA				  1		 1 MISSING	  UNKNOWN
OFFLINE        FORCING		DATA2

DATA1			       ORCL:DATA1	  1		 0 CACHED	  MEMBER
ONLINE	       NORMAL		DATA1

FLASH2			       ORCL:FLASH2	  2		 1 CACHED	  MEMBER
ONLINE	       NORMAL		FLASH2

GRID1			       ORCL:GRID1	  3		 0 CACHED	  MEMBER
ONLINE	       NORMAL		GRID1

GRID2			       ORCL:GRID2	  3		 1 CACHED	  MEMBER
ONLINE	       NORMAL		GRID2

GRID4			       ORCL:GRID4	  3		 3 CACHED	  MEMBER
ONLINE	       NORMAL		GRID4

GRID5			       ORCL:GRID5	  3		 4 CACHED	  MEMBER
ONLINE	       NORMAL		GRID5

GRID6			       ORCL:GRID6	  3		 5 CACHED	  MEMBER
ONLINE	       NORMAL		GRID6


12 rows selected.


SQL> select NAME,STATE,TYPE,OFFLINE_DISKS from v$asm_diskgroup;

NAME
------------------------------------------------------------
STATE		       TYPE	    OFFLINE_DISKS
---------------------- ------------ -------------
DATA
MOUNTED 	       NORMAL			1

FLASH
MOUNTED 	       NORMAL			1

GRID
MOUNTED 	       NORMAL			0

The main issue is that the ORCL:FLASH1 and ORCL:DATA2 disks went offline, leaving them in the _DROPPED_0000_FLASH and _DROPPED_0001_DATA state. Low-level checks confirmed that these disks are now healthy, so the force option was used to add the dropped disks back into their respective diskgroups.

SQL> alter diskgroup FLASH add failgroup flg1 disk 'ORCL:FLASH1'  force;

Diskgroup altered.

SQL> alter diskgroup data add failgroup dg2 disk 'ORCL:DATA2'  force;

Diskgroup altered.

Watch the ASM alert log and wait for the rebalance to complete.
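Besides tailing the alert log, the rebalance can also be watched from the ASM instance itself; a small helper query (not from the original post) against v$asm_operation:

-- one row per running operation; EST_MINUTES trends toward 0 and the row disappears once the rebalance ends
select group_number, operation, state, power, sofar, est_work, est_minutes
  from v$asm_operation;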

Sat Dec 05 16:48:10 2020
SQL> alter diskgroup FLASH add failgroup flg1 disk 'ORCL:FLASH1'  force 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (2,2) to disk (ORCL:FLASH1)
NOTE: requesting all-instance membership refresh for group=2
NOTE: initializing header on grp 2 disk FLASH1
NOTE: requesting all-instance disk validation for group=2
Sat Dec 05 16:48:13 2020
NOTE: skipping rediscovery for group 2/0x58e713e7 (FLASH) on local instance.
NOTE: requesting all-instance disk validation for group=2
NOTE: skipping rediscovery for group 2/0x58e713e7 (FLASH) on local instance.
Sat Dec 05 16:48:19 2020
GMON updating for reconfiguration, group 2 at 14 for pid 34, osid 12203
NOTE: group 2 PST updated.
NOTE: initiating PST update: grp = 2
GMON updating group 2 at 15 for pid 34, osid 12203
NOTE: cache closing disk 0 of grp 2: (not open) _DROPPED_0000_FLASH
NOTE: group FLASH: updated PST location: disk 0001 (PST copy 0)
NOTE: group FLASH: updated PST location: disk 0002 (PST copy 1)
NOTE: PST update grp = 2 completed successfully 
NOTE: membership refresh pending for group 2/0x58e713e7 (FLASH)
GMON querying group 2 at 16 for pid 18, osid 41180
NOTE: cache closing disk 0 of grp 2: (not open) _DROPPED_0000_FLASH
NOTE: cache opening disk 2 of grp 2: FLASH1 label:FLASH1
NOTE: Attempting voting file refresh on diskgroup FLASH
NOTE: Refresh completed on diskgroup FLASH. No voting file found.
GMON querying group 2 at 17 for pid 18, osid 41180
NOTE: cache closing disk 0 of grp 2: (not open) _DROPPED_0000_FLASH
Sat Dec 05 16:48:25 2020
SUCCESS: refreshed membership for 2/0x58e713e7 (FLASH)
Sat Dec 05 16:48:25 2020
SUCCESS: alter diskgroup FLASH add failgroup flg1 disk 'ORCL:FLASH1'  force
NOTE: starting rebalance of group 2/0x58e713e7 (FLASH) at power 1
Starting background process ARB0
Sat Dec 05 16:48:26 2020
ARB0 started with pid=36, OS id=12451 
NOTE: assigning ARB0 to group 2/0x58e713e7 (FLASH) with 1 parallel I/O
cellip.ora not found.
NOTE: F1X0 copy 2 relocating from 0:2 to 2:2 for diskgroup 2 (FLASH)
NOTE: Attempting voting file refresh on diskgroup FLASH
NOTE: Refresh completed on diskgroup FLASH. No voting file found.
Sat Dec 05 16:48:45 2020
NOTE: Rebalance has restored redundancy for any existing control file or redo log in disk group FLASH
Sat Dec 05 16:49:06 2020
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 2/0x58e713e7 (FLASH)
Sat Dec 05 16:49:08 2020
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=2
Sat Dec 05 16:49:11 2020
GMON updating for reconfiguration, group 2 at 18 for pid 36, osid 12681
NOTE: cache closing disk 0 of grp 2: (not open) _DROPPED_0000_FLASH
NOTE: group FLASH: updated PST location: disk 0001 (PST copy 0)
NOTE: group FLASH: updated PST location: disk 0002 (PST copy 1)
NOTE: group 2 PST updated.
SUCCESS: grp 2 disk _DROPPED_0000_FLASH going offline 
GMON updating for reconfiguration, group 2 at 19 for pid 36, osid 12681
NOTE: cache closing disk 0 of grp 2: (not open) _DROPPED_0000_FLASH
NOTE: group FLASH: updated PST location: disk 0001 (PST copy 0)
NOTE: group FLASH: updated PST location: disk 0002 (PST copy 1)
NOTE: group 2 PST updated.
NOTE: membership refresh pending for group 2/0x58e713e7 (FLASH)
GMON querying group 2 at 20 for pid 18, osid 41180
GMON querying group 2 at 21 for pid 18, osid 41180
NOTE: Disk _DROPPED_0000_FLASH in mode 0x0 marked for de-assignment
SUCCESS: refreshed membership for 2/0x58e713e7 (FLASH)
Sat Dec 05 16:51:56 2020
SQL> alter diskgroup data add failgroup dg2 disk 'ORCL:DATA2'  force 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (1,2) to disk (ORCL:DATA2)
NOTE: requesting all-instance membership refresh for group=1
NOTE: initializing header on grp 1 disk DATA2
NOTE: requesting all-instance disk validation for group=1
Sat Dec 05 16:51:57 2020
NOTE: skipping rediscovery for group 1/0x58d713e6 (DATA) on local instance.
NOTE: requesting all-instance disk validation for group=1
NOTE: skipping rediscovery for group 1/0x58d713e6 (DATA) on local instance.
Sat Dec 05 16:52:02 2020
GMON updating for reconfiguration, group 1 at 22 for pid 34, osid 12203
NOTE: group 1 PST updated.
NOTE: initiating PST update: grp = 1
GMON updating group 1 at 23 for pid 34, osid 12203
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATA
NOTE: group DATA: updated PST location: disk 0000 (PST copy 0)
NOTE: group DATA: updated PST location: disk 0002 (PST copy 1)
NOTE: PST update grp = 1 completed successfully 
NOTE: membership refresh pending for group 1/0x58d713e6 (DATA)
GMON querying group 1 at 24 for pid 18, osid 41180
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATA
NOTE: cache opening disk 2 of grp 1: DATA2 label:DATA2
Sat Dec 05 16:52:08 2020
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
GMON querying group 1 at 25 for pid 18, osid 41180
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATA
SUCCESS: refreshed membership for 1/0x58d713e6 (DATA)
Sat Dec 05 16:52:08 2020
SUCCESS: alter diskgroup data add failgroup dg2 disk 'ORCL:DATA2'  force
NOTE: starting rebalance of group 1/0x58d713e6 (DATA) at power 1
Starting background process ARB0
Sat Dec 05 16:52:08 2020
ARB0 started with pid=37, OS id=13463 
NOTE: assigning ARB0 to group 1/0x58d713e6 (DATA) with 1 parallel I/O
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Sat Dec 05 16:52:44 2020
cellip.ora not found.
NOTE: F1X0 copy 2 relocating from 1:2 to 2:2 for diskgroup 1 (DATA)
Sat Dec 05 16:53:22 2020
NOTE: Rebalance has restored redundancy for any existing control file or redo log in disk group DATA
NOTE: membership refresh pending for group 1/0x58d713e6 (DATA)
GMON querying group 1 at 27 for pid 18, osid 41180
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATA
SUCCESS: refreshed membership for 1/0x58d713e6 (DATA)
SUCCESS: alter diskgroup data rebalance power 11
NOTE: starting rebalance of group 1/0x58d713e6 (DATA) at power 11
Starting background process ARB0
Sat Dec 05 17:27:52 2020
ARB0 started with pid=35, OS id=23318 
NOTE: assigning ARB0 to group 1/0x58d713e6 (DATA) with 11 parallel I/Os
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Sat Dec 05 17:28:29 2020
cellip.ora not found.
Sat Dec 05 17:28:45 2020
NOTE: Rebalance has restored redundancy for any existing control file or redo log in disk group DATA
Sat Dec 05 18:48:10 2020
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
Sat Dec 05 18:48:32 2020
GMON updating for reconfiguration, group 1 at 28 for pid 36, osid 47454
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATA
NOTE: group DATA: updated PST location: disk 0000 (PST copy 0)
NOTE: group DATA: updated PST location: disk 0002 (PST copy 1)
Sat Dec 05 18:48:32 2020
NOTE: group 1 PST updated.
SUCCESS: grp 1 disk _DROPPED_0001_DATA going offline 
GMON updating for reconfiguration, group 1 at 29 for pid 36, osid 47454
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DATA
NOTE: group DATA: updated PST location: disk 0000 (PST copy 0)
NOTE: group DATA: updated PST location: disk 0002 (PST copy 1)
NOTE: group 1 PST updated.
Sat Dec 05 18:48:32 2020
NOTE: membership refresh pending for group 1/0x58d713e6 (DATA)
GMON querying group 1 at 30 for pid 18, osid 41180
GMON querying group 1 at 31 for pid 18, osid 41180
NOTE: Disk _DROPPED_0001_DATA in mode 0x0 marked for de-assignment
SUCCESS: refreshed membership for 1/0x58d713e6 (DATA)
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Sat Dec 05 18:52:24 2020
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0x58d713e6 (DATA)

Querying the disk status again shows that the dropped disks have been added back and the ASM diskgroups are back to normal.
Summary: for a normal-redundancy diskgroup where a disk has dropped out for some reason, leaving v$asm_disk.name looking like _DROPPED_0001_DATA and v$asm_disk.state = FORCING, the dropped disk can be force-added back with something like alter diskgroup data add failgroup dg2 disk 'ORCL:DATA2' force; once the rebalance completes, the problem is fixed.
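A quick sanity check after the rebalance, using the same views queried earlier in this post, confirms the repair; for example:

-- should return no rows once the force-added disks have fully replaced the _DROPPED_* entries
select group_number, disk_number, name, state, mode_status
  from v$asm_disk
 where state = 'FORCING'
    or name like '\_DROPPED\_%' escape '\';

-- OFFLINE_DISKS should be back to 0 for every diskgroup
select name, state, type, offline_disks from v$asm_diskgroup;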

ORA-15096: lost disk write detected


Another recovery request: a storage power failure left the ASM diskgroup unable to mount because of ORA-15096: lost disk write detected.

SQL> ALTER DISKGROUP DATA MOUNT  /* asm agent *//* {1:45277:148} */
NOTE: cache registered group DATA number=2 incarn=0x73886b6a
NOTE: cache began mount (first) of group DATA number=2 incarn=0x73886b6a
NOTE: Assigning number (2,2) to disk (/dev/asm-data3)
NOTE: Assigning number (2,1) to disk (/dev/asm-data2)
NOTE: Assigning number (2,0) to disk (/dev/asm-data1)
Fri Nov 06 19:06:56 2020
NOTE: GMON heartbeating for grp 2
GMON querying group 2 at 94 for pid 30, osid 11596
NOTE: cache opening disk 0 of grp 2: DATA_0000 path:/dev/asm-data1
NOTE: F1X0 found on disk 0 au 2 fcn 0.0
NOTE: cache opening disk 1 of grp 2: DATA_0001 path:/dev/asm-data2
NOTE: cache opening disk 2 of grp 2: DATA_0002 path:/dev/asm-data3
NOTE: cache mounting (first) external redundancy group 2/0x73886B6A (DATA)
Fri Nov 06 19:06:57 2020
* allocate domain 2, invalid = TRUE
kjbdomatt send to inst 2
Fri Nov 06 19:06:57 2020
NOTE: attached to recovery domain 2
NOTE: starting recovery of thread=1 ckpt=25.7986 group=2 (DATA)
NOTE: starting recovery of thread=2 ckpt=33.364 group=2 (DATA)
NOTE: BWR validation signaled ORA-15096
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_ora_11596.trc:
ORA-15096: lost disk write detected
NOTE: crash recovery signalled OER-15096
ERROR: ORA-15096 signalled during mount of diskgroup DATA
NOTE: cache dismounting (clean) group 2/0x73886B6A (DATA)
NOTE: messaging CKPT to quiesce pins Unix process pid: 11596, image: oracle@db1 (TNS V1-V3)
NOTE: lgwr not being msg'd to dismount
kjbdomdet send to inst 2
detach from dom 2, sending detach message to inst 2
freeing rdom 2
NOTE: detached from domain 2
NOTE: cache dismounted group 2/0x73886B6A (DATA)
NOTE: cache ending mount (fail) of group DATA number=2 incarn=0x73886b6a
NOTE: cache deleting context for group DATA 2/0x73886b6a
GMON dismounting group 2 at 95 for pid 30, osid 11596
NOTE: Disk DATA_0000 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0001 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0002 in mode 0x7f marked for de-assignment
ERROR: diskgroup DATA was not mounted
ORA-15032: not all alterations performed
ORA-15096: lost disk write detected
ERROR: ALTER DISKGROUP DATA MOUNT  /* asm agent *//* {1:45277:148} */

After assessment and a series of fixes, the database was mounted; media recovery then failed with ORA-600 [2130]:

Fri Nov 06 17:03:27 2020
ALTER DATABASE RECOVER  database
Media Recovery Start
 started logmerger process
Parallel Media Recovery started with 40 slaves
Fri Nov 06 17:03:29 2020
Errors in file /u01/app/oracle/diag/rdbms/ynhis/ynhis1/trace/ynhis1_pr00_7393.trc  (incident=195869):
ORA-00600: internal error code, arguments: [2130], [2], [1], [2], [], [], [], [], [], [], [], []
Incident details in: /u01/app/oracle/diag/rdbms/ynhis/ynhis1/incident/incdir_195869/ynhis1_pr00_7393_i195869.trc
Fri Nov 06 17:03:30 2020
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Media Recovery failed with error 600
ORA-10877 signalled during: ALTER DATABASE RECOVER  database  ...

The redo was judged to be corrupt, so the database was opened with resetlogs, which then raised ORA-00600 [2662]:
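The post does not show how the resetlogs open was forced on a database whose redo is unusable. A commonly used, entirely unsupported pattern, sketched here as an assumption rather than the author's actual steps:

-- UNSUPPORTED last-resort sketch: test on a copy and involve Oracle Support where possible
alter system set "_allow_resetlogs_corruption"=true scope=spfile;
shutdown immediate
startup mount
recover database until cancel;     -- reply CANCEL at the prompt
alter database open resetlogs;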

Fri Nov 06 18:21:32 2020
alter database open resetlogs
RESETLOGS is being done without consistancy checks. This may result
in a corrupted database. The database should be recreated.
RESETLOGS after incomplete recovery UNTIL CHANGE 8670753264
Resetting resetlogs activation ID 306909514 (0x124b114a)
Redo thread 2 enabled by open resetlogs or standby activation
Fri Nov 06 18:21:39 2020
Setting recovery target incarnation to 2
Initializing SCN for created control file
Database SCN compatibility initialized to 3
Warning - High Database SCN: Current SCN value is 8670753267, threshold SCN value is 0
Fri Nov 06 18:21:39 2020
Assigning activation ID 408224320 (0x18550240)
Thread 1 opened at log sequence 1
  Current log# 1 seq# 1 mem# 0: /orabak/data/group_1.289.954514319
Successful open of redo thread 1
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Fri Nov 06 18:21:40 2020
SMON: enabling cache recovery
Errors in file /u01/app/oracle/diag/rdbms/ynhis/ynhis1/trace/ynhis1_ora_24310.trc  (incident=231847):
ORA-00600: internal error code, arguments: [2662], [2], [80818679], [2], [93545365], [4194545], [], [], [], [], [],[]
Incident details in: /u01/app/oracle/diag/rdbms/ynhis/ynhis1/incident/incdir_231847/ynhis1_ora_24310_i231847.trc
Fri Nov 06 18:21:42 2020
Dumping diagnostic data in directory=[cdmp_20201106182142],requested by(instance=1,osid=24310),summary=[incident=231847]
Fri Nov 06 18:21:43 2020
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Errors in file /u01/app/oracle/diag/rdbms/ynhis/ynhis1/trace/ynhis1_ora_24310.trc:
ORA-00704: bootstrap process failure
ORA-00704: bootstrap process failure
ORA-00600: internal error code, arguments: [2662], [2], [80818679], [2], [93545365],[4194545],[],[],[],[],[],[]
Errors in file /u01/app/oracle/diag/rdbms/ynhis/ynhis1/trace/ynhis1_ora_24310.trc:
ORA-00704: bootstrap process failure
ORA-00704: bootstrap process failure
ORA-00600: internal error code, arguments: [2662], [2], [80818679], [2], [93545365],[4194545],[],[],[],[],[],[]
Error 704 happened during db open, shutting down database
USER (ospid: 24310): terminating the instance due to error 704
Instance terminated by USER, pid = 24310
ORA-1092 signalled during: alter database open resetlogs...
opiodr aborting process unknown ospid (24310) as a result of ORA-1092

After that error was handled and another resetlogs was done, the database opened successfully but reported ORA-00600 [4137]:

Database Characterset is ZHS16GBK
Errors in file /u01/app/oracle/diag/rdbms/ynhis/ynhis1/trace/ynhis1_smon_26195.trc  (incident=255799):
ORA-00600: internal error code, arguments: [4137], [25.33.122556], [0], [0], [], [], [], [], [], [], [], []
Incident details in: /u01/app/oracle/diag/rdbms/ynhis/ynhis1/incident/incdir_255799/ynhis1_smon_26195_i255799.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
No Resource Manager plan active
Fri Nov 06 18:30:46 2020
replication_dependency_tracking turned off (no async multimaster replication found)
ORACLE Instance ynhis1 (pid = 23) - Error 600 encountered while recovering transaction (25, 33).
Errors in file /u01/app/oracle/diag/rdbms/ynhis/ynhis1/trace/ynhis1_smon_26195.trc:
ORA-00600: internal error code, arguments: [4137], [25.33.122556], [0], [0], [], [], [], [], [], [], [], []

After handling the problematic undo, the database could be started and shut down normally; the data was then exported and imported into a new database, completing the recovery.
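The post likewise does not show how the problematic undo was handled. The usual unsupported pattern, again only a sketch and not necessarily what was done here, is to identify the undo segment behind ORA-600 [4137] and keep transaction recovery away from it until the data has been exported:

-- transaction (25, 33) points at undo segment number 25; look up its name first
select segment_name, status from dba_rollback_segs where segment_id = 25;

-- then, assuming the query returns a name such as _SYSSMU25_1234567890$ (hypothetical),
-- hide it from transaction recovery, restart, export the data, and rebuild the undo tablespace:
-- alter system set "_corrupted_rollback_segments"='_SYSSMU25_1234567890$' scope=spfile;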

Recovering a corrupted ASM disk header on Windows


A friend reported that a RAC on Windows was malfunctioning and ASM could not mount; checking the logs showed:

Fri Jul 03 03:55:46 2020
Errors in file C:\APP\ADMINISTRATOR\diag\asm\+asm\+asm2\trace\+asm2_ora_7004.trc:
ORA-15025: could not open disk "\\.\ORCLDISKDATA1"
ORA-27041: unable to open file
OSD-04002: 无法打开文件
O/S-Error: (OS 2) 系统找不到指定的文件。
Errors in file C:\APP\ADMINISTRATOR\diag\asm\+asm\+asm2\trace\+asm2_ora_7004.trc:
ORA-15025: could not open disk "\\.\ORCLDISKDATA1"
ORA-27041: unable to open file
OSD-04002: 无法打开文件
O/S-Error: (OS 2) 系统找不到指定的文件。
WARNING: failed to read mirror side 1 of virtual extent 0 logical extent 0 of file 267 in group [2.2254399778] 
from disk DATA_0000  allocation unit 3502 reason error; if possible, will try another mirror side
Errors in file C:\APP\ADMINISTRATOR\diag\asm\+asm\+asm2\trace\+asm2_ora_7004.trc:
ORA-15081: failed to submit an I/O operation to a disk
Fri Jul 03 03:59:46 2020
Errors in file C:\APP\ADMINISTRATOR\diag\asm\+asm\+asm2\trace\+asm2_ora_7328.trc:
ORA-15025: could not open disk "\\.\ORCLDISKDATA1"
ORA-27041: unable to open file
OSD-04002: 无法打开文件
O/S-Error: (OS 2) 系统找不到指定的文件。
Errors in file C:\APP\ADMINISTRATOR\diag\asm\+asm\+asm2\trace\+asm2_ora_7328.trc:
ORA-15025: could not open disk "\\.\ORCLDISKDATA1"
ORA-27041: unable to open file
OSD-04002: 无法打开文件
O/S-Error: (OS 2) 系统找不到指定的文件。
WARNING: failed to read mirror side 1 of virtual extent 0 logical extent 0 of file 267 in group [2.2254399778] 
from disk DATA_0000  allocation unit 3502 reason error; if possible, will try another mirror side
Errors in file C:\APP\ADMINISTRATOR\diag\asm\+asm\+asm2\trace\+asm2_ora_7328.trc:
ORA-15081: failed to submit an I/O operation to a disk

The error message makes it fairly clear that the \\.\ORCLDISKDATA1 disk could not be found. Checking the disk information with asmtool:

C:\app\11.2.0\grid>asmtool -list
NTFS                             \Device\Harddisk0\Partition3            81920M
NTFS                             \Device\Harddisk0\Partition4           200000M
NTFS                             \Device\Harddisk0\Partition5          4293849M
                                 \Device\Harddisk1\Partition2             4062M
                                 \Device\Harddisk2\Partition2          2097022M
ORCLDISKFRA0                     \Device\Harddisk3\Partition2           511870M

The ORCLDISKDATA1 disk is clearly missing. After dd'ing the disk to a local machine and analyzing it, the ASM disk header turned out to be corrupted:

C:\Users\Administrator>kfed read F:\temp\disk3\1\disk2.dd
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
006B38C00 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]


C:\Users\Administrator>kfed read F:\temp\disk3\1\disk2.dd blkn=2
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            3 ; 0x002: KFBTYP_ALLOCTBL
kfbh.datfmt:                          2 ; 0x003: 0x02
kfbh.block.blk:                       2 ; 0x004: blk=2
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  2349305287 ; 0x00c: 0x8c078dc7
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdatb.aunum:                         0 ; 0x000: 0x00000000
kfdatb.shrink:                      448 ; 0x004: 0x01c0
kfdatb.ub2pad:                        0 ; 0x006: 0x0000
kfdatb.auinfo[0].link.next:           8 ; 0x008: 0x0008
kfdatb.auinfo[0].link.prev:           8 ; 0x00a: 0x0008
kfdatb.auinfo[1].link.next:          12 ; 0x00c: 0x000c
kfdatb.auinfo[1].link.prev:          12 ; 0x00e: 0x000c
kfdatb.auinfo[2].link.next:         456 ; 0x010: 0x01c8
kfdatb.auinfo[2].link.prev:         456 ; 0x012: 0x01c8
kfdatb.auinfo[3].link.next:         488 ; 0x014: 0x01e8
kfdatb.auinfo[3].link.prev:         488 ; 0x016: 0x01e8
kfdatb.auinfo[4].link.next:          24 ; 0x018: 0x0018
kfdatb.auinfo[4].link.prev:          24 ; 0x01a: 0x0018
kfdatb.auinfo[5].link.next:          28 ; 0x01c: 0x001c
kfdatb.auinfo[5].link.prev:          28 ; 0x01e: 0x001c
kfdatb.auinfo[6].link.next:         552 ; 0x020: 0x0228
kfdatb.auinfo[6].link.prev:        3112 ; 0x022: 0x0c28
kfdatb.spare:                         0 ; 0x024: 0x00000000
kfdate[0].discriminator:              1 ; 0x028: 0x00000001
kfdate[0].allo.lo:                    0 ; 0x028: XNUM=0x0
kfdate[0].allo.hi:              8388608 ; 0x02c: V=1 I=0 H=0 FNUM=0x0
kfdate[1].discriminator:              1 ; 0x030: 0x00000001
kfdate[1].allo.lo:                    0 ; 0x030: XNUM=0x0
kfdate[1].allo.hi:              8388608 ; 0x034: V=1 I=0 H=0 FNUM=0x0
kfdate[2].discriminator:              1 ; 0x038: 0x00000001
kfdate[2].allo.lo:                    0 ; 0x038: XNUM=0x0
kfdate[2].allo.hi:              8388609 ; 0x03c: V=1 I=0 H=0 FNUM=0x1

On the FRA disk the ASM label is still present, but the rest of the header is damaged; even so, the damage is confined to the disk header.


On-site analysis made it essentially certain that, for some unknown reason, all of the Windows ASM disk headers were damaged (two disk headers were zeroed out, and the remaining one was largely corrupted). Given the situation on site, the fact that the customer had an RMAN backup from the previous day, and their requirement to preserve the scene for further root-cause analysis, the recovery was not performed in the production environment. Instead, without modifying that environment in any way, the redo and archived logs were recovered directly from the FRA and combined with the backup to restore the database elsewhere, achieving zero data loss while leaving the original environment untouched.

Similar ASM disk header corruption cases I have handled on other operating system platforms:
Recovering from a lost ASM disk partition
pvid=yes causing ASM to fail to mount
Zero-data-loss recovery after all ASM disk headers were corrupted
An unrecognized partition preventing an ASM diskgroup from mounting
Recovery after a PVID was mistakenly set on an ASM disk, preventing the diskgroup from mounting

ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] troubleshooting


While expanding ASM storage, due to an improper configuration the customer used asmca to add ASM disks and selected disks that already belonged to a VG in use by a file system.
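A pre-check such as the following (not part of the original post) would have flagged the mistake before any disk was added: only devices whose HEADER_STATUS is CANDIDATE, PROVISIONED or FORMER are safe to hand to asmca, while MEMBER or FOREIGN means the device is already in use by ASM, LVM or a file system:

-- review every discovered device before running ALTER DISKGROUP ... ADD DISK
select path, header_status, mount_status, group_number, os_mb
  from v$asm_disk
 order by header_status, path;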

Tue Nov 19 09:48:48 2019
Non critical error ORA-48180 cFri Nov 22 12:47:48 2019
SQL> ALTER DISKGROUP XIFENFEI ADD  DISK '/dev/rhdisk29' SIZE 491520M ,
'/dev/rhdisk30' SIZE 491520M ,
'/dev/rhdisk31' SIZE 491520M /* ASMCA */
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (4,15) to disk (/dev/rhdisk29)
NOTE: Assigning number (4,16) to disk (/dev/rhdisk30)
NOTE: Assigning number (4,17) to disk (/dev/rhdisk31)
NOTE: requesting all-instance membership refresh for group=4
NOTE: initializing header on grp 4 disk XIFENFEI_0015
NOTE: initializing header on grp 4 disk XIFENFEI_0016
NOTE: initializing header on grp 4 disk XIFENFEI_0017
NOTE: requesting all-instance disk validation for group=4
Fri Nov 22 12:47:51 2019
NOTE: skipping rediscovery for group 4/0xb08c40b (XIFENFEI) on local instance.
NOTE: requesting all-instance disk validation for group=4
NOTE: skipping rediscovery for group 4/0xb08c40b (XIFENFEI) on local instance.
Fri Nov 22 12:47:59 2019
NOTE: initiating PST update: grp = 4
Fri Nov 22 12:47:59 2019
GMON updating group 4 at 12 for pid 27, osid 12649908
NOTE: PST update grp = 4 completed successfully
NOTE: membership refresh pending for group 4/0xb08c40b (XIFENFEI)
GMON querying group 4 at 13 for pid 18, osid 39912680
Fri Nov 22 12:48:01 2019
NOTE: cache opening disk 15 of grp 4: XIFENFEI_0015 path:/dev/rhdisk29
NOTE: cache opening disk 16 of grp 4: XIFENFEI_0016 path:/dev/rhdisk30
NOTE: cache opening disk 17 of grp 4: XIFENFEI_0017 path:/dev/rhdisk31
NOTE: Attempting voting file refresh on diskgroup XIFENFEI
NOTE: Refresh completed on diskgroup XIFENFEI. No voting file found.
GMON querying group 4 at 14 for pid 18, osid 39912680
SUCCESS: refreshed membership for 4/0xb08c40b (XIFENFEI)
SUCCESS: ALTER DISKGROUP XIFENFEI ADD  DISK '/dev/rhdisk29' SIZE 491520M ,
'/dev/rhdisk30' SIZE 491520M ,
'/dev/rhdisk31' SIZE 491520M /* ASMCA */

After realizing that the wrong disks had been added, the disks now in use by ASM were forcibly removed from the VG, and an attempt was made to drop them from ASM and add new disks instead.

Fri Nov 22 12:52:03 2019
SQL> ALTER DISKGROUP XIFENFEI DROP  DISK 'XIFENFEI_0015','XIFENFEI_0016','XIFENFEI_0017' /* ASMCA */
NOTE: GroupBlock outside rolling migration privileged region
Fri Nov 22 12:52:03 2019
NOTE: stopping process ARB0
NOTE: rebalance interrupted for group 4/0xb08c40b (XIFENFEI)
NOTE: requesting all-instance membership refresh for group=4
NOTE: membership refresh pending for group 4/0xb08c40b (XIFENFEI)
Fri Nov 22 12:52:12 2019
GMON querying group 4 at 15 for pid 18, osid 39912680
SUCCESS: refreshed membership for 4/0xb08c40b (XIFENFEI)
SUCCESS: ALTER DISKGROUP XIFENFEI DROP  DISK 'XIFENFEI_0015','XIFENFEI_0016','XIFENFEI_0017' /* ASMCA */
NOTE: starting rebalance of group 4/0xb08c40b (XIFENFEI) at power 1
Starting background process ARB0
…………
Fri Nov 22 12:58:26 2019
SQL> ALTER DISKGROUP XIFENFEI ADD  DISK '/dev/rhdisk7' SIZE 491520M /* ASMCA */
NOTE: GroupBlock outside rolling migration privileged region
Fri Nov 22 12:58:26 2019
NOTE: stopping process ARB0
NOTE: rebalance interrupted for group 4/0xb08c40b (XIFENFEI)
NOTE: ASM did background COD recovery for group 4/0xb08c40b (XIFENFEI)
NOTE: Assigning number (4,18) to disk (/dev/rhdisk7)
NOTE: requesting all-instance membership refresh for group=4
NOTE: initializing header on grp 4 disk XIFENFEI_0018
NOTE: requesting all-instance disk validation for group=4
NOTE: skipping rediscovery for group 4/0xb08c40b (XIFENFEI) on local instance.
NOTE: requesting all-instance disk validation for group=4
NOTE: skipping rediscovery for group 4/0xb08c40b (XIFENFEI) on local instance.
Fri Nov 22 12:58:41 2019
NOTE: initiating PST update: grp = 4
Fri Nov 22 12:58:41 2019
GMON updating group 4 at 16 for pid 27, osid 12649908
NOTE: PST update grp = 4 completed successfully
Fri Nov 22 12:58:41 2019
NOTE: membership refresh pending for group 4/0xb08c40b (XIFENFEI)
GMON querying group 4 at 17 for pid 18, osid 39912680
NOTE: cache opening disk 18 of grp 4: XIFENFEI_0018 path:/dev/rhdisk7
NOTE: Attempting voting file refresh on diskgroup XIFENFEI
NOTE: Refresh completed on diskgroup XIFENFEI. No voting file found.
GMON querying group 4 at 18 for pid 18, osid 39912680
SUCCESS: refreshed membership for 4/0xb08c40b (XIFENFEI)
NOTE: starting rebalance of group 4/0xb08c40b (XIFENFEI) at power 1
SUCCESS: ALTER DISKGROUP XIFENFEI ADD  DISK '/dev/rhdisk7' SIZE 491520M /* ASMCA */
Starting background process ARB0
Fri Nov 22 12:58:46 2019
ARB0 started with pid=44, OS id=54460432
…………
Fri Nov 22 12:59:57 2019
SQL> ALTER DISKGROUP XIFENFEI ADD  DISK '/dev/rhdisk10' SIZE 491520M ,
'/dev/rhdisk11' SIZE 491520M ,
'/dev/rhdisk8' SIZE 491520M ,
'/dev/rhdisk9' SIZE 491520M /* ASMCA */
NOTE: GroupBlock outside rolling migration privileged region
Fri Nov 22 12:59:57 2019
NOTE: stopping process ARB0
NOTE: rebalance interrupted for group 4/0xb08c40b (XIFENFEI)
NOTE: ASM did background COD recovery for group 4/0xb08c40b (XIFENFEI)
NOTE: Assigning number (4,19) to disk (/dev/rhdisk10)
NOTE: Assigning number (4,20) to disk (/dev/rhdisk11)
NOTE: Assigning number (4,21) to disk (/dev/rhdisk8)
NOTE: Assigning number (4,22) to disk (/dev/rhdisk9)
NOTE: requesting all-instance membership refresh for group=4
NOTE: initializing header on grp 4 disk XIFENFEI_0019
NOTE: initializing header on grp 4 disk XIFENFEI_0020
NOTE: initializing header on grp 4 disk XIFENFEI_0021
NOTE: initializing header on grp 4 disk XIFENFEI_0022
NOTE: requesting all-instance disk validation for group=4
NOTE: skipping rediscovery for group 4/0xb08c40b (XIFENFEI) on local instance.
Fri Nov 22 13:00:08 2019
NOTE: requesting all-instance disk validation for group=4
Fri Nov 22 13:00:08 2019
NOTE: skipping rediscovery for group 4/0xb08c40b (XIFENFEI) on local instance.
NOTE: initiating PST update: grp = 4
Fri Nov 22 13:00:13 2019
GMON updating group 4 at 19 for pid 27, osid 12649908
NOTE: PST update grp = 4 completed successfully
NOTE: membership refresh pending for group 4/0xb08c40b (XIFENFEI)
GMON querying group 4 at 20 for pid 18, osid 39912680
NOTE: cache opening disk 19 of grp 4: XIFENFEI_0019 path:/dev/rhdisk10
NOTE: cache opening disk 20 of grp 4: XIFENFEI_0020 path:/dev/rhdisk11
NOTE: cache opening disk 21 of grp 4: XIFENFEI_0021 path:/dev/rhdisk8
NOTE: cache opening disk 22 of grp 4: XIFENFEI_0022 path:/dev/rhdisk9
NOTE: Attempting voting file refresh on diskgroup XIFENFEI
NOTE: Refresh completed on diskgroup XIFENFEI. No voting file found.
GMON querying group 4 at 21 for pid 18, osid 39912680
SUCCESS: refreshed membership for 4/0xb08c40b (XIFENFEI)
SUCCESS: ALTER DISKGROUP XIFENFEI ADD  DISK '/dev/rhdisk10' SIZE 491520M ,
'/dev/rhdisk11' SIZE 491520M ,
'/dev/rhdisk8' SIZE 491520M ,
'/dev/rhdisk9' SIZE 491520M /* ASMCA */
NOTE: starting rebalance of group 4/0xb08c40b (XIFENFEI) at power 1
Starting background process ARB0

While ASM was doing the rebalance it ran into corrupt blocks, which directly caused the diskgroup to dismount.

Sun Nov 24 04:42:27 2019
NOTE: group 4 PST updated.
WARNING: cache read  a corrupt block: group=4(XIFENFEI) dsk=15 blk=258 disk=15
 (XIFENFEI_0015) incarn=1717056824 au=113792 blk=2 count=254
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM2/trace/+ASM2_x000_28639240.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483663] [258] [56 != 0]
NOTE: a corrupted block from group XIFENFEI was dumped to
   /u01/app/oracle/diag/asm/+asm/+ASM2/trace/+ASM2_x000_28639240.trc
WARNING: cache read (retry) a corrupt block: group=4(XIFENFEI)
 dsk=15 blk=258 disk=15 (XIFENFEI_0015) incarn=1717056824 au=113792 blk=2 count=1
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM2/trace/+ASM2_x000_28639240.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483663] [258] [56 != 0]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483663] [258] [56 != 0]
ERROR: cache failed to read group=4(XIFENFEI) dsk=15 blk=258 from disk(s): 15(XIFENFEI_0015)
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483663] [258] [56 != 0]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483663] [258] [56 != 0]
NOTE: cache initiating offline of disk 15 group XIFENFEI
NOTE: process _x000_+asm2 (28639240) initiating offline of disk
    15.1717056824 (XIFENFEI_0015) with mask 0x7e in group 4
NOTE: initiating PST update: grp = 4, dsk = 15/0x66583538, mask = 0x6a, op = clear
GMON updating disk modes for group 4 at 23 for pid 28, osid 28639240
ERROR: Disk 15 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 4)
Sun Nov 24 04:42:27 2019
NOTE: cache dismounting (not clean) group 4/0x0B08C40B (XIFENFEI)
WARNING: Offline for disk XIFENFEI_0015 in mode 0x7f failed.
Sun Nov 24 04:42:27 2019
NOTE: halting all I/Os to diskgroup 4 (XIFENFEI)
NOTE: messaging CKPT to quiesce pins Unix process pid: 59441780, image: oracle@xifenfei2 (B000)
Sun Nov 24 04:42:27 2019
ERROR: ORA-15130 thrown in ARB0 for group number 4
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM2/trace/+ASM2_arb0_50856926.trc:
ORA-15130: diskgroup "XIFENFEI" is being dismounted

At this point the diskgroup on both nodes fell into an endless cycle of mounting and then dismounting. Roughly, the analysis is that data written through the VG, or the forced removal of the disks from it, corrupted data that ASM had written to those disks, so the subsequent ASM rebalance hit corrupt blocks and the diskgroup dismounted. The solution was to patch the diskgroup's ACD and COD so that no rebalance is attempted, keeping the diskgroup in a stable mounted state, and then back up its data and rebuild the diskgroup. This customer was lucky: little had been written to the ASM disks through the VG, and the database kept running normally.
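Before backing up and rebuilding, it also helps to know which ASM files actually have extents on the corrupt disk; a sketch using the extent map view x$kffxp (group 4, disk 15 as reported in the log above):

-- files with at least one allocation unit on the damaged disk; anything listed here
-- deserves extra validation (for example RMAN BACKUP VALIDATE) once it has been copied out
select distinct number_kffxp as file_number
  from x$kffxp
 where group_kffxp = 4
   and disk_kffxp  = 15
 order by file_number;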
In situations like this, if the damage is extreme, for example the ASM diskgroup cannot be mounted at all, see: retrieving data files from inside ASM.
If the ASM metadata is extensively damaged and recovery at the ASM dictionary level is impossible, see: recovering from completely destroyed ASM disk headers.

Using SQL to get the actual file name behind an ASM alias


ASM's alias mechanism brings some convenience, but it also causes occasional trouble, for example when a tool cannot resolve aliases properly, so in some spare time I wrote SQL to map aliases directly to the actual data files.

Querying the data files directly:
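That query appears only as a screenshot in the original post; a minimal sketch of the direct query (assuming the standard v$datafile view) is simply:

-- alias-style paths such as +MGMT/ora18c/test01.dbf appear here exactly as they were specified
select file#, name from v$datafile;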


Viewing it with the ls command in asmcmd:

ASMCMD> ls -l +MGMT/ora18c/test01.dbf
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   SEP 06 00:00:00  N    test01.dbf => +MGMT/ORA18C/DATAFILE/TEST16K.274.1016183943

But with many files, translating them one by one like this is far too slow, so it can be done with a SQL statement that converts the alias in each data file path into the actual (OMF) storage path; the original post shows the query that retrieves the actual data file names, and its output, only as screenshots.
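Since the exact SQL in those screenshots cannot be reproduced, here is a sketch of the standard technique (walking v$asm_alias via parent_index/reference_index, assuming nothing beyond those fixed views) that yields the same mapping:

-- every file path in every diskgroup, with its ASM file number; for a given FNUM the row
-- with SYS_CREATED = 'Y' is the real OMF name, the rows with 'N' are user aliases for it
select concat('+' || gname, sys_connect_by_path(aname, '/')) as full_path,
       fnum, sys_created
  from (select g.name            gname,
               a.parent_index    pindex,
               a.name            aname,
               a.reference_index rindex,
               a.file_number     fnum,
               a.alias_directory adir,
               a.system_created  sys_created
          from v$asm_alias a, v$asm_diskgroup g
         where a.group_number = g.group_number)
 where adir = 'N'
 start with mod(pindex, power(2, 24)) = 0
connect by prior rindex = pindex;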

ERROR: diskgroup XXXX was not mounted


A two-node 10.2.0.5 RAC on AIX. After node 2's system disk failed, the OS was cloned from node 1 onto node 2, but the disk ordering on node 2 did not match node 1. After the AIX engineer performed the related operations and node 1 was rebooted, the DATADG diskgroup could no longer be mounted.

SQL> alter diskgroup datadg mount
Mon Jun 10 23:23:46 CST 2019
NOTE: cache registered group DATADG number=1 incarn=0x8cf61164
Mon Jun 10 23:23:46 CST 2019
NOTE: Hbeat: instance first (grp 1)
Mon Jun 10 23:23:50 CST 2019
NOTE: start heartbeating (grp 1)
Mon Jun 10 23:23:50 CST 2019
NOTE: cache dismounting group 1/0x8CF61164 (DATADG)
NOTE: dbwr not being msg'd to dismount
ERROR: diskgroup DATADG was not mounted

Checking information related to the DATADG diskgroup:

Tue Jan 29 19:21:45 CST 2019
NOTE: start heartbeating (grp 2)
NOTE: cache opening disk 0 of grp 2: DATADG_0000 path:/dev/rhdisk6
Tue Jan 29 19:21:45 CST 2019
NOTE: F1X0 found on disk 0 fcn 0.0
NOTE: cache opening disk 1 of grp 2: DATADG_0001 path:/dev/rhdisk7
NOTE: cache opening disk 2 of grp 2: DATADG_0002 path:/dev/rhdisk8
NOTE: cache opening disk 3 of grp 2: DATADG_0003 path:/dev/rhdisk9
NOTE: cache mounting (first) group 2/0x60E59155 (DATADG)
* allocate domain 2, invalid = TRUE
Tue Jan 29 19:21:45 CST 2019
NOTE: attached to recovery domain 2
Tue Jan 29 19:21:45 CST 2019
NOTE: cache recovered group 2 to fcn 0.849668
Tue Jan 29 19:21:45 CST 2019
NOTE: LGWR attempting to mount thread 1 for disk group 2
NOTE: LGWR mounted thread 1 for disk group 2
NOTE: opening chunk 1 at fcn 0.849668 ABA
NOTE: seq=21 blk=5394
Tue Jan 29 19:21:46 CST 2019
NOTE: cache mounting group 2/0x60E59155 (DATADG) succeeded
SUCCESS: diskgroup DATADG was mounted

This shows that the DATADG diskgroup consists of four disks, rhdisk6 through rhdisk9. The related disk information (shown as a screenshot in the original post) was queried next.


That check identified rhdisk7 as abnormal; analyzing the disk with kfed:

D:\BaiduNetdiskDownload\xifenfei>kfed read rhdisk7.dd
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                           34 ; 0x001: 0x22
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                   49407 ; 0x004: blk=49407
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                    58396 ; 0x010: 0x0000e41c
kfbh.fcn.wrap:                   131072 ; 0x014: 0x00020000
kfbh.spare1:                 4294967064 ; 0x018: 0xffffff18
kfbh.spare2:                 2105310074 ; 0x01c: 0x7d7c7b7a
005918A00 00002200 0000C0FF 00000000 00000000  [."..............]
005918A10 0000E41C 00020000 FFFFFF18 7D7C7B7A  [............z{|}]
005918A20 00000000 00000000 00000000 00000000  [................]
  Repeat 253 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]
D:\BaiduNetdiskDownload\xifenfei>kfed read rhdisk7.dd blkn=1
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
006EF8A00 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]
D:\BaiduNetdiskDownload\xifenfei>kfed read rhdisk7.dd blkn=2|more
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            3 ; 0x002: KFBTYP_ALLOCTBL
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                33554432 ; 0x004: blk=33554432
kfbh.block.obj:                16777344 ; 0x008: file=128
kfbh.check:                  3844041089 ; 0x00c: 0xe51f6981
kfbh.fcn.base:               1297484544 ; 0x010: 0x4d560b00
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdatb10.aunum:                       0 ; 0x000: 0x00000000
kfdatb10.shrink:                  49153 ; 0x004: 0xc001
kfdatb10.ub2pad:                  20555 ; 0x006: 0x504b
kfdatb10.auinfo[0].link.next:      2048 ; 0x008: 0x0800
kfdatb10.auinfo[0].link.prev:      2048 ; 0x00a: 0x0800
kfdatb10.auinfo[0].free:              0 ; 0x00c: 0x0000
kfdatb10.auinfo[0].total:         49153 ; 0x00e: 0xc001
kfdatb10.auinfo[1].link.next:      4096 ; 0x010: 0x1000
kfdatb10.auinfo[1].link.prev:      4096 ; 0x012: 0x1000
kfdatb10.auinfo[1].free:              0 ; 0x014: 0x0000
kfdatb10.auinfo[1].total:             0 ; 0x016: 0x0000
kfdatb10.auinfo[2].link.next:      6144 ; 0x018: 0x1800
kfdatb10.auinfo[2].link.prev:      6144 ; 0x01a: 0x1800
kfdatb10.auinfo[2].free:              0 ; 0x01c: 0x0000
kfdatb10.auinfo[2].total:             0 ; 0x01e: 0x0000
kfdatb10.auinfo[3].link.next:      8192 ; 0x020: 0x2000
kfdatb10.auinfo[3].link.prev:      8192 ; 0x022: 0x2000
kfdatb10.auinfo[3].free:              0 ; 0x024: 0x0000

To gauge how badly the disks might be damaged: on AIX, ASM disk blocks characteristically start with 0082, so the disks were opened with a tool and searched for that marker, comparing a healthy disk against the abnormal one (both shown as screenshots in the original post).
From the analysis above, the rough assessment is that the damaged metadata on rhdisk7 goes beyond blocks 0 and 1, so manually repairing it and continuing to use it was unlikely to work. Since the customer's database was small, the chosen approach was to copy the data files, redo logs and control files out to a file system and open the database there.
With a bit of luck, the recovery was perfect, with zero data loss.

WARNING: Read Failed. leading to an ASM diskgroup failure


A customer expanded an ASM diskgroup; some time later, the ASM DATA diskgroup simply dismounted.

Wed May 29 18:37:25 2019
SUCCESS: ALTER DISKGROUP DATA ADD  DISK '/dev/oracleasm/disks/DATA_0028' SIZE 511993M ,
'/dev/oracleasm/disks/DATA_0027' SIZE 511993M ,
'/dev/oracleasm/disks/DATA_0026' SIZE 511993M ,
'/dev/oracleasm/disks/DATA_0025' SIZE 511993M /* ASMCA */
NOTE: starting rebalance of group 1/0x9e18e2f1 (DATA) at power 1
Wed May 29 18:37:26 2019
Starting background process ARB0
Wed May 29 18:37:26 2019
ARB0 started with pid=34, OS id=96638
NOTE: assigning ARB0 to group 1/0x9e18e2f1 (DATA) with 1 parallel I/O
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
cellip.ora not found.
Wed May 29 19:21:43 2019
WARNING: Read Failed. group:1 disk:27 AU:0 offset:360448 size:4096
WARNING: cache failed reading from group=1(DATA) dsk=27 blk=88 count=1 from disk= 27
(DATA_0027) kfkist=0x20 status=0x02 osderr=0x0 file=kfc.c line=11596
ERROR: cache failed to read group=1(DATA) dsk=27 blk=88 from disk(s): 27(DATA_0027)
ORA-15080: synchronous I/O operation to a disk failed
ORA-27072: File I/O error
Linux-x86_64 Error: 5: Input/output error
Additional information: 4
Additional information: 704
Additional information: -1
NOTE: cache initiating offline of disk 27 group DATA
NOTE: process _user31879_+asm1 (31879) initiating offline of disk 27.3915911747 (DATA_0027) with mask 0x7e in group 1
NOTE: initiating PST update: grp = 1, dsk = 27/0xe9681243, mask = 0x6a, op = clear
Wed May 29 19:21:43 2019
GMON updating disk modes for group 1 at 10 for pid 35, osid 31879
ERROR: Disk 27 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 1)
Wed May 29 19:21:43 2019
NOTE: cache dismounting (not clean) group 1/0x9E18E2F1 (DATA)
NOTE: messaging CKPT to quiesce pins Unix process pid: 90256, image: oracle@ftz-db-o1 (B000)
Wed May 29 19:21:43 2019
NOTE: halting all I/Os to diskgroup 1 (DATA)
WARNING: Offline for disk DATA_0027 in mode 0x7f failed.
Wed May 29 19:21:43 2019
NOTE: LGWR doing non-clean dismount of group 1 (DATA)
NOTE: LGWR sync ABA=27.3207 last written ABA 27.3207
Wed May 29 19:21:43 2019
ERROR: ORA-15130 thrown in ARB0 for group number 1
Errors in file /oracle/grid_base/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_96638.trc:
ORA-15130: diskgroup "" is being dismounted
ORA-15130: diskgroup "DATA" is being dismounted
Wed May 29 19:21:43 2019
NOTE: stopping process ARB0

Subsequent attempts to mount the DATA diskgroup succeeded, but it immediately dismounted again.


The essential cause of the symptoms above is that after new disks were added to the ASM diskgroup, a rebalance started, but I/O read errors on disk 27 prevented the rebalance from completing, so the DATA diskgroup could not stay mounted. The approach to fixing it is to patch the ASM diskgroup to disable the rebalance, so that the DATA diskgroup stops dismounting, and then carry out the rest of the recovery.
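Before any further rebalance is attempted it is also worth pinning down the failing device so the underlying storage can be checked or replaced; a small query (not from the original post) using the group and disk numbers reported in the log:

-- group 1 / disk 27 is the device returning the read errors (DATA_0027 in the log above)
select disk_number, name, path, mount_status, mode_status, state
  from v$asm_disk
 where group_number = 1
   and disk_number  = 27;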

ORA-600 kfrValAcd30 recovery


A customer's storage controller failed; after it was repaired, ASM could not mount and reported ORA-600 kfrValAcd30, so they asked us for technical support.


The ASM alert log reports:

Wed Apr 03 16:50:57 2019
SQL> alter diskgroup data mount
NOTE: cache registered group DATA number=1 incarn=0x14248741
NOTE: cache began mount (first) of group DATA number=1 incarn=0x14248741
NOTE: Assigning number (1,0) to disk (ORCL:DATAVOL1)
Wed Apr 03 16:51:03 2019
NOTE: start heartbeating (grp 1)
kfdp_query(DATA): 15
kfdp_queryBg(): 15
NOTE: cache opening disk 0 of grp 1: DATAVOL1 label:DATAVOL1
NOTE: F1X0 found on disk 0 au 2 fcn 0.0
NOTE: cache mounting (first) external redundancy group 1/0x14248741 (DATA)
Wed Apr 03 16:51:04 2019
* allocate domain 1, invalid = TRUE
Wed Apr 03 16:51:04 2019
NOTE: attached to recovery domain 1
NOTE: starting recovery of thread=1 ckpt=27.2697 group=1 (DATA)
Errors in file /u01/app/grid/diag/asm/+asm/+ASM2/trace/+ASM2_ora_15951.trc  (incident=23394):
ORA-00600: internal error code, arguments: [kfrValAcd30], [DATA], [1], [27], [2697], [28], [2697], [], [], [], [], []
Incident details in: /u01/app/grid/diag/asm/+asm/+ASM2/incident/incdir_23394/+ASM2_ora_15951_i23394.trc
Abort recovery for domain 1
NOTE: crash recovery signalled OER-600
ERROR: ORA-600 signalled during mount of diskgroup DATA
ORA-00600: internal error code, arguments: [kfrValAcd30], [DATA], [1], [27], [2697], [28], [2697], [], [], [], [], []
ERROR: alter diskgroup data mount
NOTE: cache dismounting (clean) group 1/0x14248741 (DATA)
NOTE: lgwr not being msg'd to dismount
freeing rdom 1
Wed Apr 03 16:51:05 2019
Sweep [inc][23394]: completed
Sweep [inc2][23394]: completed
Wed Apr 03 16:51:05 2019
Trace dumping is performing id=[cdmp_20190403165105]
NOTE: detached from domain 1
NOTE: cache dismounted group 1/0x14248741 (DATA)
NOTE: cache ending mount (fail) of group DATA number=1 incarn=0x14248741
kfdp_dismount(): 16
kfdp_dismountBg(): 16
NOTE: De-assigning number (1,0) from disk (ORCL:DATAVOL1)
ERROR: diskgroup DATA was not mounted
NOTE: cache deleting context for group DATA 1/337938241

Related MOS note
Reference: ORA-600 [KFRVALACD30] in ASM (Doc ID 2123013.1)

ORA-00600: internal error code, arguments: [kfrValAcd30] can be caused either by a bug or by a hardware fault. Given this customer's situation, the most likely cause is that the hardware failure left the ACD (Active Change Directory) of the ASM disk group inconsistent, so crash recovery of the disk group cannot complete and the mount fails. With luck, a kfed-level repair of the affected metadata gets the disk group to mount again; if not, the datafiles can still be carved out at the raw-device level to recover as much data as possible.
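As a rough, hedged sketch of what a kfed-level repair starts from: kfed can dump the raw ASM metadata so that the ACD inconsistency reported in the ORA-600 arguments (thread 1, checkpoint expected at seq 27 block 2697, seq 28 found on disk) can be examined before anything is written back. The ASMLib device path below is an assumption; keep a copy of every block dumped.

DISK=/dev/oracleasm/disks/DATAVOL1   # hypothetical ASMLib device path

# disk header: confirms AU size, metadata block size and the disk / group names
kfed read $DISK aun=0 blkn=0 | grep -E 'kfdhdb\.(dskname|grpname|ausize|blksize)'

# dump a metadata block to a file so it can be inspected and diffed offline;
# an edited copy would later be written back with 'kfed write' (deliberately not shown)
kfed read $DISK aun=2 blkn=0 > au2_blk0.txt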

Another recovery of ASM disks formatted as a file system

Contact: Phone/WeChat (+86 17813235971) QQ (107644445)

Title: Another recovery of ASM disks formatted as a file system

Author: 惜分飞 © All rights reserved. [No reproduction in any form without the author's consent; the author reserves the right to pursue legal liability.]

Yet another customer formatted the ASM disks of a Windows RAC as NTFS (the DATA disk group consisted of three 500 GB disks; the first two were formatted, leaving only one intact). Worse, after the format they attempted a series of "repairs" (fixing the disk header, re-partitioning, and other disk operations), which both increased the difficulty of the recovery and overwrote additional data.
ASM alert log errors:

Thu Aug 23 11:20:14 2018
NOTE: ASM client orcl1:orcl disconnected unexpectedly.
NOTE: check client alert log.
NOTE: Process state recorded in trace file d:\app\administrator\diag\asm\+asm\+asm1\trace\+asm1_ora_2260.trc
Thu Aug 23 11:20:28 2018
Errors in file d:\app\administrator\diag\asm\+asm\+asm1\trace\+asm1_lgwr_3820.trc:
ORA-27070: async read/write failed
OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 87) 参数错误。
WARNING: IO Failed. group:2 disk(number.incarnation):1.0xf0f0a1cb disk_path:\\.\ORCLDISKDATA1
	 AU:26 disk_offset(bytes):27566080 io_size:4096 operation:Write type:synchronous
	 result:I/O error process_id:3820
NOTE: unable to write any mirror side for diskgroup DATA
NOTE: cache initiating offline of disk 1 group DATA
NOTE: process 3268:3820 initiating offline of disk 1.4042301899 (DATA_0001) with mask 0x7e in group 2
WARNING: Disk DATA_0001 in mode 0x7f is now being taken offline
NOTE: initiating PST update: grp = 2, dsk = 1/0xf0f0a1cb, mode = 0x15
kfdp_updateDsk(): 22
Thu Aug 23 11:20:28 2018
kfdp_updateDskBg(): 22
ERROR: too many offline disks in PST (grp 2)
WARNING: Disk DATA_0001 in mode 0x7f offline aborted

Database alert log errors:

WARNING: IO Failed. group:2 disk(number.incarnation):1.0xf0f0a1cb disk_path:\\.\ORCLDISKDATA1
	 AU:422 disk_offset(bytes):442515456 io_size:16384 operation:Read type:synchronous
	 result:I/O error process_id:11992
WARNING: failed to read mirror side 1 of virtual extent 5 logical extent 0 of file 260 in
group [2.1859146063] from disk DATA_0001  allocation unit 422 reason error; if possible,will try another mirror side
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_ora_11992.trc:
ORA-15080: 与磁盘的同步 I/O 操作失败
WARNING: failed to write mirror side 1 of virtual extent 5 logical extent 0 of file 260
in group 2 on disk 1 allocation unit 422
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_ora_11992.trc:
ORA-00202: 控制文件: ''+DATA/orcl/controlfile/current.260.944422981''
ORA-15081: 无法将 I/O 操作提交到磁盘
Thu Aug 23 11:20:13 2018
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_dbw1_3224.trc:
ORA-27070: 异步读取/写入失败
WARNING: IO Failed. group:2 disk(number.incarnation):1.0xf0f0a1cb disk_path:\\.\ORCLDISKDATA1
	 AU:841 disk_offset(bytes):882532352 io_size:131072 operation:Write type:asynchronous
	 result:I/O error process_id:3224
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_dbw1_3224.trc:
ORA-15080: 与磁盘的同步 I/O 操作失败
WARNING: failed to write mirror side 1 of virtual extent 240 logical extent 0 of file 259 in group 2 on disk 1
allocation unit 841
KCF: read, write or open error, block=0x7853 online=1
        file=4 '+DATA/orcl/datafile/users.259.944422883'
        error=15081 txt: ''
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_dbw1_3224.trc:
ORA-27070: 异步读取/写入失败
OSD-04006: ReadFile() 失败, 无法读取文件
O/S-Error: (OS 87) 参数错误。
WARNING: IO Failed. group:2 disk(number.incarnation):1.0xf0f0a1cb disk_path:\\.\ORCLDISKDATA1
	 AU:422 disk_offset(bytes):442515456 io_size:16384 operation:Read type:synchronous
	 result:I/O error process_id:3224
WARNING: failed to read mirror side 1 of virtual extent 5 logical extent 0 of file 260 in group [2.1859146063] from
disk DATA_0001  allocation unit 422 reason error; if possible,will try another mirror side
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_dbw1_3224.trc:
ORA-15080: 与磁盘的同步 I/O 操作失败
WARNING: failed to write mirror side 1 of virtual extent 5 logical extent 0 of file 260 in group 2 on disk 1
allocation unit 422
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_dbw1_3224.trc:
ORA-00202: 控制文件: ''+DATA/orcl/controlfile/current.260.944422981''
ORA-15081: 无法将 I/O 操作提交到磁盘
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_dbw1_3224.trc:
ORA-00204: 读取控制文件时出错 (块 41, # 块 1)
ORA-00202: 控制文件: ''+DATA/orcl/controlfile/current.260.944422981''
ORA-15081: 无法将 I/O 操作提交到磁盘
DBW1 (ospid: 3224): terminating the instance due to error 204

Because of the customer's series of recovery attempts, the disk listing was no longer complete:

D:\>asmtool -list
NTFS                             \Device\Harddisk0\Partition1              100M
NTFS                             \Device\Harddisk0\Partition2           102298M
NTFS                             \Device\Harddisk1\Partition1           102397M
NTFS                             \Device\Harddisk2\Partition1           204797M
---one more disk is missing from this listing
ORCLDISKDATA10                   \Device\Harddisk4\Partition1           511997M--the disk the customer tried to repair
ORCLDISKDATA2                    \Device\Harddisk5\Partition1           511997M
ORCLDISKRECOVERY0                \Device\Harddisk6\Partition1            51197M
ORCLDISKRECOVERY1                \Device\Harddisk7\Partition1            51197M
ORCLDISKRECOVERY2                \Device\Harddisk8\Partition1            51197M
ORCLDISKCRS0                     \Device\Harddisk9\Partition1            10237M
ORCLDISKCRS1                     \Device\Harddisk10\Partition1           10237M
ORCLDISKCRS2                     \Device\Harddisk11\Partition1           10237M
NTFS                             \Device\Harddisk12\Partition2         4194174M

After activating the volumes at the host level, deleting the stray partitions and performing other OS-level operations, and then reconstructing the disk headers with kfed, the disks show up normally at the OS level again:

C:\Users\Administrator>asmtool -list
NTFS                             \Device\Harddisk0\Partition1              100M
NTFS                             \Device\Harddisk0\Partition2           102298M
NTFS                             \Device\Harddisk1\Partition1           102397M
NTFS                             \Device\Harddisk2\Partition1           204797M
------disks that needed repair------
ORCLDISKDATA0                    \Device\Harddisk3\Partition1           511997M
ORCLDISKDATA1                    \Device\Harddisk4\Partition1           511997M
ORCLDISKDATA2                    \Device\Harddisk5\Partition1           511997M
-----------------------
ORCLDISKRECOVERY0                \Device\Harddisk6\Partition1            51197M
ORCLDISKRECOVERY1                \Device\Harddisk7\Partition1            51197M
ORCLDISKRECOVERY2                \Device\Harddisk8\Partition1            51197M
ORCLDISKCRS0                     \Device\Harddisk9\Partition1            10237M
ORCLDISKCRS1                     \Device\Harddisk10\Partition1           10237M
ORCLDISKCRS2                     \Device\Harddisk11\Partition1           10237M
NTFS                             \Device\Harddisk12\Partition2         4194174M
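As a rough sketch of the kfed work mentioned above (a Linux-style device name is used purely for readability; on this Windows cluster the devices are the \\.\ORCLDISKDATAn paths, and the exact reconstruction depends on what the format left behind): ASM keeps a backup copy of the disk header at AU 1, block 254 with the default 1 MB AU and 4 KB metadata block size, so a header wiped by an NTFS format can often be compared against, and restored from, that backup.

DISK=/dev/asm-data0            # hypothetical device path

# whatever is left of the primary header after the NTFS format
kfed read $DISK aun=0 blkn=0 | head -20

# the backup header ASM keeps at the end of AU 1
kfed read $DISK aun=1 blkn=254 | head -20

# if the backup is intact, kfed can rebuild the primary header from it
# (take a dd image of the whole disk before running anything that writes)
kfed repair $DISK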

Because the directory AUs inside the ASM disk group were completely destroyed, the data could not be copied out through ASM directly; instead the disks were scanned at the raw-device level and the data was reconstructed AU by AU. Some data AUs had been overwritten by the NTFS format and the later mis-operations, but everything else was recovered, saving the vast majority of the data.
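The AU-level scan itself was done with in-house tooling; purely as a hedged sketch of the idea, the loop below walks a raw disk in 1 MB AU steps and flags every AU whose leading block carries the Oracle block format byte (0xA2), giving a first rough map of where datafile content sits before it is carved out block by block. The device path, AU size and block size are assumptions, and a real carver has to examine every block and its file#/block# fields, not just the first block of each AU.

#!/bin/bash
DISK=/dev/asm-data0                       # hypothetical raw device
AU_SIZE=$((1024*1024))                    # 1 MB allocation unit
TOTAL_AU=$(( $(blockdev --getsize64 "$DISK") / AU_SIZE ))

for (( au=0; au<TOTAL_AU; au++ )); do
  # first two bytes of the AU: byte0 = block type, byte1 = 0xa2 block format (10g+)
  sig=$(dd if="$DISK" bs=4096 count=1 skip=$((au*256)) 2>/dev/null \
        | od -An -tx1 -N2 | tr -d ' \n')
  if [ "${sig:2:2}" = "a2" ]; then
    echo "AU $au : first block type 0x${sig:0:2} looks like an Oracle block"
  fi
done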
For the datafile recovery, see: recovering from a completely damaged ASM disk header
A similar earlier recovery on the Windows platform: recovering ASM disks formatted as NTFS
If you run into this kind of situation and cannot resolve it, contact us for professional ORACLE database recovery support.
Phone: 17813235971    QQ: 107644445    E-Mail: dba@xifenfei.com