Recovering Oracle ASM Disks After Partition Deletion

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consulting: xifenfei

Title: Recovering Oracle ASM Disks After Partition Deletion

Author: xifenfei © All rights reserved. [Not to be reproduced in any form without the author's consent; the author reserves the right to pursue legal liability.]

A friend brought me a database-failure case. Roughly what happened: a 39 TB LUN had been split into 15 partitions with parted and handed to Oracle ASM as disk group data4. Four more partitions were then created for a data5 disk group, but because au_size had been set wrong, that disk group and those four partitions were dropped. Six new partitions were allocated, and the last five of them were used to create data5. After some time in production, while investigating an Oracle space shortage, someone assumed this LUN only used the first 15 partitions, manually deleted the trailing 6 partitions, and created 4 new ones. The database then crashed, and they realized the in-use partitions had been deleted, so they removed the 4 newly created partitions as well. When I took over the case, the partition table of the 39 TB LUN looked like this:

[root@node1 linux64]# parted /dev/mapper/36000d31003d39e000000000000000004
GNU Parted 2.1
Using /dev/mapper/36000d31003d39e000000000000000004
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print                                                            
Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/36000d31003d39e000000000000000004: 39.6TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name         Flags
 1      2097kB  2000GB  2000GB               asmdata4-1
 2      2000GB  4000GB  2000GB               asmdata4-2
 3      4000GB  6000GB  2000GB               asmdata4-3
 4      6000GB  8000GB  2000GB               asmdata4-4
 5      8000GB  10.0TB  2000GB               asmdata4-5
 6      10.0TB  12.0TB  2000GB               asmdata4-6
 7      12.0TB  14.0TB  2000GB               asmdata4-7
 8      14.0TB  16.0TB  2000GB               asmdata4-8
 9      16.0TB  18.0TB  2000GB               asmdata4-9
10      18.0TB  20.0TB  2000GB               asmdata4-10
11      20.0TB  22.0TB  2000GB               asmdata4-11
12      22.0TB  24.0TB  2000GB               asmdata4-12
13      24.0TB  26.0TB  2000GB               asmdata4-13
14      26.0TB  28.0TB  2000GB               asmdata4-14
15      28.0TB  30.0TB  2000GB               asmdata4-15

While the system was still healthy, the ASM disks on this LUN's partitions had been created as follows:

 SQL> CREATE DISKGROUP DATA4 EXTERNAL REDUNDANCY  DISK 
'/dev/mapper/36000d31003d39e000000000000000004p1' SIZE 1907346M ,
 '/dev/mapper/36000d31003d39e000000000000000004p2' SIZE 1907350M ,
 '/dev/mapper/36000d31003d39e000000000000000004p3' SIZE 1907348M ,
 '/dev/mapper/36000d31003d39e000000000000000004p4' SIZE 1907348M ,
 '/dev/mapper/36000d31003d39e000000000000000004p5' SIZE 1907350M  
ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='8M' /* ASMCA */ 

SQL> ALTER DISKGROUP DATA4 ADD  DISK 
'/dev/mapper/36000d31003d39e000000000000000004p10' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p6' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p7' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p8' SIZE 1907350M ,
'/dev/mapper/36000d31003d39e000000000000000004p9' SIZE 1907348M /* ASMCA */ 

SQL> ALTER DISKGROUP DATA4 ADD  DISK 
'/dev/mapper/36000d31003d39e000000000000000004p11' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p12' SIZE 1907350M ,
'/dev/mapper/36000d31003d39e000000000000000004p13' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p14' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p15' SIZE 1907350M /* ASMCA */ 

SQL> CREATE DISKGROUP DATA5 EXTERNAL REDUNDANCY  DISK 
'/dev/mapper/36000d31003d39e000000000000000004p17' SIZE 1716614M ,
'/dev/mapper/36000d31003d39e000000000000000004p18' SIZE 1716614M ,
'/dev/mapper/36000d31003d39e000000000000000004p19' SIZE 1716614M ,
'/dev/mapper/36000d31003d39e000000000000000004p20' SIZE 1716614M ,
'/dev/mapper/36000d31003d39e000000000000000004p21' SIZE 1621246M  
ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='4M' /* ASMCA */ 

Given the current state, all of data4's partitions are intact; the task is to locate the data of the five data5 partitions. Because the customer was unsure of the size of partition p16, the starting offsets of the five partitions after it could not be pinned down, which blocked the recovery. I tried a shell script wrapped around kfed to locate the ASM disk headers:

#!/bin/bash
# scan allocation units with kfed, looking for ASM disk headers of the DATA groups
# (start and end values were elided in the original write-up)
j=xxxxxxxxxxx
for ((i=xxxxxx; i<=j; i++))
do
  echo "-----$i--------" >> /home/get_au.txt
  kfed read /dev/mapper/36000d31003d39e000000000000000004 aun=$i |
    grep "kfdhdb.dskname:              DATA" >> /home/get_au.txt
done

This produced no hits. Analysis showed that because the LUN is so large, the aun values became big enough to overflow inside kfed, so it could not read valid blocks. Relying on how parted lays out partitions, I manually dd'ed out selected regions for analysis.
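A minimal sketch of that step, assuming a candidate offset (the real offsets were elided in the write-up): carving a window of the LUN into a local file keeps the aun values kfed has to address small.

# carve a 2 GiB window starting at a guessed offset into a scratch file;
# xifenfei.dd is the file read with kfed below (offset placeholder kept)
dd if=/dev/mapper/36000d31003d39e000000000000000004 bs=1M skip=xxxxxx count=2048 of=xifenfei.dd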

[root@node1 bak]# kfed read xifenfei.dd aun=134|more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3357988283 ; 0x00c: 0xc826d5bb
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:              DATA5_0000 ; 0x028: length=10
kfdhdb.grpname:                   DATA5 ; 0x048: length=5
kfdhdb.fgname:               DATA5_0000 ; 0x068: length=10
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33116450 ; 0x0a8: HOUR=0x2 DAYS=0x9 MNTH=0x4 YEAR=0x7e5
kfdhdb.crestmp.lo:            477378560 ; 0x0ac: USEC=0x0 MSEC=0x10e SECS=0x7 MINS=0x7
kfdhdb.mntstmp.hi:             33116450 ; 0x0b0: HOUR=0x2 DAYS=0x9 MNTH=0x4 YEAR=0x7e5
kfdhdb.mntstmp.lo:            486256640 ; 0x0b4: USEC=0x0 MSEC=0x2ec SECS=0xf MINS=0x7
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  4194304 ; 0x0bc: 0x00400000
kfdhdb.mfact:                    454272 ; 0x0c0: 0x0006ee80
kfdhdb.dsksize:                  429153 ; 0x0c4: 0x00068c61
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002

This located the first disk of data5 and fixed its starting position. A dd command was then constructed to copy the partition's data onto a new disk:

dd if=/dev/mapper/36000d31003d39e000000000000000004 bs=4M skip=xxxxx count=429153 of=/dev/sdu
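As a sanity check (a sketch; the skip value stays elided), the count maps directly onto the disk header: kfdhdb.dsksize is counted in allocation units and au_size is 4M.

# 429153 AUs x 4 MiB/AU = 1716612 MiB, in line with the ~1716614M SIZE
# used when disk group DATA5 was created
echo $((429153 * 4))   # prints 1716612 (MiB)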

The copied data was then verified with kfed:
[screenshot: kfed output of the copied disk]


The remaining disks were handled the same way; all five ASM disks were eventually located and dd'ed onto new disks. Then CRS was started and data5 mounted (a sketch follows).
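A minimal sketch of that step (environment assumed):

# as root: start the clusterware stack
crsctl start crs
# as the grid user: mount the rebuilt disk group
asmcmd mount DATA5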
[screenshots: CRS startup and data5 mount output]

Once the data5 disk group mounted successfully, the database started cleanly: a complete recovery of the ASM disk group data after the partitions had been deleted from the LUN.
[screenshot: database open]

We were lucky this time: the LUN's partitions had only been deleted and re-created with parted, with no filesystem created on top and no new ASM disks written, otherwise part of the data would have been lost. For partitions that have been partially overwritten, data has to be salvaged as far as possible with low-level fragment reconstruction. Related write-ups:
Recovery of an ASM disk that was added to an LVM VG
Another recovery of an ASM disk formatted as a filesystem
Recovering datafiles damaged by filesystem corruption
A flawless recovery of an ASM disk formatted as NTFS
Recovering Oracle datafiles that are 0 KB or missing
Yet another recovery of an ASM disk formatted as an ext3 filesystem
Oracle ASM disk formatting recovery: formatted as ext4
Oracle ASM disk formatting recovery: formatted as NTFS
A zero-data-loss recovery after oracleasm createdisk re-created an ASM disk

drop tablespace Data Recovery

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consulting: xifenfei

Title: drop tablespace Data Recovery

Author: xifenfei © All rights reserved. [Not to be reproduced in any form without the author's consent; the author reserves the right to pursue legal liability.]

A customer ran drop tablespace xxx including contents and datafiles. Although the drop command returned an error, a large number of objects in the tablespace were dropped; the files themselves were not deleted at the OS level.

Tue Jun 29 14:38:26 2021
DROP TABLESPACE "XFF" INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS
Tue Jun 29 14:38:32 2021
Thread 1 advanced to log sequence 975 (LGWR switch)
  Current log# 3 seq# 975 mem# 0: D:\APP\ADMINISTRATOR\ORADATA\XFF\REDO03.LOG
ORA-604 signalled during: DROP TABLESPACE "XFF" INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS...
Tue Jun 29 14:40:44 2021
ALTER DATABASE DATAFILE 'D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF'  AUTOEXTEND ON NEXT 50M
Completed: ALTER DATABASE DATAFILE 'D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF'  AUTOEXTEND ON NEXT 50M
ALTER TABLESPACE "XFF" DROP DATAFILE  'D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF02.DBF'
WARNING: Cannot delete file D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF02.DBF
Errors in file d:\app\administrator\diag\rdbms\XFF\XFF\trace\XFF_ora_2528.trc:
ORA-01265: Unable to delete DATA D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF02.DBF
ORA-27056: could not delete file
OSD-04024: Unable to delete file.
O/S-Error: (OS 32) The process cannot access the file because it is being used by another process.
Completed: ALTER TABLESPACE "XFF" DROP DATAFILE  'D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF02.DBF'
Tue Jun 29 14:58:51 2021
ALTER DATABASE DATAFILE 'D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF'  AUTOEXTEND ON NEXT 50M MAXSIZE UNLIMITED
Completed: ALTER DATABASE DATAFILE 'D:\APP\ADMINISTRATOR\ORADATA\XFF\XFF'  AUTOEXTEND ON NEXT 50M MAXSIZE UNLIMITED

Using a tool to extract the obj$ dictionary information and joining it against user$, we judged that essentially all of the business users' table data had been dropped.
[screenshot: obj$/user$ analysis]
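A sketch of the kind of dictionary cross-check involved, assuming the data dictionary is still readable (the owner name is a placeholder):

sqlplus -S / as sysdba <<'EOF'
-- objects still registered for one owner; dropped tables vanish from obj$
select o.obj#, o.dataobj#, o.name, o.type#
  from sys.obj$ o, sys.user$ u
 where o.owner# = u.user#
   and u.name = 'BUSINESS_USER'
 order by o.obj#;
EOF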


In this situation conventional methods cannot recover the data; it has to be approached as dropped-table recovery (see the post: DUL recovery of dropped tables test). By combining the table definitions supplied by the customer with some special techniques, the obj# and dataobj# of all objects were recovered and the customer's data was quickly restored, to the customer's satisfaction.
[screenshot: recovered data]

ORA-15335: ASM metadata corruption detected in disk group 'DATA'

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consulting: xifenfei

Title: ORA-15335: ASM metadata corruption detected in disk group 'DATA'

Author: xifenfei © All rights reserved. [Not to be reproduced in any form without the author's consent; the author reserves the right to pursue legal liability.]

After a disk was added to grow an ASM disk group, the instance reported ORA-15335: ASM metadata corruption detected in disk group 'DATA' and ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]. The disk group dismounted, and after each subsequent mount it dismounted again immediately.

Tue Jun 29 09:19:09 2021
SQL> ALTER DISKGROUP DATA ADD  DISK '/dev/raw/raw5' SIZE 102400M /* ASMCA */ 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (2,1) to disk (/dev/raw/raw5)
NOTE: requesting all-instance membership refresh for group=2
NOTE: initializing header on grp 2 disk DATA_0001
NOTE: requesting all-instance disk validation for group=2
Tue Jun 29 09:19:11 2021
NOTE: skipping rediscovery for group 2/0xb0c845ce (DATA) on local instance.
NOTE: requesting all-instance disk validation for group=2
NOTE: skipping rediscovery for group 2/0xb0c845ce (DATA) on local instance.
NOTE: initiating PST update: grp = 2
Tue Jun 29 09:19:16 2021
GMON updating group 2 at 7 for pid 27, osid 25020
NOTE: PST update grp = 2 completed successfully 
NOTE: membership refresh pending for group 2/0xb0c845ce (DATA)
GMON querying group 2 at 8 for pid 18, osid 3852
NOTE: cache opening disk 1 of grp 2: DATA_0001 path:/dev/raw/raw5
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
GMON querying group 2 at 9 for pid 18, osid 3852
SUCCESS: refreshed membership for 2/0xb0c845ce (DATA)
Tue Jun 29 09:19:20 2021
SUCCESS: ALTER DISKGROUP DATA ADD  DISK '/dev/raw/raw5' SIZE 102400M /* ASMCA */
NOTE: starting rebalance of group 2/0xb0c845ce (DATA) at power 1
Starting background process ARB0
Tue Jun 29 09:19:21 2021
ARB0 started with pid=33, OS id=25176 
NOTE: assigning ARB0 to group 2/0xb0c845ce (DATA) with 1 parallel I/O
cellip.ora not found.
Tue Jun 29 09:19:24 2021
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Tue Jun 29 09:19:46 2021
WARNING: cache read  a corrupt block: group=2(DATA) dsk=0 blk=7 disk=0 (DATA_0000) incarn=3915953476 au=0 blk=7 count=1
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25176.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
NOTE: a corrupted block from group DATA was dumped to /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25176.trc
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25176.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
ERROR: cache failed to read group=2(DATA) dsk=0 blk=7 from disk(s): 0(DATA_0000)
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
NOTE: cache initiating offline of disk 0 group DATA
NOTE: process _arb0_+asm1 (25176) initiating offline of disk 0.3915953476 (DATA_0000) with mask 0x7e in group 2
NOTE: initiating PST update: grp = 2, dsk = 0/0xe968b544, mask = 0x6a, op = clear
Tue Jun 29 09:19:46 2021
GMON updating disk modes for group 2 at 10 for pid 33, osid 25176
ERROR: Disk 0 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 2)
Tue Jun 29 09:19:46 2021
NOTE: cache dismounting (not clean) group 2/0xB0C845CE (DATA) 
NOTE: messaging CKPT to quiesce pins Unix process pid: 25395, image: oracle@frsrac1 (B000)
Tue Jun 29 09:19:46 2021
NOTE: halting all I/Os to diskgroup 2 (DATA)
Tue Jun 29 09:19:46 2021
NOTE: LGWR doing non-clean dismount of group 2 (DATA)
NOTE: LGWR sync ABA=11.10715 last written ABA 11.10715
WARNING: Offline for disk DATA_0000 in mode 0x7f failed.
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25176.trc  (incident=54665):
ORA-15335: ASM metadata corruption detected in disk group 'DATA'
ORA-15130: diskgroup "DATA" is being dismounted
ORA-15066: offlining disk "DATA_0000" in group "DATA" may result in a data loss
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
Incident details in: /u01/app/grid/diag/asm/+asm/+ASM1/incident/incdir_54665/+ASM1_arb0_25176_i54665.trc
Tue Jun 29 09:19:46 2021
kjbdomdet send to inst 2
detach from dom 2, sending detach message to inst 2
Tue Jun 29 09:19:46 2021
List of instances:
 1 2
Dirty detach reconfiguration started (new ddet inc 1, cluster inc 24)
 Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 2 invalid = TRUE 
 796 GCS resources traversed, 0 cancelled
Dirty Detach Reconfiguration complete
Tue Jun 29 09:19:46 2021
WARNING: dirty detached from domain 2
NOTE: cache dismounted group 2/0xB0C845CE (DATA) 
SQL> alter diskgroup DATA dismount force /* ASM SERVER:2965915086 */ 
Tue Jun 29 09:19:47 2021
ERROR: ORA-15130 thrown in ARB0 for group number 2
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25176.trc:
ORA-15130: diskgroup "DATA" is being dismounted
ORA-15335: ASM metadata corruption detected in disk group 'DATA'
ORA-15130: diskgroup "DATA" is being dismounted
ORA-15066: offlining disk "DATA_0000" in group "DATA" may result in a data loss
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
Tue Jun 29 09:19:47 2021
NOTE: stopping process ARB0
Tue Jun 29 09:19:47 2021
Sweep [inc][54665]: completed
Tue Jun 29 09:19:47 2021
Sweep [inc2][54665]: completed
NOTE: cache deleting context for group DATA 2/0xb0c845ce
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_3852.trc:
ORA-15130: diskgroup "DATA" is being dismounted
GMON dismounting group 2 at 11 for pid 27, osid 25395
NOTE: Disk DATA_0000 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0001 in mode 0x7f marked for de-assignment
SUCCESS: diskgroup DATA was dismounted
SUCCESS: alter diskgroup DATA dismount force /* ASM SERVER:2965915086 */

Analyzing the failing block with kfed confirmed the corruption:

kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            3 ; 0x002: KFBTYP_ALLOCTBL
kfbh.datfmt:                          2 ; 0x003: 0x02
kfbh.block.blk:                       7 ; 0x004: blk=7
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  2183628676 ; 0x00c: 0x82278784 <<====== wrong value; should be 686982479
kfbh.fcn.base:                     3430 ; 0x010: 0x00000d66
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdatb.aunum:                      2240 ; 0x000: 0x000008c0
kfdatb.shrink:                      448 ; 0x004: 0x01c0
kfdatb.ub2pad:                        0 ; 0x006: 0x0000
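The bad checksum can be patched in place with kfed; a minimal sketch, assuming the device path of DATA_0000 (the log only identifies it as disk 0, au=0, blk=7):

# dump the damaged block to text, review/correct it, then merge it back;
# kfed merge is expected to recompute kfbh.check when writing (path assumed)
kfed read /dev/raw/raw4 aun=0 blkn=7 text=blk7.txt
kfed merge /dev/raw/raw4 aun=0 blkn=7 text=blk7.txt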

After the block was repaired and rebalance disabled (adding a disk means the data must be redistributed), the disk group mounted. When opening the database it turned out the redo had already been overwritten (the database was in noarchivelog mode), and forcing the database open failed:

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [2662], [0], [2691201882], [0],
[2691227745], [12583040], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [2662], [0], [2691201881], [0],
[2691227745], [12583040], [], [], [], [], [], []
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00600: internal error code, arguments: [2662], [0], [2691201879], [0],
[2691227745], [12583040], [], [], [], [], [], []
Process ID: 25110
Session ID: 287 Serial number: 3

After the SCN was dealt with, the database opened cleanly:
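The exact SCN fix is not shown. One commonly cited approach on older releases is a hidden parameter in the pfile used for the mount below; treat this purely as an illustrative sketch (the parameter is version-dependent, and the value is derived from the ORA-600 [2662] arguments):

# /tmp/pfile sketch: lift the SCN above the dependent SCN 2691227745
# _minimum_giga_scn counts units of 2^30 SCNs; 2691227745 / 2^30 = 2.5, so use 3
*._minimum_giga_scn=3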

SQL> startup mount pfile='/tmp/pfile';
ORACLE instance started.

Total System Global Area 5044088832 bytes
Fixed Size		    2261928 bytes
Variable Size		 1442843736 bytes
Database Buffers	 3590324224 bytes
Redo Buffers		    8658944 bytes
Database mounted.
SQL> alter database open;

Database altered.

/u01 at 100% full causes ORA-01114 and ORA-29701

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consulting: xifenfei

Title: /u01 at 100% full causes ORA-01114 and ORA-29701

Author: xifenfei © All rights reserved. [Not to be reproduced in any form without the author's consent; the author reserves the right to pursue legal liability.]

Database operations were failing with ORA-01114 and ORA-29701 errors:
[screenshot: ORA-01114 / ORA-29704 error output]


Analysis showed the /u01 filesystem was 100% used:
[screenshot: df output showing /u01 at 100%]

Even sqlplus / as sysdba could not log in:
[screenshot: sqlplus login failure]

After trace files were cleaned out and space freed on /u01, the database returned to normal. A sketch of that kind of cleanup follows (the paths and ADR home are assumptions):
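# find what is filling /u01, then purge old trace files through adrci
du -sh /u01/app/oracle/diag/rdbms/*/*/trace 2>/dev/null | sort -h | tail
adrci exec="set home diag/rdbms/orcl/orcl; purge -age 1440 -type trace"

The MOS note describing the ORA-01114 error: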

APPLIES TO:
BI Publisher (formerly XML Publisher) - Version 11.1.1.7.x and later
Information in this document applies to any platform.

SYMPTOMS
 While generating reports the following error occurred.

oracle.xdo.XDOException: java.sql.SQLException: ORA-01114: IO error writing block to file (block # ) 
ORA-01114: IO error writing block to file 201 (block # 755561) 
ORA-27072: File I/O error Additional information: 4 
Additional information: 755561 Additional information: 36864

CHANGES
 

CAUSE
 The issue is caused due to DB not having enough space.

SOLUTION
Cleared space in DB and the report worked fine.

Shrinking an XFS filesystem on LVM and growing swap

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consulting: xifenfei

Title: Shrinking an XFS filesystem on LVM and growing swap

Author: xifenfei © All rights reserved. [Not to be reproduced in any form without the author's consent; the author reserves the right to pursue legal liability.]

Shrinking an XFS filesystem on LVM (/home from 100G down to 80G):

[root@xifenfei ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root  449G  6.0G  443G   2% /
devtmpfs                63G     0   63G   0% /dev
tmpfs                   63G     0   63G   0% /dev/shm
tmpfs                   63G   20M   63G   1% /run
tmpfs                   63G     0   63G   0% /sys/fs/cgroup
/dev/mapper/rhel-home  100G   38M  100G   1% /home
/dev/sda2             1014M  165M  850M  17% /boot
/dev/sda1              200M  9.8M  191M   5% /boot/efi
tmpfs                   13G  4.0K   13G   1% /run/user/42
tmpfs                   13G   32K   13G   1% /run/user/0
/dev/sr0               4.2G  4.2G     0 100% /media

[root@xifenfei u01]# xfsdump -f /home.xfsdump /home
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.7 (dump format 3.0) - type ^C for status and control

 ============================= dump label dialog ==============================

please enter label for this dump session (timeout in 300 sec)
 -> home
session label entered: "tar czvf /home.tar.gz /home
home"

 --------------------------------- end dialog ---------------------------------

xfsdump: level 0 dump of xifenfei:/home
xfsdump: dump date: Fri Jun 25 11:37:13 2021
xfsdump: session id: 4d75008e-9927-417d-9722-52d13bb89eb0
xfsdump: session label: 
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 4828224 bytes
xfsdump: /var/lib/xfsdump/inventory created

 ============================= media label dialog =============================

please enter label for media in drive 0 (timeout in 300 sec)
 -> home
media label entered: "home"

 --------------------------------- end dialog ---------------------------------

xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 4732672 bytes
xfsdump: dump size (non-dir files) : 4588480 bytes
xfsdump: dump complete: 4 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /home.xfsdump OK (success)
xfsdump: Dump Status: SUCCESS

[root@xifenfei u01]# umount /home
[root@xifenfei u01]# lvreduce -L 80G /dev/mapper/rhel-home
  WARNING: Reducing active logical volume to 80.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce rhel/home? [y/n]: y
  Size of logical volume rhel/home changed from 100.00 GiB (25600 extents) to 80.00 GiB (20480 extents).
  Logical volume rhel/home successfully resized.

[root@xifenfei u01]# mkfs.xfs -f /dev/mapper/rhel-home
meta-data=/dev/mapper/rhel-home  isize=512    agcount=16, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=20971520, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=10240, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@xifenfei u01]# mount /home
[root@xifenfei u01]# xfsrestore -f /home.xfsdump /home
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description: 
xfsrestore: hostname: xifenfei
xfsrestore: mount point: /home
xfsrestore: volume: /dev/mapper/rhel-home
xfsrestore: session time: Fri Jun 25 11:37:13 2021
xfsrestore: level: 0
xfsrestore: session label: "tar czvf /home.tar.gz /home
home"
xfsrestore: media label: "home"
xfsrestore: file system id: b996cff9-332b-4c07-96e1-8335a1f23627
xfsrestore: session id: 4d75008e-9927-417d-9722-52d13bb89eb0
xfsrestore: media id: 6094b9b5-a45f-4638-a0e2-c1b982ead67b
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 119 directories and 188 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 0 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /home.xfsdump OK (success)
xfsrestore: Restore Status: SUCCESS
[root@xifenfei u01]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root  449G   14G  435G   4% /
devtmpfs                63G     0   63G   0% /dev
tmpfs                   63G   20M   63G   1% /run
tmpfs                   63G     0   63G   0% /sys/fs/cgroup
/dev/sda2             1014M  165M  850M  17% /boot
/dev/sda1              200M  9.8M  191M   5% /boot/efi
tmpfs                   13G  4.0K   13G   1% /run/user/42
tmpfs                   13G   28K   13G   1% /run/user/0
/dev/sr0               4.2G  4.2G     0 100% /media
tmpfs                   63G     0   63G   0% /dev/shm
/dev/mapper/rhel-home   80G   38M   80G   1% /home

An XFS filesystem on an LVM volume cannot be shrunk in place; the only route is to xfsdump the contents of /home, shrink the LV and re-create the XFS filesystem, then bring the contents back with xfsrestore. Condensed, the procedure above is:
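# condensed sketch of the shrink procedure shown above (destructive: the
# filesystem is re-created, so the dump is the only copy of the data)
xfsdump -f /home.xfsdump /home          # back up the contents
umount /home
lvreduce -L 80G /dev/mapper/rhel-home   # shrink the logical volume
mkfs.xfs -f /dev/mapper/rhel-home       # re-create XFS at the new size
mount /home
xfsrestore -f /home.xfsdump /home       # restore the contents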

Growing swap with LVM (from 8G to 16G):

[root@xifenfei home]# free -m
              total        used        free      shared  buff/cache   available
Mem:         128355       86907       26110         274       15338       37632
Swap:         8192           0        8192
[root@xifenfei home]# lvextend -L 16GB /dev/rhel/swap
  Size of logical volume rhel/swap changed from 8.00 GiB (2048 extents) to 16.00 GiB (4096 extents).
  Logical volume rhel/swap successfully resized.
[root@xifenfei home]# sync;sync
[root@xifenfei home]# swapoff /dev/rhel/swap
[root@xifenfei home]# mkswap /dev/rhel/swap
mkswap: /dev/rhel/swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 16777212 KiB
no label, UUID=8d79ccf4-1796-49c9-968d-23abb67bc6eb
[root@xifenfei home]# swapon /dev/rhel/swap
[root@xifenfei home]# free -m
              total        used        free      shared  buff/cache   available
Mem:         128355       86907       26110         274       15338       37632
Swap:         16383           0       16383

expdp parallel export of a single table

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consulting: xifenfei

Title: expdp parallel export of a single table

Author: xifenfei © All rights reserved. [Not to be reproduced in any form without the author's consent; the author reserves the right to pursue legal liability.]

In some situations it pays to use Data Pump to export and import a single object in parallel chunks, speeding up a data migration.
The expdp export:

#!/bin/bash
# export LOB_TEST in 10 chunks, split by data block number, all in parallel
chunk=10
for ((i=0; i<chunk; i++));
do
  expdp USERNAME/Password@DB_NAME TABLES=LOB_TEST \
    QUERY=LOB_TEST:"\"where mod(dbms_rowid.rowid_block_number(rowid), ${chunk}) = ${i}\"" \
    directory=DMP dumpfile=lob_test_${i}.dmp logfile=log_test_${i}.log &
  echo $i
done
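The QUERY predicate splits rows by data block number modulo the chunk count; a quick sanity check of the resulting distribution might look like this (a sketch, reusing the connection above):

sqlplus -S USERNAME/Password@DB_NAME <<'EOF'
-- row counts per mod-10 bucket; roughly even counts mean balanced chunks
select mod(dbms_rowid.rowid_block_number(rowid), 10) bucket, count(*) cnt
  from LOB_TEST
 group by mod(dbms_rowid.rowid_block_number(rowid), 10)
 order by 1;
EOF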

The impdp import:

#!/bin/bash
# load the 10 dump files back in parallel, remapping table and schema
chunk=10
for ((i=0; i<chunk; i++));
do
  impdp USERNAME/Password@DB_NAME directory=DMP REMAP_TABLE=LOB_TEST:LOB_TEST \
    remap_schema=source:target dumpfile=lob_test_${i}.dmp \
    logfile=TABLE_imp_log_test_${i}.log DATA_OPTIONS=DISABLE_APPEND_HINT CONTENT=DATA_ONLY &
  echo $i
done

Note that from 12c onwards impdp may enable the ENABLE_PARALLEL_DML feature, which is worth keeping in mind.
Reference: Optimising LOB Export and Import Performance via Oracle DataPump

datapump network_link runs into ORA-12899

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consulting: xifenfei

Title: datapump network_link runs into ORA-12899

Author: xifenfei © All rights reserved. [Not to be reproduced in any form without the author's consent; the author reserves the right to pursue legal liability.]

While exporting data for a customer with expdp over a network_link and then importing it with impdp, we ran into ORA-12899.
[screenshot: ORA-12899 error]


Analyzing the source and the target databases:
[screenshots: source and target table definitions]

The source and target table structures were identical, and the data stored in the source column really was length 1, yet on import impdp demanded a length of 3. Analysis confirmed the source database's nls_length_semantics parameter was set to char; importing the table directly with impdp over a network_link (no dump file on disk) succeeded.
[screenshot: successful network_link import]

Searching the documentation for similar records turned up:
ORA-12899 When Using IMPDP Over Network Link (Doc ID 414901.1)
ORA-26059 During Impdp Using Export Dump Taken With NETWORK_LINK Option (Doc ID 2266956.1)
Neither is an exact match, but it is fairly clear that expdp over network_link combined with the nls_length_semantics setting is the root cause. In future migrations, keep nls_length_semantics consistent on both sides.
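A quick pre-migration check of the setting on both databases might look like this (a sketch):

sqlplus -S / as sysdba <<'EOF'
-- database-level default (BYTE vs CHAR) and the instance parameter
select value from nls_database_parameters
 where parameter = 'NLS_LENGTH_SEMANTICS';
show parameter nls_length_semantics
EOF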

ORA-600 13304 after a corrupted tab$ was mishandled

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consulting: xifenfei

Title: ORA-600 13304 after a corrupted tab$ was mishandled

Author: xifenfei © All rights reserved. [Not to be reproduced in any form without the author's consent; the author reserves the right to pursue legal liability.]

Yet another case of a database failing to start with ORA-600 [16703], [1403], [20]:

Sun Jun 13 14:00:56 2021
NOTE: dependency between database xff and diskgroup resource ora.DG_ARCH_xff.dg is established
Errors in file /opt/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_56340.trc  (incident=348265):
ORA-00600: internal error code, arguments: [16703], [1403], [20], [], [], [], [], [], [], [], [], []
Incident details in: /opt/oracle/diag/rdbms/xff/xff1/incident/incdir_348265/xff1_ora_56340_i348265.trc
ARC3: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Errors in file /opt/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_56340.trc:
ORA-00704: bootstrap process failure
ORA-00704: bootstrap process failure
ORA-00600: internal error code, arguments: [16703], [1403], [20], [], [], [], [], [], [], [], [], []
Errors in file /opt/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_56340.trc:
ORA-00704: bootstrap process failure
ORA-00704: bootstrap process failure
ORA-00600: internal error code, arguments: [16703], [1403], [20], [], [], [], [], [], [], [], [], []
Error 704 happened during db open, shutting down database
USER (ospid: 56340): terminating the instance due to error 704
Instance terminated by USER, pid = 56340

This failure is quite recognizable: based on our earlier analysis (warning: Oracle installation media circulating on the internet has been injected with a malicious program, causing ORA-600 16703), tab$ had been maliciously damaged, and analysis of the installed software confirmed it. Following articles found online, the customer tried to patch things with dd; the database then failed with ORA-600 13304 and could get no further, so we were asked to provide support.

SMON: enabling tx recovery
Database Characterset is AL32UTF8
Errors in file /opt/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_83843.trc  (incident=396265):
ORA-00600: internal error code, arguments: [13304], [], [], [], [], [], [], [], [], [], [], []
Incident details in: /opt/oracle/diag/rdbms/xff/xff1/incident/incdir_396265/xff1_ora_83843_i396265.trc
ARC3: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Errors in file /opt/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_83843.trc:
ORA-00600: internal error code, arguments: [13304], [], [], [], [], [], [], [], [], [], [], []
Errors in file /opt/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_83843.trc:
ORA-00600: internal error code, arguments: [13304], [], [], [], [], [], [], [], [], [], [], []
Error 600 happened during db open, shutting down database
USER (ospid: 83843): terminating the instance due to error 600
Instance terminated by USER, pid = 83843

After our engineers carried out a series of recovery steps against the database, the open phase failed:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00704: bootstrap process failure
ORA-00604: error occurred at recursive SQL level 1
ORA-00904: "NAME": invalid identifier
Process ID: 23346
Session ID: 680 Serial number: 51933

Tracing the startup process showed:

PARSE ERROR #140574232044112:len=45 dep=1 uid=0 oct=3 lid=0 tim=1623621695884944 err=904
select value$ from sys.props$ where name = :1
ORA-00604: error occurred at recursive SQL level 1
ORA-00904: "NAME": invalid identifier
ORA-00604: error occurred at recursive SQL level 1
ORA-00904: "NAME": invalid identifier
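A trace like the one above can be produced by switching SQL trace on before the open; a minimal sketch of the approach (assumed, not taken verbatim from the case):

SQL> startup mount
SQL> alter session set events '10046 trace name context forever, level 12';
SQL> alter database open;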

So it was fairly certain the customer's own recovery attempt had left props$ inconsistent. Further analysis confirmed the real culprit was the improper patching of tab$; after tab$ was fixed properly, the database returned to normal with zero data loss.

Recovering from a failure caused by moving datafiles when disk space ran out

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consulting: xifenfei

Title: Recovering from a failure caused by moving datafiles when disk space ran out

Author: xifenfei © All rights reserved. [Not to be reproduced in any form without the author's consent; the author reserves the right to pursue legal liability.]

A customer, running out of disk space, moved Oracle datafiles to another location while the database was online:

Tue Jun 01 11:44:32 2021
Thread 1 advanced to log sequence 28754 (LGWR switch)
  Current log# 2 seq# 28754 mem# 0: /u01/app/oracle/oradata/orcl/redo02.log
Tue Jun 01 11:59:54 2021
Non critical error ORA-48113 caught while writing to trace file
      "/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_mmon_23341.trc"
Error message: 
Writing to the above trace file is disabled for now on...
Tue Jun 01 12:00:00 2021
Non critical error ORA-48181 caught while writing to trace file
       "/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_j000_29692.trc"
Error message: Linux-x86_64 Error: 28: No space left on device
Additional information: 1
Writing to the above trace file is disabled for now on...
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_j000_29692.trc:
ORA-12012: error on auto execute of job "XIFENFEI"."STATISTICS_1_JOBS"
ORA-06575: Package or function PKG_STAT_1_2018 is in an invalid state
Tue Jun 01 12:12:26 2021

Once the datafiles had been moved away, the database threw errors and shut itself down:

ORA-01116: error in opening database file 30
ORA-01110: data file 30: '/u02/orcdate/AAAA.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_m001_29106.trc:
ORA-01116: error in opening database file 31
ORA-01110: data file 31: '/u02/orcdate/CBD.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Mon Jun 07 10:25:03 2021
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_m000_9817.trc:
ORA-01116: error in opening database file 24
ORA-01110: data file 24: '/u02/orcdate/ABC.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Mon Jun 07 10:25:10 2021
Shutting down instance (immediate)
Stopping background process SMCO
Shutting down instance: further logons disabled
Read of datafile '/u02/orcdate/XXXXXXX.dbf' (fno 21) header failed with ORA-01208
Rereading datafile 21 header failed with ORA-01208
Mon Jun 07 10:25:36 2021
Adjusting the default value of parameter parallel_max_servers
from 640 to 485 due to the value of parameter processes (500)
Starting ORACLE instance (normal)
Mon Jun 07 10:28:20 2021
Shutting down instance (abort)
License high water mark = 152
USER (ospid: 7987): terminating the instance
Termination issued to instance processes. Waiting for the processes to exit
Mon Jun 07 10:28:30 2021
Instance termination failed to kill one or more processes
Instance terminated by USER, pid = 7987
Mon Jun 07 10:28:31 2021
Instance shutdown complete

The files were then moved back, and a series of recovery attempts was made. By the time we took over, several files were offline and one file reported WRONG FILE NUMBER. Running our Oracle database recovery check script (Oracle Database Recovery Check) and analyzing v$datafile, v$datafile_header and v$tablespace together:
[screenshots: recovery check output for v$datafile, v$datafile_header and v$tablespace]
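A sketch of that kind of cross-check (much simplified relative to the check script):

sqlplus -S / as sysdba <<'EOF'
-- datafiles whose on-disk header disagrees with the controlfile
select f.file#, f.name, h.status, h.error, h.tablespace_name
  from v$datafile f, v$datafile_header h
 where f.file# = h.file#
   and (h.error is not null or f.ts# <> h.ts#);
EOF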


This confirmed that a WXD_YPT tablespace datafile had simply been copied into place as the WXD tablespace's datafile; the customer confirmed the WXD data was unimportant and set it aside for the moment.
After a series of fixes, an attempt to open the database raised ORA-600 [2662]:

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [2662], [3786], [2612118101], [3786], [2612128448], [12583040]
ORA-00600: internal error code, arguments: [2662], [3786], [2612118100], [3786], [2612128448], [12583040]
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00600: internal error code, arguments: [2662], [3786], [2612118098], [3786], [2612128448], [12583040]
Process ID: 14888
Session ID: 198 Serial number: 3

After the database SCN was adjusted (see the related post: ORA-600 2662), the database opened cleanly, and we helped the customer export the data and import it into a fresh database, completing the recovery.
Luck was on our side this time: only a little data was lost and no major incident resulted. A reminder for anyone not deeply familiar with Oracle: operate on databases with care, and never move datafiles around while the database is online (a safer procedure is sketched below). And for a better, faster recovery, disclose every operation performed after a failure as completely as you can.
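For reference, a minimal sketch of relocating a datafile safely on 11g (the tablespace name and paths are made up for illustration):

sqlplus / as sysdba <<'EOF'
alter tablespace USERS offline;
host mv /u01/app/oracle/oradata/orcl/users01.dbf /u02/oradata/orcl/users01.dbf
alter tablespace USERS rename datafile
  '/u01/app/oracle/oradata/orcl/users01.dbf' to '/u02/oradata/orcl/users01.dbf';
alter tablespace USERS online;
EOF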

Oracle recovery after the filesystem was repartitioned

Contact: phone/WeChat (+86 17813235971), QQ (107644445), QQ consulting: xifenfei

Title: Oracle recovery after the filesystem was repartitioned

Author: xifenfei © All rights reserved. [Not to be reproduced in any form without the author's consent; the author reserves the right to pursue legal liability.]

The recovery I handled most recently has to be one of the strangest in years.
1. The RAID arrays behind both the primary and the standby of an Oracle DG pair failed at the same time. A hardware recovery vendor was engaged to reassemble the RAID in software, and judged every disk to be healthy.
2. The primary's OS had been reinstalled and the filesystem repartitioned. The standby was in the middle of being rebuilt with duplicate (the alert log shows the old DG had been working; all files had been rm'ed before running the duplicate), and only part of the files had been copied over.
3. The backups sat on a separate storage array, but when we looked, the storage was completely empty. Analysis of the controlfile showed only backup records older than a month; the backups had presumably been deleted, or the storage repartitioned (later analysis suggests it was repartitioned).
The customer told us none of this, only that two RAIDs had suddenly failed and that a hardware vendor had been engaged. The vendor also expected it to be straightforward at first: rebuild the LUN from the simulated RAID, then carve the datafiles out with recovery software (what they reported to me was that only the redo was missing and they needed our help). Deeper analysis showed a large number of datafiles were missing, making a recovery on that basis pointless. The vendor then re-simulated the primary's RAID and copied the datafiles out, only to find the copied file sizes did not match the file headers:
[screenshot: datafile header information]


Here the header says the file should be 30G, but the copied file is only 26G:
[screenshot: actual copied file size]

Deeper low-level analysis showed that every file larger than 4G was damaged at 4G intervals (4G intact, 4G damaged, 4G intact, and so on):
[screenshots: the alternating 4G damage pattern]
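One way to map this kind of striped corruption in a copied datafile (a sketch; the file name and block size are assumptions) is DBVERIFY, watching where the corrupt block ranges begin and end:

dbv file=/recovered/xifenfei01.dbf blocksize=8192 logfile=dbv_xifenfei01.log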

Seeing this pattern, low-level analysis pointed to the customer having repartitioned the disks, which damaged the data underneath:
[screenshot: partition analysis]

With things in this state there were no better options, so we went straight to low-level fragment reconstruction:
[screenshot: fragment-level recovery]

Our luck held, and the database opened:
[screenshot: database open]

This recovery took many detours, mainly because the customer, for reasons of their own, repeatedly concealed the cause of the failure and never told us what had actually happened; we had to probe step by step and lost a great deal of time. If at all possible, give us an accurate account: it lets us make the right calls and recover quickly and efficiently.
We have done plenty of Oracle fragment-level recoveries like this one, for example:
Recovery after dbca deleted the database and rm deleted the files
Recovering datafiles damaged by filesystem corruption
Recovering Oracle datafiles that are 0 KB or missing
Recovering a datafile lost through alter database create datafile
Recovering from rm -rf of datafiles: filesystem undelete plus Oracle fragment reassembly