ORA-15335: ASM metadata corruption detected in disk group ‘DATA’

Author: Xifenfei © All rights reserved. [Do not reproduce in any form without the author's consent; the author reserves the right to pursue legal liability.]

After a disk was added to an ASM disk group to extend it, the group reported ORA-15335: ASM metadata corruption detected in disk group 'DATA' and ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]; the disk group dismounted, and every subsequent mount was followed almost immediately by another dismount.

Tue Jun 29 09:19:09 2021
SQL> ALTER DISKGROUP DATA ADD  DISK '/dev/raw/raw5' SIZE 102400M /* ASMCA */ 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (2,1) to disk (/dev/raw/raw5)
NOTE: requesting all-instance membership refresh for group=2
NOTE: initializing header on grp 2 disk DATA_0001
NOTE: requesting all-instance disk validation for group=2
Tue Jun 29 09:19:11 2021
NOTE: skipping rediscovery for group 2/0xb0c845ce (DATA) on local instance.
NOTE: requesting all-instance disk validation for group=2
NOTE: skipping rediscovery for group 2/0xb0c845ce (DATA) on local instance.
NOTE: initiating PST update: grp = 2
Tue Jun 29 09:19:16 2021
GMON updating group 2 at 7 for pid 27, osid 25020
NOTE: PST update grp = 2 completed successfully 
NOTE: membership refresh pending for group 2/0xb0c845ce (DATA)
GMON querying group 2 at 8 for pid 18, osid 3852
NOTE: cache opening disk 1 of grp 2: DATA_0001 path:/dev/raw/raw5
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
GMON querying group 2 at 9 for pid 18, osid 3852
SUCCESS: refreshed membership for 2/0xb0c845ce (DATA)
Tue Jun 29 09:19:20 2021
SUCCESS: ALTER DISKGROUP DATA ADD  DISK '/dev/raw/raw5' SIZE 102400M /* ASMCA */
NOTE: starting rebalance of group 2/0xb0c845ce (DATA) at power 1
Starting background process ARB0
Tue Jun 29 09:19:21 2021
ARB0 started with pid=33, OS id=25176 
NOTE: assigning ARB0 to group 2/0xb0c845ce (DATA) with 1 parallel I/O
cellip.ora not found.
Tue Jun 29 09:19:24 2021
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Tue Jun 29 09:19:46 2021
WARNING: cache read  a corrupt block: group=2(DATA) dsk=0 blk=7 disk=0 (DATA_0000) incarn=3915953476 au=0 blk=7 count=1
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25176.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
NOTE: a corrupted block from group DATA was dumped to /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25176.trc
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25176.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
ERROR: cache failed to read group=2(DATA) dsk=0 blk=7 from disk(s): 0(DATA_0000)
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
NOTE: cache initiating offline of disk 0 group DATA
NOTE: process _arb0_+asm1 (25176) initiating offline of disk 0.3915953476 (DATA_0000) with mask 0x7e in group 2
NOTE: initiating PST update: grp = 2, dsk = 0/0xe968b544, mask = 0x6a, op = clear
Tue Jun 29 09:19:46 2021
GMON updating disk modes for group 2 at 10 for pid 33, osid 25176
ERROR: Disk 0 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 2)
Tue Jun 29 09:19:46 2021
NOTE: cache dismounting (not clean) group 2/0xB0C845CE (DATA) 
NOTE: messaging CKPT to quiesce pins Unix process pid: 25395, image: oracle@frsrac1 (B000)
Tue Jun 29 09:19:46 2021
NOTE: halting all I/Os to diskgroup 2 (DATA)
Tue Jun 29 09:19:46 2021
NOTE: LGWR doing non-clean dismount of group 2 (DATA)
NOTE: LGWR sync ABA=11.10715 last written ABA 11.10715
WARNING: Offline for disk DATA_0000 in mode 0x7f failed.
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25176.trc  (incident=54665):
ORA-15335: ASM metadata corruption detected in disk group 'DATA'
ORA-15130: diskgroup "DATA" is being dismounted
ORA-15066: offlining disk "DATA_0000" in group "DATA" may result in a data loss
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
Incident details in: /u01/app/grid/diag/asm/+asm/+ASM1/incident/incdir_54665/+ASM1_arb0_25176_i54665.trc
Tue Jun 29 09:19:46 2021
kjbdomdet send to inst 2
detach from dom 2, sending detach message to inst 2
Tue Jun 29 09:19:46 2021
List of instances:
 1 2
Dirty detach reconfiguration started (new ddet inc 1, cluster inc 24)
 Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 2 invalid = TRUE 
 796 GCS resources traversed, 0 cancelled
Dirty Detach Reconfiguration complete
Tue Jun 29 09:19:46 2021
WARNING: dirty detached from domain 2
NOTE: cache dismounted group 2/0xB0C845CE (DATA) 
SQL> alter diskgroup DATA dismount force /* ASM SERVER:2965915086 */ 
Tue Jun 29 09:19:47 2021
ERROR: ORA-15130 thrown in ARB0 for group number 2
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_25176.trc:
ORA-15130: diskgroup "DATA" is being dismounted
ORA-15335: ASM metadata corruption detected in disk group 'DATA'
ORA-15130: diskgroup "DATA" is being dismounted
ORA-15066: offlining disk "DATA_0000" in group "DATA" may result in a data loss
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
ORA-15196: invalid ASM block header [kfc.c:26368] [check_kfbh] [2147483648] [7] [2183628676 != 686982479]
Tue Jun 29 09:19:47 2021
NOTE: stopping process ARB0
Tue Jun 29 09:19:47 2021
Sweep [inc][54665]: completed
Tue Jun 29 09:19:47 2021
Sweep [inc2][54665]: completed
NOTE: cache deleting context for group DATA 2/0xb0c845ce
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_3852.trc:
ORA-15130: diskgroup "DATA" is being dismounted
GMON dismounting group 2 at 11 for pid 27, osid 25395
NOTE: Disk DATA_0000 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0001 in mode 0x7f marked for de-assignment
SUCCESS: diskgroup DATA was dismounted
SUCCESS: alter diskgroup DATA dismount force /* ASM SERVER:2965915086 */

The failing block was analyzed with kfed to confirm the error.
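For reference, the dump below is the kind of output produced by a kfed read of the failing block (group 2, disk 0, AU 0, block 7); the device path here is an assumption, since the log only identifies the disk as DATA_0000.

# read allocation-table block 7 in AU 0 of the DATA_0000 device (path is illustrative)
kfed read /dev/raw/raw4 aun=0 blkn=7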

kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            3 ; 0x002: KFBTYP_ALLOCTBL
kfbh.datfmt:                          2 ; 0x003: 0x02
kfbh.block.blk:                       7 ; 0x004: blk=7
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  2183628676 ; 0x00c: 0x82278784 <<====== wrong value; it should be 686982479
kfbh.fcn.base:                     3430 ; 0x010: 0x00000d66
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdatb.aunum:                      2240 ; 0x000: 0x000008c0
kfdatb.shrink:                      448 ; 0x004: 0x01c0
kfdatb.ub2pad:                        0 ; 0x006: 0x0000

After repairing this error and preventing rebalance (the newly added disk means data would otherwise be redistributed), the disk group was mounted and the database opened; it turned out the redo had already been overwritten (the database was in noarchivelog mode), so forcing the database open failed.
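A minimal sketch of that repair flow; the device path, the text dump file name and the use of asm_power_limit=0 to keep rebalance from restarting are illustrative assumptions, not the exact steps taken in this case.

# dump the bad block to text, correct the inconsistent field in blk7.txt, then write it back
kfed read  /dev/raw/raw4 aun=0 blkn=7 text=blk7.txt
# ... edit blk7.txt ...
kfed merge /dev/raw/raw4 aun=0 blkn=7 text=blk7.txt

# keep rebalance from starting, then mount the disk group
sqlplus / as sysasm <<'EOF'
alter system set asm_power_limit=0;
alter diskgroup DATA mount;
EOF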

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [2662], [0], [2691201882], [0],
[2691227745], [12583040], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [2662], [0], [2691201881], [0],
[2691227745], [12583040], [], [], [], [], [], []
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00600: internal error code, arguments: [2662], [0], [2691201879], [0],
[2691227745], [12583040], [], [], [], [], [], []
Process ID: 25110
Session ID: 287 Serial number: 3

After the SCN was adjusted, the database opened cleanly.
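The post does not show how the SCN was handled. Purely for illustration (this is an assumed approach, not necessarily the one used here, and its availability and required settings vary a lot between versions), the community-documented ADJUST_SCN event is set while the database is mounted, before the open attempt shown below.

# run while the instance is mounted with the custom pfile; pick a level large enough
# that level * 1024^3 exceeds the SCN shown in the ORA-600 [2662] arguments
sqlplus / as sysdba <<'EOF'
alter session set events 'IMMEDIATE trace name ADJUST_SCN level 10';
EOF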

SQL> startup mount pfile='/tmp/pfile';
ORACLE instance started.

Total System Global Area 5044088832 bytes
Fixed Size		    2261928 bytes
Variable Size		 1442843736 bytes
Database Buffers	 3590324224 bytes
Redo Buffers		    8658944 bytes
Database mounted.
SQL> alter database open;

Database altered.

ORA-01114 and ORA-29701 caused by the /u01 file system being 100% full

Database operations were failing with ORA-01114 and ORA-29701 errors:
[screenshot: ORA-01114 / ORA-29704]


Analysis showed that the /u01 partition was 100% full.
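A quick way to confirm which directories under /u01 are consuming the space (the paths assume a standard ORACLE_BASE layout):

df -h /u01
du -sh /u01/app/oracle/diag/rdbms/*/*/trace 2>/dev/null | sort -h | tail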

sqlplus / as sysdba could no longer log in.

After clearing out trace files and freeing space on /u01, the database returned to normal (a cleanup sketch follows the excerpt below). The MOS note on the ORA-01114 error describes it as follows:

APPLIES TO:
BI Publisher (formerly XML Publisher) - Version 11.1.1.7.x and later
Information in this document applies to any platform.

SYMPTOMS
 While generating reports the following error occurred.

oracle.xdo.XDOException: java.sql.SQLException: ORA-01114: IO error writing block to file (block # ) 
ORA-01114: IO error writing block to file 201 (block # 755561) 
ORA-27072: File I/O error Additional information: 4 
Additional information: 755561 Additional information: 36864

CHANGES
 

CAUSE
 The issue is caused due to DB not having enough space.

SOLUTION
Cleared space in DB and the report worked fine.
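For completeness, a minimal cleanup sketch for a case like this, assuming the default ADR layout and an orcl database name (both assumptions; adjust to the environment):

# purge trace and incident data older than 7 days (10080 minutes) from the ADR home
adrci exec="set homepath diag/rdbms/orcl/orcl; purge -age 10080 -type trace"
adrci exec="set homepath diag/rdbms/orcl/orcl; purge -age 10080 -type incident"
# remove old audit files if they are part of the problem
find $ORACLE_BASE/admin/*/adump -name '*.aud' -mtime +7 -delete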

Shrinking an xfs file system on LVM and extending swap

Shrinking an xfs file system on an LVM volume (/home reduced from 100G to 80G):

[root@xifenfei ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root  449G  6.0G  443G   2% /
devtmpfs                63G     0   63G   0% /dev
tmpfs                   63G     0   63G   0% /dev/shm
tmpfs                   63G   20M   63G   1% /run
tmpfs                   63G     0   63G   0% /sys/fs/cgroup
/dev/mapper/rhel-home  100G   38M  100G   1% /home
/dev/sda2             1014M  165M  850M  17% /boot
/dev/sda1              200M  9.8M  191M   5% /boot/efi
tmpfs                   13G  4.0K   13G   1% /run/user/42
tmpfs                   13G   32K   13G   1% /run/user/0
/dev/sr0               4.2G  4.2G     0 100% /media

[root@xifenfei u01]# xfsdump -f /home.xfsdump /home
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.7 (dump format 3.0) - type ^C for status and control

 ============================= dump label dialog ==============================

please enter label for this dump session (timeout in 300 sec)
 -> home
session label entered: "tar czvf /home.tar.gz /home
home"

 --------------------------------- end dialog ---------------------------------

xfsdump: level 0 dump of xifenfei:/home
xfsdump: dump date: Fri Jun 25 11:37:13 2021
xfsdump: session id: 4d75008e-9927-417d-9722-52d13bb89eb0
xfsdump: session label: 
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 4828224 bytes
xfsdump: /var/lib/xfsdump/inventory created

 ============================= media label dialog =============================

please enter label for media in drive 0 (timeout in 300 sec)
 -> home
media label entered: "home"

 --------------------------------- end dialog ---------------------------------

xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 4732672 bytes
xfsdump: dump size (non-dir files) : 4588480 bytes
xfsdump: dump complete: 4 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /home.xfsdump OK (success)
xfsdump: Dump Status: SUCCESS

[root@xifenfei u01]# umount /home
[root@xifenfei u01]# lvreduce -L 80G /dev/mapper/rhel-home
  WARNING: Reducing active logical volume to 80.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce rhel/home? [y/n]: y
  Size of logical volume rhel/home changed from 100.00 GiB (25600 extents) to 80.00 GiB (20480 extents).
  Logical volume rhel/home successfully resized.

[root@xifenfei u01]# mkfs.xfs -f /dev/mapper/rhel-home
meta-data=/dev/mapper/rhel-home  isize=512    agcount=16, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=20971520, imaxpct=25
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=10240, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@xifenfei u01]# mount /home
[root@xifenfei u01]# xfsrestore -f /home.xfsdump /home
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description: 
xfsrestore: hostname: xifenfei
xfsrestore: mount point: /home
xfsrestore: volume: /dev/mapper/rhel-home
xfsrestore: session time: Fri Jun 25 11:37:13 2021
xfsrestore: level: 0
xfsrestore: session label: "tar czvf /home.tar.gz /home
home"
xfsrestore: media label: "home"
xfsrestore: file system id: b996cff9-332b-4c07-96e1-8335a1f23627
xfsrestore: session id: 4d75008e-9927-417d-9722-52d13bb89eb0
xfsrestore: media id: 6094b9b5-a45f-4638-a0e2-c1b982ead67b
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 119 directories and 188 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 0 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /home.xfsdump OK (success)
xfsrestore: Restore Status: SUCCESS
[root@xifenfei u01]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root  449G   14G  435G   4% /
devtmpfs                63G     0   63G   0% /dev
tmpfs                   63G   20M   63G   1% /run
tmpfs                   63G     0   63G   0% /sys/fs/cgroup
/dev/sda2             1014M  165M  850M  17% /boot
/dev/sda1              200M  9.8M  191M   5% /boot/efi
tmpfs                   13G  4.0K   13G   1% /run/user/42
tmpfs                   13G   28K   13G   1% /run/user/0
/dev/sr0               4.2G  4.2G     0 100% /media
tmpfs                   63G     0   63G   0% /dev/shm
/dev/mapper/rhel-home   80G   38M   80G   1% /home

An xfs file system cannot be shrunk in place on LVM; the only option is to xfsdump the contents of /home, shrink the logical volume and recreate the xfs file system, then restore from the xfsdump (condensed into a single sequence below).
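Condensed into one sequence (same devices and sizes as the example above; put the dump file wherever there is enough free space):

xfsdump -f /home.xfsdump /home          # dump the file system contents
umount /home
lvreduce -L 80G /dev/mapper/rhel-home   # shrink the logical volume
mkfs.xfs -f /dev/mapper/rhel-home       # recreate the xfs file system at the new size
mount /home
xfsrestore -f /home.xfsdump /home       # restore the contents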

Extending swap with LVM (swap grown from 8G to 16G):

[root@xifenfei home]# free -m
              total        used        free      shared  buff/cache   available
Mem:         128355       86907       26110         274       15338       37632
Swap:         8192           0        8192
[root@xifenfei home]# lvextend -L 16GB /dev/rhel/swap
  Size of logical volume rhel/swap changed from 8.00 GiB (2048 extents) to 16.00 GiB (4096 extents).
  Logical volume rhel/swap successfully resized.
[root@xifenfei home]# sync;sync
[root@xifenfei home]# swapoff /dev/rhel/swap
[root@xifenfei home]# mkswap /dev/rhel/swap
mkswap: /dev/rhel/swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 16777212 KiB
no label, UUID=8d79ccf4-1796-49c9-968d-23abb67bc6eb
[root@xifenfei home]# swapon /dev/rhel/swap
[root@xifenfei home]# free -m
              total        used        free      shared  buff/cache   available
Mem:         128355       86907       26110         274       15338       37632
Swap:         16383           0       16383

Parallel export of a single table with expdp

In some situations Data Pump needs to export and import a single object in parallel to speed up a data migration.
The expdp export:

#!/bin/bash
chunk=10
for ((i=0; i<chunk; i++))
do
  expdp USERNAME/Password@DB_NAME TABLES=LOB_TEST \
        QUERY=LOB_TEST:"\"where mod(dbms_rowid.rowid_block_number(rowid), ${chunk}) = ${i}\"" \
        directory=DMP dumpfile=lob_test_${i}.dmp logfile=log_test_${i}.log &
  echo $i
done

The impdp import:

#!/bin/bash
chunk=10
for ((i=0; i<chunk; i++))
do
  impdp USERNAME/Password@DB_NAME directory=DMP REMAP_TABLE=LOB_TEST:LOB_TEST remap_schema=source:target \
        dumpfile=lob_test_${i}.dmp logfile=TABLE_imp_log_test_${i}.log \
        DATA_OPTIONS=DISABLE_APPEND_HINT CONTENT=DATA_ONLY &
  echo $i
done

Note that from 12c onwards impdp may make use of the ENABLE_PARALLEL_DML feature, which is worth being aware of.
Reference: Optimising LOB Export and Import Performance via Oracle DataPump

ORA-12899 with Data Pump over network_link

While exporting data for a customer with expdp + network_link and then importing it with impdp, we hit ORA-12899.

The source and target databases were compared.

The table definition is identical in source and target, and the data actually stored in the source column is indeed only 1 character long, yet impdp insisted a length of 3 was needed. Analysis showed that the source database had nls_length_semantics set to char; importing the table data directly with impdp + network_link, with no intermediate dump file, then succeeded (sketched below).
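For reference, the no-dump-file import looks roughly like this; the connect string, link and table names are placeholders rather than the customer's actual objects.

# import the table directly from the source over a database link, no intermediate dump file
impdp system/password@target_db network_link=SOURCE_LINK \
      tables=OWNER.TAB1 table_exists_action=truncate \
      directory=DMP logfile=imp_network_link.log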

Based on the above, a search of the documentation turned up these similar records:
ORA-12899 When Using IMPDP Over Network Link (Doc ID 414901.1)
ORA-26059 During Impdp Using Export Dump Taken With NETWORK_LINK Option (Doc ID 2266956.1)
Neither is an exact match for this case, but it is reasonably clear that the combination of expdp over network_link and the nls_length_semantics parameter is the root cause; in future migrations, keep nls_length_semantics consistent on both sides (a quick check is shown below).
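A minimal check of the setting, to be run on both source and target before such a migration:

sqlplus -s / as sysdba <<'EOF'
show parameter nls_length_semantics
select parameter, value from nls_database_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
EOF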

ORA-600 [13304] after a corrupted tab$ was mishandled

Yet another case of a database failing to start with ORA-600 [16703] [1403] [20]:

Sun Jun 13 14:00:56 2021
NOTE: dependency between database xff and diskgroup resource ora.DG_ARCH_xff.dg is established
Errors in file /opt/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_56340.trc  (incident=348265):
ORA-00600: internal error code, arguments: [16703], [1403], [20], [], [], [], [], [], [], [], [], []
Incident details in: /opt/oracle/diag/rdbms/xff/xff1/incident/incdir_348265/xff1_ora_56340_i348265.trc
ARC3: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Errors in file /opt/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_56340.trc:
ORA-00704: bootstrap process failure
ORA-00704: bootstrap process failure
ORA-00600: internal error code, arguments: [16703], [1403], [20], [], [], [], [], [], [], [], [], []
Errors in file /opt/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_56340.trc:
ORA-00704: bootstrap process failure
ORA-00704: bootstrap process failure
ORA-00600: internal error code, arguments: [16703], [1403], [20], [], [], [], [], [], [], [], [], []
Error 704 happened during db open, shutting down database
USER (ospid: 56340): terminating the instance due to error 704
Instance terminated by USER, pid = 56340

This failure is fairly clear-cut. Based on our earlier analysis (warning: some Oracle installation media circulating on the internet has been injected with a malicious program, leading to ORA-600 [16703]), tab$ had been maliciously damaged, and analysis of the installation media confirmed it. Following articles found online, the customer tried to repair it themselves with dd; the database then reported ORA-600 [13304] and could not be taken any further, at which point we were asked to provide support.

SMON: enabling tx recovery
Database Characterset is AL32UTF8
Errors in file /opt/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_83843.trc  (incident=396265):
ORA-00600: internal error code, arguments: [13304], [], [], [], [], [], [], [], [], [], [], []
Incident details in: /opt/oracle/diag/rdbms/xff/xff1/incident/incdir_396265/xff1_ora_83843_i396265.trc
ARC3: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Errors in file /opt/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_83843.trc:
ORA-00600: internal error code, arguments: [13304], [], [], [], [], [], [], [], [], [], [], []
Errors in file /opt/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_83843.trc:
ORA-00600: internal error code, arguments: [13304], [], [], [], [], [], [], [], [], [], [], []
Error 600 happened during db open, shutting down database
USER (ospid: 83843): terminating the instance due to error 600
Instance terminated by USER, pid = 83843

After a series of recovery steps performed by our team, the open still failed:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00704: bootstrap process failure
ORA-00604: error occurred at recursive SQL level 1
ORA-00904: "NAME": invalid identifier
Process ID: 23346
Session ID: 680 Serial number: 51933

Tracing the startup process showed:

PARSE ERROR #140574232044112:len=45 dep=1 uid=0 oct=3 lid=0 tim=1623621695884944 err=904
select value$ from sys.props$ where name = :1
ORA-00604: error occurred at recursive SQL level 1
ORA-00904: "NAME": invalid identifier
ORA-00604: error occurred at recursive SQL level 1
ORA-00904: "NAME": invalid identifier

It is essentially certain that the customer's own recovery attempt left props$ in an abnormal state. Further analysis confirmed this was caused by the improper handling of tab$; after tab$ was processed properly, the database returned to normal with zero data loss.
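A check along these lines shows what column definitions the dictionary currently holds for props$ (it assumes the instance can be kept up far enough, for example in upgrade/restricted mode, to query the base tables):

sqlplus / as sysdba <<'EOF'
-- list the column definitions recorded for PROPS$ in the bootstrap dictionary
select o.name object_name, c.name column_name, c.col#
  from sys.obj$ o, sys.col$ c
 where c.obj# = o.obj#
   and o.name = 'PROPS$'
 order by c.col#;
EOF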

Recovery after data files were moved online due to lack of disk space

A customer, short on disk space, moved Oracle data files to another location while the database was online.

Tue Jun 01 11:44:32 2021
Thread 1 advanced to log sequence 28754 (LGWR switch)
  Current log# 2 seq# 28754 mem# 0: /u01/app/oracle/oradata/orcl/redo02.log
Tue Jun 01 11:59:54 2021
Non critical error ORA-48113 caught while writing to trace file
      "/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_mmon_23341.trc"
Error message: 
Writing to the above trace file is disabled for now on...
Tue Jun 01 12:00:00 2021
Non critical error ORA-48181 caught while writing to trace file
       "/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_j000_29692.trc"
Error message: Linux-x86_64 Error: 28: No space left on device
Additional information: 1
Writing to the above trace file is disabled for now on...
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_j000_29692.trc:
ORA-12012: error on auto execute of job "XIFENFEI"."STATISTICS_1_JOBS"
ORA-06575: Package or function PKG_STAT_1_2018 is in an invalid state
Tue Jun 01 12:12:26 2021

After the data files were moved away, the database began reporting errors and was eventually forced to shut down.

ORA-01116: error in opening database file 30
ORA-01110: data file 30: '/u02/orcdate/AAAA.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_m001_29106.trc:
ORA-01116: error in opening database file 31
ORA-01110: data file 31: '/u02/orcdate/CBD.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Mon Jun 07 10:25:03 2021
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_m000_9817.trc:
ORA-01116: error in opening database file 24
ORA-01110: data file 24: '/u02/orcdate/ABC.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Mon Jun 07 10:25:10 2021
Shutting down instance (immediate)
Stopping background process SMCO
Shutting down instance: further logons disabled
Read of datafile '/u02/orcdate/XXXXXXX.dbf' (fno 21) header failed with ORA-01208
Rereading datafile 21 header failed with ORA-01208
Mon Jun 07 10:25:36 2021
Adjusting the default value of parameter parallel_max_servers
from 640 to 485 due to the value of parameter processes (500)
Starting ORACLE instance (normal)
Mon Jun 07 10:28:20 2021
Shutting down instance (abort)
License high water mark = 152
USER (ospid: 7987): terminating the instance
Termination issued to instance processes. Waiting for the processes to exit
Mon Jun 07 10:28:30 2021
Instance termination failed to kill one or more processes
Instance terminated by USER, pid = 7987
Mon Jun 07 10:28:31 2021
Instance shutdown complete

The files were then moved back and a series of recovery attempts were made. By the time we took over, several data files were offline and one reported WRONG FILE NUMBER. Using our Oracle Database Recovery Check script, we analyzed v$datafile, v$datafile_header and v$tablespace together.
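The comparison described above boils down to a query of this kind (a sketch of the sort of check the script performs, not the script itself):

sqlplus / as sysdba <<'EOF'
-- compare the controlfile view of each file with what the file header itself says
select f.file#, f.name, f.status, h.status header_status, h.error, t.name tablespace_name
  from v$datafile f, v$datafile_header h, v$tablespace t
 where f.file# = h.file#
   and f.ts#   = t.ts#
 order by f.file#;
EOF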

It turned out a WXD_YPT tablespace data file had simply been copied over as a WXD tablespace data file. The customer confirmed the WXD data was not important and chose to ignore it for now.
After a series of fixes, an attempt to open the database raised ORA-600 [2662]:

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [2662], [3786], [2612118101], [3786], [2612128448], [12583040]
ORA-00600: internal error code, arguments: [2662], [3786], [2612118100], [3786], [2612128448], [12583040]
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00600: internal error code, arguments: [2662], [3786], [2612118098], [3786], [2612128448], [12583040]
Process ID: 14888
Session ID: 198 Serial number: 3

After the database SCN was adjusted (see the related post: ORA-600 2662), the database opened cleanly; we then helped the customer export the data and import it into a new database, completing the recovery.
We were lucky this time: only a little data was lost and no major incident resulted. A reminder: if you are not very familiar with Oracle, operate on databases with care and never move data files around while the database is online. And for a better, faster recovery, tell the recovery team about every operation performed after the failure.

Oracle recovery after a file system was repartitioned

A recovery we handled recently is one of the stranger cases of the past few years.
1. The RAID arrays of both the Oracle Data Guard primary and standby failed at the same time. A hardware recovery vendor was engaged to reassemble the RAID in software and judged all of the disks to be healthy.
2. The primary's operating system had been reinstalled and its file system repartitioned. The standby was in the middle of being rebuilt with duplicate (the alert log shows the old standby had been working fine; all of its files were simply rm'ed and a duplicate was started), and only part of the files had been copied over.
3. The backups sat on a separate storage array, but when we logged in the storage was completely empty (analysis of the controlfile showed only backup records from more than a month earlier; presumably the backups had also been deleted or the storage repartitioned — later analysis suggests it was repartitioned).
The customer told us nothing beyond "both RAIDs suddenly failed" and brought in the hardware vendor. The vendor initially also thought it would be straightforward: reassemble the RAID, bring back the LUN, and carve out some data files with recovery software (the information passed to us was that only the redo was missing and they needed our help with that). Deeper analysis showed that a large number of data files were missing, so a recovery on that basis was essentially pointless. The primary's RAID was then reassembled and the data files copied out, only to find that the file sizes did not match what the file headers recorded.

The header shows the file should be 30G, but the copied file is only 26G.

Further low-level analysis showed that every file larger than 4G was damaged in alternating 4G stripes (4G good, 4G damaged, 4G good, and so on).

Low-level analysis indicated this pattern was caused by the customer repartitioning the disks.

Given the situation there were few good options, so we recovered the data directly with low-level fragment (carving) techniques.

With some luck, the database opened successfully.

This recovery took many detours, mainly because the customer, for whatever reason, repeatedly concealed how the failure had come about and never gave us the full picture; we had to work it out step by step, which cost a lot of time. If at all possible, please describe the situation accurately so that we can make the right call and recover quickly and efficiently.
We have done quite a few similar fragment-level Oracle recoveries, for example:
Recovery after dbca deleted the database and after rm deleted the database
Recovery of data files damaged by file system corruption
Recovery of Oracle data files that are 0 KB or missing
Recovery of data files lost through alter database create datafile
Recovering rm -rf'ed data files: file system undelete plus Oracle fragment reassembly

Online extension of an ext4 file system on LVM

Scan for the new disk on Linux:

[root@xifenfei ~]#  ls /sys/class/scsi_host/
host0  host1  host2
[root@xifenfei ~]# echo '- - -'  > /sys/class/scsi_host/host0/scan
[root@xifenfei ~]# echo '- - -'  > /sys/class/scsi_host/host1/scan
[root@xifenfei ~]# echo '- - -'  > /sys/class/scsi_host/host2/scan

Extend the VG:

[root@xifenfei ~]# pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
[root@xifenfei ~]# vgs
  VG            #PV #LV #SN Attr   VSize   VFree  
  vg_xifenfei   1   4   0 wz--n- 499.51g 584.00m
[root@xifenfei ~]# vgextend vg_xifenfei /dev/sdc1
  Volume group "vg_xifenfei" successfully extended
[root@xifenfei ~]# vgs
  VG            #PV #LV #SN Attr   VSize   VFree  
  vg_xifenfei   2   4   0 wz--n- 999.50g 500.56g

Extend the LV and grow the file system:

[root@xifenfei ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_xifenfei-lv_root
                       50G  6.4G   41G  14% /
tmpfs                  63G     0   63G   0% /dev/shm
/dev/sda1             477M   84M  364M  19% /boot
/dev/mapper/vg_xifenfei-lv_home
                      1.9G   29M  1.8G   2% /home
/dev/mapper/vg_xifenfei-lvu01
                      436G  335G   80G  81% /u01
/dev/sdb1             985G  462G  473G  50% /oracle_data
[root@xifenfei ~]# lvresize -L +500G /dev/mapper/vg_xifenfei-lvu01
  Size of logical volume vg_xifenfei/lvu01 changed from 443.00 GiB (113408 extents) to 943.00 GiB (241408 extents).
  Logical volume lvu01 successfully resized.
[root@xifenfei ~]# resize2fs /dev/mapper/vg_xifenfei-lvu01
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/mapper/vg_xifenfei-lvu01 is mounted on /u01; on-line resizing required
old_desc_blocks = 28, new_desc_blocks = 59
The filesystem on /dev/mapper/vg_xifenfei-lvu01 is now 247201792 blocks long.

[root@xifenfei ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_xifenfei-lv_root
                       50G  6.4G   41G  14% /
tmpfs                  63G     0   63G   0% /dev/shm
/dev/sda1             477M   84M  364M  19% /boot
/dev/mapper/vg_xifenfei-lv_home
                      1.9G   29M  1.8G   2% /home
/dev/mapper/vg_xifenfei-lvu01
                      929G  335G  552G  38% /u01
/dev/sdb1             985G  462G  473G  50% /oracle_data

ORA-19921: maximum number of 64 rows exceeded

Logging in to RMAN raises ORA-19921:

[oracle@db-base ~]$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Fri May 28 11:58:18 2021

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN-06900: WARNING: unable to generate V$RMAN_STATUS or V$RMAN_OUTPUT row
RMAN-06901: WARNING: disabling update of the V$RMAN_STATUS and V$RMAN_OUTPUT rows
ORACLE error from target database: 
ORA-19921: maximum number of 64 rows exceeded

connected to target database: ORCL (DBID=1590736012)

RMAN> 

Checking the rman processes showed a large number of sessions that had never exited:

[oracle@db-base trace]$ ps -ef|grep rman
oracle     998   985  0 May18 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle    1054  1039  0 Apr18 ?        00:00:10 rman oracle/11.2.0/db_1/bin/rman target /
oracle    1738  1726  0 Apr27 ?        00:00:08 rman oracle/11.2.0/db_1/bin/rman target /
oracle    4294  4281  0 May11 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle    4655  4642  0 May27 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle    4955  4943  0 Apr30 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle    5712  5700  0 Apr28 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle    7162  7149  0 May19 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle    7275  7262  0 Apr17 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle    7983  7971  0 May12 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle    8013  8002  0 10:59 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle    8376  8364  0 May26 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle    8519  8507  0 11:03 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle    9196  9184  0 11:10 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle    9345  9333  0 Apr29 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle    9420  9407  0 May01 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle    9831  9818  0 11:16 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   10242 10229  0 May25 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   11023 11010  0 Apr10 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   11040 11020  0 Apr16 ?        00:00:08 rman oracle/11.2.0/db_1/bin/rman target /
oracle   11345 11332  0 Apr11 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   11364 11343  0 Apr12 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   11696 11684  0 Apr13 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   12008 11998  0 11:39 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   12454 12441  0 Apr15 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   12680 12667  0 Apr14 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   12751 12739  0 May13 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   12849 12833  2 11:48 pts/1    00:00:26 rman oracle/11.2.0/db_1/bin/rman target /
oracle   13152 13140  0 May02 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   13731 13719  0 Apr05 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   13869 13857  0 May24 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   14027 14014  0 Apr04 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   14073 14061  0 Apr03 ?        00:00:07 rman oracle/11.2.0/db_1/bin/rman target /
oracle   14366 13332  0 12:03 pts/2    00:00:00 grep --color=auto rman
oracle   15073 15061  0 May23 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   15263 15251  0 May22 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   15766 15753  0 Apr02 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   15915 15903  0 May14 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   16805 16793  0 Mar31 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   16953 16939  0 Apr01 ?        00:00:08 rman oracle/11.2.0/db_1/bin/rman target /
oracle   17648 17635  0 May21 ?        00:00:08 rman oracle/11.2.0/db_1/bin/rman target /
oracle   17740 17728  0 May03 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   18265 18253  0 Apr09 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   18964 18951  0 May15 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   20731 20719  0 May20 ?        00:00:08 rman oracle/11.2.0/db_1/bin/rman target /
oracle   21104 21092  0 May04 ?        00:00:08 rman oracle/11.2.0/db_1/bin/rman target /
oracle   23116 23104  0 May16 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   23230 23216  0 Apr07 ?        00:00:08 rman oracle/11.2.0/db_1/bin/rman target /
oracle   23969 23956  0 Apr08 ?        00:00:07 rman oracle/11.2.0/db_1/bin/rman target /
oracle   24092 24079  0 Apr24 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   25648 25636  0 May07 ?        00:00:08 rman oracle/11.2.0/db_1/bin/rman target /
oracle   25843 25831  0 Apr23 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   26261 26248  0 Apr25 ?        00:00:08 rman oracle/11.2.0/db_1/bin/rman target /
oracle   26421 26408  0 May08 ?        00:00:08 rman oracle/11.2.0/db_1/bin/rman target /
oracle   26470 26458  0 Apr22 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   26776 26763  0 May05 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   28587 28574  0 Apr26 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   29102 29090  0 May09 ?        00:00:07 rman oracle/11.2.0/db_1/bin/rman target /
oracle   29402 29389  0 Apr20 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   29628 29613  0 May17 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   29638 29625  0 Apr06 ?        00:00:07 rman oracle/11.2.0/db_1/bin/rman target /
oracle   30118 30105  0 Apr21 ?        00:00:01 rman oracle/11.2.0/db_1/bin/rman target /
oracle   32536 32523  0 Apr19 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /
oracle   32609 32597  0 May10 ?        00:00:02 rman oracle/11.2.0/db_1/bin/rman target /

Kill the stale rman processes:

[oracle@db-base trace]$ kill -9 `ps -ef|grep rman|grep -v grep|awk '{print $2}'`

RMAN now logs in normally:

[oracle@db-base trace]$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Fri May 28 12:04:19 2021

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1590736012)

RMAN>