Recovering an ACFS file system damaged by oracleasm createdisk

Contact: mobile/WeChat (+86 17813235971), QQ (107644445), QQ consulting: Xifenfei

Title: Recovering an ACFS file system damaged by oracleasm createdisk

Author: Xifenfei © All rights reserved [No reproduction in any form without my consent; the right to pursue further legal liability is reserved.]

A friend asked for help: on a customer's 12.2.0.1 system, oracleasm createdisk had been executed against the existing asm disks, re-creating them:
[Screenshot: oracleasm createdisk]


Based on past recovery experience, this operation wipes the first 1 MB of each disk entirely; see these earlier cases:
Disk group failure recovery after asmlib disks were deleted
A zero-data-loss recovery case after oracleasm createdisk re-created an asm disk
This customer's setup was a bit unusual: the ASM disk group was not hosting an Oracle database but ACFS directly, with MySQL running inside the ACFS file system. In other words, Grid Infrastructure was being used to give MySQL high availability at the storage layer, with ACFS providing the shared mount (as I understand it, MySQL can still only be started on one node at a time). After oracleasm createdisk reset the ASM disk headers, the disk group could no longer be mounted, and consequently neither could ACFS. For this case I asked the on-site engineers to dd the first 100 MB of the damaged disk and send it to me for analysis.
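The on-site capture amounts to `dd if=<damaged disk> of=data3_100m bs=1M count=100`. A minimal Python equivalent of that step, for environments without dd (paths are illustrative):

```python
# Copy the first N megabytes of a block device (or image) for offline
# analysis, equivalent to: dd if=<device> of=data3_100m bs=1M count=100
def dump_head(src_path, dst_path, mb=100):
    chunk = 1024 * 1024
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        for _ in range(mb):
            data = src.read(chunk)
            if not data:          # stop early if the source is smaller
                break
            dst.write(data)
    return dst_path
```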
Reading the ASM disk header with kfed:

H:\TEMP\0423\0423>kfed read data3_100m
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                  1096040823 ; 0x00c: 0x41544177
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
005B78600 00000000 00000000 00000000 41544177  [............wATA]
005B78610 00000000 00000000 00000000 00000000  [................]
005B78620 4C43524F 4B534944 41544144 00000033  [ORCLDISKDATA3...]
005B78630 00000000 00000000 00000000 00000000  [................]
  Repeat 252 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

The ASMLib marker ORCLDISKDATA3 is clearly visible here, which proves that after oracleasm createdisk re-created the disk, no kfed repair was attempted and no new disk group was created on top of it.
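As a quick sanity check before involving kfed, the ASMLib label can be read straight from the dump: in an ASM disk header the provision string (kfdhdb.driver.provstr) sits at byte offset 0x20, which matches where ORCLDISKDATA3 appears in the hex dump above. A minimal sketch, assuming the file holds the start of the disk:

```python
# Read the ASMLib provision string at offset 0x20 of a disk head dump.
# In the case above this returns "DATA3" even though the rest of the
# ASM header was zeroed by oracleasm createdisk.
def asmlib_label(path):
    with open(path, "rb") as f:
        f.seek(0x20)                  # kfdhdb.driver.provstr offset
        raw = f.read(32)
    if raw.startswith(b"ORCLDISK"):
        return raw[8:].rstrip(b"\x00").decode("ascii", "replace")
    return None                       # no ASMLib label present
```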

SUCCESS: CREATE DISKGROUP DATA EXTERNAL REDUNDANCY  DISK '/dev/oracleasm/disks/DATA1' SIZE 1430507M
 DISK '/dev/oracleasm/disks/DATA2' SIZE 1430507M
 DISK '/dev/oracleasm/disks/DATA3' SIZE 1430507M
 ATTRIBUTE 'compatible.asm'='12.2.0.1','compatible.advm'='12.2.0.1','au_size'='4M'

SUCCESS: CREATE DISKGROUP CRS EXTERNAL REDUNDANCY  DISK '/dev/oracleasm/disks/CRS' SIZE 190732M
 ATTRIBUTE 'compatible.asm'='12.2.0.1','compatible.advm'='12.2.0.1','au_size'='4M' /* ASMCA */

The disk group creation statements in the alert log confirm that the group was created with an au_size of 4 MB. In that case both the disk header backup and the backup AU should still be intact, which can be confirmed directly with winhex:
[Screenshot: winhex view of the backup disk header]


Confirming the disk header information again with kfed:

H:\TEMP\0423\0423>kfed read data3_100m aus=4096k blkn=1022 aun=1|grep name
kfdhdb.dskname:               DATA_0002 ; 0x028: length=9
kfdhdb.grpname:                    DATA ; 0x048: length=4
kfdhdb.fgname:                DATA_0002 ; 0x068: length=9
kfdhdb.capname:                         ; 0x088: length=0

H:\TEMP\0423\0423>kfed read data3_100m aus=4096k blkn=0 aun=11|grep name
kfdhdb.dskname:               DATA_0002 ; 0x028: length=9
kfdhdb.grpname:                    DATA ; 0x048: length=4
kfdhdb.fgname:                DATA_0002 ; 0x068: length=9
kfdhdb.capname:                         ; 0x088: length=0

The above proves that both the disk header backup and the backup AU are still present, and since the backup AU is 4 MB it fully covers the 1 MB destroyed by createdisk. In theory, simply writing that AU back should have been enough. Unfortunately, after doing so, CRS still could not find the voting files during startup:
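For orientation: kfed's aun/blkn addressing maps to a flat byte offset of aun * au_size + blkn * block_size, so with the 4 MB AU the backup header read above (aun=1, blkn=1022) sits at 4 MiB + 1022 * 4 KiB. A hedged sketch of the write-back step, assuming the backup lives in a known AU (the AU numbers here are illustrative; verify them with kfed on the real disk before writing anything):

```python
AU = 4 * 1024 * 1024   # au_size=4M, from the CREATE DISKGROUP statement
BLK = 4096             # ASM metadata block size

def byte_offset(aun, blkn, au=AU, blk=BLK):
    # kfed's "aun=N blkn=M" addressing as a flat byte offset
    return aun * au + blkn * blk

def restore_au(path, src_aun, dst_aun, au=AU):
    # Copy one allocation unit over another in place -- e.g. an intact
    # backup AU over the AU wiped by oracleasm createdisk.
    with open(path, "r+b") as f:
        f.seek(src_aun * au)
        data = f.read(au)
        f.seek(dst_aun * au)
        f.write(data)
```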
[Screenshot: CRS startup cannot find the voting files]


Analysis of the CRS disk showed that its offset was wrong:
[Screenshot: voting file found at the wrong offset]

Further analysis traced this to a disk partitioning problem:

Disk /dev/sdc: 200.0 GB, 199996997632 bytes, 390619136 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1   390619135   195309567+  ee  GPT

Disk /dev/sdd: 1500.0 GB, 1499996356608 bytes, 2929680384 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1048576 bytes
Disk label type: gpt
Disk identifier: 99F1679E-DC32-4F6A-B85D-D91C87B09775


#         Start          End    Size  Type            Name
 1         2048   2929678335    1.4T  Linux LVM       
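For context on why a partition shifts every ASM offset: the /dev/sdd partition above starts at sector 2048, so anything addressed through the partition device sits 1 MiB further into the raw disk than the same offset on the whole-device path. The arithmetic (a hypothetical illustration of the failure mode, not the exact on-site fix):

```python
SECTOR = 512
part_start = 2048                # start sector of partition 1 above
shift = part_start * SECTOR      # byte offset of the partition into the disk
# shift is exactly 1 MiB: reading ASM structures such as the voting file
# through the wrong device node displaces every offset by this amount.
```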

Once the partition problem was sorted out and CRS restarted, everything came back automatically:
[Screenshot: ACFS mounted and MySQL running]


With that, this oracleasm createdisk case was fully recovered with zero data loss.

A recovery complicated by offlining datafiles and then resetlogs


This should have been a simple job: a datafile was accidentally deleted, the file was recovered at the storage layer, and the database just needed to be opened. Instead, through unfamiliarity with Oracle and overconfidence, the operator offlined the missing files while they did not exist on disk, then tried to open the database with resetlogs and made various further attempts, turning the problem into a real mess.
Symptoms after the failure
From the main errors in the alert log, the course of events can be roughly reconstructed:

1. Starting the database reported that control03.ctl was missing

Fri Apr 17 21:53:03 2026
MMNL started with pid=16, OS id=3613 
ORACLE_BASE from environment = /data/oracle
Fri Apr 17 21:53:08 2026
alter database mount
ORA-00210: cannot open the specified control file
ORA-00202: control file: '/data/oracle/oradata/orcl/control03.ctl'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00210: cannot open the specified control file
ORA-00202: control file: '/data/oracle/oradata/orcl/control02.ctl'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-205 signalled during: alter database mount...

If only these control files had been lost (at this point there were still no errors about missing datafiles), this would have been a very simple problem, fixed by just adjusting the control_files parameter.

2. Instead, the operator went straight to re-creating the controlfile

Fri Apr 17 21:57:01 2026
Successful mount of redo thread 1, with mount id 1758675116
Completed: CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/data/oracle/oradata/orcl/redo01.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 2 '/data/oracle/oradata/orcl/redo02.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 3 '/data/oracle/oradata/orcl/redo03.log'  SIZE 50M BLOCKSIZE 512
DATAFILE
  '/data/oracle/oradata/orcl/system01.dbf',
  '/data/oracle/oradata/orcl/sysaux01.dbf',
  '/data/oracle/oradata/orcl/undotbs01.dbf',
  '/data/oracle/oradata/orcl/users01.dbf'
CHARACTER SET ZHS16GBK

3. Starting the database then produced the following

Fri Apr 17 22:02:43 2026
ALTER DATABASE OPEN
Beginning crash recovery of 1 threads
 parallel recovery started with 3 processes
Started redo scan
Completed redo scan
 read 39020 KB redo, 0 data blocks need recovery
Started redo application at
 Thread 1: logseq 11590, block 2, scn 137806010
Recovery of Online Redo Log: Thread 1 Group 1 Seq 11590 Reading mem 0
  Mem# 0: /data/oracle/oradata/orcl/redo01.log
Completed redo application of 0.00MB
Completed crash recovery at
 Thread 1: logseq 11590, block 78042, scn 137831847
 0 data blocks read, 0 data blocks written, 39020 redo k-bytes read
Fri Apr 17 22:02:44 2026
Thread 1 advanced to log sequence 11591 (thread open)
Thread 1 opened at log sequence 11591
  Current log# 2 seq# 11591 mem# 0: /data/oracle/oradata/orcl/redo02.log
Successful open of redo thread 1
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Fri Apr 17 22:02:44 2026
SMON: enabling cache recovery
Successfully onlined Undo Tablespace 2.
Dictionary check beginning
Tablespace 'TEMP' #3  found in data dictionary,
but not in the controlfile. Adding to controlfile.
Tablespace 'ERP_XXXX' #6  found in data dictionary,
but not in the controlfile. Adding to controlfile.
Tablespace 'ERP_AAAA' #7  found in data dictionary,
but not in the controlfile. Adding to controlfile.
Tablespace 'ABCD' #8  found in data dictionary,
but not in the controlfile. Adding to controlfile.
Tablespace 'ERP_BBBB' #9  found in data dictionary,
but not in the controlfile. Adding to controlfile.
Tablespace 'ERP_XXD' #10  found in data dictionary,
but not in the controlfile. Adding to controlfile.
Tablespace 'ERP_12SF' #11  found in data dictionary,
but not in the controlfile. Adding to controlfile.
Tablespace 'XXX14' #12  found in data dictionary,
but not in the controlfile. Adding to controlfile.
Tablespace 'P_ZY' #13 found in data dictionary,
but not in the controlfile. Adding to controlfile.
File #5 found in data dictionary but not in controlfile.
Creating OFFLINE file 'MISSING00005' in the controlfile.
File #6 found in data dictionary but not in controlfile.
Creating OFFLINE file 'MISSING00006' in the controlfile.
File #7 found in data dictionary but not in controlfile.
Creating OFFLINE file 'MISSING00007' in the controlfile.
File #8 found in data dictionary but not in controlfile.
Creating OFFLINE file 'MISSING00008' in the controlfile.
File #9 found in data dictionary but not in controlfile.
Creating OFFLINE file 'MISSING00009' in the controlfile.
File #10  found in data dictionary but not in controlfile.
Creating OFFLINE file 'MISSING00010' in the controlfile.
File #11  found in data dictionary but not in controlfile.
Creating OFFLINE file 'MISSING00011' in the controlfile.
File #12 found in data dictionary but not in controlfile.
Creating OFFLINE file 'MISSING00012' in the controlfile.

4. Then resetlogs was attempted

Sat Apr 18 05:55:10 2026
ALTER DATABASE   MOUNT
Successful mount of redo thread 1, with mount id 1758652862
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: ALTER DATABASE   MOUNT
Sat Apr 18 05:55:14 2026
ALTER DATABASE OPEN RESETLOGS
ORA-1139 signalled during: ALTER DATABASE OPEN RESETLOGS...
Sat Apr 18 05:56:29 2026
Starting ORACLE instance (normal)
ALTER DATABASE RECOVER  DATABASE UNTIL CANCEL  
Media Recovery Start
 started logmerger process
Parallel Media Recovery started with 4 slaves
Sat Apr 18 05:56:29 2026
Warning: Datafile 5 (/data/oracle/orcl/xxxx.dbf) is offline during full database recovery and will not be recovered
Warning: Datafile 6 (/data/oracle/orcl/xxxx.dbf) is offline during full database recovery and will not be recovered
Warning: Datafile 7 (/data/oracle/orcl/xxxx.dbf) is offline during full database recovery and will not be recovered
Warning: Datafile 8 (/data/oracle/orcl/xxxx.dbf) is offline during full database recovery and will not be recovered
Media Recovery Not Required
Completed: ALTER DATABASE RECOVER  DATABASE UNTIL CANCEL  
Sat Apr 18 05:57:45 2026
ALTER DATABASE OPEN RESETLOGS
RESETLOGS after complete recovery through change 137865786
Resetting resetlogs activation ID 1645665187 (0x6216dba3)
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2549.trc:
ORA-00367: checksum error in log file header
ORA-00322: log 1 of thread 1 is not current copy
ORA-00312: online log 1 thread 1: '/data/oracle/oradata/orcl/redo01.log'
Sat Apr 18 05:57:45 2026
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_m000_2554.trc:
ORA-00316: log 1 of thread 1, type 0 in header is not log file
ORA-00312: online log 1 thread 1: '/data/oracle/oradata/orcl/redo01.log'
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2549.trc:
ORA-00367: checksum error in log file header
ORA-00322: log 2 of thread 1 is not current copy
ORA-00312: online log 2 thread 1: '/data/oracle/oradata/orcl/redo02.log'
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_m000_2554.trc:
ORA-00316: log 2 of thread 1, type 0 in header is not log file
ORA-00312: online log 2 thread 1: '/data/oracle/oradata/orcl/redo02.log'
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_m000_2554.trc:
ORA-00322: log 3 of thread 1 is not current copy
ORA-00312: online log 3 thread 1: '/data/oracle/oradata/orcl/redo03.log'
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2549.trc:
ORA-00367: checksum error in log file header
ORA-00322: log 3 of thread 1 is not current copy
ORA-00312: online log 3 thread 1: '/data/oracle/oradata/orcl/redo03.log'
Sat Apr 18 05:57:46 2026
Setting recovery target incarnation to 2
Sat Apr 18 05:57:46 2026
Assigning activation ID 1758652862 (0x68d2e9be)
Thread 1 opened at log sequence 1
  Current log# 1 seq# 1 mem# 0: /data/oracle/oradata/orcl/redo01.log
Successful open of redo thread 1
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Sat Apr 18 05:57:46 2026
SMON: enabling cache recovery
Successfully onlined Undo Tablespace 2.
Dictionary check beginning
File #5 is offline, but is part of an online tablespace.
data file 5: '/data/oracle/oradata/orcl/xxxx.dbf'
File #6 is offline, but is part of an online tablespace.
data file 6: '/data/oracle/oradata/orcl/xxxx.dbf'
File #7 is offline, but is part of an online tablespace.
data file 7: '/data/oracle/oradata/orcl/xxxx.dbf'
File #8 is offline, but is part of an online tablespace.
data file 8: '/data/oracle/oradata/orcl/xxxx.dbf'

By this point the damage was essentially done. One of the most ill-advised things in Oracle recovery had been committed: running resetlogs while datafiles were offline. The resetlogs information of the offlined datafiles was not updated along with the rest, leaving those files with a resetlogs SCN smaller than that of the online datafiles in the same database.

5. Subsequent operations then failed in all sorts of ways

Completed: ALTER DATABASE   MOUNT
Sun Apr 19 08:13:02 2026
ALTER DATABASE DATAFILE 5 OFFLINE DROP
Sun Apr 19 08:13:02 2026
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_dbw0_9212.trc  (incident=67094):
ORA-00600: internal error code, arguments: [3600], [5], [14], [], [], [], [], [], [], [], [], []
Incident details in: /data/oracle/diag/rdbms/orcl/orcl/incident/incdir_67094/orcl_dbw0_9212_i67094.trc
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_dbw0_9212.trc:
ORA-00600: internal error code, arguments: [3600], [5], [14], [], [], [], [], [], [], [], [], []
DBW0 (ospid: 9212): terminating the instance due to error 471
Tue Apr 21 22:31:23 2026
Assigning activation ID 1758985759 (0x68d7fe1f)
Thread 1 opened at log sequence 1
  Current log# 1 seq# 1 mem# 0: /data/oracle/oradata/orcl/redo01.log
Successful open of redo thread 1
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Tue Apr 21 22:31:23 2026
SMON: enabling cache recovery
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4951.trc  (incident=87950):
ORA-00600: internal error code, arguments: [2662], [0], [137890858], [0], [137891091], [12583056], []
Incident details in: /data/oracle/diag/rdbms/orcl/orcl/incident/incdir_87950/orcl_ora_4951_i87950.trc
Errors in file /data/oracle/diag/rdbms/orcl/orcl/incident/incdir_87950/orcl_ora_4951_i87950.trc:
ORA-00339: archived log does not contain any redo
ORA-00334: archived log: '/data/oracle/oradata/orcl/redo03.log'
ORA-00339: archived log does not contain any redo
ORA-00334: archived log: '/data/oracle/oradata/orcl/redo02.log'
ORA-00339: archived log does not contain any redo
ORA-00334: archived log: '/data/oracle/oradata/orcl/redo02.log'
ORA-00339: archived log does not contain any redo
ORA-00334: archived log: '/data/oracle/oradata/orcl/redo03.log'
ORA-00600: internal error code, arguments: [2662], [0], [137890858], [0], [137891091], [12583056], []
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4951.trc:
ORA-00600: internal error code, arguments: [2662], [0], [137890858], [0], [137891091], [12583056], []
Errors in file /data/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4951.trc:
ORA-00600: internal error code, arguments: [2662], [0], [137890858], [0], [137891091], [12583056], []
Error 600 happened during db open, shutting down database
USER (ospid: 4951): terminating the instance due to error 600

Analysis after taking over the case
I used the obet tool to quickly check for corrupt blocks and inspect the file headers; for an introduction to obet see:
obet: detecting bad blocks in datafiles
Oracle Block Editor Tool (obet)
[Screenshot: dbv output]


dbv found no corrupt blocks at all, which was good news:
[Screenshot: datafile header resetlogs information]

Inspecting the datafile headers, however, revealed three different sets of resetlogs information, proving that resetlogs had been run multiple times with only part of the files online.

Recovery steps
1. Fix the resetlogs information with Oracle Recovery Tools
Because so many files carried inconsistent resetlogs information, Oracle Recovery Tools was first used to fix the SCN and related data (see Oracle Recovery Tools recovery case summary-202505; note that the file with the largest resetlogs SCN must be chosen as the reference file).
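The reference-file rule in step 1 can be stated concretely: read the resetlogs SCN from every datafile header, group the files, and treat the largest resetlogs SCN as the target that the lagging (formerly offline) files must be patched up to. A hypothetical sketch of that selection logic (the header values in the test are invented; reading real headers is the Recovery Tools' job):

```python
def pick_reference(headers):
    """headers: {datafile_name: resetlogs_scn read from its header}.
    Returns the target resetlogs SCN and the files that lag behind it."""
    groups = {}
    for name, scn in headers.items():
        groups.setdefault(scn, []).append(name)
    ref_scn = max(groups)     # largest resetlogs SCN wins
    lagging = [n for s, g in groups.items() if s != ref_scn for n in g]
    return ref_scn, lagging
```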
[Screenshot: Oracle Recovery Tools fixing the resetlogs information]

2. Re-create the controlfile and open the database
[Screenshot: database opened successfully]

Luckily it opened directly on the first try (as it should have, since the customer had already opened the database several times earlier while the main business files were missing).


3. Finally a temp file was added and the data exported with expdp, completing this recovery.

Handling IMP-00098: INTERNAL ERROR: impgst2 when importing an exp dmp


A customer reported that a dmp file exported with exp could not be imported. The original database had already been dropped and replaced with a new one, and they asked us to help recover a few core tables from the dmp.
Symptoms
imp failed with IMP-00098: INTERNAL ERROR: impgst2 while importing the dmp file:
[Screenshot: IMP-00098 error during import]


The imp session aborted outright, so the required data could not be brought back.
Root cause
Analysis of the export log turned up "ORA-24801: illegal parameter value in OCI lob function" errors:
[Screenshot: ORA-24801 in the export log]

A MOS search found the article EXP-56 ORA-24801 During Export KB83982:
[Screenshot: MOS note on character set change and LOB corruption]

The customer then confirmed that they had indeed changed the database character set. It is safe to conclude that the character-set change corrupted some LOB data, and the corrupt LOBs that exp wrote into the dmp broke the file's integrity, so it could no longer be imported.
Resolution
In this case the corrupt LOBs sat near the front of the dmp and did not belong to the customer's business schemas, so there were two ways to proceed:
1. Use winhex to cut the damaged LOB table out of the dmp, then import the rest
2. Use a tool to extract the needed table data directly from the dmp; I have handled similar cases before:
Fixing IMP-00098 errors on imp import
IMP-00098: INTERNAL ERROR: impgst2
Recovering a corrupt exp dmp (bad blocks/corruption) by skipping the damaged blocks
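What makes the winhex approach workable is that a classic exp dmp carries a plain-text `TABLE "NAME"` marker ahead of each table's DDL and row data, so a damaged table's segment can be located and cut out. A hedged sketch of locating those markers (the marker format is the conventional one; verify it against the actual dmp before cutting):

```python
import re

def table_offsets(dmp_bytes):
    # Return (byte_offset, table_name) for each TABLE "NAME" marker found,
    # giving the boundaries needed to excise a damaged table's segment.
    return [(m.start(), m.group(1).decode())
            for m in re.finditer(rb'TABLE "(\w+)"', dmp_bytes)]
```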

Oracle 19c Grid Infrastructure Release Update-202604(19.31)


Release Date Version Download Link Additional CVEs Addressed  
21-Apr-2026 GI Release Update 19.31.0 PATCH 39036936 CVE-2025-15467, CVE-2026-33870, CVE-2026-33013, CVE-2025-31948, CVE-2026-34312, CVE-2026-25210, CVE-2026-24400, and CVE-2026-35229  
20-Jan-2026 GI Release Update 19.30.0 PATCH 38629535 CVE-2025-61795, and CVE-2025-67735  
21-Oct-2025 GI Release Update 19.29.0 PATCH 38298204 CVE-2025-59375, CVE-2025-53047, CVE-2025-52520 and CVE-2025-26333 -
15-Jul-2025 GI Release Update 19.28.0 PATCH 37957391 CVE-2025-49125, CVE-2025-27363, CVE-2023-1436, CVE-2023-29162, CVE-2025-50066, CVE-2024-56406, CVE-2025-0725, CVE-2025-30751, CVE-2025-30750 and CVE-2025-26333  
15-Apr-2025 GI Release Update 19.27.0 PATCH 37641958 CVE-2025-30701, CVE-2025-30733, CVE-2025-30694, CVE-2025-30702, CVE-2024-8176, CVE-2024-11053 and CVE-2025-24813  
21-Jan-2025 GI Release Update 19.26.0 PATCH 37257886 CVE-2022-26345, CVE-2024-7254, CVE-2024-38998, CVE-2024-38999, CVE-2024-52316, CVE-2024-47554, and CVE-2024-52317  
15-Oct-2024 GI Release Update 19.25.0 PATCH 36916690 CVE-2024-37371, CVE-2024-7264, CVE-2024-21242, CVE-2024-21233, CVE-2024-28887, CVE-2022-41342, CVE-2024-45492, CVE-2024-38999, and CVE-2024-34750  
16-Jul-2024 GI Release Update 19.24.0 PATCH 36582629 CVE-2023-45853, CVE-2022-37434, CVE-2023-52425, CVE-2023-52426, CVE-2024-0853, CVE-2024-21123, CVE-2024-21184, and CVE-2024-21126  
16-Apr-2024 GI Release Update 19.23.0 PATCH 36233126 CVE-2022-34381, CVE-2023-5363, CVE-2023-48795, CVE-2022-34169, CVE-2024-21058, CVE-2024-21066, CVE-2024-20995, CVE-2023-28823, CVE-2023-27391, CVE-2023-47038, CVE-2023-47039, CVE-2023-47100, CVE-2023-42503, CVE-2023-39975, CVE-2024-23672, and CVE-2024-24549  
16-Jan-2024 GI Release Update 19.22.0 PATCH 35940989 CVE-2022-21432, CVE-2023-46589, CVE-2023-42794, CVE-2023-42795, CVE-2023-44487, CVE-2023-45648, CVE-2022-46337, CVE-2023-2976, CVE-2023-38545, CVE-2023-38039, and CVE-2023-38546  
17-Oct-2023 GI Release Update 19.21.0 PATCH 35642822 CVE-2023-38039, CVE-2023-28320, CVE-2023-28321, CVE-2023-28322, CVE-2022-44729, CVE-2023-22071, CVE-2023-22077, CVE-2023-22073, CVE-2023-35116, CVE-2023-22075, CVE-2023-22074, CVE-2021-24031, CVE-2023-2976, and CVE-2022-46908  
18-Jul-2023 GI Release Update 19.20.0 PATCH 35319490 CVE-2022-43680, CVE-2023-22034, CVE-2023-21949, CVE-2021-3520, CVE-2023-34981, CVE-2022-45143, CVE-2023-24998, CVE-2023-28708, and CVE-2023-28709  
18-Apr-2023 GI Release Update 19.19.0 PATCH 35037840 CVE-2023-21918, CVE-2023-24998, and CVE-2022-45143  
17-Jan-2023 GI Release Update 19.18.0 PATCH 34762026 CVE-2018-25032, CVE-2022-42003, CVE-2023-21829, CVE-2023-21827, CVE-2021-37750, CVE-2022-42889, CVE-2020-10878, CVE-2022-1122, CVE-2021-29338, CVE-2022-3171, CVE-2022-45047, and CVE-2022-42004  
18-Oct-2022 GI Release Update 19.17.0 PATCH 34416665 CVE-2022-21596, CVE-2022-21603, CVE-2020-36518, CVE-2022-1586, CVE-2020-13956, CVE-2022-34305, CVE-2021-25122, CVE-2021-25329, CVE-2021-30129, CVE-2022-2047, CVE-2022-25647, and CVE-2019-2904  
19-Jul-2022 GI Release Update 19.16.0 PATCH 34130714 CVE-2021-45943, CVE-2022-21432, CVE-2022-0839, CVE-2020-26185, CVE-2020-26184, CVE-2020-35169, and CVE-2022-29885  
19-Apr-2022 GI Release Update 19.15.0 PATCH 33803476 CVE-2021-22569, CVE-2022-21410, CVE-2021-2464, and CVE-2021-42340  
18-Jan-2022 GI Release Update 19.14.0 PATCH 33509923 CVE-2022-21247, CVE-2021-45105  
19-Oct-2021 GI Release Update 19.13.0 PATCH 33182768 CVE-2021-2332, CVE-2021-35551, CVE-2021-35557, CVE-2021-35558, CVE-2021-35576, CVE-2021-29425, CVE-2021-35579, CVE-2020-27824, CVE-2021-25122, CVE-2020-9484, and CVE-2021-25329  
20-Jul-2021 GI Release Update 19.12.0 PATCH 32895426 CVE-2021-2351, CVE-2021-2328, CVE-2021-2329, CVE-2021-2337, CVE-2021-2333, CVE-2019-17545, CVE-2021-2330, CVE-2020-7760, CVE-2021-2334, CVE-2021-2335, CVE-2021-2336, CVE-2021-2326  
20-Apr-2021 GI Release Update 19.11.0 PATCH 32545008 CVE-2021-2207, CVE-2021-2175, CVE-2021-2173, CVE-2019-3738, CVE-2019-3739, CVE-2019-3740, CVE-2020-5360, CVE-2020-17527, CVE-2020-13943, CVE-2020-9484, CVE-2021-2245, and CVE-2020-5359  
19-Jan-2021 GI Release Update 19.10.0 PATCH 32226239 CVE-2021-2035, CVE-2021-2000, CVE-2021-2054, CVE-2021-2045  
20-Oct-2020 GI Release Update 19.9.0 PATCH 31750108 CVE-2020-14901, CVE-2020-14735, CVE-2020-14734, CVE-2020-9488, CVE-2020-11022, CVE-2020-14742, CVE-2019-17543, CVE-2019-11922, CVE-2019-12900, CVE-2020-13935, CVE-2016-1000031, CVE-2018-8013, CVE-2017-7658, CVE-2019-11358, CVE-2019-16335, CVE-2020-14745, CVE-2020-14744, CVE-2020-11022, CVE-2020-11023, CVE-2016-10244, CVE-2016-10328, CVE-2016-5300, CVE-2016-6153, CVE-2017-10989, CVE-2017-13685, CVE-2017-13745, CVE-2017-14232, CVE-2017-15286, CVE-2017-7857, CVE-2017-7858, CVE-2017-7864, CVE-2017-8105, CVE-2017-8287, CVE-2018-18873, CVE-2018-19139, CVE-2018-19539, CVE-2018-19540, CVE-2018-19541, CVE-2018-19542, CVE-2018-19543, CVE-2018-20346, CVE-2018-20505, CVE-2018-20506, CVE-2018-20570, CVE-2018-20584, CVE-2018-20622, CVE-2018-20843, CVE-2018-6942, CVE-2018-8740, CVE-2018-9055, CVE-2018-9154, CVE-2018-9252, CVE-2019-15903, CVE-2019-16168, CVE-2019-5018, CVE-2019-8457, CVE-2019-9936, CVE-2019-9937, and CVE-2016-3189  
14-Jul-2020 GI Release Update 19.8.0 PATCH 31305339 CVE-2020-2969, CVE-2020-2978, CVE-2019-13990, CVE-2019-17569, CVE-2016-1000031, CVE-2018-10237, CVE-2018-8013, CVE-2020-1935 and CVE-2020-1938  
14-Apr-2020 GI Release Update 19.7.0 PATCH 30899722 CVE-2019-2756, CVE-2019-2759, CVE-2019-2852, CVE-2019-2853, CVE-2019-12418, CVE-2019-17563, CVE-2020-2734, CVE-2020-2737  
14-Jan-2020 GI Release Update 19.6.0 PATCH 30501910 CVE-2020-2510, CVE-2020-2511, CVE-2020-2512, CVE-2020-2515, CVE-2020-2516, CVE-2020-2517, CVE-2020-2527, CVE-2020-2731, CVE-2020-2568, CVE-2020-2569, CVE-2019-10072, CVE-2018-11784, CVE-2019-0199, CVE-2019-0221, CVE-2019-0232  
15-Oct-2019 GI Release Update 19.5.0 PATCH 30116789 CVE-2019-2956, CVE-2019-2913, CVE-2019-2939, CVE-2018-2875, CVE-2019-2734, CVE-2018-11784, CVE-2019-2954, CVE-2019-2955, CVE-2018-8034, CVE-2018-1000873, CVE-2018-14719, CVE-2018-14720, CVE-2018-14721, CVE-2018-19360, CVE-2018-19361 and CVE-2018-19362  
16-Jul-2019 GI Release Update 19.4.0 PATCH 29708769 CVE-2018-11058, CVE-2019-2776, CVE-2016-0701, CVE-2016-2183, CVE-2016-6306, CVE-2016-8610, CVE-2018-11054, CVE-2018-11055, CVE-2018-11056, CVE-2018-11057 and CVE-2018-15769  
16-Apr-2019 GI Release Update 19.3.0 (patch suspended)  

 

Oracle Database 19c Release Update-202604(19.31)


Release Date Version Download Link Included in Windows Bundle Additional CVEs Addressed
21-Apr-2026 Database Release Update 19.31.0 PATCH 39034528 PATCH 38818049 CVE-2025-15467, CVE-2026-33870, CVE-2026-33013, CVE-2025-31948, CVE-2026-34312, CVE-2026-25210, and CVE-2026-24400
20-Jan-2026 Database Release Update 19.30.0 PATCH 38632161 PATCH 38523609 -
21-Oct-2025 Database Release Update 19.29.0 PATCH 38291812 PATCH 38111211 CVE-2025-59375 and CVE-2025-26333
15-Jul-2025 Database Release Update 19.28.0 PATCH 37960098 PATCH 37962957 CVE-2025-27363, CVE-2023-1436, CVE-2023-29162, CVE-2025-50066, CVE-2024-56406, CVE-2025-0725, CVE-2025-30751, CVE-2025-30750 and CVE-2025-26333
15-Apr-2025 Database Release Update 19.27.0 PATCH 37642901 PATCH 37532350 CVE-2025-30701, CVE-2025-30733, CVE-2025-30694, CVE-2025-30702, CVE-2024-8176, and CVE-2024-11053
21-Jan-2025 Database Release Update 19.26.0 PATCH 37260974 PATCH 37486199 CVE-2022-26345, CVE-2024-7254, CVE-2024-38998, and CVE-2024-38999
15-Oct-2024 Database Release Update 19.25.0 PATCH 36912597 PATCH 36878821 CVE-2024-37371, CVE-2024-7264, CVE-2024-21242, CVE-2024-21233, CVE-2024-28887, CVE-2022-41342, CVE-2024-45492, and CVE-2024-38999
16-Jul-2024 Database Release Update 19.24.0 PATCH 36582781 PATCH 36521936 CVE-2023-45853, CVE-2022-37434, CVE-2023-52425, CVE-2023-52426, CVE-2024-0853, CVE-2024-21123, and CVE-2024-21184
16-Apr-2024 Database Release Update 19.23.0 PATCH 36233263 PATCH 36219938 CVE-2022-34381, CVE-2023-5363, CVE-2023-48795, CVE-2022-34169, CVE-2024-21058, CVE-2024-21066, CVE-2024-20995, CVE-2023-28823, CVE-2023-27391, CVE-2023-47038, CVE-2023-47039, CVE-2023-47100, CVE-2023-42503, and CVE-2023-39975
16-Jan-2024 Database Release Update 19.22.0 PATCH 35943157 PATCH 35962832 CVE-2022-21432, CVE-2022-46337, CVE-2023-2976, CVE-2023-38545, CVE-2023-38039, CVE-2023-38546, and CVE-2022-41409 (Windows)
17-Oct-2023 Database Release Update 19.21.0 PATCH 35643107 PATCH 35681552 CVE-2023-38039, CVE-2023-28320, CVE-2023-28321, CVE-2023-28322, CVE-2022-44729, CVE-2023-22071, CVE-2023-22077, CVE-2023-22073, CVE-2023-35116, CVE-2023-22075, CVE-2023-22074, CVE-2021-24031, CVE-2023-2976, and CVE-2022-46908
18-Jul-2023 Database Release Update 19.20.0 PATCH 35320081 PATCH 35348034 CVE-2022-43680, CVE-2023-22034, CVE-2023-21949, and CVE-2021-3520
18-Apr-2023 Database Release Update 19.19.0 PATCH 35042068 PATCH 35046439 CVE-2023-21918, and CVE-2023-24998
17-Jan-2023 Database Release Update 19.18.0 PATCH 34765931 PATCH 34750795 CVE-2018-25032, CVE-2022-42003, CVE-2023-21829, CVE-2023-21827, CVE-2021-37750, CVE-2022-42889, CVE-2020-10878, CVE-2022-1122, CVE-2021-29338, CVE-2022-3171, CVE-2022-45047, & CVE-2022-42004
18-Oct-2022 Database Release Update 19.17.0 PATCH 34419443 PATCH 34468114 CVE-2022-21596, CVE-2022-21603, CVE-2020-36518, CVE-2022-1586, CVE-2020-13956, CVE-2021-25122, CVE-2021-25329, CVE-2021-30129, CVE-2022-2047, CVE-2022-25647, and CVE-2019-2904
19-Jul-2022 Database Release Update 19.16.0 PATCH 34133642 PATCH 34110685 CVE-2021-45943, CVE-2022-21432, CVE-2022-0839, CVE-2020-26185, CVE-2020-26184, CVE-2020-35169
19-Apr-2022 Database Release Update 19.15.0 PATCH 33806152 PATCH 33829175 CVE-2021-22569, CVE-2022-21410, CVE-2021-2464
18-Jan-2022 Database Release Update 19.14.0 PATCH 33515361 PATCH 33575656 CVE-2022-21247, CVE-2021-45105
19-Oct-2021 Database Release Update 19.13.0 PATCH 33192793 PATCH 33155330 CVE-2021-2332, CVE-2021-35551, CVE-2021-35557, CVE-2021-35558, CVE-2021-35576, CVE-2021-29425, CVE-2021-35579, CVE-2020-27824
20-Jul-2021 Database Release Update 19.12.0 PATCH 32904851 PATCH 32832237 CVE-2021-2351, CVE-2021-2328, CVE-2021-2329, CVE-2021-2337, CVE-2021-2333, CVE-2019-17545, CVE-2021-2330, CVE-2020-7760, CVE-2021-2334, CVE-2021-2335, CVE-2021-2336, CVE-2021-2326
20-Apr-2021 Database Release Update 19.11.0 PATCH 32545013 PATCH 32409154 CVE-2021-2207, CVE-2021-2175, CVE-2021-2173, CVE-2019-3738, CVE-2019-3739, CVE-2019-3740, CVE-2020-5360, CVE-2020-17527, CVE-2020-13943, CVE-2020-9484, CVE-2021-2245, and CVE-2020-5359
19-Jan-2021 Database Release Update 19.10.0 PATCH 32218454 PATCH 32062765 CVE-2021-2035, CVE-2021-2000, CVE-2021-2054, CVE-2021-2045
20-Oct-2020 Database Release Update 19.9.0 PATCH 31771877 PATCH 31719903 CVE-2020-14901, CVE-2020-14735, CVE-2020-14734, CVE-2020-9488, CVE-2020-11022, CVE-2020-14742, CVE-2019-17543, CVE-2019-11922, CVE-2019-12900, CVE-2020-13935, CVE-2016-1000031, CVE-2018-8013, CVE-2017-7658, CVE-2019-11358, CVE-2019-16335, CVE-2020-14745, CVE-2020-14744, CVE-2020-11022, CVE-2020-11023, CVE-2016-10244, CVE-2016-10328, CVE-2016-5300, CVE-2016-6153, CVE-2017-10989, CVE-2017-13685, CVE-2017-13745, CVE-2017-14232, CVE-2017-15286, CVE-2017-7857, CVE-2017-7858, CVE-2017-7864, CVE-2017-8105, CVE-2017-8287, CVE-2018-18873, CVE-2018-19139, CVE-2018-19539, CVE-2018-19540, CVE-2018-19541, CVE-2018-19542, CVE-2018-19543, CVE-2018-20346, CVE-2018-20505, CVE-2018-20506, CVE-2018-20570, CVE-2018-20584, CVE-2018-20622, CVE-2018-20843, CVE-2018-6942, CVE-2018-8740, CVE-2018-9055, CVE-2018-9154, CVE-2018-9252, CVE-2019-15903, CVE-2019-16168, CVE-2019-5018, CVE-2019-8457, CVE-2019-9936, CVE-2019-9937, and CVE-2016-3189
14-Jul-2020 Database Release Update 19.8.0 PATCH 31281355 PATCH 31247621 CVE-2020-2969, CVE-2020-2978, CVE-2019-13990, CVE-2019-17569, CVE-2016-1000031, CVE-2018-10237, CVE-2018-8013, CVE-2020-1935 and CVE-2020-1938
14-Apr-2020 Database Release Update 19.7.0 PATCH 30869156 PATCH 30869156 CVE-2019-2756, CVE-2019-2759, CVE-2019-2852, CVE-2019-2853, CVE-2019-12418, CVE-2019-17563, CVE-2020-2734, CVE-2020-2737
14-Jan-2020 Database Release Update 19.6.0 PATCH 30557433 PATCH 30445947, which has been superseded by PATCH 30901317 CVE-2020-2510, CVE-2020-2511, CVE-2020-2512, CVE-2020-2515, CVE-2020-2516, CVE-2020-2517, CVE-2020-2527, CVE-2020-2731, CVE-2020-2568, CVE-2020-2569, CVE-2019-10072, CVE-2018-11784, CVE-2019-0199, CVE-2019-0221, CVE-2019-0232
15-Oct-2019 Database Release Update 19.5.0 PATCH 30125133 PATCH 30151705 CVE-2019-2956, CVE-2019-2913, CVE-2019-2939, CVE-2018-2875, CVE-2019-2734, CVE-2018-11784, CVE-2019-2954, CVE-2019-2955, CVE-2018-8034, CVE-2018-1000873, CVE-2018-14719, CVE-2018-14720, CVE-2018-14721, CVE-2018-19360, CVE-2018-19361 and CVE-2018-19362
16-Jul-2019 Database Release Update 19.4.0 PATCH 29834717 PATCH 29859191 CVE-2018-11058, CVE-2019-2776, CVE-2016-0701, CVE-2016-2183, CVE-2016-6306, CVE-2016-8610, CVE-2018-11054, CVE-2018-11055, CVE-2018-11056, CVE-2018-11057 and CVE-2018-15769.
16-Apr-2019 Database Release Update 19.3.0 (patch suspended) none  

HAIP startup failure on AIX RAC caused by a directly cabled private network


I previously wrote about a Linux RAC environment whose interconnect was a direct cable between the two hosts: when one machine was shut down, the surviving node could not detect the interconnect as active and failed to start. See: Direct-cable interconnect after-effects: one node down keeps HAIP on the other node from starting.
Last night I ran into a similar situation on AIX. For some reason one RAC node had to be shut down, and while CRS was starting on the other node, HAIP simply would not come up. Even so, after a while the ASM service started, the disk groups mounted and the database opened normally, temporarily restoring the business (this differs from Linux: on an 11.2.0.4 RAC on Linux, if HAIP cannot start, ASM by default will not start either).

bash-4.2$ crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       db2                      Started             
ora.cluster_interconnect.haip
      1        ONLINE  OFFLINE                                                   
ora.crf
      1        ONLINE  ONLINE       db2                                          
ora.crsd
      1        ONLINE  ONLINE       db2                                          
ora.cssd
      1        ONLINE  ONLINE       db2                                          
ora.cssdmonitor
      1        ONLINE  ONLINE       db2                                          
ora.ctssd
      1        ONLINE  ONLINE       db2                      OBSERVER            
ora.diskmon
      1        OFFLINE OFFLINE                                                   
ora.drivers.acfs
      1        ONLINE  ONLINE       db2                                          
ora.evmd
      1        ONLINE  ONLINE       db2                                          
ora.gipcd
      1        ONLINE  ONLINE       db2                                          
ora.gpnpd
      1        ONLINE  ONLINE       db2                                          
ora.mdnsd
      1        ONLINE  ONLINE       db2                                          

The relevant HAIP log entries:

[ USRTHRD][7257]{0:0:221} Starting Probe for ip 169.254.57.103
[ USRTHRD][7257]{0:0:221} Transitioning to Probe State
[ USRTHRD][7257]{0:0:221}  Arp::sProbe { 
[ USRTHRD][7257]{0:0:221} Arp::sSend:  sending type 1
[ USRTHRD][7257]{0:0:221} [NetHAWork] thread hit OSD exception failed to send arp
[ USRTHRD][7257]{0:0:221} (null) category: -2, operation: write, loc: arpsend:1,os, OS error: 69, other: 
[ USRTHRD][7257]{0:0:221} [NetHAWork] thread stopping
[ USRTHRD][7257]{0:0:221} Thread:[NetHAWork]isRunning is reset to false here
[ USRTHRD][5201]{0:0:221} use all detected INF
[ USRTHRD][5201]{0:0:221} Thread:[NetHAWork]thread constructor
[ USRTHRD][5201]{0:0:221} HAIP:  Moving ip '' from inf 'en6' to inf 'en6'
[ USRTHRD][5201]{0:0:221} pausing thread
[ USRTHRD][5201]{0:0:221} posting thread
[ USRTHRD][5201]{0:0:221} Waiting for HAIP work thread to cleanup ARP
[ USRTHRD][5201]{0:0:221} timeout to wait thread to cleanup ARP
[ USRTHRD][5201]{0:0:221} Thread:[NetHAWork]start {
[ USRTHRD][5201]{0:0:221} Thread:[NetHAWork]start }
[ USRTHRD][7514]{0:0:221} [NetHAWork] thread started
[ USRTHRD][7514]{0:0:221}  Arp::sCreateSocket { 
[ USRTHRD][7514]{0:0:221}  Arp::sCreateSocket } 
[ USRTHRD][5201]{0:0:221} use all detected INF
[ USRTHRD][7514]{0:0:221} Failed to check 169.254.57.103 on en6
[ USRTHRD][7514]{0:0:221} (null) category: 0, operation: , loc: , OS error: 0, other: 

This initially suggests that the cluster was trying to bring up the IP 169.254.57.103 on interface en6, but the attempt failed with OS error 69. An AIX engineer's analysis indicated this error is typically caused by an unreachable physical network, so we examined the NIC state:

bash-4.2# entstat -d ent6
-------------------------------------------------------------
ETHERNET STATISTICS (ent6) :
Device Type: 2-Port Gigabit Ethernet-SX PCI-Express Adapter (14103f03)
Hardware Address: 40:f2:e9:91:eb:7a
Elapsed Time: 0 days 1 hours 38 minutes 14 seconds

Transmit Statistics:                          Receive Statistics:
--------------------                          -------------------
Packets: 4128                                 Packets: 5077
Bytes: 35215659                               Bytes: 370511
Interrupts: 0                                 Interrupts: 4815
Transmit Errors: 0                            Receive Errors: 0
Packets Dropped: 0                            Packets Dropped: 0
                                              Bad Packets: 0
Max Packets on S/W Transmit Queue: 1         
S/W Transmit Queue Overflow: 0
Current S/W+H/W Transmit Queue Length: 0

Broadcast Packets: 12                         Broadcast Packets: 0
Multicast Packets: 62                         Multicast Packets: 66
No Carrier Sense: 0                           CRC Errors: 0
DMA Underrun: 0                               DMA Overrun: 0
Lost CTS Errors: 0                            Alignment Errors: 0
Max Collision Errors: 0                       No Resource Errors: 0
Late Collision Errors: 0                      Receive Collision Errors: 0
Deferred: 0                                   Packet Too Short Errors: 0
SQE Test: 0                                   Packet Too Long Errors: 0
Timeout Errors: 0                             Packets Discarded by Adapter: 0
Single Collision Count: 0                     Receiver Start Count: 0
Multiple Collision Count: 0
Current HW Transmit Queue Length: 0

General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 2000
Driver Flags: Up Broadcast Simplex 
        Limbo 64BitSupport ChecksumOffload 
        LargeSend DataRateSet 

2-Port Gigabit Ethernet-SX PCI-Express Adapter (14103f03) Specific Statistics:
------------------------------------------------------------------------------
Link Status : Down      <====== the link is down (usually seen with a direct cable connection; a switch-connected port would not show this)
Media Speed Selected: Auto negotiation
Media Speed Running: Unknown
PCI Mode: PCI-Express X4
    Relaxed Ordering: Enabled
    TLP Size: 256
    MRR Size: 4096
Jumbo Frames: Disabled
TCP Segmentation Offload: Enabled
TCP Segmentation Offload Packets Transmitted: 3625
TCP Segmentation Offload Packet Errors: 0
Transmit and Receive Flow Control Status: Enabled
XON Flow Control Packets Transmitted: 0
XON Flow Control Packets Received: 0
XOFF Flow Control Packets Transmitted: 0
XOFF Flow Control Packets Received: 0
Transmit and Receive Flow Control Threshold (High): 40960
Transmit and Receive Flow Control Threshold (Low): 20480
Transmit and Receive Storage Allocation (TX/RX): 4/44

After fixing the underlying problem and bringing the failed host back up, the network link status returned to normal and HAIP started successfully. However, because the cluster had originally come up while HAIP was broken, the interconnect was using the raw private IP directly (not HAIP), so one more cluster restart was still required to return everything to its normal state.
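After such a restart, whether the interconnect is actually back on HAIP can be confirmed from the database or ASM instance. A query along these lines (standard views; the exact check used on site is not shown, so this is a sketch):

```sql
-- Check which IP each instance uses for the private interconnect.
-- With HAIP working, IP_ADDRESS should be a 169.254.x.x link-local
-- address and SOURCE should indicate it came from the cluster layer.
SELECT inst_id, name, ip_address, is_public, source
  FROM gv$cluster_interconnects
 ORDER BY inst_id;
```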

Another case of TRIM causing data loss on ASM disks

Contact: phone/WeChat (+86 17813235971), QQ (107644445)

Title: Another case of TRIM causing data loss on ASM disks

Author: 惜分飞 © All rights reserved [Reproduction in any form without the author's consent is prohibited and may be pursued legally.]

I ran into a similar case before: storage directly attached to a virtual machine, where a disk mis-operation triggered TRIM and wiped the data (see "ssd trim导致fdisk格式化磁盘之后无法恢复"). Recently a similar case came up again: the customer mistakenly ran mkfs against an ASM disk.
[Figure: mkfs run against the ASM disk]


The affected disk was one of six disks that made up the DATA disk group.
[Figure: the six member disks of the DATA disk group]

After the format, the DATA disk group dismounted immediately:

Tue Apr 07 18:22:31 2026
WARNING: cache read  a corrupt block: group=2(DATA) fn=261 indblk=0 disk=0 (DATA_0000) incarn=3958745085 au=605 blk=0 count=1
Errors in file /home/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_639087.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [261] [2147483648] [0 != 1]
NOTE: a corrupted block from group DATA was dumped to /home/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_639087.trc
WARNING: cache read (retry) a corrupt block: group=2(DATA) fn=261 indblk=0 disk=0 (DATA_0000) incarn=3958745085 au=605 blk=0 count=1
Errors in file /home/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_639087.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [261] [2147483648] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [261] [2147483648] [0 != 1]
ERROR: cache failed to read group=2(DATA) fn=261 indblk=0 from disk(s): 0(DATA_0000)
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [261] [2147483648] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [261] [2147483648] [0 != 1]
NOTE: cache initiating offline of disk 0 group DATA
NOTE: process _user639087_+asm1 (639087) initiating offline of disk 0.3958745085 (DATA_0000) with mask 0x7e in group 2
NOTE: initiating PST update: grp = 2, dsk = 0/0xebf5a7fd, mask = 0x6a, op = clear
Tue Apr 07 18:22:31 2026
GMON updating disk modes for group 2 at 10 for pid 28, osid 639087
ERROR: Disk 0 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 2)
Tue Apr 07 18:22:31 2026
NOTE: cache dismounting (not clean) group 2/0xE9E5571F (DATA) 
NOTE: messaging CKPT to quiesce pins Unix process pid: 115720, image: oracle@ajjorcl1 (B000)
Tue Apr 07 18:22:31 2026
NOTE: halting all I/Os to diskgroup 2 (DATA)
WARNING: Offline for disk DATA_0000 in mode 0x7f failed.
Tue Apr 07 18:22:31 2026
NOTE: LGWR doing non-clean dismount of group 2 (DATA)
NOTE: LGWR sync ABA=15.1625 last written ABA 15.1625
Errors in file /home/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_639087.trc  (incident=309345):
ORA-15335: ASM metadata corruption detected in disk group 'DATA'
ORA-15130: diskgroup "DATA" is being dismounted
ORA-15066: offlining disk "DATA_0000" in group "DATA" may result in a data loss
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [261] [2147483648] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [261] [2147483648] [0 != 1]
Incident details in: /home/app/grid/diag/asm/+asm/+ASM1/incident/incdir_309345/+ASM1_ora_639087_i309345.trc
Tue Apr 07 18:22:31 2026
List of instances:
 1
Dirty detach reconfiguration started (new ddet inc 1, cluster inc 30)
 Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 2 invalid = TRUE 
 26 GCS resources traversed, 0 cancelled
Dirty Detach Reconfiguration complete
Tue Apr 07 18:22:31 2026
freeing rdom 2
Tue Apr 07 18:22:31 2026
WARNING: dirty detached from domain 2
NOTE: cache dismounted group 2/0xE9E5571F (DATA) 
SQL> alter diskgroup DATA dismount force /* ASM SERVER:3924121375 */ 
Tue Apr 07 18:22:32 2026
Sweep [inc][309345]: completed
System State dumped to trace file /home/app/grid/diag/asm/+asm/+ASM1/incident/incdir_309345/+ASM1_ora_639087_i309345.trc
Tue Apr 07 18:22:32 2026
Dumping diagnostic data in directory=[cdmp_20260407182232], requested by (instance=1, osid=639087), summary=[incident=309345].
Tue Apr 07 18:22:32 2026
NOTE: cache deleting context for group DATA 2/0xe9e5571f
GMON dismounting group 2 at 11 for pid 32, osid 115720
NOTE: Disk DATA_0000 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0001 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0002 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0003 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0004 in mode 0x7f marked for de-assignment
NOTE: Disk DATA_0005 in mode 0x7f marked for de-assignment
NOTE:Waiting for all pending writes to complete before de-registering: grpnum 2
Tue Apr 07 18:22:34 2026
Sweep [inc2][309345]: completed
NOTE: AMDU dump of disk group DATA created at /home/app/grid/diag/asm/+asm/+ASM1/incident/incdir_309345
Tue Apr 07 18:22:37 2026
NOTE: ASM client orcl1:orcl disconnected unexpectedly.
NOTE: check client alert log.
NOTE: Trace records dumped in trace file /home/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_504268.trc
Tue Apr 07 18:23:02 2026
SUCCESS: diskgroup DATA was dismounted
SUCCESS: alter diskgroup DATA dismount force /* ASM SERVER:3924121375 */
SUCCESS: ASM-initiated MANDATORY DISMOUNT of group DATA

Analyzing the formatted disk with kfed, randomly sampled AUs all turned out to be zeroed:
[Figure: kfed output showing zeroed AUs]


lsblk was used to check whether the TRIM (discard) feature was enabled on the disk:
[Figure: lsblk discard output]

Given this, it is reasonably safe to conclude that the disk triggered TRIM and the data was almost certainly zeroed. Finally, inspecting the disk image in winhex confirmed that, apart from basic partition and file system metadata, the disk was empty.
[Figure: winhex view of the empty disk]
Under these circumstances, the best achievable outcome is to recover the data held on the 5 surviving disks of the 6-disk group; at least 1/6 of the data is lost, but as a last resort this minimizes the damage.
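Before attempting such a partial recovery, the state of every member disk can be surveyed from the ASM instance. A sketch of that check (standard views; column list abridged, and the exact on-site query is not shown):

```sql
-- List the member disks of each group with their header status;
-- a wiped disk typically no longer shows header_status = 'MEMBER'
SELECT group_number, disk_number, name, path, header_status, state
  FROM v$asm_disk
 ORDER BY group_number, disk_number;
```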

A lucky ORA-600 kcratr_nab_less_than_odr recovery


A customer's virtualization environment ran out of space, leaving the database in an abnormal state; on startup it reported ORA-600 kcratr_nab_less_than_odr:

Mon Apr 06 00:13:16 2026
Completed: alter database mount exclusive
alter database open
Beginning crash recovery of 1 threads
 parallel recovery started with 3 processes
Started redo scan
Mon Apr 06 00:13:26 2026
Completed redo scan
 read 5480 KB redo, 459 data blocks need recovery
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl\trace\orcl_ora_2324.trc  (incident=418959):
ORA-00600: ??????, ??: [kcratr_nab_less_than_odr], [1], [53856], [40105], [43042], [], [], [], [], [], [], []
Incident details in: d:\app\administrator\diag\rdbms\orcl\orcl\incident\incdir_418959\orcl_ora_2324_i418959.trc
Aborting crash recovery due to error 600
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl\trace\orcl_ora_2324.trc:
ORA-00600: ??????, ??: [kcratr_nab_less_than_odr], [1], [53856], [40105], [43042], [], [], [], [], [], [], []
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl\trace\orcl_ora_2324.trc:
ORA-00600: ??????, ??: [kcratr_nab_less_than_odr], [1], [53856], [40105], [43042], [], [], [], [], [], [], []
ORA-600 signalled during: alter database open...
Mon Apr 06 00:13:33 2026
Trace dumping is performing id=[cdmp_20260406001333]

Because the customer was not familiar with recovery, they did nothing further after the failure and preserved the scene. This error is generally caused by a lost controlfile write, so recreating the controlfile (rectl) is usually the first choice:
[Figure: recreating the controlfile]
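The rectl step is essentially a controlfile re-creation. A minimal sketch is below; the database name, file paths, sizes, and character set are placeholders, not the customer's actual script:

```sql
-- Hypothetical controlfile re-creation; adapt all names/paths before use
STARTUP NOMOUNT;
CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXDATAFILES 100
LOGFILE
  GROUP 1 'D:\APP\ORADATA\ORCL\REDO01.LOG' SIZE 50M,
  GROUP 2 'D:\APP\ORADATA\ORCL\REDO02.LOG' SIZE 50M,
  GROUP 3 'D:\APP\ORADATA\ORCL\REDO03.LOG' SIZE 50M
DATAFILE
  'D:\APP\ORADATA\ORCL\SYSTEM01.DBF',
  'D:\APP\ORADATA\ORCL\SYSAUX01.DBF',
  'D:\APP\ORADATA\ORCL\UNDOTBS01.DBF',
  'D:\APP\ORADATA\ORCL\USERS01.DBF'
CHARACTER SET ZHS16GBK;  -- placeholder; use the database's real charset
```

With NORESETLOGS the online redo is kept, which is what allows a clean open when, as here, the redo itself is intact and only the controlfile write was lost.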


We then tried to open the database and, with a bit of luck, it opened directly:
[Figure: database opened successfully]

That completed the recovery, and the database was back to normal. Most of the time an ORA-600 kcratr_nab_less_than_odr error can be resolved this simply, but I have also seen cases where further ORA-600 errors followed the rectl:
ORA-600 kcratr_nab_less_than_odr and ORA-600 4194 recovery
ORA-600 kcratr_nab_less_than_odr and ORA-600 2662 recovery
ORA-600 kcratr_nab_less_than_odr and ORA-600 4193 recovery

Opening a ransomware-encrypted Oracle database quickly with OraFHR


Today a customer's 12.2 database was found encrypted with the .[[yatesnet@cock.li]].wman ransomware extension:
[Figure: data files renamed with the .wman extension]
Analysis with the obet tool (obet provides bad-block detection for data files) confirmed that every file had 63 damaged blocks:
[Figure: obet bad-block detection output]


Because the database version is 12.2, it met the conditions for OraFHR (the one-click open tool for ransomware-encrypted Oracle databases) to rebuild the file headers, so the headers were reconstructed:
[Figure: OraFHR rebuilding the file headers]

Then, following the commands the tool suggested, the database was opened directly:
[Figure: database opened after OraFHR]

One-click recovery of an offline datafile with obet


A customer's database would not start after an abnormal power loss, and their own recovery attempts failed to open it. They collected information with the Oracle Database Recovery Check script for assessment, which revealed two problems:
1. The query output confirmed that users01.dbf (file# 4) was offline, and its checkpoint SCN was clearly lower than that of the other files.
[Figure: file# 4 offline with a stale checkpoint SCN]


2. The alert log confirmed that after the customer offlined file 4, the database open attempt failed with ORA-600 [4194] and the database never opened:

Mon Mar 30 06:26:30 2026
Starting ORACLE instance (normal)
Mon Mar 30 06:27:00 2026
ALTER DATABASE DATAFILE 4 OFFLINE DROP
Completed: ALTER DATABASE DATAFILE 4 OFFLINE DROP
Mon Mar 30 06:27:26 2026
ALTER DATABASE OPEN
Beginning crash recovery of 1 threads
 parallel recovery started with 7 processes
Started redo scan
Completed redo scan
 read 88 KB redo, 103 data blocks need recovery
Started redo application at
 Thread 1: logseq 3, block 3
Recovery of Online Redo Log: Thread 1 Group 3 Seq 3 Reading mem 0
  Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo03.log
Completed redo application of 0.07MB
Completed crash recovery at
 Thread 1: logseq 3, block 180, scn 415466134
 103 data blocks read, 103 data blocks written, 88 redo k-bytes read
Mon Mar 30 06:27:28 2026
Thread 1 advanced to log sequence 4 (thread open)
Thread 1 opened at log sequence 4
  Current log# 1 seq# 4 mem# 0: /home/oracle/app/oracle/oradata/orcl/redo01.log
Successful open of redo thread 1
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Mon Mar 30 06:27:28 2026
SMON: enabling cache recovery
Successfully onlined Undo Tablespace 2.
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is ZHS16GBK
No Resource Manager plan active
Errors in file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_smon_17628.trc(incident=186839):
ORA-00600: internal error code, arguments: [4194], [], [], [], [], [], [], [], [], [], [], []
Incident details in: /home/oracle/app/oracle/diag/rdbms/orcl/orcl/orcl_smon_17628_i186839.trc
Exception [type: SIGBUS, Non-existent physical address] [ADDR:0x6C0C4B62] [PC:0x2297750, kgegpa()+40]
Exception [type: SIGBUS, Non-existent physical address] [ADDR:0x6C0C4B62] [PC:0x229597B, kgebse()+279]
Mon Mar 30 06:27:28 2026
PMON (ospid: 17604): terminating the instance due to error 397
Errors in file /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_smon_17628.trc:
ORA-00328: archived log ends at change 415466135, need later change 415466136
ORA-00334: archived log: '/home/oracle/app/oracle/oradata/orcl/redo03.log'
ORA-00600: internal error code, arguments: [4194], [], [], [], [], [], [], [], [], [], [], []
Instance terminated by PMON, pid = 17604

When I took over, the database was in noarchivelog mode, it had already been force-opened with consistency checking disabled (the open did not succeed), and the redo had been cleared. Given that situation, I went straight to the obet tool (Oracle Block Editor Tool) to modify the file header status of file# 4.
File status before the obet fix:

STATUS	CHECKPOINT_TIME 			 FUZ CHECKPOINT_CHANGE# 	 ROW_NUM
------- ---------------------------------------- --- ------------------ ----------------
OFFLINE 2026-03-30 06:24:41			 YES	      415446014 	       1
ONLINE	2026-03-30 06:28:26			 YES	      415486184 	       7
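A listing of this shape can be produced against v$datafile_header, whose FUZZY column corresponds to FUZ above (the exact script used here is not shown, so this is an assumed reconstruction):

```sql
-- Compare file-header checkpoints; an offline, stale file stands out
-- by its lower CHECKPOINT_CHANGE# and older CHECKPOINT_TIME
SELECT file#, status, checkpoint_change#, checkpoint_time, fuzzy
  FROM v$datafile_header
 ORDER BY file#;
```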

The obet file-header edit session:

OBET> open listfile.txt
Loaded 8 files from  datafile list 'listfile.txt'.

OBET> info

Loaded files (2 total):
----------------------------------------
Number  Path
----------------------------------------
     1  /home/oracle/app/oracle/oradata/orcl/system01.dbf
     4  /opt/oradata/orcl/users01.dbf
----------------------------------------

OBET> set mode edit
mode set to: edit


OBET> set file 4
filename set to: /opt/oradata/orcl/users01.dbf (file#4)

OBET> backup block 1
Created backup directory: backup_blk
Successfully backed up block 1 from current file to /tmp/backup_blk/users01.dbf_1.20260331092333


OBET> copy chkscn file 1 to file 4

Confirm Modify chkscn:
Source: file#1 (/home/oracle/app/oracle/oradata/orcl/system01.dbf)
Target: file#4 (/opt/oradata/orcl/users01.dbf)
Proceed? (Y/YES to confirm): y
Successfully copied checkpoint SCN information from file#1 to file#4.

OBET> exit
Exiting OBET.

Querying the file-header SCN information again:

STATUS	CHECKPOINT_TIME 			 FUZ CHECKPOINT_CHANGE# 	 ROW_NUM
------- ---------------------------------------- --- ------------------ ----------------
OFFLINE 2026-03-30 06:28:26			 NO	      415486184 	       1
ONLINE	2026-03-30 06:28:26			 YES	      415486184 	       7

The file was then brought online and the database opened successfully:

SQL> recover datafile 4;
Media recovery complete.
SQL> alter database datafile 4 online;

Database altered.

SQL> alter database open;

Database altered.

The data was then exported successfully with expdp, essentially completing this recovery.
[Figure: expdp export completed]