Handling O/S-Error: (OS 23) data error (cyclic redundancy check)


Author: 惜分飞 © All rights reserved. Reproduction in any form without the author's consent is prohibited and may result in legal action.

A customer hit ORA-27070, OSD-04016 and O/S-Error errors when accessing a data file because of bad sectors on disk.
(screenshot: OSD-04016 error)


RMAN attempts to read datafile 88:

RMAN> backup datafile 88 format 'e:\rman\%d_%T_%I.%s%p';

启动 backup 于 05-6月 -24
使用目标数据库控制文件替代恢复目录
分配的通道: ORA_DISK_1
通道 ORA_DISK_1: SID=2246 设备类型=DISK
通道 ORA_DISK_1: 正在启动全部数据文件备份集
通道 ORA_DISK_1: 正在指定备份集内的数据文件
输入数据文件: 文件号=00088 名称=E:\APP\ADMINISTRATOR\ORADATA\XFF\XIFENFEI.ORA
通道 ORA_DISK_1: 正在启动段 1 于 05-6月 -24
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: backup 命令 (ORA_DISK_1 通道上, 在 06/05/2024 18:33:43 上) 失败
ORA-19501: 文件 "E:\APP\ADMINISTRATOR\ORADATA\XFF\XIFENFEI.ORA", 块编号 322944 (块大小=8192) 上出现读取错误
ORA-27070: 异步读取/写入失败
OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 23) 数据错误(循环冗余检查)。

The Windows system event log reports: device \Device\Harddisk0\DR0 has a bad block.
(screenshot: disk error in the system event log)


At this point it is fairly safe to conclude that bad sectors at the file-system or disk level are the cause (a physical bad sector being the more likely one). A forced copy of the damaged data file with a disk utility reported that a small number of sectors could not be copied.
(screenshot: forced copy of the damaged file)
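
The post does not name the copy tool; with a GNU dd port available on the host, the same forced copy can be sketched as below (source path from the RMAN output above, destination path matching the dbv check that follows):

dd if=E:\APP\ADMINISTRATOR\ORADATA\XFF\XIFENFEI.ORA of=D:\XIFENFEI.ORA bs=8192 conv=noerror,sync

conv=noerror keeps dd reading past I/O errors, and conv=sync zero-pads each failed read to the full 8 KB block so that all later blocks stay at their correct offsets.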

Checking the recovered file with dbv:

E:\check_db>DBV FILE=D:/XIFENFEI.ORA

DBVERIFY: Release 11.2.0.3.0 - Production on 星期日 6月 9 22:17:28 2024

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

DBVERIFY - 开始验证: FILE = D:\XIFENFEI.ORA
页 323055 标记为损坏
Corrupt block relative dba: 0x1604edef (file 88, block 323055)
Bad header found during dbv:
Data in bad block:
 type: 229 format: 5 rdba: 0xe5e5e5e5
 last change scn: 0xe5e5.e5e5e5e5 seq: 0xe5 flg: 0xe5
 spare1: 0xe5 spare2: 0xe5 spare3: 0xe5e5
 consistency value in tail: 0x4c390601
 check value in block header: 0xe5e5
 computed block checksum: 0x5003



DBVERIFY - 验证完成

检查的页总数: 524288
处理的页总数 (数据): 204510
失败的页总数 (数据): 0
处理的页总数 (索引): 127485
失败的页总数 (索引): 0
处理的页总数 (其他): 3030
处理的总页数 (段)  : 0
失败的总页数 (段)  : 0
空的页总数: 189262
标记为损坏的总页数: 1
流入的页总数: 0
加密的总页数        : 0
最高块 SCN            : 184063522 (3470.184063522)

Fortunately only one database block is corrupt. Querying dba_extents for the object that owns the bad block shows, less fortunately, that it is table data:

SQL> SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, A.PARTITION_NAME
  2    FROM DBA_EXTENTS A
  3   WHERE FILE_ID = &FILE_ID
  4     AND &BLOCK_ID BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;
输入 file_id 的值:  88
原值    3:  WHERE FILE_ID = &FILE_ID
新值    3:  WHERE FILE_ID = 88
输入 block_id 的值:  323055
原值    4:    AND &BLOCK_ID BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1
新值    4:    AND 323055 BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1

OWNER
------------------------------
SEGMENT_NAME
--------------------------------------------------------------------------------
SEGMENT_TYPE       TABLESPACE_NAME                PARTITION_NAME
------------------ ------------------------------ ------------------------------
XFFUSER
XFFTABLE
TABLE              XFFTBS_TAB2

The database was set to skip the corrupt block (see the sketch below), the data exported, the original table renamed, and the data imported back, which completed this recovery.
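
The exact commands are not shown in the post; a minimal sketch of the usual way to let scans skip the corrupt block before exporting, using the owner and table returned by the dba_extents query above, is:

SQL> -- allow scans of the table to skip blocks marked corrupt
SQL> exec dbms_repair.skip_corrupt_blocks('XFFUSER', 'XFFTABLE');
SQL> -- or, at session level, event 10231 has a similar effect for the export session
SQL> alter session set events '10231 trace name context forever, level 10';

The table can then be exported with exp/expdp, the original table renamed, and the data imported back.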

Similar recoveries handled before:
A lucky OSD-04016 O/S-Error recovery
O/S-Error: (OS 23) data error (cyclic redundancy check): database recovery

O/S-Error: (OS 23) data error (cyclic redundancy check): database recovery


A customer's database crashed suddenly while running; inspection showed ORA-27070, OSD-04016 and O/S-Error: (OS 23) errors:

Thu May 12 11:25:53 2022
KCF: write/open error block=0x19e95f online=1
     file=57 H:\ORADATA\xifenfei\XFF51.DBF
     error=27070 txt: 'OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 23) 数据错误(循环冗余检查)。'
Thu May 12 11:25:53 2022
Errors in file e:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_dbw0_3532.trc:
ORA-01242: 数据文件出现介质故障: 数据库处于 NOARCHIVELOG 模式
ORA-01114: 将块写入文件 57 时出现 IO 错误 (块 # 1698143)
ORA-01110: 数据文件 57: 'H:\ORADATA\xifenfei\XFF51.DBF'
ORA-27070: 异步读取/写入失败
OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 23) 数据错误(循环冗余检查)。

DBW0: terminating instance due to error 1242
Thu May 12 11:25:54 2022
Errors in file e:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_mman_3528.trc:
ORA-01242: 数据文件出现介质故障: 数据库处于 NOARCHIVELOG 模式

Thu May 12 11:25:54 2022
Errors in file e:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_lgwr_3544.trc:
ORA-01242: 数据文件出现介质故障: 数据库处于 NOARCHIVELOG 模式

Thu May 12 11:25:55 2022
Errors in file e:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_dbw1_3536.trc:
ORA-01242: 数据文件出现介质故障: 数据库处于 NOARCHIVELOG 模式

Thu May 12 11:25:55 2022
Errors in file e:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_psp0_3524.trc:
ORA-01242: 数据文件出现介质故障: 数据库处于 NOARCHIVELOG 模式

Thu May 12 11:25:55 2022
Errors in file e:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_ckpt_3548.trc:
ORA-01242: 数据文件出现介质故障: 数据库处于 NOARCHIVELOG 模式

Thu May 12 11:25:55 2022
Errors in file e:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_pmon_3520.trc:
ORA-01242: 数据文件出现介质故障: 数据库处于 NOARCHIVELOG 模式

Thu May 12 11:26:06 2022
Errors in file e:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_q002_37468.trc:
ORA-01242: 数据文件出现介质故障: 数据库处于 NOARCHIVELOG 模式

Thu May 12 11:26:08 2022
Errors in file e:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_reco_3556.trc:
ORA-01242: 数据文件出现介质故障: 数据库处于 NOARCHIVELOG 模式

Thu May 12 11:26:08 2022
Errors in file e:\oracle\product\10.2.0\admin\xifenfei\bdump\xifenfei_smon_3552.trc:
ORA-01242: 数据文件出现介质故障: 数据库处于 NOARCHIVELOG 模式

Thu May 12 11:26:10 2022
Instance terminated by DBW0, pid = 3532

Restarting the database raised similar errors again: ORA-27070: async read/write failed, OSD-04016: error queuing an asynchronous I/O request.
(screenshot: startup errors)


dbv on the data file also reports errors:
(screenshot: dbv I/O error)

From the information above it is fairly certain that a lower-level fault (file system or hardware) made the database files inaccessible; the system event log confirms the errors.
(screenshot: system event log errors)

The damaged files were repaired with specialist recovery software and the database opened normally (skipping the bad blocks).
(screenshot: database opened after recovery)

Handling an ORA-600 [2662] error caused by a hardware fault


A few days ago we recovered a 40+ TB case (ORA-00600: internal error code, arguments: [16513], [1403]); now another database of nearly 30 TB, which after a hardware fault and a series of recovery attempts by others could not be opened, needed our technical support.
The original cause of the failure was a storage fault:

Fri Feb 19 09:03:49 2021
Errors in file d:\app\administrator\diag\rdbms\xifenfei\xifenfei\trace\xifenfei_ora_3460.trc:
ORA-01114: 将块写入文件 849 时出现 IO 错误 (块 # 3871748)
ORA-27070: 异步读取/写入失败
OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 1167) 设备没有连接。
ORA-01114: 将块写入文件 849 时出现 IO 错误 (块 # 3871748)
ORA-27070: 异步读取/写入失败
OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 1167) 设备没有连接。

After a series of operations by others, the database reported an ORA-600 [2662] error:

Sat Feb 20 08:19:35 2021
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Sat Feb 20 08:19:35 2021
SMON: enabling cache recovery
Errors in file d:\app\administrator\diag\rdbms\xifenfei\xifenfei\trace\xifenfei_ora_5304.trc(incident=1960181):
ORA-00600:internal error code,arguments:[2662],[4],[2185364344], [4],[2185453722],[893388032],[],[],[],[],[],[]
Errors in file d:\app\administrator\diag\rdbms\xifenfei\xifenfei\trace\xifenfei_ora_5304.trc:
ORA-00600:internal error code,arguments:[2662],[4],[2185364344], [4],[2185453722],[893388032],[],[],[],[],[],[]
Errors in file d:\app\administrator\diag\rdbms\xifenfei\xifenfei\trace\xifenfei_ora_5304.trc:
ORA-00600:internal error code,arguments:[2662],[4],[2185364344], [4],[2185453722],[893388032],[],[],[],[],[],[]
Error 600 happened during db open, shutting down database
USER (ospid: 5304): terminating the instance due to error 600
Instance terminated by USER, pid = 5304
ORA-1092 signalled during: ALTER DATABASE OPEN...
opiodr aborting process unknown ospid (5304) as a result of ORA-1092
Sat Feb 20 08:19:42 2021
ORA-1092 : opitsk aborting process

The SCN was adjusted and the database got past this error.
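
The post does not show how the SCN was adjusted; one commonly cited sketch (undocumented, and only ever to be tried on a disposable copy of the database) looks like this:

SQL> startup mount;
SQL> -- bump the SCN at the next open; the level is chosen so the resulting SCN
SQL> -- clears the values reported in the ORA-600 [2662] arguments
SQL> alter session set events '10015 trace name adjust_scn level 10';
SQL> alter database open;

With the SCN pushed forward the database got past ORA-600 [2662], but then raised ORA-600 [4194]: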

Doing block recovery for file 213 block 4688
No block recovery was needed
Errors in file d:\app\administrator\diag\rdbms\xifenfei\xifenfei\trace\xifenfei_smon_7048.trc(incident=1984136):
ORA-00600: internal error code, arguments: [4194], [38.4.1381252], [0], [], [],[],[],[],[],[],[],[]
Sat Feb 20 10:50:45 2021
Doing block recovery for file 213 block 4688
No block recovery was needed
Fatal internal error happened while SMON was doing active transaction recovery.
Errors in file d:\app\administrator\diag\rdbms\xifenfei\xifenfei\trace\xifenfei_smon_7048.trc:
ORA-00600: internal error code, arguments: [4194], [38.4.1381252], [0], [], [],[],[],[],[],[],[],[]
SMON (ospid: 7048): terminating the instance due to error 474
Errors in file d:\app\administrator\diag\rdbms\xifenfei\xifenfei\trace\xifenfei_ora_6652.trc(incident=1984185):
ORA-00600: internal error code, arguments: [4194], [], [], [], [], [], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [4194], [], [], [], [], [], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [4194], [], [], [], [], [], [], [], [], [], [], []
ORA-00600: internal error code, arguments: [4194], [], [], [], [], [], [], [], [], [], [], []
Sat Feb 20 10:50:52 2021
Instance terminated by SMON, pid = 7048

The problem transactions were then dealt with and SMON was kept from rolling them back.
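
The exact steps are again not shown; the combination commonly used is to list the undo segments behind the failing transactions (23 and 38 in the errors above) in _corrupted_rollback_segments and to stop SMON from recovering dead transactions with event 10513. A sketch, with placeholder undo segment names (real 11g names carry a numeric suffix), is:

SQL> alter system set "_corrupted_rollback_segments"='_SYSSMU23$','_SYSSMU38$' scope=spfile;
SQL> -- event 10513 disables SMON dead-transaction recovery
SQL> alter system set event='10513 trace name context forever, level 2' scope=spfile;
SQL> shutdown immediate
SQL> startup

With rollback of those transactions suppressed the database opened, but then reported ORA-600 [4137]: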

Sat Feb 20 10:53:46 2021
Sweep [inc][1992133]: completed
Stopping background process MMNL
Sat Feb 20 10:53:47 2021
Trace dumping is performing id=[cdmp_20210220105347]
Errors in file d:\app\administrator\diag\rdbms\xifenfei\xifenfei\trace\xifenfei_smon_6576.trc(incident=1992134):
ORA-00600: internal error code, arguments: [4137], [23.13.3094188], [0], [0], [], [], [], [], [], [], [], []
ORACLE Instance xifenfei (pid = 14) - Error 600 encountered while recovering transaction (23, 13).
Errors in file d:\app\administrator\diag\rdbms\xifenfei\xifenfei\trace\xifenfei_smon_6576.trc:
ORA-00600: internal error code, arguments: [4137], [23.13.3094188], [0], [0], [], [], [], [], [], [], [], []
Sat Feb 20 10:53:47 2021
Sweep [inc2][1992133]: completed
Sat Feb 20 10:53:47 2021
Sweep [inc][1992134]: completed
Stopping background process MMON
Trace dumping is performing id=[cdmp_20210220105348]
Errors in file d:\app\administrator\diag\rdbms\xifenfei\xifenfei\trace\xifenfei_smon_6576.trc(incident=1992135):
ORA-00600: internal error code, arguments: [4137], [38.4.1381252], [0], [0], [], [], [], [], [], [], [], []
Starting background process MMON
Starting background process MMNL
Sat Feb 20 10:53:48 2021
MMON started with pid=16, OS id=6448 
ALTER SYSTEM enable restricted session;
Sat Feb 20 10:53:48 2021
MMNL started with pid=36, OS id=6840 
ORACLE Instance xifenfei (pid = 14) - Error 600 encountered while recovering transaction (38, 4).
Errors in file d:\app\administrator\diag\rdbms\xifenfei\xifenfei\trace\xifenfei_smon_6576.trc:
ORA-00600: internal error code, arguments: [4137], [38.4.1381252], [0], [0], [], [], [], [], [], [], [], []
Sat Feb 20 10:53:49 2021
Sweep [inc][1992135]: completed
Trace dumping is performing id=[cdmp_20210220105349]
replication_dependency_tracking turned off (no async multimaster replication found)
Completed: alter database open

After handling the problem undo segments, the database stays up with no further obvious ORA- errors in the background. The data dictionary was checked with hcheck.sql and came back clean:

HCheck Version 07MAY18 on 20-FEB-2021 11:35:11
----------------------------------------------
Catalog Version 11.2.0.1.0 (1102000100)
db_name: JYJG

                                   Catalog       Fixed
Procedure Name                     Version    Vs Release    Timestamp      Result
------------------------------ ... ---------- -- ---------- -------------- ------
.- LobNotInObj                 ... 1102000100 <=  *All Rel* 02/20 11:35:11 PASS
.- MissingOIDOnObjCol          ... 1102000100 <=  *All Rel* 02/20 11:35:11 PASS
.- SourceNotInObj              ... 1102000100 <=  *All Rel* 02/20 11:35:11 PASS
.- IndIndparMismatch           ... 1102000100 <= 1102000100 02/20 11:35:12 PASS
.- InvCorrAudit                ... 1102000100 <= 1102000100 02/20 11:35:12 PASS
.- OversizedFiles              ... 1102000100 <=  *All Rel* 02/20 11:35:12 PASS
.- PoorDefaultStorage          ... 1102000100 <=  *All Rel* 02/20 11:35:12 PASS
.- PoorStorage                 ... 1102000100 <=  *All Rel* 02/20 11:35:12 PASS
.- PartSubPartMismatch         ... 1102000100 <= 1102000100 02/20 11:35:12 PASS
.- TabPartCountMismatch        ... 1102000100 <=  *All Rel* 02/20 11:35:12 PASS
.- OrphanedTabComPart          ... 1102000100 <=  *All Rel* 02/20 11:35:12 PASS
.- MissingSum$                 ... 1102000100 <=  *All Rel* 02/20 11:35:12 PASS
.- MissingDir$                 ... 1102000100 <=  *All Rel* 02/20 11:35:12 PASS
.- DuplicateDataobj            ... 1102000100 <=  *All Rel* 02/20 11:35:12 PASS
.- ObjSynMissing               ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- ObjSeqMissing               ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- OrphanedUndo                ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- OrphanedIndex               ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- OrphanedIndexPartition      ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- OrphanedIndexSubPartition   ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- OrphanedTable               ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- OrphanedTablePartition      ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- OrphanedTableSubPartition   ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- MissingPartCol              ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- OrphanedSeg$                ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- OrphanedIndPartObj#         ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- DuplicateBlockUse           ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- FetUet                      ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- Uet0Check                   ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- ExtentlessSeg               ... 1102000100 <= 1102000100 02/20 11:35:13 PASS
.- SeglessUET                  ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- BadInd$                     ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- BadTab$                     ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- BadIcolDepCnt               ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- ObjIndDobj                  ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- TrgAfterUpgrade             ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- ObjType0                    ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- BadOwner                    ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- StmtAuditOnCommit           ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- BadPublicObjects            ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- BadSegFreelist              ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- BadDepends                  ... 1102000100 <=  *All Rel* 02/20 11:35:13 PASS
.- CheckDual                   ... 1102000100 <=  *All Rel* 02/20 11:35:14 PASS
.- ObjectNames                 ... 1102000100 <=  *All Rel* 02/20 11:35:14 PASS
.- BadCboHiLo                  ... 1102000100 <=  *All Rel* 02/20 11:35:14 PASS
.- ChkIotTs                    ... 1102000100 <=  *All Rel* 02/20 11:35:15 PASS
.- NoSegmentIndex              ... 1102000100 <=  *All Rel* 02/20 11:35:15 PASS
.- BadNextObject               ... 1102000100 <=  *All Rel* 02/20 11:35:15 PASS
.- DroppedROTS                 ... 1102000100 <=  *All Rel* 02/20 11:35:15 PASS
.- FilBlkZero                  ... 1102000100 <=  *All Rel* 02/20 11:35:15 PASS
.- DbmsSchemaCopy              ... 1102000100 <=  *All Rel* 02/20 11:35:15 PASS
.- OrphanedObjError            ... 1102000100 >  1102000000 02/20 11:35:15 PASS
.- ObjNotLob                   ... 1102000100 <=  *All Rel* 02/20 11:35:15 PASS
.- MaxControlfSeq              ... 1102000100 <=  *All Rel* 02/20 11:35:15 PASS
.- SegNotInDeferredStg         ... 1102000100 >  1102000000 02/20 11:35:18 PASS
.- SystemNotRfile1             ... 1102000100 >   902000000 02/20 11:35:18 PASS
.- DictOwnNonDefaultSYSTEM     ... 1102000100 <=  *All Rel* 02/20 11:35:19 PASS
.- OrphanTrigger               ... 1102000100 <=  *All Rel* 02/20 11:35:19 PASS
.- ObjNotTrigger               ... 1102000100 <=  *All Rel* 02/20 11:35:19 PASS
---------------------------------------
20-FEB-2021 11:35:19  Elapsed: 8 secs
---------------------------------------
Found 0 potential problem(s) and 0 warning(s)

PL/SQL procedure successfully completed.

Statement processed.

Although the dictionary is clean, consistency checking was bypassed to get this database open, so we advised the customer to perform a logical migration when conditions allow, to remove the residual risk.

Database recovery after a hardware fault


Database crash caused by a hardware fault
A customer's database failed with ORA-00345, ORA-00312, ORA-27070 and OSD-04016 after a hardware fault:

Tue Feb 05 16:58:26 2019
Thread 1 advanced to log sequence 17139 (LGWR switch)
  Current log# 12 seq# 17139 mem# 0: S:\ORADATA\ORCL\REDO12A.LOG
  Current log# 12 seq# 17139 mem# 1: S:\ORADATA\ORCL\REDO12B.LOG
Tue Feb 05 19:47:24 2019
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_lgwr_2420.trc:
ORA-00345: redo log write error block 152097 count 8
ORA-00312: online log 12 thread 1: 'S:\ORADATA\ORCL\REDO12A.LOG'
ORA-27070: async read/write failed
OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 1) 函数不正确。
ORA-00345: redo log write error block 152097 count 8
ORA-00312: online log 12 thread 1: 'S:\ORADATA\ORCL\REDO12B.LOG'
ORA-27070: async read/write failed
OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 1) 函数不正确。
ORA-00345: redo log write error block 152105 count 1
ORA-00312: online log 12 thread 1: 'S:\ORADATA\ORCL\REDO12A.LOG'
ORA-27070: async read/write failed
OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 1) 函数不正确。

Starting the database fails
After the hardware was repaired, starting the database raised ORA-00600 [kcratr_scan_lastbwr]:

Fri Feb 08 20:58:15 2019
alter database mount exclusive
Successful mount of redo thread 1, with mount id 1527506791
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: alter database mount exclusive
alter database open
Beginning crash recovery of 1 threads
Started redo scan
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_ora_3672.trc  (incident=41353):
ORA-00600: ??????, ??: [kcratr_scan_lastbwr], [], [], [], [], [], [], [], [], [], [], []
Incident details in: c:\oracle\diag\rdbms\orcl\orcl\incident\incdir_41353\orcl_ora_3672_i41353.trc
Aborting crash recovery due to error 600
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_ora_3672.trc:
ORA-00600: ??????, ??: [kcratr_scan_lastbwr], [], [], [], [], [], [], [], [], [], [], []
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_ora_3672.trc:
ORA-00600: ??????, ??: [kcratr_scan_lastbwr], [], [], [], [], [], [], [], [], [], [], []
ORA-600 signalled during: alter database open...
Fri Feb 08 20:58:24 2019
Trace dumping is performing id=[cdmp_20190208205824]
Fri Feb 08 20:59:04 2019
alter database open
Beginning crash recovery of 1 threads
Started redo scan
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_ora_1696.trc  (incident=41354):
ORA-00600: 内部错误代码, 参数: [kcratr_scan_lastbwr], [], [], [], [], [], [], [], [], [], [], []
Incident details in: c:\oracle\diag\rdbms\orcl\orcl\incident\incdir_41354\orcl_ora_1696_i41354.trc
Aborting crash recovery due to error 600
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_ora_1696.trc:
ORA-00600: 内部错误代码, 参数: [kcratr_scan_lastbwr], [], [], [], [], [], [], [], [], [], [], []
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_ora_1696.trc:
ORA-00600: 内部错误代码, 参数: [kcratr_scan_lastbwr], [], [], [], [], [], [], [], [], [], [], []
ORA-600 signalled during: alter database open ...

recover database fails
Running recover database raised ORA-00600 [6101], ORA-00600 [kdourp_inorder2], ORA-00600 [ktbsdp1] and ORA-00600 [3020]:

Fri Feb 08 21:09:20 2019
ALTER DATABASE RECOVER  database
Media Recovery Start
 started logmerger process
Parallel Media Recovery started with 4 slaves
Fri Feb 08 21:09:21 2019
Recovery of Online Redo Log: Thread 1 Group 12 Seq 17139 Reading mem 0
  Mem# 0: S:\ORADATA\ORCL\REDO12A.LOG
  Mem# 1: S:\ORADATA\ORCL\REDO12B.LOG
Fri Feb 08 21:09:21 2019
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_pr02_3780.trc  (incident=49379):
ORA-00600: internal error code, arguments: [6101], [17], [21], [0], [], [], [], [], [], [], [], []
Incident details in: c:\oracle\diag\rdbms\orcl\orcl\incident\incdir_49379\orcl_pr02_3780_i49379.trc
Fri Feb 08 21:09:21 2019
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_pr01_2040.trc  (incident=49371):
ORA-00600: internal error code, arguments: [kdourp_inorder2], [34], [0], [0], [44], [], [], [], [], [], [], []
Incident details in: c:\oracle\diag\rdbms\orcl\orcl\incident\incdir_49371\orcl_pr01_2040_i49371.trc
Fri Feb 08 21:09:21 2019
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_pr03_1068.trc  (incident=49387):
ORA-00600: internal error code, arguments: [ktbsdp1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: c:\oracle\diag\rdbms\orcl\orcl\incident\incdir_49387\orcl_pr03_1068_i49387.trc
Fri Feb 08 21:09:24 2019
Trace dumping is performing id=[cdmp_20190208210924]
Slave exiting with ORA-10562 exception
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_pr03_1068.trc:
ORA-10562: Error occurred while applying redo to data block (file# 4, block# 1716972)
ORA-10564: tablespace USERS
ORA-01110: data file 4: 'S:\ORADATA\ORCL\USERS01.DBF'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 204127
ORA-00600: internal error code, arguments: [ktbsdp1], [], [], [], [], [], [], [], [], [], [], []
Slave exiting with ORA-10562 exception
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_pr02_3780.trc:
ORA-10562: Error occurred while applying redo to data block (file# 4, block# 1738552)
ORA-10564: tablespace USERS
ORA-01110: data file 4: 'S:\ORADATA\ORCL\USERS01.DBF'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 211606
ORA-00600: internal error code, arguments: [6101], [17], [21], [0], [], [], [], [], [], [], [], []
Slave exiting with ORA-10562 exception
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_pr01_2040.trc:
ORA-10562: Error occurred while applying redo to data block (file# 4, block# 1725898)
ORA-10564: tablespace USERS
ORA-01110: data file 4: 'S:\ORADATA\ORCL\USERS01.DBF'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 73907
ORA-00600: internal error code, arguments: [kdourp_inorder2], [34], [0], [0], [44], [], [], [], [], [], [], []
Recovery Slave PR03 previously exited with exception 10562
Fri Feb 08 21:09:28 2019
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_pr04_2608.trc  (incident=49395):
ORA-00600: internal error code, arguments: [3020], [4], [1739291], [18516507], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 4, block# 1739291, file offset is 1363369984 bytes)
ORA-10564: tablespace USERS
ORA-01110: data file 4: 'S:\ORADATA\ORCL\USERS01.DBF'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 211552
Incident details in: c:\oracle\diag\rdbms\orcl\orcl\incident\incdir_49395\orcl_pr04_2608_i49395.trc
Slave exiting with ORA-600 exception
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_pr04_2608.trc:
ORA-00600: internal error code, arguments: [3020], [4], [1739291], [18516507], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 4, block# 1739291, file offset is 1363369984 bytes)
ORA-10564: tablespace USERS
ORA-01110: data file 4: 'S:\ORADATA\ORCL\USERS01.DBF'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 211552
Media Recovery failed with error 448
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_pr00_1548.trc:
ORA-00283: recovery session canceled due to errors
ORA-00448: normal completion of background process
Slave exiting with ORA-283 exception
Errors in file c:\oracle\diag\rdbms\orcl\orcl\trace\orcl_pr00_1548.trc:
ORA-00283: recovery session canceled due to errors
ORA-00448: normal completion of background process
ORA-10562 signalled during: ALTER DATABASE RECOVER  database  ...

The root cause of all of the above is the sudden hardware failure: writes were lost, so the redo no longer matches the blocks on disk.

Handling approach

RMAN> recover datafile 1;
启动 recover 于 09-2月 -19
使用通道 ORA_DISK_1
正在开始介质的恢复
介质恢复完成, 用时: 00:00:01
完成 recover 于 09-2月 -19
RMAN> recover datafile 2;
启动 recover 于 09-2月 -19
使用通道 ORA_DISK_1
正在开始介质的恢复
介质恢复完成, 用时: 00:00:01
完成 recover 于 09-2月 -19
RMAN> recover datafile 3;
启动 recover 于 09-2月 -19
使用通道 ORA_DISK_1
正在开始介质的恢复
介质恢复完成, 用时: 00:00:02
完成 recover 于 09-2月 -19
RMAN> recover datafile 4;
启动 recover 于 09-2月 -19
使用通道 ORA_DISK_1
正在开始介质的恢复
无法恢复介质
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: recover 命令 (在 02/09/2019 21:48:19 上) 失败
ORA-00283: recovery session canceled due to errors
RMAN-11003: 在分析/执行 SQL 语句期间失败: alter database recover if needed
 datafile 4
ORA-00283: 恢复会话因错误而取消
ORA-10562: Error occurred while applying redo to data block (file# 4, block# 1725913)
ORA-10564: tablespace USERS
ORA-01110: 数据文件 4: 'S:\ORADATA\ORCL\USERS01.DBF'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 73907
ORA-00600: 内部错误代码, 参数: [kdourp_inorder2], [34], [43], [44], [44], [], [], [], [], [], [], []
SQL> recover datafile 4;
ORA-00283: 恢复会话因错误而取消
ORA-10562: Error occurred while applying redo to data block (file# 4, block# 1725913)
ORA-10564: tablespace USERS
ORA-01110: 数据文件 4: 'S:\ORADATA\ORCL\USERS01.DBF'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 73907
ORA-00600: 内部错误代码, 参数: [kdourp_inorder2], [34], [43], [44], [44], [], [], [], [], [], [], []
--After patching the problem datafile with bbed so that it no longer requires recovery, open the database directly
SQL> alter database open;
数据库已更改。

After the database opened, the data was exported logically, a new database was built, and the data imported into it.
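
The export/import itself is standard Data Pump work; a hedged sketch with placeholder credentials, directory and file names:

expdp system/*** full=y directory=DATA_PUMP_DIR dumpfile=orcl_full_%U.dmp logfile=orcl_full_exp.log parallel=4
impdp system/*** full=y directory=DATA_PUMP_DIR dumpfile=orcl_full_%U.dmp logfile=orcl_full_imp.log parallel=4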

Another recovery of ASM disks formatted as a file system


Yet another customer formatted the ASM disks of a Windows RAC as NTFS (the DATA disk group consisted of three 500 GB disks; the first two were formatted, leaving one intact). After the format they also ran a series of "repairs" (fixing the disk header, re-partitioning and other disk operations), which made the recovery harder and overwrote more data.
Errors in the ASM alert log:

Thu Aug 23 11:20:14 2018
NOTE: ASM client orcl1:orcl disconnected unexpectedly.
NOTE: check client alert log.
NOTE: Process state recorded in trace file d:\app\administrator\diag\asm\+asm\+asm1\trace\+asm1_ora_2260.trc
Thu Aug 23 11:20:28 2018
Errors in file d:\app\administrator\diag\asm\+asm\+asm1\trace\+asm1_lgwr_3820.trc:
ORA-27070: async read/write failed
OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 87) 参数错误。
WARNING: IO Failed. group:2 disk(number.incarnation):1.0xf0f0a1cb disk_path:\\.\ORCLDISKDATA1
	 AU:26 disk_offset(bytes):27566080 io_size:4096 operation:Write type:synchronous
	 result:I/O error process_id:3820
NOTE: unable to write any mirror side for diskgroup DATA
NOTE: cache initiating offline of disk 1 group DATA
NOTE: process 3268:3820 initiating offline of disk 1.4042301899 (DATA_0001) with mask 0x7e in group 2
WARNING: Disk DATA_0001 in mode 0x7f is now being taken offline
NOTE: initiating PST update: grp = 2, dsk = 1/0xf0f0a1cb, mode = 0x15
kfdp_updateDsk(): 22
Thu Aug 23 11:20:28 2018
kfdp_updateDskBg(): 22
ERROR: too many offline disks in PST (grp 2)
WARNING: Disk DATA_0001 in mode 0x7f offline aborted

Errors in the database alert log:

WARNING: IO Failed. group:2 disk(number.incarnation):1.0xf0f0a1cb disk_path:\\.\ORCLDISKDATA1
	 AU:422 disk_offset(bytes):442515456 io_size:16384 operation:Read type:synchronous
	 result:I/O error process_id:11992
WARNING: failed to read mirror side 1 of virtual extent 5 logical extent 0 of file 260 in
group [2.1859146063] from disk DATA_0001  allocation unit 422 reason error; if possible,will try another mirror side
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_ora_11992.trc:
ORA-15080: 与磁盘的同步 I/O 操作失败
WARNING: failed to write mirror side 1 of virtual extent 5 logical extent 0 of file 260
in group 2 on disk 1 allocation unit 422
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_ora_11992.trc:
ORA-00202: 控制文件: ''+DATA/orcl/controlfile/current.260.944422981''
ORA-15081: 无法将 I/O 操作提交到磁盘
Thu Aug 23 11:20:13 2018
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_dbw1_3224.trc:
ORA-27070: 异步读取/写入失败
WARNING: IO Failed. group:2 disk(number.incarnation):1.0xf0f0a1cb disk_path:\\.\ORCLDISKDATA1
	 AU:841 disk_offset(bytes):882532352 io_size:131072 operation:Write type:asynchronous
	 result:I/O error process_id:3224
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_dbw1_3224.trc:
ORA-15080: 与磁盘的同步 I/O 操作失败
WARNING: failed to write mirror side 1 of virtual extent 240 logical extent 0 of file 259 in group 2 on disk 1
allocation unit 841
KCF: read, write or open error, block=0x7853 online=1
        file=4 '+DATA/orcl/datafile/users.259.944422883'
        error=15081 txt: ''
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_dbw1_3224.trc:
ORA-27070: 异步读取/写入失败
OSD-04006: ReadFile() 失败, 无法读取文件
O/S-Error: (OS 87) 参数错误。
WARNING: IO Failed. group:2 disk(number.incarnation):1.0xf0f0a1cb disk_path:\\.\ORCLDISKDATA1
	 AU:422 disk_offset(bytes):442515456 io_size:16384 operation:Read type:synchronous
	 result:I/O error process_id:3224
WARNING: failed to read mirror side 1 of virtual extent 5 logical extent 0 of file 260 in group [2.1859146063] from
disk DATA_0001  allocation unit 422 reason error; if possible,will try another mirror side
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_dbw1_3224.trc:
ORA-15080: 与磁盘的同步 I/O 操作失败
WARNING: failed to write mirror side 1 of virtual extent 5 logical extent 0 of file 260 in group 2 on disk 1
allocation unit 422
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_dbw1_3224.trc:
ORA-00202: 控制文件: ''+DATA/orcl/controlfile/current.260.944422981''
ORA-15081: 无法将 I/O 操作提交到磁盘
Errors in file d:\app\administrator\diag\rdbms\orcl\orcl1\trace\orcl1_dbw1_3224.trc:
ORA-00204: 读取控制文件时出错 (块 41, # 块 1)
ORA-00202: 控制文件: ''+DATA/orcl/controlfile/current.260.944422981''
ORA-15081: 无法将 I/O 操作提交到磁盘
DBW1 (ospid: 3224): terminating the instance due to error 204

Because of the customer's various recovery attempts, not all of the disks even show up:

D:\>asmtool -list
NTFS                             \Device\Harddisk0\Partition1              100M
NTFS                             \Device\Harddisk0\Partition2           102298M
NTFS                             \Device\Harddisk1\Partition1           102397M
NTFS                             \Device\Harddisk2\Partition1           204797M
---one more disk is missing from this listing
ORCLDISKDATA10                   \Device\Harddisk4\Partition1           511997M--the disk the customer tried to repair
ORCLDISKDATA2                    \Device\Harddisk5\Partition1           511997M
ORCLDISKRECOVERY0                \Device\Harddisk6\Partition1            51197M
ORCLDISKRECOVERY1                \Device\Harddisk7\Partition1            51197M
ORCLDISKRECOVERY2                \Device\Harddisk8\Partition1            51197M
ORCLDISKCRS0                     \Device\Harddisk9\Partition1            10237M
ORCLDISKCRS1                     \Device\Harddisk10\Partition1           10237M
ORCLDISKCRS2                     \Device\Harddisk11\Partition1           10237M
NTFS                             \Device\Harddisk12\Partition2         4194174M

After activating the volume at the OS level, deleting the bogus partition and a few similar steps, the disk headers were reconstructed with kfed (see the sketch below) so that the disks show up normally at the OS level again.
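
The exact commands are not shown in the post; a sketch of the kind of kfed work involved, with device names taken from the listing below and file names as placeholders (illustrative only, not the actual steps used):

kfed read \\.\ORCLDISKDATA2 aun=0 blkn=0 text=data2_hdr.txt
kfed repair \\.\ORCLDISKDATA1
kfed merge \\.\ORCLDISKDATA0 aun=0 blkn=0 text=data0_hdr.txt

The first command dumps the header of the surviving disk as a text template; kfed repair restores a header from the backup copy kept in allocation unit 1 (when that copy survives); kfed merge writes an edited header text back onto a formatted disk. With the headers back, asmtool lists all three DATA disks again: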

C:\Users\Administrator>asmtool -list
NTFS                             \Device\Harddisk0\Partition1              100M
NTFS                             \Device\Harddisk0\Partition2           102298M
NTFS                             \Device\Harddisk1\Partition1           102397M
NTFS                             \Device\Harddisk2\Partition1           204797M
------disks that needed repair------
ORCLDISKDATA0                    \Device\Harddisk3\Partition1           511997M
ORCLDISKDATA1                    \Device\Harddisk4\Partition1           511997M
ORCLDISKDATA2                    \Device\Harddisk5\Partition1           511997M
-----------------------
ORCLDISKRECOVERY0                \Device\Harddisk6\Partition1            51197M
ORCLDISKRECOVERY1                \Device\Harddisk7\Partition1            51197M
ORCLDISKRECOVERY2                \Device\Harddisk8\Partition1            51197M
ORCLDISKCRS0                     \Device\Harddisk9\Partition1            10237M
ORCLDISKCRS1                     \Device\Harddisk10\Partition1           10237M
ORCLDISKCRS2                     \Device\Harddisk11\Partition1           10237M
NTFS                             \Device\Harddisk12\Partition2         4194174M

Because the AUs holding the ASM disk group's internal directories were completely destroyed, the data could not be copied out through ASM. The disks were scanned at the raw level and the data reassembled allocation unit by allocation unit; some AUs had been overwritten by the NTFS format and the later mis-operations, but all the remaining data was recovered, i.e. the vast majority of the data was saved.
For the datafile recovery see: complete ASM disk header corruption recovery.
A similar earlier recovery on Windows: ASM disk formatted as NTFS recovery.

Automatic datafile offline due to write error on


A storage array dropped offline unexpectedly, the data files on it became inaccessible, and some of them were automatically taken offline:

Thu May 17 14:49:03 2018
KCF: read, write or open error, block=0xe93b8 online=1
Thu May 17 14:49:03 2018
KCF: read, write or open error, block=0x24eb65 online=1
        file=25 'F:\ORACLE\ORADATA\ORCL\QYSCZH12.ORA'
        file=28 'F:\ORACLE\ORADATA\ORCL\QYSCZH15.ORA'
        error=27070 txt: 'OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 2) 系统找不到指定的文件。'
Automatic datafile offline due to write error on
file 25: F:\ORACLE\ORADATA\ORCL\QYSCZH12.ORA
Thu May 17 14:49:03 2018
KCF: read, write or open error, block=0x22b0a1 online=1
        file=28 'F:\ORACLE\ORADATA\ORCL\QYSCZH15.ORA'
        error=27070 txt: 'OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 2) 系统找不到指定的文件。'
Automatic datafile offline due to write error on
file 28: F:\ORACLE\ORADATA\ORCL\QYSCZH15.ORA
Thu May 17 14:49:03 2018
KCF: read, write or open error, block=0x138def online=1
        file=11 'F:\ORACLE\ORADATA\ORCL\QYSCZH4'
        error=27070 txt: 'OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 2) 系统找不到指定的文件。'
        file=30 'F:\ORACLE\ORADATA\ORCL\QYSCZH17.ORA'
        file=11 'F:\ORACLE\ORADATA\ORCL\QYSCZH4'
        error=27070 txt: 'OSD-04016: 异步 I/O 请求排队时出错。
        error=27070 txt: 'OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 2) 系统找不到指定的文件。'
O/S-Error: (OS 2) 系统找不到指定的文件。'
……
        file=15 'F:\ORACLE\ORADATA\ORCL\QYSCZH6.ORA'
        error=27072 txt: 'OSD-04008: WriteFile() 失败, 无法写入文件
O/S-Error: (OS 21) 设备未就绪。'
Automatic datafile offline due to write error on
file 15: F:\ORACLE\ORADATA\ORCL\QYSCZH6.ORA
KCF: read, write or open error, block=0xade96 online=1
        file=9 'F:\ORACLE\ORADATA\ORCL\QYSCZH2'
        error=27072 txt: 'OSD-04008: WriteFile() 失败, 无法写入文件
O/S-Error: (OS 21) 设备未就绪。'
Automatic datafile offline due to write error on
file 9: F:\ORACLE\ORADATA\ORCL\QYSCZH2
Thu May 17 14:49:28 2018
KCF: read, write or open error, block=0x378c66 online=1
        file=15 'F:\ORACLE\ORADATA\ORCL\QYSCZH6.ORA'
        error=27072 txt: 'OSD-04008: WriteFile() 失败, 无法写入文件
O/S-Error: (OS 21) 设备未就绪。'
Automatic datafile offline due to write error on
file 15: F:\ORACLE\ORADATA\ORCL\QYSCZH6.ORA
KCF: read, write or open error, block=0x35f6de online=1
……

The storage going offline was only the start of the trouble. The database runs in archivelog mode, so once the storage came back a recover datafile followed by online should have been straightforward. But the problem was not handled promptly (presumably the application is not very time-critical, so it could sit there for hours unnoticed), and a second mishap followed: the scheduled job that purges archived logs deleted the very archive logs these files needed. So after the storage was remounted, the data files could no longer be brought online.
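
Had the archived logs still been available, the routine fix would have been a per-file recover and online, for example for file 25:

SQL> recover datafile 'F:\ORACLE\ORADATA\ORCL\QYSCZH12.ORA';
SQL> alter database datafile 'F:\ORACLE\ORADATA\ORCL\QYSCZH12.ORA' online;

Instead, recovery asks for an archived log that the purge job has already deleted: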

Tue May 22 16:28:13 2018
ALTER DATABASE RECOVER  datafile 'F:\ORACLE\ORADATA\ORCL\QYSCZH'
Media Recovery Start
Serial Media Recovery started
ORA-279 signalled during: ALTER DATABASE RECOVER  datafile 'F:\ORACLE\ORADATA\ORCL\QYSCZH'  ...
Tue May 22 16:28:42 2018
ALTER DATABASE RECOVER    CONTINUE DEFAULT
Media Recovery Log D:\ORALCE\ADMINISTRATOR\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2018_05_22\O1_MF_1_267346_%U_.ARC
Errors with log D:\ORALCE\ADMINISTRATOR\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2018_05_22\O1_MF_1_267346_%U_.ARC
ORA-308 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
ALTER DATABASE RECOVER    CONTINUE DEFAULT
Media Recovery Log D:\ORALCE\ADMINISTRATOR\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2018_05_22\O1_MF_1_267346_%U_.ARC
Errors with log D:\ORALCE\ADMINISTRATOR\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2018_05_22\O1_MF_1_267346_%U_.ARC
ORA-308 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...

Running the Oracle Database Recovery Check script produced the following result:
(screenshot: recovery check output)


In this situation conventional methods cannot recover the files; they have to be forced online with bbed or similar techniques. Because the storage dropped offline abruptly, a database recovered this way is likely to need a lot of follow-up work, most commonly tables that are inconsistent with their indexes and segment headers that no longer match the actual extent information.

OSD-04016: error queuing an asynchronous I/O request


A customer's database would not start after a hardware fault, and we were asked to step in.
Errors on database startup:

Mon Feb 26 17:28:24 2018
ALTER DATABASE OPEN
Beginning crash recovery of 1 threads
 parallel recovery started with 3 processes
Started redo scan
Completed redo scan
 read 2054 KB redo, 509 data blocks need recovery
Started redo application at
 Thread 1: logseq 41341, block 54
Recovery of Online Redo Log: Thread 1 Group 1 Seq 41341 Reading mem 0
  Mem# 0: E:\ORADATA\ORCL\REDO01.LOG
Completed redo application of 1.77MB
KCF: read, write or open error, block=0x16439 online=1
        file=1 'E:\ORADATA\ORCL\SYSTEM01.DBF'
        error=27070 txt: 'OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 1) 函数不正确。'
Errors in file d:\oracle\diag\rdbms\orcl\orcl\trace\orcl_dbw0_4928.trc:
ORA-01243: system tablespace file suffered media failure
ORA-01114: IO error writing block to file 1 (block # 91193)
ORA-01110: data file 1: 'E:\ORADATA\ORCL\SYSTEM01.DBF'
ORA-27070: async read/write failed
OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 1) 函数不正确。
DBW0 (ospid: 4928): terminating the instance due to error 1243
Mon Feb 26 17:28:29 2018
Instance terminated by DBW0, pid = 4928



The error here is fairly clear: because of the I/O error, a block that instance recovery needs to write back sits exactly at the damaged spot, so instance recovery cannot complete and the database cannot open.

Verifying the file with dbv
After the system datafile was rebuilt with a specialist tool, dbv was run against it and reported the following:
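
The exact command line is not shown in the post; it would have been along the lines of (output follows below):

dbv file=D:\OK\SYSTEM01.DBF blocksize=8192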

DBVERIFY: Release 11.2.0.1.0 - Production on 星期二 2月 27 17:20:14 2018
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
DBVERIFY - 开始验证: FILE = D:\OK\SYSTEM01.DBF
页 91156 标记为损坏
Corrupt block relative dba: 0x00416414 (file 1, block 91156)
Bad header found during dbv:
Data in bad block:
 type: 229 format: 6 rdba: 0xe1d9e3e7
 last change scn: 0xd682.ffc8c7cb seq: 0x8c flg: 0x8c
 spare1: 0xc0 spare2: 0xf6 spare3: 0x70b3
 consistency value in tail: 0x71f50602
 check value in block header: 0x8195
 computed block checksum: 0x8689
DBVERIFY - 验证完成
检查的页总数: 215040
处理的页总数 (数据): 178112
失败的页总数 (数据): 0
处理的页总数 (索引): 19070
失败的页总数 (索引): 0
处理的页总数 (其他): 3118
处理的总页数 (段)  : 1
失败的总页数 (段)  : 0
空的页总数: 14739
标记为损坏的总页数: 1
流入的页总数: 0
加密的总页数        : 0
最高块 SCN            : 1638554501 (0.1638554501)

Clearly the rebuilt system file now has only one corrupt block, block 91156; note that this is not the same corrupt block as before the rebuild.

Identifying the object that owns the bad block from a block dump
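
The dumps below come from the standard datafile block dump; a sketch of the statement for the corrupt block and its two neighbours (the trace shows them dumped one block at a time):

SQL> alter system dump datafile 'D:\OK\SYSTEM01.DBF' block min 91155 block max 91157;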

Start dump data block from file D:\OK\SYSTEM01.DBF minblk 91156 maxblk 91156
 V10 STYLE FILE HEADER:
	Compatibility Vsn = 186646528=0xb200000
	Db ID=1383974140=0x527dc4fc, Db Name='ORCL'
	Activation ID=0=0x0
	Control Seq=806694=0xc4f26, File size=215040=0x34800
	File Number=1, Blksiz=8192, File Type=3 DATA
Dump all the blocks in range:
buffer tsn: 0 rdba: 0xe1d9e3e7 (903/1696743)
scn: 0xd682.ffc8c7cb seq: 0x8c flg: 0x8c tail: 0x71f50602
frmt: 0x06 chkval: 0x8195 type: 0xe5=unknown
Hex dump of corrupt header 4 = CORRUPT
Start dump data block from file D:\OK\SYSTEM01.DBF minblk 91155 maxblk 91155
 V10 STYLE FILE HEADER:
	Compatibility Vsn = 186646528=0xb200000
	Db ID=1383974140=0x527dc4fc, Db Name='ORCL'
	Activation ID=0=0x0
	Control Seq=806694=0xc4f26, File size=215040=0x34800
	File Number=1, Blksiz=8192, File Type=3 DATA
Dump all the blocks in range:
buffer tsn: 0 rdba: 0x00416413 (1/91155)
scn: 0x0000.613ad8d6 seq: 0x01 flg: 0x06 tail: 0xd8d60601
frmt: 0x02 chkval: 0xcc5f type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Block header dump:  0x00416413
 Object id on Block? Y
 seg/obj: 0x25  csc: 0x00.613ad8ce  itc: 2  flg: -  typ: 2 - INDEX
     fsl: 0  fnx: 0x0 ver: 0x01
 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0009.000.0016fda0  0x00c0130e.6f38.01  CB--    0  scn 0x0000.3e6ed294
0x02   0x0009.008.0023e862  0x00c002cf.a217.13  --U-    1  fsc 0x0000.613ad8d6
Start dump data block from file D:\OK\SYSTEM01.DBF minblk 91157 maxblk 91157
 V10 STYLE FILE HEADER:
	Compatibility Vsn = 186646528=0xb200000
	Db ID=1383974140=0x527dc4fc, Db Name='ORCL'
	Activation ID=0=0x0
	Control Seq=806694=0xc4f26, File size=215040=0x34800
	File Number=1, Blksiz=8192, File Type=3 DATA
Dump all the blocks in range:
buffer tsn: 0 rdba: 0x00416415 (1/91157)
scn: 0x0000.6193dc0c seq: 0x01 flg: 0x06 tail: 0xdc0c0601
frmt: 0x02 chkval: 0x8c21 type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Block header dump:  0x00416415
 Object id on Block? Y
 seg/obj: 0x25  csc: 0x00.6193dc04  itc: 2  flg: -  typ: 2 - INDEX
     fsl: 0  fnx: 0x0 ver: 0x01
 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0006.00c.001ee103  0x00c008a5.8dbd.02  C---    0  scn 0x0000.57303e03
0x02   0x000b.00f.00053100  0x00c008af.1563.09  --U-    1  fsc 0x0000.6193dc0c

Clearly the bad block is an index block with object id 0x25 = 37; a lookup in another database confirms this is I_OBJ2 (an index on OBJ$).
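
The cross-check against another database of the same version is a one-line query:

SQL> select owner, object_name, object_type from dba_objects where object_id = 37;

which on a healthy database returns SYS.I_OBJ2 of type INDEX, matching the identification above.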

Starting the database with this file

SQL> alter database rename file 'E:\ORADATA\ORCL\SYSTEM01.DBF' to 'd:\orcl\SYSTEM01.DBF';
数据库已更改。
SQL> recover database;
完成介质恢复。
SQL> alter database open;
数据库已更改。

Checking the alert log
Because the corrupt block is still present, the SMON process throws a stream of errors; this needs to be handled, otherwise the database will crash again after a while.

Tue Feb 27 20:31:23 2018
QMNC started with pid=27, OS id=4652
Completed: alter database open
Tue Feb 27 20:31:25 2018
Starting background process CJQ0
Tue Feb 27 20:31:25 2018
CJQ0 started with pid=30, OS id=2172
Tue Feb 27 20:31:25 2018
db_recovery_file_dest_size of 3912 MB is 0.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Hex dump of (file 1, block 91156) in trace file d:\oracle\diag\rdbms\orcl\orcl\trace\orcl_cjq0_2172.trc
Corrupt block relative dba: 0x00416414 (file 1, block 91156)
Bad header found during multiblock buffer read
Data in bad block:
 type: 229 format: 6 rdba: 0xe1d9e3e7
 last change scn: 0xd682.ffc8c7cb seq: 0x8c flg: 0x8c
 spare1: 0xc0 spare2: 0xf6 spare3: 0x70b3
 consistency value in tail: 0x71f50602
 check value in block header: 0x8195
 computed block checksum: 0x8689
Reading datafile 'D:\ORCL\SYSTEM01.DBF' for corruption at rdba: 0x00416414 (file 1, block 91156)
Reread (file 1, block 91156) found same corrupt data
Errors in file d:\oracle\diag\rdbms\orcl\orcl\trace\orcl_smon_3992.trc  (incident=77085):
ORA-01578: ORACLE data block corrupted (file # 1, block # 91156)
ORA-01110: data file 1: 'D:\ORCL\SYSTEM01.DBF'
Incident details in: d:\oracle\diag\rdbms\orcl\orcl\incident\incdir_77085\orcl_smon_3992_i77085.trc
Errors in file d:\oracle\diag\rdbms\orcl\orcl\trace\orcl_cjq0_2172.trc  (incident=77221):
ORA-01578: ORACLE data block corrupted (file # 1, block # 91156)
ORA-01110: data file 1: 'D:\ORCL\SYSTEM01.DBF'
Incident details in: d:\oracle\diag\rdbms\orcl\orcl\incident\incdir_77221\orcl_cjq0_2172_i77221.trc

Rebuild the I_OBJ2 index; see: http://www.xifenfei.com/?p=5566

A lucky OSD-04016 O/S-Error recovery


A database suffered an abnormal power-off, after which disk I/O returned O/S-Error: (OS 23) data error (cyclic redundancy check), and datafile 6 could not complete instance recovery. dbv on the file reported the same kind of error, as did an attempt to copy it. Even dd could only copy 81951 blocks before hitting the error.

Tue May 14 15:32:10 2013
Completed redo scan
 16941 redo blocks read, 1106 data blocks need recovery
Tue May 14 15:32:17 2013
Errors in file d:\oracle\product\10.2.0\admin\water\bdump\water_p002_1472.trc:
ORA-01115: IO error reading block from file 6 (block # 81951)
ORA-01110: data file 6: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\WATER\YD_DATA01.DBF'
ORA-27070: async read/write failed
OSD-04016: 异步 I/O 请求排队时出错。
O/S-Error: (OS 23) 数据错误(循环冗余检查)。

The database had a backup from one day earlier and the customer only needed three core tables recovered. Analysis of the data dictionary showed that none of the blocks of those tables lie beyond block 81951, so if only that one block was damaged we could skip it, copy the remaining blocks, and still recover the data. Since the damage was in the middle of the file, dd was used to copy the healthy parts:

dd if=D:\ORACLE\PRODUCT\10.2.0\ORADATA\WATER\YD_DATA01.DBF bs=8192 count=81951 of=h:\dd\yd_data01_1.dbf
dd if=D:\ORACLE\PRODUCT\10.2.0\ORADATA\WATER\YD_DATA01.DBF bs=8192  skip=81952   of=h:\dd\yd_data01_2.dbf

After the two pieces were dd'd out, and because block 81952 was skipped (block 0 is not recorded by the database), a replacement block had to be built with dd and the two pieces merged back into one file:

dd if=/dev/zero of=h:\dd\yd_data01_1.dbf seek=81951 bs=8192 count=1
dd if=h:\dd\yd_data01_2.dbf seek=81952 bs=8192 of=h:\dd\yd_data01_1.dbf

The three core tables the customer needed were then extracted with DUL, which completed the recovery.
For this case, had the requirement been to open the database, that could in principle also have been achieved with hidden parameters, bbed and the like.
This recovery was fairly lucky: 1. the customer had an exp dump from one day earlier and only needed three core tables back; 2. none of the data of those three tables happened to sit in the damaged block; 3. only a single block of the database was damaged.
In a less lucky case, the disk would have to be recovered first, then the database, and finally the data pieced together.
In short, a reminder once more: database backups matter, a lot. For systems that cannot afford to lose data, back up with RMAN; never rely on exp/expdp alone.