ORA-01172 ORA-01151 Fault Handling

Contact: Mobile/WeChat (+86 17813235971), QQ (107644445), consult XiFenFei (惜分飞)

Author: XiFenFei (惜分飞) © All rights reserved. [Reproduction in any form without the author's consent is prohibited; the author reserves the right to pursue further legal action.]

I took over a case in which a database could not start after a hardware failure on the customer's cloud platform. Analysis of the alert log showed that during OPEN the database reported ORA-01172: recovery of thread 1 stuck at block 2220167 of file 262 and ORA-01151: use media recovery to recover block, restore backup if needed. In other words, crash (instance) recovery hit logically corrupt blocks and could not complete.

Sat Nov 01 14:29:10 2025
ALTER DATABASE OPEN
Beginning crash recovery of 1 threads
 parallel recovery started with 15 processes
Started redo scan
Completed redo scan
 read 7034 KB redo, 937 data blocks need recovery
Started redo application at
 Thread 1: logseq 296553, block 389408
Recovery of Online Redo Log: Thread 1 Group 3 Seq 296553 Reading mem 0
  Mem# 0: /data/orcl/onlinelog/redo03a.log
Sat Nov 01 14:29:11 2025
Hex dump of (file 262, block 2220584) in trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_p009_2648.trc
Sat Nov 01 14:29:11 2025
Hex dump of (file 262, block 2218886) in trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_p007_2644.trc
Reading datafile '/data/orcl/datafile/xifenfei12.dbf' for corruption at rdba: 0x41a1db86 (file 262, block 2218886)
Reading datafile '/data/orcl/datafile/xifenfei12.dbf' for corruption at rdba: 0x41a1e228 (file 262, block 2220584)
Sat Nov 01 14:29:11 2025
Hex dump of (file 262, block 2219845) in trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_p008_2646.trc
Reading datafile '/data/orcl/datafile/xifenfei12.dbf' for corruption at rdba: 0x41a1df45 (file 262, block 2219845)
Sat Nov 01 14:29:11 2025
Hex dump of (file 262, block 2220167) in trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_p001_2632.trc
Reading datafile '/data/orcl/datafile/xifenfei12.dbf' for corruption at rdba: 0x41a1e087 (file 262, block 2220167)
Reread (file 262, block 2218886) found same corrupt data (logically corrupt)
RECOVERY OF THREAD 1 STUCK AT BLOCK 2218886 OF FILE 262
Reread (file 262, block 2220584) found same corrupt data (logically corrupt)
RECOVERY OF THREAD 1 STUCK AT BLOCK 2220584 OF FILE 262
Reread (file 262, block 2219845) found same corrupt data (logically corrupt)
RECOVERY OF THREAD 1 STUCK AT BLOCK 2219845 OF FILE 262
Reread (file 262, block 2220167) found same corrupt data (logically corrupt)
RECOVERY OF THREAD 1 STUCK AT BLOCK 2220167 OF FILE 262
Sat Nov 01 14:29:26 2025
Slave exiting with ORA-1172 exception
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_p007_2644.trc:
ORA-01172: recovery of thread 1 stuck at block 2218886 of file 262
ORA-01151: use media recovery to recover block, restore backup if needed
Sat Nov 01 14:29:26 2025
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_p008_2646.trc:
ORA-10388: parallel query server interrupt (failure)
Sat Nov 01 14:29:26 2025
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_p009_2648.trc:
ORA-10388: parallel query server interrupt (failure)
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_p008_2646.trc:
ORA-10388: parallel query server interrupt (failure)
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_p009_2648.trc:
ORA-10388: parallel query server interrupt (failure)
Sat Nov 01 14:29:26 2025
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_p001_2632.trc:
ORA-10388: parallel query server interrupt (failure)
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_p001_2632.trc:
ORA-10388: parallel query server interrupt (failure)
Sat Nov 01 14:29:26 2025
Aborting crash recovery due to slave death, attempting serial crash recovery
Beginning crash recovery of 1 threads
Started redo scan
Completed redo scan
 read 7034 KB redo, 937 data blocks need recovery
Started redo application at
 Thread 1: logseq 296553, block 389408
Recovery of Online Redo Log: Thread 1 Group 3 Seq 296553 Reading mem 0
  Mem# 0: /data/orcl/onlinelog/redo03a.log
Hex dump of (file 262, block 2220167) in trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2606.trc
Reading datafile '/data/orcl/datafile/xifenfei12.dbf' for corruption at rdba: 0x41a1e087 (file 262, block 2220167)
Reread (file 262, block 2220167) found same corrupt data (logically corrupt)
RECOVERY OF THREAD 1 STUCK AT BLOCK 2220167 OF FILE 262
Aborting crash recovery due to error 1172
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2606.trc:
ORA-01172: recovery of thread 1 stuck at block 2220167 of file 262
ORA-01151: use media recovery to recover block, restore backup if needed
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2606.trc:
ORA-01172: recovery of thread 1 stuck at block 2220167 of file 262
ORA-01151: use media recovery to recover block, restore backup if needed
ORA-1172 signalled during: ALTER DATABASE OPEN...

After taking over the case I tried RECOVER DATABASE, which failed with an ORA-600 [4552] error:

SQL> recover database;
ORA-10562: Error occurred while applying redo to data block (file# 262, block#
2222153)
ORA-10564: tablespace XIFENFEI
ORA-01110: data file 262: '/data/orcl/datafile/xifenfei12.dbf'
ORA-10560: block type '0'
ORA-00600: internal error code, arguments: [4552], [1], [0], [], [], [], [],
[], [], [], [], []

The corresponding ORA-600 [4552] entries in the alert log:

Sat Nov 01 17:49:58 2025
ALTER DATABASE RECOVER  database  
Media Recovery Start
 started logmerger process
Parallel Media Recovery started with 16 slaves
Sat Nov 01 17:49:59 2025
Recovery of Online Redo Log: Thread 1 Group 3 Seq 296553 Reading mem 0
  Mem# 0: /data/orcl/onlinelog/redo03a.log
Sat Nov 01 17:49:59 2025
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_pr0c_28770.trc  (incident=1018821):
ORA-00600: internal error code, arguments: [4552], [1], [0], [], [], [], [], [], [], [], [], []
Incident details in:/u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_1018821/orcl_pr0c_28770_i1018821.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Sat Nov 01 17:50:03 2025
Sweep [inc][1018821]: completed
Sweep [inc2][1018821]: completed
Slave exiting with ORA-10562 exception
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_pr0c_28770.trc:
ORA-10562: Error occurred while applying redo to data block (file# 262, block# 2222153)
ORA-10564: tablespace xifenfei
ORA-01110: data file 262: '/data/orcl/datafile/xifenfei12.dbf'
ORA-10560: block type '0'
ORA-00600: internal error code, arguments: [4552], [1], [0], [], [], [], [], [], [], [], [], []
Recovery Slave PR0C previously exited with exception 10562
Media Recovery failed with error 448
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_pr00_28732.trc:
ORA-00283: recovery session canceled due to errors
ORA-00448: normal completion of background process
ORA-10562 signalled during: ALTER DATABASE RECOVER  database  ...

Since a database-level recover was not possible, I tried recovering the datafiles one by one.
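
The set of files that still require media recovery can be listed from v$recover_file while the database is mounted; a minimal sketch, not part of the original session:

-- list datafiles that still need media recovery, with the reason and stuck SCN
SELECT file#, online_status, error, change#
  FROM v$recover_file
 ORDER BY file#;

In a case like this it would be expected to end up showing only file 262 once the other files had been recovered.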

SQL> recover datafile 1;
Media recovery complete.
…………
SQL> recover datafile 22,23,24,26,25,27,28,29,30;
Media recovery complete.
…………
SQL>    recover datafile  251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261;
Media recovery complete.
SQL> recover datafile 262;
ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [3020], [262], [2215808],
[1101123456], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 262, block# 2215808,
file offset is 972029952 bytes)
ORA-10564: tablespace XIFENFEI
ORA-01110: data file 262: '/data/orcl/datafile/xifenfei12.dbf'
ORA-10560: block type '0'

SQL> recover datafile 263;
Media recovery complete.

Except for datafile file# 262, all files recovered successfully. The alert log entries for the ORA-600 [3020] error on file 262:

Sat Nov 01 17:53:37 2025
ALTER DATABASE RECOVER  datafile 262  
Media Recovery Start
Serial Media Recovery started
Recovery of Online Redo Log: Thread 1 Group 3 Seq 296553 Reading mem 0
  Mem# 0: /data/orcl/onlinelog/redo03a.log
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_28561.trc  (incident=1018717):
ORA-00600: internal error code, arguments: [3020], [262], [2215808], [1101123456], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 262, block# 2215808, file offset is 972029952 bytes)
ORA-10564: tablespace xifenfei
ORA-01110: data file 262: '/data/orcl/datafile/xifenfei12.dbf'
ORA-10560: block type '0'
Incident details in:/u01/app/oracle/diag/rdbms/orcl/orcl/incident/incdir_1018717/orcl_ora_28561_i1018717.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Media Recovery failed with error 600
ORA-283 signalled during: ALTER DATABASE RECOVER  datafile 262  ...

For this situation there are two ways to proceed:
1) During recovery, mark the failing blocks as corrupt and continue, so that redo application completes normally, and then repair the marked corrupt blocks afterwards (see the sketch after this list).
2) Modify the datafile header directly so that redo application to these blocks of the file is skipped, effectively fooling the database.
In this case the customer urgently needed the business back online, so the second approach was taken; I used my in-house m_scn (modify_scn) tool to quickly modify the relevant datafile header information.
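
For reference, approach 1 maps onto Oracle's standard ALLOW ... CORRUPTION clause of the RECOVER command (the same technique is used in the second case later in this post); a minimal sketch of what it would look like here, not what was actually run:

SQL> recover datafile 262 allow 1 corruption;

The command can be repeated (or the corruption count raised) until redo application completes; the blocks that were marked corrupt are then located and repaired afterwards.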

[oracle@host-172-18-50-10 tmp]$ cat 1.txt
1@/data/orcl/datafile/system01.dbf
262@/data/orcl/datafile/xifenfei12.dbf
[oracle@host-172-18-50-10 tmp]$ ./m_scn 1.txt
Please Enter Password: 

===== Starting Datafile Header modification program =====
Datafile list file: 1.txt
Operation Mode: Only Modify Datafile Header CheckPoint
Block Size: 8192
Log Path: /tmp/modify_scn
---------------------------------------------------------
Preparing Datafile list file...
Verifying Datafile existence...
Datafile verification passed
Initializing working directory...
Recovery script created: /tmp/modify_scn/backup/recover_datafile.sh
---------------------------------------------------------
Starting Datafile Header processing (total 2 files)...
[1/2] Processing Datafile Header: /data/orcl/datafile/system01.dbf (File number: 1)
  - Skipping file number 1 (control file)
---------------------------------------------------------
[2/2] Processing Datafile Header: /data/orcl/datafile/xifenfei12.dbf (File number: 262)
  - Backing up Datafile header...
  - Executing Datafile Header modification with block size 8192...
  - Datafile Header processing completed
---------------------------------------------------------
Cleaning up temporary files...
================= All operations completed =================

Note: Execute /tmp/modify_scn/backup/recover_datafile.sh operation for rollback

I then queried the relevant SCN information to confirm the modified file headers looked correct, and tried to recover datafile 262:

[oracle@host-172-18-50-10 tmp]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat Nov 1 18:02:18 2025

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 4.1823E+10 bytes
Fixed Size                  2262368 bytes
Variable Size            4294970016 bytes
Database Buffers         3.7447E+10 bytes
Redo Buffers               78614528 bytes
Database mounted.
SQL> set pages 10000
SQL> set numw 16
SQL> SELECT status,
            checkpoint_change#,
            to_char(checkpoint_time,'yyyy-mm-dd hh24:mi:ss') checkpoint_time,
            last_change#,
            count(*) ROW_NUM
       FROM v$datafile
      GROUP BY status, checkpoint_change#, checkpoint_time, last_change#
      ORDER BY status, checkpoint_change#, checkpoint_time;

set numw 16
col CHECKPOINT_TIME for a40
set lines 150
set pages 1000
SELECT status,
to_char(checkpoint_time,'yyyy-mm-dd hh24:mi:ss') checkpoint_time,FUZZY,checkpoint_change#,
count(*) ROW_NUM
FROM v$datafile_header
GROUP BY status, checkpoint_change#, to_char(checkpoint_time,'yyyy-mm-dd hh24:mi:ss'),fuzzy
ORDER BY status, checkpoint_change#, checkpoint_time;

SELECT dd.FILE#,
dd.NAME,
dd.STATUS,
dd.checkpoint_change# dfile_chkp_change,
dh.checkpoint_change# dfile_hed_chkp_change,
dh.recover,
dh.fuzzy
FROM v$datafile dd,
v$datafile_header dh
WHERE dd.FILE#=dh.FILE#
AND dd.checkpoint_change#<>dh.checkpoint_change#;


STATUS  CHECKPOINT_CHANGE# CHECKPOINT_TIME         LAST_CHANGE#          ROW_NUM
------- ------------------ ------------------- ---------------- ----------------
ONLINE      16816934458875 2025-11-01 17:58:28   16816934458875              258
RECOVER     16816934368799 2025-11-01 05:29:39   16816934456943                1
SYSTEM      16816934458875 2025-11-01 17:58:28   16816934458875                4

STATUS  CHECKPOINT_TIME                          FUZ CHECKPOINT_CHANGE#          ROW_NUM
------- ---------------------------------------- --- ------------------ ----------------
OFFLINE 2025-11-01 17:58:28                      NO      16816934458875                1
ONLINE  2025-11-01 17:58:28                      NO      16816934458875              262

           FILE#
----------------
NAME
----------------------------------------------------------------------------------
STATUS  DFILE_CHKP_CHANGE DFILE_HED_CHKP_CHANGE REC FUZ
------- ----------------- --------------------- --- ---
             262
/data/orcl/datafile/xifenfei12.dbf
RECOVER    16816934368799        16816934458875 YES NO

SQL> recover datafile 262;
Media recovery complete.

The database then opened successfully:

SQL> alter database open;

Database altered.
Sat Nov 01 18:06:00 2025
ALTER DATABASE OPEN
Thread 1 opened at log sequence 296554
  Current log# 1 seq# 296554 mem# 0: /data/orcl/onlinelog/redo01a.log
Successful open of redo thread 1
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
SMON: enabling cache recovery
[33941] Successfully onlined Undo Tablespace 143.
Undo initialization finished serial:0 start:12793234 end:12793304 diff:70 (0 seconds)
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is ZHS16GBK
No Resource Manager plan active
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Sat Nov 01 18:06:01 2025
QMNC started with pid=20, OS id=33973 
Completed: ALTER DATABASE OPEN

At this point the recovery task was essentially complete. Based on the alert log, a few individual tables may be inconsistent because some data in file# 262 was lost; those were handled separately. There were no other significant problems, and the customer's business was restored as quickly as possible.
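
For that follow-up work, the segment owning any block reported during recovery can be identified from dba_extents; a minimal sketch using one of the block numbers from the alert log above:

-- map a reported (file#, block#) pair to its owning segment
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = 262
   AND 2220167 BETWEEN block_id AND block_id + blocks - 1;

Affected application objects can then be checked or repaired individually.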

ORA-01172 ORA-01151 Fault Recovery

Node 2 reported Error: Controlfile sequence number in file header is different from the one in memory, which brought the instance down:

Tue May 09 23:03:24 2023
Thread 2 cannot allocate new log, sequence 16728
Checkpoint not complete
  Current log# 3 seq# 16727 mem# 0: +DATA/xff/onlinelog/group_3.265.941900045
  Current log# 3 seq# 16727 mem# 1: +FRA/xff/onlinelog/group_3.259.941900045
Thread 2 advanced to log sequence 16728 (LGWR switch)
  Current log# 4 seq# 16728 mem# 0: +DATA/xff/onlinelog/group_4.266.941900045
  Current log# 4 seq# 16728 mem# 1: +FRA/xff/onlinelog/group_4.260.941900045
Tue May 09 23:03:31 2023
LNS: Standby redo logfile selected for thread 2 sequence 16728 for destination LOG_ARCHIVE_DEST_2
Tue May 09 23:03:32 2023
Archived Log entry 431615 added for thread 2 sequence 16727 ID 0x5ffc99b5 dest 1:
Tue May 09 23:05:30 2023
Error: Controlfile sequence number in file header is different from the one in memory
       Please check that the correct mount options are used if controlfile is located on NFS
USER (ospid: 30162): terminating the instance
Tue May 09 23:05:30 2023
System state dump requested by (instance=2, osid=30162), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/xff/xff2/trace/xff2_diag_6650.trc
Instance terminated by USER, pid = 30162

After the cluster reconfiguration on node 1, node 1's instance also terminated abnormally:

Tue May 09 23:04:54 2023
Thread 1 cannot allocate new log, sequence 2060
Checkpoint not complete
  Current log# 1 seq# 2059 mem# 0: +DATA/xff/onlinelog/group_1.261.941899887
  Current log# 1 seq# 2059 mem# 1: +FRA/xff/onlinelog/group_1.257.941899887
Thread 1 advanced to log sequence 2060 (LGWR switch)
  Current log# 2 seq# 2060 mem# 0: +DATA/xff/onlinelog/group_2.262.941899889
  Current log# 2 seq# 2060 mem# 1: +FRA/xff/onlinelog/group_2.258.941899889
Tue May 09 23:04:58 2023
********************* ATTENTION: ******************** 
 The controlfile header block returned by the OS
 has a sequence number that is too old. 
 The controlfile might be corrupted.
 PLEASE DO NOT ATTEMPT TO START UP THE INSTANCE 
 without following the steps below.
 RE-STARTING THE INSTANCE CAN CAUSE SERIOUS DAMAGE 
 TO THE DATABASE, if the controlfile is truly corrupted.
 In order to re-start the instance safely, 
 please do the following:
 (1) Save all copies of the controlfile for later 
     analysis and contact your OS vendor and Oracle support.
 (2) Mount the instance and issue: 
     ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
 (3) Unmount the instance. 
 (4) Use the script in the trace file to
     RE-CREATE THE CONTROLFILE and open the database. 
*****************************************************
Tue May 09 23:05:31 2023
Reconfiguration started (old inc 20, new inc 22)
List of instances:
 1 (myinst: 1) 
 Global Resource Directory frozen
 * dead instance detected - domain 0 invalid = TRUE 
 Communication channels reestablished
 Master broadcasted resource hash value bitmaps
 Non-local Process blocks cleaned out
Tue May 09 23:05:31 2023
 LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Tue May 09 23:05:31 2023
 LMS 0: 3 GCS shadows cancelled, 0 closed, 0 Xw survived
 Set master node info 
 Submitted all remote-enqueue requests
 Dwn-cvts replayed, VALBLKs dubious
 All grantable enqueues granted
 Post SMON to start 1st pass IR
Tue May 09 23:05:32 2023
Instance recovery: looking for dead threads
 Submitted all GCS remote-cache requests
 Post SMON to start 1st pass IR
 Fix write in gcs resources
Reconfiguration complete
Tue May 09 23:06:00 2023
ARC1 (ospid: 26512): terminating the instance
Tue May 09 23:06:00 2023
System state dump requested by (instance=1, osid=26512 (ARC1)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/xff/xff1/trace/xff1_diag_26311.trc
Tue May 09 23:06:01 2023
ORA-1092 : opitsk aborting process
Instance terminated by ARC1, pid = 26512

Errors when the instance was restarted:

Recovery of Online Redo Log: Thread 1 Group 1 Seq 2059 Reading mem 0
  Mem# 0: +DATA/dbm/onlinelog/group_1.261.941899887
  Mem# 1: +FRA/dbm/onlinelog/group_1.257.941899887
Recovery of Online Redo Log: Thread 2 Group 3 Seq 16727 Reading mem 0
  Mem# 0: +DATA/dbm/onlinelog/group_3.265.941900045
  Mem# 1: +FRA/dbm/onlinelog/group_3.259.941900045
Recovery of Online Redo Log: Thread 2 Group 4 Seq 16728 Reading mem 0
  Mem# 0: +DATA/dbm/onlinelog/group_4.266.941900045
  Mem# 1: +FRA/dbm/onlinelog/group_4.260.941900045
Hex dump of (file 1, block 102777) in trace file /u01/app/oracle/diag/rdbms/dbm/dbm2/trace/dbm2_ora_30749.trc
Reading datafile '+DATA/dbm/datafile/system.256.941899799' for corruption at rdba: 0x00419179 (file 1, block 102777)
Reread (file 1, block 102777) found different corrupt data (logically corrupt)
Hex dump of (file 1, block 102777) in trace file /u01/app/oracle/diag/rdbms/dbm/dbm2/trace/dbm2_ora_30749.trc
RECOVERY OF THREAD 2 STUCK AT BLOCK 102777 OF FILE 1
Abort recovery for domain 0
Aborting crash recovery due to error 1172
Errors in file /u01/app/oracle/diag/rdbms/dbm/dbm2/trace/dbm2_ora_30749.trc:
ORA-01172: recovery of thread 2 stuck at block 102777 of file 1
ORA-01151: use media recovery to recover block, restore backup if needed
Abort recovery for domain 0
Errors in file /u01/app/oracle/diag/rdbms/dbm/dbm2/trace/dbm2_ora_30749.trc:
ORA-01172: recovery of thread 2 stuck at block 102777 of file 1
ORA-01151: use media recovery to recover block, restore backup if needed
ORA-1172 signalled during: ALTER DATABASE OPEN /* db agent *//* {0:890:17} */...

A manual recover attempt failed with an ORA-600 [3020] error:

SQL> recover datafile 1;
ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [3020], [1], [102777], [4297081],[], []
ORA-10567: Redo is inconsistent with data block (file# 1, block# 102777, file
offset is 841949184 bytes)
ORA-10564: tablespace SYSTEM
ORA-01110: data file 1: '+DATA/dbm/datafile/system.256.941899799'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 469884

--- alert log
Tue May 09 23:28:44 2023
ALTER DATABASE RECOVER  datafile 1  
Media Recovery Start
Serial Media Recovery started
Recovery of Online Redo Log: Thread 2 Group 3 Seq 16727 Reading mem 0
  Mem# 0: +DATA/xff/onlinelog/group_3.265.941900045
  Mem# 1: +FRA/xff/onlinelog/group_3.259.941900045
ORA-279 signalled during: ALTER DATABASE RECOVER  datafile 1  ...
ALTER DATABASE RECOVER    CONTINUE DEFAULT  
Media Recovery Log +FRA/xff/archivelog/2023_05_09/thread_1_seq_2055.20899.1136415701
ORA-279 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
ALTER DATABASE RECOVER    CONTINUE DEFAULT  
Media Recovery Log +FRA/xff/archivelog/2023_05_09/thread_1_seq_2056.20837.1136415753
ORA-279 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
ALTER DATABASE RECOVER    CONTINUE DEFAULT  
Media Recovery Log +FRA/xff/archivelog/2023_05_09/thread_1_seq_2057.20911.1136415803
ORA-279 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
ALTER DATABASE RECOVER    CONTINUE DEFAULT  
Media Recovery Log +FRA/xff/archivelog/2023_05_09/thread_1_seq_2058.21898.1136415853
Recovery of Online Redo Log: Thread 2 Group 4 Seq 16728 Reading mem 0
  Mem# 0: +DATA/xff/onlinelog/group_4.266.941900045
  Mem# 1: +FRA/xff/onlinelog/group_4.260.941900045
Recovery of Online Redo Log: Thread 1 Group 1 Seq 2059 Reading mem 0
  Mem# 0: +DATA/xff/onlinelog/group_1.261.941899887
  Mem# 1: +FRA/xff/onlinelog/group_1.257.941899887
Hex dump of (file 1, block 102777) in trace file /u01/app/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_16246.trc
Reading datafile '+DATA/xff/datafile/system.256.941899799' for corruption at rdba: 0x00419179 (file 1, block 102777)
Reread (file 1, block 102777) found different corrupt data (logically corrupt)
Hex dump of (file 1, block 102777) in trace file /u01/app/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_16246.trc
Tue May 09 23:28:59 2023
Errors in file /u01/app/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_16246.trc  (incident=6868615):
ORA-00600: internal error code, arguments: [3020], [1], [102777], [4297081], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 1, block# 102777, file offset is 841949184 bytes)
ORA-10564: tablespace SYSTEM
ORA-01110: data file 1: '+DATA/xff/datafile/system.256.941899799'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 469884
Incident details in: /u01/app/oracle/diag/rdbms/xff/xff1/incident/incdir_6868615/xff1_ora_16246_i6868615.trc
Tue May 09 23:29:00 2023
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Media Recovery failed with error 600
ORA-283 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
ALTER DATABASE RECOVER CANCEL 
ORA-1112 signalled during: ALTER DATABASE RECOVER CANCEL ...

From the error messages above we can confirm that the failing block belongs to an index, and not to a core system object, so the file can be recovered with ALLOW 1 CORRUPTION; the database then opened successfully:

SQL> recover  datafile 1 allow 1 corruption;
Media recovery complete.
SQL> alter database open;

Database altered.

SQL> select owner,object_name,object_type from dba_objects where object_id=469884;

OWNER
--------------------------------------------------------------------------------
OBJECT_NAME
--------------------------------------------------------------------------------
OBJECT_TYPE
---------------------------------------------------------
SYSTEM
PK_XFF_SERVERS
INDEX

SQL> alter index system.PK_XFF_SERVERS rebuild online;

Index altered.
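
With the index rebuilt, the SYSTEM datafile can be re-checked for any remaining corruption before handing the database back; a minimal sketch using RMAN, not part of the original session:

RMAN> backup validate check logical datafile 1;
SQL> select * from v$database_block_corruption;

An empty v$database_block_corruption after the validate run indicates that no further physically or logically corrupt blocks were found in the file.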

The database was fully recovered with zero data loss, and the business could resume immediately.

ORA-01172 ORA-01151 Fault Recovery

A customer's storage lost power unexpectedly, after which the database reported ORA-01172 during startup and could not be opened.
ORA-01172 errors during database startup:

Wed Mar 23 14:16:23 2016
ALTER DATABASE OPEN
Wed Mar 23 14:16:24 2016
Beginning crash recovery of 1 threads
 parallel recovery started with 15 processes
Wed Mar 23 14:16:24 2016
Started redo scan
Wed Mar 23 14:16:25 2016
Completed redo scan
 62588 redo blocks read, 15 data blocks need recovery
Wed Mar 23 14:16:25 2016
Started redo application at
 Thread 1: logseq 15050, block 2, scn 2439828667
Wed Mar 23 14:16:25 2016
Recovery of Online Redo Log: Thread 1 Group 1 Seq 15050 Reading mem 0
  Mem# 0 errs 0: /oracle/oradata/orcl/redo01.log
Wed Mar 23 14:16:25 2016
Completed redo application
Wed Mar 23 14:16:25 2016
RECOVERY OF THREAD 1 STUCK AT BLOCK 26185 OF FILE 3
Wed Mar 23 14:16:25 2016
RECOVERY OF THREAD 1 STUCK AT BLOCK 69385 OF FILE 3
Wed Mar 23 14:16:25 2016
RECOVERY OF THREAD 1 STUCK AT BLOCK 566 OF FILE 2
Wed Mar 23 14:16:25 2016
RECOVERY OF THREAD 1 STUCK AT BLOCK 89 OF FILE 2
Wed Mar 23 14:16:25 2016
RECOVERY OF THREAD 1 STUCK AT BLOCK 53769 OF FILE 3
Wed Mar 23 14:16:26 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p012_6540.trc:
ORA-01172: recovery of thread 1 stuck at block 566 of file 2
ORA-01151: use media recovery to recover block, restore backup if needed
Wed Mar 23 14:16:26 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p008_6532.trc:
ORA-01172: recovery of thread 1 stuck at block 53769 of file 3
ORA-01151: use media recovery to recover block, restore backup if needed
Wed Mar 23 14:16:26 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p011_6538.trc:
ORA-01172: recovery of thread 1 stuck at block 69385 of file 3
ORA-01151: use media recovery to recover block, restore backup if needed
Wed Mar 23 14:16:26 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p005_6526.trc:
ORA-01172: recovery of thread 1 stuck at block 26185 of file 3
ORA-01151: use media recovery to recover block, restore backup if needed
Wed Mar 23 14:16:27 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p014_6544.trc:
ORA-01172: recovery of thread 1 stuck at block 89 of file 2
ORA-01151: use media recovery to recover block, restore backup if needed
Wed Mar 23 14:16:27 2016
Aborting crash recovery due to slave death, attempting serial crash recovery
Wed Mar 23 14:16:27 2016
Beginning crash recovery of 1 threads
Wed Mar 23 14:16:27 2016
Started redo scan
Wed Mar 23 14:16:27 2016
Completed redo scan
 62588 redo blocks read, 15 data blocks need recovery
Wed Mar 23 14:16:27 2016
Started redo application at
 Thread 1: logseq 15050, block 2, scn 2439828667
Wed Mar 23 14:16:27 2016
Recovery of Online Redo Log: Thread 1 Group 1 Seq 15050 Reading mem 0
  Mem# 0 errs 0: /oracle/oradata/orcl/redo01.log
RECOVERY OF THREAD 1 STUCK AT BLOCK 566 OF FILE 2
Wed Mar 23 14:16:27 2016
Aborting crash recovery due to error 1172
Wed Mar 23 14:16:27 2016
Errors in file /oracle/admin/orcl/udump/orcl_ora_6514.trc:
ORA-01172: recovery of thread 1 stuck at block 566 of file 2
ORA-01151: use media recovery to recover block, restore backup if needed
ORA-1172 signalled during: ALTER DATABASE OPEN...

ALTER DATABASE RECOVER datafile 1 fails
Attempting to recover datafile 1 raised ORA-600 [kcbrapply_4], ORA-600 [kcfrbd_3], ORA-600 [kcbrapply_12] and related errors. Judging from the error messages, these were caused by block corruption resulting from the power loss.
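
Before retrying recovery, the physical state of the affected files can be checked with dbv; a sketch (the datafile path is taken from the log below, and the 8 KB block size is an assumption):

$ dbv file=/oracle/oradata/orcl/system01.dbf blocksize=8192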

Thu Mar 24 21:50:18 2016
ALTER DATABASE RECOVER  datafile 1
Thu Mar 24 21:50:18 2016
Media Recovery Start
 parallel recovery started with 15 processes
Thu Mar 24 21:50:18 2016
Recovery of Online Redo Log: Thread 1 Group 1 Seq 15050 Reading mem 0
  Mem# 0 errs 0: /oracle/oradata/orcl/redo01.log
Thu Mar 24 21:50:19 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p004_13391.trc:
ORA-00600: internal error code, arguments: [kcbrapply_4], [2], [], [], [], [], [], []
Thu Mar 24 21:50:19 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p010_13403.trc:
ORA-00600: internal error code, arguments: [kcbrapply_4], [0], [], [], [], [], [], []
Thu Mar 24 21:50:19 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p000_13383.trc:
ORA-00600: internal error code, arguments: [kcbrapply_4], [0], [], [], [], [], [], []
Thu Mar 24 21:50:19 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p009_13401.trc:
ORA-00600: internal error code, arguments: [kcbrapply_4], [3], [], [], [], [], [], []
Thu Mar 24 21:50:19 2016
Hex dump of (file 1, block 61562) in trace file /oracle/admin/orcl/bdump/orcl_p001_13385.trc
Corrupt block relative dba: 0x0040f07a (file 1, block 61562)
Bad header found during media recovery
Data in bad block:
 type: 0 format: 0 rdba: 0xf07a0000
 last change scn: 0x916c.dc4b0040 seq: 0x0 flg: 0x00
 spare1: 0x6 spare2: 0xa2 spare3: 0xb088
 consistency value in tail: 0x06010fc1
 check value in block header: 0x601
 block checksum disabled
Thu Mar 24 21:50:19 2016
Hex dump of (file 1, block 55706) in trace file /oracle/admin/orcl/bdump/orcl_p014_13411.trc
Corrupt block relative dba: 0x0040d99a (file 1, block 55706)
Bad header found during media recovery
Data in bad block:
 type: 0 format: 0 rdba: 0xd99a0000
 last change scn: 0x916c.e1ad0040 seq: 0x0 flg: 0x00
 spare1: 0x6 spare2: 0xa2 spare3: 0xa520
 consistency value in tail: 0x06012222
 check value in block header: 0x601
 block checksum disabled
Thu Mar 24 21:50:19 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p006_13395.trc:
ORA-00600: internal error code, arguments: [kcfrbd_3], [1], [3342335], [1], [0], [64000], [], []
Thu Mar 24 21:50:19 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p003_13389.trc:
ORA-00600: internal error code, arguments: [kcfrbd_3], [1], [3932159], [1], [0], [64000], [], []
Thu Mar 24 21:50:19 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p002_13387.trc:
ORA-00600: internal error code, arguments: [kcfrbd_3], [1], [2293759], [1], [0], [64000], [], []
Reread of rdba: 0x0040d99a (file 1, block 55706) found valid data
Thu Mar 24 21:50:19 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p014_13411.trc:
ORA-00600: internal error code, arguments: [kcbrapply_12], [], [], [], [], [], [], []
Thu Mar 24 21:50:19 2016
Reread of rdba: 0x0040f07a (file 1, block 61562) found valid data
Thu Mar 24 21:50:19 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p001_13385.trc:
ORA-00600: internal error code, arguments: [kcbrapply_12], [], [], [], [], [], [], []
Thu Mar 24 21:50:23 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p014_13411.trc:
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+450] [SIGSEGV] [Address not mapped to object] [0xB9782BF4] [] []
ORA-00600: internal error code, arguments: [kcbrapply_12], [], [], [], [], [], [], []
Thu Mar 24 21:50:23 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p006_13395.trc:
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+450] [SIGSEGV] [Address not mapped to object] [0xB9C82BF4] [] []
ORA-00600: internal error code, arguments: [kcfrbd_3], [1], [3342335], [1], [0], [64000], [], []
Thu Mar 24 21:50:23 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p009_13401.trc:
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+450] [SIGSEGV] [Address not mapped to object] [0xB9A02BF4] [] []
ORA-00600: internal error code, arguments: [kcbrapply_4], [3], [], [], [], [], [], []
Thu Mar 24 21:50:23 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p003_13389.trc:
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+450] [SIGSEGV] [Address not mapped to object] [0xB9F02AF4] [] []
ORA-00600: internal error code, arguments: [kcfrbd_3], [1], [3932159], [1], [0], [64000], [], []
Thu Mar 24 21:50:23 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p004_13391.trc:
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+450] [SIGSEGV] [Address not mapped to object] [0xBA182AF4] [] []
ORA-00600: internal error code, arguments: [kcbrapply_4], [2], [], [], [], [], [], []
Thu Mar 24 21:50:23 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p010_13403.trc:
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+450] [SIGSEGV] [Address not mapped to object] [0xBA402AF4] [] []
ORA-00600: internal error code, arguments: [kcbrapply_4], [0], [], [], [], [], [], []
Thu Mar 24 21:50:23 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p000_13383.trc:
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+450] [SIGSEGV] [Address not mapped to object] [0xB9282AF4] [] []
ORA-00600: internal error code, arguments: [kcbrapply_4], [0], [], [], [], [], [], []
Thu Mar 24 21:50:23 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p001_13385.trc:
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+450] [SIGSEGV] [Address not mapped to object] [0xB9C82AF4] [] []
ORA-00600: internal error code, arguments: [kcbrapply_12], [], [], [], [], [], [], []
Thu Mar 24 21:50:23 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p002_13387.trc:
ORA-10562: Error occurred while applying redo to data block (file# 1, block# 11042)
ORA-10564: tablespace SYSTEM
ORA-01110: data file 1: '/oracle/oradata/orcl/system01.dbf'
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [kcfrbd_3], [1], [2293759], [1], [0], [64000], [], []

ALTER DATABASE RECOVER datafile 3 fails
Recovery of this file mainly reported ORA-600 [kcbrsearchflist_2], ORA-600 [kdxlin:psno out of range], and ORA-07445 [kcbs_dump_adv_state] errors.

Thu Mar 24 21:52:04 2016
ALTER DATABASE RECOVER  datafile 3
Thu Mar 24 21:52:04 2016
Media Recovery Start
 parallel recovery started with 15 processes
Thu Mar 24 21:52:04 2016
Recovery of Online Redo Log: Thread 1 Group 1 Seq 15050 Reading mem 0
  Mem# 0 errs 0: /oracle/oradata/orcl/redo01.log
Thu Mar 24 21:52:05 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p007_13462.trc:
ORA-00600: internal error code, arguments: [kdxlin:psno out of range], [], [], [], [], [], [], []
Thu Mar 24 21:52:05 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p001_13450.trc:
ORA-00600: internal error code, arguments: [kcbrsearchflist_2], [], [], [], [], [], [], []
Thu Mar 24 21:52:05 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p007_13462.trc:
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+464] [SIGSEGV] [Address not mapped to object] [0xB9F076F4] [] []
ORA-00600: internal error code, arguments: [kdxlin:psno out of range], [], [], [], [], [], [], []
Thu Mar 24 21:52:05 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p001_13450.trc:
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+464] [SIGSEGV] [Address not mapped to object] [0xB9C874F4] [] []
ORA-00600: internal error code, arguments: [kcbrsearchflist_2], [], [], [], [], [], [], []
Thu Mar 24 21:52:06 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p007_13462.trc:
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+464] [SIGSEGV] [Address not mapped to object] [0xB9F066F4] [] []
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+464] [SIGSEGV] [Address not mapped to object] [0xB9F076F4] [] []
ORA-00600: internal error code, arguments: [kdxlin:psno out of range], [], [], [], [], [], [], []
Thu Mar 24 21:52:06 2016
Errors in file /oracle/admin/orcl/bdump/orcl_p001_13450.trc:
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+464] [SIGSEGV] [Address not mapped to object] [0xB9C864F4] [] []
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+464] [SIGSEGV] [Address not mapped to object] [0xB9C874F4] [] []
ORA-00600: internal error code, arguments: [kcbrsearchflist_2], [], [], [], [], [], [], []

Recovery process:

SQL> startup mount
ORACLE instance started.
Total System Global Area 2147483648 bytes
Fixed Size          1220432 bytes
Variable Size             369098928 bytes
Database Buffers         1761607680 bytes
Redo Buffers               15556608 bytes
Database mounted.
SQL> select file# from v$datafile;
     FILE#
----------
         1
         2
         3
         4
         5
         6
6 rows selected.
SQL> recover datafile 1;
ORA-03113: end-of-file on communication channel
SQL> startup mount
ORACLE instance started.
Total System Global Area 2147483648 bytes
Fixed Size          1220432 bytes
Variable Size             369098928 bytes
Database Buffers         1761607680 bytes
Redo Buffers               15556608 bytes
Database mounted.
SQL> recover datafile 3;
ORA-03113: end-of-file on communication channel
SQL> startup mount
ORACLE instance started.
Total System Global Area 2147483648 bytes
Fixed Size          1220432 bytes
Variable Size             369098928 bytes
Database Buffers         1761607680 bytes
Redo Buffers               15556608 bytes
Database mounted.
SQL> recover datafile 5;
Media recovery complete.
SQL> recover datafile 6;
Media recovery complete.
SQL> recover datafile 4;
Media recovery complete.
SQL> recover datafile 2;
Media recovery complete.
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [4194], [57], [11], [], [], [], [],
[]
SQL> select open_mode from v$database;
OPEN_MODE
----------
READ WRITE
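
The ORA-600 [4194] points at an undo inconsistency; although v$database already reports READ WRITE, damaged undo of this kind is typically dealt with by switching to a freshly created undo tablespace (as the closing note below says, the undo could be rebuilt). A minimal sketch, with the tablespace and datafile names as assumptions:

-- create a replacement undo tablespace, switch the instance to it,
-- then drop the old one once it is no longer in use (assumes an spfile)
create undo tablespace undotbs2
  datafile '/oracle/oradata/orcl/undotbs02.dbf' size 2g autoextend on;
alter system set undo_tablespace = undotbs2 scope=both;
drop tablespace undotbs1 including contents and datafiles;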

We were lucky this time: the damaged SYSTEM object was mon_mods$, and the problematic undo could be rebuilt, so essentially no data was lost and the recovery was complete.
For important databases, use the errors reported during the open and recovery process to work out which objects the suspect blocks belong to, and then decide on a handling method, so as to avoid permanent data block damage.